path (stringlengths 7-265) | concatenated_notebook (stringlengths 46-17M)
---|---|
homework/week04_assignment.ipynb | ###Markdown
Exercise

Download the [gene expression cancer RNA-Seq Data Set](https://archive.ics.uci.edu/ml/datasets/gene+expression+cancer+RNA-Seq) from the UCI Machine Learning Repository. This dataset contains gene expressions of patients having different types of tumor: BRCA, KIRC, COAD, LUAD and PRAD.

1. Perform at least two different types of clustering. Check the clustering module in [scikit-learn](https://scikit-learn.org/stable/modules/clustering.html) and/or [scipy](https://docs.scipy.org/doc/scipy/reference/cluster.html). Do some preprocessing if necessary (check attributes, check for NaN values, perform dimension reduction, etc.).
2. Evaluate clustering performance using the [adjusted Rand score](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.adjusted_rand_score.html#sklearn.metrics.adjusted_rand_score). Try to achieve a clustering with a Rand score higher than 0.985.
3. Try some manifold learning methods for projecting the data to 2-dimensional space for visualization. Read the documentation for [Multidimensional scaling](https://scikit-learn.org/stable/modules/manifold.html#multidimensional-scaling) and [Stochastic neighbor embedding](https://scikit-learn.org/stable/modules/manifold.html#t-sne) and try them. You may try other methods as well.
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.cm as cm
from sklearn.metrics import adjusted_rand_score
# add more imports if necessary
# from sklearn.cluster import ...
# from scipy.cluster import ...
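# A minimal, illustrative sketch (not a full solution), assuming the dataset
# has been loaded into `X` (samples x genes) and `y_true` (tumor type labels);
# both names are placeholders for whatever you call them:
# from sklearn.decomposition import PCA
# from sklearn.cluster import KMeans
# X_reduced = PCA(n_components=50).fit_transform(X)   # optional dimension reduction
# cluster_labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X_reduced)
# print(adjusted_rand_score(y_true, cluster_labels))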
###Output
_____no_output_____ |
1_retrain_from_checkpoint.ipynb | ###Markdown
**Jupyter Kernel**:
* If you are in a SageMaker Notebook instance, please make sure you are using the **conda_pytorch_latest_p36** kernel
* If you are on SageMaker Studio, please make sure you are using the **SageMaker JumpStart PyTorch 1.0** kernel

**Run All**:
* If you are in a SageMaker notebook instance, you can go to *Cell tab -> Run All*
* If you are in SageMaker Studio, you can go to *Run tab -> Run All Cells*

**Note**: To *Run All* successfully, make sure you have executed the entire demo notebook `0_demo.ipynb` first.

Resume Training

In this notebook, we retrain our pretrained detector for a few more epochs and compare its results. The same process can be applied when finetuning on another dataset. For the purpose of this notebook, we use the same **NEU-DET** dataset.

Finetuning

Finetuning is one way to do transfer learning. Finetuning a deep learning model involves reusing the weights learned on one dataset to improve the model's performance on another, usually different, dataset. Finetuning can also be done on the same dataset used in the initial training, but perhaps with different hyperparameters.
###Code
import json
import sagemaker
from sagemaker.s3 import S3Downloader
sagemaker_session = sagemaker.Session()
sagemaker_config = json.load(open("../stack_outputs.json"))
role = sagemaker_config["IamRole"]
solution_bucket = sagemaker_config["SolutionS3Bucket"]
region = sagemaker_config["AWSRegion"]
solution_name = sagemaker_config["SolutionName"]
bucket = sagemaker_config["S3Bucket"]
###Output
_____no_output_____
###Markdown
First, we download our **NEU-DET** dataset from our public S3 bucket
###Code
original_bucket = f"s3://{solution_bucket}-{region}/{solution_name}"
original_pretained_checkpoint = f"{original_bucket}/pretrained"
original_sources = f"{original_bucket}/build/lib/source_dir.tar.gz"
###Output
_____no_output_____
###Markdown
Note that for easier data processing, we have already executed `prepare_data` once in `0_demo.ipynb` and have already uploaded the prepared data to our S3 bucket
###Code
DATA_PATH = !echo $PWD/neu_det
DATA_PATH = DATA_PATH.n
###Output
_____no_output_____
###Markdown
After data preparation, we need to set up some paths that will be used throughout the notebook
###Code
prefix = "neu-det"
neu_det_s3 = f"s3://{bucket}/{prefix}"
sources = f"{neu_det_s3}/code/"
train_output = f"{neu_det_s3}/output/"
neu_det_prepared_s3 = f"{neu_det_s3}/data/"
s3_checkpoint = f"{neu_det_s3}/checkpoint/"
sm_local_checkpoint_dir = "/opt/ml/checkpoints/"
s3_pretrained = f"{neu_det_s3}/pretrained/"
###Output
_____no_output_____
###Markdown
Visualization

Let's examine some of the data that we will use later by providing an `ID`
###Code
import copy
import numpy as np
import torch
from PIL import Image
from torch.utils.data import DataLoader
try:
import sagemaker_defect_detection
except ImportError:
import sys
from pathlib import Path
ROOT = Path("../src").resolve()
sys.path.insert(0, str(ROOT))
from sagemaker_defect_detection import NEUDET, get_preprocess
SPLIT = "test"
ID = 30
assert 0 <= ID <= 300
dataset = NEUDET(DATA_PATH, split=SPLIT, preprocess=get_preprocess())
images, targets, _ = dataset[ID]
original_image = copy.deepcopy(images)
original_boxes = targets["boxes"].numpy().copy()
original_labels = targets["labels"].numpy().copy()
print(f"first images size: {original_image.shape}")
print(f"target bounding boxes: \n {original_boxes}")
print(f"target labels: {original_labels}")
###Output
first images size: torch.Size([3, 200, 200])
target bounding boxes:
[[144. 38. 192. 81.]
[ 3. 0. 59. 171.]
[157. 94. 198. 141.]
[ 99. 127. 162. 169.]
[ 29. 32. 94. 97.]]
target labels: [4 4 4 4 4]
###Markdown
And we can now visualize it using the provided utilities as follows
###Code
from sagemaker_defect_detection.utils.visualize import unnormalize_to_hwc, visualize
original_image_unnorm = unnormalize_to_hwc(original_image)
visualize(
original_image_unnorm,
[original_boxes],
[original_labels],
colors=[(255, 0, 0)],
titles=["original", "ground truth"],
)
###Output
_____no_output_____
###Markdown
Here we resume from a provided pretrained checkpoint `epoch=294-loss=0.654-main_score=0.349.ckpt` that we have copied into our `s3_pretrained`. This takes about **10 minutes** to complete
###Code
%%time
import logging
from os import path as osp
from sagemaker.pytorch import PyTorch
NUM_CLASSES = 7 # 6 classes + 1 for background
# Note: resnet34 was used in the pretrained model and it has to match the pretrained model backbone
# if need resnet50, need to train from scratch
BACKBONE = "resnet34"
assert BACKBONE in [
"resnet34",
"resnet50",
], "either resnet34 or resnet50. Make sure to be consistent with model_fn in detector.py"
EPOCHS = 5
LEARNING_RATE = 1e-4
SEED = 123
hyperparameters = {
"backbone": BACKBONE, # the backbone resnet model for feature extraction
"num-classes": NUM_CLASSES, # number of classes + background
"epochs": EPOCHS, # number of epochs to finetune
"learning-rate": LEARNING_RATE, # learning rate for optimizer
"seed": SEED, # random number generator seed
}
assert not isinstance(sagemaker_session, sagemaker.LocalSession), "local session as share memory cannot be altered"
finetuned_model = PyTorch(
entry_point="detector.py",
source_dir=osp.join(sources, "source_dir.tar.gz"),
role=role,
train_instance_count=1,
train_instance_type="ml.g4dn.2xlarge",
hyperparameters=hyperparameters,
py_version="py3",
framework_version="1.5",
sagemaker_session=sagemaker_session,
output_path=train_output,
checkpoint_s3_uri=s3_checkpoint,
checkpoint_local_path=sm_local_checkpoint_dir,
# container_log_level=logging.DEBUG,
)
finetuned_model.fit(
{
"training": neu_det_prepared_s3,
"pretrained_checkpoint": osp.join(s3_pretrained, "epoch=294-loss=0.654-main_score=0.349.ckpt"),
}
)
###Output
_____no_output_____
###Markdown
Then, we deploy our new model, which takes about **10 minutes** to complete
###Code
%%time
finetuned_detector = finetuned_model.deploy(
initial_instance_count=1,
instance_type="ml.m5.xlarge",
endpoint_name=sagemaker_config["SolutionPrefix"] + "-finetuned-endpoint",
)
###Output
_____no_output_____
###Markdown
Inference

We change the input depending on whether we are providing a list of images or a single image. Note also that the model requires a four-dimensional array / tensor (with the first dimension as the batch)
###Code
input = list(img.numpy() for img in images) if isinstance(images, list) else images.unsqueeze(0).numpy()
###Output
_____no_output_____
###Markdown
Now the input is ready and we can get some results
###Code
%%time
finetuned_predictions = finetuned_detector.predict(input)
###Output
_____no_output_____
###Markdown
Here we want to visually compare the results of the new model with those of the pretrained model that we already deployed in `0_demo.ipynb`, by calling that endpoint through the SageMaker runtime using `boto3`
###Code
import boto3
import botocore
config = botocore.config.Config(read_timeout=200)
runtime = boto3.client("runtime.sagemaker", config=config)
payload = json.dumps(input.tolist() if isinstance(input, np.ndarray) else input)
response = runtime.invoke_endpoint(
EndpointName=sagemaker_config["SolutionPrefix"] + "-demo-endpoint", ContentType="application/json", Body=payload
)
demo_predictions = json.loads(response["Body"].read().decode())
###Output
_____no_output_____
###Markdown
Here we can see the slight differences in the inference results
###Code
visualize(
original_image_unnorm,
[original_boxes, demo_predictions[0]["boxes"], finetuned_predictions[0]["boxes"]],
[original_labels, demo_predictions[0]["labels"], finetuned_predictions[0]["labels"]],
colors=[(255, 0, 0), (0, 0, 255), (127, 0, 127)],
titles=["original", "ground truth", "pretrained", "finetuned"],
dpi=250,
)
###Output
_____no_output_____
###Markdown
Optional: Delete the endpoint and model

When you are done with the endpoint, you should clean it up.

All of the training jobs, models and endpoints we created can be viewed through the SageMaker console of your AWS account.
###Code
finetuned_detector.delete_model()
finetuned_detector.delete_endpoint()
###Output
_____no_output_____ |
notebooks/POS-HMM.ipynb | ###Markdown
Implementing HMM for POS Tagging

In this notebook we will implement a Hidden Markov Model for Parts-of-Speech (POS) Tagging.

Associating each word in a sentence with a proper POS (part of speech) is known as POS tagging or POS annotation. POS tags are also known as word classes, morphological classes, or lexical tags. The tag signifies whether the word is a noun, adjective, verb, and so on.
###Code
import nltk
import numpy as np
from tqdm import tqdm
# In order to get the notebooks running in the current directory
import os, sys, inspect
currentdir = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe())))
parentdir = os.path.dirname(currentdir)
sys.path.insert(0, parentdir)
import hmm
###Output
_____no_output_____
###Markdown
We will be making use of the Treebank corpora with the Universal Tagset.

The Treebank corpora provide a syntactic parse for each sentence. The NLTK data package includes a 10% sample of the Penn Treebank (in treebank), as well as the Sinica Treebank (in sinica_treebank).

Not all corpora employ the same set of tags. Initially we want to avoid the complications of these tagsets, so we use a built-in mapping to the "Universal Tagset".
###Code
# Download the treebank corpus from nltk
nltk.download('treebank')
# Download the universal tagset from nltk
nltk.download('universal_tagset')
# Reading the Treebank tagged sentences
nltk_data = list(nltk.corpus.treebank.tagged_sents(tagset='universal'))
###Output
_____no_output_____
###Markdown
A Look At Our Data

Let's take a look at the data we have.

We have a total of *100,676* tagged words. This includes a total of *12* unique tags and *12,408* unique words.
###Code
# Sample Output
for (word, tag) in nltk_data[0]:
print(f"Word: {word} | Tag: {tag}")
tagged_words = [tags for sent in nltk_data for tags in sent]
print(f"Size of tagged words: {len(tagged_words)}")
print(f"Example: {tagged_words[0]}")
tags = list({tag for (word, tag) in tagged_words})
print(f"Tags: {tags} | Number of tags: {len(tags)}")
words = list({word for (word, tag) in tagged_words})
print(f"First 15 Words: {words[:15]} | Number of words: {len(words)}")
###Output
First 15 Words: ['sweeten', 'Walbrecher', 'tasty', 'breathe', 'Chapter', 'slipped', '7.272', '62', '85.1', 'Senate-House', 'directly', 'promote', 'insured', 'Jacob', 'Andy'] | Number of words: 12408
###Markdown
Computing Transition and Emission Matrices

Once we have our data ready, we will need to create our transition and emission matrices. In order to do this, we need to understand how we calculate these probability matrices.

For Transition Matrices
- For a given source_tag and destination_tag:
  - Get the total count of source_tag in the corpus (all_tags)
  - Loop through all_tags and count the instances where the tag at timestep i is source_tag and the tag at timestep i + 1 is dest_tag
  - Compute the probability of dest_tag given source_tag as *P(destination tag | source tag) = Count of source tag followed by destination tag / Count of source tag*

For Emission Matrices
- For a given word and tag:
  - Get a list of (word, tag) pairs from the tagged words whose tag matches the given tag
  - From this list, keep the entries whose word matches the given word
  - Using the count of the word given the tag and the total occurrences of the tag, compute the conditional probability *P(word | tag) = Count of word and tag / Count of given tag*
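Written out, these two maximum-likelihood estimates (with $C(\cdot)$ denoting counts over the tagged corpus) are:

$$P(t_{i+1} = d \mid t_i = s) = \frac{C(s \rightarrow d)}{C(s)} \qquad\text{and}\qquad P(w \mid t) = \frac{C(w, t)}{C(t)}$$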
###Code
def compute_transition_matrix(tags, tagged_words):
all_tags = [tag for (_, tag) in tagged_words]
def compute_counts(dest_tag, source_tag):
count_source = len([t for t in all_tags if t == source_tag])
count_dest_source = 0
for i in range(len(all_tags) - 1):
if all_tags[i] == source_tag and all_tags[i + 1] == dest_tag:
count_dest_source += 1
return count_dest_source, count_source
trans_matrix = np.zeros((len(tags), len(tags)))
for i, source_tag in enumerate(tags):
for j, dest_tag in enumerate(tags):
count_dest_source, count_source = compute_counts(dest_tag, source_tag)
trans_matrix[i, j] = count_dest_source / count_source
return trans_matrix
# transition_matrix = compute_transition_matrix(tags, tagged_words)
# Computing Emission Probability
def compute_emission_matrix(words, tags, tagged_words):
def compute_counts(given_word, given_tag):
tags = [word_tag for word_tag in tagged_words if word_tag[1] == given_tag]
word_given_tag = [word for (word, _) in tags if word == given_word]
return len(word_given_tag), len(tags)
emi_matrix = np.zeros((len(tags), len(words)))
for i, tag in enumerate(tags):
for j, word in enumerate(tqdm(words, desc=f"Current Tag - {tag}")):
count_word_given_tag, count_tag = compute_counts(word, tag)
emi_matrix[i, j] = count_word_given_tag / count_tag
return emi_matrix
# emission_matrix = compute_emission_matrix(words, tags, tagged_words)
def save_matrices(observable_states, emission_matrix, hidden_states, transition_matrix, save_dir="state"):
try:
os.mkdir(save_dir)
except FileExistsError:
raise FileExistsError(
"Directory already exists! Please provide a different output directory!"
)
np.save(save_dir + '/observable_states', observable_states)
np.save(save_dir + '/emission_matrix', emission_matrix)
np.save(save_dir + '/hidden_states', hidden_states)
np.save(save_dir + '/transition_matrix', transition_matrix)
# save_matrices(words, emission_matrix, tags, transition_matrix)
def load_matrices(save_dir):
observable_states = np.load(save_dir + '/observable_states.npy')
emission_matrix = np.load(save_dir + '/emission_matrix.npy')
hidden_states = np.load(save_dir + '/hidden_states.npy')
transition_matrix = np.load(save_dir + '/transition_matrix.npy')
return observable_states.tolist(), emission_matrix, hidden_states.tolist(), transition_matrix
observable_states, emission_matrix, hidden_states, transition_matrix = load_matrices('./state')
###Output
_____no_output_____
###Markdown
Let us take a look at some of the observed and hidden states
###Code
observable_states[:15], hidden_states
###Output
_____no_output_____
###Markdown
We will need to write a function that tokenizes the input sentence, mapping each word to its index in our saved words list.
###Code
def tokenize(input_sent, words):
lookup = {word: i for i, word in enumerate(words)}
tokenized = []
input_sent = input_sent.split(' ')
for word in input_sent:
idx = lookup[word]
tokenized.append(idx)
return tokenized
###Output
_____no_output_____
###Markdown
Run The Markov Model

Let us now run our Hidden Markov Model with the observable and hidden states and our transition and emission matrices.
###Code
model = hmm.HiddenMarkovModel(
observable_states, hidden_states, transition_matrix, emission_matrix
)
model.print_model_info()
input_sent = 'Pierre Vinken , 61 years old , will join the board as a nonexecutive director Nov. 29 .'
input_tokens = tokenize(input_sent, observable_states)
print(input_tokens)
###Output
[459, 9165, 5817, 483, 3096, 10713, 5817, 9219, 166, 8733, 2232, 12367, 9530, 2522, 2367, 1640, 1038, 9079]
###Markdown
Forward Algorithm
###Code
alpha, a_probs = model.forward(input_tokens)
hmm.print_forward_result(alpha, a_probs)
###Output
**************************************************
Alpha:
[[0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 2.07674884e-32 0.00000000e+00 0.00000000e+00
5.78704351e-39 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00]
[0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 1.34737310e-22 0.00000000e+00 0.00000000e+00
0.00000000e+00 1.70169077e-35 0.00000000e+00 0.00000000e+00
2.90053235e-42 9.27114011e-43 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00]
[0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 6.67948823e-27
3.34058483e-31 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00]
[0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00]
[0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
4.46630861e-43 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00]
[0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00]
[0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 3.84616569e-39
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00]
[0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00]
[0.00000000e+00 0.00000000e+00 0.00000000e+00 2.07617756e-15
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 2.15306770e-36 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
2.10577512e-54 0.00000000e+00]
[0.00000000e+00 0.00000000e+00 1.81956333e-11 0.00000000e+00
0.00000000e+00 0.00000000e+00 3.64486873e-24 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 8.03347109e-56]
[0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 8.23340211e-38
1.87783614e-43 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00]
[9.93262388e-06 1.81845001e-10 0.00000000e+00 0.00000000e+00
2.92029200e-18 0.00000000e+00 0.00000000e+00 2.80120070e-29
0.00000000e+00 1.27718813e-36 1.37892628e-35 0.00000000e+00
0.00000000e+00 0.00000000e+00 7.18788466e-46 1.57913668e-49
0.00000000e+00 0.00000000e+00]]
Probability of sequence: 8.033471093531899e-56
###Markdown
Backward Algorithm

Let us verify the output of the Forward Algorithm by running the Backward Algorithm.
###Code
beta, b_probs = model.backward(input_tokens)
hmm.print_backward_result(beta, b_probs)
###Output
**************************************************
Beta:
[[1.95362856e-50 3.24947814e-47 1.20687388e-45 6.99495502e-41
4.58408154e-37 1.62217948e-34 1.00401297e-32 2.81231267e-30
1.05664092e-26 3.86450232e-24 3.45435652e-22 1.67030781e-20
1.38792444e-17 7.90791958e-14 2.69963398e-10 1.18991076e-06
5.76746439e-03 1.00000000e+00]
[2.14067059e-50 1.19434578e-46 1.13434538e-45 7.66465786e-41
1.48615065e-37 5.96232114e-34 3.22995506e-33 8.53625372e-31
8.93840665e-27 4.23449300e-24 2.55158414e-21 1.46002723e-20
4.49962503e-18 8.66503040e-14 2.95809919e-10 1.11840168e-06
2.11983170e-02 1.00000000e+00]
[3.37803568e-51 6.44709638e-47 1.24693532e-45 1.20950359e-41
1.45785637e-37 3.21846986e-34 4.18398796e-32 1.19833922e-29
2.40480978e-25 6.68214368e-25 3.25678231e-21 4.00630030e-19
4.41395832e-18 1.36736507e-14 4.66795997e-11 1.22940912e-06
1.14428832e-02 1.00000000e+00]
[7.56871765e-51 7.89253368e-47 3.10170241e-45 2.70997469e-41
1.89710743e-37 3.94004995e-34 9.92991925e-32 2.84418739e-29
1.81458960e-25 1.49717953e-24 7.02657392e-22 3.02039069e-19
5.74388075e-18 3.06367401e-14 1.04588803e-10 3.05810669e-06
1.40083746e-02 1.00000000e+00]
[1.89764752e-51 3.01222267e-46 1.48505999e-46 6.79451525e-42
3.78852149e-38 1.50373863e-33 5.06712889e-32 1.45202162e-29
9.76955926e-26 3.75376538e-25 4.79997386e-21 1.62876741e-19
1.14705236e-18 7.68131890e-15 2.62227623e-11 1.46418685e-07
5.34636216e-02 1.00000000e+00]
[6.40781608e-51 7.46631856e-47 3.98680701e-46 2.29431460e-41
1.63457726e-37 3.72727812e-34 1.20191362e-31 3.44352408e-29
1.72478457e-26 1.26753983e-24 8.53700462e-22 2.84538785e-20
4.94901697e-18 2.59376297e-14 8.85468117e-11 3.93077077e-07
1.32518899e-02 1.00000000e+00]
[9.74889392e-52 2.50810572e-46 1.72057565e-45 3.49058546e-42
2.89931789e-37 1.25207724e-33 8.52771109e-32 2.44442581e-29
1.23407169e-25 1.92844351e-25 4.13757851e-21 2.05214849e-19
8.77827787e-18 3.94616821e-15 1.34715707e-11 1.69639224e-06
4.45161032e-02 1.00000000e+00]
[1.07025481e-50 6.50249632e-47 2.26427756e-45 3.83204075e-41
2.62702185e-37 3.24612619e-34 3.88974575e-32 1.11150926e-29
2.13634472e-25 2.11708729e-24 1.91344635e-21 3.55563020e-19
7.95384594e-18 4.33218942e-14 1.47893838e-10 2.23245219e-06
1.15412120e-02 1.00000000e+00]
[1.08067676e-50 2.14941336e-46 1.00933451e-44 3.86935647e-41
7.44378003e-38 1.07301360e-33 4.66185374e-33 1.29995275e-30
6.29376228e-27 2.13770310e-24 1.15083878e-21 1.01970958e-20
2.25375665e-18 4.37437549e-14 1.49334001e-10 9.95147892e-06
3.81497104e-02 1.00000000e+00]
[6.79038189e-51 1.71922756e-46 4.41505440e-45 2.43129205e-41
9.81456558e-38 8.58259555e-34 2.20404949e-32 6.29569944e-30
3.11193978e-25 1.34321575e-24 3.15248124e-21 5.18477934e-19
2.97156046e-18 2.74861839e-14 9.38333216e-11 4.35299896e-06
3.05143881e-02 1.00000000e+00]
[9.85891645e-51 7.34017016e-47 3.42069012e-45 3.52997896e-41
2.38510891e-37 3.66430328e-34 2.17410459e-33 5.89959279e-31
5.79950138e-25 1.95020724e-24 5.98383970e-22 9.66130641e-19
7.22140502e-18 3.99070325e-14 1.36236060e-10 3.37261089e-06
1.30279904e-02 1.00000000e+00]
[8.08796467e-51 4.41775747e-46 5.15977792e-46 2.89589077e-41
2.75091363e-38 2.20539890e-33 3.64294358e-32 1.04163879e-29
2.36048402e-26 1.59989257e-24 5.82588876e-21 3.93938966e-20
8.32895365e-19 3.27385540e-14 1.11764051e-10 5.08725508e-07
7.84103101e-02 1.00000000e+00]]
Probability of sequence: 8.033471093531906e-56
###Markdown
Viterbi Algorithm

This algorithm gives us the POS tag for each token or word. This is what we use to build a POS tagger for new sentences.
###Code
path, delta, phi = model.viterbi(input_tokens)
hmm.print_viterbi_result(input_tokens, observable_states, hidden_states, path, delta, phi)
###Output
**************************************************
Starting Forward Walk
State=0 : Sequence=1 | phi[0, 1]=11.0
State=1 : Sequence=1 | phi[1, 1]=11.0
State=2 : Sequence=1 | phi[2, 1]=11.0
State=3 : Sequence=1 | phi[3, 1]=11.0
State=4 : Sequence=1 | phi[4, 1]=11.0
State=5 : Sequence=1 | phi[5, 1]=11.0
State=6 : Sequence=1 | phi[6, 1]=11.0
State=7 : Sequence=1 | phi[7, 1]=11.0
State=8 : Sequence=1 | phi[8, 1]=11.0
State=9 : Sequence=1 | phi[9, 1]=11.0
State=10 : Sequence=1 | phi[10, 1]=11.0
State=11 : Sequence=1 | phi[11, 1]=11.0
State=0 : Sequence=2 | phi[0, 2]=11.0
State=1 : Sequence=2 | phi[1, 2]=11.0
State=2 : Sequence=2 | phi[2, 2]=11.0
State=3 : Sequence=2 | phi[3, 2]=11.0
State=4 : Sequence=2 | phi[4, 2]=11.0
State=5 : Sequence=2 | phi[5, 2]=11.0
State=6 : Sequence=2 | phi[6, 2]=11.0
State=7 : Sequence=2 | phi[7, 2]=11.0
State=8 : Sequence=2 | phi[8, 2]=11.0
State=9 : Sequence=2 | phi[9, 2]=11.0
State=10 : Sequence=2 | phi[10, 2]=11.0
State=11 : Sequence=2 | phi[11, 2]=11.0
State=0 : Sequence=3 | phi[0, 3]=9.0
State=1 : Sequence=3 | phi[1, 3]=9.0
State=2 : Sequence=3 | phi[2, 3]=9.0
State=3 : Sequence=3 | phi[3, 3]=9.0
State=4 : Sequence=3 | phi[4, 3]=9.0
State=5 : Sequence=3 | phi[5, 3]=9.0
State=6 : Sequence=3 | phi[6, 3]=9.0
State=7 : Sequence=3 | phi[7, 3]=9.0
State=8 : Sequence=3 | phi[8, 3]=9.0
State=9 : Sequence=3 | phi[9, 3]=9.0
State=10 : Sequence=3 | phi[10, 3]=9.0
State=11 : Sequence=3 | phi[11, 3]=9.0
State=0 : Sequence=4 | phi[0, 4]=8.0
State=1 : Sequence=4 | phi[1, 4]=8.0
State=2 : Sequence=4 | phi[2, 4]=8.0
State=3 : Sequence=4 | phi[3, 4]=8.0
State=4 : Sequence=4 | phi[4, 4]=8.0
State=5 : Sequence=4 | phi[5, 4]=8.0
State=6 : Sequence=4 | phi[6, 4]=8.0
State=7 : Sequence=4 | phi[7, 4]=8.0
State=8 : Sequence=4 | phi[8, 4]=8.0
State=9 : Sequence=4 | phi[9, 4]=8.0
State=10 : Sequence=4 | phi[10, 4]=8.0
State=11 : Sequence=4 | phi[11, 4]=8.0
State=0 : Sequence=5 | phi[0, 5]=11.0
State=1 : Sequence=5 | phi[1, 5]=11.0
State=2 : Sequence=5 | phi[2, 5]=11.0
State=3 : Sequence=5 | phi[3, 5]=11.0
State=4 : Sequence=5 | phi[4, 5]=11.0
State=5 : Sequence=5 | phi[5, 5]=11.0
State=6 : Sequence=5 | phi[6, 5]=11.0
State=7 : Sequence=5 | phi[7, 5]=11.0
State=8 : Sequence=5 | phi[8, 5]=11.0
State=9 : Sequence=5 | phi[9, 5]=11.0
State=10 : Sequence=5 | phi[10, 5]=11.0
State=11 : Sequence=5 | phi[11, 5]=11.0
State=0 : Sequence=6 | phi[0, 6]=1.0
State=1 : Sequence=6 | phi[1, 6]=1.0
State=2 : Sequence=6 | phi[2, 6]=1.0
State=3 : Sequence=6 | phi[3, 6]=1.0
State=4 : Sequence=6 | phi[4, 6]=1.0
State=5 : Sequence=6 | phi[5, 6]=1.0
State=6 : Sequence=6 | phi[6, 6]=1.0
State=7 : Sequence=6 | phi[7, 6]=1.0
State=8 : Sequence=6 | phi[8, 6]=1.0
State=9 : Sequence=6 | phi[9, 6]=1.0
State=10 : Sequence=6 | phi[10, 6]=1.0
State=11 : Sequence=6 | phi[11, 6]=1.0
State=0 : Sequence=7 | phi[0, 7]=9.0
State=1 : Sequence=7 | phi[1, 7]=9.0
State=2 : Sequence=7 | phi[2, 7]=9.0
State=3 : Sequence=7 | phi[3, 7]=9.0
State=4 : Sequence=7 | phi[4, 7]=9.0
State=5 : Sequence=7 | phi[5, 7]=9.0
State=6 : Sequence=7 | phi[6, 7]=9.0
State=7 : Sequence=7 | phi[7, 7]=9.0
State=8 : Sequence=7 | phi[8, 7]=9.0
State=9 : Sequence=7 | phi[9, 7]=9.0
State=10 : Sequence=7 | phi[10, 7]=9.0
State=11 : Sequence=7 | phi[11, 7]=9.0
State=0 : Sequence=8 | phi[0, 8]=2.0
State=1 : Sequence=8 | phi[1, 8]=2.0
State=2 : Sequence=8 | phi[2, 8]=2.0
State=3 : Sequence=8 | phi[3, 8]=2.0
State=4 : Sequence=8 | phi[4, 8]=2.0
State=5 : Sequence=8 | phi[5, 8]=2.0
State=6 : Sequence=8 | phi[6, 8]=2.0
State=7 : Sequence=8 | phi[7, 8]=2.0
State=8 : Sequence=8 | phi[8, 8]=2.0
State=9 : Sequence=8 | phi[9, 8]=2.0
State=10 : Sequence=8 | phi[10, 8]=2.0
State=11 : Sequence=8 | phi[11, 8]=2.0
State=0 : Sequence=9 | phi[0, 9]=2.0
State=1 : Sequence=9 | phi[1, 9]=2.0
State=2 : Sequence=9 | phi[2, 9]=2.0
State=3 : Sequence=9 | phi[3, 9]=2.0
State=4 : Sequence=9 | phi[4, 9]=2.0
State=5 : Sequence=9 | phi[5, 9]=2.0
State=6 : Sequence=9 | phi[6, 9]=2.0
State=7 : Sequence=9 | phi[7, 9]=2.0
State=8 : Sequence=9 | phi[8, 9]=2.0
State=9 : Sequence=9 | phi[9, 9]=2.0
State=10 : Sequence=9 | phi[10, 9]=2.0
State=11 : Sequence=9 | phi[11, 9]=2.0
State=0 : Sequence=10 | phi[0, 10]=0.0
State=1 : Sequence=10 | phi[1, 10]=0.0
State=2 : Sequence=10 | phi[2, 10]=0.0
State=3 : Sequence=10 | phi[3, 10]=0.0
State=4 : Sequence=10 | phi[4, 10]=0.0
State=5 : Sequence=10 | phi[5, 10]=0.0
State=6 : Sequence=10 | phi[6, 10]=0.0
State=7 : Sequence=10 | phi[7, 10]=0.0
State=8 : Sequence=10 | phi[8, 10]=0.0
State=9 : Sequence=10 | phi[9, 10]=0.0
State=10 : Sequence=10 | phi[10, 10]=0.0
State=11 : Sequence=10 | phi[11, 10]=0.0
State=0 : Sequence=11 | phi[0, 11]=11.0
State=1 : Sequence=11 | phi[1, 11]=11.0
State=2 : Sequence=11 | phi[2, 11]=11.0
State=3 : Sequence=11 | phi[3, 11]=11.0
State=4 : Sequence=11 | phi[4, 11]=11.0
State=5 : Sequence=11 | phi[5, 11]=11.0
State=6 : Sequence=11 | phi[6, 11]=11.0
State=7 : Sequence=11 | phi[7, 11]=11.0
State=8 : Sequence=11 | phi[8, 11]=11.0
State=9 : Sequence=11 | phi[9, 11]=11.0
State=10 : Sequence=11 | phi[10, 11]=11.0
State=11 : Sequence=11 | phi[11, 11]=11.0
State=0 : Sequence=12 | phi[0, 12]=10.0
State=1 : Sequence=12 | phi[1, 12]=10.0
State=2 : Sequence=12 | phi[2, 12]=6.0
State=3 : Sequence=12 | phi[3, 12]=10.0
State=4 : Sequence=12 | phi[4, 12]=10.0
State=5 : Sequence=12 | phi[5, 12]=10.0
State=6 : Sequence=12 | phi[6, 12]=10.0
State=7 : Sequence=12 | phi[7, 12]=10.0
State=8 : Sequence=12 | phi[8, 12]=10.0
State=9 : Sequence=12 | phi[9, 12]=10.0
State=10 : Sequence=12 | phi[10, 12]=10.0
State=11 : Sequence=12 | phi[11, 12]=10.0
State=0 : Sequence=13 | phi[0, 13]=0.0
State=1 : Sequence=13 | phi[1, 13]=0.0
State=2 : Sequence=13 | phi[2, 13]=0.0
State=3 : Sequence=13 | phi[3, 13]=0.0
State=4 : Sequence=13 | phi[4, 13]=0.0
State=5 : Sequence=13 | phi[5, 13]=0.0
State=6 : Sequence=13 | phi[6, 13]=0.0
State=7 : Sequence=13 | phi[7, 13]=0.0
State=8 : Sequence=13 | phi[8, 13]=0.0
State=9 : Sequence=13 | phi[9, 13]=0.0
State=10 : Sequence=13 | phi[10, 13]=0.0
State=11 : Sequence=13 | phi[11, 13]=0.0
State=0 : Sequence=14 | phi[0, 14]=1.0
State=1 : Sequence=14 | phi[1, 14]=1.0
State=2 : Sequence=14 | phi[2, 14]=1.0
State=3 : Sequence=14 | phi[3, 14]=1.0
State=4 : Sequence=14 | phi[4, 14]=1.0
State=5 : Sequence=14 | phi[5, 14]=1.0
State=6 : Sequence=14 | phi[6, 14]=1.0
State=7 : Sequence=14 | phi[7, 14]=1.0
State=8 : Sequence=14 | phi[8, 14]=1.0
State=9 : Sequence=14 | phi[9, 14]=1.0
State=10 : Sequence=14 | phi[10, 14]=1.0
State=11 : Sequence=14 | phi[11, 14]=1.0
State=0 : Sequence=15 | phi[0, 15]=11.0
State=1 : Sequence=15 | phi[1, 15]=11.0
State=2 : Sequence=15 | phi[2, 15]=11.0
State=3 : Sequence=15 | phi[3, 15]=11.0
State=4 : Sequence=15 | phi[4, 15]=11.0
State=5 : Sequence=15 | phi[5, 15]=11.0
State=6 : Sequence=15 | phi[6, 15]=11.0
State=7 : Sequence=15 | phi[7, 15]=11.0
State=8 : Sequence=15 | phi[8, 15]=11.0
State=9 : Sequence=15 | phi[9, 15]=11.0
State=10 : Sequence=15 | phi[10, 15]=11.0
State=11 : Sequence=15 | phi[11, 15]=11.0
State=0 : Sequence=16 | phi[0, 16]=11.0
State=1 : Sequence=16 | phi[1, 16]=11.0
State=2 : Sequence=16 | phi[2, 16]=11.0
State=3 : Sequence=16 | phi[3, 16]=11.0
State=4 : Sequence=16 | phi[4, 16]=11.0
State=5 : Sequence=16 | phi[5, 16]=11.0
State=6 : Sequence=16 | phi[6, 16]=11.0
State=7 : Sequence=16 | phi[7, 16]=11.0
State=8 : Sequence=16 | phi[8, 16]=11.0
State=9 : Sequence=16 | phi[9, 16]=11.0
State=10 : Sequence=16 | phi[10, 16]=11.0
State=11 : Sequence=16 | phi[11, 16]=11.0
State=0 : Sequence=17 | phi[0, 17]=8.0
State=1 : Sequence=17 | phi[1, 17]=8.0
State=2 : Sequence=17 | phi[2, 17]=8.0
State=3 : Sequence=17 | phi[3, 17]=8.0
State=4 : Sequence=17 | phi[4, 17]=8.0
State=5 : Sequence=17 | phi[5, 17]=8.0
State=6 : Sequence=17 | phi[6, 17]=8.0
State=7 : Sequence=17 | phi[7, 17]=8.0
State=8 : Sequence=17 | phi[8, 17]=8.0
State=9 : Sequence=17 | phi[9, 17]=8.0
State=10 : Sequence=17 | phi[10, 17]=8.0
State=11 : Sequence=17 | phi[11, 17]=8.0
**************************************************
Start Backtrace
Path[16]=8
Path[15]=11
Path[14]=11
Path[13]=1
Path[12]=0
Path[11]=10
Path[10]=11
Path[9]=0
Path[8]=2
Path[7]=2
Path[6]=9
Path[5]=1
Path[4]=11
Path[3]=8
Path[2]=9
Path[1]=11
Path[0]=11
**************************************************
Viterbi Result
Delta:
[[0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 2.06920587e-32 0.00000000e+00 0.00000000e+00
5.70384443e-39 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00]
[0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 1.34737310e-22 0.00000000e+00 0.00000000e+00
0.00000000e+00 1.69551005e-35 0.00000000e+00 0.00000000e+00
2.73202689e-42 9.13615385e-43 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00]
[0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 6.67948823e-27
3.32845149e-31 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00]
[0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00]
[0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
4.31245133e-43 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00]
[0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00]
[0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 3.82844161e-39
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00]
[0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00]
[0.00000000e+00 0.00000000e+00 0.00000000e+00 2.07617756e-15
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 2.14524754e-36 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
2.07511538e-54 0.00000000e+00]
[0.00000000e+00 0.00000000e+00 1.81956333e-11 0.00000000e+00
0.00000000e+00 0.00000000e+00 3.64486873e-24 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 7.91650508e-56]
[0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 8.19546058e-38
1.40864100e-43 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00]
[9.93262388e-06 1.81845001e-10 0.00000000e+00 0.00000000e+00
2.92029200e-18 0.00000000e+00 0.00000000e+00 2.80120070e-29
0.00000000e+00 1.27254925e-36 1.37257185e-35 0.00000000e+00
0.00000000e+00 0.00000000e+00 7.08323025e-46 1.55614471e-49
0.00000000e+00 0.00000000e+00]]
Phi:
[[ 0. 11. 11. 9. 8. 11. 1. 9. 2. 2. 0. 11. 10. 0. 1. 11. 11. 8.]
[ 0. 11. 11. 9. 8. 11. 1. 9. 2. 2. 0. 11. 10. 0. 1. 11. 11. 8.]
[ 0. 11. 11. 9. 8. 11. 1. 9. 2. 2. 0. 11. 6. 0. 1. 11. 11. 8.]
[ 0. 11. 11. 9. 8. 11. 1. 9. 2. 2. 0. 11. 10. 0. 1. 11. 11. 8.]
[ 0. 11. 11. 9. 8. 11. 1. 9. 2. 2. 0. 11. 10. 0. 1. 11. 11. 8.]
[ 0. 11. 11. 9. 8. 11. 1. 9. 2. 2. 0. 11. 10. 0. 1. 11. 11. 8.]
[ 0. 11. 11. 9. 8. 11. 1. 9. 2. 2. 0. 11. 10. 0. 1. 11. 11. 8.]
[ 0. 11. 11. 9. 8. 11. 1. 9. 2. 2. 0. 11. 10. 0. 1. 11. 11. 8.]
[ 0. 11. 11. 9. 8. 11. 1. 9. 2. 2. 0. 11. 10. 0. 1. 11. 11. 8.]
[ 0. 11. 11. 9. 8. 11. 1. 9. 2. 2. 0. 11. 10. 0. 1. 11. 11. 8.]
[ 0. 11. 11. 9. 8. 11. 1. 9. 2. 2. 0. 11. 10. 0. 1. 11. 11. 8.]
[ 0. 11. 11. 9. 8. 11. 1. 9. 2. 2. 0. 11. 10. 0. 1. 11. 11. 8.]]
Result:
Observation BestPath
0 Pierre NOUN
1 Vinken NOUN
2 , .
3 61 NUM
4 years NOUN
5 old ADJ
6 , .
7 will VERB
8 join VERB
9 the DET
10 board NOUN
11 as ADP
12 a DET
13 nonexecutive ADJ
14 director NOUN
15 Nov. NOUN
16 29 NUM
17 . .
|
docs/notebooks/python/example_contrast.ipynb | ###Markdown
Example of spectral contrasting

This example demonstrates how to use the contrasting module for finding the frequency bands that provide maximal separation between two timeseries arrays. To open this notebook in Colab, right-click the link below and open in a new tab:

[](https://colab.research.google.com/github/theonlyid/spectral/blob/release/1.1/docs/notebooks/python/example_contrast.ipynb)
###Code
# Run this cell if running notebook from Google Colab
if 'google.colab' in str(get_ipython()):
print('Running on CoLab')
# clone repo from github
!git clone -b release/1.1 --depth 1 https://github.com/theonlyid/spectral.git
# install the package
%cd spectral
!python setup.py install
%cd ..
# import depencies
import numpy as np
from spectral.contrast import decimate, contrast, filter
from spectral.data_handling import *
# The dataset is composed of two objects: a timeseries and params
# The 'simulate()' method simulates timeseries data
# We'll generate two timeseries' and contrast them
ts1 = DataArray.simulate_recording(fs=1000, nchannels=10, ntrials=5, seed=10)
ts2 = DataArray.simulate_recording(fs=1000, nchannels=10, ntrials=5, seed=30)
ts = np.append(ts1, ts2, axis=-1)
print(f"shape of ts array={np.shape(ts)}")
# DataArray stores the timeseries array along with its sampling frequency
da = DataArray(ts, fs=1000)
# TsParams stores the params for time-frequency analysis
params = TsParams(nperseg=64, noverlap=48)
# A dataset object is a combination of DataArray and TsParams
# This ensures that they are locked together to enable correct processing
ds = Dataset(da, params)
# We're going to decimate our data by a factor of 10
ds.data_array.data = decimate(ds.data_array.data, 10)
ds.data_array.fs = ds.data_array.fs//10
# y stores binary labels of the trials to contrast
y = np.ones((ds.data_array.data.shape[-1]))
y[5:] = 0
# Contrast then returns an SNR matrix with combinations of band_start and band_stop
# This will inform timseries filtering that enables maximal separability between signals
snr, f = contrast(ds, y, fs=100, nperseg=64, noverlap=48)
# Plot the SNR matrix to visualize results
import matplotlib.pyplot as plt
plt.pcolormesh(f, f, np.log(snr));
plt.xlabel('Band stop (Hz)');
plt.ylabel('Band start (Hz)');
plt.grid();
plt.colorbar();
# Find the frequency range where SNR is highest
idx = np.where(snr==max(snr.ravel()))
start_band, stop_band = np.squeeze(f[idx[0]]), np.squeeze(f[idx[1]])
print(f"optimal band for separating the signals is {start_band:.1f} - {stop_band:.1f} Hz")
# filter the signal within that range
# first make sure the band edges are not at the extremes (0 Hz or Nyquist)
start_band = 0.1 if start_band == 0 else start_band
stop_band = 49.9 if stop_band ==50 else stop_band
filtered_signal = filter(data=ds.data_array.data, low_pass=start_band, high_pass=stop_band, fs=100)
# generate t for plotting
t = np.arange(start=0, stop=filtered_signal.shape[1]/100, step=1/100)
plt.plot(t, filtered_signal[0,:,1]);
plt.plot(t, filtered_signal[0,:,-1]);
plt.xlabel('time (s)')
plt.ylabel('amplitude')
###Output
_____no_output_____ |
Ajiboye Azeezat WT-21-011/2-checkpoint.ipynb | ###Markdown
Grouping your data
###Code
import warnings
warnings.simplefilter('ignore', FutureWarning)
import matplotlib
matplotlib.rcParams['axes.grid'] = True # show gridlines by default
%matplotlib inline
import pandas as pd
if pd.__version__.startswith('0.23'):
# this solves an incompatibility between pandas 0.23 and datareader 0.6
# taken from https://stackoverflow.com/questions/50394873/
    pd.core.common.is_list_like = pd.api.types.is_list_like
from pandas_datareader.wb import download
?download
YEAR = 2013
GDP_INDICATOR = 'NY.GDP.MKTP.CD'
gdp = download(indicator=GDP_INDICATOR, country=['GB','CN'],
start=YEAR-5, end=YEAR)
gdp = gdp.reset_index()
gdp
gdp.groupby('country')['NY.GDP.MKTP.CD'].aggregate(sum)
gdp.groupby('year')['NY.GDP.MKTP.CD'].aggregate(sum)
LOCATION='comtrade_milk_uk_monthly_14.csv'
# LOCATION = 'http://comtrade.un.org/api/get?max=5000&type=C&freq=M&px=HS&ps=2014&r=826&p=all&rg=1%2C2&cc=0401%2C0402&fmt=csv'
milk = pd.read_csv(LOCATION, dtype={'Commodity Code':str, 'Reporter Code':str})
milk.head(3)
COLUMNS = ['Year', 'Period','Trade Flow','Reporter', 'Partner', 'Commodity','Commodity Code','Trade Value (US$)']
milk = milk[COLUMNS]
milk_world = milk[milk['Partner'] == 'World']
milk_countries = milk[milk['Partner'] != 'World']
milk_countries.to_csv('countrymilk.csv', index=False)
load_test = pd.read_csv('countrymilk.csv', dtype={'Commodity Code':str, 'Reporter Code':str})
load_test.head(2)
milk_imports = milk[milk['Trade Flow'] == 'Imports']
milk_countries_imports = milk_countries[milk_countries['Trade Flow'] == 'Imports']
milk_world_imports=milk_world[milk_world['Trade Flow'] == 'Imports']
milkImportsInJanuary2014 = milk_countries_imports[milk_countries_imports['Period'] == 201401]
milkImportsInJanuary2014.sort_values('Trade Value (US$)',ascending=False).head(10)
###Output
_____no_output_____
###Markdown
Make sure you run all the cells above!

Grouping data

On many occasions, a dataframe may be organised as groups of rows where the group membership is identified based on cell values within one or more 'key' columns. **Grouping** refers to the process whereby rows associated with a particular group are collated so that you can work with just those rows as distinct subsets of the whole dataset.

The number of groups the dataframe will be split into is based on the number of unique values identified within a single key column, or the number of unique combinations of values for two or more key columns.

The `groupby()` method runs down each row in a dataframe, splitting the rows into separate groups based on the unique values associated with the key column or columns.

The following is an example of the steps and code needed to split a dataframe.

Grouping the data

Split the data into two different subsets of data (imports and exports), by grouping on trade flow.
###Code
groups = milk_countries.groupby('Trade Flow')
###Output
_____no_output_____
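###Markdown
As a quick sanity check on the split, you can count how many rows fall into each group; a minimal, illustrative snippet using the `groups` object created above:
###Code
# Number of rows in each trade-flow group
groups.size()
###Output
_____no_output_____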
###Markdown
Inspect the first few rows associated with a particular group:
###Code
groups.get_group('Imports').head()
###Output
_____no_output_____
###Markdown
As well as grouping on a single term, you can create groups based on multiple columns by passing in several column names as a list. For example, generate groups based on commodity code *and* trade flow, and then preview the keys used to define the groups.
###Code
GROUPING_COMMFLOW = ['Commodity Code','Trade Flow']
groups = milk_countries.groupby(GROUPING_COMMFLOW)
groups.groups.keys()
###Output
_____no_output_____
###Markdown
Retrieve a group based on multiple group levels by passing in a tuple that specifies a value for each index column. For example, if a grouping is based on the `'Partner'` and `'Trade Flow'` columns, the argument of `get_group` has to be a partner/flow pair, like `('France', 'Imports')`, to get all rows associated with imports from France.
###Code
GROUPING_PARTNERFLOW = ['Partner','Trade Flow']
groups = milk_countries.groupby(GROUPING_PARTNERFLOW)
GROUP_PARTNERFLOW= ('France','Imports')
groups.get_group( GROUP_PARTNERFLOW )
###Output
_____no_output_____
###Markdown
To find the leading partner for a particular commodity, group by commodity, get the desired group, and then sort the result.
###Code
groups = milk_countries.groupby(['Commodity Code'])
groups.get_group('0402').sort_values("Trade Value (US$)", ascending=False).head()
###Output
_____no_output_____ |
HouMath/Face_classification_HouMath_XGB_04.ipynb | ###Markdown
In this notebook, we mainly utilize extreme gradient boosting (XGBoost) to improve the prediction model originally proposed in the TLE 2016 November machine learning tutorial. Extreme gradient boosting can be viewed as an enhanced version of gradient boosting that uses a more regularized model formalization to control over-fitting, and it usually performs better. Applications of XGB can be found in many Kaggle competitions, and recommended tutorials can be found online.

Our work will be organized in the following order:

* Background
* Exploratory Data Analysis
* Data Preparation and Model Selection
* Final Results

Background

The dataset we will use comes from a class exercise from The University of Kansas on Neural Networks and Fuzzy Systems. This exercise is based on a consortium project to use machine learning techniques to create a reservoir model of the largest gas fields in North America, the Hugoton and Panoma Fields. For more info on the origin of the data, see Bohling and Dubois (2003) and Dubois et al. (2007).

The dataset we will use is log data from nine wells that have been labeled with a facies type based on observation of core. We will use this log data to train a classifier to predict facies types.

This data is from the Council Grove gas reservoir in Southwest Kansas. The Panoma Council Grove Field is predominantly a carbonate gas reservoir encompassing 2700 square miles in Southwestern Kansas. This dataset is from nine wells (with 4149 examples), consisting of a set of seven predictor variables and a rock facies (class) for each example vector, and validation (test) data (830 examples from two wells) having the same seven predictor variables in the feature vector. Facies are based on examination of cores from nine wells taken vertically at half-foot intervals. Predictor variables include five from wireline log measurements and two geologic constraining variables that are derived from geologic knowledge. These are essentially continuous variables sampled at a half-foot sample rate.

The seven predictor variables are:

* Five wireline log curves: gamma ray (GR), resistivity logging (ILD_log10), photoelectric effect (PE), neutron-density porosity difference and average neutron-density porosity (DeltaPHI and PHIND). Note, some wells do not have PE.
* Two geologic constraining variables: nonmarine-marine indicator (NM_M) and relative position (RELPOS)

The nine discrete facies (classes of rocks) are:

1. Nonmarine sandstone
2. Nonmarine coarse siltstone
3. Nonmarine fine siltstone
4. Marine siltstone and shale
5. Mudstone (limestone)
6. Wackestone (limestone)
7. Dolomite
8. Packstone-grainstone (limestone)
9. Phylloid-algal bafflestone (limestone)

These facies aren't discrete, and gradually blend into one another. Some have neighboring facies that are rather close. Mislabeling within these neighboring facies can be expected to occur. The following table lists the facies, their abbreviated labels and their approximate neighbors.

Facies | Label | Adjacent Facies
---|---|---
1 | SS | 2
2 | CSiS | 1,3
3 | FSiS | 2
4 | SiSh | 5
5 | MS | 4,6
6 | WS | 5,7
7 | D | 6,8
8 | PS | 6,7,9
9 | BS | 7,8

Exploratory Data Analysis

After the background introduction, we start by importing the pandas library for some basic data analysis and manipulation. The matplotlib and seaborn libraries are imported for data visualization.
###Code
%matplotlib inline
import pandas as pd
from pandas.tools.plotting import scatter_matrix
import matplotlib.pyplot as plt
import matplotlib as mpl
import seaborn as sns
import matplotlib.colors as colors
import xgboost as xgb
import numpy as np
from sklearn.metrics import confusion_matrix, f1_score, accuracy_score, roc_auc_score
from classification_utilities import display_cm, display_adj_cm
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import validation_curve
from sklearn.datasets import load_svmlight_files
from sklearn.model_selection import StratifiedKFold, cross_val_score, LeavePGroupsOut
from sklearn.datasets import make_classification
from xgboost.sklearn import XGBClassifier
from scipy.sparse import vstack
#use a fixed seed for reproducibility
seed = 123
np.random.seed(seed)
filename = './facies_vectors.csv'
training_data = pd.read_csv(filename)
training_data.head(10)
###Output
_____no_output_____
###Markdown
Set columns 'Well Name' and 'Formation' to be category
###Code
training_data['Well Name'] = training_data['Well Name'].astype('category')
training_data['Formation'] = training_data['Formation'].astype('category')
training_data.info()
training_data.describe()
###Output
/Users/littleni/anaconda/lib/python3.5/site-packages/numpy/lib/function_base.py:3834: RuntimeWarning: Invalid value encountered in percentile
RuntimeWarning)
###Markdown
Check distribution of classes in whole dataset
###Code
plt.figure(figsize=(5,5))
facies_colors = ['#F4D03F', '#F5B041','#DC7633','#6E2C00','#1B4F72',
'#2E86C1', '#AED6F1', '#A569BD', '#196F3D']
facies_labels = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS','WS', 'D','PS', 'BS']
facies_counts = training_data['Facies'].value_counts().sort_index()
facies_counts.index = facies_labels
facies_counts.plot(kind='bar',color=facies_colors,title='Distribution of Training Data by Facies')
###Output
_____no_output_____
###Markdown
Check distribution of classes in each well
###Code
wells = training_data['Well Name'].unique()
plt.figure(figsize=(15,9))
for index, w in enumerate(wells):
ax = plt.subplot(2,5,index+1)
facies_counts = pd.Series(np.zeros(9), index=range(1,10))
facies_counts = facies_counts.add(training_data[training_data['Well Name']==w]['Facies'].value_counts().sort_index())
#facies_counts.replace(np.nan,0)
facies_counts.index = facies_labels
facies_counts.plot(kind='bar',color=facies_colors,title=w)
ax.set_ylim(0,160)
###Output
_____no_output_____
###Markdown
We can see that classes are very imbalanced in each well
###Code
plt.figure(figsize=(5,5))
sns.heatmap(training_data.corr(), vmax=1.0, square=True)
###Output
_____no_output_____
###Markdown
Data Preparation and Model Selection

Now we are ready to test the XGB approach. We will use the confusion matrix and f1_score, which were imported above, as metrics for classification, as well as GridSearchCV, which is an excellent tool for parameter optimization.
###Code
X_train = training_data.drop(['Facies', 'Well Name','Formation','Depth'], axis = 1 )
Y_train = training_data['Facies' ] - 1
dtrain = xgb.DMatrix(X_train, Y_train)
features = ['GR','ILD_log10','DeltaPHI','PHIND','PE','NM_M','RELPOS']
###Output
_____no_output_____
###Markdown
The accuracy and accuracy_adjacent functions are defined in the following to quantify the prediction correctness.
###Code
def accuracy(conf):
total_correct = 0.
nb_classes = conf.shape[0]
for i in np.arange(0,nb_classes):
total_correct += conf[i][i]
acc = total_correct/sum(sum(conf))
return acc
adjacent_facies = np.array([[1], [0,2], [1], [4], [3,5], [4,6,7], [5,7], [5,6,8], [6,7]])
def accuracy_adjacent(conf, adjacent_facies):
nb_classes = conf.shape[0]
total_correct = 0.
for i in np.arange(0,nb_classes):
total_correct += conf[i][i]
for j in adjacent_facies[i]:
total_correct += conf[i][j]
return total_correct / sum(sum(conf))
###Output
_____no_output_____
###Markdown
Before proceeding further, we define a function which will help us create XGBoost models and perform cross-validation.
###Code
skf = StratifiedKFold(n_splits=5)
cv = skf.split(X_train, Y_train)
def modelfit(alg, Xtrain, Ytrain, useTrainCV=True, cv_fold=skf):
#Fit the algorithm on the data
alg.fit(Xtrain, Ytrain,eval_metric='merror')
#Predict training set:
dtrain_prediction = alg.predict(Xtrain)
#dtrain_predprob = alg.predict_proba(Xtrain)[:,1]
#Pring model report
print ("\nModel Report")
print ("Accuracy : %.4g" % accuracy_score(Ytrain,dtrain_prediction))
print ("F1 score (Train) : %f" % f1_score(Ytrain,dtrain_prediction,average='micro'))
#Perform cross-validation:
if useTrainCV:
cv_score = cross_val_score(alg, Xtrain, Ytrain, cv=cv_fold, scoring='f1_micro')
print ("CV Score : Mean - %.7g | Std - %.7g | Min - %.7g | Max - %.7g" %
(np.mean(cv_score), np.std(cv_score), np.min(cv_score), np.max(cv_score)))
#Pring Feature Importance
feat_imp = pd.Series(alg.booster().get_fscore()).sort_values(ascending=False)
feat_imp.plot(kind='bar',title='Feature Importances')
plt.ylabel('Feature Importance Score')
###Output
_____no_output_____
###Markdown
General Approach for Parameter Tuning

We are going to perform the steps as follows:

1. Choose a relatively high learning rate, e.g., 0.1. Usually somewhere between 0.05 and 0.3 should work for different problems.
2. Determine the optimum number of trees for this learning rate. XGBoost has a very useful function called "cv" which performs cross-validation at each boosting iteration and thus returns the optimum number of trees required (a minimal, commented-out sketch of this is included in the next code cell).
3. Tune the tree-based parameters (max_depth, min_child_weight, gamma, subsample, colsample_bytree) for the decided learning rate and number of trees.
4. Tune the regularization parameters (lambda, alpha) for xgboost, which can help reduce model complexity and enhance performance.
5. Lower the learning rate and decide the optimal parameters.

Step 1: Fix learning rate and number of estimators for tuning tree-based parameters

In order to decide on the boosting parameters, we need to set some initial values for the other parameters. Let's take the following values:

1. max_depth = 5
2. min_child_weight = 1
3. gamma = 0
4. subsample, colsample_bytree = 0.8 : This is a commonly used start value.
5. scale_pos_weight = 1

Please note that all the above are just initial estimates and will be tuned later. Let's take the default learning rate of 0.1 here and check the optimum number of trees using the cv function of xgboost. The function defined above will do it for us.
###Code
xgb1= XGBClassifier(
learning_rate=0.05,
objective = 'multi:softmax',
nthread = 4,
seed = seed
)
xgb1
modelfit(xgb1, X_train, Y_train)
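# Alternatively, step 2 above can be done with XGBoost's built-in cv function.
# A minimal, illustrative sketch (left commented out; the parameter values simply
# mirror the initial estimates listed above):
# cv_params = {'eta': 0.1, 'max_depth': 5, 'min_child_weight': 1, 'gamma': 0,
#              'subsample': 0.8, 'colsample_bytree': 0.8,
#              'objective': 'multi:softmax', 'num_class': 9, 'seed': seed}
# cv_result = xgb.cv(cv_params, dtrain, num_boost_round=500, nfold=5,
#                    metrics='merror', early_stopping_rounds=50, seed=seed)
# print("estimated optimal number of trees: %d" % len(cv_result))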
###Output
Model Report
Accuracy : 0.6756
F1 score (Train) : 0.675584
CV Score : Mean - 0.5253533 | Std - 0.05590592 | Min - 0.4323671 | Max - 0.5795181
###Markdown
Step 2: Tune max_depth and min_child_weight
###Code
param_test1={
'n_estimators':range(20, 100, 10)
}
gs1 = GridSearchCV(xgb1,param_grid=param_test1,
scoring='accuracy', n_jobs=4,iid=False, cv=skf)
gs1.fit(X_train, Y_train)
gs1.grid_scores_, gs1.best_params_,gs1.best_score_
gs1.best_estimator_
param_test2={
'max_depth':range(5,16,2),
'min_child_weight':range(1,15,2)
}
gs2 = GridSearchCV(gs1.best_estimator_,param_grid=param_test2,
scoring='accuracy', n_jobs=4,iid=False, cv=skf)
gs2.fit(X_train, Y_train)
gs2.grid_scores_, gs2.best_params_,gs2.best_score_
gs2.best_estimator_
modelfit(gs2.best_estimator_, X_train, Y_train)
###Output
Model Report
Accuracy : 0.7932
F1 score (Train) : 0.793203
CV Score : Mean - 0.5371519 | Std - 0.06337505 | Min - 0.4227053 | Max - 0.6024096
###Markdown
Step 3: Tune gamma, subsample and colsample_bytree
###Code
param_test3={
'gamma':[0,.05,.1,.15,.2,.3,.4],
'subsample':[0.6,.7,.75,.8,.85,.9],
'colsample_bytree':[i/10.0 for i in range(4,10)]
}
gs3 = GridSearchCV(gs2.best_estimator_,param_grid=param_test3,
scoring='accuracy', n_jobs=4,iid=False, cv=skf)
gs3.fit(X_train, Y_train)
gs3.grid_scores_, gs3.best_params_,gs3.best_score_
gs3.best_estimator_
modelfit(gs3.best_estimator_,X_train,Y_train)
###Output
Model Report
Accuracy : 0.7857
F1 score (Train) : 0.785732
CV Score : Mean - 0.5477532 | Std - 0.06775601 | Min - 0.4311594 | Max - 0.6228916
###Markdown
Step 5: Tuning Regularization Parameters
###Code
param_test4={
'reg_alpha':[0, 1e-5, 1e-2, 0.1, 0.2],
'reg_lambda':[0, .25,.5,.75,.1]
}
gs4 = GridSearchCV(gs3.best_estimator_,param_grid=param_test4,
scoring='accuracy', n_jobs=4,iid=False, cv=skf)
gs4.fit(X_train, Y_train)
gs4.grid_scores_, gs4.best_params_,gs4.best_score_
modelfit(gs4.best_estimator_,X_train, Y_train)
gs4.best_estimator_
param_test5={
'reg_alpha':[.15,0.2,.25,.3,.4],
}
gs5 = GridSearchCV(gs4.best_estimator_,param_grid=param_test5,
scoring='accuracy', n_jobs=4,iid=False, cv=skf)
gs5.fit(X_train, Y_train)
gs5.grid_scores_, gs5.best_params_,gs5.best_score_
modelfit(gs5.best_estimator_, X_train, Y_train)
gs5.best_estimator_
###Output
_____no_output_____
###Markdown
Step 6: Reducing Learning Rate
###Code
xgb4 = XGBClassifier(
learning_rate = 0.025,
n_estimators=120,
max_depth=7,
min_child_weight=7,
gamma = 0.05,
subsample=0.6,
colsample_bytree=0.8,
reg_alpha=0.2,
reg_lambda =0.75,
objective='multi:softmax',
nthread =4,
seed = seed,
)
modelfit(xgb4,X_train, Y_train)
xgb5 = XGBClassifier(
learning_rate = 0.00625,
n_estimators=480,
max_depth=7,
min_child_weight=7,
gamma = 0.05,
subsample=0.6,
colsample_bytree=0.8,
reg_alpha=0.2,
reg_lambda =0.75,
objective='multi:softmax',
nthread =4,
seed = seed,
)
modelfit(xgb5,X_train, Y_train)
###Output
Model Report
Accuracy : 0.7862
F1 score (Train) : 0.786214
CV Score : Mean - 0.5431752 | Std - 0.06553735 | Min - 0.4323671 | Max - 0.6144578
###Markdown
Next we use our tuned final model to do cross-validation on the training data set: in each iteration, one well is held out as test data and the model is trained on the remaining wells.
###Code
# Load data
filename = './facies_vectors.csv'
data = pd.read_csv(filename)
# Change to category data type
data['Well Name'] = data['Well Name'].astype('category')
data['Formation'] = data['Formation'].astype('category')
X_train = data.drop(['Facies', 'Formation','Depth'], axis = 1 )
X_train_nowell = X_train.drop(['Well Name'], axis=1)
Y_train = data['Facies' ] - 1
# Final recommended model based on the extensive parameters search
model_final = gs5.best_estimator_
model_final.fit( X_train_nowell , Y_train , eval_metric = 'merror' )
# Leave one well out for cross validation
well_names = data['Well Name'].unique()
f1=[]
for i in range(len(well_names)):
# Split data for training and testing
train_X = X_train[X_train['Well Name'] != well_names[i] ]
train_Y = Y_train[X_train['Well Name'] != well_names[i] ]
test_X = X_train[X_train['Well Name'] == well_names[i] ]
test_Y = Y_train[X_train['Well Name'] == well_names[i] ]
train_X = train_X.drop(['Well Name'], axis = 1 )
test_X = test_X.drop(['Well Name'], axis = 1 )
    # Train the model on the training wells only, so the held-out well is not seen during fitting
    model_final.fit(train_X, train_Y, eval_metric='merror')
    # Predict on the test set
    predictions = model_final.predict(test_X)
# Print report
print ("\n------------------------------------------------------")
print ("Validation on the leaving out well " + well_names[i])
conf = confusion_matrix( test_Y, predictions, labels = np.arange(9) )
print ("\nModel Report")
print ("-Accuracy: %.6f" % ( accuracy(conf) ))
print ("-Adjacent Accuracy: %.6f" % ( accuracy_adjacent(conf, adjacent_facies) ))
print ("-F1 Score: %.6f" % ( f1_score ( test_Y , predictions , labels = np.arange(9), average = 'weighted' ) ))
f1.append(f1_score ( test_Y , predictions , labels = np.arange(9), average = 'weighted' ))
facies_labels = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS',
'WS', 'D','PS', 'BS']
print ("\nConfusion Matrix Results")
from classification_utilities import display_cm, display_adj_cm
display_cm(conf, facies_labels,display_metrics=True, hide_zeros=True)
print ("\n------------------------------------------------------")
print ("Final Results")
print ("-Average F1 Score: %6f" % (sum(f1)/(1.0*len(f1))))
###Output
------------------------------------------------------
Validation on the leaving out well SHRIMPLIN
Model Report
-Accuracy: 0.861996
-Adjacent Accuracy: 0.978769
-F1 Score: 0.861164
Confusion Matrix Results
Pred SS CSiS FSiS SiSh MS WS D PS BS Total
True
SS 0
CSiS 107 11 118
FSiS 13 110 123
SiSh 15 1 2 18
MS 3 44 10 6 63
WS 2 54 1 6 63
D 2 3 5
PS 1 4 1 63 69
BS 1 11 12
Precision 0.00 0.89 0.91 0.83 0.94 0.78 0.50 0.78 1.00 0.87
Recall 0.00 0.91 0.89 0.83 0.70 0.86 0.40 0.91 0.92 0.86
F1 0.00 0.90 0.90 0.83 0.80 0.82 0.44 0.84 0.96 0.86
------------------------------------------------------
Validation on the leaving out well ALEXANDER D
Model Report
-Accuracy: 0.793991
-Adjacent Accuracy: 0.933476
-F1 Score: 0.783970
Confusion Matrix Results
Pred SS CSiS FSiS SiSh MS WS D PS BS Total
True
SS 0
CSiS 105 12 117
FSiS 9 82 91
SiSh 40 1 2 1 44
MS 2 11 6 7 26
WS 11 3 32 4 19 69
D 1 13 2 16
PS 7 4 1 86 98
BS 2 2 1 5
Precision 0.00 0.92 0.87 0.67 0.79 0.70 0.59 0.75 1.00 0.80
Recall 0.00 0.90 0.90 0.91 0.42 0.46 0.81 0.88 0.20 0.79
F1 0.00 0.91 0.89 0.77 0.55 0.56 0.68 0.81 0.33 0.78
------------------------------------------------------
Validation on the leaving out well SHANKLE
Model Report
-Accuracy: 0.839644
-Adjacent Accuracy: 0.986637
-F1 Score: 0.841908
Confusion Matrix Results
Pred SS CSiS FSiS SiSh MS WS D PS BS Total
True
SS 61 28 89
CSiS 2 80 7 89
FSiS 13 104 117
SiSh 5 2 7
MS 15 2 1 1 19
WS 1 1 60 1 8 71
D 17 17
PS 5 35 40
BS 0
Precision 0.97 0.66 0.93 0.83 1.00 0.90 0.89 0.76 0.00 0.86
Recall 0.69 0.90 0.89 0.71 0.79 0.85 1.00 0.88 0.00 0.84
F1 0.80 0.76 0.91 0.77 0.88 0.87 0.94 0.81 0.00 0.84
------------------------------------------------------
Validation on the leaving out well LUKE G U
Model Report
-Accuracy: 0.811280
-Adjacent Accuracy: 0.969631
-F1 Score: 0.820805
Confusion Matrix Results
Pred SS CSiS FSiS SiSh MS WS D PS BS Total
True
SS 0
CSiS 3 102 12 117
FSiS 22 107 129
SiSh 31 1 3 35
MS 1 1 2
WS 4 6 65 8 1 84
D 1 13 5 1 20
PS 3 1 14 56 74
BS 0
Precision 0.00 0.82 0.90 0.82 0.00 0.77 1.00 0.80 0.00 0.84
Recall 0.00 0.87 0.83 0.89 0.00 0.77 0.65 0.76 0.00 0.81
F1 0.00 0.85 0.86 0.85 0.00 0.77 0.79 0.78 0.00 0.82
------------------------------------------------------
Validation on the leaving out well KIMZEY A
Model Report
-Accuracy: 0.760820
-Adjacent Accuracy: 0.933941
-F1 Score: 0.742198
Confusion Matrix Results
Pred SS CSiS FSiS SiSh MS WS D PS BS Total
True
SS 5 4 9
CSiS 79 6 85
FSiS 17 57 74
SiSh 40 2 1 43
MS 2 27 11 13 53
WS 1 3 1 34 3 9 51
D 3 1 21 2 27
PS 3 7 4 76 90
BS 1 6 7
Precision 0.00 0.78 0.84 0.83 0.82 0.62 0.75 0.72 0.00 0.74
Recall 0.00 0.93 0.77 0.93 0.51 0.67 0.78 0.84 0.00 0.76
F1 0.00 0.85 0.80 0.88 0.63 0.64 0.76 0.78 0.00 0.74
------------------------------------------------------
Validation on the leaving out well CROSS H CATTLE
Model Report
-Accuracy: 0.702595
-Adjacent Accuracy: 0.926148
-F1 Score: 0.709076
Confusion Matrix Results
Pred SS CSiS FSiS SiSh MS WS D PS BS Total
True
SS 102 49 7 158
CSiS 7 107 27 1 142
FSiS 5 39 1 2 47
SiSh 1 3 13 7 1 25
MS 4 3 12 7 2 28
WS 1 25 5 31
D 1 1 2
PS 4 9 2 53 68
BS 0
Precision 0.94 0.64 0.47 0.87 1.00 0.52 0.33 0.82 0.00 0.77
Recall 0.65 0.75 0.83 0.52 0.43 0.81 0.50 0.78 0.00 0.70
F1 0.76 0.69 0.60 0.65 0.60 0.63 0.40 0.80 0.00 0.71
------------------------------------------------------
Validation on the leaving out well NOLAN
Model Report
-Accuracy: 0.756627
-Adjacent Accuracy: 0.913253
-F1 Score: 0.752754
Confusion Matrix Results
Pred SS CSiS FSiS SiSh MS WS D PS BS Total
True
SS 4 4
CSiS 2 109 6 1 118
FSiS 16 49 1 2 68
SiSh 1 19 1 5 1 1 28
MS 2 1 19 12 1 12 47
WS 1 15 14 30
D 3 1 4
PS 5 2 7 100 2 116
BS 0
Precision 0.00 0.83 0.79 0.90 0.95 0.38 0.60 0.76 0.00 0.78
Recall 0.00 0.92 0.72 0.68 0.40 0.50 0.75 0.86 0.00 0.76
F1 0.00 0.88 0.75 0.78 0.57 0.43 0.67 0.81 0.00 0.75
###Markdown
Use final model to predict the given test data set
###Code
# Load test data
test_data = pd.read_csv('validation_data_nofacies.csv')
test_data['Well Name'] = test_data['Well Name'].astype('category')
X_test = test_data.drop(['Formation', 'Well Name', 'Depth'], axis=1)
# Predict facies of unclassified data
Y_predicted = model_final.predict(X_test)
test_data['Facies'] = Y_predicted + 1
# Store the prediction
test_data.to_csv('Prediction4.csv')
test_data[test_data['Well Name']=='STUART'].head()
test_data[test_data['Well Name']=='CRAWFORD'].head()
def make_facies_log_plot(logs, facies_colors):
#make sure logs are sorted by depth
logs = logs.sort_values(by='Depth')
cmap_facies = colors.ListedColormap(
facies_colors[0:len(facies_colors)], 'indexed')
ztop=logs.Depth.min(); zbot=logs.Depth.max()
cluster=np.repeat(np.expand_dims(logs['Facies'].values,1), 100, 1)
f, ax = plt.subplots(nrows=1, ncols=6, figsize=(8, 12))
ax[0].plot(logs.GR, logs.Depth, '-g')
ax[1].plot(logs.ILD_log10, logs.Depth, '-')
ax[2].plot(logs.DeltaPHI, logs.Depth, '-', color='0.5')
ax[3].plot(logs.PHIND, logs.Depth, '-', color='r')
ax[4].plot(logs.PE, logs.Depth, '-', color='black')
im=ax[5].imshow(cluster, interpolation='none', aspect='auto',
cmap=cmap_facies,vmin=1,vmax=9)
divider = make_axes_locatable(ax[5])
cax = divider.append_axes("right", size="20%", pad=0.05)
cbar=plt.colorbar(im, cax=cax)
cbar.set_label((17*' ').join([' SS ', 'CSiS', 'FSiS',
'SiSh', ' MS ', ' WS ', ' D ',
' PS ', ' BS ']))
cbar.set_ticks(range(0,1)); cbar.set_ticklabels('')
for i in range(len(ax)-1):
ax[i].set_ylim(ztop,zbot)
ax[i].invert_yaxis()
ax[i].grid()
ax[i].locator_params(axis='x', nbins=3)
ax[0].set_xlabel("GR")
ax[0].set_xlim(logs.GR.min(),logs.GR.max())
ax[1].set_xlabel("ILD_log10")
ax[1].set_xlim(logs.ILD_log10.min(),logs.ILD_log10.max())
ax[2].set_xlabel("DeltaPHI")
ax[2].set_xlim(logs.DeltaPHI.min(),logs.DeltaPHI.max())
ax[3].set_xlabel("PHIND")
ax[3].set_xlim(logs.PHIND.min(),logs.PHIND.max())
ax[4].set_xlabel("PE")
ax[4].set_xlim(logs.PE.min(),logs.PE.max())
ax[5].set_xlabel('Facies')
ax[1].set_yticklabels([]); ax[2].set_yticklabels([]); ax[3].set_yticklabels([])
ax[4].set_yticklabels([]); ax[5].set_yticklabels([])
ax[5].set_xticklabels([])
f.suptitle('Well: %s'%logs.iloc[0]['Well Name'], fontsize=14,y=0.94)
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.colors as colors
from mpl_toolkits.axes_grid1 import make_axes_locatable
make_facies_log_plot(
test_data[test_data['Well Name'] == 'STUART'],
facies_colors)
###Output
_____no_output_____ |
Linkedlist/sll.ipynb | ###Markdown
Implement a linked list with append, prepend, find, insert, delete, length
###Code
class Node:
"""
An object for storing a single node in a linked list
Attributes:
data: Data stored in node
next_node: Reference to next node in linked list
"""
def __init__(self, data, next_node = None):
self.data = data
self.next_node = next_node
def is_empty(self):
        return self.data is None
def __repr__(self):
return f"<Node data: {self.data}>"
class LinkedList:
"""
Singly Linked List
Linear data structure that stores values in nodes.
The list maintains a reference to the first node, also called head.
Each node points to the next node in the list
Attributes:
head: The head node of the list
"""
def __init__(self, head = None):
self.head = head
self.tail = None
def __repr__(self):
"""
Return a string representation of the list.
Takes O(n) time.
"""
nodes = []
current = self.head
while current:
if current is self.head:
nodes.append(f"[Head: {current.data}]")
elif current.next_node is None:
nodes.append(f"[Tail: {current.data}]")
else:
nodes.append(f"[{current.data}]")
current = current.next_node
return '-> '.join(nodes)
def is_empty(self):
"""
Determines if the linked list is empty
Takes O(1) time
"""
        return self.head is None
def size(self):
"""
Returns the number of nodes in a Linked list.
Takes O(n) time
"""
current = self.head
count = 0
while current:
count += 1
current = current.next_node
return count
def append(self, new_data):
"""
        Adds a new node containing data to the tail of the list
This method can also be optimized to work in O(1) by keeping an extra pointer to the tail of linked list
Takes O(n) time
"""
node = Node(new_data)
current = self.head
if self.head:
while current.next_node:
current = current.next_node
current.next_node = node
else:
self.head = node
    def prepend(self, new_data):
        """
        Adds a new Node containing data to the head of the list
        Takes O(1) time
        """
        node = Node(new_data)
        node.next_node = self.head
        self.head = node
def search(self, key):
"""
        Determine if a key exists.
Takes O(n) time
Attributes:
key: The element being searched
"""
node = Node(key)
if self.head:
current = self.head
while current:
if current.data == node.data:
return current.data
current = current.next_node
return None
    def insert(self, new_data, pos):
        """
        Inserts a new Node containing data at position pos
        Insertion takes O(1) time but finding the node at the insertion point takes
        O(n) time.
        Takes overall O(n) time.
        """
        if pos is None or pos == 0 or self.head is None:
            self.prepend(new_data)
        else:
            node = Node(new_data)
            current = self.head
            cnt = 1
            while current.next_node:
                if cnt == pos:
                    # Splice the new node in at position pos
                    node.next_node = current.next_node
                    current.next_node = node
                    return
                current = current.next_node
                cnt += 1
            # pos is at or beyond the tail: append the node at the end
            current.next_node = node
    def remove(self, key):
        """
        Removes the Node containing data that matches the key
        Returns the removed node, or False if the key doesn't exist
        Takes O(n) time
        """
        current = self.head
        previous = None
        found = False
        while current and not found:
            if current.data == key and current is self.head:
                self.head = current.next_node
                found = True
            elif current.data == key:
                previous.next_node = current.next_node
                found = True
            else:
                previous = current
                current = current.next_node
        return current if found else False
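# A quick usage sketch of the LinkedList class above (illustrative values; the unit tests below remain the authoritative checks)
demo = LinkedList()
demo.append(2)
demo.append(3)
demo.prepend(1)
demo.insert(2.5, 2)
print(demo)            # [Head: 1]-> [2]-> [2.5]-> [Tail: 3]
print(demo.size())     # 4
print(demo.search(3))  # 3
demo.remove(2.5)
print(demo)            # [Head: 1]-> [2]-> [Tail: 3]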
import unittest
class TestLink(unittest.TestCase):
def setUp(self):
self.n1 = Node(19)
self.ll = LinkedList(self.n1)
self.ll2 = LinkedList(self.n1)
self.ll3 = LinkedList(self.n1)
def test_node_is_empty(self):
self.assertEqual(self.n1.is_empty(), False)
def test_linked_list_is_empty(self):
self.assertEqual(self.ll.is_empty(), False)
def test_linked_list_size(self):
self.assertEqual(self.ll.size(), 1)
def test_linked_list_append(self):
self.assertEqual(self.ll.size(), 1)
self.ll.append(20)
self.assertEqual(self.ll2.size(), 2)
self.ll.append('a')
self.ll.append('bc')
self.assertEqual(self.ll.size(), 4)
def test_linked_list_prepend(self):
self.assertEqual(self.ll2.size(), 1)
self.ll2.prepend(21)
self.assertEqual(self.ll2.size(), 2)
self.assertEqual(self.ll2.head.data, 21)
self.ll2.prepend('a')
self.ll2.prepend('bc')
self.assertEqual(self.ll2.size(), 4)
self.assertEqual(self.ll2.head.data, 'bc')
def test_linked_list_search(self):
self.ll2.prepend(21)
self.assertEqual(self.ll2.search(21), 21)
self.ll2.append('a')
self.ll2.append('bc')
self.assertEqual(self.ll2.search(""), None)
self.assertEqual(self.ll2.search(None), None)
self.assertEqual(self.ll2.search(19), 19)
def test_linked_list_insert(self):
self.ll3.append(40)
self.ll3.insert(42, 1)
self.ll3.insert('a',0)
self.assertEqual(self.ll3.size(), 4)
self.assertEqual(self.ll3.search(40), 40)
self.assertEqual(self.ll3.search('a'), 'a')
def test_linked_list_remove(self):
self.ll3.append(40)
self.ll3.insert(42, 1)
self.ll3.insert('a',0)
self.assertEqual(self.ll3.remove(90), False)
self.ll3.remove(42)
self.assertEqual(self.ll3.search(42), None)
self.assertEqual(self.ll3.search('a'), 'a')
def test_linked_list_remove_index(self):
self.ll3.append(40)
self.ll3.insert(42, 1)
self.ll3.insert('a',0)
self.assertEqual(self.ll3.remove(90), False)
self.ll3.remove(42)
self.assertEqual(self.ll3.search(42), None)
self.assertEqual(self.ll3.search('a'), 'a')
unittest.main(argv=[''], verbosity=2, exit=False)
###Output
_____no_output_____ |
project/main.ipynb | ###Markdown
Installing the required modules:* __gdown__ - downloads the database from the _Google Drive_ platform* __pymongo__ - connects to the remote _MongoDB_ database* __pymongo-schema__ - extracts the schema of the _MongoDB_ database
###Code
!pip install gdown
!pip install 'pymongo[srv]'
!pip install --upgrade https://github.com/pajachiet/pymongo-schema/archive/master.zip
###Output
_____no_output_____
###Markdown
Downloading the database from the _Google Drive_ platform.
###Code
!gdown --id 1A3ILSwvIBbNTEnl5Av2EU450SdBbaNS7
!unzip database.sqlite.zip
###Output
_____no_output_____
###Markdown
Importing the required modules.
###Code
import math
import json
import sqlite3
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from time import sleep
from functools import reduce
from pymongo import MongoClient
from itertools import combinations
from pyspark.sql.types import *
from pyspark.sql.functions import *
from pymongo_schema.extract import extract_pymongo_client_schema
###Output
_____no_output_____
###Markdown
Connecting to the remote _MongoDB_ database.
###Code
# Define the credentials for the MongoDB database
MONGO_URL = 'mongodb+srv://databricks:[email protected]/vp-projekat?retryWrites=true&w=majority'
MONGO_DB = 'vp-projekat'
# Connect to the remote MongoDB database
client = MongoClient(MONGO_URL)
db = client[MONGO_DB]
# Print the schema of the MongoDB database
schema = extract_pymongo_client_schema(client, database_names=[MONGO_DB])
print(json.dumps(schema, indent=4, sort_keys=True))
###Output
_____no_output_____
###Markdown
Converting the _SQLite_ tables into _MongoDB_ collections while dropping redundant columns.* Table: __League__ * Column __id__ - unique identifier of the league * Column __name__ - name of the league* Table: __Match__ * Column __id__ - unique identifier of the match * Column __league_id__ - identifier of the league in which the match was played * Column __season__ - season in which the match was played * Column __date__ - date the match was played * Column __home_team_goal__ - total goals scored by the home team * Column __away_team_goal__ - total goals scored by the away team * Column __B365H__ - odds for a home team win * Column __B365D__ - odds for a draw * Column __B365A__ - odds for an away team win
###Code
# Connect to the local SQLite database
cnx = sqlite3.connect('database.sqlite')
# Define the SQLite tables and columns that should be converted into MongoDB collections
metadata = [
    {'table':'League', 'columns':['id', 'name']},
    {'table':'Match', 'columns':['id', 'league_id', 'season', 'date', 'home_team_goal', 'away_team_goal', 'B365H', 'B365D', 'B365A']},
    {'table':'Team', 'columns':['id', 'team_long_name', 'team_short_name']}
]
# Iterate over all tables/collections
for meta in metadata:
    # Check for and drop any existing MongoDB collection with the same name
    if db[meta['table']].count_documents({}) > 0:
        db[meta['table']].delete_many({})
    # Build the string for selecting the chosen columns of the SQLite table
    table, select = meta['table'], ', '.join(meta['columns'])
    # Select the chosen columns of the SQLite table and convert them into a MongoDB collection
    df = pd.read_sql_query(f'SELECT {select} FROM {table}', cnx)
    df.reset_index(inplace=True)
    records = df.to_dict('records')
    db[table].insert_many(records)
###Output
_____no_output_____
###Markdown
Converting the _MongoDB_ collections into _Pandas_ data frames. The __Match__ table is sorted by match date. Matches without defined betting odds are dropped.
###Code
# Load the MongoDB collections into Pandas data frames
leagues = pd.DataFrame.from_dict(db['League'].find())
matches = pd.DataFrame.from_dict(db['Match'].find().sort('date'))
# Drop matches without defined betting odds
matches.dropna(inplace=True)
# Print the schemas of the Pandas data frames
print('--------- Leagues ---------')
print(leagues.infer_objects().dtypes)
print('--------- Matches ---------')
print(matches.infer_objects().dtypes)
###Output
_____no_output_____
###Markdown
A new _Pandas_ data frame is created, keyed by league and season, whose value is the number of matches of that league played in that season.
###Code
# Count the total number of matches played per league per season
league_season = pd.DataFrame(matches.groupby(['league_id', 'season'], as_index=False)['_id'].count())
league_season.rename(columns={'_id':'matches_total'}, inplace=True)
###Output
_____no_output_____
###Markdown
The __predicted__ function is defined to record whether the result of a given match could have been predicted from the betting odds. The __entropy__ function is defined to record the entropy measure for a given season of a given league, based on the number of correctly predicted match results.
###Code
def _predicted(x):
    # Store the odds and the match result
    gh, ga = x['home_team_goal'], x['away_team_goal']
    oh, od, oa = x['B365H'], x['B365D'], x['B365A']
    rh, rd, ra = gh > ga, gh == ga, gh < ga
    omin = np.min([oh, od, oa])
    bh, bd, ba = oh == omin, od == omin, oa == omin
    # Check whether the match result could have been predicted from the odds
    return 1 if rh and bh or rd and bd or ra and ba else 0
def _entropy(x):
    # Compute the entropy measure as the ratio of correctly predicted results to the total number of matches played
    return x['matches_predicted'] / x['matches_total']
###Output
_____no_output_____
###Markdown
The __predicted__ function is called on every match. The results are aggregated to obtain the entropy measure by calling the __entropy__ function on every season of every league.
###Code
# Count the correctly predicted results of all matches
matches['predicted'] = matches.apply(lambda x: _predicted(x), axis=1)
# Aggregate the correctly predicted results per league per season
matches_predicted = pd.DataFrame(matches.groupby(['league_id', 'season'])['predicted'].sum())
matches_predicted.rename(columns={'predicted':'matches_predicted'}, inplace=True)
# Drop the temporary column created during counting
matches = matches.drop(columns=['predicted'])
# Assign the number of predicted matches and the entropy measure to every league of every season
league_season = pd.merge(league_season, matches_predicted, on=['league_id', 'season'])
league_season['entropy'] = league_season.apply(lambda x: _entropy(x), axis=1)
###Output
_____no_output_____
###Markdown
The __simulate__ function is defined to simulate betting with the given system and an initial stake over all matches of a given league and season. The helper function __comb__ computes the number of combinations with parameters __n__ and __k__. The helper function __bet__ records the outcome of a bet: if the match result was guessed correctly, it returns the value of the winning odds, which is later used to compute the payout of the ticket containing that match; otherwise it returns zero, which corresponds to losing the stake.* Possible values of the __type__ variable: * __h__ - Home team wins * __d__ - Draw * __a__ - Away team wins * __min__ - Betting on the lowest odds (the most likely outcome) * __max__ - Betting on the highest odds (the least likely outcome)
###Code
def _simulate(league_id, season, output=False, system=(6, 4), bet_size=1000, type_='min'):
    def _comb(n, k):
        # Compute the number of combinations using the standard formula
        nf, kf, nkf = math.factorial(n), math.factorial(k), math.factorial(n-k)
        return nf / (kf * nkf)
    def _bet(x, t):
        # Store the odds and the match result
        gh, ga = x['home_team_goal'], x['away_team_goal']
        oh, od, oa = x['B365H'], x['B365D'], x['B365A']
        odds = {'h':oh, 'd':od, 'a':oa}
        rh, rd, ra = gh > ga, gh == ga, gh < ga
        # Check the betting type
        if t in 'hda':
            # Check whether the match result was guessed correctly
            return odds[t] if rh and t == 'h' or rd and t == 'd' or ra and t == 'a' else 0
        elif t == 'min' or t == 'max':
            # Use the builtin min/max since the odds are stored in a dict
            minmax = min(odds.values()) if t == 'min' else max(odds.values())
            # Check whether the match result was guessed correctly
            if rh and oh == minmax:
                return oh
            elif rd and od == minmax:
                return od
            elif ra and oa == minmax:
                return oa
            else:
                return 0
    # Keep only the matches of the given league and season
    season = matches.loc[(matches['league_id'] == league_id) & (matches['season'] == season)]
    # Define the initial parameters
    balance = 0
    balances = []
    cash_per_ticket = bet_size / _comb(*system)
    # Iterate over all matches of the given league and season
    for s in range(0, season.shape[0], system[0]):
        # Decrease the balance by the defined stake
        balance -= bet_size
        # Check the betting results
        result = season[s:s+system[0]].apply(lambda x: _bet(x, type_), axis=1).to_numpy()
        # Keep the winning bets
        correct = result[result > 0]
        # Check whether the system bet was successful
        if correct.shape[0] >= system[1]:
            # Generate all possible tickets for the given system
            tickets = [x for x in combinations(correct, system[1])]
            # Iterate over all tickets
            for t in tickets:
                # Compute the monetary payout
                prod = reduce((lambda x, y: x * y), t)
                balance += prod * cash_per_ticket
        # Store the current balance
        balances.append(balance)
        # Check whether JSON files should be created for Apache Spark Structured Streaming
        if output:
            # Simulate computation time to test Apache Spark Structured Streaming
            sleep(1)
            # Create a JSON file with information about the ticket and the balance
            path = STREAM_PATH + str(s) + '.json'
            dbutils.fs.put(path, "{'ticket':" + str(s) + ", 'balance':" + str(balance) + "}")
    # Compute the minimum and maximum balance
    balances = np.array(balances)
    min_balance, max_balance = np.min(balances), np.max(balances)
    # Return the final, minimum and maximum balance
    return pd.Series([balance, min_balance, max_balance])
###Output
_____no_output_____
###Markdown
The __simulate_all__ function is defined to call the __simulate__ function for every league of every season, in order to test the hypothesis about league predictability. The helper function __scatter__ plots the monetary gain/loss against league predictability.
###Code
def _simulate_all():
    def _scatter(data, index, total, cols=4):
        # Position the subplot
        plt.subplot(total // cols, cols, index + 1)
        # Draw the points
        plt.scatter(data[:,1], data[:,2])
        # Attach the league name to every point on the plot
        for i, ann in enumerate(data[:,0]):
            plt.annotate(ann, (data[i,1], data[i,2]))
    # Set the figure dimensions
    plt.figure(figsize=(30, 15))
    # Store the unique seasons
    seasons = league_season['season'].unique()
    # Iterate over all seasons
    for i, season in enumerate(seasons):
        # Select all leagues of the current season
        selected_season = league_season.query(f'season == "{season}"')
        selected_season.index.name = '_id'
        # Simulate system betting for every league of the current season
        predictive_leagues = selected_season.apply(lambda x: _simulate(x['league_id'], season), axis=1)
        # Store the final, minimum and maximum balance
        predictive_leagues.rename(columns={0:'balance', 1:'min_balance', 2:'max_balance'}, inplace=True)
        # Merge the data frames to link each league with the profitability of system betting
        selected_season = pd.merge(selected_season, predictive_leagues, on='_id')
        selected_season = pd.merge(leagues, selected_season, left_on='id', right_on='league_id')
        # Draw the individual plots
        data = selected_season[['name', 'entropy', 'balance']].to_numpy()
        _scatter(data, i, seasons.shape[0])
    # Show all individual plots
    plt.show()
###Output
_____no_output_____
###Markdown
__Hypothesis 1:__ The predictability of a league affects the size of the monetary gain/loss.__Result:__ In general, betting with a system does not pay off, regardless of league predictability.
###Code
_simulate_all()
###Output
_____no_output_____
###Markdown
_Apache Spark Structured Streaming_ is started for live monitoring of the system betting simulation. The schema of the _JSON_ document created in the __simulate__ function for every ticket played with the system is defined. A plot is drawn showing the balance over time, where time is indexed by the ordinal number of the match in the selected season of the selected league.
###Code
# Define the directory in which the simulation JSON files will be stored
STREAM_PATH = '/test/'
# Delete the JSON files saved during the previous simulation
dbutils.fs.rm(STREAM_PATH, True)
# Create the directory in which the JSON files of the current simulation will be stored
dbutils.fs.mkdirs(STREAM_PATH)
# Display the plot showing the change in balance over time
display(
    spark
    .readStream
    # Attach the schema of the JSON files
    .schema(StructType([StructField('ticket', LongType(), False), StructField('balance', DecimalType(), False)]))
    # Load the JSON files
    .json(STREAM_PATH)
)
###Output
_____no_output_____
###Markdown
__Hypothesis 2:__ Betting with a system pays off.__Result:__ In general, betting with a system does not pay off, but exceptions can be found depending on the system parameters and the betting type.
###Code
# Define the index of the desired league and season combination
# System betting pays off for STREAM_INDEX=6
STREAM_INDEX = 7
# Define the desired betting system
SIMULATE_SYSTEM = (6, 4)
# Define the desired betting type
SIMULATE_TYPE = 'h'
# Select the league and season combination
selected = league_season.iloc[STREAM_INDEX]
# Simulate system betting for the selected league and season
balances = _simulate(selected['league_id'], selected['season'], output=True, system=SIMULATE_SYSTEM, type_=SIMULATE_TYPE).to_numpy()
# Print the final, minimum and maximum balance
print('Current balance:', balances[0])
print('Minimum balance:', balances[1])
print('Maximum balance:', balances[2])
###Output
_____no_output_____
###Markdown
Original notebook: https://github.com/tugstugi/dl-colab-notebooks/blob/master/notebooks/RealTimeVoiceCloning.ipynb
###Code
import os
import sys
import numpy as np
import ipywidgets as widgets
from pathlib import Path
from os.path import exists, join, basename, splitext
from IPython.utils import io
from IPython.display import display, Audio, clear_output
%tensorflow_version 1.x
git = 'https://github.com/jelic98/Real-Time-Voice-Cloning.git'
dir = splitext(basename(git))[0]
%cd '/content'
!rm -rf '{dir}'
!git clone -q --recursive '{git}'
%cd '{dir}'
!pip install -q -r requirements.txt
!pip install -q gdown
!apt-get install -qq libportaudio2
!pip install -q https://github.com/tugstugi/dl-colab-notebooks/archive/colab_utils.zip
!gdown https://drive.google.com/uc?id=1n1sPXvT34yXFLT47QZA6FIRGrwMeSsZc && unzip pretrained.zip
from encoder import inference as encoder
from vocoder import inference as vocoder
from synthesizer.inference import Synthesizer
from dl_colab_notebooks.audio import record_audio, upload_audio
encoder.load_model(Path('encoder/saved_models/pretrained.pt'))
vocoder.load_model(Path('vocoder/saved_models/pretrained/pretrained.pt'))
synthesizer = Synthesizer(Path('synthesizer/saved_models/logs-pretrained/taco_pretrained'))
RATE = 44100
source = "Upload" #@param ["Record", "Upload"]
duration = 5 #@param {type:"number", min:1, max:10, step:1}
embedding = None
def compute_embedding(audio):
global embedding
display(Audio(audio, rate=RATE, autoplay=True))
embedding = encoder.embed_utterance(encoder.preprocess_wav(audio, RATE))
def click_record(button):
clear_output()
audio = record_audio(duration, sample_rate=RATE)
compute_embedding(audio)
def click_upload(button):
clear_output()
audio = upload_audio(sample_rate=RATE)
compute_embedding(audio)
if source == "Record":
button = widgets.Button(description="Record your voice")
button.on_click(click_record)
display(button)
else:
button = widgets.Button(description="Upload voice file")
button.on_click(click_upload)
display(button)
text = "Companies scramble to define the future of work as COVID-19 lingers" #@param {type:"string"}
specs = synthesizer.synthesize_spectrograms([text], [embedding])
generated_wav = vocoder.infer_waveform(specs[0])
generated_wav = np.pad(generated_wav, (0, synthesizer.sample_rate), mode="constant")
clear_output()
display(Audio(generated_wav, rate=synthesizer.sample_rate, autoplay=True))
###Output
_____no_output_____ |
Cat Classification.ipynb | ###Markdown
1. Basic Design of the Model 1. Output of the model: 0 or 1 (Binary Classification) 2. Hypothesis to be tested: $Z = W \cdot X + b$ 3. Activation Function: $\frac{1}{1 + e^{-x}}$ (Sigmoid Function) 2. Import Packages 1. numpy 2. matplotlib 3. seaborn
###Code
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# Next Libraries are unimportant, they just make everyhting look better
import matplotlib.style as style
import seaborn as sns
style.use('seaborn-poster') #sets the size of the charts
style.use('ggplot')
###Output
_____no_output_____
###Markdown
3. Loading the dataset
###Code
dataset = np.load('dataset.npz', encoding='ASCII')
## Get the numpy arrays from the dictionary
X_train = dataset['X_train']
Y_train = dataset['Y_train']
X_test = dataset['X_test']
Y_test = dataset['Y_test']
print(X_train.shape)
print(Y_train.shape)
print(X_test.shape)
print(Y_test.shape)
idx = np.random.randint(X_train.shape[1])
plt.imshow(X_train[:, idx].reshape(28, 28))
label = "cat" if Y_train[:, idx][0] else "bat"
print(f"Label: {label}")
###Output
Label: cat
###Markdown
4. Normalizing the dataNormalizing the data with the following equation:$$ X_{norm} = \frac {X - X_{min}}{X_{max} - X_{min}} $$For this pixel data, $X_{max} = 255$ and $X_{min} = 0$> After running the next cell, go back and view the raw array again
###Code
## Normalizing the training and testing data
X_min = 0
X_max = 255
X_train = X_train / X_max
X_test = X_test / X_max
###Output
_____no_output_____
###Markdown
5. Helper functions for the Model: Sigmoid Function and Initialize Parameters function
###Code
def sigmoid(z):
"""
Computes the element sigmoid of scalar or numpy array(element wise)
Arguments:
z: Scalar or numpy array
Returns:
s: Sigmoid of z (element wise in case of Numpy Array)
"""
s = 1/(1+np.exp(-z))
return s
def initialize_parameters(n_x):
"""
Initializes w to a zero vector, and b to a 0 with datatype float
Arguments:
n_x: Number of features in each sample of X
Returns:
w: Initialized Numpy array of shape (1, n_x) (Weight)
b: Initialized Scalar (bias)
"""
    w = np.zeros((1, n_x))
    b = 0.0
return w, b
###Output
_____no_output_____
###Markdown
Here is a summary of the equations for Forward Propagation and Backward Propagation we have used so far:For m training examples $ X_{train} $ and $ Y_{train} $: 5.1 Forward Propagation$$ Z^{(i)} = w \cdot X_{train}^{(i)} + b $$$$ \hat Y^{(i)} = A^{(i)} = \sigma(Z^{(i)}) = sigmoid(Z^{(i)}) $$$$ \mathcal{L}(\hat Y^{(i)}, Y_{train}^{(i)}) = \mathcal{L}(A^{(i)}, Y_{train}^{(i)}) = -[Y_{train}^{(i)} \log(A^{(i)}) + (1 - Y_{train}^{(i)}) \log(1 - A^{(i)})] $$$$ J = \frac{1}{m} \sum_1^m \mathcal{L} (A^{(i)}, Y_{train}^{(i)}) $$ 5.2 Backward Propagation - Batch Gradient Descent$$ \frac{\partial J}{\partial w} = \frac{1}{m} (A - Y) \cdot X^T $$$$ \frac{\partial J}{\partial b} = \frac{1}{m} \sum_1^m (A - Y) $$> Note: $ \frac{\partial J}{\partial w} $ is represented as dw, and $ \frac{\partial J}{\partial b}$ is represented as db
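A one-line chain-rule step connects the loss to these gradients: for the sigmoid/cross-entropy pair, $\frac{\partial \mathcal{L}}{\partial Z^{(i)}} = A^{(i)} - Y^{(i)}$, and averaging this over the $m$ samples (with $\frac{\partial Z}{\partial w} = X$ and $\frac{\partial Z}{\partial b} = 1$) gives the expressions for $\frac{\partial J}{\partial w}$ and $\frac{\partial J}{\partial b}$ above.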
###Code
def compute_cost(A, Y, m):
"""
Calculates the Cost using the Cross Entropy Loss
Arguments:
A: Computer Probabilities, numpy array
Y: Known Labels, numpy array
Returns:
cost: The computed Cost
"""
cost = np.sum(((- np.log(A))*Y + (-np.log(1-A))*(1-Y)))/m
return np.squeeze(cost)
def propagate(w, b, X, Y):
"""
Performs forward and backward propagation for the Logistic Regression model
Arguments:
w: The Weight Matrix of dimension (1, n_x)
b: Bias
X: Input Matrix, with shape (n_x, m)
Y: Label Matrix of shape (1, m)
Returns:
dw: Gradient of the weight matrix
db: Gradient of the bias
cost: Cost computed on Calculated Probability, and output Label
"""
m = X.shape[1]
A = sigmoid((w @ X)+b)
cost = compute_cost(A, Y, m)
dw = (np.dot(X,(A-Y).T).T)/m
db = (np.sum(A-Y))/m
assert(dw.shape == w.shape)
assert(db.dtype == float)
return dw, db, cost
###Output
_____no_output_____
###Markdown
5.3 OptimizationFor a parameter $ \theta $, the gradient descent update rule is given by:$$ \theta := \theta - \alpha \frac{\partial J}{\partial \theta} $$where $\alpha$ is the learning rate
###Code
def fit(w, b, X, Y, num_iterations, learning_rate, print_freq=100):
"""
    Given the parameters of the model, fits the model to the given input matrix and output labels by performing batch gradient descent for the given number of iterations.
    Arguments:
        w: The Weight Matrix of dimension (1, n_x)
        b: Bias
        X: Input Matrix, with shape (n_x, m)
        Y: Label Matrix of shape (1, m)
        num_iterations: The number of iterations of batch gradient descent to be performed
        learning_rate: Learning rate used in the gradient descent update
        print_freq: Frequency of recording the cost
    Returns:
        w: Optimized weight matrix
        b: Optimized bias
        costs: List of recorded costs; the cost is printed every print_freq iterations (no printing if print_freq is 0)
"""
costs = []
for i in range(num_iterations):
## 1. Calculate Gradients and cost
dw, db, cost = propagate(w, b, X, Y)
costs.append(cost)
if print_freq and i % print_freq == 0:
print(f"Cost after iteration {i}: {cost}")
## 2. Update parameters
w = w - (learning_rate*dw)
b = b - (learning_rate*db)
return w, b, costs
###Output
_____no_output_____
###Markdown
5.4 PredictionUsing the following equation to determine the class that a given sample belongs to:$$\begin{equation} Y_{prediction}^{(i)} = \begin{cases} 1 \text{, if } \hat Y^{(i)} \ge 0.5\\ 0 \text{, if } \hat Y^{(i)} \lt 0.5\\ \end{cases}\end{equation}$$
###Code
def predict(w, b, X):
"""
Predicts the class which the given feature vector belongs to given Weights and Bias of the model
Arguments:
w: The Weight Matrix of dimension (1, n_x)
b: Bias
X: Input Matrix, with X.shape[0] = n_X
Returns:
Y_prediction: Predicted labels
"""
m = X.shape[1]
Y_prediction = np.full((1,m),0)
A = sigmoid((w @ X) + b)
Y_prediction = (A >= 0.5) * 1.0
return Y_prediction
###Output
_____no_output_____
###Markdown
6. Building the ModelNow we have assembled all the individual pieces required to create the Logistic Regression model.Next function is creating the model and calculating its train and test accuracy.
###Code
def model(X_train, Y_train, X_test, Y_test, num_iterations, learning_rate, print_freq):
"""
    Creates a model, fits it to the training data, and uses it to compute the train and test accuracy.
Arguments:
X_train: Training Data X
Y_train: Training Data Y
X_test: Testing Data X
Y_test: Testing data Y
num_iterations: Number of iterations of bgd to perform
learning_rate: Learning Rate of the model
print_freq: Frequency of recording the cost
Returns:
-None-
"""
w, b = initialize_parameters(X_train.shape[0])
w, b, costs = fit(w, b, X_train, Y_train, num_iterations, learning_rate, print_freq)
Y_prediction_train = predict(w, b, X_train)
Y_prediction_test = predict(w, b, X_test)
costs = np.squeeze(costs)
print(f"train accuracy: {100 - np.mean(np.abs(Y_prediction_train - Y_train)) * 100} %")
print(f"test accuracy: {100 - np.mean(np.abs(Y_prediction_test - Y_test)) * 100} %")
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('iterations (per hundreds)')
plt.title(f"Learning rate = {learning_rate}")
plt.show()
model(X_train, Y_train, X_test, Y_test, num_iterations=2000, learning_rate=0.1, print_freq=100)
###Output
Cost after iteration 0: 0.6931471805599454
Cost after iteration 100: 0.3433501584110635
Cost after iteration 200: 0.32897409842493286
Cost after iteration 300: 0.32326638115534495
Cost after iteration 400: 0.31970436704308053
Cost after iteration 500: 0.3170647019548738
Cost after iteration 600: 0.3149517208351771
Cost after iteration 700: 0.3131873793654107
Cost after iteration 800: 0.3116736166745069
Cost after iteration 900: 0.31034955432445077
Cost after iteration 1000: 0.3091744749560511
Cost after iteration 1100: 0.30811971405331506
Cost after iteration 1200: 0.30716429510898796
Cost after iteration 1300: 0.30629238107355694
Cost after iteration 1400: 0.30549169980607177
Cost after iteration 1500: 0.30475252920478246
Cost after iteration 1600: 0.3040670204069449
Cost after iteration 1700: 0.3034287332047725
Cost after iteration 1800: 0.3028323089289211
Cost after iteration 1900: 0.30227323480515816
train accuracy: 87.2875 %
test accuracy: 87.3 %
|
Real Estate Price Forecast/Real Estate Forecast - National Model.ipynb | ###Markdown
Using multi-linear regressions to predict housing pricesThe workbook below creates a model based on a handful of key variables that can be used to predict housing prices. It explores different variables and different models. The result is reasonably successful but uses metrics that are delayed in reporting.Next steps are to leverage this model with metrics that are more current or forward looking. This would create a model with a view into the market today, as opposed to a month or two ago.Idea Source: https://python-bloggers.com/2021/01/predicting-home-price-trends-based-on-economic-factors-with-python/ Supply Side Factors That Affect Home PricesVarious factors that affect the supply of homes available for sale are discussed below: Months of SupplyMonths of supply is the basic measure of supply itself in the real estate market (not a factor as such). Houses for sale is another measure of the same.Months of Supply: https://fred.stlouisfed.org/graph/?g=zneA Monthly Homes For Sale and Homes Sold (SA): https://www.census.gov/construction/nrs/historical_data/index.html Differential Migration Across CitiesThe differential migration across cities can possibly be measured directly via change-of-address requests, but since that data is not readily available, the total number of residence moves can be used. What this, however, does not reflect is the change in pattern of movement. The move can be from rural or suburban to cities or the other way round, and both have a very different impact on the housing market. So, net domestic migration into or out of metropolises is a better measure of the differential migration, and hence that has been taken as a parameter along with the number of total movers.Data Source (quarterly): https://www.census.gov/data/tables/time-series/demo/geographic-mobility/historic.html'Interpolated Movers' and 'Interpolated Migration' NOT USED as not monthly UnemploymentUnemployment can also affect both demand and supply in the real estate industry. A high unemployment rate can mean that people simply do not have the money to spend on houses. It can also mean that there is lower investment in the industry and hence lower supply.Monthly UNEMP: https://fred.stlouisfed.org/series/UNRATE Mortgage RateMortgage rates are a huge factor that decides how well the real estate market will perform. It plugs into both the supply and demand sides of the equation. It affects the availability of financing options to buyers, as well as the ease of financing new constructions. It also affects the delinquency rate and the number of refinances for mortgages. People are more likely to default on a higher mortgage rate!Monthly Mortgage Rate: https://fred.stlouisfed.org/graph/?g=zneW Federal Funds RateAlthough the mortgage rate and the Federal Funds Rate are usually closely related, sometimes they may not be. Historically, there have been occasions when the Fed lowered the Fed Funds Rate, but the banks did not lower mortgage rates, or not in the same proportion. Moreover, the Federal Funds Rate influences multiple factors in the economy, beyond just the real estate market, many of which indirectly influence the real estate market. It is a key metric to change the way an economy is performing.Monthly Fed Funds Rate: https://fred.stlouisfed.org/series/DFF0 USA GDPThe GDP is a measure of the output of the economy overall, and the health of the economy. 
An economy that is doing well usually implies more investment and economic activity, and more buying.Data Sources:Monthly GDP Index: https://fred.stlouisfed.org/graph/?g=znfe Quarterly Real GDP (adjusted for inflation): https://fred.stlouisfed.org/series/GDPC10NOT USED as not monthly Building PermitsThe number of building permits allotted is a measure of not just the health of the real estate industry, but how free the real estate market is, effectively. It is an indicator of the extent of regulation/de-regulation of the market. It affects the supply through the ease of putting a new property on the market.Monthly Permits-Number and Permit-Valuation: https://www.census.gov/construction/bps/ Housing StartsThis is a measure of the number of units of new housing projects started in a given period. Sometimes it is also measured in valuation of housing projects started in a given period.Monthly Housing Starts: https://www.census.gov/construction/nrc/historical_data/index.htmlSeasonally Adjusted 1 unit Construction SpendingThe amount spent (in millions of USD, seasonally adjusted) is a measure of the activity in the construction industry, and an indicator of supply for future months. It can also be taken as a measure of confidence, since home builders will spend money in construction only if they expect the industry to do well in the future months.Monthly CONST: https://www.census.gov/construction/c30/c30index.htmlPrivate Residential, Seasonally Adjusted Demand Side Factors That Affect Home PricesDemand for housing, and specifically, home ownership, is affected by many factors, some of which are closely inter-related. Many of these factors also affect the supply in the housing market. Below are a few factors that are prominent in influencing the demand for home buying: Affordability: Wages & Disposable Personal IncomeThe “weekly earnings” are taken as a measure of the overall wages and earnings of all employed persons.The other measure is disposable personal income: how much of the earning is actually available to an individual for expenditure. This is an important measure as well, as it takes into account other factors like taxes etc.Data Sources:Median usual weekly nominal earnings, Wage and salary workers 25 years and over: https://fred.stlouisfed.org/series/LEU0252887700Q0Monthly Disposable Income: https://fred.stlouisfed.org/series/DSPIC960 Delinquency Rate on MortgagesThe delinquency rate on housing mortgages is an indicator of the number of foreclosures in real estate. This is an important factor in both demand and supply. A higher delinquency rate (higher than the credit card delinquency rate) in the last economic recession was a key indicator of the recession and the poorly performing industry and the economy as a whole. It also indicates how feasible it is for a homeowner to buy a house at a certain point of time and is an indicator of the overall demand in the industry.Data Source: https://fred.stlouisfed.org/series/DRSFRMACBS0NOT USED as not monthly Personal SavingsThe extent to which people are utilizing their personal income for savings matters in overall investments and capital availability, and the interest rate for loans (and not just the mortgage rate). It is also an indicator of how much the current population is inclined to spend their money, vs save it for future use. 
This is an indicator of the demand for home ownership as well.Monthly Savings: https://fred.stlouisfed.org/series/PMSAVE Behavioural Changes & Changes in PreferencesChanges in home ownership indicate a combination of factors including change in preferences and attitudes of people towards home buying. Change in cultural trends can only be captured by revealed preferences, and this metric can be taken as a revealed metric for propensity for home buying.The other metric to track changes in preferences is personal consumption expenditure. For example, if expenditure is increasing, but there is no such increase in homeownership, it would indicate a change in preferences towards home buying and ownership. Maybe people prefer to rent a home rather than buy one. Hence, both of these parameters are used.Data Sources:Home Ownership Rate (NOT USED as not monthly): http://bit.ly/homeownershiprateMonthly Consumption: https://fred.stlouisfed.org/series/PCE Building The ModelThe S&P Case-Shiller Housing Price Index is taken as the y variable, or dependent variable, as an indicator of change in prices. Monthly HPI: https://fred.stlouisfed.org/series/CSUSHPISA Data CleanupI have run the regression with fewer parameters, using only monthly data. Similar analysis has been done by others with more parameters reduced to quarterly, but it did not generate better results.It's important to note that not all variables will be relevant and contribute positively to the model. Some variables that are tested will be discarded for the end analysis.The data we are working with is time series data. So, all the time-date data in every variable must be converted to Python's datetime format, for it to be read as time-date data and processed as such when up-sampling or down-sampling. This will also help with merging different series together, using the datetime columns.The regression itself does not run on time-series data, so the datetime columns are removed in the final data for the regression. Multiple Linear Regression for PredictionWe'll first run a multi-linear analysis and then compare it to some other models to verify it is a decent fit.
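The monthly CSV files loaded below were prepared ahead of time; as a rough sketch of the kind of date handling described above (the file and column names here are placeholders, not the project's actual inputs):

```python
import pandas as pd

# Placeholder file/column names, for illustration only
quarterly = pd.read_csv('some_quarterly_series.csv', parse_dates=['Period'], dayfirst=True)

# Up-sample a quarterly series to monthly frequency by forward-filling within each quarter
monthly = (quarterly.set_index('Period')
                    .resample('MS')   # month-start frequency
                    .ffill()
                    .reset_index())

# Merge monthly series on their shared datetime column
# combined = monthly.merge(another_monthly_frame, on='Period', how='inner')
```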
###Code
# load modules
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.linear_model import SGDRegressor
from sklearn import metrics
import matplotlib.pyplot as plt
import seaborn as sb
# load data
date_col1 = ['Period']
date_col2 = ['Quarter Starting']
date_col3 = ['Year']
date_col4 = ['Quarter']
x_monthly = pd.read_csv('x_monthly.csv', parse_dates=date_col1, dayfirst=True) #Monthly Data
y_monthly = pd.read_csv('y.csv', parse_dates=date_col1, dayfirst=True) #Monthly y Data
print(x_monthly.dtypes, x_monthly.shape)
print(y_monthly.dtypes, y_monthly.shape)
###Output
Period datetime64[ns]
UNEMP float64
Months of Supply float64
Homes for Sale int64
Homes for Sale NE int64
Mortgage Rate float64
Permits-Valuation float64
Permits-Valuation NE float64
Permits-Number float64
Permits-Number NE float64
Housing Starts int64
Housing Starts NE int64
CONST int64
Consumption float64
Disposable Income float64
Savings float64
Fed Funds Rate float64
Homes Sold int64
Homes Sold NE int64
GDP Monthly Index float64
dtype: object (348, 20)
Period datetime64[ns]
HPI float64
dtype: object (348, 2)
###Markdown
Variables with the NE suffix are versions for the Northeastern US and are not used in the national analysis
###Code
# run initial regression
x = x_monthly[['UNEMP','Months of Supply','Homes for Sale','CONST','Consumption',
'Disposable Income', ]] # removing date column for partial vairable list
"""
x = x_monthly[['UNEMP', 'Months of Supply','Homes for Sale', 'Mortgage Rate',
'Permits-Number', 'Permits-Valuation', 'CONST','Housing Starts', 'Consumption',
'Disposable Income', 'Fed Funds Rate', 'Savings', 'Homes Sold',
'GDP Monthly Index']] # includes non-relavant variables"""
y = y_monthly['HPI'] #Removing date column
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.35, shuffle=False, stratify=None)
reg = LinearRegression()
reg.fit(x_train, y_train)
y_predict = reg.predict(x_test)
print(reg.intercept_)
print('R^2 Value of Train:', reg.score(x_train, y_train))
print('R^2 Value of Test:', reg.score(x_test, y_test))
print('Mean Absolute Error:', metrics.mean_absolute_error(y_test, y_predict))
print('Mean Squared Error:', metrics.mean_squared_error(y_test, y_predict))
print('Root Mean Squared Error:', np.sqrt(metrics.mean_squared_error(y_test, y_predict)))
print('HPI Stats:')
print(y_test.describe())
###Output
-52.44072010133786
R^2 Value of Train: 0.9943744239817508
R^2 Value of Test: 0.985189225239867
Mean Absolute Error: 3.239201522536801
Mean Squared Error: 17.36855882919804
Root Mean Squared Error: 4.167560297008076
HPI Stats:
count 122.000000
mean 189.025828
std 34.385887
min 136.530000
25% 163.766000
50% 185.209000
75% 208.930750
max 279.801000
Name: HPI, dtype: float64
###Markdown
The R2 value above .95 is normally considered a good value, so we are in good shape there.We want the Root Mean Squared Error to be less than 10% of the mean value of HPI, and much less than the standard deviation of the HPI in the test data. This indicates that the model is fairly accurate in predicting values and, again, we are good here.Variable Notes: 0 UNEMP increases accuracy a good bit 1 Months of Supply does little 2 Homes for Sale helps a good bit 3 Mortgage Rate actually hurts, but just a little - DON'T INCLUDE 4 Permits-Number hurts a little - DON'T INCLUDE 5 Permits-Valuation hurts a little - DON'T INCLUDE 6 CONST helps a good bit 7 Housing Starts hurts a little - DON'T INCLUDE 8 Consumption helps a good bit 9 Disposable Income helps some 10 Fed Funds Rate helps a little 11 Savings brings accuracy down a good bit - DON'T INCLUDE 12 Homes Sold brought accuracy down and increased error - DON'T INCLUDE 13 GDP Index brings both down - DON'T INCLUDE
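One way to make this manual screening more systematic is to score candidate feature subsets with cross-validated R². The sketch below reuses the frames already loaded; the candidate list, subset sizes and the use of `TimeSeriesSplit` are choices of this example rather than part of the original analysis:

```python
from itertools import combinations
from sklearn.model_selection import cross_val_score, TimeSeriesSplit

# Score each candidate feature subset with time-series-aware cross-validation
candidate_features = ['UNEMP', 'Months of Supply', 'Homes for Sale', 'CONST',
                      'Consumption', 'Disposable Income', 'Fed Funds Rate']
tscv = TimeSeriesSplit(n_splits=5)
results = []
for k in range(3, len(candidate_features) + 1):
    for subset in combinations(candidate_features, k):
        score = cross_val_score(LinearRegression(), x_monthly[list(subset)], y,
                                cv=tscv, scoring='r2').mean()
        results.append((round(score, 4), subset))
print(sorted(results, reverse=True)[:5])  # best-scoring subsets
```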
###Code
# get importance of features
importance = reg.coef_
# summarize feature importance
for i,v in enumerate(importance):
print('Feature: %0d, Score: %.5f' % (i,v))
# lets plot predictions vs actuals to see how the spread looks
plt.figure(figsize=(8,8))
sb.regplot(x=y_test, y=y_predict, ci=None, color="Blue")
plt.xlabel("Actual HPI")
plt.ylabel('Predicted HPI')
plt.title("Actual vs Predicted HPI: Monthly Data")
###Output
_____no_output_____
###Markdown
**Graph shows that model is less accurate the higher the price is, and it undervalues higher prices. This is where we have been in late 2021, suggesting market is overvalued, or that certain variables are not being taken into account.**
###Code
# predict for all periods and compare on timeline to HPI
x_whole = x
y_predict_whole = reg.predict(x_whole)
y_monthly['y_predict_whole'] = y_predict_whole
y_monthly
plt.figure(figsize=(15,7))
plt.plot(y_monthly['Period'], y_monthly['HPI'], label = "HPI",linestyle=":")
plt.plot(y_monthly['Period'], y_monthly['y_predict_whole'], label = "Predicted",linestyle="-")
plt.title("Actual vs Predicted HPI: Monthly Data")
plt.legend()
plt.show()
# Trying different regression model
# Lasso
from sklearn.linear_model import Lasso
from sklearn.metrics import r2_score
alpha = 0.1
lasso = Lasso(alpha=alpha, max_iter=2000)
y_pred_lasso = lasso.fit(x_train, y_train).predict(x_test)
r2_score_lasso = r2_score(y_test, y_pred_lasso)
print(lasso)
print("r^2 on test data : %f" % r2_score_lasso)
plt.figure(figsize=(8,8))
sb.regplot(x=y_test, y=y_pred_lasso, ci=None, color="Blue")
plt.xlabel("Actual HPI")
plt.ylabel('Predicted HPI')
plt.title("Actual vs Predicted HPI: Monthly Data")
# get importance of features
importance = lasso.coef_
# summarize feature importance
for i,v in enumerate(importance):
print('Feature: %0d, Score: %.5f' % (i,v))
print('##############################################')
# ElasticNet
from sklearn.linear_model import ElasticNet
enet = ElasticNet(alpha=alpha, l1_ratio=0.7, max_iter=2000)
y_pred_enet = enet.fit(x_train, y_train).predict(x_test)
r2_score_enet = r2_score(y_test, y_pred_enet)
print(enet)
print("r^2 on test data : %f" % r2_score_enet)
plt.figure(figsize=(8,8))
sb.regplot(x=y_test, y=y_pred_enet, ci=None, color="Blue")
plt.xlabel("Actual HPI")
plt.ylabel('Predicted HPI')
plt.title("Actual vs Predicted HPI: Monthly Data")
# get importance of features
importance = enet.coef_
# summarize feature importance
for i,v in enumerate(importance):
print('Feature: %0d, Score: %.5f' % (i,v))
###Output
Lasso(alpha=0.1, max_iter=2000)
r^2 on test data : 0.985706
Feature: 0, Score: 2.87260
Feature: 1, Score: 0.12847
Feature: 2, Score: 0.10782
Feature: 3, Score: 0.00008
Feature: 4, Score: 0.01137
Feature: 5, Score: 0.00057
##############################################
ElasticNet(alpha=0.1, l1_ratio=0.7, max_iter=2000)
r^2 on test data : 0.986077
Feature: 0, Score: 2.78929
Feature: 1, Score: 0.20772
Feature: 2, Score: 0.10533
Feature: 3, Score: 0.00008
Feature: 4, Score: 0.01147
Feature: 5, Score: 0.00040
|
day2/08. matplotlib - introduction.ipynb | ###Markdown
Visualization with Matplotlib General Matplotlib TipsBefore we dive into the details of creating visualizations with Matplotlib, there are a few useful things you should know about using the package. Just as we use the ``np`` shorthand for NumPy and the ``pd`` shorthand for Pandas, we will use some standard shorthands for Matplotlib imports:
###Code
import matplotlib as mpl
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
The ``plt`` interface is what we will use most often, as we shall see throughout this chapter. ``show()`` or No ``show()``? How to Display Your Plots A visualization you can't see won't be of much use, but just how you view your Matplotlib plots depends on the context.The best use of Matplotlib differs depending on how you are using it; roughly, the three applicable contexts are using Matplotlib in a script, in an IPython terminal, or in an IPython notebook. Plotting from a scriptIf you are using Matplotlib from within a script, the function ``plt.show()`` is your friend.``plt.show()`` starts an event loop, looks for all currently active figure objects, and opens one or more interactive windows that display your figure or figures.So, for example, you may have a file called *myplot.py* containing the following:```python ------- file: myplot.py ------import matplotlib.pyplot as pltimport numpy as npx = np.linspace(0, 10, 100)plt.plot(x, np.sin(x))plt.plot(x, np.cos(x))plt.show()```You can then run this script from the command-line prompt, which will result in a window opening with your figure displayed:```$ python myplot.py```The ``plt.show()`` command does a lot under the hood, as it must interact with your system's interactive graphical backend.The details of this operation can vary greatly from system to system and even installation to installation, but matplotlib does its best to hide all these details from you.One thing to be aware of: the ``plt.show()`` command should be used *only once* per Python session, and is most often seen at the very end of the script.Multiple ``show()`` commands can lead to unpredictable backend-dependent behavior, and should mostly be avoided. Plotting from an IPython notebookPlotting interactively within a Jupyter notebook can be done with the ``%matplotlib`` command, and works in a similar way to the IPython shell.In the IPython notebook, you also have the option of embedding graphics directly in the notebook, with two possible options:- ``%matplotlib notebook`` will lead to *interactive* plots embedded within the notebook- ``%matplotlib inline`` will lead to *static* images of your plot embedded in the notebookFor this book, we will generally opt for ``%matplotlib inline``:
###Code
%matplotlib inline
###Output
_____no_output_____
###Markdown
After running this command (it needs to be done only once per kernel/session), any cell within the notebook that creates a plot will embed a PNG image of the resulting graphic:
###Code
import numpy as np
x = np.linspace(0, 10, 100)
fig = plt.figure()
plt.plot(x, np.sin(x), '-')
plt.plot(x, np.cos(x), '--');
###Output
_____no_output_____
###Markdown
Saving Figures to FileOne nice feature of Matplotlib is the ability to save figures in a wide variety of formats.Saving a figure can be done using the ``savefig()`` command.For example, to save the previous figure as a PNG file, you can run this:
###Code
fig.savefig('my_figure.png')
###Output
_____no_output_____
###Markdown
We now have a file called ``my_figure.png`` in the current working directory:
###Code
!ls -lh my_figure.png
###Output
_____no_output_____
###Markdown
To confirm that it contains what we think it contains, let's use the IPython ``Image`` object to display the contents of this file:
###Code
from IPython.display import Image
Image('my_figure.png')
###Output
_____no_output_____
###Markdown
In ``savefig()``, the file format is inferred from the extension of the given filename.Depending on what backends you have installed, many different file formats are available.The list of supported file types can be found for your system by using the following method of the figure canvas object:
###Code
fig.canvas.get_supported_filetypes()
###Output
_____no_output_____
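###Markdown
Because the output format is inferred from the filename extension, the same figure can be written to other formats just by changing the suffix. A quick sketch (it assumes the usual PDF and SVG backends are available in your Matplotlib install):
###Code
# Sketch: save the same figure as vector graphics.
fig.savefig('my_figure.pdf')
fig.savefig('my_figure.svg')
###Output
_____no_output_____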
###Markdown
Note that when saving your figure, it's not necessary to use ``plt.show()`` or related commands discussed earlier. Two Interfaces for the Price of OneA potentially confusing feature of Matplotlib is its dual interfaces: a convenient MATLAB-style state-based interface, and a more powerful object-oriented interface. We'll quickly highlight the differences between the two here. MATLAB-style InterfaceMatplotlib was originally written as a Python alternative for MATLAB users, and much of its syntax reflects that fact.The MATLAB-style tools are contained in the pyplot (``plt``) interface.For example, the following code will probably look quite familiar to MATLAB users:
###Code
plt.figure() # create a plot figure
# create the first of two panels and set current axis
plt.subplot(2, 1, 1) # (rows, columns, panel number)
plt.plot(x, np.sin(x))
# create the second panel and set current axis
plt.subplot(2, 1, 2)
plt.plot(x, np.cos(x));
###Output
_____no_output_____
###Markdown
It is important to note that this interface is *stateful*: it keeps track of the "current" figure and axes, which are where all ``plt`` commands are applied.You can get a reference to these using the ``plt.gcf()`` (get current figure) and ``plt.gca()`` (get current axes) routines.While this stateful interface is fast and convenient for simple plots, it is easy to run into problems.For example, once the second panel is created, how can we go back and add something to the first?This is possible within the MATLAB-style interface, but a bit clunky.Fortunately, there is a better way. Object-oriented interfaceThe object-oriented interface is available for these more complicated situations, and for when you want more control over your figure.Rather than depending on some notion of an "active" figure or axes, in the object-oriented interface the plotting functions are *methods* of explicit ``Figure`` and ``Axes`` objects.To re-create the previous plot using this style of plotting, you might do the following:
###Code
# First create a grid of plots
# ax will be an array of two Axes objects
fig, ax = plt.subplots(2)
# Call plot() method on the appropriate object
ax[0].plot(x, np.sin(x))
ax[1].plot(x, np.cos(x));
###Output
_____no_output_____ |
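###Markdown
Coming back to the question raised above, here is a hedged sketch of the "clunky" MATLAB-style way to revisit the first panel after the second one exists: calling ``plt.subplot`` again with the same arguments makes that panel current, so ``plt.gca()`` and subsequent ``plt`` commands act on it.
###Code
# Sketch: revisiting the first panel with the stateful interface.
plt.figure()
plt.subplot(2, 1, 1)
plt.plot(x, np.sin(x))
plt.subplot(2, 1, 2)
plt.plot(x, np.cos(x))
plt.subplot(2, 1, 1)              # make the first panel "current" again
plt.gca().set_title('sin(x)');    # this title lands on the first panel
###Output
_____no_output_____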
predict_fx_trade.ipynb | ###Markdown
Can deep learning beat FX on price action alone? Overview: Articles that use deep learning to predict the very next price from past price movements are easy to find, but being able to predict a momentary price is not enough to win at trading. What we really need to know is how the price will move over a specific period and how far it will ultimately rise (or fall). So how long is that "specific period"? It depends on the trader's temperament, trading style, strategy and so on, so there is currently no clear answer. Ideally we would also learn the holding period that maximizes profit, but that makes the computation explode and it is not even clear a true optimum exists, so that discussion is out of scope here. Instead, this notebook adopts the following trading strategy: - Use the 1-minute chart (longer bars seemed more exposed to macroeconomic effects) - Ignore the spread - When a buy is entered, compute the ATR over the previous 20 periods (20 minutes) - Call the computed ATR value `atr1` - Call the price at entry `price1` - Set the take-profit price to `price1 + 5*atr1` and the stop-loss price to `price1 - 3*atr1` - From then on, close the position minute by minute as follows: - Take profit once the price reaches the take-profit level - Cut the loss once the price falls to the stop-loss level - If the price reaches `price1 + atr1`, shift the take-profit and stop-loss levels upward (trailing stop) - New take-profit price: `close of that 1-minute bar + 5*atr1` - New stop-loss price: `close of that 1-minute bar - 3*atr1` Given this strategy, the task is to predict at which moments entering a buy would win. Chart data - Chart data was exported with TickStory - Currency pair: USD/JPY - The past 5 years of 1-minute bars are used for training. Environment - CPU: Intel(R) Core(TM) i7-7700 CPU @ 3.60GHz - Memory: 16GB DDR2 - GPU: NVIDIA GeForce GTX 1070 - Python 3.5 - Keras (TensorFlow backend) as the deep learning library. Loading the chart data
###Code
import pandas as pd
import talib
# Load the dataframe
df = pd.read_csv('USDJPY_1M_Pretty.csv', index_col='Date')
df.index = pd.to_datetime(df.index)
# Compute the 20-period ATR
df['ATR'] = talib.ATR(df['High'].values, df['Low'].values, df['Close'].values, timeperiod=20)
del df['Volume']
df = df.dropna(how='any') # drop rows containing NaN
df.head()
###Output
_____no_output_____
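###Markdown
For readers without TA-Lib, a hedged sketch of what `talib.ATR` computes: the true range of a bar is the largest of (high - low), |high - previous close| and |low - previous close|, and the ATR is a smoothed average of that true range over 20 periods. TA-Lib uses Wilder's smoothing rather than a plain rolling mean, so the pandas-only version below is only a rough cross-check, not a drop-in replacement.
###Code
# Rough pandas-only cross-check of the 20-period ATR (sketch; TA-Lib applies Wilder smoothing).
prev_close = df['Close'].shift(1)
true_range = pd.concat([df['High'] - df['Low'],
                        (df['High'] - prev_close).abs(),
                        (df['Low'] - prev_close).abs()], axis=1).max(axis=1)
atr_simple = true_range.rolling(20).mean()
atr_simple.tail()
###Output
_____no_output_____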
###Markdown
Compute the results of trading under this strategy. Following the strategy defined in the overview, we compute the outcome of entering a buy at every single minute. For good measure, the outcome of entering a sell at every minute is computed in the same way.
###Code
import numpy as np
from numba import jit
import time
# TrailingStop
@jit
def buy_trail(data, index, price, atr):
# TrailingStop: TP=ATR*5, SL=3*ATR, TS=ATR
tp, sl, ts = price + 5 * atr, price - 3 * atr, price + atr
    for i in range(index, len(data) - 1):  # stop one bar early: the body reads data[i + 1]
        row = data[i + 1]
if row[2] <= tp <= row[1]: return tp
elif row[2] <= sl <= row[1]: return sl
elif row[2] <= ts <= row[1]: # Move TakeProfit & StopLoss
tp, sl, ts = row[3] + 5 * atr, row[3] - 3 * atr, row[3] + atr
return data[-1][3]
@jit
def sell_trail(data, index, price, atr):
# TrailingStop: TP=ATR*5, SL=3*ATR, TS=ATR
tp, sl, ts = price - 5 * atr, price + 3 * atr, price - atr
    for i in range(index, len(data) - 1):  # stop one bar early: the body reads data[i + 1]
        row = data[i + 1]
if row[2] <= tp <= row[1]: return tp
elif row[2] <= sl <= row[1]: return sl
elif row[2] <= ts <= row[1]: # Move TakeProfit & StopLoss
tp, sl, ts = row[3] - 5 * atr, row[3] + 3 * atr, row[3] - atr
return data[-1][3]
# Buy & TrailingStop
@jit
def buy(data):
result = []
for i in range(len(data)):
        # assume a buy is entered at bar i
        row = data[i]
        result.append(buy_trail(data, i, row[3], row[4]) - row[3]) # store the P&L of that buy
return result
# Sell & TrailingStop
@jit
def sell(data):
result = []
for i in range(len(data)):
        # assume a sell is entered at bar i
        row = data[i]
        result.append(sell_trail(data, i, row[3], row[4]) - row[3]) # store the P&L of that sell
return result
start = time.time()
buy_profit = np.array(buy(df.values))
print("Elapsed Time: {0} sec\nBuyProfits: {1}".format(time.time() - start, buy_profit))
start = time.time()
sell_profit = np.array(sell(df.values))
print("Elapsed Time: {0} sec\nSellProfits: {1}".format(time.time() - start, sell_profit))
# dump the numpy results to CSV
val = df.values
size = val.shape[0]
data = np.zeros((size, 7))
for i in range(size):
for x in range(5): data[i][x] = val[i][x]
data[i][5] = buy_profit[i]
data[i][6] = sell_profit[i]
csv = pd.DataFrame(index=df.index[:size], columns=['Open', 'High', 'Low', 'Close', 'ATR', 'Buy', 'Sell'], data=data)
csv.to_csv("USDJPY_TrailingStop.csv")
csv.head()
###Output
_____no_output_____
###Markdown
Setting the target variable. At first I assumed the target could simply be the trade's profit or loss (how much was won or lost), but the model failed to learn anything useful that way. The trade results are therefore classified into the following four categories: 1. Big win (profit of `5*atr1` or more) 2. Win (profit above `0`) 3. Loss (profit below `0`) 4. Big loss (profit of `-3*atr1` or less)
###Code
import numpy as np
# Categorize the P&L into 4 classes (2: > 5*ATR, 1: > 0, -1: < 0, -2: < -3*ATR)
df['Buy_cat'] = np.where(5 * df['ATR'] < df['Buy'], 2, np.where(0 < df['Buy'], 1, np.where(df['Buy'] < -3 * df['ATR'], -2, -1)))
df['Sell_cat'] = np.where(5 * df['ATR'] < df['Sell'], 2, np.where(0 < df['Sell'], 1, np.where(df['Sell'] < -3 * df['ATR'], -2, -1)))
df.to_csv('USDJPY_TrailingStop2.csv')
df.head()
###Output
_____no_output_____
###Markdown
Aside: confirming that the market is mostly range-bound. Incidentally, when the trade results are instead categorized as below and their distribution is plotted, it becomes clear that the market spends most of its time in a range: 1. Large profit 2. Small profit 3. Break-even 4. Small loss 5. Large loss
###Code
''' Categorize the P&L results '''
# 5 classes (0: large loss, 1: small loss, 2: break-even, 3: small profit, 4: large profit)
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
Ybuy = csv['Buy'].values.reshape(-1, 1)
scaler.fit(Ybuy)
Ybuy = scaler.transform(Ybuy)
Ybuy = np.floor(Ybuy * 5) # bin into 5 classes
Ybuy = np.where(Ybuy == 5., 4., Ybuy) # the maximum maps to 5, fold it back into class 4
Ybuy
import matplotlib.pyplot as plt
%matplotlib inline
Ynum = [len(Ybuy[Ybuy == i]) for i in range(5)]
plt.bar(range(5), Ynum)
###Output
_____no_output_____
###Markdown
Choosing features and a model. As features we simply use the past 60 periods (one hour) of price movement. As the model we choose a CNN, the architecture commonly used for image recognition, on the idea that a window of price movement might be learnable as a pattern, much like an image. For reference, an LSTM model, which is often used for numeric forecasting, was also trained, but in this case it performed about the same as the CNN.
###Code
''' Train a CNN model '''
import pandas as pd
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from numba import jit
from keras.utils.np_utils import to_categorical
# Load the dataframe
df = pd.read_csv('USDJPY_TrailingStop2.csv', index_col='Date')
df.index = pd.to_datetime(df.index)
del df['ATR'], df['Open'], df['High'], df['Low'], df['Buy'], df['Sell'], df['Market']
# Features and targets
x = df.loc[:, 'Close']
y_buy = df.loc[:, 'Buy_cat']
y_sell = df.loc[:, 'Sell_cat']
# Split into training and test sets
x_train, x_test = x[x.index < '2018'], x[x.index >= '2018']
y_buy_train, y_buy_test = y_buy[y_buy.index < '2018'], y_buy[y_buy.index >= '2018']
y_sell_train, y_sell_test = y_sell[y_sell.index < '2018'], y_sell[y_sell.index >= '2018']
# Features: the last 60 Close values; target: the 4-class trailing-stop outcome
@jit
def gen_xy(x, y_buy, y_sell, window_len=60):
X, Ybuy, Ysell = [], [], []
for i in range(len(x) - window_len):
X.append(x[i : i + window_len].copy())
Ybuy.append(y_buy[i + window_len - 1])
Ysell.append(y_sell[i + window_len - 1])
X, Ybuy, Ysell = np.array(X), np.array(Ybuy), np.array(Ysell)
    # Normalize the features
    scaler = MinMaxScaler()
    scaler.fit(X) # fit to the min/max of prices over the window_len span
    X = scaler.transform(X)
    # Encode the targets (shift the categories to 0..3, then one-hot)
Ybuy = np.where(Ybuy > 0, Ybuy + 1, Ybuy + 2)
Ybuy = to_categorical(Ybuy.astype('int32'))
Ysell = np.where(Ysell > 0, Ysell + 1, Ysell + 2)
Ysell = to_categorical(Ysell.astype('int32'))
return X, Ybuy, Ysell
Xtrain, Ytrain_buy, Ytrain_sell = gen_xy(x_train.values, y_buy_train.values, y_sell_train.values)
x_train.values.shape, x_train.values, Xtrain.shape, Xtrain, y_buy_train.values.shape, y_buy_train.values, Ytrain_buy.shape, Ytrain_buy
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Conv1D
from keras.layers import MaxPooling1D
from keras.layers import Flatten
from keras.layers import Dropout
from keras.callbacks import EarlyStopping
from keras.callbacks import ModelCheckpoint
''' Train the buy-only CNN model '''
model = Sequential()
model.add(Conv1D(32, 3, activation='relu', padding='valid', input_shape=(60, 1)))
model.add(MaxPooling1D(pool_size=2))
model.add(Conv1D(64, 3, activation='relu', padding='valid'))
model.add(MaxPooling1D(pool_size=2))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(250, activation='relu'))
model.add(Dropout(0.25))
model.add(Dense(4, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['acc'])
early_stopping = EarlyStopping(monitor='val_acc', mode='auto', patience=8)
model_checkpoint = ModelCheckpoint(filepath="BuyerCNN.h5")
# Run training
model.fit(Xtrain.reshape(Xtrain.shape[0], 60, 1), Ytrain_buy,
          batch_size=2048, # with this much training data, the mini-batch size needs to be large to avoid overflow
epochs=256,
shuffle=True,
          validation_split=0.1, # use 10% of the training data for validation
callbacks=[early_stopping, model_checkpoint]
)
Xtest, Ytest_buy, Ytest_sell = gen_xy(x_test.values, y_buy_test.values, y_sell_test.values)
model.evaluate(Xtest.reshape(Xtest.shape[0], 60, 1), Ytest_buy)
###Output
285854/285854 [==============================] - 12s 40us/step
###Markdown
The result above shows that, when trade outcomes are split into four categories (big win / win / loss / big loss), the model predicts the category with roughly 48% accuracy. Guessing one of four classes at random would give 25%, so this is about twice as good, but personally (and anecdotally) I do not believe this level of accuracy is enough to win at FX. Below, as a bonus, the model is also trained with the outcomes reduced to just two categories: win or lose.
###Code
''' Train again with a binary win/lose target '''
@jit
def gen_xy(x, y_buy, y_sell, window_len=60):
X, Ybuy, Ysell = [], [], []
for i in range(len(x) - window_len):
X.append(x[i : i + window_len].copy())
Ybuy.append(y_buy[i + window_len - 1])
Ysell.append(y_sell[i + window_len - 1])
X, Ybuy, Ysell = np.array(X), np.array(Ybuy), np.array(Ysell)
    # Normalize the features
    scaler = MinMaxScaler()
    scaler.fit(X) # fit to the min/max of prices over the window_len span
    X = scaler.transform(X)
    # Encode the targets (1 = winning trade, 0 = losing trade, then one-hot)
Ybuy = np.where(Ybuy > 0, 1, 0)
Ybuy = to_categorical(Ybuy.astype('int32'))
Ysell = np.where(Ysell > 0, 1, 0)
Ysell = to_categorical(Ysell.astype('int32'))
return X, Ybuy, Ysell
Xtrain, Ytrain_buy, Ytrain_sell = gen_xy(x_train.values, y_buy_train.values, y_sell_train.values)
model = Sequential()
model.add(Conv1D(32, 3, activation='relu', padding='valid', input_shape=(60, 1)))
model.add(MaxPooling1D(pool_size=2))
model.add(Conv1D(64, 3, activation='relu', padding='valid'))
model.add(MaxPooling1D(pool_size=2))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(250, activation='relu'))
model.add(Dropout(0.25))
model.add(Dense(2, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['acc'])
early_stopping = EarlyStopping(monitor='val_acc', mode='auto', patience=8)
model_checkpoint = ModelCheckpoint(filepath="Buyer2.h5")
# Run training
model.fit(Xtrain.reshape(Xtrain.shape[0], 60, 1), Ytrain_buy,
batch_size=2048,
epochs=256,
shuffle=True,
validation_split=0.1,
callbacks=[early_stopping, model_checkpoint]
)
Xtest, Ytest_buy, Ytest_sell = gen_xy(x_test.values, y_buy_test.values, y_sell_test.values)
model.evaluate(Xtest.reshape(Xtest.shape[0], 60, 1), Ytest_buy)
###Output
285854/285854 [==============================] - 12s 41us/step
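###Markdown
For the binary model, the raw accuracy is only meaningful relative to the class balance, so a quick majority-class baseline check helps put the score in context. A minimal sketch:
###Code
# Sketch: majority-class baseline for the binary win/lose target.
true = Ytest_buy.argmax(axis=1)          # 1 = winning buy, 0 = losing buy
majority = max(true.mean(), 1 - true.mean())
print('majority-class baseline accuracy:', majority)
###Output
_____no_output_____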
|
Spark/08 Using Null.ipynb | ###Markdown
Using Null

- teacher

id | dept | name | phone | mobile
----|-------|-------|-------|-----
101 | 1 | Shrivell | 2753 | 07986 555 1234
102 | 1 | Throd | 2754 | 07122 555 1920
103 | 1 | Splint | 2293 |
104 | | Spiregrain | 3287 |
105 | 2 | Cutflower | 3212 | 07996 555 6574
106 | | Deadyawn | 3345 |
... | | | |

- dept

id | name
----|----
1 | Computing
2 | Design
3 | Engineering
... |

Teachers and Departments

The school includes many departments. Most teachers work exclusively for a single department. Some teachers have no department.

[Selecting NULL values](https://sqlzoo.net/wiki/Selecting_NULL_values).
###Code
import os
import pandas as pd
import findspark
os.environ['SPARK_HOME'] = '/opt/spark'
findspark.init()
from pyspark.sql import SparkSession
sc = (SparkSession.builder.appName('app08')
.config('spark.sql.warehouse.dir', 'hdfs://quickstart.cloudera:8020/user/hive/warehouse')
.config('hive.metastore.uris', 'thrift://quickstart.cloudera:9083')
.enableHiveSupport().getOrCreate())
###Output
_____no_output_____
###Markdown
1. NULL, INNER JOIN, LEFT JOIN, RIGHT JOINList the teachers who have NULL for their department.> _Why we cannot use =_ > You might think that the phrase dept=NULL would work here but it doesn't - you can use the phrase dept IS NULL> > _That's not a proper explanation._ > No it's not, but you can read a better explanation at Wikipedia:NULL.
###Code
teacher = sc.read.table('sqlzoo.teacher')
dept = sc.read.table('sqlzoo.dept')
from pyspark.sql.functions import *
teacher.filter(isnull(teacher['dept'])).select('name').toPandas()
###Output
_____no_output_____
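###Markdown
Since the exercise is originally phrased in SQL, the same query can also be run directly against the Hive table through the SparkSession. A hedged sketch (it assumes the `sqlzoo.teacher` table is reachable exactly as it is read above):
###Code
# Sketch: raw-SQL equivalent of the DataFrame query above.
sc.sql("SELECT name FROM sqlzoo.teacher WHERE dept IS NULL").toPandas()
###Output
_____no_output_____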
###Markdown
2.Note the INNER JOIN misses the teachers with no department and the departments with no teacher.
###Code
(teacher.withColumnRenamed('name', 'teacher')
.join(dept, teacher['dept']==dept['id'])
.select('teacher', 'name')
.toPandas())
###Output
_____no_output_____
###Markdown
3.Use a different JOIN so that all teachers are listed.
###Code
(teacher.withColumnRenamed('name', 'teacher')
.join(dept, teacher['dept']==dept['id'], how='left')
.select('teacher', 'name')
.toPandas())
###Output
_____no_output_____
###Markdown
4.Use a different JOIN so that all departments are listed.
###Code
(teacher.withColumnRenamed('name', 'teacher')
.join(dept, teacher['dept']==dept['id'], how='right')
.select('teacher', 'name')
.toPandas())
###Output
_____no_output_____
###Markdown
5. Using the [COALESCE](https://sqlzoo.net/wiki/COALESCE) functionUse COALESCE to print the mobile number. Use the number '07986 444 2266' if there is no number given. **Show teacher name and mobile number or '07986 444 2266'**
###Code
teacher.select('name', 'mobile').fillna({'mobile': '07986 444 2266'}).toPandas()
###Output
_____no_output_____
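###Markdown
The answer above uses `fillna`, but PySpark also exposes a `coalesce` column function that mirrors the SQL COALESCE named in the exercise. A minimal sketch using the functions already imported from `pyspark.sql.functions`:
###Code
# Sketch: same result with coalesce() instead of fillna().
teacher.select('name', coalesce(teacher['mobile'], lit('07986 444 2266')).alias('mobile')).toPandas()
###Output
_____no_output_____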
###Markdown
6.Use the COALESCE function and a LEFT JOIN to print the teacher name and department name. Use the string 'None' where there is no department.
###Code
(teacher.withColumnRenamed('name', 'teacher')
.join(dept, teacher['dept']==dept['id'], how='left')
.select('teacher', 'name')
.fillna({'name': 'None'})
.toPandas())
###Output
_____no_output_____
###Markdown
7.Use COUNT to show the number of teachers and the number of mobile phones.
###Code
teacher.agg({'name': 'count', 'mobile': 'count'}).toPandas()
###Output
_____no_output_____
###Markdown
8.Use COUNT and GROUP BY **dept.name** to show each department and the number of staff. Use a RIGHT JOIN to ensure that the Engineering department is listed.
###Code
(teacher.withColumnRenamed('name', 'teacher')
.join(dept, teacher['dept']==dept['id'], how='right')
.groupBy('name')
.agg({'teacher': 'count'})
.toPandas())
###Output
_____no_output_____
###Markdown
9. Using [CASE](https://sqlzoo.net/wiki/CASE)Use CASE to show the **name** of each teacher followed by 'Sci' if the teacher is in **dept** 1 or 2 and 'Art' otherwise.
###Code
(teacher.select('name', 'dept', when(teacher['dept'].isin([1, 2]), 'Sci')
.otherwise('Art').alias('label'))
.toPandas())
###Output
_____no_output_____
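###Markdown
The `when`/`otherwise` chain above is the DataFrame spelling of SQL CASE; the literal CASE expression can also be passed through `expr`, which reads closer to the original exercise. A minimal sketch:
###Code
# Sketch: the same labelling written as a SQL CASE expression.
teacher.select('name', 'dept',
               expr("CASE WHEN dept IN (1, 2) THEN 'Sci' ELSE 'Art' END").alias('label')).toPandas()
###Output
_____no_output_____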
###Markdown
10.Use CASE to show the name of each teacher followed by 'Sci' if the teacher is in dept 1 or 2, show 'Art' if the teacher's dept is 3 and 'None' otherwise.
###Code
(teacher.select('name', 'dept',
when(teacher['dept'].isin([1, 2]), 'Sci')
.when(teacher['dept'].isin([3, ]), 'Art')
.otherwise('None').alias('label'))
.toPandas())
sc.stop()
###Output
_____no_output_____ |
site/_build/jupyter_execute/assignments/midterm/hm.ipynb | ###Markdown
Midterm - CorrectionWe are going to go over the Midterm in class Thursday. To prepare, please try the grading code by adding it to the bottom of your notebook. Try to understand and correct what you missed so you can pay extra attention when we go over that part. Feel free to post questions to Webex Teams. You are also allowed to ask questions of others for the homework. However, all the work has to be yours.
###Code
files = "https://github.com/rpi-techfundamentals/introml_website_fall_2020/raw/master/files/midterm.zip"
!pip install otter-grader && wget $files && unzip -o midterm.zip
#This runs all tests.
import otter
grader = otter.Notebook()
grader.check_all()
###Output
_____no_output_____ |
BigGAN_Generation_with_TF_Hub.ipynb | ###Markdown
Copyright 2018 The TensorFlow Hub Authors.Licensed under the Apache License, Version 2.0 (the "License");
###Code
# Copyright 2018 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
###Output
_____no_output_____
###Markdown
BigGAN DemoThis notebook is a demo for the *BigGAN* generators available on [TF Hub](https://tfhub.dev/deepmind/biggan-256).See the [BigGAN paper on arXiv](https://arxiv.org/abs/1809.11096) [1] for more information about these models.After connecting to a runtime, start by follow these instructions:1. (Optional) Update the selected **`module_path`** in the first code cell below to load a BigGAN generator for a different image resolution.2. Click **Runtime > Run all** to run each cell in order. Afterwards, the interactive visualizations should update automatically when you modify the settings using the sliders and dropdown menus. (If not, press the **Play** button by the cell to re-render outputs manually.)Note: if you run into any issues, it can help to click **Runtime > Restart and run all...** to restart your runtime and rerun all cells from scratch.[1] Andrew Brock, Jeff Donahue, and Karen Simonyan. "[Large Scale GAN Training for High Fidelity Natural Image Synthesis](https://arxiv.org/abs/1809.11096)". *arxiv:1809.11096*, 2018. First, set the module path.By default, we load the BigGAN generator for 256x256 images from **`https://tfhub.dev/deepmind/biggan-256/1`**.To generate 128x128 or 512x512 images, comment out the middle line and uncomment one of the others.
###Code
# module_path = 'https://tfhub.dev/deepmind/biggan-128/1' # 128x128 BigGAN
module_path = 'https://tfhub.dev/deepmind/biggan-256/1' # 256x256 BigGAN
# module_path = 'https://tfhub.dev/deepmind/biggan-512/1' # 512x512 BigGAN
###Output
_____no_output_____
###Markdown
Setup
###Code
import io
import IPython.display
import numpy as np
import PIL.Image
from scipy.stats import truncnorm
import tensorflow as tf
import tensorflow_hub as hub
###Output
_____no_output_____
###Markdown
Load a BigGAN generator module from TF Hub
###Code
tf.reset_default_graph()
print('Loading BigGAN module from:', module_path)
module = hub.Module(module_path)
inputs = {k: tf.placeholder(v.dtype, v.get_shape().as_list(), k)
          for k, v in module.get_input_info_dict().items()}
output = module(inputs)
print()
print('Inputs:\n', '\n'.join(
    '  {}: {}'.format(*kv) for kv in inputs.items()))
print()
print('Output:', output)
###Output
_____no_output_____
###Markdown
Define some functions for sampling and displaying BigGAN images
###Code
input_z = inputs['z']
input_y = inputs['y']
input_trunc = inputs['truncation']
dim_z = input_z.shape.as_list()[1]
vocab_size = input_y.shape.as_list()[1]
def truncated_z_sample(batch_size, truncation=1., seed=None):
state = None if seed is None else np.random.RandomState(seed)
values = truncnorm.rvs(-2, 2, size=(batch_size, dim_z), random_state=state)
return truncation * values
def one_hot(index, vocab_size=vocab_size):
index = np.asarray(index)
if len(index.shape) == 0:
index = np.asarray([index])
assert len(index.shape) == 1
num = index.shape[0]
output = np.zeros((num, vocab_size), dtype=np.float32)
output[np.arange(num), index] = 1
return output
def one_hot_if_needed(label, vocab_size=vocab_size):
label = np.asarray(label)
if len(label.shape) <= 1:
label = one_hot(label, vocab_size)
assert len(label.shape) == 2
return label
def sample(sess, noise, label, truncation=1., batch_size=8,
vocab_size=vocab_size):
noise = np.asarray(noise)
label = np.asarray(label)
num = noise.shape[0]
if len(label.shape) == 0:
label = np.asarray([label] * num)
if label.shape[0] != num:
raise ValueError('Got # noise samples ({}) != # label samples ({})'
.format(noise.shape[0], label.shape[0]))
label = one_hot_if_needed(label, vocab_size)
ims = []
  for batch_start in range(0, num, batch_size):
s = slice(batch_start, min(num, batch_start + batch_size))
feed_dict = {input_z: noise[s], input_y: label[s], input_trunc: truncation}
ims.append(sess.run(output, feed_dict=feed_dict))
ims = np.concatenate(ims, axis=0)
assert ims.shape[0] == num
ims = np.clip(((ims + 1) / 2.0) * 256, 0, 255)
ims = np.uint8(ims)
return ims
def interpolate(A, B, num_interps):
alphas = np.linspace(0, 1, num_interps)
if A.shape != B.shape:
raise ValueError('A and B must have the same shape to interpolate.')
return np.array([(1-a)*A + a*B for a in alphas])
def imgrid(imarray, cols=5, pad=1):
if imarray.dtype != np.uint8:
raise ValueError('imgrid input imarray must be uint8')
pad = int(pad)
assert pad >= 0
cols = int(cols)
assert cols >= 1
N, H, W, C = imarray.shape
rows = int(np.ceil(N / float(cols)))
batch_pad = rows * cols - N
assert batch_pad >= 0
post_pad = [batch_pad, pad, pad, 0]
pad_arg = [[0, p] for p in post_pad]
imarray = np.pad(imarray, pad_arg, 'constant', constant_values=255)
H += pad
W += pad
grid = (imarray
.reshape(rows, cols, H, W, C)
.transpose(0, 2, 1, 3, 4)
.reshape(rows*H, cols*W, C))
if pad:
grid = grid[:-pad, :-pad]
return grid
def imshow(a, format='png', jpeg_fallback=True):
a = np.asarray(a, dtype=np.uint8)
  str_file = io.BytesIO()
PIL.Image.fromarray(a).save(str_file, format)
png_data = str_file.getvalue()
try:
disp = IPython.display.display(IPython.display.Image(png_data))
except IOError:
if jpeg_fallback and format != 'jpeg':
      print(('Warning: image was too large to display in format "{}"; '
             'trying jpeg instead.').format(format))
return imshow(a, format='jpeg')
else:
raise
return disp
###Output
_____no_output_____
###Markdown
Create a TensorFlow session and initialize variables
###Code
initializer = tf.global_variables_initializer()
sess = tf.Session()
sess.run(initializer)
###Output
_____no_output_____
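###Markdown
Before playing with the sliders below, it can help to see numerically what the `truncation` input does to the latent vector: samples come from a truncated normal on [-2, 2] and are then scaled, so their magnitude is bounded by roughly 2 * truncation. A quick sketch using the helper defined above:
###Code
# Sketch: smaller truncation -> latent samples closer to zero (less variety, higher fidelity).
for t in (0.1, 0.5, 1.0):
  z = truncated_z_sample(4, truncation=t, seed=0)
  print('truncation', t, 'max |z| =', np.abs(z).max())
###Output
_____no_output_____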
###Markdown
Explore BigGAN samples of a particular categoryTry varying the **`truncation`** value.(Double-click on the cell to view code.)
###Code
#@title Category-conditional sampling { display-mode: "form", run: "auto" }
num_samples = 10 #@param {type:"slider", min:1, max:20, step:1}
truncation = 0.4 #@param {type:"slider", min:0.02, max:1, step:0.02}
noise_seed = 0 #@param {type:"slider", min:0, max:100, step:1}
category = "933) cheeseburger" #@param ["0) tench, Tinca tinca", "1) goldfish, Carassius auratus", "2) great white shark, white shark, man-eater, man-eating shark, Carcharodon carcharias", "3) tiger shark, Galeocerdo cuvieri", "4) hammerhead, hammerhead shark", "5) electric ray, crampfish, numbfish, torpedo", "6) stingray", "7) cock", "8) hen", "9) ostrich, Struthio camelus", "10) brambling, Fringilla montifringilla", "11) goldfinch, Carduelis carduelis", "12) house finch, linnet, Carpodacus mexicanus", "13) junco, snowbird", "14) indigo bunting, indigo finch, indigo bird, Passerina cyanea", "15) robin, American robin, Turdus migratorius", "16) bulbul", "17) jay", "18) magpie", "19) chickadee", "20) water ouzel, dipper", "21) kite", "22) bald eagle, American eagle, Haliaeetus leucocephalus", "23) vulture", "24) great grey owl, great gray owl, Strix nebulosa", "25) European fire salamander, Salamandra salamandra", "26) common newt, Triturus vulgaris", "27) eft", "28) spotted salamander, Ambystoma maculatum", "29) axolotl, mud puppy, Ambystoma mexicanum", "30) bullfrog, Rana catesbeiana", "31) tree frog, tree-frog", "32) tailed frog, bell toad, ribbed toad, tailed toad, Ascaphus trui", "33) loggerhead, loggerhead turtle, Caretta caretta", "34) leatherback turtle, leatherback, leathery turtle, Dermochelys coriacea", "35) mud turtle", "36) terrapin", "37) box turtle, box tortoise", "38) banded gecko", "39) common iguana, iguana, Iguana iguana", "40) American chameleon, anole, Anolis carolinensis", "41) whiptail, whiptail lizard", "42) agama", "43) frilled lizard, Chlamydosaurus kingi", "44) alligator lizard", "45) Gila monster, Heloderma suspectum", "46) green lizard, Lacerta viridis", "47) African chameleon, Chamaeleo chamaeleon", "48) Komodo dragon, Komodo lizard, dragon lizard, giant lizard, Varanus komodoensis", "49) African crocodile, Nile crocodile, Crocodylus niloticus", "50) American alligator, Alligator mississipiensis", "51) triceratops", "52) thunder snake, worm snake, Carphophis amoenus", "53) ringneck snake, ring-necked snake, ring snake", "54) hognose snake, puff adder, sand viper", "55) green snake, grass snake", "56) king snake, kingsnake", "57) garter snake, grass snake", "58) water snake", "59) vine snake", "60) night snake, Hypsiglena torquata", "61) boa constrictor, Constrictor constrictor", "62) rock python, rock snake, Python sebae", "63) Indian cobra, Naja naja", "64) green mamba", "65) sea snake", "66) horned viper, cerastes, sand viper, horned asp, Cerastes cornutus", "67) diamondback, diamondback rattlesnake, Crotalus adamanteus", "68) sidewinder, horned rattlesnake, Crotalus cerastes", "69) trilobite", "70) harvestman, daddy longlegs, Phalangium opilio", "71) scorpion", "72) black and gold garden spider, Argiope aurantia", "73) barn spider, Araneus cavaticus", "74) garden spider, Aranea diademata", "75) black widow, Latrodectus mactans", "76) tarantula", "77) wolf spider, hunting spider", "78) tick", "79) centipede", "80) black grouse", "81) ptarmigan", "82) ruffed grouse, partridge, Bonasa umbellus", "83) prairie chicken, prairie grouse, prairie fowl", "84) peacock", "85) quail", "86) partridge", "87) African grey, African gray, Psittacus erithacus", "88) macaw", "89) sulphur-crested cockatoo, Kakatoe galerita, Cacatua galerita", "90) lorikeet", "91) coucal", "92) bee eater", "93) hornbill", "94) hummingbird", "95) jacamar", "96) toucan", "97) drake", "98) red-breasted merganser, Mergus serrator", "99) goose", "100) black swan, Cygnus atratus", "101) tusker", "102) 
echidna, spiny anteater, anteater", "103) platypus, duckbill, duckbilled platypus, duck-billed platypus, Ornithorhynchus anatinus", "104) wallaby, brush kangaroo", "105) koala, koala bear, kangaroo bear, native bear, Phascolarctos cinereus", "106) wombat", "107) jellyfish", "108) sea anemone, anemone", "109) brain coral", "110) flatworm, platyhelminth", "111) nematode, nematode worm, roundworm", "112) conch", "113) snail", "114) slug", "115) sea slug, nudibranch", "116) chiton, coat-of-mail shell, sea cradle, polyplacophore", "117) chambered nautilus, pearly nautilus, nautilus", "118) Dungeness crab, Cancer magister", "119) rock crab, Cancer irroratus", "120) fiddler crab", "121) king crab, Alaska crab, Alaskan king crab, Alaska king crab, Paralithodes camtschatica", "122) American lobster, Northern lobster, Maine lobster, Homarus americanus", "123) spiny lobster, langouste, rock lobster, crawfish, crayfish, sea crawfish", "124) crayfish, crawfish, crawdad, crawdaddy", "125) hermit crab", "126) isopod", "127) white stork, Ciconia ciconia", "128) black stork, Ciconia nigra", "129) spoonbill", "130) flamingo", "131) little blue heron, Egretta caerulea", "132) American egret, great white heron, Egretta albus", "133) bittern", "134) crane", "135) limpkin, Aramus pictus", "136) European gallinule, Porphyrio porphyrio", "137) American coot, marsh hen, mud hen, water hen, Fulica americana", "138) bustard", "139) ruddy turnstone, Arenaria interpres", "140) red-backed sandpiper, dunlin, Erolia alpina", "141) redshank, Tringa totanus", "142) dowitcher", "143) oystercatcher, oyster catcher", "144) pelican", "145) king penguin, Aptenodytes patagonica", "146) albatross, mollymawk", "147) grey whale, gray whale, devilfish, Eschrichtius gibbosus, Eschrichtius robustus", "148) killer whale, killer, orca, grampus, sea wolf, Orcinus orca", "149) dugong, Dugong dugon", "150) sea lion", "151) Chihuahua", "152) Japanese spaniel", "153) Maltese dog, Maltese terrier, Maltese", "154) Pekinese, Pekingese, Peke", "155) Shih-Tzu", "156) Blenheim spaniel", "157) papillon", "158) toy terrier", "159) Rhodesian ridgeback", "160) Afghan hound, Afghan", "161) basset, basset hound", "162) beagle", "163) bloodhound, sleuthhound", "164) bluetick", "165) black-and-tan coonhound", "166) Walker hound, Walker foxhound", "167) English foxhound", "168) redbone", "169) borzoi, Russian wolfhound", "170) Irish wolfhound", "171) Italian greyhound", "172) whippet", "173) Ibizan hound, Ibizan Podenco", "174) Norwegian elkhound, elkhound", "175) otterhound, otter hound", "176) Saluki, gazelle hound", "177) Scottish deerhound, deerhound", "178) Weimaraner", "179) Staffordshire bullterrier, Staffordshire bull terrier", "180) American Staffordshire terrier, Staffordshire terrier, American pit bull terrier, pit bull terrier", "181) Bedlington terrier", "182) Border terrier", "183) Kerry blue terrier", "184) Irish terrier", "185) Norfolk terrier", "186) Norwich terrier", "187) Yorkshire terrier", "188) wire-haired fox terrier", "189) Lakeland terrier", "190) Sealyham terrier, Sealyham", "191) Airedale, Airedale terrier", "192) cairn, cairn terrier", "193) Australian terrier", "194) Dandie Dinmont, Dandie Dinmont terrier", "195) Boston bull, Boston terrier", "196) miniature schnauzer", "197) giant schnauzer", "198) standard schnauzer", "199) Scotch terrier, Scottish terrier, Scottie", "200) Tibetan terrier, chrysanthemum dog", "201) silky terrier, Sydney silky", "202) soft-coated wheaten terrier", "203) West Highland white terrier", "204) 
Lhasa, Lhasa apso", "205) flat-coated retriever", "206) curly-coated retriever", "207) golden retriever", "208) Labrador retriever", "209) Chesapeake Bay retriever", "210) German short-haired pointer", "211) vizsla, Hungarian pointer", "212) English setter", "213) Irish setter, red setter", "214) Gordon setter", "215) Brittany spaniel", "216) clumber, clumber spaniel", "217) English springer, English springer spaniel", "218) Welsh springer spaniel", "219) cocker spaniel, English cocker spaniel, cocker", "220) Sussex spaniel", "221) Irish water spaniel", "222) kuvasz", "223) schipperke", "224) groenendael", "225) malinois", "226) briard", "227) kelpie", "228) komondor", "229) Old English sheepdog, bobtail", "230) Shetland sheepdog, Shetland sheep dog, Shetland", "231) collie", "232) Border collie", "233) Bouvier des Flandres, Bouviers des Flandres", "234) Rottweiler", "235) German shepherd, German shepherd dog, German police dog, alsatian", "236) Doberman, Doberman pinscher", "237) miniature pinscher", "238) Greater Swiss Mountain dog", "239) Bernese mountain dog", "240) Appenzeller", "241) EntleBucher", "242) boxer", "243) bull mastiff", "244) Tibetan mastiff", "245) French bulldog", "246) Great Dane", "247) Saint Bernard, St Bernard", "248) Eskimo dog, husky", "249) malamute, malemute, Alaskan malamute", "250) Siberian husky", "251) dalmatian, coach dog, carriage dog", "252) affenpinscher, monkey pinscher, monkey dog", "253) basenji", "254) pug, pug-dog", "255) Leonberg", "256) Newfoundland, Newfoundland dog", "257) Great Pyrenees", "258) Samoyed, Samoyede", "259) Pomeranian", "260) chow, chow chow", "261) keeshond", "262) Brabancon griffon", "263) Pembroke, Pembroke Welsh corgi", "264) Cardigan, Cardigan Welsh corgi", "265) toy poodle", "266) miniature poodle", "267) standard poodle", "268) Mexican hairless", "269) timber wolf, grey wolf, gray wolf, Canis lupus", "270) white wolf, Arctic wolf, Canis lupus tundrarum", "271) red wolf, maned wolf, Canis rufus, Canis niger", "272) coyote, prairie wolf, brush wolf, Canis latrans", "273) dingo, warrigal, warragal, Canis dingo", "274) dhole, Cuon alpinus", "275) African hunting dog, hyena dog, Cape hunting dog, Lycaon pictus", "276) hyena, hyaena", "277) red fox, Vulpes vulpes", "278) kit fox, Vulpes macrotis", "279) Arctic fox, white fox, Alopex lagopus", "280) grey fox, gray fox, Urocyon cinereoargenteus", "281) tabby, tabby cat", "282) tiger cat", "283) Persian cat", "284) Siamese cat, Siamese", "285) Egyptian cat", "286) cougar, puma, catamount, mountain lion, painter, panther, Felis concolor", "287) lynx, catamount", "288) leopard, Panthera pardus", "289) snow leopard, ounce, Panthera uncia", "290) jaguar, panther, Panthera onca, Felis onca", "291) lion, king of beasts, Panthera leo", "292) tiger, Panthera tigris", "293) cheetah, chetah, Acinonyx jubatus", "294) brown bear, bruin, Ursus arctos", "295) American black bear, black bear, Ursus americanus, Euarctos americanus", "296) ice bear, polar bear, Ursus Maritimus, Thalarctos maritimus", "297) sloth bear, Melursus ursinus, Ursus ursinus", "298) mongoose", "299) meerkat, mierkat", "300) tiger beetle", "301) ladybug, ladybeetle, lady beetle, ladybird, ladybird beetle", "302) ground beetle, carabid beetle", "303) long-horned beetle, longicorn, longicorn beetle", "304) leaf beetle, chrysomelid", "305) dung beetle", "306) rhinoceros beetle", "307) weevil", "308) fly", "309) bee", "310) ant, emmet, pismire", "311) grasshopper, hopper", "312) cricket", "313) walking stick, walkingstick, stick 
insect", "314) cockroach, roach", "315) mantis, mantid", "316) cicada, cicala", "317) leafhopper", "318) lacewing, lacewing fly", "319) dragonfly, darning needle, devil's darning needle, sewing needle, snake feeder, snake doctor, mosquito hawk, skeeter hawk", "320) damselfly", "321) admiral", "322) ringlet, ringlet butterfly", "323) monarch, monarch butterfly, milkweed butterfly, Danaus plexippus", "324) cabbage butterfly", "325) sulphur butterfly, sulfur butterfly", "326) lycaenid, lycaenid butterfly", "327) starfish, sea star", "328) sea urchin", "329) sea cucumber, holothurian", "330) wood rabbit, cottontail, cottontail rabbit", "331) hare", "332) Angora, Angora rabbit", "333) hamster", "334) porcupine, hedgehog", "335) fox squirrel, eastern fox squirrel, Sciurus niger", "336) marmot", "337) beaver", "338) guinea pig, Cavia cobaya", "339) sorrel", "340) zebra", "341) hog, pig, grunter, squealer, Sus scrofa", "342) wild boar, boar, Sus scrofa", "343) warthog", "344) hippopotamus, hippo, river horse, Hippopotamus amphibius", "345) ox", "346) water buffalo, water ox, Asiatic buffalo, Bubalus bubalis", "347) bison", "348) ram, tup", "349) bighorn, bighorn sheep, cimarron, Rocky Mountain bighorn, Rocky Mountain sheep, Ovis canadensis", "350) ibex, Capra ibex", "351) hartebeest", "352) impala, Aepyceros melampus", "353) gazelle", "354) Arabian camel, dromedary, Camelus dromedarius", "355) llama", "356) weasel", "357) mink", "358) polecat, fitch, foulmart, foumart, Mustela putorius", "359) black-footed ferret, ferret, Mustela nigripes", "360) otter", "361) skunk, polecat, wood pussy", "362) badger", "363) armadillo", "364) three-toed sloth, ai, Bradypus tridactylus", "365) orangutan, orang, orangutang, Pongo pygmaeus", "366) gorilla, Gorilla gorilla", "367) chimpanzee, chimp, Pan troglodytes", "368) gibbon, Hylobates lar", "369) siamang, Hylobates syndactylus, Symphalangus syndactylus", "370) guenon, guenon monkey", "371) patas, hussar monkey, Erythrocebus patas", "372) baboon", "373) macaque", "374) langur", "375) colobus, colobus monkey", "376) proboscis monkey, Nasalis larvatus", "377) marmoset", "378) capuchin, ringtail, Cebus capucinus", "379) howler monkey, howler", "380) titi, titi monkey", "381) spider monkey, Ateles geoffroyi", "382) squirrel monkey, Saimiri sciureus", "383) Madagascar cat, ring-tailed lemur, Lemur catta", "384) indri, indris, Indri indri, Indri brevicaudatus", "385) Indian elephant, Elephas maximus", "386) African elephant, Loxodonta africana", "387) lesser panda, red panda, panda, bear cat, cat bear, Ailurus fulgens", "388) giant panda, panda, panda bear, coon bear, Ailuropoda melanoleuca", "389) barracouta, snoek", "390) eel", "391) coho, cohoe, coho salmon, blue jack, silver salmon, Oncorhynchus kisutch", "392) rock beauty, Holocanthus tricolor", "393) anemone fish", "394) sturgeon", "395) gar, garfish, garpike, billfish, Lepisosteus osseus", "396) lionfish", "397) puffer, pufferfish, blowfish, globefish", "398) abacus", "399) abaya", "400) academic gown, academic robe, judge's robe", "401) accordion, piano accordion, squeeze box", "402) acoustic guitar", "403) aircraft carrier, carrier, flattop, attack aircraft carrier", "404) airliner", "405) airship, dirigible", "406) altar", "407) ambulance", "408) amphibian, amphibious vehicle", "409) analog clock", "410) apiary, bee house", "411) apron", "412) ashcan, trash can, garbage can, wastebin, ash bin, ash-bin, ashbin, dustbin, trash barrel, trash bin", "413) assault rifle, assault gun", "414) backpack, back pack, 
knapsack, packsack, rucksack, haversack", "415) bakery, bakeshop, bakehouse", "416) balance beam, beam", "417) balloon", "418) ballpoint, ballpoint pen, ballpen, Biro", "419) Band Aid", "420) banjo", "421) bannister, banister, balustrade, balusters, handrail", "422) barbell", "423) barber chair", "424) barbershop", "425) barn", "426) barometer", "427) barrel, cask", "428) barrow, garden cart, lawn cart, wheelbarrow", "429) baseball", "430) basketball", "431) bassinet", "432) bassoon", "433) bathing cap, swimming cap", "434) bath towel", "435) bathtub, bathing tub, bath, tub", "436) beach wagon, station wagon, wagon, estate car, beach waggon, station waggon, waggon", "437) beacon, lighthouse, beacon light, pharos", "438) beaker", "439) bearskin, busby, shako", "440) beer bottle", "441) beer glass", "442) bell cote, bell cot", "443) bib", "444) bicycle-built-for-two, tandem bicycle, tandem", "445) bikini, two-piece", "446) binder, ring-binder", "447) binoculars, field glasses, opera glasses", "448) birdhouse", "449) boathouse", "450) bobsled, bobsleigh, bob", "451) bolo tie, bolo, bola tie, bola", "452) bonnet, poke bonnet", "453) bookcase", "454) bookshop, bookstore, bookstall", "455) bottlecap", "456) bow", "457) bow tie, bow-tie, bowtie", "458) brass, memorial tablet, plaque", "459) brassiere, bra, bandeau", "460) breakwater, groin, groyne, mole, bulwark, seawall, jetty", "461) breastplate, aegis, egis", "462) broom", "463) bucket, pail", "464) buckle", "465) bulletproof vest", "466) bullet train, bullet", "467) butcher shop, meat market", "468) cab, hack, taxi, taxicab", "469) caldron, cauldron", "470) candle, taper, wax light", "471) cannon", "472) canoe", "473) can opener, tin opener", "474) cardigan", "475) car mirror", "476) carousel, carrousel, merry-go-round, roundabout, whirligig", "477) carpenter's kit, tool kit", "478) carton", "479) car wheel", "480) cash machine, cash dispenser, automated teller machine, automatic teller machine, automated teller, automatic teller, ATM", "481) cassette", "482) cassette player", "483) castle", "484) catamaran", "485) CD player", "486) cello, violoncello", "487) cellular telephone, cellular phone, cellphone, cell, mobile phone", "488) chain", "489) chainlink fence", "490) chain mail, ring mail, mail, chain armor, chain armour, ring armor, ring armour", "491) chain saw, chainsaw", "492) chest", "493) chiffonier, commode", "494) chime, bell, gong", "495) china cabinet, china closet", "496) Christmas stocking", "497) church, church building", "498) cinema, movie theater, movie theatre, movie house, picture palace", "499) cleaver, meat cleaver, chopper", "500) cliff dwelling", "501) cloak", "502) clog, geta, patten, sabot", "503) cocktail shaker", "504) coffee mug", "505) coffeepot", "506) coil, spiral, volute, whorl, helix", "507) combination lock", "508) computer keyboard, keypad", "509) confectionery, confectionary, candy store", "510) container ship, containership, container vessel", "511) convertible", "512) corkscrew, bottle screw", "513) cornet, horn, trumpet, trump", "514) cowboy boot", "515) cowboy hat, ten-gallon hat", "516) cradle", "517) crane", "518) crash helmet", "519) crate", "520) crib, cot", "521) Crock Pot", "522) croquet ball", "523) crutch", "524) cuirass", "525) dam, dike, dyke", "526) desk", "527) desktop computer", "528) dial telephone, dial phone", "529) diaper, nappy, napkin", "530) digital clock", "531) digital watch", "532) dining table, board", "533) dishrag, dishcloth", "534) dishwasher, dish washer, dishwashing 
machine", "535) disk brake, disc brake", "536) dock, dockage, docking facility", "537) dogsled, dog sled, dog sleigh", "538) dome", "539) doormat, welcome mat", "540) drilling platform, offshore rig", "541) drum, membranophone, tympan", "542) drumstick", "543) dumbbell", "544) Dutch oven", "545) electric fan, blower", "546) electric guitar", "547) electric locomotive", "548) entertainment center", "549) envelope", "550) espresso maker", "551) face powder", "552) feather boa, boa", "553) file, file cabinet, filing cabinet", "554) fireboat", "555) fire engine, fire truck", "556) fire screen, fireguard", "557) flagpole, flagstaff", "558) flute, transverse flute", "559) folding chair", "560) football helmet", "561) forklift", "562) fountain", "563) fountain pen", "564) four-poster", "565) freight car", "566) French horn, horn", "567) frying pan, frypan, skillet", "568) fur coat", "569) garbage truck, dustcart", "570) gasmask, respirator, gas helmet", "571) gas pump, gasoline pump, petrol pump, island dispenser", "572) goblet", "573) go-kart", "574) golf ball", "575) golfcart, golf cart", "576) gondola", "577) gong, tam-tam", "578) gown", "579) grand piano, grand", "580) greenhouse, nursery, glasshouse", "581) grille, radiator grille", "582) grocery store, grocery, food market, market", "583) guillotine", "584) hair slide", "585) hair spray", "586) half track", "587) hammer", "588) hamper", "589) hand blower, blow dryer, blow drier, hair dryer, hair drier", "590) hand-held computer, hand-held microcomputer", "591) handkerchief, hankie, hanky, hankey", "592) hard disc, hard disk, fixed disk", "593) harmonica, mouth organ, harp, mouth harp", "594) harp", "595) harvester, reaper", "596) hatchet", "597) holster", "598) home theater, home theatre", "599) honeycomb", "600) hook, claw", "601) hoopskirt, crinoline", "602) horizontal bar, high bar", "603) horse cart, horse-cart", "604) hourglass", "605) iPod", "606) iron, smoothing iron", "607) jack-o'-lantern", "608) jean, blue jean, denim", "609) jeep, landrover", "610) jersey, T-shirt, tee shirt", "611) jigsaw puzzle", "612) jinrikisha, ricksha, rickshaw", "613) joystick", "614) kimono", "615) knee pad", "616) knot", "617) lab coat, laboratory coat", "618) ladle", "619) lampshade, lamp shade", "620) laptop, laptop computer", "621) lawn mower, mower", "622) lens cap, lens cover", "623) letter opener, paper knife, paperknife", "624) library", "625) lifeboat", "626) lighter, light, igniter, ignitor", "627) limousine, limo", "628) liner, ocean liner", "629) lipstick, lip rouge", "630) Loafer", "631) lotion", "632) loudspeaker, speaker, speaker unit, loudspeaker system, speaker system", "633) loupe, jeweler's loupe", "634) lumbermill, sawmill", "635) magnetic compass", "636) mailbag, postbag", "637) mailbox, letter box", "638) maillot", "639) maillot, tank suit", "640) manhole cover", "641) maraca", "642) marimba, xylophone", "643) mask", "644) matchstick", "645) maypole", "646) maze, labyrinth", "647) measuring cup", "648) medicine chest, medicine cabinet", "649) megalith, megalithic structure", "650) microphone, mike", "651) microwave, microwave oven", "652) military uniform", "653) milk can", "654) minibus", "655) miniskirt, mini", "656) minivan", "657) missile", "658) mitten", "659) mixing bowl", "660) mobile home, manufactured home", "661) Model T", "662) modem", "663) monastery", "664) monitor", "665) moped", "666) mortar", "667) mortarboard", "668) mosque", "669) mosquito net", "670) motor scooter, scooter", "671) mountain bike, all-terrain bike, 
off-roader", "672) mountain tent", "673) mouse, computer mouse", "674) mousetrap", "675) moving van", "676) muzzle", "677) nail", "678) neck brace", "679) necklace", "680) nipple", "681) notebook, notebook computer", "682) obelisk", "683) oboe, hautboy, hautbois", "684) ocarina, sweet potato", "685) odometer, hodometer, mileometer, milometer", "686) oil filter", "687) organ, pipe organ", "688) oscilloscope, scope, cathode-ray oscilloscope, CRO", "689) overskirt", "690) oxcart", "691) oxygen mask", "692) packet", "693) paddle, boat paddle", "694) paddlewheel, paddle wheel", "695) padlock", "696) paintbrush", "697) pajama, pyjama, pj's, jammies", "698) palace", "699) panpipe, pandean pipe, syrinx", "700) paper towel", "701) parachute, chute", "702) parallel bars, bars", "703) park bench", "704) parking meter", "705) passenger car, coach, carriage", "706) patio, terrace", "707) pay-phone, pay-station", "708) pedestal, plinth, footstall", "709) pencil box, pencil case", "710) pencil sharpener", "711) perfume, essence", "712) Petri dish", "713) photocopier", "714) pick, plectrum, plectron", "715) pickelhaube", "716) picket fence, paling", "717) pickup, pickup truck", "718) pier", "719) piggy bank, penny bank", "720) pill bottle", "721) pillow", "722) ping-pong ball", "723) pinwheel", "724) pirate, pirate ship", "725) pitcher, ewer", "726) plane, carpenter's plane, woodworking plane", "727) planetarium", "728) plastic bag", "729) plate rack", "730) plow, plough", "731) plunger, plumber's helper", "732) Polaroid camera, Polaroid Land camera", "733) pole", "734) police van, police wagon, paddy wagon, patrol wagon, wagon, black Maria", "735) poncho", "736) pool table, billiard table, snooker table", "737) pop bottle, soda bottle", "738) pot, flowerpot", "739) potter's wheel", "740) power drill", "741) prayer rug, prayer mat", "742) printer", "743) prison, prison house", "744) projectile, missile", "745) projector", "746) puck, hockey puck", "747) punching bag, punch bag, punching ball, punchball", "748) purse", "749) quill, quill pen", "750) quilt, comforter, comfort, puff", "751) racer, race car, racing car", "752) racket, racquet", "753) radiator", "754) radio, wireless", "755) radio telescope, radio reflector", "756) rain barrel", "757) recreational vehicle, RV, R.V.", "758) reel", "759) reflex camera", "760) refrigerator, icebox", "761) remote control, remote", "762) restaurant, eating house, eating place, eatery", "763) revolver, six-gun, six-shooter", "764) rifle", "765) rocking chair, rocker", "766) rotisserie", "767) rubber eraser, rubber, pencil eraser", "768) rugby ball", "769) rule, ruler", "770) running shoe", "771) safe", "772) safety pin", "773) saltshaker, salt shaker", "774) sandal", "775) sarong", "776) sax, saxophone", "777) scabbard", "778) scale, weighing machine", "779) school bus", "780) schooner", "781) scoreboard", "782) screen, CRT screen", "783) screw", "784) screwdriver", "785) seat belt, seatbelt", "786) sewing machine", "787) shield, buckler", "788) shoe shop, shoe-shop, shoe store", "789) shoji", "790) shopping basket", "791) shopping cart", "792) shovel", "793) shower cap", "794) shower curtain", "795) ski", "796) ski mask", "797) sleeping bag", "798) slide rule, slipstick", "799) sliding door", "800) slot, one-armed bandit", "801) snorkel", "802) snowmobile", "803) snowplow, snowplough", "804) soap dispenser", "805) soccer ball", "806) sock", "807) solar dish, solar collector, solar furnace", "808) sombrero", "809) soup bowl", "810) space bar", "811) space heater", 
"812) space shuttle", "813) spatula", "814) speedboat", "815) spider web, spider's web", "816) spindle", "817) sports car, sport car", "818) spotlight, spot", "819) stage", "820) steam locomotive", "821) steel arch bridge", "822) steel drum", "823) stethoscope", "824) stole", "825) stone wall", "826) stopwatch, stop watch", "827) stove", "828) strainer", "829) streetcar, tram, tramcar, trolley, trolley car", "830) stretcher", "831) studio couch, day bed", "832) stupa, tope", "833) submarine, pigboat, sub, U-boat", "834) suit, suit of clothes", "835) sundial", "836) sunglass", "837) sunglasses, dark glasses, shades", "838) sunscreen, sunblock, sun blocker", "839) suspension bridge", "840) swab, swob, mop", "841) sweatshirt", "842) swimming trunks, bathing trunks", "843) swing", "844) switch, electric switch, electrical switch", "845) syringe", "846) table lamp", "847) tank, army tank, armored combat vehicle, armoured combat vehicle", "848) tape player", "849) teapot", "850) teddy, teddy bear", "851) television, television system", "852) tennis ball", "853) thatch, thatched roof", "854) theater curtain, theatre curtain", "855) thimble", "856) thresher, thrasher, threshing machine", "857) throne", "858) tile roof", "859) toaster", "860) tobacco shop, tobacconist shop, tobacconist", "861) toilet seat", "862) torch", "863) totem pole", "864) tow truck, tow car, wrecker", "865) toyshop", "866) tractor", "867) trailer truck, tractor trailer, trucking rig, rig, articulated lorry, semi", "868) tray", "869) trench coat", "870) tricycle, trike, velocipede", "871) trimaran", "872) tripod", "873) triumphal arch", "874) trolleybus, trolley coach, trackless trolley", "875) trombone", "876) tub, vat", "877) turnstile", "878) typewriter keyboard", "879) umbrella", "880) unicycle, monocycle", "881) upright, upright piano", "882) vacuum, vacuum cleaner", "883) vase", "884) vault", "885) velvet", "886) vending machine", "887) vestment", "888) viaduct", "889) violin, fiddle", "890) volleyball", "891) waffle iron", "892) wall clock", "893) wallet, billfold, notecase, pocketbook", "894) wardrobe, closet, press", "895) warplane, military plane", "896) washbasin, handbasin, washbowl, lavabo, wash-hand basin", "897) washer, automatic washer, washing machine", "898) water bottle", "899) water jug", "900) water tower", "901) whiskey jug", "902) whistle", "903) wig", "904) window screen", "905) window shade", "906) Windsor tie", "907) wine bottle", "908) wing", "909) wok", "910) wooden spoon", "911) wool, woolen, woollen", "912) worm fence, snake fence, snake-rail fence, Virginia fence", "913) wreck", "914) yawl", "915) yurt", "916) web site, website, internet site, site", "917) comic book", "918) crossword puzzle, crossword", "919) street sign", "920) traffic light, traffic signal, stoplight", "921) book jacket, dust cover, dust jacket, dust wrapper", "922) menu", "923) plate", "924) guacamole", "925) consomme", "926) hot pot, hotpot", "927) trifle", "928) ice cream, icecream", "929) ice lolly, lolly, lollipop, popsicle", "930) French loaf", "931) bagel, beigel", "932) pretzel", "933) cheeseburger", "934) hotdog, hot dog, red hot", "935) mashed potato", "936) head cabbage", "937) broccoli", "938) cauliflower", "939) zucchini, courgette", "940) spaghetti squash", "941) acorn squash", "942) butternut squash", "943) cucumber, cuke", "944) artichoke, globe artichoke", "945) bell pepper", "946) cardoon", "947) mushroom", "948) Granny Smith", "949) strawberry", "950) orange", "951) lemon", "952) fig", "953) pineapple, 
ananas", "954) banana", "955) jackfruit, jak, jack", "956) custard apple", "957) pomegranate", "958) hay", "959) carbonara", "960) chocolate sauce, chocolate syrup", "961) dough", "962) meat loaf, meatloaf", "963) pizza, pizza pie", "964) potpie", "965) burrito", "966) red wine", "967) espresso", "968) cup", "969) eggnog", "970) alp", "971) bubble", "972) cliff, drop, drop-off", "973) coral reef", "974) geyser", "975) lakeside, lakeshore", "976) promontory, headland, head, foreland", "977) sandbar, sand bar", "978) seashore, coast, seacoast, sea-coast", "979) valley, vale", "980) volcano", "981) ballplayer, baseball player", "982) groom, bridegroom", "983) scuba diver", "984) rapeseed", "985) daisy", "986) yellow lady's slipper, yellow lady-slipper, Cypripedium calceolus, Cypripedium parviflorum", "987) corn", "988) acorn", "989) hip, rose hip, rosehip", "990) buckeye, horse chestnut, conker", "991) coral fungus", "992) agaric", "993) gyromitra", "994) stinkhorn, carrion fungus", "995) earthstar", "996) hen-of-the-woods, hen of the woods, Polyporus frondosus, Grifola frondosa", "997) bolete", "998) ear, spike, capitulum", "999) toilet tissue, toilet paper, bathroom tissue"]
z = truncated_z_sample(num_samples, truncation, noise_seed)
y = int(category.split(')')[0])
ims = sample(sess, z, y, truncation=truncation)
imshow(imgrid(ims, cols=min(num_samples, 5)))
###Output
_____no_output_____
###Markdown
Interpolate between BigGAN samplesTry setting different **`category`**s with the same **`noise_seed`**s, or the same **`category`**s with different **`noise_seed`**s. Or go wild and set both any way you like!(Double-click on the cell to view code.)
###Code
#@title Interpolation { display-mode: "form", run: "auto" }
num_samples = 2 #@param {type:"slider", min:1, max:5, step:1}
num_interps = 5 #@param {type:"slider", min:2, max:10, step:1}
truncation = 0.2 #@param {type:"slider", min:0.02, max:1, step:0.02}
noise_seed_A = 0 #@param {type:"slider", min:0, max:100, step:1}
category_A = "207) golden retriever" #@param ["0) tench, Tinca tinca", "1) goldfish, Carassius auratus", "2) great white shark, white shark, man-eater, man-eating shark, Carcharodon carcharias", "3) tiger shark, Galeocerdo cuvieri", "4) hammerhead, hammerhead shark", "5) electric ray, crampfish, numbfish, torpedo", "6) stingray", "7) cock", "8) hen", "9) ostrich, Struthio camelus", "10) brambling, Fringilla montifringilla", "11) goldfinch, Carduelis carduelis", "12) house finch, linnet, Carpodacus mexicanus", "13) junco, snowbird", "14) indigo bunting, indigo finch, indigo bird, Passerina cyanea", "15) robin, American robin, Turdus migratorius", "16) bulbul", "17) jay", "18) magpie", "19) chickadee", "20) water ouzel, dipper", "21) kite", "22) bald eagle, American eagle, Haliaeetus leucocephalus", "23) vulture", "24) great grey owl, great gray owl, Strix nebulosa", "25) European fire salamander, Salamandra salamandra", "26) common newt, Triturus vulgaris", "27) eft", "28) spotted salamander, Ambystoma maculatum", "29) axolotl, mud puppy, Ambystoma mexicanum", "30) bullfrog, Rana catesbeiana", "31) tree frog, tree-frog", "32) tailed frog, bell toad, ribbed toad, tailed toad, Ascaphus trui", "33) loggerhead, loggerhead turtle, Caretta caretta", "34) leatherback turtle, leatherback, leathery turtle, Dermochelys coriacea", "35) mud turtle", "36) terrapin", "37) box turtle, box tortoise", "38) banded gecko", "39) common iguana, iguana, Iguana iguana", "40) American chameleon, anole, Anolis carolinensis", "41) whiptail, whiptail lizard", "42) agama", "43) frilled lizard, Chlamydosaurus kingi", "44) alligator lizard", "45) Gila monster, Heloderma suspectum", "46) green lizard, Lacerta viridis", "47) African chameleon, Chamaeleo chamaeleon", "48) Komodo dragon, Komodo lizard, dragon lizard, giant lizard, Varanus komodoensis", "49) African crocodile, Nile crocodile, Crocodylus niloticus", "50) American alligator, Alligator mississipiensis", "51) triceratops", "52) thunder snake, worm snake, Carphophis amoenus", "53) ringneck snake, ring-necked snake, ring snake", "54) hognose snake, puff adder, sand viper", "55) green snake, grass snake", "56) king snake, kingsnake", "57) garter snake, grass snake", "58) water snake", "59) vine snake", "60) night snake, Hypsiglena torquata", "61) boa constrictor, Constrictor constrictor", "62) rock python, rock snake, Python sebae", "63) Indian cobra, Naja naja", "64) green mamba", "65) sea snake", "66) horned viper, cerastes, sand viper, horned asp, Cerastes cornutus", "67) diamondback, diamondback rattlesnake, Crotalus adamanteus", "68) sidewinder, horned rattlesnake, Crotalus cerastes", "69) trilobite", "70) harvestman, daddy longlegs, Phalangium opilio", "71) scorpion", "72) black and gold garden spider, Argiope aurantia", "73) barn spider, Araneus cavaticus", "74) garden spider, Aranea diademata", "75) black widow, Latrodectus mactans", "76) tarantula", "77) wolf spider, hunting spider", "78) tick", "79) centipede", "80) black grouse", "81) ptarmigan", "82) ruffed grouse, partridge, Bonasa umbellus", "83) prairie chicken, prairie grouse, prairie fowl", "84) peacock", "85) quail", "86) partridge", "87) African grey, African gray, Psittacus erithacus", "88) macaw", "89) sulphur-crested cockatoo, Kakatoe galerita, Cacatua galerita", "90) lorikeet", "91) coucal", "92) bee eater", "93) hornbill", "94) hummingbird", "95) jacamar", "96) toucan", "97) drake", "98) red-breasted merganser, Mergus serrator", "99) goose", "100) black swan, Cygnus atratus", "101) tusker", 
"102) echidna, spiny anteater, anteater", "103) platypus, duckbill, duckbilled platypus, duck-billed platypus, Ornithorhynchus anatinus", "104) wallaby, brush kangaroo", "105) koala, koala bear, kangaroo bear, native bear, Phascolarctos cinereus", "106) wombat", "107) jellyfish", "108) sea anemone, anemone", "109) brain coral", "110) flatworm, platyhelminth", "111) nematode, nematode worm, roundworm", "112) conch", "113) snail", "114) slug", "115) sea slug, nudibranch", "116) chiton, coat-of-mail shell, sea cradle, polyplacophore", "117) chambered nautilus, pearly nautilus, nautilus", "118) Dungeness crab, Cancer magister", "119) rock crab, Cancer irroratus", "120) fiddler crab", "121) king crab, Alaska crab, Alaskan king crab, Alaska king crab, Paralithodes camtschatica", "122) American lobster, Northern lobster, Maine lobster, Homarus americanus", "123) spiny lobster, langouste, rock lobster, crawfish, crayfish, sea crawfish", "124) crayfish, crawfish, crawdad, crawdaddy", "125) hermit crab", "126) isopod", "127) white stork, Ciconia ciconia", "128) black stork, Ciconia nigra", "129) spoonbill", "130) flamingo", "131) little blue heron, Egretta caerulea", "132) American egret, great white heron, Egretta albus", "133) bittern", "134) crane", "135) limpkin, Aramus pictus", "136) European gallinule, Porphyrio porphyrio", "137) American coot, marsh hen, mud hen, water hen, Fulica americana", "138) bustard", "139) ruddy turnstone, Arenaria interpres", "140) red-backed sandpiper, dunlin, Erolia alpina", "141) redshank, Tringa totanus", "142) dowitcher", "143) oystercatcher, oyster catcher", "144) pelican", "145) king penguin, Aptenodytes patagonica", "146) albatross, mollymawk", "147) grey whale, gray whale, devilfish, Eschrichtius gibbosus, Eschrichtius robustus", "148) killer whale, killer, orca, grampus, sea wolf, Orcinus orca", "149) dugong, Dugong dugon", "150) sea lion", "151) Chihuahua", "152) Japanese spaniel", "153) Maltese dog, Maltese terrier, Maltese", "154) Pekinese, Pekingese, Peke", "155) Shih-Tzu", "156) Blenheim spaniel", "157) papillon", "158) toy terrier", "159) Rhodesian ridgeback", "160) Afghan hound, Afghan", "161) basset, basset hound", "162) beagle", "163) bloodhound, sleuthhound", "164) bluetick", "165) black-and-tan coonhound", "166) Walker hound, Walker foxhound", "167) English foxhound", "168) redbone", "169) borzoi, Russian wolfhound", "170) Irish wolfhound", "171) Italian greyhound", "172) whippet", "173) Ibizan hound, Ibizan Podenco", "174) Norwegian elkhound, elkhound", "175) otterhound, otter hound", "176) Saluki, gazelle hound", "177) Scottish deerhound, deerhound", "178) Weimaraner", "179) Staffordshire bullterrier, Staffordshire bull terrier", "180) American Staffordshire terrier, Staffordshire terrier, American pit bull terrier, pit bull terrier", "181) Bedlington terrier", "182) Border terrier", "183) Kerry blue terrier", "184) Irish terrier", "185) Norfolk terrier", "186) Norwich terrier", "187) Yorkshire terrier", "188) wire-haired fox terrier", "189) Lakeland terrier", "190) Sealyham terrier, Sealyham", "191) Airedale, Airedale terrier", "192) cairn, cairn terrier", "193) Australian terrier", "194) Dandie Dinmont, Dandie Dinmont terrier", "195) Boston bull, Boston terrier", "196) miniature schnauzer", "197) giant schnauzer", "198) standard schnauzer", "199) Scotch terrier, Scottish terrier, Scottie", "200) Tibetan terrier, chrysanthemum dog", "201) silky terrier, Sydney silky", "202) soft-coated wheaten terrier", "203) West Highland white terrier", 
"204) Lhasa, Lhasa apso", "205) flat-coated retriever", "206) curly-coated retriever", "207) golden retriever", "208) Labrador retriever", "209) Chesapeake Bay retriever", "210) German short-haired pointer", "211) vizsla, Hungarian pointer", "212) English setter", "213) Irish setter, red setter", "214) Gordon setter", "215) Brittany spaniel", "216) clumber, clumber spaniel", "217) English springer, English springer spaniel", "218) Welsh springer spaniel", "219) cocker spaniel, English cocker spaniel, cocker", "220) Sussex spaniel", "221) Irish water spaniel", "222) kuvasz", "223) schipperke", "224) groenendael", "225) malinois", "226) briard", "227) kelpie", "228) komondor", "229) Old English sheepdog, bobtail", "230) Shetland sheepdog, Shetland sheep dog, Shetland", "231) collie", "232) Border collie", "233) Bouvier des Flandres, Bouviers des Flandres", "234) Rottweiler", "235) German shepherd, German shepherd dog, German police dog, alsatian", "236) Doberman, Doberman pinscher", "237) miniature pinscher", "238) Greater Swiss Mountain dog", "239) Bernese mountain dog", "240) Appenzeller", "241) EntleBucher", "242) boxer", "243) bull mastiff", "244) Tibetan mastiff", "245) French bulldog", "246) Great Dane", "247) Saint Bernard, St Bernard", "248) Eskimo dog, husky", "249) malamute, malemute, Alaskan malamute", "250) Siberian husky", "251) dalmatian, coach dog, carriage dog", "252) affenpinscher, monkey pinscher, monkey dog", "253) basenji", "254) pug, pug-dog", "255) Leonberg", "256) Newfoundland, Newfoundland dog", "257) Great Pyrenees", "258) Samoyed, Samoyede", "259) Pomeranian", "260) chow, chow chow", "261) keeshond", "262) Brabancon griffon", "263) Pembroke, Pembroke Welsh corgi", "264) Cardigan, Cardigan Welsh corgi", "265) toy poodle", "266) miniature poodle", "267) standard poodle", "268) Mexican hairless", "269) timber wolf, grey wolf, gray wolf, Canis lupus", "270) white wolf, Arctic wolf, Canis lupus tundrarum", "271) red wolf, maned wolf, Canis rufus, Canis niger", "272) coyote, prairie wolf, brush wolf, Canis latrans", "273) dingo, warrigal, warragal, Canis dingo", "274) dhole, Cuon alpinus", "275) African hunting dog, hyena dog, Cape hunting dog, Lycaon pictus", "276) hyena, hyaena", "277) red fox, Vulpes vulpes", "278) kit fox, Vulpes macrotis", "279) Arctic fox, white fox, Alopex lagopus", "280) grey fox, gray fox, Urocyon cinereoargenteus", "281) tabby, tabby cat", "282) tiger cat", "283) Persian cat", "284) Siamese cat, Siamese", "285) Egyptian cat", "286) cougar, puma, catamount, mountain lion, painter, panther, Felis concolor", "287) lynx, catamount", "288) leopard, Panthera pardus", "289) snow leopard, ounce, Panthera uncia", "290) jaguar, panther, Panthera onca, Felis onca", "291) lion, king of beasts, Panthera leo", "292) tiger, Panthera tigris", "293) cheetah, chetah, Acinonyx jubatus", "294) brown bear, bruin, Ursus arctos", "295) American black bear, black bear, Ursus americanus, Euarctos americanus", "296) ice bear, polar bear, Ursus Maritimus, Thalarctos maritimus", "297) sloth bear, Melursus ursinus, Ursus ursinus", "298) mongoose", "299) meerkat, mierkat", "300) tiger beetle", "301) ladybug, ladybeetle, lady beetle, ladybird, ladybird beetle", "302) ground beetle, carabid beetle", "303) long-horned beetle, longicorn, longicorn beetle", "304) leaf beetle, chrysomelid", "305) dung beetle", "306) rhinoceros beetle", "307) weevil", "308) fly", "309) bee", "310) ant, emmet, pismire", "311) grasshopper, hopper", "312) cricket", "313) walking stick, walkingstick, 
stick insect", "314) cockroach, roach", "315) mantis, mantid", "316) cicada, cicala", "317) leafhopper", "318) lacewing, lacewing fly", "319) dragonfly, darning needle, devil's darning needle, sewing needle, snake feeder, snake doctor, mosquito hawk, skeeter hawk", "320) damselfly", "321) admiral", "322) ringlet, ringlet butterfly", "323) monarch, monarch butterfly, milkweed butterfly, Danaus plexippus", "324) cabbage butterfly", "325) sulphur butterfly, sulfur butterfly", "326) lycaenid, lycaenid butterfly", "327) starfish, sea star", "328) sea urchin", "329) sea cucumber, holothurian", "330) wood rabbit, cottontail, cottontail rabbit", "331) hare", "332) Angora, Angora rabbit", "333) hamster", "334) porcupine, hedgehog", "335) fox squirrel, eastern fox squirrel, Sciurus niger", "336) marmot", "337) beaver", "338) guinea pig, Cavia cobaya", "339) sorrel", "340) zebra", "341) hog, pig, grunter, squealer, Sus scrofa", "342) wild boar, boar, Sus scrofa", "343) warthog", "344) hippopotamus, hippo, river horse, Hippopotamus amphibius", "345) ox", "346) water buffalo, water ox, Asiatic buffalo, Bubalus bubalis", "347) bison", "348) ram, tup", "349) bighorn, bighorn sheep, cimarron, Rocky Mountain bighorn, Rocky Mountain sheep, Ovis canadensis", "350) ibex, Capra ibex", "351) hartebeest", "352) impala, Aepyceros melampus", "353) gazelle", "354) Arabian camel, dromedary, Camelus dromedarius", "355) llama", "356) weasel", "357) mink", "358) polecat, fitch, foulmart, foumart, Mustela putorius", "359) black-footed ferret, ferret, Mustela nigripes", "360) otter", "361) skunk, polecat, wood pussy", "362) badger", "363) armadillo", "364) three-toed sloth, ai, Bradypus tridactylus", "365) orangutan, orang, orangutang, Pongo pygmaeus", "366) gorilla, Gorilla gorilla", "367) chimpanzee, chimp, Pan troglodytes", "368) gibbon, Hylobates lar", "369) siamang, Hylobates syndactylus, Symphalangus syndactylus", "370) guenon, guenon monkey", "371) patas, hussar monkey, Erythrocebus patas", "372) baboon", "373) macaque", "374) langur", "375) colobus, colobus monkey", "376) proboscis monkey, Nasalis larvatus", "377) marmoset", "378) capuchin, ringtail, Cebus capucinus", "379) howler monkey, howler", "380) titi, titi monkey", "381) spider monkey, Ateles geoffroyi", "382) squirrel monkey, Saimiri sciureus", "383) Madagascar cat, ring-tailed lemur, Lemur catta", "384) indri, indris, Indri indri, Indri brevicaudatus", "385) Indian elephant, Elephas maximus", "386) African elephant, Loxodonta africana", "387) lesser panda, red panda, panda, bear cat, cat bear, Ailurus fulgens", "388) giant panda, panda, panda bear, coon bear, Ailuropoda melanoleuca", "389) barracouta, snoek", "390) eel", "391) coho, cohoe, coho salmon, blue jack, silver salmon, Oncorhynchus kisutch", "392) rock beauty, Holocanthus tricolor", "393) anemone fish", "394) sturgeon", "395) gar, garfish, garpike, billfish, Lepisosteus osseus", "396) lionfish", "397) puffer, pufferfish, blowfish, globefish", "398) abacus", "399) abaya", "400) academic gown, academic robe, judge's robe", "401) accordion, piano accordion, squeeze box", "402) acoustic guitar", "403) aircraft carrier, carrier, flattop, attack aircraft carrier", "404) airliner", "405) airship, dirigible", "406) altar", "407) ambulance", "408) amphibian, amphibious vehicle", "409) analog clock", "410) apiary, bee house", "411) apron", "412) ashcan, trash can, garbage can, wastebin, ash bin, ash-bin, ashbin, dustbin, trash barrel, trash bin", "413) assault rifle, assault gun", "414) backpack, back 
pack, knapsack, packsack, rucksack, haversack", "415) bakery, bakeshop, bakehouse", "416) balance beam, beam", "417) balloon", "418) ballpoint, ballpoint pen, ballpen, Biro", "419) Band Aid", "420) banjo", "421) bannister, banister, balustrade, balusters, handrail", "422) barbell", "423) barber chair", "424) barbershop", "425) barn", "426) barometer", "427) barrel, cask", "428) barrow, garden cart, lawn cart, wheelbarrow", "429) baseball", "430) basketball", "431) bassinet", "432) bassoon", "433) bathing cap, swimming cap", "434) bath towel", "435) bathtub, bathing tub, bath, tub", "436) beach wagon, station wagon, wagon, estate car, beach waggon, station waggon, waggon", "437) beacon, lighthouse, beacon light, pharos", "438) beaker", "439) bearskin, busby, shako", "440) beer bottle", "441) beer glass", "442) bell cote, bell cot", "443) bib", "444) bicycle-built-for-two, tandem bicycle, tandem", "445) bikini, two-piece", "446) binder, ring-binder", "447) binoculars, field glasses, opera glasses", "448) birdhouse", "449) boathouse", "450) bobsled, bobsleigh, bob", "451) bolo tie, bolo, bola tie, bola", "452) bonnet, poke bonnet", "453) bookcase", "454) bookshop, bookstore, bookstall", "455) bottlecap", "456) bow", "457) bow tie, bow-tie, bowtie", "458) brass, memorial tablet, plaque", "459) brassiere, bra, bandeau", "460) breakwater, groin, groyne, mole, bulwark, seawall, jetty", "461) breastplate, aegis, egis", "462) broom", "463) bucket, pail", "464) buckle", "465) bulletproof vest", "466) bullet train, bullet", "467) butcher shop, meat market", "468) cab, hack, taxi, taxicab", "469) caldron, cauldron", "470) candle, taper, wax light", "471) cannon", "472) canoe", "473) can opener, tin opener", "474) cardigan", "475) car mirror", "476) carousel, carrousel, merry-go-round, roundabout, whirligig", "477) carpenter's kit, tool kit", "478) carton", "479) car wheel", "480) cash machine, cash dispenser, automated teller machine, automatic teller machine, automated teller, automatic teller, ATM", "481) cassette", "482) cassette player", "483) castle", "484) catamaran", "485) CD player", "486) cello, violoncello", "487) cellular telephone, cellular phone, cellphone, cell, mobile phone", "488) chain", "489) chainlink fence", "490) chain mail, ring mail, mail, chain armor, chain armour, ring armor, ring armour", "491) chain saw, chainsaw", "492) chest", "493) chiffonier, commode", "494) chime, bell, gong", "495) china cabinet, china closet", "496) Christmas stocking", "497) church, church building", "498) cinema, movie theater, movie theatre, movie house, picture palace", "499) cleaver, meat cleaver, chopper", "500) cliff dwelling", "501) cloak", "502) clog, geta, patten, sabot", "503) cocktail shaker", "504) coffee mug", "505) coffeepot", "506) coil, spiral, volute, whorl, helix", "507) combination lock", "508) computer keyboard, keypad", "509) confectionery, confectionary, candy store", "510) container ship, containership, container vessel", "511) convertible", "512) corkscrew, bottle screw", "513) cornet, horn, trumpet, trump", "514) cowboy boot", "515) cowboy hat, ten-gallon hat", "516) cradle", "517) crane", "518) crash helmet", "519) crate", "520) crib, cot", "521) Crock Pot", "522) croquet ball", "523) crutch", "524) cuirass", "525) dam, dike, dyke", "526) desk", "527) desktop computer", "528) dial telephone, dial phone", "529) diaper, nappy, napkin", "530) digital clock", "531) digital watch", "532) dining table, board", "533) dishrag, dishcloth", "534) dishwasher, dish washer, dishwashing 
machine", "535) disk brake, disc brake", "536) dock, dockage, docking facility", "537) dogsled, dog sled, dog sleigh", "538) dome", "539) doormat, welcome mat", "540) drilling platform, offshore rig", "541) drum, membranophone, tympan", "542) drumstick", "543) dumbbell", "544) Dutch oven", "545) electric fan, blower", "546) electric guitar", "547) electric locomotive", "548) entertainment center", "549) envelope", "550) espresso maker", "551) face powder", "552) feather boa, boa", "553) file, file cabinet, filing cabinet", "554) fireboat", "555) fire engine, fire truck", "556) fire screen, fireguard", "557) flagpole, flagstaff", "558) flute, transverse flute", "559) folding chair", "560) football helmet", "561) forklift", "562) fountain", "563) fountain pen", "564) four-poster", "565) freight car", "566) French horn, horn", "567) frying pan, frypan, skillet", "568) fur coat", "569) garbage truck, dustcart", "570) gasmask, respirator, gas helmet", "571) gas pump, gasoline pump, petrol pump, island dispenser", "572) goblet", "573) go-kart", "574) golf ball", "575) golfcart, golf cart", "576) gondola", "577) gong, tam-tam", "578) gown", "579) grand piano, grand", "580) greenhouse, nursery, glasshouse", "581) grille, radiator grille", "582) grocery store, grocery, food market, market", "583) guillotine", "584) hair slide", "585) hair spray", "586) half track", "587) hammer", "588) hamper", "589) hand blower, blow dryer, blow drier, hair dryer, hair drier", "590) hand-held computer, hand-held microcomputer", "591) handkerchief, hankie, hanky, hankey", "592) hard disc, hard disk, fixed disk", "593) harmonica, mouth organ, harp, mouth harp", "594) harp", "595) harvester, reaper", "596) hatchet", "597) holster", "598) home theater, home theatre", "599) honeycomb", "600) hook, claw", "601) hoopskirt, crinoline", "602) horizontal bar, high bar", "603) horse cart, horse-cart", "604) hourglass", "605) iPod", "606) iron, smoothing iron", "607) jack-o'-lantern", "608) jean, blue jean, denim", "609) jeep, landrover", "610) jersey, T-shirt, tee shirt", "611) jigsaw puzzle", "612) jinrikisha, ricksha, rickshaw", "613) joystick", "614) kimono", "615) knee pad", "616) knot", "617) lab coat, laboratory coat", "618) ladle", "619) lampshade, lamp shade", "620) laptop, laptop computer", "621) lawn mower, mower", "622) lens cap, lens cover", "623) letter opener, paper knife, paperknife", "624) library", "625) lifeboat", "626) lighter, light, igniter, ignitor", "627) limousine, limo", "628) liner, ocean liner", "629) lipstick, lip rouge", "630) Loafer", "631) lotion", "632) loudspeaker, speaker, speaker unit, loudspeaker system, speaker system", "633) loupe, jeweler's loupe", "634) lumbermill, sawmill", "635) magnetic compass", "636) mailbag, postbag", "637) mailbox, letter box", "638) maillot", "639) maillot, tank suit", "640) manhole cover", "641) maraca", "642) marimba, xylophone", "643) mask", "644) matchstick", "645) maypole", "646) maze, labyrinth", "647) measuring cup", "648) medicine chest, medicine cabinet", "649) megalith, megalithic structure", "650) microphone, mike", "651) microwave, microwave oven", "652) military uniform", "653) milk can", "654) minibus", "655) miniskirt, mini", "656) minivan", "657) missile", "658) mitten", "659) mixing bowl", "660) mobile home, manufactured home", "661) Model T", "662) modem", "663) monastery", "664) monitor", "665) moped", "666) mortar", "667) mortarboard", "668) mosque", "669) mosquito net", "670) motor scooter, scooter", "671) mountain bike, all-terrain bike, 
off-roader", "672) mountain tent", "673) mouse, computer mouse", "674) mousetrap", "675) moving van", "676) muzzle", "677) nail", "678) neck brace", "679) necklace", "680) nipple", "681) notebook, notebook computer", "682) obelisk", "683) oboe, hautboy, hautbois", "684) ocarina, sweet potato", "685) odometer, hodometer, mileometer, milometer", "686) oil filter", "687) organ, pipe organ", "688) oscilloscope, scope, cathode-ray oscilloscope, CRO", "689) overskirt", "690) oxcart", "691) oxygen mask", "692) packet", "693) paddle, boat paddle", "694) paddlewheel, paddle wheel", "695) padlock", "696) paintbrush", "697) pajama, pyjama, pj's, jammies", "698) palace", "699) panpipe, pandean pipe, syrinx", "700) paper towel", "701) parachute, chute", "702) parallel bars, bars", "703) park bench", "704) parking meter", "705) passenger car, coach, carriage", "706) patio, terrace", "707) pay-phone, pay-station", "708) pedestal, plinth, footstall", "709) pencil box, pencil case", "710) pencil sharpener", "711) perfume, essence", "712) Petri dish", "713) photocopier", "714) pick, plectrum, plectron", "715) pickelhaube", "716) picket fence, paling", "717) pickup, pickup truck", "718) pier", "719) piggy bank, penny bank", "720) pill bottle", "721) pillow", "722) ping-pong ball", "723) pinwheel", "724) pirate, pirate ship", "725) pitcher, ewer", "726) plane, carpenter's plane, woodworking plane", "727) planetarium", "728) plastic bag", "729) plate rack", "730) plow, plough", "731) plunger, plumber's helper", "732) Polaroid camera, Polaroid Land camera", "733) pole", "734) police van, police wagon, paddy wagon, patrol wagon, wagon, black Maria", "735) poncho", "736) pool table, billiard table, snooker table", "737) pop bottle, soda bottle", "738) pot, flowerpot", "739) potter's wheel", "740) power drill", "741) prayer rug, prayer mat", "742) printer", "743) prison, prison house", "744) projectile, missile", "745) projector", "746) puck, hockey puck", "747) punching bag, punch bag, punching ball, punchball", "748) purse", "749) quill, quill pen", "750) quilt, comforter, comfort, puff", "751) racer, race car, racing car", "752) racket, racquet", "753) radiator", "754) radio, wireless", "755) radio telescope, radio reflector", "756) rain barrel", "757) recreational vehicle, RV, R.V.", "758) reel", "759) reflex camera", "760) refrigerator, icebox", "761) remote control, remote", "762) restaurant, eating house, eating place, eatery", "763) revolver, six-gun, six-shooter", "764) rifle", "765) rocking chair, rocker", "766) rotisserie", "767) rubber eraser, rubber, pencil eraser", "768) rugby ball", "769) rule, ruler", "770) running shoe", "771) safe", "772) safety pin", "773) saltshaker, salt shaker", "774) sandal", "775) sarong", "776) sax, saxophone", "777) scabbard", "778) scale, weighing machine", "779) school bus", "780) schooner", "781) scoreboard", "782) screen, CRT screen", "783) screw", "784) screwdriver", "785) seat belt, seatbelt", "786) sewing machine", "787) shield, buckler", "788) shoe shop, shoe-shop, shoe store", "789) shoji", "790) shopping basket", "791) shopping cart", "792) shovel", "793) shower cap", "794) shower curtain", "795) ski", "796) ski mask", "797) sleeping bag", "798) slide rule, slipstick", "799) sliding door", "800) slot, one-armed bandit", "801) snorkel", "802) snowmobile", "803) snowplow, snowplough", "804) soap dispenser", "805) soccer ball", "806) sock", "807) solar dish, solar collector, solar furnace", "808) sombrero", "809) soup bowl", "810) space bar", "811) space heater", 
"812) space shuttle", "813) spatula", "814) speedboat", "815) spider web, spider's web", "816) spindle", "817) sports car, sport car", "818) spotlight, spot", "819) stage", "820) steam locomotive", "821) steel arch bridge", "822) steel drum", "823) stethoscope", "824) stole", "825) stone wall", "826) stopwatch, stop watch", "827) stove", "828) strainer", "829) streetcar, tram, tramcar, trolley, trolley car", "830) stretcher", "831) studio couch, day bed", "832) stupa, tope", "833) submarine, pigboat, sub, U-boat", "834) suit, suit of clothes", "835) sundial", "836) sunglass", "837) sunglasses, dark glasses, shades", "838) sunscreen, sunblock, sun blocker", "839) suspension bridge", "840) swab, swob, mop", "841) sweatshirt", "842) swimming trunks, bathing trunks", "843) swing", "844) switch, electric switch, electrical switch", "845) syringe", "846) table lamp", "847) tank, army tank, armored combat vehicle, armoured combat vehicle", "848) tape player", "849) teapot", "850) teddy, teddy bear", "851) television, television system", "852) tennis ball", "853) thatch, thatched roof", "854) theater curtain, theatre curtain", "855) thimble", "856) thresher, thrasher, threshing machine", "857) throne", "858) tile roof", "859) toaster", "860) tobacco shop, tobacconist shop, tobacconist", "861) toilet seat", "862) torch", "863) totem pole", "864) tow truck, tow car, wrecker", "865) toyshop", "866) tractor", "867) trailer truck, tractor trailer, trucking rig, rig, articulated lorry, semi", "868) tray", "869) trench coat", "870) tricycle, trike, velocipede", "871) trimaran", "872) tripod", "873) triumphal arch", "874) trolleybus, trolley coach, trackless trolley", "875) trombone", "876) tub, vat", "877) turnstile", "878) typewriter keyboard", "879) umbrella", "880) unicycle, monocycle", "881) upright, upright piano", "882) vacuum, vacuum cleaner", "883) vase", "884) vault", "885) velvet", "886) vending machine", "887) vestment", "888) viaduct", "889) violin, fiddle", "890) volleyball", "891) waffle iron", "892) wall clock", "893) wallet, billfold, notecase, pocketbook", "894) wardrobe, closet, press", "895) warplane, military plane", "896) washbasin, handbasin, washbowl, lavabo, wash-hand basin", "897) washer, automatic washer, washing machine", "898) water bottle", "899) water jug", "900) water tower", "901) whiskey jug", "902) whistle", "903) wig", "904) window screen", "905) window shade", "906) Windsor tie", "907) wine bottle", "908) wing", "909) wok", "910) wooden spoon", "911) wool, woolen, woollen", "912) worm fence, snake fence, snake-rail fence, Virginia fence", "913) wreck", "914) yawl", "915) yurt", "916) web site, website, internet site, site", "917) comic book", "918) crossword puzzle, crossword", "919) street sign", "920) traffic light, traffic signal, stoplight", "921) book jacket, dust cover, dust jacket, dust wrapper", "922) menu", "923) plate", "924) guacamole", "925) consomme", "926) hot pot, hotpot", "927) trifle", "928) ice cream, icecream", "929) ice lolly, lolly, lollipop, popsicle", "930) French loaf", "931) bagel, beigel", "932) pretzel", "933) cheeseburger", "934) hotdog, hot dog, red hot", "935) mashed potato", "936) head cabbage", "937) broccoli", "938) cauliflower", "939) zucchini, courgette", "940) spaghetti squash", "941) acorn squash", "942) butternut squash", "943) cucumber, cuke", "944) artichoke, globe artichoke", "945) bell pepper", "946) cardoon", "947) mushroom", "948) Granny Smith", "949) strawberry", "950) orange", "951) lemon", "952) fig", "953) pineapple, 
ananas", "954) banana", "955) jackfruit, jak, jack", "956) custard apple", "957) pomegranate", "958) hay", "959) carbonara", "960) chocolate sauce, chocolate syrup", "961) dough", "962) meat loaf, meatloaf", "963) pizza, pizza pie", "964) potpie", "965) burrito", "966) red wine", "967) espresso", "968) cup", "969) eggnog", "970) alp", "971) bubble", "972) cliff, drop, drop-off", "973) coral reef", "974) geyser", "975) lakeside, lakeshore", "976) promontory, headland, head, foreland", "977) sandbar, sand bar", "978) seashore, coast, seacoast, sea-coast", "979) valley, vale", "980) volcano", "981) ballplayer, baseball player", "982) groom, bridegroom", "983) scuba diver", "984) rapeseed", "985) daisy", "986) yellow lady's slipper, yellow lady-slipper, Cypripedium calceolus, Cypripedium parviflorum", "987) corn", "988) acorn", "989) hip, rose hip, rosehip", "990) buckeye, horse chestnut, conker", "991) coral fungus", "992) agaric", "993) gyromitra", "994) stinkhorn, carrion fungus", "995) earthstar", "996) hen-of-the-woods, hen of the woods, Polyporus frondosus, Grifola frondosa", "997) bolete", "998) ear, spike, capitulum", "999) toilet tissue, toilet paper, bathroom tissue"]
noise_seed_B = 0 #@param {type:"slider", min:0, max:100, step:1}
category_B = "8) hen" #@param ["0) tench, Tinca tinca", "1) goldfish, Carassius auratus", "2) great white shark, white shark, man-eater, man-eating shark, Carcharodon carcharias", "3) tiger shark, Galeocerdo cuvieri", "4) hammerhead, hammerhead shark", "5) electric ray, crampfish, numbfish, torpedo", "6) stingray", "7) cock", "8) hen", "9) ostrich, Struthio camelus", "10) brambling, Fringilla montifringilla", "11) goldfinch, Carduelis carduelis", "12) house finch, linnet, Carpodacus mexicanus", "13) junco, snowbird", "14) indigo bunting, indigo finch, indigo bird, Passerina cyanea", "15) robin, American robin, Turdus migratorius", "16) bulbul", "17) jay", "18) magpie", "19) chickadee", "20) water ouzel, dipper", "21) kite", "22) bald eagle, American eagle, Haliaeetus leucocephalus", "23) vulture", "24) great grey owl, great gray owl, Strix nebulosa", "25) European fire salamander, Salamandra salamandra", "26) common newt, Triturus vulgaris", "27) eft", "28) spotted salamander, Ambystoma maculatum", "29) axolotl, mud puppy, Ambystoma mexicanum", "30) bullfrog, Rana catesbeiana", "31) tree frog, tree-frog", "32) tailed frog, bell toad, ribbed toad, tailed toad, Ascaphus trui", "33) loggerhead, loggerhead turtle, Caretta caretta", "34) leatherback turtle, leatherback, leathery turtle, Dermochelys coriacea", "35) mud turtle", "36) terrapin", "37) box turtle, box tortoise", "38) banded gecko", "39) common iguana, iguana, Iguana iguana", "40) American chameleon, anole, Anolis carolinensis", "41) whiptail, whiptail lizard", "42) agama", "43) frilled lizard, Chlamydosaurus kingi", "44) alligator lizard", "45) Gila monster, Heloderma suspectum", "46) green lizard, Lacerta viridis", "47) African chameleon, Chamaeleo chamaeleon", "48) Komodo dragon, Komodo lizard, dragon lizard, giant lizard, Varanus komodoensis", "49) African crocodile, Nile crocodile, Crocodylus niloticus", "50) American alligator, Alligator mississipiensis", "51) triceratops", "52) thunder snake, worm snake, Carphophis amoenus", "53) ringneck snake, ring-necked snake, ring snake", "54) hognose snake, puff adder, sand viper", "55) green snake, grass snake", "56) king snake, kingsnake", "57) garter snake, grass snake", "58) water snake", "59) vine snake", "60) night snake, Hypsiglena torquata", "61) boa constrictor, Constrictor constrictor", "62) rock python, rock snake, Python sebae", "63) Indian cobra, Naja naja", "64) green mamba", "65) sea snake", "66) horned viper, cerastes, sand viper, horned asp, Cerastes cornutus", "67) diamondback, diamondback rattlesnake, Crotalus adamanteus", "68) sidewinder, horned rattlesnake, Crotalus cerastes", "69) trilobite", "70) harvestman, daddy longlegs, Phalangium opilio", "71) scorpion", "72) black and gold garden spider, Argiope aurantia", "73) barn spider, Araneus cavaticus", "74) garden spider, Aranea diademata", "75) black widow, Latrodectus mactans", "76) tarantula", "77) wolf spider, hunting spider", "78) tick", "79) centipede", "80) black grouse", "81) ptarmigan", "82) ruffed grouse, partridge, Bonasa umbellus", "83) prairie chicken, prairie grouse, prairie fowl", "84) peacock", "85) quail", "86) partridge", "87) African grey, African gray, Psittacus erithacus", "88) macaw", "89) sulphur-crested cockatoo, Kakatoe galerita, Cacatua galerita", "90) lorikeet", "91) coucal", "92) bee eater", "93) hornbill", "94) hummingbird", "95) jacamar", "96) toucan", "97) drake", "98) red-breasted merganser, Mergus serrator", "99) goose", "100) black swan, Cygnus atratus", "101) tusker", "102) echidna, 
spiny anteater, anteater", "103) platypus, duckbill, duckbilled platypus, duck-billed platypus, Ornithorhynchus anatinus", "104) wallaby, brush kangaroo", "105) koala, koala bear, kangaroo bear, native bear, Phascolarctos cinereus", "106) wombat", "107) jellyfish", "108) sea anemone, anemone", "109) brain coral", "110) flatworm, platyhelminth", "111) nematode, nematode worm, roundworm", "112) conch", "113) snail", "114) slug", "115) sea slug, nudibranch", "116) chiton, coat-of-mail shell, sea cradle, polyplacophore", "117) chambered nautilus, pearly nautilus, nautilus", "118) Dungeness crab, Cancer magister", "119) rock crab, Cancer irroratus", "120) fiddler crab", "121) king crab, Alaska crab, Alaskan king crab, Alaska king crab, Paralithodes camtschatica", "122) American lobster, Northern lobster, Maine lobster, Homarus americanus", "123) spiny lobster, langouste, rock lobster, crawfish, crayfish, sea crawfish", "124) crayfish, crawfish, crawdad, crawdaddy", "125) hermit crab", "126) isopod", "127) white stork, Ciconia ciconia", "128) black stork, Ciconia nigra", "129) spoonbill", "130) flamingo", "131) little blue heron, Egretta caerulea", "132) American egret, great white heron, Egretta albus", "133) bittern", "134) crane", "135) limpkin, Aramus pictus", "136) European gallinule, Porphyrio porphyrio", "137) American coot, marsh hen, mud hen, water hen, Fulica americana", "138) bustard", "139) ruddy turnstone, Arenaria interpres", "140) red-backed sandpiper, dunlin, Erolia alpina", "141) redshank, Tringa totanus", "142) dowitcher", "143) oystercatcher, oyster catcher", "144) pelican", "145) king penguin, Aptenodytes patagonica", "146) albatross, mollymawk", "147) grey whale, gray whale, devilfish, Eschrichtius gibbosus, Eschrichtius robustus", "148) killer whale, killer, orca, grampus, sea wolf, Orcinus orca", "149) dugong, Dugong dugon", "150) sea lion", "151) Chihuahua", "152) Japanese spaniel", "153) Maltese dog, Maltese terrier, Maltese", "154) Pekinese, Pekingese, Peke", "155) Shih-Tzu", "156) Blenheim spaniel", "157) papillon", "158) toy terrier", "159) Rhodesian ridgeback", "160) Afghan hound, Afghan", "161) basset, basset hound", "162) beagle", "163) bloodhound, sleuthhound", "164) bluetick", "165) black-and-tan coonhound", "166) Walker hound, Walker foxhound", "167) English foxhound", "168) redbone", "169) borzoi, Russian wolfhound", "170) Irish wolfhound", "171) Italian greyhound", "172) whippet", "173) Ibizan hound, Ibizan Podenco", "174) Norwegian elkhound, elkhound", "175) otterhound, otter hound", "176) Saluki, gazelle hound", "177) Scottish deerhound, deerhound", "178) Weimaraner", "179) Staffordshire bullterrier, Staffordshire bull terrier", "180) American Staffordshire terrier, Staffordshire terrier, American pit bull terrier, pit bull terrier", "181) Bedlington terrier", "182) Border terrier", "183) Kerry blue terrier", "184) Irish terrier", "185) Norfolk terrier", "186) Norwich terrier", "187) Yorkshire terrier", "188) wire-haired fox terrier", "189) Lakeland terrier", "190) Sealyham terrier, Sealyham", "191) Airedale, Airedale terrier", "192) cairn, cairn terrier", "193) Australian terrier", "194) Dandie Dinmont, Dandie Dinmont terrier", "195) Boston bull, Boston terrier", "196) miniature schnauzer", "197) giant schnauzer", "198) standard schnauzer", "199) Scotch terrier, Scottish terrier, Scottie", "200) Tibetan terrier, chrysanthemum dog", "201) silky terrier, Sydney silky", "202) soft-coated wheaten terrier", "203) West Highland white terrier", "204) Lhasa, Lhasa 
apso", "205) flat-coated retriever", "206) curly-coated retriever", "207) golden retriever", "208) Labrador retriever", "209) Chesapeake Bay retriever", "210) German short-haired pointer", "211) vizsla, Hungarian pointer", "212) English setter", "213) Irish setter, red setter", "214) Gordon setter", "215) Brittany spaniel", "216) clumber, clumber spaniel", "217) English springer, English springer spaniel", "218) Welsh springer spaniel", "219) cocker spaniel, English cocker spaniel, cocker", "220) Sussex spaniel", "221) Irish water spaniel", "222) kuvasz", "223) schipperke", "224) groenendael", "225) malinois", "226) briard", "227) kelpie", "228) komondor", "229) Old English sheepdog, bobtail", "230) Shetland sheepdog, Shetland sheep dog, Shetland", "231) collie", "232) Border collie", "233) Bouvier des Flandres, Bouviers des Flandres", "234) Rottweiler", "235) German shepherd, German shepherd dog, German police dog, alsatian", "236) Doberman, Doberman pinscher", "237) miniature pinscher", "238) Greater Swiss Mountain dog", "239) Bernese mountain dog", "240) Appenzeller", "241) EntleBucher", "242) boxer", "243) bull mastiff", "244) Tibetan mastiff", "245) French bulldog", "246) Great Dane", "247) Saint Bernard, St Bernard", "248) Eskimo dog, husky", "249) malamute, malemute, Alaskan malamute", "250) Siberian husky", "251) dalmatian, coach dog, carriage dog", "252) affenpinscher, monkey pinscher, monkey dog", "253) basenji", "254) pug, pug-dog", "255) Leonberg", "256) Newfoundland, Newfoundland dog", "257) Great Pyrenees", "258) Samoyed, Samoyede", "259) Pomeranian", "260) chow, chow chow", "261) keeshond", "262) Brabancon griffon", "263) Pembroke, Pembroke Welsh corgi", "264) Cardigan, Cardigan Welsh corgi", "265) toy poodle", "266) miniature poodle", "267) standard poodle", "268) Mexican hairless", "269) timber wolf, grey wolf, gray wolf, Canis lupus", "270) white wolf, Arctic wolf, Canis lupus tundrarum", "271) red wolf, maned wolf, Canis rufus, Canis niger", "272) coyote, prairie wolf, brush wolf, Canis latrans", "273) dingo, warrigal, warragal, Canis dingo", "274) dhole, Cuon alpinus", "275) African hunting dog, hyena dog, Cape hunting dog, Lycaon pictus", "276) hyena, hyaena", "277) red fox, Vulpes vulpes", "278) kit fox, Vulpes macrotis", "279) Arctic fox, white fox, Alopex lagopus", "280) grey fox, gray fox, Urocyon cinereoargenteus", "281) tabby, tabby cat", "282) tiger cat", "283) Persian cat", "284) Siamese cat, Siamese", "285) Egyptian cat", "286) cougar, puma, catamount, mountain lion, painter, panther, Felis concolor", "287) lynx, catamount", "288) leopard, Panthera pardus", "289) snow leopard, ounce, Panthera uncia", "290) jaguar, panther, Panthera onca, Felis onca", "291) lion, king of beasts, Panthera leo", "292) tiger, Panthera tigris", "293) cheetah, chetah, Acinonyx jubatus", "294) brown bear, bruin, Ursus arctos", "295) American black bear, black bear, Ursus americanus, Euarctos americanus", "296) ice bear, polar bear, Ursus Maritimus, Thalarctos maritimus", "297) sloth bear, Melursus ursinus, Ursus ursinus", "298) mongoose", "299) meerkat, mierkat", "300) tiger beetle", "301) ladybug, ladybeetle, lady beetle, ladybird, ladybird beetle", "302) ground beetle, carabid beetle", "303) long-horned beetle, longicorn, longicorn beetle", "304) leaf beetle, chrysomelid", "305) dung beetle", "306) rhinoceros beetle", "307) weevil", "308) fly", "309) bee", "310) ant, emmet, pismire", "311) grasshopper, hopper", "312) cricket", "313) walking stick, walkingstick, stick insect", "314) 
cockroach, roach", "315) mantis, mantid", "316) cicada, cicala", "317) leafhopper", "318) lacewing, lacewing fly", "319) dragonfly, darning needle, devil's darning needle, sewing needle, snake feeder, snake doctor, mosquito hawk, skeeter hawk", "320) damselfly", "321) admiral", "322) ringlet, ringlet butterfly", "323) monarch, monarch butterfly, milkweed butterfly, Danaus plexippus", "324) cabbage butterfly", "325) sulphur butterfly, sulfur butterfly", "326) lycaenid, lycaenid butterfly", "327) starfish, sea star", "328) sea urchin", "329) sea cucumber, holothurian", "330) wood rabbit, cottontail, cottontail rabbit", "331) hare", "332) Angora, Angora rabbit", "333) hamster", "334) porcupine, hedgehog", "335) fox squirrel, eastern fox squirrel, Sciurus niger", "336) marmot", "337) beaver", "338) guinea pig, Cavia cobaya", "339) sorrel", "340) zebra", "341) hog, pig, grunter, squealer, Sus scrofa", "342) wild boar, boar, Sus scrofa", "343) warthog", "344) hippopotamus, hippo, river horse, Hippopotamus amphibius", "345) ox", "346) water buffalo, water ox, Asiatic buffalo, Bubalus bubalis", "347) bison", "348) ram, tup", "349) bighorn, bighorn sheep, cimarron, Rocky Mountain bighorn, Rocky Mountain sheep, Ovis canadensis", "350) ibex, Capra ibex", "351) hartebeest", "352) impala, Aepyceros melampus", "353) gazelle", "354) Arabian camel, dromedary, Camelus dromedarius", "355) llama", "356) weasel", "357) mink", "358) polecat, fitch, foulmart, foumart, Mustela putorius", "359) black-footed ferret, ferret, Mustela nigripes", "360) otter", "361) skunk, polecat, wood pussy", "362) badger", "363) armadillo", "364) three-toed sloth, ai, Bradypus tridactylus", "365) orangutan, orang, orangutang, Pongo pygmaeus", "366) gorilla, Gorilla gorilla", "367) chimpanzee, chimp, Pan troglodytes", "368) gibbon, Hylobates lar", "369) siamang, Hylobates syndactylus, Symphalangus syndactylus", "370) guenon, guenon monkey", "371) patas, hussar monkey, Erythrocebus patas", "372) baboon", "373) macaque", "374) langur", "375) colobus, colobus monkey", "376) proboscis monkey, Nasalis larvatus", "377) marmoset", "378) capuchin, ringtail, Cebus capucinus", "379) howler monkey, howler", "380) titi, titi monkey", "381) spider monkey, Ateles geoffroyi", "382) squirrel monkey, Saimiri sciureus", "383) Madagascar cat, ring-tailed lemur, Lemur catta", "384) indri, indris, Indri indri, Indri brevicaudatus", "385) Indian elephant, Elephas maximus", "386) African elephant, Loxodonta africana", "387) lesser panda, red panda, panda, bear cat, cat bear, Ailurus fulgens", "388) giant panda, panda, panda bear, coon bear, Ailuropoda melanoleuca", "389) barracouta, snoek", "390) eel", "391) coho, cohoe, coho salmon, blue jack, silver salmon, Oncorhynchus kisutch", "392) rock beauty, Holocanthus tricolor", "393) anemone fish", "394) sturgeon", "395) gar, garfish, garpike, billfish, Lepisosteus osseus", "396) lionfish", "397) puffer, pufferfish, blowfish, globefish", "398) abacus", "399) abaya", "400) academic gown, academic robe, judge's robe", "401) accordion, piano accordion, squeeze box", "402) acoustic guitar", "403) aircraft carrier, carrier, flattop, attack aircraft carrier", "404) airliner", "405) airship, dirigible", "406) altar", "407) ambulance", "408) amphibian, amphibious vehicle", "409) analog clock", "410) apiary, bee house", "411) apron", "412) ashcan, trash can, garbage can, wastebin, ash bin, ash-bin, ashbin, dustbin, trash barrel, trash bin", "413) assault rifle, assault gun", "414) backpack, back pack, knapsack, 
packsack, rucksack, haversack", "415) bakery, bakeshop, bakehouse", "416) balance beam, beam", "417) balloon", "418) ballpoint, ballpoint pen, ballpen, Biro", "419) Band Aid", "420) banjo", "421) bannister, banister, balustrade, balusters, handrail", "422) barbell", "423) barber chair", "424) barbershop", "425) barn", "426) barometer", "427) barrel, cask", "428) barrow, garden cart, lawn cart, wheelbarrow", "429) baseball", "430) basketball", "431) bassinet", "432) bassoon", "433) bathing cap, swimming cap", "434) bath towel", "435) bathtub, bathing tub, bath, tub", "436) beach wagon, station wagon, wagon, estate car, beach waggon, station waggon, waggon", "437) beacon, lighthouse, beacon light, pharos", "438) beaker", "439) bearskin, busby, shako", "440) beer bottle", "441) beer glass", "442) bell cote, bell cot", "443) bib", "444) bicycle-built-for-two, tandem bicycle, tandem", "445) bikini, two-piece", "446) binder, ring-binder", "447) binoculars, field glasses, opera glasses", "448) birdhouse", "449) boathouse", "450) bobsled, bobsleigh, bob", "451) bolo tie, bolo, bola tie, bola", "452) bonnet, poke bonnet", "453) bookcase", "454) bookshop, bookstore, bookstall", "455) bottlecap", "456) bow", "457) bow tie, bow-tie, bowtie", "458) brass, memorial tablet, plaque", "459) brassiere, bra, bandeau", "460) breakwater, groin, groyne, mole, bulwark, seawall, jetty", "461) breastplate, aegis, egis", "462) broom", "463) bucket, pail", "464) buckle", "465) bulletproof vest", "466) bullet train, bullet", "467) butcher shop, meat market", "468) cab, hack, taxi, taxicab", "469) caldron, cauldron", "470) candle, taper, wax light", "471) cannon", "472) canoe", "473) can opener, tin opener", "474) cardigan", "475) car mirror", "476) carousel, carrousel, merry-go-round, roundabout, whirligig", "477) carpenter's kit, tool kit", "478) carton", "479) car wheel", "480) cash machine, cash dispenser, automated teller machine, automatic teller machine, automated teller, automatic teller, ATM", "481) cassette", "482) cassette player", "483) castle", "484) catamaran", "485) CD player", "486) cello, violoncello", "487) cellular telephone, cellular phone, cellphone, cell, mobile phone", "488) chain", "489) chainlink fence", "490) chain mail, ring mail, mail, chain armor, chain armour, ring armor, ring armour", "491) chain saw, chainsaw", "492) chest", "493) chiffonier, commode", "494) chime, bell, gong", "495) china cabinet, china closet", "496) Christmas stocking", "497) church, church building", "498) cinema, movie theater, movie theatre, movie house, picture palace", "499) cleaver, meat cleaver, chopper", "500) cliff dwelling", "501) cloak", "502) clog, geta, patten, sabot", "503) cocktail shaker", "504) coffee mug", "505) coffeepot", "506) coil, spiral, volute, whorl, helix", "507) combination lock", "508) computer keyboard, keypad", "509) confectionery, confectionary, candy store", "510) container ship, containership, container vessel", "511) convertible", "512) corkscrew, bottle screw", "513) cornet, horn, trumpet, trump", "514) cowboy boot", "515) cowboy hat, ten-gallon hat", "516) cradle", "517) crane", "518) crash helmet", "519) crate", "520) crib, cot", "521) Crock Pot", "522) croquet ball", "523) crutch", "524) cuirass", "525) dam, dike, dyke", "526) desk", "527) desktop computer", "528) dial telephone, dial phone", "529) diaper, nappy, napkin", "530) digital clock", "531) digital watch", "532) dining table, board", "533) dishrag, dishcloth", "534) dishwasher, dish washer, dishwashing machine", "535) 
disk brake, disc brake", "536) dock, dockage, docking facility", "537) dogsled, dog sled, dog sleigh", "538) dome", "539) doormat, welcome mat", "540) drilling platform, offshore rig", "541) drum, membranophone, tympan", "542) drumstick", "543) dumbbell", "544) Dutch oven", "545) electric fan, blower", "546) electric guitar", "547) electric locomotive", "548) entertainment center", "549) envelope", "550) espresso maker", "551) face powder", "552) feather boa, boa", "553) file, file cabinet, filing cabinet", "554) fireboat", "555) fire engine, fire truck", "556) fire screen, fireguard", "557) flagpole, flagstaff", "558) flute, transverse flute", "559) folding chair", "560) football helmet", "561) forklift", "562) fountain", "563) fountain pen", "564) four-poster", "565) freight car", "566) French horn, horn", "567) frying pan, frypan, skillet", "568) fur coat", "569) garbage truck, dustcart", "570) gasmask, respirator, gas helmet", "571) gas pump, gasoline pump, petrol pump, island dispenser", "572) goblet", "573) go-kart", "574) golf ball", "575) golfcart, golf cart", "576) gondola", "577) gong, tam-tam", "578) gown", "579) grand piano, grand", "580) greenhouse, nursery, glasshouse", "581) grille, radiator grille", "582) grocery store, grocery, food market, market", "583) guillotine", "584) hair slide", "585) hair spray", "586) half track", "587) hammer", "588) hamper", "589) hand blower, blow dryer, blow drier, hair dryer, hair drier", "590) hand-held computer, hand-held microcomputer", "591) handkerchief, hankie, hanky, hankey", "592) hard disc, hard disk, fixed disk", "593) harmonica, mouth organ, harp, mouth harp", "594) harp", "595) harvester, reaper", "596) hatchet", "597) holster", "598) home theater, home theatre", "599) honeycomb", "600) hook, claw", "601) hoopskirt, crinoline", "602) horizontal bar, high bar", "603) horse cart, horse-cart", "604) hourglass", "605) iPod", "606) iron, smoothing iron", "607) jack-o'-lantern", "608) jean, blue jean, denim", "609) jeep, landrover", "610) jersey, T-shirt, tee shirt", "611) jigsaw puzzle", "612) jinrikisha, ricksha, rickshaw", "613) joystick", "614) kimono", "615) knee pad", "616) knot", "617) lab coat, laboratory coat", "618) ladle", "619) lampshade, lamp shade", "620) laptop, laptop computer", "621) lawn mower, mower", "622) lens cap, lens cover", "623) letter opener, paper knife, paperknife", "624) library", "625) lifeboat", "626) lighter, light, igniter, ignitor", "627) limousine, limo", "628) liner, ocean liner", "629) lipstick, lip rouge", "630) Loafer", "631) lotion", "632) loudspeaker, speaker, speaker unit, loudspeaker system, speaker system", "633) loupe, jeweler's loupe", "634) lumbermill, sawmill", "635) magnetic compass", "636) mailbag, postbag", "637) mailbox, letter box", "638) maillot", "639) maillot, tank suit", "640) manhole cover", "641) maraca", "642) marimba, xylophone", "643) mask", "644) matchstick", "645) maypole", "646) maze, labyrinth", "647) measuring cup", "648) medicine chest, medicine cabinet", "649) megalith, megalithic structure", "650) microphone, mike", "651) microwave, microwave oven", "652) military uniform", "653) milk can", "654) minibus", "655) miniskirt, mini", "656) minivan", "657) missile", "658) mitten", "659) mixing bowl", "660) mobile home, manufactured home", "661) Model T", "662) modem", "663) monastery", "664) monitor", "665) moped", "666) mortar", "667) mortarboard", "668) mosque", "669) mosquito net", "670) motor scooter, scooter", "671) mountain bike, all-terrain bike, off-roader", 
"672) mountain tent", "673) mouse, computer mouse", "674) mousetrap", "675) moving van", "676) muzzle", "677) nail", "678) neck brace", "679) necklace", "680) nipple", "681) notebook, notebook computer", "682) obelisk", "683) oboe, hautboy, hautbois", "684) ocarina, sweet potato", "685) odometer, hodometer, mileometer, milometer", "686) oil filter", "687) organ, pipe organ", "688) oscilloscope, scope, cathode-ray oscilloscope, CRO", "689) overskirt", "690) oxcart", "691) oxygen mask", "692) packet", "693) paddle, boat paddle", "694) paddlewheel, paddle wheel", "695) padlock", "696) paintbrush", "697) pajama, pyjama, pj's, jammies", "698) palace", "699) panpipe, pandean pipe, syrinx", "700) paper towel", "701) parachute, chute", "702) parallel bars, bars", "703) park bench", "704) parking meter", "705) passenger car, coach, carriage", "706) patio, terrace", "707) pay-phone, pay-station", "708) pedestal, plinth, footstall", "709) pencil box, pencil case", "710) pencil sharpener", "711) perfume, essence", "712) Petri dish", "713) photocopier", "714) pick, plectrum, plectron", "715) pickelhaube", "716) picket fence, paling", "717) pickup, pickup truck", "718) pier", "719) piggy bank, penny bank", "720) pill bottle", "721) pillow", "722) ping-pong ball", "723) pinwheel", "724) pirate, pirate ship", "725) pitcher, ewer", "726) plane, carpenter's plane, woodworking plane", "727) planetarium", "728) plastic bag", "729) plate rack", "730) plow, plough", "731) plunger, plumber's helper", "732) Polaroid camera, Polaroid Land camera", "733) pole", "734) police van, police wagon, paddy wagon, patrol wagon, wagon, black Maria", "735) poncho", "736) pool table, billiard table, snooker table", "737) pop bottle, soda bottle", "738) pot, flowerpot", "739) potter's wheel", "740) power drill", "741) prayer rug, prayer mat", "742) printer", "743) prison, prison house", "744) projectile, missile", "745) projector", "746) puck, hockey puck", "747) punching bag, punch bag, punching ball, punchball", "748) purse", "749) quill, quill pen", "750) quilt, comforter, comfort, puff", "751) racer, race car, racing car", "752) racket, racquet", "753) radiator", "754) radio, wireless", "755) radio telescope, radio reflector", "756) rain barrel", "757) recreational vehicle, RV, R.V.", "758) reel", "759) reflex camera", "760) refrigerator, icebox", "761) remote control, remote", "762) restaurant, eating house, eating place, eatery", "763) revolver, six-gun, six-shooter", "764) rifle", "765) rocking chair, rocker", "766) rotisserie", "767) rubber eraser, rubber, pencil eraser", "768) rugby ball", "769) rule, ruler", "770) running shoe", "771) safe", "772) safety pin", "773) saltshaker, salt shaker", "774) sandal", "775) sarong", "776) sax, saxophone", "777) scabbard", "778) scale, weighing machine", "779) school bus", "780) schooner", "781) scoreboard", "782) screen, CRT screen", "783) screw", "784) screwdriver", "785) seat belt, seatbelt", "786) sewing machine", "787) shield, buckler", "788) shoe shop, shoe-shop, shoe store", "789) shoji", "790) shopping basket", "791) shopping cart", "792) shovel", "793) shower cap", "794) shower curtain", "795) ski", "796) ski mask", "797) sleeping bag", "798) slide rule, slipstick", "799) sliding door", "800) slot, one-armed bandit", "801) snorkel", "802) snowmobile", "803) snowplow, snowplough", "804) soap dispenser", "805) soccer ball", "806) sock", "807) solar dish, solar collector, solar furnace", "808) sombrero", "809) soup bowl", "810) space bar", "811) space heater", "812) space 
shuttle", "813) spatula", "814) speedboat", "815) spider web, spider's web", "816) spindle", "817) sports car, sport car", "818) spotlight, spot", "819) stage", "820) steam locomotive", "821) steel arch bridge", "822) steel drum", "823) stethoscope", "824) stole", "825) stone wall", "826) stopwatch, stop watch", "827) stove", "828) strainer", "829) streetcar, tram, tramcar, trolley, trolley car", "830) stretcher", "831) studio couch, day bed", "832) stupa, tope", "833) submarine, pigboat, sub, U-boat", "834) suit, suit of clothes", "835) sundial", "836) sunglass", "837) sunglasses, dark glasses, shades", "838) sunscreen, sunblock, sun blocker", "839) suspension bridge", "840) swab, swob, mop", "841) sweatshirt", "842) swimming trunks, bathing trunks", "843) swing", "844) switch, electric switch, electrical switch", "845) syringe", "846) table lamp", "847) tank, army tank, armored combat vehicle, armoured combat vehicle", "848) tape player", "849) teapot", "850) teddy, teddy bear", "851) television, television system", "852) tennis ball", "853) thatch, thatched roof", "854) theater curtain, theatre curtain", "855) thimble", "856) thresher, thrasher, threshing machine", "857) throne", "858) tile roof", "859) toaster", "860) tobacco shop, tobacconist shop, tobacconist", "861) toilet seat", "862) torch", "863) totem pole", "864) tow truck, tow car, wrecker", "865) toyshop", "866) tractor", "867) trailer truck, tractor trailer, trucking rig, rig, articulated lorry, semi", "868) tray", "869) trench coat", "870) tricycle, trike, velocipede", "871) trimaran", "872) tripod", "873) triumphal arch", "874) trolleybus, trolley coach, trackless trolley", "875) trombone", "876) tub, vat", "877) turnstile", "878) typewriter keyboard", "879) umbrella", "880) unicycle, monocycle", "881) upright, upright piano", "882) vacuum, vacuum cleaner", "883) vase", "884) vault", "885) velvet", "886) vending machine", "887) vestment", "888) viaduct", "889) violin, fiddle", "890) volleyball", "891) waffle iron", "892) wall clock", "893) wallet, billfold, notecase, pocketbook", "894) wardrobe, closet, press", "895) warplane, military plane", "896) washbasin, handbasin, washbowl, lavabo, wash-hand basin", "897) washer, automatic washer, washing machine", "898) water bottle", "899) water jug", "900) water tower", "901) whiskey jug", "902) whistle", "903) wig", "904) window screen", "905) window shade", "906) Windsor tie", "907) wine bottle", "908) wing", "909) wok", "910) wooden spoon", "911) wool, woolen, woollen", "912) worm fence, snake fence, snake-rail fence, Virginia fence", "913) wreck", "914) yawl", "915) yurt", "916) web site, website, internet site, site", "917) comic book", "918) crossword puzzle, crossword", "919) street sign", "920) traffic light, traffic signal, stoplight", "921) book jacket, dust cover, dust jacket, dust wrapper", "922) menu", "923) plate", "924) guacamole", "925) consomme", "926) hot pot, hotpot", "927) trifle", "928) ice cream, icecream", "929) ice lolly, lolly, lollipop, popsicle", "930) French loaf", "931) bagel, beigel", "932) pretzel", "933) cheeseburger", "934) hotdog, hot dog, red hot", "935) mashed potato", "936) head cabbage", "937) broccoli", "938) cauliflower", "939) zucchini, courgette", "940) spaghetti squash", "941) acorn squash", "942) butternut squash", "943) cucumber, cuke", "944) artichoke, globe artichoke", "945) bell pepper", "946) cardoon", "947) mushroom", "948) Granny Smith", "949) strawberry", "950) orange", "951) lemon", "952) fig", "953) pineapple, ananas", "954) 
banana", "955) jackfruit, jak, jack", "956) custard apple", "957) pomegranate", "958) hay", "959) carbonara", "960) chocolate sauce, chocolate syrup", "961) dough", "962) meat loaf, meatloaf", "963) pizza, pizza pie", "964) potpie", "965) burrito", "966) red wine", "967) espresso", "968) cup", "969) eggnog", "970) alp", "971) bubble", "972) cliff, drop, drop-off", "973) coral reef", "974) geyser", "975) lakeside, lakeshore", "976) promontory, headland, head, foreland", "977) sandbar, sand bar", "978) seashore, coast, seacoast, sea-coast", "979) valley, vale", "980) volcano", "981) ballplayer, baseball player", "982) groom, bridegroom", "983) scuba diver", "984) rapeseed", "985) daisy", "986) yellow lady's slipper, yellow lady-slipper, Cypripedium calceolus, Cypripedium parviflorum", "987) corn", "988) acorn", "989) hip, rose hip, rosehip", "990) buckeye, horse chestnut, conker", "991) coral fungus", "992) agaric", "993) gyromitra", "994) stinkhorn, carrion fungus", "995) earthstar", "996) hen-of-the-woods, hen of the woods, Polyporus frondosus, Grifola frondosa", "997) bolete", "998) ear, spike, capitulum", "999) toilet tissue, toilet paper, bathroom tissue"]
def interpolate_and_shape(A, B, num_interps):
  interps = interpolate(A, B, num_interps)
  return (interps.transpose(1, 0, *range(2, len(interps.shape)))
          .reshape(num_samples * num_interps, -1))
z_A, z_B = [truncated_z_sample(num_samples, truncation, noise_seed)
for noise_seed in [noise_seed_A, noise_seed_B]]
y_A, y_B = [one_hot([int(category.split(')')[0])] * num_samples)
for category in [category_A, category_B]]
z_interp = interpolate_and_shape(z_A, z_B, num_interps)
y_interp = interpolate_and_shape(y_A, y_B, num_interps)
ims = sample(sess, z_interp, y_interp, truncation=truncation)
imshow(imgrid(ims, cols=num_interps))
###Output
_____no_output_____ |
Git_ESG Visualization.ipynb | ###Markdown
0. Import Libraries
###Code
import pickle
import pandas as pd
import urllib.request
from bs4 import BeautifulSoup as bs
import re
from tqdm import tqdm
import time
import numpy as np
from sklearn.preprocessing import MinMaxScaler
# pd.options.display.float_format = '{:.2f}'.format
###Output
_____no_output_____
###Markdown
1. Load Data
###Code
from IPython.display import Image
Image('활용데이터.png')
# Domestic stock financial ratios
with open("국내주식재무비율.pkl","rb") as fr:
df_fin_rate = pickle.load(fr)
# Domestic stock financial statements
with open("국내주식재무제표.pkl","rb") as fr:
df_fin_num = pickle.load(fr)
# ESG (overall / E / S / G / FG)
df_esg_info = pd.read_excel('ESG중분류_E,S,G(2021)_미래에셋증권.xlsx', header = 3, index_col = 0)
df_esg_e = pd.read_excel('ESG중분류_E,S,G(2021)_미래에셋증권.xlsx', sheet_name = 1, skiprows = 3, index_col = 0)
df_esg_e.reset_index(drop = True, inplace = True)
df_esg_s = pd.read_excel('ESG중분류_E,S,G(2021)_미래에셋증권.xlsx', sheet_name = 2, skiprows = 3, index_col = 0)
df_esg_s.reset_index(drop = True, inplace = True)
df_esg_g = pd.read_excel('ESG중분류_E,S,G(2021)_미래에셋증권.xlsx', sheet_name = 3, skiprows = 3, index_col = 0)
df_esg_g.reset_index(drop = True, inplace = True)
df_esg_fg = pd.read_excel('ESG중분류_E,S,G(2021)_미래에셋증권.xlsx', sheet_name = 4, skiprows = 3, index_col = 0)
df_esg_fg.reset_index(drop = True, inplace = True)
# ETF
df_etf_info = pd.read_csv('etf_info.csv')
# Industry sectors
df_industry = pd.read_csv('종목_업종_테이블.csv')
# ETF trading
df_etf_trade = pd.read_csv('국내ETF_2021_4분기_거래량_거래대금.csv')
# Stock trading
df_stock_trade = pd.read_csv('국내상품_2021_12월_거래량_거래대금.csv')
###Output
_____no_output_____
###Markdown
2. Data Pre-Processing > **2-1. Remove whitespace from all data**> **2-2. ESG data preprocessing**> 2-2-1. Build the DataFrame (extract required columns, etc.)> 2-2-2. Crawl per-ticker sector data (external data)> 2-2-3. Compute ESG scores (ranks) within each sector (peer group)> **2-3. ETF data preprocessing**> 2-3-1. Build the DataFrame (extract required columns, compute ratios, etc.)> 2-3-2. Derive ETF ESG scores> **2-4. Financial statement data preprocessing**> 2-4-1. Build the DataFrame (extract required columns, etc.)> **2-5. Industry data preprocessing**> 2-5-1. Build the DataFrame (extract required columns, etc.)> **2-6. Crawl Naver Finance theme data (external data)**> **2-7. Crawl and preprocess Google Trends data**> **2-8. ETF trading data preprocessing**> 2-8-1. Build the DataFrame (extract required columns, etc.)> **2-9. Product trading data preprocessing**> 2-9-1. Build the DataFrame (extract required columns, etc.) 2-1. Remove whitespace from all data
###Code
# Define a whitespace-removal helper
def nospace(df):
for i in range(len(df.columns)):
df.iloc[:,i] = df.iloc[:,i].astype(str).str.replace(" ","")
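# note: astype(str) turns every column into strings; numeric columns are cast back
# to float later (e.g. with astype(float)) before any arithmetic is done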
# Remove whitespace from all datasets
nospace(df_fin_rate)
nospace(df_fin_num)
nospace(df_etf_info)
nospace(df_industry)
nospace(df_etf_trade)
nospace(df_stock_trade)
###Output
_____no_output_____
###Markdown
2-2. ESG Data Preprocessing
###Code
# Preprocess the FG data
df_esg_fg_no = df_esg_fg[df_esg_fg['상장된 시장'] != 'EX']
df_esg_fg_no.rename(columns={'FG.감사기구 및\n내부통제':'FG.감사기구', 'FG.등급':'G.등급', 'FG.총점':'G.총점', 'FG.주주권리보호':'G.주주권리보호', 'FG.이사회':'G.이사회', 'FG.공시':'G.공시', 'FG.감사기구 및\n내부통제':'G.감사기구', 'FG.감점':'G.감점'}, inplace=True)
df_esg_fg_no = df_esg_fg_no[['Code', 'Name', '법인등록번호', '결산월', '상장된 시장', 'G.등급', 'G.총점', 'G.주주권리보호', 'G.이사회', 'G.공시', 'G.감사기구', 'G.감점']]
df_esg_g_final = pd.concat([df_esg_g, df_esg_fg_no])
df_esg_g = df_esg_g_final
# Extract the ESG regular total score
df_esg_total = df_esg_info[['Code', 'ESG.정기총점']]
# Merge the data
merge1 = pd.merge(df_esg_e, df_esg_s, on='Code', how='inner',suffixes=('', '_DROP')).filter(regex='^(?!.*_DROP)')
merge2 = pd.merge(merge1, df_esg_g, on='Code', how='inner',suffixes=('', '_DROP')).filter(regex='^(?!.*_DROP)')
merge3 = pd.merge(merge2, df_esg_total, on='Code', how='inner',suffixes=('', '_DROP')).filter(regex='^(?!.*_DROP)')
# Drop unnecessary columns
merge3.drop(columns = ['법인등록번호', '결산월'], inplace=True)
df_esg = merge3
# Rename columns
df_esg.rename(columns = {'E.등급':'E등급', 'S.등급':'S등급', 'G.등급':'G등급'}, inplace = True)
# Remove whitespace from company names
df_esg.Name = df_esg.Name.astype(str).str.replace(" ", "")
# Crawl per-ticker sector data (external data)
df_product = pd.read_html('http://kind.krx.co.kr/corpgeneral/corpList.do?method=download&searchType=13', header=0)[0]
df_product.종목코드 = df_product.종목코드.map("{:06d}".format)
df_product = df_product[['종목코드', '회사명', '업종', '주요제품']]
df_product = df_product.rename(columns={'종목코드':'code', '회사명':'name', '업종':'industry', '주요제품':'main_product'})
df_product['code'] = 'A' + df_product['code'].str[:]
df_product
# Compute ESG scores (ranks) within each sector
# Merge the ESG data with the sector data
df_sector_score = pd.merge(df_esg, df_product, left_on='Code', right_on='code', how='inner')
df_sector_score.drop(columns=['code', 'name', 'main_product'], inplace=True)
# Assign E, S, G, ESG ranks within each sector
df_sector_score['E_rank'] = df_sector_score.groupby('industry')['E.총점'].rank(method = 'min', ascending=False)
df_sector_score['S_rank'] = df_sector_score.groupby('industry')['S.총점'].rank(method = 'min', ascending=False)
df_sector_score['G_rank'] = df_sector_score.groupby('industry')['G.총점'].rank(method = 'min', ascending=False)
df_sector_score['ESG_rank'] = df_sector_score.groupby('industry')['ESG.정기총점'].rank(method = 'min', ascending=False)
# Average E, S, G, ESG total scores per sector
df_sector_mean = pd.DataFrame(df_sector_score.groupby('industry').mean())
df_sector_mean.reset_index(inplace=True)
df_sector_mean.rename(columns={'E.총점':'평균 E총점', 'S.총점':'평균 S총점', 'G.총점':'평균 G총점', 'ESG.정기총점':'평균 ESG총점'}, inplace=True)
df_sector_mean = df_sector_mean[['industry', '평균 E총점', '평균 S총점', '평균 G총점', '평균 ESG총점']]
# Assign top E, S, G, ESG percentages per sector
df_sector_count = pd.DataFrame(df_sector_score.industry.value_counts())
df_sector_count.reset_index(inplace=True)
df_sector_count.rename(columns={'index':'industry', 'industry':'count'}, inplace=True)
df_esg_final = pd.merge(df_sector_score, df_sector_mean, on='industry', how='outer')
df_esg_final = pd.merge(df_esg_final, df_sector_count, on='industry', how='outer')
df_esg_final['상위 E%'] = df_esg_final['E_rank'] / df_esg_final['count'] * 100
df_esg_final['상위 S%'] = df_esg_final['S_rank'] / df_esg_final['count'] * 100
df_esg_final['상위 G%'] = df_esg_final['G_rank'] / df_esg_final['count'] * 100
df_esg_final['상위 ESG%'] = df_esg_final['ESG_rank'] / df_esg_final['count'] * 100
###Output
_____no_output_____
###Markdown
2-3. ETF Data Preprocessing
###Code
# Keep only items that have an item CODE (to exclude index entries such as KRW deposits)
df_etf = df_etf_info[df_etf_info.ETF_ITEM_CD != 'nan']
# Sum of valuation amounts per ETF
df_etf['ETF_EA'] = df_etf['ETF_EA'].astype(float)
df_etf_sum = df_etf.groupby(['ETF_CD'], as_index=False).sum()
df_etf_notnull = pd.merge(df_etf, df_etf_sum[['ETF_CD','ETF_EA']], on='ETF_CD')
# Compute each constituent's weight within the ETF
df_etf_notnull['ratio'] = df_etf_notnull['ETF_EA_x'] / df_etf_notnull['ETF_EA_y']
df_etf = df_etf_notnull[['ETF_CD', 'ETF_NM', 'ETF_ITEM_CD', 'ETF_CMST_ITM_NM', 'ETF_EA_x', 'ratio']]
Image('esg.png')
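# Summary of the weighting scheme implemented below: each constituent's weight combines
# its portfolio ratio, its ESG-score gap versus the sector average, and its (inverted)
# sector percentile rank; the factors are min-max scaled, multiplied, rescaled to [0, 1],
# and the ETF ESG score is the sum of (ESG total score * weight) over the constituents.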
# Derive the ETF's ESG score
# Merge the ETF and ESG data
df_esg_weight = df_esg_final[['Code', 'Name', 'ESG.정기총점', '평균 ESG총점', '상위 ESG%']]
etf_esg = pd.merge(df_etf, df_esg_weight, left_on='ETF_ITEM_CD', right_on='Code')
# Compute the weights
# MinMax Scale (0~1)
etf_esg['weight1_ratio'] = etf_esg['ratio']
etf_esg['weight2_diff'] = etf_esg['ESG.정기총점'] - etf_esg['평균 ESG총점']
etf_esg['weight3_rank'] = 101 - etf_esg['상위 ESG%']
etf_esg_weight = etf_esg[['weight1_ratio', 'weight2_diff', 'weight3_rank']]
transformer = MinMaxScaler(feature_range=(0, 1))
transformer.fit(etf_esg_weight)
etf_esg_weight_scale = pd.DataFrame(transformer.transform(etf_esg_weight), columns=etf_esg_weight.columns)
etf_esg_weight_scale['weight'] = etf_esg_weight_scale['weight1_ratio'] * etf_esg_weight_scale['weight2_diff'] * etf_esg_weight_scale['weight3_rank']
etf_esg_weight_scale = etf_esg_weight_scale[['weight']]
transformer.fit(etf_esg_weight_scale)
etf_esg_weight_scale_final = pd.DataFrame(transformer.transform(etf_esg_weight_scale), columns=etf_esg_weight_scale.columns)
etf_esg['weight'] = etf_esg_weight_scale_final['weight']
etf_esg.drop(columns=['Code', 'Name', 'weight1_ratio', 'weight2_diff', 'weight3_rank'], inplace=True)
# Compute the weighted ETF ESG score
etf_esg['ESG_SCORE'] = etf_esg['ESG.정기총점'] * etf_esg['weight']
etf_esg_score = etf_esg.groupby(['ETF_CD'], as_index = False).sum()
etf_esg_score = etf_esg_score[['ETF_CD', 'ESG_SCORE']]
etf_esg = pd.merge(etf_esg, etf_esg_score, on = 'ETF_CD')
etf_esg.drop(columns=['ESG_SCORE_x'], inplace=True)
etf_esg.rename(columns={'ESG_SCORE_y':'ESG_SCORE'}, inplace=True)
###Output
_____no_output_____
###Markdown
2-4. Financial Statement Data Preprocessing
###Code
# Financial ratios
df_fin1 = df_fin_rate[['CODE', 'DATA_TP_CODE', 'IFRS_TP_CODE', 'SET_TP_CODE', 'BASE_YM', 'CO_NM', 'SALE_GROW_RATE', 'PROFIT_GROW_RATE', 'ROE']]
df_fin1 = df_fin1[df_fin1['BASE_YM'].str.contains('2021')]
df_fin1 = df_fin1[df_fin1['DATA_TP_CODE'] == '1']
df_fin1 = df_fin1[df_fin1['IFRS_TP_CODE'] == 'B']
df_fin1.drop(df_fin1[df_fin1['SET_TP_CODE'] == '4'].index, inplace=True)
df_fin1.drop(['DATA_TP_CODE', 'IFRS_TP_CODE', 'BASE_YM'], axis=1, inplace=True)
# Financial statements
df_fin2 = df_fin_num[['CODE', 'DATA_TP_CODE', 'IFRS_TP_CODE', 'SET_TP_CODE', 'BASE_YM', 'CO_NM', 'SALE_AMT', 'THIS_TERM_PROFIT', 'BHJS_CNT']]
df_fin2 = df_fin2[df_fin2['BASE_YM'].str.contains('2021')]
df_fin2 = df_fin2[df_fin2['DATA_TP_CODE'] == '1']
df_fin2 = df_fin2[df_fin2['IFRS_TP_CODE'] == 'B']
df_fin2.drop(df_fin2[df_fin2['SET_TP_CODE'] == '4'].index, inplace=True)
df_fin2.drop(['DATA_TP_CODE', 'IFRS_TP_CODE', 'BASE_YM'], axis=1, inplace=True)
# Merge the financial data
df_fin = pd.merge(df_fin1, df_fin2, left_on=['CODE', 'SET_TP_CODE', 'CO_NM'], right_on=['CODE', 'SET_TP_CODE', 'CO_NM'], how='inner')
df_fin[['SALE_GROW_RATE', 'PROFIT_GROW_RATE', 'ROE', 'SALE_AMT', 'THIS_TERM_PROFIT', 'BHJS_CNT']] = df_fin[['SALE_GROW_RATE', 'PROFIT_GROW_RATE', 'ROE', 'SALE_AMT', 'THIS_TERM_PROFIT', 'BHJS_CNT']].astype(float)
###Output
_____no_output_____
###Markdown
2-5. Industry Data Preprocessing
###Code
df_industry = df_industry[['종목코드', '종목명', '업종']]
###Output
_____no_output_____
###Markdown
2-6. Crawling Naver Finance Theme Data (External Data)
###Code
# Fetch the theme name & stock names from each theme page
def one_page_list(page):
STOCKLIST_URL = "https://finance.naver.com/sise/sise_group_detail.nhn?type=theme&no={}".format(page)
response = urllib.request.urlopen(STOCKLIST_URL)
STOCKLIST_HTML = response.read()
soup = bs(STOCKLIST_HTML)
STOCK_NAME_LIST = []
STOCK_CODE_LIST = []
THEME_NAME_LIST = []
for tr in soup.findAll('td', attrs={'class', 'name'}):
stockName = tr.findAll('a', attrs={})
stockCode = re.findall('.+(?=")', str(tr.findAll('a', attrs={})))[0].split("code=")[1]
if stockName is None or stockName == []:
pass
else:
stockName = stockName[0].contents[-1]
STOCK_NAME_LIST.append(stockName)
STOCK_CODE_LIST.append(stockCode)
for tr in soup.findAll('title'):
themeName = tr
if themeName is None or themeName == []:
pass
else:
themeName = themeName.contents[-1]
THEME_NAME_LIST.append(themeName)
STOCK_LIST = []
for i in range(len(STOCK_NAME_LIST)):
stockInfo = [STOCK_CODE_LIST[i], STOCK_NAME_LIST[i], THEME_NAME_LIST[0]]
STOCK_LIST.append(stockInfo)
return pd.DataFrame(STOCK_LIST, columns=('코드', '종목명', '테마'))
theme_list = []
for i in tqdm([1,2,3,4,5,6,7]):
url = "https://finance.naver.com/sise/theme.naver?&page={}".format(i)
req = urllib.request.Request(url)
sourcecode = urllib.request.urlopen(url).read()
soup = bs(sourcecode, "html.parser")
soup = soup.find_all("td", attrs={'class', "col_type1"})
theme_list.extend(list(soup))
for i in range(0,len(theme_list)):
theme_list[i] = theme_list[i].find("a")["href"]
theme_list[i] = theme_list[i].replace('/sise/sise_group_detail.naver?type=theme&no=', '')
df_theme = pd.DataFrame()
for i in tqdm(theme_list):
df_temp = one_page_list(i)
df_theme = df_theme.append(df_temp)
for i in tqdm(range(0,len(df_theme))):
df_theme.iloc[i]['테마'] = df_theme.iloc[i]['테마'].replace(' : 네이버 금융', '')
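# make_code: zero-pad a ticker to 6 digits and prefix 'A' so it matches the 'Code'
# format used by the ESG data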
def make_code(x):
x=str(x)
return 'A'+ '0'*(6-len(x)) + x
df_theme['코드'] = df_theme['코드'].apply(make_code)
df_theme
###Output
100%|██████████| 7/7 [00:04<00:00, 1.43it/s]
100%|██████████| 254/254 [03:30<00:00, 1.20it/s]
100%|██████████| 6050/6050 [00:00<00:00, 6564.82it/s]
###Markdown
2-7. Google Trends Data Preprocessing
###Code
pip install pytrends
# pytrends library
from pytrends.request import TrendReq
from tqdm import tqdm
from tqdm import trange
# Build the list of company names
stock_list = list(df_esg_final.Name)
# Build the trends crawler
trend_df = pd.DataFrame(columns=['Date', 'CO NM', 'Trend'])
period = 'today 1-m'
for i in trange(len(stock_list)):
try:
a = TrendReq()
a.build_payload(kw_list=[stock_list[i]], timeframe=period, geo='KR')
a_df = a.interest_over_time()
data1 = {'Date' : a_df.index,
'CO NM' : a_df.columns[0],
'Trend' : a_df.iloc[:,0]}
DF1 = pd.DataFrame(data1).reset_index(drop=True)
trend_df = pd.concat([trend_df, DF1], ignore_index=True)
except:
pass
# Add ticker codes to the DataFrame
Code_df = df_esg_final[['Code', 'Name']]
trend_code_df = pd.merge(trend_df, Code_df, left_on='CO NM', right_on='Name')
trend_code_df.drop(columns=['Name'], inplace=True)
trend_code_df.rename(columns={'Code':'CODE'}, inplace=True)
trend_code_df
###Output
_____no_output_____
###Markdown
2-8. ETF Trading Data Preprocessing
###Code
df_etf_trade = df_etf_trade[['DATA_DATE', '거래량', '거래대금', '종가', 'ITEM_S_CD', '한글명_F']]
df_etf_trade['DATA_DATE'] = df_etf_trade['DATA_DATE'].str[:8]
df_etf_trade[['거래량', '거래대금', '종가']] = df_etf_trade[['거래량', '거래대금', '종가']].astype(float)
df_etf_trade.rename(columns={'ITEM_S_CD':'ETF_CD'}, inplace=True)
###Output
_____no_output_____
###Markdown
2-9. Product Trading Data Preprocessing
###Code
stock_trade = df_stock_trade[['DATA_DATE', '거래량', '거래대금', '종가', 'ITEM_S_CD', '한글명_F']]
stock_trade['DATA_DATE'] = stock_trade['DATA_DATE'].str[:8]
stock_trade[['거래량', '거래대금', '종가']] = stock_trade[['거래량', '거래대금', '종가']].astype(float)
stock_trade.rename(columns={'ITEM_S_CD':'ITEM_CD'}, inplace=True)
###Output
<ipython-input-19-b47f7b5db5ec>:2: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
stock_trade['DATA_DATE'] = stock_trade['DATA_DATE'].str[:8]
/opt/anaconda3/lib/python3.8/site-packages/pandas/core/frame.py:3065: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
self[k1] = value[k2]
/opt/anaconda3/lib/python3.8/site-packages/pandas/core/frame.py:4296: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
return super().rename(
|
topologias/Lectura de diferentes formatos de datos con Pandas.ipynb | ###Markdown
Input and Output of Data in Heterogeneous Formats
###Code
# Import the Pandas library
import pandas as pd
###Output
_____no_output_____
###Markdown
We will create a dataset with student names and the courses they are enrolled in. To do that, we will collect data from the web
###Code
# reading data from the web; the data is in json format
nombres_f = pd.read_json("https://servicodados.ibge.gov.br/api/v1/censos/nomes/ranking?qtd=20&sexo=f")
nombres_m = pd.read_json("https://servicodados.ibge.gov.br/api/v1/censos/nomes/ranking?qtd=20&sexo=m")
# Showing the data type, which is a DataFrame
type(nombres_f)
# Showing the number of names in the DataFrame
print("Cantidad de nombres: " + str(len(nombres_m) + len(nombres_f)))
# Another way to print it
print("Cantidad de nombres: %d" % (len(nombres_m) + len(nombres_f)))
# Put the female and male names we collected into a list
frames = [nombres_f, nombres_m]
type(frames)
# listing the data
frames
# concatenate the list's data into a DataFrame
pd.concat(frames)
# Keep only the "nome" column and update the DataFrame
pd.concat(frames)["nome"].to_frame()
# store the data in the "nombres" variable
nombres = pd.concat(frames)["nome"].to_frame()
nombres.sample(5)
# Rename the header to "nombre"
nombres = nombres.rename(columns={'nome': 'nombre'})
print(nombres.columns)
###Output
_____no_output_____
###Markdown
Adding an ID
###Code
# Import the numpy library
import numpy as np
# always generate the same sequence of random numbers
np.random.seed(123)
# showing the number of names in the DataFrame
total_alumnos = len(nombres)
total_alumnos
# We will give the students an identifier different from their position in the DataFrame
nombres.sample(3)
# we want the IDs to be random values from 1 to 40, so we will create a new column
# The new column is filled by np.random.permutation(), a NumPy function that shuffles numbers randomly.
# To do this, we pass total_alumnos as a parameter and add 1
nombres["id_alumno"] = np.random.permutation(total_alumnos) + 1
nombres.sample(3)
# show the current data
nombres.head(10)
# We will add emails to the DataFrame; for that we generate email domains and then concatenate them
# with the students' names
dominios = ['@dominiodeemmail.com.py', '@serviciodeemail.com']
# use np.random.choice to pick a value from the dominios list at random
# and add it to a new column called "dominio"
nombres['dominio'] = np.random.choice(dominios, total_alumnos)
# Listing the DataFrame
nombres.sample(5)
# concatenate the domain with the student name and add it to a new column called "email"
nombres['email'] = nombres.nombre.str.cat(nombres.dominio).str.lower()
# listing a sample of the data
nombres.sample(5)
###Output
_____no_output_____
###Markdown
Reading HTML
###Code
# Importing the library needed to read HTML data
import html5lib
# read the data from a URL that lists the names of the courses the students will enroll in
url = 'http://tabela-cursos.herokuapp.com/index.html'
cursos = pd.read_html(url)
cursos
# the HTML data comes back as a list, so we extract the first element, which contains the DataFrame
cursos = cursos[0]
# showing the type of the element we extracted
type(cursos)
# listing a sample of the courses
cursos.sample(5)
# rename the column to 'nombre del curso'
cursos = cursos.rename(columns={'Nome do curso': 'nombre del curso'})
print(cursos.columns)
# Create an identifier, ID, for each course; the ID column is generated from the index
cursos['id'] = cursos.index + 1
cursos.tail()
###Output
_____no_output_____
###Markdown
Enrolling the students in courses
###Code
# Add the 'matriculas' column to the DataFrame; it holds the number of courses each student is enrolled in
# for that we draw values from an exponential distribution and cast them to int
nombres['matriculas'] = np.random.exponential(size=total_alumnos).astype(int)
nombres.sample(5)
# Adjust the enrollments whose value was 0 by using np.ceil, so that every student is enrolled
# in at least one course
nombres['matriculas'] = np.ceil(np.random.exponential(size=total_alumnos)).astype(int)
nombres.sample(5)
# list a sample of the DataFrame
nombres.sample(5)
# increase the number of courses the students are enrolled in:
# multiply the values drawn from the random number generator by 1.5
nombres['matriculas'] = np.ceil(np.random.exponential(size=total_alumnos) * 1.5).astype(int)
nombres.sample(5)
# Describe the resulting distribution of the data
nombres.matriculas.describe()
###Output
_____no_output_____
###Markdown
Students are enrolled in at least 1 course, and the maximum number of courses a student is enrolled in is 5 (this number varies with the generated data)
###Code
# visualize the information in a chart
# import the seaborn library
import seaborn as sns
# Plot the distribution of the matriculas column
sns.distplot(nombres.matriculas)
# show the number of students for each count of enrolled courses
nombres.matriculas.value_counts()
# value_counts() shows the number of elements for each distinct value in the column
nombres.dominio.value_counts()
###Output
_____no_output_____
###Markdown
Selecting courses
###Code
# list a sample of the data
nombres.sample(5)
# to build that distribution, we write code that assigns courses according to each enrollment
todas_matriculas = []
x = np.random.rand(20)
prob = x / sum(x)
# the for iterator retrieves the index and the row to be used.
# it walks the nombres dataframe with the help of the iterrows() function
# for index, row in nombres.iterrows()
# For each element found, we store the student id, obtained with row.id_alumno,
# and the number of enrollments, obtained with row.matriculas
for index, row in nombres.iterrows():
id = row.id_alumno
matriculas = row.matriculas
for i in range(matriculas):
mat = [id, np.random.choice(cursos.index, p = prob)]
todas_matriculas.append(mat)
# Create the DataFrame called matriculas, holding the student id and the course id
matriculas = pd.DataFrame(todas_matriculas, columns = ['id_alumno', 'id_curso'])
matriculas.head(5)
# we can use SQL-like operations to query the data
matriculas.groupby('id_curso').count().join(cursos['nombre del curso'])
# Query the number of students per course
matriculas.groupby('id_curso').count().join(cursos['nombre del curso']).rename(columns={'id_alumno':'Cantidad_de_alumnos'})
# listing a sample of the nombres DataFrame
nombres.sample(5)
# listing a sample of the cursos DataFrame
cursos.sample(5)
# listing a sample of the matriculas DataFrame
matriculas.sample(5)
# store the students-per-course query in a variable
matriculas_por_curso = matriculas.groupby('id_curso').count().join(cursos['nombre del curso']).rename(columns={'id_alumno':'Cantidad_de_alumnos'})
# view a sample
matriculas_por_curso.sample(5)
###Output
_____no_output_____
###Markdown
Output in different formats
###Code
# Export the data to a csv file; it will be saved in the current working directory
matriculas_por_curso.to_csv('matriculas_por_curso.csv', index=False)
# we can read back the saved data
pd.read_csv('matriculas_por_curso.csv')
# we can convert the DataFrame to json format
matriculas_json = matriculas_por_curso.to_json()
matriculas_json
# we can convert the DataFrame to html format
matriculas_html = matriculas_por_curso.to_html()
matriculas_html
# printing shows it in an organized form
print(matriculas_html)
###Output
_____no_output_____
###Markdown
Creating the SQL database using sqlalchemy
###Code
# Install the library if it is not already available
#!pip install sqlalchemy
# import the following libraries
from sqlalchemy import create_engine, MetaData, Table
###Output
_____no_output_____
###Markdown
We will create the engine with the database path. SQLite is available natively in Colab
###Code
# create the engine variable
engine = create_engine('sqlite:///:memory:')
type(engine)
###Output
_____no_output_____
###Markdown
With the database created, we need to write the matriculas_por_curso dataframe into it using to_sql()
###Code
# this function takes two parameters: a string with the table name, in this case matriculas,
# and the engine
matriculas_por_curso.to_sql('matriculas', engine)
# print the list of tables now present in the database
print(engine.table_names())
###Output
_____no_output_____
###Markdown
Queries on the database
###Code
# Get all courses with fewer than 5 enrolled students
query = 'select * from matriculas where Cantidad_de_alumnos < 5'
pd.read_sql(query, engine)
# another way to read a table
pd.read_sql_table('matriculas', engine, columns=['nombre del curso', 'Cantidad_de_alumnos'])
# Assign it to a variable
muchas_matriculas = pd.read_sql_table('matriculas', engine, columns=['nombre del curso', 'Cantidad_de_alumnos'])
muchas_matriculas
# Use pandas to run queries
muchas_matriculas.query('Cantidad_de_alumnos > 5')
# Repeat the process only for courses with more than 5 enrollments and assign the result to a variable
muchas_matriculas = muchas_matriculas.query('Cantidad_de_alumnos > 5')
muchas_matriculas
muchas_matriculas.to_sql('muchas_matriculas', con=engine)
print(engine.table_names())
###Output
_____no_output_____ |
SVM Test.ipynb | ###Markdown
SVC TestThis is an example use case of a support vector classifier. We will infer a data classification function from a set of training data.We will use the SVC implementation in the [scikit-learn](http://scikit-learn.org/) toolbox.
###Code
from sklearn import svm
import pandas as pd
import pylab as pl
import seaborn as sns
%matplotlib inline
###Output
_____no_output_____
###Markdown
We begin by defining a set of training points. This is the set which the classifier will use to infer the data classification function. Each row represents a data point, with x,y coordinates and classification value.
###Code
fit_points = [
[2,1,1],
[1,2,1],
[3,2,1],
[4,2,0],
[4,4,0],
[5,1,0]
]
###Output
_____no_output_____
###Markdown
To understand the data set, we can plot the points from both classes (1 and 0). Points of class 1 are in black, and points from class 0 in red.
###Code
sns.set(style="darkgrid")
pl.scatter([point[0] if point[2]==1 else None for point in fit_points],
[point[1] for point in fit_points],
color = 'black')
pl.scatter([point[0] if point[2]==0 else None for point in fit_points],
[point[1] for point in fit_points],
color = 'red')
pl.grid(True)
pl.show()
###Output
_____no_output_____
###Markdown
The SVC uses [pandas](http://pandas.pydata.org/) data frames to represent data. The [data frame](http://pandas.pydata.org/pandas-docs/dev/generated/pandas.DataFrame.html) is a convenient data structure for tabular data, which enables column labels.
###Code
df_fit = pd.DataFrame(fit_points, columns=["x", "y", "value"])
print(df_fit)
###Output
x y value
0 2 1 1
1 1 2 1
2 3 2 1
3 4 2 0
4 4 4 0
5 5 1 0
###Markdown
We need to select the set of columns with the data features. In our example, those are the `x` and `y` coordinates.
###Code
train_cols = ["x", "y"]
###Output
_____no_output_____
###Markdown
We are now able to build and train the classifier.
###Code
clf = svm.SVC()
clf.fit(df_fit[train_cols], df_fit.value)
###Output
_____no_output_____
###Markdown
The classifier is now trained with the fit points, and is ready to be evaluated with a set of test points, which have a similar structure to the fit points: `x`, `y` coordinates, and a value.
###Code
test_points = [
[5,3],
[4,5],
[2,5],
[2,3],
[1,1]
]
###Output
_____no_output_____
###Markdown
We separate the features and values to make it clear where the data comes from.
###Code
test_points_values = [0,0,0,1,1]
###Output
_____no_output_____
###Markdown
We build the test points dataframe with the features.
###Code
df_test = pd.DataFrame(test_points, columns=['x','y'])
print(df_test)
###Output
x y
0 5 3
1 4 5
2 2 5
3 2 3
4 1 1
###Markdown
We can add the values to the dataframe.
###Code
df_test['real_value'] = test_points_values
print(df_test)
###Output
x y real_value
0 5 3 0
1 4 5 0
2 2 5 0
3 2 3 1
4 1 1 1
###Markdown
Right now we have a dataframe similar to the one with the fit points. We'll use the classifier to add a fourth column with the predicted values. Our goal is to have the same value in both `real_value` and `predicted_value` columns.
###Code
df_test['predicted_value'] = clf.predict(test_points)
print(df_test)
###Output
x y real_value predicted_value
0 5 3 0 0
1 4 5 0 0
2 2 5 0 0
3 2 3 1 1
4 1 1 1 1
###Markdown
The classifier is pretty successful at predicting values from the `x` and `y` coordinates. We may also apply the classifier to the fit points - it's somewhat pointless, because those are the points used to infer the data classification function.
###Code
df_fit[''] = clf.predict([x[0:2] for x in fit_points])
print(df_fit)
###Output
x y value
0 2 1 1 1
1 1 2 1 1
2 3 2 1 1
3 4 2 0 0
4 4 4 0 0
5 5 1 0 0
###Markdown
To better understand the data separation between values 1 and 0, we'll plot both the fit points and the test points. Following the same color code as before, points that belong to class 1 are represented in black, and points that belong to class 0 in red. Fit points are drawn as filled circles, and test points as open circles.
###Code
sns.set(style="darkgrid")
for i in range(0,2):
pl.scatter(df_fit[df_fit.value==i].x,
df_fit[df_fit.value==i].y,
color = 'black' if i == 1 else 'red')
pl.scatter(df_test[df_test.predicted_value==i].x,
df_test[df_test.predicted_value==i].y,
marker='o',
facecolor='none',
color='black' if i == 1 else 'red')
pl.grid(True)
pl.show()
###Output
_____no_output_____ |
Exploratory_Data_Analysis_of_Original_Waymo_Data.ipynb | ###Markdown
Explore the original Waymo datasetIn this notebook, we will perform an EDA (Exploratory Data Analysis) on the original Waymo dataset (downloaded data in the `raw` folder in the data directory `./data/waymo`). The data frames in the original tfrecord files contain the images of all five cameras, range images and lidar point clouds including ground truth annotations. Open data set tree structure of a Waymo tfrecordThe tree structure of an original Waymo tfrecord file is shown below:```open_dataset|-- LaserName| |-- UNKNOWN| |-- TOP| |-- FRONT| |-- SIDE_LEFT| |-- SIDE_RIGHT| `-- REAR|-- CameraName| |-- UNKNOWN| |-- FRONT| |-- FRONT_LEFT| |-- FRONT_RIGHT| |-- SIDE_LEFT| `-- SIDE_RIGHT|-- RollingShutterReadOutDirection| |-- UNKNOWN| |-- TOP_TO_BOTTOM| |-- LEFT_TO_RIGHT| |-- BOTTOM_TO_TOP| |-- RIGHT_TO_LEFT| `-- GLOBAL_SHUTTER|-- Frame| |-- images ⇒ list of CameraImage| | |-- name (CameraName)| | |-- image| | |-- pose| | |-- velocity (v_x, v_y, v_z, w_x, w_y, w_z)| | |-- pose_timestamp| | |-- shutter| | |-- camera_trigger_time| | `-- camera_readout_done_time| |-- Context| | |-- name| | |-- camera_calibrations ⇒ list of CameraCalibration| | | |-- name| | | |-- intrinsic| | | |-- extrinsic| | | |-- width| | | |-- height| | | `-- rolling_shutter_direction (RollingShutterReadOutDirection)| | |-- laser_calibrations ⇒ list of LaserCalibration| | | |-- name| | | |-- beam_inclinations| | | |-- beam_inclination_min| | | |-- beam_inclination_max| | | `-- extrinsic| | `-- Stats| | |-- laser_object_counts| | |-- camera_object_counts| | |-- time_of_day| | |-- location| | `-- weather| |-- timestamp_micros| |-- pose| |-- lasers ⇒ list of Laser| | |-- name (LaserName)| | |-- ri_return1 (RangeImage class)| | | |-- range_image_compressed| | | |-- camera_projection_compressed| | | |-- range_image_pose_compressed| | | `-- range_image| | `-- ri_return2 (same as ri_return1)| |-- laser_labels ⇒ list of Label| |-- projected_lidar_labels (same as camera_labels)| |-- camera_labels ⇒ list of CameraLabels| | |-- name (CameraName)| | `-- labels ⇒ list of Label| `-- no_label_zones (Refer to the doc)`-- Label |-- Box | |-- center_x | |-- center_y | |-- center_z | |-- length | |-- width | |-- height | `-- heading |-- Metadata | |-- speed_x | |-- speed_y | |-- accel_x | `-- accel_y |-- type |-- id |-- detection_difficulty_level `-- tracking_difficulty_level```
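As a minimal sketch of how to walk this structure (assuming one locally downloaded tfrecord file; the filename below is a placeholder), a single frame can be parsed and its context statistics read like this:
```python
import tensorflow as tf
from waymo_open_dataset import dataset_pb2 as open_dataset

# placeholder filename - substitute any tfrecord from ./data/waymo/raw/
filename = "./data/waymo/raw/segment-XXXXXXXX_with_camera_labels.tfrecord"

dataset = tf.data.TFRecordDataset(filename, compression_type='')
for data in dataset.take(1):
    frame = open_dataset.Frame()
    frame.ParseFromString(bytearray(data.numpy()))
    # frame-level context statistics (see Context/Stats in the tree above)
    print(frame.context.stats.location,
          frame.context.stats.time_of_day,
          frame.context.stats.weather)
    # per-class ground-truth object counts used throughout the EDA below
    for obj_cnts in frame.context.stats.camera_object_counts:
        print(obj_cnts.type, obj_cnts.count)
```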
###Code
import os
import tensorflow as tf
import math
import numpy as np
import itertools
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
# Used for creating the bounding boxes
import matplotlib.patches as patches
# Enable TensorFlow eager execution mode
tf.compat.v1.enable_eager_execution()
# Import waymo open dataset utils
from waymo_open_dataset.utils import range_image_utils
from waymo_open_dataset.utils import transform_utils
from waymo_open_dataset.utils import frame_utils
from waymo_open_dataset import dataset_pb2 as open_dataset
from utils import get_dataset
from utils import get_label_map
%matplotlib inline
# get object category index and object classname index
category_idx, classname_idx = get_label_map(label_map_path='./label_map.pbtxt')
print("Object category index: {}".format(category_idx))
print("Object classname index: {}".format(classname_idx))
# get number of availabel object classes
num_of_object_classes = len(category_idx)
print("Number of object classes: {}".format(num_of_object_classes))
#class_color_idx = {1: [0, 0, 1], 2: [1, 0, 0], 4: [0, 1, 0]}
class_color_idx = {1: u'#1f77b4', 2: u'#ff7f0e', 4: u'#2ca02c'}
# create an array with class-based colors
custom_colors = list(class_color_idx.values())
# define custom color palette
custom_color_palette = sns.set_palette(sns.color_palette(custom_colors))
###Output
_____no_output_____
###Markdown
Load Raw Data Set from TFRecord Files
###Code
tfrecord_path = "./data/waymo/raw/"
tfrecord_filelist = os.listdir(tfrecord_path)
print("Number of tfrecord files in filelist: {}".format(len(tfrecord_filelist)))
def get_statistics_from_tf_data_set(tfrecord_path, tfrecord_filelist, classname_idx):
""" Get ground truth label statistics from a dataset for object detection
in color images contained in a set of tfrecord files.
Args:
tfrecord_path [str]: filepath to the tfrecord file storage location
tfrecord_filelist [list]: list of tfrecord files to be evaluated
classname_idx [dict]: object class name index by object class name
Returns:
[pandas.DataFrame]: dataframe holding the frame statistics
"""
# column names for frame statistics
frame_stats_columns = [
'tfrecord_index',
'tfrecord_id',
'frame_index',
'location',
'time_of_day',
'weather',
'num_of_vehicles',
'num_of_pedestrians',
'num_of_cyclists',
]
# init list of frame statistics
frame_stats = []
# number of tfrecord files to be evaluated
num_of_files = len(tfrecord_filelist)
# loop over all tfrecord files in the given directory
for tfrecord_idx, tfrecord_file in enumerate(tfrecord_filelist):
# load data set from the current tfrecord file
dataset = tf.data.TFRecordDataset(os.path.join(tfrecord_path, tfrecord_file), compression_type='')
# loop over all frames in the current data set and count the number of frames in tfrecord file
for frame_idx, data in enumerate(dataset):
# open next data frame
frame = open_dataset.Frame()
# convert the byte array to numpy dictionary
frame.ParseFromString(bytearray(data.numpy()))
# count the labeled object per frame and per class
obj_cnts_vehicles = 0
obj_cnts_pedestrians = 0
obj_cnts_cyclists = 0
for obj_cnts in frame.context.stats.camera_object_counts:
if obj_cnts.type == classname_idx['vehicle']:
obj_cnts_vehicles = obj_cnts.count
if obj_cnts.type == classname_idx['pedestrian']:
obj_cnts_pedestrians = obj_cnts.count
if obj_cnts.type == classname_idx['cyclist']:
obj_cnts_cyclists = obj_cnts.count
# append evaluation results to list of frame statistics
frame_stats.append([
tfrecord_idx,
frame.context.name,
frame_idx,
frame.context.stats.location,
frame.context.stats.time_of_day,
frame.context.stats.weather,
obj_cnts_vehicles,
obj_cnts_pedestrians,
obj_cnts_cyclists,
])
# count number of frames per tfrecord
num_of_frames = frame_idx + 1
# print tfrecord file name incl. number of contained frames
print("tfrecord index: {} of {}".format(tfrecord_idx, num_of_files))
print("tfrecord file: {}".format(tfrecord_file))
print("number of frames: {}".format(num_of_frames))
# create and return data frame holding the frame statistics
return pd.DataFrame(frame_stats, columns=frame_stats_columns)
# get the frame statistics from tfrecord files
df_frame_stats = get_statistics_from_tf_data_set(tfrecord_path, tfrecord_filelist, classname_idx)
print(df_frame_stats.head())
# total number of frames in the data set
total_num_of_frames = len(df_frame_stats)
print("\nTotal number of frames: {}\n".format(total_num_of_frames))
###Output
Total number of frames: 19803
###Markdown
Object classes and object counts
###Code
# get total number of labeled objects in the data set
object_classes = ['vehicle', 'pedestrian', 'cyclist']
object_counts = [
df_frame_stats.num_of_vehicles.sum(),
df_frame_stats.num_of_pedestrians.sum(),
df_frame_stats.num_of_cyclists.sum()
]
object_percentage = np.array(object_counts) / sum(object_counts) * 100
# Print results
print('Total number of labeled objects in the data set:')
print("- Vehicles: {0} ({1:5.2f}%)".format(object_counts[0], object_percentage[0]))
print("- Pedestrians: {0} ({1:5.2f}%)".format(object_counts[1], object_percentage[1]))
print("- Cyclists: {0} ({1:5.2f}%)".format(object_counts[2], object_percentage[2]))
print("- Objects: {0} (100%)".format(sum(object_counts)))
f, axs = plt.subplots(1, 2, figsize=(15, 5))
sns.barplot(x=object_classes, y=object_counts, palette=custom_color_palette, ax=axs[0])
axs[0].set(
xlabel='object classes',
ylabel='object counts',
title='Object counts per class over {} images'.format(total_num_of_frames),
);
sns.barplot(x=object_classes, y=object_percentage, palette=custom_color_palette, ax=axs[1])
axs[1].set(
xlabel='object classes',
ylabel='object counts [%]',
title='Percentual object counts per class over {} images'.format(total_num_of_frames),
);
# plot pie graph
f, ax = plt.subplots(1, 1, figsize=(7.5, 5))
ax.pie(object_counts, labels=object_classes, colors=custom_colors)
# calculate distribution percentage
print("Percentage of vehicles = {0:5.2f} %".format(object_percentage[0]))
print("Percentage of pedestrian = {0:5.2f} %".format(object_percentage[1]))
print("Percentage of cyclists = {0:5.2f} %".format(object_percentage[2]))
###Output
Percentage of vehicles = 74.42 %
Percentage of pedestrian = 25.03 %
Percentage of cyclists = 0.55 %
###Markdown
Frame counts per time of day, location and weather conditions
###Code
# Frame counts per time of day, location and weather condition
frame_counts_by_time_of_day = df_frame_stats.time_of_day.value_counts()
frame_counts_by_location = df_frame_stats.location.value_counts()
frame_counts_by_weather = df_frame_stats.weather.value_counts()
# Print results
print("\nFrame counts by time of day:\n{}".format(frame_counts_by_time_of_day))
print("\nFrame counts by loation:\n{}".format(frame_counts_by_time_of_day))
print("\nFrame counts by weather:\n{}".format(frame_counts_by_time_of_day))
# Plot distribution of time of day, location and weather conditions over all frames in the data set
f, axs = plt.subplots(3, 2, figsize=(15, 15))
sns.barplot(
x=frame_counts_by_time_of_day.index, y=frame_counts_by_time_of_day,
palette=custom_color_palette, ax=axs[0, 0]
)
axs[0, 0].set(
xlabel='time of day',
ylabel='frame counts',
title='Frame counts per time of day over {} frames'.format(total_num_of_frames),
);
sns.barplot(
x=frame_counts_by_time_of_day.index, y=frame_counts_by_time_of_day/total_num_of_frames*100,
palette=custom_color_palette, ax=axs[0, 1]
)
axs[0, 1].set(
xlabel='time of day',
ylabel='frame counts [%]',
title='Percentual frame counts per time of day over {} frames'.format(total_num_of_frames),
);
sns.barplot(
x=frame_counts_by_location.index, y=frame_counts_by_location,
palette=custom_color_palette, ax=axs[1, 0]
)
axs[1, 0].set(
xlabel='location',
ylabel='frame counts',
title='Frame counts per location over {} frames'.format(total_num_of_frames),
);
sns.barplot(
x=frame_counts_by_location.index, y=frame_counts_by_location/total_num_of_frames*100,
palette=custom_color_palette, ax=axs[1, 1]
)
axs[1, 1].set(
xlabel='location',
ylabel='frame counts [%]',
title='Percentual frame counts per location over {} frames'.format(total_num_of_frames),
);
sns.barplot(
x=frame_counts_by_weather.index, y=frame_counts_by_weather,
palette=custom_color_palette, ax=axs[2, 0]
)
axs[2, 0].set(
xlabel='weather conditions',
ylabel='frame counts',
title='Frame counts per weather condition over {} frames'.format(total_num_of_frames),
);
sns.barplot(
x=frame_counts_by_weather.index, y=frame_counts_by_weather/total_num_of_frames*100,
palette=custom_color_palette, ax=axs[2, 1]
)
axs[2, 1].set(
xlabel='weather conditions',
ylabel='frame counts [%]',
title='Percentual frame counts per weather condition over {} frames'.format(total_num_of_frames),
);
###Output
_____no_output_____
###Markdown
Object class frequency per frame
###Code
# Plot histograms of object counts per frame over all frames in the data set
f, axs = plt.subplots(2, 2, figsize=(15, 10))
sns.histplot(
data=(df_frame_stats.num_of_vehicles + df_frame_stats.num_of_pedestrians + df_frame_stats.num_of_cyclists),
kde=True, color="#03012d", ax=axs[0, 0])
axs[0, 0].grid()
axs[0, 0].set(
xlabel='number of objects per frame',
ylabel='object counts',
title='Object counts per frame over {} frames'.format(total_num_of_frames),
);
sns.histplot(
data=df_frame_stats.num_of_vehicles,
kde=True, color=custom_colors[0], ax=axs[0, 1]
)
axs[0, 1].grid()
axs[0, 1].set(
xlabel='number of vehicles per frame',
ylabel='vehicle counts',
title='Vehicle counts per frame over {} frames'.format(total_num_of_frames),
);
sns.histplot(
data=df_frame_stats.num_of_pedestrians,
kde=True, color=custom_colors[1], ax=axs[1, 0]
)
axs[1, 0].grid()
axs[1, 0].set(
xlabel='number of pedestrians per frame',
ylabel='pedestrian counts',
title='Pedestrian counts per frame over {} frames'.format(total_num_of_frames),
);
sns.histplot(
data=df_frame_stats.num_of_cyclists,
kde=True, color=custom_colors[2],ax=axs[1, 1]
)
axs[1, 1].grid()
axs[1, 1].set(
xlabel='number of cyclists per frame',
ylabel='cyclist counts',
title='Cyclist counts per frame over {} frames'.format(total_num_of_frames),
);
###Output
_____no_output_____
###Markdown
Analyse and Compare Training, Validation and Test Data Sub-Sets
###Code
# get filenames of the training data set
train_file_list = 'tfrecord_files_train.txt'
with open(train_file_list, 'r') as f:
train_filenames = f.read().splitlines()
# get filenames of the validation data set
val_file_list = 'tfrecord_files_val.txt'
with open(val_file_list, 'r') as f:
val_filenames = f.read().splitlines()
# get filenames of the test data set
test_file_list = 'tfrecord_files_test.txt'
with open(test_file_list, 'r') as f:
test_filenames = f.read().splitlines()
# extract the tfrecord context ids from the filenames belonging to the training, validation and test data set
train_tfrecord_ids = [fn.replace('segment-', '').replace('_with_camera_labels.tfrecord', '') for fn in train_filenames]
val_tfrecord_ids = [fn.replace('segment-', '').replace('_with_camera_labels.tfrecord', '') for fn in val_filenames]
test_tfrecord_ids = [fn.replace('segment-', '').replace('_with_camera_labels.tfrecord', '') for fn in test_filenames]
# create separate data frames for the statistics of the training, validation and test data set
df_frame_stats_train = df_frame_stats[df_frame_stats['tfrecord_id'].isin(train_tfrecord_ids)]
df_frame_stats_val = df_frame_stats[df_frame_stats['tfrecord_id'].isin(val_tfrecord_ids)]
df_frame_stats_test = df_frame_stats[df_frame_stats['tfrecord_id'].isin(test_tfrecord_ids)]
###Output
_____no_output_____
###Markdown
Frame counts per data sub-set
###Code
# get the number of frames in the training, validation and test data set (without applying downsampling)
num_of_frames_train = len(df_frame_stats_train)
num_of_frames_val = len(df_frame_stats_val)
num_of_frames_test = len(df_frame_stats_test)
# Print results
print('Total number of frames in the data sets (without applying downsampling):')
print("- Number of frames in the training set: {0}".format(num_of_frames_train))
print("- Number of frames in the validation set: {0}".format(num_of_frames_val))
print("- Number of frames in the test set: {0}".format(num_of_frames_test))
print("- Total number of frames: {0}".format(total_num_of_frames))
###Output
Total number of frames in the data sets (without applying downsampling):
- Number of frames in the training set: 16230
- Number of frames in the validation set: 2977
- Number of frames in the test set: 596
- Total number of frames: 19803
###Markdown
Frame counts per time of day
###Code
# Frame counts per time of day for training, validation and test data set (without applying downsampling)
# use the value_counts() index so that the class labels stay aligned with their counts
# (pandas .unique() does not necessarily return classes in value_counts() order)
time_of_day_classes_train = df_frame_stats_train.time_of_day.value_counts().index.to_numpy()
frame_counts_by_time_of_day_train = df_frame_stats_train.time_of_day.value_counts().to_numpy()
time_of_day_classes_val = df_frame_stats_val.time_of_day.value_counts().index.to_numpy()
frame_counts_by_time_of_day_val = df_frame_stats_val.time_of_day.value_counts().to_numpy()
time_of_day_classes_test = df_frame_stats_test.time_of_day.value_counts().index.to_numpy()
frame_counts_by_time_of_day_test = df_frame_stats_test.time_of_day.value_counts().to_numpy()
# Print results
print("\nFrame counts by time of day (without applying downsampling):\n")
print("Training data set:\n{}\n{}\n".format(time_of_day_classes_train, frame_counts_by_time_of_day_train))
print("Validation data set:\n{}\n{}\n".format(time_of_day_classes_val, frame_counts_by_time_of_day_val))
print("Test data set:\n{}\n{}\n".format(time_of_day_classes_test, frame_counts_by_time_of_day_test))
f, axs = plt.subplots(3, 2, figsize=(15, 15))
sns.barplot(x=time_of_day_classes_train, y=frame_counts_by_time_of_day_train,
palette=custom_color_palette, ax=axs[0, 0])
axs[0, 0].set(
xlabel='time of day classes',
ylabel='time of day frame counts',
title='Training set: Frame counts per time of day over {} images'.format(
num_of_frames_train),
);
sns.barplot(x=time_of_day_classes_train, y=frame_counts_by_time_of_day_train/num_of_frames_train*100,
palette=custom_color_palette, ax=axs[0, 1])
axs[0, 1].set(
xlabel='time of day classes',
ylabel='time of day frame counts',
title='Training set: Percentual frame counts per time of day over {} images'.format(
num_of_frames_train),
);
sns.barplot(x=time_of_day_classes_val, y=frame_counts_by_time_of_day_val,
palette=custom_color_palette, ax=axs[1, 0])
axs[1, 0].set(
xlabel='time of day classes',
ylabel='time of day frame counts',
title='Validation set: Frame counts per time of day over {} images'.format(
num_of_frames_val),
);
sns.barplot(x=time_of_day_classes_val, y=frame_counts_by_time_of_day_val/num_of_frames_val*100,
palette=custom_color_palette, ax=axs[1, 1])
axs[1, 1].set(
xlabel='time of day classes',
ylabel='time of day frame counts',
title='Validation set: Percentual frame counts per time of day over {} images'.format(
num_of_frames_val),
);
sns.barplot(x=time_of_day_classes_test, y=frame_counts_by_time_of_day_test,
palette=custom_color_palette, ax=axs[2, 0])
axs[2, 0].set(
xlabel='time of day classes',
ylabel='time of day frame counts',
title='Test set: Frame counts per time of day over {} images'.format(
num_of_frames_test),
);
sns.barplot(x=time_of_day_classes_test, y=frame_counts_by_time_of_day_test/num_of_frames_test*100,
palette=custom_color_palette, ax=axs[2, 1])
axs[2, 1].set(
xlabel='time of day classes',
ylabel='time of day frame counts',
title='Test set: Percentual frame counts per time of day over {} images'.format(
num_of_frames_test),
);
###Output
_____no_output_____
###Markdown
Object classes and object counts
###Code
# get total number of labeled objects in the training data set
object_counts_train = [
df_frame_stats_train.num_of_vehicles.sum(),
df_frame_stats_train.num_of_pedestrians.sum(),
df_frame_stats_train.num_of_cyclists.sum()
]
object_percentage_train = np.array(object_counts_train) / sum(object_counts_train) * 100
# Print results
print('Total number of labeled objects in the training data set:')
print("- Vehicles: {0} ({1:5.2f}%)".format(object_counts_train[0], object_percentage_train[0]))
print("- Pedestrians: {0} ({1:5.2f}%)".format(object_counts_train[1], object_percentage_train[1]))
print("- Cyclists: {0} ({1:5.2f}%)".format(object_counts_train[2], object_percentage_train[2]))
print("- Objects: {0} (100%)".format(sum(object_counts_train)))
print("")
# get total number of labeled objects in the validation data set
object_counts_val = [
df_frame_stats_val.num_of_vehicles.sum(),
df_frame_stats_val.num_of_pedestrians.sum(),
df_frame_stats_val.num_of_cyclists.sum()
]
object_percentage_val = np.array(object_counts_val) / sum(object_counts_val) * 100
# Print results
print('Total number of labeled objects in the validation data set:')
print("- Vehicles: {0} ({1:5.2f}%)".format(object_counts_val[0], object_percentage_val[0]))
print("- Pedestrians: {0} ({1:5.2f}%)".format(object_counts_val[1], object_percentage_val[1]))
print("- Cyclists: {0} ({1:5.2f}%)".format(object_counts_val[2], object_percentage_val[2]))
print("- Objects: {0} (100%)".format(sum(object_counts_val)))
print("")
# get total number of labeled objects in the test data set
object_counts_test = [
df_frame_stats_test.num_of_vehicles.sum(),
df_frame_stats_test.num_of_pedestrians.sum(),
df_frame_stats_test.num_of_cyclists.sum()
]
object_percentage_test = np.array(object_counts_test) / sum(object_counts_test) * 100
# Print results
print('Total number of labeled objects in the test data set:')
print("- Vehicles: {0} ({1:5.2f}%)".format(object_counts_test[0], object_percentage_test[0]))
print("- Pedestrians: {0} ({1:5.2f}%)".format(object_counts_test[1], object_percentage_test[1]))
print("- Cyclists: {0} ({1:5.2f}%)".format(object_counts_test[2], object_percentage_test[2]))
print("- Objects: {0} (100%)".format(sum(object_counts_test)))
f, axs = plt.subplots(3, 2, figsize=(15, 15))
sns.barplot(x=object_classes, y=object_counts_train, palette=custom_color_palette, ax=axs[0, 0])
axs[0, 0].set(
xlabel='object classes',
ylabel='object counts',
title='Training set: Object counts per class over {} images'.format(num_of_frames_train),
);
sns.barplot(x=object_classes, y=object_percentage_train, palette=custom_color_palette, ax=axs[0, 1])
axs[0, 1].set(
xlabel='object classes',
ylabel='object counts [%]',
title='Training set: Percentual object counts per class over {} images'.format(num_of_frames_train),
);
sns.barplot(x=object_classes, y=object_counts_val, palette=custom_color_palette, ax=axs[1, 0])
axs[1, 0].set(
xlabel='object classes',
ylabel='object counts',
title='Validation set: Object counts per class over {} images'.format(num_of_frames_val),
);
sns.barplot(x=object_classes, y=object_percentage_val, palette=custom_color_palette, ax=axs[1, 1])
axs[1, 1].set(
xlabel='object classes',
ylabel='object counts [%]',
title='Validation set: Percentual object counts per class over {} images'.format(num_of_frames_val),
);
sns.barplot(x=object_classes, y=object_counts_test, palette=custom_color_palette, ax=axs[2, 0])
axs[2, 0].set(
xlabel='object classes',
ylabel='object counts',
title='Test set: Object counts per class over {} images'.format(num_of_frames_test),
);
sns.barplot(x=object_classes, y=object_percentage_test, palette=custom_color_palette, ax=axs[2, 1])
axs[2, 1].set(
xlabel='object classes',
ylabel='object counts [%]',
title='Test set: Percentual object counts per class over {} images'.format(num_of_frames_test),
);
###Output
_____no_output_____
###Markdown
Object class frequency per frame
###Code
# Plot histograms of object counts per frame over all frames in the data set
f, ax = plt.subplots(1, 1, figsize=(15, 7))
sns.histplot(
data=(df_frame_stats_train.num_of_vehicles + \
df_frame_stats_train.num_of_pedestrians + \
df_frame_stats_train.num_of_cyclists),
kde=True, color="blue", label="training set", ax=ax)
sns.histplot(
data=(df_frame_stats_val.num_of_vehicles + \
df_frame_stats_val.num_of_pedestrians + \
df_frame_stats_val.num_of_cyclists),
kde=True, color="green", label="validation set", ax=ax)
sns.histplot(
data=(df_frame_stats_test.num_of_vehicles + \
df_frame_stats_test.num_of_pedestrians + \
df_frame_stats_test.num_of_cyclists),
kde=True, color="red", label="test set", ax=ax)
ax.set(
xlabel='number of objects per frame',
ylabel='object counts',
title='Object counts per frame over {} frames distributed over training, validation and test set'.format(
total_num_of_frames),
)
ax.grid()
ax.legend();
# Plot histograms of vehicle counts per frame over all frames in the data set
f, ax = plt.subplots(1, 1, figsize=(15, 7))
sns.histplot(
data=df_frame_stats_train.num_of_vehicles,
kde=True, color="blue", label="training set", ax=ax)
sns.histplot(
data=df_frame_stats_val.num_of_vehicles,
kde=True, color="green", label="validation set", ax=ax)
sns.histplot(
data=df_frame_stats_test.num_of_vehicles,
kde=True, color="red", label="test set", ax=ax)
ax.set(
xlabel='number of vehicles per frame',
ylabel='object counts',
title='Vehicle counts per frame over {} frames distributed over training, validation and test set'.format(
total_num_of_frames),
)
ax.grid()
ax.legend();
# Plot histograms of object counts per frame over all frames in the data set
f, ax = plt.subplots(1, 1, figsize=(15, 7))
sns.histplot(
data=df_frame_stats_train.num_of_pedestrians,
kde=True, color="blue", label="training set", ax=ax)
sns.histplot(
data=df_frame_stats_val.num_of_pedestrians,
kde=True, color="green", label="validation set", ax=ax)
sns.histplot(
data=df_frame_stats_test.num_of_pedestrians,
kde=True, color="red", label="test set", ax=ax)
ax.set(
xlabel='number of pedestrians per frame',
ylabel='object counts',
title='Pedestrian counts per frame over {} frames distributed over training, validation and test set'.format(
total_num_of_frames),
)
ax.grid()
ax.legend();
# Plot histograms of object counts per frame over all frames in the data set
f, ax = plt.subplots(1, 1, figsize=(15, 7))
sns.histplot(
data=df_frame_stats_train.num_of_cyclists,
kde=True, color="blue", label="training set", ax=ax)
sns.histplot(
data=df_frame_stats_val.num_of_cyclists,
kde=True, color="green", label="validation set", ax=ax)
sns.histplot(
data=df_frame_stats_test.num_of_cyclists,
kde=True, color="red", label="test set", ax=ax)
ax.set(
xlabel='number of cyclists per frame',
ylabel='object counts',
title='Cyclist counts per frame over {} frames distributed over training, validation and test set'.format(
total_num_of_frames),
)
ax.grid()
ax.legend();
###Output
_____no_output_____ |
Fairness_Survey/ALGORITHMS/LFR/Violent.ipynb | ###Markdown
INSTALLATION
###Code
!pip install aif360
!pip install fairlearn
!apt-get install default-jre
!java -version
!pip install h2o
!pip install xlsxwriter
!pip install BlackBoxAuditing
###Output
Requirement already satisfied: BlackBoxAuditing in /usr/local/lib/python3.7/dist-packages (0.1.54)
Requirement already satisfied: networkx in /usr/local/lib/python3.7/dist-packages (from BlackBoxAuditing) (2.6.2)
Requirement already satisfied: matplotlib in /usr/local/lib/python3.7/dist-packages (from BlackBoxAuditing) (3.2.2)
Requirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from BlackBoxAuditing) (1.19.5)
Requirement already satisfied: pandas in /usr/local/lib/python3.7/dist-packages (from BlackBoxAuditing) (1.1.5)
Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.7/dist-packages (from matplotlib->BlackBoxAuditing) (0.10.0)
Requirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->BlackBoxAuditing) (2.8.2)
Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->BlackBoxAuditing) (2.4.7)
Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->BlackBoxAuditing) (1.3.1)
Requirement already satisfied: six in /usr/local/lib/python3.7/dist-packages (from cycler>=0.10->matplotlib->BlackBoxAuditing) (1.15.0)
Requirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.7/dist-packages (from pandas->BlackBoxAuditing) (2018.9)
###Markdown
IMPORTS
###Code
import numpy as np
from mlxtend.feature_selection import ExhaustiveFeatureSelector
from xgboost import XGBClassifier
# import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import openpyxl
import xlsxwriter
from openpyxl import load_workbook
import BlackBoxAuditing
import shap
# suppress SettingWithCopyWarning
pd.set_option('mode.chained_assignment',None)
from sklearn.feature_selection import VarianceThreshold
from sklearn.feature_selection import SelectKBest, SelectFwe, SelectPercentile,SelectFdr, SelectFpr, SelectFromModel
from sklearn.feature_selection import chi2, mutual_info_classif
# from skfeature.function.similarity_based import fisher_score
import aif360
import matplotlib.pyplot as plt
from aif360.metrics.classification_metric import ClassificationMetric
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import DisparateImpactRemover, Reweighing, LFR,OptimPreproc
from aif360.datasets import StandardDataset , BinaryLabelDataset
from sklearn.preprocessing import MinMaxScaler
MM= MinMaxScaler()
import h2o
from h2o.automl import H2OAutoML
from h2o.estimators.glm import H2OGeneralizedLinearEstimator
import sys
sys.path.append("../")
import os
h2o.init()
###Output
Checking whether there is an H2O instance running at http://localhost:54321 . connected.
###Markdown
**************************LOADING DATASET*******************************
###Code
from google.colab import drive
drive.mount('/content/gdrive')
# advantagedGroup= [{'race':1}]
# disadvantagedGroup= [{'race':0}]
# tr=pd.read_csv(r'/content/gdrive/MyDrive/Datasets/SurveyData/DATASET/Violent/Test/Test1.csv')
# tr
# tester= BinaryLabelDataset(favorable_label=1,
# unfavorable_label=0,
# df=tr,
# label_names=['two_year_recid'],
# protected_attribute_names=['race'],
# unprivileged_protected_attributes=[[0]],
# privileged_protected_attributes=[[1]])
# tester
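# LFR (Learning Fair Representations) learns a latent encoding that trades off
# reconstruction fidelity (Ax), label prediction (Ay) and group fairness (Az).
# In this survey it is used as a pre-processing step only: after transforming the
# features, the original labels and protected attributes are copied back (see below).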
# TR = LFR(unprivileged_groups=disadvantagedGroup,
# privileged_groups=advantagedGroup,
# k=10, Ax=0.1, Ay=1.0, Az=2.0,
# # verbose=1
# )
# TR = TR.fit(tester, maxiter=5000, maxfun=5000)
# transformed = TR.transform(tester)
# transformed.labels= tester.labels
# transformed.protected_attributes= tester.protected_attributes
# transformed.feature_names
###Output
_____no_output_____
###Markdown
GBM LFR
###Code
for i in range(31,51,1):
train_url=r'/content/gdrive/MyDrive/Datasets/SurveyData/RESULTS/LFR/FairData/Violent/Train'
train_path= os.path.join(train_url ,("Train"+ str(i)+ ".csv"))
train= pd.read_csv(train_path)
first_column = train.pop('two_year_recid')
train.insert(0, 'two_year_recid', first_column)
test_url=r'/content/gdrive/MyDrive/Datasets/SurveyData/RESULTS/LFR/FairData/Violent/Test'
test_path= os.path.join(test_url ,("Test"+ str(i)+ ".csv"))
test= pd.read_csv(test_path)
first_column = test.pop('two_year_recid')
test.insert(0, 'two_year_recid', first_column)
#********************************************************binary labels for LFR*************************************************************
advantagedGroup= [{'race':1}]
disadvantagedGroup= [{'race':0}]
# bldTrain= BinaryLabelDataset(favorable_label=1,
# unfavorable_label=0,
# df=train,
# label_names=['two_year_recid'],
# protected_attribute_names=['race'],
# unprivileged_protected_attributes=[[0]],
# privileged_protected_attributes=[[1]])
# bldTest= BinaryLabelDataset(favorable_label=1,
# unfavorable_label=0,
# df=test,
# label_names=['two_year_recid'],
# protected_attribute_names=['race'],
# unprivileged_protected_attributes=[[0]],
# privileged_protected_attributes=[[1]])
# #*******************************************************LFR instance**************************************************************
# TR = LFR(unprivileged_groups=disadvantagedGroup,
# privileged_groups=advantagedGroup)
# TR = TR.fit(bldTrain, maxiter=5000, maxfun=5000)
# #setting the label and the protected groups of the transformed to the original so that only
# #features are transformed. in order for LFR to be used as pre_processing algorithm
# #transforming and setting transformed train labels and protected attributes
# LFR_Train = TR .transform(bldTrain )
# LFR_Train.labels= bldTrain.labels
# # LFR_Train.protected_attributes= bldTrain.protected_attributes
# #transforming and setting transformed test labels and protected attributes
# LFR_Test = TR .transform(bldTest)
# LFR_Test.labels= bldTest.labels
# # LFR_Test.protected_attributes= bldTest.protected_attributes
# #*****************************************Repaired Train and Test Set NB: home language is at index 3*******************************************************
# #using bldTest and bldTrain protected attr vals which is the old protected attributes as we are not using the transformed version of it created by the LFR
# train= pd.DataFrame(np.hstack([LFR_Train .labels, LFR_Train .features[:,0:2],bldTrain.protected_attributes,LFR_Train.features[:,3:]]),columns=train.columns)
# test= pd.DataFrame(np.hstack([LFR_Test .labels, LFR_Test .features[:,0:2],bldTest.protected_attributes,LFR_Test.features[:,3:]]),columns=test.columns)
# # TotalRepairedDF= pd.concat([RepairedTrain ,RepairedTest ])
# # normalization of train and test sets
# Fitter= MM.fit(train)
# transformed_train=Fitter.transform(train)
# train=pd.DataFrame(transformed_train, columns= train.columns)
# #test normalization
# transformed_test=Fitter.transform(test)
# test=pd.DataFrame(transformed_test, columns= test.columns)
# *************CHECKING FAIRNESS IN DATASET**************************
## ****************CONVERTING TO BLD FORMAT******************************
#Transforming the Train and Test Set to BinaryLabel
class Test(StandardDataset):
def __init__(self,label_name= 'two_year_recid',
favorable_classes= [1],protected_attribute_names=['race'], privileged_classes=[[1]], ):
super(Test, self).__init__(df=test , label_name=label_name ,
favorable_classes=favorable_classes , protected_attribute_names=protected_attribute_names ,
privileged_classes=privileged_classes ,
)
BLD_Test= Test(protected_attribute_names= ['race'],
privileged_classes= [[1]])
## ********************Checking Bias in Repaired Data********************************
DataBias_Checker = BinaryLabelDatasetMetric(BLD_Test , unprivileged_groups= disadvantagedGroup, privileged_groups= advantagedGroup)
dsp= DataBias_Checker .statistical_parity_difference()
dif= DataBias_Checker.consistency()
ddi= DataBias_Checker.disparate_impact()
print('The Statistical Parity difference is = {diff}'.format(diff= dsp ))
print('Individual Fairness is = {IF}'.format( IF= dif ))
print('Disparate Impact is = {IF}'.format( IF= ddi ))
# ********************SETTING TO H20 FRAME AND MODEL TRAINING*******************************
x = list(train.columns)
y = "two_year_recid"
x.remove(y)
Train=h2o.H2OFrame(train)
Test= h2o.H2OFrame(test)
Train[y] = Train[y].asfactor()
Test[y] = Test[y].asfactor()
aml = H2OAutoML(max_models=5, nfolds=5, include_algos=['GBM'] , stopping_metric='AUTO') #verbosity='info',,'GBM', 'DRF'
aml.train(x=x, y=y, training_frame=Train)
best_model= aml.leader
# a.model_performance()
#**********************REPLACE LABELS OF DUPLICATED TEST SET WITH PREDICTIONS****************************
#predicted labels
gbm_Predictions= best_model.predict(Test)
gbm_Predictions= gbm_Predictions.as_data_frame()
predicted_df= test.copy()
predicted_df['two_year_recid']= gbm_Predictions.predict.to_numpy()
# ********************COMPUTE DISCRIMINATION*****************************
advantagedGroup= [{'race':1}]
disadvantagedGroup= [{'race':0}]
class PredTest(StandardDataset):
def __init__(self,label_name= 'two_year_recid',
favorable_classes= [1],protected_attribute_names=['race'], privileged_classes=[[1]], ):
super(PredTest, self).__init__(df=predicted_df , label_name=label_name ,
favorable_classes=favorable_classes , protected_attribute_names=protected_attribute_names ,
privileged_classes=privileged_classes ,
)
BLD_PredTest= PredTest(protected_attribute_names= ['race'],
privileged_classes= [[1]])
# # Workbook= pd.ExcelFile(r'/content/gdrive/MyDrive/Datasets/SurveyData/RESULTS/BaseLines/GBM/gbm_Results.xlsx')
# excelBook= load_workbook('/content/gdrive/MyDrive/Datasets/SurveyData/RESULTS/BaseLines/GBM/gbm_Results.xlsx')
# OldDF= excelBook.get_sheet_by_name("Violent")#pd.read_excel(Workbook,sheet_name='Violent')
#load workbook
excelBook= load_workbook('/content/gdrive/MyDrive/Datasets/SurveyData/RESULTS/LFR/NewLFR_Results/gbm_Results.xlsx')
Violent= excelBook['Violent']
data= Violent.values
# Get columns
columns = next(data)[0:]
# Create a DataFrame based on the second and subsequent lines of data
OldDF = pd.DataFrame(data, columns=columns)
ClassifierBias = ClassificationMetric( BLD_Test,BLD_PredTest , unprivileged_groups= disadvantagedGroup, privileged_groups= advantagedGroup)
Accuracy= ClassifierBias.accuracy()
TPR= ClassifierBias.true_positive_rate()
TNR= ClassifierBias.true_negative_rate()
NPV= ClassifierBias.negative_predictive_value()
PPV= ClassifierBias.positive_predictive_value()
SP=ClassifierBias .statistical_parity_difference()
IF=ClassifierBias.consistency()
DI=ClassifierBias.disparate_impact()
EOP=ClassifierBias.true_positive_rate_difference()
EO=ClassifierBias.average_odds_difference()
FDR= ClassifierBias.false_discovery_rate(privileged=False)- ClassifierBias.false_discovery_rate(privileged=True)
NPV_diff=ClassifierBias.negative_predictive_value(privileged=False)-ClassifierBias.negative_predictive_value(privileged=True)
FOR=ClassifierBias.false_omission_rate(privileged=False)-ClassifierBias.false_omission_rate(privileged=True)
PPV_diff=ClassifierBias.positive_predictive_value(privileged=False) -ClassifierBias.positive_predictive_value(privileged=True)
BGE = ClassifierBias.between_group_generalized_entropy_index()
WGE = ClassifierBias.generalized_entropy_index()-ClassifierBias.between_group_generalized_entropy_index()
BGTI = ClassifierBias.between_group_theil_index()
WGTI = ClassifierBias.theil_index() -ClassifierBias.between_group_theil_index()
EDF= ClassifierBias.differential_fairness_bias_amplification()
newdf= pd.DataFrame(index = [0], data= { 'ACCURACY': Accuracy,'TPR': TPR, 'PPV':PPV, 'TNR':TNR,'NPV':NPV,'SP':SP,'CONSISTENCY':IF,'DI':DI,'EOP':EOP,'EO':EO,'FDR':FDR,'NPV_diff':NPV_diff,
'FOR':FOR,'PPV_diff':PPV_diff,'BGEI':BGE,'WGEI':WGE,'BGTI':BGTI,'WGTI':WGTI,'EDF':EDF,
'DATA_SP':dsp,'DATA_CONS':dif,'DATA_DI':ddi})
newdf=pd.concat([OldDF,newdf])
pathway= r'/content/gdrive/MyDrive/Datasets/SurveyData/RESULTS/LFR/NewLFR_Results/gbm_Results.xlsx'
with pd.ExcelWriter(pathway, engine='openpyxl') as writer:
#load workbook base as for writer
writer.book= excelBook
writer.sheets=dict((ws.title, ws) for ws in excelBook.worksheets)
newdf.to_excel(writer, sheet_name='Violent', index=False)
# newdf.to_excel(writer, sheet_name='Adult', index=False)
print('Accuracy', Accuracy)
###Output
The Statistical Parity difference is = 0.4291687406669985
Individual Fairness is = [0.95785536]
Disparate Impact is = 2.9117516629711755
Parse progress: |█████████████████████████████████████████████████████████| 100%
Parse progress: |█████████████████████████████████████████████████████████| 100%
AutoML progress: |████████████████████████████████████████████████████████| 100%
gbm prediction progress: |████████████████████████████████████████████████| 100%
###Markdown
LOGISTIC REGRESSION LFR
###Code
for i in range(1,51,1):
train_url=r'/content/gdrive/MyDrive/Datasets/SurveyData/RESULTS/LFR/FairData/Violent/Train'
train_path= os.path.join(train_url ,("Train"+ str(i)+ ".csv"))
train= pd.read_csv(train_path)
first_column = train.pop('two_year_recid')
train.insert(0, 'two_year_recid', first_column)
test_url=r'/content/gdrive/MyDrive/Datasets/SurveyData/RESULTS/LFR/FairData/Violent/Test'
test_path= os.path.join(test_url ,("Test"+ str(i)+ ".csv"))
test= pd.read_csv(test_path)
first_column = test.pop('two_year_recid')
test.insert(0, 'two_year_recid', first_column)
#********************************************************binary labels for LFR*************************************************************
# bldTrain= BinaryLabelDataset(favorable_label=1,
# unfavorable_label=0,
# df=train,
# label_names=['two_year_recid'],
# protected_attribute_names=['race'],
# unprivileged_protected_attributes=[[0]],
# privileged_protected_attributes=[[1]])
# bldTest= BinaryLabelDataset(favorable_label=1,
# unfavorable_label=0,
# df=test,
# label_names=['two_year_recid'],
# protected_attribute_names=['race'],
# unprivileged_protected_attributes=[[0]],
# privileged_protected_attributes=[[1]])
# #*******************************************************LFR instance**************************************************************
# TR = LFR(unprivileged_groups=disadvantagedGroup,
# privileged_groups=advantagedGroup)
# TR = TR.fit(bldTrain, maxiter=5000, maxfun=5000)
# #setting the label and the protected groups of the transformed to the original so that only
# #features are transformed. in order for LFR to be used as pre_processing algorithm
# #transforming and setting transformed train labels and protected attributes
# LFR_Train = TR .transform(bldTrain )
# LFR_Train.labels= bldTrain.labels
# # LFR_Train.protected_attributes= bldTrain.protected_attributes
# #transforming and setting transformed test labels and protected attributes
# LFR_Test = TR .transform(bldTest)
# LFR_Test.labels= bldTest.labels
# # LFR_Test.protected_attributes= bldTest.protected_attributes
# #*****************************************Repaired Train and Test Set NB: home language is at index 3*******************************************************
# #using bldTest and bldTrain protected attr vals which is the old protected attributes as we are not using the transformed version of it created by the LFR
# train= pd.DataFrame(np.hstack([LFR_Train .labels, LFR_Train .features[:,0:2],bldTrain.protected_attributes,LFR_Train.features[:,3:]]),columns=train.columns)
# test= pd.DataFrame(np.hstack([LFR_Test .labels, LFR_Test .features[:,0:2],bldTest.protected_attributes,LFR_Test.features[:,3:]]),columns=test.columns)
# normalization of train and test sets
Fitter= MM.fit(train)
transformed_train=Fitter.transform(train)
train=pd.DataFrame(transformed_train, columns= train.columns)
#test normalization
transformed_test=Fitter.transform(test)
test=pd.DataFrame(transformed_test, columns= test.columns)
# *************CHECKING FAIRNESS IN DATASET**************************
## ****************CONVERTING TO BLD FORMAT******************************
#Transforming the Train and Test Set to BinaryLabel
advantagedGroup= [{'race':1}]
disadvantagedGroup= [{'race':0}]
# class Train(StandardDataset):
# def __init__(self,label_name= 'two_year_recid',
# favorable_classes= [1],protected_attribute_names=['race'], privileged_classes=[[1]], ):
# super(Train, self).__init__(df=train , label_name=label_name ,
# favorable_classes=favorable_classes , protected_attribute_names=protected_attribute_names ,
# privileged_classes=privileged_classes ,
# )
# BLD_Train= Train(protected_attribute_names= ['race'],
# privileged_classes= [[1]])
class Test(StandardDataset):
def __init__(self,label_name= 'two_year_recid',
favorable_classes= [1],protected_attribute_names=['race'], privileged_classes=[[1]], ):
super(Test, self).__init__(df=test , label_name=label_name ,
favorable_classes=favorable_classes , protected_attribute_names=protected_attribute_names ,
privileged_classes=privileged_classes ,
)
BLD_Test= Test(protected_attribute_names= ['race'],
privileged_classes= [[1]])
## ********************Checking Bias in Data********************************
DataBias_Checker = BinaryLabelDatasetMetric(BLD_Test , unprivileged_groups= disadvantagedGroup, privileged_groups= advantagedGroup)
dsp= DataBias_Checker .statistical_parity_difference()
dif= DataBias_Checker.consistency()
ddi= DataBias_Checker.disparate_impact()
print('The Statistical Parity difference is = {diff}'.format(diff= dsp ))
print('Individual Fairness is = {IF}'.format( IF= dif ))
print('Disparate Impact is = {IF}'.format( IF= ddi ))
# ********************SETTING TO H20 FRAME AND MODEL TRAINING*******************************
x = list(train.columns)
y = "two_year_recid"
x.remove(y)
Train=h2o.H2OFrame(train)
Test= h2o.H2OFrame(test)
Train[y] = Train[y].asfactor()
Test[y] = Test[y].asfactor()
LogReg = H2OGeneralizedLinearEstimator(family= "binomial", lambda_ = 0)
LogReg.train(x=x, y=y, training_frame=Train)
LogReg_Predictions= LogReg.predict(Test)
LogReg_Predictions= LogReg_Predictions.as_data_frame()
# *************************REPLACE LABELS OF DUPLICATED TEST SET WITH PREDICTIONS**************************************
predicted_df= test.copy()
predicted_df['two_year_recid']= LogReg_Predictions.predict.to_numpy()
# ***************************COMPUTE DISCRIMINATION********************************
advantagedGroup= [{'race':1}]
disadvantagedGroup= [{'race':0}]
class PredTest(StandardDataset):
def __init__(self,label_name= 'two_year_recid',
favorable_classes= [1],protected_attribute_names=['race'], privileged_classes=[[1]], ):
super(PredTest, self).__init__(df=predicted_df , label_name=label_name ,
favorable_classes=favorable_classes , protected_attribute_names=protected_attribute_names ,
privileged_classes=privileged_classes ,
)
BLD_PredTest= PredTest(protected_attribute_names= ['race'],
privileged_classes= [[1]])
excelBook= load_workbook(r'/content/gdrive/MyDrive/Datasets/SurveyData/RESULTS/LFR/NewLFR_Results/LR_Results.xlsx')
Violent= excelBook['Violent']
data= Violent.values
# Get columns
columns = next(data)[0:]
OldDF = pd.DataFrame(data, columns=columns)
ClassifierBias = ClassificationMetric( BLD_Test,BLD_PredTest , unprivileged_groups= disadvantagedGroup, privileged_groups= advantagedGroup)
Accuracy= ClassifierBias.accuracy()
TPR= ClassifierBias.true_positive_rate()
TNR= ClassifierBias.true_negative_rate()
NPV= ClassifierBias.negative_predictive_value()
PPV= ClassifierBias.positive_predictive_value()
SP=ClassifierBias .statistical_parity_difference()
IF=ClassifierBias.consistency()
DI=ClassifierBias.disparate_impact()
EOP=ClassifierBias.true_positive_rate_difference()
EO=ClassifierBias.average_odds_difference()
FDR= ClassifierBias.false_discovery_rate(privileged=False)- ClassifierBias.false_discovery_rate(privileged=True)
NPV_diff=ClassifierBias.negative_predictive_value(privileged=False)-ClassifierBias.negative_predictive_value(privileged=True)
FOR=ClassifierBias.false_omission_rate(privileged=False)-ClassifierBias.false_omission_rate(privileged=True)
PPV_diff=ClassifierBias.positive_predictive_value(privileged=False) -ClassifierBias.positive_predictive_value(privileged=True)
BGE = ClassifierBias.between_group_generalized_entropy_index()
WGE = ClassifierBias.generalized_entropy_index()-ClassifierBias.between_group_generalized_entropy_index()
BGTI = ClassifierBias.between_group_theil_index()
WGTI = ClassifierBias.theil_index() -ClassifierBias.between_group_theil_index()
EDF= ClassifierBias.differential_fairness_bias_amplification()
newdf= pd.DataFrame(index = [0], data= { 'ACCURACY': Accuracy,'TPR': TPR, 'PPV':PPV, 'TNR':TNR,'NPV':NPV,'SP':SP,'CONSISTENCY':IF,'DI':DI,'EOP':EOP,'EO':EO,'FDR':FDR,'NPV_diff':NPV_diff,
'FOR':FOR,'PPV_diff':PPV_diff,'BGEI':BGE,'WGEI':WGE,'BGTI':BGTI,'WGTI':WGTI,'EDF':EDF,
'DATA_SP':dsp,'DATA_CONS':dif,'DATA_DI':ddi})
newdf=pd.concat([OldDF,newdf])
pathway= r'/content/gdrive/MyDrive/Datasets/SurveyData/RESULTS/LFR/NewLFR_Results/LR_Results.xlsx'
with pd.ExcelWriter(pathway, engine='openpyxl') as writer:
#load workbook base as for writer
writer.book= excelBook
writer.sheets=dict((ws.title, ws) for ws in excelBook.worksheets)
newdf.to_excel(writer, sheet_name='Violent', index=False)
# newdf.to_excel(writer, sheet_name='Adult', index=False)
print('Accuracy', Accuracy)
###Output
_____no_output_____ |
1_python/python_review_2.ipynb | ###Markdown
Python入门(中)1. [简介](简介)2. [列表](列表) [1. 列表的定义](1.-列表的定义) [2. 列表的创建](2.-列表的创建) [3. 向列表中添加元素](3.-向列表中添加元素) [4. 删除列表中的元素](4.-删除列表中的元素) [5. 获取列表中的元素](5.-获取列表中的元素) [6. 列表的常用操作符](6.-列表的常用操作符) [7. 列表的其它方法](7.-列表的其它方法) 3. [元组](元组) [1. 创建和访问一个元组](1.-创建和访问一个元组) [2. 更新和删除一个元组](2.-更新和删除一个元组) [3. 元组相关的操作符](3.-元组相关的操作符) [4. 内置方法](4.-内置方法) [5. 解压元组](5.-解压元组) 4. [字符串](字符串) [1. 字符串的定义](1.-字符串的定义) [2. 字符串的切片与拼接](2.-字符串的切片与拼接) [3. 字符串的常用内置方法](3.-字符串的常用内置方法) [4. 字符串格式化](4.-字符串格式化) 5. [字典](字典) [1. 可变类型与不可变类型](1.-可变类型与不可变类型) [2. 字典的定义](2.-字典的定义) [3. 创建和访问字典](3.-创建和访问字典) [4. 字典的内置方法](4.-字典的内置方法) 6. [集合](集合) [1. 集合的创建](1.-集合的创建) [2. 访问集合中的值](2.-访问集合中的值) [3. 集合的内置方法](3.-集合的内置方法) [4. 集合的转换](4.-集合的转换) [5. 不可变集合](5.-不可变集合) 7. [序列](序列) [1. 针对序列的内置函数](1.-针对序列的内置函数) 简介Python 是一种通用编程语言,其在科学计算和机器学习领域具有广泛的应用。如果我们打算利用 Python 来执行机器学习,那么对 Python 有一些基本的了解就是至关重要的。本 Python 入门系列体验就是为这样的初学者精心准备的。 **本实验包括以下内容**:1. 列表 - 列表的定义 - 列表的创建 - 向列表中添加元素 - 删除列表中的元素 - 获取列表中的元素 - 列表的常用操作符 - 列表的其他方法2. 元组 - 创建和访问一个元组 - 更新和删除一个元组 - 元组相关的操作符 - 内置方法 - 解压元组3. 字符串 - 字符串的定义 - 字符串的切片与拼接 - 字符串的常用内置方法 - 字符串格式化4. 字典 - 可变类型与不可变类型 - 字典的定义 - 创建和访问字典 - 字典的内置方法5. 集合 - 集合的创建 - 访问集合中的值 - 集合的内置方法 - 集合的转换 - 不可变集合6. 序列 - 针对序列的内置函数 列表简单数据类型- 整型``- 浮点型``- 布尔型``容器数据类型- 列表``- 元组``- 字典``- 集合``- 字符串`` 1. 列表的定义列表是有序集合,没有固定大小,能够保存任意数量任意类型的 Python 对象,语法为 `[元素1, 元素2, ..., 元素n]`。- 关键点是「中括号 []」和「逗号 ,」- 中括号 把所有元素绑在一起- 逗号 将每个元素一一分开 2. 列表的创建- 创建一个普通列表【例子】
###Code
x = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday']
print(x, type(x))
# ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday'] <class 'list'>
x = [2, 3, 4, 5, 6, 7]
print(x, type(x))
# [2, 3, 4, 5, 6, 7] <class 'list'>
###Output
['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday'] <class 'list'>
[2, 3, 4, 5, 6, 7] <class 'list'>
###Markdown
- 利用`range()`创建列表【例子】
###Code
x = list(range(10))
print(x, type(x))
# [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] <class 'list'>
x = list(range(1, 11, 2))
print(x, type(x))
# [1, 3, 5, 7, 9] <class 'list'>
x = list(range(10, 1, -2))
print(x, type(x))
# [10, 8, 6, 4, 2] <class 'list'>
###Output
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9] <class 'list'>
[1, 3, 5, 7, 9] <class 'list'>
[10, 8, 6, 4, 2] <class 'list'>
###Markdown
- 利用推导式创建列表【例子】
###Code
x = [0] * 5
print(x, type(x))
# [0, 0, 0, 0, 0] <class 'list'>
x = [0 for i in range(5)]
print(x, type(x))
# [0, 0, 0, 0, 0] <class 'list'>
x = [i for i in range(10)]
print(x, type(x))
# [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] <class 'list'>
x = [i for i in range(1, 10, 2)]
print(x, type(x))
# [1, 3, 5, 7, 9] <class 'list'>
x = [i for i in range(10, 1, -2)]
print(x, type(x))
# [10, 8, 6, 4, 2] <class 'list'>
x = [i ** 2 for i in range(1, 10)]
print(x, type(x))
# [1, 4, 9, 16, 25, 36, 49, 64, 81] <class 'list'>
x = [i for i in range(100) if (i % 2) != 0 and (i % 3) == 0]
print(x, type(x))
# [3, 9, 15, 21, 27, 33, 39, 45, 51, 57, 63, 69, 75, 81, 87, 93, 99] <class 'list'>
###Output
[0, 0, 0, 0, 0] <class 'list'>
[0, 0, 0, 0, 0] <class 'list'>
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9] <class 'list'>
[1, 3, 5, 7, 9] <class 'list'>
[10, 8, 6, 4, 2] <class 'list'>
[1, 4, 9, 16, 25, 36, 49, 64, 81] <class 'list'>
[3, 9, 15, 21, 27, 33, 39, 45, 51, 57, 63, 69, 75, 81, 87, 93, 99] <class 'list'>
###Markdown
注意:由于list的元素可以是任何对象,因此列表中所保存的是对象的指针。即使保存一个简单的`[1,2,3]`,也有3个指针和3个整数对象。`x = [a] * 4`操作中,只是创建4个指向list的引用,所以一旦`a`改变,`x`中4个`a`也会随之改变。【例子】
###Code
x = [[0] * 3] * 4
print(x, type(x))
# [[0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0]] <class 'list'>
x[0][0] = 1
print(x, type(x))
# [[1, 0, 0], [1, 0, 0], [1, 0, 0], [1, 0, 0]] <class 'list'>
a = [0] * 3
x = [a] * 4
print(x, type(x))
# [[0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0]] <class 'list'>
x[0][0] = 1
print(x, type(x))
# [[1, 0, 0], [1, 0, 0], [1, 0, 0], [1, 0, 0]] <class 'list'>
###Output
[[0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0]] <class 'list'>
[[1, 0, 0], [1, 0, 0], [1, 0, 0], [1, 0, 0]] <class 'list'>
[[0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0]] <class 'list'>
[[1, 0, 0], [1, 0, 0], [1, 0, 0], [1, 0, 0]] <class 'list'>
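###Markdown
A minimal sketch of the usual workaround for the shared-sublist behaviour shown above: build each row with a list comprehension so that every inner list is a separate object (the variable names below are illustrative only).
###Code
x = [[0] * 3 for _ in range(4)]  # each iteration creates a new inner list
x[0][0] = 1
# only the first row changes: [[1, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0]]
###Output
_____no_output_____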
###Markdown
- 创建一个混合列表【例子】
###Code
mix = [1, 'lsgo', 3.14, [1, 2, 3]]
print(mix, type(mix))
# [1, 'lsgo', 3.14, [1, 2, 3]] <class 'list'>
###Output
[1, 'lsgo', 3.14, [1, 2, 3]] <class 'list'>
###Markdown
- 创建一个空列表【例子】
###Code
empty = []
print(empty, type(empty)) # [] <class 'list'>
###Output
[] <class 'list'>
###Markdown
列表不像元组,列表内容可更改 (mutable),因此附加 (`append`, `extend`)、插入 (`insert`)、删除 (`remove`, `pop`) 这些操作都可以用在它身上。 3. 向列表中添加元素- `list.append(obj)` 在列表末尾添加新的对象,只接受一个参数,参数可以是任何数据类型,被追加的元素在 list 中保持着原结构类型。【例子】
###Code
x = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday']
x.append('Thursday')
print(x)
# ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Thursday']
print(len(x)) # 6
###Output
['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Thursday']
6
###Markdown
此元素如果是一个 list,那么这个 list 将作为一个整体进行追加,注意`append()`和`extend()`的区别。【例子】
###Code
x = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday']
x.append(['Thursday', 'Sunday'])
print(x)
# ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', ['Thursday', 'Sunday']]
print(len(x)) # 6
###Output
['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', ['Thursday', 'Sunday']]
6
###Markdown
- `list.extend(seq)` 在列表末尾一次性追加另一个序列中的多个值(用新列表扩展原来的列表)【例子】
###Code
x = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday']
x.extend(['Thursday', 'Sunday'])
print(x)
# ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Thursday', 'Sunday']
print(len(x)) # 7
###Output
['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Thursday', 'Sunday']
7
###Markdown
严格来说 `append` 是追加,把一个东西整体添加在列表后,而 `extend` 是扩展,把一个东西里的所有元素添加在列表后。- `list.insert(index, obj)` 在编号 `index` 位置插入 `obj`。【例子】
###Code
x = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday']
x.insert(2, 'Sunday')
print(x)
# ['Monday', 'Tuesday', 'Sunday', 'Wednesday', 'Thursday', 'Friday']
print(len(x)) # 6
###Output
['Monday', 'Tuesday', 'Sunday', 'Wednesday', 'Thursday', 'Friday']
6
###Markdown
4. 删除列表中的元素- `list.remove(obj)` 移除列表中某个值的第一个匹配项【例子】
###Code
x = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday']
x.remove('Monday')
print(x) # ['Tuesday', 'Wednesday', 'Thursday', 'Friday']
###Output
['Tuesday', 'Wednesday', 'Thursday', 'Friday']
###Markdown
- `list.pop([index=-1])` 移除列表中的一个元素(默认最后一个元素),并且返回该元素的值【例子】
###Code
x = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday']
y = x.pop()
print(y) # Friday
y = x.pop(0)
print(y) # Monday
y = x.pop(-2)
print(y) # Wednesday
print(x) # ['Tuesday', 'Thursday']
###Output
Friday
Monday
Wednesday
['Tuesday', 'Thursday']
###Markdown
`remove` 和 `pop` 都可以删除元素,前者是指定具体要删除的元素,后者是指定一个索引。- `del var1[, var2 ……]` 删除单个或多个对象。【例子】如果知道要删除的元素在列表中的位置,可使用`del`语句。
###Code
x = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday']
del x[0:2]
print(x) # ['Wednesday', 'Thursday', 'Friday']
###Output
['Wednesday', 'Thursday', 'Friday']
###Markdown
如果你要从列表中删除一个元素,且不再以任何方式使用它,就使用`del`语句;如果你要在删除元素后还能继续使用它,就使用方法`pop()`。 5. 获取列表中的元素- 通过元素的索引值,从列表获取单个元素,注意,列表索引值是从0开始的。- 通过将索引指定为-1,可让Python返回最后一个列表元素,索引 -2 返回倒数第二个列表元素,以此类推。【例子】
###Code
x = ['Monday', 'Tuesday', 'Wednesday', ['Thursday', 'Friday']]
print(x[0], type(x[0])) # Monday <class 'str'>
print(x[-1], type(x[-1])) # ['Thursday', 'Friday'] <class 'list'>
print(x[-2], type(x[-2])) # Wednesday <class 'str'>
###Output
Monday <class 'str'>
['Thursday', 'Friday'] <class 'list'>
Wednesday <class 'str'>
###Markdown
切片的通用写法是 `start : stop : step`- 情况 1 - "start :" - 以 `step` 为 1 (默认) 从编号 `start` 往列表尾部切片。【例子】
###Code
x = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday']
print(x[3:]) # ['Thursday', 'Friday']
print(x[-3:]) # ['Wednesday', 'Thursday', 'Friday']
###Output
['Thursday', 'Friday']
['Wednesday', 'Thursday', 'Friday']
###Markdown
- 情况 2 - ": stop"- 以 `step` 为 1 (默认) 从列表头部往编号 `stop` 切片。【例子】
###Code
week = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday']
print(week[:3]) # ['Monday', 'Tuesday', 'Wednesday']
print(week[:-3]) # ['Monday', 'Tuesday']
###Output
['Monday', 'Tuesday', 'Wednesday']
['Monday', 'Tuesday']
###Markdown
- 情况 3 - "start : stop"- 以 `step` 为 1 (默认) 从编号 `start` 往编号 `stop` 切片。【例子】
###Code
week = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday']
print(week[1:3]) # ['Tuesday', 'Wednesday']
print(week[-3:-1]) # ['Wednesday', 'Thursday']
###Output
['Tuesday', 'Wednesday']
['Wednesday', 'Thursday']
###Markdown
- 情况 4 - "start : stop : step"- 以具体的 `step` 从编号 `start` 往编号 `stop` 切片。注意最后把 `step` 设为 -1,相当于将列表反向排列。【例子】
###Code
week = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday']
print(week[1:4:2]) # ['Tuesday', 'Thursday']
print(week[:4:2]) # ['Monday', 'Wednesday']
print(week[1::2]) # ['Tuesday', 'Thursday']
print(week[::-1])
# ['Friday', 'Thursday', 'Wednesday', 'Tuesday', 'Monday']
###Output
['Tuesday', 'Thursday']
['Monday', 'Wednesday']
['Tuesday', 'Thursday']
['Friday', 'Thursday', 'Wednesday', 'Tuesday', 'Monday']
###Markdown
- 情况 5 - " : "- 复制列表中的所有元素(浅拷贝)。【例子】
###Code
week = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday']
print(week[:])
# ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday']
###Output
['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday']
###Markdown
【例子】浅拷贝与深拷贝
###Code
list1 = [123, 456, 789, 213]
list2 = list1
list3 = list1[:]
print(list2) # [123, 456, 789, 213]
print(list3) # [123, 456, 789, 213]
list1.sort()
print(list2) # [123, 213, 456, 789]
print(list3) # [123, 456, 789, 213]
list1 = [[123, 456], [789, 213]]
list2 = list1
list3 = list1[:]
print(list2) # [[123, 456], [789, 213]]
print(list3) # [[123, 456], [789, 213]]
list1[0][0] = 111
print(list2) # [[111, 456], [789, 213]]
print(list3) # [[111, 456], [789, 213]]
###Output
[123, 456, 789, 213]
[123, 456, 789, 213]
[123, 213, 456, 789]
[123, 456, 789, 213]
[[123, 456], [789, 213]]
[[123, 456], [789, 213]]
[[111, 456], [789, 213]]
[[111, 456], [789, 213]]
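###Markdown
The cell above only demonstrates shallow copies; as a minimal sketch, the standard-library `copy.deepcopy` can be used when nested elements must be copied as well (the names below are illustrative only).
###Code
import copy
list1 = [[123, 456], [789, 213]]
list4 = copy.deepcopy(list1)  # nested sublists are copied, not shared
list1[0][0] = 111
# list1 is now [[111, 456], [789, 213]] while list4 stays [[123, 456], [789, 213]]
###Output
_____no_output_____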
###Markdown
6. 列表的常用操作符- 等号操作符:`==`- 连接操作符 `+`- 重复操作符 `*`- 成员关系操作符 `in`、`not in`「等号 ==」,只有成员、成员位置都相同时才返回True。列表拼接有两种方式,用「加号 +」和「乘号 *」,前者首尾拼接,后者复制拼接。【例子】
###Code
list1 = [123, 456]
list2 = [456, 123]
list3 = [123, 456]
print(list1 == list2) # False
print(list1 == list3) # True
list4 = list1 + list2 # extend()
print(list4) # [123, 456, 456, 123]
list5 = list3 * 3
print(list5) # [123, 456, 123, 456, 123, 456]
list3 *= 3
print(list3) # [123, 456, 123, 456, 123, 456]
print(123 in list3) # True
print(456 not in list3) # False
###Output
False
True
[123, 456, 456, 123]
[123, 456, 123, 456, 123, 456]
[123, 456, 123, 456, 123, 456]
True
False
###Markdown
前面三种方法(`append`, `extend`, `insert`)可对列表增加元素,它们没有返回值,是直接修改了原数据对象。而将两个list相加,需要创建新的 list 对象,从而需要消耗额外的内存,特别是当 list 较大时,尽量不要使用 “+” 来添加list。 7. 列表的其它方法`list.count(obj)` 统计某个元素在列表中出现的次数【例子】
###Code
list1 = [123, 456] * 3
print(list1) # [123, 456, 123, 456, 123, 456]
num = list1.count(123)
print(num) # 3
###Output
[123, 456, 123, 456, 123, 456]
3
###Markdown
`list.index(x[, start[, end]])` 从列表中找出某个值第一个匹配项的索引位置【例子】
###Code
list1 = [123, 456] * 5
print(list1.index(123)) # 0
print(list1.index(123, 1)) # 2
print(list1.index(123, 3, 7)) # 4
###Output
0
2
4
###Markdown
`list.reverse()` 反向列表中元素【例子】
###Code
x = [123, 456, 789]
x.reverse()
print(x) # [789, 456, 123]
###Output
[789, 456, 123]
###Markdown
`list.sort(key=None, reverse=False)` 对原列表进行排序。- `key` -- 主要是用来进行比较的元素,只有一个参数,具体的函数的参数就是取自于可迭代对象中,指定可迭代对象中的一个元素来进行排序。- `reverse` -- 排序规则,`reverse = True` 降序, `reverse = False` 升序(默认)。- 该方法没有返回值,但是会对列表的对象进行排序。【例子】
###Code
x = [123, 456, 789, 213]
x.sort()
print(x)
# [123, 213, 456, 789]
x.sort(reverse=True)
print(x)
# [789, 456, 213, 123]
# 获取列表的第二个元素
def takeSecond(elem):
return elem[1]
x = [(2, 2), (3, 4), (4, 1), (1, 3)]
x.sort(key=takeSecond)
print(x)
# [(4, 1), (2, 2), (1, 3), (3, 4)]
x.sort(key=lambda a: a[0])
print(x)
# [(1, 3), (2, 2), (3, 4), (4, 1)]
###Output
[123, 213, 456, 789]
[789, 456, 213, 123]
[(4, 1), (2, 2), (1, 3), (3, 4)]
[(1, 3), (2, 2), (3, 4), (4, 1)]
###Markdown
元组「元组」定义语法为:`(元素1, 元素2, ..., 元素n)`- 小括号把所有元素绑在一起- 逗号将每个元素一一分开 1. 创建和访问一个元组- Python 的元组与列表类似,不同之处在于tuple被创建后就不能对其进行修改,类似字符串。- 元组使用小括号,列表使用方括号。- 元组与列表类似,也用整数来对它进行索引 (indexing) 和切片 (slicing)。【例子】
###Code
t1 = (1, 10.31, 'python')
t2 = 1, 10.31, 'python'
print(t1, type(t1))
# (1, 10.31, 'python') <class 'tuple'>
print(t2, type(t2))
# (1, 10.31, 'python') <class 'tuple'>
tuple1 = (1, 2, 3, 4, 5, 6, 7, 8)
print(tuple1[1]) # 2
print(tuple1[5:]) # (6, 7, 8)
print(tuple1[:5]) # (1, 2, 3, 4, 5)
tuple2 = tuple1[:]
print(tuple2) # (1, 2, 3, 4, 5, 6, 7, 8)
###Output
(1, 10.31, 'python') <class 'tuple'>
(1, 10.31, 'python') <class 'tuple'>
2
(6, 7, 8)
(1, 2, 3, 4, 5)
(1, 2, 3, 4, 5, 6, 7, 8)
###Markdown
- 创建元组可以用小括号 (),也可以什么都不用,为了可读性,建议还是用 ()。- 元组中只包含一个元素时,需要在元素后面添加逗号,否则括号会被当作运算符使用。【例子】
###Code
x = (1)
print(type(x)) # <class 'int'>
x = 2, 3, 4, 5
print(type(x)) # <class 'tuple'>
x = []
print(type(x)) # <class 'list'>
x = ()
print(type(x)) # <class 'tuple'>
x = (1,)
print(type(x)) # <class 'tuple'>
###Output
<class 'int'>
<class 'tuple'>
<class 'list'>
<class 'tuple'>
<class 'tuple'>
###Markdown
【例子】
###Code
print(8 * (8)) # 64
print(8 * (8,)) # (8, 8, 8, 8, 8, 8, 8, 8)
###Output
64
(8, 8, 8, 8, 8, 8, 8, 8)
###Markdown
【例子】创建二维元组。
###Code
x = (1, 10.31, 'python'), ('data', 11)
print(x)
# ((1, 10.31, 'python'), ('data', 11))
print(x[0])
# (1, 10.31, 'python')
print(x[0][0], x[0][1], x[0][2])
# 1 10.31 python
print(x[0][0:2])
# (1, 10.31)
###Output
((1, 10.31, 'python'), ('data', 11))
(1, 10.31, 'python')
1 10.31 python
(1, 10.31)
###Markdown
2. 更新和删除一个元组【例子】
###Code
week = ('Monday', 'Tuesday', 'Thursday', 'Friday')
week = week[:2] + ('Wednesday',) + week[2:]
print(week) # ('Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday')
###Output
('Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday')
###Markdown
【例子】元组有不可更改 (immutable) 的性质,因此不能直接给元组的元素赋值,但是只要元组中的元素可更改 (mutable),那么我们可以直接更改其元素,注意这跟赋值其元素不同。
###Code
t1 = (1, 2, 3, [4, 5, 6])
print(t1) # (1, 2, 3, [4, 5, 6])
t1[3][0] = 9
print(t1) # (1, 2, 3, [9, 5, 6])
###Output
(1, 2, 3, [4, 5, 6])
(1, 2, 3, [9, 5, 6])
###Markdown
3. 元组相关的操作符- 等号操作符:`==`- 连接操作符 `+`- 重复操作符 `*`- 成员关系操作符 `in`、`not in`「等号 ==」,只有成员、成员位置都相同时才返回True。元组拼接有两种方式,用「加号 +」和「乘号 *」,前者首尾拼接,后者复制拼接。【例子】
###Code
t1 = (123, 456)
t2 = (456, 123)
t3 = (123, 456)
print(t1 == t2) # False
print(t1 == t3) # True
t4 = t1 + t2
print(t4) # (123, 456, 456, 123)
t5 = t3 * 3
print(t5) # (123, 456, 123, 456, 123, 456)
t3 *= 3
print(t3) # (123, 456, 123, 456, 123, 456)
print(123 in t3) # True
print(456 not in t3) # False
###Output
False
True
(123, 456, 456, 123)
(123, 456, 123, 456, 123, 456)
(123, 456, 123, 456, 123, 456)
True
False
###Markdown
4. 内置方法元组大小和内容都不可更改,因此只有 `count` 和 `index` 两种方法。【例子】
###Code
t = (1, 10.31, 'python')
print(t.count('python')) # 1
print(t.index(10.31)) # 1
###Output
1
1
###Markdown
- `count('python')` 是记录在元组 `t` 中该元素出现几次,显然是 1 次- `index(10.31)` 是找到该元素在元组 `t` 的索引,显然是 1 5. 解压元组【例子】解压(unpack)一维元组(有几个元素左边括号定义几个变量)
###Code
t = (1, 10.31, 'python')
(a, b, c) = t
print(a, b, c)
# 1 10.31 python
###Output
1 10.31 python
###Markdown
【例子】解压二维元组(按照元组里的元组结构来定义变量)
###Code
t = (1, 10.31, ('OK', 'python'))
(a, b, (c, d)) = t
print(a, b, c, d)
# 1 10.31 OK python
###Output
1 10.31 OK python
###Markdown
【例子】如果你只想要元组其中几个元素,用通配符「*」,英文叫 wildcard,在计算机语言中代表一个或多个元素。下例就是把多个元素丢给了 `rest` 变量。
###Code
t = 1, 2, 3, 4, 5
a, b, *rest, c = t
print(a, b, c) # 1 2 5
print(rest) # [3, 4]
###Output
1 2 5
[3, 4]
###Markdown
【例子】如果你根本不在乎 rest 变量,那么就用通配符「*」加上下划线「_」。
###Code
t = 1, 2, 3, 4, 5
a, b, *_ = t
print(a, b) # 1 2
###Output
1 2
###Markdown
字符串 1. 字符串的定义- Python 中字符串被定义为引号之间的字符集合。- Python 支持使用成对的 单引号 或 双引号。【例子】
###Code
t1 = 'i love Python!'
print(t1, type(t1))
# i love Python! <class 'str'>
t2 = "I love Python!"
print(t2, type(t2))
# I love Python! <class 'str'>
print(5 + 8) # 13
print('5' + '8') # 58
###Output
i love Python! <class 'str'>
I love Python! <class 'str'>
13
58
###Markdown
- Python 的常用转义字符转义字符 | 描述:---:|---`\\` | 反斜杠符号`\'` | 单引号`\"` | 双引号`\n` | 换行`\t` | 横向制表符(TAB)`\r` | 回车【例子】如果字符串中需要出现单引号或双引号,可以使用转义符号`\`对字符串中的符号进行转义。
###Code
print('let\'s go') # let's go
print("let's go") # let's go
print('C:\\now') # C:\now
print("C:\\Program Files\\Intel\\Wifi\\Help")
# C:\Program Files\Intel\Wifi\Help
###Output
let's go
let's go
C:\now
C:\Program Files\Intel\Wifi\Help
###Markdown
【例子】原始字符串只需要在字符串前边加一个英文字母 r 即可。
###Code
print(r'C:\Program Files\Intel\Wifi\Help')
# C:\Program Files\Intel\Wifi\Help
###Output
C:\Program Files\Intel\Wifi\Help
###Markdown
【例子】三引号允许一个字符串跨多行,字符串中可以包含换行符、制表符以及其他特殊字符。
###Code
para_str = """这是一个多行字符串的实例
多行字符串可以使用制表符
TAB ( \t )。
也可以使用换行符 [ \n ]。
"""
print(para_str)
# 这是一个多行字符串的实例
# 多行字符串可以使用制表符
# TAB ( )。
# 也可以使用换行符 [
# ]。
para_str = '''这是一个多行字符串的实例
多行字符串可以使用制表符
TAB ( \t )。
也可以使用换行符 [ \n ]。
'''
print(para_str)
# 这是一个多行字符串的实例
# 多行字符串可以使用制表符
# TAB ( )。
# 也可以使用换行符 [
# ]。
###Output
这是一个多行字符串的实例
多行字符串可以使用制表符
TAB ( )。
也可以使用换行符 [
]。
这是一个多行字符串的实例
多行字符串可以使用制表符
TAB ( )。
也可以使用换行符 [
]。
###Markdown
2. 字符串的切片与拼接- 类似于元组具有不可修改性- 从 0 开始 (和 Java 一样)- 切片通常写成 `start:end` 这种形式,包括「`start` 索引」对应的元素,不包括「`end`索引」对应的元素。- 索引值可正可负,正索引从 0 开始,从左往右;负索引从 -1 开始,从右往左。使用负数索引时,会从最后一个元素开始计数。最后一个元素的位置编号是 -1。【例子】
###Code
str1 = 'I Love LsgoGroup'
print(str1[:6]) # I Love
print(str1[5]) # e
print(str1[:6] + " 插入的字符串 " + str1[6:])
# I Love 插入的字符串 LsgoGroup
s = 'Python'
print(s) # Python
print(s[2:4]) # th
print(s[-5:-2]) # yth
print(s[2]) # t
print(s[-1]) # n
###Output
I Love
e
I Love 插入的字符串 LsgoGroup
Python
th
yth
t
n
###Markdown
3. 字符串的常用内置方法- `capitalize()` 将字符串的第一个字符转换为大写。【例子】
###Code
str2 = 'xiaoxie'
print(str2.capitalize()) # Xiaoxie
###Output
Xiaoxie
###Markdown
- `lower()` 转换字符串中所有大写字符为小写。- `upper()` 转换字符串中的小写字母为大写。- `swapcase()` 将字符串中大写转换为小写,小写转换为大写。【例子】
###Code
str2 = "DAXIExiaoxie"
print(str2.lower()) # daxiexiaoxie
print(str2.upper()) # DAXIEXIAOXIE
print(str2.swapcase()) # daxieXIAOXIE
###Output
daxiexiaoxie
DAXIEXIAOXIE
daxieXIAOXIE
###Markdown
- `count(str, beg= 0,end=len(string))` 返回`str`在 string 里面出现的次数,如果`beg`或者`end`指定则返回指定范围内`str`出现的次数。【例子】
###Code
str2 = "DAXIExiaoxie"
print(str2.count('xi')) # 2
###Output
2
###Markdown
- `endswith(suffix, beg=0, end=len(string))` 检查字符串是否以指定子字符串 `suffix` 结束,如果是,返回 True,否则返回 False。如果 `beg` 和 `end` 指定值,则在指定范围内检查。- `startswith(substr, beg=0,end=len(string))` 检查字符串是否以指定子字符串 `substr` 开头,如果是,返回 True,否则返回 False。如果 `beg` 和 `end` 指定值,则在指定范围内检查。【例子】
###Code
str2 = "DAXIExiaoxie"
print(str2.endswith('ie')) # True
print(str2.endswith('xi')) # False
print(str2.startswith('Da')) # False
print(str2.startswith('DA')) # True
###Output
True
False
False
True
###Markdown
- `find(str, beg=0, end=len(string))` 检测 `str` 是否包含在字符串中,如果指定范围 `beg` 和 `end`,则检查是否包含在指定范围内,如果包含,返回开始的索引值,否则返回 -1。- `rfind(str, beg=0,end=len(string))` 类似于 `find()` 函数,不过是从右边开始查找。【例子】
###Code
str2 = "DAXIExiaoxie"
print(str2.find('xi')) # 5
print(str2.find('ix')) # -1
print(str2.rfind('xi')) # 9
###Output
5
-1
9
###Markdown
- `isnumeric()` 如果字符串中只包含数字字符,则返回 True,否则返回 False。【例子】
###Code
str3 = '12345'
print(str3.isnumeric()) # True
str3 += 'a'
print(str3.isnumeric()) # False
###Output
True
False
###Markdown
- `ljust(width[, fillchar])`返回一个原字符串左对齐,并使用`fillchar`(默认空格)填充至长度`width`的新字符串。- `rjust(width[, fillchar])`返回一个原字符串右对齐,并使用`fillchar`(默认空格)填充至长度`width`的新字符串。【例子】
###Code
str4 = '1101'
print(str4.ljust(8, '0')) # 11010000
print(str4.rjust(8, '0')) # 00001101
###Output
11010000
00001101
###Markdown
- `lstrip([chars])` 截掉字符串左边的空格或指定字符。- `rstrip([chars])` 删除字符串末尾的空格或指定字符。- `strip([chars])` 在字符串上执行`lstrip()`和`rstrip()`。【例子】
###Code
str5 = ' I Love LsgoGroup '
print(str5.lstrip()) # 'I Love LsgoGroup '
print(str5.lstrip().strip('I')) # ' Love LsgoGroup '
print(str5.rstrip()) # ' I Love LsgoGroup'
print(str5.strip()) # 'I Love LsgoGroup'
print(str5.strip().strip('p')) # 'I Love LsgoGrou'
###Output
I Love LsgoGroup
Love LsgoGroup
I Love LsgoGroup
I Love LsgoGroup
I Love LsgoGrou
###Markdown
- `partition(sub)` 找到子字符串sub,把字符串分为一个三元组`(pre_sub,sub,fol_sub)`,如果字符串中不包含sub则返回`('原字符串','','')`。- `rpartition(sub)`类似于`partition()`方法,不过是从右边开始查找。【例子】
###Code
str5 = ' I Love LsgoGroup '
print(str5.strip().partition('o')) # ('I L', 'o', 've LsgoGroup')
print(str5.strip().partition('m')) # ('I Love LsgoGroup', '', '')
print(str5.strip().rpartition('o')) # ('I Love LsgoGr', 'o', 'up')
###Output
('I L', 'o', 've LsgoGroup')
('I Love LsgoGroup', '', '')
('I Love LsgoGr', 'o', 'up')
###Markdown
- `replace(old, new [, max])` 把 将字符串中的`old`替换成`new`,如果`max`指定,则替换不超过`max`次。【例子】
###Code
str5 = ' I Love LsgoGroup '
print(str5.strip().replace('I', 'We')) # We Love LsgoGroup
###Output
We Love LsgoGroup
###Markdown
- `split(str="", num)` 不带参数默认是以空格为分隔符切片字符串,如果`num`参数有设置,则仅分隔`num`个子字符串,返回切片后的子字符串拼接的列表。【例子】
###Code
str5 = ' I Love LsgoGroup '
print(str5.strip().split()) # ['I', 'Love', 'LsgoGroup']
print(str5.strip().split('o')) # ['I L', 've Lsg', 'Gr', 'up']
###Output
['I', 'Love', 'LsgoGroup']
['I L', 've Lsg', 'Gr', 'up']
###Markdown
【例子】
###Code
u = "www.baidu.com.cn"
# 使用默认分隔符
print(u.split()) # ['www.baidu.com.cn']
# 以"."为分隔符
print((u.split('.'))) # ['www', 'baidu', 'com', 'cn']
# 分割0次
print((u.split(".", 0))) # ['www.baidu.com.cn']
# 分割一次
print((u.split(".", 1))) # ['www', 'baidu.com.cn']
# 分割两次
print(u.split(".", 2)) # ['www', 'baidu', 'com.cn']
# 分割两次,并取序列为1的项
print((u.split(".", 2)[1])) # baidu
# 分割两次,并把分割后的三个部分保存到三个变量
u1, u2, u3 = u.split(".", 2)
print(u1) # www
print(u2) # baidu
print(u3) # com.cn
###Output
['www.baidu.com.cn']
['www', 'baidu', 'com', 'cn']
['www.baidu.com.cn']
['www', 'baidu.com.cn']
['www', 'baidu', 'com.cn']
baidu
www
baidu
com.cn
###Markdown
【例子】去掉换行符
###Code
c = '''say
hello
baby'''
print(c)
# say
# hello
# baby
print(c.split('\n')) # ['say', 'hello', 'baby']
###Output
say
hello
baby
['say', 'hello', 'baby']
###Markdown
【例子】
###Code
string = "hello boy<[www.baidu.com]>byebye"
print(string.split('[')[1].split(']')[0]) # www.baidu.com
print(string.split('[')[1].split(']')[0].split('.')) # ['www', 'baidu', 'com']
###Output
www.baidu.com
['www', 'baidu', 'com']
###Markdown
- `splitlines([keepends])` 按照行('\r', '\r\n', \n')分隔,返回一个包含各行作为元素的列表,如果参数`keepends`为 False,不包含换行符,如果为 True,则保留换行符。【例子】
###Code
str6 = 'I \n Love \n LsgoGroup'
print(str6.splitlines()) # ['I ', ' Love ', ' LsgoGroup']
print(str6.splitlines(True)) # ['I \n', ' Love \n', ' LsgoGroup']
###Output
['I ', ' Love ', ' LsgoGroup']
['I \n', ' Love \n', ' LsgoGroup']
###Markdown
- `maketrans(intab, outtab)` 创建字符映射的转换表,第一个参数是字符串,表示需要转换的字符,第二个参数也是字符串表示转换的目标。- `translate(table, deletechars="")` 根据参数`table`给出的表,转换字符串的字符,要过滤掉的字符放到`deletechars`参数中。【例子】
###Code
str7 = 'this is string example....wow!!!'
intab = 'aeiou'
outtab = '12345'
trantab = str7.maketrans(intab, outtab)
print(trantab) # {97: 49, 111: 52, 117: 53, 101: 50, 105: 51}
print(str7.translate(trantab)) # th3s 3s str3ng 2x1mpl2....w4w!!!
###Output
{97: 49, 101: 50, 105: 51, 111: 52, 117: 53}
th3s 3s str3ng 2x1mpl2....w4w!!!
###Markdown
4. 字符串格式化- `format` 格式化函数【例子】
###Code
str8 = "{0} Love {1}".format('I', 'Lsgogroup') # 位置参数
print(str8) # I Love Lsgogroup
str8 = "{a} Love {b}".format(a='I', b='Lsgogroup') # 关键字参数
print(str8) # I Love Lsgogroup
str8 = "{0} Love {b}".format('I', b='Lsgogroup') # 位置参数要在关键字参数之前
print(str8) # I Love Lsgogroup
str8 = '{0:.2f}{1}'.format(27.658, 'GB') # 保留小数点后两位
print(str8) # 27.66GB
###Output
I Love Lsgogroup
I Love Lsgogroup
I Love Lsgogroup
27.66GB
###Markdown
- Python 字符串格式化符号 符 号 | 描述:---:|:---%c | 格式化字符及其ASCII码%s | 格式化字符串,用str()方法处理对象%r | 格式化字符串,用rper()方法处理对象%d | 格式化整数%o | 格式化无符号八进制数%x | 格式化无符号十六进制数%X | 格式化无符号十六进制数(大写)%f | 格式化浮点数字,可指定小数点后的精度%e | 用科学计数法格式化浮点数%E | 作用同%e,用科学计数法格式化浮点数%g | 根据值的大小决定使用%f或%e%G | 作用同%g,根据值的大小决定使用%f或%E【例子】
###Code
print('%c' % 97) # a
print('%c %c %c' % (97, 98, 99)) # a b c
print('%d + %d = %d' % (4, 5, 9)) # 4 + 5 = 9
print("我叫 %s 今年 %d 岁!" % ('小明', 10)) # 我叫 小明 今年 10 岁!
print('%o' % 10) # 12
print('%x' % 10) # a
print('%X' % 10) # A
print('%f' % 27.658) # 27.658000
print('%e' % 27.658) # 2.765800e+01
print('%E' % 27.658) # 2.765800E+01
print('%g' % 27.658) # 27.658
text = "I am %d years old." % 22
print("I said: %s." % text) # I said: I am 22 years old..
print("I said: %r." % text) # I said: 'I am 22 years old.'
###Output
a
a b c
4 + 5 = 9
我叫 小明 今年 10 岁!
12
a
A
27.658000
2.765800e+01
2.765800E+01
27.658
I said: I am 22 years old..
I said: 'I am 22 years old.'.
###Markdown
- 格式化操作符辅助指令符号 | 功能:---:|:---`m.n` | m 是显示的最小总宽度,n 是小数点后的位数(如果可用的话)`-` | 用作左对齐`+` | 在正数前面显示加号( + )`` | 在八进制数前面显示零('0'),在十六进制前面显示'0x'或者'0X'(取决于用的是'x'还是'X')`0` | 显示的数字前面填充'0'而不是默认的空格【例子】
###Code
print('%5.1f' % 27.658) # ' 27.7'
print('%.2e' % 27.658) # 2.77e+01
print('%10d' % 10) # ' 10'
print('%-10d' % 10) # '10 '
print('%+d' % 10) # +10
print('%#o' % 10) # 0o12
print('%#x' % 108) # 0x6c
print('%010d' % 5) # 0000000005
###Output
27.7
2.77e+01
10
10
+10
0o12
0x6c
0000000005
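###Markdown
A small complementary sketch: the same width, precision, alignment and zero-padding directives also work inside `str.format()` format specs (values chosen to mirror the `%` examples above).
###Code
s1 = '{0:10.1f}'.format(27.658)  # '      27.7'
s2 = '{0:<10d}'.format(10)       # '10        ' (left aligned)
s3 = '{0:+d}'.format(10)         # '+10'
s4 = '{0:010d}'.format(5)        # '0000000005'
###Output
_____no_output_____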
###Markdown
字典 1. 可变类型与不可变类型- 序列是以连续的整数为索引,与此不同的是,字典以"关键字"为索引,关键字可以是任意不可变类型,通常用字符串或数值。- 字典是 Python 唯一的一个 映射类型,字符串、元组、列表属于序列类型。那么如何快速判断一个数据类型 `X` 是不是可变类型的呢?两种方法:- 麻烦方法:用 `id(X)` 函数,对 X 进行某种操作,比较操作前后的 `id`,如果不一样,则 `X` 不可变,如果一样,则 `X` 可变。- 便捷方法:用 `hash(X)`,只要不报错,证明 `X` 可被哈希,即不可变,反过来不可被哈希,即可变。【例子】
###Code
i = 1
print(id(i)) # 140732167000896
i = i + 2
print(id(i)) # 140732167000960
l = [1, 2]
print(id(l)) # 4300825160
l.append('Python')
print(id(l)) # 4300825160
###Output
140731832701760
140731832701824
2131670369800
2131670369800
###Markdown
- 整数 `i` 在加 1 之后的 `id` 和之前不一样,因此加完之后的这个 `i` (虽然名字没变),但不是加之前的那个 `i` 了,因此整数是不可变类型。- 列表 `l` 在附加 `'Python'` 之后的 `id` 和之前一样,因此列表是可变类型。【例子】
###Code
print(hash('Name')) # 7047218704141848153
print(hash((1, 2, 'Python'))) # 1704535747474881831
print(hash([1, 2, 'Python']))
# TypeError: unhashable type: 'list'
print(hash({1, 2, 3}))
# TypeError: unhashable type: 'set'
###Output
_____no_output_____
###Markdown
- 数值、字符和元组 都能被哈希,因此它们是不可变类型。- 列表、集合、字典不能被哈希,因此它是可变类型。 2. 字典的定义字典 是无序的 键:值(`key:value`)对集合,键必须是互不相同的(在同一个字典之内)。- `dict` 内部存放的顺序和 `key` 放入的顺序是没有关系的。- `dict` 查找和插入的速度极快,不会随着 `key` 的增加而增加,但是需要占用大量的内存。字典 定义语法为 `{元素1, 元素2, ..., 元素n}`- 其中每一个元素是一个「键值对」-- 键:值 (`key:value`)- 关键点是「大括号 {}」,「逗号 ,」和「冒号 :」- 大括号 -- 把所有元素绑在一起- 逗号 -- 将每个键值对分开- 冒号 -- 将键和值分开 3. 创建和访问字典【例子】
###Code
brand = ['李宁', '耐克', '阿迪达斯']
slogan = ['一切皆有可能', 'Just do it', 'Impossible is nothing']
print('耐克的口号是:', slogan[brand.index('耐克')])
# 耐克的口号是: Just do it
dic = {'李宁': '一切皆有可能', '耐克': 'Just do it', '阿迪达斯': 'Impossible is nothing'}
print('耐克的口号是:', dic['耐克'])
# 耐克的口号是: Just do it
###Output
耐克的口号是: Just do it
耐克的口号是: Just do it
###Markdown
【例子】通过字符串或数值作为`key`来创建字典。
###Code
dic1 = {1: 'one', 2: 'two', 3: 'three'}
print(dic1) # {1: 'one', 2: 'two', 3: 'three'}
print(dic1[1]) # one
print(dic1[4]) # KeyError: 4
dic2 = {'rice': 35, 'wheat': 101, 'corn': 67}
print(dic2) # {'wheat': 101, 'corn': 67, 'rice': 35}
print(dic2['rice']) # 35
###Output
{'rice': 35, 'wheat': 101, 'corn': 67}
35
###Markdown
注意:如果我们取的键在字典中不存在,会直接报错`KeyError`。【例子】通过元组作为`key`来创建字典,但一般不这样使用。
###Code
dic = {(1, 2, 3): "Tom", "Age": 12, 3: [3, 5, 7]}
print(dic) # {(1, 2, 3): 'Tom', 'Age': 12, 3: [3, 5, 7]}
print(type(dic)) # <class 'dict'>
###Output
{(1, 2, 3): 'Tom', 'Age': 12, 3: [3, 5, 7]}
<class 'dict'>
###Markdown
通过构造函数`dict`来创建字典。- `dict()` 创建一个空的字典。【例子】通过`key`直接把数据放入字典中,但一个`key`只能对应一个`value`,多次对一个`key`放入 `value`,后面的值会把前面的值冲掉。
###Code
dic = dict()
dic['a'] = 1
dic['b'] = 2
dic['c'] = 3
print(dic)
# {'a': 1, 'b': 2, 'c': 3}
dic['a'] = 11
print(dic)
# {'a': 11, 'b': 2, 'c': 3}
dic['d'] = 4
print(dic)
# {'a': 11, 'b': 2, 'c': 3, 'd': 4}
###Output
{'a': 1, 'b': 2, 'c': 3}
{'a': 11, 'b': 2, 'c': 3}
{'a': 11, 'b': 2, 'c': 3, 'd': 4}
###Markdown
- `dict(mapping)` new dictionary initialized from a mapping object's (key, value) pairs【例子】
###Code
dic1 = dict([('apple', 4139), ('peach', 4127), ('cherry', 4098)])
print(dic1) # {'cherry': 4098, 'apple': 4139, 'peach': 4127}
dic2 = dict((('apple', 4139), ('peach', 4127), ('cherry', 4098)))
print(dic2) # {'peach': 4127, 'cherry': 4098, 'apple': 4139}
###Output
{'apple': 4139, 'peach': 4127, 'cherry': 4098}
{'apple': 4139, 'peach': 4127, 'cherry': 4098}
###Markdown
- `dict(**kwargs)` -> new dictionary initialized with the name=value pairs in the keyword argument list. For example: dict(one=1, two=2)【例子】这种情况下,键只能为字符串类型,并且创建的时候字符串不能加引号,加上就会直接报语法错误。
###Code
dic = dict(name='Tom', age=10)
print(dic) # {'name': 'Tom', 'age': 10}
print(type(dic)) # <class 'dict'>
###Output
{'name': 'Tom', 'age': 10}
<class 'dict'>
###Markdown
4. 字典的内置方法- `dict.fromkeys(seq[, value])` 用于创建一个新字典,以序列 `seq` 中元素做字典的键,`value` 为字典所有键对应的初始值。【例子】
###Code
seq = ('name', 'age', 'sex')
dic1 = dict.fromkeys(seq)
print(dic1)
# {'name': None, 'age': None, 'sex': None}
dic2 = dict.fromkeys(seq, 10)
print(dic2)
# {'name': 10, 'age': 10, 'sex': 10}
dic3 = dict.fromkeys(seq, ('小马', '8', '男'))
print(dic3)
# {'name': ('小马', '8', '男'), 'age': ('小马', '8', '男'), 'sex': ('小马', '8', '男')}
###Output
{'name': None, 'age': None, 'sex': None}
{'name': 10, 'age': 10, 'sex': 10}
{'name': ('小马', '8', '男'), 'age': ('小马', '8', '男'), 'sex': ('小马', '8', '男')}
###Markdown
- `dict.keys()`返回一个可迭代对象,可以使用 `list()` 来转换为列表,列表为字典中的所有键。【例子】
###Code
dic = {'Name': 'lsgogroup', 'Age': 7}
print(dic.keys()) # dict_keys(['Name', 'Age'])
lst = list(dic.keys()) # 转换为列表
print(lst) # ['Name', 'Age']
###Output
dict_keys(['Name', 'Age'])
['Name', 'Age']
###Markdown
- `dict.values()`返回一个迭代器,可以使用 `list()` 来转换为列表,列表为字典中的所有值。【例子】
###Code
dic = {'Sex': 'female', 'Age': 7, 'Name': 'Zara'}
print(dic.values())
# dict_values(['female', 7, 'Zara'])
print(list(dic.values()))
# ['female', 7, 'Zara']
###Output
dict_values(['female', 7, 'Zara'])
['female', 7, 'Zara']
###Markdown
- `dict.items()`以列表返回可遍历的 (键, 值) 元组数组。【例子】
###Code
dic = {'Name': 'Lsgogroup', 'Age': 7}
print(dic.items())
# dict_items([('Name', 'Lsgogroup'), ('Age', 7)])
print(tuple(dic.items()))
# (('Name', 'Lsgogroup'), ('Age', 7))
print(list(dic.items()))
# [('Name', 'Lsgogroup'), ('Age', 7)]
###Output
dict_items([('Name', 'Lsgogroup'), ('Age', 7)])
(('Name', 'Lsgogroup'), ('Age', 7))
[('Name', 'Lsgogroup'), ('Age', 7)]
###Markdown
- `dict.get(key, default=None)` 返回指定键的值,如果值不在字典中返回默认值。【例子】
###Code
dic = {'Name': 'Lsgogroup', 'Age': 27}
print("Age 值为 : %s" % dic.get('Age')) # Age 值为 : 27
print("Sex 值为 : %s" % dic.get('Sex', "NA")) # Sex 值为 : NA
print(dic) # {'Name': 'Lsgogroup', 'Age': 27}
###Output
Age 值为 : 27
Sex 值为 : NA
{'Name': 'Lsgogroup', 'Age': 27}
###Markdown
- `dict.setdefault(key, default=None)`和`get()`方法 类似, 如果键不存在于字典中,将会添加键并将值设为默认值。【例子】
###Code
dic = {'Name': 'Lsgogroup', 'Age': 7}
print("Age 键的值为 : %s" % dic.setdefault('Age', None)) # Age 键的值为 : 7
print("Sex 键的值为 : %s" % dic.setdefault('Sex', None)) # Sex 键的值为 : None
print(dic)
# {'Age': 7, 'Name': 'Lsgogroup', 'Sex': None}
###Output
Age 键的值为 : 7
Sex 键的值为 : None
{'Name': 'Lsgogroup', 'Age': 7, 'Sex': None}
###Markdown
- `key in dict` `in` 操作符用于判断键是否存在于字典中,如果键在字典 dict 里返回`true`,否则返回`false`。而`not in`操作符刚好相反,如果键在字典 dict 里返回`false`,否则返回`true`。【例子】
###Code
dic = {'Name': 'Lsgogroup', 'Age': 7}
# in 检测键 Age 是否存在
if 'Age' in dic:
print("键 Age 存在")
else:
print("键 Age 不存在")
# 检测键 Sex 是否存在
if 'Sex' in dic:
print("键 Sex 存在")
else:
print("键 Sex 不存在")
# not in 检测键 Age 是否存在
if 'Age' not in dic:
print("键 Age 不存在")
else:
print("键 Age 存在")
# 键 Age 存在
# 键 Sex 不存在
# 键 Age 存在
###Output
键 Age 存在
键 Sex 不存在
键 Age 存在
###Markdown
- `dict.pop(key[,default])`删除字典给定键 `key` 所对应的值,返回值为被删除的值。`key` 值必须给出。若`key`不存在,则返回 `default` 值。- `del dict[key]` 删除字典给定键 `key` 所对应的值。【例子】
###Code
dic1 = {1: "a", 2: [1, 2]}
print(dic1.pop(1), dic1) # a {2: [1, 2]}
# 设置默认值,必须添加,否则报错
print(dic1.pop(3, "nokey"), dic1) # nokey {2: [1, 2]}
del dic1[2]
print(dic1) # {}
###Output
a {2: [1, 2]}
nokey {2: [1, 2]}
{}
###Markdown
- `dict.popitem()`随机返回并删除字典中的一对键和值,如果字典已经为空,却调用了此方法,就报出KeyError异常。【例子】
###Code
dic1 = {1: "a", 2: [1, 2]}
print(dic1.popitem()) # (2, [1, 2])
print(dic1) # {1: 'a'}
###Output
(2, [1, 2])
{1: 'a'}
###Markdown
- `dict.clear()`用于删除字典内所有元素。【例子】
###Code
dic = {'Name': 'Zara', 'Age': 7}
print("字典长度 : %d" % len(dic)) # 字典长度 : 2
dic.clear()
print("字典删除后长度 : %d" % len(dic))
# 字典删除后长度 : 0
###Output
字典长度 : 2
字典删除后长度 : 0
###Markdown
- `dict.copy()`返回一个字典的浅复制。【例子】
###Code
dic1 = {'Name': 'Lsgogroup', 'Age': 7, 'Class': 'First'}
dic2 = dic1.copy()
print("dic2")
# {'Age': 7, 'Name': 'Lsgogroup', 'Class': 'First'}
###Output
{'Name': 'Lsgogroup', 'Age': 7, 'Class': 'First'}
###Markdown
【例子】直接赋值和 copy 的区别
###Code
dic1 = {'user': 'lsgogroup', 'num': [1, 2, 3]}
# 引用对象
dic2 = dic1
# 浅拷贝父对象(一级目录),子对象(二级目录)不拷贝,还是引用
dic3 = dic1.copy()
print(id(dic1)) # 148635574728
print(id(dic2)) # 148635574728
print(id(dic3)) # 148635574344
# 修改 data 数据
dic1['user'] = 'root'
dic1['num'].remove(1)
# 输出结果
print(dic1) # {'user': 'root', 'num': [2, 3]}
print(dic2) # {'user': 'root', 'num': [2, 3]}
print(dic3) # {'user': 'lsgogroup', 'num': [2, 3]}
###Output
2131669221448
2131669221448
2131669225120
{'user': 'root', 'num': [2, 3]}
{'user': 'root', 'num': [2, 3]}
{'user': 'lsgogroup', 'num': [2, 3]}
###Markdown
- `dict.update(dict2)`把字典参数 `dict2` 的 `key:value`对 更新到字典 `dict` 里。【例子】
###Code
dic = {'Name': 'Lsgogroup', 'Age': 7}
dic2 = {'Sex': 'female', 'Age': 8}
dic.update(dic2)
print(dic)
# {'Sex': 'female', 'Age': 8, 'Name': 'Lsgogroup'}
###Output
{'Name': 'Lsgogroup', 'Age': 8, 'Sex': 'female'}
###Markdown
集合Python 中`set`与`dict`类似,也是一组`key`的集合,但不存储`value`。由于`key`不能重复,所以,在`set`中,没有重复的`key`。注意,`key`为不可变类型,即可哈希的值。【例子】
###Code
num = {}
print(type(num)) # <class 'dict'>
num = {1, 2, 3, 4}
print(type(num)) # <class 'set'>
###Output
<class 'dict'>
<class 'set'>
###Markdown
1. 集合的创建- 先创建对象再加入元素。- 在创建空集合的时候只能使用`s = set()`,因为`s = {}`创建的是空字典。【例子】
###Code
basket = set()
basket.add('apple')
basket.add('banana')
print(basket) # {'banana', 'apple'}
###Output
{'banana', 'apple'}
###Markdown
- 直接把一堆元素用花括号括起来`{元素1, 元素2, ..., 元素n}`。- 重复元素在`set`中会被自动被过滤。【例子】
###Code
basket = {'apple', 'orange', 'apple', 'pear', 'orange', 'banana'}
print(basket) # {'banana', 'apple', 'pear', 'orange'}
###Output
{'pear', 'orange', 'banana', 'apple'}
###Markdown
- 使用`set(value)`工厂函数,把列表或元组转换成集合。【例子】
###Code
a = set('abracadabra')
print(a)
# {'r', 'b', 'd', 'c', 'a'}
b = set(("Google", "Lsgogroup", "Taobao", "Taobao"))
print(b)
# {'Taobao', 'Lsgogroup', 'Google'}
c = set(["Google", "Lsgogroup", "Taobao", "Google"])
print(c)
# {'Taobao', 'Lsgogroup', 'Google'}
###Output
{'b', 'r', 'a', 'c', 'd'}
{'Taobao', 'Google', 'Lsgogroup'}
{'Taobao', 'Google', 'Lsgogroup'}
###Markdown
【例子】去掉列表中重复的元素
###Code
lst = [0, 1, 2, 3, 4, 5, 5, 3, 1]
temp = []
for item in lst:
if item not in temp:
temp.append(item)
print(temp) # [0, 1, 2, 3, 4, 5]
a = set(lst)
print(list(a)) # [0, 1, 2, 3, 4, 5]
###Output
[0, 1, 2, 3, 4, 5]
[0, 1, 2, 3, 4, 5]
###Markdown
从结果发现集合的两个特点:无序 (unordered) 和唯一 (unique)。由于 `set` 存储的是无序集合,所以我们不可以为集合创建索引或执行切片(slice)操作,也没有键(keys)可用来获取集合中元素的值,但是可以判断一个元素是否在集合中。 2. 访问集合中的值- 可以使用`len()`內建函数得到集合的大小。【例子】
###Code
s = set(['Google', 'Baidu', 'Taobao'])
print(len(s)) # 3
###Output
3
###Markdown
- 可以使用`for`把集合中的数据一个个读取出来。【例子】
###Code
s = set(['Google', 'Baidu', 'Taobao'])
for item in s:
print(item)
# Baidu
# Google
# Taobao
###Output
Baidu
Taobao
Google
###Markdown
- 可以通过`in`或`not in`判断一个元素是否在集合中已经存在【例子】
###Code
s = set(['Google', 'Baidu', 'Taobao'])
print('Taobao' in s) # True
print('Facebook' not in s) # True
###Output
True
True
###Markdown
3. 集合的内置方法- `set.add(elmnt)`用于给集合添加元素,如果添加的元素在集合中已存在,则不执行任何操作。【例子】
###Code
fruits = {"apple", "banana", "cherry"}
fruits.add("orange")
print(fruits)
# {'orange', 'cherry', 'banana', 'apple'}
fruits.add("apple")
print(fruits)
# {'orange', 'cherry', 'banana', 'apple'}
###Output
{'cherry', 'orange', 'banana', 'apple'}
{'cherry', 'orange', 'banana', 'apple'}
###Markdown
- `set.update(set)`用于修改当前集合,可以添加新的元素或集合到当前集合中,如果添加的元素在集合中已存在,则该元素只会出现一次,重复的会忽略。【例子】
###Code
x = {"apple", "banana", "cherry"}
y = {"google", "baidu", "apple"}
x.update(y)
print(x)
# {'cherry', 'banana', 'apple', 'google', 'baidu'}
y.update(["lsgo", "dreamtech"])
print(y)
# {'lsgo', 'baidu', 'dreamtech', 'apple', 'google'}
###Output
{'google', 'banana', 'cherry', 'apple', 'baidu'}
{'apple', 'dreamtech', 'lsgo', 'google', 'baidu'}
###Markdown
- `set.remove(item)` 用于移除集合中的指定元素。如果元素不存在,则会发生错误。【例子】
###Code
fruits = {"apple", "banana", "cherry"}
fruits.remove("banana")
print(fruits) # {'apple', 'cherry'}
###Output
{'cherry', 'apple'}
###Markdown
- `set.discard(value)` 用于移除指定的集合元素。`remove()` 方法在移除一个不存在的元素时会发生错误,而 `discard()` 方法不会。【例子】
###Code
fruits = {"apple", "banana", "cherry"}
fruits.discard("banana")
print(fruits) # {'apple', 'cherry'}
###Output
{'cherry', 'apple'}
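###Markdown
A minimal sketch of the difference described above: `discard()` silently ignores a missing element, while `remove()` would raise a `KeyError` for the same call (left commented out so the cell still runs).
###Code
fruits = {"apple", "banana", "cherry"}
fruits.discard("watermelon")   # not in the set: no error, fruits is unchanged
# fruits.remove("watermelon")  # uncommenting this would raise KeyError: 'watermelon'
###Output
_____no_output_____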
###Markdown
- `set.pop()` 用于随机移除一个元素。【例子】
###Code
fruits = {"apple", "banana", "cherry"}
x = fruits.pop()
print(fruits) # {'cherry', 'apple'}
print(x) # banana
###Output
{'banana', 'apple'}
cherry
###Markdown
由于 set 是无序和无重复元素的集合,所以两个或多个 set 可以做数学意义上的集合操作。- `set.intersection(set1, set2)` 返回两个集合的交集。- `set1 & set2` 返回两个集合的交集。- `set.intersection_update(set1, set2)` 交集,在原始的集合上移除不重叠的元素。【例子】
###Code
a = set('abracadabra')
b = set('alacazam')
print(a) # {'r', 'a', 'c', 'b', 'd'}
print(b) # {'c', 'a', 'l', 'm', 'z'}
c = a.intersection(b)
print(c) # {'a', 'c'}
print(a & b) # {'c', 'a'}
print(a) # {'a', 'r', 'c', 'b', 'd'}
a.intersection_update(b)
print(a) # {'a', 'c'}
###Output
{'b', 'r', 'a', 'c', 'd'}
{'l', 'a', 'c', 'z', 'm'}
{'a', 'c'}
{'a', 'c'}
{'b', 'r', 'a', 'c', 'd'}
{'a', 'c'}
###Markdown
- `set.union(set1, set2)` 返回两个集合的并集。- `set1 | set2` 返回两个集合的并集。【例子】
###Code
a = set('abracadabra')
b = set('alacazam')
print(a) # {'r', 'a', 'c', 'b', 'd'}
print(b) # {'c', 'a', 'l', 'm', 'z'}
print(a | b)
# {'l', 'd', 'm', 'b', 'a', 'r', 'z', 'c'}
c = a.union(b)
print(c)
# {'c', 'a', 'd', 'm', 'r', 'b', 'z', 'l'}
###Output
{'b', 'r', 'a', 'c', 'd'}
{'l', 'a', 'c', 'z', 'm'}
{'l', 'b', 'r', 'a', 'c', 'z', 'd', 'm'}
{'l', 'b', 'r', 'a', 'c', 'z', 'd', 'm'}
###Markdown
- `set.difference(set)` 返回集合的差集。- `set1 - set2` 返回集合的差集。- `set.difference_update(set)` 集合的差集,直接在原来的集合中移除元素,没有返回值。【例子】
###Code
a = set('abracadabra')
b = set('alacazam')
print(a) # {'r', 'a', 'c', 'b', 'd'}
print(b) # {'c', 'a', 'l', 'm', 'z'}
c = a.difference(b)
print(c) # {'b', 'd', 'r'}
print(a - b) # {'d', 'b', 'r'}
print(a) # {'r', 'd', 'c', 'a', 'b'}
a.difference_update(b)
print(a) # {'d', 'r', 'b'}
###Output
{'b', 'r', 'a', 'c', 'd'}
{'l', 'a', 'c', 'z', 'm'}
{'d', 'b', 'r'}
{'d', 'b', 'r'}
{'b', 'r', 'a', 'c', 'd'}
{'b', 'r', 'd'}
###Markdown
- `set.symmetric_difference(set)`返回集合的异或。- `set1 ^ set2` 返回集合的异或。- `set.symmetric_difference_update(set)`移除当前集合中在另外一个指定集合相同的元素,并将另外一个指定集合中不同的元素插入到当前集合中。【例子】
###Code
a = set('abracadabra')
b = set('alacazam')
print(a) # {'r', 'a', 'c', 'b', 'd'}
print(b) # {'c', 'a', 'l', 'm', 'z'}
c = a.symmetric_difference(b)
print(c) # {'m', 'r', 'l', 'b', 'z', 'd'}
print(a ^ b) # {'m', 'r', 'l', 'b', 'z', 'd'}
print(a) # {'r', 'd', 'c', 'a', 'b'}
a.symmetric_difference_update(b)
print(a) # {'r', 'b', 'm', 'l', 'z', 'd'}
###Output
{'b', 'r', 'a', 'c', 'd'}
{'l', 'a', 'c', 'z', 'm'}
{'l', 'b', 'z', 'r', 'd', 'm'}
{'l', 'b', 'z', 'r', 'd', 'm'}
{'b', 'r', 'a', 'c', 'd'}
{'l', 'b', 'r', 'z', 'd', 'm'}
###Markdown
- `set.issubset(set)`判断集合是不是被其他集合包含,如果是则返回 True,否则返回 False。- `set1 <= set2` 判断集合是不是被其他集合包含,如果是则返回 True,否则返回 False。【例子】
###Code
x = {"a", "b", "c"}
y = {"f", "e", "d", "c", "b", "a"}
z = x.issubset(y)
print(z) # True
print(x <= y) # True
x = {"a", "b", "c"}
y = {"f", "e", "d", "c", "b"}
z = x.issubset(y)
print(z) # False
print(x <= y) # False
###Output
True
True
False
False
###Markdown
- `set.issuperset(set)`用于判断集合是不是包含其他集合,如果是则返回 True,否则返回 False。- `set1 >= set2` 判断集合是不是包含其他集合,如果是则返回 True,否则返回 False。【例子】
###Code
x = {"f", "e", "d", "c", "b", "a"}
y = {"a", "b", "c"}
z = x.issuperset(y)
print(z) # True
print(x >= y) # True
x = {"f", "e", "d", "c", "b"}
y = {"a", "b", "c"}
z = x.issuperset(y)
print(z) # False
print(x >= y) # False
###Output
True
True
False
False
###Markdown
- `set.isdisjoint(set)` checks whether two sets are disjoint, returning True if they are and False otherwise. [Example]
###Code
x = {"f", "e", "d", "c", "b"}
y = {"a", "b", "c"}
z = x.isdisjoint(y)
print(z) # False
x = {"f", "e", "d", "m", "g"}
y = {"a", "b", "c"}
z = x.isdisjoint(y)
print(z) # True
###Output
False
True
###Markdown
4. Set conversion [Example]
###Code
se = set(range(4))
li = list(se)
tu = tuple(se)
print(se, type(se)) # {0, 1, 2, 3} <class 'set'>
print(li, type(li)) # [0, 1, 2, 3] <class 'list'>
print(tu, type(tu)) # (0, 1, 2, 3) <class 'tuple'>
###Output
{0, 1, 2, 3} <class 'set'>
[0, 1, 2, 3] <class 'list'>
(0, 1, 2, 3) <class 'tuple'>
###Markdown
5. Immutable sets. Python also provides an immutable version of the set, i.e. a set whose elements cannot be added or removed; the type is called `frozenset`. Note that a `frozenset` still supports the usual set operations, it just cannot use the `update`-style methods (a short demonstration follows the example below).- `frozenset([iterable])` returns a frozen set; once frozen, no elements can be added to or removed from it. [Example]
###Code
a = frozenset(range(10)) # 生成一个新的不可变集合
print(a)
# frozenset({0, 1, 2, 3, 4, 5, 6, 7, 8, 9})
b = frozenset('lsgogroup')
print(b)
# frozenset({'g', 's', 'p', 'r', 'u', 'o', 'l'})
###Output
frozenset({0, 1, 2, 3, 4, 5, 6, 7, 8, 9})
frozenset({'l', 'g', 'r', 'u', 'o', 's', 'p'})
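###Markdown
A short added demonstration (illustrative values, not from the original tutorial): a `frozenset` supports the ordinary set operations but rejects in-place updates. For example, with `fs = frozenset({1, 2, 3})`, `fs | {3, 4}` returns `frozenset({1, 2, 3, 4})` and `fs.intersection({2, 3})` returns `frozenset({2, 3})`, while `fs.add(4)` raises `AttributeError: 'frozenset' object has no attribute 'add'`.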
###Markdown
Sequences. In Python the sequence types include strings, lists, tuples, sets and dictionaries. These sequences support a number of common operations; the special cases are sets and dictionaries, which do not support indexing, slicing, concatenation (+) or repetition (*) (a short demonstration follows the example below). 1. Built-in functions for sequences- `list(sub)` converts an iterable into a list. [Example]
###Code
a = list()
print(a) # []
b = 'I Love LsgoGroup'
b = list(b)
print(b)
# ['I', ' ', 'L', 'o', 'v', 'e', ' ', 'L', 's', 'g', 'o', 'G', 'r', 'o', 'u', 'p']
c = (1, 1, 2, 3, 5, 8)
c = list(c)
print(c) # [1, 1, 2, 3, 5, 8]
###Output
[]
['I', ' ', 'L', 'o', 'v', 'e', ' ', 'L', 's', 'g', 'o', 'G', 'r', 'o', 'u', 'p']
[1, 1, 2, 3, 5, 8]
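###Markdown
A short added demonstration of the caveat above (illustrative values): sets and dictionaries are not indexable and do not support `+` or `*`. For example, `{1, 2, 3}[0]` raises `TypeError: 'set' object is not subscriptable`, and `{1, 2} + {3}` raises `TypeError: unsupported operand type(s) for +: 'set' and 'set'`.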
###Markdown
- `tuple(sub)` converts an iterable into a tuple. [Example]
###Code
a = tuple()
print(a) # ()
b = 'I Love LsgoGroup'
b = tuple(b)
print(b)
# ('I', ' ', 'L', 'o', 'v', 'e', ' ', 'L', 's', 'g', 'o', 'G', 'r', 'o', 'u', 'p')
c = [1, 1, 2, 3, 5, 8]
c = tuple(c)
print(c) # (1, 1, 2, 3, 5, 8)
###Output
()
('I', ' ', 'L', 'o', 'v', 'e', ' ', 'L', 's', 'g', 'o', 'G', 'r', 'o', 'u', 'p')
(1, 1, 2, 3, 5, 8)
###Markdown
- `str(obj)` converts the object obj into a string. [Example]
###Code
a = 123
a = str(a)
print(a) # 123
###Output
123
###Markdown
- `len(s)` returns the length (number of elements) of an object (string, list, tuple, etc.). - `s` -- the object. [Example]
###Code
a = list()
print(len(a)) # 0
b = ('I', ' ', 'L', 'o', 'v', 'e', ' ', 'L', 's', 'g', 'o', 'G', 'r', 'o', 'u', 'p')
print(len(b)) # 16
c = 'I Love LsgoGroup'
print(len(c)) # 16
###Output
0
16
16
###Markdown
- `max(sub)` returns the maximum value in a sequence or among the arguments. [Example]
###Code
print(max(1, 2, 3, 4, 5)) # 5
print(max([-8, 99, 3, 7, 83])) # 99
print(max('IloveLsgoGroup')) # v
###Output
5
99
v
###Markdown
- `min(sub)` returns the minimum value in a sequence or among the arguments. [Example]
###Code
print(min(1, 2, 3, 4, 5)) # 1
print(min([-8, 99, 3, 7, 83])) # -8
print(min('IloveLsgoGroup')) # G
###Output
1
-8
G
###Markdown
- `sum(iterable[, start=0])` returns the sum of the sequence `iterable` plus the optional `start` value. [Example]
###Code
print(sum([1, 3, 5, 7, 9])) # 25
print(sum([1, 3, 5, 7, 9], 10)) # 35
print(sum((1, 3, 5, 7, 9))) # 25
print(sum((1, 3, 5, 7, 9), 20)) # 45
###Output
25
35
25
45
###Markdown
- `sorted(iterable, key=None, reverse=False)` sorts any iterable. - `iterable` -- the iterable object. - `key` -- a one-argument function used to extract the comparison key from each element of the iterable. - `reverse` -- sort order: `reverse = True` for descending, `reverse = False` for ascending (default). - Returns a new sorted list. [Example]
###Code
x = [-8, 99, 3, 7, 83]
print(sorted(x)) # [-8, 3, 7, 83, 99]
print(sorted(x, reverse=True)) # [99, 83, 7, 3, -8]
t = ({"age": 20, "name": "a"}, {"age": 25, "name": "b"}, {"age": 10, "name": "c"})
x = sorted(t, key=lambda a: a["age"])
print(x)
# [{'age': 10, 'name': 'c'}, {'age': 20, 'name': 'a'}, {'age': 25, 'name': 'b'}]
###Output
[-8, 3, 7, 83, 99]
[99, 83, 7, 3, -8]
[{'age': 10, 'name': 'c'}, {'age': 20, 'name': 'a'}, {'age': 25, 'name': 'b'}]
###Markdown
- `reversed(seq)` returns a reversed iterator. - `seq` -- the sequence to reverse; it can be a tuple, string, list or range. [Example]
###Code
s = 'lsgogroup'
x = reversed(s)
print(type(x)) # <class 'reversed'>
print(x) # <reversed object at 0x000002507E8EC2C8>
print(list(x))
# ['p', 'u', 'o', 'r', 'g', 'o', 'g', 's', 'l']
t = ('l', 's', 'g', 'o', 'g', 'r', 'o', 'u', 'p')
print(list(reversed(t)))
# ['p', 'u', 'o', 'r', 'g', 'o', 'g', 's', 'l']
r = range(5, 9)
print(list(reversed(r)))
# [8, 7, 6, 5]
x = [-8, 99, 3, 7, 83]
print(list(reversed(x)))
# [83, 7, 3, 99, -8]
###Output
<class 'reversed'>
<reversed object at 0x000001F0517DFD68>
['p', 'u', 'o', 'r', 'g', 'o', 'g', 's', 'l']
['p', 'u', 'o', 'r', 'g', 'o', 'g', 's', 'l']
[8, 7, 6, 5]
[83, 7, 3, 99, -8]
###Markdown
- `enumerate(sequence, [start=0])` combines an iterable (such as a list, tuple or string) into an indexed sequence, yielding both the index and the value; it is typically used in a for loop. [Example]
###Code
seasons = ['Spring', 'Summer', 'Fall', 'Winter']
a = list(enumerate(seasons))
print(a)
# [(0, 'Spring'), (1, 'Summer'), (2, 'Fall'), (3, 'Winter')]
b = list(enumerate(seasons, 1))
print(b)
# [(1, 'Spring'), (2, 'Summer'), (3, 'Fall'), (4, 'Winter')]
for i, element in a:
print('{0},{1}'.format(i, element))
# 0,Spring
# 1,Summer
# 2,Fall
# 3,Winter
###Output
[(0, 'Spring'), (1, 'Summer'), (2, 'Fall'), (3, 'Winter')]
[(1, 'Spring'), (2, 'Summer'), (3, 'Fall'), (4, 'Winter')]
0,Spring
1,Summer
2,Fall
3,Winter
###Markdown
- `zip(iter1 [,iter2 [...]])` - takes iterables as arguments and packs their corresponding elements into tuples, returning an object made of those tuples, which saves memory. - `list()` can be used to materialize the result as a list. - If the iterables have different lengths, the result is only as long as the shortest one; the `*` operator can be used to unzip back into separate sequences. [Example]
###Code
a = [1, 2, 3]
b = [4, 5, 6]
c = [4, 5, 6, 7, 8]
zipped = zip(a, b)
print(zipped) # <zip object at 0x000000C5D89EDD88>
print(list(zipped)) # [(1, 4), (2, 5), (3, 6)]
zipped = zip(a, c)
print(list(zipped)) # [(1, 4), (2, 5), (3, 6)]
a1, a2 = zip(*zip(a, b))
print(list(a1)) # [1, 2, 3]
print(list(a2)) # [4, 5, 6]
###Output
<zip object at 0x000001F0517E38C8>
[(1, 4), (2, 5), (3, 6)]
[(1, 4), (2, 5), (3, 6)]
[1, 2, 3]
[4, 5, 6]
|
Reinforcement Learning/Reinforcement_Learning.ipynb | ###Markdown
Reinforcement LearningSaket TiwariDate: 08 July 2019
###Code
# Here we teach the machine which action to perform and when.
# In reinforcement learning we do not know the correct output; we only tell the model, via a reward, whether the action it took was good or bad.
# Performing an action in state t moves the agent to state (t+1), and the reward we receive is the reward of state (t+1).
# alpha - learning rate
# gamma - discount factor
# epsilon - exploration rate
# If the game grid (5x4) has 20 positions, the Q-table has 20 rows.
# Q-learning -> Q-table -> rows describe states, columns describe actions.
# Formula -> Q(s_t, a_t) = r_(t+1) + gamma * r_(t+2) + gamma^2 * r_(t+3) + ...
# Rewards further in the future are discounted because we are not certain
# ..that we will actually receive them.
# We are concerned with the overall (cumulative) reward, not just the immediate reward.
# Q(s_t, a_t) = r_(t+1) + gamma * r_(t+2) + gamma^2 * r_(t+3) + ...
# Q(s_(t+1), a_(t+1)) = r_(t+2) + gamma * r_(t+3) + gamma^2 * r_(t+4) + ...
# Therefore Q(s_t, a_t) = r_(t+1) + gamma * Q(s_(t+1), a_(t+1))
# or Q(s_t, a_t) = r_(t+1) + gamma * Q_max(s_(t+1)), where Q_max is the best reward obtainable from the next state onwards
# The exploration rate makes the agent explore other states in search of higher reward.
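# A small worked example of the tabular Q-update implemented below (illustrative numbers only):
#   Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
#   e.g. with Q(s,a)=0.5, r=1.0, gamma=0.97, max_a' Q(s',a')=0.8 and alpha=0.01:
#   Q(s,a) <- 0.5 + 0.01 * (1.0 + 0.97*0.8 - 0.5) = 0.51276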
import gym
import random
import numpy as np
import time
from IPython.display import clear_output
# frozen ice -> slippery
from gym.envs.registration import register
# See https://github.com/openai/gym/blob/master/gym/envs/__init__.py — we register a variant below to turn off the randomness (slipperiness)
try:
register(
id='FrozenLakeNoSlip-v0',
entry_point='gym.envs.toy_text:FrozenLakeEnv',
kwargs={'map_name' : '4x4', 'is_slippery':False},
max_episode_steps=100,
reward_threshold=0.78, # optimum = .8196
)
except:
pass
env_name = 'FrozenLakeNoSlip-v0'
env= gym.make(env_name)
env.observation_space
type(env.action_space)
class Agent():
def __init__(self, env):
self.isDiscrete =( type( env.action_space)) ==gym.spaces.discrete.Discrete
def get_action(self,state): # choose an action depending on whether the space is discrete or continuous
if self.isDiscrete:
action = random.choice(range(env.action_space.n))
else:
action = np.random.uniform( env.action_space.low,
env.action_space.high,
env.action_space.shape) # because the action space is continuous
return action
agent= Agent(env)
total_reward=0
for ep in range(10):
state= env.reset() # start a fresh episode
done=False # the episode is not over yet
while not done:
action= agent.get_action(state)
state, reward, done, info = env.step(action)
env.render()
time.sleep(0.05)
clear_output(wait=True)
env.close()
###Output
(Up)
SFFF
F[41mH[0mFH
FFFH
HFFG
###Markdown
Q-Learning
###Code
class Agent():
def __init__(self, env, discountedRate=0.97, learningRate=0.01):
self.isDiscrete =( type( env.action_space)) ==gym.spaces.discrete.Discrete
self.stateSize=env.observation_space.n
self.actionSize = env.action_space.n
self.explorationRate=1.0
self.discountedRate= discountedRate
self.learningRate= learningRate
self.qTable= 1e-4*np.random.random([self.stateSize, self.actionSize])
def get_action(self,state): # choose an action depending on whether the space is discrete or continuous
qState= self.qTable[state] # the row of Q-values for the current state
actionGreedy=np.argmax(qState)
actionRandom=env.action_space.sample()
return actionRandom if random.random()<self.explorationRate else actionGreedy
def train(self, state, action, next_state , reward, done): #(state, action, next_state , reward, done) is called experience
qStateNext=self.qTable[next_state]
if done:
qStateNext = np.zeros([self.actionSize])
else:
qStateNext= self.qTable[next_state]
qTarget = reward + self.discountedRate*np.max(qStateNext)
qUpdate = qTarget - self.qTable[state, action]
self.qTable[state, action]+=self.learningRate*qUpdate
if done:
self.explorationRate*=0.99
agent= Agent(env)
total_reward=0
for ep in range(100):
state= env.reset() # start a fresh episode
done=False # the episode is not over yet
while not done:
action= agent.get_action(state)
nextState, reward, done, info = env.step(action)
agent.train(state,action, nextState,reward,done)
state=nextState
total_reward+=reward
print(f"s:{state},a:{action}")
print(f'Ep:{ep},Goals:{total_reward}, Explore:{agent.explorationRate}')
env.render()
print(agent.qTable)
time.sleep(0.05)
clear_output(wait=True)
env.close()
###Output
s:15,a:2
Ep:99,Goals:10.0, Explore:0.36603234127322926
(Right)
SFFF
FHFH
FFFH
HFF[41mG[0m
[[8.38348880e-05 8.50280486e-05 4.32886961e-05 4.42219907e-05]
[6.16248146e-05 1.18628906e-05 7.60024896e-05 6.51180528e-05]
[4.49707679e-05 8.35571330e-05 6.88482052e-05 6.91467659e-05]
[4.92823556e-05 2.77991778e-05 9.28140945e-05 1.45866569e-05]
[8.60915995e-05 8.31077326e-05 6.30356860e-05 2.76906476e-05]
[5.24200642e-05 7.99565469e-06 1.45887604e-05 8.36816923e-06]
[5.36613061e-05 4.59071319e-05 7.77647646e-05 1.16657799e-05]
[6.52489991e-06 6.40866490e-05 4.63366484e-06 3.68075341e-05]
[2.94979113e-05 6.83794444e-05 9.25961514e-05 4.50018394e-05]
[4.37225205e-05 9.20260749e-05 2.61571505e-04 9.26948989e-05]
[6.00667802e-05 3.99719271e-03 8.61110515e-05 8.38828547e-05]
[7.53533889e-05 6.23302883e-05 1.81477396e-05 5.47302525e-05]
[8.07319327e-05 2.16822646e-05 8.56134600e-05 1.20086852e-05]
[5.00770479e-05 1.60978761e-07 1.19930572e-03 1.83869439e-06]
[9.53734872e-06 9.91226337e-04 9.56840236e-02 7.41995210e-05]
[6.82004625e-05 2.74506687e-05 7.50571792e-05 8.70376562e-05]]
###Markdown
Q-Learning with Neural Networks
###Code
import tensorflow as tf
class Agent():
def __init__(self, env, discountedRate=0.97, learningRate=0.01):
self.isDiscrete =( type( env.action_space)) ==gym.spaces.discrete.Discrete
self.stateSize=env.observation_space.n
self.actionSize = env.action_space.n
self.explorationRate=1.0
self.discountedRate= discountedRate
self.learningRate= learningRate
tf.reset_default_graph()
#creating variables
self.stateIn =tf.placeholder(tf.int32,shape=[1])
self.actionIn =tf.placeholder(tf.int32,shape=[1])
self.targetIn =tf.placeholder(tf.float32,shape=[1])
self.state =tf.one_hot(self.stateIn, depth=self.stateSize)
self.action =tf.one_hot(self.actionIn, depth=self.actionSize)
self.qState= tf.layers.dense(self.state, units= env.action_space.n, name='Qtable')
self.qAction = tf.reduce_sum(tf.multiply(self.qState, self.action), axis=1) #Scalar
self.loss=tf.reduce_sum(tf.square(self.targetIn-self.qAction))
self.optimizer=tf.train.AdamOptimizer(self.learningRate).minimize(self.loss)
self.sess=tf.Session()
self.sess.run(tf.global_variables_initializer())
def get_action(self,state): # choose an action depending on whether the space is discrete or continuous
qState= self.sess.run(self.qState,feed_dict= {self.stateIn:[state]}) # the row of Q-values for the current state
actionGreedy=np.argmax(qState)
actionRandom=env.action_space.sample()
return actionRandom if random.random()<self.explorationRate else actionGreedy
def train(self, state, action, next_state , reward, done): #(state, action, next_state , reward, done) is called experience
if done:
qStateNext = np.zeros([self.actionSize])
else:
qStateNext= self.sess.run(self.qState, feed_dict= {self.stateIn:[next_state] })
qTarget = reward + self.discountedRate*np.max(qStateNext)
feed ={ self.stateIn:[state],self.actionIn:[action], self.targetIn:[qTarget]}
self.sess.run(self.optimizer, feed_dict=feed)
if done:
self.explorationRate*=0.99
def __del__(self):
self.sess.close()
agent= Agent(env)
total_reward=0
for ep in range(100):
state= env.reset() # start a fresh episode
done=False # the episode is not over yet
while not done:
action= agent.get_action(state)
nextState, reward, done, info = env.step(action)
agent.train(state,action, nextState,reward,done)
state=nextState
total_reward+=reward
print(f"s:{state},a:{action}")
print(f'Ep:{ep},Goals:{total_reward}, Explore:{agent.explorationRate}')
env.render()
with tf.variable_scope('Qtable', reuse=True): #if reuse=True weights will not reset.
weights=agent.sess.run(tf.get_variable('kernel'))
print(weights)
time.sleep(0.05)
clear_output(wait=True)
env.close()
###Output
s:15,a:2
Ep:99,Goals:60.0, Explore:0.13397967485796175
(Right)
SFFF
FHFH
FFFH
HFF[41mG[0m
[[ 3.0034229e-01 1.5514053e-01 9.9149972e-02 1.4712311e-03]
[ 2.9294816e-01 -5.4990214e-01 1.2902808e-01 1.3375770e-01]
[ 3.4233361e-01 3.5908777e-01 5.2457903e-02 1.6361105e-01]
[ 3.2497284e-01 -3.1679502e-01 -5.6305647e-02 -3.0368692e-03]
[ 5.7104278e-02 1.4110394e-01 -6.3415742e-01 8.9487545e-02]
[ 6.9905162e-02 1.0132694e-01 -1.0720506e-01 -7.2537959e-02]
[-5.9418625e-01 3.7649924e-01 -7.7506649e-01 1.7263173e-01]
[-9.5067620e-03 -2.0048982e-01 -4.7103602e-01 -1.1449993e-01]
[ 1.5990813e-01 -3.8714001e-01 -1.7517290e-04 4.4217922e-02]
[ 1.5699953e-02 3.8959095e-01 -3.4888476e-02 -5.6483757e-01]
[ 4.2141238e-01 4.1830918e-01 -4.9695361e-01 1.9285630e-01]
[-1.0826334e-01 -2.2063705e-01 2.0794761e-01 -1.6201201e-01]
[ 4.8506117e-01 1.9182444e-01 4.3048787e-01 1.8552858e-01]
[-6.5426648e-01 1.4091173e-01 2.1401879e-01 1.7491673e-01]
[ 4.3396807e-01 4.3527949e-01 2.4507011e-01 2.1868190e-01]
[ 2.3545861e-01 2.2130251e-02 2.1259862e-01 4.4044465e-01]]
|
Projeto1-Buscas/.ipynb_checkpoints/city-path-checkpoint.ipynb | ###Markdown
Part 1: Path between cities--- Dependencies Before running this notebook, you may need to install the dependencies on your system beforehand. This can be done with the following commands:```bashpip install --user pandaspip install --user tqdm```
###Code
import pandas as pd
import heapq as heap
from math import sqrt
from tqdm import tqdm
###Output
_____no_output_____
###Markdown
Importing the data
###Code
# Import
australia = pd.read_csv('australia.csv', sep = ",")
# Cleaning
australia = australia[['id', 'city', 'lat', 'lng']]
# Printing
australia
###Output
_____no_output_____
###Markdown
Pre-Game Data Structures
###Code
class City:
def __init__(self, id, name, lat, lng):
self.id = id
self.name = name
self.lat = lat
self.lng = lng
self.roads = set()
self.weight = None
def successors(self):
for road in self.roads:
yield road.city, road.length
class Road:
def __init__(self, city, length):
self.city = city
self.length = length
def add_road_between(city1, city2):
distance = 1.1 * sqrt((city1.lat - city2.lat)**2 + (city1.lng - city2.lng)**2)
# city1 -> city2
road1 = Road(city2, distance)
city1.roads.add(road1)
# city2 -> city1
road2 = Road(city1, distance)
city2.roads.add(road2)
###Output
_____no_output_____
###Markdown
Populating the data structures
###Code
cities = [None] # None just fills in id = 0 (an alternative would be to work with `index - 1`)
# Cities
for index, row in australia.iterrows():
city = City(row['id'], row['city'], row['lat'], row['lng'])
cities.append(city)
# Roads
for city in cities:
if city:
if city.id % 2 == 0:
# Road to (x - 1)
add_road_between(city, cities[city.id - 1])
# Road to (x + 2) if it exists
if city.id + 2 < len(cities):
add_road_between(city, cities[city.id + 2])
elif city.id > 2:
# Road to (x - 2)
add_road_between(city, cities[city.id - 2])
# Road to (x + 1) if it exists
if city.id + 1 < len(cities):
add_road_between(city, cities[city.id + 1])
###Output
_____no_output_____
###Markdown
Search Algorithm Parameters
###Code
# Choose the start and destination (you can use either the city name (string) or the id (int))
source = 'Alice Springs'
destiny = 'Yulara'
# This would also work
# source = 5
# destiny = 219
###Output
_____no_output_____
###Markdown
Adding information (informed search)
###Code
# Compute the straight-line distances from the cities to the destination
# Find the destination city node
if type(destiny) == str:
for city in cities:
if city and city.name == destiny:
dest_city = city
break
dest_city = None
else:
for city in cities:
if city and city.id == destiny:
dest_city = city
break
dest_city = None
# Find the source city node
if type(source) == str:
for city in cities:
if city and city.name == source:
src_city = city
break
src_city = None
else:
for city in cities:
if city and city.id == source:
src_city = city
break
src_city = None
if dest_city == None or src_city == None:
print('Cities not found')
# For each city, set its weight to the straight-line distance to the destination city
for city in cities:
if city:
city.weight = sqrt((city.lat - dest_city.lat)**2 + (city.lng - dest_city.lng)**2)
print(src_city.id, src_city.name, len(src_city.roads))
print(dest_city.id, dest_city.name, len(dest_city.roads))
###Output
5 Alice Springs 4
219 Yulara 1
###Markdown
Algorithm
###Code
# A* algorithm
class Node:
def __init__(self, f_value, g_value, element, parent):
self.f_value = f_value
self.g_value = g_value
self.element = element
self.parent = parent
def __lt__(self, other):
if self.f_value != other.f_value:
return self.f_value < other.f_value
else:
return self.element.id < other.element.id
def astar_search(initial_state, goal_test):
expanded = set()
# min-heap
pq = [Node(initial_state.weight, 0, initial_state, None)]
heap.heapify(pq)
for _ in tqdm(range(1000000)):
# no path exists
if len(pq) == 0:
return None
curr = heap.heappop(pq)
if curr.element in expanded:
continue
else:
expanded.add(curr.element)
# goal reached
if goal_test(curr.element):
return curr
# expanding neighbors
for succ, price in curr.element.successors():
if succ in expanded:
continue
new_g_value = (curr.g_value + price)
new_f_value = new_g_value + succ.weight
heap.heappush(pq, Node(new_f_value, new_g_value, succ, curr))
def goal_test(city):
return city == dest_city
solution_node = astar_search(src_city, goal_test)
###Output
0%| | 432/1000000 [00:00<00:04, 243376.67it/s]
###Markdown
Result
###Code
path = []
aux_node = solution_node
while aux_node is not None:
path.append(aux_node.element)
aux_node = aux_node.parent
path.reverse()
print('LENGTH: ', len(path))
for city in path:
print(city.name, ' ->')
###Output
LENGTH: 124
Alice Springs ->
Andamooka ->
Armidale ->
Ayr ->
Ballarat ->
Bairnsdale East ->
Ballina ->
Batemans Bay ->
Bathurst ->
Bendigo ->
Bicheno ->
Birdsville ->
Bordertown ->
Bourke ->
Brisbane ->
Broome ->
Bundaberg ->
Burnie ->
Byron Bay ->
Cairns ->
Caboolture ->
Caloundra ->
Canberra ->
Ceduna ->
Charleville ->
Clare ->
Cobram ->
Colac ->
Cowell ->
Cranbourne ->
Dalby ->
Deniliquin ->
Dubbo ->
East Maitland ->
Eidsvold ->
Esperance ->
Forbes ->
Gawler ->
Georgetown ->
Gingin ->
Geraldton ->
Gladstone ->
Goondiwindi ->
Griffith ->
Gympie South ->
Hamilton ->
Hobart ->
Hughenden ->
Inverell ->
Kalbarri ->
Karratha ->
Katanning ->
Katoomba ->
Kiama ->
Kimba ->
Kingoonya ->
Kingston South East ->
Kwinana ->
Laverton ->
Leonora ->
Longreach ->
Manjimup ->
Maryborough ->
Meekatharra ->
Melton ->
Melbourne ->
Meningie ->
Mildura ->
Morawa ->
Mount Barker ->
Mount Isa ->
Mudgee ->
Muswellbrook ->
Narrabri West ->
Newcastle ->
Norseman ->
North Mackay ->
North Lismore ->
North Scottsdale ->
Nowra ->
Oatlands ->
Orange ->
Pambula ->
Parkes ->
Perth ->
Penola ->
Peterborough ->
Port Augusta West ->
Port Douglas ->
Port Lincoln ->
Port Pirie ->
Portland ->
Queanbeyan ->
Quilpie ->
Richmond ->
Rockhampton ->
Roma ->
Scone ->
Shepparton ->
Seymour ->
Singleton ->
South Grafton ->
South Melbourne ->
Stawell ->
Sunbury ->
Sydney ->
Thargomindah ->
Three Springs ->
Toowoomba ->
Traralgon ->
Tumut ->
Ulladulla ->
Wagga Wagga ->
Wallaroo ->
Wangaratta ->
Warwick ->
West Tamworth ->
Wilcannia ->
Winton ->
Windorah ->
Wollongong ->
Woomera ->
Yeppoon ->
Yulara ->
|
.ipynb_checkpoints/novelRejection-checkpoint.ipynb | ###Markdown
This file contains the code for the paper 'Rejecting Novel Motions in High-Density Myoelectric Pattern Recognition using Hybrid Neural Networks'.
###Code
import scipy.io as sio
import numpy as np
from keras.layers import Conv2D, MaxPool2D, Flatten, Dense,Dropout, Input, BatchNormalization
from keras.models import Model
from keras.losses import categorical_crossentropy
from keras.optimizers import Adadelta
import keras
# load data
path = './data/data'
data=sio.loadmat(path)
wristPronation = data['wristPronation']
wristSupination = data['wristSupination']
wristExtension = data['wristExtension']
wristFlexion = data['wristFlexion']
handOpen = data['handOpen']
handClose = data['handClose']
shoot = data['shoot']
pinch = data['pinch']
typing = data['typing']
writing = data['writing']
mouseManipulating = data['mouseManipulating']
radialDeviation = data['radialDeviation']
ulnarDeviation = data['ulnarDeviation']
###Output
_____no_output_____
###Markdown
Part 1: CNN
###Code
def Spatial_Model(input_shape):
input_layer = Input(input_shape)
x = Conv2D(filters=32, kernel_size=(3, 3),activation='relu',name = 'conv_layer1')(input_layer)
x = Conv2D(filters=32, kernel_size=(3, 3), activation='relu',name = 'conv_layer2')(x)
x = Flatten()(x)
x = Dense(units=1024, activation='relu',name = 'dense_layer1')(x)
x = Dropout(0.4)(x)
x = Dense(units=512, activation='relu',name = 'dense_layer2')(x)
x = Dropout(0.4)(x)
output_layer = Dense(units=7, activation='softmax',name = 'output_layer')(x)
model = Model(inputs=input_layer, outputs=output_layer)
return model
def getIntermediate(layer_name,X,model):
intermediate_layer_model = Model(inputs=model.input,
outputs=model.get_layer(layer_name).output)
intermediate_output = intermediate_layer_model.predict(X)
return intermediate_output
def getPointedGesture(X,y,flag):
index = np.where(y==flag)
temp = X[index]
return temp
classNum = 7
X_inliers = np.concatenate((wristPronation,wristSupination,wristExtension,wristFlexion,handOpen,handClose,shoot),axis=0)
print('X_inliers.shape: ',X_inliers.shape)
y_inliers = np.concatenate((np.ones(wristPronation.shape[0])*0,np.ones(wristSupination.shape[0])*1,
np.ones(wristExtension.shape[0])*2,np.ones(wristFlexion.shape[0])*3,
np.ones(handOpen.shape[0])*4,np.ones(handClose.shape[0])*5,
np.ones(shoot.shape[0])*6),axis=0)
print('y_inliers.shape: ',y_inliers.shape)
X_outliers = np.concatenate((typing,writing,mouseManipulating,pinch),axis=0)
print('X_outliers.shape: ',X_outliers.shape)
y_outliers = np.concatenate((np.ones(typing.shape[0])*7,np.ones(writing.shape[0])*8, np.ones(mouseManipulating.shape[0])*9,np.ones(pinch.shape[0])*10),axis=0)
print('y_outliers.shape: ',y_outliers.shape)
model = Spatial_Model((12, 8, 3))
model.summary()
trainModel = False
from sklearn.model_selection import train_test_split
X_train, X_test_norm, y_train, y_test_norm = train_test_split(X_inliers, y_inliers, test_size=0.20, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.2, random_state=1)
y_train_onehot = keras.utils.to_categorical(y_train, classNum)
y_test_onehot = keras.utils.to_categorical(y_test_norm, classNum)
model.compile(loss=categorical_crossentropy, optimizer=Adadelta(lr=0.1), metrics=['acc'])
if trainModel:
model.fit(x=X_train, y=y_train_onehot, batch_size=16, epochs=50, shuffle=True, validation_split=0.05)
model.compile(loss=categorical_crossentropy, optimizer=Adadelta(lr=0.01), metrics=['acc'])
model.fit(x=X_train, y=y_train_onehot, batch_size=16, epochs=50, shuffle=True, validation_split=0.05)
model.save_weights('./model/modelCNN.h5')
else:
model.load_weights('./model/modelCNN.h5')
model_evaluate = []
model_evaluate.append(model.evaluate(X_test_norm,y_test_onehot))
print('model_evaluate',model_evaluate)
layer_name = 'dense_layer2'
X_train_intermediate = getIntermediate(layer_name,X_train,model)
X_test_intermediate_norm = getIntermediate(layer_name,X_test_norm,model)
typing_intermediate = getIntermediate(layer_name,typing,model)
writing_intermediate = getIntermediate(layer_name,writing,model)
mouseManipulating_intermediate = getIntermediate(layer_name,mouseManipulating,model)
pinch_intermediate = getIntermediate(layer_name,pinch,model)
radialDeviation_intermediate = getIntermediate(layer_name,radialDeviation,model)
ulnarDeviation_intermediate = getIntermediate(layer_name,ulnarDeviation,model)
## train Data
wristPronation_intermediate_train = getPointedGesture(X_train_intermediate,y_train,0)
wristSupination_intermediate_train = getPointedGesture(X_train_intermediate,y_train,1)
wristExtension_intermediate_train = getPointedGesture(X_train_intermediate,y_train,2)
wristFlexion_intermediate_train = getPointedGesture(X_train_intermediate,y_train,3)
handOpen_intermediate_train = getPointedGesture(X_train_intermediate,y_train,4)
handClose_intermediate_train = getPointedGesture(X_train_intermediate,y_train,5)
shoot_intermediate_train = getPointedGesture(X_train_intermediate,y_train,6)
## test data
wristPronation_intermediate_test = getPointedGesture(X_test_intermediate_norm,y_test_norm,0)
wristSupination_intermediate_test = getPointedGesture(X_test_intermediate_norm,y_test_norm,1)
wristExtension_intermediate_test = getPointedGesture(X_test_intermediate_norm,y_test_norm,2)
wristFlexion_intermediate_test = getPointedGesture(X_test_intermediate_norm,y_test_norm,3)
handOpen_intermediate_test = getPointedGesture(X_test_intermediate_norm,y_test_norm,4)
handClose_intermediate_test = getPointedGesture(X_test_intermediate_norm,y_test_norm,5)
shoot_intermediate_test = getPointedGesture(X_test_intermediate_norm,y_test_norm,6)
typing_intermediate_test = typing_intermediate
writing_intermediate_test = writing_intermediate
mouseManipulating_intermediate_test = mouseManipulating_intermediate
pinch_intermediate_test = pinch_intermediate
radialDeviation_intermediate_test = radialDeviation_intermediate
ulnarDeviation_intermediate_test = ulnarDeviation_intermediate
outlierData = {'typing_intermediate_test':typing_intermediate_test,
'writing_intermediate_test':writing_intermediate_test,
'mouseManipulating_intermediate_test':mouseManipulating_intermediate_test,
'pinch_intermediate_test':pinch_intermediate_test}
motionNameList = ['wristPronation','wristSupination','wristExtension','wristFlexion','handOpen','handClose','shoot']
trainDataDict = {motionNameList[0]:wristPronation_intermediate_train,motionNameList[1]:wristSupination_intermediate_train,
motionNameList[2]:wristExtension_intermediate_train,motionNameList[3]:wristFlexion_intermediate_train,
motionNameList[4]:handOpen_intermediate_train,motionNameList[5]:handClose_intermediate_train,
motionNameList[6]:shoot_intermediate_train}
testDataNameList = ['wristPronation','wristSupination','wristExtension','wristFlexion','handOpen','handClose','shoot',
'typing','writing','mouseManipulating','pinch','radialDeviation','ulnarDeviation']
testDataDict = {testDataNameList[0]:wristPronation_intermediate_test,testDataNameList[1]:wristSupination_intermediate_test,
testDataNameList[2]:wristExtension_intermediate_test,testDataNameList[3]:wristFlexion_intermediate_test,
testDataNameList[4]:handOpen_intermediate_test,testDataNameList[5]:handClose_intermediate_test,
testDataNameList[6]:shoot_intermediate_test,testDataNameList[7]:typing_intermediate_test[0:150],
testDataNameList[8]:writing_intermediate_test[0:150],testDataNameList[9]:mouseManipulating_intermediate_test[0:150],
testDataNameList[10]:pinch_intermediate_test[0:150],testDataNameList[11]:radialDeviation_intermediate_test[0:150],
testDataNameList[12]:ulnarDeviation_intermediate_test[0:150]}
X_val_intermediate = getIntermediate(layer_name,X_val,model)
wristPronation_intermediate_val = getPointedGesture(X_val_intermediate,y_val,0)
wristSupination_intermediate_val = getPointedGesture(X_val_intermediate,y_val,1)
wristExtension_intermediate_val = getPointedGesture(X_val_intermediate,y_val,2)
wristFlexion_intermediate_val = getPointedGesture(X_val_intermediate,y_val,3)
handOpen_intermediate_val = getPointedGesture(X_val_intermediate,y_val,4)
handClose_intermediate_val = getPointedGesture(X_val_intermediate,y_val,5)
shoot_intermediate_val = getPointedGesture(X_val_intermediate,y_val,6)
valDataDict = {motionNameList[0]:wristPronation_intermediate_val,motionNameList[1]:wristSupination_intermediate_val,
motionNameList[2]:wristExtension_intermediate_val,motionNameList[3]:wristFlexion_intermediate_val,
motionNameList[4]:handOpen_intermediate_val,motionNameList[5]:handClose_intermediate_val,
motionNameList[6]:shoot_intermediate_val}
###Output
_____no_output_____
###Markdown
Part 2: Autoencoder
###Code
from keras import regularizers
from keras.losses import mean_squared_error
from keras.optimizers import SGD
def autoModel(input_shape):
input_img = Input(input_shape)
encoded = Dense(256, activation='relu',kernel_regularizer=regularizers.l2(0.002))(input_img)
encoded = BatchNormalization()(encoded)
encoded = Dense(64, activation='relu',kernel_regularizer=regularizers.l2(0.002))(encoded)
encoded = BatchNormalization()(encoded)
decoded = Dense(256, activation='relu',kernel_regularizer=regularizers.l2(0.002))(encoded)
decoded = BatchNormalization()(decoded)
decoded = Dense(512, activation='relu',kernel_regularizer=regularizers.l2(0.002))(decoded)
model = Model(input_img, decoded)
return model
trainAutoFlag = False
if trainAutoFlag:
for motionId in range(len(motionNameList)):
motionName = motionNameList[motionId]
x_train = trainDataDict[motionName]
x_val = valDataDict[motionName]
autoencoder = autoModel((512,))
autoencoder.compile(loss=mean_squared_error, optimizer=SGD(lr=0.1))
autoencoder.fit(x_train, x_train,
epochs=600,
batch_size=16,
shuffle=True,
validation_data=(x_val, x_val))
autoencoder.compile(loss=mean_squared_error, optimizer=SGD(lr=0.01))
autoencoder.fit(x_train, x_train,
epochs=300,
batch_size=16,
shuffle=True,
validation_data=(x_val, x_val))
autoencoder.save_weights('./model/autoencoder/Autoencoder_'+motionName+'.h5')
###Output
_____no_output_____
###Markdown
Calculate ROC curve
###Code
import matplotlib
import matplotlib.pyplot as plt
from scipy.spatial.distance import pdist
from sklearn.metrics import roc_curve, auc
targetDict = {}
for motionId in range(len(motionNameList)):
targetList = []
motionName = motionNameList[motionId]
print('motionName: ', motionName)
# load models
autoencoder = autoModel((512,))
autoencoder.compile(loss=mean_squared_error, optimizer=Adadelta(lr=0.5))
autoencoder.load_weights('./model/autoencoder/Autoencoder_'+motionName+'.h5')
original = valDataDict[motionName]
decoded_imgs = autoencoder.predict(original)
num = decoded_imgs.shape[0]
for i in range(num):
X = np.vstack([original[i,:],decoded_imgs[i,:]])
lose = pdist(X,'braycurtis')
targetList.append(lose[0])
targetDict[motionName] = targetList
mdDict = {}
for motionId in range(len(motionNameList)):
motionName = motionNameList[motionId]
print('motionName: ', motionName)
# load models
autoencoder = autoModel((512,))
autoencoder.compile(loss=mean_squared_error, optimizer=Adadelta(lr=0.5))
autoencoder.load_weights('./model/autoencoder/AutoEncoder_'+motionName+'.h5')
originalDict = {}
decodedDict = {}
for gestureId in range(len(testDataNameList)):
originalDict[testDataNameList[gestureId]] = testDataDict[testDataNameList[gestureId]]
decodedDict[testDataNameList[gestureId]] = autoencoder.predict(originalDict[testDataNameList[gestureId]])
reconstruction_error = []
for gestureID in range(len(testDataNameList)):
original = originalDict[testDataNameList[gestureID]]
decoded_imgs = decodedDict[testDataNameList[gestureID]]
num = decoded_imgs.shape[0]
for i in range(num):
X = np.vstack([original[i,:],decoded_imgs[i,:]])
lose = pdist(X,'braycurtis')
reconstruction_error.append(lose[0])
mdDict[motionName] = reconstruction_error
outlierAllNum = 150 * 6 #six novel motions, 150 samples for each motion
y_label = []
for motionId in range(len(motionNameList)):
motionName = motionNameList[motionId]
y_label.extend(np.ones(len(testDataDict[motionName])))
y_label.extend(np.zeros(len(testDataDict['typing'])))
y_label.extend(np.zeros(len(testDataDict['writing'])))
y_label.extend(np.zeros(len(testDataDict['mouseManipulating'])))
y_label.extend(np.zeros(len(testDataDict['pinch'])))
y_label.extend(np.zeros(len(testDataDict['radialDeviation'])))
y_label.extend(np.zeros(len(testDataDict['radialDeviation'])))
outliers_fraction_List = []
P_List = []
R_List = []
F1_List = []
TPR_List = []
FPR_List = []
#outliers_fraction = 0.02
for outliers_i in range(-1,101):
outliers_fraction = outliers_i/100
outliers_fraction_List.append(outliers_fraction)
y_pred = np.zeros(len(y_label))
thresholdDict = {}
for motionId in range(len(motionNameList)):
# motionId = 0
motionName = motionNameList[motionId]
distances = targetDict[motionName]
distances = np.sort(distances)
num = len(distances)
# print('outliers_fraction:',outliers_fraction)
if outliers_fraction >= 0:
threshold = distances[num-1-int(outliers_fraction*num)]# get threshold
if outliers_fraction < 0:
threshold = 10000.0
if outliers_fraction == 1.0:
threshold = 0
thresholdDict[motionName] = threshold
mdDistances = mdDict[motionName]
y_pred_temp = (np.array(mdDistances)<=threshold)*1
y_pred = y_pred + y_pred_temp
y_pred = (y_pred>0)*1
TP = np.sum(y_pred[0:-outlierAllNum])
FN = len(y_pred[0:-outlierAllNum])-TP
FP = np.sum(y_pred[-outlierAllNum:])
TN = outlierAllNum - FP
t = 0.00001
P = TP/(TP+FP+t)
R = TP/(TP+FN+t)
F1 = 2*P*R/(P+R+t)
TPR = TP/(TP+FN+t)
FPR = FP/(TN+FP+t)
P_List.append(P)
R_List.append(R)
F1_List.append(F1)
TPR_List.append(TPR)
FPR_List.append(FPR)
roc_auc = auc(FPR_List, TPR_List)
fig, ax = plt.subplots(figsize=(5, 5))
plt.plot(FPR_List, TPR_List, lw=2,label='AUC = %0.2f' % ( roc_auc))
plt.plot([0, 1], [0, 1], linestyle='--', lw=2, color='r',label='Chance', alpha=.8)
plt.xlim([-0.05, 1.05])
plt.ylim([-0.05, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic(ROC)')
plt.legend(loc="lower right")
plt.show()
###Output
_____no_output_____
###Markdown
Calculate classification accuracies
###Code
resultDict = {}
for motionId in range(len(motionNameList)):
motionName = motionNameList[motionId]
# load models
autoencoder = autoModel((512,))
autoencoder.compile(loss=mean_squared_error, optimizer=Adadelta(lr=0.5))
autoencoder.load_weights('./model/autoencoder/AutoEncoder_'+motionName+'.h5')
# refactore data
originalDict = {}
decodedDict = {}
for gestureId in range(len(testDataNameList)):
originalDict[testDataNameList[gestureId]] = testDataDict[testDataNameList[gestureId]]
decodedDict[testDataNameList[gestureId]] = autoencoder.predict(originalDict[testDataNameList[gestureId]])
loseDict = {}
for gestureID in range(len(testDataNameList)):
loseList= []
original = originalDict[testDataNameList[gestureID]]
decoded_imgs = decodedDict[testDataNameList[gestureID]]
num = decoded_imgs.shape[0]
for i in range(num):
X = np.vstack([original[i,:],decoded_imgs[i,:]])
lose = pdist(X,'braycurtis')
loseList.append(lose[0])
loseDict[testDataNameList[gestureID]] = loseList
resultDict[motionName] = loseDict
outliers_fraction = 0.15
thresholdDict = {}
for motionId in range(len(motionNameList)):
motionName = motionNameList[motionId]
# load model
autoencoder = autoModel((512,))
autoencoder.compile(loss=mean_squared_error, optimizer=Adadelta(lr=0.5))
autoencoder.load_weights('./model/autoencoder/AutoEncoder_'+motionName+'.h5')
# val data
original_val = valDataDict[motionName]
decoded_val = autoencoder.predict(original_val)
loseList= []
original = original_val
decoded_imgs = decoded_val
num = decoded_imgs.shape[0]
for i in range(num):
X = np.vstack([original[i,:],decoded_imgs[i,:]])
lose = pdist(X,'braycurtis')
loseList.append(lose[0])
## calculate threshold for each task
loseArray = np.array(loseList)
loseArraySort = np.sort(loseArray)
anomaly_threshold = loseArraySort[-(int((outliers_fraction*len(loseArray)))+1)]
thresholdDict[motionName] = anomaly_threshold
# plot lose and threshold
fig, ax = plt.subplots(figsize=(5, 5))
t = np.arange(num)
s = loseArray
ax.scatter(t,s,label=motionName)
ax.hlines(anomaly_threshold,0,150,colors = "r")
ax.set(xlabel='sample (n)', ylabel='MSE',
title='MSEs of '+ motionName + ', threshold:' + str(anomaly_threshold))
ax.grid()
plt.legend(loc="lower right")
plt.xlim(xmin = -3)
plt.xlim(xmax = 70)
plt.show()
errorSum = 0
testSum = 0
barDict = {}
outlierClass = 6
rejectMotion = {}
for motionId in range(len(testDataNameList)):
recogList = []
motionName = testDataNameList[motionId]
for recogId in range(len(testDataNameList)-outlierClass):
identyResult = resultDict[testDataNameList[recogId]]
targetResult = np.array(identyResult[motionName])
recogList.append((targetResult<=thresholdDict[testDataNameList[recogId]])*1) # each class uses its own threshold for rejection
recogArray = np.array(recogList)
recogArray = np.sum(recogArray,axis=0)
recogArray = (recogArray>0)*1
rejectMotion[testDataNameList[motionId]] = recogArray
if motionId<(len(testDataNameList)-outlierClass):
numError = np.sum(1-recogArray)
else:
numError = np.sum(recogArray)
numTarget = len(recogArray)
if motionId<(len(testDataNameList)-outlierClass):
errorSum = errorSum + numError
testSum = testSum + numTarget
barDict[testDataNameList[motionId]] = (numError/numTarget)
barDict['target overall'] = errorSum/testSum
print(barDict)
import matplotlib.pyplot as plt; plt.rcdefaults()
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.pyplot import figure
figure(num=None, figsize=(15, 6))
objects = ('wristPronation','wristSupination','wristExtension','wristFlexion','handOpen','handClose','shoot','target overall',
'typing','writing','mouseManipulating','pinch','radialDeviation','ulnarDeviation')
y_pos = np.arange(len(objects))
proposed = []
for i in range(len(objects)):
proposed.append(barDict[objects[i]])
bar_width = 0.35
opacity = 0.8
rects2 = plt.bar(y_pos + bar_width, proposed, bar_width,
alpha=opacity,
label='Proposed')
plt.xticks(y_pos + bar_width, objects)
plt.ylabel('Error Rates of Novelty Detection (%)')
plt.legend()
plt.tight_layout()
plt.show()
###Output
_____no_output_____ |
ad_operation_insight.ipynb | ###Markdown
There are three common approaches to handling missing values: deletion, replacement, and interpolation/imputation.
2.11 Deletion. Deletion can be carried out by dropping observations or by dropping variables. It is suitable when a variable has a large proportion of missing values and has little impact on the research goal; if too much data would be lost, deletion is not recommended. On Kaggle, one common rule of thumb is: if a variable is more than 15% missing and does not appear to be very useful, drop that variable.
`del data['column_name']` drops a column
`data['column_name'].dropna()` drops the rows where that column is empty or NA
`data.drop(data.columns[[0,1]],axis=1,inplace=True)` drops columns 1 and 2; inplace=True replaces the data in memory directly, so no reassignment is needed
`data.dropna(axis=0)` drops rows that contain missing values
`data.dropna(axis=1)` drops columns that contain missing values
2.12 Replacement. If the variable with missing values is numeric, the mean is generally used as the replacement; if it is non-numeric, the median or the mode is used instead.
`data['column_name']=data['column_name'].fillna(num)` fills the NaN/NA values in the column with the number num
`data['column_name'][data['column_name'].isnull()]=data['column_name'].dropna().mode().values` for a string column, assigns the most frequent value to the missing entries; mode() returns the most frequent element
`data['column_name'].fillna(method='pad')` fills a NaN/NA with the previous value, i.e. the nearest preceding non-missing value
`data['column_name'].fillna(method='bfill',limit=1)` fills with the following value; limit=1 means that for several consecutive missing values only the nearest one is filled
`data['column_name'].fillna(data['column_name'].mean())` fills with the column mean
`data= data.fillna(data.mean())` replaces all missing values with the mean of each column; by this point string features have usually been converted to numbers
2.13 Interpolation. Deletion and replacement can waste information and change the data structure, which may bias the final statistics. Interpolation avoids these problems; common methods include regression imputation, multiple imputation and Lagrange interpolation. No code is inserted here for these, since the choice depends on the situation. Lagrange interpolation reference: https://blog.csdn.net/sinat_22510827/article/details/80972389
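A minimal hedged sketch of Lagrange interpolation for filling a numeric column (assumes `scipy` is installed and the Series has a plain integer index; the function name, the window size `k` and the example column are illustrative, not from the original notes):
```python
from scipy.interpolate import lagrange

def lagrange_fill(s, k=5):
    # fill NaNs in a pandas Series using a Lagrange polynomial fitted on up to k neighbours per side
    s = s.copy()
    for i in s[s.isnull()].index:
        # neighbours around position i; labels that do not exist become NaN and are dropped
        window = s.reindex(list(range(i - k, i)) + list(range(i + 1, i + 1 + k))).dropna()
        if len(window) >= 2:
            s.loc[i] = float(lagrange(window.index, window.values)(i))
    return s

# usage sketch: df['good_type'] = lagrange_fill(df['good_type'])
```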
###Code
import matplotlib.pyplot as plt
plt.boxplot(df['good_type'],vert = False) # box plot
plt.show()
plt.plot(df['good_id'], df['good_type'], 'o', color='black') # scatter plot
df['good_type'].describe() # descriptive statistics
def count_box(Q_3,Q_1): # Q_3 is the 75% quantile (third quartile), Q_1 is the 25% quantile (first quartile)
IQR=Q_3-Q_1
down_line=Q_1-1.5*IQR
up_line=Q_3+1.5*IQR
print("Outlier upper limit:",up_line," Outlier lower limit:",down_line)
count_box(18,3)
###Output
_____no_output_____ |
Deep Learning with Python/Chapter 3 - Boston House Pricing Regression.ipynb | ###Markdown
Chapter 3 - Boston House Pricing Regression
###Code
import random
import numpy as np
from keras import Sequential
from keras.layers import Dense
from keras.datasets import boston_housing
from matplotlib import pyplot as plt
from keras.utils.vis_utils import plot_model
###Output
_____no_output_____
###Markdown
Loading the Dataset
###Code
(train_data, train_labels), (test_data, test_labels) = boston_housing.load_data()
###Output
_____no_output_____
###Markdown
Normalizing the Data
###Code
def normalize(data):
mean = data.mean(axis = 0)
std = data.std(axis = 0) # use the data's own standard deviation, not train_labels
data -= mean
data /= std
return data
###Output
_____no_output_____
###Markdown
Building the Model
###Code
def build_model(dim):
model = Sequential()
# Add layers
model.add(Dense(units = 64, activation = 'relu', input_shape = (dim, )))
model.add(Dense(units = 64, activation = 'relu'))
model.add(Dense(units = 1))
# Compile model
model.compile(optimizer = 'rmsprop', loss = 'mse', metrics = ['mae'])
return model
def train_model(model, x_train, y_train, epochs, batch_size, validation_data):
history = model.fit(x_train, y_train, epochs = epochs, batch_size = batch_size, verbose = 0, validation_data = validation_data)
return history
###Output
_____no_output_____
###Markdown
Evaluating the Model
###Code
def evaluate_model(model, x_test, y_test):
results = model.evaluate(x_test, y_test, verbose = 0)
return results
###Output
_____no_output_____
###Markdown
Manual K-Fold Cross Validation
###Code
def k_means_validation(k, train_data, epochs):
num_samples = len(train_data) // k
history_mae = []
for i in range(k):
print(f'Processing Fold #{i}...')
# Validation split
val_data = train_data[i * num_samples: (i + 1) * num_samples]
val_labels = train_labels[i * num_samples: (i + 1) * num_samples]
# Remaining training data
partial_train_data = np.concatenate([train_data[: i * num_samples], train_data[(i + 1) * num_samples:]], axis = 0)
partial_train_labels = np.concatenate([train_labels[: i * num_samples], train_labels[(i + 1) * num_samples:]], axis = 0)
# Build the model
model = build_model(dim = train_data.shape[1])
# Train the model
history = train_model(
model = model,
x_train = partial_train_data,
y_train = partial_train_labels,
validation_data = (val_data, val_labels),
epochs = epochs,
batch_size = 1,
)
history_mae.append(history.history['val_mae'])
return history_mae
###Output
_____no_output_____
###Markdown
Plotting the Training Process
###Code
def plot(ax, value, metric):
ax.set_title(f'{metric} per Epoch')
ax.plot(value, label = metric)
ax.legend()
###Output
_____no_output_____
###Markdown
Making Predictions
###Code
def evaluate_model(model, x_test, y_test):
results = model.evaluate(x_test, y_test, verbose = 0)
return results
###Output
_____no_output_____
###Markdown
Wrap Up
###Code
# Normalize data
train_data = normalize(train_data)
test_data = normalize(test_data)
k = 4
epochs = 100
# K-fold cross validation
history_mae = k_means_validation(k = k, train_data = train_data, epochs = epochs)
# Calculate history
avg_mae_history = [np.mean([x[i] for x in history_mae]) for i in range(epochs)]
# Plot
fig, ax = plt.subplots(1, figsize = (5, 3))
# Graph training process
plot(ax, avg_mae_history, 'mae')
plt.show()
###Output
_____no_output_____ |
{{ cookiecutter.repo_name }}/notebooks/1.0-{{ cookiecutter.repo_name }}_start.py2.ipynb | ###Markdown
{{ cookiecutter.repo_name }} Description: {{ cookiecutter.description }}
###Code
import socket
print(socket.gethostname())
import sys
sys.path.append('../{{ cookiecutter.repo_name }}')
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____ |
tutorials/hydro_thermal/Markov_chain_approximation.ipynb | ###Markdown
In this module, we make the Markov chain approximation for the Markovian inflow energy $X_t$. The generator function obtained in TS.py is used to train the Markov chain.
###Code
import pandas
import numpy
from msppy.utils.plot import fan_plot
from msppy.discretize import Markovian
import matplotlib.pyplot as plt
gamma = numpy.array(pandas.read_csv(
"./data/gamma.csv",
names=[0,1,2,3],
index_col=0,
skiprows=1,
))
sigma = [
numpy.array(pandas.read_csv(
"./data/sigma_{}.csv".format(i),
names=[0,1,2,3],
index_col=0,
skiprows=1,
)) for i in range(12)
]
exp_mu = numpy.array(pandas.read_csv(
"./data/exp_mu.csv",
names=[0,1,2,3],
index_col=0,
skiprows=1,
))
inflow_initial = numpy.array([41248.7153,7386.860854,10124.56146,6123.808537])
T = 12
def generator(random_state,size):
inflow = numpy.empty([size,T,4])
inflow[:,0,:] = inflow_initial[numpy.newaxis:,]
for t in range(1,T):
noise = numpy.exp(random_state.multivariate_normal(mean=[0]*4, cov=sigma[t%12],size=size))
inflow[:,t,:] = noise * (
(1-gamma[t%12]) * exp_mu[t%12]
+ gamma[t%12] * exp_mu[t%12]/exp_mu[(t-1)%12] * inflow[:,t-1,:]
)
return inflow
###Output
_____no_output_____
###Markdown
Use robust stochastic approximation to iteratively train a non-homogeneous four-dimensional Markov chain with a single initial Markov state and one hundred Markov states from stage two on. We make 1000 iterations to train the Markov states and then 10000 iterations to train the transition matrix. Refer to:
###Code
markovian = Markovian(
f=generator,
T=T,
n_Markov_states=[1]+[12]*(T-1),
n_sample_paths=1000)
Markov_states, transition_matrix = markovian.SA()
s = generator(numpy.random,100)
pandas.DataFrame(Markov_states[1]).head()
markovian.Markov_states[1]
pandas.DataFrame(Markov_states[1]).head()
pandas.DataFrame(transition_matrix[1])
###Output
_____no_output_____
###Markdown
Let us see what the returned Markov_states and transition_matrix look like
###Code
# stage 0: The initial four-dimensional inflow
Markov_states[0]
# stage 1: The trained 100 Markov states. Each column is a Markov state.
pandas.DataFrame(Markov_states[1]).head()
# Stage 0: transition matrix always begins with [[1]] since the first stage is always
# assumed to be deterministic.
transition_matrix[0]
# stage 1: transition matrix between the initial single Markov state
# and the 100 Markov states in the second stage, so it is 1 by 100.
pandas.DataFrame(transition_matrix[1])
# stage 2: transition matrix between the 100 Markov states in the second stage
# and the 100 Markov states in the third stage, so it is 100 by 100.
pandas.DataFrame(transition_matrix[2]).head()
###Output
_____no_output_____
###Markdown
Use the trained Markov state space and transition matrix to simulate inflow data. It is clear that the fan plot of the simulated sample path is very similar to that of the historical data (see TS.py).
###Code
sim = markovian.simulate(100)
fig = plt.figure(figsize=(10,5))
ax = [None] * 4
for i in range(4):
ax[i] = plt.subplot(221+i)
fan_plot(pandas.DataFrame(sim[:,:,i]),ax[i])
###Output
_____no_output_____
###Markdown
Let us also try the SAA, SA and RSA approaches to make MC approximation.
###Code
markovian = Markovian(
f=generator,
T=T,
n_Markov_states=[1]+[100]*(T-1),
n_sample_paths=10000,
)
Markov_states, transition_matrix = markovian.SAA()
fig = plt.figure(figsize=(10,5))
sim = markovian.simulate(100)
ax = [None]*4
for i in range(4):
ax[i] = plt.subplot(221+i)
fan_plot(pandas.DataFrame(sim[:,:,i]),ax[i])
markovian = Markovian(
f=generator,
T=T,
n_Markov_states=[1]+[100]*(T-1),
n_sample_paths=10000,
)
Markov_states, transition_matrix = markovian.SA()
fig = plt.figure(figsize=(10,5))
sim = markovian.simulate(100)
ax = [None]*4
for i in range(4):
ax[i] = plt.subplot(221+i)
fan_plot(pandas.DataFrame(sim[:,:,i]),ax[i])
markovian = Markovian(
f=generator,
T=T,
n_Markov_states=[1]+[100]*(T-1),
n_sample_paths=10000,
)
Markov_states, transition_matrix = markovian.RSA()
fig = plt.figure(figsize=(10,5))
sim = markovian.simulate(100)
ax = [None]*4
for i in range(4):
ax[i] = plt.subplot(221+i)
fan_plot(pandas.DataFrame(sim[:,:,i]),ax[i])
###Output
_____no_output_____ |
Layer_Activation_Visualization_from_Saved_Model_Malaya_Kew.ipynb | ###Markdown
Load Libraries:
###Code
import tensorflow as tf
from tensorflow.keras.datasets import mnist
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Flatten
from tensorflow.keras.layers import Conv2D, MaxPooling2D
from tensorflow.keras import backend as K
from tensorflow.keras import activations
from tensorflow.keras.layers import *
from tensorflow.keras.models import Model, load_model
from tensorflow.keras import models
from tensorflow.keras import layers
import cv2
import numpy as np
from tqdm import tqdm
import math
import os
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Load Model:
###Code
work_dir = "drive/My Drive/Plant_Leaf_MalayaKew_MK_Dataset/Records/"
checkpointer_name = "weights.MK.D2.TTV.rgb.256p.DataAug4.DataFlow.pad0.TL.3D.DenseNet201.wInit.imagenet.TrainableAfter.allDefault.Dense.1024.1024.2048.actF.elu.opt.Adam.drop.0.5.batch16.Flatten.l2.0.001.run_1.hdf5"
model_loaded = load_model(work_dir+checkpointer_name)
print("Loaded "+work_dir+checkpointer_name+".")
model_loaded.summary()
###Output
_____no_output_____
###Markdown
Model Layers:
###Code
layer_names = [] # conv4_block48_2_conv, conv3_block12_2_conv
for layer in model_loaded.layers:
layer_names.append(layer.name)
print(layer_names)
layer_no = -9
print(f"layer_names[{layer_no}] = {layer_names[layer_no]}")
###Output
_____no_output_____
###Markdown
By Loading Entire Test at Once:
###Code
'''
input_path = "drive/My Drive/Plant_Leaf_MalayaKew_MK_Dataset/"
filename = "D2_Plant_Leaf_MalayaKew_MK_impl_1_Original_RGB_test_X.pkl.npy"
#'''
#input_test = np.load(f"{input_path}{filename}", allow_pickle=True)
'''
print(f"input_test.shape = {input_test.shape}")
#'''
'''
layer_outputs = [layer.output for layer in model_loaded.layers]
activation_model = models.Model(inputs=model_loaded.input, outputs=layer_outputs)
activations = activation_model.predict(input_test)
#'''
###Output
_____no_output_____
###Markdown
By Loading Single at a Time:
###Code
root_path = "drive/My Drive/Plant_Leaf_MalayaKew_MK_Dataset/MK/D2/test_patch/"
#num_classes = 44
#list_classes = [f"Class{i+1}" for i in range(num_classes)]
list_classes = [f"Class{i}" for i in [1,11,22,33,44]]
list_input_path = []
for class_name in list_classes:
list_input_path.append(f"{root_path}{class_name}/")
print(f"len(list_input_path) = {len(list_input_path)}")
os.listdir(list_input_path[0])[0]
list_full_paths = []
choose_different_index = 0
for input_path in list_input_path:
filename = os.listdir(input_path)[choose_different_index]
choose_different_index += 15
list_full_paths.append(f"{input_path}{filename}")
print(f"len(list_full_paths) = {len(list_full_paths)}")
list_full_paths
'''
filename = "Class44(8)R315_00277.jpg"
test_image = cv2.imread(f"{input_path}{filename}")
print(f"test_image.shape = {test_image.shape}")
input_test = np.expand_dims(test_image, 0)
print(f"input_test.shape = {input_test.shape}")
#'''
list_test_images = []
for file_full_path in list_full_paths:
test_image = cv2.imread(file_full_path)
print(f"file_full_path: {file_full_path}")
list_test_images.append(test_image)
np_test_images = np.array(list_test_images)
print(f"np_test_images.shape = {np_test_images.shape}")
###Output
file_full_path: drive/My Drive/Plant_Leaf_MalayaKew_MK_Dataset/MK/D2/test_patch/Class1/Class1(1)R135_00023.jpg
file_full_path: drive/My Drive/Plant_Leaf_MalayaKew_MK_Dataset/MK/D2/test_patch/Class11/Class11(1)R90_00008.jpg
file_full_path: drive/My Drive/Plant_Leaf_MalayaKew_MK_Dataset/MK/D2/test_patch/Class22/Class22(11)R225_00070.jpg
file_full_path: drive/My Drive/Plant_Leaf_MalayaKew_MK_Dataset/MK/D2/test_patch/Class33/Class33(2)R180_00054.jpg
file_full_path: drive/My Drive/Plant_Leaf_MalayaKew_MK_Dataset/MK/D2/test_patch/Class44/Class44(2)R45_00051.jpg
np_test_images.shape = (5, 256, 256, 3)
###Markdown
Get Layer Activation Outputs:
###Code
layer_outputs = [layer.output for layer in model_loaded.layers]
activation_model = models.Model(inputs=model_loaded.input, outputs=layer_outputs)
#activations = activation_model.predict(input_test)
list_activations = []
for test_image in tqdm(np_test_images):
activations = activation_model.predict(np.array([test_image]))
list_activations.append(activations)
print(f"\nlen(list_activations) = {len(list_activations)}")
###Output
_____no_output_____
###Markdown
Visualize:
###Code
'''
input_1(256,256,3), conv1/relu(128,128,64), pool2_relu(64,64,256), pool3_relu(32,32,512), pool4_relu(16,16,1792), relu(8,8,1920)
'''
#target_layer_name = "conv3_block12_concat"
list_target_layer_names = ['input_1', 'conv1/relu', 'pool2_relu', 'pool3_relu', 'pool4_relu', 'relu']
list_layer_indices = []
for target_layer_name in list_target_layer_names:
for target_layer_index in range(len(layer_names)):
if layer_names[target_layer_index]==target_layer_name:
#layer_no = target_layer_index
list_layer_indices.append(target_layer_index)
#print(f"layer_names[{layer_no}] = {layer_names[layer_no]}")
print(f"list_layer_indices = {list_layer_indices}")
for activations in list_activations:
print(len(activations))
'''
current_layer = activations[layer_no]
num_neurons = current_layer.shape[1:][-1]
print(f"current_layer.shape = {current_layer.shape}")
print(f"image_dimension = {current_layer.shape[1:][:-1]}")
print(f"num_neurons = {num_neurons}")
#'''
list_all_activations_layers = []
list_all_num_neurons = []
# list_all_activations_layers -> list_activations_layers -> current_layer -> activations[layer_no]
for activations in list_activations:
list_activations_layers = []
list_neurons = []
for layer_no in list_layer_indices:
current_layer = activations[layer_no]
#print(f"current_layer.shape = {current_layer.shape}")
list_activations_layers.append(current_layer)
#list_current_layers.append(current_layer)
list_neurons.append(current_layer.shape[1:][-1])
list_all_activations_layers.append(list_activations_layers)
list_all_num_neurons.append(list_neurons)
print(f"len(list_all_activations_layers) = {len(list_all_activations_layers)}")
print(f"len(list_all_activations_layers[0]) = {len(list_all_activations_layers[0])}")
print(f"list_all_activations_layers[0][0] = {list_all_activations_layers[0][0].shape}")
print(f"list_all_num_neurons = {list_all_num_neurons}")
print(f"list_all_num_neurons[0] = {list_all_num_neurons[0]}")
print(f"list_all_activations_layers[0][0] = {list_all_activations_layers[0][0].shape}")
print(f"list_all_activations_layers[0][1] = {list_all_activations_layers[0][1].shape}")
print(f"list_all_activations_layers[0][2] = {list_all_activations_layers[0][2].shape}")
print(f"list_all_activations_layers[0][3] = {list_all_activations_layers[0][3].shape}")
print(f"list_all_activations_layers[0][4] = {list_all_activations_layers[0][4].shape}")
#print(f"list_all_activations_layers[0][5] = {list_all_activations_layers[0][5].shape}")
#'''
current_layer = list_all_activations_layers[0][1]
superimposed_activation_image = current_layer[0, :, :, 0]
for activation_image_index in range(1, current_layer.shape[-1]):
current_activation_image = current_layer[0, :, :, activation_image_index]
superimposed_activation_image = np.add(superimposed_activation_image, current_activation_image) # elementwise addition
plt.imshow(superimposed_activation_image, cmap='viridis')
#'''
#'''
current_layer = list_all_activations_layers[-1][1]
superimposed_activation_image = current_layer[0, :, :, 0]
for activation_image_index in range(1, current_layer.shape[-1]):
current_activation_image = current_layer[0, :, :, activation_image_index]
superimposed_activation_image = np.add(superimposed_activation_image, current_activation_image) # elementwise addition
plt.imshow(superimposed_activation_image, cmap='viridis')
#'''
#plt.matshow(current_layer[0, :, :, -1], cmap ='PiYG')
#plt.matshow(current_layer[0, :, :, -1], cmap ='viridis')
'''
superimposed_activation_image = current_layer[0, :, :, 0]
for activation_image_index in range(1, num_neurons):
current_activation_image = current_layer[0, :, :, activation_image_index]
#superimposed_activation_image = np.multiply(superimposed_activation_image, current_activation_image) # elementwise multiplication
superimposed_activation_image = np.add(superimposed_activation_image, current_activation_image) # elementwise addition
plt.imshow(superimposed_activation_image, cmap='viridis')
#'''
'''
superimposed_activation_image = current_layer[0, :, :, 0]
for activation_image_index in range(1, num_neurons):
current_activation_image = current_layer[0, :, :, activation_image_index]
superimposed_activation_image = np.add(superimposed_activation_image, current_activation_image) # elementwise addition
plt.imshow(superimposed_activation_image, cmap='viridis')
#'''
#'''
# list_all_activations_layers -> list_activations_layers -> current_layer -> activations[layer_no]
#num_activation_per_test_image = len(list_target_layer_names)
list_all_superimposed_activation_image = []
for list_activations_layers_index in range(len(list_all_activations_layers)):
list_activations_layers = list_all_activations_layers[list_activations_layers_index]
list_current_num_neurons = list_all_num_neurons[list_activations_layers_index]
#print(f"list_activations_layers_index = {list_activations_layers_index}")
#print(f"list_all_num_neurons = {list_all_num_neurons}")
#print(f"list_current_num_neurons = {list_current_num_neurons}")
list_superimposed_activation_image = []
for activations_layer_index in range(len(list_activations_layers)):
activations_layers = list_activations_layers[activations_layer_index]
#print(f"activations_layers.shape = {activations_layers.shape}")
num_neurons = list_current_num_neurons[activations_layer_index]
superimposed_activation_image = activations_layers[0, :, :, 0]
for activation_image_index in range(1, num_neurons):
current_activation_image = activations_layers[0, :, :, activation_image_index]
superimposed_activation_image = np.add(superimposed_activation_image, current_activation_image) # elementwise addition
#print(f"superimposed_activation_image.shape = {superimposed_activation_image.shape}")
list_superimposed_activation_image.append(superimposed_activation_image)
#print(f"list_superimposed_activation_image[0].shape = {list_superimposed_activation_image[0].shape}")
list_all_superimposed_activation_image.append(list_superimposed_activation_image)
#print(f"list_all_superimposed_activation_image[0][0].shape = {list_all_superimposed_activation_image[0][0].shape}")
#plt.imshow(superimposed_activation_image, cmap='viridis')
print(f"len(list_all_superimposed_activation_image) = {len(list_all_superimposed_activation_image)}")
print(f"len(list_all_superimposed_activation_image[0]) = {len(list_all_superimposed_activation_image[0])}")
print(f"len(list_all_superimposed_activation_image[0][0]) = {len(list_all_superimposed_activation_image[0][0])}")
print(f"list_all_superimposed_activation_image[0][0].shape = {list_all_superimposed_activation_image[0][0].shape}")
#'''
'''
interpolation = cv2.INTER_LINEAR # INTER_LINEAR, INTER_CUBIC, INTER_NEAREST
# list_all_activations_layers -> list_activations_layers -> current_layer -> activations[layer_no]
#num_activation_per_test_image = len(list_target_layer_names)
list_all_superimposed_activation_image = []
for list_activations_layers_index in range(len(list_all_activations_layers)):
list_activations_layers = list_all_activations_layers[list_activations_layers_index]
list_current_num_neurons = list_all_num_neurons[list_activations_layers_index]
#print(f"list_activations_layers_index = {list_activations_layers_index}")
#print(f"list_all_num_neurons = {list_all_num_neurons}")
#print(f"list_current_num_neurons = {list_current_num_neurons}")
list_superimposed_activation_image = []
for activations_layer_index in range(len(list_activations_layers)):
activations_layers = list_activations_layers[activations_layer_index]
#print(f"activations_layers.shape = {activations_layers.shape}")
num_neurons = list_current_num_neurons[activations_layer_index]
superimposed_activation_image = activations_layers[0, :, :, 0]
superimposed_activation_image_resized = cv2.resize(superimposed_activation_image, (256,256), interpolation = interpolation)
for activation_image_index in range(1, num_neurons):
current_activation_image = activations_layers[0, :, :, activation_image_index]
#superimposed_activation_image = np.add(superimposed_activation_image, current_activation_image) # elementwise addition
current_activation_image_resized = cv2.resize(current_activation_image, (256,256), interpolation = interpolation)
superimposed_activation_image_resized = np.add(superimposed_activation_image_resized, current_activation_image_resized) # elementwise addition
#print(f"superimposed_activation_image.shape = {superimposed_activation_image.shape}")
#list_superimposed_activation_image.append(superimposed_activation_image)
list_superimposed_activation_image.append(superimposed_activation_image_resized)
#print(f"list_superimposed_activation_image[0].shape = {list_superimposed_activation_image[0].shape}")
list_all_superimposed_activation_image.append(list_superimposed_activation_image)
#print(f"list_all_superimposed_activation_image[0][0].shape = {list_all_superimposed_activation_image[0][0].shape}")
#plt.imshow(superimposed_activation_image, cmap='viridis')
print(f"len(list_all_superimposed_activation_image) = {len(list_all_superimposed_activation_image)}")
print(f"len(list_all_superimposed_activation_image[0]) = {len(list_all_superimposed_activation_image[0])}")
print(f"len(list_all_superimposed_activation_image[0][0]) = {len(list_all_superimposed_activation_image[0][0])}")
print(f"list_all_superimposed_activation_image[0][0].shape = {list_all_superimposed_activation_image[0][0].shape}")
print(f"list_all_superimposed_activation_image[0][-1].shape = {list_all_superimposed_activation_image[0][-1].shape}")
#'''
'''
supported cmap values are: 'Accent', 'Accent_r', 'Blues', 'Blues_r', 'BrBG', 'BrBG_r', 'BuGn', 'BuGn_r', 'BuPu', 'BuPu_r', 'CMRmap', 'CMRmap_r',
'Dark2', 'Dark2_r', 'GnBu', 'GnBu_r', 'Greens', 'Greens_r', 'Greys', 'Greys_r', 'OrRd', 'OrRd_r', 'Oranges', 'Oranges_r', 'PRGn', 'PRGn_r',
'Paired', 'Paired_r', 'Pastel1', 'Pastel1_r', 'Pastel2', 'Pastel2_r', 'PiYG', 'PiYG_r', 'PuBu', 'PuBuGn', 'PuBuGn_r', 'PuBu_r', 'PuOr',
'PuOr_r', 'PuRd', 'PuRd_r', 'Purples', 'Purples_r', 'RdBu', 'RdBu_r', 'RdGy', 'RdGy_r', 'RdPu', 'RdPu_r', 'RdYlBu', 'RdYlBu_r', 'RdYlGn',
'RdYlGn_r', 'Reds', 'Reds_r', 'Set1', 'Set1_r', 'Set2', 'Set2_r', 'Set3', 'Set3_r', 'Spectral', 'Spectral_r', 'Wistia', 'Wistia_r', 'YlGn',
'YlGnBu', 'YlGnBu_r', 'YlGn_r', 'YlOrBr', 'YlOrBr_r', 'YlOrRd', 'YlOrRd_r', 'afmhot', 'afmhot_r', 'autumn', 'autumn_r', 'binary', 'binary_r',
'bone', 'bone_r', 'brg', 'brg_r', 'bwr', 'bwr_r', 'cividis', 'cividis_r', 'cool', 'cool_r', 'coolwarm', 'coolwarm_r', 'copper', 'copper_r',
'cubehelix', 'cubehelix_r', 'flag', 'flag_r', 'gist_earth', 'gist_earth_r', 'gist_gray', 'gist_gray_r', 'gist_heat', 'gist_heat_r', 'gist_ncar',
'gist_ncar_r', 'gist_rainbow', 'gist_rainbow_r', 'gist_stern', 'gist_stern_r', 'gist_yarg', 'gist_yarg_r', 'gnuplot', 'gnuplot2', 'gnuplot2_r',
'gnuplot_r', 'gray', 'gray_r', 'hot', 'hot_r', 'hsv', 'hsv_r', 'inferno', 'inferno_r', 'jet', 'jet_r', 'magma', 'magma_r', 'nipy_spectral',
'nipy_spectral_r', 'ocean', 'oc...
'''
sub_fig_num_rows = len(list_test_images)
sub_fig_num_cols = len(list_target_layer_names)
fig_height = 11
fig_width = 11
cmap = "Greens" # PuOr_r, Dark2, Dark2_r, RdBu, RdBu_r, coolwarm, viridis, PiYG, gray, binary, afmhot, PuBu, copper
fig, axes = plt.subplots(sub_fig_num_rows,sub_fig_num_cols, figsize=(fig_width,fig_height))
#plt.suptitle(f"Layer {str(layer_no+1)}: {layer_names[layer_no]} {str(current_layer.shape[1:])}", fontsize=20, y=1.1)
for i,ax in enumerate(axes.flat):
row = i//sub_fig_num_cols
col = i%sub_fig_num_cols
#print(f"i={i}; row={row}, col={col}")
#'''
ax.imshow(list_all_superimposed_activation_image[row][col], cmap=cmap)
#ax.imshow(list_all_superimposed_activation_image[row][col])
ax.set_xticks([])
ax.set_yticks([])
if col == 0:
ax.set_ylabel(f"{list_classes[row]}")
if row == 0:
#ax.set_xlabel(f"Layer {str(list_layer_indices[col])}") # , rotation=0, ha='right'
ax.set_xlabel(str(list_target_layer_names[col]))
#ax.set_xlabel(f"Layer {str(list_layer_indices[col])}: {str(list_target_layer_names[col])}") # , rotation=0, ha='right'
ax.xaxis.set_label_position('top')
ax.set_aspect('auto')
plt.subplots_adjust(wspace=0.02, hspace=0.05)
img_path = 'drive/My Drive/Visualizations/'+checkpointer_name[8:-5]+'.png'
plt.savefig(img_path, dpi=600)
plt.show()
print('img_path =', img_path)
#'''
# good cmap for this work: PuOr_r, Dark2_r, RdBu, RdBu_r, coolwarm, viridis, PiYG
'''
for activation_image_index in range(num_neurons):
plt.imshow(current_layer[0, :, :, activation_image_index], cmap='PiYG')
#'''
plt.imshow(superimposed_activation_image, cmap='gray')
###Output
_____no_output_____
###Markdown
Weight Visualization:
###Code
layer_outputs = [layer.output for layer in model_loaded.layers]
#activation_model = models.Model(inputs=model_loaded.input, outputs=layer_outputs)
#activations = activation_model.predict(input_test)
layer_configs = []
layer_weights = []
for layer in model_loaded.layers:
layer_configs.append(layer.get_config())
layer_weights.append(layer.get_weights())
print(f"len(layer_configs) = {len(layer_configs)}")
print(f"len(layer_weights) = {len(layer_weights)}")
layer_configs[-9]
layer_name = 'conv2_block1_1_conv' # conv5_block32_1_conv
model_weight = model_loaded.get_layer(layer_name).get_weights()[0]
#model_biases = model_loaded.get_layer(layer_name).get_weights()[1]
print(f"type(model_weight) = {type(model_weight)}")
print(f"model_weight.shape = {model_weight.shape}")
model_weight[0][0].shape
plt.matshow(model_weight[0, 0, :, :], cmap ='viridis')
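# A small extra sketch for this weight-visualization section: a histogram of the selected
# layer's weight values, a quick, model-agnostic way to inspect the weight distribution.
plt.figure(figsize=(6, 4))
plt.hist(model_weight.ravel(), bins=100)
plt.title(f"Weight distribution of layer '{layer_name}'")
plt.xlabel('weight value')
plt.ylabel('count')
plt.show()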
###Output
_____no_output_____ |
Skin_Lesion_ResNet50.ipynb | ###Markdown
**Project Contributors: 18111006 (Alan Biju), 18111015 (Ayush Rai), 18111016 (Banoth Naveen Kumar), 18111025 (Gaurav Kar)****ResNet50**
###Code
from google.colab import drive
drive.mount('/content/drive')
from zipfile import ZipFile
filename="/content/drive/My Drive/HAM10000.zip"
with ZipFile(filename,'r') as zip:
zip.extractall()
print("done")
import pandas as pd
import numpy as np
import os
import tensorflow as tf
import cv2
from keras import backend as K
from keras.layers import Layer,InputSpec
import keras.layers as kl
from glob import glob
from sklearn.metrics import roc_curve, auc
from keras.preprocessing import image
from tensorflow.keras.models import Sequential
from sklearn.metrics import roc_auc_score
from tensorflow.keras import callbacks
from tensorflow.keras.callbacks import ModelCheckpoint,EarlyStopping
from matplotlib import pyplot as plt
from tensorflow.keras import Model
from tensorflow.keras.layers import concatenate,Dense, Conv2D, MaxPooling2D, Flatten,Input,Activation,add,AveragePooling2D,GlobalAveragePooling2D,BatchNormalization,Dropout
%matplotlib inline
import shutil
from sklearn.metrics import precision_score, recall_score, accuracy_score,classification_report ,confusion_matrix
from tensorflow.python.platform import build_info as tf_build_info
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from sklearn.model_selection import train_test_split
from PIL import ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES = True
data_pd = pd.read_csv('/content/drive/MyDrive/HAM10000_metadata.csv')
data_pd.head()
train_dir = os.path.join('HAM10000', 'train_dir')
test_dir = os.path.join('HAM10000', 'test_dir')
df_count = data_pd.groupby('lesion_id').count()
df_count.head()
df_count = df_count[df_count['dx'] == 1]
df_count.reset_index(inplace=True)
def duplicates(x):
unique = set(df_count['lesion_id'])
if x in unique:
return 'no'
else:
return 'duplicates'
data_pd['is_duplicate'] = data_pd['lesion_id'].apply(duplicates)
data_pd.head()
df_count = data_pd[data_pd['is_duplicate'] == 'no']
train, test_df = train_test_split(df_count, test_size=0.15, stratify=df_count['dx'])
def identify_trainOrtest(x):
test_data = set(test_df['image_id'])
if str(x) in test_data:
return 'test'
else:
return 'train'
#creating train_df
data_pd['train_test_split'] = data_pd['image_id'].apply(identify_trainOrtest)
train_df = data_pd[data_pd['train_test_split'] == 'train']
train_df.head()
test_df.head()
# Image id of train and test images
train_list = list(train_df['image_id'])
test_list = list(test_df['image_id'])
len(test_list)
len(train_list)
# Set the image_id as the index in data_pd
data_pd.set_index('image_id', inplace=True)
os.mkdir(train_dir)
os.mkdir(test_dir)
targetnames = ['akiec', 'bcc', 'bkl', 'df', 'mel', 'nv', 'vasc']
for i in targetnames:
directory1=train_dir+'/'+i
directory2=test_dir+'/'+i
os.mkdir(directory1)
os.mkdir(directory2)
for image in train_list:
file_name = image+'.jpg'
label = data_pd.loc[image, 'dx']
# path of source image
source = os.path.join('HAM10000', file_name)
# copying the image from the source to target file
target = os.path.join(train_dir, label, file_name)
shutil.copyfile(source, target)
for image in test_list:
file_name = image+'.jpg'
label = data_pd.loc[image, 'dx']
# path of source image
source = os.path.join('HAM10000', file_name)
# copying the image from the source to target file
target = os.path.join(test_dir, label, file_name)
shutil.copyfile(source, target)
targetnames = ['akiec', 'bcc', 'bkl', 'df', 'mel', 'nv', 'vasc']
# Augmenting images and storing them in temporary directories
for img_class in targetnames:
#creating temporary directories
# creating a base directory
aug_dir = 'aug_dir'
os.mkdir(aug_dir)
# creating a subdirectory inside the base directory for images of the same class
img_dir = os.path.join(aug_dir, 'img_dir')
os.mkdir(img_dir)
img_list = os.listdir('HAM10000/train_dir/' + img_class)
# Copy images from the class train dir to the img_dir
for file_name in img_list:
# path of source image in training directory
source = os.path.join('HAM10000/train_dir/' + img_class, file_name)
# creating a target directory to send images
target = os.path.join(img_dir, file_name)
# copying the image from the source to target file
shutil.copyfile(source, target)
# Temporary augumented dataset directory.
source_path = aug_dir
# Augmented images will be saved to training directory
save_path = 'HAM10000/train_dir/' + img_class
# Creating Image Data Generator to augment images
datagen = tf.keras.preprocessing.image.ImageDataGenerator(
rotation_range=180,
width_shift_range=0.1,
height_shift_range=0.1,
zoom_range=0.1,
horizontal_flip=True,
vertical_flip=True,
fill_mode='nearest'
)
batch_size = 50
aug_datagen = datagen.flow_from_directory(source_path,save_to_dir=save_path,save_format='jpg',target_size=(224, 224),batch_size=batch_size)
# Generate the augmented images
aug_images = 8000
num_files = len(os.listdir(img_dir))
num_batches = int(np.ceil((aug_images - num_files) / batch_size))
# creating 8000 augmented images per class
for i in range(0, num_batches):
images, labels = next(aug_datagen)
# delete temporary directory
shutil.rmtree('aug_dir')
train_path = 'HAM10000/train_dir'
test_path = 'HAM10000/test_dir'
batch_size=16
datagen=ImageDataGenerator(preprocessing_function=tf.keras.applications.inception_resnet_v2.preprocess_input)
image_size = 224
print("\nTrain Batches: ")
train_batches = datagen.flow_from_directory(directory=train_path,
target_size=(image_size,image_size),
batch_size=batch_size,
shuffle=True)
print("\nTest Batches: ")
test_batches =datagen.flow_from_directory(test_path,
target_size=(image_size,image_size),
batch_size=batch_size,
shuffle=False)
resnet = tf.keras.applications.ResNet50(
include_top=True,
weights="imagenet",
input_tensor=None,
input_shape=None,
pooling=None,
classes=1000,
)
# Take the output of the third-from-last layer, i.e. drop ResNet50's original average-pooling and classification layers.
conv = resnet.layers[-3].output
output = GlobalAveragePooling2D()(conv)
output = Dense(7, activation='softmax')(output)
model = Model(inputs=resnet.input, outputs=output)
model.summary()
opt1=tf.keras.optimizers.Adam(learning_rate=0.01,epsilon=0.1)
model.compile(optimizer=opt1,
loss='categorical_crossentropy',
metrics=['accuracy'])
class_weights = {
0: 1.0, # akiec
1: 1.0, # bcc
2: 1.0, # bkl
3: 1.0, # df
4: 5.0, # mel
5: 1.0, # nv
6: 1.0, # vasc
}
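# An alternative sketch (not wired into the training call below): derive inverse-frequency
# class weights from the original, pre-augmentation training dataframe instead of hard-coding
# them. This assumes the generator's class indices follow the alphabetical order of
# `targetnames`, which is how flow_from_directory assigns them.
class_counts = train_df['dx'].value_counts()
balanced_class_weights = {
    i: float(class_counts.sum() / (len(targetnames) * class_counts[name]))
    for i, name in enumerate(targetnames)
}
print(balanced_class_weights)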
checkpoint= ModelCheckpoint(filepath = 'ResNet50.hdf5',monitor='val_accuracy',save_best_only=True,save_weights_only=True)
Earlystop = EarlyStopping(monitor='val_loss', mode='min',patience=40, min_delta=0.001)
history = model.fit(train_batches,
steps_per_epoch=(len(train_df)/10),
epochs=300,
verbose=2,
validation_data=test_batches,validation_steps=len(test_df)/batch_size,callbacks=[checkpoint,Earlystop],class_weight=class_weights)
from tensorflow.keras import models
model.load_weights("ResNet50.hdf5")
predictions = model.predict(test_batches, steps=len(test_df)/batch_size, verbose=0)
#geting predictions on test dataset
y_pred = np.argmax(predictions, axis=1)
targetnames = ['akiec', 'bcc', 'bkl', 'df', 'mel', 'nv', 'vasc']
#getting the true labels per image
y_true = test_batches.classes
#getting the predicted labels per image
y_prob=predictions
from tensorflow.keras.utils import to_categorical
y_test = to_categorical(y_true)
# Creating classification report
report = classification_report(y_true, y_pred, target_names=targetnames)
print("\nClassification Report:")
print(report)
print("Precision: "+ str(precision_score(y_true, y_pred, average='weighted')))
print("Recall: "+ str(recall_score(y_true, y_pred, average='weighted')))
print("Accuracy: " + str(accuracy_score(y_true, y_pred)))
print("weighted Roc score: " + str(roc_auc_score(y_test,y_prob,multi_class='ovr',average='weighted')))
print("Precision: "+ str(precision_score(y_true, y_pred, average='macro')))
print("Recall: "+ str(recall_score(y_true, y_pred, average='macro')))
print("Accuracy: " + str(accuracy_score(y_true, y_pred)))
print("Macro Roc score: " + str(roc_auc_score(y_test,y_prob,multi_class='ovr',average='macro')))
print("Precision: "+ str(precision_score(y_true, y_pred, average='micro')))
print("Recall: "+ str(recall_score(y_true, y_pred, average='micro')))
print("Accuracy: " + str(accuracy_score(y_true, y_pred)))
tpr={}
fpr={}
roc_auc={}
fpr["micro"], tpr["micro"], _ = roc_curve(y_test.ravel(), y_prob.ravel())
roc_auc["micro"] = auc(fpr["micro"], tpr["micro"])
print("Micro Roc score: " + str(roc_auc["micro"]))
fpr = {}
tpr = {}
roc_auc = {}
for i in range(7):
r = roc_auc_score(y_test[:, i], y_prob[:, i])
print("The ROC AUC score of "+targetnames[i]+" is: "+str(r))
# Compute ROC curve and ROC area for each class
fpr = {}
tpr = {}
roc_auc = dict()
for i in range(7):
fpr[i], tpr[i], _ = roc_curve(y_test[:, i], y_prob[:, i], drop_intermediate=False)
roc_auc[i] = auc(fpr[i], tpr[i])
plt.plot(fpr[0], tpr[0],'v-',label='akiec: ROC curve of (area = %0.2f)' % roc_auc[0])
plt.plot(fpr[1], tpr[1],'c',label='bcc: ROC curve of (area = %0.2f)' % roc_auc[1])
plt.plot(fpr[2], tpr[2],'b',label='bkl: ROC curve of (area = %0.2f)' % roc_auc[2])
plt.plot(fpr[3], tpr[3],'g',label='df: ROC curve of (area = %0.2f)' % roc_auc[3])
plt.plot(fpr[4], tpr[4],'y',label='mel: ROC curve of (area = %0.2f)' % roc_auc[4])
plt.plot(fpr[5], tpr[5],'o-',label='nv: ROC curve of (area = %0.2f)' % roc_auc[5])
plt.plot(fpr[6], tpr[6],'r',label='vasc: ROC curve of (area = %0.2f)' % roc_auc[6])
plt.plot([0, 1], [0, 1], 'k--')
plt.xlim([-0.1, 1.1])
plt.ylim([-0.1, 1.1])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic (all classes)')
plt.legend(loc="lower right")
plt.show()
###Output
_____no_output_____ |
ssd7_training-parallel_training.ipynb | ###Markdown
SSD7 Training TutorialThis tutorial explains how to train an SSD7 on the Udacity road traffic datasets, and just generally how to use this SSD implementation.Disclaimer about SSD7:As you will see below, training SSD7 on the aforementioned datasets yields alright results, but I'd like to emphasize that SSD7 is not a carefully optimized network architecture. The idea was just to build a low-complexity network that is fast (roughly 127 FPS or more than 3 times as fast as SSD300 on a GTX 1070) for testing purposes. Would slightly different anchor box scaling factors or a slightly different number of filters in individual convolution layers make SSD7 significantly better at similar complexity? I don't know, I haven't tried.
###Code
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping, ReduceLROnPlateau, TerminateOnNaN, CSVLogger
from tensorflow.keras import backend as K
from tensorflow.keras.models import load_model
from math import ceil
import numpy as np
from matplotlib import pyplot as plt
from models.keras_ssd7 import build_model
#from models.keras_ssd7_quantize import build_model_quantize
from models.keras_ssd7_quantize2 import build_model_quantize2
#from keras_loss_function.keras_ssd_loss import SSDLoss #commented to test TF2.0
from keras_loss_function.keras_ssd_loss_tf2 import SSDLoss # added for TF2.0
from keras_layers.keras_layer_AnchorBoxes import AnchorBoxes
from keras_layers.keras_layer_AnchorBoxes_1 import DefaultDenseQuantizeConfig
from keras_layers.keras_layer_DecodeDetections import DecodeDetections
from keras_layers.keras_layer_DecodeDetectionsFast import DecodeDetectionsFast
from ssd_encoder_decoder.ssd_input_encoder import SSDInputEncoder
from ssd_encoder_decoder.ssd_output_decoder import decode_detections, decode_detections_fast
from data_generator.object_detection_2d_data_generator import DataGenerator
from data_generator.object_detection_2d_misc_utils import apply_inverse_transforms
from data_generator.data_augmentation_chain_variable_input_size import DataAugmentationVariableInputSize
from data_generator.data_augmentation_chain_constant_input_size import DataAugmentationConstantInputSize
from data_generator.data_augmentation_chain_original_ssd import SSDDataAugmentation
## imports used for pruning
import tensorflow_model_optimization as tfmot
prune_low_magnitude = tfmot.sparsity.keras.prune_low_magnitude
%matplotlib inline
## loading tensorboard
#%load_ext tensorboard
tf.__version__
import tensorflow as tf
if tf.test.gpu_device_name():
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
else:
print("Please install GPU version of TF")
import tensorflow as tf
print("Num GPUs Available: ", len(tf.config.experimental.list_physical_devices('GPU')))
physical_device = tf.config.experimental.list_physical_devices('GPU')
print(f'Device found : {physical_device}')
physical_devices = tf.config.experimental.list_physical_devices('GPU')
print(physical_devices)
if physical_devices:
tf.config.experimental.set_memory_growth(physical_devices[0], True)
from tensorflow.python.client import device_lib
device_lib.list_local_devices()
import tensorflow as tf
print("Num GPUs Available: ", len(tf.config.experimental.list_physical_devices('GPU')))
###Output
_____no_output_____
###Markdown
1. Set the model configuration parametersThe cell below sets a number of parameters that define the model configuration. The parameters set here are being used both by the `build_model()` function that builds the model as well as further down by the constructor for the `SSDInputEncoder` object that is needed to match ground truth and anchor boxes during the training.Here are just some comments on a few of the parameters, read the documentation for more details:* Set the height, width, and number of color channels to whatever you want the model to accept as image input. If your input images have a different size than you define as the model input here, or if your images have non-uniform size, then you must use the data generator's image transformations (resizing and/or cropping) so that your images end up having the required input size before they are fed to the model. The SSD300 training tutorial uses the same image pre-processing and data augmentation as the original Caffe implementation, so take a look at that to see one possibility of how to deal with non-uniform-size images.* The number of classes is the number of positive classes in your dataset, e.g. 20 for Pascal VOC or 80 for MS COCO. Class ID 0 must always be reserved for the background class, i.e. your positive classes must have positive integers as their IDs in your dataset.* The `mode` argument in the `build_model()` function determines whether the model will be built with or without a `DecodeDetections` layer as its last layer. In 'training' mode, the model outputs the raw prediction tensor, while in 'inference' and 'inference_fast' modes, the raw predictions are being decoded into absolute coordinates and filtered via confidence thresholding, non-maximum suppression, and top-k filtering. The difference between the latter two modes is that 'inference' uses the decoding procedure of the original Caffe implementation, while 'inference_fast' uses a faster, but possibly less accurate decoding procedure.* The reason why the list of scaling factors has 5 elements even though there are only 4 predictor layers in SSD7 is that the last scaling factor is used for the second aspect-ratio-1 box of the last predictor layer. Refer to the documentation for details.* `build_model()` and `SSDInputEncoder` have two arguments for the anchor box aspect ratios: `aspect_ratios_global` and `aspect_ratios_per_layer`. You can use either of the two, you don't need to set both. If you use `aspect_ratios_global`, then you pass one list of aspect ratios and these aspect ratios will be used for all predictor layers. Every aspect ratio you want to include must be listed once and only once. If you use `aspect_ratios_per_layer`, then you pass a nested list containing lists of aspect ratios for each individual predictor layer. This is what the SSD300 training tutorial does. It's your design choice whether all predictor layers should use the same aspect ratios or whether you think that for your dataset, certain aspect ratios are only necessary for some predictor layers but not for others. Of course more aspect ratios means more predicted boxes, which in turn means increased computational complexity.* If `two_boxes_for_ar1 == True`, then each predictor layer will predict two boxes with aspect ratio one, one a bit smaller, the other one a bit larger.* If `clip_boxes == True`, then the anchor boxes will be clipped so that they lie entirely within the image boundaries. 
It is recommended not to clip the boxes. The anchor boxes form the reference frame for the localization prediction. This reference frame should be the same at every spatial position.* In the matching process during the training, the anchor box offsets are being divided by the variances. Leaving them at 1.0 for each of the four box coordinates means that they have no effect. Setting them to less than 1.0 spreads the imagined anchor box offset distribution for the respective box coordinate.* `normalize_coords` converts all coordinates from absolute coordinates to coordinates that are relative to the image height and width. This setting has no effect on the outcome of the training.
###Code
img_height = 300 # Height of the input images
img_width = 480 # Width of the input images
img_channels = 3 # Number of color channels of the input images
intensity_mean = 127.5 # Set this to your preference (maybe `None`). The current settings transform the input pixel values to the interval `[-1,1]`.
intensity_range = 127.5 # Set this to your preference (maybe `None`). The current settings transform the input pixel values to the interval `[-1,1]`.
n_classes = 5 # Number of positive classes
scales = [0.08, 0.16, 0.32, 0.64, 0.96] # An explicit list of anchor box scaling factors. If this is passed, it will override `min_scale` and `max_scale`.
aspect_ratios = [0.5, 1.0, 2.0] # The list of aspect ratios for the anchor boxes
two_boxes_for_ar1 = True # Whether or not you want to generate two anchor boxes for aspect ratio 1
steps = None # In case you'd like to set the step sizes for the anchor box grids manually; not recommended
offsets = None # In case you'd like to set the offsets for the anchor box grids manually; not recommended
clip_boxes = False # Whether or not to clip the anchor boxes to lie entirely within the image boundaries
variances = [1.0, 1.0, 1.0, 1.0] # The list of variances by which the encoded target coordinates are scaled
normalize_coords = True # Whether or not the model is supposed to use coordinates relative to the image size
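# A small sanity check (standard SSD box-count arithmetic): each predictor-layer cell gets one
# anchor box per aspect ratio, plus one extra aspect-ratio-1 box when `two_boxes_for_ar1` is True.
n_boxes_per_cell = len(aspect_ratios) + (1 if two_boxes_for_ar1 else 0)
print(f"Anchor boxes per predictor-layer cell: {n_boxes_per_cell}")  # 3 aspect ratios + 1 = 4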
###Output
_____no_output_____
###Markdown
2. Build or load the modelYou will want to execute either of the two code cells in the subsequent two sub-sections, not both. 2.1 Create a new modelIf you want to create a new model, this is the relevant section for you. If you want to load a previously saved model, skip ahead to section 2.2.The code cell below does the following things:1. It calls the function `build_model()` to build the model.2. It optionally loads some weights into the model.3. It then compiles the model for the training. In order to do so, we're defining an optimizer (Adam) and a loss function (SSDLoss) to be passed to the `compile()` method.`SSDLoss` is a custom Keras loss function that implements the multi-task log loss for classification and smooth L1 loss for localization. `neg_pos_ratio` and `alpha` are set as in the paper.
###Code
# 1: Build the Keras model
K.clear_session() # Clear previous models from memory.
model = build_model(image_size=(img_height, img_width, img_channels),
n_classes=n_classes,
mode='training',
l2_regularization=0.0005,
scales=scales,
aspect_ratios_global=aspect_ratios,
aspect_ratios_per_layer=None,
two_boxes_for_ar1=two_boxes_for_ar1,
steps=steps,
offsets=offsets,
clip_boxes=clip_boxes,
variances=variances,
normalize_coords=normalize_coords,
subtract_mean=intensity_mean,
divide_by_stddev=intensity_range)
# 2: Optional: Load some weights
#model.load_weights('./ssd7_weights.h5', by_name=True)
# 3: Instantiate an Adam optimizer and the SSD loss function and compile the model
adam = Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0)
ssd_loss = SSDLoss(neg_pos_ratio=3, alpha=1.0)
model.compile(optimizer=adam, loss=ssd_loss.compute_loss)
model.summary()
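# Purely illustrative sketch of the smooth L1 localization term that SSDLoss uses; the actual
# training relies on `ssd_loss.compute_loss` compiled above, not on this NumPy helper.
def smooth_l1(diff):
    """Elementwise smooth L1: 0.5*x^2 for |x| < 1, |x| - 0.5 otherwise."""
    abs_diff = np.abs(diff)
    return np.where(abs_diff < 1.0, 0.5 * diff**2, abs_diff - 0.5)
print(smooth_l1(np.array([-2.0, -0.5, 0.0, 0.5, 2.0])))  # -> [1.5, 0.125, 0., 0.125, 1.5]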
###Output
_____no_output_____
###Markdown
2.2 Load a saved modelIf you have previously created and saved a model and would now like to load it, simply execute the next code cell. The only thing you need to do is to set the path to the saved model HDF5 file that you would like to load.The SSD model contains custom objects: Neither the loss function, nor the anchor box or detection decoding layer types are contained in the Keras core library, so we need to provide them to the model loader.This next code cell assumes that you want to load a model that was created in 'training' mode. If you want to load a model that was created in 'inference' or 'inference_fast' mode, you'll have to add the `DecodeDetections` or `DecodeDetectionsFast` layer type to the `custom_objects` dictionary below.
###Code
#'''
# TODO: Set the path to the `.h5` file of the model to be loaded.
model_path = './striped_pruned_polynomial_20to80p_from_base_model_2202.h5'
#model_path = './saved_models/base_model_13_01/trained_a_base_model_1301.h5'
# We need to create an SSDLoss object in order to pass that to the model loader.
adam = Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0)
ssd_loss = SSDLoss(neg_pos_ratio=3, alpha=1.0)
#K.clear_session() # Clear previous models from memory.
model = load_model(model_path, custom_objects={'AnchorBoxes': AnchorBoxes,
'compute_loss': ssd_loss.compute_loss})
#model = load_model(model_path, custom_objects={'compute_loss': ssd_loss.compute_loss})
#'''
model.summary()
model.save('saved_finroc_model/model')
###Output
_____no_output_____
###Markdown
2.2.1 Compile if you are retraining a pruned model
###Code
model.compile(optimizer=adam, loss=ssd_loss.compute_loss)
###Output
_____no_output_____
###Markdown
2.3 Load a quantized Model
###Code
adam = Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0)
ssd_loss = SSDLoss(neg_pos_ratio=3, alpha=1.0)
quantize_scope = tfmot.quantization.keras.quantize_scope
with quantize_scope():
#quantized_model = tf.keras.models.load_model('./saved_models/quantized_model_27_01/ssd7_quant_epoch-04_loss-2.5155_val_loss-2.5268.h5', custom_objects={'AnchorBoxes': AnchorBoxes, 'compute_loss': ssd_loss.compute_loss})
quant_aware_model = tf.keras.models.load_model('./saved_models/pruned_quantized_models/ssd7_pruned_50p_quantized_1802_epoch-29_loss-1.9227_val_loss-2.1054.h5', custom_objects={'AnchorBoxes': AnchorBoxes, 'compute_loss': ssd_loss.compute_loss})
#loaded_model = tf.keras.models.load_model('./quantize_ready_model_20_01_Conv2D.h5')
#with quantize_scope(
# {'DefaultDenseQuantizeConfig': DefaultDenseQuantizeConfig,
# 'AnchorBoxes': AnchorBoxes}):
# loaded_model = tf.keras.models.load_model('./ssd7_quant_epoch-04_loss-2.5155_val_loss-2.5268.h5', custom_objects={'AnchorBoxes': AnchorBoxes})
#loaded_model.compile(optimizer=adam, loss=ssd_loss.compute_loss)
#quant_aware_model = tfmot.quantization.keras.quantize_apply(model)
quant_aware_model.summary()
adam = Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0)
ssd_loss = SSDLoss(neg_pos_ratio=3, alpha=1.0)
quant_aware_model.compile(optimizer=adam, loss=ssd_loss.compute_loss)
###Output
_____no_output_____
###Markdown
2.4 Load the pruned model
###Code
import tensorflow as tf
ssd_loss = SSDLoss(neg_pos_ratio=3, alpha=1.0)
with tfmot.sparsity.keras.prune_scope():
pruned_model = tf.keras.models.load_model('./saved_models/pruned_models/50percent/1502/striped_pruned_model_1501_50p.h5', custom_objects={'AnchorBoxes': AnchorBoxes,
'compute_loss': ssd_loss.compute_loss})
pruned_model.summary()
###Output
_____no_output_____
###Markdown
3. Set up the data generators for the trainingThe code cells below set up data generators for the training and validation datasets to train the model. You will have to set the file paths to your dataset. Depending on the annotations format of your dataset, you might also have to switch from the CSV parser to the XML or JSON parser, or you might have to write a new parser method in the `DataGenerator` class that can handle whatever format your annotations are in. The [README](https://github.com/pierluigiferrari/ssd_keras/blob/master/README.md) of this repository provides a summary of the design of the `DataGenerator`, which should help you in case you need to write a new parser or adapt one of the existing parsers to your needs.Note that the generator provides two options to speed up the training. By default, it loads the individual images for a batch from disk. This has two disadvantages. First, for compressed image formats like JPG, this is a huge computational waste, because every image needs to be decompressed again and again every time it is being loaded. Second, the images on disk are likely not stored in a contiguous block of memory, which may also slow down the loading process. The first option that `DataGenerator` provides to deal with this is to load the entire dataset into memory, which reduces the access time for any image to a negligible amount, but of course this is only an option if you have enough free memory to hold the whole dataset. As a second option, `DataGenerator` provides the possibility to convert the dataset into a single HDF5 file. This HDF5 file stores the images as uncompressed arrays in a contiguous block of memory, which dramatically speeds up the loading time. It's not as good as having the images in memory, but it's a lot better than the default option of loading them from their compressed JPG state every time they are needed. Of course such an HDF5 dataset may require significantly more disk space than the compressed images. You can later load these HDF5 datasets directly in the constructor.Set the batch size to your preference and to what your GPU memory allows, it's not the most important hyperparameter. The Caffe implementation uses a batch size of 32, but smaller batch sizes work fine, too.The `DataGenerator` itself is fairly generic. It doesn't contain any data augmentation or bounding box encoding logic. Instead, you pass a list of image transformations and an encoder for the bounding boxes in the `transformations` and `label_encoder` arguments of the data generator's `generate()` method, and the data generator will then apply those given transformations and the encoding to the data. Everything here is preset already, but if you'd like to learn more about the data generator and its data augmentation capabilities, take a look at the detailed tutorial in [this](https://github.com/pierluigiferrari/data_generator_object_detection_2d) repository.The image processing chain defined further down in the object named `data_augmentation_chain` is just one possibility of what a data augmentation pipeline for uniform-size images could look like. Feel free to put together other image processing chains, you can use the `DataAugmentationConstantInputSize` class as a template. Or you could use the original SSD data augmentation pipeline by instantiating an `SSDDataAugmentation` object and passing that to the generator instead. 
This procedure is not exactly efficient, but it evidently produces good results on multiple datasets.An `SSDInputEncoder` object, `ssd_input_encoder`, is passed to both the training and validation generators. As explained above, it matches the ground truth labels to the model's anchor boxes and encodes the box coordinates into the format that the model needs. Note:The example setup below was used to train SSD7 on two road traffic datasets released by [Udacity](https://github.com/udacity/self-driving-car/tree/master/annotations) with around 20,000 images in total and 5 object classes (car, truck, pedestrian, bicyclist, traffic light), although the vast majority of the objects are cars. The original datasets have a constant image size of 1200x1920 RGB. I consolidated the two datasets, removed a few bad samples (although there are probably many more), and resized the images to 300x480 RGB, i.e. to one sixteenth of the original image size. In case you'd like to train a model on the same dataset, you can download the consolidated and resized dataset I used [here](https://drive.google.com/open?id=1tfBFavijh4UTG4cGqIKwhcklLXUDuY0D) (about 900 MB).
###Code
# 1: Instantiate two `DataGenerator` objects: One for training, one for validation.
# Optional: If you have enough memory, consider loading the images into memory for the reasons explained above.
train_dataset = DataGenerator(load_images_into_memory=False, hdf5_dataset_path=None)
val_dataset = DataGenerator(load_images_into_memory=False, hdf5_dataset_path=None)
# 2: Parse the image and label lists for the training and validation datasets.
# TODO: Set the paths to your dataset here.
# Images
images_dir = '../ssd_pruning/udacity_driving_datasets/'
# Ground truth
train_labels_filename = '../ssd_pruning/udacity_driving_datasets/labels_train_car.csv'
val_labels_filename = '../ssd_pruning/udacity_driving_datasets/labels_val_car.csv'
train_dataset.parse_csv(images_dir=images_dir,
labels_filename=train_labels_filename,
input_format=['image_name', 'xmin', 'xmax', 'ymin', 'ymax', 'class_id'], # This is the order of the first six columns in the CSV file that contains the labels for your dataset. If your labels are in XML format, maybe the XML parser will be helpful, check the documentation.
include_classes='all')
val_dataset.parse_csv(images_dir=images_dir,
labels_filename=val_labels_filename,
input_format=['image_name', 'xmin', 'xmax', 'ymin', 'ymax', 'class_id'],
include_classes='all')
# Optional: Convert the dataset into an HDF5 dataset. This will require more disk space, but will
# speed up the training. Doing this is not relevant in case you activated the `load_images_into_memory`
# option in the constructor, because in that case the images are in memory already anyway. If you don't
# want to create HDF5 datasets, comment out the subsequent two function calls.
#train_dataset.create_hdf5_dataset(file_path='dataset_udacity_traffic_train.h5',
# resize=False,
# variable_image_size=True,
# verbose=True)
#
#val_dataset.create_hdf5_dataset(file_path='dataset_udacity_traffic_val.h5',
# resize=False,
# variable_image_size=True,
# verbose=True)
# Get the number of samples in the training and validations datasets.
train_dataset_size = train_dataset.get_dataset_size()
val_dataset_size = val_dataset.get_dataset_size()
print("Number of images in the training dataset:\t{:>6}".format(train_dataset_size))
print("Number of images in the validation dataset:\t{:>6}".format(val_dataset_size))
# 3: Set the batch size.
batch_size = 8
# 4: Define the image processing chain.
data_augmentation_chain = DataAugmentationConstantInputSize(random_brightness=(-48, 48, 0.5),
random_contrast=(0.5, 1.8, 0.5),
random_saturation=(0.5, 1.8, 0.5),
random_hue=(18, 0.5),
random_flip=0.5,
random_translate=((0.03,0.5), (0.03,0.5), 0.5),
random_scale=(0.5, 2.0, 0.5),
n_trials_max=3,
clip_boxes=True,
overlap_criterion='area',
bounds_box_filter=(0.3, 1.0),
bounds_validator=(0.5, 1.0),
n_boxes_min=1,
background=(0,0,0))
# 5: Instantiate an encoder that can encode ground truth labels into the format needed by the SSD loss function.
# The encoder constructor needs the spatial dimensions of the model's predictor layers to create the anchor boxes.
'''
predictor_sizes = [quant_aware_model.get_layer('quant_classes4').output_shape[1:3],
quant_aware_model.get_layer('quant_classes5').output_shape[1:3],
quant_aware_model.get_layer('quant_classes6').output_shape[1:3],
quant_aware_model.get_layer('quant_classes7').output_shape[1:3]]
'''
'''
predictor_sizes = [quant_aware_model.get_layer('classes4').output_shape[1:3],
quant_aware_model.get_layer('classes5').output_shape[1:3],
quant_aware_model.get_layer('classes6').output_shape[1:3],
quant_aware_model.get_layer('classes7').output_shape[1:3]]
'''
#'''
predictor_sizes = [model.get_layer('classes4').output_shape[1:3],
model.get_layer('classes5').output_shape[1:3],
model.get_layer('classes6').output_shape[1:3],
model.get_layer('classes7').output_shape[1:3]]
#'''
'''
predictor_sizes = [pruned_model.get_layer('classes4').output_shape[1:3],
pruned_model.get_layer('classes5').output_shape[1:3],
pruned_model.get_layer('classes6').output_shape[1:3],
pruned_model.get_layer('classes7').output_shape[1:3]]
'''
ssd_input_encoder = SSDInputEncoder(img_height=img_height,
img_width=img_width,
n_classes=n_classes,
predictor_sizes=predictor_sizes,
scales=scales,
aspect_ratios_global=aspect_ratios,
two_boxes_for_ar1=two_boxes_for_ar1,
steps=steps,
offsets=offsets,
clip_boxes=clip_boxes,
variances=variances,
matching_type='multi',
pos_iou_threshold=0.5,
neg_iou_limit=0.3,
normalize_coords=normalize_coords)
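# Illustrative-only sketch of the IoU criterion the encoder above uses for matching: an anchor
# box whose IoU with a ground truth box is >= `pos_iou_threshold` (0.5 here) becomes a positive
# match. Boxes are [xmin, ymin, xmax, ymax] in absolute pixel coordinates; not part of training.
def iou(box_a, box_b):
    ix_min, iy_min = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix_max, iy_max = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix_max - ix_min) * max(0.0, iy_max - iy_min)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
print(f"Example IoU: {iou([100, 100, 200, 200], [150, 150, 250, 250]):.3f}")  # 0.143 -> not a positive match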
# 6: Create the generator handles that will be passed to Keras' `fit_generator()` function.
train_generator = train_dataset.generate(batch_size=batch_size,
shuffle=True,
transformations=[data_augmentation_chain],
label_encoder=ssd_input_encoder,
returns={'processed_images',
'encoded_labels'},
keep_images_without_gt=False)
val_generator = val_dataset.generate(batch_size=batch_size,
shuffle=False,
transformations=[],
label_encoder=ssd_input_encoder,
returns={'processed_images',
'encoded_labels'},
keep_images_without_gt=False)
print('Done')
print('Batches per full pass over the training set:', train_dataset_size/batch_size)
###Output
_____no_output_____
###Markdown
4. Set the remaining training parameters and train the unpruned-modelWe've already chosen an optimizer and a learning rate and set the batch size above, now let's set the remaining training parameters.I'll set a few Keras callbacks below, one for early stopping, one to reduce the learning rate if the training stagnates, one to save the best models during the training, and one to continuously stream the training history to a CSV file after every epoch. Logging to a CSV file makes sense, because if we didn't do that, in case the training terminates with an exception at some point or if the kernel of this Jupyter notebook dies for some reason or anything like that happens, we would lose the entire history for the trained epochs. Feel free to add more callbacks if you want TensorBoard summaries or whatever.
###Code
# Define model callbacks.
# TODO: Set the filepath under which you want to save the weights.
model_checkpoint = ModelCheckpoint(filepath='ssd7_polynomial_20pto80p_quantized_epoch-{epoch:02d}_loss-{loss:.4f}_val_loss-{val_loss:.4f}.h5',
monitor='val_loss',
verbose=1,
save_best_only=True,
save_weights_only=False,
mode='auto',
period=1)
csv_logger = CSVLogger(filename='ssd7_training_log_polynomial_20pto80p_quantized.csv',
separator=',',
append=True)
early_stopping = EarlyStopping(monitor='val_loss',
min_delta=0.0,
patience=10,
verbose=1)
reduce_learning_rate = ReduceLROnPlateau(monitor='val_loss',
factor=0.2,
patience=8,
verbose=1,
epsilon=0.001,
cooldown=0,
min_lr=0.00001)
#logdir = 'logs_tboard'
#tensorboard_callback = tf.keras.callbacks.TensorBoard(logdir, histogram_freq=1)
callbacks_unpruned = [model_checkpoint,
csv_logger,
early_stopping,
reduce_learning_rate]
###Output
WARNING:tensorflow:`period` argument is deprecated. Please use `save_freq` to specify the frequency in number of samples seen.
WARNING:tensorflow:`epsilon` argument is deprecated and will be removed, use `min_delta` instead.
###Markdown
I'll set one epoch to consist of 1,000 training steps. I'll arbitrarily set the number of epochs to 20 here. This does not imply that 20,000 training steps is the right number. Depending on the model, the dataset, the learning rate, etc. you might have to train much longer to achieve convergence, or maybe less.Instead of trying to train a model to convergence in one go, you might want to train only for a few epochs at a time.In order to only run a partial training and resume smoothly later on, there are a few things you should note:1. Always load the full model if you can, rather than building a new model and loading previously saved weights into it. Optimizers like SGD or Adam keep running averages of past gradient moments internally. If you always save and load full models when resuming a training, then the state of the optimizer is maintained and the training picks up exactly where it left off. If you build a new model and load weights into it, the optimizer is being initialized from scratch, which, especially in the case of Adam, leads to small but unnecessary setbacks every time you resume the training with previously saved weights.2. You should tell `fit_generator()` which epoch to start from, otherwise it will start with epoch 0 every time you resume the training. Set `initial_epoch` to be the next epoch of your training. Note that this parameter is zero-based, i.e. the first epoch is epoch 0. If you had trained for 10 epochs previously and now you'd want to resume the training from there, you'd set `initial_epoch = 10` (since epoch 10 is the eleventh epoch). Furthermore, set `final_epoch` to the last epoch you want to run. To stick with the previous example, if you had trained for 10 epochs previously and now you'd want to train for another 10 epochs, you'd set `initial_epoch = 10` and `final_epoch = 20`.3. Callbacks like `ModelCheckpoint` or `ReduceLROnPlateau` are stateful, so you might want to save their state somehow if you want to pick up a training exactly where you left off. 5. Train unpruned model
###Code
%tensorboard --logdir logs_tboard
# TODO: Set the epochs to train for.
# If you're resuming a previous training, set `initial_epoch` and `final_epoch` accordingly.
initial_epoch = 0
final_epoch = 30
steps_per_epoch = 1000
history = model.fit_generator(generator=train_generator,
steps_per_epoch=steps_per_epoch,
epochs=final_epoch,
callbacks=callbacks_unpruned,
validation_data=val_generator,
validation_steps=ceil(val_dataset_size/batch_size),
initial_epoch=initial_epoch)
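# Hedged sketch of how a later resume could look (file name and epoch numbers are placeholders):
# reload the full saved model so the optimizer state is preserved, then continue from epoch 30.
#resumed_model = load_model('trained_a_base_model_30ep_1502.h5',
#                           custom_objects={'AnchorBoxes': AnchorBoxes,
#                                           'compute_loss': ssd_loss.compute_loss})
#resumed_model.fit_generator(generator=train_generator,
#                            steps_per_epoch=steps_per_epoch,
#                            initial_epoch=30,
#                            epochs=40,
#                            callbacks=callbacks_unpruned,
#                            validation_data=val_generator,
#                            validation_steps=ceil(val_dataset_size/batch_size))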
#tf.saved_model.save(model, './saved_finroc_model/')
print('Inputs --> ', model.inputs)
print('Outputs --> ', model.outputs)
from keras import backend as K
import tensorflow as tf
def freeze_session(session, keep_var_names=None, output_names=None, clear_devices=True):
"""
Freezes the state of a session into a pruned computation graph.
Creates a new computation graph where variable nodes are replaced by
constants taking their current value in the session. The new graph will be
pruned so subgraphs that are not necessary to compute the requested
outputs are removed.
@param session The TensorFlow session to be frozen.
@param keep_var_names A list of variable names that should not be frozen,
or None to freeze all the variables in the graph.
@param output_names Names of the relevant graph outputs.
@param clear_devices Remove the device directives from the graph for better portability.
@return The frozen graph definition.
"""
from tensorflow.python.framework.graph_util import convert_variables_to_constants
graph = session.graph
with graph.as_default():
freeze_var_names = list(set(v.op.name for v in tf.global_variables()).difference(keep_var_names or []))
output_names = output_names or []
output_names += [v.op.name for v in tf.global_variables()]
# Graph -> GraphDef ProtoBuf
input_graph_def = graph.as_graph_def()
if clear_devices:
for node in input_graph_def.node:
node.device = ""
frozen_graph = convert_variables_to_constants(session, input_graph_def,
output_names, freeze_var_names)
return frozen_graph
frozen_graph = freeze_session(K.get_session(),
output_names=[out.op.name for out in model_for_pruning.outputs])
# Save to ./model/tf_model.pb
tf.train.write_graph(frozen_graph, "model", "tf_model_20p_to_80p_polynomial_base_pruned_2202.pb", as_text=False)
###Output
_____no_output_____
###Markdown
Let's look at how the training and validation loss evolved to check whether our training is going in the right direction:
###Code
plt.figure(figsize=(20,12))
plt.plot(history.history['loss'], label='loss')
plt.plot(history.history['val_loss'], label='val_loss')
plt.legend(loc='upper right', prop={'size': 24});
###Output
_____no_output_____
###Markdown
5.1 Saving unpruned model
###Code
model.save('trained_a_base_model_30ep_1502.h5', include_optimizer=True)
###Output
_____no_output_____
###Markdown
Compile quantized model
###Code
adam = Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0)
ssd_loss = SSDLoss(neg_pos_ratio=3, alpha=1.0)
quant_aware_model.compile(optimizer=adam, loss=ssd_loss.compute_loss)
###Output
WARNING:tensorflow:From /home/mohan/mlp/git/ssd_keras/keras_loss_function/keras_ssd_loss_tf2.py:76: where (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.where in 2.0, which has the same broadcast rule as np.where
###Markdown
5. Train quantized model
###Code
# TODO: Set the epochs to train for.
# If you're resuming a previous training, set `initial_epoch` and `final_epoch` accordingly.
initial_epoch = 0
final_epoch = 30
steps_per_epoch = 1000
history_quant = quant_aware_model.fit_generator(generator=train_generator,
steps_per_epoch=steps_per_epoch,
epochs=final_epoch,
callbacks=callbacks_unpruned,
validation_data=val_generator,
validation_steps=ceil(val_dataset_size/batch_size),
initial_epoch=initial_epoch)
###Output
Epoch 1/30
999/1000 [============================>.] - ETA: 0s - loss: 2.1970Epoch 1/30
531/1000 [==============>...............] - ETA: 43s - loss: 2.5720
Epoch 00001: val_loss improved from inf to 2.57204, saving model to ssd7_polynomial_20pto80p_quantized_epoch-01_loss-2.1968_val_loss-2.5720.h5
1000/1000 [==============================] - 246s 246ms/step - loss: 2.1968 - val_loss: 2.5720
Epoch 2/30
999/1000 [============================>.] - ETA: 0s - loss: 2.1763Epoch 1/30
531/1000 [==============>...............] - ETA: 41s - loss: 2.3620
Epoch 00002: val_loss improved from 2.57204 to 2.36203, saving model to ssd7_polynomial_20pto80p_quantized_epoch-02_loss-2.1758_val_loss-2.3620.h5
1000/1000 [==============================] - 227s 227ms/step - loss: 2.1758 - val_loss: 2.3620
Epoch 3/30
999/1000 [============================>.] - ETA: 0s - loss: 2.1624Epoch 1/30
531/1000 [==============>...............] - ETA: 41s - loss: 2.5316
Epoch 00003: val_loss did not improve from 2.36203
1000/1000 [==============================] - 229s 229ms/step - loss: 2.1620 - val_loss: 2.5316
Epoch 4/30
999/1000 [============================>.] - ETA: 0s - loss: 2.1674Epoch 1/30
531/1000 [==============>...............] - ETA: 42s - loss: 2.5257
Epoch 00004: val_loss did not improve from 2.36203
1000/1000 [==============================] - 230s 230ms/step - loss: 2.1672 - val_loss: 2.5257
Epoch 5/30
999/1000 [============================>.] - ETA: 0s - loss: 2.1606Epoch 1/30
531/1000 [==============>...............] - ETA: 41s - loss: 2.6188
Epoch 00005: val_loss did not improve from 2.36203
1000/1000 [==============================] - 229s 229ms/step - loss: 2.1606 - val_loss: 2.6188
Epoch 6/30
999/1000 [============================>.] - ETA: 0s - loss: 2.1593Epoch 1/30
531/1000 [==============>...............] - ETA: 41s - loss: 2.3986
Epoch 00006: val_loss did not improve from 2.36203
1000/1000 [==============================] - 229s 229ms/step - loss: 2.1596 - val_loss: 2.3986
Epoch 7/30
999/1000 [============================>.] - ETA: 0s - loss: 2.1702Epoch 1/30
531/1000 [==============>...............] - ETA: 42s - loss: 2.4144
Epoch 00007: val_loss did not improve from 2.36203
1000/1000 [==============================] - 229s 229ms/step - loss: 2.1694 - val_loss: 2.4144
Epoch 8/30
999/1000 [============================>.] - ETA: 0s - loss: 2.1611Epoch 1/30
531/1000 [==============>...............] - ETA: 41s - loss: 2.4241
Epoch 00008: val_loss did not improve from 2.36203
1000/1000 [==============================] - 230s 230ms/step - loss: 2.1613 - val_loss: 2.4241
Epoch 9/30
999/1000 [============================>.] - ETA: 0s - loss: 2.1710Epoch 1/30
531/1000 [==============>...............] - ETA: 42s - loss: 2.3583
Epoch 00009: val_loss improved from 2.36203 to 2.35828, saving model to ssd7_polynomial_20pto80p_quantized_epoch-09_loss-2.1709_val_loss-2.3583.h5
1000/1000 [==============================] - 230s 230ms/step - loss: 2.1709 - val_loss: 2.3583
Epoch 10/30
999/1000 [============================>.] - ETA: 0s - loss: 2.1793Epoch 1/30
531/1000 [==============>...............] - ETA: 42s - loss: 2.3509
Epoch 00010: val_loss improved from 2.35828 to 2.35090, saving model to ssd7_polynomial_20pto80p_quantized_epoch-10_loss-2.1789_val_loss-2.3509.h5
1000/1000 [==============================] - 229s 229ms/step - loss: 2.1789 - val_loss: 2.3509
Epoch 11/30
999/1000 [============================>.] - ETA: 0s - loss: 2.1671Epoch 1/30
531/1000 [==============>...............] - ETA: 41s - loss: 2.3540
Epoch 00011: val_loss did not improve from 2.35090
1000/1000 [==============================] - 228s 228ms/step - loss: 2.1669 - val_loss: 2.3540
Epoch 12/30
999/1000 [============================>.] - ETA: 0s - loss: 2.1773Epoch 1/30
531/1000 [==============>...............] - ETA: 41s - loss: 2.4583
Epoch 00012: val_loss did not improve from 2.35090
1000/1000 [==============================] - 229s 229ms/step - loss: 2.1776 - val_loss: 2.4583
Epoch 13/30
999/1000 [============================>.] - ETA: 0s - loss: 2.1534Epoch 1/30
531/1000 [==============>...............] - ETA: 41s - loss: 2.3478
Epoch 00013: val_loss improved from 2.35090 to 2.34785, saving model to ssd7_polynomial_20pto80p_quantized_epoch-13_loss-2.1533_val_loss-2.3478.h5
1000/1000 [==============================] - 230s 230ms/step - loss: 2.1533 - val_loss: 2.3478
Epoch 14/30
999/1000 [============================>.] - ETA: 0s - loss: 2.1743Epoch 1/30
531/1000 [==============>...............] - ETA: 41s - loss: 2.7382
Epoch 00014: val_loss did not improve from 2.34785
1000/1000 [==============================] - 228s 228ms/step - loss: 2.1742 - val_loss: 2.7382
Epoch 15/30
999/1000 [============================>.] - ETA: 0s - loss: 2.1741Epoch 1/30
531/1000 [==============>...............] - ETA: 42s - loss: 2.5864
Epoch 00015: val_loss did not improve from 2.34785
1000/1000 [==============================] - 230s 230ms/step - loss: 2.1739 - val_loss: 2.5864
Epoch 16/30
999/1000 [============================>.] - ETA: 0s - loss: 2.1731Epoch 1/30
531/1000 [==============>...............] - ETA: 42s - loss: 2.4839
Epoch 00016: val_loss did not improve from 2.34785
1000/1000 [==============================] - 230s 230ms/step - loss: 2.1727 - val_loss: 2.4839
Epoch 17/30
999/1000 [============================>.] - ETA: 0s - loss: 2.1688Epoch 1/30
531/1000 [==============>...............] - ETA: 42s - loss: 2.7611
Epoch 00017: val_loss did not improve from 2.34785
1000/1000 [==============================] - 231s 231ms/step - loss: 2.1686 - val_loss: 2.7611
Epoch 18/30
999/1000 [============================>.] - ETA: 0s - loss: 2.1496Epoch 1/30
531/1000 [==============>...............] - ETA: 41s - loss: 2.5321
Epoch 00018: val_loss did not improve from 2.34785
1000/1000 [==============================] - 230s 230ms/step - loss: 2.1496 - val_loss: 2.5321
Epoch 19/30
999/1000 [============================>.] - ETA: 0s - loss: 2.1511Epoch 1/30
531/1000 [==============>...............] - ETA: 42s - loss: 2.4713
Epoch 00019: val_loss did not improve from 2.34785
1000/1000 [==============================] - 232s 232ms/step - loss: 2.1511 - val_loss: 2.4713
Epoch 20/30
999/1000 [============================>.] - ETA: 0s - loss: 2.1554Epoch 1/30
531/1000 [==============>...............] - ETA: 42s - loss: 2.4410
Epoch 00020: val_loss did not improve from 2.34785
1000/1000 [==============================] - 230s 230ms/step - loss: 2.1560 - val_loss: 2.4410
Epoch 21/30
999/1000 [============================>.] - ETA: 0s - loss: 2.1517Epoch 1/30
531/1000 [==============>...............] - ETA: 42s - loss: 2.3860
Epoch 00021: val_loss did not improve from 2.34785
Epoch 00021: ReduceLROnPlateau reducing learning rate to 0.00020000000949949026.
1000/1000 [==============================] - 232s 232ms/step - loss: 2.1526 - val_loss: 2.3860
Epoch 22/30
999/1000 [============================>.] - ETA: 0s - loss: 2.0313Epoch 1/30
531/1000 [==============>...............] - ETA: 41s - loss: 2.2533
Epoch 00022: val_loss improved from 2.34785 to 2.25325, saving model to ssd7_polynomial_20pto80p_quantized_epoch-22_loss-2.0314_val_loss-2.2533.h5
1000/1000 [==============================] - 233s 233ms/step - loss: 2.0314 - val_loss: 2.2533
Epoch 23/30
999/1000 [============================>.] - ETA: 0s - loss: 1.9699Epoch 1/30
531/1000 [==============>...............] - ETA: 41s - loss: 2.2014
Epoch 00023: val_loss improved from 2.25325 to 2.20137, saving model to ssd7_polynomial_20pto80p_quantized_epoch-23_loss-1.9699_val_loss-2.2014.h5
1000/1000 [==============================] - 229s 229ms/step - loss: 1.9699 - val_loss: 2.2014
Epoch 24/30
999/1000 [============================>.] - ETA: 0s - loss: 1.9846Epoch 1/30
531/1000 [==============>...............] - ETA: 42s - loss: 2.1706
Epoch 00024: val_loss improved from 2.20137 to 2.17055, saving model to ssd7_polynomial_20pto80p_quantized_epoch-24_loss-1.9841_val_loss-2.1706.h5
1000/1000 [==============================] - 231s 231ms/step - loss: 1.9841 - val_loss: 2.1706
Epoch 25/30
999/1000 [============================>.] - ETA: 0s - loss: 1.9722Epoch 1/30
531/1000 [==============>...............] - ETA: 41s - loss: 2.1446
Epoch 00025: val_loss improved from 2.17055 to 2.14459, saving model to ssd7_polynomial_20pto80p_quantized_epoch-25_loss-1.9719_val_loss-2.1446.h5
1000/1000 [==============================] - 229s 229ms/step - loss: 1.9719 - val_loss: 2.1446
Epoch 26/30
999/1000 [============================>.] - ETA: 0s - loss: 1.9507Epoch 1/30
531/1000 [==============>...............] - ETA: 41s - loss: 2.2291
Epoch 00026: val_loss did not improve from 2.14459
1000/1000 [==============================] - 228s 228ms/step - loss: 1.9507 - val_loss: 2.2291
Epoch 27/30
999/1000 [============================>.] - ETA: 0s - loss: 1.9432Epoch 1/30
531/1000 [==============>...............] - ETA: 41s - loss: 2.1202
Epoch 00027: val_loss improved from 2.14459 to 2.12023, saving model to ssd7_polynomial_20pto80p_quantized_epoch-27_loss-1.9438_val_loss-2.1202.h5
1000/1000 [==============================] - 229s 229ms/step - loss: 1.9438 - val_loss: 2.1202
Epoch 28/30
999/1000 [============================>.] - ETA: 0s - loss: 1.9380Epoch 1/30
531/1000 [==============>...............] - ETA: 42s - loss: 2.1343
Epoch 00028: val_loss did not improve from 2.12023
1000/1000 [==============================] - 229s 229ms/step - loss: 1.9386 - val_loss: 2.1343
Epoch 29/30
999/1000 [============================>.] - ETA: 0s - loss: 1.9324Epoch 1/30
531/1000 [==============>...............] - ETA: 41s - loss: 2.1254
Epoch 00029: val_loss did not improve from 2.12023
1000/1000 [==============================] - 228s 228ms/step - loss: 1.9329 - val_loss: 2.1254
Epoch 30/30
999/1000 [============================>.] - ETA: 0s - loss: 1.9212Epoch 1/30
531/1000 [==============>...............] - ETA: 41s - loss: 2.1373
Epoch 00030: val_loss did not improve from 2.12023
1000/1000 [==============================] - 229s 229ms/step - loss: 1.9210 - val_loss: 2.1373
###Markdown
Let's look at how the training and validation loss evolved to check whether our training is going in the right direction:
###Code
plt.figure(figsize=(20,12))
plt.plot(history_quant.history['loss'], label='loss')
plt.plot(history_quant.history['val_loss'], label='val_loss')
plt.legend(loc='upper right', prop={'size': 24});
#tf.saved_model.save(model, './saved_finroc_model/')
print('Inputs --> ', quant_aware_model.inputs)
print('Outputs --> ', quant_aware_model.outputs)
###Output
_____no_output_____
###Markdown
5.1 Saving quantized model
###Code
model.save('quantized_model_from_trained_base_model_1601.h5', include_optimizer=True)
###Output
_____no_output_____
###Markdown
The validation loss has been decreasing at a similar pace as the training loss, indicating that our model has been learning effectively over the last 30 epochs. We could try to train longer and see if the validation loss can be decreased further. Once the validation loss stops decreasing for a couple of epochs in a row, that's when we will want to stop training. Our final weights will then be the weights of the epoch that had the lowest validation loss. 6. Introducing Pruning
###Code
#'''
# Helper function uses `prune_low_magnitude` to make only the
# Conv2D layers train with pruning.
import tensorflow as tf
from tensorflow_model_optimization.python.core.sparsity.keras import pruning_schedule as pruning_sched
#end_step = 30000
'''
def apply_pruning_to_dense(layer):
if isinstance(layer, tf.keras.layers.Conv2D):
return tfmot.sparsity.keras.prune_low_magnitude(layer,pruning_schedule=pruning_sched.ConstantSparsity(0.5, 0))
return layer
'''
#'''
def apply_pruning_to_dense2(layer):
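    # Note: despite the "dense" in its name, this helper wraps Conv2D layers with
    # prune_low_magnitude, using a polynomial-decay sparsity schedule (20% -> 80%).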
if isinstance(layer, tf.keras.layers.Conv2D):
return tfmot.sparsity.keras.prune_low_magnitude(layer,pruning_schedule=pruning_sched.PolynomialDecay(initial_sparsity=0.2,
final_sparsity=0.8, begin_step=0, end_step=30000))
return layer
#'''
# Use `tf.keras.models.clone_model` to apply `apply_pruning_to_dense`
# to the layers of the model.
model_for_pruning = tf.keras.models.clone_model(
model,
clone_function=apply_pruning_to_dense2,
)
'''
# Defining pruning parameters
#pruning_params = {
# 'pruning_schedule': tfmot.sparsity.keras.PolynomialDecay(initial_sparsity=0.50,
# final_sparsity=0.80,
# begin_step=0,
# end_step=1000)
#}
'''
####model_for_pruning = tfmot.sparsity.keras.prune_low_magnitude(model_for_pruning, **pruning_params)
model_for_pruning.compile(optimizer=adam, loss=ssd_loss.compute_loss)
#model_for_pruning.compile(optimizer=adam, loss='sparse_categorical_crossentropy')
model_for_pruning.summary()
#'''
#'''
# Alternative: apply `prune_low_magnitude` to the entire model at once,
# using a polynomial-decay sparsity schedule (50% -> 80%).
import tensorflow as tf
from tensorflow_model_optimization.python.core.sparsity.keras import pruning_schedule as pruning_sched
# Compute end step to finish pruning after 2 epochs.
batch_size = 8
epochs = 2
validation_split = 0.1 # 10% of training set will be used for validation set.
num_images = 40000
end_step = 800
#'''
#Defining pruning parameters
pruning_params = {
'pruning_schedule': tfmot.sparsity.keras.PolynomialDecay(initial_sparsity=0.50,
final_sparsity=0.80,
begin_step=0,
end_step=1000)
}
#'''
model_for_pruning = tfmot.sparsity.keras.prune_low_magnitude(model, **pruning_params)
model_for_pruning.compile(optimizer=adam, loss=ssd_loss.compute_loss)
#model_for_pruning.compile(optimizer=adam, loss='sparse_categorical_crossentropy')
model_for_pruning.summary()
#'''
###Output
_____no_output_____
###Markdown
7. Set the remaining training parameters and train the pruned-model
###Code
#'''
# Define model callbacks.
# TODO: Set the filepath under which you want to save the weights.
import tensorflow as tf
'''
'''
#model_checkpoint = ModelCheckpoint(filepath='ssd7_epoch-{epoch:02d}_loss-{loss:.4f}_val_loss-{val_loss:.4f}.h5',
# monitor='val_loss',
# verbose=1,
# save_best_only=True,
# save_weights_only=False,
# mode='auto',
# period=1)
#'''
#'''
csv_logger = CSVLogger(filename='ssd7_training_base_polynomialdecay_pruned_20p_80p_2202.csv',
separator=',',
append=True)
early_stopping = EarlyStopping(monitor='val_loss',
min_delta=0.0,
patience=10,
verbose=1)
reduce_learning_rate = ReduceLROnPlateau(monitor='val_loss',
factor=0.2,
patience=8,
verbose=1,
epsilon=0.001,
cooldown=0,
min_lr=0.00001)
#'''
#'''
#callbacks = [model_checkpoint,
# csv_logger,
# early_stopping,
# reduce_learning_rate]
#'''
#logdir = tempfile.mkdtemp()
#'''
logdir='pruning_summaries/'
filepath = 'ssd7_base_polynomialdecay_pruned_20p_80p_2202.{epoch:02d}-{val_loss:.2f}.h5'
checkpoint = ModelCheckpoint(filepath, monitor='val_loss', verbose=1, mode='auto', period=1, save_best_only=True, save_weights_only=False)
callbacks_pruning = [
tfmot.sparsity.keras.UpdatePruningStep(),
tf.keras.callbacks.TensorBoard(log_dir=logdir, profile_batch = 100000000, histogram_freq=0, batch_size=32, write_graph=True, write_grads=False, write_images=False, embeddings_freq=0, embeddings_layer_names=None, embeddings_metadata=None, embeddings_data=None, update_freq='epoch'),
tfmot.sparsity.keras.PruningSummaries(log_dir=logdir, update_freq='epoch'),
checkpoint,
csv_logger,
early_stopping,
reduce_learning_rate
]
print("Done")
#'''
###Output
_____no_output_____
###Markdown
8. Train Pruned Model
###Code
#'''
initial_epoch = 0
final_epoch = 30
steps_per_epoch = 1000
print(model_for_pruning.optimizer)
history2 = model_for_pruning.fit_generator(generator=train_generator,
steps_per_epoch=steps_per_epoch,
validation_data=val_generator,
epochs=final_epoch,
validation_steps=ceil(val_dataset_size/batch_size),
callbacks=callbacks_pruning,
initial_epoch=initial_epoch)
#'''
#use_multiprocessing=False
#tf.saved_model.save(model, './saved_finroc_model/')
print('Inputs --> ', model_for_pruning.inputs)
print('Outputs --> ', model_for_pruning.outputs)
model_for_pruning.summary()
###Output
_____no_output_____
###Markdown
8.1 Save pruned model
###Code
model_for_pruning.save('pruned_model_1802.h5', include_optimizer=True)
###Output
_____no_output_____
###Markdown
8.2 Save stripped pruned model
###Code
model_for_export = tfmot.sparsity.keras.strip_pruning(model_for_pruning)
model_for_export.save('striped_pruned_polynomial_20to80p_from_base_model_2202.h5', include_optimizer=True)
###Output
_____no_output_____
###Markdown
8.3 Plotting pruned model
###Code
#'''
plt.figure(figsize=(20,12))
plt.plot(history2.history['loss'], label='loss')
plt.plot(history2.history['val_loss'], label='val_loss')
plt.legend(loc='upper right', prop={'size': 24});
#'''
###Output
_____no_output_____
###Markdown
9. Make predictionsNow let's make some predictions on the validation dataset with the trained model. For convenience we'll use the validation generator which we've already set up above. Feel free to change the batch size.You can set the `shuffle` option to `False` if you would like to check the model's progress on the same image(s) over the course of the training.
###Code
# 1: Set the generator for the predictions.
predict_generator = val_dataset.generate(batch_size=10,
shuffle=True,
transformations=[],
label_encoder=None,
returns={'processed_images',
'processed_labels',
'filenames'},
keep_images_without_gt=False)
# 2: Generate samples
batch_images, batch_labels, batch_filenames = next(predict_generator)
i = 9 # Which batch item to look at
print("Image:", batch_filenames[i])
print()
print("Ground truth boxes:\n")
print(batch_labels[i])
model.summary()
# 3: Make a prediction
y_pred = model_for_pruning.predict(batch_images)
print(y_pred)
###Output
_____no_output_____
###Markdown
Now let's decode the raw predictions in `y_pred`.Had we created the model in 'inference' or 'inference_fast' mode, then the model's final layer would be a `DecodeDetections` layer and `y_pred` would already contain the decoded predictions, but since we created the model in 'training' mode, the model outputs raw predictions that still need to be decoded and filtered. This is what the `decode_detections()` function is for. It does exactly what the `DecodeDetections` layer would do, but using Numpy instead of TensorFlow (i.e. on the CPU instead of the GPU).`decode_detections()` with default argument values follows the procedure of the original SSD implementation: First, a very low confidence threshold of 0.01 is applied to filter out the majority of the predicted boxes, then greedy non-maximum suppression is performed per class with an intersection-over-union threshold of 0.45, and out of what is left after that, the top 200 highest confidence boxes are returned. Those settings are for precision-recall scoring purposes though. In order to get some usable final predictions, we'll set the confidence threshold much higher, e.g. to 0.5, since we're only interested in the very confident predictions.
###Code
# 4: Decode the raw prediction `y_pred`
y_pred_decoded = decode_detections(y_pred,
confidence_thresh=0.5,
iou_threshold=0.45,
top_k=200,
normalize_coords=normalize_coords,
img_height=img_height,
img_width=img_width)
np.set_printoptions(precision=2, suppress=True, linewidth=90)
print("Predicted boxes:\n")
print(' class conf xmin ymin xmax ymax')
print(y_pred_decoded[i])
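# For illustration only: a minimal NumPy sketch of the greedy per-class NMS step
# that decode_detections() performs internally, as described above. This is not
# the library implementation; boxes are assumed to be [xmin, ymin, xmax, ymax].
import numpy as np
def greedy_nms_sketch(boxes, scores, iou_threshold=0.45):
    order = np.argsort(scores)[::-1]  # indices sorted by descending confidence
    keep = []
    while order.size > 0:
        best = order[0]
        keep.append(best)
        rest = order[1:]
        # intersection of the kept box with all remaining boxes
        ixmin = np.maximum(boxes[best, 0], boxes[rest, 0])
        iymin = np.maximum(boxes[best, 1], boxes[rest, 1])
        ixmax = np.minimum(boxes[best, 2], boxes[rest, 2])
        iymax = np.minimum(boxes[best, 3], boxes[rest, 3])
        inter = np.clip(ixmax - ixmin, 0, None) * np.clip(iymax - iymin, 0, None)
        area_best = (boxes[best, 2] - boxes[best, 0]) * (boxes[best, 3] - boxes[best, 1])
        area_rest = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_best + area_rest - inter)
        order = rest[iou <= iou_threshold]  # keep only boxes that barely overlap it
    return keep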
###Output
_____no_output_____
###Markdown
Finally, let's draw the predicted boxes onto the image. Each predicted box says its confidence next to the category name. The ground truth boxes are also drawn onto the image in green for comparison.
###Code
# 5: Draw the predicted boxes onto the image
plt.figure(figsize=(20,12))
plt.imshow(batch_images[i])
current_axis = plt.gca()
colors = plt.cm.hsv(np.linspace(0, 1, n_classes+1)).tolist() # Set the colors for the bounding boxes
classes = ['background', 'car', 'truck', 'pedestrian', 'bicyclist', 'light'] # Just so we can print class names onto the image instead of IDs
# Draw the ground truth boxes in green (omit the label for more clarity)
for box in batch_labels[i]:
xmin = box[1]
ymin = box[2]
xmax = box[3]
ymax = box[4]
label = '{}'.format(classes[int(box[0])])
current_axis.add_patch(plt.Rectangle((xmin, ymin), xmax-xmin, ymax-ymin, color='green', fill=False, linewidth=2))
#current_axis.text(xmin, ymin, label, size='x-large', color='white', bbox={'facecolor':'green', 'alpha':1.0})
# Draw the predicted boxes in blue
for box in y_pred_decoded[i]:
xmin = box[-4]
ymin = box[-3]
xmax = box[-2]
ymax = box[-1]
color = colors[int(box[0])]
label = '{}: {:.2f}'.format(classes[int(box[0])], box[1])
current_axis.add_patch(plt.Rectangle((xmin, ymin), xmax-xmin, ymax-ymin, color=color, fill=False, linewidth=2))
current_axis.text(xmin, ymin, label, size='x-large', color='white', bbox={'facecolor':color, 'alpha':1.0})
###Output
_____no_output_____
###Markdown
9.1 Multiple Predictions using loops
###Code
#'''
# 1: Set the generator for the predictions.
predict_generator = val_dataset.generate(batch_size=300,
shuffle=True,
transformations=[],
label_encoder=None,
returns={'processed_images',
'processed_labels',
'filenames'},
keep_images_without_gt=False)
#'''
%%time
#'''
for x in range(100):
# 2: Generate samples
batch_images, batch_labels, batch_filenames = next(predict_generator)
#i = 0 # Which batch item to look at
#print("Image:", batch_filenames[i])
#print()
#print("Ground truth boxes:\n")
#print(batch_labels[i])
y_pred = quant_aware_model.predict(batch_images)
#print(batch_images)
%%time
#'''
for x in range(5):
# 2: Generate samples
batch_images, batch_labels, batch_filenames = next(predict_generator)
i = 0 # Which batch item to look at
print("Image:", batch_filenames[i])
print()
print("Ground truth boxes:\n")
print(batch_labels[i])
y_pred = quant_aware_model.predict(batch_images)
# 4: Decode the raw prediction `y_pred`
y_pred_decoded = decode_detections(y_pred,
confidence_thresh=0.5,
iou_threshold=0.45,
top_k=200,
normalize_coords=normalize_coords,
img_height=img_height,
img_width=img_width)
np.set_printoptions(precision=2, suppress=True, linewidth=90)
print("Predicted boxes:\n")
print(' class conf xmin ymin xmax ymax')
print(y_pred_decoded[i])
# 5: Draw the predicted boxes onto the image
plt.figure(figsize=(20,12))
plt.imshow(batch_images[i])
current_axis = plt.gca()
colors = plt.cm.hsv(np.linspace(0, 1, n_classes+1)).tolist() # Set the colors for the bounding boxes
classes = ['background', 'car', 'truck', 'pedestrian', 'bicyclist', 'light'] # Just so we can print class names onto the image instead of IDs
# Draw the ground truth boxes in green (omit the label for more clarity)
for box in batch_labels[i]:
xmin = box[1]
ymin = box[2]
xmax = box[3]
ymax = box[4]
label = '{}'.format(classes[int(box[0])])
current_axis.add_patch(plt.Rectangle((xmin, ymin), xmax-xmin, ymax-ymin, color='green', fill=False, linewidth=2))
#current_axis.text(xmin, ymin, label, size='x-large', color='white', bbox={'facecolor':'green', 'alpha':1.0})
# Draw the predicted boxes in blue
for box in y_pred_decoded[i]:
xmin = box[-4]
ymin = box[-3]
xmax = box[-2]
ymax = box[-1]
color = colors[int(box[0])]
label = '{}: {:.2f}'.format(classes[int(box[0])], box[1])
current_axis.add_patch(plt.Rectangle((xmin, ymin), xmax-xmin, ymax-ymin, color=color, fill=False, linewidth=2))
current_axis.text(xmin, ymin, label, size='x-large', color='white', bbox={'facecolor':color, 'alpha':1.0})
#'''
###Output
_____no_output_____
###Markdown
Quantization Applying Quantization aware training
###Code
#adam = Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0)
#ssd_loss = SSDLoss(neg_pos_ratio=3, alpha=1.0)
quantize_scope = tfmot.quantization.keras.quantize_scope
def apply_quantization_to_Conv2D(layer):
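    # Note: despite the function name, only Reshape layers are annotated for
    # quantization here; every other layer (including Conv2D) is returned unchanged.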
#print(layer)
if isinstance(layer, tf.keras.layers.Reshape):
return tfmot.quantization.keras.quantize_annotate_layer(layer)
return layer
# Use `tf.keras.models.clone_model` to apply `apply_quantization_to_dense`
# to the layers of the model.
annotated_model = tf.keras.models.clone_model(
model,
clone_function=apply_quantization_to_Conv2D,
)
#annotated_model.save('quantize_ready_model_20_01_Conv2D.h5', include_optimizer=True)
annotated_model.summary()
# Now that the Dense layers are annotated,
# `quantize_apply` actually makes the model quantization aware.
with quantize_scope(
{'DefaultDenseQuantizeConfig': DefaultDenseQuantizeConfig,
'AnchorBoxes': AnchorBoxes}):
# Use `quantize_apply` to actually make the model quantization aware.
quant_aware_model = tfmot.quantization.keras.quantize_apply(annotated_model)
#quant_aware_model = tfmot.quantization.keras.quantize_apply(annotated_model)
#quant_aware_model.compile(optimizer=adam, loss=ssd_loss.compute_loss)
#quant_aware_model.summary()
quant_aware_model.summary()
model_for_pruning.summary()
for i, layer in enumerate(model_for_pruning.layers):
#print(model_for_pruning.layers)
if(layer._name == 'prune_low_magnitude_boxes4'):
layer._name='boxes4'
###Output
_____no_output_____ |
docs/user_guide/elements/preprocessing.ipynb | ###Markdown
Data preprocessingSometimes we need to preprocess data. `momepy` has a set of tools to edit given geometry to make it fit for morphological analysis.This guide introduces a selection of tools to preprocess the street network and eliminate unwanted gaps in the network and fix its topology.
###Code
import momepy
import geopandas as gpd
from shapely.geometry import LineString
###Output
_____no_output_____
###Markdown
Close gapsThe first issue which may happen is the network which is disconnected. The endpoints do not match. Such a network would result in incorrect results of any graph-based analysis. `momepy.close_gaps` can fix the issue by snapping nearby endpoints to a midpoint between the two.
###Code
l1 = LineString([(1, 0), (2, 1)])
l2 = LineString([(2.1, 1), (3, 2)])
l3 = LineString([(3.1, 2), (4, 0)])
l4 = LineString([(4.1, 0), (5, 0)])
l5 = LineString([(5.1, 0), (6, 0)])
df = gpd.GeoDataFrame(['a', 'b', 'c', 'd', 'e'], geometry=[l1, l2, l3, l4, l5])
df.plot(figsize=(10, 10)).set_axis_off()
###Output
_____no_output_____
###Markdown
All LineStrings above need to be fixed.
###Code
df.geometry = momepy.close_gaps(df, .25)
df.plot(figsize=(10, 10)).set_axis_off()
###Output
_____no_output_____
###Markdown
Now we can compare how the fixed network looks compared to the original one.
###Code
ax = df.plot(alpha=.5, figsize=(10, 10))
gpd.GeoDataFrame(geometry=[l1, l2, l3, l4, l5]).plot(ax=ax, color='r', alpha=.5)
ax.set_axis_off()
###Output
_____no_output_____
###Markdown
Remove false nodesA very common issue is incorrect topology. LineString should end either at road intersections or in dead-ends. However, we often see geometry split randomly along the way. `momepy.remove_false_nodes` can fix that.We will use `mapclassify.greedy` to highlight each segment.
###Code
from mapclassify import greedy
df = gpd.read_file(momepy.datasets.get_path('tests'), layer='broken_network')
df.plot(greedy(df), categorical=True, figsize=(10, 10), cmap="Set3").set_axis_off()
###Output
_____no_output_____
###Markdown
You can see that the topology of the network above is not as it should be.For a reference, let's check how many geometries we have now:
###Code
len(df)
###Output
_____no_output_____
###Markdown
Okay, 83 is a starting value. Now let's remove false nodes.
###Code
fixed = momepy.remove_false_nodes(df)
fixed.plot(greedy(fixed), categorical=True, figsize=(10, 10), cmap="Set3").set_axis_off()
###Output
_____no_output_____
###Markdown
From the figure above, it is clear that the network is now topologically correct. How many features are there now?
###Code
len(fixed)
###Output
_____no_output_____
###Markdown
We have been able to represent the same network with 27 fewer features. Extend linesIn some cases, like in the generation of [enclosures](enclosed.ipynb), we may want to close some gaps by extending existing LineStrings until they meet other geometry.
###Code
l1 = LineString([(0, 0), (2, 0)])
l2 = LineString([(2.1, -1), (2.1, 1)])
l3 = LineString([(3.1, 2), (4, 0.1)])
l4 = LineString([(3.5, 0), (5, 0)])
l5 = LineString([(2.2, 0), (3.5, 1)])
df = gpd.GeoDataFrame(['a', 'b', 'c', 'd', 'e'], geometry=[l1, l2, l3, l4, l5])
df.plot(figsize=(10, 10)).set_axis_off()
###Output
_____no_output_____
###Markdown
The situation above is typical. The network is almost connected, but there are gaps. Let's extend geometries and close them. Note that we cannot use `momepy.close_gaps` in this situation as we are not snapping endpoints to endpoints.
###Code
extended = momepy.extend_lines(df, tolerance=.2)
extended.plot(figsize=(10, 10)).set_axis_off()
ax = extended.plot(figsize=(10, 10), color='r')
df.plot(ax=ax)
ax.set_axis_off()
###Output
_____no_output_____ |
6_google_trace/VMFuzzyPrediction/KerasTutorial.ipynb | ###Markdown
LSTM neural network
###Code
from keras.layers import LSTM
batch_size = 10
time_steps = 1
Xtrain = np.reshape(X_train, (X_train.shape[0], time_steps, n_sliding_window))
ytrain = np.reshape(y_train, (y_train.shape[0], time_steps, y_train.shape[1]))
Xtest = np.reshape(X_test, (X_test.shape[0], time_steps, n_sliding_window))
model = Sequential()
model.add(LSTM(6,batch_input_shape=(batch_size,time_steps,n_sliding_window),stateful=True,activation='relu'))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adagrad')
history = model.fit(Xtrain[:2900],y_train[:2900], nb_epoch=6000,batch_size=batch_size,shuffle=False, verbose=0,
validation_data=(Xtest[:len_test],y_test[:len_test]))
log = history.history
df = pd.DataFrame.from_dict(log)
%matplotlib
df.plot(kind='line')
len_test = 1200
y_pred = model.predict(Xtest[:len_test],batch_size=batch_size)
# mean_absolute_error(y_pred,y_test[:len_test])
y_pred
%matplotlib
plot_figure(y_pred=scaler.inverse_transform(y_pred), y_true=scaler.inverse_transform(y_test))
results = []
results.append({'score':mean_absolute_error(y_pred,y_test[:len_test]),'y_pred':y_pred})
pd.DataFrame.from_dict(results).to_csv("lstm_result.csv",index=None)
###Output
_____no_output_____ |
Deep Learning/NLP Tutorials/.ipynb_checkpoints/vanilla_lstm_in_keras-checkpoint.ipynb | ###Markdown
RNN classifier in keras Load dependencies
###Code
import keras
from keras.datasets import imdb
from keras.preprocessing.sequence import pad_sequences
from keras.models import Sequential
from keras.layers import Dense, Flatten, Dropout
from keras.layers import Embedding # new!
from keras.layers import SpatialDropout1D, SimpleRNN
from keras.callbacks import ModelCheckpoint # new!
import os # new!
from sklearn.metrics import roc_auc_score, roc_curve # new!
import matplotlib.pyplot as plt # new!
%matplotlib inline
# output directory name:
output_dir = 'model_output/rnn'
# training:
epochs = 10
batch_size = 128
# vector-space embedding:
n_dim = 64
n_unique_words = 10000
max_review_length = 100
pad_type = trunc_type = 'pre'
drop_embed = 0.2
# neural network architecture:
n_rnn = 256
dropout_rnn = 0.2
###Output
_____no_output_____
###Markdown
Load data For a given data set: * the Keras text utilities [here](https://keras.io/preprocessing/text/) quickly preprocess natural language and convert it into an index* the `keras.preprocessing.text.Tokenizer` class may do everything you need in one line: * tokenize into words or characters * `num_words`: maximum unique tokens * filter out punctuation * lower case * convert words to an integer index
###Code
(x_train, y_train), (x_valid, y_valid) = imdb.load_data(num_words=n_unique_words)
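# The IMDB data above already comes tokenized as integer word indices. For raw text,
# a sketch of the Tokenizer workflow described in the cell above might look like this
# (the `texts` list below is a made-up placeholder, not part of this dataset):
from keras.preprocessing.text import Tokenizer
texts = ['A great movie!', 'A terrible movie...']
tokenizer = Tokenizer(num_words=n_unique_words)  # keep only the most frequent tokens
tokenizer.fit_on_texts(texts)                    # lowercases, strips punctuation, builds the word index
sequences = tokenizer.texts_to_sequences(texts)  # lists of integer word indices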
###Output
_____no_output_____
###Markdown
Preprocess data
###Code
x_train = pad_sequences(x_train, maxlen=max_review_length, padding=pad_type, truncating=trunc_type, value=0)
x_valid = pad_sequences(x_valid, maxlen=max_review_length, padding=pad_type, truncating=trunc_type, value=0)
x_train[:6]
for i in range(6):
    print(len(x_train[i]))
###Output
100
100
100
100
100
100
###Markdown
Design neural network architecture
###Code
model = Sequential()
model.add(Embedding(n_unique_words, n_dim, input_length=max_review_length))
model.add(SpatialDropout1D(drop_embed))
# model.add(Conv1D(n_conv, k_conv, activation='relu'))
# model.add(Conv1D(n_conv, k_conv, activation='relu'))
# model.add(GlobalMaxPooling1D())
# model.add(Dense(n_dense, activation='relu'))
# model.add(Dropout(dropout))
model.add(SimpleRNN(n_rnn, dropout=dropout_rnn))
model.add(Dense(1, activation='sigmoid'))
model.summary()
n_dim, n_unique_words, n_dim * n_unique_words
max_review_length, n_dim, n_dim * max_review_length
###Output
_____no_output_____
###Markdown
Configure Model
###Code
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
modelcheckpoint = ModelCheckpoint(filepath=output_dir+"/weights.{epoch:02d}.hdf5")
if not os.path.exists(output_dir):
os.makedirs(output_dir)
model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs,
verbose=1, validation_data=(x_valid, y_valid), callbacks=[modelcheckpoint])
###Output
Train on 25000 samples, validate on 25000 samples
Epoch 1/10
25000/25000 [==============================] - 69s 3ms/step - loss: 0.7029 - acc: 0.5167 - val_loss: 0.7237 - val_acc: 0.5010
Epoch 2/10
25000/25000 [==============================] - 64s 3ms/step - loss: 0.6845 - acc: 0.5523 - val_loss: 0.6752 - val_acc: 0.5582
Epoch 3/10
25000/25000 [==============================] - 58s 2ms/step - loss: 0.6368 - acc: 0.6280 - val_loss: 0.6303 - val_acc: 0.6282
Epoch 4/10
25000/25000 [==============================] - 64s 3ms/step - loss: 0.5184 - acc: 0.7480 - val_loss: 0.4637 - val_acc: 0.7940
Epoch 5/10
25000/25000 [==============================] - 59s 2ms/step - loss: 0.5110 - acc: 0.7455 - val_loss: 0.6605 - val_acc: 0.5921
Epoch 6/10
25000/25000 [==============================] - 49s 2ms/step - loss: 0.5287 - acc: 0.7298 - val_loss: 0.6208 - val_acc: 0.6564
Epoch 7/10
25000/25000 [==============================] - 46s 2ms/step - loss: 0.4689 - acc: 0.7763 - val_loss: 0.6448 - val_acc: 0.6993
Epoch 8/10
25000/25000 [==============================] - 54s 2ms/step - loss: 0.4519 - acc: 0.7858 - val_loss: 0.6519 - val_acc: 0.6202
Epoch 9/10
25000/25000 [==============================] - 64s 3ms/step - loss: 0.4085 - acc: 0.8197 - val_loss: 0.7274 - val_acc: 0.6858
Epoch 10/10
25000/25000 [==============================] - 47s 2ms/step - loss: 0.4238 - acc: 0.8090 - val_loss: 0.7091 - val_acc: 0.5610
###Markdown
Evaluate
###Code
model.load_weights(output_dir+"/weights.03.hdf5") # zero-indexed
y_hat = model.predict_proba(x_valid)
len(y_hat)
y_hat[0]
plt.hist(y_hat)
_ = plt.axvline(x=0.5, color='orange')
pct_auc = roc_auc_score(y_valid, y_hat)*100.0
"{:0.2f}".format(pct_auc)
###Output
_____no_output_____ |
siamese-network-one-shot-learning.ipynb | ###Markdown
Summary -> In this notebook, we'll learn how to perform Siamese Classification. Siamese Classification is a One-Shot Learning technique primarily used in Facial Recognition and Signature Verification. Data preparation for Siamese network learning is usually a challenge to deal with. Apart from that comes the network architecture, which has two inputs feeding into a shared network. Let's begin step by step. Necessary Imports
###Code
import tensorflow as tf
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Flatten, Dense, Dropout, Lambda
from tensorflow.keras.optimizers import RMSprop
from tensorflow.keras.datasets import fashion_mnist
from tensorflow.python.keras.utils.vis_utils import plot_model
from tensorflow.keras import backend as K
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image, ImageFont, ImageDraw
import random
from keras.layers import Input, Conv2D, Lambda, merge, Dense, Flatten, MaxPooling2D, Activation, Dropout
from keras.models import Model, Sequential
from keras.regularizers import l2
from keras import backend as K
from keras.optimizers import Adam
from skimage.io import imshow
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import random
###Output
_____no_output_____
###Markdown
Loading the Dataset Using an already processed dataset provided [here](https://www.kaggle.com/arpandhatt/package-data), we'll apply the preprocessing needed for training a Siamese Network Model.
###Code
npz = np.load('../input/package-data/input_data.npz')
X_train = npz['X_train']
Y_train = npz['Y_train']
del npz
print(f"We have {Y_train.shape[0] - 1000} examples to work with")
###Output
We have 3113 examples to work with
###Markdown
Developing the Siamese Network Model
###Code
left_input = Input((75,75,3))
right_input = Input((75,75,3))
convnet = Sequential([
Conv2D(5,3,input_shape = (75,75,3)),
Activation('relu'),
MaxPooling2D(),
Conv2D(5,3),
Activation('relu'),
MaxPooling2D(),
Conv2D(7,2),
Activation('relu'),
MaxPooling2D(),
Conv2D(7,2),
Activation('relu'),
Flatten(),
Dense(18),
Activation('sigmoid')
])
encoded_l = convnet(left_input)
encoded_r = convnet(right_input)
L1_layer = Lambda(lambda tensor : K.abs(tensor[0] - tensor[1]))
L1_distance = L1_layer([encoded_l, encoded_r])
prediction = Dense(1, activation = 'sigmoid')(L1_distance)
siamese_net = Model(inputs = [left_input, right_input], outputs = prediction)
optimizer = Adam(0.001, decay = 2.5e-4)
siamese_net.compile(loss = 'binary_crossentropy', optimizer = optimizer, metrics = ['accuracy'])
plot_model(siamese_net)
###Output
_____no_output_____
###Markdown
Preprocessing the Dataset We're taking 1000 samples into consideration for initial model training.
###Code
image_list = np.split(X_train[:1000], 1000)
label_list = np.split(Y_train[:1000], 1000)
###Output
_____no_output_____
###Markdown
Preparing the test dataset
###Code
iceimage = X_train[101]
test_left = []
test_right = []
test_targets = []
for i in range(Y_train.shape[0] - 1000):
test_left.append(iceimage)
test_right.append(X_train[i + 1000])
test_targets.append(Y_train[i + 1000])
test_left = np.squeeze(np.array(test_left))
test_right = np.squeeze(np.array(test_right))
test_targets = np.squeeze(np.array(test_targets))
###Output
_____no_output_____
###Markdown
For each image, 10 pairs are created of which 6 are same & 4 are different.
###Code
"""
Train Data Creation
"""
left_input = []
right_input = []
targets = []
# define the number of same pairs to be created per image
same = 6
# define the number of different pairs to be created per image
diff = 4
for i in range(len(image_list)):
# obtain the label of the ith image
label_i = int(label_list[i])
#print(label_i)
# get same pairs
print("-"* 10 ); print("Same")
# randomly select 'same' number of items for the images with the 'same' label as that of ith image.
print(random.sample(list(np.where(np.array(label_list) == label_i)[0]), same))
for j in random.sample(list(np.where(np.array(label_list) == label_i)[0]), same):
left_input.append(image_list[i])
right_input.append(image_list[j])
targets.append(1.)
# get diff pairs
print('='*10); print("Different")
    # randomly select 'diff' number of items from the images whose label differs from that of the ith image.
print(random.sample(list(np.where(np.array(label_list) != label_i)[0]), diff))
for j in random.sample(list(np.where(np.array(label_list) != label_i)[0]), diff):
left_input.append(image_list[i])
right_input.append(image_list[j])
targets.append(0.)
left_input = np.squeeze(np.array(left_input))
right_input = np.squeeze(np.array(right_input))
targets = np.squeeze(np.array(targets))
###Output
_____no_output_____
###Markdown
Begin Model Training
###Code
siamese_net.summary()
siamese_net.fit([left_input, right_input], targets, batch_size = 16, epochs = 30, verbose = 1,
validation_data = ([test_left, test_right], test_targets))
###Output
Model: "model"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_1 (InputLayer) [(None, 75, 75, 3)] 0
__________________________________________________________________________________________________
input_2 (InputLayer) [(None, 75, 75, 3)] 0
__________________________________________________________________________________________________
sequential (Sequential) (None, 18) 6912 input_1[0][0]
input_2[0][0]
__________________________________________________________________________________________________
lambda (Lambda) (None, 18) 0 sequential[0][0]
sequential[1][0]
__________________________________________________________________________________________________
dense_1 (Dense) (None, 1) 19 lambda[0][0]
==================================================================================================
Total params: 6,931
Trainable params: 6,931
Non-trainable params: 0
__________________________________________________________________________________________________
Epoch 1/30
625/625 [==============================] - 8s 7ms/step - loss: 0.6783 - accuracy: 0.6036 - val_loss: 0.7418 - val_accuracy: 0.5194
Epoch 2/30
625/625 [==============================] - 3s 5ms/step - loss: 0.6039 - accuracy: 0.7037 - val_loss: 0.6907 - val_accuracy: 0.6264
Epoch 3/30
625/625 [==============================] - 3s 5ms/step - loss: 0.5343 - accuracy: 0.7538 - val_loss: 0.7315 - val_accuracy: 0.5027
Epoch 4/30
625/625 [==============================] - 3s 5ms/step - loss: 0.4947 - accuracy: 0.7780 - val_loss: 0.6710 - val_accuracy: 0.6428
Epoch 5/30
625/625 [==============================] - 4s 6ms/step - loss: 0.4557 - accuracy: 0.7996 - val_loss: 0.5436 - val_accuracy: 0.7700
Epoch 6/30
625/625 [==============================] - 4s 6ms/step - loss: 0.4242 - accuracy: 0.8221 - val_loss: 0.5031 - val_accuracy: 0.7896
Epoch 7/30
625/625 [==============================] - 3s 5ms/step - loss: 0.3937 - accuracy: 0.8285 - val_loss: 0.6629 - val_accuracy: 0.6720
Epoch 8/30
625/625 [==============================] - 3s 5ms/step - loss: 0.3744 - accuracy: 0.8386 - val_loss: 0.5428 - val_accuracy: 0.7642
Epoch 9/30
625/625 [==============================] - 3s 6ms/step - loss: 0.3624 - accuracy: 0.8442 - val_loss: 0.5887 - val_accuracy: 0.7424
Epoch 10/30
625/625 [==============================] - 3s 5ms/step - loss: 0.3250 - accuracy: 0.8631 - val_loss: 0.4746 - val_accuracy: 0.8031
Epoch 11/30
625/625 [==============================] - 3s 5ms/step - loss: 0.3086 - accuracy: 0.8705 - val_loss: 0.5904 - val_accuracy: 0.7453
Epoch 12/30
625/625 [==============================] - 3s 5ms/step - loss: 0.2865 - accuracy: 0.8859 - val_loss: 0.5469 - val_accuracy: 0.7758
Epoch 13/30
625/625 [==============================] - 3s 5ms/step - loss: 0.2817 - accuracy: 0.8820 - val_loss: 0.4838 - val_accuracy: 0.7963
Epoch 14/30
625/625 [==============================] - 3s 5ms/step - loss: 0.2645 - accuracy: 0.8930 - val_loss: 0.6150 - val_accuracy: 0.7147
Epoch 15/30
625/625 [==============================] - 4s 6ms/step - loss: 0.2577 - accuracy: 0.8982 - val_loss: 0.4882 - val_accuracy: 0.8012
Epoch 16/30
625/625 [==============================] - 3s 5ms/step - loss: 0.2346 - accuracy: 0.9060 - val_loss: 0.5183 - val_accuracy: 0.7816
Epoch 17/30
625/625 [==============================] - 3s 5ms/step - loss: 0.2253 - accuracy: 0.9124 - val_loss: 0.4731 - val_accuracy: 0.8021
Epoch 18/30
625/625 [==============================] - 3s 5ms/step - loss: 0.2116 - accuracy: 0.9222 - val_loss: 0.4543 - val_accuracy: 0.8143
Epoch 19/30
625/625 [==============================] - 3s 5ms/step - loss: 0.1941 - accuracy: 0.9275 - val_loss: 0.4461 - val_accuracy: 0.8175
Epoch 20/30
625/625 [==============================] - 3s 5ms/step - loss: 0.1935 - accuracy: 0.9295 - val_loss: 0.4690 - val_accuracy: 0.8150
Epoch 21/30
625/625 [==============================] - 3s 5ms/step - loss: 0.1883 - accuracy: 0.9306 - val_loss: 0.4668 - val_accuracy: 0.8092
Epoch 22/30
625/625 [==============================] - 3s 5ms/step - loss: 0.1924 - accuracy: 0.9281 - val_loss: 0.4632 - val_accuracy: 0.8121
Epoch 23/30
625/625 [==============================] - 3s 5ms/step - loss: 0.1796 - accuracy: 0.9368 - val_loss: 0.4603 - val_accuracy: 0.8159
Epoch 24/30
625/625 [==============================] - 4s 6ms/step - loss: 0.1655 - accuracy: 0.9438 - val_loss: 0.4726 - val_accuracy: 0.8191
Epoch 25/30
625/625 [==============================] - 3s 6ms/step - loss: 0.1665 - accuracy: 0.9399 - val_loss: 0.4736 - val_accuracy: 0.8179
Epoch 26/30
625/625 [==============================] - 3s 6ms/step - loss: 0.1572 - accuracy: 0.9459 - val_loss: 0.4759 - val_accuracy: 0.8214
Epoch 27/30
625/625 [==============================] - 3s 5ms/step - loss: 0.1519 - accuracy: 0.9510 - val_loss: 0.4630 - val_accuracy: 0.8137
Epoch 28/30
625/625 [==============================] - 3s 5ms/step - loss: 0.1478 - accuracy: 0.9501 - val_loss: 0.4834 - val_accuracy: 0.8243
Epoch 29/30
625/625 [==============================] - 3s 6ms/step - loss: 0.1447 - accuracy: 0.9541 - val_loss: 0.4887 - val_accuracy: 0.8224
Epoch 30/30
625/625 [==============================] - 3s 5ms/step - loss: 0.1313 - accuracy: 0.9584 - val_loss: 0.5034 - val_accuracy: 0.8249
|
01_view_data.ipynb | ###Markdown
Views and describes the data, before much processing.
###Code
#default_exp surveyors
%load_ext autoreload
%autoreload 2
%matplotlib inline
%config Completer.use_jedi = False
from matplotlib import pyplot as plt
import seaborn as sns
import plotnine as pn
import pandas as pd
from mizani.formatters import date_format
import random
#export
import restaurants_timeseries.core as core
#export
class Logger:
def log(self, s):
print(s)
logger = Logger()
###Output
_____no_output_____
###Markdown
Process reservations.
###Code
def add_time_between(reservations):
reservations['time_between'] = (
(reservations['visit_datetime'] - reservations['reserve_datetime']))
reservations['total_seconds_between'] = reservations['time_between'].dt.seconds
reservations['days_between'] = reservations['time_between'].dt.days
reservations['hours_between'] = reservations['total_seconds_between'] / 3600
#export
class ReservationsSurveyor:
def __init__(self, reservations: pd.DataFrame):
self.reservations = reservations.copy()
self.reservations['visit_date'] = pd.to_datetime(self.reservations['visit_datetime'].dt.date)
add_time_between(self.reservations)
self.reservations = self.reservations[
['air_store_id', 'visit_date',
'total_seconds_between', 'days_between', 'hours_between',
'reserve_visitors']]
def get_daily_sums(self):
""" This wastes information! """
daily = (self.reservations
.groupby(['air_store_id', 'visit_date'])['reserve_visitors']
.sum()
.reset_index(name='reserve_visitors'))
return daily
rs = ReservationsSurveyor(core.data['reservations'])
daily_sums = rs.get_daily_sums()
print('Reservations raw data:')
display(core.data['reservations'].head())
print('\nReservations processed data:')
display(rs.reservations.head())
print('\nDaily sums (wastes information!):')
display(daily_sums.head())
###Output
Reservations raw data:
###Markdown
Process visits.
###Code
def get_earliest_date(dat):
return dat.query("visitors > 0")['visit_date'].min()
def get_latest_date(dat):
return dat.query("visitors > 0")['visit_date'].max()
#export
class VisitsSurveyor:
def __init__(self, visits: pd.DataFrame, make_report: bool):
"""
Does a bunch of processing and, if make_report is true, prints
a bunch of messages and dataframes and plots describing
the input dataframe.
"""
self.joined_already = False
self.make_report = make_report
self.visits = visits.copy()
if len(self.visits['air_store_id'].unique()) == 1:
logger.log(f"Input is a single store.")
self.visits.loc[:, 'day'] = range(self.visits.shape[0])
else:
logger.log(f"Input is multiple stores.")
self.visits.loc[:, 'day'] = (
self.visits
.sort_values(['air_store_id', 'visit_date'])
.groupby(['air_store_id'], sort=False)
.cumcount())
self._assign_never_visited()
self._filter_zero_periods()
self.count_visited_days()
self._get_populated_spans()
self.plot_spans()
self.visits = self.visits.sort_values(['air_store_id', 'visit_date'], ascending=True)
def attach_reservations(self, rs: ReservationsSurveyor) -> None:
"""
Attaches the daily reservation counts in `rs` to self.visits. Errors
if this function has already been called.
:note: In the future we should
avoid the brute daily sum, and use all the information in the reservations,
especially how far in advance each was made.
"""
if self.joined_already:
raise AssertionError(f"Appears attach_reservations was already called.")
logger.log("Attaching daily reservation sums.")
self.joined_already = True
pre_count = self.visits.shape[0]
self.visits = self.visits.merge(
rs.get_daily_sums(), how='left', on=['air_store_id', 'visit_date'])
post_count = self.visits.shape[0]
if pre_count != post_count:
raise AssertionError(
f"Had {pre_count} rows before join to reservations, but {post_count} after.")
def report(self, s: str) -> None:
if self.make_report:
if issubclass(type(s), core.pd.DataFrame):
display(s)
else:
print(s)
def _assign_never_visited(self) -> None:
"""
Saves the stores that were never visited, and
(if make_report is True) reports how many there were.
"""
self.store_counts = self.visits.groupby('air_store_id').visitors.sum()
self.never_visited = self.store_counts[self.store_counts == 0]
self.report("The visits data looks like this:")
self.report(self.visits.head())
self.report(" ")
self.report(f"There are {len(self.store_counts)} stores.")
self.report(f"{len(self.never_visited)} stores had no visits ever.")
def _filter_zero_periods(self) -> None:
"""
Removes records from self.visits if they fall outside the
dataset-wide earliest and latest days with positive visitors.
"""
self.daily_total_visits = (
self.visits
[['visit_date', 'visitors']]
.groupby('visit_date')
.sum()
.reset_index())
earliest_day = get_earliest_date(self.daily_total_visits)
latest_day = get_latest_date(self.daily_total_visits)
self.visits = self.visits[
(self.visits.visit_date >= earliest_day) &
(self.visits.visit_date <= latest_day)]
self.report(f"Populated data is from {earliest_day} to {latest_day}.")
def count_visited_days(self) -> None:
"""
        Computes the number of visited days per restaurant and stores it in
        self.visited_days_counts. If make_report is True, also plots a histogram
        describing the distribution of all-time visited-day counts per restaurant.
"""
self.visited_days_counts = (
self.visits[self.visits.visitors > 0].
groupby('air_store_id')['visitors'].
count().
reset_index(name='days_visited'))
if self.make_report:
hist = (
pn.ggplot(self.visited_days_counts, pn.aes(x='days_visited')) +
pn.geom_histogram(bins=60) +
pn.theme_bw() +
pn.labs(x = "days visited", y = "count restaurants") +
pn.theme(figure_size=(13, 3)))
self.report("")
self.report("Visited days per restaurant:")
display(hist)
    def _get_populated_spans(self) -> None:
        """
        Creates a dataframe field named `spans` inside this object.
        It contains the store id and the earliest and latest visit dates
        on which each restaurant had a positive visitor count.
        """
rows = []
for store_id, df in self.visits.groupby('air_store_id'):
earliest_day = get_earliest_date(df)
latest_day = get_latest_date(df)
row = (store_id, earliest_day, latest_day)
rows.append(row)
spans = core.pd.DataFrame(
rows, columns=['air_store_id', 'earliest_visit_date', 'latest_visit_date'])
spans['length'] = spans['latest_visit_date'] - spans['earliest_visit_date']
spans.sort_values('length', inplace=True)
spans['air_store_id'] = core.pd.Categorical(spans.air_store_id,
categories=core.pd.unique(spans.air_store_id))
self.spans = spans
def plot_spans(self):
"""
if make_report is False this function does nothing.
Displays a plot where there is one line per
restaurant. The length of the line is the number of days in the contiguous period
between
1) the earliest date the restaurant had a non-zero amount of visitors.
2) the latest date the restaurant had a non-zero amount of visitors.
"""
if self.make_report:
x = 'air_store_id'
spans_plot = (
pn.ggplot(self.spans, pn.aes(x=x, xend=x,
y='earliest_visit_date', yend='latest_visit_date')) +
pn.geom_segment(color='gray') +
pn.theme_bw() +
pn.scale_y_date(breaks="1 month", labels=date_format("%b %Y")) +
pn.theme(figure_size=(30, 4), axis_text_x=pn.element_blank(),
axis_ticks_minor_x=pn.element_blank(), axis_ticks_major_x=pn.element_blank(),
axis_text_y=pn.element_text(size=10),
panel_grid=pn.element_blank()))
# takes long, so comment out during development
#display(spans_plot)
vs = VisitsSurveyor(core.data['visits'], True)
N_MOST_OPEN = 10
N_LEAST_OPEN = 10
vs = VisitsSurveyor(core.data['visits'], False)
#export
random.seed(40)
most_open_stores = list(
vs.visited_days_counts
.sort_values(by='days_visited', ascending=False)
.air_store_id[0:N_MOST_OPEN])
least_open_stores = list(
vs.visited_days_counts
.sort_values(by='days_visited', ascending=True)
.air_store_id[0:N_LEAST_OPEN])
###Output
Input is multiple stores.
###Markdown
Describing visits and reservations together.
###Code
vs.attach_reservations(rs)
vs.visits[vs.visits['reserve_visitors'].isnull()]
from nbdev.export import *
notebook2script()
###Output
Converted 00_core.ipynb.
Converted 01_view_data.ipynb.
Converted 02_model_one.ipynb.
Converted index.ipynb.
|
demo_quick_start_notebook.ipynb | ###Markdown
DISCLAIMERCopyright 2021 Google LLC. *This solution, including any related sample code or data, is made available on an “as is,” “as available,” and “with all faults” basis, solely for illustrative purposes, and without warranty or representation of any kind. This solution is experimental, unsupported and provided solely for your convenience. Your use of it is subject to your agreements with Google, as applicable, and may constitute a beta feature as defined under those agreements. To the extent that you make any data available to Google in connection with your use of the solution, you represent and warrant that you have all necessary and appropriate rights, consents and permissions to permit Google to use and process that data. By using any portion of this solution, you acknowledge, assume and accept all risks, known and unknown, associated with its usage, including with respect to your deployment of any portion of this solution in your systems, or usage in connection with your business, if at all.* Crystalvalue Demo: Predictive Customer LifeTime Value using Synthetic DataThis demo runs the Crystalvalue python library in a notebook using generated synthetic data. This notebook assumes that it is being run from within a [Google Cloud Platform AI Notebook](https://console.cloud.google.com/vertex-ai/notebooks/list/instances) with a Compute Engine default service account (the default setting when an AI Notebook is created) and with a standard Python 3 environment. For more details on the library please see the readme or for a more in-depth guide, including scheduling predictions, please see the `demo_with_real_data_notebook.ipynb`.
###Code
import pandas as pd
from src import crystalvalue
# Initiate the CrystalValue class with the relevant parameters.
pipeline = crystalvalue.CrystalValue(
project_id='your_project_name', # Enter your GCP Project name.
dataset_id='a_dataset_name' # The dataset will be created if it doesn't exist.
)
# Create a synthetic dataset and load it to BigQuery.
data = pipeline.create_synthetic_data(table_name='synthetic_data')
# Create summary statistics of the data and load it to Bigquery.
summary_statistics = pipeline.run_data_checks(transaction_table_name='synthetic_data')
# Feature engineering for model training with test/train/validation split.
crystalvalue_train_data = pipeline.feature_engineer(
transaction_table_name='synthetic_data')
# Train an AutoML model.
model_object = pipeline.train_automl_model()
# Deploy the model.
model_object = pipeline.deploy_model()
# Evaluate the model.
metrics = pipeline.evaluate_model()
# Create features for prediction.
crystalvalue_predict_data = pipeline.feature_engineer(
transaction_table_name='synthetic_data',
query_type='predict_query')
# Predict LTV for all customers.
predictions = pipeline.predict(
input_table=crystalvalue_predict_data,
destination_table='crystalvalue_predictions' # The table will be created if it doesn't exist.
)
###Output
_____no_output_____
###Markdown
Clean Up To clean up tables created during this demo, delete the BigQuery tables that were created. All Vertex AI resources can be removed from the [Vertex AI console](https://console.cloud.google.com/vertex-ai).
###Code
pipeline.delete_table('synthetic_data')
pipeline.delete_table('crystalvalue_data_statistics')
pipeline.delete_table('crystalvalue_evaluation')
pipeline.delete_table('crystalvalue_train_data')
pipeline.delete_table('crystalvalue_predict_data')
pipeline.delete_table('crystalvalue_predictions')
###Output
_____no_output_____ |
examples/3.05b-Wind-Turbine_Simulation_Workflow.ipynb | ###Markdown
Advanced Wind Turbine Simulation Workflow* A more advanced turbine workflow will include many steps (which have been showcased in previous examples)* The process exemplified here includes: 1. Weather data extraction from a MERRA dataset (windspeed, pressure, temperature) 2. Spatial adjustment of the windspeeds 3. Vertical projection of the wind speeds 4. Wind speed density correction 5. Power curve convolution 6. Capacity Factor Estimation
###Code
import reskit as rk
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Simulate a Single Location
###Code
# Set some constants for later
TURBINE_CAPACITY = 4200 # kW
TURBINE_HUB_HEIGHT = 120 # meters
TURBINE_ROTOR_DIAMETER = 136 # meters
TURBINE_LOCATION = (6.0,50.5) # (lon, lat)
# 1. Create a weather source, load, and extract weather variables
src = rk.weather.MerraSource(rk.TEST_DATA["merra-like"], bounds=[5,49,7,52], verbose=False)
src.sload_elevated_wind_speed()
src.sload_surface_pressure()
src.sload_surface_air_temperature()
raw_windspeeds = src.get("elevated_wind_speed", locations=TURBINE_LOCATION, interpolation='bilinear')
raw_pressure = src.get("surface_pressure", locations=TURBINE_LOCATION, interpolation='bilinear')
raw_temperature = src.get("surface_air_temperature", locations=TURBINE_LOCATION, interpolation='bilinear')
print(raw_windspeeds.head())
# 2. Vertically project wind speeds to hub height
roughness = rk.wind.roughness_from_clc(
clc_path=rk.TEST_DATA['clc-aachen_clipped.tif'],
loc=TURBINE_LOCATION)
projected_windspeed = rk.wind.apply_logarithmic_profile_projection(
measured_wind_speed=raw_windspeeds,
measured_height=50, # The MERRA dataset offers windspeeds at 50m
target_height=TURBINE_HUB_HEIGHT,
roughness=roughness)
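# For reference, the standard (neutral-stability) logarithmic wind profile that a
# projection like this is based on is:
#   u(z_target) = u(z_measured) * ln(z_target / z0) / ln(z_measured / z0)
# where z0 is the surface roughness length. (Reference formula only; see the reskit
# documentation for the exact behaviour of apply_logarithmic_profile_projection.)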
print(projected_windspeed.head())
# 3. Apply density correction
pressure_corrected_windspeeds = rk.wind.apply_air_density_adjustment(
wind_speed=projected_windspeed,
pressure=raw_pressure,
temperature=raw_temperature,
height=TURBINE_HUB_HEIGHT)
pressure_corrected_windspeeds.head()
# 4. Power curve estimation and convolution
power_curve = rk.wind.PowerCurve.from_capacity_and_rotor_diam(
capacity=TURBINE_CAPACITY,
rotor_diam=TURBINE_ROTOR_DIAMETER)
convoluted_power_curve = power_curve.convolute_by_gaussian(scaling=0.06, base=0.1)
# 5. Capacity factor estimation
capacity_factors = convoluted_power_curve.simulate(wind_speed=pressure_corrected_windspeeds)
capacity_factors
plt.plot(capacity_factors)
plt.show()
###Output
_____no_output_____
###Markdown
Simulate multiple locations at once (recommended)
###Code
TURBINE_CAPACITY = 4200 # kW
TURBINE_HUB_HEIGHT = 120 # meters
TURBINE_ROTOR_DIAMETER = 136 # meters
TURBINE_LOCATION = np.array([(6.25,51.), (6.50,51.), (6.25,50.75)]) # (lon,lat)
# 1
raw_windspeeds = src.get("elevated_wind_speed", locations=TURBINE_LOCATION, interpolation='bilinear')
raw_pressure = src.get("surface_pressure", locations=TURBINE_LOCATION, interpolation='bilinear')
raw_temperature = src.get("surface_air_temperature", locations=TURBINE_LOCATION, interpolation='bilinear')
# 2
roughness = rk.wind.roughness_from_clc(
clc_path=rk.TEST_DATA['clc-aachen_clipped.tif'],
loc=TURBINE_LOCATION)
projected_windspeed = rk.wind.apply_logarithmic_profile_projection(
measured_wind_speed=raw_windspeeds,
measured_height=50, # The MERRA dataset offers windspeeds at 50m
target_height=TURBINE_HUB_HEIGHT,
roughness=roughness)
# 3
pressure_corrected_windspeeds = rk.wind.apply_air_density_adjustment(
wind_speed=projected_windspeed,
pressure=raw_pressure,
temperature=raw_temperature,
height=TURBINE_HUB_HEIGHT)
# 4
power_curve = rk.wind.PowerCurve.from_capacity_and_rotor_diam(
    capacity=TURBINE_CAPACITY,
    rotor_diam=TURBINE_ROTOR_DIAMETER)
convoluted_power_curve = power_curve.convolute_by_gaussian(scaling=0.06, base=0.1)
# 5
capacity_factors = convoluted_power_curve.simulate(wind_speed=pressure_corrected_windspeeds)
# Print result
capacity_factors
capacity_factors.plot()
plt.show()
###Output
_____no_output_____ |
doc/gallery/thumbnail-from-conf-py.ipynb | ###Markdown
This notebook is part of the `nbsphinx` documentation: https://nbsphinx.readthedocs.io/. Specifying Thumbnails in `conf.py`This notebook doesn't contain any thumbnail metadata.But in the file [conf.py](../conf.py),a thumbnail is specified (via the[nbsphinx_thumbnails](../usage.ipynbnbsphinx_thumbnails)option),which will be used in the [gallery](../subdir/gallery.ipynb).The keys in the `nbsphinx_thumbnails` dictionary can contain wildcards,which behave very similarly to the[html_sidebars](https://www.sphinx-doc.org/en/master/usage/configuration.htmlconfval-html_sidebars)option.The thumbnail files can be local image files somewhere in the source directory,but you'll need to create at least one[link](../markdown-cells.ipynbLinks-to-Local-Files)to them in order to copy them to the HTML output directory.You can also use files from the `_static` directory(which contains all files in your [html_static_path](https://www.sphinx-doc.org/en/master/usage/configuration.htmlconfval-html_static_path)).If you want, you can also use files from the `_images` directory,which contains all notebook outputs. To demonstrate this feature,we are creating an image file here:
###Code
%matplotlib agg
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
ax.plot([4, 8, 15, 16, 23, 42])
fig.savefig('a-local-file.png')
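# For reference, an nbsphinx_thumbnails entry in conf.py might look roughly like this
# (paths and keys are illustrative; the actual mapping lives in this project's conf.py):
#
# nbsphinx_thumbnails = {
#     'gallery/thumbnail-from-conf-py': 'gallery/a-local-file.png',
#     'gallery/*': '_static/some-default-thumbnail.png',  # wildcard keys are allowed
# }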
###Output
_____no_output_____ |
material/04_zeros_funcoes.ipynb | ###Markdown
**False Position Method (or Chord Method)**Analogously to the Bisection method, this is also a bracketing (interval-splitting) method.In the chord method, the split is made exactly at the point where the straight line defined by the points (a, f(a)) and (b, f(b)) intersects the x axis. This is equivalent to replacing the function f on [a; b] by a straight line.Hence, **y = f(a) + (x - a)*(f(b) - f(a))/(b - a)**whose intersection with the x axis is obtained by setting y = 0 and solving for x: **x = (a*f(b) - b*f(a))/(f(b) - f(a))**. Therefore, within the bracketing interval the function under study is replaced by a straight line, and the root of this line is taken as an approximation of the root being sought. The geometric idea of the method is illustrated in the figure below. Graphically, this point x (or x0 in the figure below) is the intersection between the X axis and the line passing through (a, f(a)) and (b, f(b)).The algorithm of the False Position Method is described below (and visualized in the figure below):* 1) Applying Bolzano's theorem (f(a)*f(b) < 0) to the initial interval (a, b), we see that there is at least one root in this interval.* 2) Applying the False Position Method, the line that intersects the X axis (blue line in the figure) divides the interval (a, b) into two subintervals (a, x0) and (x0, b); we apply Bolzano's theorem to each subinterval (f(a)*f(x0) < 0 ? or f(x0)*f(b) < 0 ?), verify that the root lies in the subinterval (x0, b), and discard the interval (a, x0), which does not contain the root being sought.* 3) In the new interval (x0, b), we apply the False Position Method and repeat process (2) for the subinterval containing the root, and so on. Thus, applying the False Position Method to the subinterval (x0, b), the line that intersects the X axis (green line in the figure) divides the interval (x0, b) into two subintervals (x0, x1) and (x1, b); we apply Bolzano's theorem to each subinterval (f(x0)*f(x1) < 0 ? or f(x1)*f(b) < 0 ?), verify that the root lies in the subinterval (x1, b), and discard the interval (x0, x1), which does not contain the root being sought.* 4) Again, for the new interval (x1, b), we apply the False Position Method to the subinterval (x1, b): the line that intersects the X axis (yellow line in the figure) divides the interval (x1, b) into two subintervals (x1, x2) and (x2, b); we apply Bolzano's theorem to each subinterval (f(x1)*f(x2) < 0 ? or f(x2)*f(b) < 0 ?), verify that the root lies in the subinterval (x2, b), and discard the interval (x1, x2), which does not contain the root being sought.* 5) The False Position Method algorithm terminates when the adopted stopping criterion is met. **Exercise (1): Using the False Position method, determine the approximate value of the root of the functions below (import the math library):*** (a) f(x) = x**2 - 5; on the interval I=[a, b]=[2, 3], using as stopping criterion (b-a)/2 <= E (error), E=0.01.* (b) f(x) = sin(x) + ln(x); on the interval I=[a, b]=[0.2, 0.8], using as stopping criterion c = b - a <= l (final amplitude), l = 0.05.* (c) f(x) = sin(x) + ln(x); on the interval I=[a, b]=[0.2, 0.8], using as stopping criterion: f(x0) <= P2, where x0 = (a+b)/2 and P2 = the precision related to the distance from the image of x0 to the x axis, P2 = 0.01. **ANSWER:**(a) f(x) = x**2 - 5; on the interval I=[2, 3], using as stopping criterion (b-a)/2 <= E (error), E=0.01
###Code
# Import the math library
import math
# Define an interval [a, b] and an error tolerance E
a = 2
b = 3
E = 0.01
# Define the function (part (a))
def f(x):
return x**2 - 5
# Plot the graph of f(x) = x**2 - 5
import numpy as np
import matplotlib.pyplot as plt  # library for plotting graphs
x = np.linspace(-4, 4)  # limits on the x axis
plt.plot(x, f(x))
plt.grid()
plt.show()
# Bolzano's theorem and the False Position method
if f(a)*f(b) < 0:
while (math.fabs(b-a)/2 > E):
        xi = (a*f(b)-b*f(a))/(f(b)-f(a))  # For the False Position method we change the iteration formula: this replaces x = (a+b)/2 used in the Bisection method
if f(xi) == 0:
print("A raiz é:", xi)
break
else:
if f(a)*f(xi) < 0:
b = xi
else:
a = xi
print("O valor da raiz é: ", xi)
else:
print("Não há raiz neste intervalo")
# OBS: Compare com o resultado do Exercício(2) do método da Bisseção.
###Output
The value of the root is:  2.23606797749979
###Markdown
**ANSWER:** (b) f(x) = sin(x) + ln(x); on the interval I=[a, b]=[0.2, 0.8], using as stopping criterion c = b - a <= l (final bracket width), l = 0.05.
###Code
# Import the math library
import math
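# --- Added sketch (not part of the original notebook): one possible solution for (b) ---
# f(x) = sin(x) + ln(x) on [0.2, 0.8], stopping when the bracket width b - a <= l = 0.05.
# Note: with the False Position method one endpoint can stay fixed, so the bracket
# width may never shrink below l; a maximum-iteration guard is therefore included.
def f(x):
    return math.sin(x) + math.log(x)

a, b, l = 0.2, 0.8, 0.05
if f(a)*f(b) < 0:
    xi = (a*f(b) - b*f(a))/(f(b) - f(a))
    iterations = 0
    while (b - a) > l and iterations < 100:
        xi = (a*f(b) - b*f(a))/(f(b) - f(a))
        if f(xi) == 0:
            break
        if f(a)*f(xi) < 0:
            b = xi
        else:
            a = xi
        iterations += 1
    print("Approximate root:", xi)
else:
    print("There is no root in this interval")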
###Output
_____no_output_____
###Markdown
**ANSWER:** (c) f(x) = sin(x) + ln(x); on the interval I=[a, b]=[0.2, 0.8], using as stopping criterion f(x0) <= P2, where x0 = (a+b)/2 and P2 is the precision related to the distance from the image of x0 to the x axis, P2 = 0.01.
###Code
# Import the math library
import math
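# --- Added sketch (not part of the original notebook): one possible solution for (c) ---
# f(x) = sin(x) + ln(x) on [0.2, 0.8], stopping when |f(x0)| <= P2 = 0.01.
# Here x0 is the False Position iterate (for this method it is the iterate,
# not the bracket midpoint, that approaches the root).
def f(x):
    return math.sin(x) + math.log(x)

a, b, P2 = 0.2, 0.8, 0.01
if f(a)*f(b) < 0:
    x0 = (a*f(b) - b*f(a))/(f(b) - f(a))
    while math.fabs(f(x0)) > P2:
        if f(a)*f(x0) < 0:
            b = x0
        else:
            a = x0
        x0 = (a*f(b) - b*f(a))/(f(b) - f(a))
    print("Approximate root:", x0)
else:
    print("There is no root in this interval")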
###Output
_____no_output_____ |
notebooks/GRU_215.ipynb | ###Markdown
GRU 215
* Operate on 16000 GenCode 34 seqs.
* 5-way cross validation. Save best model per CV.
* Report mean accuracy from final re-validation with best 5.
* Use Adam with a learning-rate decay schedule.
###Code
NC_FILENAME='ncRNA.gc34.processed.fasta'
PC_FILENAME='pcRNA.gc34.processed.fasta'
DATAPATH=""
try:
from google.colab import drive
IN_COLAB = True
PATH='/content/drive/'
drive.mount(PATH)
DATAPATH=PATH+'My Drive/data/' # must end in "/"
NC_FILENAME = DATAPATH+NC_FILENAME
PC_FILENAME = DATAPATH+PC_FILENAME
except:
IN_COLAB = False
DATAPATH=""
EPOCHS=200
SPLITS=1
K=3
VOCABULARY_SIZE=4**K+1 # e.g. K=3 => 64 DNA K-mers + 'NNN'
EMBED_DIMEN=16
FILENAME='GRU215'
NEURONS=64
DROP=0.5
ACT="tanh"
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import ShuffleSplit
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import RepeatedKFold
from sklearn.model_selection import StratifiedKFold
import tensorflow as tf
from tensorflow import keras
from keras.wrappers.scikit_learn import KerasRegressor
from keras.models import Sequential
from keras.layers import Bidirectional
from keras.layers import GRU
from keras.layers import Dense
from keras.layers import LayerNormalization
import time
dt='float32'
tf.keras.backend.set_floatx(dt)
###Output
_____no_output_____
###Markdown
Build model
###Code
def compile_model(model):
adam_default_learn_rate = 0.001
schedule = tf.keras.optimizers.schedules.ExponentialDecay(
initial_learning_rate = adam_default_learn_rate*10,
#decay_steps=100000, decay_rate=0.96, staircase=True)
decay_steps=10000, decay_rate=0.99, staircase=True)
# learn rate = initial_learning_rate * decay_rate ^ (step / decay_steps)
alrd = tf.keras.optimizers.Adam(learning_rate=schedule)
bc=tf.keras.losses.BinaryCrossentropy(from_logits=False)
print("COMPILE...")
#model.compile(loss=bc, optimizer=alrd, metrics=["accuracy"])
model.compile(loss=bc, optimizer="adam", metrics=["accuracy"])
print("...COMPILED")
return model
def build_model():
embed_layer = keras.layers.Embedding(
#VOCABULARY_SIZE, EMBED_DIMEN, input_length=1000, input_length=1000, mask_zero=True)
#input_dim=[None,VOCABULARY_SIZE], output_dim=EMBED_DIMEN, mask_zero=True)
input_dim=VOCABULARY_SIZE, output_dim=EMBED_DIMEN, mask_zero=True)
#rnn1_layer = keras.layers.Bidirectional(
rnn1_layer = keras.layers.GRU(NEURONS, return_sequences=True,
input_shape=[1000,EMBED_DIMEN], activation=ACT, dropout=DROP) #)#bi
#rnn2_layer = keras.layers.Bidirectional(
rnn2_layer = keras.layers.GRU(NEURONS, return_sequences=False,
activation=ACT, dropout=DROP) #)#bi
dense1_layer = keras.layers.Dense(NEURONS, activation=ACT,dtype=dt)
drop1_layer = keras.layers.Dropout(DROP)
dense2_layer = keras.layers.Dense(NEURONS, activation=ACT,dtype=dt)
drop2_layer = keras.layers.Dropout(DROP)
output_layer = keras.layers.Dense(1, activation="sigmoid", dtype=dt)
mlp = keras.models.Sequential()
mlp.add(embed_layer)
mlp.add(rnn1_layer)
mlp.add(rnn2_layer)
mlp.add(dense1_layer)
mlp.add(drop1_layer)
mlp.add(dense2_layer)
mlp.add(drop2_layer)
mlp.add(output_layer)
mlpc = compile_model(mlp)
return mlpc
###Output
_____no_output_____
###Markdown
Load and partition sequences
###Code
# Assume file was preprocessed to contain one line per seq.
# Prefer Pandas dataframe but df does not support append.
# For conversion to tensor, must avoid python lists.
def load_fasta(filename,label):
DEFLINE='>'
labels=[]
seqs=[]
lens=[]
nums=[]
num=0
with open (filename,'r') as infile:
for line in infile:
if line[0]!=DEFLINE:
seq=line.rstrip()
num += 1 # first seqnum is 1
seqlen=len(seq)
nums.append(num)
labels.append(label)
seqs.append(seq)
lens.append(seqlen)
df1=pd.DataFrame(nums,columns=['seqnum'])
df2=pd.DataFrame(labels,columns=['class'])
df3=pd.DataFrame(seqs,columns=['sequence'])
df4=pd.DataFrame(lens,columns=['seqlen'])
df=pd.concat((df1,df2,df3,df4),axis=1)
return df
def separate_X_and_y(data):
y= data[['class']].copy()
X= data.drop(columns=['class','seqnum','seqlen'])
return (X,y)
###Output
_____no_output_____
###Markdown
Make K-mers
###Code
def make_kmer_table(K):
npad='N'*K
shorter_kmers=['']
for i in range(K):
longer_kmers=[]
for mer in shorter_kmers:
longer_kmers.append(mer+'A')
longer_kmers.append(mer+'C')
longer_kmers.append(mer+'G')
longer_kmers.append(mer+'T')
shorter_kmers = longer_kmers
all_kmers = shorter_kmers
kmer_dict = {}
kmer_dict[npad]=0
value=1
for mer in all_kmers:
kmer_dict[mer]=value
value += 1
return kmer_dict
KMER_TABLE=make_kmer_table(K)
def strings_to_vectors(data,uniform_len):
all_seqs=[]
for seq in data['sequence']:
i=0
seqlen=len(seq)
kmers=[]
while i < seqlen-K+1 -1: # stop at minus one for spaced seed
#kmer=seq[i:i+2]+seq[i+3:i+5] # SPACED SEED 2/1/2 for K=4
kmer=seq[i:i+K]
i += 1
value=KMER_TABLE[kmer]
kmers.append(value)
pad_val=0
while i < uniform_len:
kmers.append(pad_val)
i += 1
all_seqs.append(kmers)
pd2d=pd.DataFrame(all_seqs)
return pd2d # return 2D dataframe, uniform dimensions
def make_kmers(MAXLEN,train_set):
(X_train_all,y_train_all)=separate_X_and_y(train_set)
X_train_kmers=strings_to_vectors(X_train_all,MAXLEN)
# From pandas dataframe to numpy to list to numpy
num_seqs=len(X_train_kmers)
tmp_seqs=[]
for i in range(num_seqs):
kmer_sequence=X_train_kmers.iloc[i]
tmp_seqs.append(kmer_sequence)
X_train_kmers=np.array(tmp_seqs)
tmp_seqs=None
labels=y_train_all.to_numpy()
return (X_train_kmers,labels)
def make_frequencies(Xin):
Xout=[]
VOCABULARY_SIZE= 4**K + 1 # plus one for 'NNN'
for seq in Xin:
freqs =[0] * VOCABULARY_SIZE
total = 0
for kmerval in seq:
freqs[kmerval] += 1
total += 1
for c in range(VOCABULARY_SIZE):
freqs[c] = freqs[c]/total
Xout.append(freqs)
Xnum = np.asarray(Xout)
return (Xnum)
def make_slice(data_set,min_len,max_len):
slice = data_set.query('seqlen <= '+str(max_len)+' & seqlen>= '+str(min_len))
return slice
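# --- Illustrative sanity check added for this write-up (not part of the original experiment) ---
# With K=3 the table maps 'NNN' to 0 and 'AAA' to 1, and strings_to_vectors()
# zero-pads every sequence up to the requested uniform length.
demo = pd.DataFrame({'sequence': ['ACGTACGT']})
demo_vec = strings_to_vectors(demo, 10)
assert KMER_TABLE['NNN'] == 0 and KMER_TABLE['AAA'] == 1
assert demo_vec.shape == (1, 10)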
###Output
_____no_output_____
###Markdown
Cross validation
###Code
def do_cross_validation(X,y,given_model):
cv_scores = []
fold=0
splitter = ShuffleSplit(n_splits=SPLITS, test_size=0.1, random_state=37863)
for train_index,valid_index in splitter.split(X):
fold += 1
X_train=X[train_index] # use iloc[] for dataframe
y_train=y[train_index]
X_valid=X[valid_index]
y_valid=y[valid_index]
# Avoid continually improving the same model.
model = compile_model(keras.models.clone_model(given_model))
bestname=DATAPATH+FILENAME+".cv."+str(fold)+".best"
mycallbacks = [keras.callbacks.ModelCheckpoint(
filepath=bestname, save_best_only=True,
monitor='val_accuracy', mode='max')]
print("FIT")
start_time=time.time()
history=model.fit(X_train, y_train, # batch_size=10, default=32 works nicely
epochs=EPOCHS, verbose=1, # verbose=1 for ascii art, verbose=0 for none
callbacks=mycallbacks,
validation_data=(X_valid,y_valid) )
end_time=time.time()
elapsed_time=(end_time-start_time)
print("Fold %d, %d epochs, %d sec"%(fold,EPOCHS,elapsed_time))
pd.DataFrame(history.history).plot(figsize=(8,5))
plt.grid(True)
plt.gca().set_ylim(0,1)
plt.show()
best_model=keras.models.load_model(bestname)
scores = best_model.evaluate(X_valid, y_valid, verbose=0)
print("%s: %.2f%%" % (best_model.metrics_names[1], scores[1]*100))
cv_scores.append(scores[1] * 100)
print()
print("%d-way Cross Validation mean %.2f%% (+/- %.2f%%)" % (fold, np.mean(cv_scores), np.std(cv_scores)))
###Output
_____no_output_____
###Markdown
Train on RNA lengths 200-1Kb
###Code
MINLEN=200
MAXLEN=1000
print("Load data from files.")
nc_seq=load_fasta(NC_FILENAME,0)
pc_seq=load_fasta(PC_FILENAME,1)
train_set=pd.concat((nc_seq,pc_seq),axis=0)
nc_seq=None
pc_seq=None
print("Ready: train_set")
#train_set
subset=make_slice(train_set,MINLEN,MAXLEN)# One array to two: X and y
print ("Data reshape")
(X_train,y_train)=make_kmers(MAXLEN,subset)
#print ("Data prep")
#X_train=make_frequencies(X_train)
print ("Compile the model")
model=build_model()
print ("Summarize the model")
print(model.summary()) # Print this only once
model.save(DATAPATH+FILENAME+'.model')
print ("Cross valiation")
do_cross_validation(X_train,y_train,model)
print ("Done")
###Output
_____no_output_____ |
Complete-Python-3-Bootcamp-master/09-Built-in Functions/03-Filter.ipynb | ###Markdown
______Content Copyright by Pierian Data filter The function filter(function, list) offers a convenient way to keep only those elements of an iterable for which the function returns True. The function filter(function,list) needs a function as its first argument. The function needs to return a Boolean value (either True or False). This function will be applied to every element of the iterable. Only if the function returns True will the element of the iterable be included in the result. Like map(), filter() returns an *iterator* - that is, filter yields one result at a time as needed. Iterators and generators will be covered in an upcoming lecture. For now, since our examples are so small, we will cast filter() as a list to see our results immediately. Let's see some examples:
###Code
#First let's make a function
def even_check(num):
if num%2 ==0:
return True
###Output
_____no_output_____
###Markdown
Now let's filter a list of numbers. Note: putting the function into filter without any parentheses might feel strange, but keep in mind that functions are objects as well.
###Code
lst = range(20)
list(filter(even_check, lst))
###Output
_____no_output_____
###Markdown
filter() is more commonly used with lambda functions, because we usually use filter for a quick job where we don't want to write an entire function. Let's repeat the example above using a lambda expression:
###Code
list(filter(lambda x: x%2==0,lst))
###Output
_____no_output_____ |
machine_learning/Mega-BDT-kappa-lambda-HL-LHC.ipynb | ###Markdown
___________________ The hh analysis
###Code
channels = [df_hhsm_t, df_hhsm_i, df_hhsm_b, df_bbh_tth, df_bbxaa]
df_train = pd.concat(channels, ignore_index=True)
df_train = df_train.sample(frac=1).reset_index(drop=True)
class_names = [r'$bb\gamma\gamma$', r'$Q\bar{Q}h$', r'$hh^{gg\rm F}_{\rm box}$', r'$hh^{ggF}_{\rm int}$', r'$hh^{gg\rm F}_{\rm tri}$']
filename = '../results/models/HL-LHC-BDT/hh-BDT-5class-hhsm.pickle.dat'
shap_plot = '../plots/HL-LHC-shap-bbxaa-bbh-tth-hhsm.pdf'
classifier, x_test, y_test, shap_values_5, X_shap_5 = runBDT(df_train, filename, depth=10)
abs_shap(shap_values_5, X_shap_5, shap_plot, names=names, class_names=class_names, cmp=cmp_5)
hhsm_t_p = df_hhsm_t_test.sample(n=round(weight_hhsm_t), replace=True, random_state=seed).reset_index(drop=True)
hhsm_i_p = df_hhsm_i_test.sample(n=round(weight_hhsm_i), replace=True, random_state=seed).reset_index(drop=True)
hhsm_i_b = df_hhsm_b_test.sample(n=round(weight_hhsm_b), replace=True, random_state=seed).reset_index(drop=True)
bbh_tth_p = df_bbh_tth_test.sample(n=round(weight_bbh_tth), replace=True, random_state=seed).reset_index(drop=True)
bbxaa_p = df_bbxaa_test.sample(n=round(weight_bbxaa), replace=True, random_state=seed).reset_index(drop=True)
print('Accuracy Score for hhsm triangle: {:4.2f}% '.format(100*metrics.accuracy_score(hhsm_t_p['class'].values, classifier.predict(hhsm_t_p.drop(columns=['class', 'weight']).values))))
print('Accuracy Score for hhsm interference: {:4.2f}% '.format(100*metrics.accuracy_score(hhsm_i_p['class'].values, classifier.predict(hhsm_i_p.drop(columns=['class', 'weight']).values))))
print('Accuracy Score for hhsm box: {:4.2f}% '.format(100*metrics.accuracy_score(hhsm_i_b['class'].values, classifier.predict(hhsm_i_b.drop(columns=['class', 'weight']).values))))
print('Accuracy Score for bbh+tth: {:4.2f}% '.format(100*metrics.accuracy_score(bbh_tth_p['class'].values, classifier.predict(bbh_tth_p.drop(columns=['class', 'weight']).values))))
print('Accuracy Score for bbxaa: {:4.2f}% '.format(100*metrics.accuracy_score(bbxaa_p['class'].values, classifier.predict(bbxaa_p.drop(columns=['class', 'weight']).values))))
df_array = [df_bbxaa_test, df_bbh_tth_test, df_hhsm_b_test, df_hhsm_i_test, df_hhsm_t_test]
weight_array = [weight_bbxaa, weight_bbh_tth, weight_hhsm_b, weight_hhsm_i, weight_hhsm_t]
keys = ['tri', 'int', 'box', 'tth+bbh', 'bbxaa']
filename = '../results/confusion/HL-LHC-BDT/hh-BDT-5class-hhsm.confusion.json'
df = build_confusion(df_array, weight_array, classifier, filename, keys)
df
###Output
_____no_output_____
###Markdown
Checks for the confusion matrix
###Code
now = time.time()
def func(i):
df = pd.concat([df_bbxaa_test.sample(n=int(weight_bbxaa), replace=True),
df_bbh_tth_test.sample(n=int(weight_bbh_tth)),
df_hhsm_b_test.sample(n=int(weight_hhsm_b)),
df_hhsm_i_test.sample(n=int(weight_hhsm_i)),
df_hhsm_t_test.sample(n=int(weight_hhsm_t))])
true_class = df['class'].values
predicted_class = classifier.predict(df.iloc[:, :-2].values)
conf = metrics.confusion_matrix(y_pred=predicted_class, y_true=true_class)[::-1].T
return conf[4-i][i]/np.sqrt(np.sum(conf[4-i]))
results = Parallel(n_jobs=N_THREADS, backend="loky")(delayed(func)(1) for _ in range(100))
print(time.time() - now, np.mean(results))
###Output
98.18621635437012 2.425919994255141
###Markdown
________________________ The kappa_u & kappa_lambda
###Code
channels = [df_ku, df_hhsm_t, df_hhsm_i, df_hhsm_b, df_bbh_tth, df_bbxaa]
df_train = pd.concat(channels, ignore_index=True)
df_train = df_train.sample(frac=1).reset_index(drop=True)
class_names = [r'$bb\gamma\gamma$', r'$Q\bar{Q}h$', r'$hh^{gg\rm F}_{\rm box}$', r'$hh^{gg\rm F}_{\rm int}$', r'$hh^{gg\rm F}_{\rm tri}$', r'$u \bar u hh$']
filename = '../results/models/HL-LHC-BDT/hh-BDT-6class-ku.pickle.dat'
shap_plot = '../plots/HL-LHC-shap-bbxaa-bbh-tth-hhsm-ku.pdf'
classifier, x_test, y_test, shap_values_6u, X_shap_6u = runBDT(df_train, filename, depth=10)
abs_shap(shap_values_6u, X_shap_6u, shap_plot, names=names, class_names=class_names, cmp=cmp_6)
ku_p = df_ku_test.sample(n=round(weight_ku), replace=True, random_state=seed).reset_index(drop=True)
hhsm_t_p = df_hhsm_t_test.sample(n=round(weight_hhsm_t), replace=True, random_state=seed).reset_index(drop=True)
hhsm_i_p = df_hhsm_i_test.sample(n=round(weight_hhsm_i), replace=True, random_state=seed).reset_index(drop=True)
hhsm_i_b = df_hhsm_b_test.sample(n=round(weight_hhsm_b), replace=True, random_state=seed).reset_index(drop=True)
bbh_tth_p = df_bbh_tth_test.sample(n=round(weight_bbh_tth), replace=True, random_state=seed).reset_index(drop=True)
bbxaa_p = df_bbxaa_test.sample(n=round(weight_bbxaa), replace=True, random_state=seed).reset_index(drop=True)
print('Accuracy Score for ku: {:4.2f}% '.format(100*metrics.accuracy_score(ku_p['class'].values, classifier.predict(ku_p.drop(columns=['class', 'weight']).values))))
print('Accuracy Score for hhsm triangle: {:4.2f}% '.format(100*metrics.accuracy_score(hhsm_t_p['class'].values, classifier.predict(hhsm_t_p.drop(columns=['class', 'weight']).values))))
print('Accuracy Score for hhsm interference: {:4.2f}% '.format(100*metrics.accuracy_score(hhsm_t_p['class'].values, classifier.predict(hhsm_t_p.drop(columns=['class', 'weight']).values))))
print('Accuracy Score for hhsm box: {:4.2f}% '.format(100*metrics.accuracy_score(hhsm_i_b['class'].values, classifier.predict(hhsm_i_b.drop(columns=['class', 'weight']).values))))
print('Accuracy Score for bbh+tth: {:4.2f}% '.format(100*metrics.accuracy_score(bbh_tth_p['class'].values, classifier.predict(bbh_tth_p.drop(columns=['class', 'weight']).values))))
print('Accuracy Score for bbxaa: {:4.2f}% '.format(100*metrics.accuracy_score(bbxaa_p['class'].values, classifier.predict(bbxaa_p.drop(columns=['class', 'weight']).values))))
df_array = [df_bbxaa_test, df_bbh_tth_test, df_hhsm_b_test, df_hhsm_i_test, df_hhsm_t_test, df_ku_test]
weight_array = [weight_bbxaa, weight_bbh_tth, weight_hhsm_b, weight_hhsm_i, weight_hhsm_t, weight_ku]
keys = ['ku', 'tri', 'int', 'box', 'tth+bbh', 'bbxaa']
filename = '../results/confusion/HL-LHC-BDT/hh-BDT-6class-ku.confusion.json'
df = build_confusion(df_array, weight_array, classifier, filename, keys)
df
###Output
_____no_output_____
###Markdown
_________________________________ The kappa_d & kappa_lambda
###Code
channels = [df_kd, df_hhsm_t, df_hhsm_i, df_hhsm_b, df_bbh_tth, df_bbxaa]
df_train = pd.concat(channels, ignore_index=True)
df_train = df_train.sample(frac=1).reset_index(drop=True)
class_names = [r'$bb\gamma\gamma$', r'$Q\bar{Q}h$', r'$hh^{gg\rm F}_{\rm box}$', r'$hh^{gg\rm F}_{\rm int}$', r'$hh^{gg\rm F}_{\rm tri}$', r'$d \bar d hh$']
filename = '../results/models/HL-LHC-BDT/hh-BDT-6class-kd.pickle.dat'
shap_plot = '../plots/HL-LHC-shap-bbxaa-bbh-tth-hhsm-kd.pdf'
classifier, x_test, y_test, shap_values_6d, X_shap_6d = runBDT(df_train, filename, depth=10)
abs_shap(shap_values_6d, X_shap_6d, shap_plot, names=names, class_names=class_names, cmp=cmp_6)
kd_p = df_kd_test.sample(n=round(weight_kd), replace=True, random_state=seed).reset_index(drop=True)
hhsm_t_p = df_hhsm_t_test.sample(n=round(weight_hhsm_t), replace=True, random_state=seed).reset_index(drop=True)
hhsm_i_p = df_hhsm_i_test.sample(n=round(weight_hhsm_i), replace=True, random_state=seed).reset_index(drop=True)
hhsm_i_b = df_hhsm_b_test.sample(n=round(weight_hhsm_b), replace=True, random_state=seed).reset_index(drop=True)
bbh_tth_p = df_bbh_tth_test.sample(n=round(weight_bbh_tth), replace=True, random_state=seed).reset_index(drop=True)
bbxaa_p = df_bbxaa_test.sample(n=round(weight_bbxaa), replace=True, random_state=seed).reset_index(drop=True)
print('Accuracy Score for kd: {:4.2f}% '.format(100*metrics.accuracy_score(kd_p['class'].values, classifier.predict(kd_p.drop(columns=['class', 'weight']).values))))
print('Accuracy Score for hhsm triangle: {:4.2f}% '.format(100*metrics.accuracy_score(hhsm_t_p['class'].values, classifier.predict(hhsm_t_p.drop(columns=['class', 'weight']).values))))
print('Accuracy Score for hhsm interference: {:4.2f}% '.format(100*metrics.accuracy_score(hhsm_i_p['class'].values, classifier.predict(hhsm_i_p.drop(columns=['class', 'weight']).values))))
print('Accuracy Score for hhsm box: {:4.2f}% '.format(100*metrics.accuracy_score(hhsm_i_b['class'].values, classifier.predict(hhsm_i_b.drop(columns=['class', 'weight']).values))))
print('Accuracy Score for bbh+tth: {:4.2f}% '.format(100*metrics.accuracy_score(bbh_tth_p['class'].values, classifier.predict(bbh_tth_p.drop(columns=['class', 'weight']).values))))
print('Accuracy Score for bbxaa: {:4.2f}% '.format(100*metrics.accuracy_score(bbxaa_p['class'].values, classifier.predict(bbxaa_p.drop(columns=['class', 'weight']).values))))
df_array = [df_bbxaa_test, df_bbh_tth_test, df_hhsm_b_test, df_hhsm_i_test, df_hhsm_t_test, df_kd_test]
weight_array = [weight_bbxaa, weight_bbh_tth, weight_hhsm_b, weight_hhsm_i, weight_hhsm_t, weight_kd]
keys = ['kd', 'tri', 'int', 'box', 'tth+bbh', 'bbxaa']
filename = '../results/confusion/HL-LHC-BDT/hh-BDT-6class-kd.confusion.json'
df = build_confusion(df_array, weight_array, classifier, filename, keys)
df
###Output
_____no_output_____
###Markdown
________________________ The kappa_u, kappa_d and kappa_lambda
###Code
df_ku['class'] = 6
df_ku_test['class'] = 6
channels = [df_ku, df_kd, df_hhsm_t, df_hhsm_i, df_hhsm_b, df_bbh_tth, df_bbxaa]
df_train = pd.concat(channels, ignore_index=True)
df_train = df_train.sample(frac=1).reset_index(drop=True)
class_names = [r'$bb\gamma\gamma$', r'$Q\bar{Q}h$', r'$hh^{gg\rm F}_{\rm box}$', r'$hh^{gg\rm F}_{\rm int}$', r'$hh^{gg\rm F}_{\rm tri}$', r'$d \bar d hh$', r'$u \bar u hh$']
filename = '../results/models/HL-LHC-BDT/hh-BDT-7class-ku-kd.pickle.dat'
shap_plot = '../plots/HL-LHC-shap-bbxaa-bbh-tth-hhsm-ku-kd-kl.pdf'
classifier, x_test, y_test, shap_values_7ud, X_shap_7ud = runBDT(df_train, filename, depth=10)
abs_shap(shap_values_7ud, X_shap_7ud, shap_plot, names=names, class_names=class_names, cmp=cmp_7)
ku_p = df_ku_test.sample(n=round(weight_ku), replace=True, random_state=seed).reset_index(drop=True)
kd_p = df_kd_test.sample(n=round(weight_kd), replace=True, random_state=seed).reset_index(drop=True)
hhsm_t_p = df_hhsm_t_test.sample(n=round(weight_hhsm_t), replace=True, random_state=seed).reset_index(drop=True)
hhsm_i_p = df_hhsm_i_test.sample(n=round(weight_hhsm_i), replace=True, random_state=seed).reset_index(drop=True)
hhsm_i_b = df_hhsm_b_test.sample(n=round(weight_hhsm_b), replace=True, random_state=seed).reset_index(drop=True)
bbh_tth_p = df_bbh_tth_test.sample(n=round(weight_bbh_tth), replace=True, random_state=seed).reset_index(drop=True)
bbxaa_p = df_bbxaa_test.sample(n=round(weight_bbxaa), replace=True, random_state=seed).reset_index(drop=True)
print('Accuracy Score for ku: {:4.2f}% '.format(100*metrics.accuracy_score(ku_p['class'].values, classifier.predict(ku_p.drop(columns=['class', 'weight']).values))))
print('Accuracy Score for kd: {:4.2f}% '.format(100*metrics.accuracy_score(kd_p['class'].values, classifier.predict(kd_p.drop(columns=['class', 'weight']).values))))
print('Accuracy Score for hhsm triangle: {:4.2f}% '.format(100*metrics.accuracy_score(hhsm_t_p['class'].values, classifier.predict(hhsm_t_p.drop(columns=['class', 'weight']).values))))
print('Accuracy Score for hhsm interference: {:4.2f}% '.format(100*metrics.accuracy_score(hhsm_i_p['class'].values, classifier.predict(hhsm_i_p.drop(columns=['class', 'weight']).values))))
print('Accuracy Score for hhsm box: {:4.2f}% '.format(100*metrics.accuracy_score(hhsm_i_b['class'].values, classifier.predict(hhsm_i_b.drop(columns=['class', 'weight']).values))))
print('Accuracy Score for bbh+tth: {:4.2f}% '.format(100*metrics.accuracy_score(bbh_tth_p['class'].values, classifier.predict(bbh_tth_p.drop(columns=['class', 'weight']).values))))
print('Accuracy Score for bbxaa: {:4.2f}% '.format(100*metrics.accuracy_score(bbxaa_p['class'].values, classifier.predict(bbxaa_p.drop(columns=['class', 'weight']).values))))
df_array = [df_bbxaa_test, df_bbh_tth_test, df_hhsm_b_test, df_hhsm_i_test, df_hhsm_t_test, df_kd_test, df_ku_test]
weight_array = [weight_bbxaa, weight_bbh_tth, weight_hhsm_b, weight_hhsm_i, weight_hhsm_t, weight_kd, weight_ku]
keys = ['ku', 'kd', 'tri', 'int', 'box', 'tth+bbh', 'bbxaa']
filename = '../results/confusion/HL-LHC-BDT/hh-BDT-7class-ku-kd.confusion.json'
df = build_confusion(df_array, weight_array, classifier, filename, keys)
df
###Output
_____no_output_____
###Markdown
_____________________________ Standard Model di-Higgs
###Code
channels = [df_hhsm, df_bbh_tth, df_bbxaa]
df_train = pd.concat(channels, ignore_index=True)
df_train = df_train.sample(frac=1).reset_index(drop=True)
class_names = [r'$bb\gamma\gamma$', r'$Q\bar{Q}h$', r'$hh^{gg\rm F}$']
filename = '../results/models/HL-LHC-BDT/hh-BDT-3class-hhsm-SM.pickle.dat'
shap_plot = '../plots/HL-LHC-shap-bbxaa-bbh-tth-hhsm-SM.pdf'
classifier, x_test, y_test, shap_values_3, X_shap_3 = runBDT(df_train, filename, depth=10)
abs_shap(shap_values_3, X_shap_3, shap_plot, names=names, class_names=class_names, cmp=cmp_5)
hhsm_p = df_hhsm_test.sample(n=round(weight_hhsm), replace=True, random_state=seed).reset_index(drop=True)
bbh_tth_p = df_bbh_tth_test.sample(n=round(weight_bbh_tth), replace=True, random_state=seed).reset_index(drop=True)
bbxaa_p = df_bbxaa_test.sample(n=round(weight_bbxaa), replace=True, random_state=seed).reset_index(drop=True)
print('Accuracy Score for hhsm: {:4.2f}% '.format(100*metrics.accuracy_score(hhsm_p['class'].values, classifier.predict(hhsm_p.drop(columns=['class', 'weight']).values))))
print('Accuracy Score for bbh+tth: {:4.2f}% '.format(100*metrics.accuracy_score(bbh_tth_p['class'].values, classifier.predict(bbh_tth_p.drop(columns=['class', 'weight']).values))))
print('Accuracy Score for bbxaa: {:4.2f}% '.format(100*metrics.accuracy_score(bbxaa_p['class'].values, classifier.predict(bbxaa_p.drop(columns=['class', 'weight']).values))))
df_array = [df_bbxaa_test, df_bbh_tth_test, df_hhsm_test]
weight_array = [weight_bbxaa, weight_bbh_tth, weight_hhsm]
keys = ['hhsm', 'tth+bbh', 'bbxaa']
filename = '../results/confusion/HL-LHC-BDT/hh-BDT-3class-hhsm-SM.confusion.json'
df = build_confusion(df_array, weight_array, classifier, filename, keys)
df
###Output
_____no_output_____
###Markdown
Checks for the confusion matrix
###Code
now = time.time()
def func(i):
df = pd.concat([df_bbxaa_test.sample(n=int(weight_bbxaa), replace=True), df_bbh_tth_test.sample(n=int(weight_bbh_tth)), df_hhsm.sample(n=int(weight_hhsm))])
true_class = df['class'].values
predicted_class = classifier.predict(df.iloc[:, :-2].values)
conf = metrics.confusion_matrix(y_pred=predicted_class, y_true=true_class)[::-1].T
return conf[2][0]/np.sqrt(np.sum(conf[2]))
results = Parallel(n_jobs=N_THREADS, backend="loky")(delayed(func)(1) for _ in range(500))
print(time.time() - now, np.mean(results))
###Output
186.54820227622986 4.772608539390341
|
cantilever_example.ipynb | ###Markdown
Static cantilever
###Code
# define geometric properties of the rod
geometry = Geometry(length=0.1, radius=5e-4)
# define material properties of the rod
material = Material(geometry=geometry, elastic_modulus=200e9, shear_modulus=85e9)
# initialize the solver for the cantilever problem (fixed displacement and rotation at leftmost end)
cantilever = Cantilever(geometry, material, number_of_elements=100)
# set the Neumann boundary condition at the rightmost end
cantilever.boundary_condition = [0, 1, 0, 0, 0, 0]
%time cantilever.run_simulation()
cantilever.result.plot_centerline()
cantilever.result.plot_load_step_iterations()
cantilever.result.plot_norms_in_loadstep(load_step=-1)
###Output
_____no_output_____
###Markdown
Sandbox ...
###Code
import numba
import matplotlib.pyplot as plt
cantilever.result.centerline[:, -1]
numba.typeof(cantilever.result.centerline)
numba.typeof(cantilever.result.increments_norm_evolution)
numba.typeof(cantilever.result.load_step_iterations)
numba.typeof(cantilever.result.load_steps)
numba.typeof(cantilever.result.residuals_norm_evolution)
numba.typeof(cantilever.load_control_parameters[0])
numba.typeof(cantilever.maximum_iterations_per_loadstep)
numba.typeof(cantilever.boundary_condition)
numba.typeof(cantilever.geometry.length)
###Output
_____no_output_____ |
ds7333_case_study_5/ML1_SVM_classifier.ipynb | ###Markdown
Week 8: SVM Classifier
Instructor: Nedelina Teneva Email: [email protected]

Citations:
- Chapter 3: Python Machine Learning 3rd Edition by [Sebastian Raschka](https://sebastianraschka.com), Packt Publishing Ltd. 2019
- https://jakevdp.github.io/PythonDataScienceHandbook/05.07-support-vector-machines.html
- https://pythonmachinelearning.pro/using-neural-networks-for-regression-radial-basis-function-networks/

Objectives:
- support vector machines for classification.
- regularization.
- breakout room exercise.

Support Vector Machines (SVM):
- to make classification predictions, SVM rely on a few support vectors (support vectors = observations in the data).
- as a result, they take up very little memory (!).
- their reliance on support vectors (observations near the margins) makes them very suitable for high-dimensional data (even when there are more features than observations).
- finally, SVM is able to adapt to many types of data (linearly separable or not) due to the integration of kernel methods.

The not so good news:
- the results can change significantly depending on the choice of the regularization parameter C (use cross-validation to choose it).
- no direct probability interpretation (but easily added using the probability parameter, which relies on cross-validation internally).
- and a final thought: be aware that the bigger the data, the more expensive cross-validation is.

Linear SVM
- This model performs well on linearly separable classes. Linear logistic regression and linear SVM often yield very similar results.
- Logistic regression tries to maximize the conditional likelihood of the training data; this means that it pays equal attention to outliers.
- SVM avoids this problem, because it cares mostly about the points that are closest to the decision boundary (support vectors).

Nonlinear SVM (kernel SVM)
- SVM can be easily **kernelized** to solve nonlinear classification problems.
- The idea is to create nonlinear combinations of the original features, and then project them onto a higher-dimensional space via a mapping function.

Step 1: Import packages
###Code
# standard libraries
import pandas as pd
import numpy as np
import os
# data visualization
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.display import Image
# data preprocessing
from sklearn import datasets
from sklearn.preprocessing import StandardScaler
from sklearn import preprocessing
# prediction models
from sklearn.svm import SVC
###Output
_____no_output_____
###Markdown
Step 2: Define working directories Step 3: Define classes Step 4: Define functions
###Code
def wine_data():
"""Read the wine dataset from 'https://archive.ics.uci.edu/ml/machine-learning-databases/wine/wine.data'
# param: None
# return df and X,y np.arrays for training and test (cleaning included)
"""
# read data
df = pd.read_csv('https://archive.ics.uci.edu/'
'ml/machine-learning-databases/wine/wine.data',
header=None)
df.columns = ['class_label', 'alcohol', 'malic_acid', 'ash',
'alcalinity_of_ash', 'magnesium', 'total_pphenols',
'flavanoids', 'nonflavanoid_phenols', 'proanthocyanins',
'color_intensity', 'hue', 'OD280/OD315_of_diluted_wines',
'proline']
print('Shape of df wine:', df.shape)
# recode class labels (from 0 to 2)
class_mapping = {label: idx for idx, label in enumerate(np.unique(df.class_label))}
class_mapping
df['class_label'] = df.class_label.map(class_mapping)
# select only labels 0 and 1
df = df[df.class_label !=2]
# select only 2 features
labels = ['class_label']
features = ['alcohol', 'ash']
df = df[labels+features]
print('Class labels:', df['class_label'].unique())
print('Features:', df.columns[1:])
# create X and y arrays
X = np.array(df.iloc[:, 1:])
y = np.array(df.iloc[:, 0])
# standardize features
df['alcohol'] = preprocessing.scale(df['alcohol'])
df['ash'] = preprocessing.scale(df['ash'])
X = preprocessing.scale(X)
return df, X, y
def plot_svc_decision_boundary(model, ax=None, plot_support=True):
"""Plot the decision function for a 2D SVC"""
if ax is None:
ax = plt.gca()
xlim = ax.get_xlim()
ylim = ax.get_ylim()
# create grid to evaluate model
x = np.linspace(xlim[0], xlim[1], 30)
y = np.linspace(ylim[0], ylim[1], 30)
Y, X = np.meshgrid(y, x)
xy = np.vstack([X.ravel(), Y.ravel()]).T
P = model.decision_function(xy).reshape(X.shape)
# plot decision boundary and margins
ax.contour(X, Y, P, colors='k',
levels=[-1, 0, 1], alpha=0.5,
linestyles=['--', '-', '--'])
# plot support vectors
if plot_support:
ax.scatter(model.support_vectors_[:, 0],
model.support_vectors_[:, 1],
s=100, linewidth=1, facecolors='none', edgecolor='black');
ax.set_xlim(xlim)
ax.set_ylim(ylim)
###Output
_____no_output_____
###Markdown
--- Step 5: Linear SVM example--- Step 5.1 Read Iris data
###Code
s = os.path.join('https://archive.ics.uci.edu', 'ml',
'machine-learning-databases', 'iris','iris.data').replace('\\', '//')
print('URL:', s)
df = pd.read_csv(s,
header=None,
encoding='utf-8')
print('Shape if Iris:', df.shape)
df.head()
# print type of flowers
df[4].unique()
###Output
_____no_output_____
###Markdown
Step 5.2 Data preprocessing We will consider only two flower classes (Setosa and Versicolor) for practical reasons.We will also restrict the analysis to only two feature variables, sepal length and petal length (easier to visualize the decission boundary in a 2D space).
###Code
# select setosa and versicolor
y = df.iloc[0:100, 4].values
y = np.where(y == 'Iris-setosa', -1, 1)
# extract sepal length and petal length
X = df.iloc[0:100, [0, 2]].values
# plot head of X
X[:5]
###Output
_____no_output_____
###Markdown
Let's also standardize the variables for optimal performance.
###Code
sc = StandardScaler()
# estimate the sample mean and standard deviation for each feature in X_train
sc.fit(X)
# use the two parameters to standardize both X_train and X_test
X_std = sc.transform(X)
# we don't standardize y but let's rename it to be consistent with notation
y_std = y
# plot head of X_std
X_std[:5]
###Output
_____no_output_____
###Markdown
Finally, let's visualize the data to understand what we are trying to predict (classify)
###Code
xfit = np.linspace(-2, 2.5)
# plot X data for setosa
plt.scatter(X_std[:50, 0], X_std[:50, 1],
color='red', marker='o', label='setosa')
# plot X data for versicolor
plt.scatter(X_std[50:100, 0], X_std[50:100, 1],
color='blue', marker='x', label='versicolor')
plt.xlabel('sepal length [cm]')
plt.ylabel('petal length [cm]')
plt.legend(loc='upper left')
###Output
_____no_output_____
###Markdown
The linear SVM classifier would attempt to draw a straight line separating the two sets of data to create a model for classification. But there is a problem: there is more than one possible dividing line (decision boundary) that can perfectly discriminate between the two classes!
###Code
xfit = np.linspace(-2, 2.5)
# plot X data for setosa
plt.scatter(X_std[:50, 0], X_std[:50, 1],
color='red', marker='o', label='setosa')
# plot X data for versicolor
plt.scatter(X_std[50:100, 0], X_std[50:100, 1],
color='blue', marker='x', label='versicolor')
# plot decision boundaries (class separating lines)
#for m, b in [(0.2, -0.2)]:
for m, b in [(0.2, -0.2), (0.25, -0.25)]:
#for m, b in [(0.2, -0.2), (0.25, -0.25), (0.28, -0.28)]:
plt.plot(xfit, m * xfit + b, '-k')
plt.xlabel('sepal length [cm]')
plt.ylabel('petal length [cm]')
plt.legend(loc='upper left')
###Output
_____no_output_____
###Markdown
So how do we choose the optimal line? Linear SVM intuition: rather than simply drawing a zero-width line between the two classes, we can draw around each line a margin of some width, up to the nearest point (observation). The line that maximizes this margin is the one we will choose as the optimal model. Thus, SVM are an example of a maximum margin estimator.
###Code
xfit = np.linspace(-2, 2.5)
# plot X data for setosa
plt.scatter(X_std[:50, 0], X_std[:50, 1],
color='red', marker='o', label='setosa')
# plot X data for versicolor
plt.scatter(X_std[50:100, 0], X_std[50:100, 1],
color='blue', marker='x', label='versicolor')
# plot decision boundaries with margins
for m, b, d in [(0.2, -0.2, 0.25), (0.25, -0.25, 0.55)]:
yfit = m * xfit + b
plt.plot(xfit, yfit, '-k')
plt.fill_between(xfit, yfit - d, yfit + d, edgecolor='none',
color='#AAAAAA', alpha=0.4)
plt.xlabel('sepal length [cm]')
plt.ylabel('petal length [cm]')
plt.legend(loc='upper left')
###Output
_____no_output_____
###Markdown
How does it compare with the perceptron model? Well, in the perceptron model, the objective function is to minimize the classification error. In SVM we instead choose the weights **w** to maximize the margin (this is the objective): $\mathbf{\frac{w^T(x\_pos - x\_neg)}{||w||}} = \frac{2}{\mathbf{||w||}}$, where $x\_pos$ is a point on the positive margin ($\mathbf{w}^Tx\_pos + b = 1$) and $x\_neg$ a point on the negative margin ($\mathbf{w}^Tx\_neg + b = -1$); subtracting the two margin conditions gives $\mathbf{w}^T(x\_pos - x\_neg) = 2$, which is where the $\frac{2}{\mathbf{||w||}}$ comes from. Step 5.3 Prediction for linear SVM
###Code
svm_iris = SVC(kernel='linear', C=1) # C sets regularization
svm_iris.fit(X_std, y_std)
###Output
_____no_output_____
###Markdown
Let's visualize the optimal decission boundary and it's associated margin width.
###Code
# plot X data for setosa
plt.scatter(X_std[:50, 0], X_std[:50, 1],
color='red', marker='o', label='setosa')
# plot X data for versicolor
plt.scatter(X_std[50:100, 0], X_std[50:100, 1],
color='blue', marker='x', label='versicolor')
# add labels
plt.xlabel('sepal length [cm]')
plt.ylabel('petal length [cm]')
plt.legend(loc='upper left')
# plot decision boudary
plot_svc_decision_boundary(svm_iris)
###Output
_____no_output_____
###Markdown
The continuous grey line is the dividing line that maximizes the margin between the two sets of points. Notice that a few of the training points just touch the margin (points highlighted with black circles). These points are the important piece of the SVM solution, and are known as the support vectors. In Scikit-Learn, the identity of these points is stored in the support_vectors_ attribute of the classifier.
###Code
svm_iris.support_vectors_
###Output
_____no_output_____
###Markdown
A key to the SVM's success is that, for the fit, only the position of the support vectors matters; any points further from the margin which are on the correct side do not modify the fit! The reason: these points do not contribute to the loss function used to fit the model, so their position and number do not matter as long as they do not cross the margin. We can see this, for example, if we plot the model learned from X and the model learned from X where standardized petal length < 1cm (remove the top points):
###Code
# from X array keep only if standardized petal length (index 1 in X_std or index 2 in concatenated (y, X_std)) < 1
temp = pd.concat((pd.DataFrame(y),pd.DataFrame(X_std)),axis=1)
temp = temp[temp.iloc[:,2]<1]
X_std2 = np.array(temp.iloc[:,[1,2]])
y_std2 = np.array(temp.iloc[:,0])
X_std2.shape
svm_iris2 = SVC(kernel='linear', C=1) # C sets regularization
svm_iris2.fit(X_std2, y_std2)
## create subplots
models = ['svm_iris', 'svm_iris2']
plt.figure(figsize=(12, 4))
for i in range(len(models)):
# create sublots that are all on the same row
ax = plt.subplot(1, len(models), i+1)
# plot X data for setosa
if i == 0:
plt.scatter(X_std[:50, 0], X_std[:50, 1],
color='red', marker='o', label='setosa')
else:
plt.scatter(X_std2[:50, 0], X_std2[:50, 1],
color='red', marker='o', label='setosa')
# plot X data for versicolor
if i == 0:
plt.scatter(X_std[50:, 0], X_std[50:, 1],
color='blue', marker='x', label='versicolor')
else:
plt.scatter(X_std2[50:, 0], X_std2[50:, 1],
color='blue', marker='x', label='setosa')
# plot decision boundary
if i == 0:
plot_svc_decision_boundary(svm_iris)
else:
plot_svc_decision_boundary(svm_iris2)
# set y limit
plt.ylim(-1.5, 1.5)
plt.title('Model: '+ str(models[i]), size=14)
###Output
_____no_output_____
###Markdown
In the left panel, we see the model and the support vectors for our initial SVM model. In the right panel, we dropped the points where standardized petal length < 1cm (removed the top points), but the model has not changed. The support vectors from the left panel are still the support vectors in the right panel! This insensitivity to distant points is one of the strengths of the SVM model! --- Step 6: Nonlinear (kernel) SVM example--- The power of SVM becomes more obvious when we combine it with kernels. These kernels are very useful when dealing with nonlinearly separable data. To give you an understanding of what a kernel is, think about the Linear Regression classifier. This classifier is used to fit linearly separable data, but if we project the data into a higher-dimensional space (e.g., by adding polynomials and Gaussian basis functions), we can actually fit a nonlinear relationship with this linear classifier. Step 6.1 Read wine data Now we will turn our attention to the wine dataset. Let's focus only on two class labels (cultivar 0 and 1) and two features ('alcohol' and 'ash'). To save time and space, I defined the wine_data() function to import the data, keep only the classes and features of interest, and standardize the features.
###Code
df, X_std, y_std = wine_data()
df.head()
###Output
Shape of df wine: (178, 14)
Class labels: [0 1]
Features: Index(['alcohol', 'ash'], dtype='object')
###Markdown
Step 6.2 Data vizualizationto understand what we are trying to predict (classify)
###Code
plt.scatter(df.loc[df.class_label==0, 'alcohol'], df.loc[df.class_label==0, 'ash'], label='cultivar0')
plt.scatter(df.loc[df.class_label==1, 'alcohol'], df.loc[df.class_label==1, 'ash'], label='cultivar1')
plt.xlabel('alcohol');
plt.ylabel('ash');
plt.legend(loc='upper left');
###Output
_____no_output_____
###Markdown
Step 6.3 Prediction for linear SVM
###Code
svm_wine = SVC(kernel='linear', C=1.0)
svm_wine.fit(X_std, y_std)
###Output
_____no_output_____
###Markdown
Let's visualize the optimal decission boundary and it's associated margin width.
###Code
plt.scatter(df.loc[df.class_label==0, 'alcohol'], df.loc[df.class_label==0, 'ash'], label='cultivar0')
plt.scatter(df.loc[df.class_label==1, 'alcohol'], df.loc[df.class_label==1, 'ash'], label='cultivar1')
plt.xlabel('alcohol');
plt.ylabel('ash');
plt.legend(loc='upper left');
plot_svc_decision_boundary(svm_wine, plot_support=True);
###Output
_____no_output_____
###Markdown
It is clear that no linear boundary will be able to separate this data. How can we get around it? Let's think about how we might project the data into a higher-dimensional space (i.e., add more features) such that a linear separator would be sufficient. For example, one simple projection we could use would be to compute a radial basis function (RBF, also known as a Gaussian kernel) **centered on the middle clump**.
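For context (this formula is an editorial addition, not part of the original notes), a Gaussian RBF feature centered at a point $\mathbf{c}$ is usually written as $$\phi_{\gamma}(\mathbf{x}) = \exp\left(-\gamma\,||\mathbf{x}-\mathbf{c}||^{2}\right),$$ where a larger $\gamma$ makes the bump narrower. The helper coded in the next cell is a simplified variant of this idea applied to the two features.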
###Code
def rbf(X, gamma):
""" Code up Gaussian RBF
# param X: np.array containing features of interest
# param gamma: float, a free parameter, a cut-off for the Gaussian sphere (in scikit-learn gamma = 1 / (n_features * X.var()))
"""
return np.exp(-gamma* (np.subtract(X[:,0], X[:,1])**2))
rbf = rbf(X_std, 10) # hint: try changing the value of gamma and see what happens with the separating plane!
print(rbf.min())
print(rbf.max())
print (rbf.shape)
###Output
2.3100834081857688e-91
0.9977023411644994
(130,)
###Markdown
We can visualize this extra data dimension using a three-dimensional plot
###Code
from mpl_toolkits import mplot3d
from ipywidgets import interact, fixed
ax = plt.subplot(projection='3d')
ax.scatter3D(X_std[:, 0], X_std[:, 1], rbf, c=y_std, s=50, cmap='autumn')
ax.view_init(elev=10, azim=30)
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_zlabel('r');
###Output
_____no_output_____
###Markdown
We can see that with this additional dimension, the data did not really become linearly separable. Linearly separable == we can draw a clearly separating plane at, say, r = 0.01. After projection, we would like our data to look like the figure below.
###Code
Image(filename='ideal_rbf.png', width=350)
###Output
_____no_output_____
###Markdown
So we can conclude that we did not carefully tune our projection. We should have searched for the value of gamma that centers our RBF in the right location in order to see such clean, linearly separable data. But searching for the best RBF function by hand is time consuming, so we would like to find it automatically. The kernelized SVM in scikit-learn does this automatically for us by changing our linear kernel to an RBF, using the kernel model hyperparameter! Step 6.4 Prediction for nonlinear (kernel) SVM
###Code
svm_wine = SVC(kernel='rbf', gamma=0.1, C=1.0)
svm_wine.fit(X_std, y_std)
###Output
_____no_output_____
###Markdown
Let's look at the plots now
###Code
plt.scatter(df.loc[df.class_label==0, 'alcohol'], df.loc[df.class_label==0, 'ash'], label='cultivar0')
plt.scatter(df.loc[df.class_label==1, 'alcohol'], df.loc[df.class_label==1, 'ash'], label='cultivar1')
plt.xlabel('alcohol');
plt.ylabel('ash');
plt.legend(loc='upper left');
plot_svc_decision_boundary(svm_wine, plot_support=True);
###Output
_____no_output_____
###Markdown
Using this kernelized support vector machine, we learn a suitable nonlinear decision boundary. Question: What are the support vectors used to classify examples? (show them on the graph)
###Code
svm_wine.support_vectors_
###Output
_____no_output_____
###Markdown
Step 6.4.1 The role of the gamma parameter This parameter can be understood as the cut-off point for the Gaussian sphere. If we increase the value of gamma, we shrink the radius of influence of each training example (look at the RBF formula above to see the connection), which leads to a tighter and bumpier decision boundary. If we fit the training data very well (high gamma value), then we will likely end up with a high generalization error on unseen (test) data.
###Code
svm_wine = SVC(kernel='rbf', gamma=0.5, C=5) # change gamma from 0.1 to 5
svm_wine.fit(X_std, y_std)
plt.scatter(df.loc[df.class_label==0, 'alcohol'], df.loc[df.class_label==0, 'ash'], label='cultivar0')
plt.scatter(df.loc[df.class_label==1, 'alcohol'], df.loc[df.class_label==1, 'ash'], label='cultivar1')
plt.xlabel('alcohol');
plt.ylabel('ash');
plt.legend(loc='upper left');
plot_svc_decision_boundary(svm_wine, plot_support=True);
###Output
_____no_output_____
###Markdown
Step 6.4.2 The role of the regularization (C) parameter A familiar concept? Bias vs. variance tradeoff. If a model suffers from overfitting, we also say that the model has high variance (the model is too complex and fits the training data very well). If a model suffers from underfitting, we also say that the model has high bias (the model is not complex enough to capture the pattern in the training data). Decreasing the value of C increases the bias and lowers the variance. So when we decrease the value of C, we increase the width of the margin.
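Since the introduction recommends choosing C by cross-validation, here is a minimal sketch of how that could be done with GridSearchCV (an added example; the hyperparameter grid below is arbitrary and not part of the original lecture):

```python
from sklearn.model_selection import GridSearchCV

param_grid = {'C': [0.1, 1, 10, 100], 'gamma': [0.01, 0.1, 0.5, 1.0]}
grid = GridSearchCV(SVC(kernel='rbf'), param_grid, cv=5)
grid.fit(X_std, y_std)
print(grid.best_params_, grid.best_score_)
```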
###Code
C = [20.0, 0.1]
plt.figure(figsize=(12, 4))
for i, c in enumerate(C):
# create sublots that are all on the same row
ax = plt.subplot(1, len(models), i+1)
svm_wine = SVC(kernel='linear', C=c)
svm_wine.fit(X_std, y_std)
plt.scatter(df.loc[df.class_label==0, 'alcohol'], df.loc[df.class_label==0, 'ash'], label='cultivar0')
plt.scatter(df.loc[df.class_label==1, 'alcohol'], df.loc[df.class_label==1, 'ash'], label='cultivar1')
plt.xlabel('alcohol');
plt.ylabel('ash');
plt.legend(loc='upper left');
plot_svc_decision_boundary(svm_wine, plot_support=True)
if i == 0:
plt.title('C = {0:.1f}'.format(c) + str(', i.e. large value of C'), size=14)
else:
plt.title('C = {0:.1f}'.format(c) + str(', i.e. small value of C'), size=14)
###Output
_____no_output_____
###Markdown
Additional exercises Using the wine dataset, execute the following tasks:
- [1] split the data into train and test examples:
  - use the train_test_split() method from scikit-learn's model_selection module; set the test_size parameter to 0.30;
  - report the number of observations in the train and test data;
  ``Add answer here:`` train = 91, test = 39
- [2] fit a linear SVM on training data and predict on test data:
  - set the C hyperparameter to 1.0;
  - report classification error for training and test data;
  ``Add answer here:`` 92.3% accuracy
- [3] fit a kernel SVM on training data and predict on test data:
  - set the gamma and C hyperparameters to 0.20 and 10.0, respectively;
  - report classification error for training and test data;
  ``Add answer here:`` 92.3% accuracy
- [4] fit a kernel SVM on training data and predict on test data:
  - set the gamma and C hyperparameters to 0.20 and 1.0, respectively;
  - report classification error for training and test data;
  ``Add answer here:`` 94.9% accuracy
###Code
# [1]
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X_std, y_std, test_size = 0.3)
print(len(X_train),len(y_train),len(X_test),len(y_test))
# [2]
from sklearn import metrics as mt
svm_wine_2 = SVC(kernel='linear', C=1.0)
svm_wine_2.fit(X_train, y_train)
y_hat_2 = svm_wine_2.predict(X_test) # get test set predictions
acc_2 = mt.accuracy_score(y_test,y_hat_2)
conf_2 = mt.confusion_matrix(y_test,y_hat_2)
print('accuracy:', acc_2 )
print(conf_2)
# [3]
svm_wine_3 = SVC(kernel='rbf', gamma=0.2, C=10.0)
svm_wine_3.fit(X_train, y_train)
y_hat_3 = svm_wine_3.predict(X_test) # get test set predictions
acc_3 = mt.accuracy_score(y_test,y_hat_3)
conf_3 = mt.confusion_matrix(y_test,y_hat_3)
print('accuracy:', acc_3 )
print(conf_3)
# [4]
svm_wine_4 = SVC(kernel='rbf', gamma=0.2, C=1.0)
svm_wine_4.fit(X_train, y_train)
y_hat_4 = svm_wine_4.predict(X_test) # get test set predictions
acc_4 = mt.accuracy_score(y_test,y_hat_4)
conf_4 = mt.confusion_matrix(y_test,y_hat_4)
print('accuracy:', acc_4 )
print(conf_4)
###Output
accuracy: 0.9487179487179487
[[19 2]
[ 0 18]]
|
time_series_analysis/time_series_analysis_notes.ipynb | ###Markdown
Time series basics
* Trends
* Seasonality
* Cyclical
Introduction to statsmodels
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import statsmodels.api as sm
# Importing built-in datasets in statsmodels
df = sm.datasets.macrodata.load_pandas().data
df.head()
print(sm.datasets.macrodata.NOTE)
df.head()
df.tail()
# statsmodels.timeseriesanalysis.datetools
index = pd.Index(sm.tsa.datetools.dates_from_range('1959Q1', '2009Q3'))
index
df.index = index
df.head()
df['realgdp'].plot()
###Output
_____no_output_____
###Markdown
Using the Hodrick-Prescott Filter for trend analysis
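For context (an added note, not part of the original course text): the Hodrick-Prescott filter chooses the trend component $\tau_t$ by minimizing $$\sum_{t}(y_t - \tau_t)^2 + \lambda \sum_{t}\big[(\tau_{t+1}-\tau_t) - (\tau_t - \tau_{t-1})\big]^2,$$ where a larger $\lambda$ gives a smoother trend; statsmodels uses $\lambda = 1600$ by default, the conventional value for quarterly data such as this GDP series.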
###Code
result = sm.tsa.filters.hpfilter(df['realgdp'])
result
type(result)
type(result[0])
type(result[1])
gdp_cycle, gdp_trend = result
df['trend'] = gdp_trend
df[['realgdp', 'trend']].plot()
# zooming in
df[['realgdp', 'trend']]['2000-03-31':].plot()
###Output
_____no_output_____
###Markdown
ETS Theory (Error-Trend-Seasonality)
* Exponential Smoothing
* Trend Methods Models
* ETS Decomposition
EWMA Theory (Exponentially Weighted Moving Averages)
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
airline = pd.read_csv('airline_passengers.csv', index_col = 'Month')
airline.head()
# this is a normal index
airline.index
# Get rid of all the missing values in this dataset
airline.dropna(inplace=True)
airline.index = pd.to_datetime(airline.index)
airline.head()
# now its a DatetimeIndex
airline.index
# Recap of making the SMA
airline['6-month-SMA'] = airline['Thousands of Passengers'].rolling(window=6).mean()
airline['12-month-SMA'] = airline['Thousands of Passengers'].rolling(window=12).mean()
airline.plot(figsize=(10,8))
###Output
_____no_output_____
###Markdown
Weakness of SMA
* Smaller windows will lead to more noise, rather than signal
* It will always lag by the size of the window
* It will never reach the full peak or valley of the data due to the averaging.
* Does not really inform you about possible future behaviour; all it really does is describe trends in your data.
* Extreme historical values can skew your SMA significantly
Creating EWMA
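For reference (an added note, not from the original course text): with `ewm(span=N)` pandas uses the smoothing factor $\alpha = \frac{2}{N+1}$, and with `adjust=False` the smoothed series follows the simple recursion $$y_t = (1-\alpha)\,y_{t-1} + \alpha\,x_t,$$ while the default `adjust=True` form divides a similarly weighted sum by the sum of the weights (see the pandas link in the next section).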
###Code
airline['EWMA-12'] = airline['Thousands of Passengers'].ewm(span=12).mean()
airline[['Thousands of Passengers', 'EWMA-12']].plot(figsize=(10,8))
###Output
_____no_output_____
###Markdown
Full reading on mathematics of EWMA
* http://pandas.pydata.org/pandas-docs/stable/computation.htmlexponentially-weighted-windows
ETS continued...
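For reference (an added note): `seasonal_decompose` models the series as trend, seasonal and residual components combined either additively or multiplicatively, $$y_t = T_t + S_t + e_t \quad \text{(additive)}, \qquad y_t = T_t \times S_t \times e_t \quad \text{(multiplicative)},$$ which is why the code below picks the model based on whether the trend looks linear or not.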
###Code
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
airline = pd.read_csv('airline_passengers.csv', index_col = 'Month')
airline.head()
airline.plot()
airline.dropna(inplace = True)
airline.index = pd.to_datetime(airline.index)
airline.head()
from statsmodels.tsa.seasonal import seasonal_decompose
# additive/ multiplicative models available
# suspected linear trend = use additive
# suspected non-linear trend = multiplicative model
result = seasonal_decompose(airline['Thousands of Passengers'], model='multiplicative')
result.seasonal
result.seasonal.plot()
result.trend.plot()
result.resid.plot()
fig = result.plot()
###Output
_____no_output_____
###Markdown
ARIMA models Autoregressive integrated moving averages [https://people.duke.edu/~rnau/411arim3.htm]
* **Autoregressive integrated moving average (ARIMA)** model is a generalization of an autoregressive moving average (ARMA) model.
* ARIMA model types
  - **Non-seasonal ARIMA (for non-seasonal data)**
  - **Seasonal ARIMA (for seasonal data)**
* ARIMA models are applied in some cases where data show evidence of non-stationarity, where an initial differencing step (corresponding to the 'integrated' part of the model) can be applied one or more times to eliminate the non-stationarity.
* **Non-seasonal ARIMA models are generally denoted as ARIMA(p, d, q)** where parameters p, d and q are non-negative integers.
* **AR(p): Autoregression component** - A regression model that utilizes the dependent relationship between a current observation and observations over a previous period.
* **I(d): Integrated** - Differencing of observations (subtracting an observation from an observation at the previous time step) in order to make the time series stationary.
* **MA(q): Moving Average** - A model that uses the dependency between an observation and a residual error from a moving average model applied to lagged observations.

Stationary vs Non-Stationary Data
* A stationary series has a constant mean and variance over time.
* A stationary dataset will allow our model to predict that the mean and variance will be the same in future periods.
______
* Note above for stationary data (mean and variance are both constant over time).
* Another aspect to look for is that the covariance should not be a function of time in stationary data.
* If you've determined your data is not stationary (either visually or mathematically), you will then need to transform it to be stationary in order to evaluate it and decide what type of ARIMA terms you will use.
* One simple way to do this is through **"differencing"**:

| | Original Data | First Difference | Second Difference |
|---|---|---|---|
| Time1 | 10 | NA | NA |
| Time2 | 12 | 2 | NA |
| Time3 | 8 | -4 | -6 |
| Time4 | 14 | 6 | 10 |
| Time5 | 7 | -7 | -13 |

* **For seasonal data,** you can also difference by season. If you had monthly data with yearly seasonality, you could difference by a time unit of 12, instead of just 1.
* Another common technique with seasonal ARIMA models is to combine both methods, taking the seasonal difference of the first difference.

ARIMA models continued... 1
The general process for ARIMA models is the following:
* Visualize the Time Series Data
* Make the time series data stationary
* Plot the Correlation and AutoCorrelation Charts
* Construct the ARIMA Model
* Use the model to make predictions

Let's go through these steps! [https://people.duke.edu/~rnau/arimrule.htm]
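As a quick illustration of the differencing table above (an added sketch, not part of the original notes), pandas reproduces those numbers directly with `diff()`:

```python
import pandas as pd

s = pd.Series([10, 12, 8, 14, 7], index=['Time1', 'Time2', 'Time3', 'Time4', 'Time5'])
diffs = pd.DataFrame({'Original': s,
                      'First Difference': s.diff(),
                      'Second Difference': s.diff().diff()})
print(diffs)
```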
###Code
import numpy as np
import pandas as pd
import statsmodels.api as sm
import matplotlib.pyplot as plt
%matplotlib inline
df = pd.read_csv('monthly-milk-production-pounds-p.csv')
df.head()
df.columns = ['Month', 'Milk in Pounds per Cow']
df.head()
df.tail()
df.drop(168, axis=0, inplace=True)
df.tail()
df['Month'] = pd.to_datetime(df['Month'])
df.head()
df.set_index('Month', inplace=True)
df.head()
df.index
df.describe()
df.describe().transpose()
###Output
_____no_output_____
###Markdown
Step 1 - Visualize the data
###Code
df.plot();
time_series = df['Milk in Pounds per Cow']
type(time_series)
time_series.rolling(12).mean().plot(label='12 SMA')
time_series.rolling(12).std().plot(label='12 STD')
time_series.plot()
plt.legend();
###Output
_____no_output_____
###Markdown
Conclusion: the scale of the rolling standard deviation (STD) is always much smaller than the scale of the series itself. If the 12-month rolling STD does not show erratic behaviour and is comparatively flat, then the series is 'workable'.
###Code
from statsmodels.tsa.seasonal import seasonal_decompose
decomp = seasonal_decompose(time_series)
fig = decomp.plot()
fig.set_size_inches(15,8)
###Output
_____no_output_____
###Markdown
ARIMA models continued... 2 Step 2 - Make the time series data stationary (if non-stationary) We can use the Augmented [Dickey-Fuller](https://en.wikipedia.org/wiki/Augmented_Dickey%E2%80%93Fuller_test) [unit root test](https://en.wikipedia.org/wiki/Unit_root_test).In statistics and econometrics, an augmented Dickey–Fuller test (ADF) tests the null hypothesis that a unit root is present in a time series sample. The alternative hypothesis is different depending on which version of the test is used, but is usually stationarity or trend-stationarity.Basically, we are trying to whether to accept the Null Hypothesis **H0** (that the time series has a unit root, indicating it is non-stationary) or reject **H0** and go with the Alternative Hypothesis (that the time series has no unit root and is stationary).We end up deciding this based on the p-value return.* A small p-value (typically ≤ 0.05) indicates strong evidence against the null hypothesis, so you reject the null hypothesis.* A large p-value (> 0.05) indicates weak evidence against the null hypothesis, so you fail to reject the null hypothesis.Let's run the Augmented Dickey-Fuller test on our data:
###Code
from statsmodels.tsa.stattools import adfuller
result = adfuller(df['Milk in Pounds per Cow'])
result
def adf_check(time_series):
result = adfuller(time_series)
print('Augumented Dicky-Fuller Test')
labels = ['ADF Test Statistic', 'p-value', '# of lags', 'Num of Observations used']
for value, label in zip(result, labels):
print(label + ': ' + str(value))
if result[1] < 0.05:
print('Strong evidence against null hypothesis')
print('Rejecting null hypothesis')
print('Data has no unit root! and is stationary')
else:
print('Weak evidence against null hypothesis')
print('Fail to reject null hypothesis')
print('Data has a unit root, it is non-stationary')
adf_check(df['Milk in Pounds per Cow'])
###Output
Augumented Dicky-Fuller Test
ADF Test Statistic: -1.30381158742
p-value: 0.627426708603
# of lags: 13
Num of Observations used: 154
Weak evidence against null hypothesis
Fail to reject null hypothesis
Data has a unit root, it is non-stationary
###Markdown
* Thus, the ADF test confirms our assumption from the visual analysis that the data is definitely non-stationary and has both a seasonality and a trend component.
###Code
# Now making the data stationary
df['First Difference'] = df['Milk in Pounds per Cow'] - df['Milk in Pounds per Cow'].shift(1)
df['First Difference'].plot()
# adf_check(df['First Difference']) - THIS RESULTS IN LinAlgError: SVD did not converge ERROR
# Note: we need to drop the first NA value before plotting this
adf_check(df['First Difference'].dropna())
df['Second Difference'] = df['First Difference'] - df['First Difference'].shift(1)
df['Second Difference'].plot();
adf_check(df['Second Difference'].dropna())
###Output
Augumented Dicky-Fuller Test
ADF Test Statistic: -14.3278736456
p-value: 1.11269893321e-26
# of lags: 11
Num of Observations used: 154
Strong evidence against null hypothesis
Rejecting null hypothesis
Data has no unit root! and is stationary
###Markdown
* Since, **p-value('Original') - p-value('First Difference') < p-value('First Difference') - p-value('Second Difference')**, it is the first difference that did most of the elimination of the trend.
###Code
# Let's plot seasonal difference
df['Seasonal Difference'] = df['Milk in Pounds per Cow'] - df['Milk in Pounds per Cow'].shift(12)
df['Seasonal Difference'].plot();
adf_check(df['Seasonal Difference'].dropna())
###Output
Augumented Dicky-Fuller Test
ADF Test Statistic: -2.33541931436
p-value: 0.160798805277
# of lags: 12
Num of Observations used: 143
Weak evidence against null hypothesis
Fail to reject null hypothesis
Data has a unit root, it is non-stationary
###Markdown
* Thus, we conclude that the seasonal difference *does not make* the data stationary here; **in fact, we can observe visually that the variance begins to increase as we go further in time**.
###Code
# Plotting 'Seasonal first difference'
df['Seasonal First Difference'] = df['First Difference'] - df['First Difference'].shift(12)
df['Seasonal First Difference'].plot();
adf_check(df['Seasonal First Difference'].dropna())
###Output
Augumented Dicky-Fuller Test
ADF Test Statistic: -5.03800227492
p-value: 1.86542343188e-05
# of lags: 11
Num of Observations used: 143
Strong evidence against null hypothesis
Rejecting null hypothesis
Data has no unit root! and is stationary
###Markdown
ARIMA models continued... 3 Step 3 - Plot the Correlation and Autocorrelation Charts
###Code
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf
# Plotting the gradual decline autocorrelation
fig_first = plot_acf(df['First Difference'].dropna())
fig_first = plot_acf(df['First Difference'].dropna(), use_vlines=False)
fig_seasonal_first = plot_acf(df['Seasonal First Difference'].dropna(), use_vlines=False)
fig_seasonal_first_pacf = plot_pacf(df['Seasonal First Difference'].dropna(), use_vlines=False)
###Output
_____no_output_____
###Markdown
Plotting the final 'Autocorrelation' and 'Partial autocorrelation'
###Code
plot_acf(df['Seasonal First Difference'].dropna());
plot_pacf(df['Seasonal First Difference'].dropna());
###Output
_____no_output_____
###Markdown
ARIMA models continued... 4 Step 4 - Construct the ARIMA model
###Code
# ARIMA model for non-sesonal data
from statsmodels.tsa.arima_model import ARIMA
# help(ARIMA)
# ARIMA model from seasonal data
# from statsmodels.tsa.statespace import sarimax
###Output
_____no_output_____
###Markdown
Choosing the **p, d, q values** of the **order** and **seasonal_order** tuples requires some careful reading. More information here: [https://stackoverflow.com/questions/22770352/auto-arima-equivalent-for-python] [https://stats.stackexchange.com/questions/44992/what-are-the-values-p-d-q-in-arima] [https://people.duke.edu/~rnau/arimrule.htm]
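One way to shortcut the order selection is an automated search; a hedged sketch using the third-party `pmdarima` package (an assumption; it is not used elsewhere in this notebook):
###Code
# pip3 install pmdarima
import pmdarima as pm
auto_model = pm.auto_arima(df['Milk in Pounds per Cow'],
                           seasonal=True, m=12,   # monthly data with yearly seasonality
                           stepwise=True, trace=True, suppress_warnings=True)
print(auto_model.summary())
###Output
_____no_output_____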
###Code
model = sm.tsa.statespace.SARIMAX(df['Milk in Pounds per Cow'], order=(0,1,0), seasonal_order=(1,1,1,12))
results = model.fit()
print(results.summary())
# residual errors of prediction on the original training data
results.resid
# plot of residual errors of prediction on the original training data
results.resid.plot();
# KDE plot of residual errors of prediction on the original training data
results.resid.plot(kind='kde');
# Creating a column forecast to house the forecasted values for existing values
df['forecast'] = results.predict(start=150, end=168)
df[['Milk in Pounds per Cow', 'forecast']].plot(figsize=(12,8));
# Forecasting for future data
df.tail()
from pandas.tseries.offsets import DateOffset
future_dates = [df.index[-1] + DateOffset(months=x) for x in range(0,24)]
future_dates
future_df = pd.DataFrame(index=future_dates, columns=df.columns)
future_df.head()
final_df = pd.concat([df, future_df])
final_df.head()
final_df.tail()
final_df['forecast'] = results.predict(start=168, end=192)
final_df.tail()
final_df[['Milk in Pounds per Cow', 'forecast']].plot()
###Output
_____no_output_____ |
Day36_LSTMs_and_Convolutional_networks_NLP/Sarcasm_with_1D_Conv_layer.ipynb | ###Markdown
###Code
import numpy as np
import json
import tensorflow as tf
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
!wget --no-check-certificate \
https://storage.googleapis.com/laurencemoroney-blog.appspot.com/sarcasm.json \
-O /tmp/sarcasm.json
vocab_size = 1000
embedding_dim = 16
max_length = 120
trunc_type='post'
padding_type='post'
oov_tok = "<OOV>"
training_size = 20000
with open("/tmp/sarcasm.json", 'r') as f:
datastore = json.load(f)
sentences = []
labels = []
urls = []
for item in datastore:
sentences.append(item['headline'])
labels.append(item['is_sarcastic'])
training_sentences = sentences[0:training_size]
testing_sentences = sentences[training_size:]
training_labels = labels[0:training_size]
testing_labels = labels[training_size:]
tokenizer = Tokenizer(num_words=vocab_size, oov_token=oov_tok)
tokenizer.fit_on_texts(training_sentences)
word_index = tokenizer.word_index
training_sequences = tokenizer.texts_to_sequences(training_sentences)
training_padded = pad_sequences(training_sequences, maxlen=max_length, padding=padding_type, truncating=trunc_type)
testing_sequences = tokenizer.texts_to_sequences(testing_sentences)
testing_padded = pad_sequences(testing_sequences, maxlen=max_length, padding=padding_type, truncating=trunc_type)
# Sarcasm with 1D Convolutional Layer
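# The model below maps each padded headline to a sequence of 16-dim embeddings,
# slides 128 convolutional filters of width 5 over that sequence,
# keeps the strongest response per filter via global max pooling,
# and classifies sarcastic vs. not with a small dense head and a sigmoid output.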
model = tf.keras.Sequential([
tf.keras.layers.Embedding(vocab_size, embedding_dim, input_length=max_length),
tf.keras.layers.Conv1D(128, 5, activation='relu'),
tf.keras.layers.GlobalMaxPooling1D(),
tf.keras.layers.Dense(24, activation='relu'),
tf.keras.layers.Dense(1, activation='sigmoid')
])
model.compile(loss='binary_crossentropy',optimizer='adam',metrics=['accuracy'])
model.summary()
num_epochs = 50
training_padded = np.array(training_padded)
training_labels = np.array(training_labels)
testing_padded = np.array(testing_padded)
testing_labels = np.array(testing_labels)
history = model.fit(training_padded, training_labels, epochs=num_epochs, validation_data=(testing_padded, testing_labels), verbose=1)
import matplotlib.pyplot as plt
def plot_graphs(history, string):
plt.plot(history.history[string])
plt.plot(history.history['val_'+string])
plt.xlabel("Epochs")
plt.ylabel(string)
plt.legend([string, 'val_'+string])
plt.show()
plot_graphs(history, 'accuracy')
plot_graphs(history, 'loss')
model.save("test.h5")
###Output
_____no_output_____ |
tutorials/zh/03_run_graphscope_like_networkx.ipynb | ###Markdown
Graph analysis with GraphScope, NetworkX-style. On top of its large-scale graph analytics, GraphScope provides a set of NetworkX-compatible graph-analysis interfaces. In this tutorial we show how to do graph analysis with GraphScope just as you would with NetworkX. How NetworkX does graph analysis: a NetworkX workflow usually starts by building a graph; in this example we first create an empty graph and then gradually extend its data through the NetworkX interfaces.
###Code
# Install graphscope package if you are NOT in the Playground
!pip3 install graphscope
import networkx
# Initialize an empty undirected graph
G = networkx.Graph()
# Add an edge list via add_edges_from; here we add two edges, (1, 2) and (1, 3)
G.add_edges_from([(1, 2), (1, 3)])
# Add node 4 via add_node
G.add_node(4)
###Output
_____no_output_____
###Markdown
Next, we use NetworkX to query some information about the graph.
###Code
# Use G.number_of_nodes to query the current number of nodes in graph G
G.number_of_nodes()
# Similarly, G.number_of_edges queries the number of edges in graph G
G.number_of_edges()
# Use G.degree to look at the degree of every node in graph G
sorted(d for n, d in G.degree())
###Output
_____no_output_____
###Markdown
Finally, we analyze graph G by calling NetworkX's built-in algorithms.
###Code
# Call the connected_components algorithm to analyze the connected components of G
list(networkx.connected_components(G))
# Call the clustering algorithm to analyze the clustering of G
networkx.clustering(G)
###Output
_____no_output_____
###Markdown
How to use GraphScope's NetworkX interface. Building a graph: with GraphScope's NetworkX-compatible interface, we only need to replace `import networkx as nx` from the tutorial with `import graphscope.nx as nx`. GraphScope supports exactly the same graph-loading syntax as NetworkX; here we use `nx.Graph()` to create an empty undirected graph.
###Code
import graphscope.nx as nx
# Create an empty undirected graph
G = nx.Graph()
###Output
_____no_output_____
###Markdown
Adding nodes and edges. GraphScope's graph-manipulation interface also stays compatible with NetworkX: nodes can be added with `add_node` and `add_nodes_from`, and edges with `add_edge` and `add_edges_from`.
###Code
# Add one node at a time with add_node
G.add_node(1)
# Or add nodes from any iterable container, such as a list
G.add_nodes_from([2, 3])
# If the container holds tuples, node attributes can be added along with the nodes
G.add_nodes_from([(4, {"color": "red"}), (5, {"color": "green"})])
# For edges, add_edge adds one edge at a time
G.add_edge(1, 2)
e = (2, 3)
G.add_edge(*e)
# Add an edge list with add_edges_from
G.add_edges_from([(1, 2), (1, 3)])
# Or use edge tuples to add edge attributes along with the edges
G.add_edges_from([(1, 2), (2, 3, {'weight': 3.1415})])
###Output
_____no_output_____
###Markdown
Querying graph elements. GraphScope supports NetworkX-compatible query interfaces. Use `number_of_nodes` and `number_of_edges` to get the number of nodes and edges, and `nodes`, `edges`, `adj` and `degree` to inspect the current nodes and edges as well as a node's neighbors, degree, and so on.
###Code
# Query the current number of nodes and edges in the graph
G.number_of_nodes()
G.number_of_edges()
# List the current nodes and edges
list(G.nodes)
list(G.edges)
# Query the neighbors of a node
list(G.adj[1])
# Query the degree of a node
G.degree(1)
###Output
_____no_output_____
###Markdown
Removing elements from the graph. Just like NetworkX, GraphScope can modify a graph by removing nodes and edges in much the same way as they are added: `remove_node` and `remove_nodes_from` delete nodes, while `remove_edge` and `remove_edges_from` delete edges.
###Code
# Remove a single node with remove_node
G.remove_node(5)
list(G.nodes)
G.remove_nodes_from([4, 5])
list(G.nodes)
# Remove a single edge with remove_edge
G.remove_edge(1, 2)
list(G.edges)
# Remove several edges with remove_edges_from
G.remove_edges_from([(1, 3), (2, 3)])
list(G.edges)
# Let's look at the current number of nodes and edges again
G.number_of_nodes()
G.number_of_edges()
###Output
_____no_output_____
###Markdown
Graph analysis. GraphScope can run various algorithms on a graph through its NetworkX-compatible interface. In this example we build a simple graph, then use `connected_components` to analyze its connected components, `clustering` to get the clustering coefficient of every node, and `all_pairs_shortest_path` to obtain the shortest paths between all pairs of nodes.
###Code
# First build the graph
G = nx.Graph()
G.add_edges_from([(1, 2), (1, 3)])
G.add_node(4)
# Find the connected components of the graph with connected_components
list(nx.connected_components(G))
# Compute each node's clustering coefficient with clustering
nx.clustering(G)
sp = dict(nx.all_pairs_shortest_path(G))
sp[3]
###Output
_____no_output_____
###Markdown
Simple graph drawing. Like NetworkX, GraphScope supports drawing a graph with `draw`, which relies on `matplotlib` under the hood. If `matplotlib` is not installed on the system, we first need to install the `matplotlib` package.
###Code
!pip3 install matplotlib
###Output
_____no_output_____
###Markdown
Drawing a simple graph with GraphScope.
###Code
# Create a star graph with 5 nodes
G = nx.star_graph(5)
# Draw the graph with nx.draw
nx.draw(G, with_labels=True, font_weight='bold')
###Output
_____no_output_____
###Markdown
GraphScope gives an order-of-magnitude speedup over NetworkX. Let's run a simple experiment to see how much faster GraphScope actually is compared to NetworkX. The experiment uses the [twitter] graph data from SNAP (https://snap.stanford.edu/data/ego-Twitter.html), and the test algorithm is NetworkX's built-in [clustering](https://networkx.org/documentation/stable/reference/algorithms/generated/networkx.algorithms.cluster.clustering.htmlnetworkx.algorithms.cluster.clustering) algorithm. First we prepare the data, downloading the dataset locally with wget.
###Code
!wget https://raw.githubusercontent.com/GraphScope/gstest/master/twitter.e -P /tmp
###Output
_____no_output_____
###Markdown
Next, we load the snap-twitter data with GraphScope and with NetworkX, respectively.
###Code
import os
import graphscope.nx as gs_nx
import networkx as nx
# Load the snap-twitter graph data with NetworkX
g1 = nx.read_edgelist(
os.path.expandvars('/tmp/twitter.e'), nodetype=int, data=False, create_using=nx.Graph
)
type(g1)
# Load the same snap-twitter graph data with GraphScope
g2 = gs_nx.read_edgelist(
os.path.expandvars('/tmp/twitter.e'), nodetype=int, data=False, create_using=gs_nx.Graph
)
type(g2)
###Output
_____no_output_____
###Markdown
Finally, we run the clustering algorithm on the graph to see how much of a performance gain GraphScope gives over NetworkX.
###Code
%%time
# Compute each node's clustering coefficient with GraphScope
ret_gs = gs_nx.clustering(g2)
%%time
# Compute each node's clustering coefficient with NetworkX
ret_nx = nx.clustering(g1)
# Check whether the two results agree
ret_gs == ret_nx
###Output
_____no_output_____ |
kovalevskyi/Windowing_Tutorial.ipynb | ###Markdown
Enriching Data with moving statistics The bigquery cell magic allows for rapid exploratory development, but clutters the notebook with large result sets.
###Code
%load_ext google.cloud.bigquery
%%bigquery
SELECT
DEP_T, AIRLINE, ARR, DEP_DELAY, ARR_DELAY
from `going-tfx.examples.ATL_JUNE_SIGNATURE`
where date='2006-06-12'
order by dep_t limit 2
###Output
_____no_output_____
###Markdown
---Doing it with ```datalab.bigquery``` doesn't display the large result set
###Code
import google.datalab.bigquery as bq
samples=bq.Query("""
SELECT
DEP_T, AIRLINE, ARR, DEP_DELAY, ARR_DELAY
FROM
`going-tfx.examples.ATL_JUNE_SIGNATURE`
WHERE
date='2006-06-12'
ORDER BY
dep_t
""").execute().result().to_dataframe()
two_records = samples[:2].to_dict(orient='records')
print("%s samples, showing first 2:" % len(samples))
two_records
###Output
1075 samples, showing first 2:
###Markdown
--- Beam Transform ```DoFn```s
###Code
import apache_beam as beam
from apache_beam import window
from apache_beam.options.pipeline_options import PipelineOptions
###Output
_____no_output_____
###Markdown
Add Timestamp
###Code
class AddTimeStampDoFn(beam.DoFn):
def __init__(self, offset, *args, **kwargs):
self.offset = offset
super(beam.DoFn, self).__init__(*args, **kwargs)
def process(self, element):
timestamp = (self.offset +
(element['DEP_T'] // 100) * 3600 +
(element['DEP_T'] % 100) * 60)
time_stamped = beam.window.TimestampedValue(element, timestamp)
yield time_stamped
###Output
_____no_output_____
###Markdown
Add hour of day
###Code
class Add_HOD(beam.DoFn):
def process(self, element):
element=element.copy()
dep_t = element['DEP_T']
element['HOD'] = dep_t // 100
yield element
###Output
_____no_output_____
###Markdown
Key out DEP_DELAY for averaging
###Code
class DEP_DELAY_by_HOD(beam.DoFn):
def process(self, element):
element=element.copy()
yield element['HOD'], element['DEP_DELAY']
###Output
_____no_output_____
###Markdown
Key the whole records. Keying the records allows us to CoGroupByKey after the windowed statistics are available.
###Code
class Record_by_HOD(beam.DoFn):
def process(self, element):
element=element.copy()
yield element['HOD'], element
###Output
_____no_output_____
###Markdown
Unnest. Unnest the resulting structure coming from ```CoGroupByKey``` into simple records.
###Code
class Flatten_EnrichedFN(beam.DoFn):
def process(self, element):
hod=element[0]
avg=element[1]['avg'][0]
cnt=element[1]['cnt'][0]
records=element[1]['rec'][0]
for record in records:
record['CNT_BTH']=cnt
record['AVG_DEP_DELAY_BTH']=avg
yield record
###Output
_____no_output_____
###Markdown
We'll add a timestamp as if the records were all today's records. Creating the pipeline
###Code
import time
import datetime
OFFSET = int(time.time() // 86400 * 86400)
OFFSET
data = samples.to_dict(orient='records')
windowed = (
data
| "Add_timestamp" >> beam.ParDo(AddTimeStampDoFn(OFFSET))
| "Add_HOD" >> beam.ParDo(Add_HOD())
| "Window_1h" >> beam.WindowInto(window.FixedWindows(3600))
)
len(windowed)
###Output
_____no_output_____
###Markdown
Counts by the hour
###Code
records_by_hod = (
windowed
| "Record_by_HOD" >> beam.ParDo(Record_by_HOD())
| "Group_by_HOD" >> beam.GroupByKey()
)
counts_by_hod = (
records_by_hod
| "Count" >> beam.CombineValues(beam.combiners.CountCombineFn())
)
counts_by_hod[:5]
#records_by_hod[2]
###Output
_____no_output_____
###Markdown
Averages by the hour
###Code
avgs_by_hod = (
windowed
| "Make_HOD" >> beam.ParDo(DEP_DELAY_by_HOD())
| "Avg_by_HOD" >> beam.CombinePerKey(beam.combiners.MeanCombineFn())
)
avgs_by_hod[:3]
###Output
_____no_output_____
###Markdown
Co-Group and Flatten
###Code
combined = ( {'cnt': counts_by_hod, 'avg': avgs_by_hod, 'rec': records_by_hod }
| "Co_Group_HOD" >> beam.CoGroupByKey()
| "Flatten" >> beam.ParDo(Flatten_EnrichedFN())
)
combined[:3]
###Output
_____no_output_____ |
notebooks/04_feature_analytics.ipynb | ###Markdown
Feature exploration
###Code
import os
from os.path import join, dirname
from dotenv import load_dotenv
dotenv_path = join(dirname('__file__'), '.env')
load_dotenv(dotenv_path)
ROOT_PATH = os.environ.get("ROOT_PATH")
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.gridspec import GridSpec
import seaborn as sns
import numpy as np
df = pd.read_parquet(f"{ROOT_PATH}/features/features.parquet")
df = df.astype({
'var': float,
'skew': float,
'kur': float,
'label': str
})
###Output
_____no_output_____
###Markdown
Overall grouping
###Code
fig = plt.figure(figsize=(7, 14))
gs = GridSpec(nrows=3, ncols=1)
# Boxplots
ax0 = fig.add_subplot(gs[0])
g1 = sns.boxplot(data=df, x='label',y='var',ax=ax0, showfliers=False)
g1.set(xticklabels=['Ictal','Pré-ictal','Pós-ictal','Normal','Recuperação'])
g1.set_ylabel('Variance + Log Entropy', size=15)
g1.set_xlabel('')
ax1 = fig.add_subplot(gs[1])
g2 = sns.boxplot(data=df, x='label',y='kur',ax=ax1, showfliers=False)
g2.set(xticklabels=['Ictal','Pré-ictal','Pós-ictal','Normal','Recuperação'])
g2.set_ylabel('Kurtosis + Log Entropy', size=15)
g2.set_xlabel('')
ax2 = fig.add_subplot(gs[2])
g3 = sns.boxplot(data=df, x='label',y='skew',ax=ax2, showfliers=False)
g3.set(xticklabels=['Ictal','Pré-ictal','Pós-ictal','Normal','Recuperação'])
g3.set_ylabel('Skewness + Log Entropy', size=15)
g3.set_xlabel('')
###Output
_____no_output_____
###Markdown
Broadening the groups
###Code
df1 = df[df['label'].isin(['normal','rep'])]
df2 = df[df['label'].isin(['pre','pos'])]
df3 = df[df['label']=='ictal']
df1['type'] = np.repeat('normal_rep', len(df1))
df2['type'] = np.repeat('pre_pos', len(df2))
df3['type'] = np.repeat('ictal', len(df3))
df = pd.concat([df1,df2,df3])
fig = plt.figure(figsize=(7, 14))
gs = GridSpec(nrows=3, ncols=1)
# Boxplots
ax0 = fig.add_subplot(gs[0])
g1 = sns.boxplot(data=df, x='type',y='var',ax=ax0)
g1.set_xlabel('')
ax1 = fig.add_subplot(gs[1])
sns.kdeplot(df[df['type']=='ictal']['var'].to_numpy(),label='Ictal', ax=ax1)
sns.kdeplot(df[df['type']== 'normal_rep']['var'].to_numpy(), label='pre-pos', ax=ax1)
sns.kdeplot(df[df['type']== 'pre_pos']['var'].to_numpy(), label='normal-rep', ax=ax1)
ax1.legend()
###Output
_____no_output_____ |
notebooks/misc/D3-simple-example.ipynb | ###Markdown
Simple D3 example. The following demonstration of D3 usage comes from [Custom D3.js Visualization in a Jupyter Notebook](https://www.stefaanlippens.net/jupyter-custom-d3-visualization.html). To cover the basics, let's start simple with just inline `%%javascript` snippet cells. First, we tell the RequireJS environment where to find the version of D3.js we want. Note that the .js extension is omitted in the URL.
###Code
%%javascript
require.config({
paths: {
d3: 'https://d3js.org/d3.v5.min'
}
});
###Output
_____no_output_____
###Markdown
We can now create a D3.js powered SVG drawing, for example as follows:
###Code
%%javascript
(function(element) {
require(['d3'], function(d3) {
var data = [1, 2, 4, 8, 16, 8, 4, 2, 1]
var svg = d3.select(element.get(0)).append('svg')
.attr('width', 400)
.attr('height', 200);
svg.selectAll('circle')
.data(data)
.enter()
.append('circle')
.attr("cx", function(d, i) {return 40 * (i + 1);})
.attr("cy", function(d, i) {return 100 + 30 * (i % 3 - 1);})
.style("fill", "#1570a4")
.transition().duration(2000)
.attr("r", function(d) {return 2*d;})
;
})
})(element);
###Output
_____no_output_____ |
ML_projects/6. Insurance Premium with AWS SageMaker AutoPilot/Insurance_Prediction_AutoPilot_Startup.ipynb | ###Markdown
Task 1: Import Dataset
###Code
import pandas as pd
# read the csv file
insurance_df = pd.read_csv('insurance_data.csv')
insurance_df.head()
###Output
_____no_output_____
###Markdown
Task 2: Upload the data to S3 bucket
###Code
import sagemaker
# Creating Sagemaker session using Sagemaker SDK
sess = sagemaker.Session()
#Prefix that we want to use
prefix = 'modern-ai-zero-coding/input'
#Uploading the data to S3 bucket
uri = sess.upload_data(path = 'insurance_data.csv', key_prefix = prefix)
print(uri)
# location to store the output
output = "s3://sagemaker-us-east-1-126821927778/modern-ai-zero-coding/output"
print(output)
###Output
s3://sagemaker-us-east-1-126821927778/modern-ai-zero-coding/output
|
best_naborhood_analysis.ipynb | ###Markdown
Team Name: The Avengers Introduction: Our goal is to use data driven analysis of residential neighborhoods in Pittsburgh to determine the best residential neighborhood. We used datasets provided by the Western Pennsylvania Regional Data Center for our analysis. Looking through the datasets, we were immediately faced with the task of deciding how we were to determine the best residential neighborhood. After some thought, we decided to focus on data associated with traffic signs and intersection markings. We assumed there would be a correlation between the number of traffic signs and intersection markings and the safest residential neighborhood with the least amount of vehicle offenses. Therefore, our metric for the best residential neighborhood was determined by the largest number of both traffic signs and intersection markings in a particular residential neighborhood, as it resulted in fewer traffic violations. Essentially, we were trying to find the safest residential neighborhood to live in, which we determined to be the best. Before coming to this final metric for the best residential neighborhood, we explored some other ideas for how we might define our metric. We were informed that we didn't need to justify our metric if it was nonsensical, so our first idea was to take the number of traffic signs in a residential neighborhood and divide it by the number of pools in that same residential neighborhood. We shortly discovered thereafter that using datasets that made sense and were even directly related to one another was a lot easier to talk about and made a lot more sense. We also looked at a dataset on Pittsburgh landscapes that showed soil quality and density of people in an area. However, in the end, we decided to use the aforementioned traffic sign and intersection marking data. More relatably to our datasets regarding traffic signs and intersection markings, we also looked at other datasets that showed crash data. However, that was for Allegheny County with no residential neighborhoods named, so we decided not to use it since we already had two other datasets. Criteria for Best: The criteria for the best residential neighborhood in Pittsburgh is determined by the largest number of traffic signs and intersection markings in a residential neighborhood. As mentioned in the introduction, we assumed that the residential neighborhood with the most traffic signs and intersection markings would be the safest place to live. This meant that the best residential neighborhood would be the safest, on the idea that it would have the least amount of car crashes, traffic violations, or other vehicular offenses. Our hope was that through data analysis we would be able to prove our hypothesis. Given our datasets, we would use pandas to count the number of traffic signs and intersection markings per residential neighborhood and display the amounts in organized, easy-to-analyze graphs, so we had a high level of confidence our data would point to a clear winner. The traffic signs in the dataset include stop signs, stop ahead signs, and yield signs. The intersection markings in the other dataset include things like crosswalks and stop lines for intersections. Data Results / Best Residential Neighborhood: We decided to use a fairly simple yet effective way of analyzing the two data sets. Because our criteria were determined simply by the quantity of traffic signs and intersection markings, we decided that using a bar graph to depict the two datasets would allow us to effectively come to a conclusion.
Using pandas, the two bar graphs were made, each of which showed the top 10 neighborhoods in descending order. Using only the top 10 in each of the graphs wouldn't be an issue because some neighborhoods were in the top 10 of both, and would therefore have a higher number of signs and markings than other neighborhoods that weren't on the list. The data was very easy to analyze and allowed us to come up with some decisive conclusions. Conclusion: The project asked that we use a data driven argument to identify the best residential neighborhood in Pittsburgh. Based on our results and our criteria, the best residential neighborhood in Pittsburgh was the **Central Business District**. This neighborhood had the largest combined number of traffic signs and intersection markings. However, the **Central Business District** isn't actually a residential neighborhood. We believe the point of the project was to find the best residential neighborhood of people, essentially the best residential neighborhood in Pittsburgh as opposed to the best area in Pittsburgh. Based on this elaboration, the best residential neighborhood in Pittsburgh would actually be **Squirrel Hill South**. **Squirrel Hill South** is the safest, and therefore best, neighborhood in Pittsburgh, having both the most traffic signs and the most intersection markings. *Logan* - Compared to my favorite neighborhood, **Squirrel Hill South** is a much safer and therefore better neighborhood. My favorite residential neighborhood in Pittsburgh is **Robinson**. **Robinson**, however, did not make the top 10 for either list, and whether that was because of the need for traffic signs or the population of people in the neighborhood, it wasn't even on the radar for best neighborhood. *Brad* - Since I have only lived four months in Pittsburgh, specifically and exclusively in the university's dorms, I do not really have a "favorite" naborhood, spelled *fonetically*. However, the process of going through the data and mining it was very interesting to me. I personally have had this idea of collecting data on road and intersection safety in residential naborhoods. As I have shared in our presentation, I was a crossing guard in high school for an elementary school nearby due to the lack of traffic signs, intersection markings, speed bumps, and many other factors, which had led to a few fatal car accidents, unfortunately. Therefore, it is nice to finally work on such data to analyze and test my theory of naborhoods' safety.
###Code
import pandas as pd
%matplotlib inline
#importing datasets
traffic_sings_data = pd.read_csv("Traffic Signs Data.csv")
inter_marks_data = pd.read_csv("Intersection Markings Data.csv")
#grouping naborhoods with traffic signs
traffic_signs_series = traffic_sings_data.groupby("neighborhood").count().loc[:,"id"].sort_values(ascending=False)
#plotting the top 10 naborhoods in Pittsburgh, PA based on the number of traffic signs
traffic_signs_series[:10].plot(kind="bar")
#1st place clarification for the naborhood with the most amount of traffic signs
traffic_signs_series[:3]
#grouping naborhoods with intersection markings
inter_marks_series = inter_marks_data.groupby("neighborhood").count().loc[:,"id"].sort_values(ascending=False)
#plotting the top 10 naborhoods in Pittsburgh, PA based on the number of intersection markings
inter_marks_series[:10].plot(kind="bar")
#1st place clarification for the naborhood with the most amount of traffic signs
inter_marks_series[:3]
###Output
_____no_output_____ |
cvicenia/tyzden-10/IAU_103_random-forest_hyperparameter-tuning.ipynb | ###Markdown
Hyperparameter tuning with Random Forest. Based on https://towardsdatascience.com/optimizing-hyperparameters-in-random-forest-classification-ec7741f9d3f6
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn
plt.rcParams['figure.figsize'] = 9, 6
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import accuracy_score
from sklearn.model_selection import validation_curve
###Output
_____no_output_____
###Markdown
Plotting function
###Code
def plot_valid_curve(param_range, train_scores_mean, train_scores_std, test_scores_mean, test_scores_std):
plt.title("Validation Curve")
plt.xlabel(r"$\gamma$")
plt.ylabel("Score")
plt.ylim(0.991, 1.001)
lw = 2
plt.semilogx(param_range,
train_scores_mean,
label="Training score",
color="darkorange",
lw=lw
)
plt.fill_between(param_range,
train_scores_mean - train_scores_std,
train_scores_mean + train_scores_std,
alpha=0.2,
color="darkorange",
lw=lw
)
plt.semilogx(param_range,
test_scores_mean,
label="Cross-validation score",
color="navy",
lw=lw
)
plt.fill_between(param_range,
test_scores_mean - test_scores_std,
test_scores_mean + test_scores_std,
alpha=0.2,
color="navy",
lw=lw
)
plt.legend(loc="best")
return
###Output
_____no_output_____
###Markdown
Wine dataset with split ratio 65:35URL https://archive.ics.uci.edu/ml/datasets/wine+quality
###Code
red = pd.read_csv('data/winequality-red.csv', delimiter=';')
red['color'] = 1
white = pd.read_csv('data/winequality-white.csv', delimiter=';')
white['color'] = 0
data = pd.concat([red, white], ignore_index=True, sort=False)
n_samples, n_features = data.shape
n_samples, n_features
###Output
_____no_output_____
###Markdown
add noisy columns (simulate real data)
###Code
# you can try a change to 100 to see computational effect
n_cols = 20 * n_features
random_state = np.random.RandomState(0)
df_cols = pd.DataFrame(data=random_state.randn(n_samples, n_cols),
columns=range(1, n_cols+1)
)
print(df_cols.shape)
data = pd.concat([data, df_cols], axis=1)
data.shape
###Output
_____no_output_____
###Markdown
split dataset
###Code
train, test = train_test_split(data, test_size=0.35, shuffle=True, stratify=None)
print(len(train), len(test))
x_train, y_train = train.loc[:, train.columns != 'color'], train['color']
x_test, y_test = test.loc[:, test.columns != 'color'], test['color']
###Output
_____no_output_____
###Markdown
Random Forest classifier
###Code
# default n_estimators=100
forest = RandomForestClassifier(random_state=1)
model = forest.fit(x_train, y_train)
y_pred = model.predict(x_test)
print(accuracy_score(y_test, y_pred))
###Output
_____no_output_____
###Markdown
Hyperparameter *n_estimators* tuning**Note: long running time - a few minutes**
###Code
param_range = [100, 200, 300, 400, 500]
train_scores, test_scores = validation_curve(RandomForestClassifier(),
X = x_train,
y = y_train,
param_name = 'n_estimators',
param_range = param_range,
scoring="accuracy",
cv = 3
)
train_scores_mean = np.mean(train_scores, axis=1)
train_scores_std = np.std(train_scores, axis=1)
test_scores_mean = np.mean(test_scores, axis=1)
test_scores_std = np.std(test_scores, axis=1)
print(train_scores)
print(test_scores)
###Output
_____no_output_____
###Markdown
Plotting
###Code
plot_valid_curve(param_range, train_scores_mean, train_scores_std, test_scores_mean, test_scores_std)
###Output
_____no_output_____
###Markdown
Random Forest with hyperparameter setting
###Code
forest = RandomForestClassifier(random_state = 1,
n_estimators = 200,
max_depth = None,
min_samples_split = 2,
min_samples_leaf = 1
)
model = forest.fit(x_train, y_train)
y_pred = model.predict(x_test)
print(accuracy_score(y_test, y_pred))
###Output
_____no_output_____
###Markdown
Grid Search**Note: long running time cca 30m**
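If the full grid below is too slow, a cheaper alternative (a hedged sketch, not part of the original run) is to sample the same hyperparameter space randomly with `RandomizedSearchCV`:
###Code
from sklearn.model_selection import RandomizedSearchCV
# Same search space as the grid below, but only 30 randomly sampled combinations
rs = RandomizedSearchCV(RandomForestClassifier(random_state=1),
                        param_distributions=dict(n_estimators=[100, 200, 300, 400, 500],
                                                 max_depth=[5, 10, 20, 30, 40, 50],
                                                 min_samples_split=[2, 5, 10, 15, 20],
                                                 min_samples_leaf=[1, 2, 5, 10]),
                        n_iter=30, cv=3, n_jobs=-1, random_state=1)
rs.fit(x_train, y_train)
print(rs.best_params_)
###Output
_____no_output_____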
###Code
n_estimators = [100, 200, 300, 400, 500]
max_depth = [5, 10, 20, 30, 40, 50]
min_samples_split = [2, 5, 10, 15, 20]
min_samples_leaf = [1, 2, 5, 10]
hyper = dict(n_estimators = n_estimators,
max_depth = max_depth,
min_samples_split = min_samples_split,
min_samples_leaf = min_samples_leaf
)
gs = GridSearchCV(forest, hyper, cv=3, verbose=1, n_jobs=-1)
best = gs.fit(x_train, y_train)
print(best)
###Output
_____no_output_____
###Markdown
The best model
###Code
forest = RandomForestClassifier(n_estimators=200, random_state=1)
model = forest.fit(x_train, y_train)
y_pred = model.predict(x_test)
print(accuracy_score(y_test, y_pred))
###Output
_____no_output_____ |
notebooks/Optimize-results-debug.ipynb | ###Markdown
Unique colors per elem ratio:
###Code
all_elems = set()
for tbls in all_tbls.values():
all_elems = all_elems.union(tbls.keys())
elem_to_color = {}
for i, elem in enumerate(all_elems):
elem_to_color[elem] = turbo(i / len(all_elems))
fiducials = {
'mdisk_f': 1.,
'disk_hz': 0.28,
'zsun': 20.8,
'vzsun': 7.78
}
colcols = [
('mdisk_f', 'disk_hz'),
('mdisk_f', 'vzsun'),
('zsun', 'vzsun')
]
for data_name, tbls in all_tbls.items():
fig, axes = plt.subplots(1, 3, figsize=(15, 5.5),
constrained_layout=True)
for elem in tbls:
for i, (col1, col2) in enumerate(colcols):
ax = axes[i]
ax.plot(tbls[elem][col1], tbls[elem][col2],
ls='none', marker='o', mew=0, ms=4,
label=elem_to_label(elem), color=elem_to_color[elem])
axes[0].legend()
axes[0].set_xlabel(r'${\rm M}_{\rm disk} / {\rm M}_{\rm disk}^\star$')
axes[1].set_xlabel(r'${\rm M}_{\rm disk} / {\rm M}_{\rm disk}^\star$')
axes[2].set_xlabel(r'$z_\odot$ [pc]')
axes[0].set_ylabel(r'$h_z$ [kpc]')
axes[1].set_ylabel(r'$v_{z,\odot}$ ' + f'[{u.km/u.s:latex_inline}]')
axes[2].set_ylabel(r'$v_{z,\odot}$ ' + f'[{u.km/u.s:latex_inline}]')
for ax, (col1, col2) in zip(axes, colcols):
ax.axvline(fiducials[col1], zorder=-10, color='#aaaaaa', linestyle='--')
ax.axhline(fiducials[col2], zorder=-10, color='#aaaaaa', linestyle='--')
fig.set_facecolor('w')
fig.suptitle(data_name, fontsize=24)
###Output
_____no_output_____
###Markdown
Error ellipses
###Code
# From https://matplotlib.org/devdocs/gallery/statistics/confidence_ellipse.html
from matplotlib.patches import Ellipse
import matplotlib.transforms as transforms
def confidence_ellipse(x, y, ax, n_std=1.0, facecolor='none', **kwargs):
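    # Draws an n_std-sigma covariance ellipse of the paired samples x, y onto ax;
    # extra keyword arguments are passed straight through to matplotlib's Ellipse patch.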
cov = np.cov(x, y)
pearson = cov[0, 1] / np.sqrt(cov[0, 0] * cov[1, 1])
# Using a special case to obtain the eigenvalues of this
# two-dimensionl dataset.
ell_radius_x = np.sqrt(1 + pearson)
ell_radius_y = np.sqrt(1 - pearson)
ellipse = Ellipse((0, 0), width=ell_radius_x * 2, height=ell_radius_y * 2,
facecolor=facecolor, **kwargs)
# Calculating the stdandard deviation of x from
# the squareroot of the variance and multiplying
# with the given number of standard deviations.
scale_x = np.sqrt(cov[0, 0]) * n_std
mean_x = np.mean(x)
# calculating the stdandard deviation of y ...
scale_y = np.sqrt(cov[1, 1]) * n_std
mean_y = np.mean(y)
transf = transforms.Affine2D() \
.rotate_deg(45) \
.scale(scale_x, scale_y) \
.translate(mean_x, mean_y)
ellipse.set_transform(transf + ax.transData)
return ax.add_patch(ellipse)
def plot_cov_ellipse(m, C, ax, n_std=1.0, facecolor='none', **kwargs):
pearson = C[0, 1] / np.sqrt(C[0, 0] * C[1, 1])
# Using a special case to obtain the eigenvalues of this
# two-dimensionl dataset.
ell_radius_x = np.sqrt(1 + pearson)
ell_radius_y = np.sqrt(1 - pearson)
ellipse = Ellipse((0, 0), width=ell_radius_x * 2, height=ell_radius_y * 2,
facecolor=facecolor, **kwargs)
transf = transforms.Affine2D() \
.rotate_deg(45) \
.scale(n_std * np.sqrt(C[0, 0]),
n_std * np.sqrt(C[1, 1])) \
.translate(m[0], m[1])
ellipse.set_transform(transf + ax.transData)
return ax.add_patch(ellipse)
def make_ell_plot(tbls):
elem_names = tbls.keys()
means = np.zeros((len(elem_names), 4))
covs = np.zeros((len(elem_names), 4, 4))
for j, elem in enumerate(elem_names):
mask = (np.isfinite(tbls[elem]['mdisk_f']) &
np.isfinite(tbls[elem]['zsun']) &
np.isfinite(tbls[elem]['vzsun']))
X = np.stack((tbls[elem]['mdisk_f'][mask],
tbls[elem]['disk_hz'][mask],
tbls[elem]['zsun'][mask],
tbls[elem]['vzsun'][mask]))
covs[j] = np.cov(X)
means[j] = np.mean(X, axis=1)
C = np.linalg.inv(np.sum([np.linalg.inv(cov) for cov in covs], axis=0))
m = np.sum([C @ np.linalg.inv(cov) @ mean
for mean, cov in zip(means, covs)], axis=0)
logdets = [np.linalg.slogdet(cov)[1] for cov in covs]
norm = mpl.colors.Normalize(vmin=np.nanmin(logdets),
vmax=np.nanmax(logdets),
clip=True)
norm2 = mpl.colors.Normalize(vmin=-0.2, vmax=1.1)
def get_alpha(ld):
return norm2(1 - norm(ld))
fig, axes = plt.subplots(1, 3, figsize=(15, 5.5),
constrained_layout=True)
for elem, logdet in zip(elem_names, logdets):
for i, (col1, col2) in enumerate(colcols):
ax = axes[i]
color = elem_to_color[elem]
mask = np.isfinite(tbls[elem][col1]) & np.isfinite(tbls[elem][col2])
if mask.sum() < 100:
print(f'skipping {elem} {col1} {col2}')
continue
ell = confidence_ellipse(tbls[elem][col1][mask],
tbls[elem][col2][mask],
ax,
n_std=1.,
linewidth=0, facecolor=color,
alpha=get_alpha(logdet),
label=elem_to_label(elem))
ell = confidence_ellipse(tbls[elem][col1][mask],
tbls[elem][col2][mask],
ax,
n_std=2.,
linewidth=0, facecolor=color,
alpha=get_alpha(logdet) / 2)
for j, i in enumerate([[2, 3], [1, 2], [0, 1]]):
mm = np.delete(m, i)
CC = np.delete(np.delete(C, i, axis=0), i, axis=1)
ell = plot_cov_ellipse(mm, CC, ax=axes[j],
n_std=1.,
linewidth=0, facecolor='k',
alpha=0.5, label='joint', zorder=100)
ell = plot_cov_ellipse(mm, CC, ax=axes[j],
n_std=2.,
linewidth=0, facecolor='k',
alpha=0.2, zorder=100)
axes[0].set_xlim(0.4, 1.8)
axes[1].set_xlim(0.4, 1.8)
axes[2].set_xlim(-60, 30)
axes[0].set_ylim(0, 0.8)
axes[1].set_ylim(0, 15)
axes[2].set_ylim(0, 15)
axes[2].legend(ncol=2)
axes[0].set_xlabel(r'${\rm M}_{\rm disk} / {\rm M}_{\rm disk}^\star$')
axes[1].set_xlabel(r'${\rm M}_{\rm disk} / {\rm M}_{\rm disk}^\star$')
axes[2].set_xlabel(r'$z_\odot$ [pc]')
axes[0].set_ylabel(r'$h_z$ [kpc]')
axes[1].set_ylabel(r'$v_{z,\odot}$ ' + f'[{u.km/u.s:latex_inline}]')
axes[2].set_ylabel(r'$v_{z,\odot}$ ' + f'[{u.km/u.s:latex_inline}]')
for ax, (col1, col2) in zip(axes, colcols):
ax.axvline(fiducials[col1], zorder=-10, color='#aaaaaa', linestyle='--')
ax.axhline(fiducials[col2], zorder=-10, color='#aaaaaa', linestyle='--')
fig.set_facecolor('w')
return fig, axes
for data_name, tbls in all_tbls.items():
fig, axes = make_ell_plot(tbls)
fig.suptitle(data_name, fontsize=24)
fig.savefig(plot_path / data_name / 'bootstrap-error-ellipses.png', dpi=250)
###Output
_____no_output_____ |
Python Basics/.ipynb_checkpoints/Lab Assignment - 1-checkpoint.ipynb | ###Markdown
Exercise 1 1. WAP to check if a number is divisible by 5 2. WAP to check if a number is even or odd 3. WAP to check if the roots of a quadratic equation are real, real and equal, or imaginary 4. WAP to print grades of a student: Less than 40 - NC; 40 to less than 50 - D; 50 to less than 60 - C; 60 to less than 70 - B; 70 to less than 80 - A; 80 and above - O 5. WAP to print the electricity bill: up to 200 - 0.5/unit; 201 - 500 - 1/unit for units consumed above 200; 501 - 1000 - 2.5/unit for units above 500; 1001 - 1500 - 3.5/unit; 1501 - 2500 - 5/unit; above 2500 - 10/unit
###Code
# Exercise 1 prg 1
n = input("Enter a number")
n = int(n)
print(n)
if n%5 == 0:
print("Number is divisible by 5")
else:
print("Not divisible by 5")
# Exercise 1 prg 2
n = input("Enter a number")
n = int(n)
print(n)
if n%2 == 0:
print("Number is Even")
else:
print("Number is Odd")
# Exercise 1 prg 3
a = int(input())
b = int(input())
c = int(input())
d = (b*b)-(4*a*c)
d = int(d)
print(d)
if d > 0:
print("Roots are Real")
elif d == 0:
print("Roots are real and equal")
else:
print("Roots are Imaginary")
# Exercise 1 prg 4
# WAP to print grades of a student
# Less than 40 - NC
# 40 - Less than 50 - D
# 50 - Less than 60 - C
# 60 - Less than 70 - B
# 70 - Less than 80 - A
# 80 and Above - O
marks = int(input("Enter marks")) # By default any variable as String
if marks < 40:
print ("NC")
elif marks >= 40 and marks < 50:
    print("Grade D")
elif marks >= 50 and marks < 60:
    print("Grade C")
elif marks >= 60 and marks < 70:
    print("Grade B")
elif marks >= 70 and marks < 80:
    print("Grade A")
elif marks >= 80:
    print("Grade O")
# Exercise 1 prg 5
# WAP print the Electricity Bill
# Upto 200 - 0.5/unit
# 201 - 500 - 1/unit for units consumed above 200
# 501 - 1000 - 2.5/unit " " 500
# 10001 - 1500 - 3.5/unit
# 1501 - 2500 - 5/unit
# Above 2500 - 10/unit
unit = int(input("Enter units")) # By default any variable as String
if unit <= 200:
bill = (0.5*unit)
elif unit > 200 and unit <= 500:
bill = (0.5*200) + ((unit-200)*1)
elif unit > 500 and unit <= 1000:
bill = (0.5*200) + (500*1) + ((unit-500)*2.5)
elif unit > 1000 and unit <= 1500:
bill = (0.5*200) + (500*1) + (1000*2.5) + ((unit-1000)*3.5)
elif unit > 1500 and unit <= 2500:
bill = (0.5*200) + (500*1) + (1000*2.5) + (1500*3.5) + ((unit-1500)*5)
elif unit > 2500:
bill = (0.5*200) + (500*1) + (1000*2.5) + (1500*3.5) + (2500*5) + ((unit-2500)*10)
bill = float(bill)
print(bill)
###Output
Enter units550
725.0
###Markdown
Lab 3 1. WAP to check if a string is a palindrome or not 2. WAP that takes a string of multiple words, capitalizes the first letter of each word, and forms a new string 3. WAP to read a string and display the longest substring of the given string having just consonants (a sketch for this one appears after the solutions below) 4. WAP to read a string and then print a string that capitalizes every other letter in the string, e.g. goel becomes gOeL 5. WAP to read an email id of a person and verify if it belongs to the banasthali.in domain 6. WAP to count the number of vowels and consonants in an input string 7. WAP to read a sentence and count the articles it has 8. WAP to check if a sentence is a paniltimate or not 9. WAP to read a string and check if it has "good" in it
###Code
# Exercise 3 prg 1
str = input("Enter a string")
ss= str[::-1] #to reverse the string
print(str)
print(ss)
if(str == ss):
print("palindrome")
else:
print('Not palindrome')
# Exercise 3 prg 2
str = input("Enter string")
s = ""
lstr = str.split()
for i in range(0,len(lstr)):
lstr[i] = lstr[i].capitalize()
#lstr[i] = lstr[i].replace(lstr[i][0],lstr[i][0].upper())
print(s.join(lstr))
# Exercise 3 prg 4
str = input("Enter string")
for i in range(1,len(str),2):
str = str.replace(str[i],str[i].upper())
print(str)
# Exercise 3 prg 5
str = input("Enter email")
a = str.find("@")
s = "banasthali.in"
if(str[a+1:] == s):
print("Verified")
else:
print("Wrong email")
# Exercise 3 prg 6,7
str = input("Enter string")
articles = int(str.count("a")+str.count("an")+str.count("the"))
vowel = int(str.count("a")+str.count("e")+str.count("i")+str.count("o")+str.count("u"))
consonents = int(len(str)-vowel-str.count(" "))
print(articles)
print(vowel)
print(consonents)
# Exercise 3 prg 9
str = input("Enter string")
if(str.find("good") >= 0):
print("yes")
else:
print("no")
###Output
Enter string not good
yes
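###Markdown
Exercise 3, program 3 (the longest substring made of only consonants) is not attempted above; a minimal sketch of one possible approach (an assumption, not the lab author's own solution):
###Code
# Exercise 3 prg 3 (sketch)
str1 = input("Enter string")
vowels = "aeiouAEIOU"
longest = ""
current = ""
for ch in str1:
    if ch.isalpha() and ch not in vowels:
        current += ch
        if len(current) > len(longest):
            longest = current
    else:
        current = ""
print(longest)
###Output
_____no_output_____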
###Markdown
Lab 4 1. WAP to find the minimum element from a list of integers along with its index in the list 2. WAP to calculate the mean of a given list of numbers 3. WAP to search an element in a list 4. WAP to count the frequency of an element in a list 5. WAP to find the frequencies of all elements in a list 6. WAP to extract two slices from a list, add the slices, and find which one is greater 7. WAP to find the second largest number of a list 8. WAP to input a list of numbers and shift all zeros to the right and all non-zeros to the left of the list 9. WAP to add two matrices 10. WAP to multiply two matrices 11. WAP to print the transpose of a matrix 12. WAP to add the elements of the diagonal of a matrix 13. WAP to find an element in a matrix 14. WAP to find the inverse of a matrix
###Code
#1
l = list(input("Enter list"))
min = l[0]
for i in range(len(l)-1):
if(l[i+1]<min):
min=l[i+1]
print(min)
print(l.index(min))
#2
l = list(input("Enter list"))
sum = 0
sum = int(sum)
for i in range(len(l)):
sum+= int(l[i])
mean = float(sum/len(l))
print(mean)
#3
l = list(input("Enter list"))
ele = input("Enter element")
l.index(ele)
#5
l = list(input("Enter list"))
lst = []
for i in range(len(l)):
if(lst.count(l[i]) == 0):
print(l[i],l.count(l[i]))
lst.append(l[i])
#6
l = list(input("Enter list"))
lenght = int(len(l)/2)
l1 = l[0:lenght]
l2 = l[(lenght+1):int(len(l))]
if(l1>l2):
print(l1)
else:
print(l2)
#7
l = list(input("Enter list"))
min = l[0]
min2 = l[1]
for i in range(2,len(l)):
if(l[i]<min2):
if(l[i]<min):
min=l[i]
min2=min
else:
min2=l[i]
print(min2)
print(l.index(min2))
#8
l = list(input("Enter list"))
i = 0
j = len(l)-1
while(i<j):
if(l[i]==0):
if(l[j]==0):
j-=1
if(i!=j):
t=l[i]
l[i]=l[j]
l[j]=t
i+=1
print(l)
#9
mat1=[]
mat2=[]
r=int(input("Enter no. of rows : "))
c=int(input("Enter no. of columns : "))
print("Enter matrix 1 :")
for i in range(r):
a=[]
for j in range(c):
ele=int(input())
a.append(ele)
mat1.append(a)
print("Enter matrix 2 :")
for i in range(r):
a=[]
for j in range(c):
ele=int(input())
a.append(ele)
mat2.append(a)
res = [[mat1[i][j] + mat2[i][j] for j in range(len(mat1[0]))] for i in range(len(mat1))]
print("Added Matrix is : ")
print("Added Matrix is : ")
for r in res :
print(r)
#10
mat1=[[4,3],[2,4]]
mat2=[[1,7],[3,4]]
res=[[0,0],[0,0]]
for i in range(len(mat1)):
for j in range(len(mat2[0])):
for k in range(len(mat2)):
res[i][j]+=mat1[i][k]*mat2[k][j]
print("Result Array : ")
for r in res:
print(r)
#11
mat=[]
r=int(input("Enter no. of rows : "))
c=int(input("Enter no. of columns : "))
print("Enter matrix :")
for i in range(r):
a=[]
for j in range(c):
ele=int(input())
a.append(ele)
mat.append(a)
trans=[]
for j in range(c):
trans.append([])
for i in range(r):
t=mat[i][j]
trans[j].append(t)
print("Transposed Matirx : ")
for r in trans:
print(r)
#12
mat=[]
r=int(input("Enter no. of rows : "))
c=int(input("Enter no. of columns : "))
print("Enter matrix :")
for i in range(r):
a=[]
for j in range(c):
ele=int(input())
a.append(ele)
mat.append(a)
sum=int(0)
for i in range(r):
for j in range(c):
if i==j:
sum+=mat[i][j]
break
print(sum)
# 13
mat=[]
r=int(input("Enter no. of rows : "))
c=int(input("Enter no. of columns : "))
print("Enter matrix :")
for i in range(r):
a=[]
for j in range(c):
ele=int(input())
a.append(ele)
mat.append(a)
elem=int(input("Enter Element to be searched : "))
ind_r=0
ind_c=0
flag=0
for i in range(r):
for j in range(c):
if mat[i][j]==elem :
ind_r=i
ind_c=j
flag=1
break
if flag==1:
print(str(ind_r), str(ind_c))
else:
print("Element not found.")
# 14
mat=[]
r=int(input("Enter no. of rows : "))
c=int(input("Enter no. of columns : "))
print("Enter matrix :")
for i in range(r):
a=[]
for j in range(c):
ele=int(input())
a.append(ele)
mat.append(a)
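# The original cell stops after reading the matrix; a minimal sketch of the
# missing inverse step (an assumption, using numpy rather than manual elimination):
import numpy as np
if r == c and np.linalg.det(np.array(mat)) != 0:
    print("Inverse of the matrix :")
    print(np.linalg.inv(np.array(mat)))
else:
    print("Matrix is not invertible (it must be square with a non-zero determinant).")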
###Output
Enter no. of rows : 2
Enter no. of columns : 3
Enter matrix :
1
2
3
4
5
6
###Markdown
Lab 51. Write a function in python to find max of three numbers2. Write a function in python to sum all the numbers in a list3. Write a function in python to multiply all the numbrers in a list4. Write a function in python to reverse a string5. Write a function in python that takes a list and returns a new list with unique elements of the first list6. Write a function in python that checks whether a passed string is palindrome or not7. Write a function in python to access a function inside a function8. Write a function in python to generate even series upto the nth term9. Write a function in python to check if a number is prime or not10. Write a function in python to generate prime series between range inputted by a user.11. Write a recursive function in python to sum all the numbers in a list12. Write a recursive function in python to multiply all the numbrers in a list13. Write a recursive function in python to reverse a string14. Write a recursive function in python that takes a list and returns a new list with unique elements of the first list15. Write a recursive function in python that checks whether a passed string is palindrome or not16. Write a recursive function in python to generate even series upto the nth term17. Write a recursive function in python to check if a number is prime or not18. Write a recursive function in python to generate prime series between range inputted by a user.
###Code
#1
def max(a,b,c):
m = a if (a if a > b else b)>c else c
print(m)
max(1,2,3)
#2
def sum(l):
s = 0
for i in range(len(l)):
s+= int(l[i])
print(s)
l = list(input("Enter list"))
sum(l)
#3
def mul(l):
m= 1
for i in range(len(l)):
m= m*int(l[i])
print(m)
l = list(input("Enter list"))
mul(l)
#4
def rev(s):
r = s[::-1]
print(r)
rev("Kashish")
#6
def palin(str):
ss= str[::-1] #to reverse the string
print(str)
print(ss)
if(str == ss):
print("palindrome")
else:
print('Not palindrome')
str = input("Enter a string")
palin(str)
#7
def fun1():
print("First Function")
fun2()
def fun2():
print("Second Function")
fun1()
#8
def even_s(n):
for i in range(n):
if(i%2 == 0):
print(i, end=" ")
even_s(8)
#9
def prime(n):
c=0
for i in range(1,n):
if n%i == 0:
c+=1
if c == 1:
print("Prime number")
else:
print("Not a prime number")
n = int(input("Enter number"))
prime(n)
#10
def prime_s(n):
for j in range(n):
c = 0
for i in range(1,j):
if j%i == 0:
c+=1
if c == 1:
print(j, end=" ")
prime_s(5)
#11
def sum_no(n,l):
if n == 0:
return int(l[n])
else:
return(int(l[n])+sum_no(n-1,l))
l = list(input("Enter list"))
sum_no((len(l)-1),l)
#12
def mul_no(n,l):
if n == 0:
return int(l[n])
else:
return(int(l[n])*mul_no(n-1,l))
l = list(input("Enter list"))
mul_no((len(l)-1),l)
#13
def revstr(str,n):
if n==0:
return str[0]
return str[n]+revstr(str,n-1)
print(revstr("helloworld",9))
#14
def unique(list1,list2,n,i):
if i in list1:
list2.append(list1[i])
unique(list1,list2,n,i+1)
return list2
list1=[]
n=int(input("Enter size : "))
for i in range(n):
a=int(input("Enter the element :"))
list1.append(a)
print(list1)
list2=[]
list2=unique(list1,list2,n-1,0)
print(list2)
#15
def isPal(st, s, e) :
if (s == e):
return True
if (st[s] != st[e]) :
return False
if (s < e + 1) :
        return isPal(st, s + 1, e - 1)
return True
def isPalindrome(st) :
n = len(st)
if (n == 0) :
return True
return isPal(st, 0, n - 1);
st = "kashish"
if (isPalindrome(st)) :
print("Yes")
else :
print("No")
#16
def rec_even(s,n):
    if s>=n:
        if(n%2 == 0):
            print(n)
        return
    elif(s%2 == 0):
        print(s)
    return rec_even(s+2,n)
num = int(input("Enter a number"))
rec_even(0,num)
#17
def rec_prime(n,i):
if n<=2:
if n==2:
return True
return False
if n%i==0:
return False
if(i*i>n):
return True
    return rec_prime(n,i+1)
num=int(input("Enter a number"))
print(rec_prime(num,2))
#18
def prime(n):
for j in range(n):
c = 0
for i in range(1,j):
if j%i == 0:
c+=1
if c == 1:
return True
else:
return False
def rec_prime(s,num):
if s>=num:
t=prime(s)
if t==True:
print(s)
else:
t=prime(s)
if t==True:
print(s)
return rec_prime(s+1,num)
num = int(input("Enter a number"))
rec_prime(0,num)
###Output
Enter a number8
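###Markdown
Exercise 5 (a non-recursive function that returns the unique elements of a list) has no solution above; a minimal sketch of one way to do it (an assumption, not the lab author's code):
###Code
# Exercise 5 prg 5 (sketch)
def unique_list(l):
    result = []
    for item in l:
        if item not in result:
            result.append(item)
    return result
print(unique_list([1, 2, 2, 3, 1, 4]))
###Output
_____no_output_____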
###Markdown
Lab 61. WAP to read a file and print the number of characters, words and lines it has2. WAP to copy contents of one file into another file3. WAP to read a file and count the number of vowels it has4. WAP to read a file and count the number of articles it has5. WAP to read a file reverse the contents of file (last line of the file should become the first line of the file)
###Code
f = open(r'D:\Kashish Goel\Sample.txt')
print(f.read())
f = open(r'D:\Kashish Goel\Sample.txt')
words=0
lines =0;
for line in f:
print(line)
f.close()
f = open(r'D:\Kashish Goel\Sample.txt')# no of lines
words=0
lines =0;
for line in f:
lines=lines+1
print(lines)
f.close()
#1
f = open(r'D:\Kashish Goel\Sample.txt')# no of lines
words=0
lines =0
for line in f:
lines=lines+1
word = line.split()
words= words + len(word)
print(lines)
print(words)
f.seek(0)
fo = f.read() # read file in one go
print(len(fo))
f.close()
#2
f=open(r'D:\Kashish Goel\Sample.txt')
f1 = open("Copy.txt", "w+")
for line in f:
f1.write(line)
f1.seek(0)
print(f1.read())
f.close()
f1.close()
#3
f=open(r'D:\Kashish Goel\Sample.txt')
fo=f.read()
vowels=0
for i in fo:
if i in ('a','e','i','o','u','A','E','I','O','U'):
vowels=vowels+1
print(vowels)
#4
f=open(r'D:\Kashish Goel\Sample.txt')
articles=0
for line in f:
sentence=line.split()
for word in sentence:
if word in ('a','an','the','A','An','The'):
articles+=1
print(articles)
#5
f=open(r'D:\Kashish Goel\Sample.txt')
list=[]
for line in f:
list.append(line)
f.close()
f=open(r'D:\Kashish Goel\Sample.txt',"w+")
for i in range(len(list)-1,-1,-1):
f.write(list[i])
f.seek(0)
print(f.read())
f.close()
###Output
This is a New Sample file generated for File Handling in Pyhton.
Kashish Goel
Btech CSE III Year
|
notebooks/sd_rs.ipynb | ###Markdown
Sample design: simple random sample
###Code
import pandas as pd
from pathlib import Path
import numpy as np
import matplotlib as matplotlib
import scipy
import matplotlib.pyplot as plt
import seaborn as sns
import os
import datetime as dt
from shapely import wkt
from shapely.geometry import Point, Polygon, MultiPoint
import geopandas as gpd
import xarray as xr
plt.rcParams.update({'font.size': 18})
SMALL_SIZE = 10
MEDIUM_SIZE = 14
BIGGER_SIZE = 20
plt.rc('font', size=SMALL_SIZE) # controls default text sizes
plt.rc('axes', titlesize=BIGGER_SIZE) # fontsize of the axes title
plt.rc('axes', labelsize=MEDIUM_SIZE) # fontsize of the x and y labels
plt.rc('xtick', labelsize=SMALL_SIZE) # fontsize of the tick labels
plt.rc('ytick', labelsize=SMALL_SIZE) # fontsize of the tick labels
plt.rc('legend', fontsize=SMALL_SIZE) # legend fontsize
plt.rc('figure', titlesize=BIGGER_SIZE) # fontsize of the figure title
np.random.seed(0)
repo_path = Path('/Users/etriesch/dev/ocean-carbon-sampling/')
data_clean_path = repo_path / 'data/clean/'
data_raw_path = repo_path / 'data/raw/'
geo_crs = 'epsg:4326'
proj_crs = '+proj=cea'
# load coastlines (saved locally)
boundary_fp = data_raw_path / 'stanford-vg541kt0643-shapefile.zip'
boundary = gpd.read_file(boundary_fp).to_crs(geo_crs)
# Monterrey desal mask
ca_cent = [-121.788649, 36.802834]
ca_lats = [33.48, 39.48]
ca_lons = [-125.48, -119.48]
# Texas desal mask
tx_cent = [-95.311296, 28.927239]
tx_lats = [25.57, 31.57]
tx_lons = [-98.21, -92.21]
# NH desal mask
nh_cent = [-70.799678, 42.563588]
nh_lats = [39.38, 45.38]
nh_lons = [-73.50, -67.50]
###Output
_____no_output_____
###Markdown
Create ocean boundaries
###Code
# make disks
ca_disc = gpd.GeoSeries(Point(ca_cent), crs=proj_crs).buffer(1.5).set_crs(geo_crs, allow_override=True)
ca_disc = gpd.GeoDataFrame(geometry=ca_disc)
tx_disc = gpd.GeoSeries(Point(tx_cent), crs=proj_crs).buffer(1.5).set_crs(geo_crs, allow_override=True)
tx_disc = gpd.GeoDataFrame(geometry=tx_disc)
nh_disc = gpd.GeoSeries(Point(nh_cent), crs=proj_crs).buffer(1.5).set_crs(geo_crs, allow_override=True)
nh_disc = gpd.GeoDataFrame(geometry=nh_disc)
# cut discs at coastal boundary
ca = ca_disc.overlay(boundary, how='difference')
tx = tx_disc.overlay(boundary, how='difference')
nh = nh_disc.overlay(boundary, how='difference')
# make rectangles (not used)
def get_bounding_box(lats, lons):
geometry = []
for i in lons:
for j in lats:
geometry += [Point(i,j)]
geo = Polygon(geometry).envelope
geo = gpd.GeoDataFrame(geometry=gpd.GeoSeries(geo, crs=geo_crs))
return geo
ca_box = get_bounding_box(ca_lats, ca_lons)
tx_box = get_bounding_box(tx_lats, tx_lons)
nh_box = get_bounding_box(nh_lats, nh_lons)
# plot desal plants on map
fig, ax = plt.subplots(figsize=(9, 9))
boundary.plot(ax=ax, color='darkgreen', alpha=0.2)
boundary.boundary.plot(ax=ax, color='darkgreen', alpha=0.7, linewidth=0.1)
# california
ca.plot(ax=ax, color='darkblue', alpha=0.5, label='Sample region')
gpd.GeoSeries(Point(ca_cent)).plot(ax=ax, color='darkred', markersize=50, marker='*', label='Desal. plant')
# texas
tx.plot(ax=ax, color='darkblue', alpha=0.5, label='Sample region')
gpd.GeoSeries(Point(tx_cent)).plot(ax=ax, color='darkred', markersize=50, marker='*')
# new hampshire
nh.plot(ax=ax, color='darkblue', alpha=0.5, label='Sample region')
gpd.GeoSeries(Point(nh_cent)).plot(ax=ax, color='darkred', markersize=50, marker='*')
# set limit
ax.set_xlim(-127, -66)
ax.set_ylim(24, 50)
plt.title('Selected sample regions')
ax.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Read in temp and color data
###Code
# read data
t_raw = pd.read_csv(data_clean_path / 'sst.csv')
c_raw = pd.read_csv(data_clean_path / 'chlor_a.csv')
# c_ann_raw = pd.read_csv(data_clean_path / 'chlor_a_annual.csv')
# merge on x/y values
m = pd.merge(left=c_raw, right=t_raw, how='inner', on=['x', 'y'], suffixes=('_c', '_t'))
# make geodataframe
geo = [Point(lon, lat) for lat, lon in zip(m.lat_c, m.lon_c)]
geo_m = gpd.GeoDataFrame(m, geometry=geo, crs=geo_crs)
###Output
_____no_output_____
###Markdown
Subset to sample zones
###Code
# make sample zones
# first convert points to convex hulls, then resnip them to the coastlines
pac_sample_zone = MultiPoint((geo_m.overlay(ca, how='intersection').geometry.values)).convex_hull
pac_sample_zone = gpd.GeoSeries(pac_sample_zone, crs=geo_crs)
pac_sample_zone = gpd.GeoDataFrame(geometry=pac_sample_zone).overlay(ca, how='intersection')
atl_sample_zone = MultiPoint((geo_m.overlay(nh, how='intersection').geometry.values)).convex_hull
atl_sample_zone = gpd.GeoSeries(atl_sample_zone, crs=geo_crs)
atl_sample_zone = gpd.GeoDataFrame(geometry=atl_sample_zone).overlay(nh, how='intersection')
gul_sample_zone = MultiPoint((geo_m.overlay(tx, how='intersection').geometry.values)).convex_hull
gul_sample_zone = gpd.GeoSeries(gul_sample_zone, crs=geo_crs)
gul_sample_zone = gpd.GeoDataFrame(geometry=gul_sample_zone).overlay(tx, how='intersection')
###Output
_____no_output_____
###Markdown
Simple random sampling. We use rejection sampling: scale up the number of target samples relative to the bounding box containing the sampling zone, sample the entire bounding box, and reject any samples not in the sampling zone. The accepted points are uniformly randomly distributed within the target sampling zone, since restricting a uniform distribution to a subregion leaves it uniform on that subregion.
###Code
def rejection_sample(n, region):
# get fraction of sampling area
sample_area = region.to_crs(proj_crs).area
total_area = (gpd.GeoDataFrame(
geometry=gpd.GeoSeries(
Polygon([Point([region.bounds.minx, region.bounds.miny]), Point([region.bounds.minx, region.bounds.maxy]),
Point([region.bounds.maxx, region.bounds.miny]), Point([region.bounds.maxx, region.bounds.maxy])]),
crs=geo_crs).envelope).to_crs(proj_crs).area)
pct_sample_area = sample_area / total_area
# scale up target sample size to account for this
n_scale = int(np.ceil(n / pct_sample_area))
# generate lat lons
lon = np.random.uniform(region.bounds.minx, region.bounds.maxx, n_scale)
lat = np.random.uniform(region.bounds.miny, region.bounds.maxy, n_scale)
geo = [Point(x, y) for x, y in zip(lon, lat)]  # shapely Point takes (x=lon, y=lat)
geo_sub = [pt for pt in geo if region.contains(pt).values]
print(f'Targeted {n} samples, {len(geo_sub)} returned ({len(geo_sub)-n})')
return gpd.GeoSeries(geo_sub, crs=region.crs)
SAMPLES = 165
ca_samples = rejection_sample(SAMPLES, pac_sample_zone)
tx_samples = rejection_sample(SAMPLES, gul_sample_zone)
nh_samples = rejection_sample(SAMPLES, atl_sample_zone)
# make tuples of sample zones, discs, and desalination plant locations
PAC = [ca_samples, ca, ca_cent] # pacific
ATL = [nh_samples, nh, nh_cent] # atlantic
GUL = [tx_samples, tx, tx_cent] # gulf
fig, (ax1, ax2, ax3) = plt.subplots(ncols=3, figsize=(20,20))
# pacific
PAC[1].boundary.plot(ax=ax1, alpha=0.8, color='gray')
PAC[0].plot(ax=ax1, markersize=10, label='sample')
gpd.GeoSeries(Point(PAC[2])).plot(ax=ax1, color='darkred', markersize=100, marker='*', label='desal. plant')
ax1.set_title('Pacific: Monterrey, CA')
# gulf
GUL[1].boundary.plot(ax=ax2, alpha=0.8, color='gray')
GUL[0].plot(ax=ax2, markersize=10, label='sample')
gpd.GeoSeries(Point(GUL[2])).plot(ax=ax2, color='darkred', markersize=100, marker='*', label='desal. plant')
ax2.set_title('Gulf: Freetown, TX')
# atlantic
ATL[1].boundary.plot(ax=ax3, alpha=0.8, color='gray')
ATL[0].plot(ax=ax3, markersize=10, label='sample')
gpd.GeoSeries(Point(ATL[2])).plot(ax=ax3, color='darkred', markersize=100, marker='*', label='desal. plant')
ax3.set_title('Atlantic: Hamilton, MA')
ax1.legend()
ax2.legend()
ax3.legend()
plt.show()
###Output
_____no_output_____ |
Lectures notebooks/(Lectures notebooks) netology Machine learning/21. Syntactic analysis and keyword selection/syntax.ipynb | ###Markdown
Syntax. Our goal is to represent a natural-language sentence as a tree that reflects the syntactic dependencies between words. Why is this needed? * «Банкомат съел карту» vs «карта съела банкомат» ("the ATM ate the card" vs "the card ate the ATM"); * checking that a phrase is grammatically well formed (when generating speech); * (rule-based) machine translation; * information extraction; * the syntactic role of a token as a measure of its importance (a subject matters more than a modifier), e.g. as feature weights in a classifier. This can be done in different ways. Constituency parsing: what is it? * the words of the sentence are leaves (the lowest nodes) * they are merged into larger nodes in an order that makes syntactic sense. Dependency parsing: what is it? * the words of the sentence are nodes; the *dependencies* between them are edges * dependencies come in different types: for example, subject of a verb, object of a verb, adjectival modifier, and so on. Format. There are several formats for recording dependency trees, but the most popular and widely used one is [CoNLL-U](http://universaldependencies.org/format.html). Here is what it looks like (an example from the [Russian Universal Dependencies treebank](https://github.com/UniversalDependencies/UD_Russian-SynTagRus)):
###Code
my_example = """
# sent_id = 2003Armeniya.xml_138
# text = Перспективы развития сферы высоких технологий.
1 Перспективы перспектива NOUN _ Animacy=Inan|Case=Nom|Gender=Fem|Number=Plur 0 ROOT 0:root _
2 развития развитие NOUN _ Animacy=Inan|Case=Gen|Gender=Neut|Number=Sing 1 nmod 1:nmod _
3 сферы сфера NOUN _ Animacy=Inan|Case=Gen|Gender=Fem|Number=Sing 2 nmod 2:nmod _
4 высоких высокий ADJ _ Case=Gen|Degree=Pos|Number=Plur 5 amod 5:amod _
5 технологий технология NOUN _ Animacy=Inan|Case=Gen|Gender=Fem|Number=Plur 3 nmod 3:nmod SpaceAfter=No
6 . . PUNCT _ _ 1 punct 1:punct _
"""
###Output
_____no_output_____
###Markdown
Comments plus a table with 10 columns (tab-separated): * ID * FORM: the token * LEMMA: the base form * UPOS: universal part of speech * XPOS: language-specific part of speech * FEATS: morphological information: case, gender, number, etc. * HEAD: id of the parent token * DEPREL: dependency type, i.e. the relation to the parent token * DEPS: an alternative subgraph (we won't go into it :)) * MISC: everything else. Missing values are represented with `_`. More details about the format are in the [official documentation](http://universaldependencies.org/format.html). User-friendly visualization: an open tool for visualizing, manually annotating and converting to other formats is UD Annotatrix. [Online interface](https://universal-dependencies.linghub.net/annotatrix), [repository](https://github.com/jonorthwash/ud-annotatrix). A treebank is many such sentences. They are usually separated by two line breaks. How to read the data in Python: we use the [conllu](https://github.com/EmilStenstrom/conllu) library.
###Code
!pip3 install conllu
from conllu import parse
help(parse)
sentences = parse(my_example)
sentence = sentences[0]
sentence[0]
sentence[-1]
###Output
_____no_output_____
###Markdown
Visualization. nltk has DependencyGraph, which can draw trees (and do much more). For the visualization to work correctly, it needs an extra dependency: graphviz.
###Code
!apt-get install graphviz
!pip3 install graphviz
from nltk import DependencyGraph
###Output
_____no_output_____
###Markdown
Unlike `conllu`, `DependencyGraph` cannot handle comments, so we have to remove them. It also strictly requires the *ROOT* `deprel` to be in upper case, otherwise it does not find the root.
###Code
sents = []
for sent in my_example.split('\n\n'):
# remove the comment lines
sent = '\n'.join([line for line in sent.split('\n') if not line.startswith('#')])
# replace the deprel for root with upper-case ROOT
sent = sent.replace('\troot\t', '\tROOT\t')
sents.append(sent)
graph = DependencyGraph(tree_str=sents[0])
graph
tree = graph.tree()
print(tree.pretty_print())
###Output
Перспективы
_______|__________
| развития
| |
| сферы
| |
| технологий
| |
. высоких
None
###Markdown
UDPipe. There are different tools for dependency parsing. Today we will look at [UDPipe](http://ufal.mff.cuni.cz/udpipe). UDPipe can parse text using ready-made models (which can be downloaded [here](https://github.com/jwijffels/udpipe.models.ud.2.0/tree/master/inst/udpipe-ud-2.0-170801)) and can train models on your own treebanks. UDPipe actually has three kinds of models: * a tokenizer (split text into sentences and sentences into tokens, and produce a CoNLL-U skeleton) * a tagger (lemmatize and assign parts of speech) * the parser itself (assign each token a `head` and a `deprel`). We will not train new models today (it takes too long); instead we use a ready-made model for Russian. The Python binding. udpipe has a Python binding. It is rather [poorly documented](https://pypi.org/project/ufal.udpipe/), but it can be used directly from Python :)
###Code
!pip install ufal.udpipe
from ufal.udpipe import Model, Pipeline
!wget https://github.com/jwijffels/udpipe.models.ud.2.0/raw/master/inst/udpipe-ud-2.0-170801/russian-ud-2.0-170801.udpipe
model = Model.load("russian-ud-2.0-170801.udpipe") # path to the model
# if loading succeeded, it should look like this (model != None)
model
pipeline = Pipeline(model, 'generic_tokenizer', '', '', '')
example = "Если бы мне платили каждый раз. Каждый раз, когда я думаю о тебе."
parsed = pipeline.process(example)
print(parsed)
###Output
# newdoc
# newpar
# sent_id = 1
# text = Если бы мне платили каждый раз.
1 Если ЕСЛИ SCONJ IN _ 4 mark _ _
2 бы БЫ PART RP _ 4 discourse _ _
3 мне Я PRON PRP Case=Dat|Number=Sing|Person=1 4 iobj _ _
4 платили ПЛАТИТЬ VERB VBC Aspect=Imp|Mood=Ind|Number=Plur|Tense=Past|VerbForm=Fin 0 root _ _
5 каждый КАЖДЫЙ DET DT Animacy=Inan|Case=Acc|Gender=Masc|Number=Sing 6 amod _ _
6 раз РАЗ NOUN NN Animacy=Inan|Case=Acc|Gender=Masc|Number=Sing 4 advmod _ SpaceAfter=No
7 . . PUNCT . _ 4 punct _ _
# sent_id = 2
# text = Каждый раз, когда я думаю о тебе.
1 Каждый КАЖДЫЙ DET DT Animacy=Inan|Case=Acc|Gender=Masc|Number=Sing 2 amod _ _
2 раз РАЗ NOUN NN Animacy=Inan|Case=Acc|Gender=Masc|Number=Sing 6 advmod _ SpaceAfter=No
3 , , PUNCT , _ 6 punct _ _
4 когда КОГДА ADV WRB _ 6 advmod _ _
5 я Я PRON PRP Case=Nom|Number=Sing|Person=1 6 nsubj _ _
6 думаю дУМАТЬ VERB VBC Aspect=Imp|Mood=Ind|Number=Sing|Person=1|Tense=Pres|VerbForm=Fin 0 root _ _
7 о О ADP IN _ 8 case _ _
8 тебе ТЫ PRON PRP Case=Dat|Number=Sing|Person=2 6 obl _ SpaceAfter=No
9 . . PUNCT . _ 6 punct _ SpacesAfter=\n
###Markdown
As we can see, UDPipe tokenized and lemmatized the text, did POS tagging and, of course, the syntactic parsing itself. Command line interface. The binding can be problematic at times, and in general it is quite convenient to use the precompiled `udpipe` utility from the shell.
###Code
!wget https://github.com/ufal/udpipe/releases/download/v1.2.0/udpipe-1.2.0-bin.zip
# !unzip udpipe-1.2.0-bin.zip
!ls udpipe-1.2.0-bin/
###Output
AUTHORS bin-linux64 bin-win64 LICENSE MANUAL.pdf src_lib_only
bindings bin-osx CHANGES MANUAL README
bin-linux32 bin-win32 INSTALL MANUAL.html src
###Markdown
Inside there are binaries for all the popular operating systems; pick yours. For me the path to the binary is: `udpipe-1.2.0-bin/bin-linux64`. Syntax:
###Code
! udpipe-1.2.0-bin/bin-linux64/udpipe --help
###Output
Usage: udpipe-1.2.0-bin/bin-linux64/udpipe [running_opts] model_file [input_files]
udpipe-1.2.0-bin/bin-linux64/udpipe --train [training_opts] model_file [input_files]
udpipe-1.2.0-bin/bin-linux64/udpipe --detokenize [detokenize_opts] raw_text_file [input_files]
Running opts: --accuracy (measure accuracy only)
--input=[conllu|generic_tokenizer|horizontal|vertical]
--immediate (process sentences immediately during loading)
--outfile=output file template
--output=[conllu|epe|matxin|horizontal|plaintext|vertical]
--tokenize (perform tokenization)
--tokenizer=tokenizer options, implies --tokenize
--tag (perform tagging)
--tagger=tagger options, implies --tag
--parse (perform parsing)
--parser=parser options, implies --parse
Training opts: --method=[morphodita_parsito] which method to use
--heldout=heldout data file name
--tokenizer=tokenizer options
--tagger=tagger options
--parser=parser options
Detokenize opts: --outfile=output file template
Generic opts: --version
--help
###Markdown
A typical parsing command looks like this:
###Code
with open('example.txt', 'w') as f:
f.write(example)
! udpipe-1.2.0-bin/bin-linux64/udpipe --tokenize --tag --parse\
russian-ud-2.0-170801.udpipe example.txt > parsed_example.conllu
! cat parsed_example.conllu
###Output
Loading UDPipe model: done.
# newdoc id = example.txt
# newpar
# sent_id = 1
# text = Если бы мне платили каждый раз.
1 Если ЕСЛИ SCONJ IN _ 4 mark _ _
2 бы БЫ PART RP _ 4 discourse _ _
3 мне Я PRON PRP Case=Dat|Number=Sing|Person=1 4 iobj _ _
4 платили ПЛАТИТЬ VERB VBC Aspect=Imp|Mood=Ind|Number=Plur|Tense=Past|VerbForm=Fin 0 root _ _
5 каждый КАЖДЫЙ DET DT Animacy=Inan|Case=Acc|Gender=Masc|Number=Sing 6 amod _ _
6 раз РАЗ NOUN NN Animacy=Inan|Case=Acc|Gender=Masc|Number=Sing 4 advmod _ SpaceAfter=No
7 . . PUNCT . _ 4 punct _ _
# sent_id = 2
# text = Каждый раз, когда я думаю о тебе.
1 Каждый КАЖДЫЙ DET DT Animacy=Inan|Case=Acc|Gender=Masc|Number=Sing 2 amod _ _
2 раз РАЗ NOUN NN Animacy=Inan|Case=Acc|Gender=Masc|Number=Sing 6 advmod _ SpaceAfter=No
3 , , PUNCT , _ 6 punct _ _
4 когда КОГДА ADV WRB _ 6 advmod _ _
5 я Я PRON PRP Case=Nom|Number=Sing|Person=1 6 nsubj _ _
6 думаю дУМАТЬ VERB VBC Aspect=Imp|Mood=Ind|Number=Sing|Person=1|Tense=Pres|VerbForm=Fin 0 root _ _
7 о О ADP IN _ 8 case _ _
8 тебе ТЫ PRON PRP Case=Dat|Number=Sing|Person=2 6 obl _ SpaceAfter=No
9 . . PUNCT . _ 6 punct _ SpacesAfter=\n
###Markdown
If we are only interested in tagging:
###Code
with open('example.txt', 'w') as f:
f.write(example)
! udpipe-1.2.0-bin/bin-linux64/udpipe --tokenize --tag\
russian-ud-2.0-170801.udpipe example.txt > tagged_example.conllu
! cat tagged_example.conllu
###Output
Loading UDPipe model: done.
# newdoc id = example.txt
# newpar
# sent_id = 1
# text = Если бы мне платили каждый раз.
1 Если ЕСЛИ SCONJ IN _ _ _ _ _
2 бы БЫ PART RP _ _ _ _ _
3 мне Я PRON PRP Case=Dat|Number=Sing|Person=1 _ _ _ _
4 платили ПЛАТИТЬ VERB VBC Aspect=Imp|Mood=Ind|Number=Plur|Tense=Past|VerbForm=Fin _ _ _ _
5 каждый КАЖДЫЙ DET DT Animacy=Inan|Case=Acc|Gender=Masc|Number=Sing _ _ _ _
6 раз РАЗ NOUN NN Animacy=Inan|Case=Acc|Gender=Masc|Number=Sing _ _ _ SpaceAfter=No
7 . . PUNCT . _ _ _ _ _
# sent_id = 2
# text = Каждый раз, когда я думаю о тебе.
1 Каждый КАЖДЫЙ DET DT Animacy=Inan|Case=Acc|Gender=Masc|Number=Sing _ _ _ _
2 раз РАЗ NOUN NN Animacy=Inan|Case=Acc|Gender=Masc|Number=Sing _ _ _ SpaceAfter=No
3 , , PUNCT , _ _ _ _ _
4 когда КОГДА ADV WRB _ _ _ _ _
5 я Я PRON PRP Case=Nom|Number=Sing|Person=1 _ _ _ _
6 думаю дУМАТЬ VERB VBC Aspect=Imp|Mood=Ind|Number=Sing|Person=1|Tense=Pres|VerbForm=Fin _ _ _ _
7 о О ADP IN _ _ _ _ _
8 тебе ТЫ PRON PRP Case=Dat|Number=Sing|Person=2 _ _ _ SpaceAfter=No
9 . . PUNCT . _ _ _ _ SpacesAfter=\n
###Markdown
(And then we read the analyzed sentences back in with Python.) These are two ways to work with UDPipe. Choose your fighter! Exercise: write a function that checks whether a sentence consists of a large number of coordinated clauses (one possible sketch is given right after this cell). SVO triples: with syntactic parsing we can extract subject-object-verb triples from sentences, which can be used for information extraction from text.
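One possible sketch for the exercise above (an assumption: we treat "a large number of coordinated clauses" as many tokens attached with the `conj` relation in the CoNLL-U parse; `too_many_conjuncts` and its threshold are hypothetical names/values, while `parse` and `parsed` come from the cells above):
###Code
def too_many_conjuncts(conllu_text, threshold=3):
    # parse the CoNLL-U text and flag any sentence that has more than
    # `threshold` tokens attached via the 'conj' dependency relation
    for sentence in parse(conllu_text):
        n_conj = sum(1 for token in sentence if token['deprel'] == 'conj')
        if n_conj > threshold:
            return True
    return False

too_many_conjuncts(parsed)
###Output
_____no_output_____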
###Code
sent = """1 Собянин _ NOUN _ Animacy=Anim|Case=Nom|Gender=Masc|Number=Sing|fPOS=NOUN++ 2 nsubj _ _
2 открыл _ VERB _ Aspect=Perf|Gender=Masc|Mood=Ind|Number=Sing|Tense=Past|VerbForm=Fin|Voice=Act|fPOS=VERB++ 0 ROOT _ _
3 новый _ ADJ _ Animacy=Inan|Case=Acc|Degree=Pos|Gender=Masc|Number=Sing|fPOS=ADJ++ 4 amod _ _
4 парк _ NOUN _ Animacy=Inan|Case=Acc|Gender=Masc|Number=Sing|fPOS=NOUN++ 2 dobj _ _
5 и _ CONJ _ fPOS=CONJ++ 4 cc _ _
6 детскую _ ADJ _ Case=Acc|Degree=Pos|Gender=Fem|Number=Sing|fPOS=ADJ++ 7 amod _ _
7 площадку _ NOUN _ Animacy=Inan|Case=Acc|Gender=Fem|Number=Sing|fPOS=NOUN++ 4 conj _ _
8 . _ PUNCT . fPOS=PUNCT++. 2 punct _ _"""
###Output
_____no_output_____
###Markdown
Word-word-relation triples:
###Code
graph = DependencyGraph(tree_str=sent)
list(graph.triples())
###Output
_____no_output_____
###Markdown
Subject-object-verb triples:
###Code
def get_sov(sent):
graph = DependencyGraph(tree_str=sent)
sov = {}
for triple in graph.triples():
if triple:
if triple[0][1] == 'VERB':
sov[triple[0][0]] = {'subj':'','obj':''}
for triple in graph.triples():
if triple:
if triple[1] == 'nsubj':
if triple[0][1] == 'VERB':
sov[triple[0][0]]['subj'] = triple[2][0]
if triple[1] == 'dobj':
if triple[0][1] == 'VERB':
sov[triple[0][0]]['obj'] = triple[2][0]
return sov
sov = get_sov(sent)
print(sov)
###Output
{'открыл': {'obj': 'парк', 'subj': 'Собянин'}}
|
python-numpy/6. NumPy.ipynb | ###Markdown
6. NumPy. NumPy is a powerful mathematical computing library for Python that can handle operations such as matrix multiplication and statistical functions. (If you are running NumPy on your own computer, please install NumPy first:)
###Code
%pip install numpy
###Output
_____no_output_____
###Markdown
To use a library in Python, the `import ... as ...` syntax is all you need.
###Code
import numpy as np
###Output
_____no_output_____
###Markdown
Arrays: `np.array`. Why do we use NumPy? Because it can store and manipulate large amounts of data in an array structure. A basic array can be a 1-dimensional vector. Below are several ways to create vector arrays:
###Code
a = np.array([1, 2, 3, 4])
b = np.array([4, 3, 2, 1])
c = np.arange(4) # equivalent to Python range(4)
d = np.zeros(4) # length-4 vector of all zeros
e = np.ones(4) # length-4 vector of all ones
print(a)
print(b)
print(c)
print(d)
print(e)
###Output
[1 2 3 4]
[4 3 2 1]
[0 1 2 3]
[0. 0. 0. 0.]
[1. 1. 1. 1.]
###Markdown
The benefit of NumPy is that we can apply all kinds of operations to large batches of data, such as addition, subtraction, multiplication and division:
###Code
# operate on each element
print(a + 1)
print(a - 1)
print(a * 1)
print(a / 1)
# operate on corresponding elements of a and b
print(a + b)
print(a - b)
print(a * b)
print(a / b)
###Output
[2 3 4 5]
[0 1 2 3]
[1 2 3 4]
[1. 2. 3. 4.]
[5 5 5 5]
[-3 -1 1 3]
[4 6 6 4]
[0.25 0.66666667 1.5 4. ]
###Markdown
Convenient, isn't it? Note that we no longer need a `for` loop to operate on a whole set of data. We learned the dot product in linear algebra, so how do we compute the dot product $a \cdot b = [1, 2, 3, 4] \cdot [4, 3, 2, 1]$?
###Code
np.dot(a, b)
###Output
_____no_output_____
###Markdown
2-D matrices and array shapes. We have seen how to create 1-dimensional vectors as arrays, so how do we create a 2-dimensional matrix? Just use nested `list`s:
###Code
f = np.array([[1, 2, 3], [4, 5, 6]])
print(f)
###Output
[[1 2 3]
[4 5 6]]
###Markdown
The `f` above is a 2 × 3 matrix. To inspect its shape and related information, use `f.ndim`, `f.shape` and `f.size`:
###Code
print(f.ndim) # a 2-dimensional matrix
print(f.shape) # shape of (2 × 3)
print(f.size) # 6 elements in total
###Output
2
(2, 3)
6
###Markdown
So what if we want to change the shape of `f`?
###Code
print(f)
###Output
[[1 2 3]
[4 5 6]]
###Markdown
To transpose the matrix, use `.transpose()`:
###Code
print(f.transpose())
###Output
[[1 4]
[2 5]
[3 6]]
###Markdown
Flatten the matrix into a 1-dimensional vector with `.flatten()`:
###Code
print(f.flatten())
###Output
[1 2 3 4 5 6]
###Markdown
Convert to a given shape (3 × 2) with `.reshape(shape)`:
###Code
print(f.reshape((3, 2)))
###Output
[[1 2]
[3 4]
[5 6]]
###Markdown
Likewise, we can perform all kinds of computations on matrices. For example, to compute the product of `f` and its transpose, $\mathbf{F}\mathbf{F}^\top$, we can use `np.matmul`:
###Code
ft = f.transpose()
np.matmul(f, ft)
###Output
_____no_output_____
###Markdown
Or use `@` as a more concise way to write the matrix product:
###Code
f @ ft
###Output
_____no_output_____
###Markdown
Array indexing. To access individual elements of a 1-dimensional vector, we can use the same method as for a `list`, `a[index]` (note that the index starts at 0):
###Code
a = np.array([1, 2, 3, 4, 5, 6, 7, 8]) # array of length 8
print(a)
print(a[1]) # the 2nd element
print(a[1:]) # all elements from the 2nd (inclusive) onwards
print(a[:1]) # all elements before the 2nd (exclusive)
print(a[1:3]) # elements from the 2nd (inclusive) to the 4th (exclusive)
###Output
[1 2 3 4 5 6 7 8]
2
[2 3 4 5 6 7 8]
[1]
[2 3]
###Markdown
But what about multi-dimensional arrays, such as a 2-D matrix or a 3-D tensor? How do we access their elements? We can separate the index for each axis with commas `,` inside the `[]`:
###Code
a = np.arange(9).reshape((3, 3)) # 3 × 3 matrix
print(a)
print(a[1, 2]) # row 2, column 3
print(a[1, 1:]) # row 2, columns 2 and above (inclusive)
print(a[1, :1]) # row 2, columns below 2 (exclusive)
print(a[1, :]) # row 2, all columns
print(a[1:, 0]) # rows 2 onwards, column 1
###Output
[[0 1 2]
[3 4 5]
[6 7 8]]
5
[4 5]
[3]
[3 4 5]
[3 6]
###Markdown
Similarly, the same kind of operations work for 3-D tensors:
###Code
a = np.arange(27).reshape((3, 3, 3)) # 3 × 3 × 3 tensor
print(a)
print(a[0, 1, 2])
print(a[0, :, 2])
# and so on
###Output
[[[ 0 1 2]
[ 3 4 5]
[ 6 7 8]]
[[ 9 10 11]
[12 13 14]
[15 16 17]]
[[18 19 20]
[21 22 23]
[24 25 26]]]
5
[2 5 8]
###Markdown
Other functions. NumPy provides many convenient mathematical functions; see the code examples below:
###Code
a = np.array([1, 2, 3])
print(np.log(a)) # natural logarithm
print(np.exp(a)) # e to the power a
print(np.sin(a)) # sine
print(np.cos(a)) # cosine
print(np.sum(a)) # sum of the vector a (can also be written a.sum(), same below)
print(np.mean(a)) # mean of a
print(np.std(a)) # standard deviation of a
print(np.max(a)) # maximum of a
print(np.min(a)) # minimum of a
print(np.argmax(a)) # position of the maximum of a
print(np.argmin(a)) # position of the minimum of a
###Output
[0. 0.69314718 1.09861229]
[ 2.71828183 7.3890561 20.08553692]
[0.84147098 0.90929743 0.14112001]
[ 0.54030231 -0.41614684 -0.9899925 ]
6
2.0
0.816496580927726
3
1
2
0
###Markdown
These functions are handy, but you don't need to memorize them all at once: just glance at their names when you need them, or look them up online! Task: can you compute the natural logarithm of the value in row 1, column 2 of the matrix below?
###Code
a = np.random.randn(3, 3) # np.random.randn creates an array of random values
a
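# One possible answer to the task above (a sketch): with 0-based indexing,
# "row 1, column 2" is a[0, 1]; note that np.log of a negative draw returns nan
np.log(a[0, 1])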
###Output
_____no_output_____ |
vanderplas/PythonDataScienceHandbook-master/notebooks/02.04-Computation-on-arrays-aggregates.ipynb | ###Markdown
*This notebook contains an excerpt from the [Python Data Science Handbook](http://shop.oreilly.com/product/0636920034919.do) by Jake VanderPlas; the content is available [on GitHub](https://github.com/jakevdp/PythonDataScienceHandbook).**The text is released under the [CC-BY-NC-ND license](https://creativecommons.org/licenses/by-nc-nd/3.0/us/legalcode), and code is released under the [MIT license](https://opensource.org/licenses/MIT). If you find this content useful, please consider supporting the work by [buying the book](http://shop.oreilly.com/product/0636920034919.do)!* Aggregations: Min, Max, and Everything In Between Often when faced with a large amount of data, a first step is to compute summary statistics for the data in question.Perhaps the most common summary statistics are the mean and standard deviation, which allow you to summarize the "typical" values in a dataset, but other aggregates are useful as well (the sum, product, median, minimum and maximum, quantiles, etc.).NumPy has fast built-in aggregation functions for working on arrays; we'll discuss and demonstrate some of them here. Summing the Values in an ArrayAs a quick example, consider computing the sum of all values in an array.Python itself can do this using the built-in ``sum`` function:
###Code
import numpy as np
L = np.random.random(100)
sum(L)
###Output
_____no_output_____
###Markdown
The syntax is quite similar to that of NumPy's ``sum`` function, and the result is the same in the simplest case:
###Code
np.sum(L)
###Output
_____no_output_____
###Markdown
However, because it executes the operation in compiled code, NumPy's version of the operation is computed much more quickly:
###Code
big_array = np.random.rand(1000000)
%timeit sum(big_array)
%timeit np.sum(big_array)
###Output
10 loops, best of 3: 104 ms per loop
1000 loops, best of 3: 442 µs per loop
###Markdown
Be careful, though: the ``sum`` function and the ``np.sum`` function are not identical, which can sometimes lead to confusion! In particular, their optional arguments have different meanings, and ``np.sum`` is aware of multiple array dimensions, as we will see in the following section. Minimum and Maximum Similarly, Python has built-in ``min`` and ``max`` functions, used to find the minimum value and maximum value of any given array:
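A quick sketch of the ``sum``/``np.sum`` difference just mentioned (an illustrative aside; the small arrays here are made up): the built-in treats its second argument as a start value, while NumPy treats it as an axis.
###Code
print(sum([1, 2, 3], 10))           # built-in sum: 10 is a start value -> 16
print(np.sum([1, 2, 3]))            # np.sum adds the elements -> 6
print(np.sum([[1, 2], [3, 4]], 1))  # np.sum's second positional argument is the axis -> [3 7]
###Output
_____no_output_____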
###Code
min(big_array), max(big_array)
###Output
_____no_output_____
###Markdown
NumPy's corresponding functions have similar syntax, and again operate much more quickly:
###Code
np.min(big_array), np.max(big_array)
%timeit min(big_array)
%timeit np.min(big_array)
###Output
10 loops, best of 3: 82.3 ms per loop
1000 loops, best of 3: 497 µs per loop
###Markdown
For ``min``, ``max``, ``sum``, and several other NumPy aggregates, a shorter syntax is to use methods of the array object itself:
###Code
print(big_array.min(), big_array.max(), big_array.sum())
###Output
1.17171281366e-06 0.999997678497 499911.628197
###Markdown
Whenever possible, make sure that you are using the NumPy version of these aggregates when operating on NumPy arrays! Multi dimensional aggregatesOne common type of aggregation operation is an aggregate along a row or column.Say you have some data stored in a two-dimensional array:
###Code
M = np.random.random((3, 4))
print(M)
###Output
[[ 0.8967576 0.03783739 0.75952519 0.06682827]
[ 0.8354065 0.99196818 0.19544769 0.43447084]
[ 0.66859307 0.15038721 0.37911423 0.6687194 ]]
###Markdown
By default, each NumPy aggregation function will return the aggregate over the entire array:
###Code
M.sum()
###Output
_____no_output_____
###Markdown
Aggregation functions take an additional argument specifying the *axis* along which the aggregate is computed. For example, we can find the minimum value within each column by specifying ``axis=0``:
###Code
M.min(axis=0)
###Output
_____no_output_____
###Markdown
The function returns four values, corresponding to the four columns of numbers.Similarly, we can find the maximum value within each row:
###Code
M.max(axis=1)
###Output
_____no_output_____
###Markdown
The way the axis is specified here can be confusing to users coming from other languages.The ``axis`` keyword specifies the *dimension of the array that will be collapsed*, rather than the dimension that will be returned.So specifying ``axis=0`` means that the first axis will be collapsed: for two-dimensional arrays, this means that values within each column will be aggregated. Other aggregation functionsNumPy provides many other aggregation functions, but we won't discuss them in detail here.Additionally, most aggregates have a ``NaN``-safe counterpart that computes the result while ignoring missing values, which are marked by the special IEEE floating-point ``NaN`` value (for a fuller discussion of missing data, see [Handling Missing Data](03.04-Missing-Values.ipynb)).Some of these ``NaN``-safe functions were not added until NumPy 1.8, so they will not be available in older NumPy versions.The following table provides a list of useful aggregation functions available in NumPy:|Function Name | NaN-safe Version | Description ||-------------------|---------------------|-----------------------------------------------|| ``np.sum`` | ``np.nansum`` | Compute sum of elements || ``np.prod`` | ``np.nanprod`` | Compute product of elements || ``np.mean`` | ``np.nanmean`` | Compute mean of elements || ``np.std`` | ``np.nanstd`` | Compute standard deviation || ``np.var`` | ``np.nanvar`` | Compute variance || ``np.min`` | ``np.nanmin`` | Find minimum value || ``np.max`` | ``np.nanmax`` | Find maximum value || ``np.argmin`` | ``np.nanargmin`` | Find index of minimum value || ``np.argmax`` | ``np.nanargmax`` | Find index of maximum value || ``np.median`` | ``np.nanmedian`` | Compute median of elements || ``np.percentile`` | ``np.nanpercentile``| Compute rank-based statistics of elements || ``np.any`` | N/A | Evaluate whether any elements are true || ``np.all`` | N/A | Evaluate whether all elements are true |We will see these aggregates often throughout the rest of the book. Example: What is the Average Height of US Presidents? Aggregates available in NumPy can be extremely useful for summarizing a set of values.As a simple example, let's consider the heights of all US presidents.This data is available in the file *president_heights.csv*, which is a simple comma-separated list of labels and values:
###Code
!head -4 data/president_heights.csv
###Output
order,name,height(cm)
1,George Washington,189
2,John Adams,170
3,Thomas Jefferson,189
###Markdown
We'll use the Pandas package, which we'll explore more fully in [Chapter 3](03.00-Introduction-to-Pandas.ipynb), to read the file and extract this information (note that the heights are measured in centimeters).
###Code
import pandas as pd
data = pd.read_csv('data/president_heights.csv')
heights = np.array(data['height(cm)'])
print(heights)
###Output
[189 170 189 163 183 171 185 168 173 183 173 173 175 178 183 193 178 173
174 183 183 168 170 178 182 180 183 178 182 188 175 179 183 193 182 183
177 185 188 188 182 185]
###Markdown
Now that we have this data array, we can compute a variety of summary statistics:
###Code
print("Mean height: ", heights.mean())
print("Standard deviation:", heights.std())
print("Minimum height: ", heights.min())
print("Maximum height: ", heights.max())
###Output
Mean height: 179.738095238
Standard deviation: 6.93184344275
Minimum height: 163
Maximum height: 193
###Markdown
Note that in each case, the aggregation operation reduced the entire array to a single summarizing value, which gives us information about the distribution of values.We may also wish to compute quantiles:
###Code
print("25th percentile: ", np.percentile(heights, 25))
print("Median: ", np.median(heights))
print("75th percentile: ", np.percentile(heights, 75))
###Output
25th percentile: 174.25
Median: 182.0
75th percentile: 183.0
###Markdown
We see that the median height of US presidents is 182 cm, or just shy of six feet.Of course, sometimes it's more useful to see a visual representation of this data, which we can accomplish using tools in Matplotlib (we'll discuss Matplotlib more fully in [Chapter 4](04.00-Introduction-To-Matplotlib.ipynb)). For example, this code generates the following chart:
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn; seaborn.set() # set plot style
plt.hist(heights)
plt.title('Height Distribution of US Presidents')
plt.xlabel('height (cm)')
plt.ylabel('number');
###Output
_____no_output_____ |
09 More Trees/homework/Skinner_Barnaby_9_2.ipynb | ###Markdown
Using the readings, try and create a RandomForestClassifier for the iris dataset
###Code
# imports used below
from sklearn import datasets, tree, metrics
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

iris = datasets.load_iris()
iris.keys()
X = iris.data[:,2:]
y = iris.target
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=42, test_size=0.25,train_size=0.75)
#What is random_state?
#What is stratify?
#What is this doing in the moon example exactly?
#X, y = make_moons(n_samples=100, noise=0.25, random_state=3)
forest = RandomForestClassifier(n_estimators=5, random_state=100)
forest.fit(X_train, y_train)
print("accuracy on training set: %f" % forest.score(X_train, y_train))
print("accuracy on test set: %f" % forest.score(X_test, y_test))
###Output
accuracy on training set: 0.990991
accuracy on test set: 0.948718
###Markdown
Using a 25/75 training/test split, compare the results with the original decision tree model and describe the result to the best of your ability in your PR
###Code
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, train_size=0.75)
dt = tree.DecisionTreeClassifier()
dt = dt.fit(X_train,y_train)
y_pred=dt.predict(X_test)
Accuracy_score = metrics.accuracy_score(y_test, y_pred)
Accuracy_score
#Comments on RandomForestClassifiers & Original Decision Tree Model
#While the Random Forest result is consistent (varying only slightly depending on how you choose the random_state or the n_estimators),
#the result of the original decision tree model varies a lot.
#The random_state defines how random the versions of the data are that the modelling takes into consideration, and
#the n_estimators regulates how many "random" datasets are used. It's fascinating to see how this makes the
#result so much more consistent than the original decision tree model.
#General comments on the homework
#I really enjoyed this homework and it really helped me understand, what is going on under the hood.
#I found this reading while I was doing the homework. It looks nice to go deeper? Do you know the
#guy? https://github.com/amueller/introduction_to_ml_with_python
#I feel I now need practice on real life dirty data sets, to fully understand how predictions models
#can work. I take my comments back, that I can't see how I can implement this into my reporting. I can. But how
#can I do this technically? i.e. with the data on PERM visas? Say input nationality, wage, lawyer, job title, and get a reply what the chances could be of
#getting a work visa? I also feel a little shaky on how I need to prep my data to feed in it into the predictor
#correctly.
#Comments on classifier
#Questions:
#Not sure why it's 10fold cross validation, cv is set at 5?
#Why are we predicting the
###Output
_____no_output_____ |
enrichment_analysis.ipynb | ###Markdown
Cluster independent variables
###Code
annotation_df = pd.read_csv(f"{ANNOTATIONS_DIRECTORY}/BioGRID-SGD_CC_sc.csv")
GO_population = {go_id for go_id in set(annotation_df.GO_ID)
if 5 <= len(annotation_df[annotation_df.GO_ID == go_id]) <= 500}
annotation_df = annotation_df[annotation_df.GO_ID.isin(GO_population)]
# annotation_df = annotation_df[annotation_df.Level > -1]
# GO_population = set(annotation_df.GO_ID)
# Conversion dictionaries
int2GO = dict(enumerate(GO_population))
GO2int = dict(zip(int2GO.values(), int2GO.keys()))
GO2genes = {go_id:set(annotation_df.Systematic_ID[annotation_df.GO_ID == go_id])
for go_id in GO_population}
gene2GO = {gene :set(annotation_df.GO_ID[annotation_df.Systematic_ID == gene])
for gene in set(annotation_df.Systematic_ID)}
###Output
_____no_output_____
###Markdown
Preparation Let $N$ be the number of genes in the PPI. Each GO-term defines a 'state' in which $K$ proteins are annotated with this term; these are seen as _successes_. A given cluster defines an 'experiment', in which the number of draws, $n$, corresponds to the length of the cluster. The number of _successful draws_ $k$ corresponds to the number of annotated genes in the given cluster.
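As a quick sanity check of the tail probability used below (a toy sketch with made-up numbers; the real $N$, $K$, $n$ and $k$ come from the clusters), the enrichment p-value is $P(X \ge k)$ under the hypergeometric distribution:
###Code
from scipy.stats import hypergeom
# toy numbers: 1000 genes in total, 50 annotated with a GO term,
# and a cluster of 20 genes of which 5 carry the annotation
N_toy, K_toy, n_toy, k_toy = 1000, 50, 20, 5
1 - hypergeom.cdf(k=k_toy - 1, M=N_toy, N=n_toy, n=K_toy)
###Output
_____no_output_____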
###Code
# List of success states
list_of_success_states = list(GO2genes.values())
# This will be our K, see below. Reshaped to fit the shape of k 'array_of_observed_successes'
array_of_total_successes = np.array(list(map(len,list_of_success_states))).reshape(-1,1)
###Output
_____no_output_____
###Markdown
Here we GO
###Code
MIN_CLUSTERS = 2
MAX_CLUSTERS = 20
alpha = [0.01, 0.05, 0.1]
hc_cluster_coverage = defaultdict(partial(np.ndarray, MAX_CLUSTERS-MIN_CLUSTERS))
mc_cluster_coverage = defaultdict(partial(np.ndarray, MAX_CLUSTERS-MIN_CLUSTERS))
lc_cluster_coverage = defaultdict(partial(np.ndarray, MAX_CLUSTERS-MIN_CLUSTERS))
hc_GO_coverage = defaultdict(partial(np.ndarray, MAX_CLUSTERS-MIN_CLUSTERS))
mc_GO_coverage = defaultdict(partial(np.ndarray, MAX_CLUSTERS-MIN_CLUSTERS))
lc_GO_coverage = defaultdict(partial(np.ndarray, MAX_CLUSTERS-MIN_CLUSTERS))
hc_gene_coverage = defaultdict(partial(np.ndarray, MAX_CLUSTERS-MIN_CLUSTERS))
mc_gene_coverage = defaultdict(partial(np.ndarray, MAX_CLUSTERS-MIN_CLUSTERS))
lc_gene_coverage = defaultdict(partial(np.ndarray, MAX_CLUSTERS-MIN_CLUSTERS))
all_distances = ['euclidean', 'cityblock', 'seuclidean', 'sqeuclidean',
'cosine', 'correlation', 'chebyshev', 'canberra',
'braycurtis', 'mahalanobis']
ALL_distances = [A+B for (A,B) in product(['GDV_', 'GDV_zscore_', 'GCV_', 'GCV_zscore_'], all_distances)]
METHOD = "kmedoids"
for distance in ['tvd0123']:
print(distance)
MATRIX_NAME = f"sc_BioGRID_{distance}"
t1 = time.time()
for i, n_clusters in enumerate(range(2, MAX_CLUSTERS)):
with open(f"{CLUSTERS_DIRECTORY}/{METHOD}/{MATRIX_NAME}_{n_clusters}.txt", 'r') as f:
clusters = list(map(str.split, f))
list_of_experiments = list(map(set,clusters))
# For each GO term and cluster we get an experiment
array_of_observed_successes = np.array([[len(draws & success_states) for draws in list_of_experiments]
for success_states in list_of_success_states])
N = sum(map(len,clusters)) # PPI size, i.e. number of all genes that appear in a cluster
K = array_of_total_successes # defined in section 'Preparation'
n = list(map(len, clusters)) # cluster lengths
k = array_of_observed_successes
# scipy has a really messed up nomeclature...
p_values_array = 1-hypergeom.cdf(k=k-1, M=N, N=n, n=K)
p_values_df = pd.DataFrame(p_values_array, index=GO_population)
GO_index = p_values_df.index
m = p_values_array.size
hc_enrichment_df = p_values_df < alpha[0]/m
mc_enrichment_df = p_values_df < alpha[1]/m
lc_enrichment_df = p_values_df < alpha[2]/m
# Calculate cluster coverage
hc_cluster_coverage[distance][i] = sum(hc_enrichment_df.any())/n_clusters
mc_cluster_coverage[distance][i] = sum(mc_enrichment_df.any())/n_clusters
lc_cluster_coverage[distance][i] = sum(lc_enrichment_df.any())/n_clusters
# Calculate GO-term coverage
hc_GO_coverage[distance][i] = sum(hc_enrichment_df.any(axis=1))/len(GO_population)
mc_GO_coverage[distance][i] = sum(mc_enrichment_df.any(axis=1))/len(GO_population)
lc_GO_coverage[distance][i] = sum(lc_enrichment_df.any(axis=1))/len(GO_population)
# Calculate gene coverage
hc_gene_coverage[distance][i] = sum(1 for (i, cluster) in enumerate(clusters) for gene in cluster
if gene2GO.get(gene, set()) & set(GO_index[hc_enrichment_df[i]]))/N
mc_gene_coverage[distance][i] = sum(1 for (i, cluster) in enumerate(clusters) for gene in cluster
if gene2GO.get(gene, set()) & set(GO_index[mc_enrichment_df[i]]))/N
lc_gene_coverage[distance][i] = sum(1 for (i, cluster) in enumerate(clusters) for gene in cluster
if gene2GO.get(gene, set()) & set(GO_index[lc_enrichment_df[i]]))/N
t2 = time.time()
print(f'{n_clusters}: {t2-t1:.2f}sec', end='\r')
list(map(len,clusters)) #tijana
list(map(len,clusters)) #tvd0123
plot_distances = ['tijana', 'tvd0123']
#Cluster coverage
fig, ax = plt.subplots(figsize=(12,9))
fig.patch.set_alpha(0)
fig.subplots_adjust(hspace = 0.4)
Blues = iter(sns.color_palette("Blues",6)[::-1])
Reds = iter(sns.color_palette("Reds", 6)[::-1])
for distance in plot_distances:
if distance.startswith('GDV'):
color = next(Reds)
elif distance.startswith('GCV'):
color = next(Blues)
ax.plot(range(2,MAX_CLUSTERS), mc_cluster_coverage[distance],
label=f'${name2string[distance]}$',
linewidth=2.5,
# color=color,
alpha=0.75
);
ax.fill_between(range(2,MAX_CLUSTERS),
hc_cluster_coverage[distance],
lc_cluster_coverage[distance],
alpha=0.1,
# color=color
);
ax.set_title('Cluster coverage', fontsize=28)
ax.patch.set_alpha(0)
ax.set_xlabel('# clusters', fontsize=24)
ax.set_ylabel('% enriched', fontsize=24)
ax.tick_params(axis='both', which='major', labelsize=24)
ax.spines['left'].set_linewidth(2.5)
ax.spines['left'].set_color('black')
ax.spines['bottom'].set_linewidth(2.5)
ax.spines['bottom'].set_color('black')
ax.legend(fontsize=18, shadow=True, facecolor=[0.95, 0.95, 0.95, 0]);
fig.savefig(f"{DATA_DIRECTORY}/plots/dummy1.png")
#Cluster coverage
fig, ax = plt.subplots(figsize=(12,9))
fig.patch.set_alpha(0)
fig.subplots_adjust(hspace = 0.4)
for distance in plot_distances:
ax.plot(range(2,MAX_CLUSTERS), mc_GO_coverage[distance],
label=f'${name2string[distance]}$',
linewidth=2.5);
ax.fill_between(range(2,MAX_CLUSTERS),
hc_GO_coverage[distance],
lc_GO_coverage[distance],
alpha=0.1);
ax.set_title('GO-term coverage', fontsize=28)
ax.patch.set_alpha(0)
ax.set_xlabel('# clusters', fontsize=24)
ax.set_ylabel('% enriched', fontsize=24)
ax.tick_params(axis='both', which='major', labelsize=24)
ax.spines['left'].set_linewidth(2.5)
ax.spines['left'].set_color('black')
ax.spines['bottom'].set_linewidth(2.5)
ax.spines['bottom'].set_color('black')
ax.legend(fontsize=18, shadow=True, facecolor=[0.95, 0.95, 0.95, 0]);
fig.savefig(f"{DATA_DIRECTORY}/plots/dummy2.png")
#Cluster coverage
fig, ax = plt.subplots(figsize=(12,9))
fig.patch.set_alpha(0)
fig.subplots_adjust(hspace = 0.4)
for distance in plot_distances:
ax.plot(range(2,MAX_CLUSTERS), mc_gene_coverage[distance],
label=f'${name2string[distance]}$',
linewidth=2.5);
ax.fill_between(range(2,MAX_CLUSTERS),
hc_gene_coverage[distance],
lc_gene_coverage[distance],
alpha=0.1);
ax.set_title('gene-term coverage', fontsize=28)
ax.patch.set_alpha(0)
ax.set_xlabel('# clusters', fontsize=24)
ax.set_ylabel('% enriched', fontsize=24)
ax.tick_params(axis='both', which='major', labelsize=24)
ax.spines['left'].set_linewidth(2.5)
ax.spines['left'].set_color('black')
ax.spines['bottom'].set_linewidth(2.5)
ax.spines['bottom'].set_color('black')
ax.legend(fontsize=18, shadow=True, facecolor=[0.95, 0.95, 0.95, 0]);
fig.savefig(f"{DATA_DIRECTORY}/plots/dummy3.png")
###Output
_____no_output_____ |
Loyers++/Loyers++.ipynb | ###Markdown
Rent prediction. We want to estimate rental prices from the surface and arrondissement data. The price is a continuous value, so we only use regression algorithms: * linear regression with 1 feature (surface) * linear regression with 2 features (surface + arrondissement) * polynomial regression with 2 features * regression with k-nearest neighbors
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
###Output
_____no_output_____
###Markdown
Loading the data
###Code
house_data_source = pd.read_csv('house_data.csv')
print(house_data_source.shape)
house_data_source.head()
###Output
(827, 3)
###Markdown
Data cleaning. Rows with missing values are not taken into account.
###Code
house_data = house_data_source.dropna(axis=0, how='any')
print(house_data.shape)
###Output
(822, 3)
###Markdown
First look at the data
###Code
sns.pairplot(data=house_data, hue="arrondissement");
###Output
_____no_output_____
###Markdown
Training/testing set split. We use the function provided by sklearn.
###Code
from sklearn import linear_model
from sklearn.model_selection import train_test_split
# for the features, we take surface and arrondissement (everything except the price)
data = house_data.drop('price', axis=1)
# the variable to predict is the price
target = house_data['price']
xtrain_source, xtest_source, ytrain_source, ytest_source = train_test_split(data, target, test_size=0.2)
[print(x.shape) for x in [xtrain_source, xtest_source, ytrain_source, ytest_source]]
error = {}
pass
###Output
(657, 2)
(165, 2)
(657,)
(165,)
###Markdown
Linear regression with a single feature. This is the same model as initially proposed in the exercise. It is included for comparison only.
###Code
# restrict to a single feature
xtrain = xtrain_source.copy().drop('arrondissement', axis=1)
xtest = xtest_source.copy().drop('arrondissement', axis=1)
ytrain = ytrain_source.copy()
ytest = ytest_source.copy()
###Output
_____no_output_____
###Markdown
Computing the regression
###Code
regr = linear_model.LinearRegression()
regr.fit(xtrain, ytrain)
###Output
_____no_output_____
###Markdown
Error rate
###Code
error['Linear Regression w/ 1 feature'] = 1 - regr.score(xtest, ytest)
print('Erreur: %f' % error['Linear Regression w/ 1 feature'])
fig = plt.figure(figsize=(8,5))
ax = plt.axes(facecolor='#f3f3f3')
plt.grid(color='w', linestyle='dashed')
plt.scatter(xtrain['surface'], ytrain, alpha = .8, marker = '+', label='Données entraînement')
plt.scatter(xtest['surface'], ytest, alpha = .8, marker = 'o', label='Données de test')
plt.plot([0,400], regr.predict([[0],[400]]), color='red', linestyle='dotted', label='Regression linéaire')
plt.title("Loyers vs. Surface")
plt.legend(loc='best')
ax = ax.set(xlabel='Surface', ylabel='Loyer')
###Output
_____no_output_____
###Markdown
Improvement 1: linear regression with two features. This time we use both features, surface and arrondissement.
###Code
xtrain = xtrain_source.copy()
xtest = xtest_source.copy()
ytrain = ytrain_source.copy()
ytest = ytest_source.copy()
###Output
_____no_output_____
###Markdown
Computing the regression
###Code
regr_2_features = linear_model.LinearRegression()
regr_2_features.fit(xtrain, ytrain)
###Output
_____no_output_____
###Markdown
Error rate
###Code
error['Linear Regression w/ 2 features'] = 1 - regr_2_features.score(xtest, ytest)
print('Erreur: %f' % error['Linear Regression w/ 2 features'])
###Output
Erreur: 0.173871
###Markdown
This is better than with a single feature. Improvement 2: polynomial regression with two features. We still use both features and try to fit a polynomial of degree > 1 to the data. The degree of the polynomial used for the regression is a hyperparameter.
###Code
# we recreate a copy of the dataframe each time
xtrain = xtrain_source.copy()
xtest = xtest_source.copy()
ytrain = ytrain_source.copy()
ytest = ytest_source.copy()
###Output
_____no_output_____
###Markdown
Computing the regression. We run a first regression test with degree 2 to validate the model.
###Code
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import Pipeline
model = Pipeline([('poly', PolynomialFeatures(degree=2)),
('linear', linear_model.LinearRegression(fit_intercept=False))])
model.fit(xtrain, ytrain)
###Output
_____no_output_____
###Markdown
Error rate
###Code
error['Polynomial Regression degree 2 w/ 2 features'] = 1 - model.score(xtest, ytest)
print('Erreur: %f' % error['Polynomial Regression degree 2 w/ 2 features'])
###Output
Erreur: 0.116996
###Markdown
This is better than with a single feature or with a linear regression. Varying the degree.
###Code
xtrain = xtrain_source.copy()
xtest = xtest_source.copy()
ytrain = ytrain_source.copy()
ytest = ytest_source.copy()
errors_per_degree = []
degrees = range(1,7)
for degree in degrees:
model = Pipeline([('poly', PolynomialFeatures(degree=degree)),
('linear', linear_model.LinearRegression(fit_intercept=False))])
errors_per_degree.append(100*(1 - model.fit(xtrain, ytrain).score(xtest, ytest)))
fig = plt.figure(figsize=(8,5))
ax = plt.axes(facecolor='#f3f3f3')
plt.grid(color='w', linestyle='dashed')
plt.plot(degrees, errors_per_degree, label='Erreur')
plt.title("Taux d'erreur vs degré de regression")
plt.legend(loc='best')
ax.set(xlabel='Degré du polynome', ylabel='Erreur')
ax = ax.set_yscale('log')
plt.show()
###Output
_____no_output_____
###Markdown
Optimizing the model. We look at the results obtained as a function of the polynomial degree.
###Code
pd.DataFrame(data={'degree': degrees, 'error_rate': errors_per_degree})
import operator
min_index, min_value = min(enumerate(errors_per_degree), key=operator.itemgetter(1))
error['Polynomial Regression degree %d w/ 2 features' % (min_index+1)] = min_value / 100
print('Erreur: %f' % error['Polynomial Regression degree %d w/ 2 features' % (min_index+1)])
###Output
Erreur: 0.107073
###Markdown
This is the best result obtained so far. Improvement 3: k-NN regression. We still use both features and try the k-nearest-neighbors model in its "regression" version. The number of neighbors used for k-NN is a hyperparameter. Varying the model. We vary k to optimize the model.
###Code
from sklearn.neighbors import KNeighborsRegressor
xtrain = xtrain_source.copy()
xtest = xtest_source.copy()
ytrain = ytrain_source.copy()
ytest = ytest_source.copy()
errors_per_neighbor_number = []
hyper_params = range(2,15)
for k in hyper_params:
knn = KNeighborsRegressor(k)
errors_per_neighbor_number.append(100*(1 - knn.fit(xtrain, ytrain).score(xtest, ytest)))
fig = plt.figure(figsize=(8,5))
ax = plt.axes(facecolor='#f3f3f3')
plt.grid(color='w', linestyle='dashed')
plt.plot(hyper_params, errors_per_neighbor_number, label='Erreur')
plt.title("Taux d'erreur vs nombre de voisins")
plt.legend(loc='best')
ax.set(xlabel='Nombre de voisins', ylabel='Erreur')
ax = ax.set_yscale('linear')
plt.show()
###Output
_____no_output_____
###Markdown
Optimizing the model. We look at the results obtained as a function of the number of neighbors.
###Code
pd.DataFrame(data={'neighbors': hyper_params, 'error_rate': errors_per_neighbor_number})
import operator
min_index, min_value = min(enumerate(errors_per_neighbor_number), key=operator.itemgetter(1))
error['k-NN regressor (k=%d) w/ 2 features' % (min_index+2)] = min_value / 100
print('Erreur: %f' % error['k-NN regressor (k=%d) w/ 2 features' % (min_index+2)])
###Output
Erreur: 0.121701
###Markdown
The result is quite close to the polynomial regression. Conclusion. Here are the error rates obtained for each method.
###Code
s = pd.Series(error, name='Error')
s.index.name = 'Model'
s.reset_index()
###Output
_____no_output_____ |
Flipkart Scraper.ipynb | ###Markdown
browser = webdriver.Chrome(executable_path =r"C:\Users\lodha\Downloads\Software Installers\chromedriver.exe")
###Code
browser.get(url)
images = browser.find_elements_by_tag_name('img')
len(images)
import pandas as pd
df = pd.read_csv("flipkart_img_urls.csv")
image_urls = list(df['0'].values)
from selenium import webdriver
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as ec
browser = webdriver.Chrome(executable_path =r"C:\Users\lodha\Downloads\Software Installers\chromedriver.exe")
image_urls = []
url_tshirt = "https://www.flipkart.com/search?q=tshirt&otracker=search&otracker1=search&marketplace=FLIPKART&as-show=on&as=off&page={}"
url_croptop = "https://www.flipkart.com/womens-clothing/pr?sid=2oq%2Cc1r&q=tshirt&p[]=facets.serviceability%5B%5D%3Dtrue&p[]=facets.type%255B%255D%3DCrop%2BTop&otracker=categorytree&page={}"
url_shirt = "https://www.flipkart.com/men/shirts/pr?sid=2oq%2Cs9b%2Cmg4&otracker=nmenu_sub_Men_0_Shirts&sort=popularity&p%5B%5D=facets.serviceability%5B%5D%3Dtrue&page={}"
browser.get("https://www.flipkart.com/men/shirts/pr?sid=2oq%2Cs9b%2Cmg4&otracker=nmenu_sub_Men_0_Shirts&sort=popularity&p%5B%5D=facets.serviceability%5B%5D%3Dtrue&page=2")
for i in range(1,40):
url = url_shirt.format(i)
browser.get(url)
browser.implicitly_wait(60)
images = browser.find_elements_by_tag_name('img')
for img in images:
img_url = img.get_property("src")
if(img_url.find(".svg")==-1 and img_url.find("data:")==-1):
image_urls.append(img_url)
print(len(image_urls),end='\r')
import pandas as pd
name=r"D:\Projects\Fashion.io\Flipkart_Scrapper\crop_top.csv"
df = pd.DataFrame(image_urls)
df.to_csv(name)
###Output
_____no_output_____
###Markdown
Actually Downloading images
###Code
import pandas as pd
name=r"D:\Projects\Fashion.io\Flipkart_Scrapper\crop_top.csv"
df = pd.read_csv(name)
print("Number of Image Samples",len(df))
df.drop_duplicates(inplace=True)
print("Number of Unique Image Samples",len(df))
links = df["0"].values
links = [link for link in links if link.find("jpeg")!=-1]
links = [[i,link] for i,link in enumerate(links)]
from multiprocessing.pool import ThreadPool
import shutil
from time import time as timer
import requests
from keras_retinanet.utils.image import read_image_bgr
def download_image(link):
path, url = link
img_path = r"D:\Projects\Fashion.io\Flipkart_Scrapper\crop_top\{}.jpeg".format(path)
r = requests.get(url)
if r.status_code==200:
f = open(img_path,'wb')
f.write(r.content)
f.close()
del r
return img_path
start = timer()
results = ThreadPool(8).imap_unordered(download_image, links)
for path in results:
print(path,end='\r')
print(f"Elapsed Time: {timer() - start}")
###Output
Elapsed Time: 69.89245200157166_Scrapper\crop_top\1238.jpeg
|
Practicas/Practica 1/P1_B_Ejercicio1.ipynb | ###Markdown
Practica 1B. Sergio Ramos Mesa \ David del Cerro Domínguez. Exercise 1. A group of 5 people wants to cross an old, narrow bridge. It is a pitch-dark night and a flashlight is needed to cross. The group only has one flashlight, which has 5 minutes of battery left. 1. Each person takes 10, 30, 60, 80 and 120 seconds, respectively, to cross. 2. The bridge only supports a maximum of 2 people crossing at a time, and when two people cross together they walk at the speed of the slower one. 3. The flashlight cannot be thrown from one end of the bridge to the other, so every time two people cross, someone has to cross back with the flashlight to fetch the remaining companions, and so on until everyone has crossed.
###Code
class Problem(object):
"""The abstract class for a formal problem. You should subclass
this and implement the methods actions and result, and possibly
__init__, goal_test, and path_cost. Then you will create instances
of your subclass and solve them with the various search functions."""
def __init__(self, initial, goal=None):
"""The constructor specifies the initial state, and possibly a goal
state, if there is a unique goal. Your subclass's constructor can add
other arguments."""
self.initial = initial
self.goal = goal
def actions(self, state):
"""Return the actions that can be executed in the given
state. The result would typically be a list, but if there are
many actions, consider yielding them one at a time in an
iterator, rather than building them all at once."""
raise NotImplementedError
def result(self, state, action):
"""Return the state that results from executing the given
action in the given state. The action must be one of
self.actions(state)."""
raise NotImplementedError
def goal_test(self, state):
"""Return True if the state is a goal. The default method compares the
state to self.goal or checks for state in self.goal if it is a
list, as specified in the constructor. Override this method if
checking against a single self.goal is not enough."""
if isinstance(self.goal, list):
return is_in(state, self.goal)
else:
return state == self.goal
def path_cost(self, c, state1, action, state2):
"""Return the cost of a solution path that arrives at state2 from
state1 via action, assuming cost c to get up to state1. If the problem
is such that the path doesn't matter, this function will only look at
state2. If the path does matter, it will consider c and maybe state1
and action. The default method costs 1 for every step in the path."""
return c + 1
def value(self, state):
"""For optimization problems, each state has a value. Hill-climbing
and related algorithms try to maximize this value."""
raise NotImplementedError
def coste_de_aplicar_accion(self, estado, accion):
"""Hemos incluido está función que devuelve el coste de un único operador (aplicar accion a estado). Por defecto, este
coste es 1. Reimplementar si el problema define otro coste """
return 1
class elPuente(Problem):
def __init__(self, initial, goal = None):
'''Initialization of our problem.'''
Problem.__init__(self, (0, initial, (), 0), goal)
self._actions = [(10,"10"),(30,"30"),(60,"60"),(80,"80"),(120,"120"),(30,"10","30"),(60,"10","60"),(80,"10","80"),(120,"10","120"),(60,"30","60"),(80,"30","80"),(120,"30","120"),(80,"60","80"),(120,"60","120"),(120,"80","120")]
def actions(self, state):
'''Returns the valid actions for a given state.'''
t = 0
for i in state[1]: t+= i
for i in state[2]: t+= i
ret = list()
#Iterate over all possible actions
for act in self._actions:
i = 1
moves = list()
actTime = 0
#Check that there is enough time to make the crossing
while(i in range(len(act))):
if(int(act[i]) > actTime): actTime = int(act[i])
moves.append(int(act[i]))
i+=1
#If it exceeds the time limit, the move cannot go ahead
if((state[0] + actTime) < t):
insert = True
#Check that the people crossing are on the same side of the bridge
for j in range(len(moves)):
if ((int(act[j]) in state[1] and state[3] == 0) or (int(act[j]) in state[2] and state[3] == 1)):
insert = insert and True
else: insert = insert and False
j+=1
#If all conditions are met, the action counts as a valid transition
if (insert): ret.append(act)
return ret
def result(self, state, act):
'''Returns the state that results from applying an action to a given state.'''
left = list(state[1])
right = list(state[2])
t = state[0] + act[0]
i = 1
#For the current state, apply either the outbound or the return crossing
while(i in range(len(act))):
aux = int(act[i])
if state[3] == 0: #Outbound trip
left.remove(aux)
right.append(aux)
else: #Return trip
right.remove(aux)
left.append(aux)
i+=1
turno = (state[3] + 1) % 2
return (t, tuple(left), tuple(right), turno)
def goal_test(self,estado):
'''Returns whether the current state is a solution.'''
return (estado[0] <= 300) and (len(estado[1]) == 0) and (estado[3] == 1)
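# State encoding used by elPuente: a state is
# (elapsed_time, people_on_the_start_side, people_on_the_far_side, torch_side),
# where torch_side == 0 means the torch is still on the start side and 1 means it has crossed.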
puente = elPuente((10,30,60,80,120))
puente.initial
puente.actions(puente.initial)
puente.result(puente.initial, (120,"10","120"))
puente.goal_test(puente.initial)
puente.goal_test((300, (), (10,30,60,80,120), 1))
puente.actions((120, (30, 60, 80), (10, 120), 1))
from search import *
from search import breadth_first_tree_search, depth_first_tree_search, depth_first_graph_search, breadth_first_graph_search
###Output
_____no_output_____
###Markdown
For the execution we chose breadth-first search, since it is the most efficient among the searches that do not require a heuristic. Its running time is only slightly higher than that of other algorithms studied in class and, even so, the results are very good. As can be seen, both the timings and the runs of all the tested algorithms are attached below.
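The timing figures quoted after each run below (e.g. «568 ms ± 2.42 ms per loop») match the output format of IPython's %timeit magic; a sketch of how such a measurement could be reproduced (an assumption about how the numbers were obtained):
###Code
%timeit breadth_first_tree_search(elPuente((10,30,60,80,120)))
###Output
_____no_output_____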
###Code
breadth_first_tree_search(elPuente((10,30,60,80,120))).solution()
###Output
_____no_output_____
###Markdown
568 ms ± 2.42 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
###Code
depth_first_tree_search(elPuente((10,30,60,80,120))).solution()
###Output
_____no_output_____
###Markdown
998 ms ± 6.44 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
###Code
uniform_cost_search(elPuente((10,30,60,80,120))).solution()
###Output
_____no_output_____
###Markdown
2.92 s ± 26.3 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
###Code
# We build an extended definition of AIMA's Problem class that lets us experiment with different
# initial states, algorithms and heuristics, to solve the 8-puzzle.
# The solvability of a configuration can be checked by calculating the Inversion Permutation. If the total Inversion Permutation is even then the initial configuration is solvable else the initial configuration is not solvable which means that only 9!/2 initial states lead to a solution.
# Añadimos en la clase ampliada la capacidad para contar el número de nodos analizados durante la
# búsqueda:
class Problema_con_Analizados(Problem):
"""Es un problema que se comporta exactamente igual que el que recibe al
inicializarse, y además incorpora unos atributos nuevos para almacenar el
número de nodos analizados durante la búsqueda. De esta manera, no
tenemos que modificar el código del algoritmo de búsqueda."""
def __init__(self, problem):
self.initial = problem.initial
self.problem = problem
self.analizados = 0
self.goal = problem.goal
def actions(self, estado):
return self.problem.actions(estado)
def result(self, estado, accion):
return self.problem.result(estado, accion)
def goal_test(self, estado):
self.analizados += 1
return self.problem.goal_test(estado)
def coste_de_aplicar_accion(self, estado, accion):
return self.problem.coste_de_aplicar_accion(estado,accion)
def check_solvability(self, state):
""" Checks if the given state is solvable """
inversion = 0
for i in range(len(state)):
for j in range(i+1, len(state)):
if (state[i] > state[j]) and state[i] != 0 and state[j]!= 0:
inversion += 1
return inversion % 2 == 0
def resuelve_puente_puzzle(estado_inicial, algoritmo, h=None):
p = Problema_con_Analizados(elPuente(estado_inicial))
if p.check_solvability(estado_inicial):
if h:
sol = algoritmo(p,h).solution()
else:
sol = algoritmo(p).solution()
print("Solución: {0}".format(sol))
print("Algoritmo: {0}".format(algoritmo.__name__))
if h:
print("Heurística: {0}".format(h.__name__))
else:
pass
print("Longitud de la solución: {0}. Nodos analizados: {1}".format(len(sol),p.analizados))
else:
print("Este problema no tiene solucion. ")
E1 = (10,30,60,80,120)
resuelve_puente_puzzle(E1,breadth_first_tree_search)
###Output
Solución: [(30, '10', '30'), (10, '10'), (60, '10', '60'), (10, '10'), (120, '80', '120'), (30, '30'), (30, '10', '30')]
Algoritmo: breadth_first_tree_search
Longitud de la solución: 7. Nodos analizados: 13117
|
Proyecto_Modulo1_GalindoA_Pimentel.ipynb | ###Markdown
1.1 Problema Lineal de Producción de Helados 1.2 Objetivos 1.2.1 Objetivo general.Determinar el plan semanal de producción de los diferentes tipos de paletas que conforman la “Gama Gourmet”, con el objetivo de maximizar beneficios. 1.2.2 Objetivos específicos* Cumplir con la demanda semanal de la empresa. * Maximizar la ganancia de la empresa.* Determinar la cantidad de paletas a producir de cada sabor. 1.3 Modelo que representa el problema Nuestra empresa de origen valenciano afincada en Sevilla desde la década de los años 70, se dedica a la elaboración de helados artesanos. Después de estos años de grandes progresos en su negocio, desea abrir mercado para poder enfrentarse a la situación actual.Esta ampliación tiene como objetivo introducir sus productos en el sector de la hostelería, mediante la propuesta de una gama de helados que podemos considerar “Gourmet”. A continuación detallaremos dicha gama. Creada por el gran Filippo Zampieron está compuesta por cinco tipos de paletas artesanales: 1. Paletas de menta2. Paletas de chocolate3. Paletas de yogurt y melocotón 4. Paletas de almendras 5. Paletas “Fiordilatte”.Aunque la elaboración de todas las paletas difieren en diversos aspectos, ya sea en la composición de la base, cobertura o en las proporciones de cada componente, hay un producto común en todas ellas; “*Jarabe Base*” ya que sin este no sería posible la fabricación de la base de las paletas. Este Jarabe, está compuesto por: * Agua: 655 gr* Azúcar de caña : 180 gr* Dextosa: 35 gr* Glucosa: 130 gr A continuación detallamos el proceso de elaboración y las cantidades utilizadaspara la fabricación de un kilo de cada tipo de paletas. Paletas de mentaLa fabricación de este producto comienza con la elaboración de la base. Para ello se utilizan $550 gr$ del jarabe, seguido de unas gotas de esencia de menta ($10 gotas$) y posteriormente añadiendo unos $450 gr$ de leche fresca entera.Una vez que se ha mezclado la base y se ha dejado reposar para conseguir una textura idónea se procede a la elaboración de su cobertura.Está compuesta por unos $800 gr$ chocolate y $200 gr$ de manteca de cacao. Paletas de ChocolateLa base de estas está compuesta por: $500 gr$ de jarabe, $440 gr$ de leche entera fresca unos $25 gr$ de azúcar invertido (una combinación de glucosa y fructosa) y por último, 35 gr de cacao.La cobertura al igual que el producto anterior está compuesta por: $800 gr$ de chocolate y $200 gr$ de manteca de cacao. Paletas de Yogurt y MelocotónCon una base compuesta por: $430 gr$ de jarabe, $300 gr$ de yogurt desnatado, $20 gr$ de azúcar invertido y $250 gr$ de melocotón batido.Su cobertura es una dulce combinación de $500 gr$ de chocolate y $500 gr$ de nata. Paletas de almendraBase elaborada por: $400 gr$ de jarabe, $495 gr$ de leche fresca entera, $25 gr$ de azúcar invertido.La cobertura está elaborada por $800 gr$ de chocolate, $200 gr$ de manteca de cacao y $80 gr$ de pasta de almendras. Paletas “Fiordilatte”Su elaboración comienza con la base compuesta por $510 gr$ de jarabe, $510 gr$ de leche fresca entera, $250 gr$ de nata, $200 gr$ de azúcar invertido.Una vez que la base se haya mezclado y adoptado la textura deseada, se le inyecta un relleno compuesto por: $550 gr$ de nata y $500 gr$ de chocolate.Finalmente, esperado el tiempo necesario para que el relleno se adapte a la base, se le añade una cobertura de $800 gr$ de chocolate y $200 gr$ de manteca de cacao. 
**Se ha realizado un estudio de mercado y de producción, que nos proporcionan los siguientes datos:** *Beneficio esperado por cada kilo de las diferentes paletas:** Paletas de menta: $23$ pesos el kilo* Paletas de chocolate: $22.5$ pesos el kilo* Paletas de yogurt y melocotón: $21$ pesos el kilo * Paletas de almendras: $20.5$ pesos el kilo* Paletas “ Fiordilatte”: $21$ pesos el kilo *Disponibilidad semanal de las siguientes materias primas :** Jarabe Base: $20,5$ kg* Leche Fresca entera: $13$ kg* Yogurt desnatado: $5$ kg* Nata: $8,5$ kg* Azúcar invertido: $1,3$ kg* Chocolate: $27$ kg* Manteca de cacao: $5$ kg* Esencia de menta: $2$ frasco de $90 ml$, cada frasco proporciona $75$ gotas.* Cacao: $0,28$ kg ( $2$ bolsas de $140$ gr cada uno)* Melocotón batido: $4$ kg* Pasta de almendras: $0,8$ kg ( $2$ bolsas de $400$ gr cada una) *Demanda esperada semanalmente para cada tipo de paleta:** Demanda de paletas de menta y paletas de chocolate: $10$ kilos * Demanda de paletas de yogurt y paletas de almendras: $10$ kilos * No se ha estimado demanda alguna de paletas Fiordilatte. *Variables de decisión:** $x_1$ kilos a fabricar semanalmente de paletas de menta* $x_2$ kilos a fabricar semanalmente de paletas de chocolate* $x_3$ kilos a fabricar semanalmente de paletas de yogur y melocotón* $x_4$ kilos a fabricar semanalmente de paletas de almendras* $x_5$ kilos a fabricar semanalmente de paletas fiordilatte *Restricciones*Limitación de Jarabe Base: $20,5$ kilos$550x_1+500x_2+430x_3+400x_4+510x_5+\leq20500$Limitación de Leche Fresca Entera: $13$ kilos$450x_1+440x_2+495x_4+510x_5\leq13000$Limitación de Yogurt desnatado: $5$ kilos $300x_3\leq5000$Limitación de Nata: $8,5$ kilos $500x_3+550x_5\leq8500$Limitación de Azúcar invertido: $1,3$ kilo $25x_2+20x_3+25x_4+200x_5\leq1300$Limitación de Chocolate: $27$ kilos$800x_1+800x_2+500x_3+800x_4+1300x_5\leq27000$Limitación de Manteca de cacao: 5 kilos$200x_1+200x_2+200x_4+200x_5\leq5000$Limitación de Esencia de menta: $150$ gotas$10x_1\leq150$Limitación de Cacao: 0,28 kg $35x_2\leq280$Limitación de Melocotón Batido: 4 kilos$250x_3\leq4000$Limitación de Pasta de Almendras: 0.8 kilo$80x_4\leq800$Restricciones con respecto a la demanda:$x_1+x_2\geq10$ $x_3+x_4\geq10$Función Objetivo: Beneficio$23x_1+22.5x_2 21x_3 20.5x_4 21x_5 $ 1.4 Solución del problema de optimización. Sintetizando las restricciones y nuestra función a optimizar obtenemos lo siguiente:Max $23x_1+22.5x_2+21x_3+20.5x_4+21x_5$ s.a. $550x_1+500x_2+430x_3+400x_4+510x_5\leq20500$$450x_1+440x_2+495x_4+510x_5\leq13000$ $300x_3\leq5000$ $500x_3+550x5\leq8500$ $25x_2+20x_3+25x_4+200x_5\leq1300$ $800x_1+800x_2+500x_3+800x_4+1300x_5\leq27000$ $200x_1+200x_2+200x_4+200x_5\leq5000$ $10x_1\leq150$ $35x_2\leq280$ $250x_3\leq4000$ $80x_4\leq800$ $x_1+x_2\geq10$ $x_3+x_4\geq10$ $x_1,x_2,x_3,x_4,x_5\geq0$
###Code
import numpy as np
import scipy.optimize as opt
c = -np.array([23, 22.5, 21, 20.5, 21])
A = np.array([[550, 500, 430, 400, 510],
[450, 440, 0, 495, 510],
[0, 0, 300, 0, 0],
[0, 0, 500, 0, 550],
[0, 25, 20, 25, 200],
[800, 800, 500, 800, 1300],
[200, 200, 0, 200, 200],
[10, 0, 0, 0, 0],
[0, 35, 0, 0, 0],
[0, 0, 250, 0, 0],
[0, 0, 0, 80, 0],
[-1, -1, 0, 0, 0],
[0, 0, -1, -1, 0]])
b = np.array([20500, 13000, 5000, 8500, 1300, 27000, 5000, 150, 280, 4000, 800, -10, -10])
utilidad = opt.linprog(c, A_ub=A, b_ub=b)
utilidad
###Output
_____no_output_____
###Markdown
1.5 Visualization of the problem solution. Once the production process of each product has been described, we summarize all the information in the following table to show, more clearly, the raw-material requirements (in grams per kilo) of each type of popsicle:

| | Jarabe | Leche entera fresca | Yogurt desnatado | Nata | Azúcar inv. | Chocolate | Manteca cacao | Esencia menta | Cacao | Melocotón | Pasta de almendras |
|:----|----:|----:|----:|----:|----:|----:|----:|----:|----:|----:|----:|
| P. Menta | 550 gr | 450 gr | | | | 800 gr | 200 gr | 10 gr | | | |
| P. Chocolate | 500 | 440 | | | 25 | 800 | 200 | | 35 | | |
| P. Yogurt y Melocotón | 430 | | 300 | 500 | 20 | 500 | | | | 250 | |
| P. Almendra | 400 | 495 | | | 25 | 800 | 200 | | | | 80 |
| P. Fiordilatte | 510 | 510 | | 550 | 200 | 1300 | 200 | | | | |

With an expected demand of:

$P. Menta+P. Chocolate\geq10$

$P. Yogurt y melocotón + P. Almendra\geq10$
###Code
resultado= utilidad.x
resultado
excedente= utilidad.slack
excedente
###Output
_____no_output_____ |
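###Markdown
As a small illustrative addition (not in the original notebook), the optimal plan returned by `linprog` can be printed in readable form. The product order follows the decision variables $x_1,\dots,x_5$ defined above, and the maximum profit is `-utilidad.fun` because the objective vector was negated to turn the maximization into a minimization.
###Code
# Hedged sketch: map the solution vector back to the products and report the profit
productos = ['P. Menta', 'P. Chocolate', 'P. Yogurt y Melocotón', 'P. Almendra', 'P. Fiordilatte']
for nombre, kilos in zip(productos, utilidad.x):
    print('{}: {:.2f} kg/week'.format(nombre, kilos))
print('Maximum weekly profit: {:.2f} pesos'.format(-utilidad.fun))
###Output
_____no_output_____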
src/auto_download_seats.ipynb | ###Markdown
Flight seats downloaderData taken from [US Bureau of Transportation](https://www.transtats.bts.gov/DL_SelectFields.asp?Table_ID=236)Field descriptions provided [here](https://www.transtats.bts.gov/Fields.asp?Table_ID=311)
###Code
import requests
from selenium import webdriver
chrome_options = webdriver.ChromeOptions()
prefs = {'download.default_directory' : '/Users/jaromeleslie/Documents/MDS/Personal_projects/Ohare_taxi_demand/data/seats'}
chrome_options.add_experimental_option('prefs', prefs)
driver = webdriver.Chrome(chrome_options=chrome_options)
driver.get('https://www.transtats.bts.gov/DL_SelectFields.asp?Table_ID=311')
targ_years= list(range(2013,2020,1))
targ_years = list(map(str,targ_years))
#STEP 1. LOCATE DOWNLOAD BUTTON
download_bt = driver.find_element_by_xpath('//*[@id="content"]/table[1]/tbody/tr/td[2]/table[3]/tbody/tr/td[2]/button[1]')
# download_bt.click()
#STEP 2. SELECT FIELDS OF INTEREST (IGNORING DEFAULTS)
# DEPARTURES SCHEDULED
dep_sched_sel = driver.find_element_by_xpath('/html/body/div[3]/div[3]/table[1]/tbody/tr/td[2]/table[4]/tbody/tr[3]/td[1]/input')
dep_sched_sel.click()
# DEPARTURES PERFORMED
dep_perf_sel = driver.find_element_by_xpath('/html/body/div[3]/div[3]/table[1]/tbody/tr/td[2]/table[4]/tbody/tr[4]/td[1]/input')
dep_perf_sel.click()
# SEATS
seats_sel = driver.find_element_by_xpath('/html/body/div[3]/div[3]/table[1]/tbody/tr/td[2]/table[4]/tbody/tr[6]/td[1]/input')
seats_sel.click()
# PASSENGERS
pass_sel = driver.find_element_by_xpath('/html/body/div[3]/div[3]/table[1]/tbody/tr/td[2]/table[4]/tbody/tr[7]/td[1]/input')
pass_sel.click()
#STEP 3. LOOP OVER YEARS OF INTEREST
#FIND DROPDOWN FOR SELECTABLE YEARS
year_sel = driver.find_element_by_id("XYEAR")
all_years = year_sel.find_elements_by_tag_name("option")
#OUTER LOOP FOR EACH YEAR
for year in all_years:
if year.get_attribute("value") in targ_years:
print("Value is: %s" % year.get_attribute("value"))
year.click()
#EXECUTE DOWNLOAD
download_bt.click()
###Output
Value is: 2013
Value is: 2014
Value is: 2015
Value is: 2016
Value is: 2017
Value is: 2018
Value is: 2019
###Markdown
Merge downloads into single file
###Code
#STARTING WITH 84 ZIPFILES, MAKE ORD_OTP.CSV
import pandas as pd
import numpy as np
import zipfile as zp
import requests
from selenium import webdriver
from bs4 import BeautifulSoup
from datetime import datetime
import pytest
entries = []
for i in range(72,84,1):
if i == 0:
with zp.ZipFile('../data/seats/691436399_T_T100D_SEGMENT_ALL_CARRIER.zip') as myzip:
myzip.extract('691436399_T_T100D_SEGMENT_ALL_CARRIER.csv',path='../data/seats')
df = pd.read_csv('../data/seats/691436399_T_T100D_SEGMENT_ALL_CARRIER.csv')
entries.append(df.query('DEST == "ORD"'))
else:
with zp.ZipFile('../data/otp/1051953426_T_ONTIME_REPORTING '+'('+str(i)+').zip') as myzip:
myzip.extract('1051953426_T_ONTIME_REPORTING.csv', path ='../data/otp')
df = pd.read_csv('../data/otp/1051953426_T_ONTIME_REPORTING.csv')
entries.append(df.query('DEST == "ORD"'))
combined_ord_seats = pd.concat(entries)
combined_ord_seats.head()
###Output
_____no_output_____ |
notebooks/block2/GLM_exercises.ipynb | ###Markdown
Generalized linear models Marcel Nonnenmacher, Jan-Mathis Lückmann, Pedro Goncalves, Jakob Macke > One of the central problems in systems neuroscience is that of characterizing the functional relationship between sensory stimuli and neural spike responses. Investigators call this the neural coding problem, because the spike trains of neurons can be considered a code by which the brain represents information about the state of the external world. One approach to understanding this code is to build mathematical models of the mapping between stimuli and spike responses; the code can then be interpreted by using the model to predict the neural response to a stimulus, or to decode the stimulus that gave rise to a particular response. [(Pillow, 2007)](http://pillowlab.princeton.edu/pubs/Pillow_BBchap07.pdf)Here, we will build probabilistic models for the response of a single neuron, starting from a simple model, that we will then extend. Conditional on a stimulus $x$ and model parameters $\theta$ we will model the probability of a neural reponse $y$, i.e., $p(y|x, \theta)$. Our central goal will be to find model parameters $\theta$ such that the $p(y|x,\theta)$ is a good fit to a dataset of stimulus-response pairs we observed, $\mathcal{D} = \{ (x_k,y_k) \}_{k=1}^K$. Goals of these exercisesCentral to inferring the best fitting parameters will be the likelihood function of the model. The simplest method goes by simply maximizing the likelihood using its gradient with respect to the model parameters (a technique called maximum likelihood estimation, MLE). You will learn to incorporate prior knowledge on the parameters, which leads to a method called maximum a posteriori (MAP). Finally, you will learn automatic differentiation (AD) which --- as the name suggests --- provides a automatic way to calcuate gradients of an objective function (here: the likelihood of parameters given the data). AD is a central ingredient to machine learning methods that are becoming increasingly popular. Assumptions and notationThroughout this tutorial, we will adopt the following conventions:- $T$ is the number of time bins within one trial; $t$ always specifies a time bin;- $K$ is the number of trials in the experiment; $k$ always identifies a trial;- to make the notation lighter, we will sometimes drop the subscript $k$;- $\hat{\pi}(\cdot)$ indicates an unnormalized probability, and $\pi(\cdot)$ the same probability normalize to integrate to 1;- $\mathcal{L}(\boldsymbol{\theta}) = p(\mathbf{y}\, |\, \boldsymbol{\theta})$ is the likelihood of the vector of parameters $\boldsymbol{\theta}$ for the (fixed) data $\mathbf{y}$.For all models we consider, we assume that time is discretized in bins of size $\Delta$. Given $z_t$, the instantaneous *input rate* of a neuron at time $\Delta \cdot t$, the spike counts $y_t$ are assumed to be independent, and distributed according to$\begin{equation} y_t \sim \mathrm{Poisson}\big(\eta(z_t)\big)\end{equation}$where $\eta(\cdot)$ is corresponding canonical link function (here, we will always use $\eta(\cdot) = \exp(\cdot)$ for Poisson). We further assume that there is a linear dependence between $z_i$ and a set of external covariates $\mathbf{x}_t$ at time $t$, i.e. $z_t = \boldsymbol{\theta}^\top \mathbf{x}$, and $\boldsymbol{\theta}$ is a vector of parameters which fully characterizes the neuron.Experiments are composed of $K$ trials, each subdivided into $T$ bins. Note, and in contrast to the lectures, we assume that the rate $\mu_t$ is already 'per bin size', i.e. 
the expected number of spikes is $\mu$ (and not $\mu\Delta$, as we had in lectures).For a Poisson neuron, the probability of producing $n$ spikes in an interval of size $\Delta$ is given by $\begin{align}P(y_t=n| \mu)= \frac{\mu^n e^{-\mu} }{n!}\end{align}$ Exercise 1 (from lectures)Assume that you have spike counts $n_1$ to $n_K$ from $K$ trials, calculate the maximum likelihood estimate (MLE) of $\mu$. LNP modelThe stimulus $\mathbf{u}_t$ is a white noise sequence, and the input rate is:$\begin{equation} z_t = \mathbf{\beta}^\top \mathbf{u}_{t-\delta{}+1:t} + b = \boldsymbol{\theta}^\top \mathbf{x}\end{equation}$,i.e. $z_t$ is the result of a linear filter $\beta$ applied to the recent stimulus history $\mathbf{u}_{t-\delta{}+1:t}$, plus some offset $b$. This results in a vector of covariates at time $t$$\mathbf{x}_{kt} = \big[1, \mathbf{u}_{kt-\delta{}+1},\ldots, \mathbf{u}_{kt} \big]^\top$ for temporal filter length $\delta \in \mathbb{N}$. Note that we can deal with any form of input in the second and third column of $\mathbf{x}_{kt}$, not just white noise.The vector of parameters is $\boldsymbol{\theta} = \left[b, \beta^\top \right]^\top$. Simulating data from the modelNext, we will want to generate data using this model. Execute the following code cell, which will load some functions you will need throughout the session.
###Code
import numpy as np
%run -i helpers.ipynb
###Output
_____no_output_____
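###Markdown
As a quick numerical illustration (not required for the exercises), the Poisson probability introduced above can be evaluated with `scipy.stats`; the rate used here is an arbitrary example value.
###Code
# Hedged sketch: evaluate the Poisson pmf P(y_t = n | mu) for a few spike counts
from scipy.stats import poisson
mu_example = 2.0  # example rate per bin (arbitrary choice, not from the notebook)
for n in range(5):
    print(n, poisson.pmf(n, mu_example))
###Output
_____no_output_____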
###Markdown
The following cell generates a matrix $\mathbf{x}$ as specified above.
###Code
binsize = 0.001 # seconds
T = 10000 # 1s trials
K = 10 # number of trials
nbins = T*K
delta = 10 # length of temporal filter
# stimulus
U = np.random.normal(size=nbins)
def toyDesignMatrix(U=U, T=T, K=K):
nbins = T*K
X = np.zeros((delta+1, nbins))
X[0,:] = 1. # bias
if delta > 0:
X[1, :] = U # instantaneous input
    for i in range(1, delta):
        X[i+1, i:] = U[:-i]  # lag-i copy of the stimulus, so rows 1..delta hold lags 0..delta-1
return X
X = toyDesignMatrix()
###Output
_____no_output_____
###Markdown
Next, we define $\mathbf{\theta}$.
###Code
# ground-truth vector of parameters
b = -6 # controls the offset and hence overall firing rate
beta = np.cos(np.linspace(0, np.pi, delta))
theta = np.hstack([b, beta])
###Output
_____no_output_____
###Markdown
Given `X` and `theta`, we want to generate sample spike trains. In the following cell, we do so by just using
###Code
ys = []
for k in range(10):
y, fr = toyModel(X, theta) # spike train, firing rate
ys.append(y)
###Output
_____no_output_____
###Markdown
... plotting spike rasters and PSTH:
###Code
plt.subplot(311)
plt.plot(U[:200])
plt.subplot(312)
plt.imshow(np.asarray(ys)[:,:200], aspect=4, interpolation='None');
plt.subplot(313)
plt.plot(np.asarray(ys)[:, :200].mean(axis=0) / binsize); # PSTH
plt.plot(fr[:200], linewidth=2); # firing rate
###Output
_____no_output_____
###Markdown
Optional: Try implementing the model yourself(Optional): Above, you used an implementation of the model we provided. You can try implementing the model as stated above yourself. To do so, complete the following function template
###Code
def toyModelExercise(X, theta):
# TODO: given stimulus and theta, return spikes and firing rate
return y, fr
###Output
_____no_output_____
###Markdown
To check whether this model is correct, reproduce the PSTHs for both models and compare. MLE inference Likelihood The likelihood defines a model that connects model parameters to the observed data: $\begin{align}\log \mathcal{L}(\boldsymbol{\theta}) &= \log p(\mathbf{y} | \boldsymbol{\theta}) = \log \bigg[ \prod_{k=1}^K \prod_{t=1}^T p(y_{kt} | b, \beta_1, \beta_2) \bigg] \\ &= \sum_{k=1}^K \sum_{t=1}^T \log p(y_{tk} | b, \beta_1, \beta_2) = \sum_{k=1}^K \sum_{t=1}^T \big[ z_{tk} y_{tk} - \mathrm{e}^{z_{tk}} \big], \end{align}$where as above $z_{tk} = \theta^\top \mathbf{x}_{tk} = b + \beta^\top \mathbf{u}_{tk-\delta{}+1:tk}$.Large $\mathcal{L}$ for a given set of parameters $\theta$ indicates that the data is likely under that parameter set. We can iteratively find more likely parameters, starting from some initial guess $\theta_0$, by gradient ascent on $\mathcal{L}$.For this model, the likelihood function has a unique maximum. Gradients **Exercise 1:** Using pen and paper, derive the gradient of the $\log \mathcal{L}$ with respect to $\mathbf{\theta}$. MLE parameter inferenceWe will now want to the use gradient you just derived to do parameter inference. For that, we will need to implement the functions `ll` and `dll` (the log-likelihood function and its derivative). **Exercise 2.1: ** Implement `ll` and `dll` in the cell below.
###Code
# say, we got a single spike train
y, fr = toyModel(X, theta) # spike train, firing rate
def ll(theta):
# TODO: implement log-likelihood function
return NotImplemented
def dll(theta):
# TODO: implement derivative of log-likelihood function wrt theta
return NotImplemented
###Output
_____no_output_____
###Markdown
**Exercise 2.2**: Assume the true parameters that we used to generate the data, $\mathbf{\theta}^*$, were unknown. We want to recover $\mathbf{\theta}^*$ starting from an initial guess $\mathbf{\theta}_0$. Fill the gaps in the code block below. How good do you recover $\mathbf{\theta}^*$? What happens if you change `step_size`?
###Code
theta_true = theta.copy()
theta_initial = np.random.randn(len(theta))
print('theta_star : {}'.format(theta_true))
print('theta_0 : {}'.format(theta_initial))
def gradientAscent(theta_initial, step_size=0.0001, num_iterations=1000):
theta_hat = theta_initial.copy()
for i in range(0, num_iterations):
# TODO: fix the next lines
log_likelihood = ...
gradient = ...
theta_hat = theta_hat + ...
return theta_hat
theta_hat = gradientAscent(theta_initial)
print('theta_hat : {}'.format(theta_hat))
###Output
theta_star : [-6. 1. 0.93969262 0.76604444 0.5 0.17364818
-0.17364818 -0.5 -0.76604444 -0.93969262 -1. ]
theta_0 : [-0.86170838 -0.52051442 1.04976038 -0.58227439 1.09268153 0.7375664
-0.94986715 0.0133238 -0.62592408 -1.86035052 -0.09161797]
###Markdown
Extending the modelOur simple model assumed independent firing in each time-bin that only depends on the stimulus. In reality, we know that the activity of neurons depends also on their recent firing history. The GLM frameworks allows to flexibly extend our model simply by adding additional covariates and corresponding parameters, i.e. by adding columns to design matrix $\mathbf{X}$ and entries to parameter vector $\theta$. Let us try introducing the recent spiking history $\mathbf{y}_{kt-\tau}, \ldots, \mathbf{y}_{kt-1}$ as additional covariates.The vector of covariates at time $t$ becomes$\mathbf{x}_{kt} = \big[1, \mathbf{u}_{kt-\delta+1 \ : \ tk}, \mathbf{y}_{kt-\tau \ : \ tk-1}\big]^\top$, and we extend the vector of parameters as $\boldsymbol{\theta} = \left[b, \mathbf{\beta}^\top, \mathbf{\psi}^\top \right]^\top$, with history kernel $\mathbf{\psi} \in \mathbb{R}^\tau$ and history kernel length $\tau \in \mathbb{N}$. **Question:** What other covariates could help improve our model? MLE Inference**Exercise 3.1:** Write a function that implements the new design matrix $\mathbf{X}$ (now depends on data $\mathbf{y}$). Note that we provide a function `createDataset()` to generate data from the extended model with given parameter vector $\theta$.
###Code
tau = 5 # length of history kernel (in bins)
psi = - 1.0 * np.arange(0, tau)[::-1]
theta_true = np.hstack((theta, psi))
y = createDataset(U, T, K, theta_true, delta)
def extendedDesignMatrix(y):
# TODO: implement design matrix X with
# X[kt,:] = [1, w*cos(t), w*sin(t), y_{kt-tau:kt-1}]
return NotImplemented
X = extendedDesignMatrix(y) # you might have to re-run the cell defining ll() and dll()
# to update the used design matrix X and data y
###Output
_____no_output_____
###Markdown
**Exercise 3.2:** Write down the gradients for the extended model. What changes from our earlier simpler model? MAP inferenceThe solution $\hat{\theta}$ obtained by gradient ascent on the log-likelihood depends on the data $\mathcal{D} = \{ (x_{tk}, y_{tk}) \}_{(t,k)}$. In particular for very short traces and few trials, this data only weakly constrains the solution. We can often improve our obtained solutions by adding prior knowledge regarding what 'good' solutions should look like. In probabilistic modeling, this can be done by introducing prior distributions $p(\theta)$ on the model parameters, which together with the likelihood $\mathcal{L}$ define a posterior distribution over parameters given the data $p(\theta | \mathbf{y})$, $$ \log p(\theta | \mathbf{y}) = \log p(\mathbf{y}|\theta) + \log p(\theta) - \log p(\mathbf{y}) = \mathcal{L}(\theta) + \log p(\theta) + const.$$Maximum a-posterio (MAP) estimates parameters $\theta$ by gradient ascent on the (log-)posterior. We will assume zero-mean Gaussian priors on $\beta, \psi$, i.e. \begin{align}p(\beta) &= \mathcal{N}(0, \Sigma_\beta) \\p(\psi) &= \mathcal{N}(0, \Sigma_\psi).\end{align}We will not assume an explicit prior on $b$, which effectively assumes $b$ to be distributed 'uniformly' over $\mathbb{R}$. GradientsCompared to maximum likelihood, MAP only requires adding the prior gradient:\begin{align}\frac{\partial}{\partial \theta} p(\theta|\mathbf{y}) = \frac{\partial}{\partial \theta} \mathcal{L}(\theta) + \frac{\partial}{\partial \theta}p(\theta)\end{align} Exercises **Exercise 4: ** Derive the gradients for the prior. If you get stuck, or if you want to verify the solution, ask the tutors for help. **Exercise 5:** Fill gaps in codeblock below.
###Code
## priors
# select prior covariance for input weights
Sig_beta = np.eye(delta)
# select prior covariance for history kernel
ir = np.atleast_2d(np.arange(tau, 0, -1))
Sig_psi = np.exp(- np.abs(ir.T - ir)/5) # assuming smoothness
# convenience
P_beta, P_psi = np.linalg.inv(Sig_beta), np.linalg.inv(Sig_psi)
## functions and gradients
def po(theta):
# TODO: implement log-posterior density function
return NotImplemented
def dpo(theta):
# TODO: implement derivative of log-posterior density function wrt theta
return NotImplemented
# Hint: it can be helpful to first derive the functions for the prior below:
def pr(theta):
# TODO: implement log-prior density function
return NotImplemented
def dpr(theta):
# TODO: implement derivative of log-prior density function wrt theta
return NotImplemented
# leave as is
def ll(theta):
z = np.dot(theta, X)
return np.sum( y * z - link(z) )
# leave as is
def dll(theta):
z = np.dot(theta, X)
r = y - link(z)
return np.dot(X, r)
###Output
_____no_output_____
###Markdown
**Exercise 6:** Numerical gradient checking -- use the code below to numerically ensure your gradients are correct
###Code
from scipy import optimize
thrn = np.random.normal(size=theta_true.shape)
print(optimize.check_grad(ll, dll, thrn))
print(optimize.check_grad(pr, dpr, thrn))
print(optimize.check_grad(po, dpo, thrn))
###Output
_____no_output_____
###Markdown
**Exercise 7:** Do inference (WIP)
###Code
data = createDataset(U, T, K, theta_true, delta)  # (WIP) reuse the same call signature as above; 'omega' was undefined
# TODO: implement gradient ascent
###Output
_____no_output_____
###Markdown
Automatic differentiation Instead of calculating the gradients w.r.t. the model parameters by hand, we can calculate them automatically. Our objective function consists of many elementary functions, each of which is differentiable. [Automatic differentiation (AD)](https://en.wikipedia.org/wiki/Automatic_differentiation) applies the chain rule to the expression graph of our objective to find the gradient. Here, we will use a Python library called `autograd` to find the gradient of our objective. AD is a central ingredient in libraries used for training artifical neural networks, including theano, TensorFlow and PyTorch. InstallationInstall the [`autograd` package](https://github.com/HIPS/autograd) through your package manager. Depending on how things are set up on your machine, install `autograd` by `pip3 install autograd --user` or by `pip install autograd --user`. You might need to restart the notebook kernel in case the simple example which follows fails with an import error. If you restart the kernel, make sure to re-run the cells. You can do that by choosing `Kernel > Restart & Run All` from the menu. `autograd` by a simple example
###Code
import autograd.numpy as np # thinly-wrapped numpy
from autograd import grad # the only autograd function you may ever need
def tanh(x): # Define a function
y = np.exp(-x)
return (1.0 - y) / (1.0 + y)
grad_tanh = grad(tanh) # Obtain its gradient function
print('Gradient at x=1.0 (autograd) : {}'.format(grad_tanh(1.0)))
print('Gradient at x=1.0 (finite diff): {}'.format((tanh(1.0001) - tanh(0.9999)) / 0.0002))
# !jupyter nbconvert exercises.ipynb --to pdf   # optional export (shell command, so it is commented out here)
###Output
_____no_output_____ |
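###Markdown
As a sketch of how automatic differentiation could replace the hand-derived GLM gradients (assuming the design matrix `X` — once `extendedDesignMatrix` is implemented — and the spike train `y` from the cells above are in scope), `grad` can be applied directly to the log-likelihood:
###Code
# Hedged sketch: gradient of the Poisson GLM log-likelihood via autograd
import autograd.numpy as anp
from autograd import grad
def ll_auto(theta):
    z = anp.dot(theta, X)
    return anp.sum(y * z - anp.exp(z))
dll_auto = grad(ll_auto)  # plays the same role as the hand-coded dll()
# dll_auto(np.random.normal(size=X.shape[0]))  # gradient at a random theta
###Output
_____no_output_____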
Netfilx/NetflixRecommeder.ipynb | ###Markdown
Load Data
###Code
import pandas as pd

df = pd.read_csv("../datasets/netflix_titles.csv")
df.head(5)
###Output
_____no_output_____
###Markdown
Data Cleaning
###Code
import re
from nltk.corpus import stopwords
from nltk.tokenize import RegexpTokenizer
def data_cleaning(text):
# make lower case
text = text.lower()
# remove stopwords
text = text.split()
stops = set(stopwords.words("english"))
text = [w for w in text if not w in stops]
text = " ".join(text)
# remove punctuation
    tokenizer = RegexpTokenizer(r'[a-zA-Z]+')
text = tokenizer.tokenize(text)
text = " ".join(text)
return text
df['cleaned'] = df['description'].apply(data_cleaning)
df['cleaned'][:5]
corpus = []
for words in df['cleaned']:
corpus.append(words.split())
###Output
_____no_output_____
###Markdown
Word Embedding (with pretrained embedded word)
###Code
import urllib.request
urllib.request.urlretrieve("https://s3.amazonaws.com/dl4j-distribution/GoogleNews-vectors-negative300.bin.gz", filename="GoogleNews-vectors-negative300.bin.gz")
from gensim.models import Word2Vec

word2vec_model = Word2Vec(size = 300, window=5, min_count = 2, workers = -1)
word2vec_model.build_vocab(corpus)
word2vec_model.intersect_word2vec_format('GoogleNews-vectors-negative300.bin.gz', lockf=1.0, binary=True)
word2vec_model.train(corpus, total_examples = word2vec_model.corpus_count, epochs = 15)
###Output
_____no_output_____
###Markdown
Average of word vectors
###Code
def vectors(document_list):
document_embedding_list = []
for line in document_list:
doc2vec = None
count = 0
for word in line.split():
if word in word2vec_model.wv.vocab:
count+=1
if doc2vec is None:
doc2vec = word2vec_model[word]
else:
doc2vec = doc2vec + word2vec_model[word]
if doc2vec is not None:
doc2vec = doc2vec / count
document_embedding_list.append(doc2vec)
return document_embedding_list
document_embedding_list = vectors(df['cleaned'])
print('number of vector',len(document_embedding_list))
from sklearn.metrics.pairwise import cosine_similarity
cosine_similarities = cosine_similarity(document_embedding_list, document_embedding_list)
print(cosine_similarities)
def recommendation(title):
show = df[['title','listed_in','description']]
indices = pd.Series(df.index, index = df['title']).drop_duplicates()
idx = indices[title]
sim_scores = list(enumerate(cosine_similarities[idx]))
sim_scores = sorted(sim_scores, key = lambda x: x[1], reverse=True)
sim_scores = sim_scores[1:6]
show_indices = [i[0] for i in sim_scores]
recommend = show.iloc[show_indices].reset_index(drop=True)
print('Recommended list')
recommend_df = pd.DataFrame(recommend)
    return recommend_df.head()
recommendation("3%")
###Output
Recommended list
|
notebooks/earthquake.ipynb | ###Markdown
Analysis of the "earthquake" database. In this notebook we carry out an exploration of the earthquake database obtained from [kaggle](https://www.kaggle.com/usgs/earthquake-database). It contains the history of significant earthquakes (with magnitude greater than or equal to 5.5) from 1965 to 2016.
###Code
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
sismo = pd.read_csv('../data/earthquake.csv')
print(' Dimensión del DataFrame:')
print(sismo.shape)
print('\nNombre de las columnas de nuestro data frame: \n')
print(sismo.columns.tolist())
print('\nData frame:')
sismo.head(4)
print('Porcentaje de NAs:')
sismo.isna().sum()*100/sismo.shape[0]
sismo = (sismo.loc[:,['Date', 'Time', 'Latitude', 'Longitude', 'Type', 'Depth', 'Magnitude',
'Magnitude Type', 'ID', 'Source', 'Location Source', 'Magnitude Source', 'Status']]
.rename(index=str, columns={'Location Source': 'Location_Source', 'Magnitude Source': 'Magnitude_Source',
'Magnitude Type': 'Magnitude_Type'}))
sismo.sample(5)
sismo.plot('Depth', 'Magnitude', kind='scatter', figsize=(10,8))
sismo.Date.unique()
###Output
_____no_output_____ |
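###Markdown
As an illustrative next step (not in the original notebook), the `Date` column can be parsed to datetimes so that yearly earthquake counts can be plotted; `errors='coerce'` is used defensively since some date strings may not parse (an assumption worth checking against the raw file).
###Code
# Hedged sketch: parse dates and plot the number of recorded earthquakes per year
sismo['Date_parsed'] = pd.to_datetime(sismo['Date'], errors='coerce')
sismo['Date_parsed'].dt.year.value_counts().sort_index().plot(figsize=(10, 4))
###Output
_____no_output_____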
social_network_ads_notebook.ipynb | ###Markdown
Support Vector Machines External Resources [Data Science Handbook / 05.07-Support-Vector-Machines.ipynb](https://github.com/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/05.07-Support-Vector-Machines.ipynb)
###Code
from matplotlib.colors import ListedColormap
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
###Output
_____no_output_____
###Markdown
Dataset: Social Network Ads The original dataset can be found at Kaggle ([Social Network Ads](https://www.kaggle.com/rakeshrau/social-network-ads)). Importing the datasets
###Code
df = pd.read_csv('Social_Network_Ads.csv')
X = df.iloc[:, [2,3]].values
y = df.iloc[:, 4].values
df.head()
###Output
_____no_output_____
###Markdown
Splitting the dataset into the Training set and Test set
###Code
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 0)
###Output
_____no_output_____
###Markdown
Feature Scaling
###Code
sc_X = StandardScaler()
X_train = sc_X.fit_transform(X_train)
X_test = sc_X.transform(X_test)
###Output
_____no_output_____
###Markdown
Fitting the classifier into the Training set
###Code
clf = SVC(kernel = 'linear', random_state = 0)
clf.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Predicting the test set results
###Code
y_pred = clf.predict(X_test)
###Output
_____no_output_____
###Markdown
Making the Confusion Matrix
###Code
confusion_matrix(y_test, y_pred)
###Output
_____no_output_____
###Markdown
Visualization
###Code
def visualize(features, target, title):
X_set, y_set = features, target
X1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 1,
stop = X_set[:, 0].max() + 1,
step = 0.01),
np.arange(start = X_set[:, 1].min() - 1,
stop = X_set[:, 1].max() + 1,
step = 0.01))
plt.contourf(X1,
X2,
clf.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
alpha = 0.75,
cmap = ListedColormap(('red', 'green')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
plt.scatter(X_set[y_set == j, 0],
X_set[y_set == j, 1],
c = ListedColormap(('red', 'green'))(i), label = j)
plt.title(title)
plt.xlabel('Age')
plt.ylabel('Estimated Salary')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Visualising the Training set results
###Code
visualize(X_train, y_train, 'Support Vector Machine (Training set)')
###Output
'c' argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with 'x' & 'y'. Please use a 2-D array with a single row if you really want to specify the same RGB or RGBA value for all points.
'c' argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with 'x' & 'y'. Please use a 2-D array with a single row if you really want to specify the same RGB or RGBA value for all points.
###Markdown
Visualising the Test set results
###Code
visualize(X_test, y_test, 'Support Vector Machine (Test set)')
###Output
'c' argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with 'x' & 'y'. Please use a 2-D array with a single row if you really want to specify the same RGB or RGBA value for all points.
'c' argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with 'x' & 'y'. Please use a 2-D array with a single row if you really want to specify the same RGB or RGBA value for all points.
###Markdown
As we can see, due to the linear nature of this classifier the model made some incorrect predictions (10 incorrect predictions as per the confusion matrix and graph). Now let's try a different kernel type and see if it makes better predictions.
###Code
clf = SVC(kernel='rbf',random_state=0)
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
confusion_matrix(y_test, y_pred)
###Output
_____no_output_____
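###Markdown
As a small illustrative check (not in the original notebook), the test accuracy of the RBF-kernel model just trained can be computed directly and compared with the 10 misclassifications of the linear kernel noted above.
###Code
# Hedged sketch: overall test-set accuracy of the RBF-kernel SVM
from sklearn.metrics import accuracy_score
accuracy_score(y_test, y_pred)
###Output
_____no_output_____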
###Markdown
Visualising the Training set results for `'rbf'` kernel
###Code
visualize(X_train, y_train, 'SVM (Training set)')
visualize(X_test, y_test, 'SVM (Test set)')
###Output
'c' argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with 'x' & 'y'. Please use a 2-D array with a single row if you really want to specify the same RGB or RGBA value for all points.
'c' argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with 'x' & 'y'. Please use a 2-D array with a single row if you really want to specify the same RGB or RGBA value for all points.
|
tests/figure_widget.ipynb | ###Markdown
Test with my own data
###Code
from IPython.display import display
import pandas as pd
import plotly.graph_objs as go
df = pd.read_csv('../data/test_pred.csv')
display(df.columns)
trace = go.Scatter(x=df.found_helpful_percentage, y=df.pred_lstm2_untuned, \
mode='markers')
layout = go.Layout(title='Predicted Proportion of Positive Votes vs Truth')
figure = go.Figure(data=[trace], layout=layout)
f2 = go.FigureWidget(figure)
'''
Ref: https://community.plotly.com/t/on-click-event-bar-chart-attributeerror-figureweidget-object-has-no-attribute-on-click/39294
'''
from ipywidgets import Output, VBox
out = Output()
@out.capture(clear_output=True)
def fetch_xy(trace, points, selector):
for i in points.point_inds:
df_sel = df[(df.found_helpful_percentage == perf_scatt.x[i]) & (df.pred_lstm2_untuned == perf_scatt.y[i])]
print(df_sel.iloc[0].review)
perf_scatt = f2.data[0]
perf_scatt.on_click(fetch_xy)
VBox([f2, out])
import plotly.graph_objs as go
from ipywidgets import Output, VBox
fig = go.FigureWidget()
pie = fig.add_pie(values=[1, 2, 3])
pie2 = fig.data[0]
out = Output()
@out.capture(clear_output=True)
def handle_click(trace, points, state):
print(points.point_inds)
pie2.on_click(handle_click)
VBox([fig, out])
###Output
_____no_output_____ |
examples/experimental_notebooks/Cohort Visualization.ipynb | ###Markdown
One interesting question for open source communities is whether they are *growing*. Often the founding members of a community would like to see new participants join and become active in the community. This is important for community longevity; ultimatley new members are required to take leadership roles if a project is to sustain itself over time.The data available for community participation is very granular, as it can include the exact traces of the messages sent by participants over a long history. One way of summarizing this information to get a sense of overall community growth is a *cohort visualization*.In this notebook, we will produce a visualization of changing participation over time.
###Code
# Imports assumed by the cells below (they are not shown in this excerpt of the notebook)
from bigbang.archive import Archive
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

url = "6lo"
arx = Archive(url,archive_dir="../archives")
arx.data[:1]
###Output
_____no_output_____
###Markdown
Archive objects have a method that reports for each user how many emails they sent each day.
###Code
act = arx.get_activity()
###Output
_____no_output_____
###Markdown
This plot will show when each sender sent their first post. A slow ascent means a period where many people joined.
###Code
fig = plt.figure(figsize=(12.5, 7.5))
#act.idxmax().order().T.plot()
(act > 0).idxmax().order().plot()
fig.axes[0].yaxis_date()
###Output
_____no_output_____
###Markdown
This is the same data, but plotted as a histogram. It's easier to see the trends here.
###Code
fig = plt.figure(figsize=(12.5, 7.5))
(act > 0).idxmax().order().hist()
fig.axes[0].xaxis_date()
###Output
_____no_output_____
###Markdown
While this is interesting, what if we are interested in how much different "cohorts" of participants stick around and continue to participate in the community over time?What we want to do is divide the participants into N cohorts based on the percentile of when they joined the mailing list. I.e, the first 1/N people to participate in the mailing list are the first cohort. The second 1/N people are in the second cohort. And so on.Then we can combine the activities of each cohort and do a stackplot of how each cohort has participated over time.
###Code
n = 5
from bigbang import plot
# A series, indexed by users, of the day of their first post
# This series is ordered by time
first_post = (act > 0).idxmax().order()
# Splitting the previous series into five equal parts,
# each representing a chronological quintile of list members
cohorts = np.array_split(first_post,n)
cohorts = [list(c.keys()) for c in cohorts]
plot.stack(act,partition=cohorts,smooth=10)
###Output
_____no_output_____
###Markdown
This gives us a sense of when new members are taking the lead in the community. But what if the old members are just changing their email addresses? To test that case, we should clean our data with entity resolution techniques.
###Code
cohorts[1][:10]  # peek at some members of the second cohort (each cohort is a plain list of sender names)
###Output
_____no_output_____ |
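###Markdown
A minimal sketch of the kind of entity resolution mentioned above (an assumption for illustration, not BigBang's built-in approach): collapse activity columns whose sender strings contain the same e-mail address, so a participant posting under several display names is counted once.
###Code
# Hedged sketch: merge activity columns that share the same e-mail address
import re
def extract_email(sender):
    m = re.search(r'[\w\.-]+@[\w\.-]+', str(sender))
    return m.group(0).lower() if m else str(sender).lower()
act_resolved = act.groupby(extract_email, axis=1).sum()
print(act.shape[1], '->', act_resolved.shape[1], 'senders after resolution')
###Output
_____no_output_____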
Bloc 3 - Machine Learning/Proyecto_ML_CSI.ipynb | ###Markdown
PROYECTO MACHINE LEARNING CSI QT 2019/20 Para la realización de este proyecto, se escoge el modelo de **Yacht Hydrodynamics (regresión)** y se genera un modelo predictivo que minimiza el error de generalización. Se ha prodedido a la obtención de los datos a partir del siguiente enlace:http://archive.ics.uci.edu/ml/datasets/yacht+hydrodynamicsSegún el **Dr Roberto Lopez** la contextualización del *dataset* es el siguiente:*La predicción de la resistencia residual de los yates de vela en la etapa de diseño inicial es de gran valor para evaluar el rendimiento del barco y para estimar la potencia propulsora requerida. Las entradas esenciales incluyen las dimensiones básicas del casco y la velocidad del barco.**El conjunto de datos de Delft comprende 308 experimentos a gran escala, que se realizaron en el Laboratorio de Hidromecánica de Barcos de Delft para ese propósito.**Estos experimentos incluyen 22 formas diferentes de casco, derivadas de una forma parental estrechamente relacionada con el "Standfast 43" diseñado por Frans Maas.* Según el enuncidado, el modelo tiene que predecir la variable en la última columna a partir de las 6 variables numéricas anteriores.Las variables se refieren a los coeficientes de geometría del casco y al número de Froude:1. Posición longitudinal del centro de flotabilidad, adimensional.2. Coeficiente prismático, adimensional.3. Relación longitud-desplazamiento, adimensional.4. Relación viga-tiro, adimensional.5. Relación longitud-viga, adimensional.6. Número de Froude, adimensional.La variable que se quiere predecir es la resistencia residual por unidad de peso de desplazamiento:7. Resistencia residual por unidad de peso de desplazamiento, adimensional. Importación del *dataset*
###Code
import pandas as pd
yacht_dataset = pd.read_csv('yacht_hydrodynamics.data', sep=" ")
###Output
_____no_output_____
###Markdown
We can look at the structure of the *dataset*: the parameters are the ones described above, and the last variable (*Res_residual*) is the one the model should approximate as closely as possible to the real values of the *dataset*.
###Code
yacht_dataset
###Output
_____no_output_____
###Markdown
Inspecting the *dataset*. We plot the variables to get a visual reference of how disparate the data in our set are.
###Code
%matplotlib inline
import matplotlib.pyplot as plt
centro_flotab = yacht_dataset.Pos_long_centro_flotab
coef_prism = yacht_dataset.Coef_prism
long_despl = yacht_dataset.Long_Despl
viga_tiro = yacht_dataset.Viga_tiro
long_viga = yacht_dataset.Long_viga
num_froude = yacht_dataset.Num_Froude
res_residual = yacht_dataset.Res_residual
# Dataset subplots of each feature
centro_flotab_plot = plt.subplot(3, 2, 1)
centro_flotab_plot.title.set_text('Pos. long. centro de flotab.')
plt.plot(centro_flotab)
coef_prism_plot = plt.subplot(3, 2, 2)
coef_prism_plot.title.set_text('Coeficiente prismático')
plt.plot(coef_prism)
long_despl_plot = plt.subplot(3, 2, 3)
long_despl_plot.title.set_text('Relación longitud-desplazamiento')
plt.plot(long_despl)
viga_tiro_plot = plt.subplot(3, 2, 4)
viga_tiro_plot.title.set_text('Relación viga-tiro')
plt.plot(viga_tiro)
long_viga_plot = plt.subplot(3, 2, 5)
long_viga_plot.title.set_text('Relación longitud-viga')
plt.plot(long_viga)
num_froude_plot = plt.subplot(3, 2, 6)
num_froude_plot.title.set_text('Número de Froude')
plt.plot(num_froude)
plt.tight_layout()
plt.show()
# Se muestra la variable la cual que se quiere aproximar el modelo
res_residual_plot = plt.plot(res_residual)
###Output
_____no_output_____
###Markdown
We now show a plot that brings together all the input variables of the dataset.
###Code
# Make a dataframe with input data
df_yacht = pd.DataFrame({'num_data': range(0,308),
'centre_flotab': centro_flotab,
'coef_prism': coef_prism,
'long_despl': long_despl,
'manega_calat': viga_tiro,
'long_manega': long_viga,
'num_froude': num_froude})
# Style of the plot
plt.style.use('seaborn-darkgrid')
# Create palette color
palette = plt.get_cmap('Set1')
# Plot multiple plots in one
num=0
for column in df_yacht.drop('num_data', axis=1):
num+=1
plt.plot(df_yacht['num_data'], df_yacht[column], marker='', color=palette(num), linewidth=2, alpha=0.9, label=column)
# Legend
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
# Titles
plt.title("Variables de entrada", loc='center', fontsize=15, fontweight=0, color='orange')
plt.xlabel("Nombre d'experiments")
plt.ylabel("Variables de Entrada")
###Output
_____no_output_____
###Markdown
Let's see how the data are distributed statistically.
###Code
yacht_dataset.describe()
###Output
_____no_output_____
###Markdown
Data preprocessing. We preprocess the data by rescaling (normalizing) it.
###Code
import numpy as np
# MinMaxScaler it is not used since it rescales the data worse than MaxAbsScaler
# from sklearn.preprocessing import MinMaxScaler
from sklearn.preprocessing import MaxAbsScaler
# Selected the scaler to rescale data
scaler = MaxAbsScaler()
# Make a copy of the data in order not to change original one
yacht_dataset_scaled = yacht_dataset.copy()
# Rescale the data using the scaler selected above
yacht_data_scaled = scaler.fit_transform(yacht_dataset)
yacht_dataset_scaled.loc[:,:] = yacht_data_scaled
scaler_params = scaler.get_params()
# Getting the function that relates original and resized data of each feature (used later for rescaling)
# Create array containing all the data
extract_scaling_function = np.ones((1,yacht_dataset_scaled.shape[1]))
print(extract_scaling_function)
# Fill the array with resized ratio of each feature
extract_scaling_function = scaler.inverse_transform(extract_scaling_function)
print(extract_scaling_function)
display(yacht_dataset_scaled)
# Make dataframe with input data
df_yacht_scaled = pd.DataFrame({'num_data': range(0,308),
'centre_flotab': yacht_data_scaled[:,0],
'coef_prism': yacht_data_scaled[:,1],
'long_despl': yacht_data_scaled[:,2],
'manega_calat': yacht_data_scaled[:,3],
'long_manega': yacht_data_scaled[:,4],
'num_froude': yacht_data_scaled[:,5]})
# Style
plt.style.use('seaborn-darkgrid')
# Make palette color
palette = plt.get_cmap('Set1')
# Display multiple plots in one
num=0
for column in df_yacht_scaled.drop('num_data', axis=1):
num+=1
plt.plot(df_yacht_scaled['num_data'], df_yacht_scaled[column], marker='', color=palette(num), linewidth=2, alpha=0.9, label=column)
# Legend
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
# Titles
plt.title("Variables d'entrada Reescalades", loc='center', fontsize=15, fontweight=0, color='orange')
plt.xlabel("Nombre d'experiments")
plt.ylabel("Variables de Entrada Reescalades")
###Output
_____no_output_____
###Markdown
Train/test split of the data
###Code
from sklearn.model_selection import train_test_split
# Store in a vector output data of the dataset
y = yacht_dataset_scaled['Res_residual'].values.reshape(-1,1)
# Create a new dataset with scaled variables in order not to damage original one
X_df = yacht_dataset_scaled.copy()
# Delete from scaled dataset output values = Store input values
X_df.drop(['Res_residual'], axis=1, inplace=True)
X = X_df.values
# Split data into train and test (20% of data) and suffle it
X_train, X_test, y_train, y_test = train_test_split(X, y,
test_size=0.2,
random_state=42,
shuffle=True)
# Create a dictionary with all important values of the dataset
dataset = {'X_train': X_train, 'X_test' : X_test,
'y_train': y_train, 'y_test' : y_test,
'scaler' : scaler,
'scaler_function' : extract_scaling_function}
###Output
_____no_output_____
###Markdown
Model training. We define the function for training the model.
###Code
from sklearn.model_selection import GridSearchCV
import time
def trainYachtModelGridSearchCV(X_train, X_test,
y_train, y_test,
estimator,
parameters,
cv_size):
# Start training
start_time = time.time()
# Combine all permutation of parameters
grid_obj = GridSearchCV(estimator = estimator,
param_grid = parameters,
n_jobs = -1,
cv = cv_size,
verbose = 1)
# Train model
grid_trained = grid_obj.fit(X_train, y_train)
training_time = time.time() - start_time
# Get the score of the GridSearchCV()
score_gridsearchcv = grid_trained.score(X_test, y_test)
# Get the best parameters that GridSearchCV() found
best_parameters_gridsearchcv = grid_trained.best_params_
# Get the best estimator from GridSearchCV()
best_estimator_gridsearchcv = grid_trained.best_estimator_
# Predict all new test values using the best estimator obtained
predictions_test = best_estimator_gridsearchcv.predict(X_test)
return {'Regression type' : estimator.__class__.__name__,
'Training time' : training_time,
'Score GRCV' : score_gridsearchcv,
'Best parameters estimator GRCV' : best_parameters_gridsearchcv,
'Best estimator GRCV' : best_estimator_gridsearchcv,
'Output Predictions' : predictions_test}
###Output
_____no_output_____
###Markdown
We run the model training.
###Code
from sklearn.linear_model import LinearRegression
from sklearn.linear_model import Ridge
from sklearn.linear_model import Lasso
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.ensemble import AdaBoostRegressor
# Choose which estimator we want to train our data
linear_regression = LinearRegression()
ridge_regression = Ridge()
lasso_regression = Lasso()
decision_tree_regression = DecisionTreeRegressor(random_state = 42)
random_forest_regression = RandomForestRegressor(random_state = 42)
adaboost_regression = AdaBoostRegressor()
regression_type = {"linear_regression" : linear_regression,
"ridge_regression" : ridge_regression,
"lasso_regression" : lasso_regression,
"decision_tree_regression" : decision_tree_regression,
"random_forest_regression" : random_forest_regression,
"adaboost_regression" : adaboost_regression}
# Their parameters
grid_parameters_linear_regression = {'fit_intercept':('True', 'False'), 'normalize':('True', 'False'), 'copy_X':('True', 'False')}
grid_parameters_ridge_regression = {'alpha': [25,10,4,2,1.0,0.8,0.5,0.3,0.2,0.1,0.05,0.02,0.01]}
grid_parameters_lasso_regression = {'alpha': [0.001, 0.01, 0.1, 1, 2, 3, 5, 7, 10, 20]}  # alpha must be non-negative for Lasso
grid_parameters_decision_tree_regression = {'max_depth' : [None, 3,5,7,9,10,11]}
grid_parameters_random_forest_regression = {
'bootstrap': [True],
'max_depth': [5, 10, 20],
'max_features': [2, 3],
'min_samples_leaf': [3, 4, 5],
'min_samples_split': [8, 10, 12],
'n_estimators': [10, 20, 30]
}
grid_parameters_adaboost_regression = {
'n_estimators': [50, 100],
'learning_rate' : [0.01,0.05,0.1,0.3,1],
'loss' : ['linear', 'square', 'exponential']
}
parameter_type = {"grid_parameters_linear_regression" : grid_parameters_linear_regression,
"grid_parameters_ridge_regression" : grid_parameters_ridge_regression,
"grid_parameters_lasso_regression" : grid_parameters_lasso_regression,
"grid_parameters_decision_tree_regression" : grid_parameters_decision_tree_regression,
"grid_parameters_random_forest_regression" : grid_parameters_random_forest_regression,
"grid_parameters_adaboost_regression" : grid_parameters_adaboost_regression}
# Size of samples of Cross Validation
kfold_cv_size = 10
# Run model training -> Example to train one model, below they are trained all at once
# trainedModel = trainYachtModelGridSearchCV(dataset['X_train'], dataset['X_test'],
# dataset['y_train'], dataset['y_test'],
# decision_tree_regression,
# grid_parameters_decision_tree_regression,
# kfold_cv_size)
# Output status values of trained model
# print(
# "Regression type: {}".format(trainedModel['Regression type']),
# "Training time: {}".format(trainedModel['Training time']),
# "Score GRCV: {}".format(trainedModel['Score GRCV']),
# "Best parameters estimator GRCV: {}".format(trainedModel['Best parameters estimator GRCV']),
# "Best estimator GRCV: {}".format(trainedModel['Best estimator GRCV']),
# sep = "\n"
# )
###Output
_____no_output_____
###Markdown
We define how the training metrics are obtained.
###Code
from sklearn.metrics import r2_score
from sklearn.metrics import mean_absolute_error
from sklearn.metrics import mean_squared_error
from sklearn.metrics import median_absolute_error
def getMetricsFromTrainedModel(original_output_data,
predicted_output_data,
data_scaling_function):
# Get metrics from scaled [-1, 1] data
r2 = r2_score(y_true = original_output_data,
y_pred = predicted_output_data)
mse = mean_squared_error(y_true = original_output_data,
y_pred = predicted_output_data)
rmse = np.sqrt(mse)
mae = mean_absolute_error(y_true = original_output_data,
y_pred = predicted_output_data)
medae = median_absolute_error(y_true = original_output_data,
y_pred = predicted_output_data)
# Rescale predicted data
predictions_true_scale = predicted_output_data * data_scaling_function[:,-1]
# Rescale output test data
y_test_true_scale = original_output_data * data_scaling_function[:,-1]
# Get metrics from true scaled data (original)
r2_true_scale = r2_score(y_true = y_test_true_scale,
y_pred = predictions_true_scale)
mse_true_scale = mean_squared_error(y_true = y_test_true_scale,
y_pred = predictions_true_scale)
rmse_true_scale = np.sqrt(mse_true_scale)
mae_true_scale = mean_absolute_error(y_true = y_test_true_scale,
y_pred = predictions_true_scale)
medae_true_scale = median_absolute_error(y_true = y_test_true_scale,
y_pred = predictions_true_scale)
return {'R2' : r2,
'R2 True scale' : r2_true_scale,
'MSE' : mse,
'MSE True scale' : mse_true_scale,
'RMSE' : rmse,
'RMSE True scale' : rmse_true_scale,
'MAE' : mae,
'MAE True scale' : mae_true_scale,
'MEDAE' : medae,
'MEDAE True scale' : medae_true_scale,
"Predictions True scale": predictions_true_scale,
"Output Test True scale" : y_test_true_scale}
###Output
_____no_output_____
###Markdown
We compute the model metrics.
###Code
# Run statiscitcs of the model -> Example to get the metrics, below they are obtained all at once
# Run statistics of trained model
# metricsTrainedModel = getMetricsFromTrainedModel(dataset['y_test'],
# trainedModel['Output Predictions'],
# dataset['scaler_function'])
# # Output their values
# print(
# "R2: {}".format(metricsTrainedModel['R2']),
# "R2 True scale: {}".format(metricsTrainedModel['R2 True scale']),
# "MSE: {}".format(metricsTrainedModel['MSE']),
# "MSE True scale: {}".format(metricsTrainedModel['MSE True scale']),
# "RMSE: {}".format(metricsTrainedModel['RMSE']),
# "RMSE True scale: {}".format(metricsTrainedModel['RMSE True scale']),
# "MAE: {}".format(metricsTrainedModel['MAE']),
# "MAE True scale: {}".format(metricsTrainedModel['MAE True scale']),
# "MEDAE: {}".format(metricsTrainedModel['MEDAE']),
# "MEDAE True scale: {}".format(metricsTrainedModel['MEDAE True scale']),
# sep = "\n"
# )
# Define function for running all regression types
def runAllRegressionTypes(regression_types, parameters_types):
# Create dataframe for the final table
df_metrics_regression_table = pd.DataFrame(columns = ["Tipus de Regressió",
"Temps d'entrenament [s]",
"Puntuació GRCV",
"Millor param. GRCV",
"R2",
"MSE",
"RMSE",
"MAE",
"MEDAE",
"R2 Reescalat",
"MSE Reescalat",
"RMSE Reescalat",
"MAE Reescalat",
"MEDAE Reescalat"])
# Create dataframe for output predicted values
df_output_predicted_table = pd.DataFrame(columns = ["Tipus de Regressió",
"Valor predit",
"Valor predit reescalat",
"Valor test reescalat"])
# Execute all regression types
i = 0
for (key1, value1), (key2, value2) in zip(regression_types.items(), parameters_types.items()):
i += 1
trainedModelMethod = trainYachtModelGridSearchCV(dataset['X_train'], dataset['X_test'],
dataset['y_train'], dataset['y_test'],
value1,
value2,
kfold_cv_size)
metricsTrainedMethod = getMetricsFromTrainedModel(dataset['y_test'],
trainedModelMethod['Output Predictions'],
dataset['scaler_function'])
df_output_predicted_table.loc[i] = [trainedModelMethod['Regression type'],
trainedModelMethod['Output Predictions'],
metricsTrainedMethod['Predictions True scale'],
metricsTrainedMethod['Output Test True scale']]
df_metrics_regression_table.loc[i] = [trainedModelMethod['Regression type'],
trainedModelMethod['Training time'],
trainedModelMethod['Score GRCV'],
trainedModelMethod['Best parameters estimator GRCV'],
metricsTrainedMethod['R2'],
metricsTrainedMethod['MSE'],
metricsTrainedMethod['RMSE'],
metricsTrainedMethod['MAE'],
metricsTrainedMethod['MEDAE'],
metricsTrainedMethod['R2 True scale'],
metricsTrainedMethod['MSE True scale'],
metricsTrainedMethod['RMSE True scale'],
metricsTrainedMethod['MAE True scale'],
metricsTrainedMethod['MEDAE True scale']]
return df_metrics_regression_table, df_output_predicted_table
# Run all regression types
allRegressionTypes = runAllRegressionTypes(regression_type, parameter_type)
###Output
Fitting 10 folds for each of 8 candidates, totalling 80 fits
###Markdown
Results A table summarizing the metrics of all the training algorithms used, with **GridSearchCV**
###Code
# Show metrics table
allRegressionTypes[0]
# Show output trained values
allRegressionTypes[1]
###Output
_____no_output_____
###Markdown
Plots showing the model outputs and how well they fit the real data
###Code
%matplotlib inline
for i in range(1,7):
x = np.linspace(0, 1, 100)
plt.plot(allRegressionTypes[1]['Valor predit'][i], dataset['y_test'], linestyle='none', marker='o')
plt.plot(x, x, '--')
plt.title(allRegressionTypes[1]['Tipus de Regressió'][i])
plt.xlabel('Valor predit')
plt.ylabel('Valor real')
plt.show()
for i in range(1,7):
x = np.linspace(-5, 50, 100)
plt.plot(allRegressionTypes[1]['Valor predit reescalat'][i], allRegressionTypes[1]['Valor test reescalat'][i], linestyle='none', marker='o')
plt.plot(x, x, '--')
plt.title(allRegressionTypes[1]['Tipus de Regressió'][i])
plt.xlabel('Valor predit reescalat')
plt.ylabel('Valor real reescalat')
plt.show()
###Output
_____no_output_____ |
10-nested-loops/nested loops.ipynb | ###Markdown
Playlist on YouTube: https://www.youtube.com/playlist?list=PLvu-cXEstYRPZXyug9IqTJq4JtdBxXgeS
###Code
listaNomes = ['python', 'c++','java','fortran']
vogais = ['a','e','i', 'o', 'u']
nv = 0
for nomes in listaNomes:
for letra in nomes:
if letra in vogais:
nv += 1
print('Numeros de vogais em {} é : {}'.format(nomes,nv))
nv = 0
novosNomes = [nomes for nomes in listaNomes]
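# A comprehension-based sketch of the same idea (an illustrative addition):
# pair each name with its vowel count instead of printing inside the loop.
vogaisPorNome = [(nome, sum(letra in vogais for letra in nome)) for nome in listaNomes]
print(vogaisPorNome)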
###Output
_____no_output_____ |
pr-lr/Monte Carlo and Importance Sampling.ipynb | ###Markdown
Question PromptGiven the following current equation$$I(\Delta L, \Delta V_{TH}) = \frac{50}{0.1 + \Delta L} (0.6 - \Delta V_{TH})^2$$* $\Delta L \sim \ N(0, 0.01^2)$* $\Delta V_{TH} \sim \ N(0, 0.03^2)$We would like to calculate $P(I > 275)$ using direct Monte-Carlo and Importance Sampling. Direct Monte-Carlo EstimationIn MC estimation, we approximate an integral by the sample mean of a function of simulated random variables. In more mathematical terms,$$\int p(x)\ f(x)\ dx = \mathbb{E}_{p(x)} \big[\ f(x) \big] \approx \frac{1}{N} \sum_{n=1}^{N}f(x_n)$$where $x_n \sim \ p(x)$.A useful application of MC is probability estimation. In fact, we can cast a probability as an expectation using the indicator function. In our case, given that $A = \{I \ | \ I > 275\}$, we define $f(x)$ as$$f(x) = I_{A}(x)= \begin{cases} 1 & I \geq 275 \\ 0 & I < 275 \end{cases}$$ Replacing in our equation above, we get$$\int p(x) \ f(x) \ dx = \int I(x)\ p(x) \ d(x) = \int_{x \in A} p(x)\ d(x) \approx \frac{1}{N} \sum_{n=1}^{N}I_{A}(x_n)$$
###Code
def monte_carlo_proba(num_simulations, num_samples, verbose=True, plot=False):
if verbose:
print("===========================================")
print("{} Monte Carlo Simulations of size {}".format(num_simulations, num_samples))
print("===========================================\n")
num_samples = int(num_samples)
num_simulations = int(num_simulations)
probas = []
for i in range(num_simulations):
mu_1, sigma_1 = 0, 0.01
mu_2, sigma_2 = 0, 0.03
length = np.random.normal(mu_1, sigma_1, num_samples)
voltage = np.random.normal(mu_2, sigma_2, num_samples)
num = 50 * np.square((0.6 - voltage))
denum = 0.1 + length
I = num / denum
true_condition = np.where(I >= 275)
false_condition = np.where(I < 275)
num_true = true_condition[0].shape[0]
proba = num_true / num_samples
probas.append(proba)
if plot:
if i == (num_simulations - 1):
plt.scatter(length[true_condition], voltage[true_condition], color='r')
plt.scatter(length[false_condition], voltage[false_condition], color='b')
plt.xlabel(r'$\Delta L$ [$\mu$m]')
plt.ylabel(r'$\Delta V_{TH}$ [V]')
plt.title("Monte Carlo Estimation of P(I > 275)")
plt.grid(True)
plt.savefig(plot_dir + 'monte_carlo_{}.pdf'.format(num_samples), format='pdf', dpi=300)
plt.show()
mean_proba = np.mean(probas)
std_proba = np.std(probas)
if verbose:
print("Probability Mean: {:0.5f}".format(mean_proba))
print("Probability Std: {:0.5f}".format(std_proba))
return probas
probas = monte_carlo_proba(10, 10000, plot=False)
def MC_histogram(num_samples, plot=True):
num_samples = int(num_samples)
mu_1, sigma_1 = 0, 0.01
mu_2, sigma_2 = 0, 0.03
length = np.random.normal(mu_1, sigma_1, num_samples)
voltage = np.random.normal(mu_2, sigma_2, num_samples)
num = 50 * np.square((0.6 - voltage))
denum = 0.1 + length
I = num / denum
if plot:
n, bins, patches = plt.hist(I, 50, density=1, facecolor='green', alpha=0.75)
plt.ylabel('Number of Samples')
plt.xlabel(r'$I_{DS}$ [$\mu$A]')
plt.title("Monte Carlo Estimation of P(I > 275)")
plt.grid(True)
plt.savefig(plot_dir + 'mc_histogram_{}.pdf'.format(num_samples), format='pdf', dpi=300)
plt.show()
MC_histogram(1e6)
num_samples = [1e3, 1e4, 1e5, 1e6]
num_repetitions = 25
total_probas = []
for i, num_sample in enumerate(num_samples):
print("Iter {}/{}".format(i+1, len(num_samples)))
probas = monte_carlo_proba(num_repetitions, num_sample, verbose=False)
total_probas.append(probas)
# plt.figure(figsize=(8, 10))
y_axis_monte = np.asarray(total_probas)
x_axis_monte = np.asarray(num_samples)
for x, y in zip(x_axis_monte, y_axis_monte):
plt.scatter([x] * len(y), y, s=0.5)
plt.xscale('log')
plt.title("Direct Monte-Carlo Estimation")
plt.ylabel("Probability Estimate")
plt.xlabel('Number of Samples')
plt.grid(True)
plt.savefig(plot_dir + 'monte_carlo_convergence_speed.jpg', format='jpg', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Importance Sampling With importance sampling, we try to reduce the variance of our Monte-Carlo integral estimation by choosing a better distribution from which to simulate our random variables. It involves multiplying the integrand by 1 (usually dressed up in a “tricky fashion”) to yield an expectation of a quantity that varies less than the original integrand over the region of integration. Concretely,$$\mathbb{E}_{p(x)} \big[\ f(x) \big] = \int f(x)\ p(x)\ dx = \int f(x)\ p(x)\ \frac{q(x)}{q(x)}\ dx = \int \frac{p(x)}{q(x)}\cdot f(x)\ q(x)\ dx = \mathbb{E}_{q(x)} \big[\ f(x)\cdot \frac{p(x)}{q(x)} \big]$$Thus, the MC estimation of the expectation becomes:$$\mathbb{E}_{q(x)} \big[\ f(x)\cdot \frac{p(x)}{q(x)} \big] \approx \frac{1}{N} \sum_{n=1}^{N} w_n \cdot f(x_n)$$where $w_n = \dfrac{p(x_n)}{q(x_n)}$ In our current example above, we can alter the mean and/or standard deviation of $\Delta L$ and $\Delta V_{TH}$ in the hopes that more of our sampling points will fall in the failure region (red area). For example, let us define 2 new distributions with altered $\sigma^2$.* $\Delta \hat{L} \sim \ N(0, 0.02^2)$* $\Delta \hat{V}_{TH} \sim \ N(0, 0.06^2)$
###Code
def importance_sampling(num_simulations, num_samples, verbose=True, plot=False):
if verbose:
print("===================================================")
print("{} Importance Sampling Simulations of size {}".format(num_simulations, num_samples))
print("===================================================\n")
num_simulations = int(num_simulations)
num_samples = int(num_samples)
probas = []
for i in range(num_simulations):
mu_1, sigma_1 = 0, 0.01
mu_2, sigma_2 = 0, 0.03
mu_1_n, sigma_1_n = 0, 0.02
mu_2_n, sigma_2_n = 0, 0.06
# setup pdfs
old_pdf_1 = norm(mu_1, sigma_1)
new_pdf_1 = norm(mu_1_n, sigma_1_n)
old_pdf_2 = norm(mu_2, sigma_2)
new_pdf_2 = norm(mu_2_n, sigma_2_n)
length = np.random.normal(mu_1_n, sigma_1_n, num_samples)
voltage = np.random.normal(mu_2_n, sigma_2_n, num_samples)
# calculate current
num = 50 * np.square((0.6 - voltage))
denum = 0.1 + length
I = num / denum
# calculate f
true_condition = np.where(I >= 275)
# calculate weight
num = old_pdf_1.pdf(length) * old_pdf_2.pdf(voltage)
denum = new_pdf_1.pdf(length) * new_pdf_2.pdf(voltage)
weights = num / denum
# select weights for nonzero f
weights = weights[true_condition]
# compute unbiased proba
proba = np.sum(weights) / num_samples
probas.append(proba)
false_condition = np.where(I < 275)
if plot:
if i == num_simulations -1:
plt.scatter(length[true_condition], voltage[true_condition], color='r')
plt.scatter(length[false_condition], voltage[false_condition], color='b')
plt.xlabel(r'$\Delta L$ [$\mu$m]')
plt.ylabel(r'$\Delta V_{TH}$ [V]')
plt.title("Monte Carlo Estimation of P(I > 275)")
plt.grid(True)
plt.savefig(plot_dir + 'imp_sampling_{}.pdf'.format(num_samples), format='pdf', dpi=300)
plt.show()
mean_proba = np.mean(probas)
std_proba = np.std(probas)
if verbose:
print("Probability Mean: {}".format(mean_proba))
print("Probability Std: {}".format(std_proba))
return probas
probas = importance_sampling(10, 10000, plot=False)
def IS_histogram(num_samples, plot=True):
num_samples = int(num_samples)
mu_1_n, sigma_1_n = 0, 0.02
mu_2_n, sigma_2_n = 0, 0.06
length = np.random.normal(mu_1_n, sigma_1_n, num_samples)
voltage = np.random.normal(mu_2_n, sigma_2_n, num_samples)
# calculate biased current
num = 50 * np.square((0.6 - voltage))
denum = 0.1 + length
I = num / denum
if plot:
n, bins, patches = plt.hist(I, 50, density=1, facecolor='green', alpha=0.75)
plt.ylabel('Number of Samples')
plt.xlabel(r'$I_{DS}$ [$\mu$A]')
plt.title("Importance Sampling of P(I > 275)")
plt.grid(True)
plt.savefig(plot_dir + 'is_histogram_{}.pdf'.format(num_samples), format='pdf', dpi=300)
plt.show()
IS_histogram(1e5)
num_samples = [1e3, 1e4, 1e5, 1e6]
num_repetitions = 25
total_probas = []
for i, num_sample in enumerate(num_samples):
print("Iter {}/{}".format(i+1, len(num_samples)))
probas = importance_sampling(num_repetitions, num_sample, verbose=False)
total_probas.append(probas)
# plt.figure(figsize=(8, 10))
y_axis_imp = np.asarray(total_probas)
x_axis_imp = np.asarray(num_samples)
for x, y in zip(x_axis_imp, y_axis_imp):
plt.scatter([x] * len(y), y, s=0.5)
plt.xscale('log')
plt.title("Importance Sampling")
plt.ylabel("Probability Estimate")
plt.xlabel('Number of Samples')
plt.grid(True)
plt.savefig(plot_dir + 'imp_sampling_convergence_speed.jpg', format='jpg', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Side by Side
###Code
fig, ax = plt.subplots(1, 1)
# monte carlo
for x, y in zip(x_axis_imp, y_axis_monte):
ax.scatter([x] * len(y), y, s=0.5, c='r', alpha=0.3)
# importance sampling
for x, y in zip(x_axis_imp, y_axis_imp):
ax.scatter([x] * len(y), y, s=0.5, c='b', alpha=0.3)
blue = mlines.Line2D([], [], color='blue', marker='_', linestyle='None', markersize=10, label='importance sampling')
red = mlines.Line2D([], [], color='red', marker='_', linestyle='None', markersize=10, label='monte carlo')
plt.xscale('log')
plt.grid(True)
plt.legend(handles=[blue, red], loc='lower right')
plt.savefig('/Users/kevin/Desktop/plot.jpg', format='jpg', dpi=300, bbox_inches='tight')
plt.tight_layout()
plt.show()
###Output
_____no_output_____ |
matplotlib/gallery_jupyter/mplot3d/lines3d.ipynb | ###Markdown
Parametric CurveThis example demonstrates plotting a parametric curve in 3D.
###Code
import numpy as np
import matplotlib.pyplot as plt
plt.rcParams['legend.fontsize'] = 10
fig = plt.figure()
ax = fig.gca(projection='3d')
# Prepare arrays x, y, z
theta = np.linspace(-4 * np.pi, 4 * np.pi, 100)
z = np.linspace(-2, 2, 100)
r = z**2 + 1
x = r * np.sin(theta)
y = r * np.cos(theta)
ax.plot(x, y, z, label='parametric curve')
ax.legend()
plt.show()
###Output
_____no_output_____ |
src/.ipynb_checkpoints/37 > steel IS NOT graphite-checkpoint.ipynb | ###Markdown
From the useful sheets, we have to formalize a GRL problem.
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import os
import requests
import io
import seaborn as sns
PATH = '../data/'
files = os.listdir(PATH)
dfs = {f[:-4] : pd.read_csv(PATH + f)
for f in files if f[-3:] == 'csv'
}
# All Users
# member_id is unique and reflected across all csvs (total 1207)
# all other things like DoB, IP, etc are most likely spoof
# Difference between name and member_title
# What is member_group_id?
# ID 4, 6, 10 are core moderators (from core_moderators)
dfs['core_members'].head(4);
core_members = dfs['core_members'][['member_id', 'name', 'member_title', 'member_posts']]
core_members.to_csv(PATH + 'modified/core_members.csv')
#3. 170 comments. comment_eid (event id?), comment_mid (member id?),
#comment_text,
# comment_author (string name) (doesn't match with any of member_title or name)
# might be useful to get some active users
# only comment_mid would be useful ig
dfs['calendar_event_comments'].head(4)
# 8066 follow connections
# follow_rel_id (?) and follow_member_id may be useful.
dfs['core_follow'].head(5)
dfs['core_follow'].follow_app.value_counts()
dfs['core_follow'].follow_area.value_counts()
dfs['core_follow'].follow_rel_id.nunique(), dfs['core_follow'].follow_member_id.nunique()
min(dfs['core_members'].member_id), max(dfs['core_members'].member_id)
min(dfs['core_follow'].follow_rel_id), max(dfs['core_follow'].follow_rel_id)
min(dfs['core_follow'].follow_member_id), max(dfs['core_follow'].follow_member_id)
# Important. All Private Messages. Total 21715.
# msg_topic_id is the thread ID; each convo bw two users has its own thread/topic
# msg_author_id is the same as member_id in core_members
# msg_post is the first message in the thread, with HTML tags
dfs['core_message_posts'].head(3)
#30. All Private Message Topics (total 4475)
# mt_id is same as msg_topic_id from core_message_posts
# mt_starter_id is the member_id of the person who put the first message on that topic
# mt_to_member_id is the member_id of the recipient of this private message
dfs['core_message_topics'].head(5)
#35. Pfields for each member. Total 1208. Could be used for node classification.
# dfs['core_pfields_content'].field_5[dfs['core_pfields_content'].field_5.isnull()]
# dfs['core_pfields_content'].field_5[190:196]
# print(sum(dfs['core_pfields_content'].field_5.isnull()))
# print(sum(dfs['core_pfields_content'].field_6.isnull()))
# print(sum(dfs['core_pfields_content'].field_7.isnull()))
# print(sum(dfs['core_pfields_content'].field_12.isnull()))
# print(sum(dfs['core_pfields_content'].field_13.isnull()))
# print(sum(dfs['core_pfields_content'].field_3.isnull()))
dfs['core_pfields_content'].head(10)
#43. Important. All Posts. Total 196042.
# index_content has the post content.
# index_author is the member_id of the authon of that post.
dfs['core_search_index'].head(4)
min(dfs['core_search_index'].index_author), max(dfs['core_search_index'].index_author)
#75. Useful. Notification Graph could give us very informative edges.
# from notify_from_id to notify_to_id based on notify_type_key
# and notify_type_key could be a nice edge feature
dfs['orig_inline_notifications'].notify_type_key.value_counts();
dfs['orig_inline_notifications'].head(5)
#76. 763 Original members.
# check if all orig members have emails filled in
sum(dfs['orig_members'].email.isnull()) # (== 0) = True
orig_members = [name for name in dfs['orig_members'].name]
core_members = [name for name in dfs['core_members'].name]
print('based on username')
print(len(core_members), len(orig_members))
print(round(np.mean([
user in core_members
for user in orig_members
]), 2),
'= fraction of orig members in core members.')
print(round(np.mean([
user in orig_members
for user in core_members
]), 2),
'= fraction of core members in orig members.')
print('based on member_id')
orig_members = [user for user in dfs['orig_members'].member_id]
core_members = [user for user in dfs['core_members'].member_id]
print(len(core_members), len(orig_members))
print(round(np.mean([
user in core_members
for user in orig_members
]), 2),
'= fraction of orig members in core members.')
print(round(np.mean([
user in orig_members
for user in core_members
]), 2),
'= fraction of core members in orig members.')
#78. Useful. But don't fully understand.
# Mapping from user_id to topic_id, might help connect users.
dfs['orig_message_topic_user_map'].head(4)
# TODO:
# Check if users following the same topic are same in
# this map and core_message_topics and orig_message_topics
# reference with topic_title and compare the topic_id v topic_title mapping of both
# orig and core
ids = 0
for i in range(len(dfs['orig_message_topics'].mt_title)):
if dfs['orig_message_topics'].mt_title[i] == dfs['core_message_topics'].mt_title[i]:
ids += 1
print(ids)
len(dfs['orig_message_topics'].mt_title), len(dfs['core_message_topics'].mt_title)
#79 All Orig Message Topics (total 3101)
# mt_id is same as map_topic_id from orig_message_topic_user_map
# mt_starter_id is the member_id of the person who put the first message on that topic
# mt_to_member_id is the member_id of the recipient of this message
dfs['orig_message_topics'].head(5)
#82. pfields of 764 members. Might help in node features.
dfs['orig_pfields_content'].head(3)
# What is reputation? Total 141550
# 635 users have a reputation index, could be used for node classification or features?
members = set(dfs['orig_reputation_index'].member_id)
# print(members)
freq = [[m, sum(dfs['orig_reputation_index'].member_id == m)]
for m in members]
# dfs['orig_reputation_index'].head(3)
freq_sort = sorted(freq, key = lambda z: z[1], reverse=True)
freq_sorted = pd.DataFrame(freq_sort)
i = 0
while freq_sorted[1][i] > 30:
i += 1
print(i)
print(len(freq_sorted[1]) - i)
plt.plot(freq_sorted[1])
plt.grid()
###Output
_____no_output_____ |
DSA/tree/convertBST.ipynb | ###Markdown
Given a Binary Search Tree (BST), convert it to a Greater Tree such that every key of the original BST is changed to the original key plus sum of all keys greater than the original key in BST.Example: Input: The root of a Binary Search Tree like this: 5 / \ 2 13 Output: The root of a Greater Tree like this: 18 / \ 20 13 ref:- [Approach 1 Recursion [Accepted]](https://leetcode.com/problems/convert-bst-to-greater-tree/solution/)> Intuition> > One way to perform a reverse in-order traversal is via recursion. By using the call stack to return to previous nodes, we can easily visit the nodes in reverse order.> >> Algorithm>> For the recursive approach, we maintain some minor "global" state so each recursive call can access and modify the current total sum. Essentially, we ensure that the current node exists, recurse on the right subtree, visit the current node by updating its value and the total sum, and finally recurse on the left subtree. If we know that recursing on root.right properly updates the right subtree and that recursing on root.left properly updates the left subtree, then we are guaranteed to update all nodes with larger values before the current node and all nodes with smaller values after.
###Code
# Definition for a binary tree node.
class TreeNode:
    def __init__(self, x):
        self.val = x
        self.left = None
        self.right = None
class Solution:
def __init__(self):
self.valsum = 0
def convertBST(self, root: TreeNode) -> TreeNode:
if root:
self.convertBST(root.right)
self.valsum += root.val
root.val = self.valsum
self.convertBST(root.left)
return root
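# Usage sketch: build the example tree [5, 2, 13] and check the converted values.
root = TreeNode(5)
root.left, root.right = TreeNode(2), TreeNode(13)
Solution().convertBST(root)
print(root.left.val, root.val, root.right.val)  # expected: 20 18 13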
###Output
_____no_output_____ |
MNIST_For_ML_Beginners.ipynb | ###Markdown
2D tensor that will hold the image values
###Code
x = tf.placeholder(tf.float32, [None, 784])
###Output
_____no_output_____
###Markdown
variables to be optimized, model to obtain
###Code
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
###Output
_____no_output_____
###Markdown
Implementing the model
###Code
# y = tf.nn.softmax(tf.matmul(x, W) + b)  # commented out when using tf.nn.softmax_cross_entropy_with_logits()
y = tf.matmul(x, W) + b  # keep the raw logits; the softmax is applied inside the loss function below
###Output
_____no_output_____
###Markdown
a placeholder for the correct labels, used in the loss function
###Code
y_ = tf.placeholder(tf.float32, [None, 10])
###Output
_____no_output_____
###Markdown
Cross entropy as cost function
###Code
#cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))
###Output
_____no_output_____
###Markdown
Applying minimization to the loss function
###Code
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
###Output
_____no_output_____
###Markdown
Launch the model
###Code
sess = tf.InteractiveSession()
###Output
_____no_output_____
###Markdown
Initialize the variables
###Code
tf.global_variables_initializer().run()
###Output
_____no_output_____
###Markdown
Train the model
###Code
for _ in range(1000):
batch_xs, batch_ys = mnist.train.next_batch(100)
sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
###Output
_____no_output_____
###Markdown
"Using small batches of random data is called stochastic training -- in this case, stochastic gradient descent"
###Code
batch_xs.shape #images
batch_ys.shape #label
###Output
_____no_output_____
###Markdown
Evaluate the model
###Code
correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels}))
# accuracy with the "manual" cross-entropy: 0.9148
###Output
_____no_output_____ |
python/tutorials/jupyter_notebooks/05_layer_norm.ipynb | ###Markdown
Installation & Versioning Use the %matplotlib inline magic command to enable inline plotting in this notebook.
###Code
%matplotlib inline
###Output
_____no_output_____
###Markdown
Install Triton via pip:
###Code
!pip install triton==2.0.0.dev20220206
###Output
Collecting triton==2.0.0.dev20220206
Downloading triton-2.0.0.dev20220206-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (27.8 MB)
[K |████████████████████████████████| 27.8 MB 54.7 MB/s
[?25hRequirement already satisfied: filelock in /usr/local/lib/python3.7/dist-packages (from triton==2.0.0.dev20220206) (3.6.0)
Requirement already satisfied: cmake in /usr/local/lib/python3.7/dist-packages (from triton==2.0.0.dev20220206) (3.12.0)
Requirement already satisfied: torch in /usr/local/lib/python3.7/dist-packages (from triton==2.0.0.dev20220206) (1.10.0+cu111)
Requirement already satisfied: typing-extensions in /usr/local/lib/python3.7/dist-packages (from torch->triton==2.0.0.dev20220206) (3.10.0.2)
Installing collected packages: triton
Successfully installed triton-2.0.0.dev20220206
###Markdown
Layer Normalization
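For reference, the fused kernels below implement the standard layer-norm transform applied to each row $x$ of the input, $$y = \frac{x - \mathrm{E}[x]}{\sqrt{\mathrm{Var}[x] + \epsilon}} \cdot w + b$$ where $w$ and $b$ are learned per-feature weight and bias vectors and $\epsilon$ is a small constant added for numerical stability.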
###Code
import pkg_resources
# Import from latest release tag==2.0.0.dev20220206
pkg_resources.require("triton==2.0.0.dev20220206")
import torch
import triton
import triton.language as tl
try:
# This is https://github.com/NVIDIA/apex, NOT the apex on PyPi, so it
# should not be added to extras_require in setup.py.
import apex
HAS_APEX = True
except ModuleNotFoundError:
HAS_APEX = False
# Forward Pass
@triton.jit
def _layer_norm_fwd_fused(X, Y, W, B, M, V, stride, N, eps,
BLOCK_SIZE: tl.constexpr):
# position of elements processed by this program
row = tl.program_id(0)
cols = tl.arange(0, BLOCK_SIZE)
mask = cols < N
# offset data pointers to start at the row of interest
X += row * stride
Y += row * stride
# load data and cast to float32
x = tl.load(X + cols, mask=mask, other=0).to(tl.float32)
# compute mean
mean = tl.sum(x, axis=0) / N
# compute std
xmean = tl.where(mask, x - mean, 0.)
var = tl.sum(xmean * xmean, axis=0) / N
rstd = 1 / tl.sqrt(var + eps)
xhat = xmean * rstd
# write-back mean/rstd
tl.store(M + row, mean)
tl.store(V + row, rstd)
# multiply by weight and add bias
w = tl.load(W + cols, mask=mask)
b = tl.load(B + cols, mask=mask)
y = xhat * w + b
# write-back
tl.store(Y + cols, y, mask=mask)
# Backward pass (DX + partial DW + partial DB)
@triton.jit
def _layer_norm_bwd_dx_fused(DX, DY, DW, DB, X, W, B, M, V, Lock, stride, N, eps,
GROUP_SIZE_M: tl.constexpr, BLOCK_SIZE_N: tl.constexpr):
# position of elements processed by this program
row = tl.program_id(0)
cols = tl.arange(0, BLOCK_SIZE_N)
mask = cols < N
# offset data pointers to start at the row of interest
X += row * stride
DY += row * stride
DX += row * stride
# offset locks and weight/bias gradient pointer
# each kernel instance accumulates partial sums for
# DW and DB into one of GROUP_SIZE_M independent buffers
# these buffers stay in the L2, which allow this kernel
# to be fast
lock_id = row % GROUP_SIZE_M
Lock += lock_id
Count = Lock + GROUP_SIZE_M
DW = DW + lock_id * N + cols
DB = DB + lock_id * N + cols
# load data to SRAM
x = tl.load(X + cols, mask=mask, other=0).to(tl.float32)
dy = tl.load(DY + cols, mask=mask, other=0).to(tl.float32)
w = tl.load(W + cols, mask=mask).to(tl.float32)
mean = tl.load(M + row)
rstd = tl.load(V + row)
# compute dx
xhat = (x - mean) * rstd
wdy = w * dy
xhat = tl.where(mask, xhat, 0.)
wdy = tl.where(mask, wdy, 0.)
mean1 = tl.sum(xhat * wdy, axis=0) / N
mean2 = tl.sum(wdy, axis=0) / N
dx = (wdy - (xhat * mean1 + mean2)) * rstd
# write-back dx
tl.store(DX + cols, dx, mask=mask)
# accumulate partial sums for dw/db
partial_dw = (dy * xhat).to(w.dtype)
partial_db = (dy).to(w.dtype)
while tl.atomic_cas(Lock, 0, 1) == 1:
pass
count = tl.load(Count)
# first store doesn't accumulate
if count == 0:
tl.atomic_xchg(Count, 1)
else:
partial_dw += tl.load(DW, mask=mask)
partial_db += tl.load(DB, mask=mask)
tl.store(DW, partial_dw, mask=mask)
tl.store(DB, partial_db, mask=mask)
# release lock
tl.atomic_xchg(Lock, 0)
# Backward pass (total DW + total DB)
@triton.jit
def _layer_norm_bwd_dwdb(DW, DB, FINAL_DW, FINAL_DB, M, N,
BLOCK_SIZE_M: tl.constexpr, BLOCK_SIZE_N: tl.constexpr):
pid = tl.program_id(0)
cols = pid * BLOCK_SIZE_N + tl.arange(0, BLOCK_SIZE_N)
dw = tl.zeros((BLOCK_SIZE_M, BLOCK_SIZE_N), dtype=tl.float32)
db = tl.zeros((BLOCK_SIZE_M, BLOCK_SIZE_N), dtype=tl.float32)
for i in range(0, M, BLOCK_SIZE_M):
rows = i + tl.arange(0, BLOCK_SIZE_M)
mask = (rows[:, None] < M) & (cols[None, :] < N)
offs = rows[:, None] * N + cols[None, :]
dw += tl.load(DW + offs, mask=mask, other=0.)
db += tl.load(DB + offs, mask=mask, other=0.)
sum_dw = tl.sum(dw, axis=0)
sum_db = tl.sum(db, axis=0)
tl.store(FINAL_DW + cols, sum_dw, mask=cols < N)
tl.store(FINAL_DB + cols, sum_db, mask=cols < N)
class LayerNorm(torch.autograd.Function):
@staticmethod
def forward(ctx, x, normalized_shape, weight, bias, eps):
# allocate output
y = torch.empty_like(x)
# reshape input data into 2D tensor
x_arg = x.reshape(-1, x.shape[-1])
M, N = x_arg.shape
mean = torch.empty((M, ), dtype=torch.float32, device='cuda')
rstd = torch.empty((M, ), dtype=torch.float32, device='cuda')
# Less than 64KB per feature: enqueue fused kernel
MAX_FUSED_SIZE = 65536 // x.element_size()
BLOCK_SIZE = min(MAX_FUSED_SIZE, triton.next_power_of_2(N))
if N > BLOCK_SIZE:
raise RuntimeError("This layer norm doesn't support feature dim >= 64KB.")
# heuristics for number of warps
num_warps = min(max(BLOCK_SIZE // 256, 1), 8)
# enqueue kernel
_layer_norm_fwd_fused[(M,)](x_arg, y, weight, bias, mean, rstd,
x_arg.stride(0), N, eps,
BLOCK_SIZE=BLOCK_SIZE, num_warps=num_warps)
ctx.save_for_backward(x, weight, bias, mean, rstd)
ctx.BLOCK_SIZE = BLOCK_SIZE
ctx.num_warps = num_warps
ctx.eps = eps
return y
@staticmethod
def backward(ctx, dy):
x, w, b, m, v = ctx.saved_tensors
# heuristics for amount of parallel reduction stream for DG/DB
N = w.shape[0]
GROUP_SIZE_M = 64
if N <= 8192: GROUP_SIZE_M = 96
if N <= 4096: GROUP_SIZE_M = 128
if N <= 1024: GROUP_SIZE_M = 256
# allocate output
locks = torch.zeros(2 * GROUP_SIZE_M, dtype=torch.int32, device='cuda')
_dw = torch.empty((GROUP_SIZE_M, w.shape[0]), dtype=x.dtype, device=w.device)
_db = torch.empty((GROUP_SIZE_M, w.shape[0]), dtype=x.dtype, device=w.device)
dw = torch.empty((w.shape[0],), dtype=w.dtype, device=w.device)
db = torch.empty((w.shape[0],), dtype=w.dtype, device=w.device)
dx = torch.empty_like(dy)
# enqueue kernel using forward pass heuristics
# also compute partial sums for DW and DB
x_arg = x.reshape(-1, x.shape[-1])
M, N = x_arg.shape
_layer_norm_bwd_dx_fused[(M,)](dx, dy, _dw, _db, x, w, b, m, v, locks,
x_arg.stride(0), N, ctx.eps,
BLOCK_SIZE_N=ctx.BLOCK_SIZE,
GROUP_SIZE_M=GROUP_SIZE_M,
num_warps=ctx.num_warps)
grid = lambda meta: [triton.cdiv(N, meta['BLOCK_SIZE_N'])]
# accumulate partial sums in separate kernel
_layer_norm_bwd_dwdb[grid](_dw, _db, dw, db, GROUP_SIZE_M, N,
BLOCK_SIZE_M=32,
BLOCK_SIZE_N=128)
return dx, None, dw, db, None
layer_norm = LayerNorm.apply
def test_layer_norm(M, N, dtype, eps=1e-5, device='cuda'):
# create data
x_shape = (M, N)
w_shape = (x_shape[-1], )
weight = torch.rand(w_shape, dtype=dtype, device='cuda', requires_grad=True)
bias = torch.rand(w_shape, dtype=dtype, device='cuda', requires_grad=True)
x = -2.3 + 0.5 * torch.randn(x_shape, dtype=dtype, device='cuda')
dy = .1 * torch.randn_like(x)
x.requires_grad_(True)
# forward pass
y_tri = layer_norm(x, w_shape, weight, bias, eps)
y_ref = torch.nn.functional.layer_norm(x, w_shape, weight, bias, eps).to(dtype)
# backward pass (triton)
y_tri.backward(dy, retain_graph=True)
dx_tri, dw_tri, db_tri = [_.grad.clone() for _ in [x, weight, bias]]
x.grad, weight.grad, bias.grad = None, None, None
# backward pass (torch)
y_ref.backward(dy, retain_graph=True)
dx_ref, dw_ref, db_ref = [_.grad.clone() for _ in [x, weight, bias]]
# compare
triton.testing.assert_almost_equal(y_tri, y_ref)
triton.testing.assert_almost_equal(dx_tri, dx_ref)
triton.testing.assert_almost_equal(db_tri, db_ref, decimal=1)
triton.testing.assert_almost_equal(dw_tri, dw_ref, decimal=1)
@triton.testing.perf_report(
triton.testing.Benchmark(
x_names=['N'],
x_vals=[512 * i for i in range(2, 32)],
line_arg='provider',
line_vals=['triton', 'torch'] + (['apex'] if HAS_APEX else []),
line_names=['Triton', 'Torch'] + (['Apex'] if HAS_APEX else []),
styles=[('blue', '-'), ('green', '-'), ('orange', '-')],
ylabel='GB/s',
plot_name='layer-norm-backward',
args={'M': 4096, 'dtype': torch.float16, 'mode': 'backward'}
)
)
def bench_layer_norm(M, N, dtype, provider, mode='backward', eps=1e-5, device='cuda'):
# create data
x_shape = (M, N)
w_shape = (x_shape[-1], )
weight = torch.rand(w_shape, dtype=dtype, device='cuda', requires_grad=True)
bias = torch.rand(w_shape, dtype=dtype, device='cuda', requires_grad=True)
x = -2.3 + 0.5 * torch.randn(x_shape, dtype=dtype, device='cuda')
dy = .1 * torch.randn_like(x)
x.requires_grad_(True)
# utility functions
if provider == 'triton':
y_fwd = lambda: layer_norm(x, w_shape, weight, bias, eps)
if provider == 'torch':
y_fwd = lambda: torch.nn.functional.layer_norm(x, w_shape, weight, bias, eps)
if provider == 'apex':
apex_layer_norm = apex.normalization.FusedLayerNorm(w_shape).to(x.device).to(x.dtype)
y_fwd = lambda: apex_layer_norm(x)
# forward pass
if mode == 'forward':
gbps = lambda ms: 2 * x.numel() * x.element_size() / ms * 1e-6
ms, min_ms, max_ms = triton.testing.do_bench(y_fwd, rep=500)
# backward pass
if mode == 'backward':
gbps = lambda ms: 3 * x.numel() * x.element_size() / ms * 1e-6
y = y_fwd()
ms, min_ms, max_ms = triton.testing.do_bench(lambda: y.backward(dy, retain_graph=True),
grad_to_none=[x], rep=500)
return gbps(ms), gbps(max_ms), gbps(min_ms)
bench_layer_norm.run(save_path='.', print_data=True)
###Output
_____no_output_____ |
notebooks/Adventure2.ipynb | ###Markdown
Welcome Home Usually when Steve comes home, there is no one at home. Steve can get lonely at times, especially after a long, hard battle with creepers and zombies. In this programming adventure we'll make Minecraft display a warm and friendly welcome message when Steve comes home. We'll test your program by exploring the world and then coming back home to a friendly welcome. Along the way we will learn about coordinate systems, which help us locate objects in a game. We will also learn about variables and conditions. Coordinate Systems From your math classes, you will remember coordinate systems to locate points in a plane. The points (2,3), (-3,1) and (-1,-2.5) are shown in the grid below. The Minecraft coordinate grid is shown below: In Minecraft, when you move East, your X-coordinate increases, and when you move South, your Z-coordinate increases. Let's confirm this through a few Minecraft exercises. Task 1: Moving in Minecraft coordinate systems In Minecraft, look at Steve's coordinates. Now move Steve to any other position. See how his coordinates change as you move. - [ ] Change your direction so that only the X-coordinate moves when you move forward or back. - [ ] Change your direction so only the Z-coordinate moves when you move forward or back. Task 2: Write a program to show Steve's position on the screen Remember functions from the first Adventure? A function lets us do things in a computer program or in the Minecraft game. The function **getTilePos()** gets the player's position as (x,y,z) coordinates in Minecraft. Let's use this function to print Steve's position as he moves around. We need to store Steve's position when we call the function **getTilePos()** so that we can print the position later. We can use a program **variable** to store the position. A variable has a name and can be used to store values. We'll call our variable **pos** for position and it will contain Steve's position. When we want to print the position, we print the values of the position's x, y and z coordinates using another function, **print()**, which prints any strings you give it. Start up Minecraft and type the following in a new cell.```pythonfrom mcpi.minecraft import *mc = Minecraft.create()pos = mc.player.getTilePos()print(pos.x)print(pos.y)print(pos.z)```When you run your program by pressing Ctrl+Enter in the program cell, you should now see Steve's position printed. **Great Job!**
###Code
from mcpi.minecraft import *
import time
mc = Minecraft.create()
# Type Task 2 program here
###Output
_____no_output_____
###Markdown
Task 3: Prettying up messages The messages we printed are somewhat basic and can be confusing since we don't know which number is x, y or z. Why not print a message that is more useful? Often messages are built by attaching strings and data. Try typing ```python"my name is " + "Steve"``` in a code cell. What message gets printed? Now try```python"my age is " + 10``` Hmmm... That did not work :( Strings can only be attached or _concatenated_ with other strings. In order to attach a number to a string, we need to convert the number into a printable string. We will use another function, **str()**, which returns a printable string of its arguments. Since the x, y, z coordinates are numbers, we need to convert them to strings in order to print them with other strings. To see how the str() function works, type the following in a code cell and run it.```python"my age is " + str(10)``` What gets printed by the line below?```python"x = " + str(10) + ",y = " + str(20) + ",z = " + str(30)``` You now have all the information you need to print a pretty message. - [ ] Modify your program to print a pretty message like the one shown below to correctly print Steve's position Steve's position is: x = 10,y = 20,z = 30- [ ] Modify your program to use a variable named _message_ to store the pretty message and then print the message Hint:```pythonmessage = ...print(message)```
###Code
## Type Task 3 program here
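# One possible sketch for Task 3 (your exact message wording may differ):
pos = mc.player.getTilePos()
message = "Steve's position is: x = " + str(pos.x) + ",y = " + str(pos.y) + ",z = " + str(pos.z)
print(message)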
###Output
_____no_output_____
###Markdown
Task 4: Display Steve's coordinates in Minecraft For this task, instead of printing Steve's coordinates, let's display them in Minecraft using the **postToChat()** function from Adventure1. You should see a message like the one below once you run your program.
###Code
while True:
time.sleep(1)
## Type Task 4 program here
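    # One possible sketch for Task 4: post the position to the Minecraft chat each second.
    pos = mc.player.getTilePos()
    message = "Steve's position is: x = " + str(pos.x) + ",y = " + str(pos.y) + ",z = " + str(pos.z)
    mc.postToChat(message)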
###Output
_____no_output_____
###Markdown
Home In Minecraft, move to a location that you want to call home and place a Gold block there. Move Steve on top of the Gold block and write down his coordinates. Let's save these coordinates in the variables **home_x** and **home_z**. We will use these variables to detect when Steve returns home.
###Code
## Change these values for your home
home_x = 0
home_z = 0
###Output
_____no_output_____
###Markdown
Is Steve home? Now the magic of figuring out if Steve is home. As Steve moves in Minecraft, his x and z coordinates change. We can detect that Steve is home when his coordinates are equal to the coordinates of his home! To put it in math terms, Steve is home when$$(pos_{x},pos_{z}) = (home_{x},home_{z})$$In the program we can write the math expression as```pythonpos.x == home_x and pos.z == home_z```We can use an **if** program block to check if Steve's coordinates equal his home coordinates. An **if** block is written as shown below```pythonif (condition): do something 1 do something 2```Let's put this all together in the program below```pythonwhile True: time.sleep(1) pos = mc.player.getTilePos() if (pos.x == home_x and pos.z == home_z): mc.postToChat("Welcome home Steve.") the rest of your program from task 4```What happens when you run around in Minecraft and return to the gold block that is your home? That warm message makes Steve happy. He can now be well rested for battling creepers the next day.
###Code
## Type Task 5 program here
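# One possible sketch for Task 5, combining the home check with the Task 4 loop:
while True:
    time.sleep(1)
    pos = mc.player.getTilePos()
    if (pos.x == home_x and pos.z == home_z):
        mc.postToChat("Welcome home Steve.")
    message = "Steve's position is: x = " + str(pos.x) + ",y = " + str(pos.y) + ",z = " + str(pos.z)
    mc.postToChat(message)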
###Output
_____no_output_____ |
notebooks/MORE_PANDAS_Auto_MPG.ipynb | ###Markdown
MPG Cars Introduction:The following exercise utilizes data from [UC Irvine Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Auto+MPG) Step 1. Import the necessary libraries
###Code
###Output
_____no_output_____
###Markdown
Step 2. Import the first dataset [cars1](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv) and [cars2](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv).
###Code
# https://raw.githubusercontent.com/a-forty-two/COG_GN22CDBDS001_MARCH_22/main/cars1.csv
# https://raw.githubusercontent.com/a-forty-two/COG_GN22CDBDS001_MARCH_22/main/cars2.csv
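# A possible sketch for Steps 2-3: read both CSVs from the raw GitHub URLs above.
import pandas as pd
cars1 = pd.read_csv('https://raw.githubusercontent.com/a-forty-two/COG_GN22CDBDS001_MARCH_22/main/cars1.csv')
cars2 = pd.read_csv('https://raw.githubusercontent.com/a-forty-two/COG_GN22CDBDS001_MARCH_22/main/cars2.csv')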
###Output
_____no_output_____
###Markdown
Step 3. Assign each to a variable called cars1 and cars2
###Code
###Output
_____no_output_____
###Markdown
Step 4. Oops, it seems our first dataset has some unnamed blank columns, fix cars1
###Code
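# A possible sketch for Step 4 (assumes cars1 was loaded in Step 3):
# drop the blank "Unnamed: ..." columns.
cars1 = cars1.loc[:, ~cars1.columns.str.contains('^Unnamed')]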
###Output
_____no_output_____
###Markdown
Step 5. What is the number of observations in each dataset?
###Code
###Output
_____no_output_____
###Markdown
Step 6. Join cars1 and cars2 into a single DataFrame called cars
###Code
###Output
_____no_output_____
###Markdown
Step 7. Oops, there is a column missing, called owners. Create a random number Series from 15,000 to 73,000.
###Code
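# A possible sketch for Step 7 (assumes the joined cars frame from Step 6):
# draw one random owner count between 15,000 and 73,000 per row.
import numpy as np
owners = pd.Series(np.random.randint(15000, 73001, size=len(cars)))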
###Output
_____no_output_____
###Markdown
Step 8. Add the column owners to cars
###Code
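# A possible sketch for Step 8: attach the owners Series from Step 7 as a new column.
cars['owners'] = owners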
###Output
_____no_output_____ |
notebooks/Session02-BostonHousePrice-Solution.ipynb | ###Markdown
Regression is the process of modeling the relationship between the features x and the target y so that y can be predicted from x. This time we are going to practice Linear Regression with the Boston House Price data that is already embedded in the scikit-learn datasets. **Useful functions**- sklearn.metrics.mean_squared_error: a common evaluation metric (MSE)- np.sqrt(x): element-wise square root of array x- linear_model.coef_ : get the `Regression coefficient` of the fitted linear model
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import sklearn.datasets as datasets
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
BOSTON_DATA = datasets.load_boston()
# TODO: get rid of the semicolon and see what the data looks like.
BOSTON_DATA;
###Output
_____no_output_____
###Markdown
Simple EDA
###Code
# Load both boston data and target, and convert it as dataframe.
def add_target_to_data(dataset):
# TODO: make the raw dataset cleaner and easier to process -> use dataframe
df = pd.DataFrame(dataset.data, columns=dataset.feature_names)
# TODO: put the target data (price) to the dataframe we just made.
print("Before adding target: ", df.shape)
df['PRICE'] = dataset.target
print("After adding target: {} \n {}\n".format(df.shape, df.head(2)))
return df
"""
10 features as default.
Why didn't I put all the 13 features? Because n_row=2 and n_col=5 as default.
It will create 10 graphs for each features.
"""
def plotting_graph(df, features, n_row=2, n_col=5):
fig, axes = plt.subplots(n_row, n_col, figsize=(16, 8))
assert len(features) == n_row * n_col
# TODO: Draw a regression graph using seaborn's regplot
for i, feature in enumerate(features):
row = int(i / n_col)
col = i % n_col
sns.regplot(x=feature, y='PRICE', data=df, ax=axes[row][col])
plt.show()
def split_dataframe(df):
label_data = df['PRICE']
# others without PRICE
# axis!! --> Whether to drop labels from the index (0 or ‘index’) or columns (1 or ‘columns’).
input_data = df.drop(['PRICE'], axis=1)
# TODO: split! Set random_state if you want consistently same result
input_train, input_eval, label_train, label_eval = train_test_split(input_data, label_data, test_size=0.3,
random_state=42)
return input_train, input_eval, label_train, label_eval
boston_df = add_target_to_data(BOSTON_DATA)
features = ['RM', 'ZN', 'INDUS', 'NOX', 'AGE', 'PTRATIO', 'LSTAT', 'RAD', 'CRIM', 'B']
plotting_graph(boston_df, features, n_row=2, n_col=5)
'''
The correlation coefficient ranges from -1 to 1.
If the value is close to 1, it means that there is a strong positive correlation between the two variables.
When it is close to -1, the variables have a strong negative correlation.
'''
correlation_matrix = boston_df.corr().round(2)
correlation_matrix
sns.heatmap(correlation_matrix, cmap="YlGnBu")
plt.show()
###Output
_____no_output_____
###Markdown
Prediction with Linear Regression
###Code
X_train, X_test, Y_train, Y_test = split_dataframe(boston_df)
# TODO: Load your machine learning model
model = LinearRegression()
# TODO: Train!
model.fit(X_train, Y_train)
# TODO: make prediction with unseen data!
pred = model.predict(X_test)
expectation = Y_test
# TODO: what is mse between the answer and your prediction?
lr_mse = mean_squared_error(expectation, pred)
# TODO: RMSE
lr_rmse = np.sqrt(lr_mse)
print('LR_MSE: {0:.3f}, LR_RMSE: {1:.3F}'.format(lr_mse, lr_rmse))
# Regression Coefficient
print('Regression Coefficients:', np.round(model.coef_, 1))
# sort from the biggest
coeff = pd.Series(data=model.coef_, index=X_train.columns).sort_values(ascending=False)
print(coeff)
plt.scatter(expectation, pred)
plt.plot([0, 50], [0, 50], '--k')
plt.xlabel('Expected price')
plt.ylabel('Predicted price')
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Prediction with other Regression methods- **Ridge, Lasso and ElasticNet**- **Gradient Boosting Regressor**- **XG Boost**- **SGD Regressor**According to sklearn's official documentation, "SGDRegressor is well suited for regression problems with a large number of training samples (> 10,000), for other problems we recommend Ridge, Lasso, or ElasticNet."
###Code
!pip install xgboost
from sklearn.linear_model import Ridge, Lasso, ElasticNet, SGDRegressor
from sklearn.ensemble import GradientBoostingRegressor
from xgboost import XGBRegressor
# Try tuning the hyper-parameters
models = {
"Ridge" : Ridge(),
"Lasso" : Lasso(),
"ElasticNet" : ElasticNet(),
"Gradient Boosting" : GradientBoostingRegressor(),
"SGD" : SGDRegressor(max_iter=1000, tol=1e-3),
"XGB" : XGBRegressor(objective ='reg:linear', colsample_bytree = 0.3, learning_rate = 0.1,
max_depth = 5, alpha = 10, n_estimators = 10)
}
pred_record = {}
for name, model in models.items():
# TODO: Load your machine learning model
curr_model = model
# TODO: Train!
curr_model.fit(X_train, Y_train)
# TODO: make prediction with unseen data!
pred = curr_model.predict(X_test)
expectation = Y_test
# TODO: what is mse between the answer and your prediction?
mse = mean_squared_error(expectation, pred)
# TODO: RMSE
rmse = np.sqrt(mse)
print('{} MSE: {}, {} RMSE: {}'.format(name, mse, name, rmse))
pred_record.update({name : pred})
prediction = pred_record["SGD"]
plt.scatter(expectation, prediction)
plt.plot([0, 50], [0, 50], '--k')
plt.xlabel('Expected price')
plt.ylabel('Predicted price')
plt.tight_layout()
prediction = pred_record["XGB"]
plt.scatter(expectation, prediction)
plt.plot([0, 50], [0, 50], '--k')
plt.xlabel('Expected price')
plt.ylabel('Predicted price')
plt.tight_layout()
prediction = pred_record["Gradient Boosting"]
plt.scatter(expectation, prediction)
plt.plot([0, 50], [0, 50], '--k')
plt.xlabel('Expected price')
plt.ylabel('Predicted price')
plt.tight_layout()
###Output
_____no_output_____
###Markdown
A Little Taster Session for Neural Network
###Code
from tensorflow import keras
from tensorflow.keras.layers import add, Dense, Activation
def neural_net():
model = keras.Sequential()
model.add(Dense(512, input_dim=BOSTON_DATA.data.shape[1]))
model.add(Activation('relu'))
model.add(Dense(256))
model.add(Activation('relu'))
model.add(Dense(128))
model.add(Activation('relu'))
model.add(Dense(64))
model.add(Activation('relu'))
model.add(Dense(1))
return model
model = neural_net()
model.summary()
model.compile(loss='mse', optimizer='adam', metrics=['accuracy'])
history = model.fit(X_train, Y_train, epochs=100)
loss, test_acc = model.evaluate(X_test, Y_test)
print('Test Loss : {:.4f} | Test Accuracy : {}'.format(loss, test_acc))
prediction = model.predict(X_test)
plt.scatter(expectation, prediction)
plt.plot([0, 50], [0, 50], '--k')
plt.xlabel('Expected price')
plt.ylabel('Predicted price')
plt.tight_layout()
###Output
W0218 20:45:20.979368 4659797312 training.py:504] Falling back from v2 loop because of error: Failed to find data adapter that can handle input: <class 'pandas.core.frame.DataFrame'>, <class 'NoneType'>
|
Joshua_Haga_DS_Sprint_Challenge_5.ipynb | ###Markdown
_Lambda School Data Science, Unit 2_ Linear Models Sprint ChallengeTo demonstrate mastery on your Sprint Challenge, do all the required, numbered instructions in this notebook.To earn a score of "3", also do all the stretch goals.You are permitted and encouraged to do as much data exploration as you want. Part 1, Classification- 1.1. Do train/test split. Arrange data into X features matrix and y target vector- 1.2. Use scikit-learn to fit a logistic regression model- 1.3. Report classification metric: accuracy Part 2, Regression- 2.1. Begin with baselines for regression- 2.2. Do train/validate/test split- 2.3. Arrange data into X features matrix and y target vector- 2.4. Do one-hot encoding- 2.5. Use scikit-learn to fit a linear regression or ridge regression model- 2.6. Report validation MAE and $R^2$ Stretch Goals, Regression- Make at least 2 visualizations to explore relationships between features and target. You may use any visualization library- Try at least 3 feature combinations. You may select features manually, or automatically- Report validation MAE and $R^2$ for each feature combination you try- Report test MAE and $R^2$ for your final model- Print or plot the coefficients for the features in your model
###Code
# If you're in Colab...
import sys
if 'google.colab' in sys.modules:
!pip install category_encoders==2.*
!pip install pandas-profiling==2.*
!pip install plotly==4.*
###Output
Collecting category_encoders==2.*
[?25l Downloading https://files.pythonhosted.org/packages/44/57/fcef41c248701ee62e8325026b90c432adea35555cbc870aff9cfba23727/category_encoders-2.2.2-py2.py3-none-any.whl (80kB)
[K |████ | 10kB 17.7MB/s eta 0:00:01
[K |████████▏ | 20kB 6.1MB/s eta 0:00:01
[K |████████████▏ | 30kB 6.6MB/s eta 0:00:01
[K |████████████████▎ | 40kB 8.3MB/s eta 0:00:01
[K |████████████████████▎ | 51kB 7.1MB/s eta 0:00:01
[K |████████████████████████▍ | 61kB 7.7MB/s eta 0:00:01
[K |████████████████████████████▍ | 71kB 8.4MB/s eta 0:00:01
[K |████████████████████████████████| 81kB 4.8MB/s
[?25hRequirement already satisfied: scipy>=1.0.0 in /usr/local/lib/python3.6/dist-packages (from category_encoders==2.*) (1.4.1)
Requirement already satisfied: pandas>=0.21.1 in /usr/local/lib/python3.6/dist-packages (from category_encoders==2.*) (1.0.5)
Requirement already satisfied: statsmodels>=0.9.0 in /usr/local/lib/python3.6/dist-packages (from category_encoders==2.*) (0.10.2)
Requirement already satisfied: numpy>=1.14.0 in /usr/local/lib/python3.6/dist-packages (from category_encoders==2.*) (1.18.5)
Requirement already satisfied: patsy>=0.5.1 in /usr/local/lib/python3.6/dist-packages (from category_encoders==2.*) (0.5.1)
Requirement already satisfied: scikit-learn>=0.20.0 in /usr/local/lib/python3.6/dist-packages (from category_encoders==2.*) (0.22.2.post1)
Requirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.6/dist-packages (from pandas>=0.21.1->category_encoders==2.*) (2018.9)
Requirement already satisfied: python-dateutil>=2.6.1 in /usr/local/lib/python3.6/dist-packages (from pandas>=0.21.1->category_encoders==2.*) (2.8.1)
Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from patsy>=0.5.1->category_encoders==2.*) (1.15.0)
Requirement already satisfied: joblib>=0.11 in /usr/local/lib/python3.6/dist-packages (from scikit-learn>=0.20.0->category_encoders==2.*) (0.16.0)
Installing collected packages: category-encoders
Successfully installed category-encoders-2.2.2
Collecting pandas-profiling==2.*
[?25l Downloading https://files.pythonhosted.org/packages/32/79/5d03ed1172e3e67a997a6a795bcdd2ab58f84851969d01a91455383795b6/pandas_profiling-2.9.0-py2.py3-none-any.whl (258kB)
[K |████████████████████████████████| 266kB 8.3MB/s
[?25hRequirement already satisfied: attrs>=19.3.0 in /usr/local/lib/python3.6/dist-packages (from pandas-profiling==2.*) (20.1.0)
Collecting phik>=0.9.10
[?25l Downloading https://files.pythonhosted.org/packages/01/5a/7ef1c04ce62cd72f900c06298dc2385840550d5c653a0dbc19109a5477e6/phik-0.10.0-py3-none-any.whl (599kB)
[K |████████████████████████████████| 604kB 18.4MB/s
[?25hRequirement already satisfied: jinja2>=2.11.1 in /usr/local/lib/python3.6/dist-packages (from pandas-profiling==2.*) (2.11.2)
Collecting tqdm>=4.43.0
[?25l Downloading https://files.pythonhosted.org/packages/28/7e/281edb5bc3274dfb894d90f4dbacfceaca381c2435ec6187a2c6f329aed7/tqdm-4.48.2-py2.py3-none-any.whl (68kB)
[K |████████████████████████████████| 71kB 7.3MB/s
[?25hRequirement already satisfied: ipywidgets>=7.5.1 in /usr/local/lib/python3.6/dist-packages (from pandas-profiling==2.*) (7.5.1)
Requirement already satisfied: scipy>=1.4.1 in /usr/local/lib/python3.6/dist-packages (from pandas-profiling==2.*) (1.4.1)
Collecting tangled-up-in-unicode>=0.0.6
[?25l Downloading https://files.pythonhosted.org/packages/4a/e2/e588ab9298d4989ce7fdb2b97d18aac878d99dbdc379a4476a09d9271b68/tangled_up_in_unicode-0.0.6-py3-none-any.whl (3.1MB)
[K |████████████████████████████████| 3.1MB 29.4MB/s
[?25hRequirement already satisfied: seaborn>=0.10.1 in /usr/local/lib/python3.6/dist-packages (from pandas-profiling==2.*) (0.10.1)
Requirement already satisfied: matplotlib>=3.2.0 in /usr/local/lib/python3.6/dist-packages (from pandas-profiling==2.*) (3.2.2)
Collecting htmlmin>=0.1.12
Downloading https://files.pythonhosted.org/packages/b3/e7/fcd59e12169de19f0131ff2812077f964c6b960e7c09804d30a7bf2ab461/htmlmin-0.1.12.tar.gz
Requirement already satisfied: pandas!=1.0.0,!=1.0.1,!=1.0.2,!=1.1.0,>=0.25.3 in /usr/local/lib/python3.6/dist-packages (from pandas-profiling==2.*) (1.0.5)
Collecting visions[type_image_path]==0.5.0
[?25l Downloading https://files.pythonhosted.org/packages/26/e3/9416e94e767d59a86edcbcb8e1c8f42874d272c3b343676074879e9db0e0/visions-0.5.0-py3-none-any.whl (64kB)
[K |████████████████████████████████| 71kB 7.5MB/s
[?25hRequirement already satisfied: requests>=2.23.0 in /usr/local/lib/python3.6/dist-packages (from pandas-profiling==2.*) (2.23.0)
Requirement already satisfied: numpy>=1.16.0 in /usr/local/lib/python3.6/dist-packages (from pandas-profiling==2.*) (1.18.5)
Requirement already satisfied: missingno>=0.4.2 in /usr/local/lib/python3.6/dist-packages (from pandas-profiling==2.*) (0.4.2)
Collecting confuse>=1.0.0
[?25l Downloading https://files.pythonhosted.org/packages/b5/6d/bedc0d1068bd244cee05843313cbec6cebb9f01f925538269bababc6d887/confuse-1.3.0-py2.py3-none-any.whl (64kB)
[K |████████████████████████████████| 71kB 8.8MB/s
[?25hRequirement already satisfied: joblib in /usr/local/lib/python3.6/dist-packages (from pandas-profiling==2.*) (0.16.0)
Requirement already satisfied: numba>=0.38.1 in /usr/local/lib/python3.6/dist-packages (from phik>=0.9.10->pandas-profiling==2.*) (0.48.0)
Requirement already satisfied: MarkupSafe>=0.23 in /usr/local/lib/python3.6/dist-packages (from jinja2>=2.11.1->pandas-profiling==2.*) (1.1.1)
Requirement already satisfied: ipykernel>=4.5.1 in /usr/local/lib/python3.6/dist-packages (from ipywidgets>=7.5.1->pandas-profiling==2.*) (4.10.1)
Requirement already satisfied: widgetsnbextension~=3.5.0 in /usr/local/lib/python3.6/dist-packages (from ipywidgets>=7.5.1->pandas-profiling==2.*) (3.5.1)
Requirement already satisfied: nbformat>=4.2.0 in /usr/local/lib/python3.6/dist-packages (from ipywidgets>=7.5.1->pandas-profiling==2.*) (5.0.7)
Requirement already satisfied: ipython>=4.0.0; python_version >= "3.3" in /usr/local/lib/python3.6/dist-packages (from ipywidgets>=7.5.1->pandas-profiling==2.*) (5.5.0)
Requirement already satisfied: traitlets>=4.3.1 in /usr/local/lib/python3.6/dist-packages (from ipywidgets>=7.5.1->pandas-profiling==2.*) (4.3.3)
Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.6/dist-packages (from matplotlib>=3.2.0->pandas-profiling==2.*) (0.10.0)
Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib>=3.2.0->pandas-profiling==2.*) (2.4.7)
Requirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib>=3.2.0->pandas-profiling==2.*) (2.8.1)
Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib>=3.2.0->pandas-profiling==2.*) (1.2.0)
Requirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.6/dist-packages (from pandas!=1.0.0,!=1.0.1,!=1.0.2,!=1.1.0,>=0.25.3->pandas-profiling==2.*) (2018.9)
Requirement already satisfied: networkx>=2.4 in /usr/local/lib/python3.6/dist-packages (from visions[type_image_path]==0.5.0->pandas-profiling==2.*) (2.5)
Requirement already satisfied: Pillow; extra == "type_image_path" in /usr/local/lib/python3.6/dist-packages (from visions[type_image_path]==0.5.0->pandas-profiling==2.*) (7.0.0)
Collecting imagehash; extra == "type_image_path"
[?25l Downloading https://files.pythonhosted.org/packages/1a/5d/cc81830be3c4705a46cdbca74439b67f1017881383ba0127c41c4cecb7b3/ImageHash-4.1.0.tar.gz (291kB)
[K |████████████████████████████████| 296kB 34.3MB/s
[?25hRequirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests>=2.23.0->pandas-profiling==2.*) (2.10)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests>=2.23.0->pandas-profiling==2.*) (2020.6.20)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests>=2.23.0->pandas-profiling==2.*) (3.0.4)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from requests>=2.23.0->pandas-profiling==2.*) (1.24.3)
Requirement already satisfied: pyyaml in /usr/local/lib/python3.6/dist-packages (from confuse>=1.0.0->pandas-profiling==2.*) (3.13)
Requirement already satisfied: llvmlite<0.32.0,>=0.31.0dev0 in /usr/local/lib/python3.6/dist-packages (from numba>=0.38.1->phik>=0.9.10->pandas-profiling==2.*) (0.31.0)
Requirement already satisfied: setuptools in /usr/local/lib/python3.6/dist-packages (from numba>=0.38.1->phik>=0.9.10->pandas-profiling==2.*) (49.6.0)
Requirement already satisfied: jupyter-client in /usr/local/lib/python3.6/dist-packages (from ipykernel>=4.5.1->ipywidgets>=7.5.1->pandas-profiling==2.*) (5.3.5)
Requirement already satisfied: tornado>=4.0 in /usr/local/lib/python3.6/dist-packages (from ipykernel>=4.5.1->ipywidgets>=7.5.1->pandas-profiling==2.*) (5.1.1)
Requirement already satisfied: notebook>=4.4.1 in /usr/local/lib/python3.6/dist-packages (from widgetsnbextension~=3.5.0->ipywidgets>=7.5.1->pandas-profiling==2.*) (5.3.1)
Requirement already satisfied: jupyter-core in /usr/local/lib/python3.6/dist-packages (from nbformat>=4.2.0->ipywidgets>=7.5.1->pandas-profiling==2.*) (4.6.3)
Requirement already satisfied: jsonschema!=2.5.0,>=2.4 in /usr/local/lib/python3.6/dist-packages (from nbformat>=4.2.0->ipywidgets>=7.5.1->pandas-profiling==2.*) (2.6.0)
Requirement already satisfied: ipython-genutils in /usr/local/lib/python3.6/dist-packages (from nbformat>=4.2.0->ipywidgets>=7.5.1->pandas-profiling==2.*) (0.2.0)
Requirement already satisfied: simplegeneric>0.8 in /usr/local/lib/python3.6/dist-packages (from ipython>=4.0.0; python_version >= "3.3"->ipywidgets>=7.5.1->pandas-profiling==2.*) (0.8.1)
Requirement already satisfied: pygments in /usr/local/lib/python3.6/dist-packages (from ipython>=4.0.0; python_version >= "3.3"->ipywidgets>=7.5.1->pandas-profiling==2.*) (2.1.3)
Requirement already satisfied: prompt-toolkit<2.0.0,>=1.0.4 in /usr/local/lib/python3.6/dist-packages (from ipython>=4.0.0; python_version >= "3.3"->ipywidgets>=7.5.1->pandas-profiling==2.*) (1.0.18)
Requirement already satisfied: decorator in /usr/local/lib/python3.6/dist-packages (from ipython>=4.0.0; python_version >= "3.3"->ipywidgets>=7.5.1->pandas-profiling==2.*) (4.4.2)
Requirement already satisfied: pickleshare in /usr/local/lib/python3.6/dist-packages (from ipython>=4.0.0; python_version >= "3.3"->ipywidgets>=7.5.1->pandas-profiling==2.*) (0.7.5)
Requirement already satisfied: pexpect; sys_platform != "win32" in /usr/local/lib/python3.6/dist-packages (from ipython>=4.0.0; python_version >= "3.3"->ipywidgets>=7.5.1->pandas-profiling==2.*) (4.8.0)
Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from traitlets>=4.3.1->ipywidgets>=7.5.1->pandas-profiling==2.*) (1.15.0)
Requirement already satisfied: PyWavelets in /usr/local/lib/python3.6/dist-packages (from imagehash; extra == "type_image_path"->visions[type_image_path]==0.5.0->pandas-profiling==2.*) (1.1.1)
Requirement already satisfied: pyzmq>=13 in /usr/local/lib/python3.6/dist-packages (from jupyter-client->ipykernel>=4.5.1->ipywidgets>=7.5.1->pandas-profiling==2.*) (19.0.2)
Requirement already satisfied: terminado>=0.8.1 in /usr/local/lib/python3.6/dist-packages (from notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.5.1->pandas-profiling==2.*) (0.8.3)
Requirement already satisfied: Send2Trash in /usr/local/lib/python3.6/dist-packages (from notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.5.1->pandas-profiling==2.*) (1.5.0)
Requirement already satisfied: nbconvert in /usr/local/lib/python3.6/dist-packages (from notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.5.1->pandas-profiling==2.*) (5.6.1)
Requirement already satisfied: wcwidth in /usr/local/lib/python3.6/dist-packages (from prompt-toolkit<2.0.0,>=1.0.4->ipython>=4.0.0; python_version >= "3.3"->ipywidgets>=7.5.1->pandas-profiling==2.*) (0.2.5)
Requirement already satisfied: ptyprocess>=0.5 in /usr/local/lib/python3.6/dist-packages (from pexpect; sys_platform != "win32"->ipython>=4.0.0; python_version >= "3.3"->ipywidgets>=7.5.1->pandas-profiling==2.*) (0.6.0)
Requirement already satisfied: mistune<2,>=0.8.1 in /usr/local/lib/python3.6/dist-packages (from nbconvert->notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.5.1->pandas-profiling==2.*) (0.8.4)
Requirement already satisfied: bleach in /usr/local/lib/python3.6/dist-packages (from nbconvert->notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.5.1->pandas-profiling==2.*) (3.1.5)
Requirement already satisfied: entrypoints>=0.2.2 in /usr/local/lib/python3.6/dist-packages (from nbconvert->notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.5.1->pandas-profiling==2.*) (0.3)
Requirement already satisfied: testpath in /usr/local/lib/python3.6/dist-packages (from nbconvert->notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.5.1->pandas-profiling==2.*) (0.4.4)
Requirement already satisfied: pandocfilters>=1.4.1 in /usr/local/lib/python3.6/dist-packages (from nbconvert->notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.5.1->pandas-profiling==2.*) (1.4.2)
Requirement already satisfied: defusedxml in /usr/local/lib/python3.6/dist-packages (from nbconvert->notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.5.1->pandas-profiling==2.*) (0.6.0)
Requirement already satisfied: packaging in /usr/local/lib/python3.6/dist-packages (from bleach->nbconvert->notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.5.1->pandas-profiling==2.*) (20.4)
Requirement already satisfied: webencodings in /usr/local/lib/python3.6/dist-packages (from bleach->nbconvert->notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.5.1->pandas-profiling==2.*) (0.5.1)
Building wheels for collected packages: htmlmin, imagehash
  Building wheel for htmlmin (setup.py) ... done
Created wheel for htmlmin: filename=htmlmin-0.1.12-cp36-none-any.whl size=27085 sha256=4d74ff64a723588d3587bc2f61300dce5d7487dc06185288f9201722bab87780
Stored in directory: /root/.cache/pip/wheels/43/07/ac/7c5a9d708d65247ac1f94066cf1db075540b85716c30255459
  Building wheel for imagehash (setup.py) ... done
Created wheel for imagehash: filename=ImageHash-4.1.0-py2.py3-none-any.whl size=291991 sha256=70a562c3bb9e1bd9ae7a8ad589f35e90c1442599efa292deb294692478cbdff5
Stored in directory: /root/.cache/pip/wheels/07/1c/dc/6831446f09feb8cc199ec73a0f2f0703253f6ae013a22f4be9
Successfully built htmlmin imagehash
Installing collected packages: phik, tqdm, tangled-up-in-unicode, htmlmin, imagehash, visions, confuse, pandas-profiling
Found existing installation: tqdm 4.41.1
Uninstalling tqdm-4.41.1:
Successfully uninstalled tqdm-4.41.1
Found existing installation: pandas-profiling 1.4.1
Uninstalling pandas-profiling-1.4.1:
Successfully uninstalled pandas-profiling-1.4.1
Successfully installed confuse-1.3.0 htmlmin-0.1.12 imagehash-4.1.0 pandas-profiling-2.9.0 phik-0.10.0 tangled-up-in-unicode-0.0.6 tqdm-4.48.2 visions-0.5.0
Requirement already satisfied: plotly==4.* in /usr/local/lib/python3.6/dist-packages (4.4.1)
Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from plotly==4.*) (1.15.0)
Requirement already satisfied: retrying>=1.3.3 in /usr/local/lib/python3.6/dist-packages (from plotly==4.*) (1.3.3)
###Markdown
Part 1, Classification: Predict Blood Donations 🚑Our dataset is from a mobile blood donation vehicle in Taiwan. The Blood Transfusion Service Center drives to different universities and collects blood as part of a blood drive.The goal is to predict whether the donor made a donation in March 2007, using information about each donor's history.Good data-driven systems for tracking and predicting donations and supply needs can improve the entire supply chain, making sure that more patients get the blood transfusions they need.
###Code
import pandas as pd
donors = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/blood-transfusion/transfusion.data')
assert donors.shape == (748,5)
donors = donors.rename(columns={
'Recency (months)': 'months_since_last_donation',
'Frequency (times)': 'number_of_donations',
'Monetary (c.c. blood)': 'total_volume_donated',
'Time (months)': 'months_since_first_donation',
'whether he/she donated blood in March 2007': 'made_donation_in_march_2007'
})
###Output
_____no_output_____
###Markdown
Notice that the majority class (did not donate blood in March 2007) occurs about 3/4 of the time. This is the accuracy score for the "majority class baseline" (the accuracy score we'd get by just guessing the majority class every time).
###Code
donors['made_donation_in_march_2007'].value_counts(normalize=True)
###Data Exploration
import numpy as np
import plotly as py
donors.columns
donors.head()
donors.isnull().sum()
donor_corr = donors.corr()
donor_corr
import matplotlib.pyplot as plt
import seaborn as sns
plt.figure(figsize=(12, 8))
sns.heatmap(donor_corr);
donor_corr['made_donation_in_march_2007'].sort_values(ascending=False)
###Output
_____no_output_____
###Markdown
Looking at the correlations between the target (whether a donation was made in March 2007) and the other features in the dataset, 'number of donations' and 'total volume donated' both have the highest correlation with the target feature; interestingly, they are both the same value. 1.1. Do train/test split. Arrange data into X features matrix and y target vector. Do these steps in either order. Use scikit-learn's train/test split function to split randomly. (You can include 75% of the data in the train set, and hold out 25% for the test set, which is the default.)
###Code
###Import SKlearn
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn import metrics
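# A minimal sketch for 1.1 (one possible approach): arrange X/y from `donors` and use the default 75/25 split.
X = donors.drop(columns='made_donation_in_march_2007')
y = donors['made_donation_in_march_2007']
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)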
###Output
_____no_output_____
###Markdown
1.2. Use scikit-learn to fit a logistic regression modelYou may use any number of features
###Code
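# A minimal sketch for 1.2, assuming X_train/y_train from the split sketched in 1.1 are in scope:
from sklearn.linear_model import LogisticRegression
log_reg = LogisticRegression(max_iter=1000)
log_reg.fit(X_train, y_train)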
###Output
_____no_output_____
###Markdown
1.3. Report classification metric: accuracyWhat is your model's accuracy on the test set?Don't worry if your model doesn't beat the majority class baseline. That's okay!_"The combination of some data and an aching desire for an answer does not ensure that a reasonable answer can be extracted from a given body of data."_ —[John Tukey](https://en.wikiquote.org/wiki/John_Tukey)(Also, if we used recall score instead of accuracy score, then your model would almost certainly beat the baseline. We'll discuss how to choose and interpret evaluation metrics throughout this unit.)
###Code
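# A minimal sketch for 1.3, assuming `log_reg` and the test split from the sketches above:
from sklearn.metrics import accuracy_score
y_pred = log_reg.predict(X_test)
print('Test accuracy:', accuracy_score(y_test, y_pred))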
###Output
_____no_output_____
###Markdown
Part 2, Regression: Predict home prices in Ames, Iowa 🏠You'll use historical housing data. ***There's a data dictionary at the bottom of the notebook.*** Run this code cell to load the dataset:
###Code
import pandas as pd
URL = 'https://drive.google.com/uc?export=download&id=1522WlEW6HFss36roD_Cd9nybqSuiVcCK'
homes = pd.read_csv(URL)
assert homes.shape == (2904, 47)
###Output
_____no_output_____
###Markdown
2.1. Begin with baselinesWhat is the Mean Absolute Error and R^2 score for a mean baseline? (You can get these estimated scores using all your data, before splitting it.)
###Code
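# A minimal sketch of the mean baseline. 'SalePrice' is a hypothetical column name;
# replace it with the actual sale-price column from the data dictionary.
from sklearn.metrics import mean_absolute_error, r2_score
y = homes['SalePrice']
baseline_pred = [y.mean()] * len(y)
print('Baseline MAE:', mean_absolute_error(y, baseline_pred))
print('Baseline R^2:', r2_score(y, baseline_pred))  # 0 by construction for a mean baseline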
###Output
_____no_output_____
###Markdown
2.2. Do train/validate/test splitTrain on houses sold in the years 2006 - 2008. (1,920 rows)Validate on house sold in 2009. (644 rows)Test on houses sold in 2010. (340 rows)
###Code
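# A minimal sketch, assuming a hypothetical 'Yr_Sold' column holds the sale year:
train = homes[homes['Yr_Sold'] <= 2008]
val = homes[homes['Yr_Sold'] == 2009]
test = homes[homes['Yr_Sold'] == 2010]
print(train.shape, val.shape, test.shape)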
###Output
_____no_output_____
###Markdown
2.3. Arrange data into X features matrix and y target vectorSelect at least one numeric feature and at least one categorical feature.Otherwise, you may choose whichever features and however many you want.
###Code
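# A minimal sketch with hypothetical column names (swap in real ones from the data dictionary):
features = ['Gr_Liv_Area', 'Neighborhood']  # one numeric + one categorical
target = 'SalePrice'
X_train, y_train = train[features], train[target]
X_val, y_val = val[features], val[target]
X_test, y_test = test[features], test[target]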
###Output
_____no_output_____
###Markdown
2.4. Do one-hot encodingEncode your categorical feature(s).
###Code
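# A minimal sketch using pandas one-hot encoding (one option among several),
# assuming X_train/X_val from the previous sketch:
X_train_enc = pd.get_dummies(X_train)
X_val_enc = pd.get_dummies(X_val).reindex(columns=X_train_enc.columns, fill_value=0)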
###Output
_____no_output_____
###Markdown
2.5. Use scikit-learn to fit a linear regression or ridge regression modelFit your model.
###Code
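# A minimal sketch, assuming the encoded matrices from the 2.4 sketch:
from sklearn.linear_model import Ridge
model = Ridge(alpha=1.0)
model.fit(X_train_enc, y_train)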
###Output
_____no_output_____
###Markdown
2.6. Report validation MAE and $R^2$What is your model's Mean Absolute Error and $R^2$ score on the validation set? (You are not graded on how high or low your validation scores are.)
###Code
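# A minimal sketch, assuming `model` and the encoded validation set from the sketches above:
from sklearn.metrics import mean_absolute_error, r2_score
val_pred = model.predict(X_val_enc)
print('Validation MAE:', mean_absolute_error(y_val, val_pred))
print('Validation R^2:', r2_score(y_val, val_pred))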
###Output
_____no_output_____
###Markdown
Stretch Goals, Regression- Make at least 2 visualizations to explore relationships between features and target. You may use any visualization library- Try at least 3 feature combinations. You may select features manually, or automatically- Report validation MAE and $R^2$ for each feature combination you try- Report test MAE and $R^2$ for your final model- Print or plot the coefficients for the features in your model
###Code
###Output
_____no_output_____ |
data_structure_and_algorithm_py3_7.ipynb | ###Markdown
1.20 Merging multiple dicts or mappings. You have several dicts or mappings and want to logically merge them into a single mapping before performing certain operations, such as looking up values or checking whether certain keys exist.
###Code
a = {'x': 1, 'z': 3}
b = {'y': 2, 'z': 4}
# Need to perform lookups across both dicts (check a first; if the key is not found there, fall back to b)
from collections import ChainMap
c = ChainMap(a,b)
print(c['x'])
print(c['y'])
print(c['z'])
###Output
1
2
3
###Markdown
A ChainMap accepts multiple dicts and makes them logically behave as a single dict. However, the dicts are not actually merged: the ChainMap class simply keeps an internal list holding these dicts and redefines the common dict operations to traverse that list. Most ordinary dict operations work as usual.
###Code
len(c)
list(c.keys())
list(c.values())
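# The wrapped dicts are exposed (in lookup order) via the .maps attribute:
c.maps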
###Output
_____no_output_____
###Markdown
If duplicate keys appear (here 'z'), the value from the first mapping in which the key occurs wins and is the one returned, so c['z'] always returns the corresponding value from dict a rather than from b. Update and delete operations on the ChainMap always affect only the first dict in the list.
###Code
c['z'] = 10
c['w'] = 80
del c['x']
###Output
_____no_output_____
###Markdown
This shows why the keys above could not be found any more: the cells were not executed in order. 'Order' here refers to the timeline, i.e. running the cells in chronological sequence.
###Code
c_old = ChainMap(a,b)
c_old
type(c_old)
###Output
_____no_output_____
###Markdown
ChainMap is very useful for modelling variable scopes in a language (globals, locals); it makes this kind of thing easy.
###Code
values = ChainMap()
values['x'] = 1
# Add a new mapping
values = values.new_child()
values['x'] = 2
# Add a new mapping
values = values.new_child()
values['x'] = 3
values
values['x']
# Discard last mapping
values = values.parents
values['x']
# Discard last mapping
values = values.parents
values['x']
values
###Output
_____no_output_____
###Markdown
As an alternative to ChainMap, consider using update() to merge two dicts.
###Code
a = {'x':1,'z':3}
b = {'y':2,'z':4}
merged = dict(b)
merged.update(a)
print(merged['x'],'\n\n',merged['y'],'\n\n',merged['z'])
###Output
1
2
3
###Markdown
Although the above works, it requires creating a completely separate dict object (which may break the existing dict structure). Also, if the original dict is updated later, those updates will not be reflected in the new merged dict.
###Code
a['x'] = 19
merged['x']
###Output
_____no_output_____
###Markdown
ChainMap uses the original dicts and does not create a new dict itself, so changes to the underlying dicts affect the ChainMap as well.
###Code
a = {'x': 1, 'z': 3}
b = {'y': 2, 'z': 4}
merged = ChainMap(a,b)
merged['x']
a['x'] = 43
merged['x'] # Notice change to merged dict
###Output
_____no_output_____ |
Documents/auxiliars_doc.ipynb | ###Markdown
Documentation of the simulation's auxiliary classes: 1. Characteristic, 2. Dependence, 3. Operators. 1. Characteristic: the class that represents characteristics; it contains the **name**, **value**, **limits**, **mutability** and distribution function, and it also contains the method to modify them.
###Code
import os
from sys import path
path.append(os.path.abspath(os.path.join('',os.pardir)))
from Simulation.characteristic import Characteristic
from Simulation.dependence import Dependence
from Simulation.operators import *
char = Characteristic("precipitacionnes", 12, 0, 10000, 2, lambda x: x*x)
char.Update_Characteristic_Value(-1)
char.value
###Output
_____no_output_____
###Markdown
2. Dependence: the class used to represent a dependence, influence or interdependence. It holds the positions, entities and characteristics that are related, plus c, the ratio with which they are related, and the dependence's addition and multiplication functions.
###Code
dep = Dependence([0,0], "","altura",[0,0],"atleta_no_boliviano","rendimiento",-2)
###Output
_____no_output_____ |
Intro2Algo/Basics.ipynb | ###Markdown
Basic
###Code
%reload_ext blackcellmagic
###Output
_____no_output_____
###Markdown
Define a function to generate a random list
###Code
import random
def ran_list(length, start=0, end=1000):
random_list = []
for i in range(0, length):
x = random.randint(start, end)
random_list.append(x)
return random_list
###Output
_____no_output_____
###Markdown
Insertion sort
###Code
unsorted_data = ran_list(length=10)
sorted_data=list(sorted(unsorted_data))
print(unsorted_data)
def insertion_sort_in_place(data):
for idx in range(1, len(data)):
key = data[idx]
i = idx - 1
while i >= 0 and data[i] > key:
data[i + 1] = data[i]
i = i - 1
data[i + 1] = key
return data
from operator import eq
insertion_sort_in_place(unsorted_data)
print(unsorted_data)
print(eq(sorted_data,unsorted_data))
###Output
[2, 154, 165, 337, 634, 688, 722, 796, 945, 957]
True
|
Simple_Model_of_LNG_Market_v_0.25.ipynb | ###Markdown
Simple LNG Market. The importing countries try to minimize their costs for the imported LNG. The objective of this model is to minimize total LNG costs while satisfying constraints on the supplies available at each of the exporting nodes and the demand requirements at each of the importing nodes.
###Code
import shutil
import sys
import os.path
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
plt.close('all')
from IPython.display import display
from geopy.geocoders import Nominatim
from geopy import distance
from pyomo.environ import *
geolocator = Nominatim(user_agent="Your_Name")
pd.set_option('precision', 3)
MMBtu = 23.161 #1m³LNG = 23.161MMBtu
###Output
_____no_output_____
###Markdown
1 Data 1.1 Importing NodesTen largest importing countries according to "BP Stats Review (2020)" are taken into consideration.
###Code
# Data for importing and exporting countries from "BP Stats Review (2020)"
# "Import (MMBtu)" column consists of imports in m³ multiplied with conversion factor
Imp_Countries = pd.DataFrame({
"Country" : ["Japan", "China", "South Korea", "India", "France", "Spain",
"UK", "Italy", "Turkey", "Belgium"],
"Import (MMBtu)" : [105.5*10**9 * MMBtu, 84.8*10**9 * MMBtu, 55.6*10**9 * MMBtu, 32.9*10**9 * MMBtu,
22.9*10**9 * MMBtu, 21.9*10**9 * MMBtu, 18*10**9 * MMBtu, 13.5*10**9 * MMBtu,
12.9*10**9 * MMBtu, 7.2*10**9 * MMBtu],
"Regasification Terminal" : ["Sodegaura", "Yancheng", "Incheon", "Dahej", "Dunkirk",
"Barcelona", "Isle of Grain", "Rovigo", "Marmara", "Zeebrugge"]
})
#Imp_Countries["Import (MMBtu)"] = (Imp_Countries["Import (MMBtu)"].astype(float)/1000000000).astype(str)
# getting the coordinates of largest regas terminals, not necessary anymore, since there is no suitable package
# for calculating sea distances :(
Imp_Countries["Location"] = Imp_Countries["Regasification Terminal"].apply(geolocator.geocode)
Imp_Countries["Point"]= Imp_Countries["Location"].apply(lambda loc: tuple(loc.point) if loc else None)
Imp_Countries[["Latitude", "Longitude", "Altitude"]] = pd.DataFrame(Imp_Countries["Point"].to_list(), index=Imp_Countries.index)
del Imp_Countries["Altitude"]
del Imp_Countries["Location"]
del Imp_Countries["Point"]
#center the columns, display() method doesn't work anymore, so not necessary for now
def pd_centered(df):
return df.style.set_table_styles([
{"selector": "th", "props": [("text-align", "center")]},
{"selector": "td", "props": [("text-align", "center")]}])
#display(pd_centered(Imp_Countries))
Imp_Countries
###Output
_____no_output_____
###Markdown
1.2 Exporting NodesTen largest exporting countries according to "BP Stats Review (2020)" are taken into consideration.
###Code
# define DF with data for exporting countries (BP 2020)
Exp_Countries = pd.DataFrame({
"Country" : ["Qatar", "Australia", "USA", "Russia", "Malaysia", "Nigeria",
"Trinidad & Tobago", "Algeria", "Indonesia","Oman"],
"Export (MMBtu)" : [107.1*10**9 * MMBtu, 104.7*10**9 * MMBtu, 47.5*10**9 * MMBtu, 39.4*10**9 * MMBtu,
35.1*10**9 * MMBtu, 28.8*10**9 * MMBtu, 17.0*10**9 * MMBtu, 16.9*10**9 * MMBtu,
16.5*10**9 * MMBtu, 14.1*10**9 * MMBtu],
"Break Eeven Price ($/MMBtu)" : [2.5, 6.5, 6.0, 4.5, 6.0, 4.08, 5.1, 2.5, 6.0, 3.5],
#"Feedstock ($/MMBtu)" : [0.5, 3.0, 3.0, 0.5, 3.0, 2.2, 3.0, 0.5, 2.0, 1.5],
#"Liquefaction ($/MMBtu)" : [2.0, 3.5, 3.0, 4.0, 3.0, 1.88, 2.1, 2.0, 4.0, 2.0],
"Liquefaction Terminal" : ["Ras Laffan", "Gladstone", "Sabine Pass", "Sabetta",
"Bintulu", "Bonny Island", "Point Fortin",
" Bethioua", "Bontang", "Qalhat"],
})
#Exp_Countries["Export (Bcm)"] = (Exp_Countries["Export (Bcm)"].astype(float)/1000000000).astype(str)
# Getting the coordinates of liquefaction terminals, not necessary anymore, since there is no suitable package
# for calculating sea distances :(
Exp_Countries["Location"] = Exp_Countries["Liquefaction Terminal"].apply(geolocator.geocode)
Exp_Countries["Point"] = Exp_Countries["Location"].apply(lambda loc: tuple(loc.point) if loc else None)
Exp_Countries[["Latitude", "Longitude", "Altitude"]] = pd.DataFrame(Exp_Countries["Point"].to_list(), index=Exp_Countries.index)
# remove unnecessary columns
del Exp_Countries["Altitude"]
del Exp_Countries["Location"]
del Exp_Countries["Point"]
#display(pd_centered(Exp_Countries))
Exp_Countries
###Output
_____no_output_____
###Markdown
1.3 Sea distances among the importing and exporting nodes
###Code
# Define DF with distances among largest liquefaction and regas terminals in nautical miles
# In case of lack of route to the specific terminal, next largest port was taken
# sea-distances.org, searoutes.com
# if route over canal: distance rounded on 0 for suez and on 5 for panama, needed for later calculations
Distances = pd.DataFrame([(6512, 5846, 6161, 1301, 6240, 4650, 6260, 4380, 3460, 6270),
(3861, 4134, 4313, 5918, 11650, 10060, 11670, 9790, 8870, 11680),
(9205, 10075, 10005, 9645, 4881, 5206, 4897, 6354, 6341, 4908),
(4929, 5596, 5671, 10160, 2494, 4252, 2511, 5546, 5481, 2477),
(2276, 1656, 1998, 3231, 8980, 7390, 9000, 7120, 6200, 9010),
(10752, 10088, 10406, 6988, 4287, 3824, 4309, 4973, 4961, 4321),
(8885, 9755, 9665, 8370, 3952, 3926, 3974, 5074, 5062, 3984),
(9590, 8930, 9240, 4720, 1519, 343, 1541, 1432, 1417, 1552),
(2651, 2181, 2487, 3517, 9260, 7670, 9280, 7390, 6470, 9290),
(6046, 5379, 5694, 853, 5760, 4180, 5790, 3900, 2980, 5800)
],
index=["Qatar", "Australia", "USA", "Russia", "Malaysia",
"Nigeria", "Trinidad & Tobago", "Algeria", "Indonesia"
,"Oman"],
columns = ("Japan", "China", "South Korea", "India", "France",
"Spain", "UK", "Italy", "Turkey", "Belgium"))
#display(pd_centered(Distances))
Distances
###Output
_____no_output_____
###Markdown
1.4 LNG CarrierTo keep the model simple, it is assumed that there is only one type of LNG carrier, with the following characteristics:
###Code
# define DF with lng carrier properties
LNG_Carrier = pd.DataFrame({"Capacity (m³)" : [160000], # average capacity in 2019 160000 (!!! https://www.rivieramm.com/opinion/opinion/lng-shipping-by-numbers-36027)
"Spot Charter Rate ($/day)" : [70000], # average spot charter rate in 2019
"Speed (knots/h)" : [17],
"Bunker (mt/d)" : [110], # !!! Does boil off reduce it and what's the ratio then???
"Bunkers Price ($/mt)" : [670],
"Boil off" : [1/1000], # the daily guaranteed maximum boil-off or DGMB (0,1%)
"Heel" : [4/100]}) # 4% of the cargo is retained as the heel (LNG storage tanks need to be kept cool)
#print(LNG_Carrier_DF.loc[0, "Capacity (m³)"])
#display(pd_centered(LNG_Carrier))
LNG_Carrier
###Output
_____no_output_____
###Markdown
1.5 Additional Costs- Port costs for 3 days: outbound port, destination port and one day in case it has to wait for loading/unloading- Canal Fees (Suez or Panama Canal)- Agents and broker fees: 2% of charter cost plus- Insurance: 2,600 $/day
###Code
# define DF containing additional costs
Additional_Costs = pd.DataFrame({
"Port Costs ($ for 3 days)" : [300000],
"Suez Canal Costs ($/Cargo)" : [1000000], # 0.24 $/mmBtU
"Panama Canal Costs ($/Cargo)" : [900000], # 0.21 $/mmBtu,
"Insurance ($/day)" : [2600],
"Fees (Percentage of Charter Costs)" : [2/100]})
#display(pd_centered(Additional_Costs))
Additional_Costs
###Output
_____no_output_____
###Markdown
2 Calculations 2.1 (Return) Voyage Times
###Code
Time = Distances / (LNG_Carrier.loc[0, "Speed (knots/h)"] * 12) # in days
# note to myself: it would be better to define a function which would compute the time!!!
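# A hypothetical helper along the lines of the note above (same arithmetic and 12-hours-of-sailing-per-day assumption):
def voyage_time_days(distances_nm, speed_knots):
    return distances_nm / (speed_knots * 12)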
#display(pd_centered(Time))
Time
###Output
_____no_output_____
###Markdown
2.2 Charter Costs
###Code
Charter_Costs = (Time + 2) * LNG_Carrier.loc[0, "Spot Charter Rate ($/day)"] # in $
# display(pd_centered(Charter_Costs))
Charter_Costs
###Output
_____no_output_____
###Markdown
2.3 Fuel Costs
###Code
# this is an arbitrary cost (ideally it would be either the price after reliquefaction or the destination hub price)
lng_cost = 5 # $/MMBtu
Fuel_Costs = Time * LNG_Carrier.loc[0, "Boil off"] * 0.98 * LNG_Carrier.loc[0, "Capacity (m³)"] * lng_cost * MMBtu
#display(pd_centered(Fuel_Costs))
Fuel_Costs
###Output
_____no_output_____
###Markdown
2.4 Agents and Broker Fees + Insurance
###Code
Fees_and_Insurance = (Time + 2) * Additional_Costs.loc[0, "Insurance ($/day)"] + Charter_Costs * Additional_Costs.loc[0, "Fees (Percentage of Charter Costs)"]
#display(pd_centered(Fees_and_Insurance))
Fees_and_Insurance
###Output
_____no_output_____
###Markdown
2.5 Suez/Panama Canal Fee
###Code
# create an empty data frame for canal fees (the port costs frame is created in the next cell)
Canal_Fees = pd.DataFrame(np.zeros((10,10)),
index = ["Qatar", "Australia", "USA", "Russia", "Malaysia",
"Nigeria", "Trinidad & Tobago", "Algeria", "Indonesia",
"Oman"],
columns = ("Japan", "China", "South Korea", "India", "France",
"Spain", "UK", "Italy", "Turkey", "Belgium"))
for i in Distances.index:
for j in Distances.columns:
if (((Distances.loc[i,j]) % 5 == 0) and ((Distances.loc[i,j]) %10 != 0)):
Canal_Fees.loc[i,j] += Additional_Costs.loc[0, "Panama Canal Costs ($/Cargo)"]
elif ((Distances.loc[i,j]) % 10 == 0):
Canal_Fees.loc[i,j] += Additional_Costs.loc[0, "Suez Canal Costs ($/Cargo)"]
#display(pd_centered(Canal_Fees))
Canal_Fees
###Output
_____no_output_____
###Markdown
2.6 Port Costs
###Code
# empty data frame for port costs
Port_Costs = pd.DataFrame(np.zeros((10,10)),
index = ["Qatar", "Australia", "USA", "Russia", "Malaysia",
"Nigeria", "Trinidad & Tobago", "Algeria", "Indonesia",
"Oman"],
columns = ("Japan", "China", "South Korea", "India", "France",
"Spain", "UK", "Italy", "Turkey", "Belgium"))
Port_Costs += Additional_Costs.loc[0, "Port Costs ($ for 3 days)"]
#display(pd_centered(Port_Costs))
Port_Costs
###Output
_____no_output_____
###Markdown
2.7 Total Costs of Transportation
###Code
Transport_Costs = Charter_Costs + Fuel_Costs + Fees_and_Insurance + Canal_Fees + Port_Costs
#Transport_Costs
Transport_Costs_MMBtu = Transport_Costs / (0.94 * LNG_Carrier.loc[0, "Capacity (m³)"] * MMBtu) # 1-heel-boil_off = 0.94
#display(pd_centered(Transport_Costs_MMBtu))
Transport_Costs_MMBtu #in $/MMBtu
###Output
_____no_output_____
###Markdown
2.8 Total CostsTotal costs per unit of transported LNG = Feedstock Price + Liquefaction Costs + Total Costs of Transportation
###Code
# taken from EXP_Countries df
Breakeven = pd.DataFrame([(2.5, 2.5, 2.5, 2.5, 2.5, 2.5, 2.5, 2.5, 2.5, 2.5),
(6.5, 6.5, 6.5, 6.5, 6.5, 6.5, 6.5, 6.5, 6.5, 6.5),
(6.0, 6.0, 6.0, 6.0, 6.0, 6.0, 6.0, 6.0, 6.0, 6.0),
(4.5, 4.5, 4.5, 4.5, 4.5, 4.5, 4.5, 4.5, 4.5, 4.5),
(6.0, 6.0, 6.0, 6.0, 6.0, 6.0, 6.0, 6.0, 6.0, 6.0),
(4.08, 4.08, 4.08, 4.08, 4.08, 4.08, 4.08, 4.08, 4.08, 4.08),
(5.1, 5.1, 5.1, 5.1, 5.1, 5.1, 5.1, 5.1, 5.1, 5.1),
(2.5, 2.5, 2.5, 2.5, 2.5, 2.5, 2.5, 2.5, 2.5, 2.5),
(6.0, 6.0, 6.0, 6.0, 6.0, 6.0, 6.0, 6.0, 6.0, 6.0),
(3.5, 3.5, 3.5, 3.5, 3.5, 3.5, 3.5, 3.5, 3.5, 3.5)
],
index = ["Qatar", "Australia", "USA", "Russia", "Malaysia", "Nigeria",
"Trinidad & Tobago", "Algeria", "Indonesia","Oman"],
columns = ("Japan", "China", "South Korea", "India", "France",
"Spain", "UK", "Italy", "Turkey", "Belgium"))
Total_Costs = Breakeven + Transport_Costs_MMBtu
#display(pd_centered(Total_Costs))
Total_Costs # in $/MMBtu
###Output
_____no_output_____
###Markdown
3 Pyomo Model 3.1 Data File
###Code
# Extract "Countries", "Exports" and "Imports" from "Exp_Countries" and "Imp_Countries" dfs
# and create dictionaries
Supply = Exp_Countries.set_index("Country")["Export (MMBtu)"].to_dict()
#print(Supply)
Demand = Imp_Countries.set_index("Country")["Import (MMBtu)"].to_dict()
#print(Demand)
# I would really like to get the "Cost" dictionary straight from the "Total_Costs" df, since doing it
# this way is a real pain (a possible approach is sketched after the dictionary below)
Cost = {
("Japan", "Qatar"): 3.473,
("China", "Qatar"): 3.387,
("South Korea", "Qatar"): 3.428,
("India", "Qatar"): 2.797,
("France", "Qatar"): 3.725,
("Spain", "Qatar"): 3.519,
("UK", "Qatar"): 3.728,
("Italy", "Qatar"): 3.484,
("Turkey", "Qatar"): 3.364,
("Belgium", "Qatar"): 3.729,
("Japan", "Australia"): 7.129,
("China", "Australia"): 7.165,
("South Korea", "Australia"): 7.188,
("India", "Australia"): 7.396,
("France", "Australia"): 8.427,
("Spain", "Australia"): 8.220,
("UK", "Australia"): 8.429,
("Italy", "Australia"): 8.185,
("Turkey", "Australia"): 8.066,
("Belgium", "Australia"): 8.430,
("Japan", "USA"): 7.581,
("China", "USA"): 7.694,
("South Korea", "USA"): 7.685,
("India", "USA"): 7.638,
("France", "USA"): 6.762,
("Spain", "USA"): 6.804,
("UK", "USA"): 6.764,
("Italy", "USA"): 6.953,
("Turkey", "USA"): 6.951,
("Belgium", "USA"): 6.765,
("Japan", "Russia"): 5.268,
("China", "Russia"): 5.354,
("South Korea", "Russia"): 5.364,
("India", "Russia"): 6.233,
("France", "Russia"): 4.952,
("Spain", "Russia"): 5.180,
("UK", "Russia"): 4.954,
("Italy", "Russia"): 5.348,
("Turkey", "Russia"): 5.339,
("Belgium", "Russia"): 4.950,
("Japan", "Malaysia"): 6.424,
("China", "Malaysia"): 6.343,
("South Korea", "Malaysia"): 6.388,
("India", "Malaysia"): 6.548,
("France", "Malaysia"): 7.580,
("Spain", "Malaysia"): 7.374,
("UK", "Malaysia"): 7.583,
("Italy", "Malaysia"): 7.339,
("Turkey", "Malaysia"): 7.220,
("Belgium", "Malaysia"): 7.584,
("Japan", "Nigeria"): 5.603,
("China", "Nigeria"): 5.517,
("South Korea", "Nigeria"): 5.558,
("India", "Nigeria"): 5.115,
("France", "Nigeria"): 4.765,
("Spain", "Nigeria"): 4.705,
("UK", "Nigeria"): 4.767,
("Italy", "Nigeria"): 4.854,
("Turkey", "Nigeria"): 4.852,
("Belgium", "Nigeria"): 4.769,
("Japan", "Trinidad & Tobago"): 6.639,
("China", "Trinidad & Tobago"): 6.752,
("South Korea", "Trinidad & Tobago"): 6.740,
("India", "Trinidad & Tobago"): 6.601,
("France", "Trinidad & Tobago"): 5.741,
("Spain", "Trinidad & Tobago"): 5.738,
("UK", "Trinidad & Tobago"): 5.744,
("Italy", "Trinidad & Tobago"): 5.887,
("Turkey", "Trinidad & Tobago"): 5.885,
("Belgium", "Trinidad & Tobago"): 5.745,
("Japan", "Algeria"): 4.159,
("China", "Algeria"): 4.074,
("South Korea", "Algeria"): 4.114,
("India", "Algeria"): 3.528,
("France", "Algeria"): 2.826,
("Spain", "Algeria"): 2.673,
("UK", "Algeria"): 2.828,
("Italy", "Algeria"): 2.814,
("Turkey", "Algeria"): 2.812,
("Belgium", "Algeria"): 2.830,
("Japan", "Indonesia"): 6.472,
("China", "Indonesia"): 6.411,
("South Korea", "Indonesia"): 6.451,
("India", "Indonesia"): 6.585,
("France", "Indonesia"): 7.617,
("Spain", "Indonesia"): 7.410,
("UK", "Indonesia"): 7.619,
("Italy", "Indonesia"): 7.374,
("Turkey", "Indonesia"): 7.255,
("Belgium","Indonesia"): 7.620,
("Japan", "Oman"): 4.413,
("China", "Oman"): 4.326,
("South Korea", "Oman"): 4.367,
("India", "Oman"): 3.739,
("France", "Oman"): 4.663,
("Spain", "Oman"): 4.458,
("UK", "Oman"): 4.667,
("Italy", "Oman"): 4.421,
("Turkey", "Oman"): 4.302,
("Belgium", "Oman"): 4.668
}
#print(Cost)
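# A possible way to build the Cost dict straight from the Total_Costs DataFrame (see the note above);
# left commented out so the hand-rounded values used for the recorded results stay in effect:
# Cost = {(imp, exp): Total_Costs.loc[exp, imp]
#         for exp in Total_Costs.index
#         for imp in Total_Costs.columns}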
###Output
_____no_output_____
###Markdown
3.1 Model File
###Code
# Step 0: Create an instance of the model
model = ConcreteModel()
model.dual = Suffix(direction=Suffix.IMPORT)
# Step 1: Define index sets
CUS = list(Demand.keys()) # Customer
SRC = list(Supply.keys()) # Source
# Step 2: Define the decision
model.x = Var(CUS, SRC, domain = NonNegativeReals)
# Step 3: Define Objective
model.Cost = Objective(
expr = sum([Cost[c,s]*model.x[c,s] for c in CUS for s in SRC]),
sense = minimize)
# Step 4: Constraints
model.src = ConstraintList()
for s in SRC:
model.src.add(sum([model.x[c,s] for c in CUS]) <= Supply[s])
model.dmd = ConstraintList()
for c in CUS:
model.dmd.add(sum([model.x[c,s] for s in SRC]) == Demand[c])
results = SolverFactory('glpk').solve(model)
results.write()
###Output
# ==========================================================
# = Solver Results =
# ==========================================================
# ----------------------------------------------------------
# Problem Information
# ----------------------------------------------------------
Problem:
- Name: unknown
Lower bound: 44639429781300.0
Upper bound: 44639429781300.0
Number of objectives: 1
Number of constraints: 21
Number of variables: 101
Number of nonzeros: 201
Sense: minimize
# ----------------------------------------------------------
# Solver Information
# ----------------------------------------------------------
Solver:
- Status: ok
Termination condition: optimal
Statistics:
Branch and bound:
Number of bounded subproblems: 0
Number of created subproblems: 0
Error rc: 0
Time: 0.02081131935119629
# ----------------------------------------------------------
# Solution Information
# ----------------------------------------------------------
Solution:
- number of solutions: 0
number of solutions displayed: 0
###Markdown
4 Solution
###Code
#for c in CUS:
# for s in SRC:
# print(c, s, model.x[c,s]() / MMBtu / 1000000000)
if 'ok' == str(results.Solver.status):
#print("Total Costs = ",model.Cost(), "$")
print("\nShipping Table:")
for s in SRC:
for c in CUS:
if model.x[c,s]() > 0:
print("Ship from ", s," to ", c, ":", model.x[c,s]() / MMBtu / 1000000000, "bcm")
else:
print("No Valid Solution Found")
print("\nShipping Table:")
for c in CUS:
for s in SRC:
if model.x[c,s]() > 0:
print("Ship from ", s," to ", c, ":", model.x[c,s]() / MMBtu / 1000000000, "bcm")
###Output
Shipping Table:
Ship from Australia to Japan : 66.1 bcm
Ship from Russia to Japan : 39.4 bcm
Ship from Qatar to China : 35.6 bcm
Ship from Malaysia to China : 35.1 bcm
Ship from Oman to China : 14.1 bcm
Ship from Qatar to South Korea : 38.6 bcm
Ship from Australia to South Korea : 0.5 bcm
Ship from Indonesia to South Korea : 16.499999999999996 bcm
Ship from Qatar to India : 32.9 bcm
Ship from USA to France : 8.499999999999998 bcm
Ship from Trinidad & Tobago to France : 14.4 bcm
Ship from Nigeria to Spain : 19.3 bcm
Ship from Trinidad & Tobago to Spain : 2.6 bcm
Ship from USA to UK : 18.0 bcm
Ship from Nigeria to Italy : 9.5 bcm
Ship from Algeria to Italy : 4.0 bcm
Ship from Algeria to Turkey : 12.9 bcm
Ship from USA to Belgium : 7.2 bcm
|
.ipynb_checkpoints/Stock-Duration-To-Come-Back-From-Crisis-checkpoint.ipynb | ###Markdown
Stock-Duration-To-Come-Back-From-Crisis
###Code
import sys
sys.executable # Check in which virtual environment I am operating
###Output
_____no_output_____
###Markdown
Generating a table with the worst crises for a stock over a given period of time, and the time it took for the stock to come back to the initial price it had fallen from. This will give a historical view of the kind of crises a company faced in its history, and the duration it took the company to recover from each crisis.

| Company | Crisis | Low | from High | Change | got back by | after (years) |
|---------|------------|------|-----------|--------|-------------|---------------|
| Boeing  | 1973-03-10 | \$10 | \$20      | -50%   | 1976-09-10  | 3.5           |
###Code
import numpy as np
import pandas as pd
import yfinance as yf # Module to retrieve data on financial instruments (similar to 'yahoo finance')
import matplotlib
from matplotlib import pyplot as plt # Import pyplot for plotting
from pandas.plotting import register_matplotlib_converters # to register Register Pandas Formatters and Converters with matplotlib.
from pandas.tseries.offsets import DateOffset
plt.style.use('seaborn') # using a specific matplotlib style
from dateutil.relativedelta import relativedelta
###Output
_____no_output_____
###Markdown
Parameters `stock` Captures the stock (company) to be studied. The variable should be set with the official "Ticker" of the company. E.g.: `stock = 'AAPL'` for Ticker AAPL (the Ticker of the company Apple).
###Code
stock = 'BA' # stock under study. Boeing in this case.
stock_name = 'Boeing'
###Output
_____no_output_____
###Markdown
`period` Period of time under study. This is the period of time during which we want to find the crisis for the stock under study.For example, it can be the last 5 years, the last 10 years or the maximum period for which we have data.
###Code
period = 'max' # valid period values: 1y,2y,5y,10y,ytd,max
###Output
_____no_output_____
###Markdown
`time_window` The time window is the moving time interval used to calculate the loss of the stock. At each point in time, the loss will be calculated relative to a reference point dated at the beginning of that time window. This way, the calculated loss is representative of the loss over that rolling time window. E.g.: `time_window = 20` means the change will be calculated over a rolling time window of 20 trading days.
###Code
time_window = 20 # in open trading days
###Output
_____no_output_____
###Markdown
`large_loss` A stock will be considered to be in a crisis if the change it suffers over the `time_window` drops below `large_loss` (i.e. the loss is larger in magnitude than `large_loss`).
###Code
large_loss = -0.30 # large loss (a percentage number: large_loss = -0.30 represents a loss of -30%)
###Output
_____no_output_____
###Markdown
Implementation Get the stock historical price data for the period under study Use the yfinance module to download the daily data for the period. And store the result in a Pandas DataFrame.
###Code
df = yf.download(tickers=stock, period= period, interval='1d') # Download the data
df.drop(columns=['Open','High','Low','Adj Close','Volume'],inplace=True) # Drop unnecessary columns
df.insert(0, 'Company', stock_name) # Insert a column "Company" with the stock name in it
df.head() # Display the first rows of the data
###Output
[*********************100%***********************] 1 of 1 completed
###Markdown
Calculate Change over the `time_window` rolling period Create a column to store the reference price at the beginning of the time window.
###Code
df['Reference Price'] = df.Close.shift(time_window)
df
###Output
_____no_output_____
###Markdown
At the beginning of the data, we cannot look a full time window back for a reference point because we do not have data before that point. We will replace the missing values with the oldest price point available: the first historical price point in the data.
###Code
df.fillna(value=df.iloc[0]['Close'],inplace=True)
df
###Output
_____no_output_____
###Markdown
Calculate the change column: last "Close" compared to "Reference Price" (the reference price being the price of the stock at the start of the `time_window`.
###Code
df['Change'] = (df['Close']-df['Reference Price']) / df['Reference Price']
df
###Output
_____no_output_____
###Markdown
Find the worst crisis To find the worst crisis, we are going to search for local minimums in the "Change" column. And then find those "Change" local minimums that are inferior to the `large_loss` parameter.E.g.: Worst crisis of Boeing over a month (loss larger than -30%).
###Code
# Create a column with the local minimums (change smaller than the one before and the one after)
df['Local Min'] = df.Change[ (df.Change.shift(1)>df.Change) & (df.Change.shift(-1)>df.Change) ]
df
###Output
_____no_output_____
###Markdown
Address the fact that the very last price could be a local minimum as well: if the very last change is smaller than the day before, then consider it a local minimum too.
###Code
if df.loc[df.index[-1],'Change'] < df.loc[df.index[-2],'Change']: # if the last change is smaller tham the day before
df.loc[df.index[-1],'Local Min'] = df.loc[df.index[-1],'Change'] # consider the last change as a local minimum
df
###Output
_____no_output_____
###Markdown
Find the worst crises by selecting the rows for which the Change has a local minimum below the `large_loss` parameter defined at the top of the notebook. `df1` will be a smaller copy of the larger `df`; `df1` will ultimately hold the final output of this entire notebook.
###Code
df1 = df[ df['Local Min'] < large_loss ].copy()
df1
###Output
_____no_output_____
###Markdown
It happens that we have several local minimums within the span of a `time_window`. In order to avoid redundancy and keep only the worst loss point for each crisis, we will keep only the largest loss endured during the span of a given `time_window`. The following function uses a `time_cursor` that iteratively moves to the start of each `time_window`, gets the local minimums for that `time_window`, and keeps only the worst date as representative of the crisis during that `time_window`.
###Code
def get_single_crisis_dates_per_time_window(df):
'''
DOCSTRING:
    INPUT: a DataFrame with local minimums of Change, with possibly several local minimums per time_window.
OUTPUT: List of dates corresponding to the worst data points (worst loss) within each time_window.
The output is the final list of crisis dates for that Stock over the period under study.
'''
crisis_dates = [] # initiate a list that will contain the crisis dates
time_cursor = df.index[0] # initiate a time cursor that will move at the start of each time_window to consider
# print(f'time_cursor initiated at: {time_cursor}')
time_window_df = df.loc[time_cursor:time_cursor+DateOffset(months=1)] # data within the time_window being pointed out by the cursor
# Loop running as long as the cursor can be moved to a next time_window (i.e as long as there are dates moving forward)
while True:
# print('\n')
# print( time_window_df )
# get the date of the worst crisis during the time_window and append to the list of crisis dates
crisis_dates.append( time_window_df['Change'].idxmin() )
# print(f'\nThe crisis dates so far are: {crisis_dates}')
# Try to get the next row after that time_window, and place the time_cursor there
try:
next_row = df[ time_window_df.index[-1] : ].iloc[1] # Try to get the next row after that time_window
time_cursor = next_row.name # place the time_cursor at the date corresponding to that next row
# print(f'Moving the time_cursor to the start of the next time window to consider: {time_cursor}')
time_window_df = df.loc[time_cursor:time_cursor+DateOffset(months=1)] # update the data being pointed out by the time_cursor
# If no next row, we are at the end of the data and we can break the loop there
except:
# print(f'There is not next date to which to move the the time cursor that is currently at {time_cursor}\nBREAK OUT OF THE LOOP')
break
return crisis_dates
stock_crisis_dates = get_single_crisis_dates_per_time_window(df1)
###Output
_____no_output_____
###Markdown
We found the worst crisis dates:
###Code
stock_crisis_dates
###Output
_____no_output_____
###Markdown
The data corresponding to those dates is:
###Code
df1 = df1.loc[stock_crisis_dates] # limit df1 (target output of the Notebook) to worst crisis dates only
df1
###Output
_____no_output_____
###Markdown
Find the date by which the stock comes back to its initial price For a given crisis, we are going to retrieve the date by which the stock came back to the reference price from which it had fallen.
###Code
for crisis_date in df1.index: # Iterate through crisis dates
# Try to get the first day after the crisis where the stock closed higher than the Reference Price.
# If it exists (thus the Try)
try:
df1.loc[crisis_date,'got back by'] = df.loc[ ( df.index > crisis_date ) &
( df.Close > df.loc[crisis_date,'Reference Price']) ].iloc[0].name.date()
except:
        print(f'Could not find a date by which the stock got back to its reference price for crisis dated: {crisis_date}')
df1
###Output
Could not find a date by which the stock got back to its reference price for crisis dated: 2020-03-20 00:00:00
###Markdown
And indicate the duration it took for the stock to come back to its reference price (to come back from the crisis)
###Code
# transform the index into a series of dates and calculate the duration for the stock to come back from the crisis
df1['Duration (years)'] = df1['got back by'] - pd.to_datetime(df1.index.to_series()).dt.date
# Get the number of days of the duration object and divide by 365 to get years
df1['Duration (years)'] = df1['Duration (years)'].apply(lambda x: x.days / 365)
df1
###Output
_____no_output_____
###Markdown
Break the duration down into the exact number of years, months and days:
###Code
for i in range(len(df1)-1):
my_relative_delta = relativedelta( df1.loc[df1.index[i],'got back by'] , pd.to_datetime(df1.index.to_series()).dt.date[i] )
df1.loc[df1.index[i],'years'] = my_relative_delta.years
df1.loc[df1.index[i],'months'] = my_relative_delta.months
df1.loc[df1.index[i],'days'] = my_relative_delta.days
df1
###Output
_____no_output_____
###Markdown
Rename the columns to make it more readable as a final output
###Code
# Rename columns to make it more readable
df1.rename(columns={'Close': 'Low', 'Reference Price':'from High', 'Duration (years)':'after (years)'}, inplace=True)
# Drop unnecessary columns
df1.drop(columns=['Local Min','Company'],inplace=True)
# Rename the index to "Crisis"
df1.index.rename('Crisis',inplace=True)
df1
###Output
_____no_output_____
###Markdown
RESULT
###Code
# Formatting the numbers
df1.index = df1.index.strftime("%Y-%m-%d") # Replace the index by formatted strings Y-d-m
df1.style.format({'Low':'${:.2f}','from High':'${:.2f}','Change':'{:.2f}%','got back by':'{}','after (years)':'{:.2f}',
'years':'{:.0f}','months':'{:.0f}','days':'{:.0f}'},na_rep="-")\
.bar(subset=['Change'],color='#E98888',align='mid')\
.bar(subset=['after (years)'],color='#88ABE9',align='mid')\
.set_caption(f"Company: {stock_name}")
#df.to_csv('BA.csv')
###Output
_____no_output_____
###Markdown
ANNEX (Plot)
###Code
# Will allow us to embed images in the notebook
%matplotlib inline
register_matplotlib_converters()
# Create a figure containing a single axes
fig, ax = plt.subplots()
# Draw on the axes
ax.plot(df.index.values,df['Change'])
plt.show()
###Output
_____no_output_____ |
Big-Data-Clusters/CU14/public/content/monitor-bdc/tsg070-use-azdata-sql-query.ipynb | ###Markdown
TSG070 - Query SQL master pool. Steps: Parameters
###Code
query = "select * from sys.dm_cluster_endpoints"
###Output
_____no_output_____
###Markdown
Common functionsDefine helper functions used in this notebook.
###Code
# Define `run` function for transient fault handling, suggestions on error, and scrolling updates on Windows
import sys
import os
import re
import platform
import shlex
import shutil
import datetime
from subprocess import Popen, PIPE
from IPython.display import Markdown
retry_hints = {} # Output in stderr known to be transient, therefore automatically retry
error_hints = {} # Output in stderr where a known SOP/TSG exists which will be HINTed for further help
install_hint = {} # The SOP to help install the executable if it cannot be found
def run(cmd, return_output=False, no_output=False, retry_count=0, base64_decode=False, return_as_json=False, regex_mask=None):
"""Run shell command, stream stdout, print stderr and optionally return output
NOTES:
1. Commands that need this kind of ' quoting on Windows e.g.:
kubectl get nodes -o jsonpath={.items[?(@.metadata.annotations.pv-candidate=='data-pool')].metadata.name}
Need to actually pass in as '"':
kubectl get nodes -o jsonpath={.items[?(@.metadata.annotations.pv-candidate=='"'data-pool'"')].metadata.name}
The ' quote approach, although correct when pasting into Windows cmd, will hang at the line:
`iter(p.stdout.readline, b'')`
The shlex.split call does the right thing for each platform, just use the '"' pattern for a '
"""
MAX_RETRIES = 5
output = ""
retry = False
# When running `azdata sql query` on Windows, replace any \n in """ strings, with " ", otherwise we see:
#
# ('HY090', '[HY090] [Microsoft][ODBC Driver Manager] Invalid string or buffer length (0) (SQLExecDirectW)')
#
if platform.system() == "Windows" and cmd.startswith("azdata sql query"):
cmd = cmd.replace("\n", " ")
# shlex.split is required on bash and for Windows paths with spaces
#
cmd_actual = shlex.split(cmd)
# Store this (i.e. kubectl, python etc.) to support binary context aware error_hints and retries
#
user_provided_exe_name = cmd_actual[0].lower()
# When running python, use the python in the ADS sandbox ({sys.executable})
#
if cmd.startswith("python "):
cmd_actual[0] = cmd_actual[0].replace("python", sys.executable)
# On Mac, when ADS is not launched from terminal, LC_ALL may not be set, which causes pip installs to fail
# with:
#
# UnicodeDecodeError: 'ascii' codec can't decode byte 0xc5 in position 4969: ordinal not in range(128)
#
# Setting it to a default value of "en_US.UTF-8" enables pip install to complete
#
if platform.system() == "Darwin" and "LC_ALL" not in os.environ:
os.environ["LC_ALL"] = "en_US.UTF-8"
# When running `kubectl`, if AZDATA_OPENSHIFT is set, use `oc`
#
if cmd.startswith("kubectl ") and "AZDATA_OPENSHIFT" in os.environ:
cmd_actual[0] = cmd_actual[0].replace("kubectl", "oc")
# To aid supportability, determine which binary file will actually be executed on the machine
#
which_binary = None
# Special case for CURL on Windows. The version of CURL in Windows System32 does not work to
# get JWT tokens, it returns "(56) Failure when receiving data from the peer". If another instance
# of CURL exists on the machine use that one. (Unfortunately the curl.exe in System32 is almost
# always the first curl.exe in the path, and it can't be uninstalled from System32, so here we
# look for the 2nd installation of CURL in the path)
if platform.system() == "Windows" and cmd.startswith("curl "):
path = os.getenv('PATH')
for p in path.split(os.path.pathsep):
p = os.path.join(p, "curl.exe")
if os.path.exists(p) and os.access(p, os.X_OK):
if p.lower().find("system32") == -1:
cmd_actual[0] = p
which_binary = p
break
# Find the path based location (shutil.which) of the executable that will be run (and display it to aid supportability), this
# seems to be required for .msi installs of azdata.cmd/az.cmd. (otherwise Popen returns FileNotFound)
#
# NOTE: Bash needs cmd to be the list of the space separated values hence shlex.split.
#
if which_binary == None:
which_binary = shutil.which(cmd_actual[0])
# Display an install HINT, so the user can click on a SOP to install the missing binary
#
if which_binary == None:
print(f"The path used to search for '{cmd_actual[0]}' was:")
print(sys.path)
if user_provided_exe_name in install_hint and install_hint[user_provided_exe_name] is not None:
display(Markdown(f'HINT: Use [{install_hint[user_provided_exe_name][0]}]({install_hint[user_provided_exe_name][1]}) to resolve this issue.'))
raise FileNotFoundError(f"Executable '{cmd_actual[0]}' not found in path (where/which)")
else:
cmd_actual[0] = which_binary
start_time = datetime.datetime.now().replace(microsecond=0)
cmd_display = cmd
if regex_mask is not None:
regex = re.compile(regex_mask)
cmd_display = re.sub(regex, '******', cmd)
print(f"START: {cmd_display} @ {start_time} ({datetime.datetime.utcnow().replace(microsecond=0)} UTC)")
print(f" using: {which_binary} ({platform.system()} {platform.release()} on {platform.machine()})")
print(f" cwd: {os.getcwd()}")
# Command-line tools such as CURL and AZDATA HDFS commands output
# scrolling progress bars, which causes Jupyter to hang forever, to
# workaround this, use no_output=True
#
# Work around a infinite hang when a notebook generates a non-zero return code, break out, and do not wait
#
wait = True
try:
if no_output:
p = Popen(cmd_actual)
else:
p = Popen(cmd_actual, stdout=PIPE, stderr=PIPE, bufsize=1)
with p.stdout:
for line in iter(p.stdout.readline, b''):
line = line.decode()
if return_output:
output = output + line
else:
if cmd.startswith("azdata notebook run"): # Hyperlink the .ipynb file
regex = re.compile(' "(.*)"\: "(.*)"')
match = regex.match(line)
if match:
if match.group(1).find("HTML") != -1:
display(Markdown(f' - "{match.group(1)}": "{match.group(2)}"'))
else:
display(Markdown(f' - "{match.group(1)}": "[{match.group(2)}]({match.group(2)})"'))
wait = False
break # otherwise infinite hang, have not worked out why yet.
else:
print(line, end='')
if wait:
p.wait()
except FileNotFoundError as e:
if install_hint is not None:
display(Markdown(f'HINT: Use {install_hint} to resolve this issue.'))
raise FileNotFoundError(f"Executable '{cmd_actual[0]}' not found in path (where/which)") from e
exit_code_workaround = 0 # WORKAROUND: azdata hangs on exception from notebook on p.wait()
if not no_output:
for line in iter(p.stderr.readline, b''):
try:
line_decoded = line.decode()
except UnicodeDecodeError:
# NOTE: Sometimes we get characters back that cannot be decoded(), e.g.
#
# \xa0
#
# For example see this in the response from `az group create`:
#
# ERROR: Get Token request returned http error: 400 and server
# response: {"error":"invalid_grant",# "error_description":"AADSTS700082:
# The refresh token has expired due to inactivity.\xa0The token was
# issued on 2018-10-25T23:35:11.9832872Z
#
# which generates the exception:
#
# UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa0 in position 179: invalid start byte
#
print("WARNING: Unable to decode stderr line, printing raw bytes:")
print(line)
line_decoded = ""
pass
else:
# azdata emits a single empty line to stderr when doing an hdfs cp, don't
# print this empty "ERR:" as it confuses.
#
if line_decoded == "":
continue
print(f"STDERR: {line_decoded}", end='')
if line_decoded.startswith("An exception has occurred") or line_decoded.startswith("ERROR: An error occurred while executing the following cell"):
exit_code_workaround = 1
# inject HINTs to next TSG/SOP based on output in stderr
#
if user_provided_exe_name in error_hints:
for error_hint in error_hints[user_provided_exe_name]:
if line_decoded.find(error_hint[0]) != -1:
display(Markdown(f'HINT: Use [{error_hint[1]}]({error_hint[2]}) to resolve this issue.'))
# Verify if a transient error, if so automatically retry (recursive)
#
if user_provided_exe_name in retry_hints:
for retry_hint in retry_hints[user_provided_exe_name]:
if line_decoded.find(retry_hint) != -1:
if retry_count < MAX_RETRIES:
print(f"RETRY: {retry_count} (due to: {retry_hint})")
retry_count = retry_count + 1
output = run(cmd, return_output=return_output, retry_count=retry_count)
if return_output:
if base64_decode:
import base64
return base64.b64decode(output).decode('utf-8')
else:
return output
elapsed = datetime.datetime.now().replace(microsecond=0) - start_time
# WORKAROUND: We avoid infinite hang above in the `azdata notebook run` failure case, by inferring success (from stdout output), so
# don't wait here, if success known above
#
if wait:
if p.returncode != 0:
raise SystemExit(f'Shell command:\n\n\t{cmd_display} ({elapsed}s elapsed)\n\nreturned non-zero exit code: {str(p.returncode)}.\n')
else:
if exit_code_workaround !=0 :
raise SystemExit(f'Shell command:\n\n\t{cmd_display} ({elapsed}s elapsed)\n\nreturned non-zero exit code: {str(exit_code_workaround)}.\n')
print(f'\nSUCCESS: {elapsed}s elapsed.\n')
if return_output:
if base64_decode:
import base64
return base64.b64decode(output).decode('utf-8')
else:
return output
# Hints for tool retry (on transient fault), known errors and install guide
#
retry_hints = {'python': [ ], 'azdata': ['Endpoint sql-server-master does not exist', 'Endpoint livy does not exist', 'Failed to get state for cluster', 'Endpoint webhdfs does not exist', 'Adaptive Server is unavailable or does not exist', 'Error: Address already in use', 'Login timeout expired (0) (SQLDriverConnect)', 'SSPI Provider: No Kerberos credentials available', ], 'kubectl': ['A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond', ], }
error_hints = {'python': [['Library not loaded: /usr/local/opt/unixodbc', 'SOP012 - Install unixodbc for Mac', '../install/sop012-brew-install-odbc-for-sql-server.ipynb'], ['WARNING: You are using pip version', 'SOP040 - Upgrade pip in ADS Python sandbox', '../install/sop040-upgrade-pip.ipynb'], ], 'azdata': [['Please run \'azdata login\' to first authenticate', 'SOP028 - azdata login', '../common/sop028-azdata-login.ipynb'], ['The token is expired', 'SOP028 - azdata login', '../common/sop028-azdata-login.ipynb'], ['Reason: Unauthorized', 'SOP028 - azdata login', '../common/sop028-azdata-login.ipynb'], ['Max retries exceeded with url: /api/v1/bdc/endpoints', 'SOP028 - azdata login', '../common/sop028-azdata-login.ipynb'], ['Look at the controller logs for more details', 'TSG027 - Observe cluster deployment', '../diagnose/tsg027-observe-bdc-create.ipynb'], ['provided port is already allocated', 'TSG062 - Get tail of all previous container logs for pods in BDC namespace', '../log-files/tsg062-tail-bdc-previous-container-logs.ipynb'], ['Create cluster failed since the existing namespace', 'SOP061 - Delete a big data cluster', '../install/sop061-delete-bdc.ipynb'], ['Failed to complete kube config setup', 'TSG067 - Failed to complete kube config setup', '../repair/tsg067-failed-to-complete-kube-config-setup.ipynb'], ['Data source name not found and no default driver specified', 'SOP069 - Install ODBC for SQL Server', '../install/sop069-install-odbc-driver-for-sql-server.ipynb'], ['Can\'t open lib \'ODBC Driver 17 for SQL Server', 'SOP069 - Install ODBC for SQL Server', '../install/sop069-install-odbc-driver-for-sql-server.ipynb'], ['Control plane upgrade failed. Failed to upgrade controller.', 'TSG108 - View the controller upgrade config map', '../diagnose/tsg108-controller-failed-to-upgrade.ipynb'], ['NameError: name \'azdata_login_secret_name\' is not defined', 'SOP013 - Create secret for azdata login (inside cluster)', '../common/sop013-create-secret-for-azdata-login.ipynb'], ['ERROR: No credentials were supplied, or the credentials were unavailable or inaccessible.', 'TSG124 - \'No credentials were supplied\' error from azdata login', '../repair/tsg124-no-credentials-were-supplied.ipynb'], ['Please accept the license terms to use this product through', 'TSG126 - azdata fails with \'accept the license terms to use this product\'', '../repair/tsg126-accept-license-terms.ipynb'], ], 'kubectl': [['no such host', 'TSG010 - Get configuration contexts', '../monitor-k8s/tsg010-get-kubernetes-contexts.ipynb'], ['No connection could be made because the target machine actively refused it', 'TSG056 - Kubectl fails with No connection could be made because the target machine actively refused it', '../repair/tsg056-kubectl-no-connection-could-be-made.ipynb'], ], }
install_hint = {'azdata': [ 'SOP063 - Install azdata CLI (using package manager)', '../install/sop063-packman-install-azdata.ipynb' ], 'kubectl': [ 'SOP036 - Install kubectl command line interface', '../install/sop036-install-kubectl.ipynb' ], }
print('Common functions defined successfully.')
###Output
_____no_output_____
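The `error_hints` table above maps substrings of a failed command's output to a follow-up troubleshooting notebook. As a rough, hypothetical sketch (this is not the notebook's actual `run()` helper), a lookup over that structure might work like this:
```python
# Hypothetical sketch: scan a failed command's output against error_hints
# and list which troubleshooting notebook to open next.
def suggest_notebooks(tool, output, hints=error_hints):
    suggestions = []
    for substring, title, path in hints.get(tool, []):
        if substring in output:
            suggestions.append((title, path))
    return suggestions

# e.g. suggest_notebooks('azdata', "ERROR: Please run 'azdata login' to first authenticate")
# would return [('SOP028 - azdata login', '../common/sop028-azdata-login.ipynb')]
```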
###Markdown
Get the Kubernetes namespace for the big data cluster. Get the namespace of the Big Data Cluster using the kubectl command line interface.**NOTE:** If there is more than one Big Data Cluster in the target Kubernetes cluster, then either:- set \[0\] to the correct value for the big data cluster.- set the environment variable AZDATA\_NAMESPACE before starting Azure Data Studio.
###Code
# Place Kubernetes namespace name for BDC into 'namespace' variable
if "AZDATA_NAMESPACE" in os.environ:
namespace = os.environ["AZDATA_NAMESPACE"]
else:
try:
namespace = run(f'kubectl get namespace --selector=MSSQL_CLUSTER -o jsonpath={{.items[0].metadata.name}}', return_output=True)
except:
from IPython.display import Markdown
print(f"ERROR: Unable to find a Kubernetes namespace with label 'MSSQL_CLUSTER'. SQL Server Big Data Cluster Kubernetes namespaces contain the label 'MSSQL_CLUSTER'.")
display(Markdown(f'HINT: Use [TSG081 - Get namespaces (Kubernetes)](../monitor-k8s/tsg081-get-kubernetes-namespaces.ipynb) to resolve this issue.'))
display(Markdown(f'HINT: Use [TSG010 - Get configuration contexts](../monitor-k8s/tsg010-get-kubernetes-contexts.ipynb) to resolve this issue.'))
display(Markdown(f'HINT: Use [SOP011 - Set kubernetes configuration context](../common/sop011-set-kubernetes-context.ipynb) to resolve this issue.'))
raise
print(f'The SQL Server Big Data Cluster Kubernetes namespace is: {namespace}')
###Output
_____no_output_____
###Markdown
Get the controller username and password. Get the controller username and password from the Kubernetes Secret Store and place them in the required AZDATA\_USERNAME and AZDATA\_PASSWORD environment variables.
###Code
# Place controller secret in AZDATA_USERNAME/AZDATA_PASSWORD environment variables
import os, base64
os.environ["AZDATA_USERNAME"] = run(f'kubectl get secret/controller-login-secret -n {namespace} -o jsonpath={{.data.username}}', return_output=True, base64_decode=True)
os.environ["AZDATA_PASSWORD"] = run(f'kubectl get secret/controller-login-secret -n {namespace} -o jsonpath={{.data.password}}', return_output=True, base64_decode=True)
print(f"Controller username '{os.environ['AZDATA_USERNAME']}' and password stored in environment variables")
###Output
_____no_output_____
###Markdown
Use `azdata` to query the `SQL Master Pool`. Get the current `@@VERSION` and `@@SERVERNAME` (which is the `pod` name of the current primary replica in an HA configuration).
###Code
run('azdata sql query -q "select @@VERSION [version], @@SERVERNAME [primary replica pod]"')
###Output
_____no_output_____
###Markdown
Run the query
###Code
run(f'azdata sql query -q "{query}"')
print("Notebook execution is complete.")
###Output
_____no_output_____ |
MOOCS/Deeplearing_Specialization/Notebooks/Regularization-v2.ipynb | ###Markdown
RegularizationWelcome to the second assignment of this week. Deep Learning models have so much flexibility and capacity that **overfitting can be a serious problem**, if the training dataset is not big enough. Sure it does well on the training set, but the learned network **doesn't generalize to new examples** that it has never seen!**You will learn to:** Use regularization in your deep learning models.Let's first import the packages you are going to use.
###Code
# import packages
import numpy as np
import matplotlib.pyplot as plt
from reg_utils import sigmoid, relu, plot_decision_boundary, initialize_parameters, load_2D_dataset, predict_dec
from reg_utils import compute_cost, predict, forward_propagation, backward_propagation, update_parameters
import sklearn
import sklearn.datasets
import scipy.io
from testCases import *
%matplotlib inline
plt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
###Output
_____no_output_____
###Markdown
**Problem Statement**: You have just been hired as an AI expert by the French Football Corporation. They would like you to recommend positions where France's goal keeper should kick the ball so that the French team's players can then hit it with their head. **Figure 1** : **Football field** The goal keeper kicks the ball in the air, the players of each team are fighting to hit the ball with their head They give you the following 2D dataset from France's past 10 games.
###Code
train_X, train_Y, test_X, test_Y = load_2D_dataset()
###Output
_____no_output_____
###Markdown
Each dot corresponds to a position on the football field where a football player has hit the ball with his/her head after the French goal keeper has shot the ball from the left side of the football field.- If the dot is blue, it means the French player managed to hit the ball with his/her head- If the dot is red, it means the other team's player hit the ball with their head**Your goal**: Use a deep learning model to find the positions on the field where the goalkeeper should kick the ball. **Analysis of the dataset**: This dataset is a little noisy, but it looks like a diagonal line separating the upper left half (blue) from the lower right half (red) would work well. You will first try a non-regularized model. Then you'll learn how to regularize it and decide which model you will choose to solve the French Football Corporation's problem. 1 - Non-regularized modelYou will use the following neural network (already implemented for you below). This model can be used:- in *regularization mode* -- by setting the `lambd` input to a non-zero value. We use "`lambd`" instead of "`lambda`" because "`lambda`" is a reserved keyword in Python. - in *dropout mode* -- by setting the `keep_prob` to a value less than oneYou will first try the model without any regularization. Then, you will implement:- *L2 regularization* -- functions: "`compute_cost_with_regularization()`" and "`backward_propagation_with_regularization()`"- *Dropout* -- functions: "`forward_propagation_with_dropout()`" and "`backward_propagation_with_dropout()`"In each part, you will run this model with the correct inputs so that it calls the functions you've implemented. Take a look at the code below to familiarize yourself with the model.
###Code
def model(X, Y, learning_rate = 0.3, num_iterations = 30000, print_cost = True, lambd = 0, keep_prob = 1):
"""
Implements a three-layer neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SIGMOID.
Arguments:
X -- input data, of shape (input size, number of examples)
Y -- true "label" vector (1 for blue dot / 0 for red dot), of shape (output size, number of examples)
learning_rate -- learning rate of the optimization
num_iterations -- number of iterations of the optimization loop
print_cost -- If True, print the cost every 10000 iterations
lambd -- regularization hyperparameter, scalar
keep_prob - probability of keeping a neuron active during drop-out, scalar.
Returns:
parameters -- parameters learned by the model. They can then be used to predict.
"""
grads = {}
costs = [] # to keep track of the cost
m = X.shape[1] # number of examples
layers_dims = [X.shape[0], 20, 3, 1]
# Initialize parameters dictionary.
parameters = initialize_parameters(layers_dims)
# Loop (gradient descent)
for i in range(0, num_iterations):
# Forward propagation: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID.
if keep_prob == 1:
a3, cache = forward_propagation(X, parameters)
elif keep_prob < 1:
a3, cache = forward_propagation_with_dropout(X, parameters, keep_prob)
# Cost function
if lambd == 0:
cost = compute_cost(a3, Y)
else:
cost = compute_cost_with_regularization(a3, Y, parameters, lambd)
# Backward propagation.
assert(lambd==0 or keep_prob==1) # it is possible to use both L2 regularization and dropout,
# but this assignment will only explore one at a time
if lambd == 0 and keep_prob == 1:
grads = backward_propagation(X, Y, cache)
elif lambd != 0:
grads = backward_propagation_with_regularization(X, Y, cache, lambd)
elif keep_prob < 1:
grads = backward_propagation_with_dropout(X, Y, cache, keep_prob)
# Update parameters.
parameters = update_parameters(parameters, grads, learning_rate)
# Print the loss every 10000 iterations
if print_cost and i % 10000 == 0:
print("Cost after iteration {}: {}".format(i, cost))
if print_cost and i % 1000 == 0:
costs.append(cost)
# plot the cost
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('iterations (x1,000)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
return parameters
###Output
_____no_output_____
###Markdown
Let's train the model without any regularization, and observe the accuracy on the train/test sets.
###Code
parameters = model(train_X, train_Y)
print ("On the training set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
###Output
Cost after iteration 0: 0.6557412523481002
Cost after iteration 10000: 0.16329987525724216
Cost after iteration 20000: 0.13851642423255986
###Markdown
The train accuracy is 94.8% while the test accuracy is 91.5%. This is the **baseline model** (you will observe the impact of regularization on this model). Run the following code to plot the decision boundary of your model.
###Code
plt.title("Model without regularization")
axes = plt.gca()
axes.set_xlim([-0.75,0.40])
axes.set_ylim([-0.75,0.65])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
_____no_output_____
###Markdown
The non-regularized model is obviously overfitting the training set. It is fitting the noisy points! Let's now look at two techniques to reduce overfitting. 2 - L2 Regularization. The standard way to avoid overfitting is called **L2 regularization**. It consists of appropriately modifying your cost function, from:$$J = -\frac{1}{m} \sum\limits_{i = 1}^{m} \large{(}\small y^{(i)}\log\left(a^{[L](i)}\right) + (1-y^{(i)})\log\left(1- a^{[L](i)}\right) \large{)} \tag{1}$$To:$$J_{regularized} = \small \underbrace{-\frac{1}{m} \sum\limits_{i = 1}^{m} \large{(}\small y^{(i)}\log\left(a^{[L](i)}\right) + (1-y^{(i)})\log\left(1- a^{[L](i)}\right) \large{)} }_\text{cross-entropy cost} + \underbrace{\frac{1}{m} \frac{\lambda}{2} \sum\limits_l\sum\limits_k\sum\limits_j W_{k,j}^{[l]2} }_\text{L2 regularization cost} \tag{2}$$Let's modify your cost and observe the consequences.**Exercise**: Implement `compute_cost_with_regularization()` which computes the cost given by formula (2). To calculate $\sum\limits_k\sum\limits_j W_{k,j}^{[l]2}$ , use: ```python np.sum(np.square(Wl)) ``` Note that you have to do this for $W^{[1]}$, $W^{[2]}$ and $W^{[3]}$, then sum the three terms and multiply by $ \frac{1}{m} \frac{\lambda}{2} $.
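As an aside, if the network had an arbitrary number of layers $L$, the same sum of squared weights could be accumulated in a loop rather than hard-coded; a rough sketch (assuming `parameters`, `lambd` and `m` are defined as in the function below):
```python
# Hedged sketch: generalize the L2 term to L layers instead of hard-coding W1, W2, W3
L = len(parameters) // 2   # parameters holds W1..WL and b1..bL
weight_square_sum = sum(np.sum(np.square(parameters['W' + str(l)])) for l in range(1, L + 1))
L2_regularization_cost = lambd / (2 * m) * weight_square_sum
```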
###Code
# GRADED FUNCTION: compute_cost_with_regularization
def compute_cost_with_regularization(A3, Y, parameters, lambd):
"""
Implement the cost function with L2 regularization. See formula (2) above.
Arguments:
A3 -- post-activation, output of forward propagation, of shape (output size, number of examples)
Y -- "true" labels vector, of shape (output size, number of examples)
parameters -- python dictionary containing parameters of the model
Returns:
cost - value of the regularized loss function (formula (2))
"""
m = Y.shape[1]
W1 = parameters["W1"]
W2 = parameters["W2"]
W3 = parameters["W3"]
cross_entropy_cost = compute_cost(A3, Y) # This gives you the cross-entropy part of the cost
### START CODE HERE ### (approx. 1 line)
L2_regularization_cost = lambd/(2*m) * (np.sum(np.square(W1)) + np.sum(np.square(W2)) + np.sum(np.square(W3)))
### END CODE HERE ###
cost = cross_entropy_cost + L2_regularization_cost
return cost
A3, Y_assess, parameters = compute_cost_with_regularization_test_case()
print("cost = " + str(compute_cost_with_regularization(A3, Y_assess, parameters, lambd = 0.1)))
###Output
cost = 1.78648594516
###Markdown
**Expected Output**: **cost** 1.78648594516 Of course, because you changed the cost, you have to change backward propagation as well! All the gradients have to be computed with respect to this new cost. **Exercise**: Implement the changes needed in backward propagation to take into account regularization. The changes only concern dW1, dW2 and dW3. For each, you have to add the regularization term's gradient ($\frac{d}{dW} ( \frac{1}{2}\frac{\lambda}{m} W^2) = \frac{\lambda}{m} W$).
###Code
# GRADED FUNCTION: backward_propagation_with_regularization
def backward_propagation_with_regularization(X, Y, cache, lambd):
"""
Implements the backward propagation of our baseline model to which we added an L2 regularization.
Arguments:
X -- input dataset, of shape (input size, number of examples)
Y -- "true" labels vector, of shape (output size, number of examples)
cache -- cache output from forward_propagation()
lambd -- regularization hyperparameter, scalar
Returns:
gradients -- A dictionary with the gradients with respect to each parameter, activation and pre-activation variables
"""
m = X.shape[1]
(Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3) = cache
dZ3 = A3 - Y
### START CODE HERE ### (approx. 1 line)
dW3 = 1./m * np.dot(dZ3, A2.T) + lambd/m * W3
### END CODE HERE ###
db3 = 1./m * np.sum(dZ3, axis=1, keepdims = True)
dA2 = np.dot(W3.T, dZ3)
dZ2 = np.multiply(dA2, np.int64(A2 > 0))
### START CODE HERE ### (approx. 1 line)
dW2 = 1./m * np.dot(dZ2, A1.T) + lambd/m * W2
### END CODE HERE ###
db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True)
dA1 = np.dot(W2.T, dZ2)
dZ1 = np.multiply(dA1, np.int64(A1 > 0))
### START CODE HERE ### (approx. 1 line)
dW1 = 1./m * np.dot(dZ1, X.T) + lambd/m * W1
### END CODE HERE ###
db1 = 1./m * np.sum(dZ1, axis=1, keepdims = True)
gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3,"dA2": dA2,
"dZ2": dZ2, "dW2": dW2, "db2": db2, "dA1": dA1,
"dZ1": dZ1, "dW1": dW1, "db1": db1}
return gradients
X_assess, Y_assess, cache = backward_propagation_with_regularization_test_case()
grads = backward_propagation_with_regularization(X_assess, Y_assess, cache, lambd = 0.7)
print ("dW1 = "+ str(grads["dW1"]))
print ("dW2 = "+ str(grads["dW2"]))
print ("dW3 = "+ str(grads["dW3"]))
###Output
dW1 = [[-0.25604646 0.12298827 -0.28297129]
[-0.17706303 0.34536094 -0.4410571 ]]
dW2 = [[ 0.79276486 0.85133918]
[-0.0957219 -0.01720463]
[-0.13100772 -0.03750433]]
dW3 = [[-1.77691347 -0.11832879 -0.09397446]]
###Markdown
**Expected Output**: **dW1** [[-0.25604646 0.12298827 -0.28297129] [-0.17706303 0.34536094 -0.4410571 ]] **dW2** [[ 0.79276486 0.85133918] [-0.0957219 -0.01720463] [-0.13100772 -0.03750433]] **dW3** [[-1.77691347 -0.11832879 -0.09397446]] Let's now run the model with L2 regularization $(\lambda = 0.7)$. The `model()` function will call: - `compute_cost_with_regularization` instead of `compute_cost`- `backward_propagation_with_regularization` instead of `backward_propagation`
###Code
parameters = model(train_X, train_Y, lambd = 0.7)
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
###Output
Cost after iteration 0: 0.6974484493131264
Cost after iteration 10000: 0.2684918873282239
Cost after iteration 20000: 0.2680916337127301
###Markdown
Congrats, the test set accuracy increased to 93%. You have saved the French football team! You are not overfitting the training data anymore. Let's plot the decision boundary.
###Code
plt.title("Model with L2-regularization")
axes = plt.gca()
axes.set_xlim([-0.75,0.40])
axes.set_ylim([-0.75,0.65])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
_____no_output_____
###Markdown
**Observations**:- The value of $\lambda$ is a hyperparameter that you can tune using a dev set.- L2 regularization makes your decision boundary smoother. If $\lambda$ is too large, it is also possible to "oversmooth", resulting in a model with high bias.**What is L2-regularization actually doing?**: L2-regularization relies on the assumption that a model with small weights is simpler than a model with large weights. Thus, by penalizing the square values of the weights in the cost function you drive all the weights to smaller values. It becomes too costly for the cost to have large weights! This leads to a smoother model in which the output changes more slowly as the input changes. **What you should remember** -- the implications of L2-regularization on:- The cost computation: - A regularization term is added to the cost- The backpropagation function: - There are extra terms in the gradients with respect to weight matrices- Weights end up smaller ("weight decay"): - Weights are pushed to smaller values. 3 - Dropout. Finally, **dropout** is a widely used regularization technique that is specific to deep learning. **It randomly shuts down some neurons in each iteration.** Watch these two videos to see what this means!<!--To understand drop-out, consider this conversation with a friend:- Friend: "Why do you need all these neurons to train your network and classify images?". - You: "Because each neuron contains a weight and can learn specific features/details/shape of an image. The more neurons I have, the more features my model learns!"- Friend: "I see, but are you sure that your neurons are learning different features and not all the same features?"- You: "Good point... Neurons in the same layer actually don't talk to each other. It should be definitely possible that they learn the same image features/shapes/forms/details... which would be redundant. There should be a solution."!--> Figure 2 : Drop-out on the second hidden layer. At each iteration, you shut down (= set to zero) each neuron of a layer with probability $1 - keep\_prob$ or keep it with probability $keep\_prob$ (50% here). The dropped neurons don't contribute to the training in either the forward or the backward propagation of the iteration. Figure 3 : Drop-out on the first and third hidden layers. $1^{st}$ layer: we shut down on average 40% of the neurons. $3^{rd}$ layer: we shut down on average 20% of the neurons. When you shut some neurons down, you actually modify your model. The idea behind drop-out is that at each iteration, you train a different model that uses only a subset of your neurons. With dropout, your neurons thus become less sensitive to the activation of one other specific neuron, because that other neuron might be shut down at any time. 3.1 - Forward propagation with dropout. **Exercise**: Implement the forward propagation with dropout. You are using a 3 layer neural network, and will add dropout to the first and second hidden layers. We will not apply dropout to the input layer or output layer. **Instructions**: You would like to shut down some neurons in the first and second layers. To do that, you are going to carry out 4 Steps:1. In lecture, we discussed creating a variable $d^{[1]}$ with the same shape as $a^{[1]}$ using `np.random.rand()` to randomly get numbers between 0 and 1. Here, you will use a vectorized implementation, so create a random matrix $D^{[1]} = [d^{[1](1)} d^{[1](2)} ... d^{[1](m)}] $ of the same dimension as $A^{[1]}$.2.
Set each entry of $D^{[1]}$ to be 0 with probability (`1-keep_prob`) or 1 with probability (`keep_prob`), by thresholding values in $D^{[1]}$ appropriately. Hint: to set all the entries of a matrix X to 1 (if entry is less than 0.5) or 0 (if entry is more than 0.5) you would do: `X = (X < 0.5)`. Note that 0 and 1 are respectively equivalent to False and True.3. Set $A^{[1]}$ to $A^{[1]} * D^{[1]}$. (You are shutting down some neurons). You can think of $D^{[1]}$ as a mask, so that when it is multiplied with another matrix, it shuts down some of the values.4. Divide $A^{[1]}$ by `keep_prob`. By doing this you are assuring that the result of the cost will still have the same expected value as without drop-out. (This technique is also called inverted dropout.)
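As a quick sanity check on Step 4: treating $A^{[1]}$ as fixed and each entry of $D^{[1]}$ as a Bernoulli variable with parameter $keep\_prob$, dividing by `keep_prob` keeps the expected activations unchanged,$$\mathbb{E}\left[\frac{D^{[1]} \ast A^{[1]}}{keep\_prob}\right] = \frac{keep\_prob \cdot A^{[1]}}{keep\_prob} = A^{[1]},$$which is why this scaling is called inverted dropout.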
###Code
# GRADED FUNCTION: forward_propagation_with_dropout
def forward_propagation_with_dropout(X, parameters, keep_prob = 0.5):
"""
Implements the forward propagation: LINEAR -> RELU + DROPOUT -> LINEAR -> RELU + DROPOUT -> LINEAR -> SIGMOID.
Arguments:
X -- input dataset, of shape (2, number of examples)
parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3":
W1 -- weight matrix of shape (20, 2)
b1 -- bias vector of shape (20, 1)
W2 -- weight matrix of shape (3, 20)
b2 -- bias vector of shape (3, 1)
W3 -- weight matrix of shape (1, 3)
b3 -- bias vector of shape (1, 1)
keep_prob - probability of keeping a neuron active during drop-out, scalar
Returns:
A3 -- last activation value, output of the forward propagation, of shape (1,1)
cache -- tuple, information stored for computing the backward propagation
"""
np.random.seed(1)
# retrieve parameters
W1 = parameters["W1"]
b1 = parameters["b1"]
W2 = parameters["W2"]
b2 = parameters["b2"]
W3 = parameters["W3"]
b3 = parameters["b3"]
# LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID
Z1 = np.dot(W1, X) + b1
A1 = relu(Z1)
### START CODE HERE ### (approx. 4 lines) # Steps 1-4 below correspond to the Steps 1-4 described above.
D1 = np.random.rand(A1.shape[0], A1.shape[1]) # Step 1: initialize matrix D1 = np.random.rand(..., ...)
D1 = 1 * (D1 < keep_prob) # Step 2: convert entries of D1 to 0 or 1 (using keep_prob as the threshold)
A1 = D1 * A1 # Step 3: shut down some neurons of A1
A1 = A1/keep_prob # Step 4: scale the value of neurons that haven't been shut down
### END CODE HERE ###
Z2 = np.dot(W2, A1) + b2
A2 = relu(Z2)
### START CODE HERE ### (approx. 4 lines)
D2 = np.random.rand(A2.shape[0], A2.shape[1]) # Step 1: initialize matrix D2 = np.random.rand(..., ...)
D2 = 1 * (D2 < keep_prob) # Step 2: convert entries of D2 to 0 or 1 (using keep_prob as the threshold)
A2 = D2 * A2 # Step 3: shut down some neurons of A2
A2 = A2/keep_prob # Step 4: scale the value of neurons that haven't been shut down
### END CODE HERE ###
Z3 = np.dot(W3, A2) + b3
A3 = sigmoid(Z3)
cache = (Z1, D1, A1, W1, b1, Z2, D2, A2, W2, b2, Z3, A3, W3, b3)
return A3, cache
X_assess, parameters = forward_propagation_with_dropout_test_case()
A3, cache = forward_propagation_with_dropout(X_assess, parameters, keep_prob = 0.7)
print ("A3 = " + str(A3))
###Output
A3 = [[ 0.36974721 0.00305176 0.04565099 0.49683389 0.36974721]]
###Markdown
**Expected Output**: **A3** [[ 0.36974721 0.00305176 0.04565099 0.49683389 0.36974721]] 3.2 - Backward propagation with dropout**Exercise**: Implement the backward propagation with dropout. As before, you are training a 3 layer network. Add dropout to the first and second hidden layers, using the masks $D^{[1]}$ and $D^{[2]}$ stored in the cache. **Instruction**:Backpropagation with dropout is actually quite easy. You will have to carry out 2 Steps:1. You had previously shut down some neurons during forward propagation, by applying a mask $D^{[1]}$ to `A1`. In backpropagation, you will have to shut down the same neurons, by reapplying the same mask $D^{[1]}$ to `dA1`. 2. During forward propagation, you had divided `A1` by `keep_prob`. In backpropagation, you'll therefore have to divide `dA1` by `keep_prob` again (the calculus interpretation is that if $A^{[1]}$ is scaled by `keep_prob`, then its derivative $dA^{[1]}$ is also scaled by the same `keep_prob`).
###Code
# GRADED FUNCTION: backward_propagation_with_dropout
def backward_propagation_with_dropout(X, Y, cache, keep_prob):
"""
Implements the backward propagation of our baseline model to which we added dropout.
Arguments:
X -- input dataset, of shape (2, number of examples)
Y -- "true" labels vector, of shape (output size, number of examples)
cache -- cache output from forward_propagation_with_dropout()
keep_prob - probability of keeping a neuron active during drop-out, scalar
Returns:
gradients -- A dictionary with the gradients with respect to each parameter, activation and pre-activation variables
"""
m = X.shape[1]
(Z1, D1, A1, W1, b1, Z2, D2, A2, W2, b2, Z3, A3, W3, b3) = cache
dZ3 = A3 - Y
dW3 = 1./m * np.dot(dZ3, A2.T)
db3 = 1./m * np.sum(dZ3, axis=1, keepdims = True)
dA2 = np.dot(W3.T, dZ3)
### START CODE HERE ### (≈ 2 lines of code)
dA2 = dA2 * D2 # Step 1: Apply mask D2 to shut down the same neurons as during the forward propagation
dA2 = dA2/keep_prob # Step 2: Scale the value of neurons that haven't been shut down
### END CODE HERE ###
dZ2 = np.multiply(dA2, np.int64(A2 > 0))
dW2 = 1./m * np.dot(dZ2, A1.T)
db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True)
dA1 = np.dot(W2.T, dZ2)
### START CODE HERE ### (≈ 2 lines of code)
dA1 = dA1 * D1 # Step 1: Apply mask D1 to shut down the same neurons as during the forward propagation
dA1 = dA1/keep_prob # Step 2: Scale the value of neurons that haven't been shut down
### END CODE HERE ###
dZ1 = np.multiply(dA1, np.int64(A1 > 0))
dW1 = 1./m * np.dot(dZ1, X.T)
db1 = 1./m * np.sum(dZ1, axis=1, keepdims = True)
gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3,"dA2": dA2,
"dZ2": dZ2, "dW2": dW2, "db2": db2, "dA1": dA1,
"dZ1": dZ1, "dW1": dW1, "db1": db1}
return gradients
X_assess, Y_assess, cache = backward_propagation_with_dropout_test_case()
gradients = backward_propagation_with_dropout(X_assess, Y_assess, cache, keep_prob = 0.8)
print ("dA1 = " + str(gradients["dA1"]))
print ("dA2 = " + str(gradients["dA2"]))
###Output
dA1 = [[ 0.36544439 0. -0.00188233 0. -0.17408748]
[ 0.65515713 0. -0.00337459 0. -0. ]]
dA2 = [[ 0.58180856 0. -0.00299679 0. -0.27715731]
[ 0. 0.53159854 -0. 0.53159854 -0.34089673]
[ 0. 0. -0.00292733 0. -0. ]]
###Markdown
**Expected Output**: **dA1** [[ 0.36544439 0. -0.00188233 0. -0.17408748] [ 0.65515713 0. -0.00337459 0. -0. ]] **dA2** [[ 0.58180856 0. -0.00299679 0. -0.27715731] [ 0. 0.53159854 -0. 0.53159854 -0.34089673] [ 0. 0. -0.00292733 0. -0. ]] Let's now run the model with dropout (`keep_prob = 0.86`). It means at every iteration you shut down each neuron of layers 1 and 2 with 14% probability. The function `model()` will now call:- `forward_propagation_with_dropout` instead of `forward_propagation`.- `backward_propagation_with_dropout` instead of `backward_propagation`.
###Code
parameters = model(train_X, train_Y, keep_prob = 0.86, learning_rate = 0.3)
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
###Output
Cost after iteration 0: 0.6543912405149825
###Markdown
Dropout works great! The test accuracy has increased again (to 95%)! Your model is not overfitting the training set and does a great job on the test set. The French football team will be forever grateful to you! Run the code below to plot the decision boundary.
###Code
plt.title("Model with dropout")
axes = plt.gca()
axes.set_xlim([-0.75,0.40])
axes.set_ylim([-0.75,0.65])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
_____no_output_____ |
notebooks/filter_inspect.ipynb | ###Markdown
Examine Waterfalls and FR-plots for several redundant groups.
###Code
# load metadata from the first cross-talk filtered file
hd = HERAData(xtalk_filtered_sums[0])
antpairs_data = hd.get_antpairs()
# group baselines into redundant sets based on antenna positions
reds = redcal.get_pos_reds(hd.antpos)
#reds = redcal.filter_reds(reds, antpos=hd.antpos)
# keep only baselines (or their conjugates) that are present in the data,
# drop empty groups, and sort the groups from largest to smallest
reds = [[bl for bl in grp if bl in antpairs_data or bl[::-1] in antpairs_data] for grp in reds]
reds = [grp for grp in reds if len(grp)>0]
reds = sorted(reds, key=len, reverse=True)
frf_xtalk = frf.FRFilter(xtalk_filtered_sums)
frf_xtalk.read(axis='blt')
# generate redundantly averaged data
hd_xtalkr = utils.red_average(frf_xtalk.hd, inplace=False, reds=reds, red_bl_keys=[grp[0] for grp in reds])
frf_xtalkr = frf.FRFilter(hd_xtalkr)
for spw_num, spw in enumerate(spws):
frf_xtalkr.fft_data(window='bh', ax='both', assign=f'dfft2_spw_{spw_num}',
verbose=False, overwrite=True, edgecut_low=(0, spw[0]), edgecut_hi=(0, frf_xtalkr.Nfreqs-spw[1]))
frf_xtalkr.fft_data(window='bh', ax='freq', assign=f'dfft_spw_{spw_num}',
verbose=False, overwrite=True, edgecut_low=spw[0], edgecut_hi=frf_xtalkr.Nfreqs-spw[1])
if len(time_inpainted_sums) > 0:
frf_inpaint = frf.FRFilter(time_inpainted_sums)
frf_inpaint.read(axis='blt')
# generate redundantly averaged data
hd_inpaintr = utils.red_average(frf_inpaint.hd, inplace=False, reds=reds, red_bl_keys=[grp[0] for grp in reds])
frf_inpaintr = frf.FRFilter(hd_inpaintr)
for spw_num, spw in enumerate(spws):
frf_inpaintr.fft_data(window='bh', ax='both', assign=f'dfft2_spw_{spw_num}',
verbose=False, overwrite=True, edgecut_low=(0, spw[0]), edgecut_hi=(0, frf_inpaintr.Nfreqs-spw[1]))
frf_inpaintr.fft_data(window='bh', ax='freq', assign=f'dfft_spw_{spw_num}',
verbose=False, overwrite=True, edgecut_low=spw[0], edgecut_hi=frf_inpaintr.Nfreqs-spw[1])
def delay_plots(frft, frft_red, spw_num):
spw = spws[spw_num]
frft.fft_data(window='bh', ax='both', assign=f'dfft2_{spw_num}', keys=[reds[0][0] + ('nn',)], overwrite=True,
edgecut_low=(0, spw[0]), edgecut_hi=(0, frf_xtalkr.Nfreqs-spw[1]))
df = np.mean(np.diff(frft.freqs))
dt = np.mean(np.diff(frft.times * 3600 * 24))
cmax_frate = 10 ** np.round(np.log10(np.abs(getattr(frft_red, f'dfft2_spw_{spw_num}')[reds[0][0] + ('nn',)] * dt * df).max()))
cmin_frate = cmax_frate / 1e5
cmax_delay = 10 ** np.round(np.log10(np.abs(getattr(frft_red, f'dfft_spw_{spw_num}')[reds[0][0] + ('nn',)] * df).max()))
cmin_delay = cmax_delay / 1e5
for gn, grp in enumerate(reds[::nskip][:nreds]):
ext_frate = [frft.delays.min(), frft.delays.max(), frft.frates.max(), frft.frates.min()]
ext_tdelay = [frft.delays.min(), frft.delays.max(),
frft.times.max(), frft.times.min()]
lst_func = interp1d(frft.times, frft.lsts * 12 / np.pi)
fig, axarr = plt.subplots(2, 2 * min(len(grp) + 1, max_bls_per_redgrp + 1))
nbls = (len(axarr[0]) - 1) // 2
fig.set_size_inches(32, 8)
cbax1 = fig.add_axes([0.105, 0.35, 0.005, 0.3])
cbax2 = fig.add_axes([0.915, 0.35, 0.005, 0.3])
if grp[0] in frft.bllens:
hrzn_dly = frft.bllens[grp[0]] * 1e9
blvec = frft.blvecs[grp[0]]
else:
hrzn_dly = frft.bllens[grp[0][::-1]] * 1e9
blvec = -frft.blvecs[grp[0][::-1]]
# get vmin and vmax from grp[0][0] min / max rounded up / down
# generate fringe-rate plots.
for pn, pol in enumerate(['ee', 'nn']):
for blnum in range(nbls + 1):
plt.sca(axarr[pn][blnum])
if blnum < nbls:
bl = grp[blnum]
blk = bl + (pol,)
frft.fft_data(window='bh', ax='both', assign=f'dfft2_spw_{spw_num}', keys=[blk], overwrite=True,
edgecut_low=[0, spw[0]], edgecut_hi=[0, frf_xtalkr.Nfreqs-spw[1]])
cm = plt.imshow(np.abs(getattr(frft, f'dfft2_spw_{spw_num}')[blk] * df * dt), norm=LogNorm(cmin_frate, cmax_frate), extent=ext_frate, aspect='auto', interpolation='nearest', cmap='inferno')
plt.title(f'{blk} \n{frft.freqs[spw[0]] / 1e6:.1f} - {frft.freqs[spw[1] - 1] / 1e6:.1f} ')
else:
blk = grp[0] + (pol,)
d = getattr(frft_red, f'dfft2_spw_{spw_num}')[blk] * df * dt
conj = blk not in list(frft_red.data.keys())
if conj:
d = np.conj(d[::-1, ::-1])
cm = plt.imshow(np.abs(d), norm=LogNorm(cmin_frate, cmax_frate), extent=ext_frate, aspect='auto', interpolation='nearest', cmap='inferno')
plt.title(f'{blvec[0]:.1f} m, {blvec[1]:.1f} m, {pol}\n{frft.freqs[spw[0]] / 1e6:.1f} - {frft.freqs[spw[1]-1] / 1e6:.1f} ')
plt.xlim(-1000, 1000)
plt.ylim(-1.5, 1.5)
plt.axvline(hrzn_dly, ls='--', color='w', lw=1)
plt.axvline(-hrzn_dly, ls='--', color='w', lw=1)
if pn == 0:
cbar = fig.colorbar(cm, orientation='vertical', cax=cbax1)
cbax1.yaxis.set_ticks_position('left')
plt.gca().set_xticklabels(['' for tick in plt.gca().get_xticklabels()])
cbar.ax.set_ylabel('Abs($\\widetilde{V}_{\\tau, f_r}$) [Jy]', rotation=90)
else:
plt.gca().set_xlabel('$\\tau$ [ns]')
if blnum > 0:
plt.gca().set_yticklabels(['' for tick in plt.gca().get_yticklabels()])
else:
plt.gca().set_ylabel('$f_r$ [mHz]')
# generate delay-waterfall plots.
for pn, pol in enumerate(['ee', 'nn']):
for blnum in range(nbls + 1):
plt.sca(axarr[pn][blnum + nbls + 1])
if blnum < nbls:
bl = grp[blnum]
blk = bl + (pol,)
frft.fft_data(window='bh', ax='freq', assign=f'dfft_spw_{spw_num}', keys=[blk], overwrite=True,
edgecut_low=spw[0], edgecut_hi=frf_xtalkr.Nfreqs-spw[1])
cm = plt.imshow(np.abs(getattr(frft, f'dfft_spw_{spw_num}')[blk] * df), norm=LogNorm(cmin_delay, cmax_delay), extent=ext_tdelay, aspect='auto', interpolation='nearest', cmap='inferno')
plt.title(f'{blk}')
else:
blk = grp[0] + (pol,)
d = getattr(frft_red, f'dfft_spw_{spw_num}')[blk] * df
conj = blk not in list(frft_red.data.keys())
if conj:
d = np.conj(d[:, ::-1])
cm = plt.imshow(np.abs(d), norm=LogNorm(cmin_delay, cmax_delay), extent=ext_tdelay, aspect='auto', interpolation='nearest', cmap='inferno')
plt.title(f'{blvec[0]:.1f} m, {blvec[1]:.1f} m, {pol}')
plt.xlim(-1000, 1000)
plt.axvline(hrzn_dly, ls='--', color='w', lw=1)
plt.axvline(-hrzn_dly, ls='--', color='w', lw=1)
plt.gca().set_yticks([t for t in plt.gca().get_yticks() if t >= ext_tdelay[-1] and t <= ext_tdelay[-2]])
if pn == 0:
plt.gca().set_xticklabels(['' for tick in plt.gca().get_xticklabels()])
else:
plt.gca().set_xlabel('$\\tau$ [ns]')
if blnum < nbls:
plt.gca().set_yticklabels(['' for tick in plt.gca().get_yticklabels()])
else:
plt.gca().set_ylabel('LST [Hrs]')
plt.gca().set_yticklabels([f'{lst_func(t):.1f}' for t in plt.gca().get_yticks()])
cbar = fig.colorbar(cm, orientation='vertical', cax=cbax2)
cbar.ax.set_ylabel('Abs($\\widetilde{V}$) [Jy Hz]', rotation=90)
plt.gca().yaxis.tick_right()
plt.gca().yaxis.set_label_position("right")
plt.show()
def freq_plots(frft, frft_red, spw_num):
cmax_freq = 10 ** np.round(np.log10(np.abs(frft_red.data[reds[0][0] + ('nn',)]).max()))
cmin_freq = cmax_freq / 1e5
spw_inds = np.arange(spws[spw_num][0], spws[spw_num][1]).astype(int)
for gn, grp in enumerate(reds[::nskip][:nreds]):
ext_freq = [frft.freqs[spw_inds].min() / 1e6, frft.freqs[spw_inds].max() / 1e6,
frft.times.max(), frft.times.min()]
lst_func = interp1d(frft.times, frft.lsts * 12 / np.pi)
fig, axarr = plt.subplots(2, 2 * min(len(grp) + 1, max_bls_per_redgrp + 1))
cbax1 = fig.add_axes([0.105, 0.35, 0.005, 0.3])
cbax2 = fig.add_axes([0.915, 0.35, 0.005, 0.3])
nbls = (len(axarr[0]) - 1) // 2
fig.set_size_inches(32, 8)
if grp[0] in frft.bllens:
hrzn_dly = frft.bllens[grp[0]] * 1e9
blvec = frft.blvecs[grp[0]]
else:
hrzn_dly = frft.bllens[grp[0][::-1]] * 1e9
blvec = -frft.blvecs[grp[0][::-1]]
        # generate amplitude (abs) frequency-time waterfall plots.
for pn, pol in enumerate(['ee', 'nn']):
for blnum in range(nbls + 1):
plt.sca(axarr[pn][blnum])
if blnum < nbls:
bl = grp[blnum]
blk = bl + (pol,)
cm = plt.imshow(np.abs(frft.data[blk][:, spw_inds]) / ~frft.flags[blk][:, spw_inds], norm=LogNorm(cmin_freq, cmax_freq), extent=ext_freq, aspect='auto', interpolation='nearest', cmap='inferno')
plt.title(f'{blk}')
else:
blk = grp[0] + (pol,)
d = frft_red.data[blk][:, spw_inds]
conj = blk not in list(frft_red.data.keys())
if conj:
d = np.conj(d)
cm = plt.imshow(np.abs(d), norm=LogNorm(cmin_freq, cmax_freq), extent=ext_freq, aspect='auto', interpolation='nearest', cmap='inferno')
plt.title(f'{blvec[0]:.1f} m, {blvec[1]:.1f} m, {pol}')
plt.gca().set_yticks([t for t in plt.gca().get_yticks() if t >= ext_freq[-1] and t <= ext_freq[-2]])
if pn == 0:
plt.gca().set_xticklabels(['' for tick in plt.gca().get_xticklabels()])
cbar = fig.colorbar(cm, orientation='vertical', cax=cbax1)
cbax1.yaxis.set_ticks_position('left')
cbar.ax.set_ylabel('Abs(V) [Jy]', rotation=90)
else:
plt.gca().set_xlabel('$\\nu$ [MHz]')
if blnum > 0:
plt.gca().set_yticklabels(['' for tick in plt.gca().get_yticklabels()])
else:
plt.gca().set_ylabel('LST [Hrs]')
plt.gca().set_yticklabels([f'{lst_func(t):.1f}' for t in plt.gca().get_yticks()])
        # generate phase (arg) frequency-time waterfall plots.
for pn, pol in enumerate(['ee', 'nn']):
for blnum in range(nbls + 1):
plt.sca(axarr[pn][blnum + nbls + 1])
if blnum < nbls:
bl = grp[blnum]
blk = bl + (pol,)
cm = plt.imshow(np.angle(frft.data[blk][:, spw_inds]) / ~frft.flags[blk][:, spw_inds], vmin=-np.pi, vmax=np.pi, extent=ext_freq, aspect='auto', interpolation='nearest', cmap='twilight')
plt.title(f'{blk}')
else:
blk = grp[0] + (pol,)
d = frft_red.data[blk][:, spw_inds]
conj = blk not in list(frft_red.data.keys())
if conj:
d = np.conj(d)
cm = plt.imshow(np.angle(d) / ~frft.flags[blk][:, spw_inds], vmin=-np.pi, vmax=np.pi, extent=ext_freq, aspect='auto', interpolation='nearest', cmap='twilight')
plt.title(f'{blvec[0]:.1f} m, {blvec[1]:.1f} m, {pol}')
plt.gca().set_yticks([t for t in plt.gca().get_yticks() if t >= ext_freq[-1] and t <= ext_freq[-2]])
if pn == 0:
plt.gca().set_xticklabels(['' for tick in plt.gca().get_xticklabels()])
else:
plt.gca().set_xlabel('$\\nu$ [MHz]')
if blnum < nbls:
plt.gca().set_yticklabels(['' for tick in plt.gca().get_yticklabels()])
else:
plt.gca().set_ylabel('LST [Hrs]')
plt.gca().set_yticklabels([f'{lst_func(t):.1f}' for t in plt.gca().get_yticks()])
cbar = fig.colorbar(cm, orientation='vertical', cax=cbax2)
cbar.ax.set_ylabel('Arg(V) [rad]', rotation=270)
plt.gca().yaxis.tick_right()
plt.gca().yaxis.set_label_position("right")
plt.show()
if len(time_inpainted_sums) > 0:
for spw_num in range(len(spws)):
freq_plots(frf_inpaint, frf_inpaintr, spw_num)
if len(time_inpainted_sums) > 0:
for spw_num in range(len(spws)):
delay_plots(frf_inpaint, frf_inpaintr, spw_num)
for spw_num in range(len(spws)):
freq_plots(frf_xtalk, frf_xtalkr, spw_num)
for spw_num in range(len(spws)):
delay_plots(frf_xtalk, frf_xtalkr, spw_num)
###Output
_____no_output_____ |
sandbox/distribution_maps/all_states_distmaps.ipynb | ###Markdown
Example: Rio Grande do Sul
###Code
RS = municipios.query("estado=='RS'")
RS.loc[:, 'municipio'] = decodar(RS.loc[:, 'municipio'])
mapa = geobr.read_municipality(code_muni='RS', year=2018)
mapa.rename(columns={'name_muni': 'municipio'}, inplace=True)
mapa.loc[:, 'municipio'] = decodar(mapa.loc[:, 'municipio'])
full = pd.merge(mapa, RS, on='municipio', how='left')
full["casosAcumulado"] = full["casosAcumulado"].fillna(0)
natural_order = ["0",
"1-10",
"11-50",
"51-500",
">500",
">1000",
">10000"]
full["casos_categorizados"] = pd.cut(full["casosAcumulado"],
bins=[-1, 1, 10,
50, 500, 1000,
10000, 100000000],
labels=natural_order)
###Output
_____no_output_____
###Markdown
Instructions on how to add a save function to altair:* https://github.com/altair-viz/altair_saver - nodejs: had to npm install vega too* https://github.com/altair-viz/altair_saver/issues/13
###Code
alt.Chart(full).mark_geoshape(
stroke="lightgray",
strokeOpacity=0.2
).encode(
alt.Color('casos_categorizados:O',
sort=natural_order,
scale=alt.Scale(scheme='lightorange'))
).properties(
width=450,
height=300
).configure_legend(
title=None
).configure_view(
strokeWidth=0
).save(f"RS_{recent}.svg", method='node')
###Output
_____no_output_____
###Markdown
Now running for all states. This takes some time (though not as much as I had imagined). I'm saving maps as SVG, but PNGs are also an option, [some discussion about that here](https://en.wikipedia.org/wiki/Wikipedia:Blank_maps)
###Code
def get_estado(municipios, est):
estado = municipios.query("estado==@est")
estado.loc[:, 'municipio'] = decodar(estado.loc[:, 'municipio'])
return(estado)
def get_mapa(est):
    mapa = geobr.read_municipality(code_muni=f"{est}", year=2018)  # download the municipality geometries for this state
mapa.rename(columns={'name_muni': 'municipio'}, inplace=True)
mapa.loc[:, 'municipio'] = decodar(mapa.loc[:, 'municipio'])
return(mapa)
def get_tabela_completa(mapa, estado, natural_order):
full = pd.merge(mapa, estado, on='municipio', how='left')
full["casosAcumulado"] = full["casosAcumulado"].fillna(0)
full["casos_categorizados"] = pd.cut(full["casosAcumulado"],
bins=[-1, 1, 10,
50, 500, 1000,
10000, 100000000],
labels=natural_order)
return(full)
def get_altair_map(full, natural_order):
mapa_para_retornar = alt.Chart(full).mark_geoshape(
stroke="lightgray",
strokeOpacity=0.2
).encode(
alt.Color('casos_categorizados:O',
sort=natural_order,
scale=alt.Scale(scheme='lightorange'))
).properties(
width=450,
height=300
).configure_legend(
title=None
).configure_view(
strokeWidth=0
)
return(mapa_para_retornar)
import datetime
from datetime import date, timedelta
def craft_description(place, data_dos_dados):
source_url = "https://covid.saude.gov.br/"
today = date.today()
today_in_commons_format = today.strftime("%Y-%m-%d")
result = '''=={{int:filedesc}}==
{{Information
|description={{pt|1=Casos de COVID-19 por município no estado de ''' + place + " até o dia " + data_dos_dados + ''' de 2020.
Gráfico em Python gerado a partir dos dados de [''' + source_url + '''].
Script disponível em github.com/lubianat/wikidata_covid19/tree/master/sandbox/distribution_maps}}.
|date=''' + today_in_commons_format + '''
|source={{own}}
|author=[[User:TiagoLubiana|TiagoLubiana]] and [[User:Jvcavv]]
|permission=
|other versions=
}}
=={{int:license-header}}==
{{self|cc-by-sa-4.0}}
[[Category:COVID-19_pandemic_in_Brazil_by_state]]
'''
return(result)
for est in estados:
estado = get_estado(municipios, est)
mapa = get_mapa(est)
natural_order = ["0",
"1-10",
"11-50",
"51-500",
">500",
">1000",
">10000"]
full = get_tabela_completa(mapa, estado, natural_order)
mapa_plotado_via_altair = get_altair_map(full, natural_order)
nome_do_arquivo_da_figura = f"./fig/{est}_{recent}.svg"
mapa_plotado_via_altair.save(nome_do_arquivo_da_figura, method='node')
print(mapa_plotado_via_altair)
    break  # stops after the first state (testing); remove this break to actually process every state
data_dos_dados="16 de maio de 2020"
description = craft_description(place=est, data_dos_dados=data_dos_dados)
data_dos_dados_sem_espaco = "_".join(data_dos_dados.split(" "))
nome_do_arquivo_descrevendo_a_figura = "fig/description_cases_" + est + "_" + data_dos_dados_sem_espaco + ".txt"
    file2 = open(nome_do_arquivo_descrevendo_a_figura, "w")
file2.write(description)
file2.close()
command_cases = "yes N | python3 upload.py -keep -filename " + nome_do_arquivo_da_figura + ' -summary:"updating status for today"' + " $(cat " + nome_do_arquivo_descrevendo_a_figura + ")"
print(command_cases)
! $command_cases
###Output
yes N | python3 upload.py -keep -filename ./fig/MS_2020-05-16.svg -summary:"updating status for today" $(cat fig/description_cases_MS_16_de_maio_de_2020.txt)
/home/lubianat/.local/lib/python3.6/site-packages/pywikibot/config2.py:1091: UserWarning:
Configuration variable "use_api_login" is defined in your user-
config.py but unknown. It can be a misspelled one or a variable that
is no longer supported.
'supported.'.format(name)), UserWarning)
['./fig/MS_2020-05-16.svg']
WARNING: No user is logged in on site commons:commons
Password for user TiagoLubiana on commons:commons (no characters will be shown): |
Analyze_ab_test_results_notebook_3.ipynb | ###Markdown
Analyze A/B Test Results. This project will assure you have mastered the subjects covered in the statistics lessons. The hope is to have this project be as comprehensive of these topics as possible. Good luck! Table of Contents- [Introduction](intro)- [Part I - Probability](probability)- [Part II - A/B Test](ab_test)- [Part III - Regression](regression) Introduction. A/B tests are very commonly performed by data analysts and data scientists. It is important that you get some practice working with the difficulties of these tests. For this project, you will be working to understand the results of an A/B test run by an e-commerce website. Your goal is to work through this notebook to help the company understand if they should implement the new page, keep the old page, or perhaps run the experiment longer to make their decision.**As you work through this notebook, follow along in the classroom and answer the corresponding quiz questions associated with each question.** The labels for each classroom concept are provided for each question. This will assure you are on the right track as you work through the project, and you can feel more confident in your final submission meeting the criteria. As a final check, assure you meet all the criteria on the [RUBRIC](https://review.udacity.com/!/projects/37e27304-ad47-4eb0-a1ab-8c12f60e43d0/rubric). Part I - Probability. To get started, let's import our libraries.
###Code
import pandas as pd
import numpy as np
import random
import matplotlib.pyplot as plt
%matplotlib inline
#We are setting the seed to assure you get the same answers on quizzes as we set up
random.seed(42)
###Output
_____no_output_____
###Markdown
`1.` Now, read in the `ab_data.csv` data. Store it in `df`. **Use your dataframe to answer the questions in Quiz 1 of the classroom.**a. Read in the dataset and take a look at the top few rows here:
###Code
#Read the data and store it in dataframe(df)
df = pd.read_csv('ab_data.csv')
df.head()
###Output
_____no_output_____
###Markdown
b. Use the below cell to find the number of rows in the dataset.
###Code
#To count the no. of rows and columns
df.shape
###Output
_____no_output_____
###Markdown
c. The number of unique users in the dataset.
###Code
#To find the number of unique user by using user_id
df['user_id'].nunique()
###Output
_____no_output_____
###Markdown
d. The proportion of users converted.
###Code
#Find proportion by getting mean
df['converted'].mean()
###Output
_____no_output_____
###Markdown
e. The number of times the `new_page` and `treatment` don't line up.
###Code
#To find the no. of mismatch values in defined columns
pd.crosstab(df.group,df['landing_page'],margins=True)
###Output
_____no_output_____
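As a quick cross-check of the crosstab above, the same mismatch count can be computed directly (this mirrors the boolean test used later on `df2`):
```python
# rows where group and landing_page disagree
df[(df['group'] == 'treatment') != (df['landing_page'] == 'new_page')].shape[0]
```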
###Markdown
From the crosstab above, 1928 of the mismatched rows are on the new page and 1965 are on the old page, so the number of times **new_page** and **treatment** don't line up is 1928 + 1965 = 3893. f. Do any of the rows have missing values?
###Code
#To get complete view on dataset
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 294478 entries, 0 to 294477
Data columns (total 5 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 user_id 294478 non-null int64
1 timestamp 294478 non-null object
2 group 294478 non-null object
3 landing_page 294478 non-null object
4 converted 294478 non-null int64
dtypes: int64(2), object(3)
memory usage: 11.2+ MB
###Markdown
`2.` For the rows where **treatment** is not aligned with **new_page** or **control** is not aligned with **old_page**, we cannot be sure if this row truly received the new or old page. Use **Quiz 2** in the classroom to provide how we should handle these rows. a. Now use the answer to the quiz to create a new dataset that meets the specifications from the quiz. Store your new dataframe in **df2**.
###Code
#Aligned mismatch values and store in dataframe2
df2 = df[((df['group']=='treatment') & (df['landing_page']=='new_page'))| ((df['group']=='control') & (df['landing_page']=='old_page'))]
# Double Check all of the correct rows were removed - this should be 0
df2[((df2['group'] == 'treatment') == (df2['landing_page'] == 'new_page')) == False].shape[0]
###Output
_____no_output_____
###Markdown
`3.` Use **df2** and the cells below to answer questions for **Quiz3** in the classroom. a. How many unique **user_id**s are in **df2**?
###Code
#Find unique number of users in df2
df2['user_id'].nunique()
###Output
_____no_output_____
###Markdown
b. There is one **user_id** repeated in **df2**. What is it?
###Code
#To find the duplicate value in user id column
df2[df2['user_id'].duplicated()]['user_id']
###Output
_____no_output_____
###Markdown
c. What is the row information for the repeat **user_id**?
###Code
#To represent duplicate row
df2[df2['user_id'].duplicated()]
#Check the no of rows and column before dropping
df2.shape
###Output
_____no_output_____
###Markdown
d. Remove **one** of the rows with a duplicate **user_id**, but keep your dataframe as **df2**.
###Code
#Dropped duplicate value(Cleaning of data)
df2 = df2.drop(df2[df2.user_id.duplicated()].index, axis=0)
#To confirm Duplicate dropped or not
df2.shape
###Output
_____no_output_____
###Markdown
`4.` Use **df2** in the below cells to answer the quiz questions related to **Quiz 4** in the classroom.a. What is the probability of an individual converting regardless of the page they receive?
###Code
#Using mean find probability of converted
df2['converted'].mean()
###Output
_____no_output_____
###Markdown
b. Given that an individual was in the `control` group, what is the probability they converted?
###Code
#To find probability of only control group which is converted
df2.query('group == "control"').converted.mean()
###Output
_____no_output_____
###Markdown
c. Given that an individual was in the `treatment` group, what is the probability they converted?
###Code
#To find probability of only treatment group which is converted
df2.query('group == "treatment"').converted.mean()
###Output
_____no_output_____
###Markdown
d. What is the probability that an individual received the new page?
###Code
#To find number of items in an object
len(df2)
#To find number of items in landing page with only new page
len(df2.query('landing_page == "new_page"'))
#Probability individual get new page
len(df2.query('landing_page == "new_page"'))/len(df2)
###Output
_____no_output_____
###Markdown
e. Consider your results from a. through d. above, and explain below whether you think there is sufficient evidence to say that the new treatment page leads to more conversions. **Your answer goes here.** From the calculations above we do not yet have solid evidence for preferring either the old page or the new page. The control group (old page) appears to have a slightly higher conversion rate than the treatment group (new page), but we need further testing, such as computing a p-value, to support any conclusion. Part II - A/B Test. Notice that because of the time stamp associated with each event, you could technically run a hypothesis test continuously as each observation was observed. However, then the hard question is do you stop as soon as one page is considered significantly better than another or does it need to happen consistently for a certain amount of time? How long do you run to render a decision that neither page is better than another? These questions are the difficult parts associated with A/B tests in general. `1.` For now, consider you need to make the decision just based on all the data provided. If you want to assume that the old page is better unless the new page proves to be definitely better at a Type I error rate of 5%, what should your null and alternative hypotheses be? You can state your hypothesis in terms of words or in terms of **$p_{old}$** and **$p_{new}$**, which are the converted rates for the old and new pages. $$H_{0} \text{ (null hypothesis)}: \; p_{old} \geq p_{new}$$ $$H_{1} \text{ (alternative hypothesis)}: \; p_{new} \gt p_{old}$$ `2.` Assume under the null hypothesis, $p_{new}$ and $p_{old}$ both have "true" success rates equal to the **converted** success rate regardless of page - that is $p_{new}$ and $p_{old}$ are equal. Furthermore, assume they are equal to the **converted** rate in **ab_data.csv** regardless of the page. Use a sample size for each page equal to the ones in **ab_data.csv**. Perform the sampling distribution for the difference in **converted** between the two pages over 10,000 iterations of calculating an estimate from the null. Use the cells below to provide the necessary parts of this simulation. If this doesn't make complete sense right now, don't worry - you are going to work through the problems below to complete this problem. You can use **Quiz 5** in the classroom to make sure you are on the right track. a. What is the **convert rate** for $p_{new}$ under the null?
###Code
#Pnew is equal to converted mean in ab_data.csv
p_new = df2.converted.mean()
p_new
###Output
_____no_output_____
###Markdown
b. What is the **convert rate** for $p_{old}$ under the null?
###Code
#Pold is also equal to converted mean in ab_data.csv
p_old = df2.converted.mean()
p_old
###Output
_____no_output_____
###Markdown
c. What is $n_{new}$?
###Code
#Create dataframe where landing page is represent only new page from df2
n_new = df2.query('landing_page == "new_page"').shape[0]
n_new
###Output
_____no_output_____
###Markdown
d. What is $n_{old}$?
###Code
#Create dataframe where landing page is represent only new page from df2
old_page = df2.query('landing_page == "old_page"').shape[0]
n_old = old_page
n_old
###Output
_____no_output_____
###Markdown
e. Simulate $n_{new}$ transactions with a convert rate of $p_{new}$ under the null. Store these $n_{new}$ 1's and 0's in **new_page_converted**.
###Code
new_page_converted = np.random.binomial(n_new,p_new)
###Output
_____no_output_____
###Markdown
f. Simulate $n_{old}$ transactions with a convert rate of $p_{old}$ under the null. Store these $n_{old}$ 1's and 0's in **old_page_converted**.
###Code
old_page_converted = np.random.binomial(n_old,p_old)
###Output
_____no_output_____
###Markdown
g. Find $p_{new}$ - $p_{old}$ for your simulated values from part (e) and (f).
###Code
new_page_converted/n_new - old_page_converted/n_old
###Output
_____no_output_____
###Markdown
h. Simulate 10,000 $p_{new}$ - $p_{old}$ values using this same process similarly to the one you calculated in parts **a. through g.** above. Store all 10,000 values in a numpy array called **p_diffs**.
###Code
#To run and find the different 10000 samples of Treatment and control group
p_diffs = []
for _ in range(10000):
new_page_converted = np.random.binomial(n_new,p_new)
old_page_converted = np.random.binomial(n_old, p_old)
diff = new_page_converted/n_new - old_page_converted/n_old
p_diffs.append(diff)
###Output
_____no_output_____
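As an optional aside, the same 10,000-draw null distribution can be generated without an explicit Python loop by requesting all draws from `np.random.binomial` at once; this should be statistically equivalent to the loop above:
```python
# vectorized version of the simulation above
new_converted_simulation = np.random.binomial(n_new, p_new, 10000) / n_new
old_converted_simulation = np.random.binomial(n_old, p_old, 10000) / n_old
p_diffs_vectorized = new_converted_simulation - old_converted_simulation
```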
###Markdown
i. Plot a histogram of the **p_diffs**. Does this plot look like what you expected? Use the matching problem in the classroom to assure you fully understand what was computed here.
###Code
#For visualization plot historgram
plt.hist(p_diffs);
###Output
_____no_output_____
###Markdown
j. What proportion of the **p_diffs** are greater than the actual difference observed in **ab_data.csv**?
###Code
#convert p_diffs to array
p_diffs = np.array(p_diffs)
#Get the difference in ab_data.csv
act_diff = df2[df2['group'] == 'treatment']['converted'].mean() - df2[df2['group'] == 'control']['converted'].mean()
act_diff
#calculate the proportion of p_diffs are greater than actual difference in ab_data.csv(p value)
(p_diffs > act_diff).mean()
plt.hist(p_diffs)
plt.axvline(act_diff,c='r',linewidth=3)
###Output
_____no_output_____
###Markdown
k. In words, explain what you just computed in part **j.** What is this value called in scientific studies? What does this value mean in terms of whether or not there is a difference between the new and old pages? **Put your answer here.** In the previous part, we were calculating the p-value, which is the probability of getting our statistic or a more extreme value if the null is true. A large p-value means the observed statistic is entirely consistent with the null hypothesis; hence, there is no statistical evidence to reject the null hypothesis, which states that the old page converts at the same rate as, or slightly better than, the new page. l. We could also use a built-in to achieve similar results. Though using the built-in might be easier to code, the above portions are a walkthrough of the ideas that are critical to correctly thinking about statistical significance. Fill in the below to calculate the number of conversions for each page, as well as the number of individuals who received each page. Let `n_old` and `n_new` refer to the number of rows associated with the old page and new pages, respectively.
###Code
import statsmodels.api as sm
convert_old = df2.query("landing_page=='old_page'").converted.sum()
convert_new = df2.query("landing_page=='new_page'").converted.sum()
n_old = df2.query("landing_page=='old_page'").shape[0]
n_new = df2.query("landing_page=='new_page'").shape[0]
###Output
_____no_output_____
###Markdown
m. Now use `stats.proportions_ztest` to compute your test statistic and p-value. [Here](http://knowledgetack.com/python/statsmodels/proportions_ztest/) is a helpful link on using the built in.
###Code
z,p = sm.stats.proportions_ztest([convert_old,convert_new],[n_old,n_new],alternative = 'smaller')
z,p
###Output
_____no_output_____
###Markdown
n. What do the z-score and p-value you computed in the previous question mean for the conversion rates of the old and new pages? Do they agree with the findings in parts **j.** and **k.**?
###Code
#Import norm function to compute the significance of z value
from scipy.stats import norm
norm.cdf(z)
#Checked our critical value at 95% confidence interval
norm.ppf(1-(0.05/2))
###Output
_____no_output_____
###Markdown
**Put your answer here.** The results above give a z-score of about 1.31. Since this value does not exceed the critical value at the 95% confidence level (1.96), there is no statistical evidence to reject the null hypothesis. Furthermore, the p-value obtained is similar to the result from our previous findings in j. and k., which also fail to reject the null hypothesis (a large p-value means the data are consistent with the null). Part III - A regression approach. `1.` In this final part, you will see that the result you achieved in the previous A/B test can also be achieved by performing regression. a. Since each row is either a conversion or no conversion, what type of regression should you be performing in this case? **Put your answer here.** We have only two outcomes, conversion or no conversion, so I will use logistic regression. b. The goal is to use **statsmodels** to fit the regression model you specified in part **a.** to see if there is a significant difference in conversion based on which page a customer receives. However, you first need to create a column for the intercept, and create a dummy variable column for which page each user received. Add an **intercept** column, as well as an **ab_page** column, which is 1 when an individual receives the **treatment** and 0 if **control**.
###Code
df2['ab_page'] = df2['group'].apply(lambda x : 1 if x=='treatment' else 0)
###Output
_____no_output_____
###Markdown
c. Use **statsmodels** to import your regression model. Instantiate the model, and fit the model using the two columns you created in part **b.** to predict whether or not an individual converts.
###Code
df2['intercept'] = 1
lm = sm.Logit(df2['converted'],df2[['intercept','ab_page']])
results = lm.fit()
results.summary()
###Output
Optimization terminated successfully.
Current function value: 0.366118
Iterations 6
###Markdown
d. Provide the summary of your model below, and use it as necessary to answer the following questions. e. What is the p-value associated with **ab_page**? Why does it differ from the value you found in **Part II**? **Hint**: What are the null and alternative hypotheses associated with your regression model, and how do they compare to the null and alternative hypotheses in the **Part II**? The z-statistic associated with ab_page is -1.311 and its p-value is approximately 0.19, which is much lower than the roughly 0.9 found in Part II. The difference arises because Part II performed a one-sided test, whereas the logistic regression reports a two-sided test of whether ab_page has any effect on conversion; the null and alternative hypotheses therefore differ between the two exercises. f. Now, you are considering other things that might influence whether or not an individual converts. Discuss why it is a good idea to consider other factors to add into your regression model. Are there any disadvantages to adding additional terms into your regression model? A regression model predicts the dependent variable from the independent variables, so adding other factors lets us evaluate more potential influences on the conversion rate. The disadvantage is that additional terms can be correlated with each other (multicollinearity), which makes the coefficient estimates unstable and harder to interpret. g. Now along with testing if the conversion rate changes for different pages, also add an effect based on which country a user lives. You will need to read in the **countries.csv** dataset and merge together your datasets on the appropriate rows. [Here](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.join.html) are the docs for joining tables. Does it appear that country had an impact on conversion? Don't forget to create dummy variables for these country columns - **Hint: You will need two columns for the three dummy variables.** Provide the statistical output as well as a written response to answer this question.
###Code
countries_df = pd.read_csv('./countries.csv')
df_new = countries_df.set_index('user_id').join(df2.set_index('user_id'), how='inner')
df_new.head(2)
### Create the necessary dummy variables
df_new[['CA','UK','US']] = pd.get_dummies(df_new['country'])
###Output
_____no_output_____
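###Markdown
A possible sketch of the statistical output requested in part **g.** (this reuses the `df_new`, `intercept`, and country dummy columns created above; leaving out `US` makes it the baseline category, and the variable names below are only for this sketch):
###Code
country_logit = sm.Logit(df_new['converted'], df_new[['intercept', 'CA', 'UK']])
country_results = country_logit.fit()
country_results.summary()
###Output
_____no_output_____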
###Markdown
h. Though you have now looked at the individual factors of country and page on conversion, we would now like to look at an interaction between page and country to see if there are significant effects on conversion. Create the necessary additional columns, and fit the new model. Provide the summary results, and your conclusions based on the results.
###Code
### Fit Your Linear Model And Obtain the Results
df_new['UK_ab_page'] = df_new['UK']*df_new['ab_page']
df_new['CA_ab_page'] = df_new['CA']*df_new['ab_page']
lm = sm.Logit(df_new['converted'],df_new[['CA','UK','US','ab_page','UK_ab_page','CA_ab_page']])
result = lm.fit()
result.summary()
###Output
Optimization terminated successfully.
Current function value: 0.366108
Iterations 6
|
Build Better Generative Adversarial Networks (GANs)/Week 3 - StyleGAN and Advancements/C2W3_Assignment.ipynb | ###Markdown
Components of StyleGAN GoalsIn this notebook, you're going to implement various components of StyleGAN, including the truncation trick, the mapping layer, noise injection, adaptive instance normalization (AdaIN), and progressive growing. Learning Objectives1. Understand the components of StyleGAN that differ from the traditional GAN.2. Implement the components of StyleGAN. Getting StartedYou will begin by importing some packages from PyTorch and defining a visualization function which will be useful later.
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.utils import make_grid   # used by show_tensor_images below
import matplotlib.pyplot as plt           # used by show_tensor_images below
def show_tensor_images(image_tensor, num_images=16, size=(3, 64, 64), nrow=3):
'''
Function for visualizing images: Given a tensor of images, number of images,
size per image, and images per row, plots and prints the images in an uniform grid.
'''
image_tensor = (image_tensor + 1) / 2
image_unflat = image_tensor.detach().cpu().clamp_(0, 1)
image_grid = make_grid(image_unflat[:num_images], nrow=nrow, padding=0)
plt.imshow(image_grid.permute(1, 2, 0).squeeze())
plt.axis('off')
plt.show()
###Output
_____no_output_____
###Markdown
Truncation TrickThe first component you will implement is the truncation trick. Remember that this is done after the model is trained and when you are sampling beautiful outputs. The truncation trick resamples the noise vector $z$ from a truncated normal distribution which allows you to tune the generator's fidelity/diversity. The truncation value is at least 0, where 1 means there is little truncation (high diversity) and 0 means the distribution is all truncated except for the mean (high quality/fidelity). This trick is not exclusive to StyleGAN. In fact, you may recall playing with it in an earlier GAN notebook.
###Code
# UNQ_C1 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# GRADED CELL: get_truncated_noise
from scipy.stats import truncnorm
def get_truncated_noise(n_samples, z_dim, truncation):
'''
Function for creating truncated noise vectors: Given the dimensions (n_samples, z_dim)
and truncation value, creates a tensor of that shape filled with random
numbers from the truncated normal distribution.
Parameters:
n_samples: the number of samples to generate, a scalar
z_dim: the dimension of the noise vector, a scalar
truncation: the truncation value, a non-negative scalar
'''
#### START CODE HERE ####
truncated_noise = truncnorm.rvs(-truncation, truncation, size=(n_samples, z_dim))
#### END CODE HERE ####
return torch.Tensor(truncated_noise)
# Test the truncation sample
assert tuple(get_truncated_noise(n_samples=10, z_dim=5, truncation=0.7).shape) == (10, 5)
simple_noise = get_truncated_noise(n_samples=1000, z_dim=10, truncation=0.2)
assert simple_noise.max() > 0.199 and simple_noise.max() < 2
assert simple_noise.min() < -0.199 and simple_noise.min() > -0.2
assert simple_noise.std() > 0.113 and simple_noise.std() < 0.117
print("Success!")
###Output
Success!
###Markdown
Mapping $z$ → $w$The next component you need to implement is the mapping network. It takes the noise vector, $z$, and maps it to an intermediate noise vector, $w$. This makes it so $z$ can be represented in a more disentangled space which makes the features easier to control later.The mapping network in StyleGAN is composed of 8 layers, but for your implementation, you will use a neural network with 3 layers. This is to save time training later.Optional hints for MappingLayers1. This code should be five lines.2. You need 3 linear layers and should use ReLU activations.3. Your linear layers should be input -> hidden_dim -> hidden_dim -> output.
###Code
# UNQ_C2 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# GRADED CELL: MappingLayers
class MappingLayers(nn.Module):
'''
Mapping Layers Class
Values:
z_dim: the dimension of the noise vector, a scalar
hidden_dim: the inner dimension, a scalar
w_dim: the dimension of the intermediate noise vector, a scalar
'''
def __init__(self, z_dim, hidden_dim, w_dim):
super().__init__()
self.mapping = nn.Sequential(
# Please write a neural network which takes in tensors of
# shape (n_samples, z_dim) and outputs (n_samples, w_dim)
# with a hidden layer with hidden_dim neurons
#### START CODE HERE ####
nn.Linear(z_dim, hidden_dim),
nn.ReLU(),
nn.Linear(hidden_dim, hidden_dim),
nn.ReLU(),
nn.Linear(hidden_dim, w_dim)
#### END CODE HERE ####
)
def forward(self, noise):
'''
Function for completing a forward pass of MappingLayers:
Given an initial noise tensor, returns the intermediate noise tensor.
Parameters:
noise: a noise tensor with dimensions (n_samples, z_dim)
'''
return self.mapping(noise)
#UNIT TEST COMMENT: Required for grading
def get_mapping(self):
return self.mapping
# Test the mapping function
map_fn = MappingLayers(10,20,30)
assert tuple(map_fn(torch.randn(2, 10)).shape) == (2, 30)
assert len(map_fn.mapping) > 4
outputs = map_fn(torch.randn(1000, 10))
assert outputs.std() > 0.05 and outputs.std() < 0.3
assert outputs.min() > -2 and outputs.min() < 0
assert outputs.max() < 2 and outputs.max() > 0
layers = [str(x).replace(' ', '').replace('inplace=True', '') for x in map_fn.get_mapping()]
assert layers == ['Linear(in_features=10,out_features=20,bias=True)',
'ReLU()',
'Linear(in_features=20,out_features=20,bias=True)',
'ReLU()',
'Linear(in_features=20,out_features=30,bias=True)']
print("Success!")
###Output
Success!
###Markdown
Random Noise InjectionNext, you will implement the random noise injection that occurs before every AdaIN block. To do this, you need to create a noise tensor that is the same size as the current feature map (image).The noise tensor is not entirely random; it is initialized as one random channel that is then multiplied by learned weights for each channel in the image. For example, imagine an image has 512 channels and its height and width are (4 x 4). You would first create a random (4 x 4) noise matrix with one channel. Then, your model would create 512 values, one for each channel. Next, you multiply the (4 x 4) matrix by each one of these values. This creates a "random" tensor of 512 channels and (4 x 4) pixels, the same dimensions as the image. Finally, you add this noise tensor to the image. This introduces uncorrelated noise and is meant to increase the diversity in the image.New starting weights are generated for every new layer, or generator, where this class is used. Within a layer, every following time the noise injection is called, you take another step with the optimizer and the weights that you use for each channel are optimized (i.e. learned).Optional hint for InjectNoise1. The weight should have the shape (1, channels, 1, 1).Optional hint for InjectNoise1. Remember that you only make the noise for one channel (it is then multiplied by the learned per-channel weights to create the values for the other channels).
###Code
# UNQ_C3 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# GRADED CELL: InjectNoise
class InjectNoise(nn.Module):
'''
Inject Noise Class
Values:
channels: the number of channels the image has, a scalar
'''
def __init__(self, channels):
super().__init__()
self.weight = nn.Parameter( # You use nn.Parameter so that these weights can be optimized
# Initiate the weights for the channels from a random normal distribution
#### START CODE HERE ####
torch.randn(1, channels, 1, 1)
#### END CODE HERE ####
)
def forward(self, image):
'''
Function for completing a forward pass of InjectNoise: Given an image,
returns the image with random noise added.
Parameters:
image: the feature map of shape (n_samples, channels, width, height)
'''
# Set the appropriate shape for the noise!
#### START CODE HERE ####
noise_shape = (image.shape[0], 1, image.shape[2], image.shape[3])
#### END CODE HERE ####
noise = torch.randn(noise_shape, device=image.device) # Creates the random noise
return image + self.weight * noise # Applies to image after multiplying by the weight for each channel
#UNIT TEST COMMENT: Required for grading
def get_weight(self):
return self.weight
#UNIT TEST COMMENT: Required for grading
def get_self(self):
return self
# UNIT TEST
test_noise_channels = 3000
test_noise_samples = 20
fake_images = torch.randn(test_noise_samples, test_noise_channels, 10, 10)
inject_noise = InjectNoise(test_noise_channels)
assert torch.abs(inject_noise.weight.std() - 1) < 0.1
assert torch.abs(inject_noise.weight.mean()) < 0.1
assert type(inject_noise.get_weight()) == torch.nn.parameter.Parameter
assert tuple(inject_noise.weight.shape) == (1, test_noise_channels, 1, 1)
inject_noise.weight = nn.Parameter(torch.ones_like(inject_noise.weight))
# Check that something changed
assert torch.abs((inject_noise(fake_images) - fake_images)).mean() > 0.1
# Check that the change is per-channel
assert torch.abs((inject_noise(fake_images) - fake_images).std(0)).mean() > 1e-4
assert torch.abs((inject_noise(fake_images) - fake_images).std(1)).mean() < 1e-4
assert torch.abs((inject_noise(fake_images) - fake_images).std(2)).mean() > 1e-4
assert torch.abs((inject_noise(fake_images) - fake_images).std(3)).mean() > 1e-4
# Check that the per-channel change is roughly normal
per_channel_change = (inject_noise(fake_images) - fake_images).mean(1).std()
assert per_channel_change > 0.9 and per_channel_change < 1.1
# Make sure that the weights are being used at all
inject_noise.weight = nn.Parameter(torch.zeros_like(inject_noise.weight))
assert torch.abs((inject_noise(fake_images) - fake_images)).mean() < 1e-4
assert len(inject_noise.weight.shape) == 4
print("Success!")
###Output
Success!
###Markdown
Adaptive Instance Normalization (AdaIN)The next component you will implement is AdaIN. To increase control over the image, you inject $w$ — the intermediate noise vector — multiple times throughout StyleGAN. This is done by transforming it into a set of style parameters and introducing the style to the image through AdaIN. Given an image ($x_i$) and the intermediate vector ($w$), AdaIN takes the instance normalization of the image and multiplies it by the style scale ($y_s$) and adds the style bias ($y_b$). You need to calculate the learnable style scale and bias by using linear mappings from $w$. $ \text{AdaIN}(\boldsymbol{\mathrm{x}}_i, \boldsymbol{\mathrm{y}}) = \boldsymbol{\mathrm{y}}_{s,i} \frac{\boldsymbol{\mathrm{x}}_i - \mu(\boldsymbol{\mathrm{x}}_i)}{\sigma(\boldsymbol{\mathrm{x}}_i)} + \boldsymbol{\mathrm{y}}_{b,i} $Optional hints for forward1. Remember the equation for AdaIN.2. The instance normalized image, style scale, and style shift have already been calculated for you.
###Code
# UNQ_C4 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# GRADED CELL: AdaIN
class AdaIN(nn.Module):
'''
AdaIN Class
Values:
channels: the number of channels the image has, a scalar
w_dim: the dimension of the intermediate noise vector, a scalar
'''
def __init__(self, channels, w_dim):
super().__init__()
# Normalize the input per-dimension
self.instance_norm = nn.InstanceNorm2d(channels)
# You want to map w to a set of style weights per channel.
# Replace the Nones with the correct dimensions - keep in mind that
# both linear maps transform a w vector into style weights
# corresponding to the number of image channels.
#### START CODE HERE ####
self.style_scale_transform = nn.Linear(w_dim, channels)
self.style_shift_transform = nn.Linear(w_dim, channels)
#### END CODE HERE ####
def forward(self, image, w):
'''
Function for completing a forward pass of AdaIN: Given an image and intermediate noise vector w,
returns the normalized image that has been scaled and shifted by the style.
Parameters:
image: the feature map of shape (n_samples, channels, width, height)
w: the intermediate noise vector
'''
normalized_image = self.instance_norm(image)
style_scale = self.style_scale_transform(w)[:, :, None, None]
style_shift = self.style_shift_transform(w)[:, :, None, None]
# Calculate the transformed image
#### START CODE HERE ####
transformed_image = style_scale * normalized_image + style_shift
#### END CODE HERE ####
return transformed_image
#UNIT TEST COMMENT: Required for grading
def get_style_scale_transform(self):
return self.style_scale_transform
#UNIT TEST COMMENT: Required for grading
def get_style_shift_transform(self):
return self.style_shift_transform
#UNIT TEST COMMENT: Required for grading
def get_self(self):
return self
w_channels = 50
image_channels = 20
image_size = 30
n_test = 10
adain = AdaIN(image_channels, w_channels)
test_w = torch.randn(n_test, w_channels)
assert adain.style_scale_transform(test_w).shape == adain.style_shift_transform(test_w).shape
assert adain.style_scale_transform(test_w).shape[-1] == image_channels
assert tuple(adain(torch.randn(n_test, image_channels, image_size, image_size), test_w).shape) == (n_test, image_channels, image_size, image_size)
w_channels = 3
image_channels = 2
image_size = 3
n_test = 1
adain = AdaIN(image_channels, w_channels)
adain.style_scale_transform.weight.data = torch.ones_like(adain.style_scale_transform.weight.data) / 4
adain.style_scale_transform.bias.data = torch.zeros_like(adain.style_scale_transform.bias.data)
adain.style_shift_transform.weight.data = torch.ones_like(adain.style_shift_transform.weight.data) / 5
adain.style_shift_transform.bias.data = torch.zeros_like(adain.style_shift_transform.bias.data)
test_input = torch.ones(n_test, image_channels, image_size, image_size)
test_input[:, :, 0] = 0
test_w = torch.ones(n_test, w_channels)
test_output = adain(test_input, test_w)
assert(torch.abs(test_output[0, 0, 0, 0] - 3 / 5 + torch.sqrt(torch.tensor(9 / 8))) < 1e-4)
assert(torch.abs(test_output[0, 0, 1, 0] - 3 / 5 - torch.sqrt(torch.tensor(9 / 32))) < 1e-4)
print("Success!")
###Output
Success!
###Markdown
Progressive Growing in StyleGANThe final StyleGAN component that you will create is progressive growing. This helps StyleGAN to create high resolution images by gradually doubling the image's size until the desired size.You will start by creating a block for the StyleGAN generator. This is comprised of an upsampling layer, a convolutional layer, random noise injection, an AdaIN layer, and an activation.
###Code
# UNQ_C5 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# GRADED CELL: MicroStyleGANGeneratorBlock
class MicroStyleGANGeneratorBlock(nn.Module):
'''
Micro StyleGAN Generator Block Class
Values:
in_chan: the number of channels in the input, a scalar
out_chan: the number of channels wanted in the output, a scalar
w_dim: the dimension of the intermediate noise vector, a scalar
kernel_size: the size of the convolving kernel
starting_size: the size of the starting image
'''
def __init__(self, in_chan, out_chan, w_dim, kernel_size, starting_size, use_upsample=True):
super().__init__()
self.use_upsample = use_upsample
# Replace the Nones in order to:
# 1. Upsample to the starting_size, bilinearly (https://pytorch.org/docs/master/generated/torch.nn.Upsample.html)
# 2. Create a kernel_size convolution which takes in
# an image with in_chan and outputs one with out_chan (https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html)
# 3. Create an object to inject noise
# 4. Create an AdaIN object
# 5. Create a LeakyReLU activation with slope 0.2
#### START CODE HERE ####
if self.use_upsample:
self.upsample = nn.Upsample((starting_size, starting_size), mode='bilinear')
self.conv = nn.Conv2d(in_chan, out_chan, kernel_size, padding=1) # Padding is used to maintain the image size
self.inject_noise = InjectNoise(out_chan)
self.adain = AdaIN(out_chan, w_dim)
self.activation = nn.LeakyReLU(0.2)
#### END CODE HERE ####
def forward(self, x, w):
'''
Function for completing a forward pass of MicroStyleGANGeneratorBlock: Given an x and w,
computes a StyleGAN generator block.
Parameters:
x: the input into the generator, feature map of shape (n_samples, channels, width, height)
w: the intermediate noise vector
'''
if self.use_upsample:
x = self.upsample(x)
x = self.conv(x)
x = self.inject_noise(x)
x = self.activation(x)
x = self.adain(x, w)
return x
#UNIT TEST COMMENT: Required for grading
def get_self(self):
return self;
test_stylegan_block = MicroStyleGANGeneratorBlock(in_chan=128, out_chan=64, w_dim=256, kernel_size=3, starting_size=8)
test_x = torch.ones(1, 128, 4, 4)
test_x[:, :, 1:3, 1:3] = 0
test_w = torch.ones(1, 256)
test_x = test_stylegan_block.upsample(test_x)
assert tuple(test_x.shape) == (1, 128, 8, 8)
assert torch.abs(test_x.mean() - 0.75) < 1e-4
test_x = test_stylegan_block.conv(test_x)
assert tuple(test_x.shape) == (1, 64, 8, 8)
test_x = test_stylegan_block.inject_noise(test_x)
test_x = test_stylegan_block.activation(test_x)
assert test_x.min() < 0
assert -test_x.min() / test_x.max() < 0.4
test_x = test_stylegan_block.adain(test_x, test_w)
foo = test_stylegan_block(torch.ones(10, 128, 4, 4), torch.ones(10, 256))
print("Success!")
###Output
Success!
###Markdown
Now, you can implement progressive growing. StyleGAN starts with a constant 4 x 4 (x 512 channel) tensor which is put through an iteration of the generator without upsampling. The output is some noise that can then be transformed into a blurry 4 x 4 image. This is where the progressive growing process begins. The 4 x 4 noise can be further passed through a generator block with upsampling to produce an 8 x 8 output. However, this will be done gradually.You will simulate progressive growing from an 8 x 8 image to a 16 x 16 image. Instead of simply passing it to the generator block with upsampling, StyleGAN gradually trains the generator to the new size by mixing in an image that was only upsampled. By mixing an upsampled 8 x 8 image (which is 16 x 16) with increasingly more of the 16 x 16 generator output, the generator is more stable as it progressively trains. As such, you will do two separate operations with the 8 x 8 noise:1. Pass it into the next generator block to create an output noise, that you will then transform to an image.2. Transform it into an image and then upsample it to be 16 x 16.You will now have two images that are both double the resolution of the 8 x 8 noise. Then, using an alpha ($\alpha$) term, you combine the higher resolution images obtained from (1) and (2). You would then pass this into the discriminator and use the feedback to update the weights of your generator. The key here is that the $\alpha$ term is gradually increased until eventually, only the image from (1), the generator, is used. That is your final image or you could continue this process to make a 32 x 32 image or 64 x 64, 128 x 128, etc. This micro model you will implement will visualize what the model outputs at a particular stage of training, for a specific value of $\alpha$. However to reiterate, in practice, StyleGAN will slowly phase out the upsampled image by increasing the $\alpha$ parameter over many training steps, doing this process repeatedly with larger and larger alpha values until it is 1—at this point, the combined image is solely comprised of the image from the generator block. This method of gradually training the generator increases the stability and fidelity of the model.Optional hint for forward1. You may find [torch.lerp](https://pytorch.org/docs/stable/generated/torch.lerp.html) helpful.
###Code
# UNQ_C6 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# GRADED CELL: MicroStyleGANGenerator
class MicroStyleGANGenerator(nn.Module):
'''
Micro StyleGAN Generator Class
Values:
z_dim: the dimension of the noise vector, a scalar
map_hidden_dim: the mapping inner dimension, a scalar
w_dim: the dimension of the intermediate noise vector, a scalar
in_chan: the dimension of the constant input, usually w_dim, a scalar
out_chan: the number of channels wanted in the output, a scalar
kernel_size: the size of the convolving kernel
hidden_chan: the inner dimension, a scalar
'''
def __init__(self,
z_dim,
map_hidden_dim,
w_dim,
in_chan,
out_chan,
kernel_size,
hidden_chan):
super().__init__()
self.map = MappingLayers(z_dim, map_hidden_dim, w_dim)
# Typically this constant is initiated to all ones, but you will initiate to a
# Gaussian to better visualize the network's effect
self.starting_constant = nn.Parameter(torch.randn(1, in_chan, 4, 4))
self.block0 = MicroStyleGANGeneratorBlock(in_chan, hidden_chan, w_dim, kernel_size, 4, use_upsample=False)
self.block1 = MicroStyleGANGeneratorBlock(hidden_chan, hidden_chan, w_dim, kernel_size, 8)
self.block2 = MicroStyleGANGeneratorBlock(hidden_chan, hidden_chan, w_dim, kernel_size, 16)
# You need to have a way of mapping from the output noise to an image,
# so you learn a 1x1 convolution to transform the e.g. 512 channels into 3 channels
# (Note that this is simplified, with clipping used in the real StyleGAN)
self.block1_to_image = nn.Conv2d(hidden_chan, out_chan, kernel_size=1)
self.block2_to_image = nn.Conv2d(hidden_chan, out_chan, kernel_size=1)
self.alpha = 0.2
def upsample_to_match_size(self, smaller_image, bigger_image):
'''
Function for upsampling an image to the size of another: Given a two images (smaller and bigger),
upsamples the first to have the same dimensions as the second.
Parameters:
smaller_image: the smaller image to upsample
bigger_image: the bigger image whose dimensions will be upsampled to
'''
return F.interpolate(smaller_image, size=bigger_image.shape[-2:], mode='bilinear')
def forward(self, noise, return_intermediate=False):
'''
Function for completing a forward pass of MicroStyleGANGenerator: Given noise,
computes a StyleGAN iteration.
Parameters:
noise: a noise tensor with dimensions (n_samples, z_dim)
return_intermediate: a boolean, true to return the images as well (for testing) and false otherwise
'''
x = self.starting_constant
w = self.map(noise)
x = self.block0(x, w)
x_small = self.block1(x, w) # First generator run output
x_small_image = self.block1_to_image(x_small)
x_big = self.block2(x_small, w) # Second generator run output
x_big_image = self.block2_to_image(x_big)
x_small_upsample = self.upsample_to_match_size(x_small_image, x_big_image) # Upsample first generator run output to be same size as second generator run output
# Interpolate between the upsampled image and the image from the generator using alpha
#### START CODE HERE ####
interpolation = self.alpha * x_big_image + (1 - self.alpha) * x_small_upsample
#### END CODE HERE ####
if return_intermediate:
return interpolation, x_small_upsample, x_big_image
return interpolation
#UNIT TEST COMMENT: Required for grading
def get_self(self):
return self;
z_dim = 128
out_chan = 3
truncation = 0.7
mu_stylegan = MicroStyleGANGenerator(
z_dim=z_dim,
map_hidden_dim=1024,
w_dim=496,
in_chan=512,
out_chan=out_chan,
kernel_size=3,
hidden_chan=256
)
test_samples = 10
test_result = mu_stylegan(get_truncated_noise(test_samples, z_dim, truncation))
# Check if the block works
assert tuple(test_result.shape) == (test_samples, out_chan, 16, 16)
# Check that the interpolation is correct
mu_stylegan.alpha = 1.
test_result, _, test_big = mu_stylegan(
get_truncated_noise(test_samples, z_dim, truncation),
return_intermediate=True)
assert torch.abs(test_result - test_big).mean() < 0.001
mu_stylegan.alpha = 0.
test_result, test_small, _ = mu_stylegan(
get_truncated_noise(test_samples, z_dim, truncation),
return_intermediate=True)
assert torch.abs(test_result - test_small).mean() < 0.001
print("Success!")
###Output
Success!
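###Markdown
Optional aside (not part of the graded cells): the `torch.lerp` hint above computes the same interpolation as the manual formula in `forward`, since `torch.lerp(start, end, weight)` returns `start + weight * (end - start)`. A minimal check:
###Code
start, end = torch.zeros(2, 3), torch.ones(2, 3)
alpha = 0.3
# lerp(start, end, alpha) is alpha * end + (1 - alpha) * start
assert torch.allclose(torch.lerp(start, end, alpha), alpha * end + (1 - alpha) * start)
print("torch.lerp matches the manual interpolation")
###Output
_____no_output_____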
###Markdown
Running StyleGANFinally, you can put all the components together to run an iteration of your micro StyleGAN!You can also visualize what this randomly initiated generator can produce. The code will automatically interpolate between different values of alpha so that you can intuitively see what it means to mix the low-resolution and high-resolution images using different values of alpha. In the generated image, the samples start from low alpha values and go to high alpha values.
###Code
import numpy as np
from torchvision.utils import make_grid
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = [15, 15]
viz_samples = 10
# The noise is exaggerated for visual effect
viz_noise = get_truncated_noise(viz_samples, z_dim, truncation) * 10
mu_stylegan.eval()
images = []
for alpha in np.linspace(0, 1, num=5):
mu_stylegan.alpha = alpha
viz_result, _, _ = mu_stylegan(
viz_noise,
return_intermediate=True)
images += [tensor for tensor in viz_result]
show_tensor_images(torch.stack(images), nrow=viz_samples, num_images=len(images))
mu_stylegan = mu_stylegan.train()
###Output
_____no_output_____ |
05_Dictionary.ipynb | ###Markdown
Dictionary- Definition- Examples- Thumb rules for creating Dictionary data- Accessing Dictionary elements- Adding/Updating new Key:Value pair to dictionary elements- Access all Keys and all Values- Available Methods
###Code
a = {} # empty dictionary
print(a)
a = {"A":1,"B":2}
b = {"A":1,"B":2,"A":10}
c = {"A":1,"B":2,"a":10}
d = {"a":1,1:10,12.3:40,"abc":20,(1,2,3):100}
e = {"a":1,"B":10,
"C":{"A":1,"B":2}
}
# f = {"A":1,[1,2,3]:30}
print(a)
print(b)
print(c)
print(d)
print(e)
a = {"A":1,"B":2,"C":3,"D":4}
print(a)
###Output
{'A': 1, 'B': 2, 'C': 3, 'D': 4}
###Markdown
get the keys
###Code
len(a)
len(a.keys())
list(a.keys())
len(list(a.keys()))
###Output
_____no_output_____
###Markdown
get the values
###Code
print(a)
a.values()
# dir(a)
###Output
_____no_output_____
###Markdown
pop
###Code
a.pop("A")
print(a)
###Output
{'B': 2, 'C': 3, 'D': 4}
###Markdown
popitem
###Code
a.popitem()
###Output
_____no_output_____
###Markdown
items
###Code
a = {"A":1,"B":2,"C":3,"D":4}
a.items()
###Output
_____no_output_____
###Markdown
Access individual element of dictionary
###Code
a = {"A":1,"B":2,"C":3,"D":4,10:"A",(1,2):[1,2,3]}
a["A"]
a["D"] = 40
print(a)
a[10] = "B"
print(a)
a[(1,2)]
###Output
_____no_output_____
###Markdown
Add new k:v pair in dictionary
###Code
a
a["Z"] = 1000
print(a)
###Output
{'A': 1, 'B': 2, 'C': 3, 'D': 4, 10: 'A', (1, 2): [1, 2, 3], 'Z': 1000}
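###Markdown
A few other commonly used dictionary methods (a quick sketch reusing the dictionary `a` from above)
###Code
print(a.get("A"))              # value for an existing key
print(a.get("missing", -1))    # get() returns a default instead of raising KeyError
a.update({"Y": 500, "B": 20})  # add or overwrite several key:value pairs at once
print("Y" in a)                # membership test checks the keys
print(a)
###Output
_____no_output_____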
###Markdown
Exercise
###Code
a = {"A":1,"B":[2,3,4],"C":10,"D":{"A":100,"B":200,100:(1,2,3)},100:(1,2,3)}
print(a)
a[a["D"]["A"]][::-1]
# a[a["D"]["A"]][::-1]
###Output
_____no_output_____ |
ipy-notebooks/raw-data/hpi-dayfilter/HPI Cooling Data - Typical Profile Extraction.ipynb | ###Markdown
HPI CoolingThis notebook will extract typical profiles for use as input for the coupled co-simulation with CitySim
###Code
import pandas as pd
import os
import seaborn as sns
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
workingdir = "/Users/Clayton/Dropbox/03-ETH/98-UMEM/RawDataAnalysis/"
# os.chdir(workingdir)
df = pd.read_csv(workingdir+"aggset2_QW/HPI_QW.csv", index_col="Date Time", parse_dates=True)
df.info()
point = "HPIMKA01QW_A [kWh]"
df_QW = pd.DataFrame(df[point].truncate(before='2013',after='2014'))
df_QW.plot(figsize=(20,3));
df_QW = df_QW[(df_QW<1000)]
###Output
_____no_output_____
###Markdown
Convert to SAX
###Code
df = df_QW.dropna()
#df.head()
df['Date'] = df.index.map(lambda t: t.date())
df['Time'] = df.index.map(lambda t: t.time())
df_pivot = pd.pivot_table(df, values=point, index='Date', columns='Time')
a = 3
w = '4h'
from scipy.stats import norm
import numpy as np
import string
def discretizer(row, breakpoints):
return np.where(breakpoints > float(row))[0][0]
def stringizer(row):
return ''.join(string.ascii_letters[int(row['step'])])
def adddate(df):
df['Date'] = df.index.map(lambda t: t.date())
df['Time'] = df.index.map(lambda t: t.time())
return df
def SAXizer(df, symbol_count, breakfreq):
x = df.fillna(method='ffill')
y = (x - x.mean()) / x.std()
z = pd.DataFrame(y.resample(breakfreq).dropna())
z.columns = ["numbers"]
breakpoints = norm.ppf(np.linspace(1./symbol_count, 1-1./symbol_count, symbol_count-1))
breakpoints = np.concatenate((breakpoints, np.array([np.Inf])))
z['step'] = z.apply(discretizer, axis=1, args=[breakpoints])
z['letter'] = z.apply(stringizer, axis=1)
z = adddate(z)
zpivot = z.pivot(index='Date', columns='Time', values='letter')
zpivot = z.pivot(index='Date', columns='Time', values='letter')
SAXstrings = zpivot.dropna().sum(axis=1)
return zpivot.dropna(), SAXstrings
df_forSAX = df[point]
zpivot, SAXstrings = SAXizer(df_forSAX, a, w)
patterncount = SAXstrings.value_counts()
patterncount.plot(kind='bar', figsize=(15,5));
binsizethreshold = 0.02
motifs = patterncount[(patterncount > patterncount.sum() * binsizethreshold)]
motifs
discords = patterncount[(patterncount < patterncount.sum() * binsizethreshold)]
discords.head()
df_RawAndSAX = pd.concat([df_pivot, pd.DataFrame(SAXstrings, columns=['SAXstring'])], axis=1)
motifdata = df_RawAndSAX[df_RawAndSAX.SAXstring.isin(list(motifs.index))]
###Output
_____no_output_____
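###Markdown
As a quick illustration of the discretization inside `SAXizer` (reusing the alphabet size `a = 3` set above; the variable name below is only for this sketch): the breakpoints are Gaussian quantiles chosen so that each letter is equally probable for normalized data.
###Code
example_breakpoints = norm.ppf(np.linspace(1./a, 1 - 1./a, a - 1))
print example_breakpoints # roughly [-0.43, 0.43] for a 3-letter alphabet
###Output
_____no_output_____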
###Markdown
Cluster!
###Code
from datetime import datetime
import matplotlib
import matplotlib.pyplot as plt
# df = motifdata.drop(['SAXstring'], axis=1)
df_pivot.head()
df = df_pivot
def timestampcombine_parse(date,time):
#timestampstring = date+" "+time
# date = datetime.strptime(date, "%Y-%M-%d")
# time = datetime.strptime(time, "%H:%M:%S").time()
pydatetime = datetime.combine(date, time)
#pydatetime = pydatetime.replace(tzinfo=pytz.UTC)
#return pydatetime.astimezone(singaporezone).replace(tzinfo=None)
return pydatetime
df = df.T.unstack().reset_index()
df['timestampstring'] = map(timestampcombine_parse, df.Date, df.Time)
df.index = df.timestampstring
df = df.drop(['Date','Time','timestampstring'],axis=1)
df.columns = [point]
df = df.resample('H')
df.head()
from scipy.cluster.vq import kmeans, vq, whiten
from scipy.spatial.distance import cdist
from sklearn import metrics
import numpy as np
df_norm = (df - df.mean()) / (df.max() - df.min()) #normalized
df['Time'] = df.index.map(lambda t: t.time())
df['Date'] = df.index.map(lambda t: t.date())
df_norm['Time'] = df_norm.index.map(lambda t: t.time())
df_norm['Date'] = df_norm.index.map(lambda t: t.date())
dailyblocks = pd.pivot_table(df, values=point, index='Date', columns='Time', aggfunc='mean')
dailyblocks_norm = pd.pivot_table(df_norm, values=point, index='Date', columns='Time', aggfunc='mean')
dailyblocksmatrix_norm = np.matrix(dailyblocks_norm.dropna())
centers, _ = kmeans(dailyblocksmatrix_norm, 4, iter=10000)
cluster, _ = vq(dailyblocksmatrix_norm, centers)
clusterdf = pd.DataFrame(cluster, columns=['ClusterNo'])
dailyclusters = pd.concat([dailyblocks.dropna().reset_index(), clusterdf], axis=1)
x = dailyclusters.groupby('ClusterNo').mean().sum(axis=1).order()
x = pd.DataFrame(x.reset_index())
x['ClusterNo2'] = x.index
x = x.set_index('ClusterNo')
x = x.drop([0], axis=1)
dailyclusters = dailyclusters.merge(x, how='outer', left_on='ClusterNo', right_index=True)
dailyclusters = dailyclusters.drop(['ClusterNo'],axis=1)
dailyclusters = dailyclusters.set_index(['ClusterNo2','Date']).T.sort()
clusterlist = list(dailyclusters.columns.get_level_values(0).unique())
matplotlib.rcParams['figure.figsize'] = 4,2
styles2 = ['LightSkyBlue', 'b','LightGreen', 'g','LightCoral','r','SandyBrown','Orange','Plum','Purple','Gold','b']
fig, ax = plt.subplots()
for col, style in zip(clusterlist, styles2):
dailyclusters[col].plot(ax=ax, legend=False, style=style, alpha=0.1, xticks=np.arange(0, 86400, 21600))
ax.set_ylabel('Total Daily Profile')
ax.set_xlabel('Time of Day')
plt.savefig("cooling_clusters_total_overlaid_profiles.pdf")
def ClusterUnstacker(df):
df = df.unstack().reset_index()
df['timestampstring'] = map(timestampcombine, df.Date, df.Time)
df = df.dropna()
return df
def timestampcombine(date,time):
pydatetime = datetime.combine(date, time)
#pydatetime = pydatetime.replace(tzinfo=pytz.UTC)
#return pydatetime.astimezone(singaporezone).replace(tzinfo=None)
return pydatetime
dfclusterunstacked = ClusterUnstacker(dailyclusters)
dfclusterunstackedpivoted = pd.pivot_table(dfclusterunstacked, values=0, index='timestampstring', columns='ClusterNo2')
clusteravgplot = dfclusterunstackedpivoted.resample('D', how=np.sum).plot(style="^",markersize=10, alpha=0.5)
clusteravgplot.set_ylabel('Daily Totals')
clusteravgplot.set_xlabel('Date')
clusteravgplot.legend(loc='center left', bbox_to_anchor=(1, 0.5), title='Cluster')
plt.savefig("cooling_clusters_overtime.pdf")
dailyclusters.head()
calendar = dfclusterunstackedpivoted.resample('D', how=np.sum)
calendar.head()
calendar.to_csv("cooling_calendar.csv")
dfclusterunstackedpivoted['Time'] = dfclusterunstackedpivoted.index.map(lambda t: t.time())
dailyprofile = dfclusterunstackedpivoted.groupby('Time').mean().plot(figsize=(6,2),linewidth=3, xticks=np.arange(0, 86400, 10800))
dailyprofile.set_ylabel('Average Daily Profile')
dailyprofile.set_xlabel('Time of Day')
dailyprofile.legend(loc='center left', bbox_to_anchor=(1, 0.5), title='Cluster')
plt.savefig("cooling_clusters_averagedprofiles.pdf")
dfclusterunstackedpivoted.groupby('Time').max().max().max()
#dfclusterunstackedpivoted['Time'] = dfclusterunstackedpivoted.index.map(lambda t: t.time())
normalizedprofiles = dfclusterunstackedpivoted.groupby('Time').mean() / dfclusterunstackedpivoted.groupby('Time').max().max().max()
normalizedprofiles = normalizedprofiles.fillna(0)
normalizedprofiles.plot()
normalizedprofiles.to_csv("cooling_schedules.csv")
def DayvsClusterMaker(df):
df.index = df.timestampstring
df['Weekday'] = df.index.map(lambda t: t.date().weekday())
df['Date'] = df.index.map(lambda t: t.date())
df['Time'] = df.index.map(lambda t: t.time())
DayVsCluster = df.resample('D').reset_index(drop=True)
DayVsCluster = pd.pivot_table(DayVsCluster, values=0, index='ClusterNo2', columns='Weekday', aggfunc='count')
DayVsCluster.columns = ['Mon','Tue','Wed','Thur','Fri','Sat','Sun']
return DayVsCluster.T
DayVsCluster = DayvsClusterMaker(dfclusterunstacked)
DayVsCluster = DayVsCluster.T/DayVsCluster.T.sum()
DayVsCluster = DayVsCluster.T
DayVsClusterplot1 = DayVsCluster.plot(figsize=(7,3), kind='bar', stacked=True)
DayVsClusterplot1.set_ylabel('Number of Days in Each Cluster')
DayVsClusterplot1.set_xlabel('Day of the Week')
DayVsClusterplot1.legend(loc='center left', bbox_to_anchor=(1, 0.5), title='Cluster')
plt.savefig("cooling_clusters_dailybreakdown.pdf")
DayVsCluster
###Output
_____no_output_____
###Markdown
Create Graphics for JBPS PaperFirst load the resultant data from the analysis so no need to rerun:
###Code
normalizedprofiles = pd.read_csv("Schedules.csv", index_col='Time')
normalizedprofiles.head()
dailyprofile = normalizedprofiles.plot(figsize=(4,2),linewidth=3)
dailyprofile.set_ylabel('Normalized Daily Profile')
dailyprofile.set_xlabel('Time of Day')
dailyprofile.legend(loc='center', bbox_to_anchor=(0.5, 1.1), title='Cluster', ncol=4)
plt.savefig("clusters_averagedprofiles_normalized.pdf")
###Output
_____no_output_____
###Markdown
The Cal-Heatmap setup
###Code
calendar = pd.read_csv("calendar.csv", index_col='timestampstring', parse_dates=True)
#calendar.fillna(0).dropna(how="all").info()
import time
calendar['epochtime'] = calendar.index.map(lambda x: int(time.mktime(x.timetuple())))
calendar.index = calendar.epochtime
calendar.head()
calendar = calendar.drop(['epochtime'], axis=1)
calendar.head()
cal_heatmap = calendar.unstack().dropna().reset_index()
cal_heatmap.head()
cal_heatmap.index = cal_heatmap.epochtime
cal_heatmap.head()
cal_heatmap = cal_heatmap.drop(['epochtime',0], axis=1)
cal_heatmap = cal_heatmap.sort()
cal_heatmap.level_0 = cal_heatmap.level_0.astype("float")
cal_heatmap.info()
cal_heatmap.head()
cal_heatmap = cal_heatmap+1
cal_heatmap.head()
cal_heatmap.level_0.to_json("hpi_cal_heatmap.json")
x = sns.color_palette()
import matplotlib.colors as colors
for color in x:
print colors.rgb2hex(color)
###Output
#4c72b0
#55a868
#c44e52
#8172b2
#ccb974
#64b5cd
|
MoreNotebooks/ModelFitting/NNLS.ipynb | ###Markdown
An Introduction to Model Fitting with Python In this notebook, I assume you have used python to some degree to analyze data. I will be using numpy/scipy, the de-facto numerical workhorse in python. I will also use matplotlib to visualize the data. We're going to fit a model to some 'fake' data: a constant continuum with a Gaussian line superimposed. The [sequel to this notebook](Emcee.ipynb) will be model fitting with Markov Chain Monte Carlo techniques (MCMC). But first, let's make the fake data. 1 Making a Fake Emission Line The "true" data is some background flux of photons (a continuum from the source or background) that has a linear trend, plus a Gaussian line with some amplitude, width and center. I set these up as variables so it's easy to play around with them and see how things change.
###Code
from numpy import * # Deal with it
# Start by defining some parameters. Change these if you like!
cont_zp = 500.0 # value at 0
cont_slope = 5.0 # change in continuum per channel
amplitude = 150.0 # peak of the line
width = 0.5 # Width of the line
center = 5.0 # location of the line
# Next, a grid of wavelength channels (assumed to have no uncertainty)
wave = linspace(0,10,100)
# The 'true' observations
flux = amplitude*exp(-0.5*power(wave-center,2)/width**2) + \
       cont_zp + cont_slope*wave
# The actual observations = true observations + Poisson noise
obs_flux = random.poisson(flux)
###Output
_____no_output_____
###Markdown
So we have the wavelength on the x-axis, which is assumed to have no uncertainty. The measured flux is different from the "true" flux due to Poisson noise. Let's plot the true flux and observed flux to see how things look.
###Code
%matplotlib inline
from matplotlib.pyplot import subplots, plot,step,xlabel,ylabel,show
fig,ax = subplots(1)
ax.plot(wave, flux, 'r-')
ax.step(wave, obs_flux, color='k')
ax.set_xlabel('Wavelength Chanel')
ax.set_ylabel('Counts')
###Output
_____no_output_____
###Markdown
2 Fitting with Non-Linear Least Squares Just to see how you can get a quick fit, let's use the non-linear least-squares routine scipy.optimize.curve_fit. To do this, we must first write a python function that defines the model we are going to fit to the data. The first argument is the x-data, the rest are parameters (the order of the parameters will define the order of the parameter vector).
###Code
def model(x, cont, slope, amp, center, width):
model = amp*exp(-0.5*power(x-center,2)/width**2)+cont+slope*x
return model
###Output
_____no_output_____
###Markdown
Now we run curve_fit. We pass in the model function, the x and y data, an initial guess for the parameters, and the error in the observations. Since the flux has Poisson noise, we can simply put in $\sigma(y) = \sqrt y$.
###Code
from scipy.optimize import curve_fit
popt,pcov = curve_fit(model, wave, obs_flux, p0=(425.,0.0,80.,4.5,1.0), sigma=sqrt(obs_flux))
print(popt)
err = sqrt(diag(pcov))
print(err)
###Output
_____no_output_____
###Markdown
The popt variable holds the best fit parameters as a length-5 array and pcov is the 5X5 covariance matrix. The diagonal of this is the variance of each parameter, so the square root of the diagonal gives the formal errors. Let's plot out this least-squares answer and compare with the "true" value.
###Code
ax.plot(wave, model(wave, *popt), 'b-') # Note: *popt is a python parameter substitution trick
fig
###Output
_____no_output_____
###Markdown
Aside from the best-fit values and their uncertainties, it's also a good idea to examine the covariance matrix, to see how correlated the parameters are. A quick way to do this is to construct the correlation matrix from the covariance matrix $C[i,j]$ and errors $\sigma[i]$:$$\rho[i,j] = \frac{C[i,j]}{\sigma[i]\sigma[j]}$$positive values denote correlation, negative denote anti-correlation. $\rho$ ranges from -1 to 1. A value close to 0 denotes no significant correlation.
###Code
pcor = pcov/outer(err,err)
for i in range(pcor.shape[0]):
for j in range(pcor.shape[1]):
print("{:5.2f} ".format(pcor[i,j]), end='')
print()
###Output
_____no_output_____
###Markdown
From this correlation matrix, you can probably see that the continuum zero-point (first row/column) is significantly anti-correlated with the continuum slope (second row/column) and the amplitude (third row/column) is anti-correlated with the width (5th row/column). The center of the line (fourth row/column) is not significantly correlated with any of the parameters. If you think aobut it, this makes sense. A way to visualize the correlations is to plot equal-probability ellipses in parameter space. There's no automatic way to do this that I'm aware of, so we'll follow [this procedure](https://www.visiondummy.com/2014/04/draw-error-ellipse-representing-covariance-matrix/:~:text=The%20error%20ellipse%20represents%20an,visualize%20a%202D%20confidence%20interval.&text=This%20confidence%20ellipse%20defines%20the,from%20the%20underlying%20Gaussian%20distribution). Briefly, we'll compute the eigenvectors and eigenvalues of the covariance matrix which gives us the major and minor axes of the ellipse. We then need to scale the whole ellipse by a factor that depends on the number of parameters we're fitting (degrees of freedom) and there are lookup tables for that, but I've just supplied the value.Matplotlib does not (yet) have a simple function to plot ellipses. We have to use the deeper-down API to first create an ellipse *artist* and then add this artist to the current axis (which we get with the gca() function).
###Code
from matplotlib.patches import Ellipse
from matplotlib.pyplot import gca,show,xlim,ylim,legend,axvline,axhline
eigval,eigvec = linalg.eigh(pcov[0:2,0:2])
if eigval[0]<eigval[1]: # make sure eigvals are reverse-sorted
eigval = eigval[::-1]
eigvec = eigvec[:,::-1]
# eigvec is 2X2 matrix, each eigenvector is a column. Compute angle of
# first vector which will be the major axis of the ellipse
theta = 180.0/pi*arctan2(eigvec[1,0],eigvec[0,0])
# The full width and height of the 68% error ellipse is
# 2*sqrt(eigval)*sqrt(s), where for 5 degrees of freedom, s = 5.9
width,height = 2*sqrt(eigval)*sqrt(5.9)
plot([popt[0]],[popt[1]], "*", ms=18, label="Best-fit solution")
ell = Ellipse(xy=[popt[0],popt[1]], width=width, height=height, angle=theta,
fc='None', ec='red')
ax = gca()
ax.add_artist(ell)
# Show the real answer:
axhline(cont_slope, linestyle='--', label="True answer")
axvline(cont_zp, linestyle='--')
xlabel('cont_zp')
ylabel('cont_slope')
# Set some reasonable limits
xlim(popt[0]-4*err[0],popt[0]+4*err[0])
ylim(popt[1]-4*err[1],popt[1]+4*err[1])
legend(numpoints=1)
show()
###Output
_____no_output_____ |
intro_mxnet_gluon/.ipynb_checkpoints/mxnet_gluon_primer_with_churn_prediction-checkpoint.ipynb | ###Markdown
Logistic Regression: Predicting churn with Apache MXNet and GluonThis notebook is designed to be a quick primer on Apache MXNet and Gluon while solving a churn prediction use case. ProblemService providers have historical records on customer loyalty and track how likely users are going to continue to use the service. We can use this historical information to construct a model to predict if the user is going to leave (churn) or continue to use the service. Logistic RegressionTo solve this problem we are going to use a technique known as logistic regression. It's used when the dependent variable is categorical. In this problem we are predicting if the user will churn or not, hence we'll use a binary logistic regression which is the binary version of the more generalized multiclass logistic regression. For further reading check the Wikipedia [article](https://en.wikipedia.org/wiki/Multinomial_logistic_regression) DataThe dataset I use is publicly available and was mentioned in the book “Discovering Knowledge in Data” by Daniel T. Larose. It is attributed by the author to the University of California Irvine Repository of Machine Learning Datasets, and can be downloaded from the author’s website [here](http://www.dataminingconsultant.com/data/churn.txt) in .csv format.A modified version is provided in the data/ folder for convenience
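For reference, the binary logistic model fit below estimates the churn probability as $P(\text{churn}=1 \mid x) = \sigma(w^\top x + b)$, where $\sigma(z) = \frac{1}{1+e^{-z}}$ is the logistic (sigmoid) function; the `logistic` helper defined in the Gluon section later in this notebook implements exactly this $\sigma$.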
###Code
import numpy as np
import mxnet as mx
import logging
import pandas as pd
import sys
from sklearn.preprocessing import MinMaxScaler
import matplotlib.pyplot as plt
import logging
logging.basicConfig(stream=sys.stdout, level=logging.DEBUG) # Config the logging
###Output
_____no_output_____
###Markdown
Feature selectionThere are many factors (or features) that we think are indicative of customer churn. For simplicity we are going to use the last 5 features namely -- Night Charge, Intl Mins, Intl Calls, Intl Charge, CustServ Calls as the indicator for churn.
###Code
# Data fields in the CSV
#State,Account Length,Area Code,Phone,Int'l Plan,VMail Plan, VMail Message,Day Mins,Day Calls,
#Day Charge,Eve Mins,Eve Calls,Eve Charge,Night Mins,Night Calls,
#Night Charge,Intl Mins,Intl Calls,Intl Charge,CustServ Calls,Churn?
dataframe = pd.read_csv('churn.txt', engine='python', skipfooter=3)
dataset = dataframe.values
x_data = dataset[:, -6:-1] # use a subset as features
# convert the last field in to [0,1] from False/True
y_data = np.array([[0 if d == 'False.' else 1 for d in dataset[:, [-1]]]]).T
print(x_data.shape, y_data.shape)
print type(x_data), type(y_data)
###Output
((3330, 5), (3330, 1))
<type 'numpy.ndarray'> <type 'numpy.ndarray'>
###Markdown
Let's split into training and test sets
###Code
sample_num = x_data.shape[0]
dimension = x_data.shape[1]
batch_size = 32
train_size = int(len(dataset) * 0.7)
test_size = len(dataset) - train_size
train_x, test_x = x_data[0:train_size,:], x_data[train_size:len(x_data),:]
train_y, test_y = y_data[0:train_size,:], y_data[train_size:len(y_data),:]
print len(train_x), len(test_x)
print len(train_y), len(test_y)
###Output
2331 999
2331 999
###Markdown
Take a moment to look at [NDArrays](https://github.com/zackchase/mxnet-the-straight-dope/blob/master/chapter01_crashcourse/ndarray.ipynb) in MXNet and Gluon. We'll use this extensively in all our notebooks.
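A quick illustrative sketch of the basics (the `nd_example` name is only for this demo):
###Code
# NDArray basics: create an array, do element-wise math, move the result to numpy
nd_example = mx.nd.array([[1., 2.], [3., 4.]])
print nd_example.shape
print (nd_example * 2 + 1).asnumpy()
###Output
_____no_output_____
###Markdown
Build the Logistic Regression Model -- Symbolic Apache MXNet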
###Code
# Lets build the Logistic Regression Model
# Placeholders for X & y
data = mx.sym.Variable("data")
target = mx.sym.Variable("target")
fc = mx.sym.FullyConnected(data=data, num_hidden=1, name='fc')
pred = mx.sym.LogisticRegressionOutput(data=fc, label=target)
# Contstruct the module object
model = mx.mod.Module(symbol=pred,
data_names=['data'],
label_names=['target'],
context=mx.cpu(0))
# bind the data and label shapes
# you can also use train_iter.provide_data & .provide_label
model.bind(data_shapes=[mx.io.DataDesc(name='data', shape=(batch_size, dimension), layout='NC')],
label_shapes=[mx.io.DataDesc(name='target', shape=(batch_size, 1), layout='NC')])
model.init_params(initializer=mx.init.Normal(sigma=0.01))
model.init_optimizer(optimizer='sgd',
optimizer_params={'learning_rate': 1E-3, 'momentum': 0.9})
# Build the data iterator
train_iter = mx.io.NDArrayIter(train_x, train_y, batch_size,
shuffle=True, label_name='target')
# Create a custom metric
metric = mx.metric.CustomMetric(feval=lambda labels,
pred: ((pred > 0.5) == labels).mean(),
name="acc")
# Train the model
model.fit(train_data=train_iter, eval_metric=metric, num_epoch=20)
###Output
WARNING:root:Already bound, ignoring bind()
WARNING:root:optimizer already initialized, ignoring...
INFO:root:Epoch[0] Train-acc=0.839897
INFO:root:Epoch[0] Time cost=0.059
INFO:root:Epoch[1] Train-acc=0.865154
INFO:root:Epoch[1] Time cost=0.040
INFO:root:Epoch[2] Train-acc=0.865154
INFO:root:Epoch[2] Time cost=0.047
INFO:root:Epoch[3] Train-acc=0.865154
INFO:root:Epoch[3] Time cost=0.040
###Markdown
Model Evaluation
###Code
# Test data iterator
test_iter = mx.io.NDArrayIter(test_x, test_y, batch_size, shuffle=False, label_name=None)
pred_class = (fc > 0) # decision rule: predict churn when the logit output is positive
test_model = mx.mod.Module(symbol=pred_class,
data_names=['data'],
label_names=None,
context=mx.cpu(0))
test_model.bind(data_shapes=[mx.io.DataDesc(name='data', shape=(batch_size, dimension), layout='NC')],
label_shapes=None,
for_training=False,
shared_module=model)
out = test_model.predict(eval_data=test_iter)
acc = np.sum(out.asnumpy() == test_y)/ (len(test_y)*1.0)
#print(out.asnumpy())
print acc
###Output
0.831831831832
###Markdown
Confusion MatrixAlong with accuracy we'd like to visualize the evaluation with four important statistics relative to the total number of predictions: the percentage of true negatives (TN), true positives (TP), false negatives (FN), and false positives (FP). These stats are often presented in the form of a confusion matrix, as follows.
###Code
from sklearn.metrics import confusion_matrix
print confusion_matrix(test_y, out.asnumpy())
print np.sum(test_y)
###Output
[[828 2]
[166 3]]
169
###Markdown
Logistic Regression with Gluon
###Code
from mxnet import nd, autograd
from mxnet import gluon
ctx = mx.cpu()
N_CLASS = 1
# Define the model
net = gluon.nn.Sequential()
with net.name_scope():
net.add(gluon.nn.Dense(1))
# init params
net.collect_params().initialize(mx.init.Xavier(magnitude=2.24), ctx=ctx)
# optimizer
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.01})
from mxnet import nd, autograd
from mxnet import gluon
# Define Data Iterators
train_iter = mx.io.NDArrayIter(train_x, train_y, batch_size, shuffle=True, label_name=None)
test_iter = mx.io.NDArrayIter(test_x, test_y, batch_size, shuffle=False, label_name=None)
# The Network
ctx = mx.cpu()
net = gluon.nn.Dense(1)
net.collect_params().initialize(mx.init.Normal(sigma=1.), ctx=ctx)
#optimizer
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.01})
# loss function
def logistic(z):
return 1. / (1. + nd.exp(-z))
def log_loss(output, y):
yhat = logistic(output)
return - nd.nansum( y * nd.log(yhat) + (1-y) * nd.log(1-yhat))
###Output
_____no_output_____
###Markdown
During training we use the autograd module to take gradients. See this [link](https://github.com/zackchase/mxnet-the-straight-dope/blob/master/chapter01_crashcourse/autograd.ipynb) for details on how it works.
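A minimal illustrative sketch of autograd (the `x_demo`/`y_demo` names are only for this demo): attach a gradient buffer, record the computation, call backward, and read the gradient.
###Code
x_demo = nd.array([1., 2., 3.])
x_demo.attach_grad()              # allocate space for the gradient
with autograd.record():           # record the computation graph
    y_demo = x_demo * x_demo
y_demo.backward()                 # d(x^2)/dx = 2x
print x_demo.grad                 # expect [ 2. 4. 6.]
###Output
_____no_output_____
###Markdown
Training Loop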
###Code
epochs = 5
loss_sequence = []
num_examples = len(train_x)
train_iter.reset()
for e in range(epochs):
cumulative_loss = 0
for i, batch in enumerate(train_iter):
data = batch.data[0].as_in_context(ctx)
label = batch.label[0].as_in_context(ctx)
with autograd.record():
output = net(data)
loss = log_loss(output, label)
loss.backward()
trainer.step(batch_size)
cumulative_loss += nd.sum(loss).asscalar()
print("Epoch %s, loss: %s" % (e, cumulative_loss ))
loss_sequence.append(cumulative_loss)
# plot the convergence of the estimated loss function
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
plt.figure(num=None,figsize=(8, 6))
plt.plot(loss_sequence)
plt.grid(True, which="both")
plt.xlabel('epoch',fontsize=14)
plt.ylabel('average loss',fontsize=14)
###Output
_____no_output_____
###Markdown
Calculate the accuracy on the test set
###Code
num_correct = 0.0
num_total = len(test_x)
pred_out = []
test_iter.reset()
for i, batch in enumerate(test_iter):
data = batch.data[0].as_in_context(ctx)
label = batch.label[0].as_in_context(ctx)
output = net(data)
prediction = (nd.sign(output) + 1) / 2
pred_out.append(prediction.asnumpy())
num_correct += nd.sum(prediction == label)
print("Accuracy: %0.3f (%s/%s)" % (num_correct.asscalar()/num_total, num_correct.asscalar(), num_total))
from sklearn.metrics import confusion_matrix
print confusion_matrix(test_y, np.vstack(pred_out)[:len(test_y)])
print np.sum(test_y)
###Output
[[784 46]
[131 38]]
169
|