ds_movie_rec.ipynb | ###Markdown
In this notebook I am going to be experimenting with a Movie Recommendation System using the TMDB 5000 Movie Dataset. https://www.kaggle.com/ibtesama/getting-started-with-a-movie-recommendation-system?select=tmdb_5000_movies.csv

There are basically three types of recommender systems:

* Demographic Filtering | They offer generalized recommendations to every user, based on movie popularity and/or genre. The system recommends the same movies to users with similar demographic features. Since each user is different, this approach is considered to be too simple. The basic idea behind this system is that movies that are more popular and critically acclaimed will have a higher probability of being liked by the average audience.
* Content Based Filtering | They suggest similar items based on a particular item. This system uses item metadata, such as genre, director, description, actors, etc. for movies, to make these recommendations. The general idea behind these recommender systems is that if a person liked a particular item, he or she will also like an item that is similar to it.
* Collaborative Filtering | This system matches persons with similar interests and provides recommendations based on this matching. Collaborative filters do not require item metadata like their content-based counterparts.

Let's load the data
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
df_credits = pd.read_csv('../ds_movie_recommendation/datasets/tmdb_5000_credits.csv')
df_movies = pd.read_csv('../ds_movie_recommendation/datasets/tmdb_5000_movies.csv')
###Output
_____no_output_____
###Markdown
The first dataset contains the following features:

* movie_id - Unique identifier for each movie
* cast - Name of the lead and supporting actors
* crew - Name of director, editor, composer, writer etc.

The second dataset has the following features:

* budget - The budget with which the movie was made.
* genre - The genre of the movie: Action, Comedy, Thriller etc.
* homepage - A link to the homepage of the movie.
* id - This is in fact the movie_id as in the first dataset.
* keywords - The keywords or tags related to the movie.
* original_language - The language in which the movie was made.
* original_title - The title of the movie before translation or adaptation.
* overview - A brief description of the movie.
* popularity - A numeric quantity specifying the movie popularity.
* production_companies - The production house of the movie.
* production_countries - The country in which it was produced.
* release_date - The date on which it was released.
* revenue - The worldwide revenue generated by the movie.
* runtime - The running time of the movie in minutes.
* status - "Released" or "Rumored".
* tagline - Movie's tagline.
* title - Title of the movie.
* vote_average - Average rating the movie received.
* vote_count - The count of votes received.
###Code
df_movies.head()
# Let's join the two datasets (keep the movie dataset's 'title' column unchanged)
df_credits.columns = ['id', 'title', 'cast', 'crew']
df_movies = df_movies.merge(df_credits, on='id', suffixes=('', '_credits'))
df_movies.head(5)
###Output
_____no_output_____
###Markdown
Demographic Filtering - Before getting started with this:

* we need a metric to score or rate movies
* calculate the score for every movie
* sort the scores and recommend the best rated movies to the users

We could use the average rating of the movie as the score, but this won't be fair enough, since a movie with an 8.9 average rating and only 3 votes cannot be considered better than a movie with a 7.8 average rating but 40 votes. So, I'll be using IMDB's weighted rating (WR), which is given as:

$$WR = \left(\frac{v}{v+m} \cdot R\right) + \left(\frac{m}{v+m} \cdot C\right)$$

where,

* v is the number of votes for the movie,
* m is the minimum votes required to be listed in the chart,
* R is the average rating of the movie,
* C is the mean vote across the whole report
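To get a feel for how the weighting behaves, here is a small numeric sketch. The values of m and C below are illustrative placeholders, not the ones computed from the dataset further down.

```python
# Illustrative sketch: why the vote count matters in the weighted rating
# WR = v/(v+m) * R + m/(v+m) * C
C_demo = 6.0    # assumed mean rating across all movies (placeholder)
m_demo = 400    # assumed minimum-votes cutoff (placeholder)

def weighted_rating_demo(v, R, m=m_demo, C=C_demo):
    return v / (v + m) * R + m / (v + m) * C

# A movie rated 8.9 by only 3 voters vs. one rated 7.8 by 40 voters
print(weighted_rating_demo(v=3, R=8.9))   # ~6.02: pulled towards the global mean
print(weighted_rating_demo(v=40, R=7.8))  # ~6.16: ranks higher despite the lower raw rating
```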
###Code
C = df_movies['vote_average'].mean()
C
###Output
_____no_output_____
###Markdown
So, the mean rating for all the movies is approximately 6 on a scale of 10. The next step is to determine an appropriate value for m, the minimum votes required to be listed in the chart. We will use the 90th percentile as our cutoff. In other words, for a movie to feature in the charts, it must have more votes than at least 90% of the movies in the list.
###Code
m = df_movies['vote_count'].quantile(0.9)
m
###Output
_____no_output_____
###Markdown
Now, we can select the movies that qualify for the chart
###Code
q_movies = df_movies.copy().loc[df_movies['vote_count'] >= m]
q_movies.shape
###Output
_____no_output_____
###Markdown
We see that there are 481 movies which qualify to be in this list. Now, we need to calculate our metric for each qualified movie. To do this, we will define a function, weighted_rating(), and a new feature score, whose value we'll calculate by applying this function to our DataFrame of qualified movies:
###Code
def weighted_rating(x, m=m, C=C):
v = x['vote_count']
R = x['vote_average']
# Calculation based on the IMDB formula
return (v/(v+m) * R) + (m/(m+v) * C)
# Define a new feature 'score' and calculate its value with 'weighted_rating()'
q_movies['score'] = q_movies.apply(weighted_rating, axis=1)
###Output
_____no_output_____
###Markdown
Finally, let's sort the DataFrame based on the score feature and output the title, vote count, vote average and weighted rating (score) of the top 15 movies.
###Code
# Sort movies based on score calculated above
q_movies = q_movies.sort_values('score', ascending=False)
# Print the top 15 movies
q_movies[['title', 'vote_count', 'vote_average', 'score']].head(15)
###Output
_____no_output_____
###Markdown
**Yes!** We found the first (though very basic) recommender. Under the **Trending Now** tab of these systems we find movies that are very popular and they can just be obtained by sorting the dataset by the popularity column.
###Code
popular_movies = df_movies.sort_values('popularity', ascending=False)
plt.figure(figsize=(12,4))
plt.barh(popular_movies['title'].head(6),popular_movies['popularity'].head(6), align='center',color='skyblue')
plt.gca().invert_yaxis()
plt.xlabel('Popularity')
plt.title('Popular Movies')
###Output
_____no_output_____
###Markdown
Now something to keep in mind is that these demographic recommenders provide a general chart of recommended movies to all the users. They are not sensitive to the interests and tastes of a particular user. This is when we move on to a more refined system: **Content Based Filtering**.

Content Based Filtering

In this recommender system the content of the movie (overview, cast, crew, keywords, tagline etc.) is used to find its similarity with other movies. Then the movies that are most likely to be similar are recommended.

Plot description based Recommender

We will compute pairwise similarity scores for all movies based on their plot descriptions and recommend movies based on that similarity score. The plot description is given in the overview feature of our dataset. Let's take a look at the data...
###Code
df_movies['overview'].head(5)
###Output
_____no_output_____
###Markdown
We need to convert each overview into a word vector. We will compute Term Frequency - Inverse Document Frequency (TF-IDF) vectors for each overview.

Now if you are wondering what term frequency is, it is the relative frequency of a word in a document and is given as (term instances/total instances). Inverse Document Frequency is the relative count of documents containing the term and is given as log(number of documents/documents with term). The overall importance of each word to the documents in which it appears is equal to TF * IDF.

This will give you a matrix where each column represents a word in the overview vocabulary (all the words that appear in at least one document) and each row represents a movie, as before. This is done to reduce the importance of words that occur frequently in plot overviews and therefore, their significance in computing the final similarity score.

Fortunately, scikit-learn gives you a built-in TfidfVectorizer class that produces the TF-IDF matrix in a couple of lines.
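As a quick illustration of the down-weighting effect, here is a toy corpus made up for this sketch (not the movie data); words that appear in many documents get a lower IDF. `get_feature_names_out` assumes scikit-learn 1.0 or newer.

```python
# Toy TF-IDF illustration: frequent words receive lower IDF weights
from sklearn.feature_extraction.text import TfidfVectorizer

toy_docs = [
    "a hero saves the city",
    "a hero falls in love",
    "a detective hunts a killer",
]
toy_tfidf = TfidfVectorizer()
toy_tfidf.fit(toy_docs)
# 'hero' appears in two documents, 'detective' in only one, so 'detective' gets the higher IDF
print(dict(zip(toy_tfidf.get_feature_names_out(), toy_tfidf.idf_.round(2))))
```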
###Code
# Import TfIdfVectorizer from scikit-learn
from sklearn.feature_extraction.text import TfidfVectorizer
# Define a TF-IDF Vectorizer Object. Remove all english stop words such as 'the', 'a'
tfidf = TfidfVectorizer(stop_words='english')
# Replace NaN with an empty string
df_movies['overview'] = df_movies['overview'].fillna('')
# Construct the required TF-IDF matrix by fitting and transforming the data
tfidf_matrix = tfidf.fit_transform(df_movies['overview'])
# Output the shape of tfidf_matrix
tfidf_matrix.shape
###Output
_____no_output_____
###Markdown
We see that over 20,000 different words were used to describe the 4800 movies in our dataset.

With this matrix in hand, we can now compute a similarity score. There are several candidates for this, such as the Euclidean, the Pearson and the cosine similarity scores. There is no right answer to which score is the best. Different scores work well in different scenarios and it is often a good idea to experiment with different metrics.

We will be using the cosine similarity to calculate a numeric quantity that denotes the similarity between two movies. We use the cosine similarity score since it is independent of magnitude and is relatively easy and fast to calculate. Mathematically, it is defined as follows:

$$\text{cos}(x, y) = \frac{x \cdot y}{\lVert x \rVert \, \lVert y \rVert}$$

Since we have used the TF-IDF vectorizer, calculating the dot product will directly give us the cosine similarity score. Therefore, we will use sklearn's linear_kernel() instead of cosine_similarity() since it is faster.
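As an optional sanity check, the sketch below verifies this claim on a small slice of the `tfidf_matrix` computed in the previous cell: because `TfidfVectorizer` L2-normalizes each row by default, the plain dot product equals the cosine similarity.

```python
# Optional check: dot product of L2-normalized TF-IDF rows == cosine similarity
import numpy as np
from sklearn.metrics.pairwise import linear_kernel, cosine_similarity

sample = tfidf_matrix[:50]  # small slice to keep the check cheap
assert np.allclose(
    linear_kernel(sample, sample),
    cosine_similarity(sample, sample),
)
```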
###Code
# Import linear_kernel
from sklearn.metrics.pairwise import linear_kernel
# Compute the cosine similarity matrix
cosine_sim = linear_kernel(tfidf_matrix, tfidf_matrix)
###Output
_____no_output_____
###Markdown
We are going to define a function that takes in a movie title as an input and outputs a list of the 10 most similar movies. Firstly, for this, we need a reverse mapping of movie titles and DataFrame indices. In other words, we need a mechanism to identify the index of a movie in our metadata DataFrame, given its title.
###Code
# Construct a reverse map of indices and movie titles
indices = pd.Series(df_movies.index, index = df_movies['title']).drop_duplicates()
###Output
_____no_output_____
###Markdown
We are now in a good position to define our recommendation function. These are the steps we'll follow:

* Get the index of the movie given its title.
* Get the list of cosine similarity scores for that particular movie with all movies. Convert it into a list of tuples where the first element is its position and the second is the similarity score.
* Sort the aforementioned list of tuples based on the similarity scores; that is, the second element.
* Get the top 10 elements of this list. Ignore the first element as it refers to self (the movie most similar to a particular movie is the movie itself).
* Return the titles corresponding to the indices of the top elements.
###Code
# Function that takes in movie title as input and outputs most similar movies
def get_recommendations(title, cosine_sim=cosine_sim):
# Get the index of the movie that matches the title
idx = indices[title]
# Get the pairwise similarity scores of all movies with that movie
sim_scores = list(enumerate(cosine_sim[idx]))
# Sort the movies based on the similarity scores
sim_scores = sorted(sim_scores, key=lambda x: x[1], reverse=True)
# Get the scores of the 10 most similar movies
sim_scores = sim_scores[1:11]
# Get the movie indices
movie_indices = [i[0] for i in sim_scores]
# Return the top 10 most similar movies
return df_movies['original_title'].iloc[movie_indices]
get_recommendations('The Dark Knight Rises')
get_recommendations('The Avengers')
###Output
_____no_output_____
###Markdown
While our system has done a decent job of finding movies with similar plot descriptions, the quality of recommendations is not that great. "The Dark Knight Rises" returns all Batman movies, while it is more likely that the people who liked that movie are more inclined to enjoy other Christopher Nolan movies. This is something that cannot be captured by the present system.

Credits, Genres and Keywords Based Recommender

It goes without saying that the quality of our recommender would be increased with the usage of better metadata. That is exactly what we are going to do in this section. We are going to build a recommender based on the following metadata: the 3 top actors, the director, related genres and the movie plot keywords.

From the cast, crew and keywords features, we need to extract the three most important actors, the director and the keywords associated with that movie. Right now, our data is present in the form of "stringified" lists; we need to convert it into a safe and usable structure.
###Code
# The credits were already merged above; merge here only if that cell was skipped
if 'cast' not in df_movies.columns:
    df_movies = df_movies.merge(df_credits, on='id', suffixes=('', '_credits'))
# Parse the stringified features into their corresponding python objects
from ast import literal_eval
features = ['cast', 'crew', 'keywords', 'genres']
for feature in features:
df_movies[feature] = df_movies[feature].apply(literal_eval)
###Output
_____no_output_____
###Markdown
Next, we'll write functions that will help us to extract the required information from each feature.
###Code
# Get the director's name from the crew feature. If director is not listed, return NaN
def get_director(x):
for i in x:
if i['job'] == 'Director':
return i['name']
return np.nan
# Return the top 3 elements of the list, or the entire list if it has fewer than 3.
def get_list(x):
if isinstance(x, list):
names = [i['name'] for i in x]
#Check if more than 3 elements exist. If yes, return only first three. If no, return entire list.
if len(names) > 3:
names = names[:3]
return names
#Return empty list in case of missing/malformed data
return []
# Define new director, cast, genres and keywords features that are in a suitable form.
df_movies['director'] = df_movies['crew'].apply(get_director)
features = ['cast', 'keywords', 'genres']
for feature in features:
df_movies[feature] = df_movies[feature].apply(get_list)
# Print the new features of the first 3 films
df_movies[['original_title','cast', 'director', 'keywords', 'genres']].head(3)
###Output
_____no_output_____
###Markdown
The next step would be to convert the names and keyword instances into lowercase and strip all the spaces between them. This is done so that our vectorizer doesn't count the Johnny of "Johnny Depp" and "Johnny Galecki" as the same.
###Code
# Function to convert all string to lower case and strip names of spaces
def clean_data(x):
if isinstance(x, list):
return [str.lower(i.replace(" ", "")) for i in x]
else:
# Check if director exists. If not, return empty string
if isinstance(x, str):
return str.lower(x.replace(" ", ""))
else:
return ''
# Apply function clean_data to your features
features = ['cast', 'keywords', 'director', 'genres']
for feature in features:
df_movies[feature] = df_movies[feature].apply(clean_data)
###Output
_____no_output_____
###Markdown
We are now in a position to create our "metadata soup", which is a string that contains all the metadata that we want to feed to our vectorizer (namely actors, director and keywords).
###Code
def create_soup(x):
return ' '.join(x['keywords']) + ' ' + ' '.join(x['cast']) + ' ' + x['director'] + ' ' + ' '.join(x['genres'])
df_movies['soup'] = df_movies.apply(create_soup, axis=1)
###Output
_____no_output_____
###Markdown
The next steps are the same as what we did with our plot description based recommender. One important difference is that we use CountVectorizer() instead of TF-IDF. This is because we do not want to down-weight the presence of an actor/director if he or she has acted or directed in relatively more movies. Down-weighting them doesn't make much intuitive sense here.
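A toy comparison of the two vectorizers may help (the soups below are made up for the sketch, not the real data): TF-IDF shrinks the weight of a name that occurs in many soups, while CountVectorizer keeps plain counts.

```python
# Toy comparison: CountVectorizer keeps raw counts, TF-IDF down-weights frequent names
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

toy_soups = ["action nolan bale", "drama nolan hathaway", "comedy apatow rogen"]
print(CountVectorizer().fit_transform(toy_soups).toarray())
# 'nolan' occurs in two soups, so its TF-IDF weight is smaller than that of the rarer names
print(TfidfVectorizer().fit_transform(toy_soups).toarray().round(2))
```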
###Code
# Import CountVectorizer and create the count matrix
from sklearn.feature_extraction.text import CountVectorizer
count = CountVectorizer(stop_words='english')
count_matrix = count.fit_transform(df_movies['soup'])
# Compute the Cosine Similarity matrix based on the count_matrix
from sklearn.metrics.pairwise import cosine_similarity
cosine_sim2 = cosine_similarity(count_matrix, count_matrix)
df_movies = df_movies.reset_index()
indices = pd.Series(df_movies.index, index=df_movies['original_title'])
get_recommendations('The Dark Knight Rises', cosine_sim2)
get_recommendations('The Godfather', cosine_sim2)
###Output
_____no_output_____
###Markdown
We see that our recommender has been successful in capturing more information due to more metadata and has given us (arguably) better recommendations. It is more likely that Marvel or DC Comics fans will like the movies of the same production house. Therefore, to our features above we can add production_company. We can also increase the weight of the director by adding the feature multiple times in the soup.

Collaborative Filtering

Our content based engine suffers from some severe limitations. It is only capable of suggesting movies which are close to a certain movie. That is, it is not capable of capturing tastes and providing recommendations across genres. Also, the engine that we built is not really personal in that it doesn't capture the personal tastes and biases of a user. Anyone querying our engine for recommendations based on a movie will receive the same recommendations for that movie, regardless of who she/he is.

Therefore, in this section, we will use a technique called Collaborative Filtering to make recommendations to movie watchers. It is basically of two types:

* **User based filtering** - These systems recommend products to a user that similar users have liked. For measuring the **similarity between two users we can either use Pearson correlation or cosine similarity**.
* **Item based collaborative filtering** - Instead of measuring the similarity between users, the item-based CF **recommends items based on their similarity with the items that the target user rated**. Likewise, the similarity can be computed with Pearson correlation or cosine similarity. The major difference is that, with item-based collaborative filtering, we fill in the blanks vertically, as opposed to the horizontal manner that user-based CF does.

Singular Value Decomposition

One way to handle the scalability and sparsity issues created by CF is to leverage a latent factor model to capture the similarity between users and items. Essentially, we want to turn the recommendation problem into an optimization problem. We can view it as how good we are at predicting the rating for items given a user. One common metric is Root Mean Square Error (RMSE). The lower the RMSE, the better the performance.

Now, talking about latent factors, you might be wondering what they are. A latent factor is a broad idea which describes a property or concept that a user or an item has. For instance, for music, a latent factor can refer to the genre that the music belongs to. SVD decreases the dimension of the utility matrix by extracting its latent factors. Essentially, we map each user and each item into a latent space with dimension r. Therefore, it helps us better understand the relationship between users and items as they become directly comparable.

Since the dataset we used before did not have a userId (which is necessary for collaborative filtering), let's load another dataset. We'll be using the **Surprise library to implement SVD**.
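To make the item-based idea concrete, here is a tiny sketch with a made-up utility matrix (users as rows, items as columns); items rated similarly by the same users end up with high cosine similarity.

```python
# Toy item-item similarity on a made-up utility matrix (0 = no rating)
import pandas as pd
from sklearn.metrics.pairwise import cosine_similarity

toy_ratings = pd.DataFrame(
    {"Movie A": [5, 4, 0], "Movie B": [4, 5, 1], "Movie C": [1, 0, 5]},
    index=["user1", "user2", "user3"],
)
item_sim = cosine_similarity(toy_ratings.T)  # transpose so items become rows
print(pd.DataFrame(item_sim, index=toy_ratings.columns, columns=toy_ratings.columns).round(2))
```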
###Code
from surprise import Reader, Dataset, SVD
from surprise.model_selection import cross_validate
reader = Reader()
ratings = pd.read_csv('../ds_movie_recommendation/datasets/ratings_small.csv')
ratings.head()
data = Dataset.load_from_df(ratings[['userId', 'movieId', 'rating']], reader)
svd = SVD()
cross_validate(svd, data, measures=['RMSE', 'MAE'], cv=10, verbose=True)
trainset = data.build_full_trainset()
svd.fit(trainset)
ratings[ratings['userId'] == 1]
svd.predict(1, 302, 3)
###Output
_____no_output_____ |
week-09/seminar_done.ipynb | ###Markdown
Seminar

Hi! Today we are going to learn a new tokenization algorithm, seq2seq metrics and a machine translation task. We will also get acquainted with the attention mechanism.

BPE. YouTokenToMe

Previously we have discussed a text preprocessing pipeline. We used `WordPunctTokenizer`, which tokenizes text into words and punctuation. But this tokenization algorithm isn't perfect. Some languages have many word forms. Many languages modify words with prefixes and suffixes. We want to keep the morphological information in the text, but storing every possible word form isn't memory-efficient and isn't easy to train on. However, we can create a tokenization mechanism that splits every word by subword morphology. And there is an unsupervised algorithm to do it. It's called Byte Pair Encoding. How it works:

1. We split texts into characters
2. Count bigrams on characters (pairs of adjacent symbols)
3. Merge the most popular pair into a new symbol
4. Continue until we reach the given vocabulary size

It's an easy algorithm, and we have several implementations:

- SentencePiece
- fastBPE
- Tokenizers by 🤗
- YouTokenToMe

The fastest one is YouTokenToMe by the VK Team. Let's look at how it works:
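To make the merge step concrete, here is a minimal pure-Python sketch of a single BPE iteration on a toy corpus. This is only an illustration of the idea, not how YouTokenToMe is implemented internally.

```python
# Minimal BPE sketch: count adjacent symbol pairs and merge the most frequent one
from collections import Counter

corpus = [list("lower"), list("lowest"), list("newer")]  # words split into characters

def most_frequent_pair(words):
    pairs = Counter()
    for word in words:
        pairs.update(zip(word[:-1], word[1:]))
    return pairs.most_common(1)[0][0]

def merge_pair(words, pair):
    merged = []
    for word in words:
        out, i = [], 0
        while i < len(word):
            if i + 1 < len(word) and (word[i], word[i + 1]) == pair:
                out.append(word[i] + word[i + 1])
                i += 2
            else:
                out.append(word[i])
                i += 1
        merged.append(out)
    return merged

pair = most_frequent_pair(corpus)   # ('w', 'e') is the most frequent pair in this toy corpus
corpus = merge_pair(corpus, pair)   # in real BPE this repeats until the vocabulary size is reached
print(pair, corpus)
```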
###Code
from typing import List, Tuple

import torch
import youtokentome as yttm
from torchtext.utils import download_from_url, extract_archive
from torchtext.vocab import Vocab
from torchtext.experimental.datasets import WMT14

# Device used for training/inference later in the notebook
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
###Output
_____no_output_____
###Markdown
Download the WMT14 dataset. It has paired texts in English and German, processed by Google Brain.
###Code
wmt_url = "https://drive.google.com/uc?export=download&id=0B_bZck-ksdkpM25jRUN2X2UxMm8"
dataset_tar = download_from_url(wmt_url, root="wmt14")
extracted = extract_archive(dataset_tar)
###Output
_____no_output_____
###Markdown
We will use the `newstest2016` data as the train part of the training pipeline. Now we need to train BPE tokenizers for the English and German languages. We take a vocabulary size of 10000 tokens.
###Code
train_data_en_path = "wmt14/newstest2016.en"
tokenizer_en_path = "en.tok"
yttm.BPE.train(
data=train_data_en_path, vocab_size=10000, model=tokenizer_en_path
)
train_data_de_path = "wmt14/newstest2016.de"
tokenizer_de_path = "de.tok"
yttm.BPE.train(
data=train_data_de_path, vocab_size=10000, model=tokenizer_de_path
)
###Output
_____no_output_____
###Markdown
The training procedure in `YTTM` runs in the background. We need to load the tokenizers to work with them:
###Code
tokenizer_en = yttm.BPE(model=tokenizer_en_path)
tokenizer_de = yttm.BPE(model=tokenizer_de_path)
###Output
_____no_output_____
###Markdown
Our text example will be:
###Code
test_text = "Tinkoff loves VK!"
###Output
_____no_output_____
###Markdown
Try to get tokens, ids, add special tokens:
###Code
tokenizer_en.encode([test_text], output_type=yttm.OutputType.SUBWORD)
tokenizer_en.encode([test_text], output_type=yttm.OutputType.ID)
tokenizer_en.encode(
[test_text], output_type=yttm.OutputType.SUBWORD, bos=True, eos=True
)
tokenizer_en.encode(
[test_text], output_type=yttm.OutputType.ID, bos=True, eos=True
)
###Output
_____no_output_____
###Markdown
To join the `YTTM` tokenizer and the `TorchText` dataset abstraction we need to code a couple of functions:
###Code
# Code them
def tokenize_de(text: str) -> List[str]:
return tokenizer_de.encode(
[text], output_type=yttm.OutputType.SUBWORD, bos=True, eos=True
)[0]
def tokenize_en(text: str) -> List[str]:
return tokenizer_en.encode(
[text], output_type=yttm.OutputType.SUBWORD, bos=True, eos=True
)[0]
(train_dataset, valid_dataset, test_dataset) = WMT14(
train_filenames=("newstest2016.en", "newstest2016.de"),
valid_filenames=("newstest2010.en", "newstest2010.de"),
test_filenames=("newstest2009.en", "newstest2009.de"),
tokenizer=(tokenize_en, tokenize_de),
)
###Output
_____no_output_____
###Markdown
Check how `dataset` works:
###Code
train_dataset[0]
tokens = [train_dataset.get_vocab()[0].itos[i] for i in train_dataset[0][0]]
"".join(tokens)
tokens = [train_dataset.get_vocab()[1].itos[i] for i in train_dataset[0][1]]
"".join(tokens)
###Output
_____no_output_____
###Markdown
Let's code a special function to decode input ids into human-readable text:
###Code
# code function to decode input ids to pretty output
def decoding(input_ids: torch.Tensor, vocab: Vocab) -> str:
result_text = ""
for input_id in input_ids:
if input_id == vocab.stoi["<EOS>"]:
break
elif input_id != vocab.stoi["<BOS>"]:
result_text += vocab.itos[input_id]
return "".join(t if t != "▁" else " " for t in result_text )
decoding(train_dataset[0][0], train_dataset.get_vocab()[0])
decoding(train_dataset[0][1], train_dataset.get_vocab()[1])
###Output
_____no_output_____
###Markdown
We need to code the padding function:
###Code
PAD_ID_src = train_dataset.get_vocab()[0].stoi["<PAD>"]
PAD_ID_trg = train_dataset.get_vocab()[1].stoi["<PAD>"]
max_length = 64 # 128
def collate_fn(batch: Tuple[torch.Tensor]) -> Tuple[torch.Tensor]:
max_len_src = min(max(b[0].size(0) for b in batch), max_length)
max_len_trg = min(max(b[1].size(0) for b in batch), max_length)
all_src = torch.zeros(max_len_src, len(batch)) + PAD_ID_src
all_trg = torch.zeros(max_len_trg, len(batch)) + PAD_ID_trg
for num, (src, trg) in enumerate(batch):
all_src[: src.size(0), num] = src[:max_length]
all_trg[: trg.size(0), num] = trg[:max_length]
return all_src.type(torch.LongTensor), all_trg.type(torch.LongTensor)
###Output
_____no_output_____
###Markdown
And a bucketing sampler! It's a special sampler that reduces padding in batches. We need to sort our texts by length in tokens and form batches using this order. We'll implement this with `SortedSampler` and `SubsetRandomSampler`:
###Code
from typing import Any, Callable, Iterable
from torch.utils.data import Dataset
from torch.utils.data.sampler import Sampler
class SortedSampler(Sampler):
def __init__(self, data: Dataset, sort_key: Callable[[Any], Any] = lambda x: x):
super().__init__(data)
self.data = data
self.sort_key = sort_key
zip_ = [(i, self.sort_key(row)) for i, row in enumerate(self.data)]
zip_ = sorted(zip_, key=lambda r: r[1])
self.sorted_indexes = [item[0] for item in zip_]
def __iter__(self) -> Iterable[int]:
return iter(self.sorted_indexes)
def __len__(self) -> int:
return len(self.data)
###Output
_____no_output_____
###Markdown
`BucketBatchSampler`'s algorithm is this:

- Create buckets, i.e. subsets of the data in random order.
- Sort the data in each bucket.
- Generate batches by getting items from the buckets.
###Code
import math
from typing import Generator, List
from torch.utils.data.sampler import BatchSampler
from torch.utils.data.sampler import SubsetRandomSampler
class BucketBatchSampler(BatchSampler):
def __init__(
self,
sampler: Sampler,
batch_size: int,
drop_last: bool,
sort_key: Callable[[Any], Any] = lambda x: x,
bucket_size_multiplier: int = 100
):
super().__init__(sampler, batch_size, drop_last)
self.sort_key = sort_key
self.bucket_sampler = BatchSampler(
sampler,
min(batch_size * bucket_size_multiplier, len(sampler)),
False
)
def __iter__(self) -> Generator[List[int], None, None]:
for bucket in self.bucket_sampler:
sorted_sampler = SortedSampler(bucket, self.sort_key)
for batch in SubsetRandomSampler(
list(
BatchSampler(
sorted_sampler,
self.batch_size,
self.drop_last
)
)
):
yield [bucket[i] for i in batch]
def __len__(self):
if self.drop_last:
return len(self.sampler) // self.batch_size
else:
return math.ceil(len(self.sampler) / self.batch_size)
###Output
_____no_output_____
###Markdown
And now we just need to create data loaders:
###Code
from torch.utils.data import DataLoader
from torch.utils.data.sampler import RandomSampler
batch_size = 128
train_sampler = RandomSampler(train_dataset)
sort_key = lambda row: len(train_dataset[row][0])
train_batch_sampler = BucketBatchSampler(
train_sampler,
batch_size=batch_size,
drop_last=True,
sort_key=sort_key
)
train_loader = DataLoader(
train_dataset,
batch_sampler=train_batch_sampler,
collate_fn=collate_fn
)
valid_loader = DataLoader(
valid_dataset,
batch_size=batch_size,
shuffle=False,
drop_last=False,
collate_fn=collate_fn
)
test_loader = DataLoader(
test_dataset,
batch_size=batch_size,
shuffle=False,
drop_last=False,
collate_fn=collate_fn
)
###Output
_____no_output_____
###Markdown
BLEU

In this section we will discuss metrics for Seq2Seq models. There are several metrics: BLEU, ROUGE, METEOR, WER, etc. They are used to understand how well a model solves a task that requires generating text matching a target text. Let's look at BLEU.

BLEU stands for "BIlingual Evaluation Understudy". To compute it, we need n-grams for the predicted text (hypothesis) and the target text (references) and compare them. A simple approximation of BLEU is the fraction of n-grams from the predicted text that appear in the target text, averaged over n-gram orders. Let's look at the example:
###Code
test_target = "Die Prager Börse stürzt gegen Geschäftsschluss ins Minus"
test_predicted = "Das Prager Börse stürzt gegest Geschäftschlus uns Minus"
target_tokens = test_target.split() # Simple way to get tokens
unigrams_target = [(t_0,) for t_0 in target_tokens]
bigrams_target = [
(t_0, t_1) for t_0, t_1 in zip(target_tokens[:-1], target_tokens[1:])
]
trigrams_target = [
(t_0, t_1, t_2)
for t_0, t_1, t_2 in zip(
target_tokens[:-2], target_tokens[1:-1], target_tokens[2:]
)
]
predicted_tokens = test_predicted.split()
# find ngrams for predicted text
unigrams_predicted = [(t_0,) for t_0 in predicted_tokens]
bigrams_predicted = [
(t_0, t_1) for t_0, t_1 in zip(predicted_tokens[:-1], predicted_tokens[1:])
]
trigrams_predicted = [
(t_0, t_1, t_2)
for t_0, t_1, t_2 in zip(
predicted_tokens[:-2], predicted_tokens[1:-1], predicted_tokens[2:]
)
]
###Output
_____no_output_____
###Markdown
Count number of n-grams appeard in target text:
###Code
count_unigrams = sum(
uni in unigrams_target for uni in unigrams_predicted
) / len(unigrams_predicted)
# Count statistic for bigrams and trigrams
count_bigrams = sum(
bi in bigrams_target for bi in bigrams_predicted
) / len(bigrams_predicted)
count_trigrams = sum(
tri in trigrams_target for tri in trigrams_predicted
) / len(trigrams_predicted)
print(f"Uni: {count_unigrams}\nBi: {count_bigrams}\nTri: {count_trigrams}")
bleu = (count_unigrams + count_bigrams + count_trigrams) / 3
print(f"Our BLEU: {bleu}")
###Output
_____no_output_____
###Markdown
We don't need to implement BLEU score from scratch. In `nltk` we have algorithms to calculate it:
###Code
from nltk.translate.bleu_score import corpus_bleu
def compute_bleu(predicted, target):
    # corpus_bleu expects tokenized references/hypotheses, so split sentences into words
    return corpus_bleu(
        [[ref.split()] for ref in target],
        [hyp.split() for hyp in predicted],
    )
compute_bleu([test_predicted], [test_target])
###Output
_____no_output_____
###Markdown
Seq2Seq. Translation

Translation is one of the tasks where we need Seq2Seq models, which consist of an Encoder and a Decoder. The encoder should return an informative vector that represents the input text. The decoder should generate the translation based on that vector. We will use a Recurrent Neural Network with an additional component called attention.

RNN + Attention

There are two famous attention formulations in NLP. One of them is [by Luong](https://arxiv.org/pdf/1508.04025.pdf). Another one is [by Bahdanau](https://arxiv.org/pdf/1409.0473.pdf). We will implement an approximation of Luong attention; in its "general" form the attention weights are computed as

$$\text{score}(h_t, \bar{h}_s) = h_t^\top W \bar{h}_s, \qquad \alpha_{ts} = \mathrm{softmax}_s\big(\text{score}(h_t, \bar{h}_s)\big),$$

where $h_t$ is the current decoder hidden state and $\bar{h}_s$ are the encoder outputs. Let's code this.
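A tiny numeric sketch (made-up values) of what the attention weights do: the scores are turned into a probability distribution over encoder time steps and used to mix the encoder outputs into a context vector.

```python
# Numeric attention sketch: softmax over scores, then a weighted sum of encoder outputs
import torch

scores = torch.tensor([2.0, 0.5, 0.1])          # one score per encoder time step
weights = torch.softmax(scores, dim=0)          # sums to 1; the highest score gets most weight
encoder_outputs = torch.tensor([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
context = weights @ encoder_outputs             # context vector used by the decoder
print(weights, context)
```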
###Code
import torch.nn as nn
class Encoder(nn.Module):
def __init__(
self,
vocab_size: int,
emb_size: int,
hidden_size: int,
num_layers: int,
dropout: float,
):
super().__init__()
self.vocab_size = vocab_size
self.emb_size = emb_size
self.hidden_size = hidden_size
self.num_layers = num_layers
self.embedding = nn.Embedding(vocab_size, emb_size)
self.rnn = nn.LSTM(
emb_size, hidden_size, num_layers=num_layers, dropout=dropout
)
self.dropout = nn.Dropout(dropout)
def forward(
self, src: torch.Tensor
) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
embedded = self.embedding(src)
embedded = self.dropout(embedded)
outputs, (hidden, cell) = self.rnn(embedded)
return outputs, hidden, cell
class Attention(nn.Module):
def __init__(self, hidden_size: int):
super().__init__()
self.hidden_size = hidden_size
# Instead from one matrix we will use two linear modules
self.enc_linear = nn.Linear(hidden_size, hidden_size)
self.dec_linear = nn.Linear(hidden_size, hidden_size)
def forward(
self, last_hidden: torch.Tensor, encoder_outputs: torch.Tensor
) -> torch.Tensor:
bs = last_hidden.size(1)
# Prepare our examples
encoder_outputs = self.enc_linear(encoder_outputs).reshape(
bs, -1, self.hidden_size
)
last_hidden = self.dec_linear(last_hidden).reshape(
bs, self.hidden_size, 1
)
# Compute logits by batch matrix multiplication
logits = torch.bmm(encoder_outputs, last_hidden)
attn = torch.softmax(logits, 1).reshape(-1, bs, 1)
return attn
class DecoderAttn(nn.Module):
def __init__(
self,
vocab_size: int,
emb_size: int,
hidden_size: int,
num_layers: int,
attention: Attention,
dropout: float,
):
super().__init__()
self.emb_size = emb_size
self.hidden_size = hidden_size
self.vocab_size = vocab_size
self.num_layers = num_layers
self.attn = attention
self.embedding = nn.Embedding(vocab_size, emb_size)
self.rnn = nn.LSTM(
emb_size, hidden_size, num_layers=num_layers, dropout=dropout
)
self.out = nn.Linear(hidden_size, vocab_size)
self.dropout = nn.Dropout(dropout)
def forward(
self,
input_: torch.Tensor,
hidden: torch.Tensor, # hidden_state from t-1
cell: torch.Tensor,
encoder_output: torch.Tensor,
) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
embedded = self.embedding(input_)
embedded = self.dropout(embedded)
attn = self.attn(hidden[-1:], encoder_output)
# Generating new cell state by attention and encoder output
new_cell = (encoder_output * attn).sum(0)
cell[-1] = new_cell
output, (hidden, cell) = self.rnn(embedded, (hidden, cell))
prediction = self.out(output)
return prediction, hidden, cell
###Output
_____no_output_____
###Markdown
One important point about training Seq2Seq models is feeding target tokens into the decoder during the training loop (teacher forcing). While our model is not good enough yet, it generates "trash" tokens that carry no useful information for the next generation step. That's why we sometimes feed the decoder the ground-truth target tokens. However, always doing so isn't good either, because at inference time the decoder has to generate text from its own previous tokens. For this purpose we train the decoder with a randomly decided token source (target or its own prediction) at each step.
###Code
from random import random
BOS_IDX = train_dataset.get_vocab()[1].stoi["<BOS>"]
class Seq2Seq(nn.Module):
def __init__(self, encoder: Encoder, decoder: DecoderAttn):
super().__init__()
self.encoder = encoder
self.decoder = decoder
self.max_len = max_length
def forward(
self,
src: torch.Tensor,
trg: torch.Tensor,
teacher_forcing_ratio: float = 0.1,
) -> torch.Tensor:
batch_size = src.shape[1]
max_len = trg.shape[0]
trg_vocab_size = self.decoder.vocab_size
outputs = torch.zeros(max_len, batch_size, trg_vocab_size).to(device)
enc_out, hidden, cell = self.encoder(src)
input_ = torch.zeros(1, batch_size) + BOS_IDX
input_ = input_.type(torch.LongTensor).to(device)
for t in range(1, max_len):
output, hidden, cell = self.decoder(input_, hidden, cell, enc_out)
outputs[t] = output
teacher_force = random() < teacher_forcing_ratio
top1 = output.max(2)[1]
input_ = (trg[t] if teacher_force else top1).reshape(1, -1)
return outputs[1:]
def translate(self, src: torch.Tensor) -> torch.Tensor:
batch_size = src.shape[1]
outputs = torch.zeros(self.max_len, batch_size).to(device)
enc_out, hidden, cell = self.encoder(src)
input_ = torch.zeros(1, batch_size) + BOS_IDX
input_ = input_.type(torch.LongTensor).to(device)
for t in range(1, self.max_len):
output, hidden, cell = self.decoder(input_, hidden, cell, enc_out)
top1 = output.max(2)[1].reshape(-1)
outputs[t] = top1
input_ = top1.reshape(1, -1)
return outputs[1:]
###Output
_____no_output_____
###Markdown
Create a model and a special runner for Seq2Seq models, and train the model!
###Code
source_vocab, target_vocab = train_dataset.get_vocab()
input_size = len(source_vocab)
output_size = len(target_vocab)
src_emb_size = tgt_emb_size = 100
hidden_size = 300
num_layers = 2
dropout_p = 0.1
enc = Encoder(input_size, src_emb_size, hidden_size, num_layers, dropout_p)
attention = Attention(hidden_size)
dec = DecoderAttn(
output_size, tgt_emb_size, hidden_size, num_layers, attention, dropout_p
)
model = Seq2Seq(enc, dec).to(device)
###Output
_____no_output_____
###Markdown
To train the model, we will compare the generated tokens with the target for each source. For the comparison we use `CrossEntropyLoss`!
###Code
from catalyst.dl import Runner
class Seq2SeqRunner(Runner):
def __init__(
self, source_vocab: Vocab, target_vocab: Vocab, *args, **kwargs
):
super().__init__(*args, **kwargs)
self.source_vocab = source_vocab
self.target_vocab = target_vocab
def predict_batch(self, batch) -> torch.Tensor:
source, target = batch
predictions = self.model.translate(source).type(torch.LongTensor)
translations = [
decoding(sentence, self.target_vocab)
for sentence in predictions.t()
]
return translations
def handle_batch(self, batch) -> None:
source, target = batch
self.batch = {}
if self.is_valid_loader:
            target_decoded = [
                decoding(sentence, self.target_vocab)
                for sentence in target.t()
            ]
            predicted = self.predict_batch(batch)
self.batch["predicted"] = predicted
self.batch["target_decoded"] = target_decoded
logits = self.model(source, target)
target = target[1:].reshape(-1)
logits = logits.reshape(target.size(0), -1)
self.batch.update(
**{"source": source, "target": target, "logits": logits}
)
###Output
_____no_output_____
###Markdown
To calculate the BLEU score in the training loop, we need to code a Callback for this.
###Code
import numpy as np
from catalyst.dl import Callback, CallbackOrder
class BLEUCallback(Callback):
def __init__(self):
super().__init__(CallbackOrder.Metric)
def on_batch_end(self, runner: Runner) -> None:
if runner.is_valid_loader:
predicted = runner.batch["predicted"]
target = runner.batch["target_decoded"]
bleu = compute_bleu(predicted, target)
runner.batch_metrics.update(**{"bleu": bleu})
from catalyst.contrib.nn import RAdam
from torch.nn.utils import clip_grad_norm_
from catalyst.dl import CriterionCallback, OptimizerCallback
lr = 1e-2
optimizer = RAdam(model.parameters(), lr=lr)
criterion = nn.CrossEntropyLoss(ignore_index=PAD_ID_trg)
callbacks = [
CriterionCallback("logits", "target", "loss"),
OptimizerCallback(
"loss", grad_clip_fn=clip_grad_norm_, grad_clip_params={"max_norm": 1}
),
BLEUCallback(),
]
loaders = {"train": train_loader, "valid": valid_loader}
runner = Seq2SeqRunner(source_vocab=source_vocab, target_vocab=target_vocab)
from datetime import datetime
from pathlib import Path
logdir = Path("logs") / datetime.now().strftime("%Y%m%d-%H%M%S")
runner.train(
model=model,
optimizer=optimizer,
criterion=criterion,
loaders=loaders,
callbacks=callbacks,
num_epochs=5,
verbose=True,
logdir=logdir,
)
###Output
_____no_output_____
###Markdown
Our model, trained on a small dataset, is not well prepared to be a good translator. Anyway, let's test the code and the model.
###Code
test = "A cat eats a fish"
test_input_ids = train_dataset.transforms[0](test)
test_input_ids = test_input_ids.reshape(-1, 1).to(device)
prediction = model.translate(test_input_ids).to("cpu").type(torch.LongTensor)
decoding(prediction, target_vocab)
###Output
_____no_output_____ |
notebooks/Literature_data.ipynb | ###Markdown
Define search terms
###Code
from search_terms import primary_terms, secondary_terms, descriptive_terms
primary_terms
secondary_terms
descriptive_terms
###Output
_____no_output_____
###Markdown
Perform search in PubMed
###Code
from easy_entrez import EntrezAPI
from config import ENTREZ_API_NAME, ENTREZ_API_EMAIL
entrez_api = EntrezAPI(
tool=ENTREZ_API_NAME,
email=ENTREZ_API_EMAIL,
minimal_interval=2
)
search_terms = {
**primary_terms,
**secondary_terms,
**descriptive_terms
}
from tqdm import tqdm
%%cache search_results pubmed_results
pubmed_results = {}
MAX_RESULTS = 10_000
for term in tqdm(search_terms):
result = entrez_api.search(
search_terms[term],
database='pubmed',
max_results=MAX_RESULTS
)
esearch = result.data['esearchresult']
count = int(esearch['count'])
assert count >= 0
assert count < MAX_RESULTS
pubmed_results[term] = result
all_papers = sorted(set(sum(
[
result.data['esearchresult']['idlist']
for result in pubmed_results.values()
],
[]
)))
len(all_papers)
%%cache pubmed_documents_data documents
documents_by_batch = (
entrez_api
.in_batches_of(size=100)
.fetch(all_papers, max_results=10_000, return_type='xml')
)
documents = sum(
(
list(result.data)
for result in documents_by_batch.values()
),
[]
)
from helpers.utils import xml_element_to_json
documents = [xml_element_to_json(document) for document in list(documents)]
assert len(documents) == len(all_papers)
###Output
_____no_output_____
###Markdown
Create a data frame with PubMed documents and covariates
###Code
from pandas import Series, DataFrame, read_csv, to_datetime
# create a frame with 0 columns and UID of each paper on the index
literature = Series(all_papers).to_frame('uid').set_index('uid')
# add columns for the occurrences of the terms
for term, result in pubmed_results.items():
literature[term] = False
for uid in result.data['esearchresult']['idlist']:
literature.loc[uid, term] = True
literature
###Output
_____no_output_____
###Markdown
Parse the PubMed metadata of articles

Reference:
- Medline: https://www.nlm.nih.gov/bsd/mms/medlineelements.html
- Publication types: https://www.nlm.nih.gov/mesh/pubtypes.html (fun fact: includes "Wit and Humor" type)
###Code
from warnings import warn
from helpers.parse_pubmed import listify, extract_abstract, parse_date, parse_doi
missing_abstract = []
authors = []
affiliations = []
publication_types = []
for document in documents:
kind = None
date = None
doi = None
if 'PubmedBookArticle' in document:
kind = 'article in book'
book_document = document['PubmedBookArticle']['BookDocument']
pmid = book_document['PMID']['#text']
title = book_document['ArticleTitle']['#text']
abstract = extract_abstract(book_document)
# 'PublicationType' and 'KeywordList' ignored for book_document as only 2 matches (compared to 3k)
if 'PubmedArticle' in document:
pubmed_article = document['PubmedArticle']
assert not kind
kind = 'article'
medline_citation = pubmed_article['MedlineCitation']
pmid = medline_citation['PMID']['#text']
article = medline_citation['Article']
literature.loc[pmid, 'journal'] = article['Journal']['Title']
if 'ELocationID' in article:
doi = parse_doi(article['ELocationID'])
issue = article['Journal']['JournalIssue']
if 'PubDate' in issue:
date = parse_date(issue['PubDate'])
for author in listify(article['AuthorList']['Author'] if 'AuthorList' in article else None):
author_id = len(authors)
authors.append(
{
'ID': author_id,
'ForeName': author.get('ForeName'),
'LastName': author.get('LastName'),
'CollectiveName': author.get('CollectiveName'),
'PMID': pmid
}
)
for affiliation in listify(author.get('AffiliationInfo')):
affiliations.append({
'Affiliation': affiliation['Affiliation'],
'PMID': pmid,
'AuthorID': author_id
})
for publication_type in listify(article['PublicationTypeList']['PublicationType'] if 'PublicationTypeList' in article else None):
type_name = publication_type['#text']
publication_types.append(type_name)
literature.loc[pmid, f'Is {type_name}'] = True
try:
literature.loc[pmid, 'journal_issn'] = article['Journal']['ISSN']['#text']
except KeyError:
warn(f'{article["Journal"]} had no ISSN assigned')
if 'ArticleTitle' in article:
title = article['ArticleTitle']
if isinstance(title, dict):
title = title['#text']
abstract = extract_abstract(article)
if not abstract:
missing_abstract.append(pmid)
assert kind
literature.loc[pmid, 'kind'] = kind
literature.loc[pmid, 'doi'] = doi
literature.loc[pmid, 'title'] = title
literature.loc[pmid, 'abstract'] = abstract
literature.loc[pmid, 'date'] = date
publication_types = Series(publication_types)
publication_types.sorted_value_counts()
affiliations = DataFrame(affiliations)
authors = DataFrame(authors)
authors['JointName'] = authors['ForeName'] + ' ' + authors['LastName']
literature['has_doi'] = ~literature.doi.isnull()
literature.date = to_datetime(literature.date)
literature['year'] = literature.date.dt.year
terms = list(pubmed_results.keys())
def which_term(term):
term = list(term[term].index)
if len(term) == 1:
return term[0]
else:
return 'multiple'
literature['term'] = literature[terms].apply(which_term, axis=1)
from pandas import Categorical
literature['term'] = Categorical(literature['term'], ordered=True, categories=list(literature['term'].sorted_value_counts().index))
literature['has_url_in_abstract'] = literature['abstract'].str.contains('(?:https?://|www.)')
###Output
_____no_output_____
###Markdown
Add PubmedCentral mapping
###Code
%%cache pubmed_central_metadata pmc_metadata
# approx 2GB in RAM, best to subset early
pmc_metadata_all = read_csv('data/PMC-ids.csv.gz')
pmid_of_interest = set(literature.index)
pmc_metadata = pmc_metadata_all[pmc_metadata_all.PMID.isin(pmid_of_interest)]
del pmc_metadata_all
len(pmc_metadata)
pmc_metadata.head()
literature['PMC'] = pmc_metadata.set_index('PMID').reindex(literature.index.astype(float))['PMCID']
assert len(pmc_metadata) == sum(~literature['PMC'].isnull())
literature['has_pmc'] = (~literature['PMC'].isnull())
###Output
_____no_output_____
###Markdown
Note: one can also try to find missing PMCs in the summaries:
###Code
# result = entrez_api.search(primary_terms['poly-omics'], max_results=10_000)
# summary = entrez_api.summarize(result.data['esearchresult']['idlist'][:5], max_results=10_000)
# summary.data
###Output
_____no_output_____
###Markdown
Download full texts as XML
###Code
pmc_ids = literature[literature['has_pmc']]['PMC'].tolist()
pmc_ids[:4]
%%cache pubmed_central_xml pmc_xmls
pmc_full_texts = entrez_api.in_batches_of(size=100).fetch(pmc_ids, max_results=5_000, database='pmc', return_type='xml')
pmc_xmls = sum(
[
list(response.data)
for response in pmc_full_texts.values()
],
[]
)
len(pmc_xmls)
ignore_text = {'xref', 'table', 'thead', 'th', 'td', 'tr', 'graphic'}
def extract_text(body) -> str:
fragments = []
for i in body.iter():
if i.tag in ignore_text:
continue
text = i.text
if i.tag == 'label' and text and text.startswith('Figure'):
continue
if text:
fragments.append(text)
return '\n'.join(fragments)
literature_subjects = literature.index.to_frame().drop(columns='uid').copy()
for xml in pmc_xmls:
pmid = xml.find('front/article-meta/article-id[@pub-id-type="pmid"]').text
body = xml.find('body')
has_full_text = body is not None
subjects = [subject.text for subject in xml.findall('front/article-meta//subject')]
literature.loc[pmid, 'has_full_text'] = has_full_text
literature.loc[pmid, 'full_text'] = extract_text(body) if has_full_text else None
literature.loc[pmid, 'article_type'] = xml.attrib['article-type']
for subject in subjects:
literature_subjects.loc[pmid, subject] = True
literature_subjects = literature_subjects.fillna(False)
literature_subjects.sum().sort_values(ascending=False).head(10)
literature.article_type.sorted_value_counts()
sum(literature['has_full_text'] == True)
#from helpers.utils import display_xml
#display_xml(pmc_xmls[-2].find('body'))
###Output
_____no_output_____
###Markdown
Abstract clean-up

Many abstracts contain section/organising headers, such as:
###Code
['BACKGROUND', 'MOTIVATION', 'OBJECTIVE', 'SCOPE']
###Output
_____no_output_____
###Markdown
By convention those are upper case in PubMed. Here we filter those out:
###Code
from re import findall
def extract_upper_case(abstract: str, min_len: int = 3):
if abstract:
return findall('([A-Z]{' + str(min_len) + ',})', abstract)
return []
def count_upper_case_phrases(data: Series, min_len: int = 3) -> Series:
return Series(sum(data.apply(extract_upper_case, min_len=min_len), [])).sorted_value_counts()
potential_headers = count_upper_case_phrases(literature['abstract'])
potential_headers[potential_headers > 100]
###Output
_____no_output_____
###Markdown
There are many disease abbreviations making the list too long to browse:
###Code
len(potential_headers[potential_headers > 3])
###Output
_____no_output_____
###Markdown
So we will look at longer words:
###Code
potential_headers[potential_headers > 3].index.map(len).value_counts()
potential_headers_long = count_upper_case_phrases(literature['abstract'], min_len=5)
potential_headers_long.head(20)
###Output
_____no_output_____
###Markdown
I manually chosen headers from among top 100 hits:
###Code
ABSTRACT_HEADERS = [
# manually added to prevent hanging "OF"
'PURPOSE OF REVIEW',
# chosen from top 100 most frequent
'RESULTS',
'BACKGROUND',
'CONCLUSIONS',
'METHODS',
'CONCLUSION',
'PURPOSE',
'OBJECTIVE',
'AVAILABILITY',
'MOTIVATION',
'INFORMATION',
'SUPPLEMENTARY',
'FINDINGS',
'SIGNIFICANCE',
'INTRODUCTION',
'DESIGN',
'OBJECTIVES',
'REVIEW',
'SUMMARY',
'MATERIALS',
'STUDY',
'EXPERIMENTAL',
'DISCUSSION',
'REGISTRATION',
'METHOD',
'CONTACT',
'FUTURE',
'INTERPRETATION',
]
literature['abstract_clean'] = literature['abstract'].str.replace('|'.join(ABSTRACT_HEADERS), '', regex=True)
%vault store literature in pubmed_derived_data
%vault store literature_subjects in pubmed_derived_data
%vault store affiliations, authors, publication_types in pubmed_derived_data
import pandas
pandas.set_option('display.max_colwidth', 1000)
from typing import Union
def format_token(t: Union[str, dict]) -> str:
if isinstance(t, str):
return t
assert t['explode'] == 'N'
assert t['field'] == 'Text'
return t['term'] + '→' + t['count'] + ''
pubmed_translations = []
for term, result in pubmed_results.items():
pubmed_translations.append({
'term': term,
'translation_stack': ' '.join([format_token(t) for t in result.data['esearchresult']['translationstack']]),
'query_translation': result.data['esearchresult']['querytranslation']
})
pubmed_translations = DataFrame(pubmed_translations)
pubmed_translations
###Output
_____no_output_____
###Markdown
Create a control set of documents published in the journals with hits
###Code
years_set = sorted(set(literature.year.dropna().astype(int)))
years_set
journal_freq = literature.journal.sorted_value_counts()
journal_freq
popular_journals = journal_freq[journal_freq >= 3]
popular_journals.sum() / journal_freq.sum()
%vault store popular_journals in pubmed_derived_data
%%cache all_articles_by_journal_and_year all_articles_by_journal_and_year
all_articles_by_journal_and_year = []
for journal in tqdm(sorted(popular_journals.index)):
for year in list(years_set):
result = entrez_api.search(
f'("{journal}"[Journal]) AND ("{year}"[Date - Publication])',
database='pubmed',
max_results=1
)
esearch = result.data['esearchresult']
count = int(esearch['count'])
assert count >= 0
all_articles_by_journal_and_year.append({
'count': count,
'year': year,
'journal': journal
})
all_articles_by_journal_and_year = DataFrame(all_articles_by_journal_and_year)
%vault store all_articles_by_journal_and_year in pubmed_derived_data
###Output
_____no_output_____
###Markdown
Create a control for cancer enrichment
###Code
MIN_DATE = min(literature.date.dt.year)
MAX_DATE = max(literature.date.dt.year)
MIN_DATE, MAX_DATE
SAME_PERIOD_AS_MULTI_OMICS = f'(("{MIN_DATE}"[Date - Publication] : "{MAX_DATE}"[Date - Publication]))'
SAME_PERIOD_AS_MULTI_OMICS
###Output
_____no_output_____
###Markdown
Full-text search:
###Code
%%cache cancer_articles_from_popular_journals_any_field cancer_articles_from_popular_journals_any_field
cancer_articles_by_journal = []
for journal in tqdm(sorted(popular_journals.index)):
result = entrez_api.search(
f'("{journal}"[Journal]) AND ("cancer"[All Fields]) AND {SAME_PERIOD_AS_MULTI_OMICS}',
database='pubmed',
max_results=1
)
esearch = result.data['esearchresult']
count = int(esearch['count'])
assert count >= 0
cancer_articles_by_journal.append({
'count': count,
'journal': journal
})
cancer_articles_from_popular_journals_any_field = DataFrame(cancer_articles_by_journal)
%vault store cancer_articles_from_popular_journals_any_field in pubmed_derived_data
###Output
_____no_output_____
###Markdown
Title/abstract only:
###Code
%%cache cancer_articles_from_popular_journals_tiab_only cancer_articles_from_popular_journals_tiab_only
cancer_tiab_articles_by_journal = []
for journal in tqdm(sorted(popular_journals.index)):
result = entrez_api.search(
f'("{journal}"[Journal]) AND ("cancer"[TIAB]) AND {SAME_PERIOD_AS_MULTI_OMICS}',
database='pubmed',
max_results=1
)
esearch = result.data['esearchresult']
count = int(esearch['count'])
assert count >= 0
cancer_tiab_articles_by_journal.append({
'count': count,
'journal': journal
})
cancer_articles_from_popular_journals_tiab_only = DataFrame(cancer_tiab_articles_by_journal)
%vault store cancer_articles_from_popular_journals_tiab_only in pubmed_derived_data
###Output
_____no_output_____ |
tutorial/04. Usando variables.ipynb | ###Markdown
Using variables

Variables are a name that represents or stores a value, and that you can use later on, more than once. For example, you can use a variable for your year of birth:
###Code
año_nacimiento = 2010
###Output
_____no_output_____
###Markdown
**To show what a variable contains, use `print( )`, with the variable inside the parentheses `(` and `)`**:
###Code
print(año_nacimiento)
###Output
2010
###Markdown
There are a few quirks with variable names:

- They have to start with a letter or the `_` character
- They are usually a word or a short description
- They cannot contain spaces. If you want to separate words, use the `_` character
- They can contain numbers.

Now create a variable to store your age.
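For instance (these examples are not part of the original exercise), some names that follow or break these rules:

```python
# Valid and invalid variable names (illustrative only)
birth_year = 2010     # valid: words separated with _
_counter = 0          # valid: can start with _
year2 = 2024          # valid: can contain numbers
# 2year = 2024        # invalid: cannot start with a number
# my age = 7          # invalid: cannot contain spaces
```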
###Code
mi_edad = 7
print(mi_edad)
###Output
7
###Markdown
Variables come in different types. There are variables that store numbers, and others that store `strings`:
###Code
un_numero = 3345
mi_nombre = "Francisca"
###Output
_____no_output_____
###Markdown
Variables can be assigned and you can do operations between them. For example, to add or subtract two numbers:
###Code
numero_1 = 16
numero_2 = 10
la_suma = numero_1 + numero_2
print(la_suma)
la_resta = numero_1 - numero_2
print(la_resta)
###Output
26
6
|
Scripts/web_scraping.ipynb | ###Markdown
Web Scraping: this Jupyter Notebook shows how web scraping is done.
###Code
## Beautiful Soup
! pip install beautifulsoup4
print("\n" + '\033[1m' + '\033[93m' + "Downloads feitos com sucesso!")
! pip install requests
import requests
import requests
from bs4 import BeautifulSoup
import re
print("\n" + '\033[1m' + '\033[93m' + "Importações feitas com sucesso!")
url = 'https://g1.globo.com/pop-arte/musica/noticia/2020/04/19/roberto-carlos-live-e-neste-domingo-19-com-transmissao-no-globoplay-e-no-domingao-veja-detalhes.ghtml'
pagina = requests.get(url).text
soup = BeautifulSoup(pagina, features="html.parser")
result = soup.findAll("div", {"class":"content-text"})
lista = list()
for linha in result:
    # Extracting the text of the paragraph
linha = linha.find('p').text
    # Converting everything to lower case
linha = linha.lower()
    # Removing punctuation (e.g. "!?.,/|#$%¨&")
linha = re.sub(r'[^\w\s]', '', linha)
    # Removing links, numbers, etc.
linha = re.sub('\[.*?\]', '', linha)
linha = re.sub('\w*\d\w*', '', linha)
linha = re.sub(r'^https?:\/\/.*[\r\n]*', '', linha, flags=re.MULTILINE)
lista.append(linha)
print (lista)
###Output
[' roberto carlos vai comemorar seu aniversário de anos neste domingo com a tv globo e o globoplay o cantor estará em seu estúdio na urca no rio de janeiro ', ' live de roberto carlos como assistir o público poderá acompanhar a apresentação completa com duração prevista de minutos no globoplay veja o link o domingão do faustão exibirá na tv globo ao vivo às as duas primeiras músicas ', ' com uma produção mínima e cercado de cuidados roberto carlos estará acompanhado apenas do maestro eduardo lages e de tutuca borba nos teclados ', ' além de curtir os sucessos e comemorar o aniversário junto com roberto carlos o público será convidado a se juntar à corrente de solidariedade que toma conta do brasil e do mundo e conhecer o paraquemdoarcombr plataforma criada pela globo que reúne projetos de institutos fundações entidades e movimentos sociais que estão atuando e precisam de apoio para minimizar os impactos da pandemia ']
|
ICCT_si/examples/04/SS-39-Krmiljenje_pozicije_bremena_na_zerjavu.ipynb | ###Markdown
Load position control for a crane

A gantry crane consists of a cart of mass $m_c = 1000$ kg and a load of mass $m_l$, connected by a rope of length $L$ (the mass of the rope is neglected). Friction is linear, proportional to the cart velocity, with friction coefficient $B_f = 100$ Ns/m; the cart moves under the action of the force $F$. The angle between the vertical axis and the rope is $\theta$. The dynamic equations of the system, linearized around the stationary conditions ($\theta=\dot{\theta}=\dot{x}=0$), are ($g = 9.81$ m/$\text{s}^2$ is the gravitational acceleration): \begin{cases} (m_l+m_c)\ddot{x}+m_lL\ddot{\theta}+B_f\dot{x}=F \\ m_lL^2\ddot{\theta}+m_lL\ddot{x}+m_lLg\theta=0\end{cases}

Assume that $m_l=100$ kg and $L=10$ m are the nominal values; we want to design a controller for the measured load position $y=x+L\sin{\theta} \cong x+L\theta$ subject to the following requirements:
- settling time shorter than 20 s (the achieved output value should differ from the steady-state one by at most 5%),
- no overshoot, or only minimal overshoot,
- zero or minimal steady-state error in the response to a step input,
- the force $F$ must not exceed $\pm1000$ N for a commanded position change of 10 m.

First, let us write the linearized equations in state-space form; dividing the second equation of the system above by $m_lL$ gives
$$ L\ddot{\theta}+\ddot{x}+g\theta=0 \, .$$
Now, taking $x=\begin{bmatrix} x_1 & x_2 & x_3 & x_4 \end{bmatrix}^T =\begin{bmatrix} \dot{x} & \dot{\theta} & \theta & x \end{bmatrix}^T$, we can write:
$$M\dot{x}=Gx+HF,$$
where:
$$ M = \begin{bmatrix} (m_l+m_c) & m_lL & 0 & 0 \\ 1 & L & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}, \quad G = \begin{bmatrix} -B_f & 0 & 0 & 0 \\ 0 & 0 & -g & 0 \\ 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \end{bmatrix} \quad \text{and} \quad H = \begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \end{bmatrix} \, .$$
Thus, by pre-multiplying with $M^{-1}$, we obtain ($A=M^{-1}G$, $B=M^{-1}H$):
\begin{cases} \dot{x} = \begin{bmatrix} -B_f/m_c & 0 & gm_l/m_c & 0 \\ B_f/(Lm_c) & 0 & -g(1/L + m_l/(Lm_c)) & 0 \\ 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \end{bmatrix}x +\begin{bmatrix}1/m_c \\ -1/(Lm_c) \\ 0 \\ 0 \end{bmatrix}F \\ y = \begin{bmatrix} 0 & 0 & L & 1 \end{bmatrix}x\end{cases}

Controller design

Designing the controller

The poles of the system are $0$, $-0.091$ and $-0.0045\pm1.038i$. In order to meet the settling-time requirement, we move the real poles to $-0.28$ and shift the real part of the complex poles by $-0.25$. The chosen closed-loop poles are thus $-0.28$, $-0.28$ and $-0.2545\pm1.038i$. To achieve zero steady-state error, the input (reference) signal is multiplied by a gain equal to the inverse of the closed-loop DC gain.

Designing the observer

To enable the state-feedback controller, we design a state observer with poles at $-8$.

How to use this interactive example?

Try changing the pole locations so that the requirements are met for an input of 15 m.
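As a quick numerical check (a sketch, not part of the interactive widgets below; the variable names are chosen so they do not clash with the `K` and `L` matrices defined in the next cell), we can build the linearized $A$ matrix from the values above and confirm the open-loop poles via its eigenvalues.

```python
# Sketch: build A from the stated parameters and check the open-loop poles
import numpy as np

m_c, m_l, L_rope, B_f, g = 1000.0, 100.0, 10.0, 100.0, 9.81
A = np.array([
    [-B_f / m_c, 0.0, g * m_l / m_c, 0.0],
    [B_f / (L_rope * m_c), 0.0, -g * (1.0 / L_rope + m_l / (L_rope * m_c)), 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [1.0, 0.0, 0.0, 0.0],
])
# Expected (as stated above): 0, about -0.091, and about -0.0045 +/- 1.038i
print(np.linalg.eigvals(A).round(4))
```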
###Code
# Preparatory cell
X0 = numpy.matrix('0.0; 0.0; 0.0; 0.0')
K = numpy.matrix([0,0,0,0])
L = numpy.matrix([[0],[0],[0],[0]])
X0w = matrixWidget(4,1)
X0w.setM(X0)
Kw = matrixWidget(1,4)
Kw.setM(K)
Lw = matrixWidget(4,1)
Lw.setM(L)
eig1c = matrixWidget(1,1)
eig2c = matrixWidget(2,1)
eig3c = matrixWidget(1,1)
eig4c = matrixWidget(2,1)
eig1c.setM(numpy.matrix([-0.28]))
eig2c.setM(numpy.matrix([[-0.2545],[-1.038]]))
eig3c.setM(numpy.matrix([-0.28]))
eig4c.setM(numpy.matrix([[-1.],[-1.]]))
eig1o = matrixWidget(1,1)
eig2o = matrixWidget(2,1)
eig3o = matrixWidget(1,1)
eig4o = matrixWidget(2,1)
eig1o.setM(numpy.matrix([-8.]))
eig2o.setM(numpy.matrix([[-8.1],[0.]]))
eig3o.setM(numpy.matrix([-8.2]))
eig4o.setM(numpy.matrix([[-8.3],[0.]]))
# Misc
#create dummy widget
DW = widgets.FloatText(layout=widgets.Layout(width='0px', height='0px'))
#create button widget
START = widgets.Button(
description='Test',
disabled=False,
button_style='', # 'success', 'info', 'warning', 'danger' or ''
tooltip='Test',
icon='check'
)
def on_start_button_clicked(b):
#This is a workaround to have intreactive_output call the callback:
# force the value of the dummy widget to change
if DW.value> 0 :
DW.value = -1
else:
DW.value = 1
pass
START.on_click(on_start_button_clicked)
# Define type of method
selm = widgets.Dropdown(
options= ['Nastavi K in L', 'Nastavi lastne vrednosti'],
value= 'Nastavi lastne vrednosti',
description='',
disabled=False
)
# Define the number of complex eigenvalues
selec = widgets.Dropdown(
options= ['brez kompleksnih lastnih vrednosti', 'dve kompleksni lastni vrednosti', 'štiri kompleksne lastne vrednosti'],
value= 'dve kompleksni lastni vrednosti',
description='Lastne vrednosti krmilnika:',
disabled=False
)
seleo = widgets.Dropdown(
options= ['brez kompleksnih lastnih vrednosti', 'dve kompleksni lastni vrednosti'],
value= 'brez kompleksnih lastnih vrednosti',
description='Lastne vrednosti spoznavalnika:',
disabled=False
)
#define type of ipout
selu = widgets.Dropdown(
options=['impulzna funkcija', 'koračna funkcija', 'sinusoidna funkcija', 'kvadratni val'],
value='koračna funkcija',
description='Vhod:',
style = {'description_width': 'initial'},
disabled=False
)
# Define the values of the input
u = widgets.FloatSlider(
value=10,
min=0,
max=20,
step=1,
description='Vhod:',
disabled=False,
continuous_update=False,
orientation='horizontal',
readout=True,
readout_format='.1f',
)
period = widgets.FloatSlider(
value=0.5,
min=0.001,
max=10,
step=0.001,
description='Perioda: ',
disabled=False,
continuous_update=False,
orientation='horizontal',
readout=True,
readout_format='.3f',
)
simTime = widgets.FloatText(
value=30,
description='',
disabled=False
)
# Support functions
def eigen_choice(selec,seleo):
if selec == 'brez kompleksnih lastnih vrednosti':
eig1c.children[0].children[0].disabled = False
eig2c.children[1].children[0].disabled = True
eig3c.children[0].children[0].disabled = False
eig4c.children[0].children[0].disabled = False
eig4c.children[1].children[0].disabled = True
eigc = 0
if seleo == 'brez kompleksnih lastnih vrednosti':
eig1o.children[0].children[0].disabled = False
eig2o.children[1].children[0].disabled = True
eig3o.children[0].children[0].disabled = False
eig4o.children[0].children[0].disabled = False
eig4o.children[1].children[0].disabled = True
eigo = 0
if selec == 'dve kompleksni lastni vrednosti':
eig1c.children[0].children[0].disabled = False
eig2c.children[1].children[0].disabled = False
eig3c.children[0].children[0].disabled = False
eig4c.children[0].children[0].disabled = True
eig4c.children[1].children[0].disabled = True
eigc = 2
if seleo == 'dve kompleksni lastni vrednosti':
eig1o.children[0].children[0].disabled = False
eig2o.children[1].children[0].disabled = False
eig3o.children[0].children[0].disabled = False
eig4o.children[0].children[0].disabled = True
eig4o.children[1].children[0].disabled = True
eigo = 2
if selec == 'štiri kompleksne lastne vrednosti':
eig1c.children[0].children[0].disabled = True
eig2c.children[1].children[0].disabled = False
eig3c.children[0].children[0].disabled = True
eig4c.children[0].children[0].disabled = False
eig4c.children[1].children[0].disabled = False
eigc = 4
if seleo == 'štiri kompleksne lastne vrednosti':
eig1o.children[0].children[0].disabled = True
eig2o.children[1].children[0].disabled = False
eig3o.children[0].children[0].disabled = True
eig4o.children[0].children[0].disabled = False
eig4o.children[1].children[0].disabled = False
eigo = 4
return eigc, eigo
def method_choice(selm):
if selm == 'Nastavi K in L':
method = 1
selec.disabled = True
seleo.disabled = True
if selm == 'Nastavi lastne vrednosti':
method = 2
selec.disabled = False
seleo.disabled = False
return method
ml = 100
mc = 1000
L = 10
g = 9.81
Bf = 100
A = numpy.matrix([[-Bf/mc,0,g*ml/mc,0],
[Bf/(L*mc), 0, -g*(1/L + ml/(L*mc)), 0],
[0, 1, 0, 0],
[1, 0, 0, 0]])
B = numpy.matrix([[1/mc],[-1/(L*mc)],[0],[0]])
C = numpy.matrix([[0,0,L,1]])
def main_callback2(X0w, K, L, eig1c, eig2c, eig3c, eig4c, eig1o, eig2o, eig3o, eig4o, u, period, selm, selec, seleo, selu, simTime, DW):
eigc, eigo = eigen_choice(selec,seleo)
method = method_choice(selm)
if method == 1:
solc = numpy.linalg.eig(A-B*K)
solo = numpy.linalg.eig(A-L*C)
if method == 2:
try:
if eigc == 0:
K = control.acker(A, B, [eig1c[0,0], eig2c[0,0], eig3c[0,0], eig4c[0,0]])
Kw.setM(K)
if eigc == 2:
K = control.acker(A, B, [eig3c[0,0],
eig1c[0,0],
numpy.complex(eig2c[0,0], eig2c[1,0]),
numpy.complex(eig2c[0,0],-eig2c[1,0])])
Kw.setM(K)
if eigc == 4:
K = control.acker(A, B, [numpy.complex(eig4c[0,0], eig4c[1,0]),
numpy.complex(eig4c[0,0],-eig4c[1,0]),
numpy.complex(eig2c[0,0], eig2c[1,0]),
numpy.complex(eig2c[0,0],-eig2c[1,0])])
Kw.setM(K)
if eigo == 0:
L = control.place(A.T, C.T, [eig1o[0,0], eig2o[0,0], eig3o[0,0], eig4o[0,0]]).T
Lw.setM(L)
if eigo == 2:
L = control.place(A.T, C.T, [eig3o[0,0],
eig1o[0,0],
numpy.complex(eig2o[0,0], eig2o[1,0]),
numpy.complex(eig2o[0,0],-eig2o[1,0])]).T
Lw.setM(L)
if eigo == 4:
L = control.place(A.T, C.T, [numpy.complex(eig4o[0,0], eig4o[1,0]),
numpy.complex(eig4o[0,0],-eig4o[1,0]),
numpy.complex(eig2o[0,0], eig2o[1,0]),
numpy.complex(eig2o[0,0],-eig2o[1,0])]).T
Lw.setM(L)
except:
print("NAPAKA: najmanj en pol je ponovljen večkrat, kot je rang matrike B; poizkusi drugače razporediti pole")
return
sys = sss(A,B,numpy.vstack((C,[0,0,0,0])),[[0],[1]])
syse = sss(A-L*C,numpy.hstack((B,L)),numpy.eye(4),numpy.zeros((4,2)))
sysc = sss(0,[0,0,0,0],0,-K)
sys_append = control.append(sys,syse,sysc)
# To avoid strange behaviours
try:
sys_CL = control.connect(sys_append,
[[1,7],[2,2],[3,1],[4,3],[5,4],[6,5],[7,6]],
[1],
[1,2])
except:
sys_CL = control.connect(sys_append,
[[1,7],[2,2],[3,1],[4,3],[5,4],[6,5],[7,6]],
[1],
[1,2])
X0w1 = numpy.zeros((8,1))
X0w1[4,0] = X0w[0,0]
X0w1[5,0] = X0w[1,0]
X0w1[6,0] = X0w[2,0]
X0w1[7,0] = X0w[3,0]
if simTime != 0:
T = numpy.linspace(0, simTime, 10000)
else:
T = numpy.linspace(0, 1, 10000)
#t1, y1 = step(sys_CL[0,0],[0,10000])
u1 = u
try:
DCgain = control.dcgain(sys_CL[0,0])
u = u/DCgain
except:
print("Napaka v izračunu DC ojačanja (DC gain) zaprtozančnega krmiljenega sistema. Vnaprejšnje (ang. feedforward) ojačanje je nastavljeno na 1.")
DCgain = 1
if selu == 'impulzna funkcija': #selu
U = [0 for t in range(0,len(T))]
U[0] = u
U1 = [0 for t in range(0,len(T))]
U1[0] = u1
T, yout, xout = control.forced_response(sys_CL,T,U,X0w1)
if selu == 'koračna funkcija':
U = [u for t in range(0,len(T))]
U1 = [u1 for t in range(0,len(T))]
T, yout, xout = control.forced_response(sys_CL,T,U,X0w1)
if selu == 'sinusoidna funkcija':
U = u*numpy.sin(2*numpy.pi/period*T)
U1 = u1*numpy.sin(2*numpy.pi/period*T)
T, yout, xout = control.forced_response(sys_CL,T,U,X0w1)
if selu == 'kvadratni val':
U = u*numpy.sign(numpy.sin(2*numpy.pi/period*T))
U1 = u1*numpy.sign(numpy.sin(2*numpy.pi/period*T))
T, yout, xout = control.forced_response(sys_CL,T,U,X0w1)
try:
step_info_dict = control.step_info(sys_CL[0,0],SettlingTimeThreshold=0.05,T=T)
print('Informacija o odzivu: \n\tČas vzpona [s]=',step_info_dict['RiseTime'],'\n\tČas ustalitve [s] =',step_info_dict['SettlingTime'],'\n\tPrenihaj [%]=',step_info_dict['Overshoot'])
print('Maksimalna vrednost u (v deležu od 1000 N)=', max(abs(yout[1]))/(1000)*100)
except:
print("Napaka v izračunu informacij o odzivu sistema.")
print("Ojačanje zaprtozančnega sistema =",DCgain)
fig = plt.figure(num='Simulacija 1', figsize=(14,12))
fig.add_subplot(221)
plt.title('Odziv sistema')
plt.ylabel('Izhod')
plt.plot(T,yout[0],T,U1,'r--')
plt.xlabel('$t$ [s]')
plt.axvline(x=0,color='black',linewidth=0.8)
plt.axhline(y=0,color='black',linewidth=0.8)
plt.legend(['$y$','Referenca'])
plt.grid()
fig.add_subplot(222)
plt.title('Vhod')
plt.ylabel('$u$')
plt.plot(T,yout[1])
plt.plot(T,[1000 for i in range(len(T))],'r--')
plt.plot(T,[-1000 for i in range(len(T))],'r--')
plt.xlabel('$t$ [s]')
plt.axvline(x=0,color='black',linewidth=0.8)
plt.axhline(y=0,color='black',linewidth=0.8)
plt.grid()
fig.add_subplot(223)
plt.title('Odzivi stanj')
plt.ylabel('Stanja')
plt.plot(T,xout[0],
T,xout[1],
T,xout[2],
T,xout[3])
plt.xlabel('$t$ [s]')
plt.axvline(x=0,color='black',linewidth=0.8)
plt.axhline(y=0,color='black',linewidth=0.8)
plt.legend(['$x_{1}$','$x_{2}$','$x_{3}$','$x_{4}$'])
plt.grid()
fig.add_subplot(224)
plt.title('Napaka ocene stanj')
plt.ylabel('Napaka ocene stanj')
plt.plot(T,xout[4]-xout[0])
plt.plot(T,xout[5]-xout[1])
plt.plot(T,xout[6]-xout[2])
plt.plot(T,xout[7]-xout[3])
plt.axvline(x=0,color='black',linewidth=0.8)
plt.axhline(y=0,color='black',linewidth=0.8)
plt.xlabel('$t$ [s]')
plt.legend(['$e_{1}$','$e_{2}$','$e_{3}$','$e_{4}$'])
plt.grid()
#plt.tight_layout()
alltogether2 = widgets.VBox([widgets.HBox([selm,
selec,
seleo,
selu]),
widgets.Label(' ',border=3),
widgets.HBox([widgets.HBox([widgets.Label('K:',border=3), Kw,
widgets.Label('Lastne vrednosti:',border=3),
widgets.HBox([eig1c,
eig2c,
eig3c,
eig4c])])]),
widgets.Label(' ',border=3),
widgets.HBox([widgets.VBox([widgets.HBox([widgets.Label('L:',border=3), Lw, widgets.Label(' ',border=3),
widgets.Label(' ',border=3),
widgets.Label('Lastne vrednosti:',border=3),
eig1o,
eig2o,
eig3o,
eig4o,
widgets.Label(' ',border=3),
widgets.Label(' ',border=3),
widgets.Label('X0 est.:',border=3), X0w]),
widgets.Label(' ',border=3),
widgets.HBox([
widgets.VBox([widgets.Label('Simulacijski čas [s]:',border=3)]),
widgets.VBox([simTime])])]),
widgets.Label(' ',border=3)]),
widgets.Label(' ',border=3),
widgets.HBox([u,
period,
START])])
out2 = widgets.interactive_output(main_callback2, {'X0w':X0w, 'K':Kw, 'L':Lw,
'eig1c':eig1c, 'eig2c':eig2c, 'eig3c':eig3c, 'eig4c':eig4c,
'eig1o':eig1o, 'eig2o':eig2o, 'eig3o':eig3o, 'eig4o':eig4o,
'u':u, 'period':period, 'selm':selm, 'selec':selec, 'seleo':seleo, 'selu':selu, 'simTime':simTime, 'DW':DW})
out2.layout.height = '860px'
display(out2, alltogether2)
###Output
_____no_output_____ |
slides/12_Cifar10_CNN.ipynb | ###Markdown
Dense layer limitations
* pixel values are treated as individual features, so no neighboring (spatial) information is preserved
* the number of weights for large images can be very large (for a 512×512 input, the first 64-unit dense layer alone needs 512·512·64 ≈ 16.8M weights)
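The second point can be checked by hand before calling `model.summary()` below: a dense layer needs `inputs * units + units` parameters (weights plus one bias per unit). A small sketch of that arithmetic:

```python
# Parameter count of the first Dense(64) layer for two input resolutions
for side in (28, 512):
    inputs = side * side        # flattened pixel count
    params = inputs * 64 + 64   # weights + biases
    print(f"{side}x{side} input -> {params:,} parameters in Dense(64)")
# 28x28   ->     50,240
# 512x512 -> 16,777,280
```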
###Code
# Imports used throughout this notebook
import os, tarfile
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from scipy import misc
from PIL import Image
from sklearn.metrics import confusion_matrix

model = tf.keras.models.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(64, activation=tf.nn.relu),
tf.keras.layers.Dense(32, activation=tf.nn.relu),
tf.keras.layers.Dense(10, activation=tf.nn.softmax),
])
model.summary()
# Same network for larger input images
model = tf.keras.models.Sequential([
tf.keras.layers.Flatten(input_shape=(512, 512)),
tf.keras.layers.Dense(64, activation=tf.nn.relu),
tf.keras.layers.Dense(32, activation=tf.nn.relu),
tf.keras.layers.Dense(10, activation=tf.nn.softmax),
])
model.summary()
tf.keras.backend.clear_session()
###Output
_____no_output_____
###Markdown
Convolution with filters
###Code
test_image = misc.ascent()
fig = plt.figure(figsize=(6, 6))
plt.imshow(test_image, cmap='gray')
plt.show()
###Output
_____no_output_____
###Markdown
Convolution with a $3\times 3$ filter$$ A \circledast F = \begin{bmatrix} 2 & 0 & 1 & 7 & 2 \\ 0 & 0 & 1 & 1 & 1 \\ 0 & 3 & 2 & 8 & 3 \\ 1 & 1 & 4 & 3 & 4 \\ 0 & 0 & 1 & 6 & 9 \\\end{bmatrix} \circledast\begin{bmatrix}1 & 0 & -1 \\ 1 & 0 & -1 \\ 1 & 0 & -1 \end{bmatrix} =\begin{bmatrix}-2 & -13 & -2 \\-6 & -8 & -1 \\-6 & -13 & -9\\\end{bmatrix}$$
###Code
def convolution(matrix, filt):
r, c = matrix.shape
f = len(filt)
result = np.zeros((r-f+1, c-f+1))
for i in range(r-f+1):
for j in range(c-f+1):
result[i, j] = np.sum(np.multiply(matrix[i:i+f, j:j+f], filt))
return result
filters = [
[[1, 0, -1], [1, 0, -1], [1, 0, -1]], # vertical edge detector
[[1, 1, 1], [0, 0, 0], [-1, -1, -1]], # horizontal edge detector
[[1, 0, -1], [2, 0, -2], [1, 0, -1]] # Sobel filter - edge detection
]
# filters = [
# [[1, 0, -1], [1, 0, -1], [1, 0, -1]],
# [[0, -1, 0], [-1, 5, -1], [0, -1, 0]], # what is this?
# [[1/9, 1/9, 1/9], [1/9, 1/9, 1/9], [1/9, 1/9, 1/9]] # what is this?
# ]
filtered_images = []
for filt in filters:
filtered_image = convolution(test_image, filt)
filtered_image[filtered_image < 0] = 0
filtered_image[filtered_image > 255] = 255
filtered_images.append(filtered_image)
fig = plt.figure(figsize=(14, 14))
fig.add_subplot(2, 2, 1)
plt.imshow(test_image, cmap='gray')
fig.add_subplot(2, 2, 2)
plt.imshow(filtered_images[0], cmap='gray')
fig.add_subplot(2, 2, 3)
plt.imshow(filtered_images[1], cmap='gray')
fig.add_subplot(2, 2, 4)
plt.imshow(filtered_images[2], cmap='gray')
plt.show()
def maxpooling(matrix):
r, c = matrix.shape
pooled_rows, pooled_cols = r // 2, c // 2
result = np.empty(shape=(pooled_rows, pooled_cols), dtype=matrix.dtype)
for ix in range(pooled_rows):
for jy in range(pooled_cols):
result[ix, jy] = np.max(matrix[2*ix:2*ix+2, 2*jy:2*jy+2])
return result
pooled_images = [maxpooling(image) for image in filtered_images]
fig = plt.figure(figsize=(16, 6))
fig.add_subplot(1, 3, 1)
plt.imshow(test_image, cmap='gray')
fig.add_subplot(1, 3, 2)
plt.imshow(filtered_images[0], cmap='gray')
fig.add_subplot(1, 3, 3)
plt.imshow(pooled_images[0], cmap='gray')
plt.show()
###Output
_____no_output_____
###Markdown
CIFAR-10
###Code
tf.config.list_physical_devices('GPU')
!nvidia-smi
# (X_train, y_train), (X_test, y_test) = tf.keras.datasets.cifar10.load_data()
# !wget http://pjreddie.com/media/files/cifar.tgz -O /tmp/cifar.tgz
#
with tarfile.open("data/cifar.tgz","r") as tar:
tar.extractall("/tmp")
folder = "/tmp/cifar"
def get_filenames(folder):
return [f.name for f in os.scandir(folder) if f.is_file()]
def read_cifar_dataset(folder):
training_folder = f"{folder}/train"
test_folder = f"{folder}/test"
X_train, y_train = read_images_in_folder(training_folder)
X_test, y_test = read_images_in_folder(test_folder)
return (X_train, y_train), (X_test, y_test)
def read_images_in_folder(folder):
filenames = get_filenames(folder)
images = []
labels = []
for filename in filenames:
path = f"{folder}/{filename}"
name, _ = os.path.splitext(path)
label = name.split("_")[-1]
image = np.array(Image.open(path)) / 255.0
images.append(image)
labels.append(label)
return np.array(images), np.array(labels)
(X_train, y_train), (X_test, y_test) = read_cifar_dataset(folder)
X_train.shape
print("Training examples:", X_train.shape, y_train.shape)
print("Test examples:", X_test.shape, y_test.shape)
def print_image_for_each_label(X, y):
fig = plt.figure(figsize=(16, 6))
labels = np.unique(y)
for p, label in enumerate(labels):
ix = np.random.choice(np.where(y==label)[0])
image = X[ix, :, :, :]
ax = fig.add_subplot(2, 5, p+1)
plt.imshow(image, cmap=plt.cm.binary)
ax.set_title(label)
plt.show()
print_image_for_each_label(X_train, y_train)
def categorical_to_numeric(y):
_, indices = np.unique(y, return_inverse=True)
return indices
y_train_num = categorical_to_numeric(y_train)
y_test_num = categorical_to_numeric(y_test)
###Output
_____no_output_____
###Markdown
Logistic regression
###Code
tf.keras.backend.clear_session()
model = tf.keras.models.Sequential([
tf.keras.layers.Flatten(input_shape=(32, 32, 3)),
tf.keras.layers.Dense(10, activation=tf.nn.softmax),
])
model.summary()
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
history = model.fit(X_train, y_train_num, epochs=50, verbose=1, validation_data=(X_test, y_test_num))
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = np.arange(len(acc))
plt.figure(figsize=(16, 4))
plt.plot(epochs, acc)
plt.plot(epochs, val_acc)
plt.title('Training and validation accuracy')
plt.show()
plt.figure(figsize=(16, 4))
plt.plot(epochs, loss)
plt.plot(epochs, val_loss)
plt.title('Training and validation loss')
plt.show()
y_hat = np.argmax(model.predict(X_test), axis=1)
plt.figure(figsize=(7, 6))
plt.title('Confusion matrix', fontsize=14)
plt.imshow(confusion_matrix(y_test_num, y_hat))
plt.xticks(np.arange(10), list(range(10)), fontsize=12)
plt.yticks(np.arange(10), list(range(10)), fontsize=12)
plt.colorbar()
plt.show()
print("Test accuracy:", np.equal(y_hat, y_test_num).sum() / len(y_test))
tf.keras.backend.clear_session()
###Output
_____no_output_____
###Markdown
Fully connected network with hidden layers
###Code
model = tf.keras.models.Sequential([
tf.keras.layers.Flatten(input_shape=(32, 32, 3)),
tf.keras.layers.Dense(64, activation=tf.nn.relu),
tf.keras.layers.Dense(32, activation=tf.nn.relu),
tf.keras.layers.Dense(10, activation=tf.nn.softmax),
])
model.summary()
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
history = model.fit(X_train, y_train_num, epochs=50, verbose=1, validation_data=(X_test, y_test_num))
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = np.arange(len(acc))
plt.figure(figsize=(16, 4))
plt.plot(epochs, acc)
plt.plot(epochs, val_acc)
plt.title('Training and validation accuracy')
plt.show()
plt.figure(figsize=(16, 4))
plt.plot(epochs, loss)
plt.plot(epochs, val_loss)
plt.title('Training and validation loss')
plt.show()
y_hat = np.argmax(model.predict(X_test), axis=1)
plt.figure(figsize=(7, 6))
plt.set_cmap('viridis')
plt.title('Confusion matrix', fontsize=14)
plt.imshow(confusion_matrix(y_test_num, y_hat))
plt.xticks(np.arange(10), list(range(10)), fontsize=12)
plt.yticks(np.arange(10), list(range(10)), fontsize=12)
plt.colorbar()
plt.show()
print("Test accuracy:", np.equal(y_hat, y_test_num).sum() / len(y_test))
###Output
_____no_output_____
###Markdown
CNN - LeNet5
###Code
tf.keras.backend.clear_session()
###Output
_____no_output_____
###Markdown

###Code
model = tf.keras.models.Sequential([
tf.keras.layers.Conv2D(6, kernel_size=5, strides=1, padding='valid', activation='tanh',
input_shape=(32, 32, 3)),
tf.keras.layers.MaxPooling2D(pool_size=2, strides=2),
tf.keras.layers.Conv2D(16, kernel_size=5, strides=1, padding='valid', activation='tanh'),
tf.keras.layers.MaxPooling2D(pool_size=2, strides=2),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(120, activation=tf.nn.tanh),
tf.keras.layers.Dense(84, activation=tf.nn.tanh),
tf.keras.layers.Dense(10, activation=tf.nn.softmax),
])
model.summary()
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
history = model.fit(X_train, y_train_num, epochs=50, verbose=1, validation_data=(X_test, y_test_num))
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = np.arange(len(acc))
plt.figure(figsize=(16, 4))
plt.plot(epochs, acc)
plt.plot(epochs, val_acc)
plt.title('Training and validation accuracy')
plt.show()
plt.figure(figsize=(16, 4))
plt.plot(epochs, loss)
plt.plot(epochs, val_loss)
plt.title('Training and validation loss')
plt.show()
y_hat = np.argmax(model.predict(X_test), axis=1)
plt.figure(figsize=(7, 6))
plt.title('Confusion matrix', fontsize=14)
plt.imshow(confusion_matrix(y_test_num, y_hat))
plt.xticks(np.arange(10), list(range(10)), fontsize=12)
plt.yticks(np.arange(10), list(range(10)), fontsize=12)
plt.colorbar()
plt.show()
print("Test accuracy:", np.equal(y_hat, y_test_num).sum() / len(y_test))
###Output
_____no_output_____
###Markdown
LeNet5 with regularization
###Code
tf.keras.backend.clear_session()
model = tf.keras.models.Sequential([
tf.keras.layers.Conv2D(6, kernel_size=5, strides=1, padding="valid", activation="tanh",
input_shape=(32, 32, 3)),
tf.keras.layers.MaxPooling2D(pool_size=2, strides=2),
tf.keras.layers.Dropout(0.15),
tf.keras.layers.Conv2D(16, kernel_size=5, strides=1, padding="valid", activation="tanh"),
tf.keras.layers.MaxPooling2D(pool_size=2, strides=2),
tf.keras.layers.Dropout(0.15),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(120, activation=tf.nn.tanh, kernel_regularizer="l2"),
tf.keras.layers.Dense(84, activation=tf.nn.tanh, kernel_regularizer="l2"),
tf.keras.layers.Dense(10, activation=tf.nn.softmax),
])
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
history = model.fit(X_train, y_train_num, epochs=30, verbose=1, validation_data=(X_test, y_test_num))
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = np.arange(len(acc))
plt.figure(figsize=(16, 4))
plt.plot(epochs, acc)
plt.plot(epochs, val_acc)
plt.title('Training and validation accuracy')
plt.show()
plt.figure(figsize=(16, 4))
plt.plot(epochs, loss)
plt.plot(epochs, val_loss)
plt.title('Training and validation loss')
plt.show()
y_hat = np.argmax(model.predict(X_test), axis=1)
plt.figure(figsize=(7, 6))
plt.title('Confusion matrix', fontsize=14)
plt.imshow(confusion_matrix(y_test_num, y_hat))
plt.xticks(np.arange(10), list(range(10)), fontsize=12)
plt.yticks(np.arange(10), list(range(10)), fontsize=12)
plt.colorbar()
plt.show()
print("Test accuracy:", np.equal(y_hat, y_test_num).sum() / len(y_test))
###Output
_____no_output_____ |
on_demand/tfx-caip/lab-04-tfx-metadata/labs/lab-04.ipynb | ###Markdown
Inspecting TFX metadata Learning Objectives: 1. Use a GRPC server to access and analyze pipeline artifacts stored in the ML Metadata service of your AI Platform Pipelines instance. In this lab, you will explore TFX pipeline metadata including pipeline and run artifacts. A hosted **AI Platform Pipelines** instance includes the [ML Metadata](https://github.com/google/ml-metadata) service. In **AI Platform Pipelines**, ML Metadata uses *MySQL* as a database backend and can be accessed using a GRPC server. Setup
###Code
import os
import ml_metadata
import tensorflow_data_validation as tfdv
import tensorflow_model_analysis as tfma
from ml_metadata.metadata_store import metadata_store
from ml_metadata.proto import metadata_store_pb2
from tfx.orchestration import metadata
from tfx.types import standard_artifacts
!python -c "import tfx; print('TFX version: {}'.format(tfx.__version__))"
!python -c "import kfp; print('KFP version: {}'.format(kfp.__version__))"
###Output
_____no_output_____
###Markdown
Option 1: Explore metadata from existing TFX pipeline runs from the AI Pipelines instance created in `lab-02` or `lab-03`. 1.1 Configure Kubernetes port forwarding. To enable access to the ML Metadata GRPC server, configure Kubernetes port forwarding. From a JupyterLab terminal, execute the following commands:
```
gcloud container clusters get-credentials [YOUR CLUSTER] --zone [YOUR CLUSTER ZONE]
kubectl port-forward service/metadata-grpc-service --namespace [YOUR NAMESPACE] 7000:8080
```
Proceed to the next step, "Connecting to ML Metadata". Option 2: Create a new AI Pipelines instance and evaluate metadata on newly triggered pipeline runs. Hosted AI Pipelines incurs cost for as long as your Kubernetes cluster is running. If you deleted your previous lab instance, proceed with the 6 steps below to deploy a new TFX pipeline and trigger runs to inspect its metadata.
###Code
import yaml
# Set `PATH` to include the directory containing TFX CLI.
PATH=%env PATH
%env PATH=/home/jupyter/.local/bin:{PATH}
###Output
_____no_output_____
###Markdown
The pipeline source can be found in the `pipeline` folder. Switch to the `pipeline` folder and compile the pipeline.
###Code
%cd pipeline
###Output
_____no_output_____
###Markdown
2.1 Create AI Platform Pipelines cluster. Navigate to the [AI Platform Pipelines](https://console.cloud.google.com/ai-platform/pipelines/clusters) page in the Google Cloud Console. Create or select an existing Kubernetes cluster (GKE) and deploy AI Platform. Make sure to select `"Allow access to the following Cloud APIs https://www.googleapis.com/auth/cloud-platform"` to allow for programmatic access to your pipeline by the Kubeflow SDK for the rest of the lab. Also, provide an `App instance name` such as "TFX-lab-04". 2.2 Configure environment settings. Update the constants below with the settings reflecting your lab environment.
- `GCP_REGION` - the compute region for AI Platform Training and Prediction
- `ARTIFACT_STORE` - the GCS bucket created during installation of AI Platform Pipelines. The bucket name starts with the `kubeflowpipelines-` prefix. Alternatively, you can create a new storage bucket to write pipeline artifacts to.
###Code
!gsutil ls
###Output
_____no_output_____
###Markdown
* `CUSTOM_SERVICE_ACCOUNT` - In the GCP Console, click on the Navigation Menu, navigate to `IAM & Admin`, then to `Service Accounts`, and use the service account starting with the prefix `'tfx-tuner-caip-service-account'`. This enables CloudTuner and the Google Cloud AI Platform extensions Tuner component to work together and allows for distributed and parallel tuning backed by AI Platform Vizier's hyperparameter search algorithm. Please see the lab setup `README` for setup instructions.
- `ENDPOINT` - set the `ENDPOINT` constant to the endpoint of your AI Platform Pipelines instance. The endpoint can be found on the [AI Platform Pipelines](https://console.cloud.google.com/ai-platform/pipelines/clusters) page in the Google Cloud Console.
1. Open the *SETTINGS* for your instance
2. Use the value of the `host` variable in the *Connect to this Kubeflow Pipelines instance from a Python client via Kubeflow Pipelines SDK* section of the *SETTINGS* window.
###Code
#TODO: Set your environment resource settings here for GCP_REGION, ARTIFACT_STORE_URI, ENDPOINT, and CUSTOM_SERVICE_ACCOUNT.
GCP_REGION = 'us-central1'
ARTIFACT_STORE_URI = 'gs://dougkelly-sandbox-kubeflowpipelines-default' #Change
ENDPOINT = '60ff837483ecde05-dot-us-central2.pipelines.googleusercontent.com' #Change
CUSTOM_SERVICE_ACCOUNT = 'tfx-tuner-caip-service-account@dougkelly-sandbox.iam.gserviceaccount.com' #Change
PROJECT_ID = !(gcloud config get-value core/project)
PROJECT_ID = PROJECT_ID[0]
# Set your resource settings as environment variables. These override the default values in pipeline/config.py.
%env GCP_REGION={GCP_REGION}
%env ARTIFACT_STORE_URI={ARTIFACT_STORE_URI}
%env CUSTOM_SERVICE_ACCOUNT={CUSTOM_SERVICE_ACCOUNT}
%env PROJECT_ID={PROJECT_ID}
###Output
_____no_output_____
###Markdown
2.3 Compile pipeline
###Code
PIPELINE_NAME = 'tfx_covertype_lab_04'
MODEL_NAME = 'tfx_covertype_classifier'
DATA_ROOT_URI = 'gs://workshop-datasets/covertype/small'
CUSTOM_TFX_IMAGE = 'gcr.io/{}/{}'.format(PROJECT_ID, PIPELINE_NAME)
RUNTIME_VERSION = '2.3'
PYTHON_VERSION = '3.7'
USE_KFP_SA=False
ENABLE_TUNING=False
%env PIPELINE_NAME={PIPELINE_NAME}
%env MODEL_NAME={MODEL_NAME}
%env DATA_ROOT_URI={DATA_ROOT_URI}
%env KUBEFLOW_TFX_IMAGE={CUSTOM_TFX_IMAGE}
%env RUNTIME_VERSION={RUNTIME_VERSION}
%env PYTHON_VERIONS={PYTHON_VERSION}
%env USE_KFP_SA={USE_KFP_SA}
%env ENABLE_TUNING={ENABLE_TUNING}
!tfx pipeline compile --engine kubeflow --pipeline_path runner.py
###Output
_____no_output_____
###Markdown
2.4 Deploy pipeline to AI Platform
###Code
!tfx pipeline create \
--pipeline_path=runner.py \
--endpoint={ENDPOINT} \
--build_target_image={CUSTOM_TFX_IMAGE}
###Output
_____no_output_____
###Markdown
(optional) If you make local changes to the pipeline, you can update the deployed package on AI Platform with the following command:
###Code
!tfx pipeline update --pipeline_path runner.py --endpoint {ENDPOINT}
###Output
_____no_output_____
###Markdown
2.5 Create and monitor pipeline run
###Code
!tfx run create --pipeline_name={PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
###Markdown
2.6 Configure Kubernetes port forwarding To enable access to the ML Metadata GRPC server, configure Kubernetes port forwarding. From a JupyterLab terminal, execute the following commands:
```
gcloud container clusters get-credentials [YOUR CLUSTER] --zone [YOUR CLUSTER ZONE]
kubectl port-forward service/metadata-grpc-service --namespace [YOUR NAMESPACE] 7000:8080
```
Connecting to ML Metadata Configure ML Metadata GRPC client
###Code
grpc_host = 'localhost'
grpc_port = 7000
connection_config = metadata_store_pb2.MetadataStoreClientConfig()
connection_config.host = grpc_host
connection_config.port = grpc_port
###Output
_____no_output_____
###Markdown
Connect to ML Metadata service
###Code
store = metadata_store.MetadataStore(connection_config)
###Output
_____no_output_____
###Markdown
Important: A full pipeline run without tuning takes about 40-45 minutes to complete. You need to wait until a pipeline run is complete before proceeding with the steps below. Exploring ML Metadata The Metadata Store uses the following data model:
- `ArtifactType` describes an artifact's type and its properties that are stored in the Metadata Store. These types can be registered on-the-fly with the Metadata Store in code, or they can be loaded into the store from a serialized format. Once a type is registered, its definition is available throughout the lifetime of the store.
- `Artifact` describes a specific instance of an ArtifactType, and its properties that are written to the Metadata Store.
- `ExecutionType` describes a type of component or step in a workflow, and its runtime parameters.
- `Execution` is a record of a component run or a step in an ML workflow, together with its runtime parameters. An Execution can be thought of as an instance of an ExecutionType. Executions are recorded every time a developer runs an ML pipeline or step.
- `Event` is a record of the relationship between Artifacts and Executions. When an Execution happens, Events record every Artifact that was used by the Execution and every Artifact that was produced. These records allow for provenance tracking throughout a workflow: by looking at all Events, MLMD knows what Executions happened and what Artifacts were created as a result, and can recurse back from any Artifact to all of its upstream inputs.
- `ContextType` describes a type of conceptual group of Artifacts and Executions in a workflow, and its structural properties. For example: projects, pipeline runs, experiments, owners.
- `Context` is an instance of a ContextType. It captures the shared information within the group. For example: project name, changelist commit id, experiment annotations. It has a user-defined unique name within its ContextType.
- `Attribution` is a record of the relationship between Artifacts and Contexts.
- `Association` is a record of the relationship between Executions and Contexts.
List the registered artifact types.
###Code
for artifact_type in store.get_artifact_types():
print(artifact_type.name)
###Output
_____no_output_____
###Markdown
Display the registered execution types.
###Code
for execution_type in store.get_execution_types():
print(execution_type.name)
###Output
_____no_output_____
###Markdown
List the registered context types.
###Code
for context_type in store.get_context_types():
print(context_type.name)
###Output
_____no_output_____
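Beyond listing types, the Events described in the data model above let you walk provenance with the same `store` object, recursing from any artifact back to its upstream inputs. A minimal sketch (the method names follow the ml-metadata `MetadataStore` API; treat the exact calls as an assumption to verify against the version installed with your instance):

```python
# Sketch: find the execution that produced the latest Schema artifact and list that execution's inputs.
schema_artifacts = store.get_artifacts_by_type(standard_artifacts.Schema.TYPE_NAME)
if schema_artifacts:
    latest = schema_artifacts[-1]
    events = store.get_events_by_artifact_ids([latest.id])
    producer_ids = [e.execution_id for e in events if e.type == metadata_store_pb2.Event.OUTPUT]
    for execution in store.get_executions_by_id(producer_ids):
        input_events = store.get_events_by_execution_ids([execution.id])
        input_ids = [e.artifact_id for e in input_events if e.type == metadata_store_pb2.Event.INPUT]
        for artifact in store.get_artifacts_by_id(input_ids):
            print('execution', execution.id, '<- input', artifact.uri)
```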
###Markdown
Visualizing TFX artifacts Retrieve data analysis and validation artifacts
###Code
with metadata.Metadata(connection_config) as store:
schema_artifacts = store.get_artifacts_by_type(standard_artifacts.Schema.TYPE_NAME)
stats_artifacts = store.get_artifacts_by_type(standard_artifacts.ExampleStatistics.TYPE_NAME)
anomalies_artifacts = store.get_artifacts_by_type(standard_artifacts.ExampleAnomalies.TYPE_NAME)
schema_file = os.path.join(schema_artifacts[-1].uri, 'schema.pbtxt')
print("Generated schame file:{}".format(schema_file))
stats_path = stats_artifacts[-1].uri
train_stats_file = os.path.join(stats_path, 'train', 'stats_tfrecord')
eval_stats_file = os.path.join(stats_path, 'eval', 'stats_tfrecord')
print("Train stats file:{}, Eval stats file:{}".format(
train_stats_file, eval_stats_file))
anomalies_path = anomalies_artifacts[-1].uri
train_anomalies_file = os.path.join(anomalies_path, 'train', 'anomalies.pbtxt')
eval_anomalies_file = os.path.join(anomalies_path, 'eval', 'anomalies.pbtxt')
print("Train anomalies file:{}, Eval anomalies file:{}".format(
train_anomalies_file, eval_anomalies_file))
###Output
_____no_output_____
###Markdown
Visualize schema
###Code
schema = tfdv.load_schema_text(schema_file)
tfdv.display_schema(schema=schema)
###Output
_____no_output_____
###Markdown
Visualize statistics Exercise: looking at the features visualized below, answer the following questions:
- Which feature transformations would you apply to each feature with TF Transform?
- Are there data quality issues with certain features that may impact your model performance? How might you deal with them?
###Code
train_stats = tfdv.load_statistics(train_stats_file)
eval_stats = tfdv.load_statistics(eval_stats_file)
tfdv.visualize_statistics(lhs_statistics=eval_stats, rhs_statistics=train_stats,
lhs_name='EVAL_DATASET', rhs_name='TRAIN_DATASET')
###Output
_____no_output_____
###Markdown
Visualize anomalies
###Code
train_anomalies = tfdv.load_anomalies_text(train_anomalies_file)
tfdv.display_anomalies(train_anomalies)
eval_anomalies = tfdv.load_anomalies_text(eval_anomalies_file)
tfdv.display_anomalies(eval_anomalies)
###Output
_____no_output_____
###Markdown
Retrieve model artifacts
###Code
with metadata.Metadata(connection_config) as store:
model_eval_artifacts = store.get_artifacts_by_type(standard_artifacts.ModelEvaluation.TYPE_NAME)
hyperparam_artifacts = store.get_artifacts_by_type(standard_artifacts.HyperParameters.TYPE_NAME)
model_eval_path = model_eval_artifacts[-1].uri
print("Generated model evaluation result:{}".format(model_eval_path))
best_hparams_path = os.path.join(hyperparam_artifacts[-1].uri, 'best_hyperparameters.txt')
print("Generated model best hyperparameters result:{}".format(best_hparams_path))
###Output
_____no_output_____
###Markdown
Return best hyperparameters
###Code
import json
from tensorflow.python.lib.io import file_io

# Latest pipeline run Tuner search space.
json.loads(file_io.read_file_to_string(best_hparams_path))['space']
# Best hyperparameter values found by the Tuner in the latest pipeline run.
json.loads(file_io.read_file_to_string(best_hparams_path))['values']
###Output
_____no_output_____
###Markdown
Visualize model evaluations Exercise: review the model evaluation results below and answer the following questions:
- Which Wilderness Area had the highest accuracy?
- Which Wilderness Area had the lowest performance? Why do you think that is? What are some steps you could take to improve your next model runs?
###Code
eval_result = tfma.load_eval_result(model_eval_path)
tfma.view.render_slicing_metrics(
eval_result, slicing_column='Wilderness_Area')
###Output
_____no_output_____ |
notebooks/TFDV_example1.ipynb | ###Markdown
Use TensorFlow Data Validation. Let's use TFDV to profile and validate the UCI Adult census income dataset, loaded below as separate train, eval, and serving splits.
###Code
import tensorflow as tf
import tensorflow_data_validation as tfdv
import os
import tempfile, urllib, zipfile
import pandas as pd
print('TF version:', tf.__version__)
print('TFDV version:', tfdv.version.__version__)
# Read train data
train_data_path="../data/adult_train.csv"
train_data=pd.read_csv(train_data_path, sep=",", na_values=["?"])
train_data.head()
# Read eval data
eval_data_path="../data/adult_eval.csv"
eval_data=pd.read_csv(eval_data_path, sep=",", na_values=["?"])
eval_data.head()
# read serving data
serving_data_path="../data/adult_serving.csv"
serving_data=pd.read_csv(serving_data_path, sep=",", na_values=["?"])
serving_data.head()
###Output
_____no_output_____
###Markdown
Step 1: Data profiling. First we'll use [tfdv.generate_statistics_from_dataframe](https://www.tensorflow.org/tfx/data_validation/api_docs/python/tfdv/generate_statistics_from_dataframe) to understand our training data.
###Code
train_stats = tfdv.generate_statistics_from_dataframe(dataframe=train_data)
###Output
_____no_output_____
###Markdown
Now let's use [tfdv.visualize_statistics](https://www.tensorflow.org/tfx/data_validation/api_docs/python/tfdv/visualize_statistics).
###Code
tfdv.visualize_statistics(train_stats)
###Output
_____no_output_____
###Markdown
Step 2: Infer a schema. In TFDV, the notion of a **schema** is different from that of a database or dataframe: it describes not only the column names and data types, but also characteristics of the data such as presence/absence of values, expected value ranges, etc. In general, TFDV uses **conservative heuristics** to infer stable data properties from the statistics in order to avoid overfitting the schema to the specific dataset. **It is strongly advised to review the inferred schema and refine it as needed**, to capture any domain knowledge about the data that TFDV's heuristics might have missed. If we compare with solutions such as Great Expectations or TDDA: the statistics (TFDV) play the role of the profiler (TDDA), and the schema (TFDV) corresponds to the validation rules (TDDA).
###Code
schema = tfdv.infer_schema(statistics=train_stats)
tfdv.display_schema(schema=schema)
###Output
_____no_output_____
###Markdown
Check evaluation data for errors. So far we've only been looking at the training data. It's important that our evaluation data is consistent with our training data, including that it uses the same schema. It's also important that the evaluation data includes examples of roughly the same ranges of values for our numerical features as our training data, so that our coverage of the loss surface during evaluation is roughly the same as during training.
###Code
# Compute stats for evaluation data
eval_stats = tfdv.generate_statistics_from_dataframe(dataframe=eval_data)
# Compare evaluation data with training data
tfdv.visualize_statistics(lhs_statistics=eval_stats, rhs_statistics=train_stats,
lhs_name='EVAL_DATASET', rhs_name='TRAIN_DATASET')
###Output
_____no_output_____
###Markdown
Step 4. Check for evaluation anomalies. Does our evaluation dataset match the schema from our training dataset? This is especially important for categorical features, where we want to identify the range of acceptable values. Key Point: What would happen if we tried to evaluate using data with categorical feature values that were not in our training dataset? What about numeric features that are outside the ranges in our training dataset?
###Code
# Check eval data for errors by validating the eval data stats using the previously inferred schema.
anomalies = tfdv.validate_statistics(statistics=eval_stats, schema=schema)
tfdv.display_anomalies(anomalies)
###Output
_____no_output_____
###Markdown
Fix evaluation anomalies in the schema. In the eval data, the column `native-country` contains the value `Holand-Netherlands`, which does not exist in the training data. We can fix the anomaly by adding this value to the feature's domain in the schema.
###Code
# Add the new value to the domain of the feature native-country.
nativecountry_type_domain = tfdv.get_domain(schema, 'native-country')
nativecountry_type_domain.value.append('Holand-Netherlands')
# Validate eval stats after updating the schema
updated_anomalies = tfdv.validate_statistics(statistics=eval_stats, schema=schema)
tfdv.display_anomalies(updated_anomalies)
###Output
_____no_output_____
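The fix above handles an unexpected categorical value. For the numeric half of the Key Point raised earlier, an explicit range can be attached to a numeric feature's domain so that out-of-range values are reported as anomalies. A minimal sketch (the `age` bounds are illustrative assumptions, and `tfdv.set_domain` with `schema_pb2.IntDomain` should be checked against your TFDV version):

```python
from tensorflow_metadata.proto.v0 import schema_pb2

# Illustrative bounds (assumption): flag 'age' values outside [17, 90] as anomalies
tfdv.set_domain(schema, 'age', schema_pb2.IntDomain(name='age', min=17, max=90))
age_anomalies = tfdv.validate_statistics(statistics=eval_stats, schema=schema)
tfdv.display_anomalies(age_anomalies)
```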
###Markdown
We also split off a 'serving' dataset for this example, so we should check that too. By default all datasets in a pipeline should use the same schema, but there are often exceptions. For example, in supervised learning we need to include labels in our dataset, but when we serve the model for inference the labels will not be included. In some cases introducing slight schema variations is necessary. **Environments** can be used to express such requirements. In particular, features in the schema can be associated with a set of environments using `default_environment`, `in_environment` and `not_in_environment`. For example, in this dataset the `income` feature is included as the label for training, but it's missing in the serving data. Without an environment specified, it will show up as an anomaly.
###Code
serving_stats = tfdv.generate_statistics_from_dataframe(dataframe=serving_data)
serving_anomalies = tfdv.validate_statistics(serving_stats, schema)
tfdv.display_anomalies(serving_anomalies)
# Add new value to the domain of feature workclass.
workclass_type_domain = tfdv.get_domain(schema, 'workclass')
workclass_type_domain.value.append('public')
serving_anomalies = tfdv.validate_statistics(serving_stats, schema)
tfdv.display_anomalies(serving_anomalies)
###Output
_____no_output_____
###Markdown
Now we just have the `income` feature (which is our label) showing up as an anomaly ('Column dropped'). Of course we don't expect to have labels in our serving data, so let's tell TFDV to ignore that.
###Code
# All features are by default in both TRAINING and SERVING environments.
schema.default_environment.append('TRAINING')
schema.default_environment.append('SERVING')
# Specify that the 'income' feature (the label) is not in the SERVING environment.
tfdv.get_feature(schema, 'income').not_in_environment.append('SERVING')
serving_anomalies_with_env = tfdv.validate_statistics(serving_stats, schema, environment='SERVING')
tfdv.display_anomalies(serving_anomalies_with_env)
###Output
_____no_output_____
###Markdown
Freeze the schema. Now that the schema has been reviewed and curated, we will store it in a file to reflect its "frozen" state. TFDV serializes the schema with the **protobuf library**, a common format for describing static ML artifacts (data structures, transformation schemas, frozen models…).
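In a later run, the frozen schema written by the next cell can be loaded back and reused to validate fresh data without re-inferring it. A minimal sketch (it assumes the write cell below has already produced `schema_file`, and that `tfdv.load_schema_text` is available in your TFDV version):

```python
# Reload the frozen schema and validate the serving split against it again
reloaded_schema = tfdv.load_schema_text(schema_file)
reloaded_stats = tfdv.generate_statistics_from_dataframe(dataframe=serving_data)
tfdv.display_anomalies(
    tfdv.validate_statistics(reloaded_stats, reloaded_schema, environment='SERVING'))
```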
###Code
from tensorflow.python.lib.io import file_io
from google.protobuf import text_format
output_dir="../data"
schema_file = os.path.join(output_dir, 'schema.pbtxt')
tfdv.write_schema_text(schema, schema_file)
!cat {schema_file}
###Output
feature {
name: "age"
type: INT
presence {
min_fraction: 1.0
min_count: 1
}
shape {
dim {
size: 1
}
}
}
feature {
name: "workclass"
value_count {
min: 1
max: 1
}
type: BYTES
domain: "workclass"
presence {
min_count: 1
}
}
feature {
name: "fnlwgt"
type: INT
presence {
min_fraction: 1.0
min_count: 1
}
shape {
dim {
size: 1
}
}
}
feature {
name: "education"
type: BYTES
domain: "education"
presence {
min_fraction: 1.0
min_count: 1
}
shape {
dim {
size: 1
}
}
}
feature {
name: "education-num"
type: INT
presence {
min_fraction: 1.0
min_count: 1
}
shape {
dim {
size: 1
}
}
}
feature {
name: "marital-status"
type: BYTES
domain: "marital-status"
presence {
min_fraction: 1.0
min_count: 1
}
shape {
dim {
size: 1
}
}
}
feature {
name: "occupation"
value_count {
min: 1
max: 1
}
type: BYTES
domain: "occupation"
presence {
min_count: 1
}
}
feature {
name: "relationship"
type: BYTES
domain: "relationship"
presence {
min_fraction: 1.0
min_count: 1
}
shape {
dim {
size: 1
}
}
}
feature {
name: "race"
type: BYTES
domain: "race"
presence {
min_fraction: 1.0
min_count: 1
}
shape {
dim {
size: 1
}
}
}
feature {
name: "sex"
type: BYTES
domain: "sex"
presence {
min_fraction: 1.0
min_count: 1
}
shape {
dim {
size: 1
}
}
}
feature {
name: "capital-gain"
type: INT
presence {
min_fraction: 1.0
min_count: 1
}
shape {
dim {
size: 1
}
}
}
feature {
name: "capital-loss"
type: INT
presence {
min_fraction: 1.0
min_count: 1
}
shape {
dim {
size: 1
}
}
}
feature {
name: "hours-per-week"
type: INT
presence {
min_fraction: 1.0
min_count: 1
}
shape {
dim {
size: 1
}
}
}
feature {
name: "native-country"
value_count {
min: 1
max: 1
}
type: BYTES
domain: "native-country"
presence {
min_count: 1
}
}
feature {
name: "income"
type: BYTES
domain: "income"
presence {
min_fraction: 1.0
min_count: 1
}
not_in_environment: "SERVING"
shape {
dim {
size: 1
}
}
}
string_domain {
name: "workclass"
value: "Federal-gov"
value: "Local-gov"
value: "Never-worked"
value: "Private"
value: "Self-emp-inc"
value: "Self-emp-not-inc"
value: "State-gov"
value: "Without-pay"
value: "public"
}
string_domain {
name: "education"
value: "10th"
value: "11th"
value: "12th"
value: "1st-4th"
value: "5th-6th"
value: "7th-8th"
value: "9th"
value: "Assoc-acdm"
value: "Assoc-voc"
value: "Bachelors"
value: "Doctorate"
value: "HS-grad"
value: "Masters"
value: "Preschool"
value: "Prof-school"
value: "Some-college"
}
string_domain {
name: "marital-status"
value: "Divorced"
value: "Married-AF-spouse"
value: "Married-civ-spouse"
value: "Married-spouse-absent"
value: "Never-married"
value: "Separated"
value: "Widowed"
}
string_domain {
name: "occupation"
value: "Adm-clerical"
value: "Armed-Forces"
value: "Craft-repair"
value: "Exec-managerial"
value: "Farming-fishing"
value: "Handlers-cleaners"
value: "Machine-op-inspct"
value: "Other-service"
value: "Priv-house-serv"
value: "Prof-specialty"
value: "Protective-serv"
value: "Sales"
value: "Tech-support"
value: "Transport-moving"
}
string_domain {
name: "relationship"
value: "Husband"
value: "Not-in-family"
value: "Other-relative"
value: "Own-child"
value: "Unmarried"
value: "Wife"
}
string_domain {
name: "race"
value: "Amer-Indian-Eskimo"
value: "Asian-Pac-Islander"
value: "Black"
value: "Other"
value: "White"
}
string_domain {
name: "sex"
value: "Female"
value: "Male"
}
string_domain {
name: "native-country"
value: "Cambodia"
value: "Canada"
value: "China"
value: "Columbia"
value: "Cuba"
value: "Dominican-Republic"
value: "Ecuador"
value: "El-Salvador"
value: "England"
value: "France"
value: "Germany"
value: "Greece"
value: "Guatemala"
value: "Haiti"
value: "Honduras"
value: "Hong"
value: "Hungary"
value: "India"
value: "Iran"
value: "Ireland"
value: "Italy"
value: "Jamaica"
value: "Japan"
value: "Laos"
value: "Mexico"
value: "Nicaragua"
value: "Outlying-US(Guam-USVI-etc)"
value: "Peru"
value: "Philippines"
value: "Poland"
value: "Portugal"
value: "Puerto-Rico"
value: "Scotland"
value: "South"
value: "Taiwan"
value: "Thailand"
value: "Trinadad&Tobago"
value: "United-States"
value: "Vietnam"
value: "Yugoslavia"
value: "Holand-Netherlands"
}
string_domain {
name: "income"
value: "<=50K"
value: ">50K"
}
default_environment: "TRAINING"
default_environment: "SERVING"
|
notebooks/nlp/russian/ru_tokenizer_100_nospaces_lexicon.ipynb | ###Markdown
Exploring Russian Lexicon-based Tokenizer with no spaces
###Code
import os, sys
cwd = os.getcwd()
project_path = cwd[:cwd.find('pygents')+7]
if project_path not in sys.path: sys.path.append(project_path)
os.chdir(project_path)
#from importlib import reload # Python 3.4+
import pickle
import pandas as pd
#force reimport
if 'pygents.util' in sys.modules:
del sys.modules['pygents.util']
if 'pygents.text' in sys.modules:
del sys.modules['pygents.text']
if 'pygents.plot' in sys.modules:
del sys.modules['pygents.plot']
if 'pygents.token' in sys.modules:
del sys.modules['pygents.token']
if 'pygents.token_plot' in sys.modules:
del sys.modules['pygents.token_plot']
from pygents.token import *
from pygents.text import *
from pygents.util import *
from pygents.plot import plot_bars, plot_dict, matrix_plot
from pygents.token_plot import *
path = 'data/corpora/Russian/'
test_df = pd.read_csv(os.path.join(path,'magicdata/zh_en_ru_100/CORPUS_ZH_EN_RU.txt'),delimiter='\t')
test_texts = list(test_df['ru'])
print(len(test_texts))
test_df[['ru']]
for text in test_texts:
print(text)
del_tokenizer = DelimiterTokenizer()
#get raw lexicon list
ru_lex = list(pd.read_csv("https://raw.githubusercontent.com/aigents/aigents-java/master/lexicon_russian.txt",sep='\t',header=None).to_records(index=False))
print(len(ru_lex))
#debug raw lexicon
print(max(ru_lex,key=lambda item:item[1]))
ru_lex_dict = weightedlist2dict(ru_lex,lower=False) # no case-insensitive merge
print(len(ru_lex_dict))
print(ru_lex_dict['не'])
print(ru_lex_dict['Не'])
print(ru_lex_dict['НЕ'])
# merge and get top weight
ru_lex_dict = weightedlist2dict(ru_lex,lower=True) # with case-insensitive merge
print(len(ru_lex_dict))
print(ru_lex_dict['не'])
top_weight = max([ru_lex_dict[key] for key in ru_lex_dict],key=lambda item:item)
print(top_weight)
# add delimiters to the list
ru_lex_delimited = ru_lex + [(i, top_weight) for i in list(delimiters)]
print(len(delimiters))
print(len(ru_lex_delimited))
# no delimiters
filter_thresholds = [0,0.00001,0.0001,0.001,0.01]
for t in filter_thresholds:
lex = listofpairs_compress_with_loss(ru_lex,t) if t > 0 else ru_lex
ru_lex0_tokenizer = LexiconIndexedTokenizer(lexicon=lex,sortmode=0,cased=True)
ru_lex1_tokenizer = LexiconIndexedTokenizer(lexicon=lex,sortmode=1,cased=True)
ru_lex2_tokenizer = LexiconIndexedTokenizer(lexicon=lex,sortmode=2,cased=True)
print(t,ru_lex0_tokenizer.count_params())
print(evaluate_tokenizer_f1(test_texts,del_tokenizer,ru_lex0_tokenizer,nospaces=True,debug=False))#sort by len
print(evaluate_tokenizer_f1(test_texts,del_tokenizer,ru_lex1_tokenizer,nospaces=True,debug=False))#sort by freq
print(evaluate_tokenizer_f1(test_texts,del_tokenizer,ru_lex2_tokenizer,nospaces=True,debug=False))#sort by len and freq
print()
# with delimiters
filter_thresholds = [0,0.00001,0.0001,0.001,0.01]
for t in filter_thresholds:
lex = listofpairs_compress_with_loss(ru_lex_delimited,t) if t > 0 else ru_lex_delimited
ru_lex0_tokenizer = LexiconIndexedTokenizer(lexicon=lex,sortmode=0,cased=True)
ru_lex1_tokenizer = LexiconIndexedTokenizer(lexicon=lex,sortmode=1,cased=True)
ru_lex2_tokenizer = LexiconIndexedTokenizer(lexicon=lex,sortmode=2,cased=True)
print(t,ru_lex0_tokenizer.count_params())
print(evaluate_tokenizer_f1(test_texts,del_tokenizer,ru_lex0_tokenizer,nospaces=True,debug=False))#sort by len
print(evaluate_tokenizer_f1(test_texts,del_tokenizer,ru_lex1_tokenizer,nospaces=True,debug=False))#sort by freq
print(evaluate_tokenizer_f1(test_texts,del_tokenizer,ru_lex2_tokenizer,nospaces=True,debug=False))#sort by len and freq
print()
ru_lex0_tokenizer = LexiconIndexedTokenizer(lexicon=ru_lex_delimited,sortmode=0,cased=True)
for text in test_texts:
expected = del_tokenizer.tokenize(text)
remove_all(expected,' ')
actual = ru_lex0_tokenizer.tokenize(text.replace(' ',''))
f1 = calc_f1(expected,actual)
if f1 < 1:
print(expected)
print(actual)
print(round(f1,2))
###Output
['Как', 'насчет', 'медицинской', 'страховки', '?', 'В', 'случае', 'вашей', 'семьи', ',', 'её', 'можно', 'оформить', 'и', 'взрослому', 'и', 'ребенку', '.']
['Как', 'насчет', 'медицинской', 'страховки', '?', 'Вс', 'луча', 'ева', 'ш', 'ей', 'семьи', ',', 'её', 'можно', 'оформить', 'ив', 'з', 'росло', 'му', 'ире', 'бен', 'ку', '.']
0.54
['Для', 'тех', ',', 'у', 'кого', 'есть', 'страховка', ',', 'по', 'договору', 'страхования', 'они', 'получат', 'компенсацию', 'в', 'размере', '300', 'тысяч', 'рублей', '.']
['Для', 'тех', ',', 'ук', 'ого', 'есть', 'страховка', ',', 'подо', 'говору', 'страхования', 'они', 'получат', 'компенсацию', 'враз', 'мере', '300', 'тысяч', 'рублей', '.']
0.7
['На', 'самом', 'деле', ',', 'это', 'явление', 'действительно', 'очень', 'распространено', ',', 'например', ',', 'для', 'страхования', 'от', 'несчастных', 'случаев', ',', 'чем', 'больше', 'вы', 'покупаете', ',', 'тем', 'больше', 'страхуете', '.']
['Наса', 'мо', 'м', 'деле', ',', 'это', 'явление', 'действительно', 'очень', 'распространено', ',', 'например', ',', 'для', 'страхования', 'отнес', 'частных', 'случаев', ',', 'чем', 'больше', 'вып', 'оку', 'па', 'е', 'те', ',', 'тем', 'больше', 'страху', 'е', 'те', '.']
0.67
['Машину', 'нужно', 'покупать', 'в', 'полном', 'объеме', ',', 'а', 'дом', 'можно', 'купить', 'в', 'кредит', '.']
['Машину', 'нужно', 'покупать', 'в', 'полном', 'объеме', ',', 'адом', 'можно', 'купить', 'вк', 'ред', 'ит', '.']
0.71
['Вы', 'можете', 'купить', 'страховку', ',', 'страховка', ',', 'конечно', 'же', ',', 'делится', 'на', 'множество', 'категорий', '.']
['Вы', 'можете', 'купить', 'страховку', ',', 'страховка', ',', 'конечно', 'же', ',', 'делится', 'нам', 'нож', 'ест', 'во', 'категорий', '.']
0.81
['Послушайте', ',', 'я', 'не', 'знаю', ',', 'слышали', 'ли', 'вы', 'когда-нибудь', 'об', 'этом', ',', 'это', 'страховка', 'в', 'Сбере', '.']
['Послушайте', ',', 'ян', 'е', 'знаю', ',', 'слышали', 'ли', 'вы', 'когда-нибудь', 'об', 'этом', ',', 'это', 'страховка', 'вС', 'бе', 'ре', '.']
0.76
['Покупка', 'дома', 'на', 'самом', 'деле', 'является', 'инвестицией', '.']
['Покупка', 'дома', 'наса', 'мо', 'м', 'деле', 'является', 'ин', 'вести', 'ци', 'ей', '.']
0.5
['Вы', 'когда-нибудь', 'узнавали', 'об', 'обучении', 'в', 'Альфе', '?']
['Вы', 'когда-нибудь', 'узнавали', 'обо', 'бу', 'чен', 'и', 'ивА', 'ль', 'фе', '?']
0.42
['Если', 'он', 'депонирован', 'в', 'банке', ',', 'каков', 'результат', 'сложных', 'процентов', '?']
['Если', 'он', 'депо', 'ни', 'ров', 'ан', 'в', 'банке', ',', 'каков', 'результат', 'сложных', 'процентов', '?']
0.8
['Однако', ',', 'на', 'самом', 'деле', 'это', 'побуждает', 'вас', 'покупать', 'коммерческое', 'страхование', 'и', 'страхование', 'жизни', '.']
['Однако', ',', 'наса', 'мо', 'м', 'деле', 'это', 'побуждает', 'вас', 'покупать', 'коммерческое', 'страхование', 'ист', 'ра', 'хо', 'вани', 'ежи', 'зн', 'и', '.']
0.63
['Все', 'люди', ',', 'которые', 'продают', 'страховки', ',', 'полагаются', 'на', 'свои', 'связи', '.']
['Все', 'люди', ',', 'которые', 'продают', 'страховки', ',', 'полагаются', 'нас', 'во', 'ис', 'в', 'яз', 'и', '.']
0.67
['Ну', ',', 'социальное', 'страхование', '—', 'это', 'тоже', 'страхование', ',', 'и', 'страхование', 'вашего', 'автомобиля', '—', 'тоже', 'страхование', '.']
['Ну', ',', 'социальное', 'страхование', '—', 'этот', 'о', 'жест', 'ра', 'хо', 'вани', 'е', ',', 'ист', 'ра', 'хо', 'вани', 'ева', 'ш', 'его', 'автомобиля', '—', 'тоже', 'страхование', '.']
0.52
['Но', 'деньги', 'из', 'банков', ',', 'если', 'их', 'брать', ',', '—', 'это', 'своего', 'рода', 'кредит', '.']
['Но', 'деньги', 'изба', 'нк', 'ов', ',', 'если', 'их', 'брать', ',', '—', 'это', 'своего', 'рода', 'кредит', '.']
0.84
['Теперь', ',', 'похоже', ',', 'она', 'приближается', 'к', 'отрицательной', 'процентной', 'ставке', '.']
['Теперь', ',', 'похоже', ',', 'она', 'приближается', 'кот', 'риц', 'ат', 'ель', 'ной', 'процентной', 'ставке', '.']
0.72
['Я', 'действительно', 'никогда', 'не', 'пытался', 'занять', 'деньги', 'в', 'банке', '.']
['Яд', 'ей', 'ст', 'вите', 'ль', 'но', 'никогда', 'не', 'пытался', 'занять', 'деньги', 'в', 'банке', '.']
0.67
['Он', 'не', 'требует', 'процентов', ',', 'поэтому', 'просто', 'дает', 'деньги', 'взаймы', 'и', 'возвращает', 'их', 'позже', '.']
['Он', 'нет', 'ре', 'бу', 'ет', 'процентов', ',', 'поэтому', 'просто', 'дает', 'деньги', 'взаймы', 'ив', 'оз', 'вр', 'ащае', 'тих', 'позже', '.']
0.59
['Если', 'у', 'вас', 'плохая', 'кредитная', 'информация', ',', 'он', 'не', 'даст', 'вам', 'деньги', '.']
['Если', 'ув', 'ас', 'плохая', 'кредит', 'на', 'я', 'информация', ',', 'он', 'не', 'даст', 'вам', 'деньги', '.']
0.71
['Он', 'купил', 'его', 'в', 'кредит', 'или', 'заплатил', 'полностью', '?']
['Он', 'купил', 'его', 'вк', 'ред', 'ит', 'или', 'заплатил', 'полностью', '?']
0.74
['Все', 'виды', 'банков', ',', 'а', 'также', 'пять', 'крупных', 'банков', '.']
['Все', 'виды', 'банков', ',', 'атак', 'же', 'пять', 'крупных', 'банков', '.']
0.8
['Являются', 'ли', 'казначейские', 'облигации', 'видом', 'ценных', 'бумаг', '?']
['Являются', 'лика', 'зн', 'а', 'чей', 'ск', 'ие', 'облигации', 'видом', 'ценных', 'бумаг', '?']
0.6
['На', 'самом', 'деле', ',', 'торговля', 'на', 'бирже', 'сама', 'по', 'себе', 'считается', 'фактором', 'относительно', 'высокого', 'риска', '.']
['Наса', 'мо', 'м', 'деле', ',', 'торговля', 'на', 'бирже', 'сама', 'пос', 'е', 'бес', 'читается', 'фактором', 'относительно', 'высокого', 'риска', '.']
0.65
['Короче', ',', 'прибыль', 'не', 'будет', 'очень', 'высокой', ',', 'она', 'должна', 'быть', 'около', 'процентной', 'ставки', '.']
['Короче', ',', 'прибыль', 'небу', 'де', 'точен', 'ь', 'высокой', ',', 'она', 'должна', 'быть', 'около', 'процентной', 'ставки', '.']
0.77
['Затем', 'он', 'используется', 'для', 'спекуляций', 'с', 'недвижимостью', '.', 'После', 'того', ',', 'как', 'дом', 'был', 'оценен', ',', 'продайте', 'его', 'и', 'погасите', 'кредит', '.', 'Может', 'быть', ',', 'можно', 'было', 'бы', 'заработать', 'сотни', 'тысяч', 'рублей', '.']
['Затем', 'они', 'сп', 'о', 'ль', 'зует', 'ся', 'для', 'спекуляций', 'сне', 'д', 'ви', 'ж', 'им', 'ос', 'ть', 'ю', '.', 'После', 'того', ',', 'как', 'дом', 'было', 'ценен', ',', 'продай', 'те', 'его', 'и', 'погас', 'ит', 'е', 'кредит', '.', 'Может', 'быть', ',', 'можно', 'было', 'бы', 'заработать', 'сотни', 'тысяч', 'рублей', '.']
0.63
['Методы', 'ценообразования', 'для', 'обменных', 'курсов', 'включают', 'прямое', 'ценообразование', 'и', 'косвенное', 'ценообразование', '.']
['Методы', 'ценообразования', 'для', 'обменных', 'курсов', 'включают', 'прямое', 'цен', 'о', 'образование', 'ик', 'ос', 'вен', 'но', 'е', 'цен', 'о', 'образование', '.']
0.52
['Так', 'или', 'иначе', ',', 'это', 'инвестиционное', 'поведение', ',', 'инвестиционная', 'модель', 'и', 'инвестиционный', 'подход', '.']
['Таки', 'ли', 'иначе', ',', 'это', 'ин', 'вести', 'ци', 'он', 'но', 'е', 'поведение', ',', 'инвестиционная', 'модель', 'и', 'инвестиционный', 'подход', '.']
0.67
['Что', 'делать', ',', 'если', 'у', 'меня', 'есть', 'лишние', 'деньги', ',', 'чтобы', 'купить', 'квартиру', 'в', 'качестве', 'инвестиции', '?']
['Что', 'делать', ',', 'если', 'умен', 'я', 'естьли', 'ш', 'ни', 'еде', 'нь', 'ги', ',', 'чтобы', 'купить', 'квартиру', 'вк', 'а', 'че', 'ст', 'ве', 'инвестиции', '?']
0.5
['Наличие', 'кредита', 'также', 'означает', 'наличие', 'мотивации', 'делать', 'деньги', '.', 'Я', 'согласен', 'с', 'этим', '.']
['Наличие', 'кредита', 'также', 'означает', 'наличием', 'от', 'ива', 'ци', 'иде', 'лат', 'ь', 'деньги', '.', 'Я', 'согласен', 'с', 'этим', '.']
0.69
['Удобно', 'ли', 'зарегистрироваться', 'в', 'интернет-банкинге', '?']
['Удобно', 'лиза', 'регистрировать', 'ся', 'винт', 'ер', 'нет', '-', 'банки', 'нг', 'е', '?']
0.22
['Откуда', 'у', 'вас', 'деньги', ',', 'которые', 'вы', 'вложили', 'в', 'прошлый', 'раз', '?']
['Откуда', 'ув', 'ас', 'деньги', ',', 'которые', 'вы', 'вложили', 'в', 'прошлый', 'раз', '?']
0.83
['Изменения', 'в', 'банковской', 'политике', 'не', 'позволяют', 'им', 'кредитовать', '.']
['Изменения', 'в', 'банковской', 'политике', 'не', 'позволяют', 'им', 'кредитов', 'ать', '.']
0.84
['Кажется', ',', 'наша', 'страховая', 'компания', 'имеет', 'гарантированную', 'процентную', 'ставку', '2,8', 'процента', '.']
['Кажется', ',', 'наша', 'страховая', 'компания', 'имеет', 'гарантирован', 'ну', 'ю', 'процентную', 'ставку', '2', ',', '8', 'процента', '.']
0.71
['Многие', 'люди', 'испытывают', 'отвращение', 'к', 'страховке', '.']
['Многие', 'люди', 'испытывают', 'отвращение', 'кс', 'трах', 'о', 'вк', 'е', '.']
0.59
['Здорово', 'начать', 'управление', 'капиталом', 'за', 'пять', 'лет', ',', 'потому', 'что', 'управление', 'капиталом', 'само', 'по', 'себе', 'очень', 'просто', '.']
['Здорово', 'начать', 'управление', 'капиталом', 'зап', 'ять', 'лет', ',', 'потому', 'что', 'управление', 'капиталом', 'само', 'пос', 'е', 'бе', 'очень', 'просто', '.']
0.76
['Это', 'означает', ',', 'что', 'вы', 'берете', 'небольшую', 'часть', 'своих', 'расходов', 'на', 'проживание', 'в', 'качестве', 'сбережений', '.']
['Это', 'означает', ',', 'что', 'выберет', 'е', 'небольшую', 'часть', 'своих', 'расходов', 'напр', 'о', 'жива', 'ни', 'е', 'вк', 'а', 'че', 'ст', 'вес', 'бе', 'реже', 'ни', 'й', '.']
0.44
['Что', 'касается', 'фонда', 'облигаций', ',', 'то', 'он', 'предлагает', 'относительно', 'стабильную', 'прибыль', ',', 'аналогичную', 'срочному', 'депозиту', ',', 'но', 'с', 'большими', 'преимуществами', '.']
['Что', 'касается', 'фонда', 'облигаций', ',', 'тоо', 'нп', 'ред', 'ла', 'га', 'е', 'тот', 'носитель', 'нос', 'та', 'бил', 'ь', 'ну', 'ю', 'прибыль', ',', 'аналогичную', 'срочном', 'уд', 'е', 'поз', 'ит', 'у', ',', 'нос', 'большими', 'преимуществами', '.']
0.44
['Сколько', 'таких', 'актов', 'приемки', 'теоретически', 'можно', 'выдать', '.']
['Сколько', 'таких', 'актов', 'приемки', 'теоретическим', 'ож', 'новы', 'дать', '.']
0.59
['Это', 'означает', ',', 'что', 'у', 'каждого', 'из', 'нас', 'должен', 'быть', 'свой', 'способ', 'распоряжаться', 'деньгами', ',', 'независимо', 'от', 'того', ',', 'каким', 'образом', ',', 'независимо', 'от', 'того', ',', 'сколько', 'денег', 'у', 'нас', 'есть', '.']
['Это', 'означает', ',', 'что', 'ук', 'аж', 'до', 'го', 'из', 'нас', 'должен', 'быть', 'свой', 'способ', 'распоряжаться', 'деньгами', ',', 'независимо', 'оттого', ',', 'каким', 'образом', ',', 'независимо', 'оттого', ',', 'сколько', 'денег', 'у', 'нас', 'есть', '.']
0.81
['На', 'самом', 'деле', ',', 'это', 'считается', 'хорошим', 'способом', 'инвестирования', ',', 'дождаться', 'повышения', 'курса', '.']
['Наса', 'мо', 'м', 'деле', ',', 'это', 'считается', 'хорошим', 'способом', 'инвестирования', ',', 'дождаться', 'повышения', 'курса', '.']
0.83
['На', 'самом', 'деле', ',', 'их', 'финансовому', 'менеджменту', 'тоже', 'стоит', 'научиться', '.']
['Наса', 'мо', 'м', 'деле', ',', 'их', 'финансовому', 'менеджмент', 'у', 'тоже', 'стоит', 'научиться', '.']
0.67
['Возможно', ',', 'это', 'просто', 'поможет', 'вам', 'сохранить', 'деньги', ',', 'но', 'поведение', 'при', 'инвестировании', 'зависит', 'от', 'вас', 'самих', '.']
['Возможно', ',', 'это', 'просто', 'поможет', 'вам', 'сохранить', 'деньги', ',', 'но', 'поведение', 'при', 'ин', 'вести', 'ров', 'ани', 'из', 'ав', 'ис', 'ит', 'отв', 'ас', 'самих', '.']
0.67
['Если', 'вы', 'хотите', 'купить', 'машину', ',', 'дом', 'или', 'что-то', 'еще', ',', 'но', 'у', 'вас', 'недостаточно', 'денег', ',', 'вы', 'будете', 'просить', 'кредит', 'в', 'банке', ',', 'верно', '?']
['Если', 'вы', 'хотите', 'купить', 'машину', ',', 'дом', 'или', 'что-то', 'еще', ',', 'но', 'ув', 'ас', 'недостаточно', 'денег', ',', 'вы', 'будете', 'просить', 'кредит', 'в', 'банке', ',', 'верно', '?']
0.92
|
dynamic visualization with plotly.ipynb | ###Markdown
step 4:
* date: value column
* symbol: value column
* open: value column
* close: value column
* low: value column
* high: value column
* volume: value column

Justification: According to Munzner's book, a key in a flat table must have no duplicates [page 34, chapter 2.6.1.1]. Both the date column and the symbol column contain duplicate values, so neither is a key on its own. The remaining five columns are quantitative and also contain duplicates, so they are value columns rather than keys. The only candidate key is the combination of the date and symbol columns.

Step 5: An interesting task is to discover the trend of a particular stock's price. The action is to derive and discover the dataset, and the target is the trend of the stock price (open, close, high and low) [slide page 17, lecture5-tasks].

Step 6: To visualize the task, I plan to use a candlestick chart and a line plot to encode the data.

Candlestick chart: the candlestick chart is specialized for financial data. It encodes five columns (date, open, high, low and close), using lines as marks and vertical position as the channel.

Line plot: for this plot I first derive the original dataset by creating a new column called "average stock price", the average of the high and low prices, and use it to represent the stock price. The plot encodes the date and average stock price columns. The marks are points and the channel is vertical position.
###Code
# step 7:candlestick plot
from datetime import datetime
# filter the data with symbol=WAT
WAT_price=stock_price.query('symbol=="WAT"')
# generate data for plot candelstick plot
trace = go.Ohlc(x=WAT_price['date'],
open=WAT_price['open'],
high=WAT_price['high'],
low=WAT_price['low'],
close=WAT_price['close'])
data = [trace]
# generate x-axis,y-axis and title
layout = go.Layout(title='WAT stock price trend in candle stick chart',xaxis=dict(title='date'),yaxis=dict(title='Stock price'))
# combine data and layout into a dictionary
fig = dict(data=data, layout=layout)
iplot(fig, filename='simple_candlestick')
###Output
_____no_output_____
###Markdown
In this plot I chose the candlestick chart as the visual encoding. The x-axis represents time and the y-axis shows a box-like glyph, so users can easily read the open, close, high and low prices for a given day. There is also a range slider at the bottom that lets users select a period of time and inspect the stock price in detail. From this plot we can see that the WAT stock grew steadily from Jan 2010 to Dec 2016, which may imply that this company has a promising future; if I were the broker I would suggest that my customer purchase this stock. The most important thing for users interested in the stock market is to find a stock with huge potential, so my task is to discover not only the price but also the trend of a given stock. The candlestick chart is a professional plot because it not only includes all the information for a stock, it also uses color as a channel to show how the price changed: green means a gain and red means a loss.
###Code
# step 7:line plot
# filter the data with symbol=WAT
WAT_price=stock_price.query('symbol=="WAT"')
# calculate the average between low and high
WAT_price['average_stock_price']=(WAT_price['high']+WAT_price['low'])/2
#
trace = go.Scatter(x = WAT_price['date'],y = WAT_price['average_stock_price'])
data = [trace]
# generate x-axis,y-axis and title
layout = go.Layout(title='WAT stock price trend in line plot',xaxis=dict(title='date'),yaxis=dict(title='average stock price'))
# combine data and layout into a dictionary
fig = dict(data=data, layout=layout)
iplot(fig, filename='basic-line')
###Output
_____no_output_____ |
code/notebooks/python/.ipynb_checkpoints/Matando a DT-checkpoint.ipynb | ###Markdown
Matando a DT (Killing the Decision Tree)

In this notebook I will show that the Decision Tree (DT) does not benefit from RFF or Nystroem feature maps, while other models do.
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from time import time
import math
# Import datasets, classifiers and performance metrics
from sklearn import datasets
#from sklearn import pipeline
#from sklearn.kernel_approximation import (RBFSampler,
# Nystroem)
###Output
_____no_output_____
###Markdown
Plain DT, Logit and SVM (no kernel approximation)
###Code
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC
digits = datasets.load_digits()
data = digits.data
target = digits.target
N = data.shape[0]
prop_train = 2 / 3
N_train = math.ceil(N * prop_train)
N_test = N - N_train
# This is just a quick test
data /= 16
data -= data.mean(axis = 0)
data_train = data[:N_train]
data_test = data[N_train:]
target_train = target[:N_train]
target_test = target[N_train:]
dtc = DecisionTreeClassifier()
lg = LogisticRegression(C = 1, multi_class = 'multinomial',
solver = 'lbfgs')
lsvc = LinearSVC()
dtc.fit(data_train, target_train)
lg.fit(data_train, target_train)
lsvc.fit(data_train, target_train)
dtc_train_score = dtc.score(data_train, target_train)
dtc_test_score = dtc.score(data_test, target_test)
lg_train_score = lg.score(data_train, target_train)
lg_test_score = lg.score(data_test, target_test)
lsvc_train_score = lsvc.score(data_train, target_train)
lsvc_test_score = lsvc.score(data_test, target_test)
dtc_train_score, dtc_test_score
lg_train_score, lg_test_score
lsvc_train_score, lsvc_test_score
###Output
_____no_output_____
###Markdown
Conclusions

- All three models generalize on their own.
- From worst to best, the ranking is:
  1. Decision Tree
  2. Linear SVM
  3. Logistic Regression

DT, Logit and SVM with RFF
###Code
from sklearn.kernel_approximation import RBFSampler
from sklearn import pipeline
feature_map_fourier = RBFSampler(gamma=.2, random_state=1)
dtc_rff = pipeline.Pipeline([("feature_map", feature_map_fourier),
("dtc", DecisionTreeClassifier())])
lg_rff = pipeline.Pipeline([("feature_map", feature_map_fourier),
("lg", LogisticRegression(C = 1, multi_class = 'multinomial',
solver = 'lbfgs'))])
lsvc_rff = pipeline.Pipeline([("feature_map", feature_map_fourier),
("lsvc", LinearSVC())])
sample_sizes = 30 * np.arange(1, 30)
dtc_scores = []
lg_scores = []
lsvc_scores = []
for D in sample_sizes:
dtc_rff.set_params(feature_map__n_components=D)
lg_rff.set_params(feature_map__n_components=D)
lsvc_rff.set_params(feature_map__n_components=D)
dtc_rff.fit(data_train, target_train)
lg_rff.fit(data_train, target_train)
lsvc_rff.fit(data_train, target_train)
dtc_rff_score = dtc_rff.score(data_test, target_test)
lg_rff_score = lg_rff.score(data_test, target_test)
lsvc_rff_score = lsvc_rff.score(data_test, target_test)
dtc_scores.append(dtc_rff_score)
lg_scores.append(lg_rff_score)
lsvc_scores.append(lsvc_rff_score)
accuracy = plt.subplot(111)
accuracy.plot(sample_sizes, dtc_scores, label = "DT with RFF" )
accuracy.plot(sample_sizes, lg_scores, label = "Logit with RFF")
accuracy.plot(sample_sizes, lsvc_scores, label = "Linear SVC with RFF")
accuracy.legend(loc='best')
###Output
_____no_output_____
###Markdown
DT, Logit and SVM with Nystroem
###Code
from sklearn.kernel_approximation import Nystroem
###Output
_____no_output_____ |
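###Markdown
The import above only brings in the `Nystroem` approximator; the comparison itself was left unfinished. Below is a minimal sketch (not part of the original notebook) of how the Nystroem experiment could mirror the RFF section, reusing `sample_sizes`, `data_train`, `target_train`, `data_test`, `target_test` and the models imported earlier.
###Code
# Nystroem feature map used inside the same three pipelines as the RFF experiment
feature_map_nystroem = Nystroem(gamma=.2, random_state=1)

dtc_nys = pipeline.Pipeline([("feature_map", feature_map_nystroem),
                             ("dtc", DecisionTreeClassifier())])
lg_nys = pipeline.Pipeline([("feature_map", feature_map_nystroem),
                            ("lg", LogisticRegression(C=1, multi_class='multinomial',
                                                      solver='lbfgs'))])
lsvc_nys = pipeline.Pipeline([("feature_map", feature_map_nystroem),
                              ("lsvc", LinearSVC())])

dtc_scores_nys, lg_scores_nys, lsvc_scores_nys = [], [], []
for D in sample_sizes:
    # same number of components as in the RFF experiment
    dtc_nys.set_params(feature_map__n_components=D)
    lg_nys.set_params(feature_map__n_components=D)
    lsvc_nys.set_params(feature_map__n_components=D)
    dtc_nys.fit(data_train, target_train)
    lg_nys.fit(data_train, target_train)
    lsvc_nys.fit(data_train, target_train)
    dtc_scores_nys.append(dtc_nys.score(data_test, target_test))
    lg_scores_nys.append(lg_nys.score(data_test, target_test))
    lsvc_scores_nys.append(lsvc_nys.score(data_test, target_test))

accuracy = plt.subplot(111)
accuracy.plot(sample_sizes, dtc_scores_nys, label="DT with Nystroem")
accuracy.plot(sample_sizes, lg_scores_nys, label="Logit with Nystroem")
accuracy.plot(sample_sizes, lsvc_scores_nys, label="Linear SVC with Nystroem")
accuracy.legend(loc='best')
###Output
_____no_output_____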
Reinforcement_Learning_Para_Videojuegos/Q-Table Learning-Clean.ipynb | ###Markdown
Q-Table Learning
###Code
import gym
import numpy as np
###Output
_____no_output_____
###Markdown
Load the environment
###Code
env = gym.make('FrozenLake-v0')
###Output
[2017-09-19 00:14:00,378] Making new env: FrozenLake-v0
###Markdown
Implement Q-Table learning algorithm
###Code
#Initialize table with all zeros
Q = np.zeros([env.observation_space.n,env.action_space.n])
# Set learning parameters
lr = .8
y = .95
num_episodes = 2000
#create lists to contain total rewards and steps per episode
#jList = []
rList = []
for i in range(num_episodes):
#Reset environment and get first new observation
s = env.reset()
rAll = 0
d = False
j = 0
#The Q-Table learning algorithm
while j < 99:
j+=1
#Choose an action by greedily (with noise) picking from Q table
a = np.argmax(Q[s,:] + np.random.randn(1,env.action_space.n)*(1./(i+1)))
#Get new state and reward from environment
s1,r,d,_ = env.step(a)
#Update Q-Table with new knowledge
Q[s,a] = Q[s,a] + lr*(r + y*np.max(Q[s1,:]) - Q[s,a])
rAll += r
s = s1
if d == True:
break
#jList.append(j)
rList.append(rAll)
print("Score over time: " + str(sum(rList)/num_episodes))
print("Final Q-Table Values")
print(Q)
###Output
Final Q-Table Values
[[ 0.00000000e+00 3.21659325e-03 7.11258741e-02 3.77275278e-03]
[ 1.40142926e-04 6.28663593e-04 3.82537963e-04 5.70357100e-02]
[ 1.11334221e-03 1.71547211e-03 5.03560745e-03 2.70581149e-02]
[ 1.21489905e-04 1.26054020e-06 1.51142168e-04 1.15547388e-02]
[ 1.21808902e-01 3.85409984e-04 2.56107337e-03 4.44290995e-03]
[ 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00]
[ 1.33587701e-04 4.78696801e-06 9.09548187e-03 3.22661233e-07]
[ 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00]
[ 5.03374296e-04 4.37830905e-04 0.00000000e+00 1.10981873e-01]
[ 1.21048450e-03 3.15300523e-01 7.05293251e-04 2.38701727e-03]
[ 7.06823321e-01 1.85555185e-04 4.56897106e-04 5.51566668e-04]
[ 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00]
[ 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00]
[ 0.00000000e+00 1.31919608e-03 4.88860484e-01 1.34305353e-03]
[ 0.00000000e+00 0.00000000e+00 9.71223916e-01 0.00000000e+00]
[ 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00]]
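###Markdown
As an optional follow-up (not part of the original notebook), the learned table can be checked by acting greedily with respect to Q, with no exploration noise. A rough sketch, assuming the `env` and `Q` defined above and the classic gym reset/step API used in this notebook:
###Code
# Evaluate the learned Q-table with a purely greedy policy
eval_episodes = 100
successes = 0
for _ in range(eval_episodes):
    s = env.reset()
    d = False
    steps = 0
    while not d and steps < 99:
        a = np.argmax(Q[s, :])   # greedy action, no exploration noise
        s, r, d, _ = env.step(a)
        steps += 1
    successes += r               # FrozenLake only gives reward 1 at the goal
print("Greedy success rate: {:.2f}".format(successes / eval_episodes))
###Output
_____no_output_____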
|
src/inhomogeneous_experiments.ipynb | ###Markdown
Dependencies
###Code
import numpy as np
import pandas as pd
import seaborn as sns
import scipy
import matplotlib.pyplot as plt
import os
import sys
from counterfactual_tpp import sample_counterfactual, superposition, combine, check_monotonicity, distance
from sampling_utils import homogenous_poisson, thinning, thinning_T
from multiprocessing import cpu_count, Pool
from tqdm import tqdm
import utils
cpu_count()
def normal(x, mean, sd, amp):
return amp * (1/(sd * (np.sqrt(2*np.pi)))) * np.exp(-0.5*((x-mean)/sd)**2)
# Setting the parameters of the original intensity
T = 5
number_of_gaussians = 5
np.random.seed(0)
means = np.arange(1, T , step = (T - 1) / number_of_gaussians)
means[0] = 0.55
sds = np.random.uniform(low=0, high=0.5, size=number_of_gaussians)
amps = 10 * np.random.uniform(low=1.0, high=3.0, size=number_of_gaussians)
print('means', means)
print('sds', sds)
print('amps', amps)
#**********************
def original(x):
res = 0
for i in range(number_of_gaussians):
res += normal(x, means[i], sds[i], amps[i])
return res
# plotting the original intensity
x = np.linspace(0, T, 10000)
sns.lineplot(data = pd.DataFrame({'time':x, 'intensity':original(x)}), x="time", y="intensity")
# intervention intensity
#******************* To replicate figure 5:
# run this cell 1 time for lambda'>lambda
# run this cell 12 times for lambda'<lambda
modified_index = np.random.choice(a = np.arange(number_of_gaussians), size = 1)[0]
print('index', modified_index)
new_amp = amps[modified_index] + np.random.normal(loc=0.0, scale=10, size=1)
print('previous amp', amps[modified_index])
print('new amp', new_amp)
new_amps = np.zeros(amps.shape)
for i in range(len(new_amps)):
if i == modified_index:
new_amps[i] = new_amp[0]
else:
new_amps[i] = amps[i]
def intervention(x):
res = 0
for i in range(number_of_gaussians):
res += normal(x, means[i], sds[i], new_amps[i])
return res
###Output
index 4
previous amp 17.668830376515555
new amp [9.84107533]
###Markdown
Run this if lambda'< lambda
###Code
window_radious = (means[modified_index] - means[modified_index - 1]) / 2
window_center = means[modified_index]
###Output
_____no_output_____
###Markdown
Run this if lambda'>lambda
###Code
# window_radious = (means[modified_index + 1] - means[modified_index]) / 2
# window_center = means[modified_index]
# specifying interventional intervall.
begin = window_center - window_radious
end = window_center + window_radious
if begin < 0:
t = 0 - begin
begin = 0
end = end - t
print('beginning of the interval: ', begin)
print('ending of the interval: ', end)
print('window center', window_center)
path = 'figs/new_figs/'
# Visualizing both intensities
sns.lineplot(data = pd.DataFrame({'time':x, 'intensity':original(x)}), x="time", y="intensity", label = 'Original')
sns.lineplot(data = pd.DataFrame({'time':x, 'intensity':intervention(x)}), x="time", y="intensity", label = 'Intervention')
plt.legend()
lambda_max = 100
def counterfactual(_):
sample, indicators = thinning_T(0, intensity=original, lambda_max=lambda_max, T=T)
lambdas = original(np.asarray(sample))
sample = np.asarray(sample)
counters = []
for counter in range(100):
counterfactuals, counterfactual_indicators = sample_counterfactual(sample, lambdas, lambda_max, indicators, intervention)
counters.append(counterfactuals)
return sample[indicators], counters
def counterfactual1(_):
sample, indicators = thinning_T(0, intensity=original, lambda_max=lambda_max, T=T)
lambdas = original(np.asarray(sample))
sample = np.asarray(sample)
counters = []
for counter in range(100):
counterfactuals, counterfactual_indicators = sample_counterfactual(sample, lambdas, lambda_max, indicators, intervention)
if check_monotonicity(sample, counterfactuals, original, intervention, sample[indicators]) != 'MONOTONIC':
print('Not monotonic')
counters.append(counterfactuals)
return sample[indicators], counters, sample, indicators
def counterfactual2(_):
sample = samples_load[_]
indicators = indicators_load[_]
h_observed = sample[indicators]
lambda_observed = [original(i) for i in h_observed]
lambda_bar = lambda x: lambda_max - original(x)
h_rejected, _ = thinning_T(0, intensity=lambda_bar, lambda_max=lambda_max,T=T)
sample, _, indicators = combine(h_observed, lambda_observed, h_rejected, original)
lambdas = original(sample)
counters = []
for counter in range(100):
counterfactuals, counterfactual_indicators = sample_counterfactual(sample, lambdas, lambda_max, indicators, intervention)
if check_monotonicity(sample, counterfactuals, original, intervention, sample[indicators]) != 'MONOTONIC':
print('Not monotonic')
counters.append(counterfactuals)
return sample[indicators], counters
###Output
_____no_output_____
###Markdown
Run the following 3 cells to generate your own new samples; otherwise skip them and continue below to load our generated data and replicate the paper's results.
###Code
# with Pool(48) as pool:
# result = list(tqdm(pool.imap(counterfactual1, list(range(1000))), total = 1000))
# # Save
# all_samples_save = [result[i][2] for i in range(1000)]
# import json
# np.save('Data/allsamples.npy', all_samples_save, allow_pickle=True)
# all_indicators_save = [result[i][3] for i in range(1000)]
# with open('Data/allindicators.json', 'w') as fout:
# json.dump(all_indicators_save , fout)
###Output
_____no_output_____
###Markdown
The following cells read our randomly generated data in order to replicate the paper's results.
###Code
samples_load = np.load('data_inhomogeneous/allsamples.npy', allow_pickle=True)
import json
with open(r'data_inhomogeneous/allindicators.json', "r") as read_file:
indicators_load = json.load(read_file)
f = [samples_load[i][indicators_load[i]] for i in range(1000)]
with Pool(48) as pool:
result = list(tqdm(pool.imap(counterfactual2, list(range(1000))), total = 1000))
def count_interval(start, end, samples):
return len(samples[(start <= samples) & (samples < end)])
def filter_interval(start, end, samples):
return samples[(start <= samples) & (samples < end)]
def distance(accepted, counterfactuals, T):
# Calculates the distance between oserved and counterfactual realizaitons
k1 = len(accepted)
k2 = len(counterfactuals)
if k1 <= k2:
d = np.sum(np.abs(accepted[0:k1] - counterfactuals[0:k1]))
if k2 - k1 > 0:
d += np.sum(np.abs(T - counterfactuals[k1:]))
else:
d = np.sum(np.abs(accepted[0:k2] - counterfactuals[0:k2]))
if k1 - k2 > 0:
d += np.sum(np.abs(T - accepted[k2:]))
return d
def count_bins(start, end, delta, samples):
intervals = np.arange(start, end, delta)
bins = np.zeros(len(intervals) - 1)
for i in range(len(bins)):
bins[i] = count_interval(intervals[i], intervals[i + 1], samples)
return bins, intervals
number_of_events = [count_interval(window_center - window_radious, window_center + window_radious, f[i]) for i in range(len(f))]
len(number_of_events)
number_of_groups = 3
quantile_indices = pd.qcut(number_of_events, number_of_groups, labels = range(number_of_groups)).to_numpy()
pd.qcut(number_of_events, number_of_groups, labels = range(number_of_groups))
pd.qcut(number_of_events, number_of_groups, labels = range(number_of_groups), retbins=True)
###Output
_____no_output_____
###Markdown
The following cells include the pre-processing needed for binning and plotting.
###Code
groups = [[] for i in range(number_of_groups)]
groups_general = [[] for i in range(number_of_groups)]
for i in range(1000):
k = quantile_indices[i]
groups[k].extend(result[i][1])
groups_general[k].append(f[i])
for i in range(number_of_groups):
print(len(groups_general[i]) / 100)
delta = 0.09 # 0.1 for lambda'>lambda
number_of_bins = len(np.arange(begin, end, delta)) - 1
bin_numbers = []
for i in range(number_of_groups):
bin_number = np.zeros(number_of_bins)
for counter in groups[i]:
num , _ = count_bins(begin, end, delta, np.array(counter))
bin_number += num
bin_number = bin_number / len(groups[i])
bin_numbers.append(bin_number)
delta = 0.09
number_of_bins = len(np.arange(begin, end, delta)) - 1
original_bin_numbers = []
for i in range(number_of_groups):
bin_number = np.zeros(number_of_bins)
for ori in groups_general[i]:
num , _ = count_bins(begin, end, delta, np.array(ori))
bin_number += num
bin_number = bin_number / len(groups_general[i])
# bin_number = bin_number / delta
original_bin_numbers.append(bin_number)
width_pt = 397
fig_height, fig_aspect = utils.get_fig_dim(width_pt, fraction=0.65)
plt.style.use(['science','no-latex'])
plt.rcParams["axes.spines.right"] = False
plt.rcParams["axes.spines.top"] = False
plt.rcParams["xtick.top"] = False
plt.rcParams["ytick.right"] = False
fig, ax = plt.subplots(figsize=(fig_height*fig_aspect,fig_height))
ax.plot(x, original(x), label = 'original')
ax.plot(x, intervention(x), label = 'intervention')
ax.set_xlabel('time', fontsize = 10)
ax.set_ylabel('intensity', fontsize = 10)
ax.axvline(x=begin, color = 'k', ls = '--', alpha = 0.5)
ax.axvline(x=end, color = 'k', ls = '--', alpha = 0.5)
ax.legend(loc='lower right', fontsize = 10)
ax.set_xticks(np.arange(0,T +1,1))
ax.set_yticks(np.arange(0,60,10))
# plt.savefig('{}/intensities1.pdf'.format(path), format = 'pdf', dpi = 900)
fig, ax = plt.subplots(figsize=(fig_height*fig_aspect,fig_height))
k = 0 # k = 0 or 1 0r 2
weights = [0.25, 0.5, 0.25]
out_f = np.convolve(bin_numbers[k],np.array(weights)[::-1],'same')
out_o = np.convolve(original_bin_numbers[k],np.array(weights)[::-1],'same')
s = len(out_f)
xnew_bar = np.arange(begin, end, delta)
xnew_bar = 0.5 * (xnew_bar[:-1] + xnew_bar[1:])
for i in range(len(out_f)):
a = np.abs(out_f[i]) # counterfactual
b = np.abs(out_o[i]) # original
if a > b:
ax.bar(xnew_bar[i], a, label='cubic', width = delta, color = 'yellowgreen', edgecolor='green')
ax.bar(xnew_bar[i], b, label='cubic', width = delta, color = 'white', edgecolor='grey')
else:
ax.bar(xnew_bar[i], b, label='cubic', width = delta, color = 'salmon', edgecolor='red')
ax.bar(xnew_bar[i], a, label='cubic', width = delta, color = 'white', edgecolor='grey')
# ax.bar(xnew_bar, np.abs(f_cubic[2](xnew_bar)), label='cubic', width = (end - begin) / 17, color = 'yellowgreen')
# ax.bar(xnew_bar, np.abs(o_cubic[2](xnew_bar)), label='cubic', width = (end - begin) / 17, color = 'red')
ax.set_xlabel('time')
ax.set_ylabel('Average Number \n of Events')
ax.set_yticks(np.arange(0,6,1))
ax.set_xticks(np.arange(begin,end,0.2))
# plt.savefig('{}/groups1_{}_lmax_{}_delta_{}.pdf'.format(path, k + 1, lambda_max, delta), format = 'pdf', dpi = 900)
###Output
_____no_output_____ |
portfolios/nasdaq100/portfolio.ipynb | ###Markdown
A financial tool that can analyze and maximize investment portfolios on a risk-adjusted basis

Description: This notebook is useful for examining portfolios composed of stocks from the Nasdaq 100. Construct portfolios from the 100 stocks in the Nasdaq 100 and examine the results of different weighting schemes.
###Code
%%javascript
IPython.OutputArea.prototype._should_scroll = function(lines) {
return false;
}
# imports
import pandas as pd
import matplotlib.pyplot as plt
import brownbear as bb
# format price data
pd.options.display.float_format = '{:0.2f}'.format
# display all rows
pd.set_option('display.max_rows', None)
# do not truncate column names
pd.set_option('display.max_colwidth', None)
%matplotlib inline
# set size of inline plots
'''note: rcParams can't be in same cell as import matplotlib
or %matplotlib inline
%matplotlib notebook: will lead to interactive plots embedded within
the notebook, you can zoom and resize the figure
%matplotlib inline: only draw static images in the notebook
'''
plt.rcParams["figure.figsize"] = (10, 7)
###Output
_____no_output_____
###Markdown
Some Globals
###Code
investment_universe = ['nasdaq100-galaxy']
risk_free_rate = 0
annual_returns = '3 Yr'
vola = 'Vola'
ds_vola = 'DS Vola'
# Fetch Investment Options - all values annualized
df = bb.fetch(investment_universe, risk_free_rate, annual_returns, vola, ds_vola)
df
# rank
rank = bb.rank(df, rank_by='Sharpe Ratio', group_by=None, num_per_group=50)
rank_filtered = rank
rank_filtered = rank.loc[(rank['3 mo'] > 0) & rank['1 Yr'] > 0]
rank_filtered = rank_filtered.head(20)
rank_filtered
###Output
_____no_output_____
###Markdown
Sample Portfolios

Format: {'Investment option': weight}
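For illustration only (the tickers and weights below are hypothetical and not a recommendation), a hand-specified portfolio in this format would look like the sketch below.
###Code
# Hypothetical example of the {'Investment option': weight} format
example_portfolio = {
    'Title': 'Example Portfolio',
    'AAPL': 0.40,
    'MSFT': 0.35,
    'COST': 0.25,   # weights should sum to 1
}
###Output
_____no_output_____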
###Code
# everything ranked
ranked_portfolio = {
'Title': 'Ranked Portfolio'
}
everything = list(rank_filtered['Investment Option'])
ranked_portfolio.update(dict.fromkeys(everything, 1/len(everything)))
# top 10
top10_portfolio = {
'Title': 'Ranked Portfolio'
}
top10 = list(rank_filtered['Investment Option'])[:10]
top10_portfolio.update(dict.fromkeys(top10, 1/len(top10)))
###Output
_____no_output_____
###Markdown
Custom Portfolios
###Code
# My portfolio
my_portfolio = {
'Title': 'My Portfolio',
}
###Output
_____no_output_____
###Markdown
Choose Portfolio Option
###Code
# Select one of the portfolios from above
portfolio_option = top10_portfolio
# Make a copy so that the original portfolio is preserved
portfolio_option = portfolio_option.copy()
###Output
_____no_output_____
###Markdown
Analysis Options
###Code
# Specify the weighting scheme. It will replace the weights specified in the portfolio
# You can also fix the weights on some Investent Options, Asset Classes, and Asset Subclasses
# while the others are automatically calculated.
# 'Equal' - will use equal weights.
# 'Sharpe Ratio' - will use proportionally weighted allocations based on the percent
# of an investment option's sharpe ratio to the sum of all the sharpe ratios in the portfolio.
# 'Std Dev' - will use standard deviation adjusted weights
# 'Annual Returns' - will use return adjusted weights
# 'Vola' - will use volatility adjusted weights
# 'DS Vola' - will use downside volatility adjusted weights
# None: 'Investment Option' means use user specified weights
# 'Asset Class' means do not group by Asset Class
# 'Asset Subclass means do not group by Asset Subclass
weight_by = {
'Asset Class': {'weight_by': None},
'Asset Subclass': {'weight_by': None},
'Investment Option': {'weight_by': 'DS Vola'},
}
#weight_by = None
bb.DEBUG = False
# Analyze portfolio
annual_ret, std_dev, sharpe_ratio = \
bb.analyze(df, portfolio_option, weight_by)
# Display Results
summary = bb.summary(df, portfolio_option, annual_ret, std_dev, sharpe_ratio)
summary
# Show pie charts of investment and asset class weights
bb.show_pie_charts(df, portfolio_option, charts=['Investment Option'])
# Show exact weights
bb.print_portfolio(portfolio_option)
###Output
Ranked Portfolio Weights:
MRNA 0.0852
MSFT 0.1273
ASML 0.0706
TMUS 0.1056
SNPS 0.0971
CHTR 0.1269
CDNS 0.0908
COST 0.1567
PYPL 0.0890
TSLA 0.0507
###Markdown
Optimize Portfolio
###Code
# Run_portfolio_optimizer = True will run portfolio optimizer after portfolio analysis is complete
run_portfolio_optimizer = True
# Optimize sharpe ratio while specifying Annual Rate, Worst Typical Down Year,
# and Black Swan. Setting a constraint to None optimizes absolute Sharpe Ratio
# without regard to that constraint.
'''
constraints = {
'Annual Return': 12,
'Worst Typical Down Year': -5,
'Black Swan': None
}
'''
constraints = {
'Annual Return': 8,
'Worst Typical Down Year': None,
'Black Swan': -40
}
if run_portfolio_optimizer:
bb.optimizer(df, portfolio_option, constraints)
###Output
Running optimizer...........
Ranked Portfolio Metrics:
max_sharpe_ratio 2.20
annual_return 99.08
std_dev 44.94
worst typical down year 9.20
black_swan -35.73
Ranked Portfolio Weights:
MRNA 0.2900
MSFT 0.2500
ASML 0.1900
TMUS 0.0500
SNPS 0.1300
CHTR 0.0900
CDNS 0.0000
COST 0.0000
PYPL 0.0000
TSLA 0.0000
|
MA477 - Theory and Applications of Data Science/Lessons/Lesson 11 - Logistic Regression/.ipynb_checkpoints/Lesson 11 - LogisticRegression-checkpoint.ipynb | ###Markdown
======================================================
MA477 - Theory and Applications of Data Science
Lesson 11: Logistic Regression
Dr. Valmir Bucaj
United States Military Academy, West Point AY20-2
======================================================

Lecture Outline

* Why not Linear Regression?
* What is Logistic Regression?
* Estimating Regression Coefficients
* Logistic Regression with more than 2 response classes
* Python Implementation

Why not Linear Regression?

We will illustrate below that when the response variable is qualitative, linear regression may not be appropriate. So, why is that? Let's illustrate this using the `iris` dataset, where one is tasked to determine whether a plant is of type `setosa`, `versicolor` or `virginica` based on `sepal length, width` and `petal length, width`. The response variable $Y$ is clearly qualitative. Before we can fit a Linear Regression model (or most models, for that matter) we need to encode the outcomes as numbers, and this is exactly where the dilemma lies: how do we encode a qualitative response variable with more than two possible values? We will demonstrate shortly that the model we obtain is highly sensitive to how we encode the qualitative variable. That is, if two different teams encoded a qualitative variable with more than two outcomes differently, they could end up with very different models, which in turn would produce very different predictions on new, previously unseen data. Obviously, this is a very undesirable outcome. For the purpose of this demonstration we will use the following two encodings for the response variable:

Case 1
$$Y_1=\begin{cases}0 & \text{ setosa }\\1 & \text{ versicolor }\\2 & \text{ virginica }\end{cases}$$

Case 2
$$Y_2=\begin{cases}2 & \text{ setosa }\\0 & \text{ versicolor }\\1 & \text{ virginica }\end{cases}$$

We'll import our standard libraries as well as the `iris` dataset, which may be found in `sklearn.datasets`.
###Code
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
iris=load_iris()
iris.target_names
iris.feature_names
print(iris.DESCR)
df1=pd.DataFrame(iris.data,columns=iris.feature_names)
df1.head()
df1['target']=iris.target
df1.head()
###Output
_____no_output_____
###Markdown
Recall that the current encoding of the response variable is as follows: $$Y_1=\begin{cases}0 & \text{ setosa }\\1 & \text{ versicolor }\\2 & \text{ virginica }\end{cases}$$

In what follows, we will create a second dataframe `df2` with the response variable encoded as $$Y_2=\begin{cases}2 & \text{ setosa }\\0 & \text{ versicolor }\\1 & \text{ virginica }\end{cases}$$
###Code
df2=df1.copy()
df2['target']=df2['target'].apply(lambda x: 2 if x==0 else 0 if x==1 else 1)
df2.head()
###Output
_____no_output_____
###Markdown
Next, we will fit two Linear Regression models and compare their coefficients.
###Code
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
lg1=LinearRegression()
lg2=LinearRegression()
X1=df1.drop("target",axis=1)
y1=df1['target']
X2=df2.drop("target",axis=1)
y2=df2['target']
X1_train,X1_test,y1_train,y1_test=train_test_split(X1,y1,test_size=0.1,random_state=1)
X2_train,X2_test,y2_train,y2_test=train_test_split(X2,y2,test_size=0.1,random_state=1)
X1_train.head()
X2_train.head()
###Output
_____no_output_____
###Markdown
One thing to note: since we used the same random state for both splits, `X1_train` and `X2_train` are identical, as desired. The only thing that differs is the `y`'s, since we used different encodings. Now we fit two linear models using these two datasets.
###Code
lg1.fit(X1_train,y1_train)
lg2.fit(X2_train,y2_train)
###Output
_____no_output_____
###Markdown
Now, let's get the coefficients and compare them.
###Code
coef1=lg1.coef_
coef2=lg2.coef_
df_coef=pd.DataFrame()
df_coef['coef1']=coef1
df_coef['coef2']=coef2
df_coef
###Output
_____no_output_____
###Markdown
Clearly, these coefficients are very different, and it is to be expected that they would produce different outcomes for the same input data. Let's test this.
###Code
pred1=lg1.predict(X1_test)
pred2=lg2.predict(X2_test)
###Output
_____no_output_____
###Markdown
Next, let's put everything in a dataframe. Namely, the predictions of the two models, along with the actual classes of the plants.
###Code
df_pred=pd.DataFrame(index=y1_test.index)
df_pred['pred1']=pred1
df_pred['pred2']=pred2
df_pred['actual1']=y1_test
df_pred['actual2']=y2_test
df_pred
df_pred['pred class mod1']=df_pred['pred1'].apply(lambda x: 'setosa' if x<0.5 else 'versicolor' if 0.5<x<1.5 else 'virginca')
df_pred['pred class model2']=df_pred['pred2'].apply(lambda x: 'setosa' if x>1.5 else 'versicolor' if x<0.5 else 'virginca')
df_pred
###Output
_____no_output_____
###Markdown
As we can see, the two models can yield very different predictions; in fact, in this case they don't match on a single output. The other issue with this approach is that the output of linear regression cannot quite be interpreted as a probability, since it may take values above one and below zero.

Logistic Regression Model

For the remainder of this lesson, we will focus on the case of binary response variables. We will use the wisconsin breast cancer data set we previously used when we discussed the KNN Classifier. Before we begin discussing the inner workings of Logistic Regression, let's import the dataset.
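Before doing so, here is a quick sanity check of the point above (a short addition, using `pred1` and `pred2` from the earlier cells): the fitted linear models happily produce values outside the $[0,1]$ range.
###Code
# Linear regression outputs are not confined to [0, 1]
for name, p in [('Model 1', pred1), ('Model 2', pred2)]:
    outside = ((p < 0) | (p > 1)).sum()
    print(name, 'min:', p.min(), 'max:', p.max(), 'values outside [0,1]:', outside)
###Output
_____no_output_____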
###Code
from sklearn.datasets import load_breast_cancer
cancer=load_breast_cancer()
print(cancer.DESCR)
df=pd.DataFrame(cancer.data,columns=cancer.feature_names)
df['target']=cancer.target
df.head()
###Output
_____no_output_____
###Markdown
Recall that the target variable is a qualitative variable with two categories: Malignant and Benign. We will use the following encoding for the response variable:$Y=\begin{cases} 0 & \text{ Malignant }\\1 & \text{ Benign }\end{cases}$ How does Logistic Regression Work?So, the response variable $Y$ can either be $0$ (Malignant) or $1$ (Benign). Instead of modeling directly the response variable $Y$, logistic regression, models instead the probability taht $Y$ belongs to a particular category. Specifically, if we denote by $X=(X_1,\dots, X_{30})$ all the $30$ cancer measurements, then logistic regression will instead model the probability that $Y=0$ or ($Y=$ Malignant) given the measurements $X$; that is $$p(X):= P\left(Y=0|X\right)$$The values of $p(X)$ will be between $0$ and $1$. So, for example, we may predict Malignant if $p(X)>0.5$, or if we don't want to take a risk in misclassifying a malignant cell as benign, we can be more conservative and predict Malignant if $p(X)>0.1$ and predict Benign only if $p(X)\leq 0.1$. This is something that you as a Data Scientist have to decide depending on the goal at hand. Our goal is to find a good way to model the conditional probability $p(X)$. Modeling $p(X)$How can we model $p(X)?$.We could try modeling these probabilities via linear regression $$p(X)=\beta_0+\sum_{i=1}^{30}\beta_iX_i$$ Discussion: Is this a good approach? Explain!What is one of the main conditions that $p(X)$ must satisfy?Well, since $p(X)$ represent probabilities their values must be between $0$ and $1$. So, we must hunt for functions whose output satisfies this condition. While there many functions that satisfy this condition, in the case of Logistic Regression we use the logistic function (in Neural Networks this function is typically referred to as the Sigmoid function):$$S(z)=\frac{1}{1+e^{-z}}=\frac{e^z}{e^z+1}$$ ExercisePlot $S(z)$ in the range $(-10,10)$
###Code
def sigmoid(z):
return 1/(1+np.exp(-z))
sns.set_style('whitegrid')
plt.figure(figsize=(10,6))
z=np.linspace(-10,10,50)
plt.yticks(np.arange(0,1.01,0.1))
plt.plot(z,sigmoid(z), 'r--', lw=3, label='$S(z)=e^z / (e^z+1)$')
plt.legend(fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
So, we would model $p(X)$ as follows: $$p(X)=P(Y=Malignant|X)=\frac{e^{\beta_0+\sum_{i=1}^{30}\beta_iX_i}}{1+e^{\beta_0+\sum_{i=1}^{30}\beta_iX_i}}$$

Estimating the coefficients

The goal now is to estimate the regression coefficients $\beta_0,\dots,\beta_{30}$. We will do so via a method known as the maximum likelihood method, which is very general and is typically used to fit many non-linear models. The basic intuition is as follows: let $X^{(i)}$ denote the $30$ measurements for the $i^{th}$ breast cancer tissue. Then we seek coefficients $\hat{\beta_0},\hat{\beta_1},\dots,\hat{\beta}_{30}$ such that $p\left(X^{(i)}\right)$ is very close to $1$ if the breast tissue is Malignant ($Y^{(i)}=0$) and very close to $0$ if the breast tissue is Benign ($Y^{(i)}=1$). Formally, the function that has this property is known as the likelihood function: $$L(\beta_0,\beta_1,\dots,\beta_{30})=\prod_{i: Y^{(i)}=0}p\left(X^{(i)}\right)\prod_{j:Y^{(j)}=1}\left(1-p\left(X^{(j)}\right)\right)$$ The estimates $\hat{\beta_0},\hat{\beta_1},\dots,\hat{\beta}_{30}$ are chosen to maximize $L(\cdot)$.

Extensions of Logistic Regression to Multiple-class Response Variables

There are ways to use Logistic Regression to perform classification when the response variable can belong to three or more categories. We briefly mention two such methods:

One-Versus-One: Suppose that the response variable can belong to one of $K\geq 3$ classes. In the one-versus-one approach, we construct $\frac{K(K-1)}{2}$ Logistic Regression models, each of which compares a pair of classes. The simplest way to classify a new, previously unseen, observation is via a hard majority vote. Specifically, we use each of the $\frac{K(K-1)}{2}$ models to classify the new observation and tally the number of times that it is assigned to each of the $K$ classes. At the end, the new observation is assigned to the class to which it was most frequently assigned overall.

One-Versus-All: One-versus-all is a similar approach. In this case we fit a total of $K$ models, each time comparing one of the $K$ classes to the remaining $K-1$ classes. There are other methods that accommodate multiple-class responses; one of the most popular is Linear Discriminant Analysis (LDA).

Python Implementation

For this portion we will use the wisconsin breast cancer dataset.
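Before switching to the breast cancer data, here is a quick illustration of the one-versus-all idea on the iris data from the start of the lesson (a sketch added for illustration; scikit-learn calls this strategy 'ovr'):
###Code
from sklearn.linear_model import LogisticRegression

# One-vs-rest: one binary logistic regression is fit per class
ovr_clf = LogisticRegression(multi_class='ovr', solver='liblinear')
ovr_clf.fit(X1_train, y1_train)
print('One-vs-rest test accuracy:', ovr_clf.score(X1_test, y1_test))
###Output
_____no_output_____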
###Code
from sklearn.linear_model import LogisticRegression
###Output
_____no_output_____
###Markdown
Logistic Regression has many parameters. The two most important ones that we will experiment with are `penalty` and `C`. Similar to the Lasso and Ridge methods, we can apply a regularization to the regression coefficients, namely, we can apply an $\ell^1$ or $\ell^2$ regularization, or an `elastic` regularization, which is a weighted average of the two. The constant $C$ is the inverse of the strength of the regularization. In other words, the smaller the value of $C$ the stronger the regularization.
###Code
lr=LogisticRegression(solver='liblinear')
###Output
_____no_output_____
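###Markdown
For example (an illustrative sketch only, not used in the rest of the lesson), an $\ell^1$-penalized model with strong regularization would be specified as follows.
###Code
# Smaller C means stronger regularization; penalty selects the l1 or l2 norm
lr_l1_strong = LogisticRegression(penalty='l1', C=0.1, solver='liblinear')
###Output
_____no_output_____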
###Markdown
For now, we will choose the validation-set method to assess the predictive power of the model.
###Code
X=df.drop('target',axis=1)
y=df['target']
X.head()
from sklearn.preprocessing import StandardScaler
scaler=StandardScaler()
scaled=scaler.fit_transform(X)
X_sc=pd.DataFrame(scaled, columns=X.columns)
X_sc.head()
###Output
_____no_output_____
###Markdown
Next we split the data into a training and a test set and then we fit the model.
###Code
X_train,X_test,y_train,y_test=train_test_split(X_sc,y,test_size=0.3,random_state=11)
lr.fit(X_train,y_train)
###Output
_____no_output_____
###Markdown
Now the model is fit! What does this mean? It means that the coefficients $\beta_0,\dots,\beta_{30}$ have been estimated. We get them below:
###Code
df_coef=pd.DataFrame(lr.coef_,columns=X_sc.columns,index=['Coefficients'])
df_coef
###Output
_____no_output_____
###Markdown
Exercise

Carry out the following tasks:

Part 1
- Make predictions
- Display the confusion matrix and classification report
- Plot the ROC curve and compute the AUC

Part 2
- Tune the model to get the highest accuracy and the best recall for the Malignant class (class 0)
###Code
#'pred' will output the actual class...it will be either a 0 or a 1 in our case
pred=lr.predict(X_test)
#pred_prob will output the actual probabilities for each class
pred_prob=lr.predict_proba(X_test)
#pred
#pred_prob
###Output
_____no_output_____
###Markdown
Next we display some of the metrics, such as precision, recall, accuracy etc.
###Code
from sklearn.metrics import confusion_matrix, classification_report, recall_score

print(confusion_matrix(y_test,pred))
recall_score(y_test,pred,pos_label=1)
print(classification_report(y_test,pred))
###Output
precision recall f1-score support
0 0.95 0.93 0.94 61
1 0.96 0.97 0.97 110
accuracy 0.96 171
macro avg 0.96 0.95 0.96 171
weighted avg 0.96 0.96 0.96 171
###Markdown
We have an accuracy of $96\%$, a recall rate of $93\%$ for the Malignant class and $97\%$ for the Benign class. A natural question is whether we can tune our model in such a way as to get better results. Before we do so, let's plot the ROC curve.
###Code
from sklearn.metrics import roc_curve, auc

fp,tp,_=roc_curve(y_test,pred_prob[:,1])
area=auc(fp,tp)
plt.figure(figsize=(8,6))
plt.plot(fp,tp, 'r--',lw=3,label='AUC={:.3f}'.format(area))
plt.xlabel("False Positive Rate",fontsize=14)
plt.ylabel("True Positive Rate",fontsize=14)
plt.legend(fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
Our model, as is, is doing a great job in differentiating between the two classes...it is better than what we could achieve with KNN Classifier. Next, we will tune the model...our goal will be to improve the recall score for the Malignant class, all the while trying to maintain as high of an accuracy and AUC as possible. To do so we will search for different $C$ values for each of the different regularizations. We will also try different values for the threshold.
###Code
def custom_predict(pred,thresh):
predictions=[]
for item in pred:
if item>thresh:
predictions.append(0)
elif item<=thresh:
predictions.append(1)
return predictions
from collections import defaultdict
from sklearn.metrics import accuracy_score
reg={'l1','l2'}
param_c=np.arange(0.1,5,0.1)
def tune_log_reg(param_c,reg,thresh=0.5,iterations=10):
recall=defaultdict(list)
accuracy=defaultdict(list)
for penalty in reg:
for c in param_c:
recall_c=[]
accuracy_c=[]
auc_c=[]
for i in range(iterations):
X_train,X_test,y_train,y_test=train_test_split(X_sc,y,test_size=0.3,random_state=i+1)
lg=LogisticRegression(penalty=penalty,C=c,solver='liblinear')
lg.fit(X_train,y_train)
pred=lg.predict_proba(X_test)
pred=custom_predict(pred[:,0],thresh)
#print(pred)
#print(recall_score(y_test,pred,pos_label=0))
recall_c.append(recall_score(y_test,pred,pos_label=0))
accuracy_c.append(accuracy_score(y_test,pred))
#print(recall_c)
recall[penalty].append(np.array(recall_c).mean())
accuracy[penalty].append(np.array(accuracy_c).mean())
return recall,accuracy
recall,accuracy=tune_log_reg(param_c,reg,iterations=60,thresh=0.32)
plt.figure(figsize=(10,6))
plt.plot(param_c,recall['l1'],'r--',label='Recall l1')
plt.plot(param_c,recall['l2'],'b-.',label='Recall l2')
plt.plot(param_c,accuracy['l1'],'r-',label='Accuracy l1')
plt.plot(param_c,accuracy['l2'],'b-',label='Accuracy l2')
plt.yticks(np.arange(.93,.985,0.002))
plt.xticks(np.arange(0,5,0.15))
plt.xlim(0,3)
plt.ylim(0.95,0.98)
plt.xlabel('C',fontsize=14)
plt.legend(fontsize=12)
plt.show()
###Output
_____no_output_____ |
Code/day05.ipynb | ###Markdown
15-Day Introduction to Python 3. Copyright by 黑板客 (Heibanke). For reprints please contact heibanke_at_aliyun.com

**Homework from the previous lesson**

1. Post the three examples you came up with to the discussion forum.
2. Pick out the pairs of numbers below 100 whose product is greater than 25. Implement this both with a list comprehension and with for loops, and time both approaches.
3. Complete the number-guessing program.
4. Improve the number-guessing program.

Ideas for the improvement:
1. With unlimited guesses you will always get it right eventually. How would you change the program so the player gets at most 6 guesses before Game Over?
2. Handle invalid input: if the user types something that is not a number, catch the exception, warn the user, and ask them to enter the number again.
###Code
%%timeit -n 1000 -r 1
a=[(x,y) for x in range(100) for y in range(100) if x*y > 25]
%%timeit -n 1000 -r 1
a=[]
for x in range(0,100):
for y in range(0,100):
if x*y>25:
a.append((x,y))
%load day04/guess_number.py
%load day04/guess_number_opt1.py
%load day04/guess_number_opt2.py
###Output
_____no_output_____
###Markdown
The homework code is not in the GitHub repository. You need to open the corresponding video in your browser and download the matching code from the reference materials at the bottom left.

day05: Functions, packaging operations for efficiency

1. Syntax
2. Understanding that functions are also objects
3. lambda (anonymous functions)
4. Examples of functional programming
5. Homework

Function syntax
###Code
"""
Function definition (pseudocode):
def make_fried_rice(eggs, rice, oil, sausage, salt):
    beat the eggs with salt
    scramble the eggs and set them aside
    fry the scallions and ham, add salt
    add the rice and the eggs, stir-fry
    return one serving of egg fried rice

Function call:
make_fried_rice(arguments)
"""
from __future__ import print_function
from __future__ import unicode_literals
# 可变参数
def f(a, *args, **kargs):
print("a=%s, args=%s, kargs=%s"%(a, args, kargs))
return a
f(1)
f(1,2,3)
b = f(1,2,3, x=1, y=2)
print(b)
# default arguments
def g(a=1, b=2):
print("a=%d, b=%d"%(a, b))
return a+b
print(g())
print(g(4))
print(g(4,8))
# determine whether a number is prime
def is_prime(x):
# Your Code
z = x-1
while z>1:
if x%z==0:
return False
else:
z -= 1
return True
########### Test Code ################
for i in [3, 5, 7, 13, 19]:
    assert is_prime(i)==True, 'True test for %d failed'%(i)
for i in [4, 8, 12, 15, 32]:
    assert is_prime(i)==False, 'False test for %d failed'%(i)
print("OK, All Test Pass!")
###Output
_____no_output_____
###Markdown
练习
###Code
# Given a string, process it so that the output looks like the following:
# "abcd" --> "A-Bb-Ccc-Dddd"
# "RqaEzty" --> "R-Qq-Aaa-Eeee-Zzzzz-Tttttt-Yyyyyyy"
# "cwAt" --> "C-Ww-Aaa-Tttt"
txt = ["abcd", "RqaEzty", "cwAt"]
# Your code
# Turn your code from the previous lesson into a function and write automated test code.
"""
Count how many distinct characters are repeated in a string, for example:
"abcde" -> 0 # no characters repeat
"aabbcde" -> 2 # 'a' and 'b'
"aabBcde" -> 2 # 'a' occurs twice and 'b' twice; 'b' and 'B' count as the same character
"indivisibility" -> 1 # 'i' occurs six times, but only 'i' is repeated
"Indivisibilities" -> 2 # 'i' occurs seven times and 's' occurs twice
"aA11" -> 2 # 'a' and '1'
"ABBA" -> 2 # 'A' and 'B' each occur twice
"""
def duplicate_count(s):
# Your Code
return 0
########### Test Code ################
for s,result in (("abcde",0), ("aabbcde", 2), ("aabBcde", 2), \
("indivisibility", 1), ("Indivisibilities",2), \
("aA11", 2), ("ABBA", 2)):
assert duplicate_count(s)==result, "%s fail"%(s)
print("OK, All Test Pass!")
###Output
_____no_output_____
###Markdown
函数也是对象1. 可以赋给其他变量2. 可以作为函数返回值3. 可以作为其他函数的参数
###Code
# assignment: bind the function to another name
def is_prime(x):
# Your Code
z = x-1
while z>1:
if x%z==0:
return False
else:
z -= 1
return True
a = is_prime
a(3)
# as a return value
def f_add(a,b): return a+b
def f_mul(a,b): return a*b
def f_sub(a,b): return a-b
def calc2(s):
if s=='+':
return f_add
elif s=='*':
return f_mul
elif s=='-':
return f_sub
else:
assert False, "error"
calc2('*')(100, 2)
# functions as arguments
def do_danchaofan():
    # make egg fried rice
    print('made one serving of egg fried rice')

def do_rousimian():
    # make shredded-pork noodles
    print('made one bowl of shredded-pork noodles')

def do_roubing():
    # make a roujiamo (meat sandwich)
    print('made one roujiamo')

# place the order
order = [(1, do_danchaofan), (2, do_rousimian), (3, do_roubing)]

# cook the dishes
def send_order(orders):
    print("Order taken, starting to cook\n")
    for nums, do_fan in orders:
        for i in range(nums):
            do_fan()
    print("\nCooking finished, serving the dishes")
send_order(order)
###Output
_____no_output_____
###Markdown
lambda: anonymous functions
###Code
lower = (lambda x, y: x if x < y else y)
lower(1,2)
import random
nums = [random.randint(-50,50) for i in range(10)]
print("原始数据",nums)
# sorted(iterable, cmp=None, key=None, reverse=False) ## Python 2.x
# sorted(iterable, key=None, reverse=False) ## Python 3.x
out = sorted(nums)
from functools import cmp_to_key
print("按大小排序",out)
out = sorted(nums, key=cmp_to_key(lambda x,y: abs(x)-abs(y)))
print("按绝对值排序",out)
nums.sort(key=cmp_to_key(lambda x,y: abs(x)-abs(y)))
print(nums)
# 1. Change the sort above to sort by squared value
# 2. Sort the list below while ignoring the case of the first letter
names = ['lilong', 'Wangqiang', 'Fengxiaotian', 'yangfengshuang', 'Zhangming', 'wangxiaotong', 'Lisixiang']
print(sorted(names))
###Output
_____no_output_____
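###Markdown
One possible approach to the two small exercises above (a reference sketch; `nums` and `names` are the lists defined in the previous cell):
###Code
# 1. sort by squared value: a key function avoids cmp_to_key entirely
print(sorted(nums, key=lambda x: x * x))
# 2. sort the names while ignoring the case of the first letter
print(sorted(names, key=lambda s: s.lower()))
###Output
_____no_output_____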
###Markdown
Functional programming: common higher-order functions

A function f that takes another function g as an argument is called a higher-order function. Mathematically this is written as f(g(x)).
###Code
# higher-order functions: filter, map, reduce
# Python 2 does not need to import reduce
import random
from functools import reduce
nums = [random.randint(-50,50) for i in range(10)]
nums2 = filter(lambda n: n>0, nums)
nums3 = map(lambda x: x * 2, nums)
nums4 = reduce(lambda x,y: x*y, nums)
print(list(nums2))
print(list(nums3))
print(nums4)
###Output
_____no_output_____
###Markdown
When processing big data in parallel, the map-reduce pattern is used very often, for example when you need to count word frequencies across hundreds of thousands of text files.

map -> f: takes a novel's filename and returns a word-frequency dictionary.
reduce -> g: takes two word-frequency dictionaries and returns one merged word-frequency dictionary.

word_d = map(f, filelist)
word_result = reduce(g, word_d)

**Homework**

1. The day05 folder contains some English novels. Using what you have learned so far, write a program that reports the 50 most frequent words in each file.
2. You will find that only file 001 works well and the other files raise errors. Where is the problem with the other files? (optional)
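A minimal sketch (added for illustration) of what `f` and `g` from the description above could look like; the homework still asks for your own, more careful version.
###Code
from collections import Counter
from functools import reduce

def f(filename):
    # map step: one file -> one word-frequency dictionary
    with open(filename, encoding='utf-8', errors='ignore') as fp:
        return Counter(fp.read().lower().split())

def g(d1, d2):
    # reduce step: merge two word-frequency dictionaries
    return d1 + d2   # Counter supports element-wise addition

# example call (file name taken from the homework description):
# word_result = reduce(g, map(f, ['day05/001.txt']))
###Output
_____no_output_____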
###Code
# Read the file and process the string to get a list of words
# Check whether the words still contain characters that do not belong to a word, and decide how to handle them.
# Your code
# Count how many times each word occurs
# Your code
# Finally, sort the words by their counts
# Your Code
# Requirement: count all the words, and make it fast enough.
from day05.tongji import read_file
%%timeit -n 1 -r 1
a = read_file('day05/001.txt')
a = read_file('day05/001.txt')
print("频率最高的50个单词")
print(a[:50])
print("频率最低的50个单词")
print(a[-50:])
###Output
_____no_output_____ |
.ipynb_checkpoints/Nutplease Movie Recommendation System_Part2.ipynb | ###Markdown
📽 Netflix Korea: analysis, visualization, and designing a recommendation system

Goals
```
1. Data analysis (EDA)
2. Visualization
3. Build a user content recommendation system based on the processed model
```

2. Visualization

Before starting the visualization work, note that besides the familiar visualization tools "matplotlib" and "seaborn", there is also a tool called "Plotly Express". Its usage is not very different from seaborn, so I personally recommend it. For installation, follow **[Getting Started with Plotly in Python](https://plotly.com/python/getting-started/jupyterlab-support-python-35)**.
###Code
import numpy as np
import pandas as pd
import plotly.express as px
import plotly.graph_objects as go
import cufflinks as cf
cf.go_offline()
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
netflix_tmdb_merge = pd.read_csv('dataset/netflix_tmdb_merge.csv.zip')
netflix_tmdb_merge.head(2)
###Output
_____no_output_____
###Markdown
2.1 The share of movies vs. TV shows in the Netflix catalog

First, let's make a simple visualization of the proportions of 'Movie' and 'TV Show' in the 'type' column.
###Code
f, ax = plt.subplots(figsize=(10, 6))
ax.set_title('넷플릭스 영화 Vs. TV 프로그램', family='D2Coding', size=20)
plt.xkcd()
sns.set(style='whitegrid')
sns.countplot(data=netflix_tmdb_merge, x='type', palette='pastel')
###Output
_____no_output_____
###Markdown
This dataset contains a larger share of movies than TV shows. Since the TMDB dataset mostly contains movie-related data, the share of TV Show rows dropped considerably after the merge.

2.2 Checking when Netflix content was added
###Code
netflix_tmdb_merge_date = netflix_tmdb_merge[['date_added']].dropna()
netflix_tmdb_merge_date['year'] = netflix_tmdb_merge_date['date_added'].apply(lambda x: x.split(', ')[-1])
netflix_tmdb_merge_date['month'] = netflix_tmdb_merge_date['date_added'].apply(lambda x: x.lstrip().split(' ')[0])
month_order = ['January', 'February', 'March', 'April', 'May', 'June', 'July', 'August', 'September', 'October', 'November', 'December'][::-1]
df = netflix_tmdb_merge_date.groupby('year')['month'].value_counts().unstack().fillna(0)[month_order]
fig = px.imshow(df, labels=dict(color='Count'), x=df.columns, y=df.index)
fig.update_layout(title='넷플릭스 컨텐츠 업데이트 연도별 월간 추가 추세')
fig.show()
f, ax = plt.subplots(figsize=(10, 6))
ax.set_title('넷플릭스 컨텐츠 연간 추가 추세', family='D2Coding', size=20)
plt.xkcd()
sns.set(style='whitegrid')
sns.countplot(data=netflix_tmdb_merge_date, y='year',
palette='pastel', order=netflix_tmdb_merge_date['year']
.value_counts().index[0:])
###Output
_____no_output_____
###Markdown
January 2020 has the darkest color, so the monthly addition trend is most pronounced there. Looking at the yearly trend, 2019 is the year in which the largest amount of content was added.

2.3 Checking the content ratings of Netflix titles
###Code
netflix_tmdb_merge['rating'].unique()
f, ax = plt.subplots(figsize=(10, 6))
ax.set_title('넷플릭스 영상물(컨텐츠) 등급', family='D2Coding', size=20)
plt.xkcd()
sns.set(style = "whitegrid")
sns.countplot(data=netflix_tmdb_merge, x='rating', palette='husl',
order=netflix_tmdb_merge['rating'].value_counts().index[0:])
###Output
_____no_output_____
###Markdown
The ratings above follow the **[US motion picture rating system](https://namu.wiki/w/%EC%98%81%EC%83%81%EB%AC%BC%20%EB%93%B1%EA%B8%89%20%EC%A0%9C%EB%8F%84/%EB%AF%B8%EA%B5%AD)**. Overall, most of the content Netflix brings in is on the provocative side (R: viewers under 17 must be accompanied by a parent). The next most common rating, "PG-13", roughly corresponds to viewing ages of 12 to 15 and up, followed by PG, TV-MA, TV-14, and so on.

2.4 Checking the duration of Netflix content
###Code
netflix_tmdb_merge['duration'].unique()
colors = ['lightslategray']
colors[0] = 'crimson'
content_dur = pd.value_counts(netflix_tmdb_merge['duration'])
fig = go.Figure([go.Bar(x=content_dur.index, y=content_dur.values,
text=content_dur.values, marker_color=colors)])
fig.update_traces(textposition='outside')
fig.update_layout(title='넷플릭스 컨텐츠 상영시간 분포도', xaxis={'categoryorder':'total descending'})
fig.show()
###Output
_____no_output_____
###Markdown
It is too early to draw conclusions from this small amount of data, but most of the content on Netflix appears to run for roughly two hours (120 minutes) or less.

2.5 Comparing ratings after adding the IMDb dataset

Kaggle hosts the [IMDb dataset (IMDb movies extensive dataset)](https://www.kaggle.com/stefanoleone992/imdb-extensive-dataset) and the [TMDb dataset (TMDb 5000 Movie Dataset)](https://www.kaggle.com/stefanoleone992/imdb-extensive-dataset), and both provide rating data for movies and TV shows. The reason for using the IMDb dataset here is that TMDb expresses 'liked' and 'disliked' votes as percentages of voters, whereas IMDb scores ratings numerically (0-10), which is more objective and should produce better visualizations.

The 'IMDb movies extensive dataset' is split into a ratings dataset (IMDb ratings.csv), a movie-information dataset (IMDb movies.csv), and an actor-information dataset (IMDb names.csv). Since we need to assign ratings based on the titles already registered on Netflix Korea, we load the "IMDb ratings.csv" and "IMDb movies.csv" files and then do the visualization work.
###Code
imdb_rate = pd.read_csv('dataset/IMDb ratings.csv', usecols=['weighted_average_vote'],
encoding='utf-8', engine='python')
print(imdb_rate.shape)
imdb_title = pd.read_csv('dataset/IMDb movies.csv', usecols=['title', 'year', 'genre'],
encoding='utf-8', engine='python')
print(imdb_title.shape)
# Merge the two csv files above into a single pandas DataFrame
rating_df = pd.DataFrame({'Title':imdb_title.title,
'Genre':imdb_title.genre,
'Release Year':imdb_title.year,
'Rating':imdb_rate.weighted_average_vote})
rating_df.drop_duplicates(subset=['Title', 'Rating', 'Release Year'], inplace=True)
print(rating_df.shape)
###Output
(85852, 4)
###Markdown
2.5.1 Visualizing Netflix content ratings

Inner-join the `IMDB rating` dataset with the `netflix_tmdb_merge` dataset.
###Code
rating_df.dropna()
joint_contents_df = rating_df.merge(netflix_tmdb_merge, left_on='Title', right_on='title', how='inner')
joint_contents_df = joint_contents_df.sort_values(by='Rating', ascending=False)
print(joint_contents_df.shape)
joint_contents_df.head()
# Visualize the top 10 titles by IMDb rating
top_10_contents = joint_contents_df[0:10]
fig = px.sunburst(top_10_contents, path=['title', 'rating'],
values='Rating', color='Rating')
fig.update_layout(title='IMDb 평점 + 넷플릭스 컨텐츠')
fig.show()
print(top_10_contents.shape)
top_10_contents.head(10)
###Output
(10, 13)
###Markdown
2.6 Visualizing the main genres of Netflix content as a WordCloud

Let's extract the 'genres' column from the `netflix_tmdb_merge` dataset and visualize it with a WordCloud.
###Code
netflix_tmdb_merge.head()
from collections import Counter
contents_genre = list(netflix_tmdb_merge['genres'])
cont_genre = []
for g in contents_genre:
g = list(g.split(','))
for i in g:
cont_genre.append(i.replace('', ''))
mov_gen = Counter(cont_genre)
from wordcloud import WordCloud, STOPWORDS, ImageColorGenerator
from PIL import Image
text = list(set(cont_genre))
plt.rcParams['figure.figsize'] = (11, 11)
mask = np.array(Image.open('imgs/black-star1.png'))
wc = WordCloud(max_words=1000000, background_color='black', mask=mask).generate(str(text))
plt.imshow(wc, interpolation='bilinear')
plt.axis('off')
plt.show()
###Output
_____no_output_____ |
001 LL - adding two numbers represented by a LL.ipynb | ###Markdown
Here's the solution:
###Code
class ListNode(object):
def __init__(self, x):
self.val = x
self.next = None
class Solution:
    def addTwoNumbers(self, l1, l2, l3):
        c = 0
        while l1 != None or l2 != None:
            # treat a missing node as digit 0 so lists of unequal length also work
            v1 = l1.val if l1 else 0
            v2 = l2.val if l2 else 0
            res = v1 + v2 + c
            l1 = l1.next if l1 else None
            l2 = l2.next if l2 else None
            c = res // 10
            res = res % 10
            if l3 == None:
                # first digit: create the head of the result list
                l3 = ListNode(res)
                t3 = l3
            else:
                # append the digit and advance the tail pointer
                t3.next = ListNode(res)
                t3 = t3.next
        if c != 0:
            t3.next = ListNode(c)
        return l3
l1 = ListNode(2)
l1.next = ListNode(4)
l1.next.next = ListNode(3)
l2 = ListNode(5)
l2.next = ListNode(6)
l2.next.next = ListNode(4)
result = Solution().addTwoNumbers(l1, l2,None)
while result:
print(result.val, end = " --> ")
result = result.next
print("None")
###Output
7 --> 0 --> 8 --> None
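###Markdown
As a side note, the same iterative idea is often written with a "dummy head" node, which removes the special case for creating the first result node. A minimal sketch (it reuses the `ListNode` class defined above):
###Code
class SolutionDummyHead:
    def addTwoNumbers(self, l1, l2):
        dummy = ListNode(0)   # placeholder node; the real result starts at dummy.next
        tail, carry = dummy, 0
        while l1 or l2 or carry:
            v1 = l1.val if l1 else 0
            v2 = l2.val if l2 else 0
            carry, digit = divmod(v1 + v2 + carry, 10)
            tail.next = ListNode(digit)
            tail = tail.next
            l1 = l1.next if l1 else None
            l2 = l2.next if l2 else None
        return dummy.next
###Output
_____no_output_____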
###Markdown
Thinking of another approach? Recursion can make our work easier here.
###Code
## Now trying out recursion to calculate addition
class ListNode(object):
def __init__(self, x):
self.val = x
self.next = None
class Solution:
    def addTwoNumbers(self, l1, l2, c=0):
        # base case: both lists consumed; emit a final node only if a carry remains
        if l1 is None and l2 is None:
            return ListNode(c) if c else None
        v1 = l1.val if l1 else 0
        v2 = l2.val if l2 else 0
        res = v1 + v2 + c
        node = ListNode(res % 10)
        # recurse on the remaining digits, passing the carry along
        node.next = self.addTwoNumbers(l1.next if l1 else None,
                                       l2.next if l2 else None,
                                       res // 10)
        return node
l1 = ListNode(2)
l1.next = ListNode(4)
l1.next.next = ListNode(3)
l2 = ListNode(5)
l2.next = ListNode(6)
l2.next.next = ListNode(4)
result = Solution().addTwoNumbers(l1, l2)
while result:
print (result.val)
result = result.next
###Output
_____no_output_____ |
PredictingOrdering_ Prophet.ipynb | ###Markdown
Predictive Ordering: Prophet ---This notebook contains the code to forecast order quantities using the Prophet library for time series forecasting.--- Reading the Data Installing Prophet
###Code
!pip install prophet
###Output
Requirement already satisfied: prophet in /usr/local/lib/python3.7/dist-packages (1.0.1)
Requirement already satisfied: pystan~=2.19.1.1 in /usr/local/lib/python3.7/dist-packages (from prophet) (2.19.1.1)
Requirement already satisfied: LunarCalendar>=0.0.9 in /usr/local/lib/python3.7/dist-packages (from prophet) (0.0.9)
Requirement already satisfied: cmdstanpy==0.9.68 in /usr/local/lib/python3.7/dist-packages (from prophet) (0.9.68)
Requirement already satisfied: Cython>=0.22 in /usr/local/lib/python3.7/dist-packages (from prophet) (0.29.28)
Requirement already satisfied: pandas>=1.0.4 in /usr/local/lib/python3.7/dist-packages (from prophet) (1.3.5)
Requirement already satisfied: convertdate>=2.1.2 in /usr/local/lib/python3.7/dist-packages (from prophet) (2.4.0)
Requirement already satisfied: setuptools-git>=1.2 in /usr/local/lib/python3.7/dist-packages (from prophet) (1.2)
Requirement already satisfied: matplotlib>=2.0.0 in /usr/local/lib/python3.7/dist-packages (from prophet) (3.2.2)
Requirement already satisfied: tqdm>=4.36.1 in /usr/local/lib/python3.7/dist-packages (from prophet) (4.62.3)
Requirement already satisfied: holidays>=0.10.2 in /usr/local/lib/python3.7/dist-packages (from prophet) (0.10.5.2)
Requirement already satisfied: python-dateutil>=2.8.0 in /usr/local/lib/python3.7/dist-packages (from prophet) (2.8.2)
Requirement already satisfied: numpy>=1.15.4 in /usr/local/lib/python3.7/dist-packages (from prophet) (1.21.5)
Requirement already satisfied: ujson in /usr/local/lib/python3.7/dist-packages (from cmdstanpy==0.9.68->prophet) (5.1.0)
Requirement already satisfied: pymeeus<=1,>=0.3.13 in /usr/local/lib/python3.7/dist-packages (from convertdate>=2.1.2->prophet) (0.5.11)
Requirement already satisfied: hijri-converter in /usr/local/lib/python3.7/dist-packages (from holidays>=0.10.2->prophet) (2.2.3)
Requirement already satisfied: korean-lunar-calendar in /usr/local/lib/python3.7/dist-packages (from holidays>=0.10.2->prophet) (0.2.1)
Requirement already satisfied: six in /usr/local/lib/python3.7/dist-packages (from holidays>=0.10.2->prophet) (1.15.0)
Requirement already satisfied: ephem>=3.7.5.3 in /usr/local/lib/python3.7/dist-packages (from LunarCalendar>=0.0.9->prophet) (4.1.3)
Requirement already satisfied: pytz in /usr/local/lib/python3.7/dist-packages (from LunarCalendar>=0.0.9->prophet) (2018.9)
Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib>=2.0.0->prophet) (1.3.2)
Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.7/dist-packages (from matplotlib>=2.0.0->prophet) (0.11.0)
Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib>=2.0.0->prophet) (3.0.7)
###Markdown
Importing the required libraries
###Code
import pandas as pd
from google.colab import drive
import math
import io
import numpy as np
import matplotlib.pyplot as plt
try:
    from prophet import Prophet  # package name for the 'prophet' release installed above
except ImportError:
    from fbprophet import Prophet  # fall back to the older 'fbprophet' package name
###Output
_____no_output_____
###Markdown
Reading the csv file from Google Drive
###Code
drive.mount("/content/gdrive")
df = pd.read_csv("gdrive/My Drive/PreprocessedOrders.csv", index_col="Period")
###Output
Drive already mounted at /content/gdrive; to attempt to forcibly remount, call drive.mount("/content/gdrive", force_remount=True).
###Markdown
Visualizing the Dataset Display the dataset
###Code
df.head()
def plot_series(time, series, format="-", start=0, end=None):
plt.plot(time[start:end], series[start:end], format)
plt.xlabel("Time")
plt.ylabel("Value")
plt.grid(True)
shortProductNames = ["Diet Pepsi", "Frito Lays", "Quaker Oats", "Ruffles", "Tropicana Orange", "Mountain Dew"]
plt.figure(figsize=(10, 6))
plot_series(np.arange(len(df[["Sales1", "Sales2", "Sales3", "Sales4", "Sales5", "Sales6"]]), dtype="float32"), df[["Sales1", "Sales2", "Sales3", "Sales4", "Sales5", "Sales6"]])
plt.legend(shortProductNames, loc="upper left", bbox_to_anchor=(1.05, 0.0, 0.3, 0.2))
plt.show()
###Output
_____no_output_____
###Markdown
Preparing the dataset
###Code
df.head()
###Output
_____no_output_____
###Markdown
Create a date stamp column
###Code
date = pd.date_range(start="2000-1-1", periods=60, freq="1M")
df = df.assign(ds=date)
###Output
_____no_output_____
###Markdown
Display the dataframe
###Code
df.drop(["Unnamed: 0"], axis = 1, inplace=True)
df.head()
###Output
_____no_output_____
###Markdown
Training the model
###Code
final = pd.DataFrame()
df_train = df[:40]
df_test = df[40:]
for column in df.columns:
if column != "ds":
model = Prophet()
temp = pd.DataFrame()
temp["ds"] = df_train["ds"]
temp["y"] = df_train[column]
model.fit(temp)
        future = model.make_future_dataframe(periods=20, freq='M')  # monthly data, so step by month-ends to line up with the test months
forecast = model.predict(future)
forecast = forecast.rename(columns={"yhat": "yhat_"+ column})
final = pd.merge(final, forecast.set_index("ds"), how="outer", left_index=True, right_index=True)
final = final[["yhat_Sales1", "yhat_Sales2", "yhat_Sales3", "yhat_Sales4", "yhat_Sales5", "yhat_Sales6"]]
final.shape
###Output
_____no_output_____
###Markdown
Visualizing the results
###Code
plt.figure(figsize=(28, 4))
df_test = df_test.drop(['ds'], axis = 1)
test_dataset = df_test.to_numpy()
train_dataset = df_train.to_numpy()
result = final.to_numpy()[40:]
result = np.clip(result, 0, 10000)
time = np.arange(60)
time_train = time[:40]
time_validate = time[40:]
products = 6
for product in range(products):
plt.subplot(1, 6, product+1)
plot_series(time_validate, test_dataset[:, product])
plot_series(time_validate, result[:, product])
plt.title("Validation Graph for P%d" %(product + 1))
plt.legend(["Actual Sales", "Predicted Sales"], loc ="upper left")
plt.show()
plt.figure(figsize=(28, 4))
for product in range(products):
plt.subplot(1, 6, product+1)
plot_series(time_train, train_dataset[:, product])
plot_series(time_validate, result[:, product])
plt.title("Forecasting Graph for P%d" %(product + 1))
plt.legend(["Actual Sales", "Predicted Sales"], loc ="upper left")
plt.show()
df.to_numpy().shape
plt.figure(figsize=(28, 4))
# df_test = df_test.drop(['ds'], axis = 1)
test_dataset = df_test.to_numpy()
train_dataset = df_train.to_numpy()
result = final.to_numpy()
result = np.clip(result, 0, 10000)
time = np.arange(60)
time_train = time[:40]
time_validate = time[40:]
products = 6
for product in range(products):
plt.subplot(1, 6, product+1)
plot_series(time, df.to_numpy()[:, product])
plot_series(time, result[:, product])
plt.title("Validation Graph for P%d" %(product + 1))
plt.legend(["Actual Sales", "Predicted Sales"], loc ="upper left")
plt.show()
###Output
_____no_output_____
###Markdown
Calculating the metrics
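The metric used below is the symmetric mean absolute percentage error (SMAPE), computed per product and then averaged. In the form the code implements, for actual values $A_t$ and forecasts $F_t$ over $n$ test periods,
$$\mathrm{SMAPE} = \frac{100}{n}\sum_{t=1}^{n}\frac{|F_t - A_t|}{\left(|A_t| + |F_t|\right)/2}$$
and any period where both the actual and the forecast are exactly zero has both values replaced by 1, so that term contributes zero error.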
###Code
def SMAPE(actual, predicted):
(rows, cols) = actual.shape
updated_actual = actual
updated_predicted = predicted
model_error = 0
for row in range(rows):
for col in range(cols):
if actual[row][col] == predicted[row][col] and actual[row][col] == 0:
updated_actual[row][col] = 1
updated_predicted[row][col] = 1
for col in range(cols):
model_error = model_error + round(np.mean(np.abs(updated_predicted[:, col] - updated_actual[:, col]) / ((np.abs(updated_predicted[:, col]) + np.abs(updated_actual[:, col]))/2))*100, 2)
return model_error/cols
smape = SMAPE (test_dataset, result[40:])
print(smape)
###Output
70.69000000000001
|
dmu1/dmu1_ml_ELAIS-N1/1.4_PanSTARRS-3SS.ipynb | ###Markdown
ELAIS-N1 master catalogue Preparation of Pan-STARRS1 - 3pi Steradian Survey (3SS) dataThis catalogue comes from `dmu0_PanSTARRS1-3SS`.In the catalogue, we keep:- The `uniquePspsSTid` as unique object identifier;- The r-band position which is given for all the sources;- The grizy `ApMag` aperture magnitude (see below);- The grizy `KronMag` as total magnitude.The Pan-STARRS1-3SS catalogue provides for each band an aperture magnitude defined as “In PS1, an 'optimal' aperture radius is determined based on the local PSF. The wings of the same analytic PSF are then used to extrapolate the flux measured inside this aperture to a 'total' flux.”The observations used for the catalogue were done between 2010 and 2015 ([ref](https://confluence.stsci.edu/display/PANSTARRS/PS1+Image+data+products)).**TODO**: Check if the detection flag can be used to know in which bands an object was detected to construct the coverage maps.**TODO**: Check for stellarity.
###Code
from herschelhelp_internal import git_version
print("This notebook was run with herschelhelp_internal version: \n{}".format(git_version()))
%matplotlib inline
#%config InlineBackend.figure_format = 'svg'
import matplotlib.pyplot as plt
plt.rc('figure', figsize=(10, 6))
from collections import OrderedDict
import os
from astropy import units as u
from astropy.coordinates import SkyCoord
from astropy.table import Column, Table
import numpy as np
from herschelhelp_internal.flagging import gaia_flag_column
from herschelhelp_internal.masterlist import nb_astcor_diag_plot, remove_duplicates
from herschelhelp_internal.utils import astrometric_correction, mag_to_flux
OUT_DIR = os.environ.get('TMP_DIR', "./data_tmp")
try:
os.makedirs(OUT_DIR)
except FileExistsError:
pass
RA_COL = "ps1_ra"
DEC_COL = "ps1_dec"
###Output
_____no_output_____
###Markdown
I - Column selection
###Code
imported_columns = OrderedDict({
"objID": "ps1_id",
"raMean": "ps1_ra",
"decMean": "ps1_dec",
"gApMag": "m_ap_gpc1_g",
"gApMagErr": "merr_ap_gpc1_g",
"gKronMag": "m_gpc1_g",
"gKronMagErr": "merr_gpc1_g",
"rApMag": "m_ap_gpc1_r",
"rApMagErr": "merr_ap_gpc1_r",
"rKronMag": "m_gpc1_r",
"rKronMagErr": "merr_gpc1_r",
"iApMag": "m_ap_gpc1_i",
"iApMagErr": "merr_ap_gpc1_i",
"iKronMag": "m_gpc1_i",
"iKronMagErr": "merr_gpc1_i",
"zApMag": "m_ap_gpc1_z",
"zApMagErr": "merr_ap_gpc1_z",
"zKronMag": "m_gpc1_z",
"zKronMagErr": "merr_gpc1_z",
"yApMag": "m_ap_gpc1_y",
"yApMagErr": "merr_ap_gpc1_y",
"yKronMag": "m_gpc1_y",
"yKronMagErr": "merr_gpc1_y"
})
catalogue = Table.read("../../dmu0/dmu0_PanSTARRS1-3SS/data/PanSTARRS1-3SS_ELAIS-N1.fits")[list(imported_columns)]
for column in imported_columns:
catalogue[column].name = imported_columns[column]
epoch = 2012
# Clean table metadata
catalogue.meta = None
# Adding flux and band-flag columns
for col in catalogue.colnames:
if col.startswith('m_'):
errcol = "merr{}".format(col[1:])
# -999 is used for missing values
catalogue[col][catalogue[col] < -900] = np.nan
catalogue[errcol][catalogue[errcol] < -900] = np.nan
flux, error = mag_to_flux(np.array(catalogue[col]), np.array(catalogue[errcol]))
# Fluxes are added in µJy
catalogue.add_column(Column(flux * 1.e6, name="f{}".format(col[1:])))
catalogue.add_column(Column(error * 1.e6, name="f{}".format(errcol[1:])))
# Band-flag column
if "ap" not in col:
catalogue.add_column(Column(np.zeros(len(catalogue), dtype=bool), name="flag{}".format(col[1:])))
# TODO: Set to True the flag columns for fluxes that should not be used for SED fitting.
catalogue[:10].show_in_notebook()
###Output
_____no_output_____
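###Markdown
For reference, a minimal sketch of the conversion that `mag_to_flux` is assumed to perform (standard AB-magnitude convention with first-order error propagation; the HELP helper may differ in details such as masking):
###Code
def ab_mag_to_flux_sketch(mag, mag_err):
    """Illustrative AB magnitude to flux conversion: f_nu [Jy] = 3631 * 10**(-0.4 * m_AB)."""
    flux = 3631. * 10 ** (-0.4 * mag)
    flux_err = flux * 0.4 * np.log(10) * mag_err  # sigma_f = 0.4 ln(10) f sigma_m
    return flux, flux_err

# a 23.9 mag source corresponds to ~1 µJy
print(np.array(ab_mag_to_flux_sketch(23.9, 0.1)) * 1.e6)
###Output
_____no_output_____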
###Markdown
II - Removal of duplicated sources We remove duplicated objects from the input catalogues.
###Code
SORT_COLS = ['merr_ap_gpc1_r', 'merr_ap_gpc1_g', 'merr_ap_gpc1_i', 'merr_ap_gpc1_z', 'merr_ap_gpc1_y']
FLAG_NAME = 'ps1_flag_cleaned'
nb_orig_sources = len(catalogue)
catalogue = remove_duplicates(catalogue, RA_COL, DEC_COL, sort_col=SORT_COLS, flag_name=FLAG_NAME)
nb_sources = len(catalogue)
print("The initial catalogue had {} sources.".format(nb_orig_sources))
print("The cleaned catalogue has {} sources ({} removed).".format(nb_sources, nb_orig_sources - nb_sources))
print("The cleaned catalogue has {} sources flagged as having been cleaned".format(np.sum(catalogue[FLAG_NAME])))
###Output
/opt/anaconda3/envs/herschelhelp_internal/lib/python3.6/site-packages/astropy/table/column.py:1096: MaskedArrayFutureWarning: setting an item on a masked array which has a shared mask will not copy the mask and also change the original mask array in the future.
Check the NumPy 1.11 release notes for more information.
ma.MaskedArray.__setitem__(self, index, value)
###Markdown
III - Astrometry correction We match the astrometry to the Gaia one. We limit the Gaia catalogue to sources with a g band flux between the 30th and the 70th percentiles. Some quick tests show that this gives the lowest dispersion in the results.
###Code
gaia = Table.read("../../dmu0/dmu0_GAIA/data/GAIA_ELAIS-N1.fits")
gaia_coords = SkyCoord(gaia['ra'], gaia['dec'])
nb_astcor_diag_plot(catalogue[RA_COL], catalogue[DEC_COL],
gaia_coords.ra, gaia_coords.dec)
delta_ra, delta_dec = astrometric_correction(
SkyCoord(catalogue[RA_COL], catalogue[DEC_COL]),
gaia_coords
)
print("RA correction: {}".format(delta_ra))
print("Dec correction: {}".format(delta_dec))
catalogue[RA_COL] += delta_ra.to(u.deg)
catalogue[DEC_COL] += delta_dec.to(u.deg)
nb_astcor_diag_plot(catalogue[RA_COL], catalogue[DEC_COL],
gaia_coords.ra, gaia_coords.dec)
###Output
_____no_output_____
###Markdown
IV - Flagging Gaia objects
###Code
catalogue.add_column(
gaia_flag_column(SkyCoord(catalogue[RA_COL], catalogue[DEC_COL]), epoch, gaia)
)
GAIA_FLAG_NAME = "ps1_flag_gaia"
catalogue['flag_gaia'].name = GAIA_FLAG_NAME
print("{} sources flagged.".format(np.sum(catalogue[GAIA_FLAG_NAME] > 0)))
###Output
53200 sources flagged.
###Markdown
V - Saving to disk
###Code
catalogue.write("{}/PS1.fits".format(OUT_DIR), overwrite=True)
###Output
_____no_output_____ |
notebooks/2 - An MVP Twitter bot that anyone can use.ipynb | ###Markdown
An MVP Twitter bot that anyone can use====
###Code
# TwitterBot is assumed to be defined/imported in an earlier cell or module of this repo
bot1 = TwitterBot('test1')
bot2 = TwitterBot('test2')
bot3 = TwitterBot('test3')
kanye = TwitterBot('production')
#
kanye.tweet('Attendees will learn to like small dogs and cigarettes')
# they can talk between the two of them
tweet = bot3.tweet("Testing 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11...")
reply_tweet = bot1.tweet("I can hear you!", in_reply_to=tweet)
# they remember their conversations
bot1.should_reply(tweet)
bot3.should_reply(reply_tweet)
# or they can have a conversation
tweet = bot1.tweet("Is there anybody out there?")
reply_tweet1 = bot2.tweet("Mr. Watson--come here--I want to see you.", in_reply_to=tweet)
reply_tweet2 = bot3.tweet("wait, I thought I was your one and only. who is @%s?" % bot1.user.screen_name,
in_reply_to=reply_tweet1)
# they should all be friends
bot1._api_client.CreateFriendship(user_id=bot3.user.id)
bot1._api_client.CreateFriendship(user_id=bot2.user.id)
bot2._api_client.CreateFriendship(user_id=bot3.user.id)
bot2._api_client.CreateFriendship(user_id=bot1.user.id)
bot3._api_client.CreateFriendship(user_id=bot2.user.id)
bot3._api_client.CreateFriendship(user_id=bot1.user.id)
# and they should all like eachother's posts, why not?
bot1._api_client.CreateFavorite(status=reply_tweet1)
bot1._api_client.CreateFavorite(status=reply_tweet2)
bot2._api_client.CreateFavorite(status=tweet)
bot2._api_client.CreateFavorite(status=reply_tweet2)
# except maybe bot 3 he seems a bit wary of bot 1
###Output
_____no_output_____ |
Advanced AIML Lab/Advanced AI-ML Lab 2 09.02.2021.ipynb | ###Markdown
Abhishek Sharma CS 2nd Year | Section : "I" | Roll no.: 01 Enrollment no.: 12019009001127 Advanced Artificial Intelligence and Machine Learning Lab 2 Date : 09.02.2021 | Faculty : BM and MB
###Code
from sklearn.datasets import load_boston
boston = load_boston()
import pandas as pd
data = pd.DataFrame(boston.data, columns = boston.feature_names)
data['MEDV'] = pd.DataFrame(boston.target)
x = data['RM']
y = data['MEDV']
pd.DataFrame ([x,y]).transpose().head()
###Output
_____no_output_____
###Markdown
Linear Regression Model :
###Code
from sklearn.linear_model import LinearRegression
model1 = LinearRegression()
from sklearn.model_selection import train_test_split
x = pd.DataFrame(x)
y = pd.DataFrame(y)
x_train1, x_test1, y_train1, y_test1 = train_test_split(x,y, test_size = 0.2)
print (x_train1.shape)
print (x_test1.shape)
linearRegression_train = LinearRegression()
linearRegression_train.fit(x_train1, y_train1)
yTestPredict = linearRegression_train.predict(x_test1)
print (yTestPredict)
import numpy as np
from sklearn.metrics import mean_squared_error
np.sqrt(mean_squared_error(y_test1, yTestPredict))
linearRegression_train.score(x_test1, y_test1)
###Output
_____no_output_____
###Markdown
**Here we have imported the following libraries :**- Calculating the mean squared error - ```from sklearn.metrics import mean_squared_error```- For training, testing and splitting the data - ```from sklearn.model_selection import train_test_split``` Ridge Regression Model : **Importing the Ridge Model :** ```from sklearn.linear_model import Ridge```
###Code
from sklearn.linear_model import Ridge
ridge_reg = Ridge(alpha=1)
ridge_reg.fit(x_train1, y_train1)
yRidgePredict = ridge_reg.predict(x_test1)
np.sqrt(mean_squared_error(y_test1, yRidgePredict))
###Output
_____no_output_____
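###Markdown
As an aside, the penalty strength `alpha` above is fixed at 1; a minimal sketch of choosing it by cross-validation with scikit-learn's `RidgeCV` (the alpha grid here is illustrative, not part of the original lab):
###Code
from sklearn.linear_model import RidgeCV
ridge_cv = RidgeCV(alphas=[0.01, 0.1, 1, 10, 100])
ridge_cv.fit(x_train1, y_train1)
print(ridge_cv.alpha_)  # alpha selected by cross-validation
np.sqrt(mean_squared_error(y_test1, ridge_cv.predict(x_test1)))
###Output
_____no_output_____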
###Markdown
Lasso Regression Model : **Importing the Lasso Regression Model :** ```from sklearn.linear_model import Lasso```
###Code
from sklearn.linear_model import Lasso
lasso_reg = Lasso(alpha = 0.1)
lasso_reg.fit(x_train1, y_train1)
yLassoPredict = lasso_reg.predict(x_test1)
np.sqrt(mean_squared_error(y_test1, yLassoPredict))
###Output
_____no_output_____
###Markdown
ElasticNet Model : **Importing the ElasticNet Regression Model :** ```from sklearn.linear_model import ElasticNet```
###Code
from sklearn.linear_model import ElasticNet
elasticnet_reg = ElasticNet(alpha = 0.1, l1_ratio = 0.5)
elasticnet_reg.fit(x_train1, y_train1)
yElasticNetPredict = elasticnet_reg.predict(x_test1)
np.sqrt(mean_squared_error(y_test1, yElasticNetPredict))
###Output
_____no_output_____ |
notebooks/lda/LDA_twitter.ipynb | ###Markdown
Topic modeling in texts using Latent Dirichlet Allocation (LDA): an application in Python Imports
###Code
print('Hello world')
import pandas as pd
import numpy as np
import altair as alt
from sklearn.feature_extraction.text import CountVectorizer
import nltk
nltk.download('stopwords')
from nltk.corpus import stopwords
from nltk import wordpunct_tokenize
import re
from time import time
###Output
[nltk_data] Downloading package stopwords to /home/renato/nltk_data...
[nltk_data] Package stopwords is already up-to-date!
###Markdown
Data
###Code
df = pd.read_csv('../Dados coletados/Twitter_padronizado.csv', encoding='utf-8', lineterminator='\n')
df.head()
###Output
_____no_output_____
###Markdown
Preprocessing
###Code
def remove_elementos_repetidos(lista):
nova_lista = []
for item in lista:
if item not in nova_lista:
nova_lista.append(item)
return nova_lista
# remove special characters and digits
def clean_tweet(tweet):
    # remove RTs
tweet = re.sub(r'RT+', '', tweet)
    # remove mentions (@user)
tweet = re.sub(r'@\S+', '', tweet)
    # remove 'kkk' laughter strings, links and some punctuation
tweet = re.sub(r'kkk\S+', '', tweet)
tweet = re.sub(r"http\S+", "", tweet).lower().replace('.','').replace(';','').replace('-','').replace(':','').replace(')','')
    # remove remaining special characters and digits
tweet = re.sub("(\\d|\\W)+|\w*\d\w*"," ",tweet )
tweet = ' '.join(s for s in tweet.split() if (not any(c.isdigit() for c in s)) and len(s) > 2)
tweet = tweet.replace("\n", "")
return tweet
stop_words = set(stopwords.words("portuguese"))
stop_words.update(['que', 'até', 'esse',
'essa', 'pro', 'pra',
'oi', 'lá', 'blá', 'dos'])
clean_tweets = []
for w in range(len(df.TWEET)):
tweet = df['TWEET'].iloc[w]
tweet = clean_tweet(tweet)
clean_tweets.append(tweet)
# remove duplicate tweets
clean_tweets = remove_elementos_repetidos(clean_tweets)
clean_tweets[110:145]
# TODO: look into more efficient data-cleaning approaches for tweets
###Output
_____no_output_____
###Markdown
LDA works on word frequencies, so we will use raw term frequencies (TFs) rather than TF-IDF.
###Code
# COUNT vectorizer
tf_vectorizer = CountVectorizer(
min_df = 30,
max_df = 0.5,
max_features = 10000,
stop_words = stop_words,
ngram_range = (1,2)
)
#transform
vec_text = tf_vectorizer.fit_transform(clean_tweets)
#returns a list of words.
words = tf_vectorizer.get_feature_names()
#print(clean_tweets)
#print(vec_text)
print(len(words))
###Output
4569
###Markdown
Finding topics The result will include: a matrix describing the relationship between words and topics, and a matrix describing the relationship between documents and topics. There is another popular LDA implementation in Python called Gensim.
###Code
from sklearn.decomposition import LatentDirichletAllocation
def print_top_words(model, feature_names, n_top_words):
for topic_idx, topic in enumerate(model.components_):
print("\n--\nTopic #{}: ".format(topic_idx + 1))
message = ", ".join([feature_names[i]
for i in topic.argsort()[:-n_top_words - 1:-1]])
print(message)
print()
def display_topics(W, H, feature_names, documents, no_top_words, no_top_documents):
for topic_idx, topic in enumerate(H):
print("\n--\nTopic #{}: ".format(topic_idx + 1))
print(", ".join([feature_names[i]
for i in topic.argsort()[:-no_top_words - 1:-1]]).upper())
top_d_idx = np.argsort(W[:,topic_idx])[::-1][0:no_top_documents]
for d in top_d_idx:
doc_data = df[['USUARIO', 'TWEET']].iloc[d]
print('{} - {} : \t{:.2f}'.format(doc_data[0], doc_data[1], W[d, topic_idx]))
lda = LatentDirichletAllocation(n_components=15,
                                learning_method='online', # 'online' is analogous to minibatch updates in k-means
random_state=0)
t0 = time()
lda.fit(vec_text)
doc_topic_matrix = lda.transform(vec_text)
print("done in %0.3fs." % (time() - t0))
print('Matriz documento-tópicos:' + str(doc_topic_matrix.shape))
print('Matriz tópicos-termos:' + str(lda.components_.shape))
display_topics(doc_topic_matrix,
lda.components_,
words,
df,
15,
10)
###Output
--
Topic #1:
MÊS, QUER, ALGUÉM, MILHÕES, CARALHO, SABER, LUGAR, POVO, TOMAR, PUTA, THE, ONDE, NESSA, FILHO, CERTO
MikaelsonNinan - RT @PatriciaFelixRJ: Alô @jairbolsonaro, seu mentiroso: o povo brasileiro quer saber pra onde foram os mil dólares de auxílio emergencial! : 0.94
devilofbillie - se chama no time to die mas tá matando todo mundo de ansiedade... enfim a hipocrisia : 0.88
19Pablo19_ - RT @UOLEsporte: Pay-per-view: você precisa de R$ 3,7 mil/ano para assinar todos de esporte https://t.co/mI6Q3ncWMI : 0.88
ElianadaCruzRo1 - RT @MariluParreiras: … acrescentou que concedeu auxílio-emergencial de aproximadamente “1 mil dólares para 65 milhões de famílias”. https:/… : 0.84
Criaonipresente - @VeteranoLouco @yetz1 Já vai ter jogo sendo lançado junto ao console cara sempre lançam jogos junto com os consoles… https://t.co/Kn1pVxJ1KT : 0.82
prizellei - #DomingoEspetacular oi querida da pra acabar logo pra eu assistir #AFazenda12? : 0.82
paulaclimaadv - Tá muito #catfish essa nova temporada de #90diasparacasar!!! : 0.81
KayoJovovich15 - RT @IgorCamaraCN: Não custa sonhar com a redução dos impostos @MinEconomia #PS5eXboxMaisBaratos : 0.81
TheComediante - O Musto é a prova viva de que faltou apoio e dedicação pra eu e meus manos terem sido jogadores profissionais. O qu… https://t.co/E4dcfGF94D : 0.81
NanatCalore - Passou de nessian pra lucien descobrindo quem é o pai dele https://t.co/qZ7JZpJS3n : 0.81
--
Topic #2:
VAI, JOGO, FLAMENGO, AQUI, BEM, ANOS, DEMAIS, TAVA, HOJE, LAKERS, GANHAR, JOGADORES, FAZER, SAIR, DOME
VivianePimente3 - Presidente Bolsonaro, em seu discurso na abertura da conferência da ONU representou milhões de cristãos indignados… https://t.co/1xk6GGxPcR : 0.88
zeiro10cr - Grande vitória do @Cruzeiro, destaques para Régis, Machado, Sassá e principalmente o zagueiro Manoel, o melhor em campo : 0.88
ThiagoFiSa - @CECemVideos Moreno?
Ele não sabe a cor do uniforme do goleiro do Avaí! : 0.87
vitorrrosa - RT @bellexcosta: Momento muito emocionante do Neneca após o apito final da partida entre #Palmeiras e #Flamengo. Merece muito todos os elog… : 0.87
cacaugonzalez5 - RT @admiraemilly: Nossa menina preciosa!
TOPZERA REDETV : 0.85
RoseTaranto - Que Palhaçada é essa?
#MaiaEAlcolumbreNuncaMais https://t.co/3wKyqagejH : 0.84
JEANTRENTO - @WellCruz8 Desculpa mas não concordo, tem que olhar não em câmera lenta, o jogador deu o passe, consequentemente o… https://t.co/gg78lPbo2k : 0.84
manucaandr - RT @prodriguesl: 21:20 de uma sexta feira, eu, Johan e Manoel escutando 1h de Gil bala : 0.84
IMiojito - RT @prettyIHRO: Criança 1: “vc nunca assistiu Toy Story?”
Criança 2: “não...”
Criança 1: “NOSSA VC NÃO TEVE INFÂNCIA!!”
~ AMBAS COM 9 ANOS… : 0.84
Parente__Neto - Jimmy tem que atacar o Kanter agora : 0.84
--
Topic #3:
TER, VIDA, AMO, FEZ, VCS, QUERIA, DAR, BOLA, MENOS, MEIO, SER, DEU, SMASH, PROGRAMA, TRABALHO
bia_mesquita97 - RT @MonarkEmPython: O Rato, mas acima de tudo, Douglas, o ser humano que está por trás, é um dos caras mais fodas e carismáticos q temos no… : 0.90
Annassx1 - MEU DEUS É A PATROAAAA #AfterSchoolEP : 0.90
Yogagt6 - RT @reportersalles: Um programa que escolhe ter em sua bancada o profissional da fofoca e comentarista de capas da Revista Caras, Fefito, t… : 0.88
Ramirez6712 - Aidano não fique c raiva.
Faca campanha e ganhe nas urnas.
Essa e a democracia!!!!#redacaosportv : 0.87
hathwbllck - @MsSarahPaulson @NetflixBrasil tem alguma possibilidade de ocean's 9 acontecer? se não quiser responder, só balança… https://t.co/53oKOYEfYB : 0.84
WagnerRLH - RT @JManjador: Deve ser muito bom ter um presidente https://t.co/BFNetkdmAO https://t.co/9644srd7j2 : 0.84
NiaII0fficial1D - Pode ganhar aí, mas não convence. Futebol totalmente bagunçado esse do flamengo, ou mengao!
Acredito que o flameng… https://t.co/e9ddhHQhYH : 0.84
mxfezao - @PA0ZIN eu mutei na metade do game carakkkkporra zoe ta pessima dms, o tay foi um dos melhores nessa partida : 0.84
thatwhyy - RT @littIebodyheart: MANO SE VOCÊS NAO DEREM VALOR E NÃO FIZEREM ESSE EP HITAR
A MELANIE ENTREGOU TUDOO MAIS UMA VEZ, SONORIDADE TOTALMENT… : 0.84
biebzreason - RT @biebersmaniabrs: Justin Bieber e Chance The Rapper estão doando 500 dólares para os fãs que estão compartilhando suas histórias com a t… : 0.84
--
Topic #4:
VAMOS, NAO, DESSE, NINGUÉM, FOTO, CHEGOU, SENDO, PARTE, CHIQUITITAS, SCOOBY, DOO, SCOOBY DOO, SHAWN, NENHUM, PINTEREST
dutSttlys - RT @wherewereyouspm: imaginem essa criança tendo que fazer trabalho sobre família na escola "meu pai é zayn malik, minha mãe gigi hadid, mi… : 0.93
Maaniana_ - serião
a qualidade tava realmente boa
pena q n mantem nas temporadas recorrentes
MIRACULOUS NEW YORK : 0.91
WalterPorco1971 - Como pode uma pessoa se acha que é DEUS esse Gilmar Mendes.
Esse @STF_oficial acha também que está a cima de tudo.… https://t.co/kheyUYlpCX : 0.91
ChrisinhaR - @esquerdeando Muita maldade, esse genocida arrasa com tudo que vê pela frente.
Mas temos um homem do povo, que am… https://t.co/sPKIL55Pmr : 0.90
ManoJuOliveira1 - #LulaNaONU
“...Me dá um beijo, então
Aperta a minha mão
Tolice é viver a vida assim
Sem aventura...”
Lulu Santos… https://t.co/DnHRr9O1zT : 0.90
Julynha_sl - RT @leoleoperes: justiça por mari ferrer e por todxs que não viram seus estupradores sendo devidamente punidos https://t.co/zwMkXO8cM4 : 0.90
alanasfc_ - Acho que o Sport n perdeu nenhuma desde que o Jair Ventura assumiu mas acho q já tá na hora dele ir pra um time mai… https://t.co/Mebn50DF4w : 0.89
Luizm_crf - @felipcolen @goldorayo Rodriguinho é melhor mano : 0.88
zabdicasacomigo - RT @QuebrandoOTabu: Todos os dias mulheres são abusadas e não denunciam o agressor, pelos mais diversos fatores. E mesmo quando a mulher de… : 0.88
neilabaldi - Esta semana, mais de um jornalista escreveu que Barbosa seria o titular hoje. Eu escrevi: Cortez porque Grenal prec… https://t.co/EteRIrfFKP : 0.88
--
Topic #5:
BRASIL, TÃO, SEI, CARA, PRIMEIRO, SOBRE, PORRA, GRANDE, FAZENDO, FALA, JOGOU, PESSOA, ASSISTIR, VERDADE, FIM
Luanete_Fiel13 - RT @withloveluanr: E cabe as luanetes virarem trilíngue pra falar com o Luan, pq agora é português, inglês e espanhol fi
LUAN E JIM BEAM : 0.93
corinthians_bot - RT @matheussccp__: Incomoda muito a quantidade de mudanças na escalação do Corinthians entre os jogos, principalmente no meio campo. O pior… : 0.87
MaelNery - RT @HENRIQUEPAWAT: As mulheres latinas entregam muito mais qualidade sonora e visual que metade desses machos que só fazem do mesmo e são t… : 0.84
eilouisgirl_ - os dias de glória finalmente chegaram:
SKINNY SKINNY e o anúncio de superbloom
BETTER BY ZAYN
caras,tô impactada : 0.84
Biancanataan - Minha voz lá no fundo pq ela queria o tema da sereia e não foi , você era perfeita eu te amo mil milhões , meu eter… https://t.co/Fbk9M9sDPQ : 0.84
CastelliRister - @veramagalhaes Achei que era a jovem pan : 0.84
Semprega1 - RT @canceladoluca: Adrielly você é tudo entenda, ver minha neném gargalhando desse jeito não tem preço!
GUIBI NO FUTCARLU : 0.81
Clauversolato - Eu iniciei o ano recém recuperada de uma pneumonia e treinando pra correr minha primeira São Silvestre (sim, é poss… https://t.co/ytXRrEUQsh : 0.81
favluggie - @madisonrbrasil puts eu sou mt cadelinha de sunset curve mas julie and the phantoms berra aclamação
MADISON AT IMAGEN AWARDS : 0.81
Jpsalles18 - RT @souzakeths: bom dia lembrar que vocês TEM o DIREITO de ser FEIA ! : 0.81
--
Topic #6:
DIA, SER, BOM, TAG, ACHO, VAI, FALAR, AMOR, BOM DIA, LIVE, TODA, TÉCNICO, VAI SER, MAIOR, TORCIDA
thiiaguuu - Presidente @jairbolsonaro, por que o senhor desviou a finalidade de R$7,5 milhões doados para compra de testes de C… https://t.co/oV7AmZee4j : 0.88
dariossampaio - Hoje venho aqui como Contador, como me formei, parabenizar a todos os Contadores, que trabalham com zelo pelo patri… https://t.co/8qcrpFlpbP : 0.88
xlaoyd - Nesse momento Lígia se encontra jogada no seu quarto chorando baixinho em posição fetal #LISTENTOTiTN #youngdumbthrills : 0.88
luisaborges__ - @maisa não tem feliz outubro pq o Bruno e o Marrone não estão mais juntos tudo está perdido maisa : 0.88
caoliman - RT @elikatakimoto: Senhor presidente, por que o governo desviou R$ 7,5 milhões doados para combater a covid para o programa da Michelle? ht… : 0.87
renatopl97 - @mathh_morais @futtmais Foi, a saída de bola dos dois é muito melhor, e Bruno e Arboleda estavam mal d+, hj eu volt… https://t.co/8jP8hrpVVp : 0.87
FabioJr_83 - @ColunadoFla Roubaram o Del Valle, pela regra era pra expulsar os dois. : 0.87
aliceassisc - sempre acertou e dessa vez não foi diferente #HHRL7 : 0.87
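###Markdown
As mentioned earlier, Gensim is another popular LDA implementation in Python. A minimal sketch of the equivalent formulation (illustrative only: it assumes gensim is installed and simply whitespace-tokenizes the cleaned tweets):
###Code
from gensim import corpora
from gensim.models import LdaModel

tokenized = [tweet.split() for tweet in clean_tweets]
dictionary = corpora.Dictionary(tokenized)
bow_corpus = [dictionary.doc2bow(tokens) for tokens in tokenized]
gensim_lda = LdaModel(bow_corpus, num_topics=15, id2word=dictionary, random_state=0)
gensim_lda.print_topics(num_topics=15, num_words=10)
###Output
_____no_output_____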
###Markdown
Visualizing the topics
###Code
#!pip install pyLDAvis
import pyLDAvis
import pyLDAvis.sklearn
pyLDAvis.enable_notebook()
pyLDAvis.sklearn.prepare(lda, vec_text, tf_vectorizer, sort_topics=False, mds = 'tsne')
###Output
_____no_output_____ |
Coefficient variable for crypto assets.ipynb | ###Markdown
Coefficient of variation for crypto assets
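The statistic computed below is the coefficient of variation of each asset's period-to-period price changes (the differenced close prices), i.e. the ratio of the standard deviation to the mean, $CV = \sigma / \mu$: a larger value means the changes are more spread out relative to their average.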
###Code
import os
import time
import json
import datetime
import requests
import numpy as np
import pandas as pd
headers = {'Content-Type': 'application/json'}
api_url_base_coingecko = 'https://api.coingecko.com/api/v3/coins/'
vs_currency = 'usd'
days = '30'
def getCoinData(ids, vs, days):
api_url_coingecko = '{0}/{1}/ohlc?vs_currency={2}&days={3}'.format(api_url_base_coingecko, ids, vs, days)
res = requests.get(api_url_coingecko, headers=headers)
if res.status_code == 200:
df = pd.read_json(res.content.decode('utf-8'))
df.columns = ['timestamp', 'Open', 'High', 'Low', 'Close']
df['timestamp'] = df['timestamp']/1000
df['timestamp'] = df['timestamp'].astype(int)
df['timestamp'] = df['timestamp'].map(datetime.datetime.utcfromtimestamp)
df.index = df['timestamp']
feature_name = ['Close']
df = df[feature_name]
df.interpolate(limit_direction='both', inplace=True)
df.columns = [ids]
return df
else:
return None
coins = ['bitcoin', 'ethereum', 'chainlink', 'ripple', 'litecoin', 'polkadot']
df = pd.DataFrame()
for coin in coins:
df = pd.concat([df, getCoinData(coin, vs_currency, days)], axis=1, sort=True, join='outer')
df.head()
df_ = df.copy()
for coin in coins:
df_[coin] = df[coin].diff().fillna(method='bfill')
df_.head()
# statistics
df_.describe()
import matplotlib.pyplot as plt
import seaborn as sns
now = datetime.datetime.utcnow()
path = 'data/' + 'coefficientVariable_' + now.strftime('%Y%m%d_%H%M%S') + '.png'
# seaborn drawing
sns.set_style('whitegrid')
sns.set_palette("PuBu", 8)
sns.set_context(context='paper', font_scale=2, rc=None)
# Coefficient of variation per asset
fig = plt.figure(figsize=(15, 7))
ax = fig.add_subplot(1, 1, 1)
# calculate the coefficient of variation,
# represents the ratio of the standard deviation to the mean
cv = pd.DataFrame(df_.std()/df_.mean())
cv.columns = ['Coef']
ax.set_title('Coefficient of variation per crypto asset')
sns_plot = sns.barplot(data=cv.T)
figure = sns_plot.get_figure()
figure.savefig(path)
###Output
_____no_output_____ |
applications/classification/sentiment_classification/Sentimix with XLM-Roberta-LSTM-Attention.ipynb | ###Markdown
Initial Setup
###Code
from google.colab import drive
drive.mount('/content/drive')
train_file = '/content/drive/My Drive/train_14k_split_conll.txt'
test_file = '/content/drive/My Drive/dev_3k_split_conll.txt'
# Data containing transliteration using google's api is taken from here
# https://github.com/keshav22bansal/BAKSA_IITK
processed_train_file = '/content/drive/My Drive/hinglish_train.txt'
processed_test_file = '/content/drive/My Drive/hinglish_test.txt'
!pip install indic_transliteration -q
!pip install contractions -q
!pip install transformers -q
###Output
[K |████████████████████████████████| 102kB 3.2MB/s
[K |████████████████████████████████| 911kB 9.6MB/s
[K |████████████████████████████████| 245kB 5.9MB/s
[K |████████████████████████████████| 317kB 30.5MB/s
[?25h Building wheel for pyahocorasick (setup.py) ... [?25l[?25hdone
[K |████████████████████████████████| 778kB 4.4MB/s
[K |████████████████████████████████| 3.0MB 19.2MB/s
[K |████████████████████████████████| 890kB 42.6MB/s
[K |████████████████████████████████| 1.1MB 52.5MB/s
[?25h Building wheel for sacremoses (setup.py) ... [?25l[?25hdone
###Markdown
Imports
###Code
import re
import time
import string
import contractions
import numpy as np
import pandas as pd
from indic_transliteration import sanscript
from indic_transliteration.sanscript import SchemeMap, SCHEMES, transliterate
from collections import Counter
from sklearn.model_selection import train_test_split
from sklearn import metrics
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.utils.data import Dataset, DataLoader
from torch.optim.lr_scheduler import ReduceLROnPlateau
from transformers import XLMRobertaTokenizer, XLMRobertaModel, AdamW, get_linear_schedule_with_warmup
import matplotlib.pyplot as plt
import seaborn as sns
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
device
###Output
_____no_output_____
###Markdown
Processing the DataSkip this step and move to Using the processed sentences.The processed data is taken from [here](https://github.com/keshav22bansal/BAKSA_IITK)The major difference is that transliteration of hinglish words to hindi is done using google's api instead of indic_transliteration module
###Code
with open(train_file) as f:
data = f.readlines()
with open(test_file, 'r') as f:
test_data = f.readlines()
def parse_data(data):
uids, sentences, sentences_info, sentiment = [], [], [], []
single_sentence, single_sentence_info = [], []
sent = ""
uid = 0
for idx, each_line in enumerate(data):
line = each_line.strip()
tokens = line.split('\t')
num_tokens = len(tokens)
if num_tokens == 2:
# add the word
single_sentence.append(tokens[0])
# add the word info(lang)
single_sentence_info.append(tokens[1])
elif num_tokens == 3 and idx > 0:
# append the sentence data
sentences.append(single_sentence)
sentences_info.append(single_sentence_info)
sentiment.append(sent)
uids.append(uid)
sent = tokens[-1]
uid = int(tokens[1])
# clear the single sentence
single_sentence = []
single_sentence_info = []
# new line after the sentence
elif num_tokens == 1:
continue
else:
sent = tokens[-1]
uid = int(tokens[1])
# for the last sentence
if len(single_sentence) > 0:
sentences.append(single_sentence)
sentences_info.append(single_sentence_info)
sentiment.append(sent)
uids.append(uid)
assert len(sentences) == len(sentences_info) == len(sentiment) == len(uids)
return sentences, sentences_info, sentiment, uids
sentences, sentences_info, sentiment, uids = parse_data(data)
test_sentences, test_sentences_info, test_sentiment, test_uids = parse_data(test_data)
list(zip(sentences[0], sentences_info[0]))
data = "jen klid takhle vypad"
transliterate(data, sanscript.ITRANS, sanscript.DEVANAGARI)
def translate(sentences, sentences_info):
translated = []
for sent, sent_info in zip(sentences, sentences_info):
partial_translated = []
for word, word_info in zip(sent, sent_info):
if word_info == "Hin":
partial_translated.append(transliterate(word, sanscript.ITRANS, sanscript.DEVANAGARI))
else:
partial_translated.append(word)
translated.append(partial_translated)
return translated
translated_sentences = translate(sentences, sentences_info)
test_translated_sentences = translate(test_sentences, test_sentences_info)
url_pattern = r'https(.*)/\s[\w\u0900-\u097F]+'
special_chars = r'[_…\*\[\]\(\)&“]'
names_with_numbers = r'([A-Za-z\u0900-\u097F]+)\d{3,}'
apostee = r"([\w]+)\s'\s([\w]+)"
names = r"@[\s]*[\w\u0900-\u097F]+[\s]*[_]+[\s]*[\w\u0900-\u097F]+|@[\s]*[\w\u0900-\u097F]+"
hashtags = r"#[\s]*[\w\u0900-\u097F]+[\s]*"
def preprocess_data(sentence_tokens):
sentence = " ".join(sentence_tokens)
sentence = " " + sentence
# remove rt and … from string
sentence = sentence.replace(" RT ", "")
sentence = sentence.replace("…", "")
# replace apostee
sentence = sentence.replace("’", "'")
# replace _
sentence = sentence.replace("_", " ")
# replace names
sentence = re.sub(re.compile(names), " ", sentence)
# remove hashtags
sentence = re.sub(re.compile(hashtags), " ", sentence)
# remove urls
sentence = re.sub(re.compile(url_pattern), "", sentence)
# combine only ' related words => ... it ' s ... -> ... it's ...
sentence = re.sub(re.compile(apostee), r"\1'\2", sentence)
# fix contractions
sentence = contractions.fix(sentence)
# replace names ending with numbers with only names (remove numbers)
sentence = re.sub(re.compile(names_with_numbers), r" ", sentence)
sentence = " ".join(sentence.split()).strip()
return sentence
MODEL_NAME = "xlm-roberta-base"
tokenizer = XLMRobertaTokenizer.from_pretrained(MODEL_NAME)
print(tokenizer.sep_token, tokenizer.sep_token_id)
print(tokenizer.cls_token, tokenizer.cls_token_id)
print(tokenizer.pad_token, tokenizer.pad_token_id)
print(tokenizer.unk_token, tokenizer.unk_token_id)
" ".join(sentences[32]), sentiment[32]
" ".join(translated_sentences[32])
preprocess_data(translated_sentences[32])
encoding = tokenizer.encode_plus(
preprocess_data(translated_sentences[32]),
max_length=100,
add_special_tokens=True, # Add '[CLS]' and '[SEP]'
return_token_type_ids=False,
truncation=True,
pad_to_max_length=True,
return_attention_mask=True,
return_tensors='pt', # Return PyTorch tensors
)
print(len(encoding['input_ids'][0]))
encoding['input_ids'][0]
print(len(encoding['attention_mask'][0]))
encoding['attention_mask']
tokenizer.convert_ids_to_tokens(encoding['input_ids'][0])
" ".join(sentences[29]), sentiment[29]
" ".join(translated_sentences[29])
preprocess_data(translated_sentences[29])
" ".join(sentences[10]), sentiment[10]
" ".join(translated_sentences[10])
preprocess_data(translated_sentences[10])
%%time
processed_sentences = []
for sent in translated_sentences:
processed_sentences.append(preprocess_data(sent))
test_data = []
for sent in test_translated_sentences:
test_data.append(preprocess_data(sent))
sentiment_mapping = {
"negative": 0,
"neutral": 1,
"positive": 2
}
labels = [sentiment_mapping[sent] for sent in sentiment]
test_label = [sentiment_mapping[sent] for sent in test_sentiment]
###Output
_____no_output_____
###Markdown
Using the Processed sentences
###Code
uids = []
processed_sentences = []
labels = []
with open(processed_train_file, 'r') as f:
for line in f.readlines()[1:]:
items = line.strip().split('\t')
uids.append(items[0])
processed_sentences.append(str(items[1]))
labels.append(int(items[2]))
test_uids = []
test_data = []
test_label = []
with open(processed_test_file, 'r') as f:
for line in f.readlines()[1:]:
items = line.strip().split('\t')
test_uids.append(items[0])
test_data.append(str(items[1]))
test_label.append(int(items[2]))
###Output
_____no_output_____
###Markdown
Train-Val-Test data splits
###Code
train_uids, val_uids, train_data, val_data, train_label, val_label = train_test_split(uids, processed_sentences, labels, test_size=0.2)
len(train_data), len(val_data), len(test_data)
###Output
_____no_output_____
###Markdown
Tokenizer
###Code
MAX_LEN = 150
MODEL_NAME = "xlm-roberta-base"
tokenizer = XLMRobertaTokenizer.from_pretrained(MODEL_NAME)
###Output
_____no_output_____
###Markdown
Dataset Wrapper class
###Code
class SentiMixDataSet(Dataset):
def __init__(self, inputs, labels, tokenizer, max_len):
self.sentences = inputs
self.labels = labels
self.tokenizer = tokenizer
self.max_len = max_len
def __len__(self):
return len(self.labels)
def __getitem__(self, item):
sentence = self.sentences[item]
sentiment = int(self.labels[item])
encoding = self.tokenizer.encode_plus(
sentence,
add_special_tokens=True,
max_length=self.max_len,
return_token_type_ids=False,
pad_to_max_length=True,
truncation=True,
return_attention_mask=True,
return_tensors='pt',
)
return {
"text": sentence,
"input_ids": encoding['input_ids'].flatten(),
"attention_mask": encoding['attention_mask'].flatten(),
"label": torch.tensor(sentiment, dtype=torch.long)
}
train_dataset = SentiMixDataSet(train_data, train_label, tokenizer, MAX_LEN)
val_dataset = SentiMixDataSet(val_data, val_label, tokenizer, MAX_LEN)
test_dataset = SentiMixDataSet(test_data, test_label, tokenizer, MAX_LEN)
###Output
_____no_output_____
###Markdown
DataLoaders
###Code
BATCH_SIZE = 64
train_data_loader = DataLoader(train_dataset, shuffle=True, batch_size=BATCH_SIZE)
valid_data_loader = DataLoader(val_dataset, batch_size=BATCH_SIZE)
test_data_loader = DataLoader(test_dataset, batch_size=BATCH_SIZE)
# sample
sample = next(iter(train_data_loader))
sample["input_ids"].shape, sample["attention_mask"].shape, sample["label"].shape
###Output
_____no_output_____
###Markdown
XLM-RoBERTa with Bidirectional LSTM Attention Model
###Code
class XLMAttentionModel(nn.Module):
def __init__(self, hidden_dim, output_dim, dropout=0.3):
super().__init__()
self.hidden_dim = hidden_dim
self.bert = XLMRobertaModel.from_pretrained(MODEL_NAME)
embedding_size = self.bert.config.to_dict()['hidden_size']
self.lstm = nn.LSTM(embedding_size, hidden_dim, batch_first=True, bidirectional=True)
self.out = nn.Linear(hidden_dim, output_dim)
self.dropout = nn.Dropout(dropout)
def attention(self, outputs, hidden, mask=None):
# outputs => [batch_size, seq_len, hid_dim]
# hidden => [batch_size, hid_dim]
# mask => [batch_size, seq_len]
hidden = hidden.unsqueeze(2)
# hidden => [batch_size, hid_dim, 1]
attn_weights = torch.bmm(outputs, hidden)
# outputs => [batch_size, seq_len, hid_dim]
# hidden => [batch_size, hid_dim, 1]
# attn_weights => [batch_size, seq_len, 1]
attn_weights = attn_weights.squeeze(2)
# attn_weights => [batch_size, seq_len]
if mask is not None:
attn_weights = attn_weights.masked_fill(mask==0, -1e10)
soft_attn_weights = F.softmax(attn_weights, dim=1)
# soft_attn_weights => [batch_size, seq_len]
soft_attn_weights = soft_attn_weights.unsqueeze(2)
# soft_attn_weights => [batch_size, seq_len, 1]
weighted = torch.bmm(outputs.transpose(1, 2), soft_attn_weights)
# outputs.transpose(1, 2) => [batch_size, hid_dim, seq_len]
# soft_attn_weights => [batch_size, seq_len, 1]
# weighted => [batch_size, hid_dim, 1]
weighted = weighted.squeeze(2)
# weighted => [batch_size, hid_dim]
return weighted, soft_attn_weights.squeeze(2)
def forward(self, input_ids, attention_mask):
# input_ids => [batch_size, seq_len]
# attention_mask => [batch_size, seq_len]
embeddings, _ = self.bert(
input_ids=input_ids,
attention_mask=attention_mask
)
embeddings = self.dropout(embeddings)
# embeddings => [batch_size, seq_len, emb_dim]
outputs, (hidden, cell) = self.lstm(embeddings)
# outputs => [batch_size, seq_len, hid_dim * 2]
# hidden, cell => [2, batch_size, hid_dim]
outputs = outputs[:, :, :self.hidden_dim] + outputs[:, :, self.hidden_dim:]
# outputs => [batch_size, seq_len, hid_dim]
hidden = hidden[0, :, :] + hidden[1, :, :]
# hidden => [batch_size, hid_dim]
weighted, attn_scores = self.attention(outputs, hidden, attention_mask)
# weighted => [batch_size, hid_dim]
# attn_scores => [batch_size, seq_len]
logits = self.out(self.dropout(weighted))
# logits => [batch_size, output_dim]
return logits, attn_scores
hidden_dim = 100
output_dim = 3
model = XLMAttentionModel(hidden_dim, output_dim)
model = model.to(device)
torch.cuda.empty_cache()
def count_parameters(model):
return sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f'The model has {count_parameters(model):,} trainable parameters')
###Output
The model has 278,739,951 trainable parameters
###Markdown
Loss & Optimizer
###Code
EPOCHS = 10
no_decay = ['bias', 'LayerNorm.weight']
optimizer_grouped_parameters = [
{'params': [p for n, p in model.named_parameters() if not any(nd in n for nd in no_decay)], 'weight_decay': 0.01},
{'params': [p for n, p in model.named_parameters() if any(nd in n for nd in no_decay)], 'weight_decay': 0.0}
]
optimizer = AdamW(optimizer_grouped_parameters, lr=1e-5)
loss_fn = nn.CrossEntropyLoss().to(device)
###Output
_____no_output_____
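###Markdown
Note that `get_linear_schedule_with_warmup` and `ReduceLROnPlateau` are imported above but no learning-rate scheduler is actually created (the `scheduler.step` call later in the training loop is commented out). A minimal sketch of wiring in a linear warmup schedule, where the 10% warmup fraction is an illustrative choice rather than something from the original run:
###Code
total_steps = len(train_data_loader) * EPOCHS
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=int(0.1 * total_steps),  # warm up over the first 10% of updates
    num_training_steps=total_steps
)
# scheduler.step() would then be called after each optimizer.step() inside train()
###Output
_____no_output_____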
###Markdown
Training Method
###Code
def train(model, iterator, clip=2.0):
epoch_loss = 0
model.train()
for batch in iterator:
input_ids = batch["input_ids"].to(device)
attention_mask = batch["attention_mask"].to(device)
targets = batch["label"].to(device)
predictions, _ = model(
input_ids=input_ids,
attention_mask=attention_mask
)
optimizer.zero_grad()
loss = loss_fn(predictions, targets)
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), clip)
optimizer.step()
epoch_loss += loss.item()
return epoch_loss / len(iterator)
###Output
_____no_output_____
###Markdown
Evaluation Method
###Code
def simple_accuracy(preds, labels):
"""Takes in two lists of predicted labels and actual labels and returns the accuracy in the form of a float. """
return np.equal(preds, labels).mean()
def evaluate(model, iterator):
model.eval()
epoch_loss = 0
preds = []
trgs = []
with torch.no_grad():
for batch in iterator:
input_ids = batch["input_ids"].to(device)
attention_mask = batch["attention_mask"].to(device)
targets = batch["label"].to(device)
predictions, _ = model(
input_ids=input_ids,
attention_mask=attention_mask
)
loss = loss_fn(predictions, targets)
epoch_loss += loss.item()
trgs.extend(targets.detach().cpu().numpy().tolist())
_, predicted = torch.max(predictions, 1)
preds.extend(predicted.detach().cpu().numpy().tolist())
return epoch_loss / len(iterator), simple_accuracy(preds, trgs)
def epoch_time(start_time, end_time):
elapsed_time = end_time - start_time
elapsed_mins = int(elapsed_time / 60)
elapsed_secs = int(elapsed_time - (elapsed_mins * 60))
return elapsed_mins, elapsed_secs
###Output
_____no_output_____
###Markdown
Training Loop
###Code
best_valid_loss = float('inf')
for epoch in range(EPOCHS):
start_time = time.time()
train_loss = train(model, train_data_loader)
val_loss, val_acc = evaluate(model, valid_data_loader)
end_time = time.time()
# scheduler.step(val_loss)
epoch_mins, epoch_secs = epoch_time(start_time, end_time)
print(f"Epoch: {epoch + 1:02} | Time: {epoch_mins}m {epoch_secs:.2f}s")
print(f"\tTrain Loss: {train_loss:.3f} | Val Loss: {val_loss:.3f} | Val Acc: {val_acc:.3f}")
if val_loss < best_valid_loss:
best_valid_loss = val_loss
torch.save(model.state_dict(), 'xlm_roberta.pt')
###Output
Epoch: 01 | Time: 5m 41.00s
Train Loss: 0.955 | Val Loss: 0.851 | Val Acc: 0.610
Epoch: 02 | Time: 5m 45.00s
Train Loss: 0.837 | Val Loss: 0.828 | Val Acc: 0.626
Epoch: 03 | Time: 5m 45.00s
Train Loss: 0.796 | Val Loss: 0.866 | Val Acc: 0.630
Epoch: 04 | Time: 5m 44.00s
Train Loss: 0.751 | Val Loss: 0.845 | Val Acc: 0.642
Epoch: 05 | Time: 5m 44.00s
Train Loss: 0.716 | Val Loss: 0.888 | Val Acc: 0.636
Epoch: 06 | Time: 5m 44.00s
Train Loss: 0.680 | Val Loss: 0.882 | Val Acc: 0.630
Epoch: 07 | Time: 5m 44.00s
Train Loss: 0.634 | Val Loss: 0.891 | Val Acc: 0.636
Epoch: 08 | Time: 5m 44.00s
Train Loss: 0.586 | Val Loss: 0.931 | Val Acc: 0.634
Epoch: 09 | Time: 5m 44.00s
Train Loss: 0.546 | Val Loss: 0.995 | Val Acc: 0.631
Epoch: 10 | Time: 5m 44.00s
Train Loss: 0.498 | Val Loss: 1.027 | Val Acc: 0.626
###Markdown
Test Data results
###Code
model.load_state_dict(torch.load('xlm_roberta.pt'))
with torch.no_grad():
model.eval()
preds = []
trgs = []
for batch in test_data_loader:
input_ids = batch["input_ids"].to(device)
attention_mask = batch["attention_mask"].to(device)
targets = batch["label"].to(device)
outputs, _ = model(
input_ids=input_ids,
attention_mask=attention_mask
)
# get the predicted labels
_, predicted = torch.max(outputs, 1)
# Add data to lists
preds.extend(predicted.detach().cpu().numpy().tolist())
trgs.extend(targets.detach().cpu().numpy().tolist())
print(metrics.classification_report(trgs, preds))
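
# Optional addition (not in the original notebook): a confusion matrix as a
# complement to the classification report, using the sklearn/seaborn/matplotlib
# imports already made above. Label order follows the 0/1/2 encoding
# (negative/neutral/positive) used when the data was processed.
cm = metrics.confusion_matrix(trgs, preds)
sns.heatmap(cm, annot=True, fmt='d',
            xticklabels=['negative', 'neutral', 'positive'],
            yticklabels=['negative', 'neutral', 'positive'])
plt.xlabel('Predicted')
plt.ylabel('Actual')
plt.show()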
###Output
_____no_output_____ |
TextAnalytics/SentimentAnalysis.ipynb | ###Markdown
Sentiment Analysis for Movie Reviews **Outline*** [Introduction](intro)* [Dataset](data)* [NLP Process](process)* [Some Further Explanation](explain) * [N-gram Model](ngram) * [TF-IDF](tfidf)* [Sentiment Analysis Implementation](implement) * Unigrams (absence/presence) * Unigrams with frequency count * Unigrams (only adjectives/adverbs) * Unigrams (sublinear tf-idf) * Bigrams (absence/presence) * [Other function to do text preprocessing](other_function) * [Conclusion](conclusion) * [Reference](refer) ---
###Code
%reload_ext watermark
import pandas as pd
import numpy as np
import string, re, os
from nltk import pos_tag
from sklearn.naive_bayes import MultinomialNB
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.metrics import accuracy_score
from nltk.tokenize import word_tokenize
from nltk.stem.porter import PorterStemmer
import seaborn as sns
import math
%matplotlib inline
%watermark -a 'Johnny' -d -t -v -p pandas,numpy,nltk,sklearn,seaborn
###Output
Johnny 2018-12-26 00:45:15
CPython 3.6.3
IPython 6.1.0
pandas 0.20.3
numpy 1.13.3
nltk 3.2.4
sklearn 0.19.1
seaborn 0.8.0
###Markdown
Introduction Sentiment Analysis is a very popular text analytics application. Here, we are going to use sentiment analysis that automatically classifies the sentiment of a movie review as either positive or negative. Similar to the [previous post about Naïve Bayes](http://nbviewer.jupyter.org/github/johnnychiuchiu/Machine-Learning/blob/master/NaiveBayes/naiveBayesTextClassification.ipynb), we'll also use Naïve Bayes. The topic we want to focus on more in this post, however, is experimenting with several ways to do text pre-processing before fitting a Naïve Bayes model. Some of the [ideas](http://nbviewer.jupyter.org/github/johnnychiuchiu/Machine-Learning/blob/master/NaiveBayes/naiveBayesTextClassification.ipynbimprove) were also mentioned in the previous post. In this post, while doing an end-to-end sentiment analysis, we want to focus more on different ways to do text preprocessing and why we choose a particular way to do it (i.e., pros and cons), with the goal of having a better classification outcome. Some of the topics include:* Text Preprocessing * Keep capitalization or not * Removing punctuation or not * Removing stopwords or not * Different ways to do Tokenization * Stemming / Lemmatization * Part of Speech (POS) Tagging* N-gram* TF-IDF Data Set The dataset used in this project is the [polarity dataset v2.0 (3.0Mb, click to download)](http://www.cs.cornell.edu/people/pabo/movie-review-data/review_polarity.tar.gz): 1000 positive and 1000 negative movie reviews. Introduced in Pang/Lee ACL 2004 and released June 2004, it is drawn from an archive of the rec.arts.movies.reviews newsgroup hosted at IMDB. In this post, we divide the corpus with a ratio 90:10 split for training and testing (200 review files are used as test – i.e., files that start with cv9) and we keep an even distribution of positive and negative labels.
###Code
def read_data(filepath, review_type):
"""This funtion reads in the data, parses it, and separates it into
the appropriate train and test splits. Data is read in in UTF-8, and is
parsed by removing all punctuation. Any review that begins with the
filename 'cv9' is considered to be part of the test set.
Args:
filepath: path to the corpus
review_type: type of review included in the data within this filepath
Returns:
Returns four separate lists. The test set corpus, the test set labels,
the training set corpus, and the training set labels.
The function is copied from here (https://github.com/spalmerg/Sentiment-Analysis/blob/master/Sentiment%20Analysis.ipynb)
"""
test, test_labels = [], []
train, train_labels = [], []
for filename in os.listdir(filepath):
with open(filepath + filename, 'rb') as review:
txt = review.read().decode('utf8', 'surrogateescape')
txt = txt.replace("--", "").replace("_", " ").replace("-", " ")
translator = str.maketrans('', '', string.punctuation)
txt = txt.translate(translator)
txt = txt.split()
txt = ' '.join(txt)
if filename.startswith('cv9'):
test.append(txt)
test_labels.append(review_type)
else:
train.append(txt)
train_labels.append(review_type)
return(test, test_labels, train, train_labels)
# read in positive and negative reviews
neg_test, neg_test_labels, neg_train, neg_train_labels = read_data('review_polarity/txt_sentoken/neg/', 'Negative')
pos_test, pos_test_labels, pos_train, pos_train_labels = read_data('review_polarity/txt_sentoken/pos/', 'Positive')
# combine training sets
train_labels = neg_train_labels + pos_train_labels
train = neg_train + pos_train
# combine test sets
test_labels = neg_test_labels + pos_test_labels
test = neg_test + pos_test
###Output
_____no_output_____
###Markdown
> **First look at the dataset** **An example of a positive review**: everyone loves the movie *You've got mail*, including the person who wrote this review
###Code
print(pos_train[2])
###Output
youve got mail works alot better than it deserves to in order to make the film a success all they had to do was cast two extremely popular and attractive stars have them share the screen for about two hours and then collect the profits no real acting was involved and there is not an original or inventive bone in its body its basically a complete re shoot of the shop around the corner only adding a few modern twists essentially it goes against and defies all concepts of good contemporary filmmaking its overly sentimental and at times terribly mushy not to mention very manipulative but oh how enjoyable that manipulation is but there must be something other than the casting and manipulation that makes the movie work as well as it does because i absolutely hated the previous ryanhanks teaming sleepless in seattle it couldnt have been the directing because both films were helmed by the same woman i havent quite yet figured out what i liked so much about youve got mail but then again is that really important if you like something so much why even question it again the storyline is as cliched as they come tom hanks plays joe fox the insanely likeable owner of a discount book chain and meg ryan plays kathleen kelley the even more insanely likeable proprietor of a family run childrens book shop called in a nice homage the shop around the corner fox and kelley soon become bitter rivals because the new fox books store is opening up right across the block from the small business little do they know they are already in love with each other over the internet only neither party knows the other persons true identity the rest of the story isnt important because all it does is serve as a mere backdrop for the two stars to share the screen sure there are some mildly interesting subplots but they all fail in comparison to the utter cuteness of the main relationship all of this of course leads up to the predictable climax but as foreseeable as the ending is its so damn cute and well done that i doubt any movie in the entire year contains a scene the evokes as much pure joy as this part does when ryan discovers the true identity of her online love i was filled with such for lack of a better word happiness that for the first time all year i actually left the theater smiling
###Markdown
**An example of a negative review**: we can tell this reviewer didn't like the movie at all, just based on the first sentence
###Code
neg_train[7]
###Output
_____no_output_____
###Markdown
**Summary Statistics**: These reviews seem to be more serious and much lengthier than I expected before checking them. Let's also take a look at some statistics about review length. We can see below that the average length of these reviews is ~3600 characters, with the longest at about 14k characters and the shortest at only 86. Overall, we can also see that positive reviews are generally longer (based on the median), which is different from product reviews, where people tend to write longer reviews when they hate the product.
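To verify the "based on the median" claim explicitly, a minimal sketch like the following (to be run after the next cell has built `doc_length_df`) prints the per-class medians:
```
# sketch: per-class review-length medians, using the doc_length_df defined in the cell below
print(doc_length_df.groupby('label')['Review Length'].median())
```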
###Code
# get the document length
neg_train_length = [len(i) for i in neg_train]
pos_train_length = [len(i) for i in pos_train]
# make it into a dataframe
doc_length_df = pd.DataFrame(neg_train_length+pos_train_length, columns=['Review Length'])
doc_length_df['label'] = ['Negative']*900 + ['Positive']*900
doc_length_df.head()
print(doc_length_df['Review Length'].describe())
sns.set(style="whitegrid")
ax = sns.boxplot(x="label", y='Review Length', data=doc_length_df)
ax
###Output
count 1800.000000
mean 3631.452222
std 1605.006488
min 86.000000
25% 2551.500000
50% 3397.500000
75% 4372.500000
max 14174.000000
Name: Review Length, dtype: float64
###Markdown
NLP Process1. **Read Text Files**2. **Transformation: Tokenization** * splitting a string into its desired constituent parts; an indispensable component in almost any NLP application. * This is a very important process in sentiment analysis. Sentiment information is often sparsely and unusually represented; E.g.: a single cluster of punctuation like :-) might tell the whole story. * careful tokenization pays off, especially where there is relatively little training data available; for a lot of data: less important (enough data for the model to learn that, e.g., Happy and happy are basically the same token). * Some commonly used strategies include: * **Whitespace tokenization**: obtain each token by splitting on whitespace, where a token is the smallest unit when doing text analytics and can be a number, a punctuation mark, or a word * **Treebank-style tokenization**: this approach would usually result in more units than whitespace tokenization since almost all tokens that involve punctuation are split apart — URLs, Twitter mark-up, phone numbers, dates, email addresses ... Thus, emoticons are split into their component parts, URLs are not constituents, and Twitter mark-up is lost. * **Sentiment-aware tokenization**: how to implement this would be decided on a case-by-case basis. Some rules for Twitter, for example, would be * not separating a Twitter handle even though it includes a punctuation mark * treating masked words with letters at the edges (\*\*\*\*, s\*\*\*t) as single tokens * keeping lengthening by character repetition (it was sooooo good!)3. **Transformation: Stemming/Lemmatization**: a method for identifying distinct word forms (for reduction of vocabulary size). Stemming/lemmatization could help with sentiment analysis, in theory. **Stemming** usually refers to a crude heuristic process that chops off the ends of words in the hope of achieving this goal correctly most of the time, and often includes the removal of derivational affixes. **Lemmatization** usually refers to doing things properly with the use of a vocabulary and morphological analysis of words, normally aiming to remove inflectional endings only and to return the base or dictionary form of a word, which is known as the lemma. If confronted with the token saw, stemming might return just s, whereas lemmatization would attempt to return either see or saw depending on whether the use of the token was as a verb or a noun. The two may also differ in that stemming most commonly collapses derivationally related words, whereas lemmatization commonly only collapses the different inflectional forms of a lemma * Example: * Stemming: happy, happier, happiest -> happ * Lemmatizing: happy, happier, happiest -> happy * Note: * In practice, these off-the-shelf stemming algorithms can weaken sentiment systems. * Which stemmer or lemmatizer to use is oftentimes a matter of trial and error. * Available Stemmers: * [Porter Stemmer](https://tartarus.org/martin/PorterStemmer/) * Problem: collapses sentiment distinctions by mapping two words with different sentiment into the same stemmed form; * E.g.: captivation (pos); captive (neg); both become -> captiv * [Lancaster Stemmer](http://www.lancaster.ac.uk/scc/) * it is arguably even more problematic than the Porter stemmer, since it collapses even more words of differing sentiment. 
* E.g.: meaningful (pos); mean (neg); both become -> mean * [WordNet Lemmatizer](https://www.nltk.org/_modules/nltk/stem/wordnet.html) * high-precision stemming functionality; it requires (word, part-of-speech tag) pairs; * Problem: it collapses tense, aspect, and number marking. E.g.: (exclaims, v) exclaim 4. **Transformation: others** * Converting to lower case * Removing punctuation * Removing numbers * Removing stopwords * Removing tokens based on tf-idf (for example, remove words that occurred in less than 5 documents and those that occurred in more than 80% of documents.) * Tagging and keeping only some types of part-of-speech (nouns, verbs, adjectives, etc.) or entity recognition (person, place, company, etc.)5. **Vectorization**: * N-grams: Unigrams vs Bigrams * Frequency Count vs absence/presence6. **Fit Naïve Bayes model** Some Further Explanation N-gram Model > **What is N-gram? N-gram Model?** $\text{Please turn your homework ..}$How do we, if we ever need to guess, know which word is most likely to show up next? If we have a large enough document or corpus (a collection of documents), we can probably know it by calculating conditional probabilities for the following words such as$$P(in|Please turn your homework)$$$$P(out|Please turn your homework)$$$$P(on|Please turn your homework)$$$$\dots$$Then the word having the largest probability would be the word that is most likely to show up next. This is the essential idea of the N-gram Model. In this example, the N-gram (an n-gram is a contiguous sequence of n items from a given sample of text or speech) conditions on the previous four words. Of course, we can also use unigrams, bigrams, trigrams, ...etc. Models that assign probabilities to sequences of words are called **language models**. When we use N-grams for language modeling, independence assumptions are made so that each word depends only on the last n − 1 words. This assumption is important because it massively simplifies the problem of estimating the language model from data. In addition, because of the open nature of language, it is common to group words unknown to the language model together. > **Example** The picture below is copied from [Speech and Language Processing Chap 3: N-grams](https://web.stanford.edu/~jurafsky/slp3/3.pdf), which demonstrates how to estimate bigram probabilities. **Example 1** The count of each word (in this case, a sentence is a document) is obtained by counting the number of sentences having that word (not the total number of occurrences of the word, even though in our case both turn out to be the same). For example, there are 3 sentences having the word `I`, which is the same as the total occurrence count of `I`. **Example 2** Here, what we see is some summary statistics from a sample of 9332 sentences of the now-defunct Berkeley Restaurant Project, a dialogue system from the last century that answered questions about a database of restaurants in Berkeley, California. Figure 3.1 shows the bigram counts from a piece of a bigram grammar from the Berkeley Restaurant Project. Note that the majority of the values are zero. In fact, we have chosen the sample words to cohere with each other; a matrix selected from a random set of seven words would be even more sparse. Figure 3.2 shows the bigram probabilities after normalization using unigram counts for these selected words. > **Application** * **Language Identification**: given a text, output the language used. 
Of course, we would need to have training probabilities for the different languages first.* **Spelling Correction**: when the probability of a certain N-gram is too low compared to the training probability from a very large corpus, then it is likely that that particular term is a typo. For example, the probability $P(are|their)$, i.e., of the bigram "their are", would be very low.* **Word Breaking**: breaking a sentence into smaller pieces based on the N-gram probability. Identifying collocations using [Pointwise Mutual Information](https://en.wikipedia.org/wiki/Pointwise_mutual_information) is something related.* **Text Summarization**: Here is a paper that uses Web N-gram models for text summarization: [Micropinion Generation: An Unsupervised Approach to Generating Ultra-Concise Summaries of Opinions](http://kavita-ganesan.com/micropinion-generation/.XCRTjs9Kh0s)* **Machine Translation**: * For example, as part of the machine translation process we might have built the following set of potential rough whatever-language to English translations: * he introduced reporters to the main contents of the statement * he briefed to reporters the main contents of the statement * **he briefed reporters on the main contents of the statement** * A probabilistic model of word sequences could suggest that *briefed reporters on* is a more probable English phrase than *briefed to reporters* (which has an awkward to after briefed) or *introduced reporters to* (which uses a verb that is less fluent English in this context), allowing us to correctly select the boldfaced sentence above.* **Augmentative Communication**: Probabilities are also important for augmentative communication (Newell et al., 1998) systems. People like the late physicist Stephen Hawking who are unable to physically talk or sign can instead use simple movements to select words from a menu to be spoken by the system. Word prediction can be used to suggest likely words for the menu. (the paragraph is copied from [Speech and Language Processing Chap 3: N-grams](https://web.stanford.edu/~jurafsky/slp3/3.pdf)) > **N-gram Model vs Naïve Bayes**To my understanding, a Naïve Bayes model usually refers to a unigram language model. Essentially they are the same. Independence assumptions are made so that the probability of the sentence belonging to a certain class depends only on the product of the probabilities of the words in the sentence. On a high level, Naïve Bayes can be seen as a special case of N-gram modeling. Naïve Bayes can be used only for text classification, whereas N-gram modeling can be used for many different applications including those introduced above. > **Implementation**Here we would like to provide an example of how to implement an N-gram model from scratch for **Language Identification**. Here is the description:```Language Identification, which is the problem of taking as input a text in an unknown language and determine what language it is written in. N-gram models are very effective solutions for this problem as well.For training, use the English, French, and Italian texts made available (see the Assignment2 folder). For test, use the file LangId.test provided in the language_identification folder as well. For each of the following questions, the output of your program has to contain a list of [line_id] [language] pairs, starting with line 1. For instance, 1 English 2 Italian ...```The following code implements the N-gram model using **Add-one Smoothing** and **Good-Turing Smoothing**, which are both ways to allocate probabilities to unknown words. 
Overall Process1. Read Files (English, Italian, French) and lower-case them2. Train Transformation: separate into words by space3. Get the unigram count for each language as dict. ({'i':1000, 'a':2000, ...}) * counting the number of sentences having that word4. Get the bigram count for each language as dict, separating by space. ({'i want': 100, 'i go': 200, ...}) * counting the number of sentences having that bigram5. Take test file as input; separate into sentences6. Identify language used per sentence for the input text * Get bigrams from each sentence for test data. * For each sentence, calculate the probability using exp(log_p1+log_p2+...) for each language. * Also generate the final output as 1 English 2 Italian ... 7. Compare the result with LangId.sol to get accuracySome Notes* unigram/bigram count is the number of sentences a unigram/bigram shows up in, not its total count* V is how many types of words we have plus 1, representing unknown words (we shouldn't take the cases in the test into account when calculating V). It also means that all the words that don't show up in the training data are treated as the same "unknown" word.
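Before the full class below, here is a minimal sketch of the add-one smoothed bigram probability used in step 6. The counts are made-up toy numbers, not the real training data:
```
# toy sketch of add-one smoothing: P(w2|w1) = (count(w1 w2) + 1) / (count(w1) + V)
unigram_count = {'i': 3, 'want': 2}   # hypothetical sentence-level counts
bigram_count = {'i want': 2}          # hypothetical sentence-level counts
V = len(unigram_count) + 1            # vocabulary size + 1 slot for unknown words
p_seen = (bigram_count['i want'] + 1) / (unigram_count['i'] + V)   # seen bigram
p_unseen = (0 + 1) / (unigram_count['i'] + V)                      # unseen bigram
print(p_seen, p_unseen)  # the logs of such terms are summed per sentence, then exponentiated
```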
###Code
"""
Author: Johnny Chiu, Master of Science in Analytics
Goal:
Implement a word bigram model using add-one smoothing, which learns word bigram probabilities from the training data.
Then apply the models to determine the most likely language for each sentence in the test file.
"""
import pandas as pd
import numpy as np
class Q2():
def read_files(self):
"""read English, French, and Italian training text; also lower case them"""
f = open('language_identification/LangId.train.English', 'rb')
train_english = f.read().decode('utf8', 'backslashreplace')
f.close()
f = open('language_identification/LangId.train.French', 'rb')
train_french = f.read().decode('utf8', 'backslashreplace')
f.close()
f = open('language_identification/LangId.train.Italian', 'rb')
train_italian = f.read().decode('utf8', 'backslashreplace')
f.close()
return {'english': train_english.lower(), 'french': train_french.lower(),'italian': train_italian.lower()}
def get_sentence_probability(self, term_dict, unigram_language, bigram_language):
"""calculate the probability according to the input sentence and language using 2-gram
Args:
term_dict(dict): contains bigram count of a sentence
unigram_language(dict): contains unigram count of the training data in a specific language
bigram_language(dict): contains bigram count of the training data in a specific language
Return:
the probability of this input dict is a certain language
"""
# set initial value and get total vocabulary size for smoothing
log_sum = 0
V = len(set(unigram_language.keys()))+1
# get what the overall probability is for each sentence using add-one smoothing to estimate the bigram probability
for i in list(term_dict.keys()):
if i in bigram_language.keys():
denominator = unigram_language[i.split(" ")[0]]+V # plus V for add-one smoothing
nominator = bigram_language[i]+1 # plus 1 for add-one smoothing
prob = float(nominator)/denominator
log_sum += np.log(prob) * term_dict[i]
else:
unigram_count = unigram_language[i.split(" ")[0]] if i.split(" ")[0] in unigram_language.keys() else 0
prob = 1/(unigram_count+V)
log_sum += np.log(prob) * term_dict[i]
return(np.exp(log_sum))
def identify_language(self, term_dict):
"""identify what the language is for the input text"""
prob_list = [self.get_sentence_probability(term_dict, unigram_english, bigram_english),
self.get_sentence_probability(term_dict, unigram_french, bigram_french),
self.get_sentence_probability(term_dict, unigram_italian, bigram_italian)]
top_index = prob_list.index(max(prob_list))
if top_index == 0:
language = 'English'
elif top_index == 1:
language = 'French'
elif top_index == 2:
language = 'Italian'
else:
language = 'Others'
return(language)
def get_language_all_sentence(self, test):
"""identify language used per sentence for the input text
Args:
test(list of lists): each item in the list is a string in a sentence
Return:
a dataframe with [id, language]
"""
# identify language used per sentence
language_list = []
n=2
for s in test:
bigram_in_test_sentence = [' '.join(s[i : i + n]) for i in range(len(s)-n+1)]
bigram_dict = {c: bigram_in_test_sentence.count(c) for c in set(bigram_in_test_sentence)}
language = self.identify_language(bigram_dict)
language_list.append(language)
final_df = pd.DataFrame({'id': list(range(1,len(test)+1)),
'identified_language': language_list})
return(final_df)
if __name__=='__main__':
q2 = Q2()
### 1. Read Files (English, Italian, French)
train = q2.read_files()
### 2. Train Transformation: separate into words by space
# list of lists, each item in the list is a single word
train_english = [s.split(" ") for s in train['english'].splitlines()]
train_french = [s.split(" ") for s in train['french'].splitlines()]
train_italian = [s.split(" ") for s in train['italian'].splitlines()]
# list containing every single word
word_english = [item for sublist in train_english for item in sublist]
word_french = [item for sublist in train_french for item in sublist]
word_italian = [item for sublist in train_italian for item in sublist]
### 3. Get the unigram count for each language as dict (counting number of sentences having that word)
unigram_english = {w: sum(w in sentence for sentence in train_english) for w in set(word_english)}
unigram_french = {w: sum(w in sentence for sentence in train_french) for w in set(word_french)}
unigram_italian = {w: sum(w in sentence for sentence in train_italian) for w in set(word_italian)}
### 4. Get the bigram count for each language as dict, separating by space.
# list of lists, with each item represent bigram within each sentence
n=2
train_bigram_english = [[' '.join(sentence[i : i + n]) for i in range(len(sentence)-n+1)] for sentence in train_english]
train_bigram_french = [[' '.join(sentence[i : i + n]) for i in range(len(sentence)-n+1)] for sentence in train_french]
train_bigram_italian = [[' '.join(sentence[i : i + n]) for i in range(len(sentence)-n+1)] for sentence in train_italian]
# list containing every bigram
word_2_english = [item for sublist in train_bigram_english for item in sublist]
word_2_french = [item for sublist in train_bigram_french for item in sublist]
word_2_italian = [item for sublist in train_bigram_italian for item in sublist]
# dictionary containing the bigram count in each sentence
bigram_english = {w: sum(w in sentence for sentence in train_bigram_english) for w in set(word_2_english)}
bigram_french = {w: sum(w in sentence for sentence in train_bigram_french) for w in set(word_2_french)}
bigram_italian = {w: sum(w in sentence for sentence in train_bigram_italian) for w in set(word_2_italian)}
### 5. Take test file as input, separate into sentence
f = open('language_identification/LangId.test', 'rb')
test_original = f.read().decode('utf8', 'backslashreplace').lower()
test = [s.split(" ") for s in test_original.splitlines()]
f.close()
### 6. Identify language used per sentence for the input text
final_df1 = q2.get_language_all_sentence(test)
### 7. Compare the result with LangId.sol to get accuracy
validation = pd.read_csv('language_identification/LangId.sol', sep=" ", header=None)
validation.columns = ["id", "actual_language"]
accuracy_df1 = pd.merge(final_df1, validation, on='id')
pd.Series(accuracy_df1.identified_language == accuracy_df1.actual_language).value_counts()
###Output
_____no_output_____
###Markdown
Process1. Read Files (English, Italian, French) and lower-case them2. Train Transformation: separate original text into a list (sentences) of lists (words in each sentence). Each item in the latter list is a word3. Get the unigram count for each language as dict. ({'i':1000, 'a':2000, ...}) * The value is the count of sentences where this unigram shows up (a word showing up twice in 1 sentence is counted only once)4. Get the bigram count for each language as dict, separating by space. ({'i want': 100, 'i go': 200, ...}) * The value is the count of sentences where this bigram shows up (a bigram showing up twice in 1 sentence is counted only once) 5. Take test file as input; separate into sentences6. Identify language used per sentence for the input text * Get bigrams from each sentence for test data. * For each sentence, calculate the probability using exp(log_p1+log_p2+...) for each language. * p1 (some bigram, "i want") is GT_bigram_prob / unigram_MLE_prob * GT_bigram_prob = (c+1)*N_r1/(N*N_r) * c = bigram count of "i want" * N_r = number of bigrams with the same frequency c * N_r1 = number of bigrams with frequency c+1 * N = total count of all the bigrams in the training (the sum of the bigram counts of all the bigrams in training; "i want":2, 'want to': 5 gives N = 7) * Note: If N_r1 = 0, then use the original bigram_count / N as the probability * unigram_MLE_prob = unigram_count / {total count of all the unigrams in the training} * Also generate the final output as 1 English 2 Italian ... 7. Compare the result with LangId.sol to get accuracySome Notes* For the bigram freq-of-freq table, the count for frequency 0 should be all the unseen cases, which is {unique unigram count}^2 - {observed unique bigram count}* For the unigram probability, the freq 0 count of the freq-of-freq table is unknown (we don't know how to estimate this, since the number of unseen unigrams should be infinite; we can't use the number of unseen words in the test either, since we shouldn't use any info from the test when "training", i.e., when calculating all the unigram/bigram probabilities). So for the unigram probability we can just use the MLE estimate, i.e., the sentence-level count of the unigram divided by the total of these counts over all unigrams; this captures the likelihood of each word showing up in the text.
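As a quick illustration of just the Good-Turing adjustment described above (not the full sentence probability), here is a minimal sketch with made-up counts rather than the real training data:
```
# toy sketch of Good-Turing: adjusted probability = (c+1) * N_{c+1} / (N * N_c)
freq_of_freq = {0: 50, 1: 10, 2: 4, 3: 1}   # hypothetical frequency-of-frequency table
N = 1*10 + 2*4 + 3*1                        # total observed bigram mass = 21
c = 1                                       # a bigram seen once in training
gt_prob = (c + 1) * freq_of_freq[c + 1] / (N * freq_of_freq[c])
mle_fallback = c / N                        # used when N_{c+1} is 0, as noted above
print(gt_prob, mle_fallback)
```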
###Code
"""
Author: Johnny Chiu, Master of Science in Analytics
Goal:
Implement a word bigram model using good-turing smoothing, which learns word bigram probabilities from the training data.
Then apply the models to determine the most likely language for each sentence in the test file.
"""
import pandas as pd
import numpy as np
class Q3():
def read_files(self):
"""read English, French, and Italian training text; also lower case them"""
f = open('../data/LangId.train.English', 'rb')
train_english = f.read().decode('utf8', 'backslashreplace')
f.close()
f = open('../data/LangId.train.French', 'rb')
train_french = f.read().decode('utf8', 'backslashreplace')
f.close()
f = open('../data/LangId.train.Italian', 'rb')
train_italian = f.read().decode('utf8', 'backslashreplace')
f.close()
return {'english': train_english.lower(), 'french': train_french.lower(),'italian': train_italian.lower()}
def good_turing(self, term, ngram_language, language_freq, N):
"""get the update prob using good turing smoothing"""
# get the frequency count for the input term
c = ngram_language[term] if term in ngram_language.keys() else 0
# get freq of freq
N_r = language_freq[c] if c in language_freq.keys() else 0
N_r1 = language_freq[c+1] if c+1 in language_freq.keys() else 0
# get the GT count
N_gt = (c+1)*(N_r1/(N*N_r))
# get the probability using GT count. If N_gt is 0, then use the original MLE estimate
update_prob = c/N if N_gt == 0 else N_gt
return(update_prob)
def get_sentence_probability(self, term_dict, unigram_language, bigram_language):
"""calculate the probability according to the input sentence and language using 2-gram
Args:
term_dict(dict): contains bigram count of a sentence
unigram_language(dict): contains unigram count of the training data in a specific language
bigram_language(dict): contains bigram count of the training data in a specific language
Return:
the probability of this input dict is a certain language
"""
# set initial value and get total vocabulary size for smoothing
log_sum = 0
# get bigram frequency table
bigram_language_freq = {num: list(bigram_language.values()).count(num) for num in set(bigram_language.values())}
bigram_zero_freq = (len(unigram_language))**2 - len(bigram_language)
bigram_language_freq[0] = bigram_zero_freq
# get N for unigram and bigram, where N is the total number of observed nGram, where each number represent a ngram show up or not in a sentence
N_uni = sum(unigram_language.values())
N_bi = sum(bigram_language.values())
# get what the overall probability is for each sentence using good-turing smoothing to estimate the bigram probability
# use MLE estimate to get the unigram probability. For unknown tokens, use 1/N for probability
for i in list(term_dict.keys()):
nominator = self.good_turing(i, bigram_language, bigram_language_freq, N_bi)
denominator = unigram_language[i.split(" ")[0]]/N_uni if i.split(" ")[0] in unigram_language.keys() else 1/N_uni
prob = float(nominator)/denominator
log_sum += np.log(prob) * term_dict[i]
return(np.exp(log_sum))
def identify_language(self, term_dict):
"""identify what the language is for the input text"""
prob_list = [self.get_sentence_probability(term_dict, unigram_english, bigram_english),
self.get_sentence_probability(term_dict, unigram_french, bigram_french),
self.get_sentence_probability(term_dict, unigram_italian, bigram_italian)]
top_index = prob_list.index(max(prob_list))
if top_index == 0:
language = 'English'
elif top_index == 1:
language = 'French'
elif top_index == 2:
language = 'Italian'
else:
language = 'Others'
return(language)
def get_language_all_sentence(self, test):
"""identify language used per sentence for the input text
Args:
test(list): list of lists, each item in the list is a single word
Return:
a dataframe with [id, language]
"""
# identify language used per sentence
language_list = []
n = 2  # bigram size (Q2 sets this locally; previously this method relied on the global n defined in __main__)
for s in test:
bigram_in_test_sentence = [' '.join(s[i : i + n]) for i in range(len(s)-n+1)]
bigram_dict = {c: bigram_in_test_sentence.count(c) for c in set(bigram_in_test_sentence)}
language = self.identify_language(bigram_dict)
language_list.append(language)
final_df = pd.DataFrame({'id': list(range(1,len(test)+1)),
'identified_language': language_list})
return(final_df)
if __name__=='__main__':
q3 = Q3()
### 1. Read Files (English, Italian, French)
train = q3.read_files()
### 2. Train Transformation: separate into words by space
# list of lists, each item in the list is a single word
train_english = [s.split(" ") for s in train['english'].splitlines()]
train_french = [s.split(" ") for s in train['french'].splitlines()]
train_italian = [s.split(" ") for s in train['italian'].splitlines()]
# list containing every single word
word_english = [item for sublist in train_english for item in sublist]
word_french = [item for sublist in train_french for item in sublist]
word_italian = [item for sublist in train_italian for item in sublist]
### 3. Get the unigram count for each language as dict (counting number of sentences having that word)
unigram_english = {w: sum(w in sentence for sentence in train_english) for w in set(word_english)}
unigram_french = {w: sum(w in sentence for sentence in train_french) for w in set(word_french)}
unigram_italian = {w: sum(w in sentence for sentence in train_italian) for w in set(word_italian)}
### 4. Get the bigram count for each language as dict, separating by space.
# list of lists, with each item represent bigram within each sentence
n=2
train_bigram_english = [[' '.join(sentence[i : i + n]) for i in range(len(sentence)-n+1)] for sentence in train_english]
train_bigram_french = [[' '.join(sentence[i : i + n]) for i in range(len(sentence)-n+1)] for sentence in train_french]
train_bigram_italian = [[' '.join(sentence[i : i + n]) for i in range(len(sentence)-n+1)] for sentence in train_italian]
# list containing every bigram
word_2_english = [item for sublist in train_bigram_english for item in sublist]
word_2_french = [item for sublist in train_bigram_french for item in sublist]
word_2_italian = [item for sublist in train_bigram_italian for item in sublist]
# dictionary containing the bigram count in each sentence
bigram_english = {w: sum(w in sentence for sentence in train_bigram_english) for w in set(word_2_english)}
bigram_french = {w: sum(w in sentence for sentence in train_bigram_french) for w in set(word_2_french)}
bigram_italian = {w: sum(w in sentence for sentence in train_bigram_italian) for w in set(word_2_italian)}
### 5. Take test file as input, separate into sentence
f = open('language_identification/LangId.test', 'rb')
test_original = f.read().decode('utf8', 'backslashreplace').lower()
test = [s.split(" ") for s in test_original.splitlines()]
f.close()
### 6. Identify language used per sentence for the input text
final_df2 = q3.get_language_all_sentence(test)
### 7. Compare the result with LangId.sol to get accuracy
validation = pd.read_csv('language_identification/LangId.sol', sep=" ", header=None)
validation.columns = ["id", "actual_language"]
accuracy_df2 = pd.merge(final_df2, validation, on='id')
pd.Series(accuracy_df2.identified_language == accuracy_df2.actual_language).value_counts()
###Output
_____no_output_____
###Markdown
TF-IDF (Most of the following contents are copied from [here](http://www.tfidf.com/))> **What is TF-IDF?**Tf-idf stands for term frequency-inverse document frequency, and the tf-idf weight is a weight often used in information retrieval and text mining. This weight is a statistical measure used to evaluate how important a word is to a document in a collection or corpus (the collection of documents that we have for the analysis). The importance increases proportionally to the number of times a word appears in the document but is offset by the frequency of the word in the corpus. Variations of the tf-idf weighting scheme are often used by search engines as a central tool in scoring and ranking a document's relevance given a user query.One of the simplest ranking functions is computed by summing the tf-idf for each query term; many more sophisticated ranking functions are variants of this simple model.In short, TF-IDF is a function of* **Input**: a collection of documents (i.e., a corpus)* **Output**: a weight for each word (or token) in each document> **How to Compute**Typically, the tf-idf weight is composed of two terms: the first computes the normalized Term Frequency (TF), aka. the number of times a word appears in a document, divided by the total number of words in that document; the second term is the Inverse Document Frequency (IDF), computed as the logarithm of the number of the documents in the corpus divided by the number of documents where the specific term appears.* TF: Term Frequency, which measures how frequently a term occurs in a document. Since every document is different in length, it is possible that a term would appear many more times in long documents than shorter ones. Thus, the term frequency is often divided by the document length (aka. the total number of terms in the document) as a way of normalization: $$TF(t) = \frac{\text{Number of times term t appears in a document}}{\text{Total number of terms in the document}}$$* IDF: Inverse Document Frequency, which measures how important a term is. While computing TF, all terms are considered equally important. However it is known that certain terms, such as "is", "of", and "that", may appear a lot of times but have little importance. Thus we need to weigh down the frequent terms while scaling up the rare ones, by computing the following: $$IDF(t) = ln(\frac{\text{Total number of documents}}{\text{Number of documents with term t in it}})$$In short, the TF-IDF weight for a single word would be $$ TFIDF(t) = TF(t) * IDF(t)$$See below for a simple example.> **Example**Consider a document containing 100 words wherein the word cat appears 3 times. The term frequency (i.e., tf) for cat is then (3 / 100) = 0.03. Now, assume we have 10 million documents and the word cat appears in one thousand of these. Then, the inverse document frequency (i.e., idf) is calculated as log(10,000,000 / 1,000) = 4. Thus, the Tf-idf weight is the product of these quantities: 0.03 * 4 = 0.12.> **Variation**Other than the one we introduced above, there are many different versions of the TF and IDF functions, as shown in the following table from [wikipedia](https://en.wikipedia.org/wiki/Tf%E2%80%93idf):One commonly used variation of TF-IDF is called **sublinear TF-IDF**. 
Instead of using the TF function introduced above, sublinear TF-IDF uses the following TF function (call it WF)$$WF(t) = 1 + log(count(t))$$The idea behind this function is that it seems unlikely that twenty occurrences of a term in a document truly carry twenty times the significance of a single occurrence, which is exactly what the original function implies. Therefore, instead of assigning a high-frequency term a weight that is twenty times larger than a single occurrence, it uses a sublinear (log) function to down-weight those terms. > **implement TF-IDF using sklearn** `TfidfVectorizer` from sklearn uses the following functions to calculate TF-IDF$$TF(t) = \begin{cases} count(t) = \text{Number of times term t appears in a document} & \quad \text{if sublinear_tf} \text{ is False (default)}\\ 1 + log(count(t)) & \quad \text{if sublinear_tf} \text{ is True} \end{cases}$$$$IDF(t) = \begin{cases} ln(\frac{N+1}{df+1})+1 & \quad \text{if smooth_idf} \text{ is True (default)}\\ ln(\frac{N}{df})+1 & \quad \text{if smooth_idf} \text{ is False} \end{cases}$$where$$N=\text{Total number of documents}$$$$df=\text{Number of documents with term t in it}$$Also, the default for `norm` is `l2`, i.e., each resulting tf-idf row vector is normalized by its Euclidean norm.
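To make these formulas concrete, here is a minimal numeric sketch for doc1 = "This is doc one" from the three dummy documents defined in the next cell; it reproduces the sklearn weights printed below by hand:
```
# sketch: smooth idf = ln((N+1)/(df+1)) + 1, then l2-normalize the tf*idf vector
import numpy as np
n_docs = 3
idf_common = np.log((n_docs + 1) / (3 + 1)) + 1   # "this", "is", "doc" appear in all 3 docs
idf_one = np.log((n_docs + 1) / (1 + 1)) + 1      # "one" appears in only 1 doc
vec = np.array([idf_common, idf_common, idf_common, idf_one])  # tf = 1 for every term
print(vec / np.linalg.norm(vec))  # ~[0.4129, 0.4129, 0.4129, 0.6990], matching the output below
```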
###Code
# Create dummy documents
doc1 = "This is doc one"
doc2 = "This is doc two"
doc3 = "This is doc three"
# create tf-idf object
tf_idf_vectorizer = TfidfVectorizer()
print(tf_idf_vectorizer)
# fit tf-idf
results = tf_idf_vectorizer.fit_transform([doc1, doc2, doc3])
# see the calculated result
feature_names = tf_idf_vectorizer.get_feature_names()
for (doc, col) in zip(results.nonzero()[0], results.nonzero()[1]):
print (feature_names[col], ' : ', results[doc, col])
###Output
TfidfVectorizer(analyzer='word', binary=False, decode_error='strict',
dtype=<class 'numpy.int64'>, encoding='utf-8', input='content',
lowercase=True, max_df=1.0, max_features=None, min_df=1,
ngram_range=(1, 1), norm='l2', preprocessor=None, smooth_idf=True,
stop_words=None, strip_accents=None, sublinear_tf=False,
token_pattern='(?u)\\b\\w\\w+\\b', tokenizer=None, use_idf=True,
vocabulary=None)
this : 0.412858572062
is : 0.412858572062
doc : 0.412858572062
one : 0.699030327257
this : 0.412858572062
is : 0.412858572062
doc : 0.412858572062
two : 0.699030327257
this : 0.412858572062
is : 0.412858572062
doc : 0.412858572062
three : 0.699030327257
###Markdown
> **implement TF-IDF from scratch**
###Code
import math  # math.log is used in computeIDF below (added in case it was not imported earlier)
def computeTF(word_list):
"""use word count as term frequency"""
word_count_dict = dict.fromkeys(VOCABULARY, 0)
for word in word_list:
word_count_dict[word] += 1
return word_count_dict
def computeIDFCount(word_count_dict_list):
"""get the document frequency of each word"""
id_count_dict = dict.fromkeys(VOCABULARY, 0)
for word_count_dict in word_count_dict_list:
for word, count in word_count_dict.items():
if count > 0:
id_count_dict[word] += 1
return id_count_dict
def computeIDF(word_count_dict_list, smooth_idf=True):
"""get the inverse document frequency for the input words"""
id_count_dict = computeIDFCount(word_count_dict_list)
idf_dict = {}
for word, id_count in id_count_dict.items():
idf_dict[word] = math.log((DOC_COUNT +1)/(float(id_count)+1))+1 if smooth_idf==True else math.log((DOC_COUNT)/(float(id_count)))+1
return idf_dict
def computeTFIDF(tf_dict, idf_dict):
"""compute tf-idf weight based on the input tf and idf dictionary"""
tf_idf_dict = {}
for word, tf in tf_dict.items():
tf_idf_dict[word] = tf * idf_dict[word]
return tf_idf_dict
DOC_COUNT = 3
# tokenize the dummy documents with NLTK's word tokenizer
words1 = word_tokenize(doc1)
words2 = word_tokenize(doc2)
words3 = word_tokenize(doc3)
# build the shared vocabulary that computeTF/computeIDFCount rely on
VOCABULARY = set(words1 + words2 + words3)
# compute term frequency
tf_dict_1 = computeTF(words1)
tf_dict_2 = computeTF(words2)
tf_dict_3 = computeTF(words3)
pd.DataFrame([tf_dict_1, tf_dict_2, tf_dict_3])
# get the document frequency of each word
id_count_dict = computeIDFCount([tf_dict_1, tf_dict_2, tf_dict_3])
# get the idf
idf_dict = computeIDF([tf_dict_1, tf_dict_2, tf_dict_3])
tf_idf_dict_1 = computeTFIDF(tf_dict_1, idf_dict)
tf_idf_dict_2 = computeTFIDF(tf_dict_2, idf_dict)
tf_idf_dict_3 = computeTFIDF(tf_dict_3, idf_dict)
# after normalization by row the value would be the same as the TF-IDF implemented by sklearn
pd.DataFrame([tf_idf_dict_1, tf_idf_dict_2, tf_idf_dict_3])
###Output
_____no_output_____
###Markdown
Sentiment Analysis Implementation 1. **Read Text Files**2. **Transformation: Tokenization**: this is done using some sklearn functions with their default settings, for example: * CountVectorizer, TfidfVectorizer: the `token_pattern` uses a "Regular expression denoting what constitutes a “token”. The default regexp select tokens of 2 or more alphanumeric characters (punctuation is completely ignored and always treated as a token separator)."3. **Transformation: Stemming/Lemmatization**: here we either do no stemming or try the PorterStemmer4. **Transformation: others**: * **Converting to lower case**: the document is provided in lower case. * **Removing punctuation**: we remove all the punctuation * **Removing numbers**: we didn't remove any numbers, which we probably could, since numbers probably wouldn't inform positive or negative sentiment. * **Removing stopwords**: we remove stopwords for all the models and see what the most informative words are based on total weight. The value of the weight differs according to the model. For example, for the model Unigrams with frequency count, the weight for "Movie" would be the total count of "Movie" divided by the total number of documents in each class (P(Movie|Positive), where the numerator would just be the count of the word "Movie"). If we don't remove stopwords, for most of the models (except TF-IDF) the words with the highest weights would just be those stopwords. (Hence we remove them). * **Removing tokens based on tf-idf (for example, remove words that occurred in less than 5 documents and those that occurred in more than 80% of documents.)**: this is implemented below. * **Tagging and keeping only some types of part-of-speech (nouns, verbs, adjectives, etc.) or entity recognition (person, place, company, etc.)**: this is implemented below as well.5. **Vectorization**: * **N-grams**: here we try both **Unigrams** and **Bigrams** as the input of Naïve Bayes. * We try both **Frequency Count** and **absence/presence**6. **Fit Naïve Bayes model**There would be too many variations if we tried all of the combinations. The variations that we could try are1. Different types of stemming or lemmatizing2. Removing punctuation3. Removing numbers4. Removing stopwords5. Removing based on TF-IDF6. POS filtering7. Unigram or Bigram for Naïve Bayes8. Frequency count or binary for Naïve BayesThere are 8 different types of things that we can try, which corresponds to at least $2^{8}=256$ different variations. Here we say "at least" because we know that there are many different types of stemming and lemmatizing approaches. In our case, we only implement 10 out of those variations, which are:* M1: Unigrams (absence/presence)* M2: Unigrams with frequency count* M3: Unigrams (only adjectives/adverbs)* M4: Unigrams (sublinear tf-idf)* M5: Bigrams (absence/presence)* M6: Unigrams (absence/presence) + PorterStemmer* M7: Unigrams with frequency count + PorterStemmer* M8: Unigrams (only adjectives/adverbs) + PorterStemmer* M9: Unigrams (sublinear tf-idf) + PorterStemmer* M10: Bigrams (absence/presence) + PorterStemmer
###Code
# functions
def show_most_informative_features(vectorizer, classifier, n=10):
"""
Show the total weight of each of the features.
The word with the highest total weight would have the largest effect when calculating the probability,
and hence the word is informative.
"""
class_labels = classifier.classes_
feature_names = vectorizer.get_feature_names()
topn_pos_class = sorted(zip(classifier.feature_count_[1], feature_names),reverse=True)[:n]
topn_neg_class = sorted(zip(classifier.feature_count_[0], feature_names),reverse=True)[:n]
print("Important words in positive reviews")
for coef, feature in topn_pos_class:
print(class_labels[1], coef, feature)
print("-----------------------------------------")
print("Important words in negative reviews")
for coef, feature in topn_neg_class:
print(class_labels[0], coef, feature)
def model_fit_eval(vectorizer, train, test, transformation = None):
"""
Args:
vectorizer(CountVectorizer object): the way the that text document is vectorized.
train(list): training documents
test(list): test documents
transformation(function): the function applied to the document before they are being vectorized
Return: None
"""
# training features
train_features = vectorizer.fit_transform([transformation(doc) if transformation is not None else doc for doc in train])
# setup a naive bayes classifier
nb_clf = MultinomialNB()
nb_clf.fit(train_features, train_labels)
# test set features
test_features = vectorizer.transform([transformation(doc) if transformation is not None else doc for doc in test])
# predict
predictions = nb_clf.predict(test_features)
# determine the accuracy of the model
accuracy = accuracy_score(predictions, test_labels)
print("Accuracy = ", accuracy*100, "%")
# display the most informative features
show_most_informative_features(vectorizer, nb_clf, 5)
def retain_adverbs_adjectives(corpus):
"""
retain only adjective and adverbs using POS tagging
reference: https://cs.nyu.edu/grishman/jet/guide/PennPOS.html
"""
adj_adv_pos_tags = ['JJ', 'JJR', 'JJS', 'RB', 'RBR', 'RBS']
tokenized = word_tokenize(corpus)
tags = pos_tag(tokenized)
result = [word[0] for word in tags if word[1] in adj_adv_pos_tags]
result = ' '.join(result)
return(result)
###Output
_____no_output_____
###Markdown
**M1: Unigrams (absence/presence)**
###Code
vectorizer = CountVectorizer(binary=True, stop_words='english')#, max_df=0.8
model_fit_eval(vectorizer, train, test)
###Output
Accuracy = 86.5 %
Important words in positive reviews
Positive 794.0 film
Positive 660.0 movie
Positive 653.0 like
Positive 578.0 just
Positive 576.0 time
-----------------------------------------
Important words in negative reviews
Negative 762.0 film
Negative 725.0 movie
Negative 693.0 like
Negative 612.0 just
Negative 554.0 time
###Markdown
**M2: Unigrams with frequency count**
###Code
vectorizer = CountVectorizer(stop_words='english')
model_fit_eval(vectorizer, train, test)
###Output
Accuracy = 81.5 %
Important words in positive reviews
Positive 4412.0 film
Positive 2134.0 movie
Positive 1637.0 like
Positive 1200.0 just
Positive 1123.0 good
-----------------------------------------
Important words in negative reviews
Negative 3621.0 film
Negative 2783.0 movie
Negative 1689.0 like
Negative 1391.0 just
Negative 1042.0 time
###Markdown
**M3: Unigrams (only adjectives/adverbs)**Vectorization using the frequency of the occurrences of the words after removing everything but adverbs and adjectives
###Code
vectorizer = CountVectorizer(stop_words='english')
model_fit_eval(vectorizer, train, test, retain_adverbs_adjectives)
###Output
Accuracy = 82.5 %
Important words in positive reviews
Positive 1200.0 just
Positive 1110.0 good
Positive 738.0 best
Positive 694.0 really
Positive 693.0 little
-----------------------------------------
Important words in negative reviews
Negative 1391.0 just
Negative 1024.0 good
Negative 936.0 bad
Negative 715.0 really
Negative 660.0 little
###Markdown
**M4: Unigrams (sublinear tf-idf)**Using sublinear tf-idf as the weight of words in the input data
###Code
vectorizer = TfidfVectorizer(min_df = 5, max_df = 0.8, stop_words='english', sublinear_tf=True)
model_fit_eval(vectorizer, train, test)
###Output
Accuracy = 85.0 %
Important words in positive reviews
Positive 21.9548560775 movie
Positive 18.7050070359 like
Positive 16.7859967278 story
Positive 16.7808049541 life
Positive 16.6982165202 just
-----------------------------------------
Important words in negative reviews
Negative 26.6824696025 movie
Negative 21.1182914648 like
Negative 19.7021648118 just
Negative 18.6116812853 bad
Negative 16.9585533231 good
###Markdown
**M5: Bigrams (absence/presence)**Use binary indicators for whether a bigram is present or not
###Code
vectorizer = CountVectorizer(ngram_range=(2,2), binary=True, stop_words='english')
model_fit_eval(vectorizer, train, test)
###Output
Accuracy = 82.5 %
Important words in positive reviews
Positive 108.0 special effects
Positive 76.0 ive seen
Positive 75.0 year old
Positive 73.0 new york
Positive 63.0 takes place
-----------------------------------------
Important words in negative reviews
Negative 112.0 special effects
Negative 70.0 new york
Negative 68.0 year old
Negative 68.0 looks like
Negative 60.0 look like
###Markdown
**Redo M1-M5 but with stemming (use Porter’s stemmer)**Let's call them M6-M10
###Code
def stemmer(review):
"""stem the input review"""
port_stemmer = PorterStemmer()
tokenized = word_tokenize(review)
stemmed = [port_stemmer.stem(word) for word in tokenized]
stemmed = ' '.join(stemmed)
return(stemmed)
# apply the porter stemmer to the training and test sets
train_stemmed = [stemmer(review) for review in train]
test_stemmed = [stemmer(review) for review in test]
###Output
_____no_output_____
###Markdown
**M6: Unigrams (absence/presence) + PorterStemmer**
###Code
vectorizer = CountVectorizer(binary=True, max_df=0.8, stop_words='english')
model_fit_eval(vectorizer, train_stemmed, test_stemmed)
###Output
Accuracy = 85.5 %
Important words in positive reviews
Positive 684.0 like
Positive 665.0 wa
Positive 662.0 time
Positive 661.0 make
Positive 650.0 charact
-----------------------------------------
Important words in negative reviews
Negative 719.0 like
Negative 688.0 wa
Negative 641.0 charact
Negative 639.0 make
Negative 622.0 time
###Markdown
**M7: Unigrams with frequency count + PorterStemmer**
###Code
vectorizer = CountVectorizer(stop_words='english')
model_fit_eval(vectorizer, train_stemmed, test_stemmed)
###Output
Accuracy = 82.0 %
Important words in positive reviews
Positive 5534.0 film
Positive 5019.0 hi
Positive 4156.0 thi
Positive 2774.0 movi
Positive 2330.0 ha
-----------------------------------------
Important words in negative reviews
Negative 4512.0 film
Negative 4422.0 thi
Negative 3598.0 hi
Negative 3395.0 movi
Negative 2198.0 wa
###Markdown
**M8: Unigrams (only adjectives/adverbs) + PorterStemmer**
###Code
vectorizer = CountVectorizer(stop_words='english')
model_fit_eval(vectorizer, train_stemmed, test_stemmed, retain_adverbs_adjectives)
###Output
Accuracy = 82.0 %
Important words in positive reviews
Positive 1873.0 hi
Positive 1243.0 thi
Positive 1200.0 just
Positive 1134.0 good
Positive 736.0 best
-----------------------------------------
Important words in negative reviews
Negative 1391.0 just
Negative 1360.0 hi
Negative 1292.0 thi
Negative 1042.0 good
Negative 944.0 bad
###Markdown
**M9: Unigrams (sublinear tf-idf) + PorterStemmer**
###Code
vectorizer = TfidfVectorizer(min_df = 5, max_df = 0.8, stop_words='english', sublinear_tf=True)
model_fit_eval(vectorizer, train_stemmed, test_stemmed)
###Output
Accuracy = 85.5 %
Important words in positive reviews
Positive 22.794618033 wa
Positive 21.0819293613 charact
Positive 20.976549597 like
Positive 19.9104065757 make
Positive 19.6801508875 time
-----------------------------------------
Important words in negative reviews
Negative 24.8602761209 wa
Negative 23.5683590901 like
Negative 21.5263185952 charact
Negative 20.958060194 just
Negative 19.8974624979 make
###Markdown
**M10: Bigrams (absence/presence) + PorterStemmer**
###Code
vectorizer = CountVectorizer(ngram_range=(2,2), binary=True, stop_words='english')
model_fit_eval(vectorizer, train_stemmed, test_stemmed)
###Output
Accuracy = 84.5 %
Important words in positive reviews
Positive 374.0 thi film
Positive 229.0 thi movi
Positive 111.0 special effect
Positive 108.0 film wa
Positive 107.0 hi wife
-----------------------------------------
Important words in negative reviews
Negative 348.0 thi film
Negative 296.0 thi movi
Negative 145.0 look like
Negative 124.0 like thi
Negative 116.0 special effect
###Markdown
> **Other functions for text preprocessing**
###Code
import string, re
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from nltk import pos_tag
# create English stop words list (you can always define your own stopwords)
stop_words = set(stopwords.words('english'))
# Create a WordNetLemmatizer object
lemmatizer = WordNetLemmatizer()
def clean(doc):
"""Function to remove stop words from sentences & lemmatize verbs and nouns.
Args:
doc(str): a string with any format
"""
# tokenize using NLTK’s recommended word tokenizer
# (currently an improved TreebankWordTokenizer along with PunktSentenceTokenizer for the specified language).
# https://www.nltk.org/api/nltk.tokenize.html
tokenized = word_tokenize(doc.lower())
print('tokenized: ',tokenized)
# remove punctuation mark
punctuation_free = [x for x in tokenized if not re.fullmatch('[' + string.punctuation + ']+', x)]
print('punctuation_free: ',punctuation_free)
# remove stopwords
stop_free = [x for x in tokenized if not re.fullmatch('[' + string.punctuation + ']+', x) and x not in stop_words]
print('stop_free: ',stop_free)
# lemmatize verbs
lemma_verb = [lemmatizer.lemmatize(word,'v') for word in stop_free]
print('lemma_verb: ',lemma_verb)
# lemmatize nouns
lemma_noun = [lemmatizer.lemmatize(word,'n') for word in lemma_verb]
print('lemma_noun: ',lemma_noun)
# keep only token with length longer than 2
y = [s for s in lemma_noun if len(s) > 2]
print('length>2: ',y)
# Apply POS tagging
# reference for the POS acronym: https://cs.nyu.edu/grishman/jet/guide/PennPOS.html
word_posTags = pos_tag(y)
pos_tags = [t[1] for t in word_posTags]
print('pos_tags: ',pos_tags)
return y
# we can see from the result that the POS tagging is sooo wrong
clean('I ate a lot of apples, and so did my mom!')
pos_tag(['go'])
###Output
_____no_output_____
###Markdown
> **Conclusion** |Model ID| Model | Test Accuracy | Top 5 Informative Words in Positive Reviews | Top 5 Informative Words in Negative Reviews ||------|------|------|------|------||M1|Unigrams (absence/presence)|86.5% |film, movie, like, just, time |film, movie, like, just, time ||M2|Unigrams with frequency count|81.5% |film, movie, like, just, good |film, movie, like, just, time ||M3|Unigrams (only adjectives/adverbs)|82.5% |just, good, best, really, little |just, good, bad, really, little ||M4|Unigrams (sublinear tf-idf)|85% |movie, like, story, life, just |movie, like, just, bad, good ||M5|Bigrams (absence/presence)|82.5% |special effects, ive seen, year old, new york, takes place |special effects, new york, year old, looks like, look like ||M6|Unigrams (absence/presence) + PorterStemmer|85.5% |like, wa, time, make, charact |like, wa, charact, make, time ||M7|Unigrams with frequency count + PorterStemmer|82% |film, hi, thi, movi, ha |film, thi, hi, movi, wa ||M8|Unigrams (only adjectives/adverbs) + PorterStemmer|82% |hi, thi, just, good, best |just, hi, thi, good, bad ||M9|Unigrams (sublinear tf-idf) + PorterStemmer|85.5% |wa, charact, like, make, time |wa, like, charact, just, make ||M10|Bigrams (absence/presence) + PorterStemmer|84.5% |thi film, thi movi, special effect, film wa, hi wife |thi film, thi movi, look like, like thi, special effect | The best model among M1-M5 is M1 and the worst-performing model is M2. Also, the top 5 informative words are captured using the total weight of each word in each class. The word with the highest total weight would have the largest effect when calculating the probability, and hence the word is informative. For the best model M1, it turns out that for both positive and negative reviews, most of the informative words are pretty similar, whereas for other models such as M3 (Unigrams with only adjectives/adverbs), the model captures "best" and "bad" as the most informative words for positive and negative reviews respectively, which makes more sense. For M6-M10, which are the models using the stemmed text, the overall performance doesn't improve much (pretty similar to M1-M5 actually). Also, the informative words become a little harder to read. In this case, stemming might not be worth doing. In conclusion, our best model results in 86.5% test accuracy, which is not a bad start for a movie review sentiment analysis. As mentioned, there are many other things we can try to make the performance better. For the purpose of this post, we'll stop here and leave them for future attempts. 
Reference* [Stanford NLP: Stemming and lemmatization](https://nlp.stanford.edu/IR-book/html/htmledition/stemming-and-lemmatization-1.html)* [Text analysis: basic workflow](https://cfss.uchicago.edu/text001_workflow.html)* [Doc: TfidfVectorizer](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfVectorizer.html)* [Doc: TfidfTransformer (clear document of how tf and idf is being implemented)](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfTransformer.htmlsklearn.feature_extraction.text.TfidfTransformer)* [TF-IDF.com](http://www.tfidf.com/)* [Wiki: TF-IDF](https://en.wikipedia.org/wiki/Tf%E2%80%93idf)* [How are TF-IDF calculated by the scikit-learn TfidfVectorizer](https://stackoverflow.com/questions/36966019/how-aretf-idf-calculated-by-the-scikit-learn-tfidfvectorizer)* [Speech and Language Processing Chap 3: N-grams](https://web.stanford.edu/~jurafsky/slp3/3.pdf)* [What is the difference between n-gram models and the Naive Bayes? How do they interact to each other?](https://www.quora.com/What-is-the-difference-between-n-gram-models-and-the-Naive-Bayes-How-do-they-interact-to-each-other)* [What are N-Grams?](http://text-analytics101.rxnlp.com/2014/11/what-are-n-grams.html)
###Code
from sklearn.feature_extraction.text import CountVectorizer
corpus = [
'This is the first document.',
'This document is the second document.',
'And this is the third one.',
'Is this the first document?',
]
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(corpus)
print(vectorizer.get_feature_names())
print(X.toarray())
###Output
['and', 'document', 'first', 'is', 'one', 'second', 'the', 'third', 'this']
[[0 1 1 1 0 0 1 0 1]
[0 2 0 1 0 1 1 0 1]
[1 0 0 1 1 0 1 1 1]
[0 1 1 1 0 0 1 0 1]]
|
Projects/Road_Safety_UK/1_EDA.ipynb | ###Markdown
Road Safety Data for the UK The DataThe [files](https://data.gov.uk/dataset/cb7ae6f0-4be6-4935-9277-47e5ce24a11f/road-safety-data) provide detailed road safety data about the circumstances of personal injury road accidents in GB, the types (including Make and Model) of vehicles involved and the consequential casualties. The statistics relate only to personal injury accidents on public roads that are reported to the police. The TaskThe purpose of the analysis is: - To summarize the main characteristics of the data, and obtain interesting facts that are worth highlighting.- Identify and quantify associations (if any) between the number of casualties (in the Accidents table) and other variables in the data set.- Explore whether it is possible to predict accident hotspots based on the data. Table of Contents 1. Obtaining and Viewing the Data 2. Preprocessing the Data* 2.1. Converting Datetime Column* 2.2. Handling Missing Values 3. Exploratory Data Analysis (EDA)* 3.1. Main Characteristics of Accidents* 3.2. Main Characteristics of Casualties* 3.3. Main Characteristics of Vehicles 1. Obtaining and Viewing the Data
###Code
pip install squarify
# import the usual suspects ...
import pandas as pd
import numpy as np
import glob
import matplotlib.pyplot as plt
import seaborn as sns
# suppress all warnings
import warnings
warnings.filterwarnings("ignore")
###Output
_____no_output_____
###Markdown
**Accidents** Our `Date` column is not stored in a proper datetime format, so let's parse it directly when importing the data:
###Code
accidents = pd.read_csv('dft-road-casualty-statistics-accident-2019.csv', parse_dates=['Date'])
print('Records:', accidents.shape[0], '\nColumns:', accidents.shape[1])
accidents.head()
accidents.columns
#accidents.info()
#accidents.isnull().sum()
###Output
_____no_output_____
###Markdown
The accidents table contains almost 120,000 records and 32 columns, with only very few missing values. If we decide to work with date and/or time, we will need to convert the string values into datetime format. Besides that, almost all data is properly stored as numeric data. **Casualties**
###Code
casualties = pd.read_csv('dft-road-casualty-statistics-casualty-2019.csv')
print('Records:', casualties.shape[0], '\nColumns:', casualties.shape[1])
casualties.head()
#casualties.info()
#casualties.isnull().sum()
###Output
_____no_output_____
###Markdown
The casualties table has roughly 150,000 records with 18 columns providing detailed information about the casualties. Apart from the index, all data is stored in a numeric format. **Vehicles**
###Code
vehicles = pd.read_csv('dft-road-casualty-statistics-vehicle-2019.csv')
print('Records:', vehicles.shape[0], '\nColumns:', vehicles.shape[1])
vehicles.head()
#vehicles.info()
#vehicles.isnull().sum()
###Output
_____no_output_____
###Markdown
The vehicles table is the largest of all three and contains roughly 216,000 records spread across 27 columns with detailed information about the vehicle and its driver. *Back to: Table of Contents*
2. Preprocessing the Data
2.1. Converting Datetime Column
###Code
# check
accidents.iloc[:, 5:13].info()
accidents.iloc[:, 5:13].head(2)
###Output
_____no_output_____
###Markdown
Next, let's define a new column that groups the `Time` at which each accident happened into one of five daytime buckets:
- morning rush (5-10)
- office hours (10-15)
- afternoon rush (15-19)
- evening (19-23)
- night (23-5)
###Code
# slice first and second string from time column
accidents['Hour'] = accidents['Time'].str[0:2]
# convert new column to numeric datetype
accidents['Hour'] = pd.to_numeric(accidents['Hour'])
# drop null values in our new column
accidents = accidents.dropna(subset=['Hour'])
# cast to integer values
accidents['Hour'] = accidents['Hour'].astype('int')
# define a function that turns the hours into daytime groups
def when_was_it(hour):
if hour >= 5 and hour < 10:
return "morning rush (5-10)"
elif hour >= 10 and hour < 15:
return "office hours (10-15)"
elif hour >= 15 and hour < 19:
return "afternoon rush (15-19)"
elif hour >= 19 and hour < 23:
return "evening (19-23)"
else:
return "night (23-5)"
# apply this function to our temporary hour column
accidents['Daytime'] = accidents['Hour'].apply(when_was_it)
accidents[['Time', 'Hour', 'Daytime']].head(8)
###Output
_____no_output_____
###Markdown
2.2. Handling Missing Values
###Code
print('Proportion of Missing Values in Accidents Table:',
      round(accidents.isna().sum().sum() / accidents.size * 100, 3), '%')
accidents = accidents.dropna()
# check if we have no NaN's anymore
accidents.isna().sum().sum()
print('Proportion of Missing Values in Casualties Table:',
      round(casualties.isna().sum().sum() / casualties.size * 100, 3), '%')
print('Proportion of Missing Values in Vehicles Table:',
      round(vehicles.isna().sum().sum() / vehicles.size * 100, 3), '%')
###Output
Proportion of Missing Values in Vehicles Table: 0.0 %
###Markdown
The last two dataframes have no missing values at all, so there is nothing to drop. The accidents table contains only a tiny share of missing values, and dropping those rows keeps NaN's from interfering with the later analysis. *Back to: Table of Contents*
3. Exploratory Data Analysis (EDA)
3.1. Main Characteristics of Accidents
***Has the number of accidents increased or decreased over the last few months?***
###Code
test = pd.DataFrame(accidents.set_index('Date').resample('M').size())
test.columns = ['Accidents']
test['rolling'] = test['Accidents'].rolling(window=10).mean()
# prepare plot
sns.set_style('white')
fig, ax = plt.subplots(figsize=(15,6))
# plot
accidents.set_index('Date').resample('M').size().plot(label='Total per Month', color='black', ax=ax)
ax.set_title('Accidents per Month', fontsize=14, fontweight='bold')
ax.set(ylabel='Total Count\n', xlabel='')
ax.legend(bbox_to_anchor=(1.1, 1.1), frameon=False)
# remove all spines
sns.despine(ax=ax, top=True, right=True, left=True, bottom=False);
###Output
_____no_output_____
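###Markdown
The `test` dataframe above already holds a 10-month rolling mean that the chart does not show; below is a minimal sketch to overlay it (with a single year of monthly data, the window only yields a few points).
###Code
# overlay the rolling mean computed above on the monthly totals
fig, ax = plt.subplots(figsize=(15,6))
test['Accidents'].plot(label='Total per Month', color='black', ax=ax)
test['rolling'].plot(label='10-Month Rolling Mean', color='red', ax=ax)
ax.set_title('Accidents per Month with Rolling Mean', fontsize=14, fontweight='bold')
ax.set(ylabel='Total Count\n', xlabel='')
ax.legend(frameon=False)
# remove all spines
sns.despine(ax=ax, top=True, right=True, left=True, bottom=False);
###Output
_____no_output_____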
###Markdown
***On which weekdays are accidents most likely to occur?*** - Preparing a dataframe that calculates the average number of accidents per weekday:
###Code
weekday_counts = pd.DataFrame(accidents.set_index('Date').resample('1d')['Accident_Index'].size().reset_index())
weekday_counts.columns = ['Date', 'Count']
#weekday_counts
weekday = weekday_counts['Date'].dt.day_name()
#weekday
weekday_averages = pd.DataFrame(weekday_counts.groupby(weekday)['Count'].mean().reset_index())
weekday_averages.columns = ['Weekday', 'Average_Accidents']
weekday_averages.set_index('Weekday', inplace=True)
weekday_averages
###Output
_____no_output_____
###Markdown
- Plotting this dataframe:
###Code
# reorder the weekdays beginning with Monday (backwards because of printing behavior!)
days = ['Sunday', 'Saturday', 'Friday', 'Thursday', 'Wednesday', 'Tuesday', 'Monday']
# prepare plot
sns.set_style('white')
fig, ax = plt.subplots(figsize=(10,5))
colors=['lightsteelblue', 'lightsteelblue', 'navy', 'lightsteelblue', 'lightsteelblue', 'lightsteelblue', 'lightsteelblue']
# plot
weekday_averages.reindex(days).plot(kind='barh', ax=ax, color=colors)
ax.set_title('\nAverage Accidents per Weekday\n', fontsize=14, fontweight='bold')
ax.set(xlabel='\nAverage Number', ylabel='')
ax.legend('')
# remove all spines
sns.despine(ax=ax, top=True, right=True, left=True, bottom=True);
###Output
_____no_output_____
###Markdown
***How are accidents related to weather conditions?***
###Code
accidents.Weather_Conditions.value_counts(normalize=True)
accidents.Weather_Conditions.value_counts(normalize=True).plot(kind='bar')
###Output
_____no_output_____
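###Markdown
The raw counts above largely mirror how often each weather condition occurs. A rough way to partly control for that exposure is to look at the share of fatal accidents within each weather condition; a minimal sketch:
###Code
# share of fatal accidents (Accident_Severity == 1) within each weather condition
fatal_share_by_weather = (accidents.groupby('Weather_Conditions')['Accident_Severity']
                                   .apply(lambda s: (s == 1).mean()))
fatal_share_by_weather.sort_values(ascending=False).round(3)
###Output
_____no_output_____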
###Markdown
*Since the `Weather_Conditions` value is "fine" (=1) on most days, it is not surprising that most accidents happen in fine weather.* ***What percentage of each category of accident severity do we have?***
###Code
# assign the data
fatal = accidents.Accident_Severity.value_counts()[1]
serious = accidents.Accident_Severity.value_counts()[2]
slight = accidents.Accident_Severity.value_counts()[3]
names = ['Fatal Accidents','Serious Accidents', 'Slight Accidents']
size = [fatal, serious, slight]
# create a pie chart
plt.pie(x=size, labels=names, colors=['red', 'darkorange', 'silver'],
autopct='%1.2f%%', pctdistance=0.6, textprops=dict(fontweight='bold'),
wedgeprops={'linewidth':7, 'edgecolor':'white'})
# create circle for the center of the plot to make the piechart look like a donut
my_circle = plt.Circle((0,0), 0.6, color='white')
# plot the donut chart
fig = plt.gcf()
fig.set_size_inches(8,8)
fig.gca().add_artist(my_circle)
plt.title('\nAccident Severity: Share in %', fontsize=14, fontweight='bold')
plt.show()
###Output
_____no_output_____
###Markdown
***How has the number of fatalities developed over the year?***
###Code
# set the criterion to select the fatalities
criteria = accidents['Accident_Severity']==1
# create a new dataframe
weekly_fatalities = accidents.loc[criteria].set_index('Date').sort_index().resample('W').size()
# prepare plot
sns.set_style('white')
fig, ax = plt.subplots(figsize=(14,6))
# plot
weekly_fatalities.plot(label='Total Fatalities per Week', color='grey', ax=ax)
plt.fill_between(x=weekly_fatalities.index, y1=weekly_fatalities.values, color='grey', alpha=0.3)
ax.set_title('\nFatalities', fontsize=14, fontweight='bold')
ax.set(ylabel='\nTotal Count', xlabel='')
ax.legend(bbox_to_anchor=(1.2, 1.1), frameon=False)
# remove all spines
sns.despine(ax=ax, top=True, right=True, left=True, bottom=True);
###Output
_____no_output_____
###Markdown
***Is the share of fatal accidents increasing or decreasing?***
###Code
sub_df = accidents[['Date', 'Accident_Index', 'Accident_Severity']]
# pull out the year
year = sub_df['Date'].dt.year
week = sub_df['Date'].dt.week
# groupby year and severities
count_of_fatalities = sub_df.set_index('Date').groupby([pd.Grouper(freq='W'), 'Accident_Severity']).size()
# build a nice table
fatalities_table = count_of_fatalities.rename_axis(['Week', 'Accident_Severity'])\
.unstack('Accident_Severity')\
.rename({1:'fatal', 2:'serious', 3:'slight'}, axis='columns')
fatalities_table.head()
fatalities_table['sum'] = fatalities_table.sum(axis=1)
fatalities_table = fatalities_table.join(fatalities_table.div(fatalities_table['sum'], axis=0), rsuffix='_percentage')
fatalities_table.head()
# prepare data
sub_df = fatalities_table[['fatal_percentage', 'serious_percentage', 'slight_percentage']]
# prepare plot
sns.set_style('white')
fig, ax = plt.subplots(figsize=(14,6))
colors=['black', 'navy', 'red']
# plot
sub_df.plot(color=colors, ax=ax)
ax.set_title('\nProportion of Accidents Severity\n', fontsize=14, fontweight='bold')
ax.set(ylabel='Share on all Accidents\n', xlabel='')
ax.legend(labels=['Fatal Accidents', 'Serious Accidents', 'Slight Accidents'],
bbox_to_anchor=(1.3, 1.1), frameon=False)
# remove all spines
sns.despine(top=True, right=True, left=True, bottom=False);
###Output
_____no_output_____
###Markdown
*The share of fatal accidents appears to stay roughly flat over the year.* ***How are accidents distributed throughout the day?*** - Distribution of Hours
###Code
# prepare plot
sns.set_style('white')
fig, ax = plt.subplots(figsize=(10,6))
# plot
accidents.Hour.hist(bins=24, ax=ax, color='lightsteelblue')
ax.set_title('\nAccidents by Time of Day\n', fontsize=14, fontweight='bold')
ax.set(xlabel='Hour of the Day', ylabel='Total Count of Accidents')
# remove all spines
sns.despine(top=True, right=True, left=True, bottom=True);
###Output
_____no_output_____
###Markdown
- Counts of Accidents by Daytime
###Code
# prepare dataframe
order = ['night (23-5)', 'evening (19-23)', 'afternoon rush (15-19)', 'office hours (10-15)', 'morning rush (5-10)']
df_sub = accidents.groupby('Daytime').size().reindex(order)
# prepare barplot
fig, ax = plt.subplots(figsize=(10, 5))
colors = ['lightsteelblue', 'lightsteelblue', 'navy', 'lightsteelblue', 'lightsteelblue']
# plot
df_sub.plot(kind='barh', ax=ax, color=colors)
ax.set_title('\nAccidents by Daytime\n', fontsize=14, fontweight='bold')
ax.set(xlabel='\nTotal Count of Accidents', ylabel='')
# remove all spines
sns.despine(top=True, right=True, left=True, bottom=True);
###Output
_____no_output_____
###Markdown
- Share of Accident Severity by Daytime
###Code
# prepare dataframe with simple counts
counts = accidents.groupby(['Daytime', 'Accident_Severity']).size()
counts = counts.rename_axis(['Daytime', 'Accident_Severity'])\
.unstack('Accident_Severity')\
.rename({1:'fatal', 2:'serious', 3:'slight'}, axis='columns')
counts
# prepare dataframe with shares
counts['sum'] = counts.sum(axis=1)
counts = counts.join(counts.div(counts['sum'], axis=0), rsuffix=' in %')
counts_share = counts.drop(columns=['fatal', 'serious', 'slight', 'sum', 'sum in %'], axis=1)
counts_share
# prepare barplot
fig, ax = plt.subplots(figsize=(10, 5))
# plot
counts_share.reindex(order).plot(kind='barh', ax=ax, stacked=True, cmap='cividis')
ax.set_title('\nAccident Severity by Daytime\n', fontsize=14, fontweight='bold')
ax.set(xlabel='Percentage', ylabel='')
ax.legend(bbox_to_anchor=(1.25, 0.98), frameon=False)
# remove all spines
sns.despine(top=True, right=True, left=True, bottom=True);
###Output
_____no_output_____
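###Markdown
To quantify associations with the number of casualties (one of the stated tasks), here is a rough sketch of the correlations between that variable and the other numeric columns. It assumes the accidents table exposes a `Number_of_Casualties` column, which is not referenced elsewhere in this notebook.
###Code
# correlation of the (assumed) 'Number_of_Casualties' column with the other numeric columns
numeric_cols = accidents.select_dtypes(include=[np.number])
numeric_cols.corr()['Number_of_Casualties'].sort_values(ascending=False).head(10)
###Output
_____no_output_____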
###Markdown
*Back to: Table of Contents*
3.2. Main Characteristics of Casualties
###Code
casualties.describe().T
###Output
_____no_output_____
###Markdown
***What is the most frequent age of casualties?***
###Code
# prepare plot
sns.set_style('white')
fig, ax = plt.subplots(figsize=(10,6))
# plot
casualties.age_of_casualty.hist(bins=20, ax=ax, color='lightsteelblue')
ax.set_title('\nCasualties by Age\n', fontsize=14, fontweight='bold')
ax.set(xlabel='Age', ylabel='Total Count of Casualties')
# remove all spines
sns.despine(top=True, right=True, left=True, bottom=True);
###Output
_____no_output_____
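###Markdown
To put a number on the pattern in the histogram, a quick look at the median and most frequent casualty age:
###Code
# simple summary statistics for the age of casualties
print('Median age of casualties:', casualties['age_of_casualty'].median())
print('Most frequent age of casualties:', casualties['age_of_casualty'].mode().iloc[0])
###Output
_____no_output_____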
###Markdown
*It looks like younger people are affected the most!* *Back to: Table of Contents*
3.3. Main Characteristics of Vehicles
###Code
#vehicles.describe().T
###Output
_____no_output_____
###Markdown
***What are the age and gender of the drivers who cause an accident?***
###Code
vehicles.sex_of_driver.value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
*We'll have to keep in mind that two-thirds of the drivers are male --> imbalanced classes!*
###Code
# create a new dataframe
drivers = vehicles.groupby(['age_band_of_driver', 'sex_of_driver']).size().reset_index()
# drop the values that have no value
drivers.drop(drivers[(drivers['age_band_of_driver'] == -1) | \
(drivers['sex_of_driver'] == -1) | \
(drivers['sex_of_driver'] == 3)]\
.index, axis=0, inplace=True)
# rename the columns
drivers.columns = ['age_band_of_driver', 'sex_of_driver', 'Count']
# rename the values to be more convenient for the reader resp. viewer
drivers['sex_of_driver'] = drivers['sex_of_driver'].map({1: 'male', 2: 'female'})
drivers['age_band_of_driver'] = drivers['age_band_of_driver'].map({1: '0 - 5', 2: '6 - 10', 3: '11 - 15',
4: '16 - 20', 5: '21 - 25', 6: '26 - 35',
7: '36 - 45', 8: '46 - 55', 9: '56 - 65',
10: '66 - 75', 11: 'Over 75'})
# seaborn barplot
fig, ax = plt.subplots(figsize=(14, 7))
sns.barplot(y='age_band_of_driver', x='Count', hue='sex_of_driver', data=drivers, palette='bone')
ax.set_title('\nAge and Sex of Driver At-Fault\n', fontsize=14, fontweight='bold')
ax.set(xlabel='Count', ylabel='Age Band of Driver')
ax.legend(bbox_to_anchor=(1.1, 1.), borderaxespad=0., frameon=False)
# remove all spines
sns.despine(top=True, right=True, left=True, bottom=True);
###Output
_____no_output_____
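###Markdown
Since roughly two-thirds of the drivers are male, the raw counts are hard to compare across the sexes. A minimal sketch of the age-band distribution of at-fault drivers *within* each sex, which makes the two groups comparable despite the imbalance:
###Code
# age-band distribution of at-fault drivers within each sex
pivot = drivers.pivot(index='age_band_of_driver', columns='sex_of_driver', values='Count')
age_share_within_sex = pivot.div(pivot.sum(axis=0), axis=1)
age_share_within_sex.round(3)
###Output
_____no_output_____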
###Markdown
***Which type of propulsion is often involved in accidents?***
###Code
vehicles['propulsion_code'].value_counts()
# prepare dataframe
df_plot = vehicles.groupby('propulsion_code').size()\
.reset_index(name='counts')\
.sort_values(by='counts', ascending=False)
df_plot = df_plot[df_plot.counts > 9000]
df_plot['propulsion_code'] = df_plot['propulsion_code'].map({-1: 'Undefined', 1: 'Petrol',
2: 'Heavy oil'})
df_plot
# library for plotting a tree map
import squarify
# prepare plot
labels = df_plot.apply(lambda x: str(x[0]) + "\n (" + str(x[1]) + ")", axis=1)
sizes = df_plot['counts'].values.tolist()
colors = [plt.cm.Pastel1(i/float(len(labels))) for i in range(len(labels))]
# plot
plt.figure(figsize=(10,6), dpi= 80)
squarify.plot(sizes=sizes, label=labels, color=colors, alpha=.8)
# Decorate
plt.title('\nPropulsion involved in accidents\n', fontsize=14, fontweight='bold')
plt.axis('off')
plt.show()
###Output
_____no_output_____ |
Assignments/Assignment_2_final_labels.ipynb | ###Markdown
###Code
import pandas as pd
from joblib import dump,load
from sklearn import metrics
###Output
_____no_output_____
###Markdown
###Code
as_read_unseen_data_df = pd.read_csv('/content/drive/MyDrive/EE 769/Assignment data/Assignment 2 data/test.csv')
###Output
_____no_output_____
###Markdown
Feature split on date recorded
###Code
# Initiate the dataframe to store created columns by splitting 'date_recorded'
date_recorded_unseen_data_split_df = pd.DataFrame(columns=['Year', 'Month', 'Date'])
# Split the 'date_recorded' column
date_recorded_unseen_data_split_df[['Year', 'Month', 'Date']] = as_read_unseen_data_df['date_recorded'].str.split('-', expand=True).astype('int64')
# Display the dataframe
display(date_recorded_unseen_data_split_df)
print('--'*100)
# Check data types of the created dataframe
display(date_recorded_unseen_data_split_df[['Year', 'Month', 'Date']].dtypes)
print('--'*100)
# Add the columns to reduced seen dataframe and drop 'date_recorded'
reduced_unseen_data_v1_df = pd.concat([as_read_unseen_data_df, date_recorded_unseen_data_split_df], axis=1).drop(columns='date_recorded')
# Display reduced seen data version
display(reduced_unseen_data_v1_df)
###Output
_____no_output_____
###Markdown
Select the data as per the seen data
###Code
reduced_seen_data_v8_df = pd.read_pickle('/content/drive/MyDrive/EE 769/Assignment data/Assignment 2 data/stored data/reduced_seen_data_v8_df.pkl')
selected_column_names = reduced_seen_data_v8_df.columns.drop('status_group')
reduced_unseen_data_v2_df = reduced_unseen_data_v1_df[selected_column_names]
display(reduced_unseen_data_v2_df.columns)
###Output
_____no_output_____
###Markdown
Function to get columns not matching in 2 dataframes
###Code
# Define functions to get lists of missing columns in 2 dataframes
def get_missing_columns_lists(left_df, right_df):
"""
- Tejas Chaudhari
"""
# Get columns in left dataframe which are missing in right dataframe
missing_left_column_names_list = []
    for i, left_column_name in enumerate(left_df.columns):
        if left_column_name not in right_df.columns:
            missing_left_column_names_list.append(left_column_name)
    # Display results
print('Columns in left_df which are missing in right_df are: ')
print(missing_left_column_names_list)
print('--'*50)
# Get columns in right dataframe which are missing in left dataframe
missing_right_column_names_list = []
for i, right_column_name in enumerate(right_df.columns):
if right_column_name not in left_df.columns:
missing_right_column_names_list.append(right_column_name)
    # Display results
print('Columns in right_df which are missing in left_df are: ')
print(missing_right_column_names_list)
print('--'*50)
return missing_left_column_names_list, missing_right_column_names_list
get_missing_columns_lists(reduced_unseen_data_v2_df, reduced_seen_data_v8_df)
###Output
Columns in left_df which are missing in right_df are:
[]
----------------------------------------------------------------------------------------------------
Columns in right_df which are missing in left_df are:
['status_group']
----------------------------------------------------------------------------------------------------
###Markdown
Encode
###Code
# Encode using pandas '.get_dummies'
encoded_seen_data_v1_df = pd.get_dummies(reduced_unseen_data_v2_df)
# Display
display(encoded_seen_data_v1_df)
# Display current version of encoded
display(encoded_seen_data_v1_df)
print('--'*100)
# Data types
encoded_seen_data_v1_df.dtypes.value_counts()
def XGB_column_names_cleaner(X_data_df):
"""
- Tejas Chaudhari
"""
# Initiate dataframe
XGBC_X_data_df = X_data_df.copy(deep=True)
# Script to remove '[', ']', and '<' from column names
for i, column_name in enumerate(X_data_df.columns):
if '[' in column_name or ']' in column_name or '<' in column_name:
print('Column name to change: ', column_name)
            new_column_name = column_name.replace(']', '_').replace('[', '_').replace('<', '_')
XGBC_X_data_df.rename(columns={column_name: new_column_name}, inplace=True)
print('Changed to: ', XGBC_X_data_df.columns[i])
return XGBC_X_data_df
XGBRFC_encoded_data_df = XGB_column_names_cleaner(encoded_seen_data_v1_df)
X_train = pd.read_pickle('/content/drive/MyDrive/EE 769/Assignment data/Assignment 2 data/stored data/X_train.pkl')
display(X_train)
display(XGBRFC_encoded_data_df)
selected_column_names_train = X_train.columns
selected_XGBRFC_encoded_data_df = XGBRFC_encoded_data_df[selected_column_names_train]
display(selected_XGBRFC_encoded_data_df)
###Output
_____no_output_____
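###Markdown
Note that selecting `X_train`'s columns directly assumes every dummy column from training also appears in the encoded unseen data; a category missing from the test set would raise a KeyError. A more defensive alternative (a sketch, not what was run above) aligns the columns with `reindex` and fills the gaps with zeros:
###Code
# align the unseen-data columns to the training columns, filling absent dummy columns with 0
aligned_unseen_data_df = XGBRFC_encoded_data_df.reindex(columns=X_train.columns, fill_value=0)
display(aligned_unseen_data_df.shape)
###Output
_____no_output_____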
###Markdown
Get labels
###Code
# Load saved model
saved_path = '/content/drive/MyDrive/EE 769/Assignment data/Assignment 2 data/stored models/fitted_RFC_randomized_search_cv_model.joblib'
fitted_RFC_randomized_search_cv_model = load(saved_path)
print(fitted_RFC_randomized_search_cv_model.best_estimator_)
y_test_estimated = fitted_RFC_randomized_search_cv_model.best_estimator_.predict(selected_XGBRFC_encoded_data_df)
saved_path = '/content/drive/MyDrive/EE 769/Assignment data/Assignment 2 data/fitted_y_label_encoder.joblib'
fitted_y_label_encoder = load(saved_path)
final_labels = fitted_y_label_encoder.inverse_transform(y_test_estimated)
final_labels_series = pd.Series(final_labels)
display(final_labels_series)
final_labels_series.to_csv('/content/drive/MyDrive/EE 769/Assignment data/Assignment 2 data/stored data/final_labels.csv',
index=False, header=False)
###Output
RandomForestClassifier(bootstrap=True, ccp_alpha=0.0, class_weight='balanced',
criterion='gini', max_depth=20, max_features='auto',
max_leaf_nodes=None, max_samples=None,
min_impurity_decrease=0.0, min_impurity_split=None,
min_samples_leaf=1, min_samples_split=2,
min_weight_fraction_leaf=0.0, n_estimators=506,
n_jobs=-1, oob_score=False, random_state=7, verbose=0,
warm_start=False)
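###Markdown
As a quick sanity check, the distribution of the predicted classes:
###Code
# distribution of predicted labels for the unseen data
display(final_labels_series.value_counts(normalize=True).round(3))
###Output
_____no_output_____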
|
repulok/airportia_ro_dest_parser.ipynb | ###Markdown
Record schedules for 2 weeks, then augment the counts with weekly flight numbers. Seasonal and seasonal charter flights count as once per week for 3 months, i.e. 12/52 flights per week. TGM is handled separately, since its history lies in the past.
###Code
for i in locations:
print i
if i not in sch:sch[i]={}
if i!='TGM':
#march 11-24 = 2 weeks
for d in range (11,25):
if d not in sch[i]:
try:
url=airportialinks[i]
full=url+'departures/201703'+str(d)
m=requests.get(full).content
sch[i][full]=pd.read_html(m)[0]
#print full
except: pass #print 'no tables',i,d
else:
#november 17-30 = 2 weeks
for d in range (17,31):
if d not in sch[i]:
try:
url=airportialinks[i]
full=url+'departures/201611'+str(d)
m=requests.get(full).content
sch[i][full]=pd.read_html(m)[0]
#print full
except: pass #print 'no tables',i,d
mdf=pd.DataFrame()
for i in sch:
for d in sch[i]:
df=sch[i][d].drop(sch[i][d].columns[3:],axis=1).drop(sch[i][d].columns[0],axis=1)
df['From']=i
df['Date']=d
mdf=pd.concat([mdf,df])
mdf=mdf.replace('Hahn','Frankfurt')
mdf=mdf.replace('Hahn HHN','Frankfurt HHN')
mdf['City']=[i[:i.rfind(' ')] for i in mdf['To']]
mdf['Airport']=[i[i.rfind(' ')+1:] for i in mdf['To']]
file("mdf_ro_dest.json",'w').write(json.dumps(mdf.reset_index().to_json()))
len(mdf)
airlines=set(mdf['Airline'])
cities=set(mdf['City'])
file("cities_ro_dest.json",'w').write(json.dumps(list(cities)))
file("airlines_ro_dest.json",'w').write(json.dumps(list(airlines)))
citycoords={}
for i in cities:
if i not in citycoords:
if i==u'Birmingham': z='Birmingham, UK'
elif i==u'Valencia': z='Valencia, Spain'
elif i==u'Naples': z='Naples, Italy'
elif i==u'St. Petersburg': z='St. Petersburg, Russia'
elif i==u'Bristol': z='Bristol, UK'
else: z=i
citycoords[i]=Geocoder(apik).geocode(z)
print i
citysave={}
for i in citycoords:
citysave[i]={"coords":citycoords[i][0].coordinates,
"country":citycoords[i][0].country}
file("citysave_ro_dest.json",'w').write(json.dumps(citysave))
###Output
_____no_output_____ |
modules/programming-old/nldl-sample-project/.ipynb_checkpoints/nldl_graphing-checkpoint.ipynb | ###Markdown
Graphing teh Berds

The Data

Our sample data set for this course is the NLDL data set from F. Kubke. NLDL stands for "Nucleus Laminaris Delay Lines". The data in each file describes the voltage over time at a specific location in the brain.

In simple terms, an electrical stimulation was applied at the point where sound enters the brain, and then electrodes measured the voltage over time at a range of locations.

By measuring the time delay from stimulation to peak voltage for each data file, we can obtain secondary data which can be combined to determine whether the stimulation came from the left ear or the right ear.

Our goal here is to plot the voltage over time as recorded in a single file. The resulting chart will be a waveform.

Loading the Data

We're using an external module, so it's best to enable autoreload. This means that if we make any changes to the module, the script here will be automatically updated.
###Code
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Import the parser. This module can take one or more NLDL files and convert them to Python objects for you.
###Code
from nldl_parser import *
from pprint import *
###Output
_____no_output_____
###Markdown
Set the source file. Let's just start with one.
###Code
source_file = 'csv/TEK0000.CSV'
###Output
_____no_output_____
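###Markdown
The recordings appear to live in the `csv/` folder; assuming that layout, a minimal sketch to collect all of them for later batch processing:
###Code
import glob

# list all NLDL CSV files in the csv/ folder (assumed layout), sorted for reproducibility
all_source_files = sorted(glob.glob('csv/*.CSV'))
print(all_source_files[:5])
###Output
_____no_output_____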
###Markdown
Create a new `parser`. Pass it the `source_file` and receive back the `data` object.
###Code
parser = nldl_parser.NLDLParser()
data = parser.parse_file(source_file)
print("File Data:")
pprint(data)
###Output
File Data:
{'metadata': {'Firmware Version': 'FV:v6.08',
'Horizontal Scale': '2.500000e-04',
'Horizontal Units': 's',
'Probe Atten': '1.000000e+01',
'Pt Fmt': 'Y',
'Record Length': '2.500000e+03',
'Sample Interval': '1.000000e-06',
'Source': 'CH1',
'Trigger Point': '2.400000000000e+02',
'Vertical Offset': '2.640000e-01',
'Vertical Scale': '2.000000e-01',
'Vertical Units': 'V',
'Yzero': '0.000000e+00'},
'readings': [{'time': ' -0.000240000000', 'voltage': ' -0.02400'},
{'time': ' -0.000239000000', 'voltage': ' -0.02400'},
{'time': ' -0.000238000000', 'voltage': ' -0.01600'},
{'time': ' -0.000237000000', 'voltage': ' -0.01600'},
{'time': ' -0.000236000000', 'voltage': ' -0.00800'},
{'time': ' -0.000235000000', 'voltage': ' -0.02400'},
{'time': ' -0.000234000000', 'voltage': ' -0.00800'},
{'time': ' -0.000233000000', 'voltage': ' -0.01600'},
{'time': ' -0.000232000000', 'voltage': ' -0.01600'},
{'time': ' -0.000231000000', 'voltage': ' -0.01600'},
{'time': ' -0.000230000000', 'voltage': ' -0.02400'},
{'time': ' -0.000229000000', 'voltage': ' -0.00800'},
{'time': ' -0.000228000000', 'voltage': ' -0.02400'},
{'time': ' -0.000227000000', 'voltage': ' -0.02400'},
{'time': ' -0.000226000000', 'voltage': ' -0.02400'},
{'time': ' -0.000225000000', 'voltage': ' -0.03200'},
{'time': '-00.000224000000', 'voltage': ' -0.03200'},
{'time': '-00.000223000000', 'voltage': ' -0.02400'},
{'time': '-00.000222000000', 'voltage': ' -0.02400'},
{'time': '-00.000221000000', 'voltage': ' -0.01600'},
{'time': '-00.000220000000', 'voltage': ' -0.01600'},
{'time': '-00.000219000000', 'voltage': ' -0.01600'},
{'time': '-00.000218000000', 'voltage': ' -0.02400'},
{'time': '-00.000217000000', 'voltage': ' -0.02400'},
{'time': '-00.000216000000', 'voltage': ' -0.00800'},
{'time': '-00.000215000000', 'voltage': ' -0.01600'},
{'time': '-00.000214000000', 'voltage': ' 0.00000'},
{'time': '-00.000213000000', 'voltage': ' -0.00800'},
{'time': '-00.000212000000', 'voltage': ' -0.01600'},
{'time': '-00.000211000000', 'voltage': ' 0.00000'},
{'time': '-00.000210000000', 'voltage': ' -0.01600'},
{'time': '-00.000209000000', 'voltage': ' -0.01600'},
{'time': '-00.000208000000', 'voltage': ' -0.01600'},
{'time': '-00.000207000000', 'voltage': ' -0.02400'},
{'time': '-00.000206000000', 'voltage': ' -0.02400'},
{'time': '-00.000205000000', 'voltage': ' -0.02400'},
{'time': '-00.000204000000', 'voltage': ' -0.01600'},
{'time': '-00.000203000000', 'voltage': ' -0.04800'},
{'time': '-00.000202000000', 'voltage': ' -0.00800'},
{'time': '-00.000201000000', 'voltage': ' -0.04000'},
{'time': '-00.000200000000', 'voltage': ' -0.00800'},
{'time': '-00.000199000000', 'voltage': ' -0.02400'},
{'time': '-00.000198000000', 'voltage': ' -0.03200'},
{'time': '-00.000197000000', 'voltage': ' -0.01600'},
{'time': '-00.000196000000', 'voltage': ' -0.00800'},
{'time': '-00.000195000000', 'voltage': ' -0.01600'},
{'time': '-00.000194000000', 'voltage': ' -0.01600'},
{'time': '-00.000193000000', 'voltage': ' -0.02400'},
{'time': '-00.000192000000', 'voltage': ' -0.01600'},
{'time': '-00.000191000000', 'voltage': ' -0.02400'},
{'time': '-00.000190000000', 'voltage': ' -0.01600'},
{'time': '-00.000189000000', 'voltage': ' -0.02400'},
{'time': '-00.000188000000', 'voltage': ' -0.02400'},
{'time': '-00.000187000000', 'voltage': ' -0.01600'},
{'time': '-00.000186000000', 'voltage': ' -0.01600'},
{'time': '-00.000185000000', 'voltage': ' -0.02400'},
{'time': '-00.000184000000', 'voltage': ' -0.02400'},
{'time': '-00.000183000000', 'voltage': ' -0.03200'},
{'time': '-00.000182000000', 'voltage': ' -0.02400'},
{'time': '-00.000181000000', 'voltage': ' -0.02400'},
{'time': '-00.000180000000', 'voltage': ' -0.02400'},
{'time': '-00.000179000000', 'voltage': ' -0.02400'},
{'time': '-00.000178000000', 'voltage': ' -0.02400'},
{'time': '-00.000177000000', 'voltage': ' -0.01600'},
{'time': '-00.000176000000', 'voltage': ' -0.03200'},
{'time': '-00.000175000000', 'voltage': ' -0.01600'},
{'time': '-00.000174000000', 'voltage': ' -0.01600'},
{'time': '-00.000173000000', 'voltage': ' -0.01600'},
{'time': '-00.000172000000', 'voltage': ' -0.00800'},
{'time': '-00.000171000000', 'voltage': ' -0.02400'},
{'time': '-00.000170000000', 'voltage': ' -0.00800'},
{'time': '-00.000169000000', 'voltage': ' -0.01600'},
{'time': '-00.000168000000', 'voltage': ' -0.02400'},
{'time': '-00.000167000000', 'voltage': ' -0.01600'},
{'time': '-00.000166000000', 'voltage': ' -0.01600'},
{'time': '-00.000165000000', 'voltage': ' -0.00800'},
{'time': '-00.000164000000', 'voltage': ' -0.01600'},
{'time': '-00.000163000000', 'voltage': ' -0.02400'},
{'time': '-00.000162000000', 'voltage': ' -0.02400'},
{'time': '-00.000161000000', 'voltage': ' -0.02400'},
{'time': '-00.000160000000', 'voltage': ' -0.01600'},
{'time': '-00.000159000000', 'voltage': ' -0.02400'},
{'time': '-00.000158000000', 'voltage': ' -0.02400'},
{'time': '-00.000157000000', 'voltage': ' -0.01600'},
{'time': '-00.000156000000', 'voltage': ' -0.02400'},
{'time': '-00.000155000000', 'voltage': ' -0.01600'},
{'time': '-00.000154000000', 'voltage': ' -0.02400'},
{'time': '-00.000153000000', 'voltage': ' -0.01600'},
{'time': '-00.000152000000', 'voltage': ' -0.01600'},
{'time': '-00.000151000000', 'voltage': ' -0.01600'},
{'time': '-00.000150000000', 'voltage': ' -0.01600'},
{'time': '-00.000149000000', 'voltage': ' -0.01600'},
{'time': '-00.000148000000', 'voltage': ' -0.00800'},
{'time': '-00.000147000000', 'voltage': ' -0.01600'},
{'time': '-00.000146000000', 'voltage': ' -0.01600'},
{'time': '-00.000145000000', 'voltage': ' -0.01600'},
{'time': '-00.000144000000', 'voltage': ' -0.02400'},
{'time': '-00.000143000000', 'voltage': ' -0.00800'},
{'time': '-00.000142000000', 'voltage': ' -0.02400'},
{'time': '-00.000141000000', 'voltage': ' -0.02400'},
{'time': '-00.000140000000', 'voltage': ' -0.00800'},
{'time': '-00.000139000000', 'voltage': ' -0.01600'},
{'time': '-00.000138000000', 'voltage': ' -0.03200'},
{'time': '-00.000137000000', 'voltage': ' -0.01600'},
{'time': '-00.000136000000', 'voltage': ' -0.02400'},
{'time': '-00.000135000000', 'voltage': ' -0.02400'},
{'time': '-00.000134000000', 'voltage': ' -0.03200'},
{'time': '-00.000133000000', 'voltage': ' -0.02400'},
{'time': '-00.000132000000', 'voltage': ' -0.02400'},
{'time': '-00.000131000000', 'voltage': ' -0.01600'},
{'time': '-00.000130000000', 'voltage': ' -0.02400'},
{'time': '-00.000129000000', 'voltage': ' -0.02400'},
{'time': '-00.000128000000', 'voltage': ' -0.00800'},
{'time': '-00.000127000000', 'voltage': ' -0.02400'},
{'time': '-00.000126000000', 'voltage': ' -0.02400'},
{'time': '-00.000125000000', 'voltage': ' -0.00800'},
{'time': '-00.000124000000', 'voltage': ' -0.02400'},
{'time': '-00.000123000000', 'voltage': ' -0.01600'},
{'time': '-00.000122000000', 'voltage': ' -0.02400'},
{'time': '-00.000121000000', 'voltage': ' -0.00800'},
{'time': '-00.000120000000', 'voltage': ' -0.00800'},
{'time': '-00.000119000000', 'voltage': ' -0.02400'},
{'time': '-00.000118000000', 'voltage': ' -0.01600'},
{'time': '-00.000117000000', 'voltage': ' -0.02400'},
{'time': '-00.000116000000', 'voltage': ' -0.01600'},
{'time': '-00.000115000000', 'voltage': ' -0.01600'},
{'time': '-00.000114000000', 'voltage': ' -0.03200'},
{'time': '-00.000113000000', 'voltage': ' -0.02400'},
{'time': '-00.000112000000', 'voltage': ' -0.01600'},
{'time': '-00.000111000000', 'voltage': ' -0.02400'},
{'time': '-00.000110000000', 'voltage': ' -0.00800'},
{'time': '-00.000109000000', 'voltage': ' -0.01600'},
{'time': '-00.000108000000', 'voltage': ' -0.01600'},
{'time': '-00.000107000000', 'voltage': ' -0.02400'},
{'time': '-00.000106000000', 'voltage': ' -0.00800'},
{'time': '-00.000105000000', 'voltage': ' -0.02400'},
{'time': '-00.000104000000', 'voltage': ' 0.00000'},
{'time': '-00.000103000000', 'voltage': ' -0.00800'},
{'time': '-00.000102000000', 'voltage': ' -0.01600'},
{'time': '-00.000101000000', 'voltage': ' -0.00800'},
{'time': '-00.000100000000', 'voltage': ' -0.01600'},
{'time': '-00.000099000000', 'voltage': ' -0.01600'},
{'time': '-00.000098000000', 'voltage': ' -0.01600'},
{'time': '-00.000097000000', 'voltage': ' -0.01600'},
{'time': '-00.000096000000', 'voltage': ' -0.02400'},
{'time': '-00.000095000000', 'voltage': ' -0.03200'},
{'time': '-00.000094000000', 'voltage': ' -0.01600'},
{'time': '-00.000093000000', 'voltage': ' -0.02400'},
{'time': '-00.000092000000', 'voltage': ' -0.04000'},
{'time': '-00.000091000000', 'voltage': ' -0.02400'},
{'time': '-00.000090000000', 'voltage': ' -0.03200'},
{'time': '-00.000089000000', 'voltage': ' -0.02400'},
{'time': '-00.000088000000', 'voltage': ' -0.02400'},
{'time': '-00.000087000000', 'voltage': ' -0.01600'},
{'time': '-00.000086000000', 'voltage': ' -0.01600'},
{'time': '-00.000085000000', 'voltage': ' -0.03200'},
{'time': '-00.000084000000', 'voltage': ' -0.01600'},
{'time': '-00.000083000000', 'voltage': ' -0.00800'},
{'time': '-00.000082000000', 'voltage': ' -0.02400'},
{'time': '-00.000081000000', 'voltage': ' -0.01600'},
{'time': '-00.000080000000', 'voltage': ' -0.01600'},
{'time': '-00.000079000000', 'voltage': ' -0.01600'},
{'time': '-00.000078000000', 'voltage': ' -0.02400'},
{'time': '-00.000077000000', 'voltage': ' -0.00800'},
{'time': '-00.000076000000', 'voltage': ' -0.02400'},
{'time': '-00.000075000000', 'voltage': ' -0.01600'},
{'time': '-00.000074000000', 'voltage': ' -0.00800'},
{'time': '-00.000073000000', 'voltage': ' -0.02400'},
{'time': '-00.000072000000', 'voltage': ' -0.01600'},
{'time': '-00.000071000000', 'voltage': ' -0.02400'},
{'time': '-00.000070000000', 'voltage': ' -0.02400'},
{'time': '-00.000069000000', 'voltage': ' -0.01600'},
{'time': '-00.000068000000', 'voltage': ' -0.04000'},
{'time': '-00.000067000000', 'voltage': ' -0.02400'},
{'time': '-00.000066000000', 'voltage': ' -0.03200'},
{'time': '-00.000065000000', 'voltage': ' -0.02400'},
{'time': '-00.000064000000', 'voltage': ' -0.01600'},
{'time': '-00.000063000000', 'voltage': ' -0.00800'},
{'time': '-00.000062000000', 'voltage': ' -0.01600'},
{'time': '-00.000061000000', 'voltage': ' -0.00800'},
{'time': '-00.000060000000', 'voltage': ' -0.00800'},
{'time': '-00.000059000000', 'voltage': ' -0.01600'},
{'time': '-00.000058000000', 'voltage': ' -0.00800'},
{'time': '-00.000057000000', 'voltage': ' 0.00000'},
{'time': '-00.000056000000', 'voltage': ' -0.01600'},
{'time': '-00.000055000000', 'voltage': ' -0.01600'},
{'time': '-00.000054000000', 'voltage': ' -0.01600'},
{'time': '-00.000053000000', 'voltage': ' -0.03200'},
{'time': '-00.000052000000', 'voltage': ' -0.02400'},
{'time': '-00.000051000000', 'voltage': ' -0.01600'},
{'time': '-00.000050000000', 'voltage': ' -0.02400'},
{'time': '-00.000049000000', 'voltage': ' -0.02400'},
{'time': '-00.000048000000', 'voltage': ' -0.01600'},
{'time': '-00.000047000000', 'voltage': ' -0.02400'},
{'time': '-00.000046000000', 'voltage': ' -0.02400'},
{'time': '-00.000045000000', 'voltage': ' -0.01600'},
{'time': '-00.000044000000', 'voltage': ' -0.01600'},
{'time': '-00.000043000000', 'voltage': ' -0.02400'},
{'time': '-00.000042000000', 'voltage': ' -0.00800'},
{'time': '-00.000041000000', 'voltage': ' -0.01600'},
{'time': '-00.000040000000', 'voltage': ' -0.01600'},
{'time': '-00.000039000000', 'voltage': ' -0.02400'},
{'time': '-00.000038000000', 'voltage': ' -0.00800'},
{'time': '-00.000037000000', 'voltage': ' -0.01600'},
{'time': '-00.000036000000', 'voltage': ' -0.01600'},
{'time': '-00.000035000000', 'voltage': ' -0.00800'},
{'time': '-00.000034000000', 'voltage': ' -0.00800'},
{'time': '-00.000033000000', 'voltage': ' -0.02400'},
{'time': '-00.000032000000', 'voltage': ' -0.02400'},
{'time': '-00.000031000000', 'voltage': ' -0.01600'},
{'time': '-00.000030000000', 'voltage': ' -0.02400'},
{'time': '-00.000029000000', 'voltage': ' -0.02400'},
{'time': '-00.000028000000', 'voltage': ' -0.04000'},
{'time': '-00.000027000000', 'voltage': ' -0.02400'},
{'time': '-00.000026000000', 'voltage': ' -0.02400'},
{'time': '-00.000025000000', 'voltage': ' -0.03200'},
{'time': '-00.000024000000', 'voltage': ' -0.02400'},
{'time': '-00.000023000000', 'voltage': ' -0.03200'},
{'time': '-00.000022000000', 'voltage': ' -0.02400'},
{'time': '-00.000021000000', 'voltage': ' -0.02400'},
{'time': '-00.000020000000', 'voltage': ' -0.02400'},
{'time': '-00.000019000000', 'voltage': ' -0.01600'},
{'time': '-00.000018000000', 'voltage': ' -0.01600'},
{'time': '-00.000017000000', 'voltage': ' -0.02400'},
{'time': '-00.000016000000', 'voltage': ' -0.01600'},
{'time': '-00.000015000000', 'voltage': ' -0.01600'},
{'time': '-00.000014000000', 'voltage': ' -0.00800'},
{'time': '-00.000013000000', 'voltage': ' -0.02400'},
{'time': '-00.000012000000', 'voltage': ' -0.00800'},
{'time': '-00.000011000000', 'voltage': ' -0.03200'},
{'time': '-00.000010000000', 'voltage': ' -0.02400'},
{'time': '-00.000009000000', 'voltage': ' -0.03200'},
{'time': '-00.000008000000', 'voltage': ' -0.02400'},
{'time': '-00.000007000000', 'voltage': ' -0.01600'},
{'time': '-00.000006000000', 'voltage': ' -0.03200'},
{'time': '-00.000005000000', 'voltage': ' -0.02400'},
{'time': '-00.000004000000', 'voltage': ' -0.02400'},
{'time': '-00.000003000000', 'voltage': ' -0.00800'},
{'time': '-00.000002000000', 'voltage': ' -0.03200'},
{'time': '-00.000001000000', 'voltage': ' 0.00000'},
{'time': '-00.000000000000', 'voltage': ' -0.01600'},
{'time': '00.000001000000', 'voltage': ' -0.02400'},
{'time': '00.000002000000', 'voltage': ' -0.01600'},
{'time': '00.000003000000', 'voltage': ' -0.01600'},
{'time': '00.000004000000', 'voltage': ' -0.01600'},
{'time': '00.000005000000', 'voltage': ' -0.00800'},
{'time': '00.000006000000', 'voltage': ' -0.01600'},
{'time': '00.000007000000', 'voltage': ' -0.00800'},
{'time': '00.000008000000', 'voltage': ' 0.00000'},
{'time': '00.000009000000', 'voltage': ' -0.00800'},
{'time': '00.000010000000', 'voltage': ' 0.01600'},
{'time': '00.000011000000', 'voltage': ' 0.03200'},
{'time': '00.000012000000', 'voltage': ' 0.04800'},
{'time': '00.000013000000', 'voltage': ' 0.07200'},
{'time': '00.000014000000', 'voltage': ' 0.10400'},
{'time': '00.000015000000', 'voltage': ' 0.15200'},
{'time': '00.000016000000', 'voltage': ' 0.18400'},
{'time': '00.000017000000', 'voltage': ' 0.22400'},
{'time': '00.000018000000', 'voltage': ' 0.27200'},
{'time': '00.000019000000', 'voltage': ' 0.31200'},
{'time': '00.000020000000', 'voltage': ' 0.35200'},
{'time': '00.000021000000', 'voltage': ' 0.39200'},
{'time': '00.000022000000', 'voltage': ' 0.44800'},
{'time': '00.000023000000', 'voltage': ' 0.49600'},
{'time': '00.000024000000', 'voltage': ' 0.53600'},
{'time': '00.000025000000', 'voltage': ' 0.56800'},
{'time': '00.000026000000', 'voltage': ' 0.60800'},
{'time': '00.000027000000', 'voltage': ' 0.65600'},
{'time': '00.000028000000', 'voltage': ' 0.66400'},
{'time': '00.000029000000', 'voltage': ' 0.69600'},
{'time': '00.000030000000', 'voltage': ' 0.72000'},
{'time': '00.000031000000', 'voltage': ' 0.74400'},
{'time': '00.000032000000', 'voltage': ' 0.74400'},
{'time': '00.000033000000', 'voltage': ' 0.74400'},
{'time': '00.000034000000', 'voltage': ' 0.75200'},
{'time': '00.000035000000', 'voltage': ' 0.74400'},
{'time': '00.000036000000', 'voltage': ' 0.72800'},
{'time': '00.000037000000', 'voltage': ' 0.69600'},
{'time': '00.000038000000', 'voltage': ' 0.64800'},
{'time': '00.000039000000', 'voltage': ' 0.57600'},
{'time': '00.000040000000', 'voltage': ' 0.48800'},
{'time': '00.000041000000', 'voltage': ' 0.40800'},
{'time': '00.000042000000', 'voltage': ' 0.30400'},
{'time': '00.000043000000', 'voltage': ' 0.20000'},
{'time': '00.000044000000', 'voltage': ' 0.08800'},
{'time': '00.000045000000', 'voltage': ' -0.00800'},
{'time': '00.000046000000', 'voltage': ' -0.11200'},
{'time': '00.000047000000', 'voltage': ' -0.20800'},
{'time': '00.000048000000', 'voltage': ' -0.30400'},
{'time': '00.000049000000', 'voltage': ' -0.37600'},
{'time': '00.000050000000', 'voltage': ' -0.44000'},
{'time': '00.000051000000', 'voltage': ' -0.49600'},
{'time': '00.000052000000', 'voltage': ' -0.56000'},
{'time': '00.000053000000', 'voltage': ' -0.60000'},
{'time': '00.000054000000', 'voltage': ' -0.64800'},
{'time': '00.000055000000', 'voltage': ' -0.68000'},
{'time': '00.000056000000', 'voltage': ' -0.71200'},
{'time': '00.000057000000', 'voltage': ' -0.73600'},
{'time': '00.000058000000', 'voltage': ' -0.73600'},
{'time': '00.000059000000', 'voltage': ' -0.77600'},
{'time': '00.000060000000', 'voltage': ' -0.78400'},
{'time': '00.000061000000', 'voltage': ' -0.78400'},
{'time': '00.000062000000', 'voltage': ' -0.79200'},
{'time': '00.000063000000', 'voltage': ' -0.80000'},
{'time': '00.000064000000', 'voltage': ' -0.80800'},
{'time': '00.000065000000', 'voltage': ' -0.80800'},
{'time': '00.000066000000', 'voltage': ' -0.79200'},
{'time': '00.000067000000', 'voltage': ' -0.80000'},
{'time': '00.000068000000', 'voltage': ' -0.79200'},
{'time': '00.000069000000', 'voltage': ' -0.79200'},
{'time': '00.000070000000', 'voltage': ' -0.78400'},
{'time': '00.000071000000', 'voltage': ' -0.79200'},
{'time': '00.000072000000', 'voltage': ' -0.78400'},
{'time': '00.000073000000', 'voltage': ' -0.77600'},
{'time': '00.000074000000', 'voltage': ' -0.76800'},
{'time': '00.000075000000', 'voltage': ' -0.75200'},
{'time': '00.000076000000', 'voltage': ' -0.75200'},
{'time': '00.000077000000', 'voltage': ' -0.74400'},
{'time': '00.000078000000', 'voltage': ' -0.73600'},
{'time': '00.000079000000', 'voltage': ' -0.72800'},
{'time': '00.000080000000', 'voltage': ' -0.72800'},
{'time': '00.000081000000', 'voltage': ' -0.72000'},
{'time': '00.000082000000', 'voltage': ' -0.71200'},
{'time': '00.000083000000', 'voltage': ' -0.69600'},
{'time': '00.000084000000', 'voltage': ' -0.69600'},
{'time': '00.000085000000', 'voltage': ' -0.68800'},
{'time': '00.000086000000', 'voltage': ' -0.67200'},
{'time': '00.000087000000', 'voltage': ' -0.67200'},
{'time': '00.000088000000', 'voltage': ' -0.65600'},
{'time': '00.000089000000', 'voltage': ' -0.64800'},
{'time': '00.000090000000', 'voltage': ' -0.64000'},
{'time': '00.000091000000', 'voltage': ' -0.63200'},
{'time': '00.000092000000', 'voltage': ' -0.63200'},
{'time': '00.000093000000', 'voltage': ' -0.60800'},
{'time': '00.000094000000', 'voltage': ' -0.61600'},
{'time': '00.000095000000', 'voltage': ' -0.59200'},
{'time': '00.000096000000', 'voltage': ' -0.59200'},
{'time': '00.000097000000', 'voltage': ' -0.59200'},
{'time': '00.000098000000', 'voltage': ' -0.58400'},
{'time': '00.000099000000', 'voltage': ' -0.57600'},
{'time': '00.000100000000', 'voltage': ' -0.57600'},
{'time': '00.000101000000', 'voltage': ' -0.57600'},
{'time': '00.000102000000', 'voltage': ' -0.56000'},
{'time': '00.000103000000', 'voltage': ' -0.56000'},
{'time': '00.000104000000', 'voltage': ' -0.54400'},
{'time': '00.000105000000', 'voltage': ' -0.55200'},
{'time': '00.000106000000', 'voltage': ' -0.53600'},
{'time': '00.000107000000', 'voltage': ' -0.53600'},
{'time': '00.000108000000', 'voltage': ' -0.52800'},
{'time': '00.000109000000', 'voltage': ' -0.52800'},
{'time': '00.000110000000', 'voltage': ' -0.52000'},
{'time': '00.000111000000', 'voltage': ' -0.52000'},
{'time': '00.000112000000', 'voltage': ' -0.50400'},
{'time': '00.000113000000', 'voltage': ' -0.51200'},
{'time': '00.000114000000', 'voltage': ' -0.49600'},
{'time': '00.000115000000', 'voltage': ' -0.49600'},
{'time': '00.000116000000', 'voltage': ' -0.48800'},
{'time': '00.000117000000', 'voltage': ' -0.49600'},
{'time': '00.000118000000', 'voltage': ' -0.47200'},
{'time': '00.000119000000', 'voltage': ' -0.49600'},
{'time': '00.000120000000', 'voltage': ' -0.47200'},
{'time': '00.000121000000', 'voltage': ' -0.48000'},
{'time': '00.000122000000', 'voltage': ' -0.46400'},
{'time': '00.000123000000', 'voltage': ' -0.47200'},
{'time': '00.000124000000', 'voltage': ' -0.46400'},
{'time': '00.000125000000', 'voltage': ' -0.48000'},
{'time': '00.000126000000', 'voltage': ' -0.48000'},
{'time': '00.000127000000', 'voltage': ' -0.47200'},
{'time': '00.000128000000', 'voltage': ' -0.46400'},
{'time': '00.000129000000', 'voltage': ' -0.46400'},
{'time': '00.000130000000', 'voltage': ' -0.44800'},
{'time': '00.000131000000', 'voltage': ' -0.45600'},
{'time': '00.000132000000', 'voltage': ' -0.44800'},
{'time': '00.000133000000', 'voltage': ' -0.44000'},
{'time': '00.000134000000', 'voltage': ' -0.43200'},
{'time': '00.000135000000', 'voltage': ' -0.44000'},
{'time': '00.000136000000', 'voltage': ' -0.41600'},
{'time': '00.000137000000', 'voltage': ' -0.42400'},
{'time': '00.000138000000', 'voltage': ' -0.41600'},
{'time': '00.000139000000', 'voltage': ' -0.40800'},
{'time': '00.000140000000', 'voltage': ' -0.41600'},
{'time': '00.000141000000', 'voltage': ' -0.40000'},
{'time': '00.000142000000', 'voltage': ' -0.40000'},
{'time': '00.000143000000', 'voltage': ' -0.40000'},
{'time': '00.000144000000', 'voltage': ' -0.40000'},
{'time': '00.000145000000', 'voltage': ' -0.40000'},
{'time': '00.000146000000', 'voltage': ' -0.40000'},
{'time': '00.000147000000', 'voltage': ' -0.36800'},
{'time': '00.000148000000', 'voltage': ' -0.36800'},
{'time': '00.000149000000', 'voltage': ' -0.38400'},
{'time': '00.000150000000', 'voltage': ' -0.36000'},
{'time': '00.000151000000', 'voltage': ' -0.35200'},
{'time': '00.000152000000', 'voltage': ' -0.35200'},
{'time': '00.000153000000', 'voltage': ' -0.33600'},
{'time': '00.000154000000', 'voltage': ' -0.32000'},
{'time': '00.000155000000', 'voltage': ' -0.32800'},
{'time': '00.000156000000', 'voltage': ' -0.32800'},
{'time': '00.000157000000', 'voltage': ' -0.31200'},
{'time': '00.000158000000', 'voltage': ' -0.29600'},
{'time': '00.000159000000', 'voltage': ' -0.28800'},
{'time': '00.000160000000', 'voltage': ' -0.28000'},
{'time': '00.000161000000', 'voltage': ' -0.28000'},
{'time': '00.000162000000', 'voltage': ' -0.26400'},
{'time': '00.000163000000', 'voltage': ' -0.26400'},
{'time': '00.000164000000', 'voltage': ' -0.24800'},
{'time': '00.000165000000', 'voltage': ' -0.24800'},
{'time': '00.000166000000', 'voltage': ' -0.23200'},
{'time': '00.000167000000', 'voltage': ' -0.23200'},
{'time': '00.000168000000', 'voltage': ' -0.21600'},
{'time': '00.000169000000', 'voltage': ' -0.21600'},
{'time': '00.000170000000', 'voltage': ' -0.19200'},
{'time': '00.000171000000', 'voltage': ' -0.18400'},
{'time': '00.000172000000', 'voltage': ' -0.17600'},
{'time': '00.000173000000', 'voltage': ' -0.17600'},
{'time': '00.000174000000', 'voltage': ' -0.16800'},
{'time': '00.000175000000', 'voltage': ' -0.14400'},
{'time': '00.000176000000', 'voltage': ' -0.14400'},
{'time': '00.000177000000', 'voltage': ' -0.12800'},
{'time': '00.000178000000', 'voltage': ' -0.11200'},
{'time': '00.000179000000', 'voltage': ' -0.11200'},
{'time': '00.000180000000', 'voltage': ' -0.08800'},
{'time': '00.000181000000', 'voltage': ' -0.08800'},
{'time': '00.000182000000', 'voltage': ' -0.08000'},
{'time': '00.000183000000', 'voltage': ' -0.04800'},
{'time': '00.000184000000', 'voltage': ' -0.04800'},
{'time': '00.000185000000', 'voltage': ' -0.04800'},
{'time': '00.000186000000', 'voltage': ' -0.02400'},
{'time': '00.000187000000', 'voltage': ' -0.02400'},
{'time': '00.000188000000', 'voltage': ' -0.01600'},
{'time': '00.000189000000', 'voltage': ' 0.00800'},
{'time': '00.000190000000', 'voltage': ' -0.00800'},
{'time': '00.000191000000', 'voltage': ' 0.00800'},
{'time': '00.000192000000', 'voltage': ' 0.03200'},
{'time': '00.000193000000', 'voltage': ' 0.04000'},
{'time': '00.000194000000', 'voltage': ' 0.04800'},
{'time': '00.000195000000', 'voltage': ' 0.05600'},
{'time': '00.000196000000', 'voltage': ' 0.06400'},
{'time': '00.000197000000', 'voltage': ' 0.08000'},
{'time': '00.000198000000', 'voltage': ' 0.09600'},
{'time': '00.000199000000', 'voltage': ' 0.09600'},
{'time': '00.000200000000', 'voltage': ' 0.12800'},
{'time': '00.000201000000', 'voltage': ' 0.13600'},
{'time': '00.000202000000', 'voltage': ' 0.15200'},
{'time': '00.000203000000', 'voltage': ' 0.15200'},
{'time': '00.000204000000', 'voltage': ' 0.16000'},
{'time': '00.000205000000', 'voltage': ' 0.18400'},
{'time': '00.000206000000', 'voltage': ' 0.17600'},
{'time': '00.000207000000', 'voltage': ' 0.20000'},
{'time': '00.000208000000', 'voltage': ' 0.19200'},
{'time': '00.000209000000', 'voltage': ' 0.20800'},
{'time': '00.000210000000', 'voltage': ' 0.23200'},
{'time': '00.000211000000', 'voltage': ' 0.23200'},
{'time': '00.000212000000', 'voltage': ' 0.24000'},
{'time': '00.000213000000', 'voltage': ' 0.24000'},
{'time': '00.000214000000', 'voltage': ' 0.24800'},
{'time': '00.000215000000', 'voltage': ' 0.25600'},
{'time': '00.000216000000', 'voltage': ' 0.28000'},
{'time': '00.000217000000', 'voltage': ' 0.27200'},
{'time': '00.000218000000', 'voltage': ' 0.28800'},
{'time': '00.000219000000', 'voltage': ' 0.29600'},
{'time': '00.000220000000', 'voltage': ' 0.30400'},
{'time': '00.000221000000', 'voltage': ' 0.32000'},
{'time': '00.000222000000', 'voltage': ' 0.32000'},
{'time': '00.000223000000', 'voltage': ' 0.34400'},
{'time': '00.000224000000', 'voltage': ' 0.35200'},
{'time': '00.000225000000', 'voltage': ' 0.36000'},
{'time': '00.000226000000', 'voltage': ' 0.36800'},
{'time': '00.000227000000', 'voltage': ' 0.37600'},
{'time': '00.000228000000', 'voltage': ' 0.37600'},
{'time': '00.000229000000', 'voltage': ' 0.38400'},
{'time': '00.000230000000', 'voltage': ' 0.40000'},
{'time': '00.000231000000', 'voltage': ' 0.39200'},
{'time': '00.000232000000', 'voltage': ' 0.40800'},
{'time': '00.000233000000', 'voltage': ' 0.40000'},
{'time': '00.000234000000', 'voltage': ' 0.41600'},
{'time': '00.000235000000', 'voltage': ' 0.41600'},
{'time': '00.000236000000', 'voltage': ' 0.41600'},
{'time': '00.000237000000', 'voltage': ' 0.42400'},
{'time': '00.000238000000', 'voltage': ' 0.43200'},
{'time': '00.000239000000', 'voltage': ' 0.44000'},
{'time': '00.000240000000', 'voltage': ' 0.44800'},
{'time': '00.000241000000', 'voltage': ' 0.44800'},
{'time': '00.000242000000', 'voltage': ' 0.45600'},
{'time': '00.000243000000', 'voltage': ' 0.46400'},
{'time': '00.000244000000', 'voltage': ' 0.46400'},
{'time': '00.000245000000', 'voltage': ' 0.48000'},
{'time': '00.000246000000', 'voltage': ' 0.48800'},
{'time': '00.000247000000', 'voltage': ' 0.48000'},
{'time': '00.000248000000', 'voltage': ' 0.50400'},
{'time': '00.000249000000', 'voltage': ' 0.48800'},
{'time': '00.000250000000', 'voltage': ' 0.49600'},
{'time': '00.000251000000', 'voltage': ' 0.51200'},
{'time': '00.000252000000', 'voltage': ' 0.50400'},
{'time': '00.000253000000', 'voltage': ' 0.51200'},
{'time': '00.000254000000', 'voltage': ' 0.51200'},
{'time': '00.000255000000', 'voltage': ' 0.52000'},
{'time': '00.000256000000', 'voltage': ' 0.52800'},
{'time': '00.000257000000', 'voltage': ' 0.52800'},
{'time': '00.000258000000', 'voltage': ' 0.52800'},
{'time': '00.000259000000', 'voltage': ' 0.53600'},
{'time': '00.000260000000', 'voltage': ' 0.52800'},
{'time': '00.000261000000', 'voltage': ' 0.53600'},
{'time': '00.000262000000', 'voltage': ' 0.54400'},
{'time': '00.000263000000', 'voltage': ' 0.54400'},
{'time': '00.000264000000', 'voltage': ' 0.54400'},
{'time': '00.000265000000', 'voltage': ' 0.56000'},
{'time': '00.000266000000', 'voltage': ' 0.56000'},
{'time': '00.000267000000', 'voltage': ' 0.56000'},
{'time': '00.000268000000', 'voltage': ' 0.56000'},
{'time': '00.000269000000', 'voltage': ' 0.57600'},
{'time': '00.000270000000', 'voltage': ' 0.57600'},
{'time': '00.000271000000', 'voltage': ' 0.56800'},
{'time': '00.000272000000', 'voltage': ' 0.56800'},
{'time': '00.000273000000', 'voltage': ' 0.57600'},
{'time': '00.000274000000', 'voltage': ' 0.58400'},
{'time': '00.000275000000', 'voltage': ' 0.56800'},
{'time': '00.000276000000', 'voltage': ' 0.56800'},
{'time': '00.000277000000', 'voltage': ' 0.56800'},
{'time': '00.000278000000', 'voltage': ' 0.57600'},
{'time': '00.000279000000', 'voltage': ' 0.56000'},
{'time': '00.000280000000', 'voltage': ' 0.56800'},
{'time': '00.000281000000', 'voltage': ' 0.56000'},
{'time': '00.000282000000', 'voltage': ' 0.56800'},
{'time': '00.000283000000', 'voltage': ' 0.56000'},
{'time': '00.000284000000', 'voltage': ' 0.56000'},
{'time': '00.000285000000', 'voltage': ' 0.56800'},
{'time': '00.000286000000', 'voltage': ' 0.56000'},
{'time': '00.000287000000', 'voltage': ' 0.56000'},
{'time': '00.000288000000', 'voltage': ' 0.56000'},
{'time': '00.000289000000', 'voltage': ' 0.56800'},
{'time': '00.000290000000', 'voltage': ' 0.56000'},
{'time': '00.000291000000', 'voltage': ' 0.56000'},
{'time': '00.000292000000', 'voltage': ' 0.56000'},
{'time': '00.000293000000', 'voltage': ' 0.56000'},
{'time': '00.000294000000', 'voltage': ' 0.56000'},
{'time': '00.000295000000', 'voltage': ' 0.54400'},
{'time': '00.000296000000', 'voltage': ' 0.55200'},
{'time': '00.000297000000', 'voltage': ' 0.55200'},
{'time': '00.000298000000', 'voltage': ' 0.53600'},
{'time': '00.000299000000', 'voltage': ' 0.55200'},
{'time': '00.000300000000', 'voltage': ' 0.53600'},
{'time': '00.000301000000', 'voltage': ' 0.53600'},
{'time': '00.000302000000', 'voltage': ' 0.52800'},
{'time': '00.000303000000', 'voltage': ' 0.52800'},
{'time': '00.000304000000', 'voltage': ' 0.52800'},
{'time': '00.000305000000', 'voltage': ' 0.52000'},
{'time': '00.000306000000', 'voltage': ' 0.53600'},
{'time': '00.000307000000', 'voltage': ' 0.52000'},
{'time': '00.000308000000', 'voltage': ' 0.52800'},
{'time': '00.000309000000', 'voltage': ' 0.51200'},
{'time': '00.000310000000', 'voltage': ' 0.51200'},
{'time': '00.000311000000', 'voltage': ' 0.51200'},
{'time': '00.000312000000', 'voltage': ' 0.50400'},
{'time': '00.000313000000', 'voltage': ' 0.51200'},
{'time': '00.000314000000', 'voltage': ' 0.50400'},
{'time': '00.000315000000', 'voltage': ' 0.49600'},
{'time': '00.000316000000', 'voltage': ' 0.50400'},
{'time': '00.000317000000', 'voltage': ' 0.49600'},
{'time': '00.000318000000', 'voltage': ' 0.48000'},
{'time': '00.000319000000', 'voltage': ' 0.50400'},
{'time': '00.000320000000', 'voltage': ' 0.48800'},
{'time': '00.000321000000', 'voltage': ' 0.47200'},
[Truncated waveform output: ~1,215 `{'time': ..., 'voltage': ...}` samples at 1 µs spacing, spanning t = 0.000322 s to 0.001536 s. The voltage falls from ≈ 0.464 V through zero near t ≈ 0.000405 s to a minimum of ≈ −0.288 V around t ≈ 0.0005 s, recovers toward 0 V by t ≈ 0.00075 s, then drifts slowly to ≈ −0.07 V with ±0.008 V quantization noise.]
{'time': '00.001537000000', 'voltage': ' -0.07200'},
{'time': '00.001538000000', 'voltage': ' -0.05600'},
{'time': '00.001539000000', 'voltage': ' -0.06400'},
{'time': '00.001540000000', 'voltage': ' -0.06400'},
{'time': '00.001541000000', 'voltage': ' -0.05600'},
{'time': '00.001542000000', 'voltage': ' -0.06400'},
{'time': '00.001543000000', 'voltage': ' -0.06400'},
{'time': '00.001544000000', 'voltage': ' -0.06400'},
{'time': '00.001545000000', 'voltage': ' -0.05600'},
{'time': '00.001546000000', 'voltage': ' -0.06400'},
{'time': '00.001547000000', 'voltage': ' -0.04800'},
{'time': '00.001548000000', 'voltage': ' -0.05600'},
{'time': '00.001549000000', 'voltage': ' -0.06400'},
{'time': '00.001550000000', 'voltage': ' -0.05600'},
{'time': '00.001551000000', 'voltage': ' -0.05600'},
{'time': '00.001552000000', 'voltage': ' -0.06400'},
{'time': '00.001553000000', 'voltage': ' -0.06400'},
{'time': '00.001554000000', 'voltage': ' -0.04800'},
{'time': '00.001555000000', 'voltage': ' -0.06400'},
{'time': '00.001556000000', 'voltage': ' -0.06400'},
{'time': '00.001557000000', 'voltage': ' -0.05600'},
{'time': '00.001558000000', 'voltage': ' -0.06400'},
{'time': '00.001559000000', 'voltage': ' -0.06400'},
{'time': '00.001560000000', 'voltage': ' -0.05600'},
{'time': '00.001561000000', 'voltage': ' -0.07200'},
{'time': '00.001562000000', 'voltage': ' -0.07200'},
{'time': '00.001563000000', 'voltage': ' -0.05600'},
{'time': '00.001564000000', 'voltage': ' -0.05600'},
{'time': '00.001565000000', 'voltage': ' -0.04800'},
{'time': '00.001566000000', 'voltage': ' -0.05600'},
{'time': '00.001567000000', 'voltage': ' -0.06400'},
{'time': '00.001568000000', 'voltage': ' -0.06400'},
{'time': '00.001569000000', 'voltage': ' -0.05600'},
{'time': '00.001570000000', 'voltage': ' -0.06400'},
{'time': '00.001571000000', 'voltage': ' -0.05600'},
{'time': '00.001572000000', 'voltage': ' -0.06400'},
{'time': '00.001573000000', 'voltage': ' -0.06400'},
{'time': '00.001574000000', 'voltage': ' -0.06400'},
{'time': '00.001575000000', 'voltage': ' -0.06400'},
{'time': '00.001576000000', 'voltage': ' -0.06400'},
{'time': '00.001577000000', 'voltage': ' -0.06400'},
{'time': '00.001578000000', 'voltage': ' -0.07200'},
{'time': '00.001579000000', 'voltage': ' -0.05600'},
{'time': '00.001580000000', 'voltage': ' -0.06400'},
{'time': '00.001581000000', 'voltage': ' -0.05600'},
{'time': '00.001582000000', 'voltage': ' -0.04800'},
{'time': '00.001583000000', 'voltage': ' -0.07200'},
{'time': '00.001584000000', 'voltage': ' -0.06400'},
{'time': '00.001585000000', 'voltage': ' -0.05600'},
{'time': '00.001586000000', 'voltage': ' -0.07200'},
{'time': '00.001587000000', 'voltage': ' -0.06400'},
{'time': '00.001588000000', 'voltage': ' -0.04800'},
{'time': '00.001589000000', 'voltage': ' -0.05600'},
{'time': '00.001590000000', 'voltage': ' -0.06400'},
{'time': '00.001591000000', 'voltage': ' -0.05600'},
{'time': '00.001592000000', 'voltage': ' -0.05600'},
{'time': '00.001593000000', 'voltage': ' -0.04800'},
{'time': '00.001594000000', 'voltage': ' -0.06400'},
{'time': '00.001595000000', 'voltage': ' -0.04800'},
{'time': '00.001596000000', 'voltage': ' -0.05600'},
{'time': '00.001597000000', 'voltage': ' -0.06400'},
{'time': '00.001598000000', 'voltage': ' -0.06400'},
{'time': '00.001599000000', 'voltage': ' -0.04800'},
{'time': '00.001600000000', 'voltage': ' -0.05600'},
{'time': '00.001601000000', 'voltage': ' -0.05600'},
{'time': '00.001602000000', 'voltage': ' -0.06400'},
{'time': '00.001603000000', 'voltage': ' -0.06400'},
{'time': '00.001604000000', 'voltage': ' -0.04800'},
{'time': '00.001605000000', 'voltage': ' -0.05600'},
{'time': '00.001606000000', 'voltage': ' -0.04800'},
{'time': '00.001607000000', 'voltage': ' -0.05600'},
{'time': '00.001608000000', 'voltage': ' -0.06400'},
{'time': '00.001609000000', 'voltage': ' -0.05600'},
{'time': '00.001610000000', 'voltage': ' -0.05600'},
{'time': '00.001611000000', 'voltage': ' -0.04800'},
{'time': '00.001612000000', 'voltage': ' -0.04800'},
{'time': '00.001613000000', 'voltage': ' -0.04800'},
{'time': '00.001614000000', 'voltage': ' -0.05600'},
{'time': '00.001615000000', 'voltage': ' -0.05600'},
{'time': '00.001616000000', 'voltage': ' -0.04000'},
{'time': '00.001617000000', 'voltage': ' -0.04800'},
{'time': '00.001618000000', 'voltage': ' -0.06400'},
{'time': '00.001619000000', 'voltage': ' -0.05600'},
{'time': '00.001620000000', 'voltage': ' -0.05600'},
{'time': '00.001621000000', 'voltage': ' -0.07200'},
{'time': '00.001622000000', 'voltage': ' -0.05600'},
{'time': '00.001623000000', 'voltage': ' -0.04800'},
{'time': '00.001624000000', 'voltage': ' -0.06400'},
{'time': '00.001625000000', 'voltage': ' -0.05600'},
{'time': '00.001626000000', 'voltage': ' -0.05600'},
{'time': '00.001627000000', 'voltage': ' -0.06400'},
{'time': '00.001628000000', 'voltage': ' -0.06400'},
{'time': '00.001629000000', 'voltage': ' -0.06400'},
{'time': '00.001630000000', 'voltage': ' -0.07200'},
{'time': '00.001631000000', 'voltage': ' -0.04800'},
{'time': '00.001632000000', 'voltage': ' -0.04000'},
{'time': '00.001633000000', 'voltage': ' -0.05600'},
{'time': '00.001634000000', 'voltage': ' -0.05600'},
{'time': '00.001635000000', 'voltage': ' -0.05600'},
{'time': '00.001636000000', 'voltage': ' -0.04800'},
{'time': '00.001637000000', 'voltage': ' -0.04000'},
{'time': '00.001638000000', 'voltage': ' -0.04800'},
{'time': '00.001639000000', 'voltage': ' -0.04800'},
{'time': '00.001640000000', 'voltage': ' -0.04800'},
{'time': '00.001641000000', 'voltage': ' -0.05600'},
{'time': '00.001642000000', 'voltage': ' -0.06400'},
{'time': '00.001643000000', 'voltage': ' -0.04800'},
{'time': '00.001644000000', 'voltage': ' -0.04800'},
{'time': '00.001645000000', 'voltage': ' -0.04800'},
{'time': '00.001646000000', 'voltage': ' -0.06400'},
{'time': '00.001647000000', 'voltage': ' -0.05600'},
{'time': '00.001648000000', 'voltage': ' -0.05600'},
{'time': '00.001649000000', 'voltage': ' -0.05600'},
{'time': '00.001650000000', 'voltage': ' -0.03200'},
{'time': '00.001651000000', 'voltage': ' -0.04000'},
{'time': '00.001652000000', 'voltage': ' -0.04800'},
{'time': '00.001653000000', 'voltage': ' -0.04800'},
{'time': '00.001654000000', 'voltage': ' -0.04800'},
{'time': '00.001655000000', 'voltage': ' -0.04000'},
{'time': '00.001656000000', 'voltage': ' -0.04000'},
{'time': '00.001657000000', 'voltage': ' -0.04000'},
{'time': '00.001658000000', 'voltage': ' -0.05600'},
{'time': '00.001659000000', 'voltage': ' -0.04000'},
{'time': '00.001660000000', 'voltage': ' -0.04800'},
{'time': '00.001661000000', 'voltage': ' -0.04000'},
{'time': '00.001662000000', 'voltage': ' -0.04800'},
{'time': '00.001663000000', 'voltage': ' -0.04000'},
{'time': '00.001664000000', 'voltage': ' -0.04800'},
{'time': '00.001665000000', 'voltage': ' -0.05600'},
{'time': '00.001666000000', 'voltage': ' -0.06400'},
{'time': '00.001667000000', 'voltage': ' -0.04800'},
{'time': '00.001668000000', 'voltage': ' -0.04800'},
{'time': '00.001669000000', 'voltage': ' -0.05600'},
{'time': '00.001670000000', 'voltage': ' -0.04800'},
{'time': '00.001671000000', 'voltage': ' -0.04800'},
{'time': '00.001672000000', 'voltage': ' -0.05600'},
{'time': '00.001673000000', 'voltage': ' -0.05600'},
{'time': '00.001674000000', 'voltage': ' -0.04800'},
{'time': '00.001675000000', 'voltage': ' -0.04000'},
{'time': '00.001676000000', 'voltage': ' -0.05600'},
{'time': '00.001677000000', 'voltage': ' -0.04800'},
{'time': '00.001678000000', 'voltage': ' -0.04000'},
{'time': '00.001679000000', 'voltage': ' -0.04000'},
{'time': '00.001680000000', 'voltage': ' -0.04800'},
{'time': '00.001681000000', 'voltage': ' -0.04000'},
{'time': '00.001682000000', 'voltage': ' -0.04000'},
{'time': '00.001683000000', 'voltage': ' -0.05600'},
{'time': '00.001684000000', 'voltage': ' -0.04800'},
{'time': '00.001685000000', 'voltage': ' -0.04000'},
{'time': '00.001686000000', 'voltage': ' -0.06400'},
{'time': '00.001687000000', 'voltage': ' -0.06400'},
{'time': '00.001688000000', 'voltage': ' -0.04800'},
{'time': '00.001689000000', 'voltage': ' -0.06400'},
{'time': '00.001690000000', 'voltage': ' -0.05600'},
{'time': '00.001691000000', 'voltage': ' -0.04000'},
{'time': '00.001692000000', 'voltage': ' -0.05600'},
{'time': '00.001693000000', 'voltage': ' -0.04800'},
{'time': '00.001694000000', 'voltage': ' -0.04800'},
{'time': '00.001695000000', 'voltage': ' -0.05600'},
{'time': '00.001696000000', 'voltage': ' -0.04000'},
{'time': '00.001697000000', 'voltage': ' -0.04800'},
{'time': '00.001698000000', 'voltage': ' -0.04000'},
{'time': '00.001699000000', 'voltage': ' -0.04800'},
{'time': '00.001700000000', 'voltage': ' -0.04000'},
{'time': '00.001701000000', 'voltage': ' -0.04000'},
{'time': '00.001702000000', 'voltage': ' -0.03200'},
{'time': '00.001703000000', 'voltage': ' -0.04800'},
{'time': '00.001704000000', 'voltage': ' -0.04000'},
{'time': '00.001705000000', 'voltage': ' -0.04800'},
{'time': '00.001706000000', 'voltage': ' -0.04000'},
{'time': '00.001707000000', 'voltage': ' -0.04800'},
{'time': '00.001708000000', 'voltage': ' -0.04800'},
{'time': '00.001709000000', 'voltage': ' -0.04800'},
{'time': '00.001710000000', 'voltage': ' -0.04800'},
{'time': '00.001711000000', 'voltage': ' -0.05600'},
{'time': '00.001712000000', 'voltage': ' -0.04000'},
{'time': '00.001713000000', 'voltage': ' -0.05600'},
{'time': '00.001714000000', 'voltage': ' -0.04800'},
{'time': '00.001715000000', 'voltage': ' -0.04800'},
{'time': '00.001716000000', 'voltage': ' -0.03200'},
{'time': '00.001717000000', 'voltage': ' -0.05600'},
{'time': '00.001718000000', 'voltage': ' -0.03200'},
{'time': '00.001719000000', 'voltage': ' -0.03200'},
{'time': '00.001720000000', 'voltage': ' -0.03200'},
{'time': '00.001721000000', 'voltage': ' -0.03200'},
{'time': '00.001722000000', 'voltage': ' -0.04000'},
{'time': '00.001723000000', 'voltage': ' -0.03200'},
{'time': '00.001724000000', 'voltage': ' -0.04000'},
{'time': '00.001725000000', 'voltage': ' -0.04000'},
{'time': '00.001726000000', 'voltage': ' -0.04000'},
{'time': '00.001727000000', 'voltage': ' -0.04000'},
{'time': '00.001728000000', 'voltage': ' -0.04800'},
{'time': '00.001729000000', 'voltage': ' -0.05600'},
{'time': '00.001730000000', 'voltage': ' -0.04000'},
{'time': '00.001731000000', 'voltage': ' -0.04000'},
{'time': '00.001732000000', 'voltage': ' -0.05600'},
{'time': '00.001733000000', 'voltage': ' -0.05600'},
{'time': '00.001734000000', 'voltage': ' -0.04800'},
{'time': '00.001735000000', 'voltage': ' -0.04800'},
{'time': '00.001736000000', 'voltage': ' -0.04800'},
{'time': '00.001737000000', 'voltage': ' -0.04800'},
{'time': '00.001738000000', 'voltage': ' -0.04000'},
{'time': '00.001739000000', 'voltage': ' -0.04000'},
{'time': '00.001740000000', 'voltage': ' -0.04000'},
{'time': '00.001741000000', 'voltage': ' -0.04800'},
{'time': '00.001742000000', 'voltage': ' -0.04000'},
{'time': '00.001743000000', 'voltage': ' -0.04000'},
{'time': '00.001744000000', 'voltage': ' -0.04800'},
{'time': '00.001745000000', 'voltage': ' -0.04800'},
{'time': '00.001746000000', 'voltage': ' -0.04000'},
{'time': '00.001747000000', 'voltage': ' -0.03200'},
{'time': '00.001748000000', 'voltage': ' -0.04000'},
{'time': '00.001749000000', 'voltage': ' -0.04800'},
{'time': '00.001750000000', 'voltage': ' -0.04000'},
{'time': '00.001751000000', 'voltage': ' -0.04000'},
{'time': '00.001752000000', 'voltage': ' -0.04800'},
{'time': '00.001753000000', 'voltage': ' -0.03200'},
{'time': '00.001754000000', 'voltage': ' -0.05600'},
{'time': '00.001755000000', 'voltage': ' -0.04800'},
{'time': '00.001756000000', 'voltage': ' -0.05600'},
{'time': '00.001757000000', 'voltage': ' -0.03200'},
{'time': '00.001758000000', 'voltage': ' -0.03200'},
{'time': '00.001759000000', 'voltage': ' -0.04000'},
{'time': '00.001760000000', 'voltage': ' -0.04000'},
{'time': '00.001761000000', 'voltage': ' -0.03200'},
{'time': '00.001762000000', 'voltage': ' -0.04800'},
{'time': '00.001763000000', 'voltage': ' -0.04000'},
{'time': '00.001764000000', 'voltage': ' -0.02400'},
{'time': '00.001765000000', 'voltage': ' -0.03200'},
{'time': '00.001766000000', 'voltage': ' -0.02400'},
{'time': '00.001767000000', 'voltage': ' -0.03200'},
{'time': '00.001768000000', 'voltage': ' -0.05600'},
{'time': '00.001769000000', 'voltage': ' -0.03200'},
{'time': '00.001770000000', 'voltage': ' -0.03200'},
{'time': '00.001771000000', 'voltage': ' -0.04000'},
{'time': '00.001772000000', 'voltage': ' -0.04000'},
{'time': '00.001773000000', 'voltage': ' -0.04800'},
{'time': '00.001774000000', 'voltage': ' -0.04000'},
{'time': '00.001775000000', 'voltage': ' -0.03200'},
{'time': '00.001776000000', 'voltage': ' -0.04800'},
{'time': '00.001777000000', 'voltage': ' -0.03200'},
{'time': '00.001778000000', 'voltage': ' -0.04000'},
{'time': '00.001779000000', 'voltage': ' -0.04800'},
{'time': '00.001780000000', 'voltage': ' -0.04800'},
{'time': '00.001781000000', 'voltage': ' -0.03200'},
{'time': '00.001782000000', 'voltage': ' -0.04000'},
{'time': '00.001783000000', 'voltage': ' -0.04800'},
{'time': '00.001784000000', 'voltage': ' -0.04000'},
{'time': '00.001785000000', 'voltage': ' -0.04800'},
{'time': '00.001786000000', 'voltage': ' -0.04800'},
{'time': '00.001787000000', 'voltage': ' -0.03200'},
{'time': '00.001788000000', 'voltage': ' -0.04000'},
{'time': '00.001789000000', 'voltage': ' -0.04000'},
{'time': '00.001790000000', 'voltage': ' -0.04800'},
{'time': '00.001791000000', 'voltage': ' -0.04000'},
{'time': '00.001792000000', 'voltage': ' -0.04000'},
{'time': '00.001793000000', 'voltage': ' -0.03200'},
{'time': '00.001794000000', 'voltage': ' -0.04000'},
{'time': '00.001795000000', 'voltage': ' -0.04000'},
{'time': '00.001796000000', 'voltage': ' -0.04800'},
{'time': '00.001797000000', 'voltage': ' -0.04000'},
{'time': '00.001798000000', 'voltage': ' -0.04000'},
{'time': '00.001799000000', 'voltage': ' -0.04000'},
{'time': '00.001800000000', 'voltage': ' -0.04000'},
{'time': '00.001801000000', 'voltage': ' -0.04000'},
{'time': '00.001802000000', 'voltage': ' -0.03200'},
{'time': '00.001803000000', 'voltage': ' -0.03200'},
{'time': '00.001804000000', 'voltage': ' -0.04000'},
{'time': '00.001805000000', 'voltage': ' -0.03200'},
{'time': '00.001806000000', 'voltage': ' -0.03200'},
{'time': '00.001807000000', 'voltage': ' -0.04000'},
{'time': '00.001808000000', 'voltage': ' -0.03200'},
{'time': '00.001809000000', 'voltage': ' -0.03200'},
{'time': '00.001810000000', 'voltage': ' -0.04000'},
{'time': '00.001811000000', 'voltage': ' -0.04000'},
{'time': '00.001812000000', 'voltage': ' -0.01600'},
{'time': '00.001813000000', 'voltage': ' -0.04000'},
{'time': '00.001814000000', 'voltage': ' -0.04000'},
{'time': '00.001815000000', 'voltage': ' -0.02400'},
{'time': '00.001816000000', 'voltage': ' -0.03200'},
{'time': '00.001817000000', 'voltage': ' -0.04000'},
{'time': '00.001818000000', 'voltage': ' -0.02400'},
{'time': '00.001819000000', 'voltage': ' -0.03200'},
{'time': '00.001820000000', 'voltage': ' -0.03200'},
{'time': '00.001821000000', 'voltage': ' -0.03200'},
{'time': '00.001822000000', 'voltage': ' -0.04000'},
{'time': '00.001823000000', 'voltage': ' -0.04000'},
{'time': '00.001824000000', 'voltage': ' -0.03200'},
{'time': '00.001825000000', 'voltage': ' -0.04000'},
{'time': '00.001826000000', 'voltage': ' -0.04000'},
{'time': '00.001827000000', 'voltage': ' -0.03200'},
{'time': '00.001828000000', 'voltage': ' -0.03200'},
{'time': '00.001829000000', 'voltage': ' -0.04000'},
{'time': '00.001830000000', 'voltage': ' -0.04000'},
{'time': '00.001831000000', 'voltage': ' -0.04000'},
{'time': '00.001832000000', 'voltage': ' -0.02400'},
{'time': '00.001833000000', 'voltage': ' -0.04000'},
{'time': '00.001834000000', 'voltage': ' -0.03200'},
{'time': '00.001835000000', 'voltage': ' -0.03200'},
{'time': '00.001836000000', 'voltage': ' -0.03200'},
{'time': '00.001837000000', 'voltage': ' -0.02400'},
{'time': '00.001838000000', 'voltage': ' -0.04000'},
{'time': '00.001839000000', 'voltage': ' -0.03200'},
{'time': '00.001840000000', 'voltage': ' -0.04000'},
{'time': '00.001841000000', 'voltage': ' -0.04800'},
{'time': '00.001842000000', 'voltage': ' -0.04000'},
{'time': '00.001843000000', 'voltage': ' -0.04000'},
{'time': '00.001844000000', 'voltage': ' -0.04000'},
{'time': '00.001845000000', 'voltage': ' -0.04800'},
{'time': '00.001846000000', 'voltage': ' -0.04000'},
{'time': '00.001847000000', 'voltage': ' -0.03200'},
{'time': '00.001848000000', 'voltage': ' -0.04000'},
{'time': '00.001849000000', 'voltage': ' -0.03200'},
{'time': '00.001850000000', 'voltage': ' -0.04000'},
{'time': '00.001851000000', 'voltage': ' -0.03200'},
{'time': '00.001852000000', 'voltage': ' -0.03200'},
{'time': '00.001853000000', 'voltage': ' -0.03200'},
{'time': '00.001854000000', 'voltage': ' -0.04000'},
{'time': '00.001855000000', 'voltage': ' -0.03200'},
{'time': '00.001856000000', 'voltage': ' -0.02400'},
{'time': '00.001857000000', 'voltage': ' -0.03200'},
{'time': '00.001858000000', 'voltage': ' -0.03200'},
{'time': '00.001859000000', 'voltage': ' -0.03200'},
{'time': '00.001860000000', 'voltage': ' -0.03200'},
{'time': '00.001861000000', 'voltage': ' -0.03200'},
{'time': '00.001862000000', 'voltage': ' -0.04000'},
{'time': '00.001863000000', 'voltage': ' -0.02400'},
{'time': '00.001864000000', 'voltage': ' -0.04800'},
{'time': '00.001865000000', 'voltage': ' -0.03200'},
{'time': '00.001866000000', 'voltage': ' -0.04000'},
{'time': '00.001867000000', 'voltage': ' -0.03200'},
{'time': '00.001868000000', 'voltage': ' -0.03200'},
{'time': '00.001869000000', 'voltage': ' -0.04800'},
{'time': '00.001870000000', 'voltage': ' -0.02400'},
{'time': '00.001871000000', 'voltage': ' -0.04000'},
{'time': '00.001872000000', 'voltage': ' -0.02400'},
{'time': '00.001873000000', 'voltage': ' -0.02400'},
{'time': '00.001874000000', 'voltage': ' -0.02400'},
{'time': '00.001875000000', 'voltage': ' -0.03200'},
{'time': '00.001876000000', 'voltage': ' -0.03200'},
{'time': '00.001877000000', 'voltage': ' -0.01600'},
{'time': '00.001878000000', 'voltage': ' -0.03200'},
{'time': '00.001879000000', 'voltage': ' -0.02400'},
{'time': '00.001880000000', 'voltage': ' -0.02400'},
{'time': '00.001881000000', 'voltage': ' -0.03200'},
{'time': '00.001882000000', 'voltage': ' -0.04800'},
{'time': '00.001883000000', 'voltage': ' -0.00800'},
{'time': '00.001884000000', 'voltage': ' -0.03200'},
{'time': '00.001885000000', 'voltage': ' -0.04000'},
{'time': '00.001886000000', 'voltage': ' -0.04800'},
{'time': '00.001887000000', 'voltage': ' -0.04000'},
{'time': '00.001888000000', 'voltage': ' -0.03200'},
{'time': '00.001889000000', 'voltage': ' -0.04800'},
{'time': '00.001890000000', 'voltage': ' -0.03200'},
{'time': '00.001891000000', 'voltage': ' -0.03200'},
{'time': '00.001892000000', 'voltage': ' -0.04000'},
{'time': '00.001893000000', 'voltage': ' -0.04000'},
{'time': '00.001894000000', 'voltage': ' -0.03200'},
{'time': '00.001895000000', 'voltage': ' -0.04000'},
{'time': '00.001896000000', 'voltage': ' -0.02400'},
{'time': '00.001897000000', 'voltage': ' -0.03200'},
{'time': '00.001898000000', 'voltage': ' -0.03200'},
{'time': '00.001899000000', 'voltage': ' -0.02400'},
{'time': '00.001900000000', 'voltage': ' -0.02400'},
{'time': '00.001901000000', 'voltage': ' -0.03200'},
{'time': '00.001902000000', 'voltage': ' -0.02400'},
{'time': '00.001903000000', 'voltage': ' -0.03200'},
{'time': '00.001904000000', 'voltage': ' -0.03200'},
{'time': '00.001905000000', 'voltage': ' -0.04000'},
{'time': '00.001906000000', 'voltage': ' -0.03200'},
{'time': '00.001907000000', 'voltage': ' -0.03200'},
{'time': '00.001908000000', 'voltage': ' -0.03200'},
{'time': '00.001909000000', 'voltage': ' -0.04000'},
{'time': '00.001910000000', 'voltage': ' -0.04000'},
{'time': '00.001911000000', 'voltage': ' -0.03200'},
{'time': '00.001912000000', 'voltage': ' -0.03200'},
{'time': '00.001913000000', 'voltage': ' -0.03200'},
{'time': '00.001914000000', 'voltage': ' -0.04000'},
{'time': '00.001915000000', 'voltage': ' -0.02400'},
{'time': '00.001916000000', 'voltage': ' -0.04000'},
{'time': '00.001917000000', 'voltage': ' -0.03200'},
{'time': '00.001918000000', 'voltage': ' -0.01600'},
{'time': '00.001919000000', 'voltage': ' -0.01600'},
{'time': '00.001920000000', 'voltage': ' -0.01600'},
{'time': '00.001921000000', 'voltage': ' -0.03200'},
{'time': '00.001922000000', 'voltage': ' -0.02400'},
{'time': '00.001923000000', 'voltage': ' -0.03200'},
{'time': '00.001924000000', 'voltage': ' -0.01600'},
{'time': '00.001925000000', 'voltage': ' -0.03200'},
{'time': '00.001926000000', 'voltage': ' -0.02400'},
{'time': '00.001927000000', 'voltage': ' -0.03200'},
{'time': '00.001928000000', 'voltage': ' -0.03200'},
{'time': '00.001929000000', 'voltage': ' -0.04000'},
{'time': '00.001930000000', 'voltage': ' -0.03200'},
{'time': '00.001931000000', 'voltage': ' -0.03200'},
{'time': '00.001932000000', 'voltage': ' -0.03200'},
{'time': '00.001933000000', 'voltage': ' -0.03200'},
{'time': '00.001934000000', 'voltage': ' -0.03200'},
{'time': '00.001935000000', 'voltage': ' -0.03200'},
{'time': '00.001936000000', 'voltage': ' -0.02400'},
{'time': '00.001937000000', 'voltage': ' -0.02400'},
{'time': '00.001938000000', 'voltage': ' -0.02400'},
{'time': '00.001939000000', 'voltage': ' -0.01600'},
{'time': '00.001940000000', 'voltage': ' -0.02400'},
{'time': '00.001941000000', 'voltage': ' -0.04000'},
{'time': '00.001942000000', 'voltage': ' -0.02400'},
{'time': '00.001943000000', 'voltage': ' -0.01600'},
{'time': '00.001944000000', 'voltage': ' -0.04000'},
{'time': '00.001945000000', 'voltage': ' -0.04000'},
{'time': '00.001946000000', 'voltage': ' -0.02400'},
{'time': '00.001947000000', 'voltage': ' -0.03200'},
{'time': '00.001948000000', 'voltage': ' -0.04000'},
{'time': '00.001949000000', 'voltage': ' -0.04000'},
{'time': '00.001950000000', 'voltage': ' -0.04800'},
{'time': '00.001951000000', 'voltage': ' -0.03200'},
{'time': '00.001952000000', 'voltage': ' -0.03200'},
{'time': '00.001953000000', 'voltage': ' -0.03200'},
{'time': '00.001954000000', 'voltage': ' -0.03200'},
{'time': '00.001955000000', 'voltage': ' -0.04000'},
{'time': '00.001956000000', 'voltage': ' -0.03200'},
{'time': '00.001957000000', 'voltage': ' -0.03200'},
{'time': '00.001958000000', 'voltage': ' -0.03200'},
{'time': '00.001959000000', 'voltage': ' -0.02400'},
{'time': '00.001960000000', 'voltage': ' -0.02400'},
{'time': '00.001961000000', 'voltage': ' -0.03200'},
{'time': '00.001962000000', 'voltage': ' -0.01600'},
{'time': '00.001963000000', 'voltage': ' -0.02400'},
{'time': '00.001964000000', 'voltage': ' -0.02400'},
{'time': '00.001965000000', 'voltage': ' -0.02400'},
{'time': '00.001966000000', 'voltage': ' -0.02400'},
{'time': '00.001967000000', 'voltage': ' -0.03200'},
{'time': '00.001968000000', 'voltage': ' -0.03200'},
{'time': '00.001969000000', 'voltage': ' -0.03200'},
{'time': '00.001970000000', 'voltage': ' -0.01600'},
{'time': '00.001971000000', 'voltage': ' -0.03200'},
{'time': '00.001972000000', 'voltage': ' -0.03200'},
{'time': '00.001973000000', 'voltage': ' -0.04000'},
{'time': '00.001974000000', 'voltage': ' -0.03200'},
{'time': '00.001975000000', 'voltage': ' -0.03200'},
{'time': '00.001976000000', 'voltage': ' -0.03200'},
{'time': '00.001977000000', 'voltage': ' -0.03200'},
{'time': '00.001978000000', 'voltage': ' -0.03200'},
{'time': '00.001979000000', 'voltage': ' -0.02400'},
{'time': '00.001980000000', 'voltage': ' -0.04000'},
{'time': '00.001981000000', 'voltage': ' -0.03200'},
{'time': '00.001982000000', 'voltage': ' -0.02400'},
{'time': '00.001983000000', 'voltage': ' -0.01600'},
{'time': '00.001984000000', 'voltage': ' -0.02400'},
{'time': '00.001985000000', 'voltage': ' -0.02400'},
{'time': '00.001986000000', 'voltage': ' -0.02400'},
{'time': '00.001987000000', 'voltage': ' -0.02400'},
{'time': '00.001988000000', 'voltage': ' -0.03200'},
{'time': '00.001989000000', 'voltage': ' -0.02400'},
{'time': '00.001990000000', 'voltage': ' -0.02400'},
{'time': '00.001991000000', 'voltage': ' -0.03200'},
{'time': '00.001992000000', 'voltage': ' -0.03200'},
{'time': '00.001993000000', 'voltage': ' -0.02400'},
{'time': '00.001994000000', 'voltage': ' -0.03200'},
{'time': '00.001995000000', 'voltage': ' -0.02400'},
{'time': '00.001996000000', 'voltage': ' -0.04000'},
{'time': '00.001997000000', 'voltage': ' -0.03200'},
{'time': '00.001998000000', 'voltage': ' -0.02400'},
{'time': '00.001999000000', 'voltage': ' -0.03200'},
{'time': '00.002000000000', 'voltage': ' -0.03200'},
{'time': '00.002001000000', 'voltage': ' -0.04000'},
{'time': '00.002002000000', 'voltage': ' -0.02400'},
{'time': '00.002003000000', 'voltage': ' -0.03200'},
{'time': '00.002004000000', 'voltage': ' -0.02400'},
{'time': '00.002005000000', 'voltage': ' -0.02400'},
{'time': '00.002006000000', 'voltage': ' -0.03200'},
{'time': '00.002007000000', 'voltage': ' -0.01600'},
{'time': '00.002008000000', 'voltage': ' -0.02400'},
{'time': '00.002009000000', 'voltage': ' -0.03200'},
{'time': '00.002010000000', 'voltage': ' -0.03200'},
{'time': '00.002011000000', 'voltage': ' -0.02400'},
{'time': '00.002012000000', 'voltage': ' -0.03200'},
{'time': '00.002013000000', 'voltage': ' -0.02400'},
{'time': '00.002014000000', 'voltage': ' -0.02400'},
{'time': '00.002015000000', 'voltage': ' -0.04800'},
{'time': '00.002016000000', 'voltage': ' -0.02400'},
{'time': '00.002017000000', 'voltage': ' -0.04000'},
{'time': '00.002018000000', 'voltage': ' -0.04000'},
{'time': '00.002019000000', 'voltage': ' -0.04000'},
{'time': '00.002020000000', 'voltage': ' -0.02400'},
{'time': '00.002021000000', 'voltage': ' -0.03200'},
{'time': '00.002022000000', 'voltage': ' -0.01600'},
{'time': '00.002023000000', 'voltage': ' -0.04000'},
{'time': '00.002024000000', 'voltage': ' -0.02400'},
{'time': '00.002025000000', 'voltage': ' -0.03200'},
{'time': '00.002026000000', 'voltage': ' -0.02400'},
{'time': '00.002027000000', 'voltage': ' -0.02400'},
{'time': '00.002028000000', 'voltage': ' -0.02400'},
{'time': '00.002029000000', 'voltage': ' -0.02400'},
{'time': '00.002030000000', 'voltage': ' -0.02400'},
{'time': '00.002031000000', 'voltage': ' -0.02400'},
{'time': '00.002032000000', 'voltage': ' -0.01600'},
{'time': '00.002033000000', 'voltage': ' -0.03200'},
{'time': '00.002034000000', 'voltage': ' -0.02400'},
{'time': '00.002035000000', 'voltage': ' -0.02400'},
{'time': '00.002036000000', 'voltage': ' -0.01600'},
{'time': '00.002037000000', 'voltage': ' -0.04000'},
{'time': '00.002038000000', 'voltage': ' -0.02400'},
{'time': '00.002039000000', 'voltage': ' -0.03200'},
{'time': '00.002040000000', 'voltage': ' -0.03200'},
{'time': '00.002041000000', 'voltage': ' -0.02400'},
{'time': '00.002042000000', 'voltage': ' -0.04000'},
{'time': '00.002043000000', 'voltage': ' -0.04000'},
{'time': '00.002044000000', 'voltage': ' -0.03200'},
{'time': '00.002045000000', 'voltage': ' -0.03200'},
{'time': '00.002046000000', 'voltage': ' -0.02400'},
{'time': '00.002047000000', 'voltage': ' -0.04800'},
{'time': '00.002048000000', 'voltage': ' -0.02400'},
{'time': '00.002049000000', 'voltage': ' -0.04000'},
{'time': '00.002050000000', 'voltage': ' -0.02400'},
{'time': '00.002051000000', 'voltage': ' -0.02400'},
{'time': '00.002052000000', 'voltage': ' -0.02400'},
{'time': '00.002053000000', 'voltage': ' -0.01600'},
{'time': '00.002054000000', 'voltage': ' -0.02400'},
{'time': '00.002055000000', 'voltage': ' -0.01600'},
{'time': '00.002056000000', 'voltage': ' -0.01600'},
{'time': '00.002057000000', 'voltage': ' -0.02400'},
{'time': '00.002058000000', 'voltage': ' -0.04000'},
{'time': '00.002059000000', 'voltage': ' -0.03200'},
{'time': '00.002060000000', 'voltage': ' -0.03200'},
{'time': '00.002061000000', 'voltage': ' -0.03200'},
{'time': '00.002062000000', 'voltage': ' -0.03200'},
{'time': '00.002063000000', 'voltage': ' -0.04000'},
{'time': '00.002064000000', 'voltage': ' -0.03200'},
{'time': '00.002065000000', 'voltage': ' -0.04000'},
{'time': '00.002066000000', 'voltage': ' -0.03200'},
{'time': '00.002067000000', 'voltage': ' -0.03200'},
{'time': '00.002068000000', 'voltage': ' -0.02400'},
{'time': '00.002069000000', 'voltage': ' -0.02400'},
{'time': '00.002070000000', 'voltage': ' -0.02400'},
{'time': '00.002071000000', 'voltage': ' -0.03200'},
{'time': '00.002072000000', 'voltage': ' -0.02400'},
{'time': '00.002073000000', 'voltage': ' -0.03200'},
{'time': '00.002074000000', 'voltage': ' -0.04000'},
{'time': '00.002075000000', 'voltage': ' -0.02400'},
{'time': '00.002076000000', 'voltage': ' -0.01600'},
{'time': '00.002077000000', 'voltage': ' -0.02400'},
{'time': '00.002078000000', 'voltage': ' -0.01600'},
{'time': '00.002079000000', 'voltage': ' -0.04000'},
{'time': '00.002080000000', 'voltage': ' -0.02400'},
{'time': '00.002081000000', 'voltage': ' -0.03200'},
{'time': '00.002082000000', 'voltage': ' -0.01600'},
{'time': '00.002083000000', 'voltage': ' -0.04000'},
{'time': '00.002084000000', 'voltage': ' -0.01600'},
{'time': '00.002085000000', 'voltage': ' -0.02400'},
{'time': '00.002086000000', 'voltage': ' -0.03200'},
{'time': '00.002087000000', 'voltage': ' -0.04000'},
{'time': '00.002088000000', 'voltage': ' -0.03200'},
{'time': '00.002089000000', 'voltage': ' -0.03200'},
{'time': '00.002090000000', 'voltage': ' -0.02400'},
{'time': '00.002091000000', 'voltage': ' -0.02400'},
{'time': '00.002092000000', 'voltage': ' -0.01600'},
{'time': '00.002093000000', 'voltage': ' -0.02400'},
{'time': '00.002094000000', 'voltage': ' -0.03200'},
{'time': '00.002095000000', 'voltage': ' -0.02400'},
{'time': '00.002096000000', 'voltage': ' -0.02400'},
{'time': '00.002097000000', 'voltage': ' -0.02400'},
{'time': '00.002098000000', 'voltage': ' -0.02400'},
{'time': '00.002099000000', 'voltage': ' -0.04000'},
{'time': '00.002100000000', 'voltage': ' -0.01600'},
{'time': '00.002101000000', 'voltage': ' -0.02400'},
{'time': '00.002102000000', 'voltage': ' -0.02400'},
{'time': '00.002103000000', 'voltage': ' -0.03200'},
{'time': '00.002104000000', 'voltage': ' -0.03200'},
{'time': '00.002105000000', 'voltage': ' -0.03200'},
{'time': '00.002106000000', 'voltage': ' -0.04000'},
{'time': '00.002107000000', 'voltage': ' -0.03200'},
{'time': '00.002108000000', 'voltage': ' -0.04000'},
{'time': '00.002109000000', 'voltage': ' -0.04000'},
{'time': '00.002110000000', 'voltage': ' -0.04000'},
{'time': '00.002111000000', 'voltage': ' -0.04000'},
{'time': '00.002112000000', 'voltage': ' -0.02400'},
{'time': '00.002113000000', 'voltage': ' -0.03200'},
{'time': '00.002114000000', 'voltage': ' -0.02400'},
{'time': '00.002115000000', 'voltage': ' -0.02400'},
{'time': '00.002116000000', 'voltage': ' -0.03200'},
{'time': '00.002117000000', 'voltage': ' -0.02400'},
{'time': '00.002118000000', 'voltage': ' -0.02400'},
{'time': '00.002119000000', 'voltage': ' -0.02400'},
{'time': '00.002120000000', 'voltage': ' -0.01600'},
{'time': '00.002121000000', 'voltage': ' -0.03200'},
{'time': '00.002122000000', 'voltage': ' -0.01600'},
{'time': '00.002123000000', 'voltage': ' -0.02400'},
{'time': '00.002124000000', 'voltage': ' -0.02400'},
{'time': '00.002125000000', 'voltage': ' -0.02400'},
{'time': '00.002126000000', 'voltage': ' -0.04000'},
{'time': '00.002127000000', 'voltage': ' -0.04000'},
{'time': '00.002128000000', 'voltage': ' -0.03200'},
{'time': '00.002129000000', 'voltage': ' -0.03200'},
{'time': '00.002130000000', 'voltage': ' -0.03200'},
{'time': '00.002131000000', 'voltage': ' -0.03200'},
{'time': '00.002132000000', 'voltage': ' -0.02400'},
{'time': '00.002133000000', 'voltage': ' -0.03200'},
{'time': '00.002134000000', 'voltage': ' -0.02400'},
{'time': '00.002135000000', 'voltage': ' -0.02400'},
{'time': '00.002136000000', 'voltage': ' -0.02400'},
{'time': '00.002137000000', 'voltage': ' -0.02400'},
{'time': '00.002138000000', 'voltage': ' -0.02400'},
{'time': '00.002139000000', 'voltage': ' -0.01600'},
{'time': '00.002140000000', 'voltage': ' -0.02400'},
{'time': '00.002141000000', 'voltage': ' -0.00800'},
{'time': '00.002142000000', 'voltage': ' -0.02400'},
{'time': '00.002143000000', 'voltage': ' -0.03200'},
{'time': '00.002144000000', 'voltage': ' -0.01600'},
{'time': '00.002145000000', 'voltage': ' -0.02400'},
{'time': '00.002146000000', 'voltage': ' -0.03200'},
{'time': '00.002147000000', 'voltage': ' -0.02400'},
{'time': '00.002148000000', 'voltage': ' -0.02400'},
{'time': '00.002149000000', 'voltage': ' -0.03200'},
{'time': '00.002150000000', 'voltage': ' -0.03200'},
{'time': '00.002151000000', 'voltage': ' -0.03200'},
{'time': '00.002152000000', 'voltage': ' -0.02400'},
{'time': '00.002153000000', 'voltage': ' -0.03200'},
{'time': '00.002154000000', 'voltage': ' -0.03200'},
{'time': '00.002155000000', 'voltage': ' -0.03200'},
{'time': '00.002156000000', 'voltage': ' -0.03200'},
{'time': '00.002157000000', 'voltage': ' -0.02400'},
{'time': '00.002158000000', 'voltage': ' -0.01600'},
{'time': '00.002159000000', 'voltage': ' -0.03200'},
{'time': '00.002160000000', 'voltage': ' -0.02400'},
{'time': '00.002161000000', 'voltage': ' -0.02400'},
{'time': '00.002162000000', 'voltage': ' -0.03200'},
{'time': '00.002163000000', 'voltage': ' -0.02400'},
{'time': '00.002164000000', 'voltage': ' -0.02400'},
{'time': '00.002165000000', 'voltage': ' -0.02400'},
{'time': '00.002166000000', 'voltage': ' -0.03200'},
{'time': '00.002167000000', 'voltage': ' -0.02400'},
{'time': '00.002168000000', 'voltage': ' -0.03200'},
{'time': '00.002169000000', 'voltage': ' -0.01600'},
{'time': '00.002170000000', 'voltage': ' -0.04000'},
{'time': '00.002171000000', 'voltage': ' -0.04000'},
{'time': '00.002172000000', 'voltage': ' -0.03200'},
{'time': '00.002173000000', 'voltage': ' -0.02400'},
{'time': '00.002174000000', 'voltage': ' -0.04800'},
{'time': '00.002175000000', 'voltage': ' -0.03200'},
{'time': '00.002176000000', 'voltage': ' -0.02400'},
{'time': '00.002177000000', 'voltage': ' -0.03200'},
{'time': '00.002178000000', 'voltage': ' -0.03200'},
{'time': '00.002179000000', 'voltage': ' -0.01600'},
{'time': '00.002180000000', 'voltage': ' -0.02400'},
{'time': '00.002181000000', 'voltage': ' -0.02400'},
{'time': '00.002182000000', 'voltage': ' -0.02400'},
{'time': '00.002183000000', 'voltage': ' -0.02400'},
{'time': '00.002184000000', 'voltage': ' -0.02400'},
{'time': '00.002185000000', 'voltage': ' -0.02400'},
{'time': '00.002186000000', 'voltage': ' -0.02400'},
{'time': '00.002187000000', 'voltage': ' -0.02400'},
{'time': '00.002188000000', 'voltage': ' -0.01600'},
{'time': '00.002189000000', 'voltage': ' -0.02400'},
{'time': '00.002190000000', 'voltage': ' -0.02400'},
{'time': '00.002191000000', 'voltage': ' -0.02400'},
{'time': '00.002192000000', 'voltage': ' -0.04000'},
{'time': '00.002193000000', 'voltage': ' -0.03200'},
{'time': '00.002194000000', 'voltage': ' -0.02400'},
{'time': '00.002195000000', 'voltage': ' -0.03200'},
{'time': '00.002196000000', 'voltage': ' -0.04000'},
{'time': '00.002197000000', 'voltage': ' -0.03200'},
{'time': '00.002198000000', 'voltage': ' -0.02400'},
{'time': '00.002199000000', 'voltage': ' -0.02400'},
{'time': '00.002200000000', 'voltage': ' -0.01600'},
{'time': '00.002201000000', 'voltage': ' -0.01600'},
{'time': '00.002202000000', 'voltage': ' -0.03200'},
{'time': '00.002203000000', 'voltage': ' -0.01600'},
{'time': '00.002204000000', 'voltage': ' -0.02400'},
{'time': '00.002205000000', 'voltage': ' -0.03200'},
{'time': '00.002206000000', 'voltage': ' -0.02400'},
{'time': '00.002207000000', 'voltage': ' -0.02400'},
{'time': '00.002208000000', 'voltage': ' -0.00800'},
{'time': '00.002209000000', 'voltage': ' -0.03200'},
{'time': '00.002210000000', 'voltage': ' -0.02400'},
{'time': '00.002211000000', 'voltage': ' -0.03200'},
{'time': '00.002212000000', 'voltage': ' -0.03200'},
{'time': '00.002213000000', 'voltage': ' -0.04000'},
{'time': '00.002214000000', 'voltage': ' -0.03200'},
{'time': '00.002215000000', 'voltage': ' -0.02400'},
{'time': '00.002216000000', 'voltage': ' -0.03200'},
{'time': '00.002217000000', 'voltage': ' -0.03200'},
{'time': '00.002218000000', 'voltage': ' -0.02400'},
{'time': '00.002219000000', 'voltage': ' -0.04000'},
{'time': '00.002220000000', 'voltage': ' -0.03200'},
{'time': '00.002221000000', 'voltage': ' -0.03200'},
{'time': '00.002222000000', 'voltage': ' -0.03200'},
{'time': '00.002223000000', 'voltage': ' -0.03200'},
{'time': '00.002224000000', 'voltage': ' -0.03200'},
{'time': '00.002225000000', 'voltage': ' -0.03200'},
{'time': '00.002226000000', 'voltage': ' -0.03200'},
{'time': '00.002227000000', 'voltage': ' -0.02400'},
{'time': '00.002228000000', 'voltage': ' -0.01600'},
{'time': '00.002229000000', 'voltage': ' -0.03200'},
{'time': '00.002230000000', 'voltage': ' -0.02400'},
{'time': '00.002231000000', 'voltage': ' -0.01600'},
{'time': '00.002232000000', 'voltage': ' -0.02400'},
{'time': '00.002233000000', 'voltage': ' -0.03200'},
{'time': '00.002234000000', 'voltage': ' -0.03200'},
{'time': '00.002235000000', 'voltage': ' -0.03200'},
{'time': '00.002236000000', 'voltage': ' -0.02400'},
{'time': '00.002237000000', 'voltage': ' -0.03200'},
{'time': '00.002238000000', 'voltage': ' -0.03200'},
{'time': '00.002239000000', 'voltage': ' -0.02400'},
{'time': '00.002240000000', 'voltage': ' -0.02400'},
{'time': '00.002241000000', 'voltage': ' -0.03200'},
{'time': '00.002242000000', 'voltage': ' -0.03200'},
{'time': '00.002243000000', 'voltage': ' -0.02400'},
{'time': '00.002244000000', 'voltage': ' -0.02400'},
{'time': '00.002245000000', 'voltage': ' -0.03200'},
{'time': '00.002246000000', 'voltage': ' -0.02400'},
{'time': '00.002247000000', 'voltage': ' -0.02400'},
{'time': '00.002248000000', 'voltage': ' -0.02400'},
{'time': '00.002249000000', 'voltage': ' -0.02400'},
{'time': '00.002250000000', 'voltage': ' -0.02400'},
{'time': '00.002251000000', 'voltage': ' -0.02400'},
{'time': '00.002252000000', 'voltage': ' -0.02400'},
{'time': '00.002253000000', 'voltage': ' -0.02400'},
{'time': '00.002254000000', 'voltage': ' -0.02400'},
{'time': '00.002255000000', 'voltage': ' -0.02400'},
{'time': '00.002256000000', 'voltage': ' -0.01600'},
{'time': '00.002257000000', 'voltage': ' -0.02400'},
{'time': '00.002258000000', 'voltage': ' -0.02400'},
{'time': '00.002259000000', 'voltage': ' -0.03200'}]}
###Markdown
Generating a Chart Before we start, let's make it so that any charts we generate display on this page rather than opening in a new window. We use the `%matplotlib inline` magic command to plot inside the notebook rather than having a new window open. Importing `Image` from IPython allows us to show images here in our notebook.
###Code
%matplotlib inline
from IPython.display import Image
###Output
_____no_output_____
###Markdown
We then need to import the relevant libraries. At this point, all we need is **pyplot** for creating the charts and **numpy** for working with numbers.
###Code
import matplotlib.pyplot as pyplot
import numpy
###Output
_____no_output_____
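###Markdown
A side note added for illustration (not part of the original walkthrough): **numpy** can generate sequences of numbers for us, which saves typing out long lists of x values by hand. The names below are just examples.
###Code
# A minimal sketch: numpy.arange builds evenly spaced values,
# which could be passed straight to pyplot.plot()
example_x = numpy.arange(0, 10)      # the integers 0, 1, ..., 9 as an array
example_y = example_x ** 2           # one computed value for each x
###Output
_____no_output_____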
###Markdown
A basic line chart is made of `x` and `y` values. To generate a chart, we need (at a bare minimum) the following steps:
- Create separate lists of all the values for the **x** axis and all the values for the **y** axis.
- Pass the x and y lists to **pyplot.plot()**.
- Use **pyplot.show()** to display the result.
###Code
# x and y values for a simple example line
x_data = [0,1,2,3,4,5,6,7,8,9]
y_data = [1,2,4,5,7,9,10,7,5,2]
# plot y against x, then display the chart
pyplot.plot(x_data, y_data)
pyplot.show()
###Output
_____no_output_____
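###Markdown
An optional extra added for illustration (not part of the original walkthrough): pyplot can also add a title and axis labels to a chart, which makes it much easier for a reader to tell what the numbers mean.
###Code
# A minimal sketch: the same simple chart, now with a title and axis labels
x_data = [0,1,2,3,4,5,6,7,8,9]
y_data = [1,2,4,5,7,9,10,7,5,2]
pyplot.plot(x_data, y_data)
pyplot.title('A simple example line chart')   # chart title
pyplot.xlabel('x value')                      # label for the x axis
pyplot.ylabel('y value')                      # label for the y axis
pyplot.show()
###Output
_____no_output_____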
###Markdown
Plotting the NLDL Data In our NLDL data, we want to plot the values from the **readings** list. The **x** axis will be **time**. The **y** axis will be **voltage**. Have another look at the printed NLDL data we created earlier and identify where you can find the **time** and **voltage** for each reading. We need to:
- Create lists for the x values and y values.
- Look at each reading from the list of readings.
- Extract the reading's time and save it to the x values.
- Extract the reading's voltage and save it to the y values.
- Plot the data.
Extract the X and Y Data Here we need to look at each reading individually and extract the x value (time) and the y value (voltage). Each value will be saved to the relevant list.
###Code
# empty lists to hold the x (time) and y (voltage) values
x_data = []
y_data = []
for reading in data['readings']:
    # convert the time string to a number and scale by 1000 (seconds to milliseconds)
    time = reading['time']
    time = float(time) * 1000
    x_data.append(time)
    # the voltage is kept exactly as it appears in the data (a string) for now
    voltage = reading['voltage']
    y_data.append(voltage)
###Output
_____no_output_____
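###Markdown
A note added for clarity (not part of the original walkthrough): the voltage values we just collected are still text strings (for example ' -0.05600') rather than numbers, because that is how they are stored in the readings. If we want to do arithmetic on them, or be certain that matplotlib treats them as numeric values, each one can be converted with `float()`. A minimal sketch follows, assuming `y_data` from the cell above; `y_data_numeric` is just a name introduced here.
###Code
# A minimal sketch (assumes y_data was built by the previous cell):
# convert each voltage string, e.g. ' -0.05600', into a float
y_data_numeric = [float(voltage) for voltage in y_data]
###Output
_____no_output_____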
###Markdown
We should also check that our data looks reasonable before we begin plotting it...
###Code
pprint(x_data)
pprint(y_data)
###Output
[' -0.02400',
' -0.02400',
' -0.01600',
' -0.01600',
' -0.00800',
' -0.02400',
' -0.00800',
' -0.01600',
' -0.01600',
' -0.01600',
' -0.02400',
' -0.00800',
' -0.02400',
' -0.02400',
' -0.02400',
' -0.03200',
' -0.03200',
' -0.02400',
' -0.02400',
' -0.01600',
' -0.01600',
' -0.01600',
' -0.02400',
' -0.02400',
' -0.00800',
' -0.01600',
' 0.00000',
' -0.00800',
' -0.01600',
' 0.00000',
' -0.01600',
' -0.01600',
' -0.01600',
' -0.02400',
' -0.02400',
' -0.02400',
' -0.01600',
' -0.04800',
' -0.00800',
' -0.04000',
' -0.00800',
' -0.02400',
' -0.03200',
' -0.01600',
' -0.00800',
' -0.01600',
' -0.01600',
' -0.02400',
' -0.01600',
' -0.02400',
' -0.01600',
' -0.02400',
' -0.02400',
' -0.01600',
' -0.01600',
' -0.02400',
' -0.02400',
' -0.03200',
' -0.02400',
' -0.02400',
' -0.02400',
' -0.02400',
' -0.02400',
' -0.01600',
' -0.03200',
' -0.01600',
' -0.01600',
' -0.01600',
' -0.00800',
' -0.02400',
' -0.00800',
' -0.01600',
' -0.02400',
' -0.01600',
' -0.01600',
' -0.00800',
' -0.01600',
' -0.02400',
' -0.02400',
' -0.02400',
' -0.01600',
' -0.02400',
' -0.02400',
' -0.01600',
' -0.02400',
' -0.01600',
' -0.02400',
' -0.01600',
' -0.01600',
' -0.01600',
' -0.01600',
' -0.01600',
' -0.00800',
' -0.01600',
' -0.01600',
' -0.01600',
' -0.02400',
' -0.00800',
' -0.02400',
' -0.02400',
' -0.00800',
' -0.01600',
' -0.03200',
' -0.01600',
' -0.02400',
' -0.02400',
' -0.03200',
' -0.02400',
' -0.02400',
' -0.01600',
' -0.02400',
' -0.02400',
' -0.00800',
' -0.02400',
' -0.02400',
' -0.00800',
' -0.02400',
' -0.01600',
' -0.02400',
' -0.00800',
' -0.00800',
' -0.02400',
' -0.01600',
' -0.02400',
' -0.01600',
' -0.01600',
' -0.03200',
' -0.02400',
' -0.01600',
' -0.02400',
' -0.00800',
' -0.01600',
' -0.01600',
' -0.02400',
' -0.00800',
' -0.02400',
' 0.00000',
' -0.00800',
' -0.01600',
' -0.00800',
' -0.01600',
' -0.01600',
' -0.01600',
' -0.01600',
' -0.02400',
' -0.03200',
' -0.01600',
' -0.02400',
' -0.04000',
' -0.02400',
' -0.03200',
' -0.02400',
' -0.02400',
' -0.01600',
' -0.01600',
' -0.03200',
' -0.01600',
' -0.00800',
' -0.02400',
' -0.01600',
' -0.01600',
' -0.01600',
' -0.02400',
' -0.00800',
' -0.02400',
' -0.01600',
' -0.00800',
' -0.02400',
' -0.01600',
' -0.02400',
' -0.02400',
' -0.01600',
' -0.04000',
' -0.02400',
' -0.03200',
' -0.02400',
' -0.01600',
' -0.00800',
' -0.01600',
' -0.00800',
' -0.00800',
' -0.01600',
' -0.00800',
' 0.00000',
' -0.01600',
' -0.01600',
' -0.01600',
' -0.03200',
' -0.02400',
' -0.01600',
' -0.02400',
' -0.02400',
' -0.01600',
' -0.02400',
' -0.02400',
' -0.01600',
' -0.01600',
' -0.02400',
' -0.00800',
' -0.01600',
' -0.01600',
' -0.02400',
' -0.00800',
' -0.01600',
' -0.01600',
' -0.00800',
' -0.00800',
' -0.02400',
' -0.02400',
' -0.01600',
' -0.02400',
' -0.02400',
' -0.04000',
' -0.02400',
' -0.02400',
' -0.03200',
' -0.02400',
' -0.03200',
' -0.02400',
' -0.02400',
' -0.02400',
' -0.01600',
' -0.01600',
' -0.02400',
' -0.01600',
' -0.01600',
' -0.00800',
' -0.02400',
' -0.00800',
' -0.03200',
' -0.02400',
' -0.03200',
' -0.02400',
' -0.01600',
' -0.03200',
' -0.02400',
' -0.02400',
' -0.00800',
' -0.03200',
' 0.00000',
' -0.01600',
' -0.02400',
' -0.01600',
' -0.01600',
' -0.01600',
' -0.00800',
' -0.01600',
' -0.00800',
' 0.00000',
' -0.00800',
' 0.01600',
' 0.03200',
' 0.04800',
' 0.07200',
' 0.10400',
' 0.15200',
' 0.18400',
' 0.22400',
' 0.27200',
' 0.31200',
' 0.35200',
' 0.39200',
' 0.44800',
' 0.49600',
' 0.53600',
' 0.56800',
' 0.60800',
' 0.65600',
' 0.66400',
' 0.69600',
' 0.72000',
' 0.74400',
' 0.74400',
' 0.74400',
' 0.75200',
' 0.74400',
' 0.72800',
' 0.69600',
' 0.64800',
' 0.57600',
' 0.48800',
' 0.40800',
' 0.30400',
' 0.20000',
' 0.08800',
' -0.00800',
' -0.11200',
' -0.20800',
' -0.30400',
' -0.37600',
' -0.44000',
' -0.49600',
' -0.56000',
' -0.60000',
' -0.64800',
' -0.68000',
' -0.71200',
' -0.73600',
' -0.73600',
' -0.77600',
' -0.78400',
' -0.78400',
' -0.79200',
' -0.80000',
' -0.80800',
' -0.80800',
' -0.79200',
' -0.80000',
' -0.79200',
' -0.79200',
' -0.78400',
' -0.79200',
' -0.78400',
' -0.77600',
' -0.76800',
' -0.75200',
' -0.75200',
' -0.74400',
' -0.73600',
' -0.72800',
' -0.72800',
' -0.72000',
' -0.71200',
' -0.69600',
' -0.69600',
' -0.68800',
' -0.67200',
' -0.67200',
' -0.65600',
' -0.64800',
' -0.64000',
' -0.63200',
' -0.63200',
' -0.60800',
' -0.61600',
' -0.59200',
' -0.59200',
' -0.59200',
' -0.58400',
' -0.57600',
' -0.57600',
' -0.57600',
' -0.56000',
' -0.56000',
' -0.54400',
' -0.55200',
' -0.53600',
' -0.53600',
' -0.52800',
' -0.52800',
' -0.52000',
' -0.52000',
' -0.50400',
' -0.51200',
' -0.49600',
' -0.49600',
' -0.48800',
' -0.49600',
' -0.47200',
' -0.49600',
' -0.47200',
' -0.48000',
' -0.46400',
' -0.47200',
' -0.46400',
' -0.48000',
' -0.48000',
' -0.47200',
' -0.46400',
' -0.46400',
' -0.44800',
' -0.45600',
' -0.44800',
' -0.44000',
' -0.43200',
' -0.44000',
' -0.41600',
' -0.42400',
' -0.41600',
' -0.40800',
' -0.41600',
' -0.40000',
' -0.40000',
' -0.40000',
' -0.40000',
' -0.40000',
' -0.40000',
' -0.36800',
' -0.36800',
' -0.38400',
' -0.36000',
' -0.35200',
' -0.35200',
' -0.33600',
' -0.32000',
' -0.32800',
' -0.32800',
' -0.31200',
' -0.29600',
' -0.28800',
' -0.28000',
' -0.28000',
' -0.26400',
' -0.26400',
' -0.24800',
' -0.24800',
' -0.23200',
' -0.23200',
' -0.21600',
' -0.21600',
' -0.19200',
' -0.18400',
' -0.17600',
' -0.17600',
' -0.16800',
' -0.14400',
' -0.14400',
' -0.12800',
' -0.11200',
' -0.11200',
' -0.08800',
' -0.08800',
' -0.08000',
' -0.04800',
' -0.04800',
' -0.04800',
' -0.02400',
' -0.02400',
' -0.01600',
' 0.00800',
' -0.00800',
' 0.00800',
' 0.03200',
' 0.04000',
' 0.04800',
' 0.05600',
' 0.06400',
' 0.08000',
' 0.09600',
' 0.09600',
' 0.12800',
' 0.13600',
' 0.15200',
' 0.15200',
' 0.16000',
' 0.18400',
' 0.17600',
' 0.20000',
' 0.19200',
' 0.20800',
' 0.23200',
' 0.23200',
' 0.24000',
' 0.24000',
' 0.24800',
' 0.25600',
' 0.28000',
' 0.27200',
' 0.28800',
' 0.29600',
' 0.30400',
' 0.32000',
' 0.32000',
' 0.34400',
' 0.35200',
' 0.36000',
' 0.36800',
' 0.37600',
' 0.37600',
' 0.38400',
' 0.40000',
' 0.39200',
' 0.40800',
' 0.40000',
' 0.41600',
' 0.41600',
' 0.41600',
' 0.42400',
' 0.43200',
' 0.44000',
' 0.44800',
' 0.44800',
' 0.45600',
' 0.46400',
' 0.46400',
' 0.48000',
' 0.48800',
' 0.48000',
' 0.50400',
' 0.48800',
' 0.49600',
' 0.51200',
' 0.50400',
' 0.51200',
' 0.51200',
' 0.52000',
' 0.52800',
' 0.52800',
' 0.52800',
' 0.53600',
' 0.52800',
' 0.53600',
' 0.54400',
' 0.54400',
' 0.54400',
' 0.56000',
' 0.56000',
' 0.56000',
' 0.56000',
' 0.57600',
' 0.57600',
' 0.56800',
' 0.56800',
' 0.57600',
' 0.58400',
' 0.56800',
' 0.56800',
' 0.56800',
' 0.57600',
' 0.56000',
' 0.56800',
' 0.56000',
' 0.56800',
' 0.56000',
' 0.56000',
' 0.56800',
' 0.56000',
' 0.56000',
' 0.56000',
' 0.56800',
' 0.56000',
' 0.56000',
' 0.56000',
' 0.56000',
' 0.56000',
' 0.54400',
' 0.55200',
' 0.55200',
' 0.53600',
' 0.55200',
' 0.53600',
' 0.53600',
' 0.52800',
' 0.52800',
' 0.52800',
' 0.52000',
' 0.53600',
' 0.52000',
' 0.52800',
' 0.51200',
' 0.51200',
' 0.51200',
' 0.50400',
' 0.51200',
' 0.50400',
' 0.49600',
' 0.50400',
' 0.49600',
' 0.48000',
' 0.50400',
' 0.48800',
' 0.47200',
' 0.46400',
' 0.47200',
' 0.44800',
' 0.46400',
' 0.44800',
' 0.44000',
' 0.44000',
' 0.44000',
' 0.43200',
' 0.44000',
' 0.42400',
' 0.42400',
' 0.41600',
' 0.42400',
' 0.41600',
' 0.40000',
' 0.40800',
' 0.39200',
' 0.39200',
' 0.37600',
' 0.36800',
' 0.36800',
' 0.36800',
' 0.34400',
' 0.34400',
' 0.34400',
' 0.32800',
' 0.32800',
' 0.32000',
' 0.32000',
' 0.31200',
' 0.31200',
' 0.30400',
' 0.31200',
' 0.29600',
' 0.30400',
' 0.28800',
' 0.28000',
' 0.28000',
' 0.26400',
' 0.26400',
' 0.24800',
' 0.24800',
' 0.24800',
' 0.23200',
' 0.22400',
' 0.22400',
' 0.21600',
' 0.21600',
' 0.20000',
' 0.20000',
' 0.19200',
' 0.19200',
' 0.18400',
' 0.19200',
' 0.18400',
' 0.17600',
' 0.16000',
' 0.15200',
' 0.16000',
' 0.14400',
' 0.13600',
' 0.12800',
' 0.12000',
' 0.11200',
' 0.11200',
' 0.10400',
' 0.09600',
' 0.08800',
' 0.08000',
' 0.08000',
' 0.05600',
' 0.07200',
' 0.06400',
' 0.04800',
' 0.04800',
' 0.04800',
' 0.04800',
' 0.03200',
' 0.02400',
' 0.02400',
' 0.01600',
' 0.00800',
' 0.00000',
' 0.00000',
' -0.00800',
' -0.01600',
' -0.02400',
' -0.01600',
' -0.03200',
' -0.04000',
' -0.04000',
' -0.05600',
' -0.05600',
' -0.04800',
' -0.06400',
' -0.06400',
' -0.07200',
' -0.07200',
' -0.07200',
' -0.08000',
' -0.08800',
' -0.08000',
' -0.08800',
' -0.08800',
' -0.08800',
' -0.10400',
' -0.10400',
' -0.12000',
' -0.12800',
' -0.12000',
' -0.12000',
' -0.12000',
' -0.13600',
' -0.14400',
' -0.15200',
' -0.14400',
' -0.15200',
' -0.16000',
' -0.15200',
' -0.16000',
' -0.16000',
' -0.16000',
' -0.16000',
' -0.16000',
' -0.16800',
' -0.18400',
' -0.18400',
' -0.19200',
' -0.19200',
' -0.20800',
' -0.19200',
' -0.20800',
' -0.20800',
' -0.21600',
' -0.22400',
' -0.22400',
' -0.22400',
' -0.22400',
' -0.21600',
' -0.22400',
' -0.23200',
' -0.23200',
' -0.23200',
' -0.22400',
' -0.23200',
' -0.22400',
' -0.24000',
' -0.24000',
' -0.23200',
' -0.23200',
' -0.22400',
' -0.24000',
' -0.24000',
' -0.24800',
' -0.24000',
' -0.25600',
' -0.25600',
' -0.24800',
' -0.25600',
' -0.25600',
' -0.25600',
' -0.25600',
' -0.26400',
' -0.25600',
' -0.26400',
 ... (several thousand similar voltage samples omitted) ...
 ' -0.03200']
###Markdown
Create a Basic Plot
Once we have the data, creating a chart is a matter of just two lines.
###Code
pyplot.plot(x_data, y_data)
pyplot.show()
###Output
_____no_output_____
###Markdown
We can also add some styling to make it look nicer. The title and axis labels can be added using the relevant plot functions. We can also add annotations such as arrows and lines. Here we've added a vertical line to show where **time = 0**, which is the point at which the stimulation was applied. The time before zero is a recording buffer.
###Code
pyplot.style.use('ggplot')
pyplot.title("NLDL Data")
pyplot.xlabel("Time (ms)")
pyplot.ylabel("Voltage")
pyplot.axvline(x=0, color='blue')
pyplot.plot(x_data, y_data)
pyplot.show()
###Output
_____no_output_____
###Markdown
Last of all, we can save our chart as an image file:
###Code
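# Note: savefig only captures a figure that is still open; since pyplot.show() ran in the
# previous cell, either call savefig before show() or re-plot the data before saving,
# otherwise the exported image may come out blank.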
pyplot.savefig("TEK0000.png")
###Output
_____no_output_____ |
notebooks_ipynb/covid_data_wrangling.ipynb | ###Markdown
1 Covid-19 Data Collection and Wrangling 1.1 Contents* [1 Covid-19 Data Collection and Wrangling](1_data_collection_wrangling) * [1.1 Contents](1.1_contents) * [1.2 Imports](1.2_imports) * [1.3 Functions](1.3_functions) * [1.3.1 Function: add_vaccination](1.3.1_add_vaccination) * [1.3.2 Function: add_case](1.3.2_add_case) * [1.3.3 Function: add_weather](1.3.3_add_weather) * [1.3.4 Function: country_name_to_iso2](1.3.4_country_name_to_iso2) * [1.3.5 Function: add holiday](1.3.5_add_holiday) * [1.3.6 Function: vac_iso_code_to_alpha2](1.3.6_vac_iso_code_to_alpha2) * [1.3.7 Function: case_name_to_alpha2](1.3.7_case_name_to_alpha2) * [1.4 Load data](1.4_load_data) * [1.5 Preprocess mobility data](1.5_preprocess_mobility_data) * [1.6 Merge data](1.6_merge_data) * [1.7 Save data](1.7_save_data) * [1.8 Appendix](1.8_appendix) 1.2 Imports
###Code
from collections import defaultdict
from pathlib import Path
import os
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from matplotlib import cm
import pickle
from deepdiff import DeepDiff
from pandas.tseries.holiday import USFederalHolidayCalendar as holidayCalendar
from wwo_hist import retrieve_hist_data
import pycountry
import holidays
from countryinfo import CountryInfo
%matplotlib inline
###Output
_____no_output_____
###Markdown
1.3 Functions 1.3.1 Functions: add_vaccination
###Code
def add_vaccination(df_place, df_vac, country_code):
# join the vaccination data
df_place_vac = df_vac.loc[df_vac["iso_code"]==country_code,["date","people_vaccinated_per_hundred"]]
df_place_vac.rename(columns = {"people_vaccinated_per_hundred": "vac"}, inplace=True)
df_place_vac["date"] = pd.to_datetime(df_place_vac.date) # convert the index to datetime64
df_place_vac.set_index("date", drop=True ,inplace=True)
df_place = df_place.merge(df_place_vac, on="date", how="left")
# fill in missing values for vaccination
if np.isnan(df_place["vac"][0]): # if the initial value is NaN, put 0
df_place.loc[df_place.index[0],"vac"]=0
df_place.loc[:,"vac"] = df_place["vac"].fillna(method='ffill') # must assign the filled list to the "vac" column
return df_place
###Output
_____no_output_____
###Markdown
1.3.2 Functions: add_case
###Code
def add_case(df_place, df_case, country_code):
# join the case data
df_place_case = df_case.loc[:,["date", country_code]].set_index("date")
df_place_case.index = pd.to_datetime(df_place_case.index)
df_place_case.rename(columns = {country_code:"case_mil"}, inplace=True)
df_place = df_place.merge(df_place_case, on="date", how="left")
# fill in missing values for case
if np.isnan(df_place["case_mil"][0]): # if the initial value is NaN, put 0
df_place.loc[df_place.index[0],"case_mil"]=0
df_place.loc[:,"case_mil"] = df_place["case_mil"].fillna(method='ffill') # must assign the filled list to the "vac" column
return df_place
###Output
_____no_output_____
###Markdown
1.3.3 Functions: add_weather
###Code
def add_weather(df_place):
    # look up the country's capital city and fetch its daily weather history
api_key = '619972ecb8e64a2ca8b152638212709' # note that this key expires on 11/21/21
country_code = df_place.country_region_code.unique()[0]
country = CountryInfo(country_code)
capital = country.capital().replace(" ", "_")
capital = capital.replace(".","")
start_date = df_place.index[0].strftime('%d-%b-%Y')
end_date = df_place.index[-1].strftime('%d-%b-%Y')
frequency=24
try:
hist_weather = retrieve_hist_data(api_key, [capital], start_date, end_date,
frequency, location_label = False, export_csv = False, store_df = True);
df_weather = hist_weather[0][["date_time", "cloudcover", "tempC", "humidity", "precipMM"]]
df_weather.insert(len(df_weather.columns), "date", pd.to_datetime(df_weather["date_time"]))
df_weather.drop(["date_time"], axis=1, inplace=True)
df_weather.set_index(["date"], inplace=True)
df_place = df_place.join(df_weather, on="date", how="left")
except:
print("No match found!")
return df_place
###Output
_____no_output_____
###Markdown
1.3.4 Functions: country_name_to_iso2
###Code
def country_name_to_iso2(holiday_dict):
holiday_country_code_dict = {}
countries = holiday_dict.keys()
for country in countries:
try:
country_code = CountryInfo(country).iso()['alpha2']
holiday_country_code_dict.update({country_code:holiday_dict[country]})
except KeyError as e:
print('I got a KeyError - reason %s' % str(e))
return holiday_country_code_dict
###Output
_____no_output_____
###Markdown
1.3.5 Functions: add_holiday
###Code
def add_holiday(df_place, holid_country_code):
country_code = df_place.country_region_code.unique()[0]
holiday_timestamps = holid_country_code[country_code]
df_holiday = pd.DataFrame({'date': holiday_timestamps, 'holiday':[1 for timestamp in holiday_timestamps]})
df_holiday.set_index(['date'], inplace=True)
df_place = df_place.join(df_holiday, how='left', on='date')
df_place['holiday'].fillna(value=0, inplace=True)
df_place['holiday'] = [int(element) for element in list(df_place['holiday'])]
return df_place
###Output
_____no_output_____
###Markdown
1.3.6 Functions: vac_iso_code_to_alpha2
###Code
def vac_iso_code_to_alpha2(df_vac):
alpha2 = []
for iso3 in list(df_vac.iso_code):
try:
alpha2.append(pycountry.countries.get(alpha_3=iso3).alpha_2)
except:
alpha2.append("")
df_vac['iso_code'] = alpha2
return df_vac
###Output
_____no_output_____
###Markdown
1.3.7 Functions: case_name_to_alpha2
###Code
def case_name_to_alpha2(df_case):
for country in df_case.columns:
country_obj = pycountry.countries.get(name = country)
try:
df_case.rename(columns={country:country_obj.alpha_2}, inplace=True)
except:
if country == 'South Korea':
df_case.rename(columns={country:'KR'}, inplace=True)
elif country == 'Russia':
df_case.rename(columns={country:'RU'}, inplace=True)
return df_case
###Output
_____no_output_____
###Markdown
1.4 Load data - mobility, case, vaccination, holiday
###Code
# load mobility data (https://www.google.com/covid19/mobility/)
df_mob = pd.read_csv('/Users/parkj/Documents/pyDat/dataSet/COVID19_community_mobility_reports/Global_Mobility_Report_asof092221.csv', low_memory=False)
# load new cases data (git local repo: /Users/parkj/Documents/pyDat/dataSet/covid-19-data/, repo: https://github.com/owid/covid-19-data)
df_case = pd.read_csv('/Users/parkj/Documents/pyDat/dataSet/covid-19-data/public/data/jhu/new_cases_per_million.csv', low_memory=False)
df_case = case_name_to_alpha2(df_case)
# load vaccination data (git local repo: /Users/parkj/Documents/pyDat/dataSet/covid-19-data/, repo: https://github.com/owid/covid-19-data)
df_vac = pd.read_csv('/Users/parkj/Documents/pyDat/dataSet/covid-19-data/public/data/vaccinations/vaccinations.csv', low_memory=False)
df_vac = vac_iso_code_to_alpha2(df_vac)
# load holiday data
with open('/Users/parkj/Documents/pyDat/google_calendar_api_holidays/holidays.pickle', 'rb') as f:
holid = pickle.load(f)
holid_code = country_name_to_iso2(holid)
holid_code_list = list(holid_code.keys())
# set country of interest
countries_of_interest = ['AR', 'AU', 'AT', 'BE', \
'CA', 'DK', 'FI', 'FR', 'DE', \
'IN', 'ID', 'IE', 'IL', \
'IT', 'JP', 'KR', 'MX', 'NL', \
'NO', 'RU', 'SG', 'GB', 'US']
# Argentina, Australia, Austria, Belgium,
# Canada, Denmark, Finland, France, Germany
# India, Indonesia, Ireland, Israel
# Italy, Japan, Korea, Mexico, Netherlands
# Norway, Russia, Singapore, UK, US
###Output
I got a KeyError - reason 'christian holidays'
I got a KeyError - reason 'jewish holidays'
I got a KeyError - reason 'muslim holidays'
I got a KeyError - reason 'orthodox holidays'
I got a KeyError - reason 'andorra'
I got a KeyError - reason 'bahamas'
I got a KeyError - reason 'british virgin islands'
I got a KeyError - reason 'brunei darussalam'
I got a KeyError - reason 'congo'
I got a KeyError - reason "côte d'ivoire"
I got a KeyError - reason 'curaçao'
I got a KeyError - reason 'czechia'
I got a KeyError - reason 'falkland islands (malvinas)'
I got a KeyError - reason 'gambia'
I got a KeyError - reason 'holy see (vatican city state)'
I got a KeyError - reason 'macao'
I got a KeyError - reason 'montenegro'
I got a KeyError - reason 'myanmar'
I got a KeyError - reason 'saint barthélemy'
I got a KeyError - reason 'saint martin (french part)'
I got a KeyError - reason 'sao tome and principe'
I got a KeyError - reason 'sint maarten (dutch part)'
I got a KeyError - reason 'the democratic republic of the congo'
I got a KeyError - reason 'the former yugoslav republic of macedonia'
I got a KeyError - reason 'timor-leste'
I got a KeyError - reason 'turks and caicos islands'
I got a KeyError - reason 'u.s. virgin islands'
###Markdown
Load Mobility, Vaccination, and Case Data
Google Mobility Data: changes for each day are compared to a ***baseline*** value for that day of the week.
1. The baseline is the median value, *for the corresponding day of the week*, during the 5-week period Jan 3–Feb 6, 2020.
2. The datasets show trends over several months, with the most recent data representing approximately 2-3 days ago, which is how long it takes to produce the datasets.
 1.5 Preprocess mobility data
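Purely for illustration: the mobility columns used below are already percent changes published by Google, but a minimal sketch of how such a day-of-week baseline comparison could be computed from a hypothetical raw daily `visits` series (not a column in this dataset) might look like this:
###Code
import pandas as pd

def percent_change_from_baseline(visits: pd.Series) -> pd.Series:
    """Hypothetical helper: visits is a daily series indexed by date."""
    window = visits.loc["2020-01-03":"2020-02-06"]
    # Baseline = median for the corresponding day of the week over the 5-week window
    baseline = window.groupby(window.index.weekday).median()
    weekday_baseline = visits.index.weekday.map(baseline).to_numpy()
    return (visits / weekday_baseline - 1.0) * 100.0
###Output
_____no_output_____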
###Code
# Rename the mobility time series column names
df_mob = df_mob.rename(columns = {'retail_and_recreation_percent_change_from_baseline': 'rtrc',
'grocery_and_pharmacy_percent_change_from_baseline': 'grph',
'parks_percent_change_from_baseline': 'prks',
'transit_stations_percent_change_from_baseline': 'tran',
'workplaces_percent_change_from_baseline': 'work',
'residential_percent_change_from_baseline': 'resi'}, inplace = False)
df_mob.date = pd.to_datetime(df_mob.date)
df_mob.set_index("date", drop=True ,inplace=True)
df_mob.drop(['sub_region_1','sub_region_2','metro_area','iso_3166_2_code','census_fips_code'],axis=1,inplace=True)
###Output
_____no_output_____
###Markdown
1.6 Merge data - mobility, vaccination, case, weather, holiday
###Code
# organize data in Global Mobility Report
places_id = df_mob.place_id.unique() # unique place ids
grouped = df_mob.groupby(df_mob.place_id)
dict_country = {} # country dict to contain the national-level data (ignore local regions)
country_label = defaultdict(list)
for place in places_id:
if pd.isna(place)==False:
df_place = grouped.get_group(place)
country_id = df_place["country_region"].unique()[0]
country_code = df_place["country_region_code"].unique()[0]
if (country_code in countries_of_interest and
country_code in set(df_vac["iso_code"]) and
country_code in set(df_case.columns) and
country_code in holid_code_list and
country_id not in country_label.keys()): # 1st occurrence of country contains national data
# vaccination data
#if country_id in set(df_vac["location"]):
df_place = add_vaccination(df_place, df_vac, country_code)
# case (per million) data
#if country_id in set(df_case.columns):
df_place = add_case(df_place, df_case, country_code)
# weather data
df_place = add_weather(df_place)
# regional holiday data
df_place = add_holiday(df_place, holid_code)
#if country_id not in country_label.keys(): # 1st occurrence of country contains national data
df_place['dayow'] = df_place.index.weekday # get day of the week (note that 0 corresponds to Monday)
df_place['vac_percMax'] = df_place["vac"]/max(df_place['vac'])*100 # normalize the vaccinated per hundred data
df_place['case_mil_percMax'] = df_place['case_mil']/max(df_place['case_mil'])*100 # normalize the case per million data
dict_country.update({country_code : df_place}) # the value of country_id is the nested dict mob_thisPlace
country_label[country_id] = 1
###Output
_____no_output_____
###Markdown
1.7 Save data
###Code
# save data as pickle - a dictionary (dict_country)
filePath_pickle = Path('/Users/parkj/Documents/pyDat/dataSet/covid_country_data.pickle')
with open(filePath_pickle, 'wb') as f:
pickle.dump(dict_country, f)
# load the saved dictionary from pickle file
filePath_pickle = Path('/Users/parkj/Documents/pyDat/dataSet/covid_country_data.pickle')
with open(filePath_pickle, 'rb') as f:
dict_country = pickle.load(f)
###Output
_____no_output_____
###Markdown
1.8 Appendix
###Code
# Inspect each country's data - are there any negative case values?
country_code = 'FR'
dict_country[country_code]['case_mil'].plot()
# The negative case values do not make sense, so use forward filling
count_row = 0
if dict_country[country_code]['case_mil'][0]<0:
dict_country[country_code]['case_mil'][0]=0.0
for row in dict_country[country_code].iterrows():
if count_row==0:
if row[1].case_mil<0:
dict_country[country_code]['case_mil'][count_row]=0.0
else:
if row[1].case_mil<0:
dict_country[country_code]['case_mil'][count_row]=dict_country[country_code]['case_mil'][count_row-1]
count_row += 1
dict_country[country_code]['case_mil'].plot()
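# Aside: an equivalent vectorized version of the fix above (illustrative only, not executed here):
#   s = dict_country[country_code]['case_mil']
#   dict_country[country_code]['case_mil'] = s.mask(s < 0).ffill().fillna(0)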
# If necessary, drop the country's data from the dictionary
# dict_country.pop(country_code, None)
for country in dict_country.keys():
dict_country[country].fillna(method='ffill',inplace=True)
###Output
_____no_output_____ |
CH11_origin_v3_win10.ipynb | ###Markdown
Chapter 11. Designing Your GUI with Qt (2018-06-13). Teaching materials prepared with the assistance of 薛宇辰 and 陳逸勳. A review of some UnitConverter features from Chapter 09: listing the available units.
###Code
! unitconvert energy list
###Output
_____no_output_____
###Markdown
Unit conversion
###Code
! unitconvert energy convert 2500 kcal cal j ev
###Output
_____no_output_____
###Markdown
This time we build a graphical interface for the unit-conversion features above. Install the dependencies required for the GUI; qt4-designer must also be installed for it to work properly.
###Code
!sudo apt-get install python-setuptools python-qt4 qtcreator qt4-designer
###Output
_____no_output_____
###Markdown
Launch designer-qt4
###Code
!designer-qt4
###Output
_____no_output_____
###Markdown
Install the PyQt4 Python module on WIN10. Website link: https://www.lfd.uci.edu/~gohlke/pythonlibs/pyqt4
###Code
!pip install PyQt4-4.11.4-cp37-cp37m-win_amd64.whl
###Output
_____no_output_____
###Markdown
Download Qt Designer and run it on WIN10. Website link: https://build-system.fman.io/qt-designer-download. Import packages: we need to import the relevant PyQt4 components and some components from the earlier converter program.
###Code
import os.path
from PyQt4 import uic
from PyQt4.QtGui import QApplication, QMainWindow
from PyQt4.QtCore import SIGNAL
# from ..Converter import get_table
###Output
_____no_output_____
###Markdown
Also, because we call the get_table method of Converter.py, which lives outside the current folder, directly from our code, the behavior differs from invoking it through the setup.py file. Here we need a different way to import the get_table method.
###Code
import sys
sys.path.append('..') # add the parent folder to the module search path
from Converter import get_table
###Output
_____no_output_____
###Markdown
UnitConverter.py
###Code
! cd unitconverter/gui
###Output
_____no_output_____
###Markdown
Locate the .ui file and load the objects defined in it into the program.
###Code
import os
ui_filename = os.path.splitext(__file__)[0] + '.ui'
ui_UnitConverter = uic.loadUiType(ui_filename)[0]
###Output
_____no_output_____
###Markdown
The concrete steps for obtaining the .ui file name:
###Code
ui_filename = os.path.splitext("Chapter11/unitconverter/gui/UnitConverter.py")[0] + '.ui'
print(ui_filename)
###Output
_____no_output_____
###Markdown
Let's take a look at what uic.loadUiType(ui_filename) actually returns.
###Code
print(uic.loadUiType(ui_filename))
print(type(uic.loadUiType(ui_filename)))
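# uic.loadUiType returns a (form_class, base_class) tuple, which is why [0] was taken
# above to keep only the generated form class.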
# Inherit from both the QMainWindow and ui_UnitConverter classes
class UnitConverter(QMainWindow, ui_UnitConverter):
def __init__(self, parent=None):
"""初始化視窗界面與backend程式的連接"""
# 初始化視窗畫面相關的物件
QMainWindow.__init__(self, parent)
self.setupUi(self)
        # Connect the menu's Exit action to the Qt framework's exit
self.action_Exit.triggered.connect(QApplication.exit)
        # Whenever the unit table selected in cbUnitTable changes, call unit_table_selected to refresh the units in cbSourceUnit and cbDestUnit
self.cbUnitTable.currentIndexChanged[str].connect(self.unit_table_selected)
        # The next three lines: whenever cbSourceUnit, cbDestUnit, or sbSourceValue changes, run calculate to recompute the result
self.cbSourceUnit.currentIndexChanged[str].connect(self.calculate)
self.cbDestUnit.currentIndexChanged[str].connect(self.calculate)
self.sbSourceValue.valueChanged.connect(self.calculate)
        # Initialize the fields
self.unit_table_selected(self.cbUnitTable.currentText())
def unit_table_selected(self, table_name):
"""當選取cbUnitTable中的單位的動作發生時的觸發事件"""
# 從UnitTable拿到所有的unit
table = get_table(str(table_name))
units = table.get_units()
        # Block signals on cbSourceUnit and cbDestUnit first, to avoid unnecessary calculations while the unit table is being switched
self.cbSourceUnit.blockSignals(True)
self.cbDestUnit.blockSignals(True)
        # Clear all items in the cbSourceUnit and cbDestUnit combo boxes
self.cbSourceUnit.clear()
self.cbDestUnit.clear()
        # Add every unit from the unit table to both combo boxes
for unit in units:
self.cbSourceUnit.addItem(unit)
self.cbDestUnit.addItem(unit)
        # Re-enable signals on cbSourceUnit and cbDestUnit
self.cbSourceUnit.blockSignals(False)
self.cbDestUnit.blockSignals(False)
def calculate(self):
"""處理單位換算工作"""
# 獲取視窗畫面中的文字信息
table = get_table(str(self.cbUnitTable.currentText()))
source_value = self.sbSourceValue.value()
source_unit = str(self.cbSourceUnit.currentText())
dest_unit = str(self.cbDestUnit.currentText())
        # Convert the value and display the result
result_value = table.convert(source_unit, dest_unit, source_value)
self.leDestValue.setText(str(result_value))
###Output
_____no_output_____
###Markdown
\__init__.py Import the required modules
###Code
import sys
from PyQt4.QtGui import QApplication
from UnitConverter import UnitConverter
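# Note: inside the installed package on Python 3 this would normally be written as a
# relative import, i.e. from .UnitConverter import UnitConverter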
###Output
_____no_output_____
###Markdown
Define the program's behavior when it is run
###Code
def run_gui():
app = QApplication(sys.argv)
ui_window = UnitConverter(None)
ui_window.show()
app.exec_()
###Output
_____no_output_____
###Markdown
setup.py
###Code
from setuptools import setup
setup(
name='unitconverter',
version='0.1.0',
entry_points = {
'console_scripts': ['unitconvert=unitconverter.CLI:run_cli'],
'gui_scripts': ['unitconverter-ui=unitconverter.gui:run_gui']
},
description='Command line tool for unit conversion',
classifiers=[
'Natural Language :: English',
'Programming Language :: Python :: 3.5',
'Programming Language :: Python :: 3.6',
'Programming Language :: Python :: 3.7',
],
author='Dan Nixon',
packages=['unitconverter', 'unitconverter.unit_tables', 'unitconverter.gui'],
include_package_data=True,
zip_safe=False,
package_data = {
'': ['*.ui']
})
###Output
_____no_output_____
###Markdown
Define the GUI entry point
###Code
entry_points = {
'console_scripts': ['unitconvert=unitconverter.CLI:run_cli'],
'gui_scripts': ['unitconverter-ui=unitconverter.gui:run_gui']
}
###Output
_____no_output_____
###Markdown
Files that need to be included in the package
###Code
package_data = {
'': ['*.ui']
    }
###Output
_____no_output_____
###Markdown
Install and run the program
###Code
! cd ../..
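# Note: each "!" command runs in its own subshell, so the cd above does not persist;
# run these steps from a terminal in the project root (or use %cd inside the notebook).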
! sudo python setup.py install
! unitconverter-ui
###Output
_____no_output_____ |
examples/user_guide/Dependencies_and_Watchers.ipynb | ###Markdown
Dependencies and Watchers
As outlined in the [Dynamic Parameters](Dynamic_Parameters.ipynb) guide, Param can be used in multiple ways, including as a static set of typed attributes, dynamic attributes that are computed when they are read (`param.Dynamic` parameters, a "pull" or "get" model), and using explicitly expressed chains of actions driven by events at the Parameter level (a "push" or "set" model described in this notebook). Unlike Dynamic Parameters, which calculate values when parameters are _accessed_, the dependency and watcher interface allows events to be triggered when parameters are _set_. With this interface, parameters and methods can declare that they should be updated or invoked when a given parameter is modified, spawning a cascading series of events that update settings to be consistent, adapt values as appropriate for a change, or invoke computations such as updating a displayed object when a value is modified. This approach is well suited to a GUI interface, where a user interacts with a single widget at a time but other widgets or displays need to be updated in response. The [Dynamic Parameters](Dynamic_Parameters.ipynb) approach, in contrast, is well suited when Parameters update either on read or in response to a global clock or counter, such as in a simulation or machine-learning iteration.
This user guide is structured as three main sections:
- [Dependencies](Dependencies): High-level dependency declaration via the `@param.depends()` decorator
- [Watchers](Watchers): Low-level watching mechanism via `.param.watch()`.
- [Using dependencies and watchers](Using-dependencies-and-watchers): Utilities and tools for working with events created using either dependencies or watchers.
Dependencies
Param's `depends` decorator allows a programmer to express that a given computation "depends" on a certain set of parameters. For instance, if you have parameters whose values are interlinked, it's easy to express that relationship with `depends`:
###Code
import param
class C(param.Parameterized):
_countries = {'Africa': ['Ghana', 'Togo', 'South Africa'],
'Asia' : ['China', 'Thailand', 'Japan', 'Singapore'],
'Europe': ['Austria', 'Bulgaria', 'Greece', 'Switzerland']}
continent = param.Selector(list(_countries.keys()), default='Asia')
    country = param.Selector()
    @param.depends('continent', watch=True, on_init=True)
def _update_countries(self):
countries = self._countries[self.continent]
self.param['country'].objects = countries
if self.country not in countries:
self.country = countries[0]
c = C()
c.country, c.param.country.objects
c.continent='Africa'
c.country, c.param.country.objects
c
###Output
_____no_output_____
###Markdown
As you can see, here Param updates the allowed and current values for `country` whenever someone changes the `continent` parameter. This code relies on the dependency mechanism to make sure these parameters are kept appropriately synchronized:
1. First, we set up the default continent but do not declare the `objects` and `default` for the `country` parameter. This is because this parameter depends on the `continent`, so it would be easy to set up inconsistent values, and it would make it difficult to override the default continent since changes to both parameters would need to be coordinated.
2. Next, if someone chooses a different continent, the list of countries allowed needs to be updated, so we define the method `_update_countries()` that (a) looks up the countries allowed for the current continent, (b) sets that list as the allowed objects for the `country` parameter, and (c) selects the first such country as the default country.
3. Finally, we expressed that the `_update_countries()` method depends on the `continent` parameter. We specified `watch=True` to direct Param to invoke this method immediately, whenever the value of `continent` changes. We'll see [examples of watch=False](watch=False-dependencies) later. Importantly, we also set `on_init=True`, which means that when an instance is created, the `self._update_countries()` method is automatically called, setting up the `country` parameter appropriately. This avoids having to declare an `__init__` method to call the method manually ourselves, and the potentially brittle process of setting up consistent defaults.
Dependency specs
The example above expressed a dependency of `_update_countries` on this object's `continent` parameter. A wide range of such dependency relationships can be specified:
1. **Multiple dependencies**: Here we had only one parameter in the dependency list, but you can supply any number of dependencies (`@param.depends('continent', 'country', watch=True)`).
2. **Dependencies on nested parameters**: Parameters specified can either be on this class, or on nested Parameterized objects of this class. Parameters on this class are specified as the attribute name as a simple string (like `'continent'`). Nested parameters are specified as a dot-separated string (like `'handler.strategy.i'`, if this object has a parameter `handler`, whose value is an object `strategy`, which itself has a parameter `i`). If you want to depend on some arbitrary parameter elsewhere in Python, just create an `instantiate=False` (and typically read-only) parameter on this class to hold it, then here you can specify the path to it on _this_ object.
3. **Dependencies on metadata**: By default, dependencies are tied to a parameter's current value, but dependencies can also be on any of the declared metadata about the parameter (e.g. a method could depend on `country:constant`, triggering when someone changes whether that parameter is constant, or on `country:objects`, triggering when the objects list is replaced, not just changed in place as in appending). The available metadata is listed in the `__slots__` attribute of a Parameter object (e.g. `p.param.continent.__slots__`).
4. **Dependencies on any nested param**: If you want to depend on _all_ the parameters of a nested object `n`, your method can depend on `'n.param'` (where parameter `n` has been set to a Parameterized object).
5. **Dependencies on a method name**: Often you will want to break up computation into multiple chunks, some of which are useful on their own and some of which require other computations to have been done as prerequisites. In this case, your method can declare a dependency on another method (as a string name), which means that it will now watch everything that method watches, and will then get invoked after that method is invoked.
We can see examples of all these dependency specifications in class `D` below:
###Code
class D(param.Parameterized):
x = param.Number(7)
s = param.String("never")
i = param.Integer(-5)
o = param.Selector(['red', 'green', 'blue'])
n = param.ClassSelector(param.Parameterized, c, instantiate=False)
@param.depends('x', 's', 'n.country', 's:constant', watch=True)
def cb1(self):
print(f"cb1 x={self.x} s={self.s} "
f"param.s.constant={self.param.s.constant} n.country={self.n.country}")
@param.depends('n.param', watch=True)
def cb2(self):
print(f"cb2 n={self.n}")
@param.depends('x', 'i', watch=True)
def cb3(self):
print(f"cb3 x={self.x} i={self.i}")
@param.depends('cb3', watch=True)
def cb4(self):
print(f"cb4 x={self.x} i={self.i}")
d = D()
d
###Output
_____no_output_____
###Markdown
Here we have created an object `d` of type `D` with a unique ID like `D00003`. `d` has various parameters, including one nested Parameterized object in its parameter `n`. In this class, the nested parameter is set to our earlier object `c`, using `instantiate=False` to ensure that the value is precisely the same earlier object, not a copy of it. You can verify that it is the same object by comparing e.g. `name='C00002'` in the repr for the subobject in `d` to the name in the repr for `c` in the previous section; both should be e.g. `C00002`.Dependencies are stored declaratively so that they are accessible for other libraries to inspect and use. E.g. we can now examine the dependencies for the decorated callback method `cb1`:
###Code
dependencies = d.param.method_dependencies('cb1')
[f"{o.inst.name}.{o.pobj.name}:{o.what}" for o in dependencies]
###Output
_____no_output_____
###Markdown
Here we can see that method `cb1` will be invoked for any value changes in `d`'s parameters `x` or `s`, for any value changes in `c`'s parameter `country`, and a change in the `constant` slot of `s`. These dependency relationships correspond to the specification `@param.depends('x', 's', 'n.country', 's:constant', watch=True)` above.Now, if we change `x`, we can see that Param invokes `cb1`:
###Code
d.x = 5
###Output
_____no_output_____
###Markdown
`cb3` and `cb4` are also invoked, because `cb3` depends on `x` as well, plus `cb4` depends on `cb3`, inheriting all of `cb3`'s dependencies.If we now change `c.country`, `cb1` will be invoked since `cb1` depends on `n.country`, and `n` is currently set to `c`:
###Code
c.country = 'Togo'
###Output
_____no_output_____
###Markdown
As you can see, `cb2` is also invoked, because `cb2` depends on _all_ parameters of the subobject in `n`. `continent` is also a parameter on `c`, so `cb2` will also be invoked if you change `c.continent`. Note that changing `c.continent` itself invokes `c._update_countries()`, so in that case `cb2` actually gets invoked _twice_ (once for each parameter changed on `c`), along with `cb1` (watching `n.country`):
###Code
c.continent = 'Europe'
###Output
_____no_output_____
###Markdown
Changing metadata works just the same as changing values. Because `cb1` depends on the `constant` slot of `s`, it is invoked when that slot changes:
###Code
d.param.s.constant = True
###Output
_____no_output_____
###Markdown
Importantly, if we replace a sub-object on which we have declared dependencies, Param automatically rebinds the dependencies to the new object:
###Code
d.n = C()
###Output
_____no_output_____
###Markdown
Note that if the values of the dependencies on the old and new object are the same, no event is fired. Additionally the previously bound sub-object is now no longer connected:
###Code
c.continent = 'Europe'
###Output
_____no_output_____
###Markdown
`watch=False` dependencies
The previous examples all supplied `watch=True`, indicating that Param itself should watch for changes in the dependency and invoke that method when a dependent parameter is set. If `watch=False` (the default), `@param.depends` declares that such a dependency exists, but does not automatically invoke it. `watch=False` is useful for setting up code for a separate library like [Panel](https://panel.holoviz.org) or [HoloViews](https://holoviews.org) to use, indicating which parameters the external library should watch so that it knows when to invoke the decorated method. Typically, you'll want to use `watch=False` when that external library needs to do something with the return value of the method (a functional approach), and use `watch=True` when the function is [side-effecty](https://en.wikipedia.org/wiki/Side_effect_(computer_science)), i.e. having an effect just from running it, and not normally returning a value.
For instance, consider this Param class with methods that return values to display:
###Code
class Mul(param.Parameterized):
a = param.Number(5, bounds=(-100, 100))
b = param.Number(-2, bounds=(-100, 100))
@param.depends('a', 'b')
def view(self):
return str(self.a*self.b)
def view2(self):
return str(self.a*self.b)
prod = Mul(name='Multiplier')
###Output
_____no_output_____
###Markdown
You could run this code manually:
###Code
prod.a = 7
prod.b = 10
prod.view()
###Output
_____no_output_____
###Markdown
Or you could pass the parameters and the `view` method to Panel, and let Panel invoke it as needed by following the dependency chain:
###Code
import panel as pn
pn.extension()
pn.Row(prod.param, prod.view)
###Output
_____no_output_____
###Markdown
Panel creates widgets for the parameters, runs the `view` method with the default values of those parameters, and displays the result. As long as you have a live Python process running (not just a static HTML export of this page as on param.holoviz.org), Panel will then watch for changes in those parameters due to the widgets and will re-execute the `view` method to update the output whenever one of those parameters changes. Using the dependency declarations, Panel is able to do all this without ever having to be told separately which parameters there are or what dependency relationships there are. How does that work? A library like Panel can simply ask Param what dependency relationships have been declared for the method passed to it:
###Code
[o.name for o in prod.param.method_dependencies('view')]
###Output
_____no_output_____
###Markdown
Note that in this particular case the `depends` decorator could have been omitted, because Param conservatively assumes that any method _could_ read the value of any parameter, and thus if it has no other declaration from the user, the dependencies are assumed to include _all_ parameters (including `name`, even though it is constant):
###Code
[o.name for o in prod.param.method_dependencies('view2')]
###Output
_____no_output_____
###Markdown
Conversely, if you want to declare that a given method does not depend on any parameters at all, you can use `@param.depends()`. Be sure not to set `watch=True` for dependencies for any method you pass to an external library like Panel to handle, or else that method will get invoked _twice_, once by Param itself (discarding the output) and once by the external library (using the output). Typically you will want `watch=True` for a side-effecty function or method (typically not returning a value), and `watch=False` (the default) for a function or method with a return value, and you'll need an external library to do something with that return value.
`@param.depends` with function objects
The `depends` decorator can also be used with bare functions, in which case the specification should be an actual Parameter object, not a string. The function will be called with the parameter(s)'s value(s) as positional arguments:
###Code
@param.depends(c.param.country, d.param.i, watch=True)
def g(country, i):
print(f"g country={country} i={i}")
c.country = 'Greece'
d.i = 6
###Output
_____no_output_____
###Markdown
Here you can see that in addition to the classmethods starting with `cb` previously set up to depend on the country, setting `c`'s `country` parameter or `d`'s `i` parameter now also invokes function `g`, passing in the current values of the parameters it depends on whenever the function gets invoked. `g` can then make a side effect happen such as updating any other data structure it can access that needs to be changed when `country` or `i` changes. Using `@param.depends(..., watch=False)` with a function allows providing bound standalone functions to an external library for display, just as in the `.view` method above.Of course, you can still invoke `g` with your own explicit arguments, which does not invoke any watching mechanisms:
###Code
g('USA', 7)
###Output
_____no_output_____
###Markdown
Watchers
The `depends` decorator is built on Param's lower-level `.param.watch` interface, registering the decorated method or function as a `Watcher` object associated with those parameter(s). If you're building or using a complex library like Panel, you can use the low-level Parameter watching interface to set up arbitrary chains of watchers to respond to parameter value or metadata setting:
- `obj.param.watch(fn, parameter_names, what='value', onlychanged=True, queued=False, precedence=0)`: Create and register a `Watcher` that will invoke the given callback `fn` when the `what` item (`value` or one of the Parameter's slots) is set (or more specifically, changed, if `onlychanged=True`). If `queued=True`, delays calling any events triggered during this callback's execution until all processing of the current events has been completed (breadth-first Event processing rather than the default depth-first processing). The `precedence` declares a precedence level for the Watcher that determines the priority with which the callback is executed. Lower precedence levels are executed earlier. Negative precedences are reserved for internal Watchers, i.e. those set up by `param.depends`. The `fn` will be invoked with one or more `Event` objects that have been triggered, as positional arguments. Returns a `Watcher` object, e.g. for use in `unwatch`.
- `obj.param.watch_values(fn, parameter_names, what='value', onlychanged=True, queued=False, precedence=0)`: Easier-to-use version of `obj.param.watch` specific to watching for changes in parameter values. Same as `watch`, but hard-codes `what='value'` and invokes the callback `fn` using keyword arguments _param_name_=_new_value_ rather than with a positional-argument list of `Event` objects.
- `obj.param.unwatch(watcher)`: Remove the given `Watcher` (typically obtained as the return value from `watch` or `watch_values`) from those registered on this `obj`.
To see how to use `watch` and `watch_values`, let's make a class with parameters `a` and `b` and various watchers with corresponding callback methods:
###Code
def e(e):
return f"(event: {e.name} changed from {e.old} to {e.new})"
class P(param.Parameterized):
a = param.Integer(default=0)
b = param.Integer(default=0)
def __init__(self, **params):
super().__init__(**params)
self.param.watch(self.run_a1, ['a'], queued=True, precedence=2)
self.param.watch(self.run_a2, ['a'], precedence=1)
self.param.watch(self.run_b, ['b'])
def run_a1(self, event):
self.b += 1
print('a1', self.a, e(event))
def run_a2(self, event):
print('a2', self.a, e(event))
def run_b(self, event):
print('b', self.b, e(event))
p = P()
p.a = 1
###Output
_____no_output_____
###Markdown
Here, we have set up three Watchers, each invoking a method on `P` when either `a` or `b` changes. The first Watcher invokes `run_a1` when `a` changes, and in turn `run_a1` changes `b`. Since `queued=True` for `run_a1`, `run_b` is not invoked while `run_a1` executes, but only later once both `run_a1` and `run_a2` have completed (since both Watchers were triggered by the original event `p.a=1`).Additionally we have set a higher `precedence` value for `run_a1` which results in it being executed **after** `run_a2`.Here we're using data from the `Event` objects given to each callback to see what's changed; try `help(param.parameterized.Event)` for details of what is in these objects (and similarly try the help for `Watcher` (returned by `watch`) or `PInfo` (returned by `.param.method_dependencies`)).
###Code
#help(param.parameterized.Event)
#help(param.parameterized.Watcher)
#help(param.parameterized.PInfo)
###Output
_____no_output_____
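###Markdown
As a minimal illustrative sketch (not from the original text), the `watch_values` and `unwatch` methods described above can be exercised on the same object: the callback receives keyword arguments rather than `Event` objects, and removing the returned `Watcher` stops further notifications.
###Code
def report(**kwargs):
    print('changed:', kwargs)

w = p.param.watch_values(report, ['a', 'b'])  # callback gets param_name=new_value keywords
p.a = 10                                      # report is invoked (alongside the watchers defined above)
p.param.unwatch(w)                            # deregister the Watcher
p.a = 11                                      # report is no longer invoked
###Output
_____no_output_____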
###Markdown
Using dependencies and watchers

Whether you use the `watch` or the `depends` approach, Param will store a set of `Watcher` objects on each `Parameterized` object that let it manage and process `Event`s. Param provides various context managers, methods, and Parameters that help you work with Watchers and Events:

- [`batch_call_watchers`](batch_call_watchers): context manager accumulating and eliding multiple Events to be applied on exit from the context
- [`discard_events`](discard_events): context manager silently discarding events generated while in the context
- [`.param.trigger`](.param.trigger): method to force creation of an Event for this Parameter's Watchers without a corresponding change to the Parameter
- [Event Parameter](Event-Parameter): Special Parameter type providing triggerable transient Events (like a momentary push button)
- [Async executor](Async-executor): Support for asynchronous processing of Events, e.g. for interfacing to external servers

Each of these will be described in the following sections.

`batch_call_watchers`

Context manager that accumulates parameter changes on the supplied object and dispatches them all at once when the context is exited, to allow multiple changes to a given parameter to be accumulated and short-circuited, rather than prompting serial changes from a batch of parameter setting:
###Code
with param.parameterized.batch_call_watchers(p):
p.a = 2
p.a = 3
p.a = 1
p.a = 5
###Output
_____no_output_____
###Markdown
Here, even though `p.a` is changed four times, each of the watchers of `a` is executed only once, with the final value. One of those events then changes `b`, so `b`'s watcher is also executed once.If we set `b` explicitly, `b`'s watcher will be invoked twice, once for the explicit setting of `b`, and once because of the code `self.b += 1`:
###Code
with param.parameterized.batch_call_watchers(p):
p.a = 2
p.b = 8
p.a = 3
p.a = 1
p.a = 5
###Output
_____no_output_____
###Markdown
If all you need to do is set a batch of parameters, you can use `.update` instead of `batch_call_watchers`, which has the same underlying batching mechanism:
###Code
p.param.update(a=9,b=2)
###Output
_____no_output_____
###Markdown
`discard_events`Context manager that discards any events within its scope that are triggered on the supplied parameterized object. Useful for making silent changes to dependent parameters, e.g. in a setup phase. If your dependencies are meant to ensure consistency between parameters, be careful that your manual changes in this context don't put the object into an inconsistent state!
###Code
with param.parameterized.discard_events(p):
p.a = 2
p.b = 9
###Output
_____no_output_____
###Markdown
(Notice that none of the callbacks is invoked, despite all the Watchers on `p.a` and `p.b`.) `.param.trigger`Usually, a Watcher will be invoked only when a parameter is set (and only if it is changed, by default). What if you want to trigger a Watcher in other cases? For instance, if a parameter value is a mutable container like a list and you add or change an item in that container, Param's `set` method will never be invoked, because in Python the container itself is not changed when the contents are changed. In such cases, you can trigger a watcher explicitly, using `.param.trigger(*param_names)`. Triggering does not affect parameter values, apart from the special parameters of type Event (see [below](Event-Parameter:)).For instance, if you set `p.b` to the value it already has, no callback will normally be invoked:
###Code
p.b = p.b
###Output
_____no_output_____
###Markdown
But if you explicitly trigger parameter `b` on `p`, `run_b` will be invoked, even though the value of `b` is not changing:
###Code
p.param.trigger('b')
###Output
_____no_output_____
###Markdown
If you trigger `a`, the usual series of chained events will be triggered, including changing `b`:
###Code
p.param.trigger('a')
###Output
_____no_output_____
###Markdown
`Event` ParameterAn Event Parameter is a special Parameter type whose value is intimately linked to the triggering of events for Watchers to consume. Event has a Boolean value, which when set to `True` triggers the associated watchers (as any Parameter does) but then is automatically set back to `False`. Conversely, if events are triggered directly on a `param.Event` via `.trigger`, the value is transiently set to True (so that it's clear which of many parameters being watched may have changed), then restored to False when the triggering completes. An Event parameter is thus like a momentary switch or pushbutton with a transient True value that normally serves only to launch some other action (e.g. via a `param.depends` decorator or a watcher), rather than encapsulating the action itself as `param.Action` does.
###Code
class Q(param.Parameterized):
e = param.Event()
@param.depends('e', watch=True)
def callback(self):
print(f'e=={self.e}')
q = Q()
q.e = True
q.e
q.param.trigger('e')
q.e
###Output
_____no_output_____
###Markdown
Async executorParam's events and callbacks described above are all synchronous, happening in a clearly defined order where the processing of each function blocks all other processing until it is completed. Watchers can also be used with the Python3 asyncio [`async`/`await`](https://docs.python.org/3/library/asyncio-task.html) support to operate asynchronously. To do this, you can define `param.parameterized.async_executor` with an asynchronous executor that schedules tasks on an event loop from e.g. Tornado or the [asyncio](https://docs.python.org/3/library/asyncio.html) library, which will allow you to use coroutines and other asynchronous functions as `.param.watch` callbacks.As an example, you can use the Tornado IOLoop underlying this Jupyter Notebook by putting events on the event loop and watching for results to accumulate:
###Code
import param, asyncio, aiohttp
from tornado.ioloop import IOLoop
param.parameterized.async_executor = IOLoop.current().add_callback
class Downloader(param.Parameterized):
url = param.String()
results = param.List()
def __init__(self, **params):
super().__init__(**params)
self.param.watch(self.fetch, ['url'])
async def fetch(self, event):
async with aiohttp.ClientSession() as session:
async with session.get(event.new) as response:
img = await response.read()
self.results.append((event.new, img))
f = Downloader()
n = 7
for index in range(n):
f.url = f"https://picsum.photos/800/300?image={index}"
f.results
###Output
_____no_output_____
###Markdown
When you execute the above cell, you will normally get `[]`, indicating that there are not yet any results available. What the code does is to request `n` (here 7) different images from an image site by repeatedly setting the `url` parameter of `Downloader` to a new URL. Each time the `url` parameter is modified, because of the `self.param.watch` call, the `self.fetch` callback is invoked. Because it is marked `async` and uses `await` internally, the method call returns without waiting for results to be available. Once the `await`ed results are available, the method continues with its execution and a tuple (_image_url_, _image_data_) is added to the `results` parameter. If you need to have all the results available (and have an internet connection!), you can wait for them:
###Code
print("Waiting: ", end="")
while len(f.results)<n:
print(f"{len(f.results)} ", end="")
await asyncio.sleep(0.05)
[t[0] for t in f.results]
###Output
_____no_output_____ |
2019/05-Metricas_de_Avaliacao/3-Validacao_Cruzada_2.ipynb | ###Markdown
Cross-Validation - Part 2
###Code
import pandas as pd
import numpy as np
from sklearn import model_selection
from sklearn import neighbors
df = pd.read_csv('iris-dataset.csv', header=None)
df.columns = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width', 'class']
df.head()
X = df.values[:, :-1]
y = df.values[:, -1]
model = neighbors.KNeighborsClassifier(n_neighbors=3)
model_selection.cross_val_score(model, X, y, cv=3)
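# Illustrative extension (not in the original): summarize the fold scores and use an explicit
# stratified splitter, which is the usual choice for a classification target.
scores = model_selection.cross_val_score(model, X, y, cv=model_selection.StratifiedKFold(n_splits=5, shuffle=True, random_state=0))
print(scores.mean(), scores.std())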
###Output
_____no_output_____ |
PA approval rate visualization.ipynb | ###Markdown
PA approval rate visualization and analysisIn this notebook, we look at how PA approval rates relate to two or more of the following factors: reject_code, correct_diagnosis, tried_and_failed, contraindication, drug, payer BIN.
###Code
# We first import all the libraries needed
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import matplotlib.axes
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
cmm = pd.read_csv("Data/CMM.csv")
cmm_pa = cmm[cmm['dim_pa_id'].notna()]
cmm_pa_train, cmm_pa_test = train_test_split(cmm_pa, test_size = 0.2,
random_state = 10475, shuffle = True,
stratify = cmm_pa.pa_approved)
cmm_pa_train.head()
###Output
_____no_output_____
###Markdown
Encode reject codes, drugs, and payer BINs
###Code
label = LabelEncoder()
cmm_pa_train.loc[:,'drug_label'] = label.fit_transform(cmm_pa_train.loc[:,'drug'].copy()).copy()
cmm_pa_train.loc[:,'reject_code_label'] = label.fit_transform(cmm_pa_train.loc[:,'reject_code'].copy()).copy()
cmm_pa_train.loc[:,'bin_label'] = label.fit_transform(cmm_pa_train.loc[:,'bin'].copy()).copy()
cmm_pa_train.head()
# The SettingWithCopyWarning below appears because cmm_pa_train was derived by slicing (cmm -> cmm_pa -> train/test split),
# so pandas flags it as a possible copy; see the cell after this output for one way to avoid it.
###Output
/Users/yueqiao/opt/anaconda3/lib/python3.8/site-packages/pandas/core/indexing.py:1676: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
self._setitem_single_column(ilocs[0], value, pi)
/Users/yueqiao/opt/anaconda3/lib/python3.8/site-packages/pandas/core/indexing.py:1597: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
self.obj[key] = value
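###Markdown
A minimal way to avoid the `SettingWithCopyWarning` above (an assumption about the intended workflow, not part of the original analysis) is to materialize the training split as an independent frame before adding the encoded columns, so the assignments no longer target a potential view:
###Code
cmm_pa_train = cmm_pa_train.copy()  # independent copy, so column assignments are unambiguous
cmm_pa_train['drug_label'] = label.fit_transform(cmm_pa_train['drug'])
cmm_pa_train['reject_code_label'] = label.fit_transform(cmm_pa_train['reject_code'])
cmm_pa_train['bin_label'] = label.fit_transform(cmm_pa_train['bin'])
cmm_pa_train.head()
###Output
_____no_output_____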
###Markdown
Scatter graphs of interactions of two categories. We will see how two of the factors interact and affect the PA approval rate.
###Code
# Models containing two categories
models = [['pa_approved','reject_code_label', 'correct_diagnosis'],
['pa_approved','reject_code_label', 'tried_and_failed'],
['pa_approved','reject_code_label', 'contraindication'],
['pa_approved','reject_code_label', 'drug_label'],
['pa_approved','correct_diagnosis', 'tried_and_failed'],
['pa_approved','correct_diagnosis', 'contraindication'],
['pa_approved','correct_diagnosis', 'drug_label'],
['pa_approved','tried_and_failed', 'contraindication'],
['pa_approved','tried_and_failed', 'drug_label'],
['pa_approved','contraindication','drug_label'],
['pa_approved','reject_code_label', 'bin_label'],
['pa_approved','correct_diagnosis', 'bin_label'],
['pa_approved','contraindication', 'bin_label'],
['pa_approved','tried_and_failed', 'bin_label'],
['pa_approved','drug_label', 'bin_label']]
#A scaling constant for the area of scatter plot
alpha = 0.01
for model in models:
# A new dataframe storing number of occurences of each possibility
occurence_df = pd.DataFrame(cmm_pa_train[model].value_counts().copy())
occurence_df = occurence_df.reset_index()
column = model.copy()
column.append('counts')
occurence_df.columns = column
# A scatter graph showing how two categories interact with each other.
# For example, if model = ['pa_approved', 'correct_diagnosis', 'contraindication'],
# then the graph shows 8 points, in four locations (0,0), (1,0), (0,1) and (1,1).
# The areas of two points at (1,0) shows the number of PA approved claims vs PA not approved claims
# when the diagnosis is correct and there is no contraindication.
plt.figure(figsize = (10,10))
plt.scatter(occurence_df[model[1]][occurence_df.pa_approved==1],
occurence_df[model[2]][occurence_df.pa_approved==1],
occurence_df['counts'][occurence_df.pa_approved==1]*alpha,
c='blue', label="PA approved")
plt.scatter(occurence_df[model[1]][occurence_df.pa_approved==0],
occurence_df[model[2]][occurence_df.pa_approved==0],
occurence_df['counts'][occurence_df.pa_approved==0]*alpha,
c='orange', label="PA not approved")
plt.xlabel(model[1],fontsize = 16)
plt.ylabel(model[2],fontsize = 16)
plt.legend(fontsize='14', title_fontsize='16')
plt.show()
###Output
_____no_output_____
###Markdown
Numerical analysis of interactions of two categories. We will see the PA approval rate for each fixed combination of values of the two categories.
###Code
for model in models:
# A new dataframe storing number of occurences of each possibility
occurence_df = pd.DataFrame(cmm_pa_train[model].value_counts(sort=False).copy())
occurence_df = occurence_df.reset_index()
column = model.copy()
column.append('counts')
occurence_df.columns = column
# print the numerical data
print(model[1], "vs", model[2], ":")
for i in range(int(len(occurence_df)/2)):
percentage = occurence_df.loc[i+int(len(occurence_df)/2), 'counts']/(occurence_df.loc[i, 'counts']
+occurence_df.loc[i+int(len(occurence_df)/2), 'counts'])
percentage = round(percentage*100, 2)
print("The percentage of PA approval when ", model[1], "is", occurence_df.loc[i, model[1]],
"and ", model[2], "is ", occurence_df.loc[i, model[2]], "is ", percentage,"%")
print(" ")
###Output
reject_code_label vs correct_diagnosis :
The percentage of PA approval when reject_code_label is 0 and correct_diagnosis is 0.0 is 44.68 %
The percentage of PA approval when reject_code_label is 0 and correct_diagnosis is 1.0 is 51.22 %
The percentage of PA approval when reject_code_label is 1 and correct_diagnosis is 0.0 is 93.88 %
The percentage of PA approval when reject_code_label is 1 and correct_diagnosis is 1.0 is 95.03 %
The percentage of PA approval when reject_code_label is 2 and correct_diagnosis is 0.0 is 86.38 %
The percentage of PA approval when reject_code_label is 2 and correct_diagnosis is 1.0 is 88.9 %
reject_code_label vs tried_and_failed :
The percentage of PA approval when reject_code_label is 0 and tried_and_failed is 0.0 is 40.47 %
The percentage of PA approval when reject_code_label is 0 and tried_and_failed is 1.0 is 59.25 %
The percentage of PA approval when reject_code_label is 1 and tried_and_failed is 0.0 is 92.83 %
The percentage of PA approval when reject_code_label is 1 and tried_and_failed is 1.0 is 96.78 %
The percentage of PA approval when reject_code_label is 2 and tried_and_failed is 0.0 is 84.54 %
The percentage of PA approval when reject_code_label is 2 and tried_and_failed is 1.0 is 92.26 %
reject_code_label vs contraindication :
The percentage of PA approval when reject_code_label is 0 and contraindication is 0.0 is 57.2 %
The percentage of PA approval when reject_code_label is 0 and contraindication is 1.0 is 20.96 %
The percentage of PA approval when reject_code_label is 1 and contraindication is 0.0 is 97.12 %
The percentage of PA approval when reject_code_label is 1 and contraindication is 1.0 is 85.45 %
The percentage of PA approval when reject_code_label is 2 and contraindication is 0.0 is 92.75 %
The percentage of PA approval when reject_code_label is 2 and contraindication is 1.0 is 71.13 %
reject_code_label vs drug_label :
The percentage of PA approval when reject_code_label is 0 and drug_label is 0 is 58.23 %
The percentage of PA approval when reject_code_label is 0 and drug_label is 1 is 38.88 %
The percentage of PA approval when reject_code_label is 0 and drug_label is 2 is 32.84 %
The percentage of PA approval when reject_code_label is 1 and drug_label is 0 is 99.02 %
The percentage of PA approval when reject_code_label is 1 and drug_label is 1 is 97.31 %
The percentage of PA approval when reject_code_label is 1 and drug_label is 2 is 83.4 %
The percentage of PA approval when reject_code_label is 2 and drug_label is 0 is 94.7 %
The percentage of PA approval when reject_code_label is 2 and drug_label is 1 is 92.63 %
The percentage of PA approval when reject_code_label is 2 and drug_label is 2 is 72.28 %
correct_diagnosis vs tried_and_failed :
The percentage of PA approval when correct_diagnosis is 0.0 and tried_and_failed is 0.0 is 64.56 %
The percentage of PA approval when correct_diagnosis is 0.0 and tried_and_failed is 1.0 is 76.18 %
The percentage of PA approval when correct_diagnosis is 1.0 and tried_and_failed is 0.0 is 68.69 %
The percentage of PA approval when correct_diagnosis is 1.0 and tried_and_failed is 1.0 is 79.7 %
correct_diagnosis vs contraindication :
The percentage of PA approval when correct_diagnosis is 0.0 and contraindication is 0.0 is 75.43 %
The percentage of PA approval when correct_diagnosis is 0.0 and contraindication is 1.0 is 50.34 %
The percentage of PA approval when correct_diagnosis is 1.0 and contraindication is 0.0 is 79.08 %
The percentage of PA approval when correct_diagnosis is 1.0 and contraindication is 1.0 is 54.73 %
correct_diagnosis vs drug_label :
The percentage of PA approval when correct_diagnosis is 0.0 and drug_label is 0 is 73.3 %
The percentage of PA approval when correct_diagnosis is 0.0 and drug_label is 1 is 73.39 %
The percentage of PA approval when correct_diagnosis is 0.0 and drug_label is 2 is 58.69 %
The percentage of PA approval when correct_diagnosis is 1.0 and drug_label is 0 is 76.95 %
The percentage of PA approval when correct_diagnosis is 1.0 and drug_label is 1 is 76.5 %
The percentage of PA approval when correct_diagnosis is 1.0 and drug_label is 2 is 64.12 %
tried_and_failed vs contraindication :
The percentage of PA approval when tried_and_failed is 0.0 and contraindication is 0.0 is 72.96 %
The percentage of PA approval when tried_and_failed is 0.0 and contraindication is 1.0 is 47.57 %
The percentage of PA approval when tried_and_failed is 1.0 and contraindication is 0.0 is 83.72 %
The percentage of PA approval when tried_and_failed is 1.0 and contraindication is 1.0 is 60.12 %
tried_and_failed vs drug_label :
The percentage of PA approval when tried_and_failed is 0.0 and drug_label is 0 is 70.61 %
The percentage of PA approval when tried_and_failed is 0.0 and drug_label is 1 is 71.62 %
The percentage of PA approval when tried_and_failed is 0.0 and drug_label is 2 is 55.74 %
The percentage of PA approval when tried_and_failed is 1.0 and drug_label is 0 is 81.81 %
The percentage of PA approval when tried_and_failed is 1.0 and drug_label is 1 is 80.11 %
The percentage of PA approval when tried_and_failed is 1.0 and drug_label is 2 is 70.31 %
contraindication vs drug_label :
The percentage of PA approval when contraindication is 0.0 and drug_label is 0 is 81.13 %
The percentage of PA approval when contraindication is 0.0 and drug_label is 1 is 79.72 %
The percentage of PA approval when contraindication is 0.0 and drug_label is 2 is 69.41 %
The percentage of PA approval when contraindication is 1.0 and drug_label is 0 is 56.68 %
The percentage of PA approval when contraindication is 1.0 and drug_label is 1 is 60.48 %
The percentage of PA approval when contraindication is 1.0 and drug_label is 2 is 37.53 %
reject_code_label vs bin_label :
The percentage of PA approval when reject_code_label is 0 and bin_label is 0 is 32.84 %
The percentage of PA approval when reject_code_label is 0 and bin_label is 1 is 58.23 %
The percentage of PA approval when reject_code_label is 0 and bin_label is 2 is 38.88 %
The percentage of PA approval when reject_code_label is 1 and bin_label is 0 is 99.02 %
The percentage of PA approval when reject_code_label is 1 and bin_label is 1 is 97.31 %
The percentage of PA approval when reject_code_label is 1 and bin_label is 2 is 83.4 %
The percentage of PA approval when reject_code_label is 2 and bin_label is 0 is 90.92 %
The percentage of PA approval when reject_code_label is 2 and bin_label is 1 is 64.37 %
The percentage of PA approval when reject_code_label is 2 and bin_label is 2 is 90.3 %
The percentage of PA approval when reject_code_label is 2 and bin_label is 3 is 90.58 %
correct_diagnosis vs bin_label :
The percentage of PA approval when correct_diagnosis is 0.0 and bin_label is 0 is 76.51 %
The percentage of PA approval when correct_diagnosis is 0.0 and bin_label is 1 is 67.55 %
The percentage of PA approval when correct_diagnosis is 0.0 and bin_label is 2 is 59.01 %
The percentage of PA approval when correct_diagnosis is 0.0 and bin_label is 3 is 88.92 %
The percentage of PA approval when correct_diagnosis is 1.0 and bin_label is 0 is 79.13 %
The percentage of PA approval when correct_diagnosis is 1.0 and bin_label is 1 is 72.03 %
The percentage of PA approval when correct_diagnosis is 1.0 and bin_label is 2 is 63.87 %
The percentage of PA approval when correct_diagnosis is 1.0 and bin_label is 3 is 90.99 %
contraindication vs bin_label :
The percentage of PA approval when contraindication is 0.0 and bin_label is 0 is 81.07 %
The percentage of PA approval when contraindication is 0.0 and bin_label is 1 is 77.06 %
The percentage of PA approval when contraindication is 0.0 and bin_label is 2 is 69.07 %
The percentage of PA approval when contraindication is 0.0 and bin_label is 3 is 94.45 %
The percentage of PA approval when contraindication is 1.0 and bin_label is 0 is 68.84 %
The percentage of PA approval when contraindication is 1.0 and bin_label is 1 is 47.49 %
The percentage of PA approval when contraindication is 1.0 and bin_label is 2 is 38.15 %
The percentage of PA approval when contraindication is 1.0 and bin_label is 3 is 75.02 %
###Markdown
Numerical analysis of interactions of three or more categories
###Code
## Make all potential feature combos
# This function was modified from stackexchange user hughdbrown
# at this link,
# https://stackoverflow.com/questions/1482308/how-to-get-all-subsets-of-a-set-powerset
# This returns the power set of a set minus the empty set
def powerset(s):
power_set = []
x = len(s)
for i in range(1 << x):
power_set.append([s[j] for j in range(x) if (i & (1 << j))])
return power_set[1:]
all_models = powerset(['reject_code_label',
'correct_diagnosis',
'contraindication',
'tried_and_failed',
'drug_label', 'bin_label'])
# extra_models contains combinations of three or more factors
extra_models = []
for model in all_models:
if len(model)>2:
model.insert(0, 'pa_approved')
extra_models.append(model)
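# Sanity check (illustrative): with 6 base factors the powerset has 2**6 - 1 = 63 non-empty
# subsets, of which C(6,3)+C(6,4)+C(6,5)+C(6,6) = 20+15+6+1 = 42 contain three or more factors.
assert len(extra_models) == 42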
for model in extra_models:
# A new dataframe storing number of occurences of each possibility
occurence_df = pd.DataFrame(cmm_pa_train[model].value_counts(sort=False).copy())
occurence_df = occurence_df.reset_index()
column = model.copy()
column.append('counts')
occurence_df.columns = column
#print(occurence_df)
#print header
for i in range(1, len(model)):
print(model[i], end = " ")
print(":")
#print the data analysis
for j in range(int(len(occurence_df)/2)):
percentage = occurence_df.loc[j+int(len(occurence_df)/2), 'counts']/(occurence_df.loc[j, 'counts']
+occurence_df.loc[j+int(len(occurence_df)/2), 'counts'])
percentage = round(percentage*100, 2)
print("The percentage of PA approval when ")
for k in range (1, len(model)):
print(model[k], "is", occurence_df.loc[j, model[k]], end=", ")
print("is", percentage, "%")
print(" ")
###Output
reject_code_label correct_diagnosis contraindication :
The percentage of PA approval when
reject_code_label is 0, correct_diagnosis is 0.0, contraindication is 0.0, is 51.7 %
The percentage of PA approval when
reject_code_label is 0, correct_diagnosis is 0.0, contraindication is 1.0, is 17.15 %
The percentage of PA approval when
reject_code_label is 0, correct_diagnosis is 1.0, contraindication is 0.0, is 58.57 %
The percentage of PA approval when
reject_code_label is 0, correct_diagnosis is 1.0, contraindication is 1.0, is 21.92 %
The percentage of PA approval when
reject_code_label is 1, correct_diagnosis is 0.0, contraindication is 0.0, is 96.51 %
The percentage of PA approval when
reject_code_label is 1, correct_diagnosis is 0.0, contraindication is 1.0, is 83.23 %
The percentage of PA approval when
reject_code_label is 1, correct_diagnosis is 1.0, contraindication is 0.0, is 97.28 %
The percentage of PA approval when
reject_code_label is 1, correct_diagnosis is 1.0, contraindication is 1.0, is 86.0 %
The percentage of PA approval when
reject_code_label is 2, correct_diagnosis is 0.0, contraindication is 0.0, is 91.46 %
The percentage of PA approval when
reject_code_label is 2, correct_diagnosis is 0.0, contraindication is 1.0, is 66.62 %
The percentage of PA approval when
reject_code_label is 2, correct_diagnosis is 1.0, contraindication is 0.0, is 93.06 %
The percentage of PA approval when
reject_code_label is 2, correct_diagnosis is 1.0, contraindication is 1.0, is 72.27 %
reject_code_label correct_diagnosis tried_and_failed :
The percentage of PA approval when
reject_code_label is 0, correct_diagnosis is 0.0, tried_and_failed is 0.0, is 35.07 %
The percentage of PA approval when
reject_code_label is 0, correct_diagnosis is 0.0, tried_and_failed is 1.0, is 54.22 %
The percentage of PA approval when
reject_code_label is 0, correct_diagnosis is 1.0, tried_and_failed is 0.0, is 41.82 %
The percentage of PA approval when
reject_code_label is 0, correct_diagnosis is 1.0, tried_and_failed is 1.0, is 60.51 %
The percentage of PA approval when
reject_code_label is 1, correct_diagnosis is 0.0, tried_and_failed is 0.0, is 91.74 %
The percentage of PA approval when
reject_code_label is 1, correct_diagnosis is 0.0, tried_and_failed is 1.0, is 96.03 %
The percentage of PA approval when
reject_code_label is 1, correct_diagnosis is 1.0, tried_and_failed is 0.0, is 93.1 %
The percentage of PA approval when
reject_code_label is 1, correct_diagnosis is 1.0, tried_and_failed is 1.0, is 96.96 %
The percentage of PA approval when
reject_code_label is 2, correct_diagnosis is 0.0, tried_and_failed is 0.0, is 82.07 %
The percentage of PA approval when
reject_code_label is 2, correct_diagnosis is 0.0, tried_and_failed is 1.0, is 90.68 %
The percentage of PA approval when
reject_code_label is 2, correct_diagnosis is 1.0, tried_and_failed is 0.0, is 85.15 %
The percentage of PA approval when
reject_code_label is 2, correct_diagnosis is 1.0, tried_and_failed is 1.0, is 92.65 %
reject_code_label contraindication tried_and_failed :
The percentage of PA approval when
reject_code_label is 0, contraindication is 0.0, tried_and_failed is 0.0, is 47.32 %
The percentage of PA approval when
reject_code_label is 0, contraindication is 0.0, tried_and_failed is 1.0, is 66.98 %
The percentage of PA approval when
reject_code_label is 0, contraindication is 1.0, tried_and_failed is 0.0, is 13.2 %
The percentage of PA approval when
reject_code_label is 0, contraindication is 1.0, tried_and_failed is 1.0, is 28.61 %
The percentage of PA approval when
reject_code_label is 1, contraindication is 0.0, tried_and_failed is 0.0, is 95.79 %
The percentage of PA approval when
reject_code_label is 1, contraindication is 0.0, tried_and_failed is 1.0, is 98.46 %
The percentage of PA approval when
reject_code_label is 1, contraindication is 1.0, tried_and_failed is 0.0, is 81.01 %
The percentage of PA approval when
reject_code_label is 1, contraindication is 1.0, tried_and_failed is 1.0, is 89.95 %
The percentage of PA approval when
reject_code_label is 2, contraindication is 0.0, tried_and_failed is 0.0, is 89.87 %
The percentage of PA approval when
reject_code_label is 2, contraindication is 0.0, tried_and_failed is 1.0, is 95.62 %
The percentage of PA approval when
reject_code_label is 2, contraindication is 1.0, tried_and_failed is 0.0, is 63.36 %
The percentage of PA approval when
reject_code_label is 2, contraindication is 1.0, tried_and_failed is 1.0, is 78.88 %
correct_diagnosis contraindication tried_and_failed :
The percentage of PA approval when
correct_diagnosis is 0.0, contraindication is 0.0, tried_and_failed is 0.0, is 69.77 %
The percentage of PA approval when
correct_diagnosis is 0.0, contraindication is 0.0, tried_and_failed is 1.0, is 81.06 %
The percentage of PA approval when
correct_diagnosis is 0.0, contraindication is 1.0, tried_and_failed is 0.0, is 44.03 %
The percentage of PA approval when
correct_diagnosis is 0.0, contraindication is 1.0, tried_and_failed is 1.0, is 56.7 %
The percentage of PA approval when
correct_diagnosis is 1.0, contraindication is 0.0, tried_and_failed is 0.0, is 73.76 %
The percentage of PA approval when
correct_diagnosis is 1.0, contraindication is 0.0, tried_and_failed is 1.0, is 84.38 %
The percentage of PA approval when
correct_diagnosis is 1.0, contraindication is 1.0, tried_and_failed is 0.0, is 48.46 %
The percentage of PA approval when
correct_diagnosis is 1.0, contraindication is 1.0, tried_and_failed is 1.0, is 60.98 %
reject_code_label correct_diagnosis contraindication tried_and_failed :
The percentage of PA approval when
reject_code_label is 0, correct_diagnosis is 0.0, contraindication is 0.0, tried_and_failed is 0.0, is 41.45 %
The percentage of PA approval when
reject_code_label is 0, correct_diagnosis is 0.0, contraindication is 0.0, tried_and_failed is 1.0, is 61.84 %
The percentage of PA approval when
reject_code_label is 0, correct_diagnosis is 0.0, contraindication is 1.0, tried_and_failed is 0.0, is 10.26 %
The percentage of PA approval when
reject_code_label is 0, correct_diagnosis is 0.0, contraindication is 1.0, tried_and_failed is 1.0, is 24.08 %
The percentage of PA approval when
reject_code_label is 0, correct_diagnosis is 1.0, contraindication is 0.0, tried_and_failed is 0.0, is 48.79 %
The percentage of PA approval when
reject_code_label is 0, correct_diagnosis is 1.0, contraindication is 0.0, tried_and_failed is 1.0, is 68.26 %
The percentage of PA approval when
reject_code_label is 0, correct_diagnosis is 1.0, contraindication is 1.0, tried_and_failed is 0.0, is 13.95 %
The percentage of PA approval when
reject_code_label is 0, correct_diagnosis is 1.0, contraindication is 1.0, tried_and_failed is 1.0, is 29.75 %
The percentage of PA approval when
reject_code_label is 1, correct_diagnosis is 0.0, contraindication is 0.0, tried_and_failed is 0.0, is 95.06 %
The percentage of PA approval when
reject_code_label is 1, correct_diagnosis is 0.0, contraindication is 0.0, tried_and_failed is 1.0, is 97.97 %
The percentage of PA approval when
reject_code_label is 1, correct_diagnosis is 0.0, contraindication is 1.0, tried_and_failed is 0.0, is 78.31 %
The percentage of PA approval when
reject_code_label is 1, correct_diagnosis is 0.0, contraindication is 1.0, tried_and_failed is 1.0, is 88.17 %
The percentage of PA approval when
reject_code_label is 1, correct_diagnosis is 1.0, contraindication is 0.0, tried_and_failed is 0.0, is 95.97 %
The percentage of PA approval when
reject_code_label is 1, correct_diagnosis is 1.0, contraindication is 0.0, tried_and_failed is 1.0, is 98.59 %
The percentage of PA approval when
reject_code_label is 1, correct_diagnosis is 1.0, contraindication is 1.0, tried_and_failed is 0.0, is 81.68 %
The percentage of PA approval when
reject_code_label is 1, correct_diagnosis is 1.0, contraindication is 1.0, tried_and_failed is 1.0, is 90.4 %
The percentage of PA approval when
reject_code_label is 2, correct_diagnosis is 0.0, contraindication is 0.0, tried_and_failed is 0.0, is 88.17 %
The percentage of PA approval when
reject_code_label is 2, correct_diagnosis is 0.0, contraindication is 0.0, tried_and_failed is 1.0, is 94.73 %
The percentage of PA approval when
reject_code_label is 2, correct_diagnosis is 0.0, contraindication is 1.0, tried_and_failed is 0.0, is 58.56 %
The percentage of PA approval when
reject_code_label is 2, correct_diagnosis is 0.0, contraindication is 1.0, tried_and_failed is 1.0, is 74.77 %
The percentage of PA approval when
reject_code_label is 2, correct_diagnosis is 1.0, contraindication is 0.0, tried_and_failed is 0.0, is 90.28 %
The percentage of PA approval when
reject_code_label is 2, correct_diagnosis is 1.0, contraindication is 0.0, tried_and_failed is 1.0, is 95.84 %
The percentage of PA approval when
reject_code_label is 2, correct_diagnosis is 1.0, contraindication is 1.0, tried_and_failed is 0.0, is 64.58 %
The percentage of PA approval when
reject_code_label is 2, correct_diagnosis is 1.0, contraindication is 1.0, tried_and_failed is 1.0, is 79.9 %
reject_code_label correct_diagnosis drug_label :
The percentage of PA approval when
reject_code_label is 0, correct_diagnosis is 0.0, drug_label is 0, is 53.22 %
The percentage of PA approval when
reject_code_label is 0, correct_diagnosis is 0.0, drug_label is 1, is 33.68 %
The percentage of PA approval when
reject_code_label is 0, correct_diagnosis is 0.0, drug_label is 2, is 26.82 %
The percentage of PA approval when
reject_code_label is 0, correct_diagnosis is 1.0, drug_label is 0, is 59.48 %
The percentage of PA approval when
reject_code_label is 0, correct_diagnosis is 1.0, drug_label is 1, is 40.18 %
The percentage of PA approval when
reject_code_label is 0, correct_diagnosis is 1.0, drug_label is 2, is 34.36 %
The percentage of PA approval when
reject_code_label is 1, correct_diagnosis is 0.0, drug_label is 0, is 98.7 %
The percentage of PA approval when
reject_code_label is 1, correct_diagnosis is 0.0, drug_label is 1, is 96.49 %
The percentage of PA approval when
reject_code_label is 1, correct_diagnosis is 0.0, drug_label is 2, is 81.0 %
The percentage of PA approval when
reject_code_label is 1, correct_diagnosis is 1.0, drug_label is 0, is 99.1 %
The percentage of PA approval when
reject_code_label is 1, correct_diagnosis is 1.0, drug_label is 1, is 97.52 %
The percentage of PA approval when
reject_code_label is 1, correct_diagnosis is 1.0, drug_label is 2, is 83.98 %
The percentage of PA approval when
reject_code_label is 2, correct_diagnosis is 0.0, drug_label is 0, is 93.67 %
The percentage of PA approval when
reject_code_label is 2, correct_diagnosis is 0.0, drug_label is 1, is 90.79 %
The percentage of PA approval when
reject_code_label is 2, correct_diagnosis is 0.0, drug_label is 2, is 67.87 %
The percentage of PA approval when
reject_code_label is 2, correct_diagnosis is 1.0, drug_label is 0, is 94.96 %
The percentage of PA approval when
reject_code_label is 2, correct_diagnosis is 1.0, drug_label is 1, is 93.09 %
The percentage of PA approval when
reject_code_label is 2, correct_diagnosis is 1.0, drug_label is 2, is 73.35 %
|
Working_Notebooks/SegmentSales.ipynb | ###Markdown
Note: The relevant notebook code has been copied to Master_Notebook, and this working notebook has been moved into the Working_Notebooks folder. File paths may need to be adjusted, or the notebook moved out of the folder, to run the code.
###Code
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
#File to load
file_to_load = "../Car_sales_Data/car_data.csv"
# Read Purchasing File and store into Pandas data frame
car_data_df = pd.read_csv(file_to_load)
#Remove column Q4 2019 & Q4 2020 as Quarter 4 data was not fully published at the time of the project.
drop_col=['Q4 2019', 'Q4 2020']
car_data_df = car_data_df.drop(drop_col, axis=1)
#Autogroup is not needed for the analysis; Avg Price is kept for now so the price segments can be built below.
drop_col=['Autogroup']
car_data_df = car_data_df.drop(drop_col, axis=1)
car_data_df
# Create new columns for Segment - this is prep for creating a sales trendline by Segment.
# Create a list of our segments
segments = [
(car_data_df['Avg Price'] <= 30000),
(car_data_df['Avg Price'] > 30000) & (car_data_df['Avg Price'] <= 45000),
(car_data_df['Avg Price'] > 45000) & (car_data_df['Avg Price'] <= 70000),
(car_data_df['Avg Price'] > 70000)
]
# Create a list of the values we want to assign for each segment.
values = ['Economy', 'Mid-Range', 'Luxury', 'Ultra Luxury']
# Create a new column and use np.select to assign values to it using our lists as arguments
car_data_df["Segment"] = np.select(segments, values)
car_data_df
# Group by Market Segment
Economy1 = car_data_df.loc[(car_data_df['Segment'] == "Economy"), :]
MidRange1 = car_data_df.loc[(car_data_df['Segment'] == "Mid-Range"), :]
Luxury1 = car_data_df.loc[(car_data_df['Segment'] == "Luxury"), :]
UltraLuxury1 = car_data_df.loc[(car_data_df['Segment'] == "Ultra Luxury"), :]
Economy1
# Group by Date to create sales volume trendline by Segment
EconSales = Economy1.groupby(['Segment']).sum()
EconSales = EconSales.reset_index()
MidRngSales = MidRange1.groupby(['Segment']).sum()
MidRngSales = MidRngSales.reset_index()
LuxSales = Luxury1.groupby(['Segment']).sum()
LuxSales = LuxSales.reset_index()
UltraLuxSales = UltraLuxury1.groupby(['Segment']).sum()
UltraLuxSales = UltraLuxSales.reset_index()
EconSales
# Economy car segment dataframe cleaning and transposition
# Drop Segment and Avg Price column
EconSales = EconSales.drop(['Segment','Avg Price'], axis=1)
# Transpose dataframes for trendlines
EconSales_T = EconSales.T
# Reset indexes
EconSales_T = EconSales_T.reset_index()
# Rename column
EconSales_T = EconSales_T.rename(columns={'index': 'Quarter',0:'Sales'})
EconSales_T
# Economy car sales volume line plot
# Line chart selection
EconSales_T.plot.line(x='Quarter',y='Sales', figsize=(7,4), legend = False, rot=30, title="Economy Car Sales Volume Change from 2019 to 2020");
# Sets the y limits
plt.ylim(900000, 1800000)
# Provides labels
plt.xlabel(" ")
plt.ylabel("Economy Car Sales", fontsize=12)
# Format tick marks
plt.tick_params(axis='both', direction='out', length=6, width=2, labelcolor = 'black',colors='teal')
plt.yticks([])
# Major grid lines
plt.grid(b=True, which='major', color='lightblue', alpha=0.6, linestyle='dashdot', lw=1.5)
# Minor grid lines
plt.minorticks_on()
plt.grid(b=True, which='minor', color='beige', alpha=0.8, ls='-', lw=1)
# Save the figure as .png
#plt.savefig('Images/Economy Car Sales Change.png')
plt.show(block=True)
# Mid-Range car segment dataframe cleaning
# Drop Segment and Avg Price column
MidRngSales = MidRngSales.drop(['Segment','Avg Price'], axis=1)
# Transpose dataframes for trendlines
MidRngSales_T = MidRngSales.T
# Reset indexes
MidRngSales_T = MidRngSales_T.reset_index()
# Rename column
MidRngSales_T = MidRngSales_T.rename(columns={'index': 'Quarter',0:'Sales'})
MidRngSales_T
# Mid-Range car sales volume line plot
# Line chart selection
MidRngSales_T.plot.line(x='Quarter',y='Sales', figsize=(7,4), legend = False, rot=30, title="Mid-Range Car Sales Volume Change from 2019 to 2020");
# Sets the y limits
plt.ylim(1400000, 2500000)
# Provides labels
plt.xlabel(" ")
plt.ylabel("Mid-Range Car Sales", fontsize=12)
# Format tick marks
plt.tick_params(axis='both', direction='out', length=6, width=2, labelcolor = 'black',colors='teal')
plt.yticks([])
# Major grid lines
plt.grid(b=True, which='major', color='lightblue', alpha=0.6, linestyle='dashdot', lw=1.5)
# Minor grid lines
plt.minorticks_on()
plt.grid(b=True, which='minor', color='beige', alpha=0.8, ls='-', lw=1)
# Save the figure as .png
#plt.savefig('Images/Mid-Range Car Sales Change.png')
plt.show(block=True)
# Luxury car segment dataframe cleaning
# Drop Segment and Avg Price column
LuxSales = LuxSales.drop(['Segment','Avg Price'], axis=1)
# Transpose dataframes for trendlines
LuxSales_T = LuxSales.T
# Reset indexes
LuxSales_T = LuxSales_T.reset_index()
# Rename column
LuxSales_T = LuxSales_T.rename(columns={'index': 'Quarter',0:'Sales'})
LuxSales_T
# Luxury car sales volume line plot
# Line chart selection
LuxSales_T.plot.line(x='Quarter',y='Sales', figsize=(7,4), legend = False, rot=30, title="Luxury Car Sales Volume Change from 2019 to 2020");
# Sets the y limits
plt.ylim(300000, 600000)
# Provides labels
plt.xlabel(" ")
plt.ylabel("Luxury Car Sales", fontsize=12)
# Format tick marks
plt.tick_params(axis='both', direction='out', length=6, width=2, labelcolor = 'black',colors='teal')
plt.yticks([])
# Major grid lines
plt.grid(b=True, which='major', color='lightblue', alpha=0.6, linestyle='dashdot', lw=1.5)
# Minor grid lines
plt.minorticks_on()
plt.grid(b=True, which='minor', color='beige', alpha=0.8, ls='-', lw=1)
# Save the figure as .png
#plt.savefig('Images/Luxury Car Sales Change.png')
plt.show(block=True)
# Ultra Luxury car segment dataframe cleaning
# Drop Segment and Avg Price column
UltraLuxSales = UltraLuxSales.drop(['Segment','Avg Price'], axis=1)
# Transpose dataframes for trendlines
UltraLuxSales_T = UltraLuxSales.T
# Reset indexes
UltraLuxSales_T = UltraLuxSales_T.reset_index()
# Rename column
UltraLuxSales_T = UltraLuxSales_T.rename(columns={'index': 'Quarter',0:'Sales'})
UltraLuxSales_T
# Ultra Luxury car sales volume line plot
# Line chart selection
UltraLuxSales_T.plot.line(x='Quarter',y='Sales', figsize=(7,4), legend = False, rot=30, title="Ultra Luxury Car Sales Volume Change from 2019 to 2020");
# Sets the y limits
plt.ylim(20000, 32000)
# Provides labels
plt.xlabel(" ")
plt.ylabel("Ultra Luxury Car Sales", fontsize=12)
# Format tick marks
plt.tick_params(axis='both', direction='out', length=6, width=2, labelcolor = 'black',colors='teal')
plt.yticks([])
# Major grid lines
plt.grid(b=True, which='major', color='lightblue', alpha=0.6, linestyle='dashdot', lw=1.5)
# Minor grid lines
plt.minorticks_on()
plt.grid(b=True, which='minor', color='beige', alpha=0.8, ls='-', lw=1)
# Save the figure as .png
#plt.savefig('Images/Ultra Luxury Car Sales Change.png')
plt.show(block=True)
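# Illustrative refactor (an assumption, not part of the original workflow): the four nearly
# identical plotting blocks above can be collapsed into a single helper so every segment
# chart stays consistent if the styling changes.
def plot_segment_sales(sales_t, segment_name, ylim):
    ax = sales_t.plot.line(x='Quarter', y='Sales', figsize=(7, 4), legend=False, rot=30,
                           title=segment_name + " Car Sales Volume Change from 2019 to 2020")
    ax.set_ylim(ylim)
    ax.set_xlabel(" ")
    ax.set_ylabel(segment_name + " Car Sales", fontsize=12)
    ax.set_yticks([])
    ax.grid(which='major', color='lightblue', alpha=0.6, linestyle='dashdot', lw=1.5)
    plt.show(block=True)
# Example usage with the frames built above:
# plot_segment_sales(EconSales_T, "Economy", (900000, 1800000))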
###Output
_____no_output_____ |
14-warmup-blank_dataframe_and_series.ipynb | ###Markdown
- How many entries are there in `population`- What are the values?- What is the index of the series- What is the population of `Florida`- Extract the subseries of Texas, New York, and Florida
###Code
import pandas as pd
area_dict = {'California': 423967, 'Texas': 695662, 'New York': 141297,
'Florida': 170312, 'Illinois': 149995}
area = pd.Series(area_dict)
area
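# Illustrative answers (an assumption: `population` from the prompt is not defined in this
# snippet, so the `area` series stands in to demonstrate the same operations).
print(len(area))                               # number of entries
print(area.values)                             # the underlying values
print(area.index)                              # the index (state names)
print(area['Florida'])                         # value for Florida
print(area[['Texas', 'New York', 'Florida']])  # sub-series for three states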
###Output
_____no_output_____ |
Part 08 - Randomized Parameter Search.ipynb | ###Markdown
Randomized Searching======================
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
iris = load_iris()
X, y = iris.data, iris.target
X_train, X_test, y_train, y_test = train_test_split(X, y)
from sklearn.model_selection import RandomizedSearchCV
from scipy.stats import expon
plt.hist([expon.rvs(scale=0.001) for x in range(10000)], bins=100, density=True);
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
param_distributions = {'C': expon(), 'gamma': expon()}
rs = RandomizedSearchCV(SVC(), param_distributions=param_distributions, n_iter=50)
rs.fit(X_train, y_train)
rs.best_params_
rs.best_score_
results = rs.cv_results_
scores = results['mean_test_score']
Cs = [p['C'] for p in results['params']]
gammas = [p['gamma'] for p in results['params']]
plt.scatter(Cs, gammas, s=50, c=scores, linewidths=0)
plt.xlabel("C")
plt.ylabel("gamma")
plt.scatter(np.log(Cs), np.log(gammas), s=50, c=scores, linewidths=0)
plt.xlabel("C")
plt.ylabel("gamma")
###Output
_____no_output_____ |
notebook/for_develop.ipynb | ###Markdown
Training data generation
###Code
# (Assumed imports for this development notebook: numpy, pandas, Keras's load_img and tqdm are
# used below but never imported in the visible cells; the PDTG model class is defined elsewhere in the project.)
import numpy as np
import pandas as pd
from keras.preprocessing.image import load_img
from tqdm import tqdm
params = [
"contrast",
"repetitive",
"granular",
"random",
"rough",
"feature density",
"direction",
"structural complexity",
"coarse",
"regular",
"oriented",
"uniform"
]
df = pd.read_csv('texture_database/rating.csv')
max_val = 0.0
min_val = 5.0
for param in params:
for val in df[param]:
if max_val < val:
max_val = val
if min_val > val:
min_val = val
print "max : %f, min %f" % (max_val, min_val)
range_num = max_val - min_val
for param in params:
index = 0
for val in df[param]:
df[param][index] = (val - min_val) / range_num
index = index + 1
# print df[param]
# Convert the images to grayscale arrays
images = []
for i in range(450):
index = i + 1
img_path = "./texture_database/textures/" + str(index) + ".png"
img = load_img(img_path, target_size=(320,320), grayscale=True)
arr = np.asarray(img)
arr = arr.tolist()
image = []
image.append(arr)
images.append(image)
images = np.array(images)
print images.shape
images = images.reshape([450, 320, 320,1])
X_train, X_test = np.vsplit(images, [400])
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_test /= 255
print np.min(X_train), np.max(X_train)
print('X_train shape:', X_train.shape)
print('Image Shape:', X_train.shape[1:])
print(X_train.shape[0], 'train samples')
print(X_test.shape[0], 'test samples')
###Output
0.0 0.937255
('X_train shape:', (400, 320, 320, 1))
('Image Shape:', (320, 320, 1))
(400, 'train samples')
(50, 'test samples')
###Markdown
Training
###Code
def plot_loss(losses):
plt.figure(figsize=(10,8))
plt.plot(losses["d"], label='discriminitive loss')
plt.plot(losses["g_d"], label='generative-discriminitive loss')
plt.plot(losses["g_p"], label='generative-perspective loss')
plt.plot(losses["p"], label='perspective loss')
plt.legend()
plt.show()
def plot_gen(n_ex=16,dim=(4,4), figsize=(10,10) ):
noise = np.random.uniform(0,1,size=[n_ex,100])
generated_images = generator.predict(noise)
plt.figure(figsize=figsize)
for i in range(generated_images.shape[0]):
plt.subplot(dim[0],dim[1],i+1)
img = generated_images[i,0,:,:]
plt.imshow(img)
plt.axis('off')
plt.tight_layout()
plt.show()
def train(pdtg, batch_size, nb_epoch):
losses = {"d":[], "g_d":[], "g_p":[], "p":[]}
index = 1
for e in tqdm(range(nb_epoch)):
print "--------------%d回目のTraining--------------" % index
#Create the training batch
indecies = np.random.randint(0,X_train.shape[0],size=batch_size)
image_batch = X_train[indecies,:,:,:]
perception_features = df.iloc[indecies].values
noise_gen = np.random.uniform(0,1,size=[batch_size,200])
noise_gen_per = np.random.uniform(0,1,size=[batch_size,12])
generated_images = pdtg.generator.predict([noise_gen, noise_gen_per])
#Train the discriminator on real + generated images
X = np.concatenate((image_batch, generated_images))
X_2 = np.concatenate((perception_features, noise_gen_per))
y = np.zeros([2*batch_size,2])
y[0:batch_size,1] = 1
y[batch_size:,0] = 1
d_loss = pdtg.discriminator.train_on_batch([X, X_2],y)
losses["d"].append(d_loss)
print "Training Discriminator :::: value : %f" % d_loss
#Train the perception model on the same batch
p_loss = pdtg.perception_model.train_on_batch(X, X_2)
losses["p"].append(p_loss)
print "Training Perception Model :::: value : %f" % p_loss
noise_tr = np.random.uniform(0,1,size=[batch_size,200])
noise_tr_per = np.random.uniform(0,1,size=[batch_size,12])
y2 = np.zeros([batch_size,2])
y2[:,1] = 1
g_d_loss = pdtg.g_d_model.train_on_batch([noise_tr, noise_tr_per],y2)
losses["g_d"].append(g_d_loss)
print "Training Generative Model(Dis) :::: value : %f" % g_d_loss
g_p_loss = pdtg.g_p_model.train_on_batch([noise_tr, noise_tr_per],noise_gen_per)
losses["g_p"].append(g_p_loss)
print "Training Generative Model(Per) :::: value : %f" % g_p_loss
print "\n\n\n"
index = index + 1
# Updates plots
if e%25==24:
plot_loss(losses)
pdtg = PDTG(200, (320, 320, 1))
pdtg.create_model()
pdtg.compile_model()
train(pdtg, 100, 5000)
indecies = np.random.randint(0,X_train.shape[0],size=50)
print indecies[:]
print df.iloc[indecies].values
noise_gen = np.random.uniform(0,1,size=[5,200])
noise_gen_per = np.random.uniform(0,1,size=[5,12])
image = pdtg.generator.predict([noise_gen, noise_gen_per])
image *= 255.
import matplotlib.pyplot as plt
%matplotlib inline
n_ex=16
dim=(4,4)
figsize=(10,10)
for i in range(image.shape[0]):
plt.subplot(dim[0],dim[1],i+1)
img = image[i,:,:,0]
print image[i,:,:,0]
plt.imshow(img)
plt.axis('off')
plt.tight_layout()
plt.show()
###Output
[[ 194.559021 238.78100586 248.5484314 ..., 251.09054565
251.63656616 247.06210327]
[ 212.44882202 247.35804749 252.36068726 ..., 253.94845581
254.06835938 250.87826538]
[ 219.26409912 250.3190918 253.2636261 ..., 254.83325195 254.6421051
253.02301025]
...,
[ 218.31077576 248.01771545 253.45854187 ..., 254.76313782
253.15301514 245.45335388]
[ 199.34205627 244.22477722 250.26499939 ..., 254.00105286
250.89266968 234.63116455]
[ 162.9980011 229.35295105 243.37794495 ..., 246.41772461
236.84553528 186.67555237]]
[[ 192.33441162 237.16270447 247.70097351 ..., 250.30117798
250.93928528 245.73216248]
[ 210.33441162 246.42851257 251.92628479 ..., 253.65390015 253.7979126
250.00810242]
[ 217.28308105 249.65898132 252.97187805 ..., 254.76248169
254.51191711 252.51777649]
...,
[ 218.13050842 247.89883423 253.40539551 ..., 254.7375946 253.01257324
245.0057373 ]
[ 199.27922058 244.04104614 250.13479614 ..., 253.89445496 250.5171814
233.5788269 ]
[ 163.13235474 229.13087463 243.10762024 ..., 245.87467957
235.79542542 185.21244812]]
[[ 195.18496704 239.39820862 248.98942566 ..., 252.48350525
252.86769104 249.4236908 ]
[ 213.23594666 247.77297974 252.57447815 ..., 254.42144775
254.49176025 252.31788635]
[ 220.23207092 250.67738342 253.44485474 ..., 254.92474365
254.82322693 253.82588196]
...,
[ 224.807724 250.44276428 254.16131592 ..., 254.89176941
253.92880249 248.27062988]
[ 205.10725403 247.40090942 251.98718262 ..., 254.45817566 252.3611145
239.17541504]
[ 166.57060242 234.76550293 246.74742126 ..., 249.08435059
241.07626343 192.17521667]]
[[ 191.15371704 236.28106689 247.17451477 ..., 250.85780334
251.40487671 246.69343567]
[ 209.01794434 245.80516052 251.63366699 ..., 253.87149048
253.99693298 250.66889954]
[ 215.40211487 249.11628723 252.73088074 ..., 254.81561279
254.60606384 252.88995361]
...,
[ 215.45278931 246.65452576 253.02185059 ..., 254.61373901
252.40309143 243.13052368]
[ 196.33200073 242.46281433 249.269104 ..., 253.51342773
249.49078369 230.99475098]
[ 161.37718201 226.80221558 241.64071655 ..., 244.26112366
233.50628662 183.05754089]]
[[ 195.3822937 239.50320435 248.90139771 ..., 248.48989868
249.28405762 243.01304626]
[ 213.20378113 247.76771545 252.53764343 ..., 252.91184998
253.12301636 248.16867065]
[ 220.39123535 250.62911987 253.41464233 ..., 254.57415771
254.17366028 251.37303162]
...,
[ 213.12094116 245.52445984 252.56297302 ..., 254.09268188 250.2875824
237.81539917]
[ 193.72450256 240.82608032 248.19293213 ..., 252.06790161 245.9655304
223.80552673]
[ 159.75358582 224.48475647 239.92106628 ..., 239.06417847
226.38574219 176.86039734]]
|
Webscraping - Book_Features.ipynb | ###Markdown
Webscraping Book Features. Author: Daniel Hui. License: MIT. This notebook extracts book-specific features, such as ratings, reviews, book dimensions, and page count.
###Code
from __future__ import print_function, division
from bs4 import BeautifulSoup
import requests
import pandas as pd
import numpy as np
import collections
import re
###Output
_____no_output_____
###Markdown
Global Variables
###Code
max_range = 250. #set max records per file to be saved incrementally
location = 'random' #set library branch
set_variable = 7
###Output
_____no_output_____
###Markdown
Field-Level Functions
###Code
import ast
#Definition for finding information that is next to a text field
def find_text(textsoup, field):
info = textsoup.find(text=re.compile(field))
if info:
return info.findNext().text.strip()
else:
return 'N/A'
#function to extract book description
def find_description(textsoup):
try:
dictionary_string = textsoup.find("script",text=re.compile("@graph")).text #this is a dictionary string
book_dict = ast.literal_eval(dictionary_string) #turn it into an actual dict
book_details = [] #empty list to hold book details
book_sub_dict = book_dict.get("@graph")[0]
ratings_dict = book_sub_dict.get("aggregateRating")
try: #Avg Rating
book_details.append(ratings_dict.get("ratingValue"))
except: book_details.append("N/A")
try: #num of Ratings
book_details.append(ratings_dict.get("ratingCount"))
except: book_details.append(0)
try: #num of Reviews
book_details.append(ratings_dict.get("reviewCount"))
except: book_details.append(0)
try: #Hardcover/Softcover
book_details.append(book_sub_dict.get("bookFormat").get("@id"))
except: book_details.append("N/A")
try: #Subject areas
book_details.append(book_sub_dict.get("about"))
except: book_details.append("N/A")
try: #URL to book image
book_details.append(book_sub_dict.get("image"))
except: book_details.append("N/A")
try: #Book description
if len(book_sub_dict.get("description")[0]) > 1:
book_details.append(book_sub_dict.get("description")[0])
else: book_details.append(book_sub_dict.get("description")) #some cases this is needed
except: book_details.append("N/A")
return book_details
except:
return 7*['N/A']
###Output
_____no_output_____
###Markdown
Book-Level Function
###Code
def get_book_data(url_row):
response = requests.get(f"{url_row}?active_tab=bib_info") #take in the URL
webpage = response.text
soup = BeautifulSoup(webpage, "lxml")
this_book_data = [url_row]
this_book_data.append(find_text(soup,'Characteristic')) #Number of Pages, Book Size
this_book_data.append(find_text(soup,'Branch Call Number')) #Library Call Number
this_book_data = this_book_data + find_description(soup) #concat two lists
return this_book_data
###Output
_____no_output_____
###Markdown
Data Cleaning Functions
###Code
#Extract the page count
def get_page_count(row):
try:
row = row.replace(" unnumbered","") #handle cases where there are unnumbered pages
if 'pages' in row:
if len(row.split(' page')[0].strip().split(" ")) == 2:
return row.split(' page')[0].strip().split(" ")[-1]
else: return row.split(' page')[0].strip()
else: return 'N/A'
except: return 'N/A'
#extract the book dimensions
def get_book_dims(row):
try:
if 'cm' in row:
if len(row.split(' cm')[0].strip().split(" ")) == 1:
return row.split(' cm')[0].strip()
else: return row.split(' cm')[0].strip().split(" ")[-1]
else: return 'N/A'
except: return 'N/A'
###Output
_____no_output_____
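###Markdown
A quick sanity check of the two parsers on a typical "Characteristic" string (the sample value is illustrative, not taken from the catalogue):
###Code
sample = "x, 342 pages : illustrations ; 24 cm"
print(get_page_count(sample))  # expected: '342'
print(get_book_dims(sample))   # expected: '24'
###Output
_____no_output_____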
###Markdown
Create URL Link
###Code
def get_url(row):
row = str(row)
return f"https://seattle.bibliocommons.com/item/show/{row}030"
###Output
_____no_output_____
###Markdown
Load URLs, Divide into DataFrame Chunks
###Code
url_df = pd.read_csv(f"01_Data/Random_Sample_{set_variable}.csv",index_col=0)
url_df["link"] = url_df["BibNum"].apply(get_url)
url_df = url_df[url_df['link'].notna()] #remove lines with no URL
url_df.head()
#split the URL List into chunks so you can incrementally save
total_loops = (len(url_df) // max_range) + 1
url_dframes = np.array_split(url_df, total_loops)
###Output
_____no_output_____
###Markdown
Loop Scrape
###Code
for i in range(11,len(url_dframes)): #adjust the lower number if the scrape stalled
dframe = url_dframes[i]
dframe = dframe.reset_index() #reset index so the ISBN below can match
dframe["data"] = dframe["link"].apply(get_book_data)
book_df = pd.DataFrame(list(dframe["data"])) #turn data into dataframe
book_df = book_df.rename({0: 'url', 1: 'page_dim', 2: 'callno', 3:'avg_rating', #rename columns
4:'tot_ratings', 5: 'tot_reviews', 6:'type', 7:'subjects',
8: 'image', 9: 'desc'}, axis=1)
#Clean data: parse the page count and book dimensions out of the Characteristic field
book_df["page"] = book_df["page_dim"].apply(get_page_count) #Extract Page Number
book_df["dim"] = book_df["page_dim"].apply(get_book_dims) #Extract book dimensions
book_df["isbn"] = dframe["isbn"]
#Keep useful columns
book_df = book_df[["isbn","url","page","dim","avg_rating","tot_ratings","tot_reviews",
"type","callno","subjects","desc","image"]]
book_df.to_csv(f'01_Data/book_data_{location}_{i}.csv')
###Output
_____no_output_____
###Markdown
Combine Files Together into Combined Branch CSV
###Code
#start a dataframe with the first CSV
book_data_df = pd.read_csv(f'01_Data/book_data_{location}_0.csv',index_col=0)
#loop remaining CSVs
for i in range(1,len(url_dframes)):
temp_df = pd.read_csv(f'01_Data/book_data_{location}_{i}.csv',index_col=0)
book_data_df = pd.concat([book_data_df,temp_df])
book_data_df.to_csv(f'01_Data/book_data_{location}_combined_{set_variable}.csv')
###Output
_____no_output_____ |
data stream/Task2_Sketching.ipynb | ###Markdown
Task 2. SketchingBuild code for computing a COUNT-MIN sketch, play with different heights and widths for the Count-Min sketch matrix. Compare it to the RESERVOIR sampling strategy. Is it more space-efficient/accurate? What about run-time? Use the theory to explain any differences you observe. Here we use scenario 2 (CTU-Malware-Capture-Botnet-43) for the task. Per the documentaion for the dataset, the number of differnt types of flow on this scenario is:| Total flows | Botnet flows | Normal flows | C&C flows | Background flows ||-------------|--------------|--------------|-----------|------------------|| 1,808,122 | 1.04% | 0.5% | 0.11% | 98.33% |The infected IP address is ***147.32.84.165***. Data Preparation
###Code
import struct
import hashlib
import sys
import random as rd
import numpy as np
from math import log, inf
from pylab import *
import pandas as pd
import matplotlib.pyplot as plt
import time
###Output
_____no_output_____
###Markdown
Reading the file with the stream
###Code
# define filepath for scenario 1 dataset
filepath = './data/capture20110811.pcap.netflow.labeled'
# read data from the file
f = open(filepath, 'r')
lines = f.readlines()
f.close()
data = lines[1:] # drop the header
###Output
_____no_output_____
###Markdown
Preparing the data
###Code
def preprocessing(data):
'''data preprocessing
Input
-----
string of a data flow
Return
------
    source address, destination address and label
'''
s = data.split('\t')
s = [x for x in s if x] # remove empty elements
# modify data type
o = np.array([s[3].split(':')[0], #ScrAddr
s[5].split(':')[0], #DstAddr
s[11].rstrip('\n').rstrip() #Label
])
return o
###Output
_____no_output_____
###Markdown
Creating a data frame using the function above
###Code
df = pd.DataFrame(list(map(preprocessing, data)), columns=['ScrAddr', 'DstAddr', 'Label'])
###Output
_____no_output_____
###Markdown
Creating the count-min sketch functions. Creating a function that generates a random hash
###Code
#Getting how many bits your python is running om
PythonBits=struct.calcsize("P") * 8
#Generating random hash function
def hash_function():
#Generating a random number
n=rd.randint(1,10**15)
#Generating a random seed
rd.seed(n)
#Generating a random bit
random_bit = rd.getrandbits(PythonBits)
#Definining a hash function using the random bit
def randomhash(x):
return hash(x) ^ random_bit
return randomhash
###Output
_____no_output_____
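###Markdown
A quick illustration (assuming the `hash_function` defined above): each generated function is deterministic for a given key, but two independently generated functions map the same key differently, which is what the separate sketch rows rely on.
###Code
h1, h2 = hash_function(), hash_function()
print(h1('147.32.84.165') == h1('147.32.84.165'))  # True: the same function is deterministic
print(h1('147.32.84.165') == h2('147.32.84.165'))  # almost certainly False: independent functions
###Output
_____no_output_____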
###Markdown
Creating a function that sketches a stream given an epsilon and a delta
###Code
#Creating the function that builds the sketch
def Sketch(stream,epsilon,delta):
#Calculating skectch dimensions based on epsilon and delta
Vector_Size = int(2/epsilon)
NumberOfHashFunctions = int(log(1/delta))
#Creating matrix with zeros
Matrix=np.zeros([NumberOfHashFunctions,Vector_Size])
#Generating 'd' hash functions
hash_functions = [hash_function() for i in range(NumberOfHashFunctions)]
#Generating matrix with zeros
sketch_matrix = np.zeros((NumberOfHashFunctions, Vector_Size), dtype = 'int32')
#Going through the stream and adding ones for the hash
for element in stream:
for function_counter in range(len(hash_functions)):
row = function_counter
column = hash_functions[function_counter](element)%Vector_Size
sketch_matrix[row,column] += 1
#Return the sketch matrix and function to be used in the query
return [sketch_matrix,hash_functions]
###Output
_____no_output_____
###Markdown
Creating a function that queries the sketch using the sketch matrix and the hash functions used to generate it, both provided by the function "Sketch"
###Code
#Querying function
def query_hash(sketch_matrix,hash_functions,element):
#Defining counting
count = inf
#Getting the vector_size
Vector_Size = sketch_matrix.shape[1]
#Looping through the hash functions to get the min value
for function_counter in range(len(hash_functions)):
row = function_counter
column = hash_functions[function_counter](element) % Vector_Size
if sketch_matrix[row,column]<count:
count = sketch_matrix[row,column]
return count
###Output
_____no_output_____
###Markdown
Testing that the functions work using a small test stream
###Code
#Selecting Parameters
epsilon = 0.005
delta = 0.005
#Testing for stream with 1 1s, 2 2s, 3 3s, 4 4s, 5 5s, 6 6s, 7 7s, 8 8s, and 9 9s.
stream = [1,2,2,3,3,3,4,4,4,4,5,5,5,5,5,6,6,6,6,6,6,7,7,7,7,7,7,7,8,8,8,8,8,8,8,8,9,9,9,9,9,9,9,9,9]
sketch_matrix , hash_functions = Sketch(stream,epsilon,delta)
#Testing the query
for element in set(stream):
query = query_hash(sketch_matrix,hash_functions,element)
print(str(element) + " : " + str(query))
###Output
1 : 1
2 : 2
3 : 3
4 : 4
5 : 5
6 : 6
7 : 7
8 : 8
9 : 9
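###Markdown
Count-min estimates are one-sided: a query may over-count when different elements hash to the same cells, but it never under-counts. A small illustrative cell (reusing the `stream`, `Sketch` and `query_hash` defined above) with a deliberately tiny width so collisions are likely:
###Code
# With epsilon=1.0 the width is int(2/1.0) = 2, so many elements collide
tiny_matrix, tiny_hashes = Sketch(stream, epsilon=1.0, delta=0.05)
for element in set(stream):
    estimate = query_hash(tiny_matrix, tiny_hashes, element)
    true_count = stream.count(element)
    # estimate - true_count is always >= 0
    print(element, 'true:', true_count, 'estimate:', estimate, 'over-count:', estimate - true_count)
###Output
_____no_output_____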
###Markdown
Running the functions to create sketches on the stream data provided. Creating sketches from the data frame using different heights and widths for the Count-Min sketch matrix
###Code
#Creating a stream in a list format from the dataframe with all the streams
ScrAddr_list = list(df.loc[:,"ScrAddr"])
DstAddr_list = list(df.loc[:,"DstAddr"])
Label_list = list(df.loc[:,"Label"])
#Playing with different heights and widths for the count-min sketch
epsilon_list = [0.0001,0.001,0.01,0.1]
delta_list = [0.0001,0.001,0.01,0.1]
#Creating dictionary to save the sketchs and hash functions
DictTime = dict()
#Looping
for epsilon in epsilon_list:
for delta in delta_list:
#Skecthing column 'ScrAddr'
start_time = time.time()
sketch_ScrAddr,hashfunctions_ScrAddr = Sketch(ScrAddr_list,epsilon,delta)
DictTime["e:"+str(epsilon)+"|d:"+str(delta)+"|ScrAddr"] = [sketch_ScrAddr,hashfunctions_ScrAddr,(time.time() - start_time)]
#Skecthing column 'DstAddr'
start_time = time.time()
sketch_DstAddr,hashfunctions_DstAddr = Sketch(DstAddr_list,epsilon,delta)
DictTime["e:"+str(epsilon)+"|d:"+str(delta)+"|DstAddr"] = [sketch_DstAddr,hashfunctions_DstAddr,(time.time() - start_time)]
DictSketch=DictTime
#for key,value in DictTime.items():
# print(value[2])
###Output
_____no_output_____
###Markdown
Calculating the memory space used and time spent on the creation of sketches. Calculating the memory space used for the sketch and hash functions, and organizing the time and memory measurements in a data frame
###Code
# Creatin a dataframe to time spent per column
performance_df = pd.DataFrame(columns=['Variable','Epsilon','Delta','Time','Space'])
index = 0
# Creating variable list for the loop
variables_list=['ScrAddr','DstAddr']
#Looping through all the possible skethchs
for epsilon in epsilon_list:
for delta in delta_list:
for variable in variables_list:
#Getting the time spent in the algorithm
            elapsed_time = DictTime["e:" + str(epsilon) + "|d:" + str(delta) + "|" + variable][2]
#Getting the memory space used in the sketch and hashfunctions
space_sketch = int(DictTime["e:" + str(epsilon) + "|d:" + str(delta) + "|" + variable][0].nbytes)
space_hashfunctions = int(sys.getsizeof(DictTime["e:" + str(epsilon) + "|d:" + str(delta) + "|" + variable][1]))
total_memory_space = int(space_sketch + space_hashfunctions)
#Append to the data frame
            performance_df.loc[index]=[variable,epsilon,delta,elapsed_time,total_memory_space]
index += 1
#Converting space to numeric
performance_df.loc[:,"Space"] = pd.to_numeric(performance_df.loc[:,"Space"])
###Output
_____no_output_____
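###Markdown
As a sanity check against the theory, the dimensions implied by the constructor above (width = int(2/epsilon), depth = int(log(1/delta))) and the size of the int32 counter matrix can be computed directly. A small illustrative cell assuming the same epsilon_list and delta_list as above:
###Code
# Theoretical counter-matrix size for each (epsilon, delta) pair used above
for eps in epsilon_list:
    for dlt in delta_list:
        width = int(2 / eps)
        depth = int(log(1 / dlt))
        print(f"epsilon={eps}, delta={dlt}: {depth} x {width} int32 counters = {4 * width * depth} bytes")
###Output
_____no_output_____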
###Markdown
Plotting the time performance and the space performance for each combination of delta and epsilon. Note: with width 2/epsilon and depth ln(1/delta), the sketch uses O((1/epsilon)*ln(1/delta)) memory and O(ln(1/delta)) update time per element. Time performance is clearly logarithmic in 1/delta, as expected.
###Code
plt.plot(performance_df.groupby('Delta')['Time'].mean().sort_values(ascending=True))
plt.title('Logarithm behavior on time spent to create a sketch by delta')
plt.xlabel('Delta values')
plt.ylabel('Time (in seconds)')
###Output
_____no_output_____
###Markdown
Space performance is clearly logarithmic in 1/delta (it shrinks as delta grows), as expected.
###Code
plt.plot(performance_df.groupby('Delta')['Space'].mean().sort_values(ascending=True))
plt.title('Logarithm behavior on memory space used to create a sketch by delta')
plt.xlabel('Delta values')
plt.ylabel('Bytes')
###Output
_____no_output_____
###Markdown
Space performance is clearly linear in 1/epsilon, as expected.
###Code
# Taking the ratio 1/Epsilon
performance_df1 = performance_df.copy()
performance_df1.loc[:,"Epsilon"] = 1/performance_df1.loc[:,"Epsilon"]
# Ploting
plt.plot(performance_df1.groupby('Epsilon')['Space'].mean().sort_values(ascending=True))
plt.title('Linear behavior of 1/epsilon on the memory space used to create sketches')
plt.xlabel('1/Epsilon')
plt.ylabel('Bytes')
###Output
_____no_output_____
###Markdown
Comparing the performance of the sketch with reservoir sampling. Reservoir function (the same as used for Task 1, but it only creates the reservoir instead of calculating the top 10)
###Code
def reservoir_sampling(k=60000, full_df=False):
'''sampling through a reservoir of size k
Parameter
---------
k : reservoir size
Return
------
    reservoir_df : DataFrame with the k sampled flows (ScrAddr, DstAddr, Label)
'''
START = time.time()
reservoir = list(map(preprocessing, data[:k]))
for i in range(k+1, len(data)):
# for pi, randomly replace j-th data in the reservoir with i-th data
if np.random.random() < k/i: # probability pi=k/i
j = int(np.random.randint(0, k, 1))
reservoir[j] = preprocessing(data[i])
reservoir_df = pd.DataFrame(reservoir, columns=['ScrAddr', 'DstAddr', 'Label'])
# get the top10 most frequent ip and their frequencies
#ips = df.loc[df['Label'] == 'Botnet']
#infected_ips = ips.drop(columns=['Label']).values.reshape(-1) # flaten the array
#infected_ips = np.delete(infected_ips, np.where(infected_ips == INFECTED_HOST))
#count = np.unique(infected_ips, return_counts=True)
#count = pd.DataFrame({'ip' : count[0],
# 'freq' : count[1]})
#if not full_df: # return top 10 most frequent
# top10 = count.sort_values('freq', ascending=False).head(10)
#else: # return all df
# top10 = count.sort_values('freq', ascending=False)
#pp = np.round(k/len(data), 3) # proportion of reservoir size to stream size
#END = time.time()
return reservoir_df
###Output
_____no_output_____
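###Markdown
Unlike the sketch, a reservoir only supports estimated counts: an address's frequency in the full stream is approximated by scaling its count in the k-sized sample by len(data)/k. A small illustrative cell (assuming the `reservoir_sampling` function, `df` and `data` defined above):
###Code
# Estimate the frequency of the known infected host from a single reservoir
k = 10000
sample_df = reservoir_sampling(k)
ip = '147.32.84.165'
sample_count = (sample_df['ScrAddr'] == ip).sum()
estimated_total = sample_count * len(data) / k
true_total = (df['ScrAddr'] == ip).sum()
print(f"in sample: {sample_count}/{k}, estimated total: {estimated_total:.0f}, true total: {true_total}")
###Output
_____no_output_____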
###Markdown
Time and memory performance. Creating reservoirs of different sizes and measuring the time and memory space used
###Code
import time
#List reservoir sizes
reservoir_sizes = [1000,10000,100000,1000000]
# Creating variable list for the loop
variables_list=['ScrAddr','DstAddr']
#Dictionary to save the size and the time spent
DictReservoir = dict()
#Looping through the reservoir_size and calculating the time spent and the reservoir
for reservoir_size in reservoir_sizes:
start_time = time.time()
reservoir_sample = reservoir_sampling(reservoir_size)
reservoir_time_spent = (time.time() - start_time)
for variable in variables_list:
reservoir_sample_variable=reservoir_sample[variable]
DictReservoir["K:"+str(reservoir_size)+"|Variable: "+variable] = [variable,reservoir_sample_variable,reservoir_time_spent]
###Output
_____no_output_____
###Markdown
Formatting the time and memory measurements into dataframes
###Code
# Creatin a dataframe to time spent per column
reservoir_performance_df= pd.DataFrame(columns=['Variable','K','Time','Space'])
index = 0
# Creating variable list for the loop
variables_list=['ScrAddr','DstAddr']
#Looping through all the possible skethchs
for K in reservoir_sizes:
for variable in variables_list:
#Getting the time spent in the algorithm
        elapsed_time = DictReservoir["K:"+str(K)+"|Variable: "+variable][2]
#Getting the memory space used in the sketch and hashfunctions
space= sys.getsizeof(DictReservoir["K:"+str(K)+"|Variable: "+variable][1])
#Append to the data frame
        reservoir_performance_df.loc[index]=[variable,K,elapsed_time,space]
index += 1
#Converting space to numeric
reservoir_performance_df.loc[:,"Space"] = pd.to_numeric(reservoir_performance_df.loc[:,"Space"])
###Output
_____no_output_____
###Markdown
Time performance for reservoir
###Code
plt.plot(reservoir_performance_df.groupby('K')['Time'].mean().sort_values(ascending=True))
plt.title('Roughly linear behavior of time spent to create a reservoir by reservoir size')
plt.xlabel('Reservoir Size (K)')
plt.ylabel('Time (in seconds)')
###Output
_____no_output_____
###Markdown
Memory performance for the reservoir
###Code
plt.plot(reservoir_performance_df.groupby('K')['Space'].mean().sort_values(ascending=True))
plt.title('Linear behavior on space memory spent to create a reservoir by reservoir size')
plt.xlabel('Reservoir Size (K)')
plt.ylabel('Bytes')
###Output
_____no_output_____
###Markdown
Trying to find infrequent IPs in the reservoir and in the sketch. Count, in the real dataset, the infected IPs that appear the most and the least
###Code
INFECTED_HOST = '147.32.84.165'
ips = df.loc[df['Label'] == 'Botnet']
infected_ips = ips.drop(columns=['Label']).values.reshape(-1) # flaten the array
infected_ips = np.delete(infected_ips, np.where(infected_ips == INFECTED_HOST))
count = np.unique(infected_ips, return_counts=True)
count = pd.DataFrame({'ip' : count[0],
'freq' : count[1]})
top10_most = count.sort_values('freq', ascending=False).head(10).reset_index()
top10_least = count.sort_values('freq', ascending=True).head(10).reset_index()
###Output
_____no_output_____
###Markdown
Using the reservoirs to verify the existence of the infected IPs that appear the least, it can be seen that NOT EVEN ONE of these IPs was found in ANY of the reservoirs created
###Code
reservoir_performance_df["NumberOfInexistentIPs"] = 0
for row in range(len(reservoir_performance_df)):
reservoir_ScrAddr = DictReservoir["K:"+str(reservoir_performance_df.loc[row,"K"])+"|Variable: "+'ScrAddr'][1]
reservoir_DstAddr = DictReservoir["K:"+str(reservoir_performance_df.loc[row,"K"])+"|Variable: "+'DstAddr'][1]
number_of_inexistent_ips = 0
for i in range(len(top10_least)):
if (not top10_least.loc[i,"ip"] in reservoir_ScrAddr ) and (not top10_least.loc[i,"ip"] in reservoir_DstAddr) :
number_of_inexistent_ips += 1
reservoir_performance_df.loc[row,"NumberOfInexistentIPs"]=number_of_inexistent_ips
reservoir_performance_df
###Output
_____no_output_____
###Markdown
Using the sketches to verify the existence of the infected IPs that appear the least, it can be seen that ALL 10 IPs were found in ALL the sketches created - since the number of missing IPs is zero, it can be concluded that all were found =)
###Code
sketch_performance_df=performance_df.copy()
sketch_performance_df["NumberOfInexistentIPs"] = 0
for row in range(len(performance_df)):
epsilon = sketch_performance_df.loc[row,"Epsilon"]
delta = sketch_performance_df.loc[row,"Delta"]
sketch_ScrAddr = DictSketch["e:"+str(epsilon)+"|d:"+str(delta)+"|ScrAddr"][0]
hash_ScrAddr = DictSketch["e:"+str(epsilon)+"|d:"+str(delta)+"|ScrAddr"][1]
sketch_DstAddr = DictSketch["e:"+str(epsilon)+"|d:"+str(delta)+"|DstAddr"][0]
hash_DstAddr = DictSketch["e:"+str(epsilon)+"|d:"+str(delta)+"|DstAddr"][1]
number_of_inexistent_ips = 0
for i in range(len(top10_least)):
if (query_hash(sketch_ScrAddr,hash_ScrAddr,top10_least.loc[i,"ip"])==0) and (query_hash(sketch_DstAddr,hash_DstAddr,top10_least.loc[i,"ip"])==0):
number_of_inexistent_ips += 1
sketch_performance_df.loc[row,"NumberOfInexistentIPs"]=number_of_inexistent_ips
sketch_performance_df
###Output
_____no_output_____ |
Algorismica/.ipynb_checkpoints/TorneigFINAL-checkpoint.ipynb | ###Markdown
Torneig A5 - Divide and Conquer. How many times does a string appear as a subsequence? (4 points)
###Code
def comptarSubsequencia(paraula, cadena):
"""
    This function determines how many times `paraula` appears in `cadena`
    as a subsequence.
    Parameters
    ----------
    paraula: string
        pattern to search for
    cadena: string
        string in which to search
    Returns
    -------
    num: int
        number of times it appears
"""
num = count(cadena, paraula, len(cadena), len(paraula))
return num
def count(X, Y, m, n):
# Base case 1: if only one character is left
if m == 1 and n == 1:
return 1 if (X[0] == Y[0]) else 0
# Base case 2: if the input string `X` reaches its end
if m == 0:
return 0
# Base case 3: if pattern `Y` reaches its end, we have found subsequence
if n == 0:
return 1
# Optimization: the solution is not possible if the number of characters
# in the string is less than the number of characters in the pattern
if n > m:
return 0
'''
If the last character of both string and pattern matches,
1. Exclude the last character from both string and pattern
2. Exclude only the last character from the string.
Otherwise, if the last character of the string and pattern do not match,
recur by excluding only the last character in the string
'''
return (count(X, Y, m - 1, n - 1) if X[m - 1] == Y[n - 1] else 0)\
+ count(X, Y, m - 1, n)
comptarSubsequencia("sue","subsequencia")
assert comptarSubsequencia("Pie","Piee") == 2
assert comptarSubsequencia("sue","subsequencia") == 4
assert comptarSubsequencia("123","123451234512345") == 10
assert comptarSubsequencia("ppa","ppaa") == 2 #Si aquest dona problema pot ser error d'índex.
###Output
_____no_output_____
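###Markdown
The recursive count above recomputes the same (m, n) states many times, so it can be exponential on longer strings. A possible memoized variant (an illustrative sketch, not part of the original exercise) that returns the same values:
###Code
from functools import lru_cache

def comptarSubsequenciaMemo(paraula, cadena):
    """Memoized version of comptarSubsequencia (same arguments, same result)."""
    @lru_cache(maxsize=None)
    def go(m, n):
        # go(m, n): occurrences of paraula[:n] as a subsequence of cadena[:m]
        if n == 0:
            return 1
        if m == 0:
            return 0
        res = go(m - 1, n)
        if cadena[m - 1] == paraula[n - 1]:
            res += go(m - 1, n - 1)
        return res
    return go(len(cadena), len(paraula))

assert comptarSubsequenciaMemo("sue", "subsequencia") == 4
###Output
_____no_output_____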
###Markdown
At what cost does it appear? (4 points)
###Code
def comptarSubsequenciaPreu(paraula, cadena):
"""
    This function determines how many times `paraula` appears in `cadena` as a
    subsequence, and the minimum cost (each skipped letter in between adds 1).
    Parameters
    ----------
    paraula: string
        pattern to search for
    cadena: string
        string in which to search
    Returns
    -------
    num: int
        number of times it appears
    cost: int
        minimum cost
    """
    num = count(cadena, paraula, len(cadena), len(paraula))
    # Greedy earliest matching from each start position minimises the span
    cost = None
    for start in range(len(cadena)):
        if cadena[start] != paraula[0]:
            continue
        j = start + 1
        matched = True
        for k in range(1, len(paraula)):
            while j < len(cadena) and cadena[j] != paraula[k]:
                j += 1
            if j == len(cadena):
                matched = False
                break
            j += 1
        if matched:
            cost = min(cost, j - start - len(paraula)) if cost is not None else j - start - len(paraula)
    return num, cost
assert comptarSubsequenciaPreu("Pie","Piee") == (2,0)
assert comptarSubsequenciaPreu("sue","subsequencia") == (4,2)
assert comptarSubsequenciaPreu("123","123451234512345") == (10,0)
assert comptarSubsequenciaPreu("ser","senrsse") == (1,1) #Pot donar problemes.
###Output
_____no_output_____ |
mcalister/PlotWingLoadingAOA4.ipynb | ###Markdown
Plot wing loading AOA=4 degrees
###Code
%%capture
import sys
import os
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import glob as glob
sys.path.insert(1, './utilities')
import utilities
import plot_wing
import definitions as defs
# Label, Filenames dictionary
runlist = [['SST', './tipvortex1-aoa-4/rundir', {'color':'r', 'ls':'--'}],
['SST-IDDES', './sst-iddes-4/rundir', {'color':'b', 'ls':'-'}],
]
# Loop on folders
for i, run in enumerate(runlist):
folder =run[1]
rundict = run[2]
# Setup
fdir = os.path.abspath(folder)
yname = os.path.join(fdir, "mcalister.yaml")
fname = "avg_slice.csv"
dim = defs.get_dimension(yname)
half_wing_length = defs.get_half_wing_length()
# simulation setup parameters
u0, v0, w0, umag0, rho0, mu, flow_angle = utilities.parse_ic(yname)
mname = utilities.get_meshname(yname)
aoa = defs.get_aoa(mname)
chord = 1
# experimental values
edir = os.path.abspath(os.path.join("utilities/exp_data", f"aoa-{aoa}"))
zslices = utilities.get_wing_slices(dim)
zslices["zslicen"] = zslices.zslice / half_wing_length
# data from other CFD simulations (SA model)
sadir = os.path.abspath(os.path.join("sitaraman_data", f"aoa-{aoa}"))
# Read in data
df = pd.read_csv(os.path.join(fdir, "wing_slices", fname), delimiter=",")
renames = utilities.get_renames()
df.columns = [renames[col] for col in df.columns]
# Project coordinates on to chord axis
chord_angle = np.radians(aoa) - flow_angle
crdvec = np.array([np.cos(chord_angle), -np.sin(chord_angle)])
rotcen = 0.25
df["xovc"] = (
np.dot(np.asarray([df.x - rotcen, df.y]).T, crdvec) / chord + rotcen
)
# Calculate the negative of the surface pressure coefficient
df["cp"] = -df.p / (0.5 * rho0 * umag0 ** 2)
# Plot cp in each slice
for k, (index, row) in enumerate(zslices.iterrows()):
subdf = df[np.fabs(df.z - row.zslice) < 1e-5]
# plot
plt.figure(k)
# Load corresponding exp data
if i == 0:
try:
ename = glob.glob(os.path.join(edir, f"cp_*_{row.zslicen:.3f}.txt"))[0]
exp_df = pd.read_csv(ename, header=0, names=["x", "cp"])
plt.plot(
exp_df.x,
exp_df.cp,
ls="",
color=plot_wing.cmap[-1],
marker=plot_wing.markertype[0],
ms=6,
mec=plot_wing.cmap[-1],
mfc=plot_wing.cmap[-1],
label="Exp.",
)
except IndexError:
print('Cannot load exp-data')
pass
# Plot the CFD data
# Sort for a pretty plot
x, y, cp = plot_wing.sort_by_angle(subdf.xovc.values, subdf.y.values, subdf.cp.values)
p = plt.plot(x, cp, linestyle=rundict['ls'], lw=2, color=rundict['color'], label=run[0])
p[0].set_dashes(plot_wing.dashseq[i])
# Format the plot
for k, (index, row) in enumerate(zslices.iterrows()):
plt.figure(k)
ax = plt.gca()
plt.xlabel(r"$x/c$", fontsize=18, fontweight="bold")
plt.ylabel(r"$-c_p$", fontsize=18, fontweight="bold")
plt.setp(ax.get_xmajorticklabels(), fontsize=16, fontweight="bold")
plt.setp(ax.get_ymajorticklabels(), fontsize=16, fontweight="bold")
plt.xlim([0, chord])
plt.ylim([-1.5, 5.0])
plt.title('z = %.2f'%row.zslice, fontsize=16)
plt.grid()
plt.legend()
plt.tight_layout()
###Output
_____no_output_____ |
unit3~10.ipynb | ###Markdown
Unit 3. Getting started with Hello World! Printing Hello World! in IDLE
###Code
print("Hello World!")
###Output
Hello World!
###Markdown
Running code one line at a time and getting the result → interpreter. Python Shell: a program such as IDLE where you type and run Python code directly. Python Prompt. The Python Shell processes code as if you were having a conversation with the interpreter: interactive shell, interactive mode, the REPL approach (Read-Eval-Print Loop)
###Code
print(Hello World!)
###Output
_____no_output_____
###Markdown
If the quotes are left out, an error occurs. Running Python from the command prompt: when installing Python, check 'Add Python 3.X to PATH'. Source code: a word followed by parentheses () is a function, and a function is 'called'. Unit 4. Basic syntax. Semicolons
###Code
print('Hello, World!')
print('Wow'); print('Good!')
###Output
Hello, World!
Wow
Good!
###Markdown
Python does not put a semicolon at the end of a statement. Semicolons are used to write several statements on a single line. Comments
###Code
# print('Hello World!')
a = 1 + 2 # addition
print(a)
###Output
3
###Markdown
A comment is written by putting # in front of the code; everything after it is not executed. Korean comments are only supported with UTF-8 encoding: if a Korean comment appears broken, the file may use a different encoding, so re-save it as UTF-8. Indentation: leaving out the indentation in an if statement (and similar constructs) is a syntax error. You can indent with 2 spaces, 4 spaces, or 1 tab. Python forms code blocks based on indentation, and lines in the same block must be indented by the same amount.
###Code
if a == 10:
    print('10')
    print('입니다')
###Output
_____no_output_____
###Markdown
Unit 5. Calculating with numbers. Calculating with integers. Numeric types: integer (int), real number (float), complex number (complex)
###Code
1 + 1 # addition
2 - 1 # subtraction
3 * 5 # multiplication
5 / 2 # division
# In Python 3, dividing integers gives a float
6 / 3 # even values that divide evenly come out as floats
5 // 2 # integer division: drops everything after the decimal point and returns an integer
# this operation is called floor division
12.5 // 5 # however, floor division on floats returns a float
7 % 3 # operator that gives the remainder after division
2 ** 10 # exponentiation
int(6 / 3) # force a value to be an integer
type(10) # find out the type of an object
###Output
_____no_output_____
###Markdown
Calculating with real numbers
###Code
4.3 - 2.7 # why is there a rounding error?
0.1 + 0.2 == 0.3
float(5) # force a value to be a float
###Output
_____no_output_____
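###Markdown
The comparison above is False because 0.1, 0.2 and 0.3 cannot be represented exactly in binary floating point. A small illustrative cell (added for clarity, not part of the original unit) showing two common ways to compare floats:
###Code
import math
print(0.1 + 0.2)                      # 0.30000000000000004 due to binary representation
print(math.isclose(0.1 + 0.2, 0.3))   # True: compare with a tolerance instead of ==
print(round(0.1 + 0.2, 1) == 0.3)     # True: rounding also works for simple cases
###Output
_____no_output_____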
###Markdown
The Python shell is for seeing results immediately: 1 + 1 is printed right away, but in a Python script it is not. To print a calculation result from a script file, you have to use print(1 + 1).
###Code
x = 10; y = 'Hello World!'
type(x)
type(y)
x += 20 # same meaning as x = x + 20
x
del x # use del to delete a variable
x
z = None # create an empty variable
print(z)
z
###Output
_____no_output_____
###Markdown
Using the input function: a function that stores an entered value in a variable
###Code
input()
input('문자열을 입력하세요: ')
a = input('숫자를 입력하세요: ')
b = input('숫자를 입력하세요: ')
print(a + b) # why does this happen?
###Output
숫자를 입력하세요: 40
숫자를 입력하세요: 30
4030
###Markdown
The value returned by input is always a string.
###Code
a = input()
type(a)
a = int(input('숫자를 입력하세요: ')) # to add them properly, convert to int
b = int(input('숫자를 입력하세요: ')) # to add floats, use float instead of int
print(a + b)
###Output
숫자를 입력하세요: 40
숫자를 입력하세요: 30
70
###Markdown
Storing the input in two variables: var1, var2 = input('string').split('separator string')
###Code
a, b = input('문자열 두 개를 입력하세요: ').split()
print(a + b) # strings split with split() are still strings
a, b = input('문자열 두 개를 입력하세요: ').split()
print(int(a) + int(b)) # to add as numbers, convert to int
a, b = map(int, input('문자열 두 개를 입력하세요: ').split()) # map converts the strings to ints in one step
print(a + b)
###Output
문자열 두 개를 입력하세요: 10 20
30
###Markdown
Unit 7. Exploring ways to print output. Printing several values
###Code
print(a, b, 20)
print(a, b, 20, sep='')
print(a, b, 20, sep=', ')
print(1920, 1080, sep='x')
###Output
1920x1080
###Markdown
Control characters
###Code
print(1, 2, 3, sep='\n')
print(1\n2\n3) # control characters are 'characters': using \n outside a string like this is a SyntaxError
print('1\n2\n3') # inside a string it works
print('\\n') # to print the \ character itself, write it twice
###Output
\n
###Markdown
Using end
###Code
print(10)
print(20)
print(30)
print(10, end='') # print's end parameter defaults to \n; passing an empty string removes the newline
print(20, end='')
print(30)
print(10, end=' ') # with end=' ' the values are separated by a single space as shown below
print(20, end=' ')
print(30)
###Output
10 20 30
###Markdown
Unit 8. Using booleans, comparison and logical operators. A boolean is a value that represents True or False.
###Code
3 > 1 # greater than
9 >= 9 # greater than or equal
10 == 10 # equal
10 != 10 # not equal
'Python' == 'python'
###Output
_____no_output_____
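###Markdown
The heading above also mentions logical operators, which the cell above does not show. A small illustrative cell (added for completeness) demonstrating and, or and not:
###Code
print(True and False)       # False: and is True only when both sides are True
print(True or False)        # True: or is True when at least one side is True
print(not True)             # False: not inverts a boolean value
print(10 == 10 and 5 > 3)   # True: comparisons can be combined with logical operators
###Output
_____no_output_____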
###Markdown
Unit 9. Using strings. A string can be written in four different ways: single quotes, double quotes, three single quotes, or three double quotes.
###Code
a = 'He said "Python is not difficult."'
a
b = '''Hello
안녕하세요
Python입니다.'''
b
print(b) # use three double quotes or three single quotes for strings that span several lines
c = """Hello,
'Python'"""
print(c)
d = 'Python isn\'t difficult.' # to use a single quote inside single quotes, escape it with \
print(d)
e = 'Hello, \n\'Python!\'' # to start a new line without using triple quotes, write \n
print(e)
# practice problem
s = 'Python is a Programming language that you let work quickly\nand\nintegrate systems more effectively.'
print(s)
###Output
Python is a Programming language that you let work quickly
and
integrate systems more effectively.
###Markdown
Unit 10. Lists and tuples. A list is used to store several values; it is enclosed in square brackets and the values are separated by commas. Each value stored in a list is called an 'element'.
###Code
a = [14, 15, 18, 23, 45]
a
b = ['LoL', 24, 23.5, True] # elements of different types can also be mixed
b
c = [] # creating an empty list, option 1
c
d = list() # creating an empty list, option 2
d
###Output
_____no_output_____
###Markdown
Creating a list with range
###Code
range(10)
list(range(0,10)) # list(range(start,stop,step)); if step is omitted it defaults to 1. start is included but stop is not
list(range(0,10,2))
list(range(0,11,2))
list(range(5,12,3))
list(range(10,0,-1)) # a negative step produces a decreasing list
###Output
_____no_output_____
###Markdown
Tuples. A tuple is a list whose stored elements cannot be changed, added, or removed; it is easiest to think of it as a read-only list. Why use tuples? Because the elements cannot be changed, added, or removed, they are used when the elements need to be protected.
###Code
a = (1, 2, 3) # enclosed in parentheses instead of square brackets
a
b = ('Lol', 23, 27.4, True) # like lists, tuples can hold elements of different types
b
(45,) # a tuple with one element: without the comma it is just a plain value, not a tuple
c = tuple() # creating an empty tuple
c
tuple(range(0,10,1)) # range works with tuples as well
# turning a list into a tuple
a = [1, 2, 3]
tuple(a)
# turning a tuple into a list
b = (1, 2, 4)
list(b)
# putting a string into a list
list('Python')
# putting a string into a tuple
tuple('Python')
###Output
_____no_output_____
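###Markdown
A small illustrative cell (added for clarity) showing that a tuple really is read-only: trying to change an element raises a TypeError.
###Code
t = (1, 2, 3)
try:
    t[0] = 99          # tuples do not support item assignment
except TypeError as e:
    print('TypeError:', e)
###Output
_____no_output_____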
###Markdown
Assigning the elements of a list or tuple to several variables is called list unpacking or tuple unpacking.
###Code
# list unpacking
x = [1, 2, 3]
a, b, c = x
print(a, b, c)
# tuple unpacking
y = (2, 6, 3)
a, b, c = y
print(a, b, c)
###Output
2 6 3
|
indevelopment/module9/English/C/README.ipynb | ###Markdown
Multi-device Programming This lab is intended for C/C++ programmers. --- IntroductionMany modern machines are now coming equipped with multiple parallel devices. Additionally, there are also networks of interconnected machines that we can program for. To give an idea of what these machines look like, we can look at (currently as of the making of these lab activities) the largest supercomputer in the world, [Summit from Oak Ridge National Lab.](https://www.olcf.ornl.gov/summit/) This is a machine that connects thousands of individual machines (referred to as nodes, in this context), and each node has 6 NVIDIA GPUs. So the goal of this lab is to give you the skills you would need to program for a machine like Summit.There are a few ways that you can use multiple devices with OpenACC:* Use OpenACC async to asynchronously launch work to multiple devices* Use OpenMP to launch multiple OpenMP threads and allow each thread to use a device* Use MPI for distributed programming, splitting the devices across MPI ranksWe will visit each of these individually, and complete an exercise for each. --- Multi-Device with OpenACC AsyncIn a previous lab, we have used the async clause to launch multiple tasks on our device for asynchronous execution. Now, we will use the async clause to launch tasks on more than one device. Let's see an example:```// Loop 1pragma acc parallel loopfor(int i = 0; i < N; i++) { A[i] = max(A[i], 0);}// Loop 2pragma acc parallel loopfor(int i = 0; i < M; i++) { B[i] = max(B[i], 0);}```Our goal is to run Loop 1 on one of our devices, and Loop 2 on another. Let's first go over everything we need to know to make this happen. Finding the Number of Available DevicesIt is often best to write code to be able to use a variable number of devices. OpenACC allows us to check the number of devices available with a built-in function.**acc_get_num_devices( devicetype )**The device type is which kind of device you want to find. For this lab, and in most cases, it is sufficient to use **acc_device_default**. Switching to a Different DeviceOur devices will be numbered from 0 -> num_devices-1. The active device is default to device0. We can change that with this built-in function:**acc_set_device_num( devicenum, devicetype )**We will again use acc_device_default for the device type, and for our code example we will use devices 0 and 1. OpenACC asyncThis was covered in an earlier lab, but let's do a quick recap. Normal OpenACC behavior is for the host thread to pause and wait for all OpenACC regions to finish. This could be any of the parallel, kernels, enter data, exit data, or update directives. If we add the async clause to any of these directives, we tell the host thread **not to wait**. This means that we can start a parallel loop on a device, and, without waiting, continue on to any other unrelated host code. We will use this feature to launch a kernel on one device asynchronously. Then, switch to a different device and launch another kernel. A Simple Multi-Device Code```int ndevice = acc_get_num_devices(acc_device_default);acc_set_device_num( 0, acc_device_default );// Loop 1pragma acc parallel loop asyncfor(int i = 0; i < N; i++) { A[i] = max(A[i], 0);}acc_set_device_num( 1, acc_device_default );// Loop 2pragma acc parallel loop asyncfor(int i = 0; i < M; i++) { B[i] = max(B[i], 0);}``` Special CasesMulti-device with async is the least straightforward of our three methods. 
There are a few things that we have to take special care with for it to work properly.1) Adding async to an enter/exit data directive might not always work the way we want it to. Our host thread may have trouble making enter/exit data call truly asynchronous because of the allocation/deallocation it has to do. Because of this, it is recommended to do device allocation/deallocation synchronously at the beginning and end, and use asynchronous updates whenever we need to move data.```int ndevice = acc_get_num_devices(acc_device_default);for(int n = 0; n < ndevices; n++) { acc_set_device_num( n, acc_device_default); pragma acc enter data create( ... )}// Multi-device Codepragma acc update self( ... ) asyncfor(int n = 0; n < ndevices; n++) { acc_set_device_num( n, acc_device_default); pragma acc exit data delete( ... )}```2) One important aspect of using OpenACC async is waiting at certain points of the code for synchronization. This is accomplished with the **wait** directive. There is not a way to wait for all devices in a single directive, so whenever we do synchronization, we must loop through all devices, and wait for each.```for(int n = 0; n < ndevices; n++) { pragma acc parallel loop async // Loop Code}for(int n = 0; n < ndevices; n++) { pragma acc wait}``` --- A More Realistic Code ExampleThe code we just looked at is far too basic to be applicable to most real-world applications. Let's look at a slightly more complex code called Mandelbrot. If you have followed along with the lecture material, you may be familar with the Madelbrot code. It is an image processing code that creates a "mandelbrot" image. The code is somewhat straightfoward. We are creating an image a single pixel at a time, and each pixel is independent of each other. Our goal for making it multi-device is to break the image creation into multiple blocks, and to compute each block on a different device. Here is the code:```const int w = 4098;const int h = 4098;int ndevices = acc_get_num_devices( acc_device_default );int rowsPerBlock = h / ndevices;int blockSize = w*h / ndevices;for(int n = 0; n < ndevices; n++) { acc_set_device_num( n, acc_device_default ); pragma acc enter data create(outImage[n*rowsPerBlock:blockSize])}for(int n = 0; n < ndevices; n++) { acc_set_device_num( n, acc_device_default ); pragma acc parallel loop async for(int y = n*rowsPerBlock; y < (n+1)*rowsPerBlock; y++) { for(int x = 0; x < w; x++) { outImage[y*w + x] = mandelbrot(x, y); } } pragma acc update self(outImage[n*rowsPerBlock:blockSize]) async}for(int n = 0; n < ndevices; n++) { acc_set_device_num( n, acc_device_default ); pragma acc wait pragma acc exit data delete(outImage[n*rowsPerBlock:blockSize])}```This code combines all the techniques discussed previously, and is a good template for the code we will work on in this lab. The code we are working on is found in [filter.c](/view/C/Async/filter.c), and the functions are called from [main.cpp](/view/C/Async/main.cpp). This code takes an image as input, and will apply a blurring function to each pixel independently. If you want to see the image that we are working with, you may do so [here.](/view/C/Async/costarica.jpg) Run the code below to get some baseline performance.
###Code
!make clean -C Async && make -C Async
###Output
_____no_output_____
###Markdown
Your goal is to implement all the techniques displayed in Mandelbrot to this code. This includes getting the number of devices, handling all of the device allocation/deallocation, and launching the loops and updates asynchronously on multiple devices.
Edit the code in [filter.c](/edit/C/Async/filter.c) and add multi-device support to the code. When you are ready to test your changes, run the code below.
###Code
!make clean -C Async && make -C Async
###Output
_____no_output_____
###Markdown
If you want to view the image that is being created, you can do so [here.](/view/C/Async/out.jpg)If you are having trouble getting it to work, we recommend trying to make changes one step at a time. The first step is getting it to work with the multi-device blocking strategy, but only running on a single device. Then you can try adding the multi-device functions and async to run it on multiple devices.Lastly, if you want to check your answers, you can see our solution [here.](/edit/C/Async/Solution/filter.c) --- Multi-Device with OpenMP + OpenACCThe main benefit of using OpenMP over Async is that OpenMP gives us several threads to work with, rather than a single host thread. Each thread can use its own device, and we do not have to worry about the same synchronization problems as before. You do not need to be very familar with OpenMP to do multi-device programming; you will only need to learn some basic OpenMP loop syntax.There are three primary strategies we can use OpenMP to achieve multi-device programming:1) Create a loop that goes over all devices and parallelize that loop with OpenMP. This is a similar strategy that we used when doing multi-device with async. Here is an example:```int ndevices = acc_get_num_devices( acc_device_default );pragma omp parallel forfor(int n = 0; n < ndevices; n++) { acc_set_device_num( n, acc_device_default ); pragma acc parallel loop for( ... ) // Notice that async is no longer needed}```2) Break the code into however many blocks we desire. Parallelize those blocks with OpenMP, and be sure to set the number of OpenMP threads to match the number of devices. Then, each OpenMP thread will use a single device to compute a portion of the total blocks. Here is an example:```int ndevices = acc_get_num_devices( acc_device_default );int nblocks = 32;pragma omp parallel for num_threads(ndevices)for(int n = 0; n < nblocks; n++) { int tid = omp_get_thread_num(); acc_set_device_num( tid, acc_device_default ); pragma acc parallel loop for( ... )}```3) Create an OpenMP parallel region, setting the number of OpenMP threads to match the number of devices. Give each thread a single device, and run whatever computation you need. This is similar to the previous example, but we only need to call **acc_set_device_num** once per thread.```int ndevices = acc_get_num_devices( acc_device_default );pragma omp parallel num_threads(ndevices){ int tid = omp_get_thread_num(); acc_set_device_num( tid, acc_device_default ); pragma acc parallel loop for( ... ) { }}```We can also use this to apply the code blocking similar to strategy 2:```int ndevices = acc_get_num_devices( acc_device_default );int nblocks = 32;pragma omp parallel num_threads(ndevices){ int tid = omp_get_thread_num(); acc_set_device_num( tid, acc_device_default ); pragma omp for for( int block = 0; block < nblocks; block++ ) { pragma acc parallel loop for( ... 
) { } }}``` Madelbrot Example and ExerciseTo prepare for applying OpenMP to our code, let's look at the Mandelbrot example again (using strategy 3).```const int w = 4098;const int h = 4098;int ndevices = acc_get_num_devices( acc_device_default );int nblocks = 32;int rowsPerBlock = h / nblocks;int blockSize = w*h / nblocks;pragma omp parallel num_threads(ndevices){ int tid = omp_get_thread_num(); acc_set_device_num( tid, acc_device_default ); pragma omp for for(int n = 0; n < nblocks; n++) { pragma acc parallel loop copyout(outImage[n*rowsPerBlock:blockSize]) for(int y = n*rowsPerBlock; y < (n+1)*rowsPerBlock; y++) { for(int x = 0; x < w; x++) { outImage[y*w + x] = mandelbrot(x, y); } } // End OpenACC parallel loop } // End OpenMP block loop} // End OpenMP parallel region```Now, apply one of these strategies (or try all of them if you want) to our image filtering code. Run the code below and get some baseline performance.
###Code
!make clean -C OpenMP && make -C OpenMP
###Output
_____no_output_____
###Markdown
Now, edit the code however you like in [filter.c.](/edit/C/OpenMP/filter.c) Run the code below to check your results.
###Code
!make clean -C OpenMP && make -C OpenMP
###Output
_____no_output_____
###Markdown
If you would like to see our solution, you may do here [here.](/edit/C/OpenMP/Solution/filter.c), and if you want to see the output image again, to can view it [here.](/view/C/OpenMP/out.jpg) --- Multi-Device with MPI + OpenACCThis is by far the most practical use of multi-device with OpenACC. Many codes are already parallelized with MPI for CPUs, and OpenACC is meant to be easy to implement on-top-of a MPI code. This section will require a more high-level understanding of MPI than the OpenMP code did.MPI+OpenACC allows us to connect multiple devices across multiple nodes on a network. Here is a visualization of what we hope to accomplish:The basic premise of a MPI+OpenACC code is that we have a MPI code (with distributed memory and communcation), and within that code we accelerate our computationally intensive loops using OpenACC like normal. We can assign each MPI rank its own GPU, or we can allow multiple ranks to share a GPUs (though this is usually not beneficial). Let's look at a simple code example:```int main(int argc, char** argv){ MPI_Init(&argc, &argv); int nranks, rank; MPI_Comm_size(MPI_COMM_WORLD, &nranks); MPI_Comm_rank(MPI_COMM_WORLD, &rank); int ndevices = acc_get_num_devices(acc_device_default); acc_set_device_num(rank%ndevices, acc_device_default); if(rank == 0) { pragma acc parallel loop for( ... ) { } } else if(rank == 1) { pragma acc parallel loop for( ... ) { } } MPI_Finalize();```This example is much more basic than the codes you will most likely work with. So let's look again at our mandelbrot code. Mandelbrot Example```MPI_Init(&argc, &argv);int nranks, rank;MPI_Comm_size(MPI_COMM_WORLD, &nranks);MPI_Comm_rank(MPI_COMM_WORLD, &rank); int ndevices = acc_get_num_devices(acc_device_default);acc_set_device_num(rank%ndevices, acc_device_default);const int w = 4098;const int h = 4098;int rowsPerRank = h / nranks;int pixelsPerRank = (w*h) / nranks;int yStart = rank*rowsPerRank;int yEnd = yStart + rowsPerRank;pragma acc parallel loop copyout(outData[yStart*w:pixelsPerRank)for(int y = yStart; y < yEnd; y++) { for(int x = 0; x < w; x++) { outData[y*w + x] = mandelbrot(x, y); }}// MPI_Gather syntax ***************************************************// MPI_Gather(const void *sendbuf, int sendcount, MPI_Datatype sendtype,// void *recvbuf, int recvcount, MPI_Datatype recvtype,// int root, MPI_Comm comm)MPI_Gather(&outData[yStart*w], pixelsPerRank, MPI_UNSIGNED_INT, // Send info outData, pixelsPerRank, MPI_UNSIGNED_INT, // Recv info 0, MPI_COMM_WORLD); MPI_Finalize();```If you are more savvy with MPI, you might notice that a simple MPI_Gather here is somewhat lazy. It would be more correct to use MPI_Gatherv to account for image sizes that are not evenly divisible by the number of ranks. You may include this in your code, and our solution will also have it, but this lab is designed so that if you are less familar with MPI you do not need to worry about it to complete it. Image Filter ExerciseUsing mandelbrot as an example, let's parallelize our image filter code with MPI. We will start with a basic OpenACC (single device) implementation, and you will need to add all of the MPI interface to the code.We have already added everything needed to [main.c](/edit/C/MPI/main.c), so you do not need to edit this file. Instead, edit [filter.c](/edit/C/MPI/filter.c) and add the **acc_set_device_num** and the rest of the MPI code. We will post some helpful MPI syntax below, and some general tips:* You will need to distribute the input array across the MPI ranks. 
If you are unfamilar with MPI, we recommend using an MPI_Bcast to transer the entire input array to each rank. If you are more familiar with MPI, we recommend you use MPI_Scatterv to only transfer the data you need from the input array.* You will need to edit the Y loop to compute a subset of the rows based on MPI rank.* You will need to gather all of the output arrays at the end.MPI Syntax: \*note: for the MPI_Comm use the MPI_COMM_WORLD constant, for the MPI_Datatype use MPI_UNSIGNED_CHAR, and for root use 0\*```MPI_Bcast( void \*buffer, int count, MPI_Datatype datatype, int root, MPI_Comm comm ) MPI_Gather(const void \*sendbuf, int sendcount, MPI_Datatype sendtype, void \*recvbuf, int recvcount, MPI_Datatype recvtype, int root, MPI_Comm comm) MPI_Scatterv(const void \*sendbuf, const int \*sendcounts, const int \*displs, MPI_Datatype sendtype, void \*recvbuf, int recvcount, MPI_Datatype recvtype, int root, MPI_Comm comm)``` When you're ready to try out your changes, run the code below.
###Code
!make clean -C MPI && make -C MPI
###Output
_____no_output_____
###Markdown
If you would like to see our solution, you may do here [here](/edit/C/MPI/Solution/filter.c), and if you want to see the output image, you may do so [here](/view/C/MPI/out.jpg). --- Bonus Activity, Device-to-Device MPI Data TransfersWe can transfer data directly from device-to-device when calling MPI functions. We have to use a strategy covered in an earlier Lab; the OpenACC host_data directive. By using the **host_data** directive with the **use_device** clause, we can pass device pointers directly to our MPI function calls. And if your machine supports it, this will allow MPI to transfer data directly between devices. Here is an example:```pragma acc host_data use_device(A){ MPI_Gather(A + offset, size, MPI_INT, outData, size, MPI_INT, 0, MPI_COMM_WORLD); }```If you would like to test it out, go back to the code you wrote for this lab, and trying adding the host_data directive to your MPI calls. [Here is a shortcut to filter.c.](/edit/C/filter.c)
###Code
!make clean -C MPI && make -C MPI
###Output
_____no_output_____
###Markdown
--- Post-Lab Summary
If you would like to download this lab for later viewing, it is recommended you go to your browser's File menu (not the Jupyter notebook file menu) and save the complete web page. This will ensure the images are copied down as well.
You can also execute the following cell block to create a zip-file of the files you've been working on, and download it with the link below.
###Code
%%bash
rm -f openacc_files.zip
zip -r openacc_files.zip *
###Output
_____no_output_____ |
code/Downlodaing_lyrics_data.ipynb | ###Markdown
Data Collection
###Code
%load_ext watermark
%watermark -a "Datta Tele" -d -v -m -g -p numpy,scipy,matplotlib,sympy
###Output
Datta Tele 2017-09-24
CPython 3.6.1
IPython 5.3.0
numpy 1.12.1
scipy 0.19.0
matplotlib 2.0.2
sympy 1.0
compiler : MSC v.1900 64 bit (AMD64)
system : Windows
release : 10
machine : AMD64
processor : Intel64 Family 6 Model 60 Stepping 3, GenuineIntel
CPU cores : 8
interpreter: 64bit
Git hash :
###Markdown
Downloading the Dataset
Top inspiring Christian music and non-Christian music is collected through web scraping and downloaded as a CSV file. [Only artist name and title]
###Code
import pandas as pd
df = pd.read_csv("christian_Songs.csv")
df.head()
###Output
_____no_output_____
###Markdown
Downloading lyrics by artist and song title
Below we will download the lyrics from __[LyricWikia](http://lyrics.wikia.com/wiki/Lyrics_Wiki)__ for my favorite song "Steady" by "For King and Country".
###Code
import urllib.parse
import unicodedata
import lxml.html
class Song(object):
def __init__(self, artist, title):
self.artist = self.__format_str(artist)
self.title = self.__format_str(title)
self.url = None
self.lyric = None
def __format_str(self, s):
# remove paranthesis and contents
s = s.strip()
try:
# strip accent
s = ''.join(c for c in unicodedata.normalize('NFD', s)
if unicodedata.category(c) != 'Mn')
except:
pass
s = s.title()
return s
def __quote(self, s):
return urllib.parse.quote(s.replace(' ', '_'))
def __make_url(self):
artist = self.__quote(self.artist)
title = self.__quote(self.title)
artist_title = '%s:%s' %(artist, title)
url = 'http://lyrics.wikia.com/' + artist_title
self.url = url
def update(self, artist=None, title=None):
if artist:
self.artist = self.__format_str(artist)
if title:
self.title = self.__format_str(title)
def lyricwikia(self):
self.__make_url()
try:
doc = lxml.html.parse(self.url)
lyricbox = doc.getroot().cssselect('.lyricbox')[0]
except (IOError, IndexError) as e:
self.lyric = ''
return self.lyric
lyrics = []
for node in lyricbox:
if node.tag == 'br':
lyrics.append('\n')
if node.tail is not None:
lyrics.append(node.tail)
self.lyric = "".join(lyrics).strip()
return self.lyric
song = Song(artist='For King & Country', title='Steady')
lyr = song.lyricwikia()
print(lyr)
###Output
My constant solid ground
You are my lantern in the night
When I'm twisted up and shaken
You're the one I put my faith in
Yeah, you're the reason I survive
You keep me steady when the sky is falling
And I'll keep steady after You
I'll carry on when my strength is failing
Take heart 'cause You're with me
So let the sun stop, stars drop, whatever comes
I'll be ready, You keep me steady
You keep me steady
You're a river, You cover me
When the bombs fall, You're the cavalry
Somehow You're always standing right by my side
So no matter what I will be facing
I will not be over-taken
And You are the only reason why
You're my hiding place, my home
And fear cannot invade these four walls
I need You near, I need You here
You keep me steady
You keep me steady
###Markdown
Downloading lyrics for top inspiring songs and adding them to the dataframe
First, we add a new column for the lyrics, then iterate over the rows and download the lyrics based on the artist name and song title.
###Code
import pyprind
pbar = pyprind.ProgBar(df.shape[0])
for row_id in df.index:
song = Song(artist=df.loc[row_id]['artist'], title=df.loc[row_id]['title'])
lyr = song.lyricwikia()
df.loc[row_id,'lyrics'] = lyr
pbar.update()
df.tail()
df.head()
###Output
_____no_output_____
###Markdown
Remove empty rows where no lyrics were downloaded, and keep a backup
###Code
df.to_csv('lyrics_backup.csv')
df = df[df.lyrics!='']
df.describe()
df.to_csv("topsongs.csv")
###Output
_____no_output_____
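###Markdown
Before moving on, it can be useful to know how many songs failed to download. A small illustrative check (assuming the lyrics_backup.csv written above):
###Code
backup = pd.read_csv('lyrics_backup.csv', index_col=0)
n_missing = (backup['lyrics'].fillna('') == '').sum()
print(f"{n_missing} of {len(backup)} songs had no lyrics on LyricWikia")
###Output
_____no_output_____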
###Markdown
WordCloud
A word cloud is a technique for visualizing text data: a weighted list of the words that appear in a source of text, sized by how often they occur. Package source - __[https://github.com/amueller/word_cloud](https://github.com/amueller/word_cloud)__
###Code
wor = str(df['lyrics'])
worde = wor
###Output
_____no_output_____
###Markdown
Word cloud of top inspiring christian songs
###Code
from wordcloud import WordCloud
import matplotlib.pyplot as plt
from sklearn.feature_extraction import text
from wordcloud import WordCloud, STOPWORDS, ImageColorGenerator
import numpy as np
from PIL import Image
words = ' '.join(df['lyrics'])
wordcloud = WordCloud(
font_path="C://Users//datta//Fonts//Flux Regular.otf",
stopwords=STOPWORDS,
background_color='white',
width=800,
height=400
).generate(words)
plt.imshow(wordcloud)
plt.axis('off')
plt.savefig('./wordcloud_all_w.png', dpi=800)
plt.show()
lyr = ' '.join(df.loc[:, 'lyrics'])
wordcloud = WordCloud(
font_path="C://Users//datta//Fonts//Flux Regular.otf",
stopwords=STOPWORDS,
background_color='white',
width=800,
height=400
).generate(lyr)
plt.imshow(wordcloud)
plt.axis('off')
plt.savefig('./lyrworld.png', dpi=300)
plt.show()
sp_df = pd.read_csv("spotify_inspirational.csv")
sp_df.head()
import pyprind
pbar = pyprind.ProgBar(sp_df.shape[0])
for row_id in sp_df.index:
song = Song(artist=sp_df.loc[row_id]['artist'], title=sp_df.loc[row_id]['title'])
lyr = song.lyricwikia()
sp_df.loc[row_id,'lyrics'] = lyr
pbar.update()
sp_df.tail()
sp_df.head()
sp_df.to_csv("spotisongs.csv")
sp_df = sp_df[sp_df.lyrics!='']
sp_df.to_csv("spotisongsinsp.csv")
sp_df.head()
###Output
_____no_output_____
###Markdown
WordCloud for top inspiring non-christian songs
###Code
spotiwords =' '.join(sp_df['lyrics'])
wordcloud = WordCloud(
font_path="C://Users//datta//Fonts//Flux Regular.otf",
stopwords=STOPWORDS,
background_color='white',
width=1080,
height=720
).generate(spotiwords)
plt.imshow(wordcloud)
plt.axis('off')
plt.savefig('./spoticloud_all_w.png', dpi=300)
plt.show()
combine_df = pd.read_excel("topsongs.xlsx")
combine_df.head()
combine_df.tail()
###Output
_____no_output_____
###Markdown
WordCloud of all inspirational songs
###Code
combine_words =' '.join(combine_df['lyrics'])
wordcloud = WordCloud(
font_path="C://Users//datta//Fonts//Flux Regular.otf",
stopwords=STOPWORDS,
background_color='white',
width=1080,
height=720
).generate(combine_words)
plt.imshow(wordcloud)
plt.axis('off')
plt.savefig('./combinecloud_all_w.png', dpi=300)
plt.show()
blue = '#5A6FFA'
green = '#A3EB5B'
inspiring, inspiring_non_christian_music = sum(combine_df.loc[:, 'Category'] == 'inspiring'), sum(combine_df.loc[:, 'Category'] == 'inspiring_non_christian_music')
from matplotlib import rcParams
rcParams['font.size'] = 20
piechart = plt.pie(
(inspiring, inspiring_non_christian_music),
labels=('Inspiring christian music','Inspiring non christian music'),
shadow=True,
colors=(green, blue),
explode=(0,0.15), # space between slices
startangle=90, # rotate conter-clockwise by 90 degrees
autopct='%1.1f%%',# display fraction as percentages
)
plt.axis('equal')
plt.tight_layout()
plt.savefig('./pie_inspire.eps', dpi=300)
plt.savefig('./pie_inspire.png', dpi=300)
non_df = pd.read_csv("non_inspirational_christian.csv")
non_df.head()
import pyprind
pbar = pyprind.ProgBar(non_df.shape[0])
for row_id in non_df.index:
song = Song(artist=non_df.loc[row_id]['artist'], title=non_df.loc[row_id]['title'])
lyr = song.lyricwikia()
non_df.loc[row_id,'lyrics'] = lyr
pbar.update()
non_df.tail()
non_df.to_csv("noninspi.csv")
non_df = non_df[non_df.lyrics!='']
non_df.to_csv("nonsongs.csv")
non_df.head()
###Output
_____no_output_____
###Markdown
WordCloud of uninspiring christian songs as per __[Link](https://open.spotify.com/user/deschlerk/playlist/3wLJuG575lvw3dxSdHw3Yr)__
###Code
non_words =' '.join(non_df['lyrics'])
wordcloud = WordCloud(
font_path="C://Users//datta//Fonts//Flux Regular.otf",
stopwords=STOPWORDS,
background_color='white',
width=800,
height=500
).generate(non_words)
plt.imshow(wordcloud)
plt.axis('off')
plt.savefig('./combinecloud_all_w.png', dpi=300)
plt.show()
music_df = pd.read_excel("topsongs.xlsx")
music_df.head()
music_df.describe()
###Output
_____no_output_____
###Markdown
K Love
###Code
import pandas as pd
k_love_df = pd.read_csv('k_love_top_songs.csv')
k_love_df.head()
###Output
_____no_output_____
###Markdown
After cleaning
###Code
k_love_df.head()
###Output
_____no_output_____
###Markdown
Download lyrics and add to the dataframe
###Code
import pyprind
pbar = pyprind.ProgBar(k_love_df.shape[0])
for row_id in k_love_df.index:
song = Song(artist=k_love_df.loc[row_id]['artist'], title=k_love_df.loc[row_id]['title'])
lyr = song.lyricwikia()
k_love_df.loc[row_id,'lyrics'] = lyr
pbar.update()
k_love_df
###Output
0% [##########] 100% | ETA: 00:00:00
Total time elapsed: 00:00:07
###Markdown
We will add the remaining lyrics to the dataframe. First we need to save a backup.
###Code
k_love = pd.read_excel('k_love_songs.xlsx')
k_love
k_words =' '.join(k_love['lyrics'])
###Output
_____no_output_____
###Markdown
WordCloud of Top K_Love Inspiring songs
###Code
mask = np.array(Image.open("C://Users//datta//Movie_lens//lko.jpg"))
image_colors = ImageColorGenerator(mask)
wordcloud = WordCloud(
font_path="C://Users//datta//Fonts//Flux Regular.otf",
stopwords=STOPWORDS,
mask=mask,
background_color='white',
width=800,
height=500
).generate(k_words)
plt.imshow(wordcloud.recolor(color_func=image_colors))
plt.axis('off')
plt.savefig('./combinecloud_all_w.png', dpi=600)
plt.show()
###Output
_____no_output_____ |
Read_In_Tide_Gauge_Data.ipynb | ###Markdown
pwd
###Code
#
col = []
for val in ynew:
if val < 3.4:
col.append('white')
elif val <= 3.5:
col.append('green')
elif val <= 3.65:
col.append('red')
elif val <= 3.7:
col.append('azure')#three
elif val <= 3.75:
col.append('lightcyan')#one
elif val <= 3.8:
col.append('aliceblue')#14
elif val <= 3.85:
col.append('aqua')#17
elif val <= 3.9:
col.append('royalblue')#4
elif val <= 3.95:
col.append('mediumblue')#10
elif val <= 4.0:
col.append('midnightblue')#3
else:
col.append('red')
plt.bar(xnew,ynew, color=col)
plt.show()
len(ynew)
53/4
13.25*
###Output
_____no_output_____ |
Notebooks/Lab_01_Navigating_the_terminal.ipynb | ###Markdown
Lab 01: Navigating the terminal & looking at a brain image
In this section we will go over some basic commands on:
1) How to navigate the terminal and start a remote desktop on the server
2) Copy some brain files over from another directory
3) View the brain files using fslview
4) View the brain files using SPM
5) Install VPN
1. Logging in to the server
Make sure you are connected to the **Dartmouth Secure** network.
MAC users
The steps are quite simple for MAC users.
1) Open up the **Terminal** application. (search for Terminal in your Spotlight)
2) SSH using the following command, replacing {YOUR_ID} with the one given to you in class.
###Code
ssh -Y {YOUR_ID}@hera.dartmouth.edu
###Output
_____no_output_____
###Markdown
3) After you type in your password, you will notice that you are logged in as your ID and see a screen like this:
Congratulations, you opened up your first ssh connection to a remote cluster!
You can easily logout by typing in **exit**.
PC users
PCs don't come with Terminals for ssh-ing into remote servers. You will need to install PuTTY, which you can download from http://www.putty.org/
1) Install PuTTY
2) Open PuTTY
3) Configure the PuTTY settings as follows, then click **Open**
Host Name : hera.dartmouth.edu *or* eros.dartmouth.edu
Connection type : SSH
A Terminal window will open up and ask for your login and password. Enter your ID and PW as given by the instructor. You should see the same terminal window as above.
Change your password!
If you were given a temporary password, change your password (and please don't forget it) with the following command once you have connected to the server through SSH:
###Code
kpasswd
###Output
_____no_output_____
###Markdown
Setting up VNC You are welcome to just use the Terminal for the work but sometimes you might want a more interactive Desktop experience. To that end, it is nice to use a VNC. Another reason for VNC is that the connection above goes away when your internet connection goes down, or if you close terminal, or if the server reboots.To stay connected in the first two cases we are going to use Virtual Network Computing (VNC) which essentially makes a remote desktop on the server that you can connect to and see from your computer. MAC: If you have a MAC, VNC is already part of your computer (Finder -> Go -> Connect to Server). PC: If you have a PC then you will need to install a VNC client. Any should work but UltraVNC is pretty reliable: http://www.uvnc.com/ You will need to activate VNC on the server before you connect to it. With your SSH connection established earlier (with Terminal or PuTTy), use the following command to start a VNC session:
###Code
vncserver -geometry 1600x1200
###Output
_____no_output_____
###Markdown
You should see something like this: You will need to set a password for your VNC desktop connections. Please do so and remember it!Now you can connect to your VNC desktop: MAC: launch **Finder** -> click **Go** on the top menu panel -> click **Connect to Server** In the **Server Address** section, type in your vnc address `vnc://{eros_or_hera}.dartmouth.edu:59` and click **Connect** and enter your password. PC: Follow similar steps as the MAC via UltraVNC. Open UltraVNC Viewer Your address will be `vnc://{eros_or_hera}.dartmouth.edu:59` Both:with the last representing the desktop number returned above eros:"5". If it is a single digit then put a 0 in front of it. IE desktop 11 goes to 5911 and desktop 5 goes to 5905. For example:
###Code
vnc://eros.dartmouth.edu:5905
###Output
_____no_output_____
###Markdown
If all goes well, you should see a screen like this: Now, to open up your terminal window, click the **foot shaped icon** in the bottom left hand corner -> **System Tools** -> **Terminal** Then you will see your Terminal window pop up! Inside the terminal, to see if you have any open VNC windows you can use the following command:
###Code
lsvnc
###Output
_____no_output_____
###Markdown
To make sure that this VNC session stays active and doesn't log itself out we need to run reauth. In your terminal window type
###Code
reauth 18000 yourusername
###Output
_____no_output_____
###Markdown
Then minimize this terminal (don't close it) and open a new one as you did above.
2. Basics in navigating the terminal
Now we will learn about some basic commands to navigate in the terminal. These commands will become useful as you move around data files and call fMRI analysis programs. Think of the Terminal as navigating with Finder (mac) or File Browser (win) using just your keyboard. To get started, open up a Terminal window.
MAC: You can use the SSH session directly with the port forwarding option -Y (you might need to install xcode by running `xcode-select --install` in your local terminal).
PC: Open a terminal in your VNC as shown above.
First, you probably want to know which directory you are in. This is done through the `pwd` command, which stands for `print working directory`
###Code
pwd
###Output
_____no_output_____
###Markdown
You will see a result showing the current path which should look like this: `/afs/dbic.dartmouth.edu/usr/PBS60/pbs60a` The very first folder you log into is your "home" directory. Now let's look at what kind of files are in your current directory. To do this we use the command `ls` which is short for `list`
###Code
ls
###Output
_____no_output_____
###Markdown
You should now see that it shows 3 different folders: bin, matlab, and subjects. `bin matlab subjects` These are the current folders/files in your directory. You can change how you view the list using different options such as `ls -l`, which will show you the contents in list format, `ls -al`, which will also show you all hidden files, or `ls -lh`, which will show you the files in list format with human-readable sizes (a quick example of this appears after the next cell). We can also look at what files exist in each of the subfolders using `ls` followed by the wildcard `*`
###Code
ls *
###Output
_____no_output_____
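###Markdown
For example, one of the option combinations mentioned above, a human-readable long listing:
###Code
ls -lh
###Output
_____no_output_____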
###Markdown
The result should show you files within each subdirectory, such as the following. ```bin:matlab:startup.msubjects:``` If all goes well, your terminal should look like this: This is all good, and now lets try to actually navigate in and out of folders. We use the change directory command, which conveniently is `cd` followed by the folder you'd like to navigate to such as `matlab`. Note that in many cases you don't need to type in the full file name. Instead you can press **tab** to use tab-completion.
###Code
cd matlab
###Output
_____no_output_____
###Markdown
Now you will see that you've moved into the matlab folder (check with `pwd`) and the folder contains the startup.m file as we've seen before (check with `ls`). To move back to the original folder we were in, we could use the `cd` command again followed by a space and two periods, `cd ..`, which will move you up one folder. Alternatively, you could also just type `cd`, which will bring you back to your home directory.
###Code
cd ..
###Output
_____no_output_____
###Markdown
Now let's try navigating to another directory by going up one directory to /PBS60. We can do this by running `cd ..` again, which should put you in the `/afs/dbic.dartmouth.edu/usr/PBS60` folder (check with `pwd`).
###Code
cd ..
ls
###Output
_____no_output_____
###Markdown
This should show you all the other folders in the group such as: ```ALEX jcheong pbs60a pbs60c pbs60e pbs60g pbs60i pbs60k pbs60m pbs60o pbs60q pbs60s pbs60u pbs60w pbs60yDATA KRISTINA pbs60b pbs60d pbs60f pbs60h pbs60j pbs60l pbs60n pbs60p pbs60r pbs60t pbs60v pbs60x ```Now let's go into jcheong 's folder which will have a folder named `sub-sid000012` (check with `ls`). Navigate into the folder and list what's in the directory. You should see two folders `anat` and `func`
###Code
cd jcheong
ls
cd sub-sid000012
ls
###Output
_____no_output_____
###Markdown
These are folders with brain image files (NIfTI, or .nii) inside of them. You can check what's in each folder using `ls *`. Now, make note of the full path to the subject folder using `pwd`, which should look something like this: `/afs/dbic.dartmouth.edu/usr/PBS60/jcheong/sub-sid000012` 3. Copying files and directories. Now let's return to your home directory by simply typing `cd`. We are going to copy the `/sub-sid000012` folder over to your own directory. This can be achieved with the copy command: `cp -rv {input file/folder} {output file/folder}`. Let's put this in the subjects folder, so our command would look like the following. The `-r` option is necessary because it's a folder that requires recursive copying, and the `-v` option gives verbose details about the copy process. This will place the sub-sid000012 folder in your own subjects folder.
###Code
cp -rv /afs/dbic.dartmouth.edu/usr/PBS60/jcheong/sub-sid000012 subjects
###Output
_____no_output_____
###Markdown
Alternatively, you can use the `rsync` command with similar options. This is a more powerful/versatile command that allows copying both locally and remotely.
###Code
rsync -azvPhr /afs/dbic.dartmouth.edu/usr/PBS60/jcheong/sub-sid000012 subjects
###Output
_____no_output_____
###Markdown
The copying will take a couple of seconds. When complete, you can enter the `subjects` folder to check that the subject `sub-sid000012` copied successfully. Viewing a brain file using fslview Now cd into the `/subjects/sub-sid000012/anat` folder and check that the `sub-sid000012_acq-MPRAGE_T1w.nii.gz` file exists. We can open the file using `fslview` followed by the filename
###Code
fslview sub-sid000012_acq-MPRAGE_T1w.nii.gz
###Output
_____no_output_____
###Markdown
You should see a window pop up that looks like the following. Feel free to click around and admire the human brain imaging! Other useful commands: You will grow more comfortable as you use the terminal and persist through trial and error. Here are some other useful commands to try out and practice. globbing Glob is not a command but it allows you to grab multiple files with certain patterns in their file names. For example, you can grab all files with extension .nii.gz by `ls *.nii.gz` or rsync only those files with `rsync -azvphr *.nii.gz .` mkdir The `mkdir {directory name}` command lets you create a directory with the name of your choosing.Try making a directory with `myfirstdirectory` by `mkdir myfirstdirectory`
###Code
mkdir myfirstdirectory
###Output
_____no_output_____
###Markdown
mv The `mv {original filename} {destination filename}` command lets you move files or directories around without copying the files. For example you can use this to rename file or folder names. Try renaming your `myfirstdirectory` to `test_directory` using `mv myfirstdirectory test_directory`
###Code
mv myfirstdirectory test_directory
###Output
_____no_output_____
###Markdown
rm `rm -r {file or folder to delete}` is a dangerous command that you can use to delete files and folders. It does NOT give you any warnings, so you can actually delete the entire system by mistake. The `-r` option is necessary to delete a folder. Try deleting the `test_directory` with `rm -r test_directory`
###Code
rm -r test_directory
###Output
_____no_output_____
###Markdown
whichFor applications like fslview, you may want to know where the file lives. Using `which {application name}` like `which fslview` will tell you where the fslview file is.
###Code
which fslview
###Output
_____no_output_____
###Markdown
{command} --help For most commands, you can look up what arguments they require and what the functions do using the **--help** flag. Alternatively you can also look at the full manual using **man {command}**, exit by pressing **q** For instance:
###Code
which --help
fslview --help
man rm
###Output
_____no_output_____
###Markdown
touch `touch {filename}` will let you create a textfile. Try creating your own readme file such as `touch readme.txt`
###Code
touch readme.txt
###Output
_____no_output_____
###Markdown
vi or vim `vi` or `vim` are text editors for the terminal. You can access it directly or load the file into them, for example: `vi readme.txt`
###Code
vi readme.txt
###Output
_____no_output_____
###Markdown
To edit the file, you will need to press "i" on your keyboard which will let you type into the file. Once done, you can press "ESC" + ":" + "wq!" to save and exit the editor. 4. View brain images using different fMRI softwaresThere are three major fMRI analysis softwares that most researchers use: FSL, SPM, AFNI. Here we will briefly open up each program to give you a sense of what they are like. Viewing a brain file using FSLviewFSL was developed in Oxford, UK (https://fsl.fmrib.ox.ac.uk/fsl/fslwiki)FSLview is the image viewer used in FSL. (For newer versions of FSL it has been replaced with FSLeyes.) It is not an analysis console but you can get a sense of what brain images look like and what you are dealing with. In your terminal cd into the `/subjects/sub-sid000012/anat` folder and check that the `sub-sid000012_acq-MPRAGE_T1w.nii.gz` file exists. We can open the file using `fslview` followed by the filename
###Code
fslview sub-sid000012_acq-MPRAGE_T1w.nii.gz
###Output
_____no_output_____
###Markdown
You should see a window pop up that looks like the following. Feel free to click around and admire the human brain imaging! To launch the analysis console you would just type **fsl** into the terminal. Before you can type anything else in the terminal you must close fsl, or open another terminal. SPM - Statistical Parametric Mapping SPM is developed to be used in the MATLAB environment and was also created in UCL, UK (http://www.fil.ion.ucl.ac.uk/spm/doc/) It is fast, powerful, and (maybe) more scriptable but the downside is that you need to have a MATLAB license which might be expensive if you are not affiliated with a University that has a school-wide license.To open MATLAB on these servers you would normally just type 'matlab' but we are going to start MATLAB with some specific paths so we can run SPM and xjview (a add-on brain image viewer). From a terminal type:
###Code
spm12
###Output
_____no_output_____
###Markdown
Once this opens you should see something like this:Now that we have matlab open we can start SPM
###Code
spm fmri
###Output
_____no_output_____
###Markdown
You should be able to see something like this: SPM is the main program we will be using to do preprocessing and data analysis. Display a brain image in SPM by selecting "Display" and then finding the brain image you loaded earlier. Play around with the settings and click around the brain. Try adding an overlay. Next lets use another SPM-based program which can visualize brain images nicely - xjView. Close SPM by clicking the X in the top right of the SPM12 menu. Then start xjView from the matlab prompt
###Code
xjview
###Output
_____no_output_____
###Markdown
Now try to load the file /afs/dbic.dartmouth.edu/usr/PBS60/jhuckins/FoodVSPeopleOther_p001_vol_corrected.nii. xjView loads files on top of 'template' brain images. When you are done with xjView you can close it. To exit matlab type
###Code
exit
###Output
_____no_output_____
###Markdown
This will bring you back to the terminal and you can run AFNI or other programs from here. AFNI - Analysis of Functional Neuro-Images Lastly, we can use AFNI, which was developed by NIH folks in the US (https://afni.nimh.nih.gov/). To launch AFNI, you can just type afni in your terminal.
###Code
afni
###Output
_____no_output_____ |
docs/notebooks/Draw_Map_Demo.ipynb | ###Markdown
Draw Map Demo
###Code
import hydrofunctions as hf
hf.draw_map()
###Output
_____no_output_____ |
jupyter_notebooks/12_example_cross_validations.ipynb | ###Markdown
Demonstrate a way to do cross validation with DeepDive Note, this is not K-fold cross validation. Cross validation here is essentially resampling with replacement. We simply create the app once, then initdb and run, and accumulate stats for each run. We'll recreate one of our previous apps here, and loop through multiple initdb and run cycles in order to collect our stats. If you want true k-fold cross-validation: It would be a little more difficult to do K-fold cross validation because DeepDive creates the test and training splits itself randomly. And note, there is randomness both in the set selection for training and test AND in the specific number of elements in each of these sets. For example, when we specify in the app's .conf file 75% training, 25% test, DeepDive might instead take 73% and 27%. Start Server
###Code
!pg_ctl -D /usr/local/var/postgres -l /usr/local/var/postgres/server.log start # deepdive
###Output
pg_ctl: another server might be running; trying to start server anyway
server starting
###Markdown
Use the same app creation method and one of the experiment sets as previously. The app creation is the same as in ./11_2_aud_per_vs_others...
###Code
import os
import shutil
import errno
import glob
dd_sent_sources = '/Users/ccarey/Documents/Projects/NAMI/rdoc/tasks/task_data_sentences'
dd_app_dir = '/Users/ccarey/Documents/Projects/NAMI/rdoc/tasks/deepdive_app'
templates = '/Users/ccarey/Documents/Projects/NAMI/rdoc/tasks/templates_deepdive_app_bagofwords/'
conf_matrix_r_src = '/Users/ccarey/Documents/Projects/NAMI/rdoc/scripts/report_dd_confusion_matrix.R' # creates tsv and pdf reports.
def create_dd_app(template_dir, app_name, input_raw, input_annotated):
'''Sets up a DeepDive app based and populates the input data'''
try:
shutil.copytree(template_dir, app_name)
except OSError as err:
print("Error copying {} to {}: {}".format(template_dir, app_name, err))
# create / overwrite unique postgres db name for this app:
with open(os.path.join(app_name, 'db.url'), 'w') as f:
f.write('postgresql://localhost/{}\n'.format(app_name))
try:
shutil.copyfile(input_raw, os.path.join(app_name, 'input', 'raw_sentences'))
except OSError as err:
print("Error copying raw sentences: {}".format(err))
try:
shutil.copyfile(input_annotated, os.path.join(app_name, 'input', 'annotated_sentences'))
except OSError as err:
print("Error copying annotated sentences: {}".format(err))
def my_dd_table_sql_str():
'''Used to populate cc_all_predictions.tsv or such table in
deepdive app.
That tsv file in turn is used by our confusion matrix R Script to
generate stats and plots from our deepdive apps.
'''
cmd = ('SELECT a.has_term, r.terms, a.sentence_id, expectation '
'FROM '
'_annotated_sentences_has_term_inference as a JOIN '
'_raw_sentences as r ON '
'a.sentence_id = r.sentence_id '
'ORDER BY a.sentence_id')
return(cmd)
###Output
_____no_output_____
###Markdown
Create a deepdive app named according to its input data.
###Code
# all apps must be at depth = 1, in a deepdive_app home directory, in our case 'deepdive_app/'
%cd {dd_app_dir}
###Output
/Users/ccarey/Documents/Projects/NAMI/rdoc/tasks/deepdive_app
###Markdown
Make sure we know our input data before trying to create the app.
###Code
sentence_collections = !ls -d {dd_sent_sources}/Auditory_2_1000__vs__Arousal_1_1000__pdx__un_Aud_1_1000
sentence_input = sentence_collections[0]
app_name = 'AP_2_1000_vs_AR_1_1000_pdx_AP_1_1000_cval'
app_name
print('Creating deepdive_app {} at : {}'.format(app_name, os.getcwd()))
create_dd_app(template_dir=templates,
app_name = app_name,
input_raw=os.path.join(sentence_input, 'raw_sentences'),
input_annotated=os.path.join(sentence_input, 'annotated_sentences'))
###Output
Creating deepdive_app AP_2_1000_vs_AR_1_1000_pdx_AP_1_1000_cval at : /Users/ccarey/Documents/Projects/NAMI/rdoc/tasks/deepdive_app
###Markdown
Loop through the app 10 times, collecting stats.
###Code
cmd = 'SELECT a.has_term, r.terms, a.sentence_id, expectation FROM _annotated_sentences_has_term_inference as a JOIN _raw_sentences as r ON a.sentence_id = r.sentence_id ORDER BY a.sentence_id'
%cd {dd_app_dir}
%cd {app_name}
### TODO: we could make this more pythonic...
# In each loop, the RScript is generating new stats and confusion matrix files,
# we accumulate just the test portion each time.
# The '' as last argument to R was an alternate title for plots?
def run_deepdive(run_stats_fname):
!deepdive initdb 1> /dev/null 2>&1
!deepdive run 1> /dev/null 2>&1
!deepdive sql eval '{cmd}' format=tsv > cc_all_predictions.tsv # yes, "{cmd}" must be single or double quoted
!RScript {conf_matrix_r_src} cc_all_predictions.tsv "{run_stats_fname}" '' 1> /dev/null 2>&1
for i in range(0,10):
run_stats_fname = app_name + str(i)
run_deepdive(run_stats_fname)
pattern = '*cval*' + 'test_only*' + 'confmatr.tsv'
!cat {pattern} > 'cv_conf_matrix.tsv'
pattern = '*cval*' + 'test_only*' + 'stats.tsv'
!cat {pattern} >> 'cv_stats.tsv'
!grep '"Specificity\|"Accuracy\|Sensitivity"' cv_stats.tsv | sort -s -d -k 1,1 > cc_cross_validation.tsv
#cat cv_stats_matrix.tsv | grep '"Specificity\|"Accuracy\|Sensitivity"' | sort -s -d -k 1,1 > cc_cross_validation.tsv
###Output
_____no_output_____
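###Markdown
Optional: since the accumulated file has two tab-separated columns (the same two the R script below reads), a quick pandas summary of the cross-validation runs could look like this sketch:
###Code
import pandas as pd
# summarize the accumulated statistics across runs (mean, spread, and number of runs per statistic)
cv = pd.read_csv('cc_cross_validation.tsv', sep='\t', names=['stat', 'value'])
print(cv.groupby('stat')['value'].agg(['mean', 'std', 'count']))
###Output
_____no_output_____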
###Markdown
Generate the plot of the cross validation statistics. Run the following code in R in this deepdive app.
###Code
# TODO: make this into one of our scripts.
# R
library(ggplot2)
t <- read.table('cc_cross_validation.tsv', col.names = c('stat', 'value'), sep='\t')
pdf('cross_validation_plot.pdf')
ggplot(data=t, aes( x=as.factor(stat), y=value, color=stat)) + geom_point() + ggtitle('Cross validation plot larger Auditory set vs Arousal') + theme(axis.text.x = element_text(angle = 90)) + xlab('Cross validation Statistic')
dev.off()
###Output
_____no_output_____ |
notebooks/Alternative_F1_for_ALE_ILE clustering_2019-04-12.ipynb | ###Markdown
Table of Contents: 1 Basic settings; 2 Reference dataset; 3 ILE clustering (3.1 Min_word_count = 1 -- cell U52; 3.2 Grammar Tester results 2019-04-02; 3.3 ILE clustering, min_word_count = [31,21,11,6,1] -- lines 47-52, column U); 4 ALE clustering -- lines 47-52, columns Q:T; 5 Save results
Alternative F1 estimations for ALE and ILE clustering [ULL Project Plan ⇒ Parses ⇒ lines 47-52](https://docs.google.com/spreadsheets/d/1TPbtGrqZ7saUHhOIi5yYmQ9c-cvVlAGqY14ATMPVCq4/edit#gid=963717716&range=Q47:U52) Basic settings
###Code
import os, sys, time, itertools, operator, numpy as np
from collections import Counter, OrderedDict
mod_pth = os.path.abspath(os.path.join('..'))
if mod_pth not in sys.path: sys.path.append(mod_pth)
from src.grammar_learner.utl import UTC, kwa
from src.grammar_learner.read_files import check_dir, check_mst_files
from src.grammar_learner.widgets import html_table
from src.grammar_learner.write_files import list2file
from src.grammar_learner.grammar_checker import _compare_lg_dicts_
out_dir = mod_pth + '/output/Alternative_F1_ALE_ILE_' + str(UTC())[:10]
if check_dir(out_dir, True): print(out_dir)
###Output
/home/obaskov/94/ULL/output/Alternative_F1_ALE_ILE_2019-04-12
###Markdown
Reference dataset
###Code
corpus = 'GCB'; dataset = 'LG-E-noQuotes'
kwargs = {'mod_pth': mod_pth,
'reference_path': mod_pth + '/data/' + corpus + '/' + dataset}
files, re = check_mst_files(kwargs['reference_path']); files
###Output
_____no_output_____
###Markdown
ILE clustering `Min_word_count = 1` -- [cell U52](https://docs.google.com/spreadsheets/d/1TPbtGrqZ7saUHhOIi5yYmQ9c-cvVlAGqY14ATMPVCq4/edit#gid=963717716&range=U52)
###Code
mwc = 1
test_path = 'ILE-GCB-LG-E-noQuotes-LG-551-S94-2019-04-02'
test = '/GCB_LG-E-noQuotes_dILEd_no-gen'
kwargs['test_path'] = mod_pth + '/output/' + test_path + test
if mwc > 1: kwargs['test_path'] += '_mwc=' + str(mwc)
kwargs['test_path'] += '/GC_LGEnglish_noQuotes_fullyParsed.ull.ull'
precision, recall, f1, re = _compare_lg_dicts_(**kwargs)
print('Recall:\t\t', str(round(recall*100, 2)) + '%',
'\nPrecision:\t', str(round(precision*100, 2)) + '%',
'\nF1:\t\t ', round(f1,2),
'\nReference:\t', re['reference_sentences'], 'sentences',
'\nTest set:\t', re['test_ull_sentences'], 'sentences')
###Output
Recall: 56.14%
Precision: 97.76%
F1: 0.71
Reference: 68826 sentences
Test set: 68826 sentences
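###Markdown
As a quick sanity check (using the precision and recall printed above), F1 is just their harmonic mean:
###Code
p, r = 0.9776, 0.5614  # precision and recall from the cell above
print(round(2 * p * r / (p + r), 2))  # ~0.71, matching the reported F1
###Output
_____no_output_____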
###Markdown
Grammar Tester results [2019-04-02](http://langlearn.singularitynet.io/data/clustering_2019/ILE-GCB-LG-E-noQuotes-LG_551-S94-2019-04-02/GCB_LG-E-noQuotes_dILEd_no-gen/GC_LGEnglish_noQuotes_fullyParsed.ull.stat)
###Code
with open(kwargs['test_path'][:-3] + 'stat', 'r') as f: stat = f.read() #; print(stat)
for i in [2,7,14,15,16,18]: print(stat.splitlines()[i])
# Check Grammar Tester results: precision ~ recall / PA: (approximation)
pa = 0.6169
recall = 0.5961
alt_precision = recall / pa
alt_f1 = 2 * alt_precision * recall / (alt_precision + recall);
print('Alternative precision estimation ~', str(round(alt_precision*100, 2))
+ '%, F1 ~', round(alt_f1, 2))
###Output
Alternative precision estimation ~ 96.63%, F1 ~ 0.74
###Markdown
ILE clustering, `min_word_count = [31,21,11,6,1]` -- [lines 47-52, column U](https://docs.google.com/spreadsheets/d/1TPbtGrqZ7saUHhOIi5yYmQ9c-cvVlAGqY14ATMPVCq4/edit#gid=963717716&range=U47:U52)
###Code
ile = []
for mwc in [31,21,11,6,1]:
kwargs['test_path'] = mod_pth + '/output/' + test_path + test
if mwc > 1: kwargs['test_path'] += '_mwc=' + str(mwc)
kwargs['test_path'] += '/GC_LGEnglish_noQuotes_fullyParsed.ull.ull'
precision, recall, f1, r_ = _compare_lg_dicts_(**kwargs)
if 'error' in r_: print(r_)
if mwc == 1: ile.append(('ILE', 2, 'err'))
ile.append(('ILE', mwc, round(f1,2)))
display(html_table([['Clustering', 'MWC', 'F1']] + ile))
###Output
_____no_output_____
###Markdown
ALE clustering -- [lines 47-52, columns Q:T](https://docs.google.com/spreadsheets/d/1TPbtGrqZ7saUHhOIi5yYmQ9c-cvVlAGqY14ATMPVCq4/edit#gid=963717716&range=Q47:T52)
###Code
def f1_set(clusters, **kwargs):
tp = kwargs['mod_pth'] + '/output/' + kwargs['test_path'] \
+ '/GCB_LG-E-noQuotes_cALWEd_no-gen'
results = []
for mwc in [31,21,11,6,2,1]:
kwargs['test_path'] = tp
if mwc > 1: kwargs['test_path'] += '_mwc=' + str(mwc)
kwargs['test_path'] += '/GC_LGEnglish_noQuotes_fullyParsed.ull.ull'
precision, recall, f1, r_ = _compare_lg_dicts_(**kwargs)
if 'error' in r_: print(r_)
results.append((clusters, mwc, round(f1,2)))
return(results)
tests = []
kwargs['test_path'] = 'cALEd-50-GCB-LG-E-noQuotes-2019-04-04'
tests.extend(f1_set(50, **kwargs))
kwargs['test_path'] = 'cALEd-500-GCB-LG-E-noQuotes-S94-2019-04-02'
tests.extend(f1_set(500, **kwargs))
kwargs['test_path'] = 'cALEd-1000-GCB-LG-E-noQuotes-2019-04-03'
tests.extend(f1_set(1000, **kwargs))
kwargs['test_path'] = 'cALEd-2000-GCB-LG-E-noQuotes-2019-04-03'
tests.extend(f1_set(2000, **kwargs))
tests.extend(ile)
###Output
_____no_output_____
###Markdown
Save results
###Code
table = [['MWC', 'ALE50', 'ALE500', 'ALE1000', 'ALE2000', ' ILE ']]
for mwc in [31,21,11,6,2,1]:
table.append([str(mwc)] + [str(t[2]) for t in tests if t[1]==mwc])
print(list2file(table, out_dir + '/Alternative_F1_lines_47-52.txt'))
###Output
MWC ALE50 ALE500 ALE1000 ALE2000 ILE
31 0.57 0.66 0.65 0.65 0.59
21 0.59 0.68 0.68 0.67 0.62
11 0.6 0.71 0.71 0.7 0.66
6 0.6 0.73 0.72 0.72 0.69
2 0.61 0.74 0.73 0.72 err
1 0.62 0.74 0.73 0.72 0.71
|
03-dask-basics.ipynb | ###Markdown
Machine Learning on Big Data with Dask Introduction to Dask Before we get into too much complexity, let's talk about the essentials of Dask. What is Dask? Dask is an open-source framework that enables parallelization of Python code. This can be applied to all kinds of Python use cases, not just machine learning. Dask is designed to work well on single-machine setups and on multi-machine clusters. You can use Dask with pandas, NumPy, scikit-learn, and other Python libraries. If you want to learn more about the other areas where Dask can be useful, there's a [great website explaining all of that](https://dask.org/). Why Parallelize? For machine learning use cases, parallelizing work with Dask can be useful if:
- Data sizes exceed memory of a single node
- Complex data transformation that is slow on a single node
- Complex models that require a lot of resources
- Many compute tasks that can execute at the same time (think hyperparameter tuning, ensemble models)
Initialize Dask cluster The `dask_saturn` package makes the Dask Cluster that we created from Saturn Cloud accessible in our notebook. If the cluster was already created, we would not need to specify any arguments when initializing `SaturnCluster`, but it is a good idea to do so for reproducibility purposes. The arguments to `SaturnCluster` match the fields presented when editing a Dask Cluster from the Saturn Cloud.
###Code
from dask_saturn import SaturnCluster
from dask.distributed import Client
cluster = SaturnCluster(
scheduler_size='medium',
worker_size='xlarge',
n_workers=5,
nthreads=4,
)
client = Client(cluster)
###Output
_____no_output_____
###Markdown
To see the options for scheduler and worker sizes, and how they match up to the options presented in Saturn Cloud, run the following:
###Code
from dask_saturn.core import describe_sizes
describe_sizes()
###Output
_____no_output_____
###Markdown
The `Client` object is our "entry point" to Dask. Most Dask operations will automatically detect the client and run operations across the cluster, but sometimes it's necessary to pass a `client` object when performing more advanced operations. Previewing the `client` object shows us details about the cluster and a link to the Dashboard. Open up the Dashboard now and keep it visible in a separate window - you'll see it light up when we run Dask operations!
###Code
client
###Output
_____no_output_____
###Markdown
The following cell will block until all workers are available. You can also view cluster status and access the Dashboard link from the Project page in Saturn Cloud.
###Code
client.wait_for_workers(5)
print('Ready to go!')
###Output
_____no_output_____
###Markdown
Lazy evaluation - dask.delayed Delaying a task with Dask can queue up a set of transformations or calculations so that it's ready to run later, in parallel. This is what's known as "lazy" evaluation - it won't evaluate the requested computations until explicitly told to. This differs from other kinds of functions, which compute instantly upon being called. Many very common and handy functions are ported to be native in Dask, which means they will be lazy (delayed computation) without you ever having to even ask. However, sometimes you will have complicated custom code written in pandas, scikit-learn, or even base Python that isn't natively available in Dask. Other times, you may just not have the time or energy to refactor your code into Dask, if edits are needed to take advantage of native Dask elements. If this is the case, you can decorate your functions with `@dask.delayed`, which will manually establish that the function should be lazy, and not evaluate until you tell it. You'd tell it with the methods `.compute()` or `.persist()`, described in the next section. We'll use `@dask.delayed` several times in this workshop to make PyTorch tasks easily parallelized. Let's start with a small example. We have a function `multiply()` that multiplies two numbers together. We can call the function to see its result:
###Code
def multiply(x, y):
return x * y
multiply(2, 3)
###Output
_____no_output_____
###Markdown
Now we can decorate the function with `@dask.delayed` to indicate that we want to the function to execute lazily on our cluster:
###Code
import dask
@dask.delayed
def multiply_dask(x, y):
return x * y
multiply_dask(2, 3)
###Output
_____no_output_____
###Markdown
This is quite different output than calling a normal function. This is because Dask hasn't done anything yet! Call `.compute()` to get the actual result.> Tip: Open up the Dask Dashboard to see the task executing on the cluster!
###Code
multiply_dask(2, 3).compute()
###Output
_____no_output_____
###Markdown
We can even chain together multiple delayed functions:
###Code
x = multiply_dask(2, 3)
y = multiply_dask(3, 4)
z = x * y
z
###Output
_____no_output_____
###Markdown
Exercise: Get the result of `z`!
###Code
# <<< FILL IN >>>
z.compute()
###Output
_____no_output_____ |
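###Markdown
One more note (an aside, not part of the original exercise): `dask.compute` can evaluate several delayed objects in a single call, sharing any common intermediate tasks between them:
###Code
dask.compute(x, y, z)  # returns a tuple with the three results
###Output
_____no_output_____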
Natural Language Processing with Classification and Vector Spaces/Week 3 - Vector Space Models/NLP_C1_W3_lecture_nb_01.ipynb | ###Markdown
Linear algebra in Python with NumPy In this lab, you will have the opportunity to remember some basic concepts about linear algebra and how to use them in Python.Numpy is one of the most used libraries in Python for arrays manipulation. It adds to Python a set of functions that allows us to operate on large multidimensional arrays with just a few lines. So forget about writing nested loops for adding matrices! With NumPy, this is as simple as adding numbers.Let us import the `numpy` library and assign the alias `np` for it. We will follow this convention in almost every notebook in this course, and you'll see this in many resources outside this course as well.
###Code
import numpy as np # The swiss knife of the data scientist.
###Output
_____no_output_____
###Markdown
Defining lists and numpy arrays
###Code
alist = [1, 2, 3, 4, 5] # Define a python list. It looks like an np array
narray = np.array([1, 2, 3, 4]) # Define a numpy array
###Output
_____no_output_____
###Markdown
Note the difference between a Python list and a NumPy array.
###Code
print(alist)
print(narray)
print(type(alist))
print(type(narray))
###Output
_____no_output_____
###Markdown
Algebraic operators on NumPy arrays vs. Python lists One of the common beginner mistakes is to mix up the concepts of NumPy arrays and Python lists. Just observe the next example, where we add two objects of the two mentioned types. Note that the '+' operator on NumPy arrays performs an element-wise addition, while the same operation on Python lists results in a list concatenation. Be careful while coding. Knowing this can save many headaches.
###Code
print(narray + narray)
print(alist + alist)
###Output
_____no_output_____
###Markdown
It is the same as with the product operator, `*`. In the first case, we scale the vector, while in the second case, we concatenate the same list three times.
###Code
print(narray * 3)
print(alist * 3)
###Output
_____no_output_____
###Markdown
Be aware of the difference because, within the same function, both types of arrays can appear. Numpy arrays are designed for numerical and matrix operations, while lists are for more general purposes. Matrix or Array of Arrays In linear algebra, a matrix is a structure composed of n rows by m columns. That means each row must have the same number of columns. With NumPy, we have two ways to create a matrix:
* Creating an array of arrays using `np.array` (recommended).
* Creating a matrix using `np.matrix` (still available but might be removed soon).
NumPy arrays or lists can be used to initialize a matrix, but the resulting matrix will be composed of NumPy arrays only.
###Code
npmatrix1 = np.array([narray, narray, narray]) # Matrix initialized with NumPy arrays
npmatrix2 = np.array([alist, alist, alist]) # Matrix initialized with lists
npmatrix3 = np.array([narray, [1, 1, 1, 1], narray]) # Matrix initialized with both types
print(npmatrix1)
print(npmatrix2)
print(npmatrix3)
###Output
_____no_output_____
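###Markdown
A quick way to confirm the "n rows by m columns" structure mentioned above is the `shape` attribute (a small extra check, not part of the original lab):
###Code
print(npmatrix1.shape)  # (3, 4): 3 rows, 4 columns
###Output
_____no_output_____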
###Markdown
However, when defining a matrix, be sure that all the rows contain the same number of elements. Otherwise, the linear algebra operations could lead to unexpected results. Analyze the following two examples:
###Code
# Example 1:
okmatrix = np.array([[1, 2], [3, 4]]) # Define a 2x2 matrix
print(okmatrix) # Print okmatrix
print(okmatrix * 2) # Print a scaled version of okmatrix
# Example 2:
badmatrix = np.array([[1, 2], [3, 4], [5, 6, 7]]) # Define a matrix. Note the third row contains 3 elements
print(badmatrix) # Print the malformed matrix
print(badmatrix * 2) # It is supposed to scale the whole matrix
###Output
_____no_output_____
###Markdown
Scaling and translating matrices Now that you know how to build correct NumPy arrays and matrices, let us see how easy it is to operate with them in Python using the regular algebraic operators like + and -. Operations can be performed between arrays and arrays or between arrays and scalars.
###Code
# Scale by 2 and translate 1 unit the matrix
result = okmatrix * 2 + 1 # For each element in the matrix, multiply by 2 and add 1
print(result)
# Add two sum compatible matrices
result1 = okmatrix + okmatrix
print(result1)
# Subtract two sum compatible matrices. This is called the difference vector
result2 = okmatrix - okmatrix
print(result2)
###Output
_____no_output_____
###Markdown
The product operator `*`, when used on arrays or matrices, indicates element-wise multiplication. Do not confuse it with the dot product.
###Code
result = okmatrix * okmatrix # Multiply each element by itself
print(result)
###Output
_____no_output_____
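###Markdown
To make the contrast with the dot product concrete (an extra illustration, not part of the original lab), the `@` operator (or `np.matmul`) performs true matrix multiplication, which is different from the element-wise `*` above:
###Code
print(okmatrix * okmatrix)  # element-wise: each entry multiplied by itself
print(okmatrix @ okmatrix)  # matrix product: rows times columns
###Output
_____no_output_____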
###Markdown
Transpose a matrix In linear algebra, the transpose of a matrix is an operator that flips a matrix over its diagonal, i.e., the transpose operator switches the row and column indices of the matrix, producing another matrix. If the original matrix dimension is n by m, the resulting transposed matrix will be m by n. **T** denotes the transpose operation with NumPy matrices.
###Code
matrix3x2 = np.array([[1, 2], [3, 4], [5, 6]]) # Define a 3x2 matrix
print('Original matrix 3 x 2')
print(matrix3x2)
print('Transposed matrix 2 x 3')
print(matrix3x2.T)
###Output
_____no_output_____
###Markdown
However, note that the transpose operation does not affect 1D arrays.
###Code
nparray = np.array([1, 2, 3, 4]) # Define an array
print('Original array')
print(nparray)
print('Transposed array')
print(nparray.T)
###Output
_____no_output_____
###Markdown
perhaps in this case you wanted to do:
###Code
nparray = np.array([[1, 2, 3, 4]]) # Define a 1 x 4 matrix. Note the 2 level of square brackets
print('Original array')
print(nparray)
print('Transposed array')
print(nparray.T)
###Output
_____no_output_____
###Markdown
Get the norm of an nparray or matrix In linear algebra, the norm of an n-dimensional vector $\vec a$ is defined as: $$ norm(\vec a) = ||\vec a|| = \sqrt {\sum_{i=1}^{n} a_i ^ 2}$$ Calculating the norm of a vector or even of a matrix is a common operation when dealing with data. Numpy has a set of functions for linear algebra in the subpackage **linalg**, including the **norm** function. Let us see how to get the norm of a given array or matrix:
###Code
nparray1 = np.array([1, 2, 3, 4]) # Define an array
norm1 = np.linalg.norm(nparray1)
nparray2 = np.array([[1, 2], [3, 4]]) # Define a 2 x 2 matrix. Note the 2 level of square brackets
norm2 = np.linalg.norm(nparray2)
print(norm1)
print(norm2)
###Output
_____no_output_____
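###Markdown
As a quick check of the formula above (an extra cell, not part of the original lab), the norm of `nparray1` can also be computed directly as the square root of the sum of squared elements:
###Code
print(np.sqrt(np.sum(nparray1 ** 2)))  # same value as np.linalg.norm(nparray1)
###Output
_____no_output_____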
###Markdown
Note that without any other parameter, the norm function treats the matrix as being just an array of numbers. However, it is possible to get the norm by rows or by columns. The **axis** parameter controls the form of the operation:
* **axis=0** means get the norm of each column
* **axis=1** means get the norm of each row.
###Code
nparray2 = np.array([[1, 1], [2, 2], [3, 3]]) # Define a 3 x 2 matrix.
normByCols = np.linalg.norm(nparray2, axis=0) # Get the norm for each column. Returns 2 elements
normByRows = np.linalg.norm(nparray2, axis=1) # get the norm for each row. Returns 3 elements
print(normByCols)
print(normByRows)
###Output
_____no_output_____
###Markdown
However, there are more ways to get the norm of a matrix in Python. For that, let us see all the different ways of defining the dot product between 2 arrays. The dot product between arrays: All the flavors The dot product (or scalar product, or inner product) between two vectors $\vec a$ and $\vec b$ of the same size is defined as: $$\vec a \cdot \vec b = \sum_{i=1}^{n} a_i b_i$$ The dot product takes two vectors and returns a single number.
###Code
nparray1 = np.array([0, 1, 2, 3]) # Define an array
nparray2 = np.array([4, 5, 6, 7]) # Define an array
flavor1 = np.dot(nparray1, nparray2) # Recommended way
print(flavor1)
flavor2 = np.sum(nparray1 * nparray2) # Ok way
print(flavor2)
flavor3 = nparray1 @ nparray2 # Geeks way
print(flavor3)
# As you never should do: # Noobs way
flavor4 = 0
for a, b in zip(nparray1, nparray2):
flavor4 += a * b
print(flavor4)
###Output
_____no_output_____
###Markdown
**We strongly recommend using np.dot, since it is the only method that accepts arrays and lists without problems**
###Code
norm1 = np.dot(np.array([1, 2]), np.array([3, 4])) # Dot product on nparrays
norm2 = np.dot([1, 2], [3, 4]) # Dot product on python lists
print(norm1, '=', norm2 )
###Output
_____no_output_____
###Markdown
Finally, note that the norm is the square root of the dot product of the vector with itself. That gives many options to write that function: $$ norm(\vec a) = ||\vec a|| = \sqrt {\sum_{i=1}^{n} a_i ^ 2} = \sqrt {a \cdot a}$$ Sums by rows or columns Another general operation performed on matrices is the sum by rows or columns. Just as we did for the function norm, the **axis** parameter controls the form of the operation:
* **axis=0** means to sum the elements of each column together.
* **axis=1** means to sum the elements of each row together.
###Code
nparray2 = np.array([[1, -1], [2, -2], [3, -3]]) # Define a 3 x 2 matrix.
sumByCols = np.sum(nparray2, axis=0) # Get the sum for each column. Returns 2 elements
sumByRows = np.sum(nparray2, axis=1) # get the sum for each row. Returns 3 elements
print('Sum by columns: ')
print(sumByCols)
print('Sum by rows:')
print(sumByRows)
###Output
_____no_output_____
###Markdown
Get the mean by rows or columns As with the sums, one can get the **mean** by rows or columns using the **axis** parameter. Just remember that the mean is the sum of the elements divided by the length of the vector: $$ mean(\vec a) = \frac {\sum_{i=1}^{n} a_i }{n}$$
###Code
nparray2 = np.array([[1, -1], [2, -2], [3, -3]]) # Define a 3 x 2 matrix. Chosen to be a matrix with 0 mean
mean = np.mean(nparray2) # Get the mean for the whole matrix
meanByCols = np.mean(nparray2, axis=0) # Get the mean for each column. Returns 2 elements
meanByRows = np.mean(nparray2, axis=1) # get the mean for each row. Returns 3 elements
print('Matrix mean: ')
print(mean)
print('Mean by columns: ')
print(meanByCols)
print('Mean by rows:')
print(meanByRows)
###Output
_____no_output_____
###Markdown
Center the columns of a matrix Centering the attributes of a data matrix is another essential preprocessing step. Centering a matrix means subtracting the column mean from each element inside the column. The sum by columns of a centered matrix is always 0. With NumPy, this process is as simple as this:
###Code
nparray2 = np.array([[1, 1], [2, 2], [3, 3]]) # Define a 3 x 2 matrix.
nparrayCentered = nparray2 - np.mean(nparray2, axis=0) # Remove the mean for each column
print('Original matrix')
print(nparray2)
print('Centered by columns matrix')
print(nparrayCentered)
print('New mean by column')
print(nparrayCentered.mean(axis=0))
###Output
_____no_output_____
###Markdown
**Warning:** This process does not apply for row centering. In such cases, consider transposing the matrix, centering by columns, and then transpose back the result. See the example below:
###Code
nparray2 = np.array([[1, 3], [2, 4], [3, 5]]) # Define a 3 x 2 matrix.
nparrayCentered = nparray2.T - np.mean(nparray2, axis=1) # Remove the mean for each row
nparrayCentered = nparrayCentered.T # Transpose back the result
print('Original matrix')
print(nparray2)
print('Centered by columns matrix')
print(nparrayCentered)
print('New mean by rows')
print(nparrayCentered.mean(axis=1))
###Output
_____no_output_____
###Markdown
Note that some operations can be performed using static functions like `np.sum()` or `np.mean()`, or by using the inner functions of the array
###Code
nparray2 = np.array([[1, 3], [2, 4], [3, 5]]) # Define a 3 x 2 matrix.
mean1 = np.mean(nparray2) # Static way
mean2 = nparray2.mean() # Dinamic way
print(mean1, ' == ', mean2)
###Output
_____no_output_____ |
Labs/2-STL_Lab-Questions.ipynb | ###Markdown
Statistical Learning Theory and Excess Risk Decomposition Sreyas Mohan, CDS
###Code
import numpy as np
import matplotlib.pyplot as plt
from functools import partial
from sklearn import linear_model
from scipy.stats import norm
import matplotlib.mlab as mlab
%matplotlib inline
###Output
_____no_output_____
###Markdown
Generic Utility Functions
###Code
def plot_prediction_function(f_list = None, label_list = None, data = None, alpha = 0.9, include_data_x = False):
plt.figure(figsize=(20,10))
if data is not None:
x_min, x_max = np.min(data[:, 0]), np.max(data[:, 0])
else:
x_min, x_max = -5.0, 5.0;
if ( (f_list is not None) and (label_list is not None) ):
x_array = np.arange(x_min, x_max, 0.1);
if include_data_x:
x_array = np.concatenate([x_array, data[:, 0], data[:, 0]+0.001, data[:, 0]-0.001])
x_array = np.sort(x_array)
for f, label in zip(f_list, label_list):
f_y = f(x_array);
plt.plot(x_array, f_y, label=label);
if data is not None:
plt.scatter(data[:, 0], data[:, 1], alpha = alpha )
if label_list is not None:
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Generative Model for Data We will work on a regression problem in this lab. The input, action, and output spaces will be $\mathcal{R}$. The output $y$ is related to the input $x$ as $ y = g(x) = ax^2 + bx + c $, where $a, b$ and $c$ are sampled from random variables that follow Gaussian distributions with parameters $\mu_a, \sigma_a$, $\mu_b, \sigma_b$ and $\mu_c, \sigma_c$ respectively. (Basically, given $x$, $y$ depends on $3$ random numbers.) Assume that $X$ is sampled from $\mathcal{N}(\mu_x, \sigma_x)$. For the purposes of this lab, let's set $\mu_a = 1, \mu_b = 2, \mu_c = 3, \mu_x = 0$ and $\sigma_a = \sigma_b = \sigma_c = \sigma_x = 1$. Utility functions for sampling
###Code
mu_a = 1;
sigma_a = 1;
mu_b = 2;
sigma_b = 1;
mu_c = 3;
sigma_c = 1;
mu_x = 0;
sigma_x = 1;
assert( (mu_x == 0) and (sigma_x == 1))
## assumes g is a polynomial. Takes in a coeff list, where coeff_list[i] is the coefficient of x^i and x to evaluate it at
def template_g(coeff_list, x):
ans = 0;
for i, coeff in enumerate(coeff_list):
ans += coeff * (x**i)
return ans
### generates one sample function g
def sample_g():
a = np.random.randn() * sigma_a + mu_a;
b = np.random.randn() * sigma_b + mu_b;
c = np.random.randn() * sigma_c + mu_c;
return partial(template_g, [c, b, a] )
## give one (x,y) sample
def get_one_x_y_sample():
x = np.random.randn()*sigma_x + mu_x;
g = sample_g()
return x, g(x)
## gives a matrix with first column x and second column y
def generate_n_samples(n = 1000):
matrix = np.zeros([n, 2]);
for i in range(n):
matrix[i] = get_one_x_y_sample();
return matrix
###Output
_____no_output_____
###Markdown
Visualizing samples from $\mathcal{P}_{X \times Y}$
###Code
data_cloud = generate_n_samples(1000)
plot_prediction_function(data = data_cloud)
###Output
_____no_output_____
###Markdown
Visualizing samples of $Y$ for fixed values of $X$
###Code
def y_for_fixed_x( x_list = np.arange(-4, 4.5, 0.5), n_sample_per_x = 10):
data = np.zeros([len(x_list)*n_sample_per_x, 2]);
for i, x in enumerate(x_list):
for j in range(n_sample_per_x):
g = sample_g();
data[n_sample_per_x*i + j, 0] = x;
data[n_sample_per_x*i + j, 1] = g(x);
return data
plot_prediction_function(data = y_for_fixed_x(n_sample_per_x=50) )
###Output
_____no_output_____
###Markdown
Exercise: Recall that the risk of a function $f$ with respect to loss $l$ is $R(f) = E[l(f(x), y)]$ and the Bayes optimal function is $f^* = \underset{f}{\operatorname{argmin}}R(f)$. (Hint: for $X \sim \mathcal{N}(0, 1)$, $E[X^4] = 3$.) 1. If we minimize the L2 loss, what is the Bayes optimal function $f^*(x)$ for the model described above? Does your answer depend on the distribution assumed on $X$? 2. What is the risk associated with $f^*(x)$, $R(f^*)$? 3. Once you have mathematical expressions for 1 and 2, fill in the Python functions below for the Bayes prediction function and Bayes risk.
###Code
## variable f_star assigned to the bayes optimal function
## variable bayes_risk assigned the numerical value of the bayes risk
## complete f_star and bayes_risk
f_star = partial(template_g, [0] )
bayes_risk = 0;
print(bayes_risk)
plot_prediction_function([f_star], ['$f^*$'], data=data_cloud)
plot_prediction_function([f_star], ['$f^*$'], data=y_for_fixed_x(n_sample_per_x=100) )
###Output
_____no_output_____
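###Markdown
(One possible worked solution, for checking your answer after attempting the exercise; it relies on the independence of $a, b, c$ and $X$ built into `sample_g` above.) Under the square loss, $f^*(x) = E[Y \mid X = x] = \mu_a x^2 + \mu_b x + \mu_c = x^2 + 2x + 3$, which does not depend on the distribution of $X$. The Bayes risk is $R(f^*) = E[\mathrm{Var}(Y \mid X)] = \sigma_a^2 E[X^4] + \sigma_b^2 E[X^2] + \sigma_c^2 = 3 + 1 + 1 = 5$, which the Monte Carlo estimate below should approximately reproduce.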
###Markdown
Estimating the Risk $R(f)$ using monte-carlo
###Code
def estimate_risk(f, n_try = int(1e5) ):
sum = 0;
for i in range(n_try):
x, y = get_one_x_y_sample()
sum += (f(x) - y)**2
return sum/n_try
estimate_risk(f_star)
###Output
_____no_output_____
###Markdown
Empirical Risk We usually do not know $\mathcal{P}_{X, Y}$ and we work with finite samples drawn from the distribution. Let $\mathcal{D}_n = ( (x_1, y_1), (x_2, y_2), \dots , (x_n, y_n) )$ be $n$ iid data points; the empirical risk of $f$ with respect to loss $l$ on dataset $\mathcal{D}_n$ is defined as $$ \hat{R}_n(f) = \frac{1}{n} \sum_{i=1}^{n} l( f(x_i), y_i) $$ Have we used the expression for $ \hat{R}_n(f) $ for anything till now? Exercise: 1. Is $\hat{R}_n(f)$ or $R(f)$ a random variable? If so, what is the mean of the random variable and what is its distribution? 2. Can $R(f) \geq \hat{R}_n(f)$?
###Code
def empirical_risk(f, sample_matrix):
fy_array = f(sample_matrix[:, 0]);
risk = np.mean((fy_array - sample_matrix[:, 1]) ** 2)
return risk
###Output
_____no_output_____
###Markdown
Checking the distribution of $\hat{R}_n(f)$
###Code
n = 1000;
emp_rsk = np.zeros(n)
for i in range(n):
emp_rsk[i] = empirical_risk(f_star, generate_n_samples(5000))
print( np.mean(emp_rsk) )
datos = emp_rsk
plt.figure(figsize=(20,10))
# best fit of data
(mu, sigma) = norm.fit(datos)
# the histogram of the data
n, bins, patches = plt.hist(datos, 75, density=True, facecolor='green', alpha=0.75)
# add a 'best fit' line
y = mlab.normpdf( bins, mu, sigma)
l = plt.plot(bins, y, 'r--', linewidth=2)
#plot
plt.xlabel('Smarts')
plt.ylabel('Probability')
plt.title(r'$\mathrm{Histogram\ of\ \hat{R}(f^*):}\ \mu=%.3f,\ \sigma=%.3f$' %(mu, sigma))
plt.grid(True)
plt.show()
###Output
/Users/sreyas/miniconda3/envs/denoising/lib/python3.7/site-packages/ipykernel_launcher.py:10: MatplotlibDeprecationWarning: scipy.stats.norm.pdf
# Remove the CWD from sys.path while we load stuff.
###Markdown
Empirical Risk Minimization Let's fix a training data set with $n = 100$ points
###Code
data = generate_n_samples(n = 100);
###Output
_____no_output_____
###Markdown
Visualizing Data
###Code
plot_prediction_function([f_star], ['$f^*$'], data)
###Output
_____no_output_____
###Markdown
Exercise One way to achieve $\hat{R}_n(f) = 0 $ is to memorize the data. For the sake of simplicity, let's assume that the $\mathcal{D}_n$ we are working with has no duplicate values for $x$. The function $f(x)$ returns the corresponding $y$ from $\mathcal{D}_n$ if $x$ appears in $\mathcal{D}_n$; otherwise $f(x)$ returns $0$. What is the risk, $R(f)$, for this function?
###Code
risk_memorized_function = 0
print(risk_memorized_function)
###Output
0
###Markdown
Estimating the risk of memorized function numerically
###Code
def f_memorized(x_array):
scalar = False
if not isinstance(x_array,(list,np.ndarray)):
scalar = True
x_array = np.array([x_array]);
result = np.zeros_like(x_array).astype(float)
for i, x in enumerate(x_array):
found = False
for x_i, y_i in data:
if x==x_i:
result[i] = y_i;
found = True
if not found:
result[i] = 0.0
if scalar:
return result[0]
return result
linear_f_hat_risk = estimate_risk( f_memorized, n_try=int(1e3) )
print('Risk: ', linear_f_hat_risk)
linear_f_hat_empirical_risk = empirical_risk(f_memorized, data)
print('Empirical Risk: ', linear_f_hat_empirical_risk)
plot_prediction_function([f_star, f_memorized], ['$f^*$', 'memorized'], data, include_data_x=True)
###Output
_____no_output_____
###Markdown
Constrained Risk Minimization As we saw in the previous section, (unconstrained) empirical risk minimization is (almost) useless in practice. We achieved a perfect $0$ for empirical risk but we have an increase in the actual risk, which is the quantity we care about. Linear (Affine) Hypothesis Space Exercise For $f(x) = \alpha x + \beta$: 1. Calculate an expression for the risk of $f(x)$. 2. Find $\alpha^*, \beta^*$ which minimise $R(f)$.
###Code
### risk for linear function (alphax + beta)
## students fill in
def linear_function_risk(alpha, beta):
return 0;
### Estimating Risk
### Students to fill in linear_f_star
linear_f_star = partial(template_g, [0]);
mc_linear_f_star_risk = estimate_risk( linear_f_star)
print('MC Risk: ', mc_linear_f_star_risk)
linear_f_star_risk = linear_function_risk(mu_b, mu_a + mu_c)
print('Risk :' , linear_f_star_risk)
###Output
MC Risk: 27.00446435785947
Risk : 0
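###Markdown
(A hedged aside for checking your work: with the placeholder `linear_f_star` above, i.e. the zero function, the Monte Carlo risk of about 27 is simply $E[Y^2]$ for this generative model. Once the exercise is filled in, the minimizer is $\alpha^* = \mathrm{Cov}(X, Y)/\mathrm{Var}(X) = \mu_b = 2$ and $\beta^* = E[Y] = \mu_a E[X^2] + \mu_c = 4$ -- exactly the arguments passed to `linear_function_risk(mu_b, mu_a + mu_c)` -- and its risk works out to $7$, close to the Monte Carlo estimates printed for the fitted linear model in the next cells.)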
###Markdown
Visualizing Prediction Function $\ \ f^*_\mathcal{H}$
###Code
plot_prediction_function([linear_f_star, f_star], ['$f^*_\mathcal{H}$', '$f^*$'], data, alpha = 1)
###Output
_____no_output_____
###Markdown
Constrained Empirical Risk Minimization Find $\hat{\alpha}, \hat{\beta}$ which minimise $\hat{R}_n(f)$ and estimate the risk using the Monte Carlo method.
###Code
## Fitting Linear Regression on Data
reg = linear_model.LinearRegression(fit_intercept=True).fit(data[:, 0:1], data[:, 1])
reg.intercept_, reg.coef_
linear_f_hat = partial(template_g, [reg.intercept_, reg.coef_[0]])
linear_f_hat_risk = linear_function_risk(reg.coef_[0], reg.intercept_)
print('Risk :' , linear_f_hat_risk)
mc_linear_f_hat_risk = estimate_risk( linear_f_hat )
print('MC Risk: ', mc_linear_f_hat_risk)
linear_f_hat_empirical_risk = empirical_risk(linear_f_hat, data)
print('Empirical Risk: ', linear_f_hat_empirical_risk)
###Output
Risk : 0
MC Risk: 6.968077202103871
Empirical Risk: 4.592553812416919
###Markdown
Visualizing Prediction Function $\ \ \hat{f}_\mathcal{H}$
###Code
plot_prediction_function([linear_f_star, linear_f_hat, f_star], ['$f^*_\mathcal{H}$', '$\hat{f}_\mathcal{H}$', '$f^*$'], data, alpha = 1)
###Output
_____no_output_____
###Markdown
Error Decomposition Recall that: Approximation Error for $\mathcal{H}$ = $R(f^*_{\mathcal{H}}) - R(f^*) $ Estimation Error of $\ \hat{f}_\mathcal{H}$ = $R(\hat{f}_{\mathcal{H}}) - R(f^*_{\mathcal{H}})$ Excess Risk of $f$ = $R(f) - R(f^*)$ Exercise: 1. From the values we calculated above, what is the approximation error for the linear hypothesis space? Is the approximation error a random variable? 2. What is the estimation error of $\ \hat{f}_\mathcal{H}$? Is the estimation error a random variable? 3. What is the excess risk of $\ \hat{f}_\mathcal{H}$? Is this a random variable? Answers: 1. Approximation Error is not random. 2. Estimation Error is random. 3. Excess risk of $\ \hat{f}_\mathcal{H}$ is random.
###Code
def estimation_error_and_excess_risk(data, risk_fh, risk_f_star):
reg = linear_model.LinearRegression(fit_intercept=True).fit(data[:, 0:1], data[:, 1]);
linear_f_hat_risk = linear_function_risk(reg.coef_[0], reg.intercept_)
return linear_f_hat_risk - risk_fh, linear_f_hat_risk - risk_f_star
n_try = 1000;
estimation_error_array = np.zeros(n_try);
excess_risk_array = np.zeros(n_try)
for i in range(n_try):
estimation_error_array[i], excess_risk_array[i] = estimation_error_and_excess_risk( generate_n_samples(n = 100), linear_f_star_risk, bayes_risk)
_ = plt.hist(estimation_error_array, bins = 100)
_ = plt.hist(excess_risk_array, bins = 100)
###Output
_____no_output_____
###Markdown
Optimization Error Since we were optimizing the L2 loss over a linear hypothesis space, we found the best possible $\hat{f}_\mathcal{H}$ (up to numerical error) using the closed-form expression for linear regression. What if we use SGD instead to find $f$ and stop the iterations prematurely?
###Code
reg_sgd = linear_model.SGDRegressor(max_iter=3,
fit_intercept = True,
penalty = 'none', #No Regularization
tol = None)
reg_sgd.fit(data[:, 0:1], data[:, 1])
reg_sgd.intercept_, reg_sgd.coef_
linear_f_tilde = partial(template_g, [reg_sgd.intercept_[0], reg_sgd.coef_[0]])
linear_f_tilde_risk = linear_function_risk(reg_sgd.coef_[0], reg_sgd.intercept_[0])
print('Risk :' , linear_f_tilde_risk)
mc_linear_f_tilde_risk = estimate_risk( linear_f_tilde )
print('MC Risk: ', linear_f_tilde_risk)
linear_f_tilde_empirical_risk = empirical_risk(linear_f_tilde, data)
print('Empirical Risk: ', linear_f_tilde_empirical_risk)
###Output
Risk : 0
MC Risk: 0
Empirical Risk: 7.383567595936288
###Markdown
Visualizing $\ \ \ \tilde{f_\mathcal{H} }$
###Code
plot_prediction_function([linear_f_star, linear_f_hat, linear_f_tilde, f_star],
['$f^*_\mathcal{H}$', '$\hat{f}_\mathcal{H}$', '$\tilde{f_\mathcal{H}}$', '$f^*$'],
data, alpha = 1)
###Output
_____no_output_____ |
jupyter-notebooks/analytics/quickstart/03_visualizing_raster_results.ipynb | ###Markdown
Planet Analytics API Tutorial Visualizing Raster Results This notebook shows how to download and visualize [raster results](https://developers.planet.com/docs/analytics/raster-results) from a Planet Analytics [Subscription](https://developers.planet.com/docs/analytics/subscriptions). Setup To use this notebook, you need an API key for a Planet account with access to the Analytics API and a subscription to a raster feed (e.g. buildings or roads). API Key and Test Connection Set `API_KEY` below if it is not already in your notebook as an environment variable. See the [Analytics API Docs](https://developers.planet.com/docs/analytics/) for more details on authentication.
###Code
import os
import requests
# if your Planet API Key is not set as an environment variable, you can paste it below
API_KEY = os.environ.get('PL_API_KEY', 'PASTE_YOUR_KEY_HERE')
# alternatively, you can just set your API key directly as a string variable:
# API_KEY = "YOUR_PLANET_API_KEY_HERE"
# construct auth tuple for use in the requests library
BASIC_AUTH = (API_KEY, '')
BASE_URL = "https://api.planet.com/analytics/"
subscriptions_list_url = BASE_URL + 'subscriptions' + '?limit=1000'
resp = requests.get(subscriptions_list_url, auth=BASIC_AUTH)
if resp.status_code == 200:
print('Yay, you can access the Analytics API')
subscriptions = resp.json()['data']
print('Available subscriptions:', len(subscriptions))
else:
print('Something is wrong:', resp.content)
###Output
_____no_output_____
###Markdown
Specify Analytics Subscription of Interest Below we will list your available subscription ids and some metadata in a dataframe and then select a subscription of interest.
###Code
import pandas as pd
pd.options.display.max_rows = 1000
df = pd.DataFrame(subscriptions)
df['start'] = pd.to_datetime(df['startTime']).dt.date
df['end'] = pd.to_datetime(df['endTime']).dt.date
df[['id', 'title', 'description', 'start', 'end']]
###Output
_____no_output_____
###Markdown
Pick a **road or building** subscription from which to pull results, and replace the ID below.
###Code
# This example ID is for a subscription of monthly road rasters
SUBSCRIPTION_ID = 'cda3398b-1283-4ad9-87a6-e25796b5ca80'
###Output
_____no_output_____
###Markdown
Getting subscription results Now we will fetch some example results from the API and visualize the road raster
###Code
import json
# Construct the url for the subscription's results collection
subscription_results_url = BASE_URL + 'collections/' + SUBSCRIPTION_ID + '/items' + '?limit=5'
print("Request URL: {}".format(subscription_results_url))
# Get subscription results collection
resp = requests.get(subscription_results_url, auth=BASIC_AUTH)
if resp.status_code == 200:
print('Yay, you can access analytic feed results!')
subscription_results = resp.json()
print('Fetched {} results.'.format(len(subscription_results['features'])))
else:
print('Something is wrong:', resp.content)
###Output
_____no_output_____
###Markdown
`subscription_results` is now a geojson feature collection. If we look at the keys of the most recent feature, we will see `links`, which will tell us how to get both geotiffs and webtiles associated with this result.
###Code
latest_result = subscription_results['features'][0]
print(latest_result.keys())
###Output
_____no_output_____
###Markdown
Let's look at one of the links.
###Code
from pprint import pprint
pprint(latest_result['links'][0])
###Output
_____no_output_____
###Markdown
Each link has `rel` and `href` keys. Let's print out the `rel`s for all of the links in our `latest_result`
###Code
for link in latest_result['links']:
print(link['rel'])
###Output
_____no_output_____
###Markdown
What are all of these links?
**self:** this is a link to the result geojson we're currently looking at, i.e. `latest_result`
**source-quad:** the input mosaic quad that was used to produce the road raster
**target-quad:** the output mosaic quad (a road raster)
**source-tiles:** web map tiles for the input imagery
**target-tiles:** web map tiles for the analytics output
Downloading Source and Target Imagery In this section, we will use the feature `links` to download source and target GeoTIFF files. To start, we'll make a helper function to get a url of interest from one of our result features
###Code
def get_url(feature, rel):
"""Get the url for a link with the specified rel from a geojson feature
Args:
feature - geojson feature from the analytics api
rel - link relationship of interest
Returns:
url - a url for webtiles or geotiff download
"""
for link in feature['links']:
if link['rel'] == rel:
return link['href']
###Output
_____no_output_____
###Markdown
Let's take a look at the download url for the target, specified by the `rel` of `target-quad`
###Code
target_quad_url = get_url(latest_result, rel='target-quad')
print(target_quad_url)
###Output
_____no_output_____
###Markdown
Now, we'll make some functions to download files to a local directory.
###Code
import os
import pathlib
import shutil
# make a local directory in which to save the target quad
data_dir = 'data/'
os.makedirs(data_dir, exist_ok=True)
def download_file(url, dest_path):
"""Download a file hosted at url to the specified local dest_path.
Args:
url: url for the file of interest
dest_path: filepath (can be relative to this notebook)
"""
# request the image
resp = requests.get(url, stream=True, auth=BASIC_AUTH)
# make sure the response is ok
if resp.status_code == 200:
# write the image contents to a local file
with open(dest_path, 'wb') as f:
resp.raw.decode_content = True
shutil.copyfileobj(resp.raw, f)
        print('Saved file:', dest_path)
else:
print('Something is wrong:', resp.content)
def make_local_path(feature, rel):
"""Programatically generate a local path to store images associated with a feature.
Args:
feature - geojson feature with an id
rel - link relationship of interest
Returns:
path - str representing the local path for the (feature, rel) pair
"""
data_dir = 'data/' + feature['id']
os.makedirs(data_dir, exist_ok=True)
return data_dir + '/' + rel + '.tif'
def download(feature, rel):
"""Download store image associated with a (feature, rel) pair.
Args:
feature - geojson feature with an id
rel - link relationship of interest
Returns:
path - str representing the local path for the (feature, rel) pair
"""
path = make_local_path(feature, rel)
if pathlib.Path(path).exists():
return path
url = get_url(feature, rel)
download_file(url, path)
return path
###Output
_____no_output_____
###Markdown
You can open the downloaded [GeoTIFF](https://medium.com/planet-stories/a-handy-introduction-to-cloud-optimized-geotiffs-1f2c9e716ec3) file with tools such as [QGIS](https://www.qgis.org/en/site/). Here's a link to your file:
###Code
rel = 'target-quad'
path = download(latest_result, 'target-quad')
print('downloaded {} to: {}'.format(rel, path))
from IPython.display import FileLink
FileLink(path)
###Output
_____no_output_____
###Markdown
Rendering source imagery and analytic rastersIn this final part, we will plot source and target images for one road results feature, inspect the distribution of pixel values, and display the images on a map. Plotting source and target quadsWe can use [rasterio](https://rasterio.readthedocs.io/en/stable/) to open and plot these GeoTIFF files.
###Code
# make plots show up in this notebook
%matplotlib inline
import rasterio
from rasterio.plot import show
source_path = download(latest_result, 'source-quad')
target_path = download(latest_result, 'target-quad')
source_im = rasterio.open(source_path)
target_im = rasterio.open(target_path)
print('source:')
show(source_im)
print('target:')
show(target_im, cmap='hot_r')
###Output
_____no_output_____
###Markdown
The numbers on the axes above are [Web Mercator](https://en.wikipedia.org/wiki/Web_Mercator_projection) coordinates. We can confirm the coordinate reference system (CRS) with rasterio:
###Code
print(source_im.crs)
print(target_im.crs)
###Output
_____no_output_____
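###Markdown
As a quick sanity check (an added sketch, not part of the original workflow), the raster bounds can be converted from Web Mercator to longitude/latitude with rasterio's warp utilities:
###Code
from rasterio.warp import transform_bounds
# reproject the target quad's bounding box from its native CRS (Web Mercator) to EPSG:4326 (lon/lat)
lonlat_bounds = transform_bounds(target_im.crs, 'EPSG:4326', *target_im.bounds)
print(lonlat_bounds)
###Output
_____no_output_____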
###Markdown
Making a bigger plotThe rendered images above are kind of small. We can use matplotlib to make a bigger plot. Here is an example of displaying the target quad on a bigger plot:
###Code
from matplotlib import pyplot as plt
def show_image(feature, rel, cmap=None, size=(10,10)):
source_path = download(feature, rel)
with rasterio.open(source_path) as ds:
plt.figure(figsize=size)
plt.imshow(ds.read(1), cmap=cmap)
plt.show()
show_image(latest_result, rel='target-quad', cmap='hot_r', size=(12,12))
###Output
_____no_output_____
###Markdown
Unlike the rasterio plot, the pyplot plot did not keep the geo coordinates. The axes now show relative pixel coordinates from 0 to 4096. Inspecting pixel value distributionsVisually it looks like the target quad has just two colors. This is because the road detection raster sets every value in the first band to either 0 or 255. Let's look at the source and target image histograms to see this:
###Code
from rasterio.plot import show_hist
show_hist(source_im)
###Output
_____no_output_____
###Markdown
The source image histogram shows us the pixel value distributions for the red, green, and blue bands. All 3 bands have similarly shaped distributions across the possible values of 0 to 255. (Ignore the 4th band for now.) Now let's take a look at the target image:
###Code
show_hist(target_im)
###Output
_____no_output_____
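###Markdown
Since the road raster is essentially binary (0 or 255 in band 1), a simple summary statistic (an added sketch, assuming band 1 holds the detections) is the fraction of pixels flagged as road:
###Code
# read band 1 of the target quad and compute the share of pixels equal to 255 (road)
road_band = target_im.read(1)
road_fraction = (road_band == 255).mean()
print('Fraction of pixels classified as road: {:.4f}'.format(road_fraction))
###Output
_____no_output_____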
###Markdown
The road detection histogram looks very different! All of the data is in the first band. (Ignore the 2nd band here.) Most of the values in the first band are 0 (not road), but some of them are 255 (road). Making a mapSince the source and target images are geo-referenced, we can render them on a map! Although we could render the downloaded GeoTIFFs on a map, there is also a tileserver available for both the source and target images. This last step will show how to connect an ipyleaflet map to the tileservers. Where to look?Since the `latest_result` we looked at before was a geojson feature, we can inspect its geometry to determine where to center our new map.
###Code
lon, lat = latest_result['geometry']['coordinates'][0][0]
print(lat, lon)
###Output
_____no_output_____
###Markdown
Starting with an empty basemapLet's make a new map centered on the coordinates we found above.
###Code
from ipyleaflet import Map
m = Map(center=(lat, lon), zoom=9)
m
###Output
_____no_output_____
###Markdown
Outlining the feature boundaryWe can add a polygon on top of the basemap to show where the `latest_result` imagery was from.
###Code
from ipyleaflet import GeoJSON
geojson = GeoJSON(data=latest_result, style = {'color': 'blue', 'opacity':1, 'weight':1.9, 'dashArray':'9', 'fillOpacity':0.1})
m.add_layer(geojson)
###Output
_____no_output_____
###Markdown
Adding a tileserverLet's show the target quad on top of the basemap using a `TileLayer`. First we need to get the tileserver url. This is in the `latest_result`'s `links` with a `rel` of `target-tiles`. We can get the url with the `get_url` helper from before.
###Code
target_tile_url = get_url(latest_result, 'target-tiles')
print(target_tile_url)
###Output
_____no_output_____
###Markdown
Now we add a tile layer
###Code
from ipyleaflet import TileLayer
target_tile_layer = TileLayer(url=target_tile_url)
m.add_layer(target_tile_layer)
m
###Output
_____no_output_____
###Markdown
Notice that the results tile layer extends past the blue box for our quad. The tileserver will display all of the results from the same mosaic. This means that results from neighboring quads are displayed above. This is a useful way of exploring larger areas at once. There will be unique tileserver urls for different source mosaics (which correspond to points in time, in this case monthly). Mapping analytic results on top of source imageryWe can make a couple of adjustments to get our map to highlight roads on top of source imagery. To do this, we will- clear the map layers- add a source imagery `TileLayer`- add a non-opaque target imagery `TileLayer`
###Code
m.clear_layers()
source_tile_url = get_url(latest_result, 'source-tiles')
source_tile_layer = TileLayer(url=source_tile_url)
target_tile_layer = TileLayer(url=target_tile_url, opacity=0.4)
m.add_layer(source_tile_layer)
m.add_layer(target_tile_layer)
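# Optional (added sketch): attach a layer switcher so the source/target overlays can be toggled interactively
from ipyleaflet import LayersControl
m.add_control(LayersControl())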
m
###Output
_____no_output_____ |
4_synthetic_data_attention/Error_modes/distribution_5/m_9_interpretable/distribution_5_m_9.ipynb | ###Markdown
Generate dataset
###Code
# Imports needed by this notebook (added here; no import cell was included in the original)
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.utils.data import Dataset, DataLoader
y = np.random.randint(0,3,1200)
idx= []
for i in range(3):
print(i,sum(y==i))
idx.append(y==i)
x = np.zeros((1200,2))
x[idx[0],:] = np.random.uniform(low=[5,2],high=[6,3],size=(sum(idx[0]),2))
x[idx[1],:] = np.random.uniform(low=[5,0],high=[6,1],size=(sum(idx[1]),2))
x[idx[2],:] = np.random.uniform(low=[8,-3],high=[9,6],size=(sum(idx[2]),2))
# x[idx[3],:] = np.random.uniform(low=[1,-1],high=[2,5],size=(sum(idx[3]),2))
# x[idx[0],:] = np.random.multivariate_normal(mean = [5,5],cov=[[0.1,0],[0,0.1]],size=sum(idx[0]))
# x[idx[1],:] = np.random.multivariate_normal(mean = [-6,7],cov=[[0.1,0],[0,0.1]],size=sum(idx[1]))
# x[idx[2],:] = np.random.multivariate_normal(mean = [-5,-4],cov=[[0.1,0],[0,0.1]],size=sum(idx[2]))
# x[idx[0],:] = np.random.multivariate_normal(mean = [5.5,4],cov=[[0.1,0],[0,0.1]],size=sum(idx[0]))
# x[idx[1],:] = np.random.multivariate_normal(mean = [6,6.5],cov=[[0.1,0],[0,0.1]],size=sum(idx[1]))
# x[idx[2],:] = np.random.multivariate_normal(mean = [4,6],cov=[[0.1,0],[0,0.1]],size=sum(idx[2]))
# x[idx[3],:] = np.random.multivariate_normal(mean = [-1,0],cov=[[0.1,0],[0,0.1]],size=sum(idx[3]))
# x[idx[4],:] = np.random.multivariate_normal(mean = [0,2],cov=[[0.1,0],[0,0.1]],size=sum(idx[4]))
# x[idx[5],:] = np.random.multivariate_normal(mean = [1,0],cov=[[0.1,0],[0,0.1]],size=sum(idx[5]))
# x[idx[6],:] = np.random.multivariate_normal(mean = [0,-1],cov=[[0.1,0],[0,0.1]],size=sum(idx[6]))
# x[idx[7],:] = np.random.multivariate_normal(mean = [0,0],cov=[[0.1,0],[0,0.1]],size=sum(idx[7]))
# x[idx[8],:] = np.random.multivariate_normal(mean = [-0.5,-0.5],cov=[[0.1,0],[0,0.1]],size=sum(idx[8]))
# x[idx[9],:] = np.random.multivariate_normal(mean = [0.4,0.2],cov=[[0.1,0],[0,0.1]],size=sum(idx[9]))
for i in range(3):
plt.scatter(x[idx[i],0],x[idx[i],1],label="class_"+str(i))
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.savefig("type3_2_dist.png",bbox_inches="tight")
plt.savefig("type3_2_dist.pdf",bbox_inches="tight")
foreground_classes = {'class_0','class_1'}
background_classes = {'class_2'}
fg_class = np.random.randint(0,2)
fg_idx = np.random.randint(0,9)
a = []
for i in range(9):
if i == fg_idx:
b = np.random.choice(np.where(idx[fg_class]==True)[0],size=1)
a.append(x[b])
print("foreground "+str(fg_class)+" present at " + str(fg_idx))
else:
bg_class = np.random.randint(2,3)
b = np.random.choice(np.where(idx[bg_class]==True)[0],size=1)
a.append(x[b])
print("background "+str(bg_class)+" present at " + str(i))
a = np.concatenate(a,axis=0)
print(a.shape)
print(fg_class , fg_idx)
a.shape
np.reshape(a,(18,1))
desired_num = 3000
mosaic_list =[]
mosaic_label = []
fore_idx=[]
for j in range(desired_num):
fg_class = np.random.randint(0,2)
fg_idx = np.random.randint(0,9)
a = []
for i in range(9): #m=9
if i == fg_idx:
b = np.random.choice(np.where(idx[fg_class]==True)[0],size=1)
a.append(x[b])
# print("foreground "+str(fg_class)+" present at " + str(fg_idx))
else:
bg_class = np.random.randint(2,3)
b = np.random.choice(np.where(idx[bg_class]==True)[0],size=1)
a.append(x[b])
# print("background "+str(bg_class)+" present at " + str(i))
a = np.concatenate(a,axis=0)
mosaic_list.append(np.reshape(a,(18,1)))
mosaic_label.append(fg_class)
fore_idx.append(fg_idx)
mosaic_list = np.concatenate(mosaic_list,axis=1).T
# print(mosaic_list)
print(np.shape(mosaic_label))
print(np.shape(fore_idx))
class MosaicDataset(Dataset):
"""MosaicDataset dataset."""
def __init__(self, mosaic_list, mosaic_label, fore_idx):
"""
Args:
csv_file (string): Path to the csv file with annotations.
root_dir (string): Directory with all the images.
transform (callable, optional): Optional transform to be applied
on a sample.
"""
self.mosaic = mosaic_list
self.label = mosaic_label
self.fore_idx = fore_idx
def __len__(self):
return len(self.label)
def __getitem__(self, idx):
return self.mosaic[idx] , self.label[idx], self.fore_idx[idx]
batch = 250
msd = MosaicDataset(mosaic_list, mosaic_label , fore_idx)
train_loader = DataLoader( msd,batch_size= batch ,shuffle=True)
class Wherenet(nn.Module):
def __init__(self):
super(Wherenet,self).__init__()
self.linear1 = nn.Linear(2,1)
def forward(self,z):
x = torch.zeros([batch,9],dtype=torch.float64) #m=9
y = torch.zeros([batch,2], dtype=torch.float64)
#x,y = x.to("cuda"),y.to("cuda")
for i in range(9): #m=9
x[:,i] = self.helper(z[:,2*i:2*i+2])[:,0]
#print(k[:,0].shape,x[:,i].shape)
x = F.softmax(x,dim=1) # alphas
x1 = x[:,0]
for i in range(9): #m=9
x1 = x[:,i]
#print()
y = y+torch.mul(x1[:,None],z[:,2*i:2*i+2])
return y , x
def helper(self,x):
#x = F.relu(self.linear1(x))
#x = F.relu(self.linear2(x))
x = self.linear1(x)
return x
trainiter = iter(train_loader)
input1,labels1,index1 = trainiter.next()
where = Wherenet().double()
where = where
out_where,alphas = where(input1)
out_where.shape,alphas.shape
class Whatnet(nn.Module):
def __init__(self):
super(Whatnet,self).__init__()
self.linear1 = nn.Linear(2,3)
#self.linear2 = nn.Linear(4,3)
# self.linear3 = nn.Linear(8,3)
def forward(self,x):
#x = F.relu(self.linear1(x))
#x = F.relu(self.linear2(x))
x = self.linear1(x)
return x
what = Whatnet().double()
# what(out_where)
test_data_required = 1000
mosaic_list_test =[]
mosaic_label_test = []
fore_idx_test=[]
for j in range(test_data_required):
fg_class = np.random.randint(0,2)
fg_idx = np.random.randint(0,9) #m=9
a = []
for i in range(9): #m=9
if i == fg_idx:
b = np.random.choice(np.where(idx[fg_class]==True)[0],size=1)
a.append(x[b])
# print("foreground "+str(fg_class)+" present at " + str(fg_idx))
else:
bg_class = np.random.randint(2,3)
b = np.random.choice(np.where(idx[bg_class]==True)[0],size=1)
a.append(x[b])
# print("background "+str(bg_class)+" present at " + str(i))
a = np.concatenate(a,axis=0)
mosaic_list_test.append(np.reshape(a,(18,1)))
mosaic_label_test.append(fg_class)
fore_idx_test.append(fg_idx)
mosaic_list_test = np.concatenate(mosaic_list_test,axis=1).T
print(mosaic_list_test.shape)
test_data = MosaicDataset(mosaic_list_test,mosaic_label_test,fore_idx_test)
test_loader = DataLoader( test_data,batch_size= batch ,shuffle=False)
focus_true_pred_true =0
focus_false_pred_true =0
focus_true_pred_false =0
focus_false_pred_false =0
argmax_more_than_half = 0
argmax_less_than_half =0
col1=[]
col2=[]
col3=[]
col4=[]
col5=[]
col6=[]
col7=[]
col8=[]
col9=[]
col10=[]
col11=[]
col12=[]
col13=[]
criterion = nn.CrossEntropyLoss()
optimizer_where = optim.SGD(where.parameters(), lr=0.01, momentum=0.9)
optimizer_what = optim.SGD(what.parameters(), lr=0.01, momentum=0.9)
nos_epochs = 200
train_loss=[]
test_loss =[]
train_acc = []
test_acc = []
for epoch in range(nos_epochs): # loop over the dataset multiple times
focus_true_pred_true =0
focus_false_pred_true =0
focus_true_pred_false =0
focus_false_pred_false =0
argmax_more_than_half = 0
argmax_less_than_half =0
running_loss = 0.0
cnt=0
iteration = desired_num // batch
#training data set
for i, data in enumerate(train_loader):
inputs , labels , fore_idx = data
#inputs,labels,fore_idx = inputs.to(device),labels.to(device),fore_idx.to(device)
# zero the parameter gradients
optimizer_what.zero_grad()
optimizer_where.zero_grad()
avg_inp,alphas = where(inputs)
outputs = what(avg_inp)
_, predicted = torch.max(outputs.data, 1)
loss = criterion(outputs, labels)
loss.backward()
optimizer_what.step()
optimizer_where.step()
running_loss += loss.item()
if cnt % 6 == 5: # print every 6 mini-batches
print('[%d, %5d] loss: %.3f' %(epoch + 1, cnt + 1, running_loss / 6))
running_loss = 0.0
cnt=cnt+1
if epoch % 5 == 4:
for j in range (batch):
focus = torch.argmax(alphas[j])
if(alphas[j][focus] >= 0.5):
argmax_more_than_half +=1
else:
argmax_less_than_half +=1
if(focus == fore_idx[j] and predicted[j] == labels[j]):
focus_true_pred_true += 1
elif(focus != fore_idx[j] and predicted[j] == labels[j]):
focus_false_pred_true +=1
elif(focus == fore_idx[j] and predicted[j] != labels[j]):
focus_true_pred_false +=1
elif(focus != fore_idx[j] and predicted[j] != labels[j]):
focus_false_pred_false +=1
if epoch % 5 == 4:
col1.append(epoch)
col2.append(argmax_more_than_half)
col3.append(argmax_less_than_half)
col4.append(focus_true_pred_true)
col5.append(focus_false_pred_true)
col6.append(focus_true_pred_false)
col7.append(focus_false_pred_false)
#************************************************************************
#testing data set
with torch.no_grad():
focus_true_pred_true =0
focus_false_pred_true =0
focus_true_pred_false =0
focus_false_pred_false =0
argmax_more_than_half = 0
argmax_less_than_half =0
for data in test_loader:
inputs, labels , fore_idx = data
#inputs,labels,fore_idx = inputs.to(device),labels.to(device),fore_idx.to(device)
# print(inputs.shape, labels.shape)
avg_inp,alphas = where(inputs)
outputs = what(avg_inp)
_, predicted = torch.max(outputs.data, 1)
for j in range (batch):
focus = torch.argmax(alphas[j])
if(alphas[j][focus] >= 0.5):
argmax_more_than_half +=1
else:
argmax_less_than_half +=1
if(focus == fore_idx[j] and predicted[j] == labels[j]):
focus_true_pred_true += 1
elif(focus != fore_idx[j] and predicted[j] == labels[j]):
focus_false_pred_true +=1
elif(focus == fore_idx[j] and predicted[j] != labels[j]):
focus_true_pred_false +=1
elif(focus != fore_idx[j] and predicted[j] != labels[j]):
focus_false_pred_false +=1
col8.append(argmax_more_than_half)
col9.append(argmax_less_than_half)
col10.append(focus_true_pred_true)
col11.append(focus_false_pred_true)
col12.append(focus_true_pred_false)
col13.append(focus_false_pred_false)
#torch.save(where.state_dict(),"where_model_epoch"+str(epoch)+".pt")
#torch.save(what.state_dict(),"what_model_epoch"+str(epoch)+".pt")
print('Finished Training')
#torch.save(where.state_dict(),"where_model_epoch"+str(nos_epochs)+".pt")
#torch.save(what.state_dict(),"what_model_epoch"+str(epoch)+".pt")
columns = ["epochs", "argmax > 0.5" ,"argmax < 0.5", "focus_true_pred_true", "focus_false_pred_true", "focus_true_pred_false", "focus_false_pred_false" ]
df_train = pd.DataFrame()
df_test = pd.DataFrame()
df_train[columns[0]] = col1
df_train[columns[1]] = col2
df_train[columns[2]] = col3
df_train[columns[3]] = col4
df_train[columns[4]] = col5
df_train[columns[5]] = col6
df_train[columns[6]] = col7
df_test[columns[0]] = col1
df_test[columns[1]] = col8
df_test[columns[2]] = col9
df_test[columns[3]] = col10
df_test[columns[4]] = col11
df_test[columns[5]] = col12
df_test[columns[6]] = col13
df_train
plt.plot(col1,col2, label='argmax > 0.5')
plt.plot(col1,col3, label='argmax < 0.5')
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.xlabel("epochs")
plt.ylabel("training data")
plt.title("On Training set")
plt.show()
plt.plot(col1,col4, label ="focus_true_pred_true ")
plt.plot(col1,col5, label ="focus_false_pred_true ")
plt.plot(col1,col6, label ="focus_true_pred_false ")
plt.plot(col1,col7, label ="focus_false_pred_false ")
plt.title("On Training set")
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.xlabel("epochs")
plt.ylabel("training data")
plt.savefig("linear_type3_21.png",bbox_inches="tight")
plt.savefig("linear_type3_21.pdf",bbox_inches="tight")
plt.show()
df_test
plt.plot(col1,col8, label='argmax > 0.5')
plt.plot(col1,col9, label='argmax < 0.5')
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.xlabel("epochs")
plt.ylabel("Testing data")
plt.title("On Testing set")
plt.show()
plt.plot(col1,col10, label ="focus_true_pred_true ")
plt.plot(col1,col11, label ="focus_false_pred_true ")
plt.plot(col1,col12, label ="focus_true_pred_false ")
plt.plot(col1,col13, label ="focus_false_pred_false ")
plt.title("On Testing set")
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.xlabel("epochs")
plt.ylabel("Testing data")
plt.show()
# where.state_dict()["linear1.weight"][:] = torch.Tensor(np.array([[ 0, -1]]))
# where.state_dict()["linear1.bias"][:] = torch.Tensor(np.array([0]))
for param in where.named_parameters():
print(param)
# what.state_dict()["linear1.weight"][:] = torch.Tensor(np.array([[ 4.0075, -2.7007],
# [-0.7772, 2.6867],
# [-1.6794, 0.9356]]))
# what.state_dict()["linear1.bias"][:] = torch.Tensor(np.array([-2.7580, -3.0382, 6.3862]))
for param in what.named_parameters():
print(param)
xx,yy= np.meshgrid(np.arange(1,10,0.01),np.arange(-5,6,0.01))
X = np.concatenate((xx.reshape(-1,1),yy.reshape(-1,1)),axis=1)
X = torch.Tensor(X).double()
Y = where.helper(X)
Y1 = what(X)
X.shape,Y.shape
X = X.detach().numpy()
Y = Y[:,0].detach().numpy()
fig = plt.figure(figsize=(6,6))
cs = plt.contourf(X[:,0].reshape(xx.shape),X[:,1].reshape(yy.shape),Y.reshape(xx.shape))
plt.xlabel("X1")
plt.ylabel("X2")
fig.colorbar(cs)
for i in range(3):
plt.scatter(x[idx[i],0],x[idx[i],1],label="class_"+str(i))
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.savefig("focus_contour.png")#,bbox_inches='tight')
Y1 = Y1.detach().numpy()
Y1 = torch.softmax(torch.Tensor(Y1),dim=1)
_,Z4= torch.max(Y1,1)
Z1 = Y1[:,0]
Z2 = Y1[:,1]
Z3 = Y1[:,2]
Z4
#fig = plt.figure(figsize=(6,6))
# plt.scatter(X[:,0],X[:,1],c=Z1)
# plt.scatter(X[:,0],X[:,1],c=Z2)
# plt.scatter(X[:,0],X[:,1],c=Z3)
#cs = plt.contourf(X[:,0].reshape(xx.shape),X[:,1].reshape(yy.shape),Z1.reshape(xx.shape))
# #plt.colorbar(cs)
# cs = plt.contourf(X[:,0].reshape(xx.shape),X[:,1].reshape(yy.shape),Z2.reshape(xx.shape))
# #plt.colorbar(cs)
# cs = plt.contourf(X[:,0].reshape(xx.shape),X[:,1].reshape(yy.shape),Z3.reshape(xx.shape))
#plt.colorbar(cs)
# plt.xlabel("X1")
# plt.ylabel("X2")
#ax.view_init(60,100)
#plt.savefig("non_interpretable_class_2d.pdf",bbox_inches='tight')
avrg = []
with torch.no_grad():
for i, data in enumerate(train_loader):
inputs , labels , fore_idx = data
avg_inp,alphas = where(inputs)
avrg.append(avg_inp)
avrg= np.concatenate(avrg,axis=0)
plt.scatter(X[:,0],X[:,1],c=Z4)
for i in range(3):
plt.scatter(x[idx[i],0],x[idx[i],1],label="class_"+str(i))
# plt.scatter(avrg[idx[i],0],avrg[idx[i],1], label='avg_'+str(i))
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.scatter(avrg[:,0],avrg[:,1])
plt.savefig("decision_boundary.png",bbox_inches="tight")
true = []
pred = []
acc= 0
for i, data in enumerate(train_loader):
inputs , labels , fore_idx = data
avg_inp,alphas = where(inputs)
outputs = what(avg_inp)
_, predicted = torch.max(outputs.data, 1)
true.append(labels)
pred.append(predicted)
acc+=sum(predicted == labels)
true = np.concatenate(true,axis=0)
pred = np.concatenate(pred,axis=0)
from sklearn.metrics import confusion_matrix
confusion_matrix(true,pred)
###Output
_____no_output_____ |
Pytorch/Self Organizing Maps/hybridnn.ipynb | ###Markdown
Hybrid Deep Learning Model
###Code
# Part 1 - Identify the Frauds with the Self-Organizing Map
# Importing the libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
# Importing the dataset
dataset = pd.read_csv('Credit_Card_Applications.csv')
X = dataset.iloc[:, :-1].values
y = dataset.iloc[:, -1].values
dataset.head()
# Feature Scaling
from sklearn.preprocessing import MinMaxScaler
sc = MinMaxScaler(feature_range = (0, 1))
X = sc.fit_transform(X)
# Training the SOM
from minisom import MiniSom
som = MiniSom(x = 10, y = 10, input_len = 15, sigma = 1.0, learning_rate = 0.5)
som.random_weights_init(X)
som.train_random(data = X, num_iteration = 100)
# Visualizing the results
from pylab import bone, pcolor, colorbar, plot, show
bone()
pcolor(som.distance_map().T)
colorbar()
markers = ['o', 's']
colors = ['r', 'g']
for i, x in enumerate(X):
w = som.winner(x)
plot(w[0] + 0.5,
w[1] + 0.5,
markers[y[i]],
markeredgecolor = colors[y[i]],
markerfacecolor = 'None',
markersize = 10,
markeredgewidth = 2)
show()
# Finding the frauds
mappings = som.win_map(X)
frauds = np.concatenate((mappings[(4,2)], mappings[(8,1)]), axis = 0)
frauds = sc.inverse_transform(frauds)
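# Added sketch: list the customer IDs flagged as potential frauds by the SOM
print('Potential fraud customer IDs:', frauds[:, 0].astype(int))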
# Part 2 - Going from Unsupervised to Supervised Deep Learning
# Creating the matrix of features
customers = dataset.iloc[:, 1:].values
# Creating the dependent variable
is_fraud = np.zeros(len(dataset))
for i in range(len(dataset)):
if dataset.iloc[i,0] in frauds:
is_fraud[i] = 1
# Feature Scaling
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
customers = sc.fit_transform(customers)
#Now let's make the ANN!
# Importing the Keras libraries and packages
from keras.models import Sequential
from keras.layers import Dense
# Initialising the ANN
classifier = Sequential()
# Adding the input layer and the first hidden layer
classifier.add(Dense(units = 2, kernel_initializer = 'uniform', activation = 'relu', input_dim = 15))
# Adding the output layer
classifier.add(Dense(units = 1, kernel_initializer = 'uniform', activation = 'sigmoid'))
# Compiling the ANN
classifier.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])
# Fitting the ANN to the Training set
classifier.fit(customers, is_fraud, batch_size = 1, epochs = 5)
# Predicting the probabilities of frauds
y_pred = classifier.predict(customers)
y_pred = np.concatenate((dataset.iloc[:, 0:1].values, y_pred), axis = 1)
y_pred = y_pred[y_pred[:, 1].argsort()]
print(y_pred)
###Output
[[1.57997850e+07 3.65362612e-05]
[1.56214230e+07 4.81891439e-05]
[1.56548590e+07 2.44068229e-04]
...
[1.56773950e+07 1.68363154e-01]
[1.56927180e+07 1.80808455e-01]
[1.57257760e+07 2.20805392e-01]]
|
Moringa_Data_Science_Prep_Independent_Project_W4_2020_05_Brian_Chege_Python_Notebook.ipynb | ###Markdown
###Code
#Importing the necessary libraries for our analysis
import pandas as pd
import numpy as np
#The line of code below reads the csv file into a dataframe and displays the tail of the dataframe
df=pd.read_csv("Autolib_dataset.csv",delimiter=',')
df.tail()
#Displaying the information about the dataset
df.info()
#Standardizing the column names to lower capitalization and replacing the spaces
df.columns=df.columns.str.strip().str.lower().str.replace(' ','_').str.replace('(','').str.replace(')','')
df.head()
#Picking the columns of interest
df_2=df[["cars","bluecar_counter","utilib_counter","utilib_1.4_counter","city","postal_code","year","month","day","hour","minute"]]
df_2
#Creating a new dataframe with the name df
df=df_2.copy()
#Previewing our dataset
df.head()
#Displaying the number of null items in our newly formed dataset
df.isnull().sum()
#Finding out the possibility of duplicated values
df.duplicated()
#Exporting our cleaned dataset to a csv file
df.to_csv('cleaned_autolib_dataset.csv')
#Filtering the dataframe to the city of Paris to make the analysis easier
df3=df[df.city=='Paris']
df3.head()
#Creating a rate-of-change column from the difference in the blue car counter to tell whether a car was picked up
df3["rate_of_change_bluecar"]=df3['bluecar_counter'].diff()
#Finding the most negative values to identify the hours when blue cars were picked up the most
d_f_1=df3[df3.rate_of_change_bluecar<0]
d_f_1.groupby(['hour'])['rate_of_change_bluecar'].sum().sort_values(ascending=True)
#Creating a rate-of-change column on the full dataframe to track when blue cars were returned
df["rate_of_change_bluecar"]=df['bluecar_counter'].diff()
#Finding the most positive value to discern when the cars were being returned the most
d_f_2=df[df.rate_of_change_bluecar>0]
d_f_2.groupby(['hour'])['rate_of_change_bluecar'].sum().sort_values(ascending=False)
#Creating a new column to accommodate the difference of the bluecar counter
df["postal_code_new"]=df['bluecar_counter'].diff()
#Getting the number of cars picked up per postal code in reference to our newly created column
d_f_2=df[df.postal_code_new<0]
d_f_2.groupby(['postal_code'])['postal_code_new'].sum().sort_values(ascending=True)
###Output
_____no_output_____ |
datacamp_ml/ml_classification_regression/trees.ipynb | ###Markdown
stratify in train_test_split makes sure that the ratio of classes remains the same in each split. If, for example, there is a 50-50% chance to get 0s and 1s, stratify will ensure the data is split accordingly, preserving that ratio.
###Code
# Imports added so this cell runs standalone; X and y are assumed to be defined in an earlier (omitted) cell
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.3, shuffle=True, stratify=y)
dtc = DecisionTreeClassifier().fit(X_train, y_train)
test_pred = dtc.predict(X_test)
accuracy_score(y_test, test_pred)
###Output
_____no_output_____
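###Markdown
A quick way to verify that `stratify` preserved the class proportions (an added check, assuming `y` holds non-negative integer labels) is to compare label frequencies before and after the split:
###Code
import numpy as np
print('full set :', np.bincount(y) / len(y))
print('train set:', np.bincount(y_train) / len(y_train))
print('test set :', np.bincount(y_test) / len(y_test))
###Output
_____no_output_____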
###Markdown
MSE, RMSE, KFold, cross_val_score Ensemble learning - Voting classifier
###Code
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.preprocessing import StandardScaler
def best_params(clf, x, y, name):
scaler = StandardScaler()
if name != 'tree':
x = scaler.fit_transform(x)
if name == 'logreg':
params = {'C': [0.1, 1, 10], 'penalty': ['l1', 'l2', 'elasticnet'], 'solver': ['liblinear', 'lbfgs', 'saga']}
else:
params = {'n_neighbors': [2, 3, 4, 5, 6, 7], 'weights': ['uniform', 'distance']}
else:
params = {'max_depth': [2, 3, 4, 5, 6, 7, 8, 9, 10]}
return GridSearchCV(clf, param_grid=params, cv=5, refit=True, n_jobs=-1).fit(x, y)
logreg = LogisticRegression()
dtc = DecisionTreeClassifier()
knn = KNeighborsClassifier()
names = ['logreg', 'tree', 'knn']
algos = [logreg, dtc, knn]
for idx, clf in enumerate(algos):
results = best_params(clf, X_train, y_train, names[idx])
print(results.best_params_)
algos[idx] = results
scaler = StandardScaler().fit(X_train)
X_tr_st = scaler.transform(X_train)
X_ts_st = scaler.transform(X_test)
logreg = LogisticRegression(C=0.1, penalty='l1', solver='liblinear').fit(X_tr_st, y_train)
dtc = DecisionTreeClassifier(max_depth=2).fit(X_train, y_train)
knn = KNeighborsClassifier(n_neighbors=2, weights='distance').fit(X_tr_st, y_train)
predictions = [logreg.predict(X_ts_st), dtc.predict(X_test), knn.predict(X_ts_st)]
predictions
from sklearn.ensemble import VotingClassifier
print(sum(predictions))
classifiers = [
('Logistic Regression', LogisticRegression(max_iter=100000)),  # max_iter should be an integer
('K Nearest Neighbours', KNeighborsClassifier()),
('Classification Tree', DecisionTreeClassifier())
]
for clf_name, clf in classifiers:
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
print('{:s} : {:.3f}'.format(clf_name, accuracy_score(y_test, y_pred)))
vc = VotingClassifier(estimators=classifiers, verbose=1)
vc.fit(X_train, y_train)
y_pred = vc.predict(X_test)
print(f'Voting classifier: {accuracy_score(y_test, y_pred)}')
logreg = LogisticRegression(C=0.1, penalty='l1', solver='liblinear')
dtc = DecisionTreeClassifier(max_depth=2)
knn = KNeighborsClassifier(n_neighbors=2, weights='distance')
classifiers = [
('Logistic Regression', logreg),
('K Nearest Neighbours', knn),
('Classification Tree', dtc)
]
vc = VotingClassifier(classifiers, n_jobs=-1).fit(X_tr_st, y_train)
y_pred = vc.predict(X_ts_st)
print(f'Voting classifier: {accuracy_score(y_test, y_pred)}')
from sklearn.ensemble import BaggingClassifier
dt = DecisionTreeClassifier(max_depth=4, min_samples_leaf=.16)
bagging = BaggingClassifier(dt, 300, n_jobs=-1).fit(X_train, y_train)
y_pred = bagging.predict(X_test)
print(accuracy_score(y_test, y_pred))
bagging = BaggingClassifier(dt, 300, n_jobs=-1, oob_score=True).fit(X_train, y_train)
y_pred = bagging.predict(X_test)
test_accuracy = accuracy_score(y_test, y_pred)
print(test_accuracy)
print(bagging.oob_score_)
###Output
1.0
1.0
|
matplotlib/gallery_jupyter/mplot3d/surface3d_radial.ipynb | ###Markdown
3D surface with polar coordinatesDemonstrates plotting a surface defined in polar coordinates.Uses the reversed version of the YlGnBu color map.Also demonstrates writing axis labels with latex math mode.Example contributed by Armin Moser.
###Code
import matplotlib.pyplot as plt
import numpy as np
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
# Create the mesh in polar coordinates and compute corresponding Z.
r = np.linspace(0, 1.25, 50)
p = np.linspace(0, 2*np.pi, 50)
R, P = np.meshgrid(r, p)
Z = ((R**2 - 1)**2)
# Express the mesh in the cartesian system.
X, Y = R*np.cos(P), R*np.sin(P)
# Plot the surface.
ax.plot_surface(X, Y, Z, cmap=plt.cm.YlGnBu_r)
# Tweak the limits and add latex math labels.
ax.set_zlim(0, 1)
ax.set_xlabel(r'$\phi_\mathrm{real}$')
ax.set_ylabel(r'$\phi_\mathrm{im}$')
ax.set_zlabel(r'$V(\phi)$')
plt.show()
###Output
_____no_output_____ |
Python for Data Science/7. Dictionary.ipynb | ###Markdown
Dictionary
###Code
#Creating a dictionary
marks={'History':45,'Geography':54,'Hindi':56}
marks
# access elements
marks['Geography']
#adding elements
marks['English'] = 97
marks
marks.update({'Chemistry':89,'Physics':98})
marks
del marks['Hindi']
marks
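# A couple more common dictionary operations (added examples):
# iterate over key/value pairs
for subject, score in marks.items():
    print(subject, score)
# .get returns a default value instead of raising KeyError for a missing key
print(marks.get('Hindi', 'Not found'))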
###Output
_____no_output_____ |
south_america.ipynb | ###Markdown
Please read the [README](https://github.com/ibudiselic/covid/blob/master/README.md) file in this repository.
###Code
# Interactive plots make it difficult to update this automatically from the command line.
# This can be switched to `True` in Binder to get interactive plots, though.
INTERACTIVE_PLOTS = False
# Load all the data and functionality.
%run lib.ipynb
nb_continent = 'South America'
display(Markdown(f'# Analysis of the most affected countries in {nb_continent}'))
countries_to_plot = get_top_countries_by_relative_deaths(continent=nb_continent)
%%javascript
IPython.OutputArea.auto_scroll_threshold = 9999;
analyze_countries(absolute_date_comparison_start_date='2020-03-15')
###Output
_____no_output_____
###Markdown
Recovering countries that had over 300 active cases at peak
###Code
draw_recovering_countries(get_recovering_countries_info(continent=nb_continent))
###Output
_____no_output_____ |
examples/dsm_v/Extract Tree from DSM-V.ipynb | ###Markdown
Read PDF into python-text
###Code
import re
import yaml
import json
import regex
# parsing the pdf
from tika import parser
# turning stuff to a tree
from anytree import Node, RenderTree, AsciiStyle, PostOrderIter
from anytree.importer import DictImporter
from anytree.exporter import DictExporter
from anytree.search import findall
DSM_V_location = './raw/DSM_V.pdf'
DSM_ToC_location = './raw/toc.yaml'
raw = parser.from_file(DSM_V_location)
content = raw['content']
###Output
_____no_output_____
###Markdown
Process Table of Contents for controlling results
###Code
with open(DSM_ToC_location) as f:
# use safe_load instead load
ToC = {"name": "DSM_V", "id":"100000", 'children': yaml.safe_load(f)}
importer = DictImporter()
exporter = DictExporter()
toc_node = importer.import_(ToC)
id = 100001
for node in PostOrderIter(toc_node):
while ' ' in node.name:
node.name = node.name.replace(' ', ' ')
setattr(node, 'id', str(id))
id += 1
with open('./raw/toc.json', 'w+') as f:
json.dump(exporter.export(toc_node), f)
def get_name(content):
name = ' '.join([
k for k in content.split('\n')
if len(k) > 0 and k[0].isupper()
])
return name
###Output
_____no_output_____
###Markdown
Find Disorders that have an organized structure
###Code
# possible stuff found inside a block describing a disorder
SECTIONS = [
'Diagnostic Criteria',
'Recording Procedures',
'Specifiers',
'Diagnostic Features',
'Associated Features Supporting Diagnosis',
'Prevalence',
'Development and Course',
'Risk and Prognostic Factors',
'Culture-Related Diagnostic Issues',
'Gender-Related Diagnostic Issues',
'Functional Consequences of',
'Differential Diagnosis',
'Comorbidity',
'Relationship to Other Classifications',
]
# step by step information about diagnostic criteria
STEPS = [
'Diagnostic Criteria', 'Coding note:', 'Specify if:', 'Specify whether:',
'A.', 'B.', 'C.', 'D.', 'E.',
'F.', 'G.', 'H.', 'I.', 'J.',
]
SUBSTEPS = ["1.", "2.", "3.", "4.", "5.", "6.", "7.", "8."]
def get_sections(content, steps=SECTIONS):
"""Helper function separating the text into blocks of information
based on the expected steps that should be included."""
base = {}
previous_step = ""
data = []
found_steps = []
for row in content.split("\n"):
for step in steps:
if step in found_steps:
continue
found = regex.findall(
"(" + step + "){e<=3}", row[: len(step) + 3], overlapped=True
)
if row.startswith(step) or row == step or len(found):
base[previous_step] = "\n".join(data)
previous_step = step
data = []
found_steps.append(step)
data.append(row)
base[previous_step] = "\n".join(data)
if "" in base:
del base[""]
return base
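# Note (added): the "{e<=3}" suffix above relies on the third-party `regex` module's fuzzy matching,
# which tolerates up to 3 edit errors, so slightly mangled headings extracted from the PDF
# (e.g. "Prevalance" for "Prevalence") are still recognized as section starts.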
def create_structure(content, steps):
"""Sub-section list filtering.
Example: 1. 2. 3.
Example: A. B. C.
"""
for each in steps:
content = content.replace(each, "\n" + each)
base = {}
previous_step = "Beginning"
data = []
for row in content.split("\n"):
for step in steps:
if step in row:
base[previous_step] = "\n".join(data).replace("\n\n", "-")
previous_step = step
data = []
data.append(row)
base[previous_step] = "\n".join(data)
if "" in base:
del base[""]
return base
def structure_body(content, steps=STEPS, subsections=SUBSTEPS):
structured = create_structure(content, steps)
for each, val in structured.items():
sub = create_structure(val, subsections)
structured[each] = "\n".join(sub.values())
return "\n\n".join(structured.values())
def create_sections(content, sections=SECTIONS, structured=['Diagnostic Criteria']):
sections = get_sections(content, steps)
for key, val in sections.items():
if key in structured:
sections[key] = structure_body(val)
return sections
previous = None
next_name = None
structured_disorders = []
for i in re.finditer(r'^Diagnostic Criteria', content, flags=re.MULTILINE):
# decide name of disorder based on previous lines
diag_start = i.span()[0]
# skip first time, and store text there
if previous is None:
next_name = get_name(content[diag_start-80:diag_start])
previous = diag_start
continue
# decide naming
name = next_name
next_name = get_name(content[diag_start-80:diag_start])
# get content of disorder
text = content[previous:diag_start].replace(name, '')
disorder = {}
disorder['body'] = text
disorder['sections'] = get_sections(text)
disorder['name'] = name.replace(' ', ' ')
# prepare for next round
previous = diag_start
# skip problematic runs
if len(disorder.keys()) < 2:
continue
structured_disorders.append(disorder)
len(structured_disorders)
###Output
_____no_output_____
###Markdown
Post-Processing* some extracted data have no name, check by hand their cases* some names have issues* some names include too much, reduce
###Code
diag1 = 'Note: A tic is a sudden, rapid, recurrent, nonrhythmic motor movement or vocalization.'
diag2 = 'A. The development of a reversible substance-specific syndrome attributable to recent in'
diag3 = 'A. Daily use of tobacco for at least several weeks.'
diag4 = 'A. Following cessation of use of a hallucinogen, the reexperiencing of one or more of the'
diag5 = 'A. Presence of obsessions, compulsions, or both'
diag6 = 'C. The cognitive deficits do not occur exclusively in the context of a delirium.'
diag7 = 'A. Lack of, or significantly reduced, sexual interest/arousal, as manifested by at least '
diag8 = 'A. Polysomnograpy demonstrates episodes of decreased respiration associated with el'
empty_name_fix = {
'Tic Disorders': diag1,
'Other (or Unknown) Substance Intoxication': diag2,
'Tobacco Withdrawal': diag3,
'Hallucinogen Persisting Perception Disorder': diag4,
'Obsessive-Compulsive Disorder': diag5,
'Mild Neurocognitive Disorder': diag6,
'Female Sexual Interest/Arousal Disorder': diag7,
'Sleep-Related Hypoventilation': diag8,
}
def fix_name(name):
"""Removing unneeded variations of names that doesnt allow coordination
between extracted info and table of contents.
This has been done post processing, but its
being written here for keeping clean."""
name = name[name.rfind('.')+1:]
return (
name.replace('l\/l', 'M')
.replace('IVI', 'M')
.replace('Disinhiblted', 'Disinhibited')
.replace('Dereallzation', 'Derealization')
.replace('-induced', '-Induced')
.replace('Reiated', 'Related')
.replace('Genlto', 'Genito')
.replace('A positive family history ', '')
.replace(
'Unspecified Tobacco-Related Disorder Tobacco Use Disorder',
'Tobacco Use Disorder'
)
.replace(
'Unspecified Stimulant-Related Disorder Stimulant Use Disorder',
'Stimulant Use Disorder'
)
.replace(
'Induced Disorders Unspecified Inhalant-Related Disorder Inhalant Use Disorder',
'Inhalant Use Disorder'
)
.replace(
'Disorder Unspecified Hallucinogen-Related Disorder Phencyclidine Use Disorder',
'Phencyclidine Use Disorder'
)
.replace(
'Induced Disorders Unspecified Cannabis-Related Disorder Cannabis Use Disorder',
'Cannabis Use Disorder'
)
.replace(
'Unspecified Alcohol-Related Disorder Alcohol Use Disorder',
'Alcohol Use Disorder'
)
.replace(
'Induced Disorders Unspecified Caffeine-Related Disorder Caffeine Intoxication',
'Caffeine Intoxication'
)
.replace(
'Disorder Attention-Deficit/Hyperactivity Disorder',
'Attention-Deficit/Hyperactivity Disorder Attention-Deficit/Hyperactivity Disorder'
)
).lstrip()
def fix_empty_name_disorders(disorder):
"""fix cases where the name is empty."""
for disorder_name, diagnosis in empty_name_fix.items():
if diagnosis in disorder['body']:
return disorder_name
return ""
# raise Exception('Disorder Not Found in Post Processing.')
###Output
_____no_output_____
###Markdown
Turn the processed disorders into a tree
###Code
for disorder in structured_disorders:
involved_nodes = []
# fix name issues
disorder['name'] = fix_name(disorder['name'])
# if name doesnt exist, then fix it
if disorder['name'] == '' or disorder['name'] == ' ':
disorder['name'] = fix_empty_name_disorders(disorder)
# find name in the table of contents
for subchild in toc_node.children:
# if its exact name
involved_nodes += findall(
subchild,
filter_=lambda node: disorder['name'] == node.name
)
if len(involved_nodes) == 0:
# if its exact name
involved_nodes += findall(
subchild,
filter_=lambda node: disorder['name'] == node.parent.name + " " + node.name
)
if len(involved_nodes) == 0:
# if not exact name
involved_nodes += findall(
subchild,
filter_=lambda node:
disorder['name'] in node.parent.name + " " + node.name or
disorder['name'] in node.name + " " + node.parent.name
)
# catch errors before going further
if len(involved_nodes) == 0:
print("Error: Disease '" + disorder['name'] + "' was not found in Table of Contents.")
continue
if len(involved_nodes) > 1:
print("Error: Disease '" + disorder['name'] + "' was found more than once in Table of Contents.")
continue
# further post-processing
node = involved_nodes[0]
if 'sections' in disorder and disorder['sections'] != []:
node.children = [
Node(name=key, body=val, type="section")
for key, val in disorder['sections'].items()
]
for key, val in disorder.items():
if key == 'name':
pass
setattr(node, key, val)
with open('../dsm_v/dsm_v.json', 'w+') as f:
json.dump(exporter.export(toc_node), f)
###Output
_____no_output_____ |
example/xo2b photometry.ipynb | ###Markdown
Aperture photometry of the exoplanet XO2b observation in the filter B Harris.Data from Zellem et al.(2015): http://arxiv.org/abs/1505.01063WARNINGYou have to put your login.cl in the directory where you would like to run this script, or a similar one.
###Code
#under astroconda envs
import exotred #import the package
print 'Obtain the information in the YAML file:'
data_path, save_path, input_file = exotred.input_info('../input/') #include the YAML information
print data_path
print save_path
input_file
print 'Obtain the sky background information for each image: '
exotred.bkg_info(input_file)
print 'Loading sky background data: '
bkg_data, bkg_rms = exotred.bkg_read(input_file)
print 'Making the aperture photometry: '
exotred.phot_aperture(input_file,bkg_data,bkg_rms)
print 'Reading the results of the aperture photometry: '
rawflux, eflux, hjd, airmass = exotred.phot_readData(input_file)
print 'Make a beautiful plot: '
%matplotlib inline
from matplotlib.gridspec import GridSpec
import matplotlib.pyplot as plt #plot library
import numpy as np
def init_plotting():
"""
Function to set up a plot window with customized parameters for presentations in ppt or latex.
"""
plt.rcParams['figure.figsize'] = (14.0,8.0)
plt.rcParams['font.size'] = 20
#plt.rcParams['font.family'] = 'Times New Roman'
plt.rcParams['axes.labelsize'] = plt.rcParams['font.size']
plt.rcParams['axes.titlesize'] = 0.75*plt.rcParams['font.size']
plt.rcParams['legend.fontsize'] = 0.65*plt.rcParams['font.size']
plt.rcParams['xtick.labelsize'] = plt.rcParams['font.size']
plt.rcParams['ytick.labelsize'] = plt.rcParams['font.size']
plt.rcParams['xtick.major.size'] = 3
plt.rcParams['xtick.minor.size'] = 3
plt.rcParams['xtick.major.width'] = 1
plt.rcParams['xtick.minor.width'] = 1
plt.rcParams['ytick.major.size'] = 3
plt.rcParams['ytick.minor.size'] = 3
plt.rcParams['ytick.major.width'] = 1
plt.rcParams['ytick.minor.width'] = 1
plt.rcParams['legend.frameon'] = True
plt.rcParams['legend.loc'] = 'best'
plt.rcParams['axes.linewidth'] = 1
init_plotting()
f = plt.figure()
plt.suptitle(input_file['exoplanet']+" - rawdata")
gs1 = GridSpec(2, 2, width_ratios=[1,2],height_ratios=[4,1])
gs1.update(wspace=0.5)
ax1 = plt.subplot(gs1[:-1, :])
ax2 = plt.subplot(gs1[-1, :])
ax1.grid()
ax1.errorbar(hjd,rawflux,yerr=eflux,ecolor='g')
ax1.set_xticklabels([])
ax1.set_ylabel('Relative Flux')
ax2.grid()
ax2.plot(hjd,airmass,color='green')
plt.yticks(np.array([airmass.min(), (airmass.min()+airmass.max())/2., airmass.max()]))
ax2.set_xlabel('JD')
ax2.set_ylabel('airmass')
plt.savefig('raw_data_example.png')
###Output
Make a beautiful plot:
|
Parcial 1/Tarea4_BetsyTorres.ipynb | ###Markdown
This assignment was completed as a team by Betsy Torres, Mariana Briones and Erick Mendoza Tarea 4 4.1The following table lists different cash flows that management believes a project might earn in the fifth year of its life. Also listed in the table is management's estimate of the probability associated with each of the alternative possible cash flows the project might generate in that year.
###Code
import pandas as pd
import numpy as np
cash_flows = [20, 17.5, 15, 12.5, 10, 7.5, 5, 2.5, 0]
probability = [0.04, 0.06, 0.12, 0.18, 0.20, 0.18, 0.12, 0.06, 0.04]
data = pd.DataFrame({'Cash Flows': cash_flows, 'Probability': probability})
data
###Output
_____no_output_____
###Markdown
**a) What is the expected cash flow in year 5?**
###Code
exp_cash = np.dot(cash_flows,probability)
print('The expected cash flow in year 5 is:',exp_cash)
###Output
The expected cash flow in year 5 is: 10.0
###Markdown
**b) Each of the possible cash flows is different from the expected cash flow, If we square each of the differences from the expected cash flow and calculate the probability-weighted average of these squared differences, we obtain the variance of the cash flow. Calculate the variance of the year 5 cash flow.**
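In symbols, the quantity computed below is $\mathrm{Var}(CF_5) = \sum_i p_i\,(CF_i - E[CF_5])^2$, i.e. the probability-weighted average of the squared deviations from the expected cash flow.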
###Code
var = np.dot(probability, (np.array(cash_flows) - exp_cash)**2)
print('The variance of the year 5 cash flow is:', round(var, 2))
###Output
The variance of the year 5 cash flow is: 23.0
###Markdown
**c) The standard deviation is the square root of the variance. Calculate the standard deviation of the year 5 cash flow.**
###Code
std = var**0.5
print('The standard deviation of the year 5 cash flow is:', round(std, 4))
###Output
The standard deviation of the year 5 cash flow is: 4.7958
###Markdown
4.2 The XTRON product requires a € 1 million investment in one year's time if management decides to launch the product. The project sponsor believes that the NPV will be € 200,000, but the probability is 20% that the NPV will be - € 100,000 instead. How much value can management expect this investment to add for shareholders at that time?
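The decision reduces to an expected-value calculation: $E[\mathrm{NPV}] = 0.8(200{,}000) + 0.2(-100{,}000) = 160{,}000 - 20{,}000 = \text{€}140{,}000$, which the code below reproduces.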
###Code
NPV = [200000, -100000] #success, failure
prob = [0.8, 0.2] #success, failure
def arbol_decision(NPV, prob):
    invest = NPV[0] * prob[0] + NPV[1] * prob[1]
    no_invest = 0
    if invest > no_invest:
        print("Invest in the product; the expected value added is:", invest)
    else:
        print("Do not invest, because the expected value is negative")
arbol_decision(NPV, prob)
###Output
Invest in the product; the expected value added is: 140000.0
|
String/1011/1208. Get Equal Substrings Within Budget.ipynb | ###Markdown
Description: You are given two strings s and t of the same length. You want to change s to t. Changing the i-th character of s to the i-th character of t costs |s[i] - t[i]|, that is, the absolute difference between the ASCII values of the characters. You are also given an integer maxCost. Return the maximum length of a substring of s that can be changed to be the same as the corresponding substring of t with a cost less than or equal to maxCost. If there is no substring of s that can be changed to match the corresponding substring of t, return 0.Example 1: Input: s = "abcd", t = "bcdf", maxCost = 3 Output: 3 Explanation: "abc" of s can change to "bcd". That costs 3, so the maximum length is 3.Example 2: Input: s = "abcd", t = "cdef", maxCost = 3 Output: 1 Explanation: Each character in s costs 2 to change to the character in t, so the maximum length is 1.Example 3: Input: s = "abcd", t = "acde", maxCost = 0 Output: 1 Explanation: You can't make any change, so the maximum length is 1.Constraints: 1. 1 <= s.length, t.length <= 10^5 2. 0 <= maxCost <= 10^6 3. s and t only contain lower case English letters.
###Code
class Solution:
    def equalSubstring(self, s: str, t: str, maxCost: int) -> int:
        # Sliding window: extend the window to the right, accumulating the cost of
        # changing s[i] into t[i]; whenever the total cost exceeds maxCost, shrink
        # the window from the left. The answer is the largest window seen.
        left = 0
        cost = 0
        best = 0
        for right in range(len(s)):
            cost += abs(ord(s[right]) - ord(t[right]))
            while cost > maxCost:
                cost -= abs(ord(s[left]) - ord(t[left]))
                left += 1
            best = max(best, right - left + 1)
        return best
solution = Solution()
solution.equalSubstring(s = "abdd", t = "abch", maxCost = 3)
###Output
3
|
figures/figure02.ipynb | ###Markdown
Saldías et al. Figure 02Waves - ssh anomaly (canyon minus no-canyon), allowed and scattered waves
###Code
from brokenaxes import brokenaxes
import cmocean as cmo
import matplotlib.pyplot as plt
from matplotlib.gridspec import GridSpec
import matplotlib.gridspec as gspec
import matplotlib.patches as patches
from netCDF4 import Dataset
import numpy as np
import pandas as pd
import scipy as sc
import scipy.io as sio
import xarray as xr
import matplotlib.colors as mcolors
import matplotlib.lines as mlines
from matplotlib.lines import Line2D
%matplotlib inline
def get_fig_file(file_fig):
# Brink mode
file = sio.loadmat(file_fig)
z, xpl, xxx, zzz = file['z'][0,:], file['xpl'][0,:], file['xxx'][0,:], file['zzz'][0,:]
# (u is cross-shore and v is alongshore in Brink.)
p0, u0, v0, w0, r0 = file['p_profile'], file['u_profile'],file['v_profile'], file['w_profile'], file['r_profile']
scale=0.2
w = w0 * 0.01 * scale # cms-1 to ms-1 and normalization (?)
u = u0 * 0.01 * scale # cms-1 to ms-1 and normalization
v = v0 * 0.01 * scale # cms-1 to ms-1 and normalization
r = r0 * 1.0 * scale # mg/cm³ to kg/m³ and normalization
p = p0 * 0.1 * scale # dyn/cm² to 0.1 Pa (or kg m-1 s-2) and normalization
return(u,v,w,r,p,z,xpl, xxx, zzz)
def plot_Brink(ax,fld,z,xpl,xxx,zzz,minp,maxp,nlev=15):
landc='#8b7765'
levels=np.linspace(minp,maxp,nlev)
cnf = ax.contourf(xpl, z, fld, levels=levels, cmap=cmo.cm.delta, vmin=minp,
vmax=maxp, zorder=1)
ax.contour(xpl, z, fld, levels=levels, linewidths=1, linestyles='-', colors='0.4', zorder=2)
ax.contour(xpl, z, fld, levels=[0], linewidths=2, linestyles='-', colors='k', zorder=2)
ax.fill_between(xxx, zzz.min(), zzz, facecolor=landc, zorder=3)
return(cnf, ax)
runs = ['DS','IS','SS']
fig = plt.figure(figsize=(7.48,9))
plt.rcParams.update({'font.size': 8})
# Set up subplot grid
gs = GridSpec(4, 3, width_ratios=[1,1,1], height_ratios=[0.6,1.3,1.5,1.3],
wspace=0.1,hspace=0.3, figure=fig)
ax1 = fig.add_subplot(gs[0, 0])
ax2 = fig.add_subplot(gs[0, 1])
ax3 = fig.add_subplot(gs[0, 2])
ax4 = fig.add_subplot(gs[1, 0])
ax5 = fig.add_subplot(gs[1, 1])
ax6 = fig.add_subplot(gs[1, 2])
ax7 = fig.add_subplot(gs[2, 0])
ax8 = fig.add_subplot(gs[2, 1:])
ax9 = fig.add_subplot(gs[3, 0])
ax10 = fig.add_subplot(gs[3, 1])
ax11 = fig.add_subplot(gs[3, 2])
for ax in [ax2,ax3,ax5,ax6,ax10,ax11]:
ax.set_yticks([])
for ax,run in zip([ax1,ax2,ax3],runs):
ax.set_xlabel('x (km)', labelpad=0)
ax.set_title(run)
for ax in [ax4,ax5,ax6,ax7]:
ax.set_xlabel('Days', labelpad=0)
for ax in [ax9,ax10,ax11]:
ax.set_xlabel('x (km)', labelpad=0)
ax1.set_ylabel('Depth (m)', labelpad=0)
ax4.set_ylabel('y (km)', labelpad=0)
ax7.set_ylabel('y (km)', labelpad=0)
ax9.set_ylabel('Depth (m)', labelpad=0)
ax8.set_xlabel(r'$k$ ($10^{-5}$ rad m$^{-1}$)', labelpad=0)
ax8.set_ylabel(r'$\omega$ ($10^{-5}$ rad s$^{-1}$)', labelpad=0.5)
ax8.yaxis.set_label_position("right")
ax8.yaxis.tick_right()
# Shelf profiles
for run, ax in zip(runs, [ax1,ax2,ax3]):
can_file = '/Volumes/MOBY/ROMS-CTW/ocean_his_ctw_CR_'+run+'_7d.nc'
yshelf = 400
yaxis = int(579/2)
with Dataset(can_file, 'r') as nbl:
hshelf = -nbl.variables['h'][yshelf,:]
haxis = -nbl.variables['h'][yaxis,:]
x_rho = (nbl.variables['x_rho'][:]-400E3)/1000
y_rho = (nbl.variables['y_rho'][:]-400E3)/1000
ax.plot(x_rho[yshelf,:], hshelf,'k-', label='shelf')
ax.plot(x_rho[yaxis,:], haxis,'k:', label='canyon \n axis')
ax.set_xlim(-50,0)
ax.set_ylim(-500,0)
ax1.legend(labelspacing=0)
#SSH hovmöller plots (canyon-no canyon)
xind = 289
for run, ax in zip(runs,(ax4,ax5,ax6)):
nc_file = '/Volumes/MOBY/ROMS-CTW/ocean_his_ctw_NCR_'+run+'_7d.nc'
can_file = '/Volumes/MOBY/ROMS-CTW/ocean_his_ctw_CR_'+run+'_7d.nc'
with Dataset(can_file, 'r') as nbl:
y_rho = nbl.variables['y_rho'][:]
time = nbl.variables['ocean_time'][:]
zeta = nbl.variables['zeta'][:,:,xind]
with Dataset(nc_file, 'r') as nbl:
y_rho_nc = nbl.variables['y_rho'][:]
time_nc = nbl.variables['ocean_time'][:]
zeta_nc = nbl.variables['zeta'][:,:,xind]
pc2 = ax.pcolormesh((time_nc)/(3600*24),(y_rho_nc[:,xind]/1000)-400,
np.transpose((zeta[:,:]-zeta_nc[:,:]))*1000,
cmap=cmo.cm.balance, vmax=4.0, vmin=-4.0)
if run == 'IS':
rect = patches.Rectangle((5,-20),15,160,linewidth=2,edgecolor='k',facecolor='none')
ax.add_patch(rect)
ax.axhline(0.0, color='k', alpha=0.5)
ax.set_ylim(-400,400)
cbar_ax = fig.add_axes([0.92, 0.585, 0.025, 0.17])
cb = fig.colorbar(pc2, cax=cbar_ax, orientation='vertical', format='%1.0f')
cb.set_label(r'Surface elevation (10$^{-3}$ m)')
# Zoomed-in SSH hovmöller plot of IS (canyon-no canyon)
yind = 420
xlim = 100
xind = 289
y1 = 189
y2 = 389
y3 = 526
y4 = 540
y5 = 315
run = 'IS'
ax = ax7
nc_file = '/Volumes/MOBY/ROMS-CTW/ocean_his_ctw_NCR_'+run+'_7d.nc'
can_file = '/Volumes/MOBY/ROMS-CTW/ocean_his_ctw_CR_'+run+'_7d.nc'
with Dataset(can_file, 'r') as nbl:
y_rho = nbl.variables['y_rho'][:]
time = nbl.variables['ocean_time'][:]
zeta = nbl.variables['zeta'][:,:,xind]
with Dataset(nc_file, 'r') as nbl:
y_rho_nc = nbl.variables['y_rho'][:]
time_nc = nbl.variables['ocean_time'][:]
zeta_nc = nbl.variables['zeta'][:,:,xind]
pc2 = ax.pcolormesh((time_nc)/(3600*24),(y_rho_nc[:,xind]/1000)-400,
np.transpose((zeta[:,:]-zeta_nc[:,:]))*1000,
cmap=cmo.cm.balance, vmax=4.0, vmin=-4.0)
t1_IS = (time_nc[47])/(3600*24)
y1_IS = (y_rho_nc[y2,xind]/1000)-400
t2_IS = (time_nc[65])/(3600*24)
y2_IS = (y_rho_nc[y4,xind]/1000)-400
ax.plot([t1_IS, t2_IS],[y1_IS, y2_IS], '.-', color='k')
t1_IS = (time_nc[47])/(3600*24)
y1_IS = (y_rho_nc[289,xind]/1000)-400
t2_IS = (time_nc[55])/(3600*24)
y2_IS = (y_rho_nc[y2,xind]/1000)-400
ax.plot([t1_IS, t2_IS],[y1_IS, y2_IS], '.-',color='k')
ax.axhline(0.0, color='k', alpha=0.5)
ax.axhline(-5.0, color='0.5', alpha=0.5)
ax.axhline(5.0, color='0.5', alpha=0.5)
ax.set_ylim(-20,140)
ax.set_xlim(5,20)
rect = patches.Rectangle((5.1,-19),14.85,158,linewidth=2,edgecolor='k',facecolor='none')
ax.add_patch(rect)
# Dispersion curves
g = 9.81 # gravitational accel. m/s^2
Hs = 100 # m shelf break depth
f = 1.028E-4 # inertial frequency
omega_fw = 1.039E-5 # fw = forcing wave
k_fw = 6.42E-6# rad/m
domain_length = 800E3 # m
canyon_width = 10E3 # m
col1 = '#254441' #'#23022e'
col2 = '#43AA8B' #'#573280'
col3 = '#B2B09B' #'#ada8b6'
col4 = '#FF6F59' #'#58A4B0'
files = ['../dispersion_curves/DS/dispc_DS_mode1_KRM.dat',
'../dispersion_curves/IS/dispc_IS_mode1_KRM.dat',
'../dispersion_curves/SS/dispc_SS_mode1_KRM.dat',
'../dispersion_curves/DS/dispc_DS_mode2_KRM.dat',
'../dispersion_curves/IS/dispc_IS_mode2_KRM.dat',
'../dispersion_curves/SS/dispc_SS_mode2_KRM.dat',
'../dispersion_curves/DS/dispc_DS_mode3_KRM.dat',
'../dispersion_curves/IS/dispc_IS_mode3_KRM.dat',
'../dispersion_curves/SS/dispc_SS_mode3_KRM.dat',
'../dispersion_curves/IS/dispc_IS_mode4_KRM.dat',
'../dispersion_curves/SS/dispc_SS_mode4_KRM.dat',
'../dispersion_curves/DS/dispc_DS_mode5_KRM.dat',
'../dispersion_curves/IS/dispc_IS_mode5_KRM.dat',
'../dispersion_curves/SS/dispc_SS_mode5_KRM.dat',
'../dispersion_curves/IS/dispc_IS_mode6_KRM.dat',
'../dispersion_curves/SS/dispc_SS_mode6_KRM.dat',
]
colors = [col1,
col2,
col3,
col1,
col2,
col3,
col1,
col2,
col3,
col2,
col3,
col1,
col2,
col3,
#col1,
col2,
col3,
]
linestyles = ['-','-','-','--','--','--',':',':',':','-.','-.','-','-','-','--','--']
labels = [ r'DS $\bar{c_1}$',r'IS $\bar{c_1}$',r'SS $\bar{c_1}$',
r'DS $\bar{c_2}$',r'IS $\bar{c_2}$',r'SS $\bar{c_2}$',
r'DS $\bar{c_3}$',r'IS $\bar{c_3}$',r'SS $\bar{c_3}$',
r'IS $\bar{c_4}$',r'SS $\bar{c_4}$',
r'DS $\bar{c_5}$',r'IS $\bar{c_5}$',r'SS $\bar{c_5}$',
r'IS $\bar{c_6}$',r'SS $\bar{c_6}$']
ax8.axhline(omega_fw*1E5, color='0.5', label='1/7 days')
ax8.axhline(f*1E5, color='gold', label='f')
ax8.axvline((1E5*(2*np.pi))/domain_length, linestyle='-', color=col4, alpha=1, label='domain length')
for file, col, lab, line in zip(files, colors, labels, linestyles):
data_mode = pd.read_csv(file, delim_whitespace=True, header=None, names=['wavenum', 'freq', 'perturbation'])
omega = data_mode['freq'][:]
k = data_mode['wavenum'][:]*100
ax8.plot(k*1E5, omega*1E5, linestyle=line,
color=col,linewidth=2,alpha=0.9,
label=lab+r'=%1.2f ms$^{-1}$' % (np.mean(omega/k)))
ax8.plot((omega_fw/1.59)*1E5, omega_fw*1E5, '^',color=col1,
markersize=9, label = 'incident DS %1.2f' %(1.59),
markeredgecolor='0.2',markeredgewidth=1)
ax8.plot((omega_fw/1.39)*1E5, omega_fw*1E5, '^',color=col2,
markersize=9, label = 'incident IS %1.2f' %(1.39),
markeredgecolor='0.2',markeredgewidth=1)
ax8.plot((omega_fw/1.29)*1E5, omega_fw*1E5, '^',color=col3,
markersize=9, label = 'incident SS %1.2f' %(1.29),
markeredgecolor='0.2',markeredgewidth=1)
ax8.plot((omega_fw/0.32)*1E5, omega_fw*1E5, 'o',color=col1,
markersize=9, label = 'DS model c=%1.2f m/s' %(0.32),
markeredgecolor='0.2',markeredgewidth=1)
ax8.plot((omega_fw/0.23)*1E5, omega_fw*1E5, 'o',color=col2,
markersize=9, label = 'IS model c=%1.2f m/s' %(0.23),
markeredgecolor='0.2',markeredgewidth=1)
ax8.plot((omega_fw/1.04)*1E5, omega_fw*1E5, 'o',color=col3,
markersize=9, label = 'SS model c=%1.2f m/s' %(1.04),
markeredgecolor='0.2',markeredgewidth=1)
ax8.plot((omega_fw/0.14)*1E5, omega_fw*1E5, 'd',color=col1,
markersize=11, label = 'DS model c=%1.2f m/s' %(0.14),
markeredgecolor='0.2',markeredgewidth=1)
ax8.plot((omega_fw/0.14)*1E5, omega_fw*1E5, 'd',color=col2,
markersize=9, label = 'IS model c=%1.2f m/s' %(0.14),
markeredgecolor='0.2',markeredgewidth=1)
ax8.set_ylim(0, 1.5)
ax8.set_xlim(0,8)
legend_elements=[]
legend_elements.append(Line2D([0], [0], marker='^',color='w', label='incident',
markerfacecolor='k', mec='k',markersize=6))
legend_elements.append(Line2D([0], [0], marker='o',color='w', label='1$^{st}$ scattered',
markerfacecolor='k', mec='k',markersize=6))
legend_elements.append(Line2D([0], [0], marker='d',color='w', label='2$^{nd}$ scattered',
markerfacecolor='k', mec='k',markersize=6))
for col, run in zip([col1,col2,col3], runs):
legend_elements.append(Line2D([0], [0], marker='s',color=col, linewidth=4,label=run,
markerfacecolor=col, mec=col, markersize=0))
ax8.legend(handles=legend_elements, bbox_to_anchor=(0.65,0.32),frameon=False, handlelength=0.7,
handletextpad=0.5, ncol=2,columnspacing=0.25, framealpha=0, edgecolor='w',labelspacing=0.2)
# Mode structure (modes 1, 3 and 5, IS run)
run='IS'
modes = ['mode1','mode3', 'mode5']
for mode, ax in zip(modes, [ax9,ax10,ax11]):
u,v,w,r,p,z,xpl,xxx,zzz = get_fig_file('../dispersion_curves/'+run+'/figures_'+run+'_'+mode+'_KRM.mat')
minp = -(1.66e-06)*1E6
maxp = (1.66e-06)*1E6
cntf, ax = plot_Brink(ax, p*1E6, z, xpl, xxx, zzz, minp, maxp)
ax.set_xlim(0,50)
cbar_ax = fig.add_axes([0.92, 0.125, 0.025, 0.17])
cb = fig.colorbar(cntf, cax=cbar_ax, orientation='vertical', format='%1.1f')
cb.set_label(r'Pressure (10$^{-6}$ Pa)')
ax9.text(0.5,0.9,'Incident wave',transform=ax9.transAxes, fontweight='bold')
ax10.text(0.5,0.9,'Mode 3 (IS)',transform=ax10.transAxes, fontweight='bold')
ax11.text(0.5,0.9,'Mode 5 (IS)',transform=ax11.transAxes, fontweight='bold')
ax8.text(0.09,0.75,'mode 1',transform=ax8.transAxes,rotation=70 )
ax8.text(0.27,0.75,'mode 2',transform=ax8.transAxes,rotation=51 )
ax8.text(0.43,0.75,'mode 3',transform=ax8.transAxes,rotation=41 )
ax8.text(0.65,0.75,'mode 4',transform=ax8.transAxes,rotation=30 )
ax8.text(0.87,0.72,'mode 5',transform=ax8.transAxes,rotation=25 )
ax8.text(0.87,0.47,'mode 6',transform=ax8.transAxes,rotation=18 )
ax1.text(0.95,0.05,'a',transform=ax1.transAxes, fontsize=8, fontweight='bold',
color='w', bbox={'facecolor': 'black', 'alpha': 1, 'pad': 1})
ax2.text(0.95,0.05,'b',transform=ax2.transAxes, fontsize=8, fontweight='bold',
color='w', bbox={'facecolor': 'black', 'alpha': 1, 'pad': 1})
ax3.text(0.95,0.05,'c',transform=ax3.transAxes, fontsize=8, fontweight='bold',
color='w', bbox={'facecolor': 'black', 'alpha': 1, 'pad': 1})
ax4.text(0.95,0.03,'d',transform=ax4.transAxes, fontsize=8, fontweight='bold',
color='w', bbox={'facecolor': 'black', 'alpha': 1, 'pad': 1})
ax5.text(0.95,0.03,'e',transform=ax5.transAxes, fontsize=8, fontweight='bold',
color='w', bbox={'facecolor': 'black', 'alpha': 1, 'pad': 1})
ax6.text(0.96,0.03,'f',transform=ax6.transAxes, fontsize=8, fontweight='bold',
color='w', bbox={'facecolor': 'black', 'alpha': 1, 'pad': 1})
ax7.text(0.01,0.94,'g',transform=ax7.transAxes, fontsize=8, fontweight='bold',
color='w', bbox={'facecolor': 'black', 'alpha': 1, 'pad': 1})
ax8.text(0.01,0.03,'h',transform=ax8.transAxes, fontsize=8, fontweight='bold',
color='w', bbox={'facecolor': 'black', 'alpha': 1, 'pad': 1})
ax9.text(0.97,0.03,'i',transform=ax9.transAxes, fontsize=8, fontweight='bold',
color='w', bbox={'facecolor': 'black', 'alpha': 1, 'pad': 1})
ax10.text(0.97,0.03,'j',transform=ax10.transAxes, fontsize=8, fontweight='bold',
color='w', bbox={'facecolor': 'black', 'alpha': 1, 'pad': 1})
ax11.text(0.95,0.03,'k',transform=ax11.transAxes, fontsize=8, fontweight='bold',
color='w', bbox={'facecolor': 'black', 'alpha': 1, 'pad': 1})
plt.savefig('Figure2.png',format='png',bbox_inches='tight', dpi=300)
plt.show()
###Output
_____no_output_____ |
Test1_Mask_RCNN.ipynb | ###Markdown
New section: Installation and imports
###Code
###Output
_____no_output_____ |
hm-recbole-ifs.ipynb | ###Markdown
Importing Libraries
###Code
import os
import gc
import numpy as np
import pandas as pd
data_path = r'../input/h-and-m-personalized-fashion-recommendations/transactions_train.csv'
customer_data_path = r'../input/h-and-m-personalized-fashion-recommendations/customers.csv'
article_data_path = r'../input/h-and-m-personalized-fashion-recommendations/articles.csv'
submission_data_path = r'../input/h-m-ensembling/submission.csv'
!mkdir /kaggle/working/recbole_data
recbole_data_path = r'/kaggle/working/recbole_data'
# Data Extraction
def create_data(datapath, data_type=None):
if data_type is None:
df = pd.read_csv(datapath)
elif data_type == 'transaction':
df = pd.read_csv(datapath, dtype={'article_id': str}, parse_dates=['t_dat'])
elif data_type == 'article':
df = pd.read_csv(datapath, dtype={'article_id': str})
return df
###Output
_____no_output_____
###Markdown
Reading Transaction data
###Code
%%time
# Load all sales data (spanning 2018 to 2020).
# Also, article_id is treated as a string column; otherwise pandas would drop
# the leading zeros while reading that column.
transactions_data=create_data(data_path, data_type='transaction')
print(transactions_data.shape)
# # Unique Attributes
print(str(len(transactions_data['t_dat'].drop_duplicates())) + "-total No of unique transactions dates in data sheet")
print(str(len(transactions_data['customer_id'].drop_duplicates())) + "-total No of unique customers ids in data sheet")
print(str(len(transactions_data['article_id'].drop_duplicates())) + "-total No of unique article ids in data sheet")
print(str(len(transactions_data['sales_channel_id'].drop_duplicates())) + "-total No of unique sales channels in data sheet")
transactions_data.head()
###Output
(31788324, 5)
734-total No of unique transactions dates in data sheet
1362281-total No of unique customers ids in data sheet
104547-total No of unique article ids in data sheet
2-total No of unique sales channels in data sheet
CPU times: user 55.3 s, sys: 4.24 s, total: 59.5 s
Wall time: 1min 20s
###Markdown
Postprocessing Transaction data
1. timestamp column is created from the transaction dates column
2. columns are renamed for easy reading
###Code
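# Convert transaction dates to Unix seconds and keep only recent purchases:
# the 1585620000 cutoff corresponds to roughly 2020-03-31 (UTC), so older
# transactions are dropped to shrink the interaction data.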
transactions_data['timestamp'] = transactions_data.t_dat.values.astype(np.int64)// 10 ** 9
transactions_data = transactions_data[transactions_data['timestamp'] > 1585620000]
transactions_data = transactions_data[['customer_id','article_id','timestamp']].rename(columns={'customer_id': 'user_id:token',
'article_id': 'item_id:token',
'timestamp': 'timestamp:float'})
transactions_data
###Output
_____no_output_____
###Markdown
Saving the transaction data to the RecBole data directory on Kaggle
###Code
transactions_data.to_csv(os.path.join(recbole_data_path, 'recbole_data.inter'), index=False, sep='\t')
del [[transactions_data]]
gc.collect()
###Output
_____no_output_____
###Markdown
Reading Article data
###Code
%%time
# Load all articles
article_data=create_data(article_data_path, data_type='article')
print(article_data.shape)
print(str(len(article_data['article_id'].drop_duplicates())) + "-total No of unique article ids in article data sheet")
article_data.head()
###Output
(105542, 25)
105542-total No of unique article ids in article data sheet
CPU times: user 716 ms, sys: 43.6 ms, total: 760 ms
Wall time: 1.07 s
###Markdown
Postprocessing Article data
1. drop duplicate columns to avoid multicollinearity
2. columns are renamed for easy reading
###Code
article_data = article_data.drop(columns = ['product_type_name', 'graphical_appearance_name', 'colour_group_name',
'perceived_colour_value_name', 'perceived_colour_master_name', 'index_name',
'index_group_name', 'section_name', 'garment_group_name',
'prod_name', 'department_name', 'detail_desc'])
article_data = article_data.rename(columns = {'article_id': 'item_id:token',
'product_code': 'product_code:token',
'product_type_no': 'product_type_no:float',
'product_group_name': 'product_group_name:token_seq',
'graphical_appearance_no': 'graphical_appearance_no:token',
'colour_group_code': 'colour_group_code:token',
'perceived_colour_value_id': 'perceived_colour_value_id:token',
'perceived_colour_master_id': 'perceived_colour_master_id:token',
'department_no': 'department_no:token',
'index_code': 'index_code:token',
'index_group_no': 'index_group_no:token',
'section_no': 'section_no:token',
'garment_group_no': 'garment_group_no:token'})
article_data
###Output
_____no_output_____
###Markdown
Saving the article data to the RecBole data directory on Kaggle
###Code
article_data.to_csv(os.path.join(recbole_data_path, 'recbole_data.item'), index=False, sep='\t')
del [[article_data]]
gc.collect()
###Output
_____no_output_____
###Markdown
Setting up the RecBole dataset and configuration
###Code
import logging
from logging import getLogger
from recbole.config import Config
from recbole.data import create_dataset, data_preparation
from recbole.model.sequential_recommender import GRU4RecF
from recbole.trainer import Trainer
from recbole.utils import init_seed, init_logger
parameter_dict = {
'data_path': '/kaggle/working',
'USER_ID_FIELD': 'user_id',
'ITEM_ID_FIELD': 'item_id',
'TIME_FIELD': 'timestamp',
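    # keep only users and items with at least 40 interactions (RecBole interval filters)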
'user_inter_num_interval': "[40,inf)",
'item_inter_num_interval': "[40,inf)",
'load_col': {'inter': ['user_id', 'item_id', 'timestamp'],
'item': ['item_id', 'product_code', 'product_type_no', 'product_group_name', 'graphical_appearance_no',
'colour_group_code', 'perceived_colour_value_id', 'perceived_colour_master_id',
'department_no', 'index_code', 'index_group_no', 'section_no', 'garment_group_no']
},
'selected_features': ['product_code', 'product_type_no', 'product_group_name', 'graphical_appearance_no',
'colour_group_code', 'perceived_colour_value_id', 'perceived_colour_master_id',
'department_no', 'index_code', 'index_group_no', 'section_no', 'garment_group_no'],
'neg_sampling': None,
'epochs': 100,
'eval_args': {
'split': {'RS': [10, 0, 0]},
'group_by': 'user',
'order': 'TO',
'mode': 'full'},
'topk':[12]
}
config = Config(model='GRU4RecF', dataset='recbole_data', config_dict=parameter_dict)
# init random seed
init_seed(config['seed'], config['reproducibility'])
# logger initialization
init_logger(config)
logger = getLogger()
# Create handlers
c_handler = logging.StreamHandler()
c_handler.setLevel(logging.INFO)
logger.addHandler(c_handler)
# write config info into log
logger.info(config)
dataset = create_dataset(config)
logger.info(dataset)
# dataset splitting
train_data, valid_data, test_data = data_preparation(config, dataset)
# model loading and initialization
model = GRU4RecF(config, train_data.dataset).to(config['device'])
logger.info(model)
# trainer loading and initialization
trainer = Trainer(config, model)
# model training
best_valid_score, best_valid_result = trainer.fit(train_data)
###Output
GRU4RecF(
(item_embedding): Embedding(7330, 64, padding_idx=0)
(feature_embed_layer): FeatureSeqEmbLayer(
(token_embedding_table): ModuleDict(
(item): FMEmbedding(
(embedding): Embedding(3935, 64)
)
)
(float_embedding_table): ModuleDict(
(item): Embedding(1, 64)
)
(token_seq_embedding_table): ModuleDict(
(item): ModuleList(
(0): Embedding(16, 64)
)
)
)
(item_gru_layers): GRU(64, 128, bias=False, batch_first=True)
(feature_gru_layers): GRU(768, 128, bias=False, batch_first=True)
(dense_layer): Linear(in_features=256, out_features=64, bias=True)
(dropout): Dropout(p=0.3, inplace=False)
(loss_fct): CrossEntropyLoss()
)
Trainable parameters: 1156288
epoch 0 training [time: 45.54s, train loss: 3642.3637]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 1 training [time: 43.23s, train loss: 3390.6134]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 2 training [time: 43.00s, train loss: 3250.4472]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 3 training [time: 43.16s, train loss: 3163.3735]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 4 training [time: 42.99s, train loss: 3099.2533]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 5 training [time: 42.99s, train loss: 3044.1074]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 6 training [time: 43.14s, train loss: 2998.5542]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 7 training [time: 42.97s, train loss: 2962.4046]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 8 training [time: 42.91s, train loss: 2932.5592]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 9 training [time: 42.97s, train loss: 2907.5308]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 10 training [time: 42.84s, train loss: 2885.6282]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 11 training [time: 42.92s, train loss: 2867.5368]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 12 training [time: 42.82s, train loss: 2850.5957]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 13 training [time: 42.83s, train loss: 2836.0930]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 14 training [time: 43.10s, train loss: 2822.5238]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 15 training [time: 42.77s, train loss: 2811.0895]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 16 training [time: 42.90s, train loss: 2799.8698]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 17 training [time: 42.98s, train loss: 2790.4907]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 18 training [time: 42.79s, train loss: 2781.8785]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 19 training [time: 42.84s, train loss: 2774.2283]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 20 training [time: 42.86s, train loss: 2766.8078]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 21 training [time: 42.85s, train loss: 2760.0341]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 22 training [time: 43.00s, train loss: 2753.8305]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 23 training [time: 42.64s, train loss: 2748.0924]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 24 training [time: 42.53s, train loss: 2742.5462]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 25 training [time: 42.67s, train loss: 2737.9435]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 26 training [time: 42.88s, train loss: 2732.9869]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 27 training [time: 43.01s, train loss: 2728.7535]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 28 training [time: 42.79s, train loss: 2724.6929]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 29 training [time: 42.88s, train loss: 2720.6401]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 30 training [time: 42.76s, train loss: 2716.7650]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 31 training [time: 42.87s, train loss: 2712.6816]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 32 training [time: 42.76s, train loss: 2709.9449]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 33 training [time: 42.76s, train loss: 2706.3356]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 34 training [time: 42.60s, train loss: 2703.5010]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 35 training [time: 42.95s, train loss: 2700.1212]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 36 training [time: 42.59s, train loss: 2696.9098]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 37 training [time: 42.57s, train loss: 2694.2124]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 38 training [time: 42.80s, train loss: 2691.2175]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 39 training [time: 42.72s, train loss: 2688.2775]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 40 training [time: 42.52s, train loss: 2687.1207]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 41 training [time: 42.78s, train loss: 2683.6108]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 42 training [time: 42.70s, train loss: 2680.3406]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 43 training [time: 42.75s, train loss: 2678.0644]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 44 training [time: 42.91s, train loss: 2676.2267]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 45 training [time: 42.72s, train loss: 2673.2900]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 46 training [time: 42.84s, train loss: 2670.9084]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 47 training [time: 42.59s, train loss: 2668.3615]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 48 training [time: 42.60s, train loss: 2667.0580]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 49 training [time: 42.56s, train loss: 2664.2967]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 50 training [time: 42.78s, train loss: 2662.0451]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 51 training [time: 42.73s, train loss: 2660.1352]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 52 training [time: 42.78s, train loss: 2657.7945]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 53 training [time: 42.61s, train loss: 2656.4686]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 54 training [time: 42.59s, train loss: 2654.0421]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 55 training [time: 42.58s, train loss: 2651.8889]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 56 training [time: 42.60s, train loss: 2650.4294]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 57 training [time: 42.74s, train loss: 2648.3119]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 58 training [time: 42.59s, train loss: 2646.3108]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 59 training [time: 42.52s, train loss: 2644.7120]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 60 training [time: 42.59s, train loss: 2642.6164]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 61 training [time: 42.59s, train loss: 2640.6564]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 62 training [time: 42.57s, train loss: 2639.2032]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 63 training [time: 42.41s, train loss: 2637.3143]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 64 training [time: 42.18s, train loss: 2635.7403]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 65 training [time: 42.17s, train loss: 2634.4346]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 66 training [time: 42.22s, train loss: 2632.6797]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 67 training [time: 42.12s, train loss: 2630.9897]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 68 training [time: 42.24s, train loss: 2629.3250]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 69 training [time: 42.21s, train loss: 2627.6307]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 70 training [time: 42.24s, train loss: 2626.2554]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 71 training [time: 42.09s, train loss: 2624.0660]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 72 training [time: 42.27s, train loss: 2623.2111]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 73 training [time: 42.10s, train loss: 2621.6412]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 74 training [time: 42.08s, train loss: 2623.1531]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 75 training [time: 42.14s, train loss: 2618.6150]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 76 training [time: 42.16s, train loss: 2617.9686]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 77 training [time: 42.02s, train loss: 2615.9039]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 78 training [time: 42.13s, train loss: 2613.9363]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 79 training [time: 42.12s, train loss: 2612.7309]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 80 training [time: 42.09s, train loss: 2610.8800]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 81 training [time: 42.16s, train loss: 2609.9121]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 82 training [time: 42.10s, train loss: 2608.5970]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 83 training [time: 42.08s, train loss: 2607.7214]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 84 training [time: 42.15s, train loss: 2605.9061]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 85 training [time: 42.24s, train loss: 2604.9589]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 86 training [time: 42.12s, train loss: 2603.3150]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 87 training [time: 42.05s, train loss: 2602.3913]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 88 training [time: 42.11s, train loss: 2600.7972]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 89 training [time: 42.12s, train loss: 2599.0803]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 90 training [time: 42.23s, train loss: 2598.2695]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 91 training [time: 42.18s, train loss: 2596.9184]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 92 training [time: 42.15s, train loss: 2596.0503]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 93 training [time: 42.47s, train loss: 2594.9000]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 94 training [time: 42.34s, train loss: 2593.6262]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 95 training [time: 42.17s, train loss: 2592.5283]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 96 training [time: 42.30s, train loss: 2591.2610]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 97 training [time: 42.32s, train loss: 2589.8100]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 98 training [time: 42.29s, train loss: 2589.1635]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 99 training [time: 42.22s, train loss: 2588.0752]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
###Markdown
Generate predictions from the trained recommender
###Code
from recbole.utils.case_study import full_sort_topk
external_user_ids = dataset.id2token(
    dataset.uid_field, list(range(dataset.user_num)))[1:]  # first element is 'PAD' (RecBole's default) -> remove it
import torch
from recbole.data.interaction import Interaction
def add_last_item(old_interaction, last_item_id, max_len=50):
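    # Append the most recent item id to the user's padded item sequence; if the
    # sequence is already at max_len, roll it left and overwrite the last slot.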
new_seq_items = old_interaction['item_id_list'][-1]
if old_interaction['item_length'][-1].item() < max_len:
new_seq_items[old_interaction['item_length'][-1].item()] = last_item_id
else:
new_seq_items = torch.roll(new_seq_items, -1)
new_seq_items[-1] = last_item_id
return new_seq_items.view(1, len(new_seq_items))
def predict_for_all_item(external_user_id, dataset, model):
model.eval()
with torch.no_grad():
uid_series = dataset.token2id(dataset.uid_field, [external_user_id])
index = np.isin(dataset.inter_feat[dataset.uid_field].numpy(), uid_series)
input_interaction = dataset[index]
test = {
'item_id_list': add_last_item(input_interaction,
input_interaction['item_id'][-1].item(), model.max_seq_length),
'item_length': torch.tensor(
[input_interaction['item_length'][-1].item() + 1
if input_interaction['item_length'][-1].item() < model.max_seq_length else model.max_seq_length])
}
new_inter = Interaction(test)
new_inter = new_inter.to(config['device'])
new_scores = model.full_sort_predict(new_inter)
new_scores = new_scores.view(-1, test_data.dataset.item_num)
new_scores[:, 0] = -np.inf # set scores of [pad] to -inf
return torch.topk(new_scores, 12)
topk_items = []
for external_user_id in external_user_ids:
_, topk_iid_list = predict_for_all_item(external_user_id, dataset, model)
last_topk_iid_list = topk_iid_list[-1]
external_item_list = dataset.id2token(dataset.iid_field, last_topk_iid_list.cpu()).tolist()
topk_items.append(external_item_list)
print(len(topk_items))
external_item_str = [' '.join(x) for x in topk_items]
result = pd.DataFrame(external_user_ids, columns=['customer_id'])
result['prediction'] = external_item_str
result.head()
del external_item_str
del topk_items
del external_user_ids
del train_data
del valid_data
del test_data
del model
del Trainer
del logger
del dataset
gc.collect()
###Output
_____no_output_____
###Markdown
Reading Submission data
###Code
submission_data = pd.read_csv(submission_data_path)
submission_data.shape
###Output
_____no_output_____
###Markdown
Postprocessing submission data
1. Replace predictions for trained customer ids with the RecBole-based predictions by performing a merge
2. Fill up NaN values for customer ids that were not part of the RecBole training session
3. Generate the final prediction column
4. Drop the redundant columns
###Code
submission_data = pd.merge(submission_data, result, on='customer_id', how='outer')
submission_data
submission_data = submission_data.fillna(-1)
submission_data['prediction'] = submission_data.apply(
lambda x: x['prediction_y'] if x['prediction_y'] != -1 else x['prediction_x'], axis=1)
submission_data
submission_data = submission_data.drop(columns=['prediction_y', 'prediction_x'])
submission_data
###Output
_____no_output_____
###Markdown
Writing the final submission file to the Kaggle output directory
###Code
submission_data.to_csv('submission.csv', index=False)
###Output
_____no_output_____ |
SC0x/Unit 1 - Supply Chain Management Overview.ipynb | ###Markdown
Why do we care? Supply chains have grown to span the globe, and multiple functions are now combined (warehousing, inventory, transportation, demand planning and procurement). A supply chain runs within and across a firm, acting as a "bridge" and a "shock absorber" from customers to suppliers, so it has to adapt, adjust and be flexible, because predicting the future is hard. Diesel fuel is a good example: diesel is the fuel used for truckload and less-than-truckload transportation.
###Code
!wget https://www.eia.gov/petroleum/gasdiesel/xls/pswrgvwall.xls -O ./data/pswrgvwall.xls
import pandas as pd
diesel = pd.read_excel("./data/pswrgvwall.xls", sheet_name=None)
diesel.keys()
diesel['Data 1']
%matplotlib inline
no_head = diesel['Data 1']
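# The EIA sheet keeps metadata in its first rows: promote the second row to the
# column header and drop everything above the actual weekly price data.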
no_head.columns = no_head.iloc[1]
no_head = no_head.iloc[2:]
no_head['Date'] = pd.to_datetime(no_head['Date'])
no_head.set_index('Date', inplace=True)
no_head.plot(title="Diesel price $USD over time", figsize=(20,12))
###Output
_____no_output_____
###Markdown
Question: What is the impact of such variability on a supply chain? If the price were fixed you could design a supply chain to last 10 years. But it is not, so you need a "shock absorber": these uncertainties and factors need to be considered. By using a data-driven and metrics-based approach, organizations can pursue efficiencies that will pay dividends when prices are high and when they are low. Observation: supply chains have gained efficiencies such that their % contribution to GDP has been reduced. From the Deloitte report the category is sitting at around 3%. https://www2.deloitte.com/us/en/insights/economy/spotlight/economics-insights-analysis-07-2019.html
###Code
from IPython.display import Image
from IPython.core.display import HTML
Image(url= "https://www2.deloitte.com/content/dam/insights/us/articles/5024_Economics-Spotlight-July2019/figures/5024_Fig2.jpg")
###Output
_____no_output_____ |
movie_recommendation_with_LightFM_friends_WEBAPP.ipynb | ###Markdown
Siamese Neural Network Recommendation for Friends (for Website) This notebook presents the final code that will be used for the Movinder [website](https://movinder.herokuapp.com/) when `Get recommendation with SiameseNN!` is selected by the user.
###Code
import pandas as pd
import json
import datetime, time
from sklearn.model_selection import train_test_split
import itertools
import os
import zipfile
import random
import numpy as np
import requests
import matplotlib.pyplot as plt
import scipy.sparse as sp
from sklearn.metrics import roc_auc_score
###Output
_____no_output_____
###Markdown
--- (1) Read data
###Code
movies = json.load(open('movies.json'))
friends = json.load(open('friends.json'))
ratings = json.load(open('ratings.json'))
soup_movie_features = sp.load_npz('soup_movie_features_11.npz')
soup_movie_features = soup_movie_features.toarray()
###Output
_____no_output_____
###Markdown
(1.2) Simulate new friend's input The new group of friends needs to provide information that will later be used for training the model and predicting the ratings they will give to other movies. The friends get a new id `new_friend_id`. They provide ratings specified as dictionaries with the following keys: `movie_id_ml` (id of the rated movie), `rating` (rating of that movie on a scale from 1 to 5), and `friend_id` (the friends' id, specified as `new_friend_id`). In addition to this rating information, the friends also provide their average age in the group `friends_age` and gender `friends_gender`.
###Code
new_friend_id = len(friends)
new_ratings = [{'movie_id_ml': 302.0, 'rating': 4.0, 'friend_id': new_friend_id},
{'movie_id_ml': 304.0, 'rating': 4.0, 'friend_id': new_friend_id},
{'movie_id_ml': 307.0, 'rating': 4.0, 'friend_id': new_friend_id}]
new_ratings
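# Group-level features: friends_age is the group's mean age and friends_gender
# the mean of binary gender codes (0.375 would be e.g. 3 of 8 members coded 1).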
new_friend = {'friend_id': new_friend_id, 'friends_age': 25.5, 'friends_gender': 0.375}
new_friend
# extend the existing data with this new information
friends.append(new_friend)
ratings.extend(new_ratings)
###Output
_____no_output_____
###Markdown
--- (2) Train the LightFM Model We will be using the [LightFM](http://lyst.github.io/lightfm/docs/index.html) implementation of SiameseNN to train our model using the user and item (i.e. movie) features. First, we create `scipy.sparse` matrices from raw data and they can be used to fit the LightFM model.
###Code
from lightfm.data import Dataset
from lightfm import LightFM
from lightfm.evaluation import precision_at_k
from lightfm.evaluation import auc_score
###Output
_____no_output_____
###Markdown
(2.1) Build ID mappings We create a mapping between the user and item ids from our input data to indices that will be internally used by this model. This needs to be done since the LightFM works with user and items ids that are consecutive non-negative integers. Using `dataset.fit` we assign internal numerical id to every user and item we passed in.
###Code
dataset = Dataset()
item_str_for_eval = "x['title'],x['release'], x['unknown'], x['action'], x['adventure'],x['animation'], x['childrens'], x['comedy'], x['crime'], x['documentary'], x['drama'], x['fantasy'], x['noir'], x['horror'], x['musical'],x['mystery'], x['romance'], x['scifi'], x['thriller'], x['war'], x['western'], *soup_movie_features[x['soup_id']]"
friend_str_for_eval = "x['friends_age'], x['friends_gender']"
dataset.fit(users=(int(x['friend_id']) for x in friends),
items=(int(x['movie_id_ml']) for x in movies),
item_features=(eval("("+item_str_for_eval+")") for x in movies),
user_features=((eval(friend_str_for_eval)) for x in friends))
num_friends, num_items = dataset.interactions_shape()
print(f'Mappings - Num friends: {num_friends}, num_items {num_items}.')
###Output
Mappings - Num friends: 192, num_items 1251.
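###Markdown
As a quick sanity check (not part of the original pipeline), `dataset.mapping()` returns the dictionaries LightFM uses to translate our external ids into the consecutive internal indices mentioned above. The snippet below assumes movie id 302 from the simulated ratings is present in the movie catalogue.
###Code
user_id_map, user_feature_map, item_id_map, item_feature_map = dataset.mapping()
print(f"Internal index of the new friend group: {user_id_map[new_friend_id]}")
print(f"Internal index of movie 302: {item_id_map[302]}")
###Output
_____no_output_____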
###Markdown
(2.2) Build the interactions and feature matrices The `interactions` matrix contains interactions between `friend_id` and `movie_id_ml`. It puts 1 if friends `friend_id` rated movie `movie_id_ml`, and 0 otherwise.
###Code
(interactions, weights) = dataset.build_interactions(((int(x['friend_id']), int(x['movie_id_ml']))
for x in ratings))
print(repr(interactions))
###Output
<192x1251 sparse matrix of type '<class 'numpy.int32'>'
with 59123 stored elements in COOrdinate format>
###Markdown
The `item_features` is also a sparse matrix that contains movie ids with their corresponding features. In the item features, we include the following features: movie title, when it was released, all genres it belongs to, and vectorized representation of movie keywords, cast members, and countries it was released in.
###Code
item_features = dataset.build_item_features(((x['movie_id_ml'],
[eval("("+item_str_for_eval+")")]) for x in movies) )
print(repr(item_features))
###Output
<1251x2487 sparse matrix of type '<class 'numpy.float32'>'
with 2502 stored elements in Compressed Sparse Row format>
###Markdown
The `user_features` is also a sparse matrix, containing friend ids with their corresponding features. The user features include the group's average age and gender.
###Code
user_features = dataset.build_user_features(((x['friend_id'],
[eval(friend_str_for_eval)]) for x in friends) )
print(repr(user_features))
###Output
<192x342 sparse matrix of type '<class 'numpy.float32'>'
with 384 stored elements in Compressed Sparse Row format>
###Markdown
(2.3) Building a model
After some hyperparameter tuning, we end up with the best model performance using the following values:
- Epochs = 150
- Learning rate = 0.015
- Max sampled = 11
- Loss type = WARP
References:
- The WARP (Weighted Approximate-Rank Pairwise) loss for implicit-feedback learning-to-rank. Originally implemented in the [WSABIE paper](http://www.thespermwhale.com/jaseweston/papers/wsabie-ijcai.pdf).
- Extension to apply to recommendation settings in the 2013 k-order statistic loss [paper](http://www.ee.columbia.edu/~ronw/pubs/recsys2013-kaos.pdf) in the form of the k-OS WARP loss, also implemented in LightFM.
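###Markdown
To make the WARP intuition concrete, below is a minimal, self-contained sketch of the sampling idea only (it is not LightFM's actual implementation): negatives are drawn until one outscores the positive within a margin, and the update is weighted by a rank estimate derived from how many draws that took.
###Code
import numpy as np

def warp_rank_weight(scores, positive, margin=1.0, rng=np.random.default_rng(0)):
    # Sample negatives until one scores within `margin` of the positive item,
    # then weight the update by the log of the estimated rank of the positive.
    n_items = len(scores)
    negatives = rng.permutation([i for i in range(n_items) if i != positive])
    for n_sampled, neg in enumerate(negatives, start=1):
        if scores[neg] > scores[positive] - margin:  # rank violation found
            estimated_rank = max((n_items - 1) // n_sampled, 1)
            return np.log(estimated_rank)
    return 0.0  # positive already ranked safely above every sampled negative

toy_scores = np.array([0.9, 0.2, 0.4, 0.1, 0.3])  # model scores for 5 items
print(warp_rank_weight(toy_scores, positive=0))
###Output
_____no_output_____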
###Code
epochs = 150
lr = 0.015
max_sampled = 11
loss_type = "warp" # "bpr"
model = LightFM(learning_rate=lr, loss=loss_type, max_sampled=max_sampled)
model.fit_partial(interactions, epochs=epochs, user_features=user_features, item_features=item_features)
train_precision = precision_at_k(model, interactions, k=10, user_features=user_features, item_features=item_features).mean()
train_auc = auc_score(model, interactions, user_features=user_features, item_features=item_features).mean()
print(f'Precision: {train_precision}, AUC: {train_auc}')
def predict_top_k_movies(model, friends_id, k):
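    # NOTE: this helper references `train`, `use_features` and `friends_features`,
    # which are not defined in this notebook, and it is not called below.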
n_users, n_movies = train.shape
if use_features:
prediction = model.predict(friends_id, np.arange(n_movies), user_features=friends_features, item_features=item_features)#predict(model, user_id, np.arange(n_movies))
else:
prediction = model.predict(friends_id, np.arange(n_movies))#predict(model, user_id, np.arange(n_movies))
movie_ids = np.arange(train.shape[1])
return movie_ids[np.argsort(-prediction)][:k]
dfm = pd.DataFrame(movies)
dfm = dfm.sort_values(by="movie_id_ml")
k = 10
friends_id = new_friend_id
movie_ids = np.array(dfm.movie_id_ml.unique())#np.array(list(df_movies.movie_id_ml.unique())) #np.arange(interactions.shape[1])
print(movie_ids.shape)
n_users, n_items = interactions.shape
scores = model.predict(friends_id, np.arange(n_items), user_features=user_features, item_features=item_features)
# scores = model.predict(friends_id, np.arange(n_items))
known_positives = movie_ids[interactions.tocsr()[friends_id].indices]
top_items = movie_ids[np.argsort(-scores)]
print(f"Friends {friends_id}")
print(" Known positives:")
for x in known_positives[:k]:
print(f" {x} | {dfm[dfm.movie_id_ml==x]['title'].iloc[0]}" )
print(" Recommended:")
for x in top_items[:k]:
print(f" {x} | {dfm[dfm.movie_id_ml==x]['title'].iloc[0]}" )
###Output
(1251,)
Friends 191
Known positives:
301 | in & out
302 | l.a. confidential
307 | the devil's advocate
Recommended:
48 | hoop dreams
292 | rosewood
255 | my best friend's wedding
286 | the english patient
284 | tin cup
299 | hoodlum
125 | phenomenon
1 | toy story
315 | apt pupil
7 | twelve monkeys
|
Explore U.S. Births/Basics.ipynb | ###Markdown
1: Introduction To The Dataset
###Code
data = open('US_births_1994-2003_CDC_NCHS.csv','r').read().split('\n')
data[:10]
###Output
_____no_output_____
###Markdown
2: Converting Data Into A List Of Lists
###Code
def read_csv(filename,header = False):
final_list = []
    if header:
        read_data = open(filename,'r').read().split('\n')[1:]
    else:
        read_data = open(filename,'r').read().split('\n')
for item in read_data:
int_fields = []
string_fields = item.split(',')
for val in string_fields:
int_fields.append(int(val))
final_list.append(int_fields)
return(final_list)
cdc_list = read_csv('US_births_1994-2003_CDC_NCHS.csv',header = True)
cdc_list[:10]
###Output
_____no_output_____
###Markdown
3: Calculating Number Of Births Each Month
###Code
def month_births(data):
births_per_month = {}
for item in data:
if item[1] in births_per_month.keys():
births_per_month[item[1]] += item[4]
else:
births_per_month[item[1]] = item[4]
return(births_per_month)
cdc_month_births = month_births(cdc_list)
cdc_month_births
def dow_births(data):
births_per_dow = {}
for item in data:
if item[3] in births_per_dow.keys():
births_per_dow[item[3]] += item[4]
else:
births_per_dow[item[3]] = item[4]
return(births_per_dow)
cdc_day_births = dow_births(cdc_list)
cdc_day_births
###Output
_____no_output_____
###Markdown
5: Creating A More General Function
###Code
def calc_counts(data,column):
birth = {}
for item in data:
if item[column] in birth.keys():
birth[item[column]] += item[4]
else:
birth[item[column]] = item[4]
return(birth)
cdc_year_births = calc_counts(cdc_list, 0)
cdc_month_births = calc_counts(cdc_list, 1)
cdc_dom_births = calc_counts(cdc_list, 2)
cdc_dow_births = calc_counts(cdc_list, 3)
cdc_year_births
cdc_month_births
cdc_dom_births
cdc_dow_births
def min_max(dictionary):
min_val = min(dictionary.items(), key=lambda k: k[1])
max_val = max(dictionary.items(), key=lambda k: k[1])
return("Minimum Value:%s Maximum Value:%s"%(min_val,max_val))
min_max(cdc_dow_births)
###Output
_____no_output_____ |
notebooks/noise_map_generator_example.py.ipynb | ###Markdown
Sketch of UWB pipeline. This notebook contains the original sketch of the uwb implementation which is available in the uwb package. Code in the package is mostly reworked and divided into modules. For usage of the package, please check out the other notebook in the directory.
###Code
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_blobs
from sklearn.cluster import DBSCAN
from itertools import product
from scipy.stats import multivariate_normal
from functools import reduce
def multi_dim_noise(grid_dims, amount, step, std=10, means=(1,5)):
prod = reduce((lambda x,y: x*y), grid_dims)
samples = np.zeros(grid_dims + [amount , len(grid_dims)])
clusters = np.random.randint(
means[0], means[1] + 1, size=grid_dims
)
grid = []
for dim in grid_dims:
grid.append(((np.arange(dim) + 1) * step))
mean = np.array(np.meshgrid(*grid, indexing="ij")).reshape(prod, len(grid_dims))
noise = np.random.randn(means[1], prod, len(grid_dims)) * std
centers = (noise + mean).reshape([means[1]] + grid_dims + [len(grid_dims)])
# transpose hack for selection
roll_idx = np.roll(np.arange(centers.ndim),-1).tolist()
centers = np.transpose(centers, roll_idx)
for idxs in product(*[range(i) for i in grid_dims]):
print(idxs)
samples[idxs] = make_blobs(
n_samples=amount, centers=(centers[idxs][:, 0:clusters[idxs]]).T
)[0]
return samples
def generate_noise(width, length, amount, step, std=10, means=(1,5)):
samples = np.zeros((width, length, amount, 2))
clusters = np.random.randint(
means[0], means[1] + 1, size=(width, length)
)
# calculate centers
grid_width = (np.arange(width) + 1) * step
grid_length = (np.arange(length) + 1) * step
mean = np.array(
[
np.repeat(grid_width, len(grid_length)),
np.tile(grid_length, len(grid_width)),
]
).T
noise = np.random.randn(means[1], width * length, 2) * std
centers = (noise + mean).reshape((means[1], width, length, 2))
for i in range(width):
for j in range(length):
samples[i, j, :] = make_blobs(
n_samples=amount, centers=centers[0 : clusters[i, j], i, j, :]
)[0]
return samples, (grid_width, grid_length)
np.random.seed(0)
data, map_grid = generate_noise(3, 3, 50, 10)
multi_dim_noise([4,2,5], 50, 10)
plt.plot(data[0,0,:,0], data[0,0,:,1], 'o') # example of 5 clusters in position 0,0
plt.show()
def generate_map(noise, eps=2, min_samples=3):
db = DBSCAN(eps=eps, min_samples=min_samples).fit(noise)
core_samples_mask = np.zeros_like(db.labels_, dtype=bool)
core_samples_mask[db.core_sample_indices_] = True
labels = db.labels_
# Number of clusters in labels, ignoring noise if present.
n_clusters_ = len(set(labels)) - (1 if -1 in labels else 0)
n_noise_ = list(labels).count(-1)
return labels, core_samples_mask, n_clusters_
def plot_clusters(X, labels, core_sapmles_mask, n_clusters_):
unique_labels = set(labels)
colors = [plt.cm.Spectral(each)
for each in np.linspace(0, 1, len(unique_labels))]
for k, col in zip(unique_labels, colors):
if k == -1:
# Black used for noise.
col = [0, 0, 0, 1]
class_member_mask = (labels == k)
xy = X[class_member_mask & core_samples_mask]
plt.plot(xy[:, 0], xy[:, 1], 'o', markerfacecolor=tuple(col),
markeredgecolor='k', markersize=14)
xy = X[class_member_mask & ~core_samples_mask]
plt.plot(xy[:, 0], xy[:, 1], 'o', markerfacecolor=tuple(col),
markeredgecolor='k', markersize=6)
plt.title('Estimated number of clusters: %d' % n_clusters_)
labels = np.zeros((3, 3, 50), dtype=int)
for x,y in product(range(3), range(3)):
labels[x,y,:], core_samples_mask, n_clusters_ = generate_map(data[x,y,:,:])
plot_clusters(data[x,y,:,:], labels[x,y,:], core_samples_mask, n_clusters_)
plt.show()
# estimate parameters
# this is quite slow, but the calculation is performed only once per map generation
params = [[[] for i in range(3)] for i in range(3)]
for x,y in product(range(3), range(3)):
used_data = 50 - list(labels[x,y]).count(-1)
for i in range(np.max(labels[x,y,:]) + 1):
mask = labels[x,y] == i
mean_noise = data[x,y,mask,:].mean(axis=0) - np.array([(x+1) * 10,(y+1) * 10])
cov_noise = np.cov(data[x,y,mask,:].T)
weight = mask.sum() / used_data
params[x][y].append((mean_noise, cov_noise, weight))
print(params)
# dynamics model
walk = []
start_state = np.array([[20, 20, 0, 0]], dtype=float)
walk.append(start_state)
def transition_function(current_state, x_range=(10, 40), y_range=(10, 40), std=1):
"""Performs a one step transition assuming sensing interval of one
Format of current_state = [x,y,x',y'] + first dimension is batch size
"""
next_state = np.copy(current_state)
next_state[: ,0:2] += current_state[:, 2:4]
next_state[: ,2:4] += np.random.randn(2) * std
next_state[: ,0] = np.clip(next_state[: ,0], x_range[0], x_range[1])
next_state[: ,1] = np.clip(next_state[: ,1], y_range[0], y_range[1])
return next_state
next_state = transition_function(start_state)
walk.append(next_state)
for i in range(100):
next_state = transition_function(next_state)
walk.append(next_state)
walk = np.array(walk)
print(walk.shape)
plt.plot(walk[:,0,0], walk[:,0, 1])
plt.show()
# measurement noise map augmented particle filter
def find_nearest_map_position(x,y, map_grid):
x_pos = np.searchsorted(map_grid[0], x)
y_pos = np.searchsorted(map_grid[1], y, side="right")
x_valid = (x_pos != 0) & (x_pos < len(map_grid[0]))
x_pos = np.clip(x_pos, 0, len(map_grid[0]) - 1)
x_dist_right = map_grid[0][x_pos] - x
x_dist_left = x - map_grid[0][x_pos - 1]
x_pos[x_valid & (x_dist_right > x_dist_left)] -= 1
y_valid = (y_pos != 0) & (y_pos < len(map_grid[1]))
y_pos = np.clip(y_pos, 0, len(map_grid[1]) - 1)
y_dist_right = map_grid[1][y_pos] - y
    y_dist_left = y - map_grid[1][y_pos - 1]
y_pos[y_valid & (y_dist_right > y_dist_left)] -= 1
return x_pos, y_pos
def reweight_samples(x, z, w, params, map_grip):
x_pos, y_pos = find_nearest_map_position(x[:,0], x[:,1], map_grid)
new_weights = np.zeros_like(w)
for i, (x_p, y_p) in enumerate(zip(x_pos, y_pos)):
for gm in params[x_p][y_p]:
# calculating p(z|x) for GM
mean, cov, weight = gm
new_weights[i] += multivariate_normal.pdf(z[i, 0:2] ,mean=mean, cov=cov) * weight * w[i]
denorm = np.sum(new_weights)
return new_weights / denorm
print(map_grid)
x = np.array([9, 10, 11, 14, 16, 24, 31, 30, 29, 15])
y = np.array([9, 10, 11, 14, 16, 24, 31, 30, 29, 15])
w = np.ones(10) * 0.1
print(find_nearest_map_position(
x,
y,
map_grid
))
x_noise = np.random.randn(10)
y_noise = np.random.randn(10)
particles = np.stack((x, y, x_noise, y_noise)).T
transitioned_particles = transition_function(particles)
n_w = reweight_samples(particles, transitioned_particles, w, params, map_grid)
# compute metrics for resampling
def compute_ESS(x, w):
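    # Effective sample size from the coefficient of variation of the weights:
    # ESS equals M for uniform weights and shrinks as the weights degenerate,
    # signalling that the particles should be resampled.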
M = len(x)
CV = 1/M * np.sum((w*M-1)**2)
return M / (1 + CV)
print(compute_ESS(particles, w))
print(compute_ESS(particles, n_w)) # needs to be resampled
###Output
_____no_output_____ |
Section 16/AdvanceReg/Teclov_generalised_regression.ipynb | ###Markdown
Generalised RegressionIn this notebook, we will build a generalised regression model on the **electricity consumption** dataset. The dataset contains two variables - year and electricity consumption.
###Code
#importing libraries
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import Pipeline
from sklearn import metrics
#fetching data
elec_cons = pd.read_csv("total-electricity-consumption-us.csv", sep = ',', header= 0 )
elec_cons.head()
# number of observations: 51
elec_cons.shape
# checking NA
# there are no missing values in the dataset
elec_cons.isnull().values.any()
size = len(elec_cons.index)
index = range(0, size, 5)
train = elec_cons[~elec_cons.index.isin(index)]
test = elec_cons[elec_cons.index.isin(index)]
print(len(train))
print(len(test))
# converting X to a two dimensional array, as required by the learning algorithm
X_train = train.Year.values.reshape(-1,1) # Making X two dimensional
y_train = train.Consumption
X_test = test.Year.values.reshape(-1,1) # Making X two dimensional
y_test = test.Consumption
# Doing a polynomial regression: Comparing linear, quadratic and cubic fits
# Pipeline helps you associate two models or objects to be built sequentially with each other,
# in this case, the objects are PolynomialFeatures() and LinearRegression()
r2_train = []
r2_test = []
degrees = [1, 2, 3]
for degree in degrees:
pipeline = Pipeline([('poly_features', PolynomialFeatures(degree=degree)),
('model', LinearRegression())])
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_test)
r2_test.append(metrics.r2_score(y_test, y_pred))
# training performance
y_pred_train = pipeline.predict(X_train)
r2_train.append(metrics.r2_score(y_train, y_pred_train))
# plot predictions and actual values against year
fig, ax = plt.subplots()
ax.set_xlabel("Year")
ax.set_ylabel("Power consumption")
ax.set_title("Degree= " + str(degree))
# train data in blue
ax.scatter(X_train, y_train)
ax.plot(X_train, y_pred_train)
# test data
ax.scatter(X_train, y_train)
ax.plot(X_test, y_pred)
plt.show()
# respective test r-squared scores of predictions
print(degrees)
print(r2_train)
print(r2_test)
###Output
[1, 2, 3]
[0.84237474021761372, 0.99088967445535958, 0.9979789881969624]
[0.81651704638268097, 0.98760805026754717, 0.99848974839924587]
|
mini_project.ipynb | ###Markdown
Tests (Chapt 11 conditions: seed 42, elu, learning rate = 0.01, He init, RGB normalization, BN, momentum = 0.9, AdamOpt, 5 layers, 100 neurons per layer, 1000 epochs, batch size 20):
- With Chapt 11 conditions & 2 outputs: 49.70%
- Without BN: 49.80%
- Without BN or RGB normalization: 50.00%
- Without normalization and with Glorot Normal init (instead of He init): 50.00%
- With He init and learning rate = 0.05: 50.00%
- With He init, RGB normalization, and learning rate = 0.05: 54.40%
- With BN again: 50.00%
- Without BN and with 1140 outputs: 50.20%
- Same as Chapt 11 with GradientDescent instead of AdamOpt and without BN: 58.50%
- With learning rate = 0.05: 59.20%
- Same as Chapt 11 with GradientDescent and momentum = 0.8: 58.90%
- With batch size 5: 64.20%
- Chapt 11 + GD + batch size 5 + 3 layers instead of 5: 66.20%
###Code
with tf.Session() as sess:
saver.restore(sess, "./mini_project_final.ckpt") # or better, use save_path
X_new_scaled = X_test[:20]
Z = logits.eval(feed_dict={X: X_new_scaled})
y_pred = np.argmax(Z, axis=1)
from tensorflow_graph_in_jupyter import show_graph
show_graph(tf.get_default_graph())
###Output
_____no_output_____ |
results/draw_results.ipynb | ###Markdown
Supplementary
###Code
def show_results_dist_1_3(frames, title, y_axe_name,
dims=(18, 12), save=False, file_name='trash', legend_size=13):
size = len(frames)
a4_dims = dims
fig, axs = plt.subplots(1, 3, figsize=a4_dims)
for i in range(3):
sns.lineplot(x="Error = 1 - Recall@1", y=y_axe_name,hue="algorithm",
markers=True, style="algorithm", dashes=False,
data=frames[i], ax=axs[i], linewidth=2, ms=10)
axs[i].set_title(title[i], size='20')
lx = axs[i].get_xlabel()
ly = axs[i].get_ylabel()
axs[i].set_xlabel(lx, fontsize=20)
axs[i].set_ylabel(ly, fontsize=20)
axs[i].set_xscale('log')
if i == 0:
axs[i].set_xticks([0.001, 0.01, .1])
else:
axs[i].set_xticks([0.01, 0.1])
axs[i].get_xaxis().set_major_formatter(matplotlib.ticker.ScalarFormatter())
axs[i].tick_params(axis='both', which='both', labelsize=15)
if i > 0:
axs[i].set_ylabel('')
plt.setp(axs[i].get_legend().get_texts(), fontsize=legend_size)
if save:
fig.savefig(file_name + ".pdf", bbox_inches='tight')
y_axe = 7
y_axe_name = "dist calc"
model_names = [['kNN', 'kNN + Kl + llf 4', 'kNN + Kl + llf 8',
'kNN + Kl + llf 16', 'kNN + Kl + llf 32']]
num_show = [0, 1, 2, 3, 4]
num_exper = [6]
num_models = [5]
file_names = [path + '5_nlt.txt']
df_kl_4 = get_df(num_models, num_exper, file_names, model_names, y_axe, y_axe_name, title="trash", num_show=num_show)
num_exper = [6]
num_models = [5]
file_names = [path + '9_nlt.txt']
df_kl_8 = get_df(num_models, num_exper, file_names, model_names, y_axe, y_axe_name, title="trash", num_show=num_show)
num_exper = [6]
num_models = [5]
file_names = [path + '17_nlt.txt']
df_kl_16 = get_df(num_models, num_exper, file_names, model_names, y_axe, y_axe_name, title="trash", num_show=num_show)
frames = [df_kl_4, df_kl_8, df_kl_16]
show_results_dist_1_3(frames, ['d = 4', 'd = 8', 'd = 16'], y_axe_name, dims=(20, 6),
save=False, file_name='suppl_figure_optimal_kl_number')
path_end = '_llt.txt'
y_axe = 7
y_axe_name = "dist calc"
model_names = [['kNN', 'thrNN', 'kNN + Kl-dist + llf', 'kNN + Kl-rank + llf', 'kNN + Kl-rank sample + llf']]
num_show = [0, 1, 2, 3, 4]
num_exper = [6]
num_models = [5]
file_names = [path + '5' + path_end]
df_kl_4 = get_df(num_models, num_exper, file_names, model_names, y_axe, y_axe_name, title="trash", num_show=num_show)
num_exper = [6]
num_models = [5]
file_names = [path + '9' + path_end]
df_kl_8 = get_df(num_models, num_exper, file_names, model_names, y_axe, y_axe_name, title="trash", num_show=num_show)
num_exper = [6]
num_models = [5]
file_names = [path + '17' + path_end]
df_kl_16 = get_df(num_models, num_exper, file_names, model_names, y_axe, y_axe_name, title="trash", num_show=num_show)
frames = [df_kl_4, df_kl_8, df_kl_16]
show_results_dist_1_3(frames, ['d = 4', 'd = 8', 'd = 16'], y_axe_name, dims=(20, 6),
save=False, file_name='suppl_figure_optimal_kl_type', legend_size=10)
path_start = "~/Desktop/results/distr_to_1_"
path_end = ".txt"
a4_dims = (7, 3)
fig, ax = plt.subplots(figsize=a4_dims)
ax.set_yticks([])
file_name = path_start + "sift" + path_end
file_name = os.path.expanduser(file_name)
distr = np.genfromtxt(file_name)
sns.distplot(distr, label="SIFT")
file_name = path_start + "d_9" + path_end
file_name = os.path.expanduser(file_name)
distr = np.genfromtxt(file_name)
sns.distplot(distr, label="d=8")
file_name = path_start + "d_17" + path_end
file_name = os.path.expanduser(file_name)
distr = np.genfromtxt(file_name)
sns.distplot(distr, label="d=16")
file_name = path_start + "d_33" + path_end
file_name = os.path.expanduser(file_name)
distr = np.genfromtxt(file_name)
sns.distplot(distr, label="d=32")
file_name = path_start + "d_65" + path_end
file_name = os.path.expanduser(file_name)
distr = np.genfromtxt(file_name)
sns.distplot(distr, label="d=64")
plt.legend()
fig.savefig("suppl_dist_disrt.pdf", bbox_inches='tight')
###Output
_____no_output_____
###Markdown
Real Datasets
###Code
path = '~/Desktop/results/naive_triplet_'
y_axe = 9
y_axe_name = "QPS (1/s)"
model_names = [['kNN', 'HNSW', 'kNN + Kl', 'kNN + Kl + dim-red']]
num_show = [0, 1, 2, 3, 4]
num_exper = [7]
num_models = [4]
file_names = [path + 'sift.txt']
df_sift = get_df(num_models, num_exper, file_names, model_names, y_axe, y_axe_name, title="trash", num_show=num_show)
num_exper = [5]
file_names = [path + 'gist.txt']
df_gist = get_df(num_models, num_exper, file_names, model_names, y_axe, y_axe_name, title="trash", num_show=num_show)
file_names = [path + 'glove.txt']
df_glove = get_df(num_models, num_exper, file_names, model_names, y_axe, y_axe_name, title="trash", num_show=num_show)
file_names = [path + 'deep.txt']
df_deep = get_df(num_models, num_exper, file_names, model_names, y_axe, y_axe_name, title="trash", num_show=num_show)
frames = [df_gist, df_sift, df_glove, df_deep]
show_results(frames, ['GIST', 'SIFT', 'GloVe', 'DEEP'], y_axe_name,
y_log=False, x_log=True, dims=(24, 14),
save=True, file_name='naive_real_datasets_2_2_final')
###Output
_____no_output_____ |
ML_Files/Cross_Strip_8Fam/.ipynb_checkpoints/Pixelated_32Features_DNN_Time_Label-checkpoint.ipynb | ###Markdown
This notebook is done for pixelated data with 1 Compton and 2 photoelectric interactions, plus 2 more ambiguities! I got 89% accuracy on the test set! I am getting X from the blurred dataset and y (labels) from the ground truth.
###Code
import pandas as pd
import numpy as np
from keras.utils import to_categorical
import math
df = {'Label':[],
'Theta_P1':[], 'Theta_E1':[], 'Theta_P2':[], 'Theta_E2':[], 'Theta_P3':[], 'Theta_E3':[], 'Theta_P4':[], 'Theta_E4':[],
'y': []}
with open("Data/test_Output_8.csv", 'r') as f:
counter = 0
counter_Theta_E = 0
for line in f:
sline = line.split('\t')
if len(sline) == 12:
df['Label'].append(int(sline[0]))
df['Theta_P1'].append(float(sline[1]))
df['Theta_E1'].append(float(sline[4]))
df['Theta_P2'].append(float(sline[5]))
df['Theta_E2'].append(float(sline[6]))
df['Theta_P3'].append(float(sline[7]))
df['Theta_E3'].append(float(sline[8]))
df['Theta_P4'].append(float(sline[9]))
df['Theta_E4'].append(float(sline[10]))
df['y'].append(int(sline[11]))
# df.info() Counts Nan in the dataset
df = pd.DataFrame(df)
df.to_csv('GroundTruth.csv', index=False)
df[0:4]
X = []
y = []
df = pd.read_csv('GroundTruth.csv')
for i in range(0, len(df)-1, 1): # these are from Blurred Data!
features = df.loc[i, 'Theta_P1':'Theta_E4'].values.tolist()
label = df.loc[i, 'y':'y'].values.tolist()
X.append(features)
y.append(label)
X = np.array(X)
y = np.array(y)
y = to_categorical(y, num_classes=None, dtype='float32')
print(y[0])
# ID = df.loc[i,'ID'] # get family ID from blurred dataframe
# gt_temp_rows = df[df['ID'] == ID] # find corresponding rows in grund truth dataframe
# count = 0
# if (len(gt_temp_rows)==0) or(len(gt_temp_rows)==1): # yani exactly we have 2 lines!
# count += 1
# continue
# idx = gt_temp_rows.index.tolist()[0] # read the first row's index
# # print(len(gt_temp_rows))
# # print(gt_temp_rows.index.tolist())
# # # set the target value
# # print('********************')
# # print('eventID_label:', int(sline[0]))
# # print(gt_temp_rows)
# if (gt_temp_rows.loc[idx, 'DDA':'DDA'].item() <= gt_temp_rows.loc[idx+1, 'DDA':'DDA'].item()):
# label = 1
# else:
# label = 0
# X.append(row1)
# y.append(label)
# X = np.array(X)
# y = np.array(y)
# # print(y)
# y = to_categorical(y, num_classes=None, dtype='float32')
# # print(y)
###Output
[0. 1. 0. 0.]
###Markdown
Define the Model
###Code
# Define the keras model
from keras.models import Sequential
from keras.layers import Dense
model = Sequential()
model.add(Dense(128, input_dim=X.shape[1], activation='relu')) #8, 8: 58 12, 8:64 32,16: 66 16,16: 67
model.add(Dense(64, activation='relu'))
model.add(Dense(64, activation='relu'))
model.add(Dense(y.shape[1], activation='softmax'))
model.summary()#CNN, LSTM, RNN, Residual, dense
print(model)
# compile the keras model
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
#loss: categorical_crossentropy (softmax output vector -> multi-class classification)
#binary_crossentropy (sigmoid output: binary classification)
#mean_squared_error MSE
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
# fit the keras model on the dataset
history = model.fit(X_train, y_train, epochs=220, batch_size=10, validation_split=0.15)
import matplotlib.pyplot as plt
# Plot training & validation loss values
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('Model loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['Train', 'Valid'], loc='upper left')
plt.grid(True)
# plt.xticks(np.arange(1, 100, 5))
plt.show()
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('Model Accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(['Train', 'Valid'], loc='upper left')
plt.grid(True)
# plt.xticks(np.arange(1, 100, 5))
plt.show()
# Evaluating trained model on test set. This Accuracy came from DDA Labeling
_, accuracy = model.evaluate(X_test, y_test)
print('Accuracy: %.2f' % (accuracy*100))
###Output
226/226 [==============================] - 0s 540us/step - loss: 0.3584 - accuracy: 0.8997
Accuracy: 89.97
###Markdown
Hyperparameter Optimization
###Code
def create_model(hyperParams):
hidden_layers = hyperParams['hidden_layers']
activation = hyperParams['activation']
dropout = hyperParams['dropout']
output_activation = hyperParams['output_activation']
loss = hyperParams['loss']
input_size = hyperParams['input_size']
output_size = hyperParams['output_size']
model = Sequential()
model.add(Dense(hidden_layers[0], input_shape=(input_size,), activation=activation))
model.add(Dropout(dropout))
for i in range(len(hidden_layers)-1):
model.add(Dense(hidden_layers[i], activation=activation))
model.add(Dropout(dropout))
model.add(Dense(output_size, activation=output_activation))
model.compile(loss=loss, optimizer='adam', metrics=['accuracy'])
# categorical_crossentropy, binary_crossentropy
return model
def cv_model_fit(X, y, hyperParams):
kfold = KFold(n_splits=10, shuffle=True)
scores=[]
for train_idx, test_idx in kfold.split(X):
model = create_model(hyperParams)
model.fit(X[train_idx], y[train_idx], batch_size=hyperParams['batch_size'],
epochs=hyperParams['epochs'], verbose=0)
        score = model.evaluate(X[test_idx], y[test_idx], verbose=0)
        scores.append(score[1] * 100)  # evaluate returns [loss, accuracy]; keep accuracy in percent
# print('fold ', len(scores), ' score: ', scores[-1])
del model
return scores
# hyper parameter optimization
from itertools import product
from sklearn.model_selection import KFold
from keras.layers import Activation, Conv2D, Input, Embedding, Reshape, MaxPool2D, Concatenate, Flatten, Dropout, Dense, Conv1D
# default parameter setting:
hyperParams = {'input_size': 4, 'output_size': 2, 'batch_size': 32, 'epochs': 100, 'hidden_layers': [512, 512, 128],
'activation': 'relu', 'dropout': 0.5, 'output_activation': 'softmax', 'loss': 'categorical_crossentropy'}
# parameter search space:
batch_chices = [32]
epochs_choices = [100]
hidden_layers_choices = [[4, 4], [16, 32],
[8, 8, 8], [4, 8, 16],
[4, 4, 4]]
activation_choices = ['relu', 'sigmoid'] #, 'tanh'
dropout_choices = [ 0.5]
s = [batch_chices, epochs_choices, hidden_layers_choices, activation_choices, dropout_choices]
perms = list(product(*s)) # permutations
# Linear search:
best_score = 0
for row in perms:
hyperParams['batch_size'] = row[0]
hyperParams['epochs'] = row[1]
hyperParams['hidden_layers'] = row[2]
hyperParams['activation'] = row[3]
hyperParams['dropout'] = row[4]
print('10-fold cross validation on these hyperparameters: ', hyperParams, '\n')
cvscores = cv_model_fit(X, y, hyperParams)
print('\n-------------------------------------------')
mean_score = np.mean(cvscores)
std_score = np.std(cvscores)
# Update the best parameter setting:
print('CV mean: {0:0.4f}, CV std: {1:0.4f}'.format(mean_score, std_score))
if mean_score > best_score: # later I should incorporate std in best model selection
best_score = mean_score
print('****** Best model so far ******')
best_params = hyperParams
print('-------------------------------------------\n')
###Output
10-fold cross validation on these hyperparameters: {'input_size': 4, 'output_size': 2, 'batch_size': 32, 'epochs': 100, 'hidden_layers': [4, 4], 'activation': 'relu', 'dropout': 0.5, 'output_activation': 'softmax', 'loss': 'categorical_crossentropy'}
-------------------------------------------
CV mean: 0.6174, CV std: 0.0522
****** Best model so far ******
-------------------------------------------
10-fold cross validation on these hyperparameters: {'input_size': 4, 'output_size': 2, 'batch_size': 32, 'epochs': 100, 'hidden_layers': [4, 4], 'activation': 'sigmoid', 'dropout': 0.5, 'output_activation': 'softmax', 'loss': 'categorical_crossentropy'}
-------------------------------------------
CV mean: 0.6339, CV std: 0.0198
****** Best model so far ******
-------------------------------------------
10-fold cross validation on these hyperparameters: {'input_size': 4, 'output_size': 2, 'batch_size': 32, 'epochs': 100, 'hidden_layers': [16, 32], 'activation': 'relu', 'dropout': 0.5, 'output_activation': 'softmax', 'loss': 'categorical_crossentropy'}
-------------------------------------------
CV mean: 0.6068, CV std: 0.0979
-------------------------------------------
10-fold cross validation on these hyperparameters: {'input_size': 4, 'output_size': 2, 'batch_size': 32, 'epochs': 100, 'hidden_layers': [16, 32], 'activation': 'sigmoid', 'dropout': 0.5, 'output_activation': 'softmax', 'loss': 'categorical_crossentropy'}
-------------------------------------------
CV mean: 0.6069, CV std: 0.0850
-------------------------------------------
10-fold cross validation on these hyperparameters: {'input_size': 4, 'output_size': 2, 'batch_size': 32, 'epochs': 100, 'hidden_layers': [8, 8, 8], 'activation': 'relu', 'dropout': 0.5, 'output_activation': 'softmax', 'loss': 'categorical_crossentropy'}
-------------------------------------------
CV mean: 0.6117, CV std: 0.0422
-------------------------------------------
10-fold cross validation on these hyperparameters: {'input_size': 4, 'output_size': 2, 'batch_size': 32, 'epochs': 100, 'hidden_layers': [8, 8, 8], 'activation': 'sigmoid', 'dropout': 0.5, 'output_activation': 'softmax', 'loss': 'categorical_crossentropy'}
-------------------------------------------
CV mean: 0.6086, CV std: 0.0277
-------------------------------------------
10-fold cross validation on these hyperparameters: {'input_size': 4, 'output_size': 2, 'batch_size': 32, 'epochs': 100, 'hidden_layers': [4, 8, 16], 'activation': 'relu', 'dropout': 0.5, 'output_activation': 'softmax', 'loss': 'categorical_crossentropy'}
-------------------------------------------
CV mean: 0.6153, CV std: 0.0682
-------------------------------------------
10-fold cross validation on these hyperparameters: {'input_size': 4, 'output_size': 2, 'batch_size': 32, 'epochs': 100, 'hidden_layers': [4, 8, 16], 'activation': 'sigmoid', 'dropout': 0.5, 'output_activation': 'softmax', 'loss': 'categorical_crossentropy'}
-------------------------------------------
CV mean: 0.6379, CV std: 0.0211
****** Best model so far ******
-------------------------------------------
10-fold cross validation on these hyperparameters: {'input_size': 4, 'output_size': 2, 'batch_size': 32, 'epochs': 100, 'hidden_layers': [4, 4, 4], 'activation': 'relu', 'dropout': 0.5, 'output_activation': 'softmax', 'loss': 'categorical_crossentropy'}
-------------------------------------------
CV mean: 0.6043, CV std: 0.0690
-------------------------------------------
10-fold cross validation on these hyperparameters: {'input_size': 4, 'output_size': 2, 'batch_size': 32, 'epochs': 100, 'hidden_layers': [4, 4, 4], 'activation': 'sigmoid', 'dropout': 0.5, 'output_activation': 'softmax', 'loss': 'categorical_crossentropy'}
-------------------------------------------
CV mean: 0.6374, CV std: 0.0246
-------------------------------------------
|
_notebooks/2021-04-29-phonetic-comparison.ipynb | ###Markdown
Polish phonetic comparison> "Transcript matching for E2E ASR with phonetic post-processing"- toc: false- branch: master- hidden: true- categories: [asr, polish, phonetic, todo]
###Code
from difflib import SequenceMatcher
import icu
plipa = icu.Transliterator.createInstance('pl-pl_FONIPA')
###Output
_____no_output_____
###Markdown
The errors in E2E models are quite often phonetic confusions, so we do the opposite of traditional ASR and generate the phonetic representation from the output as a basis for comparison.
###Code
def phonetic_check(word1, word2, ignore_spaces=False):
"""Uses ICU's IPA transliteration to check if words are the same"""
tl1 = plipa.transliterate(word1) if not ignore_spaces else plipa.transliterate(word1.replace(' ', ''))
tl2 = plipa.transliterate(word2) if not ignore_spaces else plipa.transliterate(word2.replace(' ', ''))
return tl1 == tl2
phonetic_check("jórz", "jusz", False)
###Output
_____no_output_____
###Markdown
The Polish `y` is phonetically a raised schwa; like the schwa in English, it's often deleted in fast speech. This function returns true if the only differences between the first word and the second are deletions of `y`, except at the end of the word (which is typically the plural ending).
###Code
def no_igrek(word1, word2):
"""Checks if a word-internal y has been deleted"""
sm = SequenceMatcher(None, word1, word2)
for oc in sm.get_opcodes():
if oc[0] == 'equal':
continue
elif oc[0] == 'delete' and word1[oc[1]:oc[2]] != 'y':
return False
elif oc[0] == 'delete' and word1[oc[1]:oc[2]] == 'y' and oc[2] == len(word1):
return False
elif oc[0] == 'insert' or oc[0] == 'replace':
return False
return True
no_igrek('uniwersytet', 'uniwerstet')
no_igrek('uniwerstety', 'uniwerstet')
phonetic_alternatives = [ ['u', 'ó'], ['rz', 'ż'] ]
def reverse_alts(phonlist):
return [ [i[1], i[0]] for i in phonlist ]
sm = SequenceMatcher(None, "już", "jurz")
for oc in sm.get_opcodes():
print(oc)
###Output
('equal', 0, 2, 0, 2)
('replace', 2, 3, 2, 4)
###Markdown
Reads a `CTM`-like file, returning a list of lists containing the filename, start time, end time, and word.
###Code
def read_ctmish(filename):
output = []
with open(filename, 'r') as f:
for line in f.readlines():
pieces = line.strip().split(' ')
if len(pieces) <= 4:
continue
for piece in pieces[4:]:
output.append([pieces[0], pieces[2], pieces[3], piece])
return output
###Output
_____no_output_____
###Markdown
Returns the contents of a plain text file as a list of lists containing the line number and the word, for use in locating mismatches
###Code
def read_text(filename):
output = []
counter = 0
with open(filename, 'r') as f:
for line in f.readlines():
counter += 1
            for word in line.strip().split(' '):
output.append([counter, word])
return output
ctmish = read_ctmish("/mnt/c/Users/Jim O\'Regan/git/notes/PlgU9JyTLPE.ctm")
rec_words = [i[3] for i in ctmish]
###Output
_____no_output_____ |
notebooks/Buffered Text-to-Speech.ipynb | ###Markdown
Buffered Text-to-SpeechIn this tutorial, we are going to build a state machine that controls a text-to-speech synthesis. The problem we solve is the following:- Speaking the text takes time, depending on how long the text is that the computer should speak.- Commands for speaking can arrive at any time, and we would like our state machine to process one of them at a time. So, even if we send three messages to it shortly after each other, it processes them one after the other.While solving this problem, we can learn more about the following concepts in STMPY state machines:- **Do-Activities**, which allow us to encapsulate the long-running text-to-speech function in a state machine.- **Deferred Events**, which allow us to ignore incoming messages until a later state, when we are ready again. Text-to-Speech MacOn a Mac, this is a function to make your computer speak:
###Code
from os import system
def text_to_speech(text):
system('say {}'.format(text))
###Output
_____no_output_____
###Markdown
Run the above cell so the function is available in the following, and then execute the following cell to test it:
###Code
text_to_speech("Hello. I am a computer.")
###Output
_____no_output_____
###Markdown
WindowsTODO: We should have some code to run text to speech on Windows, too! (A tentative sketch follows below.) State Machine 1With this function, we can create our first state machine that accepts a message and then speaks out some text. (Let's for now ignore how we get the text into the method, we will do that later.)Unfortunately, this state machine has a problem. This is because the method `text_to_speech(text)` takes a long time to complete. This means that for the entire time it takes to speak the text, nothing else can happen in any of the state machines that are part of the same driver! State Machine 2 Long-Running ActionsThe way this function is implemented means that it **blocks**. This means the Python program is busy executing this function for as long as the speech takes to pronounce the message. Longer message, longer blocking.You can test this by putting some debugging around the function, to see when the function returns; the test cell follows right after the Windows sketch below:
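One tentative way to fill in the Windows TODO above is sketched here; it is an addition to this tutorial and assumes the third-party `pyttsx3` package (a wrapper around the platform's native speech API) has been installed, for instance with `pip install pyttsx3`:
###Code
# Hedged sketch for Windows (pyttsx3 also works on other platforms).
# Assumption: pyttsx3 is installed; it is not a dependency of this notebook.
def text_to_speech_windows(text):
    import pyttsx3
    engine = pyttsx3.init()   # selects the platform's native TTS backend
    engine.say(text)          # queue the text to be spoken
    engine.runAndWait()       # blocks until speaking has finished

# text_to_speech_windows("Hello. I am a computer.")
###Output
_____no_output_____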
###Code
print('Before speaking.')
text_to_speech("Hello. I am a computer.")
print('After speaking.')
###Output
_____no_output_____
###Markdown
You see that the string _"After speaking"_ is printed after the speaking is finished. During the execution, the program is blocked and does not do anything else. When our program should also do other stuff at the same time, either completely unrelated to speech or even just accepting new speech commands, this is not working! The driver is now completely blocked with executing the speech method, not being able to do anything else. Do-ActivitiesInstead of executing the method as part of a transition, we execute it as part of a state. This is called a **Do-Activity**, and it is declared as part of a state. The do-activity is started when the state is entered. Once the activity is finished, the state machine receives the event `done`, which triggers it to switch into another state.You may think now that the do-activity is similar to an entry action, as it is started when entering a state. However, a do-activity is started as part of its own thread, so that it does not block any other behavior from happening. Our state machine stays responsive, and so does any of the other state machines that may be assigned to the same driver. This happens in the background, STMPY is creating a new thread for a do-activity, starts it, and dispatches the `done` event once the do-activity finishes.When the do-activity finishes (in the case of the text-to-speech function, this means when the computer is finished talking), the state machine dispatches _automatically_ the event `done`, which brings the state machine into the next state. - A state with a do activity can therefore only declare one single outgoing transition that is triggered by the event `done`. - A state can have at most one do-activity. - A do-activity cannot be aborted. Instead, it should be programmed so that the function itself terminates, indicated for instance by the change of a variable.The following things are still possible in a state with a do-activity:- A state with a do-activity can have entry and exit actions. They are simply executed before or after the do activities.- A state with a do-activity can have internal transitions, since they don't leave the state.
###Code
from stmpy import Machine, Driver
from os import system
import logging
debug_level = logging.DEBUG
logger = logging.getLogger('stmpy')
logger.setLevel(debug_level)
ch = logging.StreamHandler()
ch.setLevel(debug_level)
formatter = logging.Formatter('%(asctime)s - %(name)-12s - %(levelname)-8s - %(message)s')
ch.setFormatter(formatter)
logger.addHandler(ch)
class Speaker:
def speak(self, string):
system('say {}'.format(string))
speaker = Speaker()
t0 = {'source': 'initial', 'target': 'ready'}
t1 = {'trigger': 'speak', 'source': 'ready', 'target': 'speaking'}
t2 = {'trigger': 'done', 'source': 'speaking', 'target': 'ready'}
s1 = {'name': 'speaking', 'do': 'speak(*)'}
stm = Machine(name='stm', transitions=[t0, t1, t2], states=[s1], obj=speaker)
speaker.stm = stm
driver = Driver()
driver.add_machine(stm)
driver.start()
driver.send('speak', 'stm', args=['My first sentence.'])
driver.send('speak', 'stm', args=['My second sentence.'])
driver.send('speak', 'stm', args=['My third sentence.'])
driver.send('speak', 'stm', args=['My fourth sentence.'])
driver.wait_until_finished()
###Output
_____no_output_____
###Markdown
The state machine 2 still has a problem, but this time another one: If we receive a new message with more text to speak _while_ we are in state `speaking`, this message is discarded. Our next state machine will fix this. State Machine 3As you know, events arriving in a state that do not declare outgoing triggers with that event, are discarded (that means, thrown away). For our state machine 2 above this means that when we are in state `speaking` and a new message arrives, this message is discarded. However, what we ideally want is that this message is handled once the currently spoken text is finished. There are two ways of achieving this:1. We could build a queue variable into our logic, and declare a transition that puts any arriving `speak` message into that queue. Whenever the currently spoken text finishes, we take another one from the queue until the queue is empty again. This has the drawback that we need to code the queue ourselves.2. We use a mechanism called **deferred event**, which is part of the state machine mechanics. This is the one we are going to use below. Deferred EventsA state can declare that it wants to **defer** an event, which simply means to not handle it. For our speech state machine it means that state `speaking` can declare that it defers event `speak`. Any event that arrives in a state that defers it, is ignored by that state. It is as if it never arrived, or as if it is invisible in the incoming event queue. Only once we switch into a next state that does not defer it, it gets visible again, and then either consumed by a transition, or discarded if the state does not declare any transition triggered by it.
###Code
s1 = {'name': 'speaking', 'do': 'speak(*)', 'speak': 'defer'}
stm = Machine(name='stm', transitions=[t0, t1, t2], states=[s1], obj=speaker)
speaker.stm = stm
driver = Driver()
driver.add_machine(stm)
driver.start()
driver.send('speak', 'stm', args=['My first sentence.'])
driver.send('speak', 'stm', args=['My second sentence.'])
driver.send('speak', 'stm', args=['My third sentence.'])
driver.send('speak', 'stm', args=['My fourth sentence.'])
driver.wait_until_finished()
###Output
_____no_output_____ |
docs/examples/nyse/nyse.ipynb | ###Markdown
NYSE & BlurrIn this guide we will train a machine learning model that predicts closing price of a stock based on historical data. We will transform time-series stock data into features to train this model. PrerequisitesIt's recommended to have a basic understanding of how Blurr works. Following [tutorials 1](http://productml-blurr.readthedocs.io/en/latest/Streaming%20BTS%20Tutorial/) and [2](http://productml-blurr.readthedocs.io/en/latest/Window%20BTS%20Tutorial/) should provide enough background context. PreparationLet's start by installing `Blurr` and other required dependencies (using requirements.txt):
###Code
import sys
print("installing blurr and other required dependencies...")
!{sys.executable} -m pip install blurr --quiet
!{sys.executable} -m pip install -r requirements.txt --quiet
print("done.")
###Output
installing blurr and other required dependencies...
done.
###Markdown
The DatasetThis walkthrough is based on [New York Stock Exchange Data](https://www.kaggle.com/dgawlik/nyse/data) made available for [Kaggle challenges](https://www.kaggle.com/dgawlik/nyse).Let's start by downloading and having a peek at the available data:
###Code
!wget http://demo.productml.com/data/nyse-input-data.json.zip
!unzip -o nyse-input-data.json.zip -d .
import pandas as pd
stocks = pd.read_json("./nyse-input-data.json", lines=True)
stocks.head()
###Output
_____no_output_____
###Markdown
This dataset contains data for each market day.Our **goal is to predict closing price** of a stock for any given day based on historical data. In order to do that, we need to transform our original data source into **features** that can be used for training.We'll calculate **moving averages** and other aggregate data for different **time windows**: one, three and seven days. Blurr TemplatesWe perform initial aggregations of our data by day with [nyse-streaming-bts.yml](./nyse-streaming-bts.yml). Features are then computed using [nyse-window-bts.yml](./nyse-window-bts.yml) for each stock per day.
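For intuition only, the kind of per-day aggregate and moving average that these templates produce can be sketched directly in pandas. The sketch below is an addition to the guide and assumes the raw frame exposes `symbol`, `datetime`, `price` and `volume` columns, as referenced in the Blurr templates:
###Code
# Rough pandas sketch (illustration only): one row per stock per day,
# then a 7-day moving average of the daily close. Column names are assumed.
daily = (stocks
         .assign(date=pd.to_datetime(stocks['datetime']).dt.date)
         .sort_values('datetime')
         .groupby(['symbol', 'date'])
         .agg({'price': 'last', 'volume': 'sum'})
         .rename(columns={'price': 'close'})
         .reset_index())
daily['close_avg_7'] = (daily.groupby('symbol')['close']
                             .transform(lambda s: s.rolling(7).mean()))
daily.head()
###Output
_____no_output_____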
###Code
!cat 'nyse-streaming-bts.yml'
###Output
Type: Blurr:Transform:Streaming
Version: '2018-03-01'
Description: New York Store Exchange Transformations
Name: nyse
Import:
- { Module: dateutil.parser, Identifiers: [ parse ]}
Identity: source.symbol
Time: parse(source.datetime)
Stores:
- Type: Blurr:Store:Memory
Name: memory
Aggregates:
- Type: Blurr:Aggregate:Block
Name: stats
Store: memory
Split: time.date() != stats.latest_tradetime.date()
When: source.symbol in ['AAPL', 'MSFT', 'GOOG', 'FB']
Fields:
- Name: close
Type: float
Value: source.price
- Name: high
Type: float
Value: source.price
When: source.price >= stats.high
- Name: low
Type: float
Value: source.price
When: (stats.low == 0 or source.price < stats.low)
- Name: volatility
Type: float
Value: (float(stats.high) / float(stats.low)) - 1
When: stats.low > 0
- Name: volume
Type: float
Value: stats.volume + source.volume
- Name: latest_tradetime
Type: datetime
Value: time
###Markdown
**Streaming BTS**We're predicting values for tech companies only (Apple, Facebook, Microsoft, Google):```yamlWhen: source.symbol in ['AAPL', 'MSFT', 'GOOG', 'FB']```Each record in the original dataset represents a single stock transaction. By setting `Split: time.date() != stats.latest_tradetime.date()` we'll create a new aggregate for each day per stock.
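As a plain-Python reading of that `Split` expression (an illustration added here, not part of the Blurr API): a new daily block starts whenever the incoming record's date differs from the date of the latest trade already collected in the current block.
###Code
# Illustration only: the Split predicate above written as a Python function.
from dateutil.parser import parse

def starts_new_block(record_datetime_str, latest_tradetime):
    """True when the incoming record belongs to a new trading day."""
    return parse(record_datetime_str).date() != latest_tradetime.date()
###Output
_____no_output_____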
###Code
!cat 'nyse-window-bts.yml'
###Output
Type: Blurr:Transform:Window
Version: '2018-03-01'
Name: moving_averages
SourceBTS: nyse
Anchor:
Condition: nyse.stats.volatility < 0.04
Aggregates:
- Type: Blurr:Aggregate:Window
Name: close
WindowType: count
WindowValue: 1
Source: nyse.stats
Fields:
- Name: value
Type: float
Value: anchor.close # the anchor object represents the record that matches the anchor condition
- Type: Blurr:Aggregate:Window
Name: last
WindowType: count
WindowValue: -1
Source: nyse.stats
Fields:
- Name: close
Type: float
Value: source.close[0]
- Name: volume
Type: float
Value: source.volume[0]
- Name: volatility
Type: float
Value: source.volatility[0]
- Type: Blurr:Aggregate:Window
Name: last_3
WindowType: count
WindowValue: -3
Source: nyse.stats
Fields:
- Name: close_avg
Type: float
Value: sum(source.close) / len(source.close)
- Name: volume_avg
Type: float
Value: sum(source.volume) / len(source.volume)
- Name: volatility_avg
Type: float
Value: sum(source.volatility) / len(source.volatility)
- Name: max_volatility
Type: float
Value: max(source.volatility)
- Name: min_volatility
Type: float
Value: min(source.volatility)
- Type: Blurr:Aggregate:Window
Name: last_7
WindowType: count
WindowValue: -7
Source: nyse.stats
Fields:
- Name: close_avg
Type: float
Value: sum(source.close) / len(source.close)
- Name: volume_avg
Type: float
Value: sum(source.volume) / len(source.volume)
- Name: volatility_avg
Type: float
Value: sum(source.volatility) / len(source.volatility)
- Name: max_volatility
Type: float
Value: max(source.volatility)
- Name: min_volatility
Type: float
Value: min(source.volatility)
###Markdown
**Window BTS**We'll use a very rough criterion to remove outliers: our model will only work when the closing price changes by less than 4%:```yamlAnchor: Condition: nyse.stats.volatility < 0.04```We're using [moving averages](https://www.investopedia.com/terms/m/movingaverage.asp) to generate features based on historical data about a stock:```yaml- Type: Blurr:Aggregate:Window Name: last_7 WindowType: count WindowValue: -7 Source: nyse.stats Fields: - Name: close_avg Type: float Value: sum(source.close) / len(source.close)``` Transforming Data
###Code
from blurr_util import print_head, validate, transform
validate('nyse-streaming-bts.yml')
validate('nyse-window-bts.yml')
###Output
Running syntax validation on nyse-streaming-bts.yml
Document is valid
Running syntax validation on nyse-window-bts.yml
Document is valid
###Markdown
Let's run our Streaming BTS for informational purposes only, so we can preview the result of the transformation:
###Code
transform(log_files=["./nyse-input-data.json"],
stream_bts='./nyse-streaming-bts.yml',
output_file="./nyse-streaming-bts-out.log")
print_head("./nyse-streaming-bts-out.log")
transform(log_files=["./nyse-input-data.json"],
stream_bts='./nyse-streaming-bts.yml',
window_bts='./nyse-window-bts.yml',
output_file="./nyse-processed-data.csv")
###Output
_____no_output_____
###Markdown
Let's now preview the data that will be used to **train our model**
###Code
window_out = pd.read_csv("./nyse-processed-data.csv")
window_out.head()
###Output
_____no_output_____
###Markdown
Modelling**Blurr** is about Data Preparation and Feature Engineering. Modeling is included here for illustration purposes, and the reader can use any modeling library or tool for that purpose.Let's start by importing the output of our Window BTS as the source dataset. We're dropping unnecessary `_identity` columns:
###Code
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
def import_dataset():
data = pd.read_csv("./nyse-processed-data.csv")
data["close"] = data["close.value"] # Moving close to the last column
data.drop(['close.value'], 1, inplace=True)
data.drop(['close._identity'], 1, inplace=True)
data.drop(['last._identity'], 1, inplace=True)
data.drop(['last_3._identity'], 1, inplace=True)
data.drop(['last_7._identity'], 1, inplace=True)
return data
dataset = import_dataset()
dataset.head()
###Output
_____no_output_____
###Markdown
Each column represents a Feature, except the rightmost column, which represents the Output we're trying to predict:
###Code
feature_count = len(dataset.columns) - 1
print("#features=" + str(feature_count))
###Output
#features=13
###Markdown
We're splitting our dataset into Input Variables (`X`) and the Output Variable (`Y`) using pandas' [`iloc` function](http://pandas.pydata.org/pandas-docs/version/0.17.0/generated/pandas.DataFrame.iloc.html):
###Code
X = dataset.iloc[:, 0:feature_count].values
print(X.shape)
Y = dataset.iloc[:, feature_count].values
print(Y.shape)
###Output
(5978,)
###Markdown
We need to split between train and test datasets for training and evaluation of the model:
###Code
from sklearn.model_selection import train_test_split
X_train_raw, X_test_raw, Y_train_raw, Y_test_raw = train_test_split(X, Y, test_size = 0.2)
###Output
_____no_output_____
###Markdown
Finally, we need to scale our data before training:
###Code
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
X_train = scaler.fit_transform(X_train_raw)
X_test = scaler.transform(X_test_raw)
Y_train = scaler.fit_transform(Y_train_raw.reshape(-1, 1))
Y_test = scaler.transform(Y_test_raw.reshape(-1, 1))
###Output
_____no_output_____
###Markdown
It's now time to build and train our model:
###Code
# Importing the Keras libraries and packages
import keras
from keras.models import Sequential
from keras.layers import Dense
#Initializing Neural Network
model = Sequential()
model.add(Dense(units = 36, kernel_initializer = 'uniform', activation = 'relu', input_dim = feature_count))
model.add(Dense(units = 36, kernel_initializer = 'uniform', activation = 'relu'))
model.add(Dense(units = 1, kernel_initializer = 'uniform', activation = 'linear'))
# Compiling Neural Network
model.compile(loss='mse',optimizer='adam', metrics=['accuracy'])
# Fitting our model
model.fit(X_train, Y_train, batch_size = 512, epochs = 70, validation_split=0.1, verbose=1)
###Output
Train on 4303 samples, validate on 479 samples
Epoch 1/70
4303/4303 [==============================] - 0s 46us/step - loss: 0.1025 - acc: 0.0000e+00 - val_loss: 0.0775 - val_acc: 0.0021
Epoch 2/70
4303/4303 [==============================] - 0s 4us/step - loss: 0.0942 - acc: 0.0000e+00 - val_loss: 0.0686 - val_acc: 0.0021
Epoch 3/70
4303/4303 [==============================] - 0s 4us/step - loss: 0.0813 - acc: 0.0000e+00 - val_loss: 0.0563 - val_acc: 0.0021
Epoch 4/70
4303/4303 [==============================] - 0s 4us/step - loss: 0.0645 - acc: 0.0000e+00 - val_loss: 0.0449 - val_acc: 0.0021
Epoch 5/70
4303/4303 [==============================] - 0s 4us/step - loss: 0.0494 - acc: 0.0000e+00 - val_loss: 0.0391 - val_acc: 0.0021
Epoch 6/70
4303/4303 [==============================] - 0s 4us/step - loss: 0.0385 - acc: 0.0000e+00 - val_loss: 0.0277 - val_acc: 0.0021
Epoch 7/70
4303/4303 [==============================] - 0s 4us/step - loss: 0.0242 - acc: 2.3240e-04 - val_loss: 0.0145 - val_acc: 0.0021
Epoch 8/70
4303/4303 [==============================] - 0s 4us/step - loss: 0.0113 - acc: 2.3240e-04 - val_loss: 0.0053 - val_acc: 0.0021
Epoch 9/70
4303/4303 [==============================] - 0s 5us/step - loss: 0.0034 - acc: 2.3240e-04 - val_loss: 0.0016 - val_acc: 0.0021
Epoch 10/70
4303/4303 [==============================] - 0s 4us/step - loss: 0.0010 - acc: 2.3240e-04 - val_loss: 8.0088e-04 - val_acc: 0.0021
Epoch 11/70
4303/4303 [==============================] - 0s 5us/step - loss: 7.3606e-04 - acc: 2.3240e-04 - val_loss: 7.9738e-04 - val_acc: 0.0021
Epoch 12/70
4303/4303 [==============================] - 0s 4us/step - loss: 7.4681e-04 - acc: 2.3240e-04 - val_loss: 7.1321e-04 - val_acc: 0.0021
Epoch 13/70
4303/4303 [==============================] - 0s 4us/step - loss: 6.5283e-04 - acc: 2.3240e-04 - val_loss: 5.8154e-04 - val_acc: 0.0021
Epoch 14/70
4303/4303 [==============================] - 0s 5us/step - loss: 5.3614e-04 - acc: 2.3240e-04 - val_loss: 5.1272e-04 - val_acc: 0.0021
Epoch 15/70
4303/4303 [==============================] - 0s 4us/step - loss: 4.8915e-04 - acc: 2.3240e-04 - val_loss: 4.8887e-04 - val_acc: 0.0021
Epoch 16/70
4303/4303 [==============================] - 0s 4us/step - loss: 4.5884e-04 - acc: 2.3240e-04 - val_loss: 4.4613e-04 - val_acc: 0.0021
Epoch 17/70
4303/4303 [==============================] - 0s 4us/step - loss: 4.2753e-04 - acc: 2.3240e-04 - val_loss: 4.1493e-04 - val_acc: 0.0021
Epoch 18/70
4303/4303 [==============================] - 0s 4us/step - loss: 4.0553e-04 - acc: 2.3240e-04 - val_loss: 3.9318e-04 - val_acc: 0.0021
Epoch 19/70
4303/4303 [==============================] - 0s 4us/step - loss: 3.8638e-04 - acc: 2.3240e-04 - val_loss: 3.7563e-04 - val_acc: 0.0021
Epoch 20/70
4303/4303 [==============================] - 0s 4us/step - loss: 3.6760e-04 - acc: 2.3240e-04 - val_loss: 3.5841e-04 - val_acc: 0.0021
Epoch 21/70
4303/4303 [==============================] - 0s 4us/step - loss: 3.4842e-04 - acc: 2.3240e-04 - val_loss: 3.3985e-04 - val_acc: 0.0021
Epoch 22/70
4303/4303 [==============================] - 0s 4us/step - loss: 3.2932e-04 - acc: 2.3240e-04 - val_loss: 3.2005e-04 - val_acc: 0.0021
Epoch 23/70
4303/4303 [==============================] - 0s 4us/step - loss: 3.0877e-04 - acc: 2.3240e-04 - val_loss: 3.0079e-04 - val_acc: 0.0021
Epoch 24/70
4303/4303 [==============================] - 0s 4us/step - loss: 2.8941e-04 - acc: 2.3240e-04 - val_loss: 2.8071e-04 - val_acc: 0.0021
Epoch 25/70
4303/4303 [==============================] - 0s 4us/step - loss: 2.6856e-04 - acc: 2.3240e-04 - val_loss: 2.6227e-04 - val_acc: 0.0021
Epoch 26/70
4303/4303 [==============================] - 0s 4us/step - loss: 2.4924e-04 - acc: 2.3240e-04 - val_loss: 2.4349e-04 - val_acc: 0.0021
Epoch 27/70
4303/4303 [==============================] - 0s 4us/step - loss: 2.3008e-04 - acc: 2.3240e-04 - val_loss: 2.2409e-04 - val_acc: 0.0021
Epoch 28/70
4303/4303 [==============================] - 0s 5us/step - loss: 2.1201e-04 - acc: 2.3240e-04 - val_loss: 2.0661e-04 - val_acc: 0.0021
Epoch 29/70
4303/4303 [==============================] - 0s 4us/step - loss: 1.9386e-04 - acc: 2.3240e-04 - val_loss: 1.8931e-04 - val_acc: 0.0021
Epoch 30/70
4303/4303 [==============================] - 0s 4us/step - loss: 1.7755e-04 - acc: 2.3240e-04 - val_loss: 1.7436e-04 - val_acc: 0.0021
Epoch 31/70
4303/4303 [==============================] - 0s 4us/step - loss: 1.6193e-04 - acc: 2.3240e-04 - val_loss: 1.5940e-04 - val_acc: 0.0021
Epoch 32/70
4303/4303 [==============================] - 0s 4us/step - loss: 1.4812e-04 - acc: 2.3240e-04 - val_loss: 1.4754e-04 - val_acc: 0.0021
Epoch 33/70
4303/4303 [==============================] - 0s 4us/step - loss: 1.3705e-04 - acc: 2.3240e-04 - val_loss: 1.3818e-04 - val_acc: 0.0021
Epoch 34/70
4303/4303 [==============================] - 0s 4us/step - loss: 1.2689e-04 - acc: 2.3240e-04 - val_loss: 1.2781e-04 - val_acc: 0.0021
Epoch 35/70
4303/4303 [==============================] - 0s 4us/step - loss: 1.1788e-04 - acc: 2.3240e-04 - val_loss: 1.2091e-04 - val_acc: 0.0021
Epoch 36/70
4303/4303 [==============================] - 0s 4us/step - loss: 1.0975e-04 - acc: 2.3240e-04 - val_loss: 1.1201e-04 - val_acc: 0.0021
Epoch 37/70
4303/4303 [==============================] - 0s 4us/step - loss: 1.0235e-04 - acc: 2.3240e-04 - val_loss: 1.0453e-04 - val_acc: 0.0021
Epoch 38/70
4303/4303 [==============================] - 0s 4us/step - loss: 9.5748e-05 - acc: 2.3240e-04 - val_loss: 9.8943e-05 - val_acc: 0.0021
Epoch 39/70
4303/4303 [==============================] - 0s 4us/step - loss: 9.0075e-05 - acc: 2.3240e-04 - val_loss: 9.3020e-05 - val_acc: 0.0021
Epoch 40/70
4303/4303 [==============================] - 0s 4us/step - loss: 8.4809e-05 - acc: 2.3240e-04 - val_loss: 8.7891e-05 - val_acc: 0.0021
Epoch 41/70
4303/4303 [==============================] - 0s 4us/step - loss: 8.0412e-05 - acc: 2.3240e-04 - val_loss: 8.3260e-05 - val_acc: 0.0021
Epoch 42/70
4303/4303 [==============================] - 0s 4us/step - loss: 7.6413e-05 - acc: 2.3240e-04 - val_loss: 8.0566e-05 - val_acc: 0.0021
Epoch 43/70
4303/4303 [==============================] - 0s 4us/step - loss: 7.2809e-05 - acc: 2.3240e-04 - val_loss: 7.6273e-05 - val_acc: 0.0021
Epoch 44/70
4303/4303 [==============================] - 0s 4us/step - loss: 6.9614e-05 - acc: 2.3240e-04 - val_loss: 7.2717e-05 - val_acc: 0.0021
Epoch 45/70
4303/4303 [==============================] - 0s 4us/step - loss: 6.6499e-05 - acc: 2.3240e-04 - val_loss: 6.9950e-05 - val_acc: 0.0021
Epoch 46/70
4303/4303 [==============================] - 0s 4us/step - loss: 6.3570e-05 - acc: 2.3240e-04 - val_loss: 6.6740e-05 - val_acc: 0.0021
Epoch 47/70
4303/4303 [==============================] - 0s 5us/step - loss: 6.1582e-05 - acc: 2.3240e-04 - val_loss: 6.4343e-05 - val_acc: 0.0021
Epoch 48/70
4303/4303 [==============================] - 0s 5us/step - loss: 5.9359e-05 - acc: 2.3240e-04 - val_loss: 6.2669e-05 - val_acc: 0.0021
Epoch 49/70
4303/4303 [==============================] - 0s 5us/step - loss: 5.7216e-05 - acc: 2.3240e-04 - val_loss: 5.9803e-05 - val_acc: 0.0021
Epoch 50/70
4303/4303 [==============================] - 0s 5us/step - loss: 5.5511e-05 - acc: 2.3240e-04 - val_loss: 5.8672e-05 - val_acc: 0.0021
Epoch 51/70
4303/4303 [==============================] - 0s 5us/step - loss: 5.4430e-05 - acc: 2.3240e-04 - val_loss: 5.6377e-05 - val_acc: 0.0021
Epoch 52/70
4303/4303 [==============================] - 0s 4us/step - loss: 5.2620e-05 - acc: 2.3240e-04 - val_loss: 5.4984e-05 - val_acc: 0.0021
Epoch 53/70
4303/4303 [==============================] - 0s 5us/step - loss: 5.1474e-05 - acc: 2.3240e-04 - val_loss: 5.3884e-05 - val_acc: 0.0021
Epoch 54/70
4303/4303 [==============================] - 0s 4us/step - loss: 5.1241e-05 - acc: 2.3240e-04 - val_loss: 5.1820e-05 - val_acc: 0.0021
Epoch 55/70
4303/4303 [==============================] - 0s 4us/step - loss: 4.8984e-05 - acc: 2.3240e-04 - val_loss: 5.0553e-05 - val_acc: 0.0021
Epoch 56/70
4303/4303 [==============================] - 0s 4us/step - loss: 4.7987e-05 - acc: 2.3240e-04 - val_loss: 4.9232e-05 - val_acc: 0.0021
###Markdown
We can measure the quality of our model using [MSE](https://en.wikipedia.org/wiki/Mean_squared_error) and [RMSE](https://en.wikipedia.org/wiki/Root-mean-square_deviation):
###Code
import math
score = model.evaluate(X_test, Y_test, verbose=0)
print('Model Score: %.5f MSE (%.2f RMSE)' % (score[0], math.sqrt(score[0])))
###Output
Model Score: 0.00004 MSE (0.01 RMSE)
###Markdown
Finally, let's plot prediction vs actual data.Prior to normalisation, we undo scaling and perform a sort for graph quality:
###Code
import matplotlib.pyplot as plt2
prediction_sorted = scaler.inverse_transform(model.predict(X_test))
prediction_sorted.sort(axis=0)
Y_test_sorted = scaler.inverse_transform(Y_test.copy().reshape(-1, 1))
Y_test_sorted.sort(axis=0)
plt2.plot(prediction_sorted, color='red', label='Prediction')
plt2.plot(Y_test_sorted, color='blue', label='Actual')
plt2.xlabel('#sample')
plt2.ylabel('close value')
plt2.legend(loc='best')
plt2.show()
###Output
_____no_output_____ |
Test_karl/material/session_12/lecture_12.ipynb | ###Markdown
Code stuff - not slides!
###Code
%run ../ML_plots.ipynb
###Output
ERROR:root:File `'../ML_plots.ipynb.py'` not found.
###Markdown
Session 12: Supervised learning, part 1*Andreas Bjerre-Nielsen* Agenda1. [Modelling data](Modelling-data)1. [A familiar regression model](A-familiar-regression-model)1. [The curse of overfitting](The-curse-of-overfitting)1. [Important details](Implementation-details) Vaaaamos
###Code
import warnings
from sklearn.exceptions import ConvergenceWarning
warnings.filterwarnings(action='ignore', category=ConvergenceWarning)
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
plt.style.use('default') # set style (colors, background, size, gridlines etc.)
plt.rcParams['figure.figsize'] = 10, 4 # set default size of plots
plt.rcParams.update({'font.size': 18})
###Output
_____no_output_____
###Markdown
Supervised problems (1)*How do we distinguish between problems?*
###Code
f_identify_question
###Output
_____no_output_____
###Markdown
Supervised problems (2)*The two canonical problems*
###Code
f_identify_answer
###Output
_____no_output_____
###Markdown
Supervised problems (3)*Which models have we seen for classification?*- .- .- . Modelling data Model complexity (1)*What does a model of low complexity look like?*
###Code
f_complexity[0]
###Output
_____no_output_____
###Markdown
Model complexity (2)*What does medium model complexity look like?*
###Code
f_complexity[1]
###Output
_____no_output_____
###Markdown
Model complexity (3)*What does high model complexity look like?*
###Code
f_complexity[2]
###Output
_____no_output_____
###Markdown
Model fitting (1)*Quiz (1 min.): Which model fitted the data best?*
###Code
f_bias_var['regression'][2]
###Output
_____no_output_____
###Markdown
Model fitting (2)*What does underfitting and overfitting look like for classification?*
###Code
f_bias_var['classification'][2]
###Output
_____no_output_____
###Markdown
Two agendas (1)What are the objectives of empirical research? 1. *causation*: what is the effect of a particular variable on an outcome? 2. *prediction*: find some function that provides a good prediction of $y$ as a function of $x$ Two agendas (2)How might we express the agendas in a model?$$ y = \alpha + \beta x + \varepsilon $$- *causation*: interested in $\hat{\beta}$ - *prediction*: interested in $\hat{y}$ Two agendas (3)Might these two agendas be related at a deeper level? Can prediction quality inform us about how to make causal models? A familiar regression model Estimation (1)*Do we know already some ways to estimate regression models?* - Social scientists know all about the Ordinary Least Squares (OLS). - OLS estimate both parameters and their standard deviation. - Is best linear unbiased estimator under regularity conditions. *How is OLS estimated?* - $\beta=(\textbf{X}^T\textbf{X})^{-1}\textbf{X}^T\textbf{y}$- computation requires non perfect multicollinarity. Estimation (2)*How might we estimate a linear regression model?* - first order method (e.g. gradient descent)- second order method (e.g. Newton-Raphson) *So what the hell was gradient descent?* - compute errors, multiply with features and update Estimation (3)*Can you explain that in details?* - Yes, like with Adaline, we minimize the sum of squared errors (SSE): \begin{align}SSE&=\boldsymbol{e}^{T}\boldsymbol{e}\\\boldsymbol{e}&=\textbf{y}-\textbf{X}\textbf{w}\end{align}
###Code
X = np.random.normal(size=(3,2))
y = np.random.normal(size=(3))
w = np.random.normal(size=(3))
e = y-(w[0]+X.dot(w[1:]))
SSE = e.T.dot(e)
###Output
_____no_output_____
###Markdown
Estimation (4)*And what about the updating..? What is it something about the first order deritative?* \begin{align}\frac{\partial SSE}{\partial\hat{w}}=&\textbf{X}^T\textbf{e},\\ \Delta\hat{w}=&\eta\cdot\textbf{X}^T\textbf{e}=\eta\cdot\textbf{X}^T(\textbf{y}-\hat{\textbf{y}})\end{align}
###Code
eta = 0.001 # learning rate
fod = X.T.dot(e)
update_vars = eta*fod
update_bias = eta*e.sum()
###Output
_____no_output_____
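###Markdown
To see the update rule in action, here is a small illustrative loop (not from the original slides) that repeatedly applies the weight and bias updates above to the toy data and prints how the SSE shrinks:
###Code
# Illustrative gradient-descent loop on the toy data defined above.
w_hat = np.zeros(3)  # [bias, w1, w2]
for it in range(50):
    e = y - (w_hat[0] + X.dot(w_hat[1:]))   # current errors
    w_hat[1:] += eta * X.T.dot(e)           # weight update: eta * X^T e
    w_hat[0] += eta * e.sum()               # bias update
    if it % 10 == 0:
        print('iteration {:2d}, SSE: {:.4f}'.format(it, e.T.dot(e)))
###Output
_____no_output_____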
###Markdown
Estimation (5)*What might some advantages be relative to OLS?*- Works despite high multicollinearity- Speed - OLS has $\mathcal{O}(K^2N)$ computation time ([read more](https://math.stackexchange.com/questions/84495/computational-complexity-of-least-square-regression-operation)) - Quadratic scaling in number of variables ($K$). - Stochastic gradient descent - Likely to converge faster with many observations ($N$) Fitting a polynomial (1)Polynomial: $f(x) = 2+x^4$Try models of increasing order polynomials. - Split data into train and test (50/50)- For polynomial order 0 to 9: - Iteration n: $y = \sum_{k=0}^{n}(\beta_k\cdot x^k)+\varepsilon$. - Estimate order n model on training data - Evaluate on test data with RMSE: - $\log RMSE = \log (\sqrt{MSE})$ Fitting a polynomial (2)We generate samples of data from the true model.
###Code
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
def true_fct(X):
return 2+X**4
n_samples = 25
n_degrees = 15
np.random.seed(0)
X_train = np.random.normal(size=(n_samples,1))
y_train = true_fct(X_train).reshape(-1) + np.random.randn(n_samples)
X_test = np.random.normal(size=(n_samples,1))
y_test = true_fct(X_test).reshape(-1) + np.random.randn(n_samples)
###Output
_____no_output_____
###Markdown
Fitting a polynomial (3)We estimate the polynomials
###Code
from sklearn.metrics import mean_squared_error as mse
test_mse = []
train_mse = []
parameters = []
degrees = range(n_degrees+1)
for p in degrees:
X_train_p = PolynomialFeatures(degree=p).fit_transform(X_train)
    X_test_p = PolynomialFeatures(degree=p).fit_transform(X_test)  # transform the held-out test inputs
reg = LinearRegression().fit(X_train_p, y_train)
train_mse += [mse(reg.predict(X_train_p),y_train)]
test_mse += [mse(reg.predict(X_test_p),y_test)]
parameters.append(reg.coef_)
###Output
_____no_output_____
###Markdown
Fitting a polynomial (4)*So what happens to the model performance in- and out-of-sample?*
###Code
degree_index = pd.Index(degrees,name='Polynomial degree ~ model complexity')
ax = pd.DataFrame({'Train set':train_mse, 'Test set':test_mse})\
.set_index(degree_index)\
.plot(figsize=(10,4))
ax.set_ylabel('Mean squared error')
###Output
_____no_output_____
###Markdown
Fitting a polynomial (4)*Why does it go wrong?*- more spurious parameters- the coefficient size increases Fitting a polynomial (5)*What do you mean coefficient size increase?*
###Code
order_idx = pd.Index(range(n_degrees+1),name='Polynomial order')
ax = pd.DataFrame(parameters,index=order_idx)\
.abs().mean(1)\
.plot(logy=True)
ax.set_ylabel('Mean parameter size')
###Output
_____no_output_____
###Markdown
Fitting a polynomial (6)*How else could we visualize this problem?*
###Code
f_bias_var['regression'][2]
###Output
_____no_output_____ |
math_fun/Fun_with_Car_Plate_Numbers!.ipynb | ###Markdown
Good morning! You have completed the math trail on car plate numbers in a somewhat (semi-)automated way.Can you actually solve the same tasks with code? Read on and you will be amazed how empowering programming can be to help make mathematics learning more efficient and productive! :) TaskGiven the incomplete car plate number `SLA9??2H`Find the missing ?? numbers. A valid Singapore car plate number typically starts with 3 letters, followed by 4 digits and ending with a 'check' letter. For example, for the valid car plate number is 'SDG6136T',- The first letter is 'S' for Singapore. - The next two letters and the digits are used to compute the check letter, using the following steps: - Ignoring the first letter 'S', the letters are converted to their positions in the alphabet. For example, 'D' is 4, 'G' is 7 and 'M' is 13. - The converted letters and the digits form a sequence of 6 numbers. For example, 'DG6136' will give (4, 7, 6, 1, 3, 6). - The sequence of 6 numbers is multiplied term by term by the sequence of 6 weights (9, 4, 5, 4, 3, 2) respectively, summed up and then divided by 19 to obtain the remainder. - For example, '476136' will give 4x9 + 7x4 + 6x5 + 1x4 + 3x3 + 6x2 = 119, and this leaves a remainder of 5 after dividing by 19. - The 'check' letter is obtained by referring to the following table. Thus the check letter corresponding to remainder 5 is T.``` | Remainder | 'check' letter | Remainder | 'check' letter | Remainder | 'check' letter || 0 | A | 7 | R | 13 | H || 1 | Z | 8 | P | 14 | G || 2 | Y | 9 | M | 15 | E || 3 | X | 10 | L | 16 | D || 4 | U | 11 | K | 17 | C || 5 | T | 12 | J | 18 | B || 6 | S | | | | |```Reference: https://sgwiki.com/wiki/Vehicle_Checksum_Formula Pseudocode```FOR i = 0 to 99 Car_Plate = 'SJT9' + str(i) + '2H' IF Check_Letter(Car_Plate) is True print (Car_Plate) on screen ENDIF NEXT```
###Code
# we need to store the mapping from A to 1, B to 2, etc.
# for the letters part of the car plate number
# a dictionary is good for this purpose
letter_map = {}
for i in range(26): # 26 letters, A..Z
char = chr(ord('A') + i)
letter_map[char] = i + 1
#print(letter_map) # this will output {'A':1, 'B':2, 'C':3, ..., 'Z':26}
# we also need to store the mapping from remainders to the check letter
# and we can also use a dictionary! :)
check_map = {0:'A', 1:'Z', 2:'Y', 3:'X', 4:'U', 5:'T', 6:'S', 7:'R', 8:'P', \
9:'M', 10:'L', 11:'K', 12:'J', 13:'H', 14:'G', 15:'E', 16:'D', \
17:'C', 18:'B'}
# we define a reusable Boolean function to generate the check letter and
# check if it matches the last letter of the car plate number
def check_letter(car_plate):
weights = [9, 4, 5, 4, 3, 2]
total = 0
for i in range(len(car_plate)-1):
if i < 2: # letters
num = letter_map[car_plate[i]]
else: # digits
num = int(car_plate[i])
total += num * weights[i]
remainder = total % 19
return check_map[remainder] == car_plate[-1]
#main
car_plate = 'DG6136T' # you can use this to verify the given example
if check_letter(car_plate):
print('S' + car_plate, car_plate[3:5])
print()
for i in range(100): # this loop repeats 100 times for you! :)
car_plate = 'LA9' + str(i).zfill(2) + '2H' # 'LA9002H', 'LA9012H', ...
if check_letter(car_plate):
print('S' + car_plate, car_plate[3:5])
#main
for i in range(100):
car_plate = 'LA' + str(i).zfill(2) + '68Y'
if check_letter(car_plate):
print('S' + car_plate, car_plate[2:4])
'0'.zfill(2)
###Output
_____no_output_____
###Markdown
Challenge- How many car_plate numbers start with SMV and end with D?
###Code
#main
count = 0
for i in range(10000):
car_plate = 'MV' + str(i).zfill(4) + 'D'
if check_letter(car_plate):
count += 1
print(count)
#main
wanted = []
for i in range(10000):
car_plate = 'MV' + str(i).zfill(4) + 'D'
if check_letter(car_plate):
print('S' + car_plate, end=' ')
wanted.append('S' + car_plate)
print(len(wanted))
###Output
SMV0027D SMV0044D SMV0061D SMV0079D SMV0096D SMV0108D SMV0125D SMV0142D SMV0177D SMV0194D SMV0206D SMV0223D SMV0240D SMV0258D SMV0275D SMV0292D SMV0304D SMV0321D SMV0339D SMV0356D SMV0373D SMV0390D SMV0402D SMV0437D SMV0454D SMV0471D SMV0489D SMV0500D SMV0518D SMV0535D SMV0552D SMV0587D SMV0616D SMV0633D SMV0650D SMV0668D SMV0685D SMV0714D SMV0731D SMV0749D SMV0766D SMV0783D SMV0812D SMV0847D SMV0864D SMV0881D SMV0899D SMV0910D SMV0928D SMV0945D SMV0962D SMV0997D SMV1016D SMV1033D SMV1050D SMV1068D SMV1085D SMV1114D SMV1131D SMV1149D SMV1166D SMV1183D SMV1212D SMV1247D SMV1264D SMV1281D SMV1299D SMV1310D SMV1328D SMV1345D SMV1362D SMV1397D SMV1409D SMV1426D SMV1443D SMV1460D SMV1478D SMV1495D SMV1507D SMV1524D SMV1541D SMV1559D SMV1576D SMV1593D SMV1605D SMV1622D SMV1657D SMV1674D SMV1691D SMV1703D SMV1720D SMV1738D SMV1755D SMV1772D SMV1801D SMV1819D SMV1836D SMV1853D SMV1870D SMV1888D SMV1917D SMV1934D SMV1951D SMV1969D SMV1986D SMV2005D SMV2022D SMV2057D SMV2074D SMV2091D SMV2103D SMV2120D SMV2138D SMV2155D SMV2172D SMV2201D SMV2219D SMV2236D SMV2253D SMV2270D SMV2288D SMV2317D SMV2334D SMV2351D SMV2369D SMV2386D SMV2415D SMV2432D SMV2467D SMV2484D SMV2513D SMV2530D SMV2548D SMV2565D SMV2582D SMV2611D SMV2629D SMV2646D SMV2663D SMV2680D SMV2698D SMV2727D SMV2744D SMV2761D SMV2779D SMV2796D SMV2808D SMV2825D SMV2842D SMV2877D SMV2894D SMV2906D SMV2923D SMV2940D SMV2958D SMV2975D SMV2992D SMV3011D SMV3029D SMV3046D SMV3063D SMV3080D SMV3098D SMV3127D SMV3144D SMV3161D SMV3179D SMV3196D SMV3208D SMV3225D SMV3242D SMV3277D SMV3294D SMV3306D SMV3323D SMV3340D SMV3358D SMV3375D SMV3392D SMV3404D SMV3421D SMV3439D SMV3456D SMV3473D SMV3490D SMV3502D SMV3537D SMV3554D SMV3571D SMV3589D SMV3600D SMV3618D SMV3635D SMV3652D SMV3687D SMV3716D SMV3733D SMV3750D SMV3768D SMV3785D SMV3814D SMV3831D SMV3849D SMV3866D SMV3883D SMV3912D SMV3947D SMV3964D SMV3981D SMV3999D SMV4000D SMV4018D SMV4035D SMV4052D SMV4087D SMV4116D SMV4133D SMV4150D SMV4168D SMV4185D SMV4214D SMV4231D SMV4249D SMV4266D SMV4283D SMV4312D SMV4347D SMV4364D SMV4381D SMV4399D SMV4410D SMV4428D SMV4445D SMV4462D SMV4497D SMV4509D SMV4526D SMV4543D SMV4560D SMV4578D SMV4595D SMV4607D SMV4624D SMV4641D SMV4659D SMV4676D SMV4693D SMV4705D SMV4722D SMV4757D SMV4774D SMV4791D SMV4803D SMV4820D SMV4838D SMV4855D SMV4872D SMV4901D SMV4919D SMV4936D SMV4953D SMV4970D SMV4988D SMV5007D SMV5024D SMV5041D SMV5059D SMV5076D SMV5093D SMV5105D SMV5122D SMV5157D SMV5174D SMV5191D SMV5203D SMV5220D SMV5238D SMV5255D SMV5272D SMV5301D SMV5319D SMV5336D SMV5353D SMV5370D SMV5388D SMV5417D SMV5434D SMV5451D SMV5469D SMV5486D SMV5515D SMV5532D SMV5567D SMV5584D SMV5613D SMV5630D SMV5648D SMV5665D SMV5682D SMV5711D SMV5729D SMV5746D SMV5763D SMV5780D SMV5798D SMV5827D SMV5844D SMV5861D SMV5879D SMV5896D SMV5908D SMV5925D SMV5942D SMV5977D SMV5994D SMV6013D SMV6030D SMV6048D SMV6065D SMV6082D SMV6111D SMV6129D SMV6146D SMV6163D SMV6180D SMV6198D SMV6227D SMV6244D SMV6261D SMV6279D SMV6296D SMV6308D SMV6325D SMV6342D SMV6377D SMV6394D SMV6406D SMV6423D SMV6440D SMV6458D SMV6475D SMV6492D SMV6504D SMV6521D SMV6539D SMV6556D SMV6573D SMV6590D SMV6602D SMV6637D SMV6654D SMV6671D SMV6689D SMV6700D SMV6718D SMV6735D SMV6752D SMV6787D SMV6816D SMV6833D SMV6850D SMV6868D SMV6885D SMV6914D SMV6931D SMV6949D SMV6966D SMV6983D SMV7002D SMV7037D SMV7054D SMV7071D SMV7089D SMV7100D SMV7118D SMV7135D SMV7152D SMV7187D SMV7216D SMV7233D SMV7250D SMV7268D SMV7285D SMV7314D SMV7331D SMV7349D SMV7366D SMV7383D SMV7412D SMV7447D SMV7464D SMV7481D SMV7499D SMV7510D SMV7528D 
SMV7545D SMV7562D SMV7597D SMV7609D SMV7626D SMV7643D SMV7660D SMV7678D SMV7695D SMV7707D SMV7724D SMV7741D SMV7759D SMV7776D SMV7793D SMV7805D SMV7822D SMV7857D SMV7874D SMV7891D SMV7903D SMV7920D SMV7938D SMV7955D SMV7972D SMV8009D SMV8026D SMV8043D SMV8060D SMV8078D SMV8095D SMV8107D SMV8124D SMV8141D SMV8159D SMV8176D SMV8193D SMV8205D SMV8222D SMV8257D SMV8274D SMV8291D SMV8303D SMV8320D SMV8338D SMV8355D SMV8372D SMV8401D SMV8419D SMV8436D SMV8453D SMV8470D SMV8488D SMV8517D SMV8534D SMV8551D SMV8569D SMV8586D SMV8615D SMV8632D SMV8667D SMV8684D SMV8713D SMV8730D SMV8748D SMV8765D SMV8782D SMV8811D SMV8829D SMV8846D SMV8863D SMV8880D SMV8898D SMV8927D SMV8944D SMV8961D SMV8979D SMV8996D SMV9015D SMV9032D SMV9067D SMV9084D SMV9113D SMV9130D SMV9148D SMV9165D SMV9182D SMV9211D SMV9229D SMV9246D SMV9263D SMV9280D SMV9298D SMV9327D SMV9344D SMV9361D SMV9379D SMV9396D SMV9408D SMV9425D SMV9442D SMV9477D SMV9494D SMV9506D SMV9523D SMV9540D SMV9558D SMV9575D SMV9592D SMV9604D SMV9621D SMV9639D SMV9656D SMV9673D SMV9690D SMV9702D SMV9737D SMV9754D SMV9771D SMV9789D SMV9800D SMV9818D SMV9835D SMV9852D SMV9887D SMV9916D SMV9933D SMV9950D SMV9968D SMV9985D 525
###Markdown
More challenges!Suggest one or more variations of problems you can solve with car plate numbers using the power of Python programming. Some ideas include:* Check if a given car plate number is valid* Which valid car plate numbers have a special property (eg prime number, contains at least two '8' digits, does not contain the lucky number 13, etc.)* If there are the same number of available car plate numbers each series (eg SMV and SMW)* (your idea here)Submit a pull request with your ideas and/or code to contribute to learning Mathematics using programming to benefit the world! :)
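A minimal sketch for the first idea (checking whether a complete plate such as 'SDG6136T' is valid) can reuse the `check_letter` function defined above; this example is an addition to the trail, not part of the original tasks:
###Code
# Hedged sketch: validity check for a full plate of the form 'S' + 2 letters + 4 digits + check letter.
def is_valid_plate(full_plate):
    if len(full_plate) != 8 or not full_plate.startswith('S'):
        return False
    body = full_plate[1:]  # e.g. 'DG6136T'
    if not (body[:2].isalpha() and body[2:6].isdigit() and body[6].isalpha()):
        return False
    return check_letter(body)

print(is_valid_plate('SDG6136T'))  # expected True (the worked example above)
print(is_valid_plate('SDG6136A'))  # expected False (wrong check letter)
###Output
_____no_output_____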
###Code
###Output
_____no_output_____ |
notebooks/New_fitness_functions_and_acquisition_functions.ipynb | ###Markdown
Using a new function to evaluate or evaluating a new acquisition function In this notebook we describe how to integrate a new fitness function to the testing framework as well as how to integrate a new acquisition function.
###Code
import numpy as np
import matplotlib.pyplot as plt
# add the egreedy module to the path (one directory up from this)
import sys, os
sys.path.append(os.path.realpath(os.path.pardir))
###Output
_____no_output_____
###Markdown
New fitness function The `perform_experiment` function in the `optimizer` class, used to carry out the optimisation runs (see its docstring and `run_all_experiments.py` for usage examples), imports a fitness **class**. This class, when instantiated is also callable. The class is imported from the `test_problems` module. Therefore, the easiest way to incorporate your own fitness function is to add it to the `test_problems` module by creating a python file in the `egreedy/test_problems/` directory and adding a line importing it into the namespace (see `egreedy/test_problems/__init__.py` for examples) so that it can be directly imported from `test_problems`.If, for example, your fitness class is called `xSquared` and is located in the file `xs.py`, you would place the python file in the directory `egreedy/test_problems` and add the line:```from .xs import xSquared```to the `egreedy/test_problems/__init__.py` file.We will now detail how to structure your fitness class and show the required class methods by creating a new fitness class for the function\begin{equation}f( \mathbf{x} ) = \sum_{i=1}^2 x_i^2,\end{equation}where $\mathbf{x} \in [-5, 5]^2.$
###Code
class xSquared:
"""Example fitness class.
.. math::
f(x) = \sum_{i=1}^2 x_i^2
This demonstration class shows all the required attributes and
functionality of the fitness function class.
"""
def __init__(self):
"""Initialisation function.
This is called when the class is instantiated and sets up its
attributes as well as any other internal variables that may
be needed.
"""
# problem dimensionality
self.dim = 2
# lower and upper bounds for each dimension (must be numpy.ndarray)
self.lb = np.array([-5., -5.])
self.ub = np.array([5., 5.])
        # location(s) of the optima (optional - not always known)
        self.xopt = np.array([0., 0.])
        # its/their fitness value(s)
self.yopt = np.array([0.])
# callable constraint function for the problem - should return
# True if the argument value is **valid** - if no constraint function
# is required then this can take the value of None
self.cf = None
def __call__(self, x):
"""Main callable function.
This is called after the class is instantiated, e.g.
>>> f = xSquared()
>>> f(np.array([2., 2.]))
array([8.])
Note that it is useful to have a function that is able deal with
multiple inputs, which should a numpy.ndarray of shape (N, dim)
"""
# ensure the input is at least 2d, this will cause one-dimensional
# vectors to be reshaped to shape (1, dim)
x = np.atleast_2d(x)
# evaluate the function
val = np.sum(np.square(x), axis=1)
# return the evaluations
return val
###Output
_____no_output_____
###Markdown
This class can then either be placed in the directories discussed above and used for evaluating multiple techniques on it or used for testing purposes. Optimising the new test function with an acquisition function The following code outlines how to optimise your newly created test function with the $\epsilon$-greedy with Pareto front selection ($\epsilon$-PF) algorithm.
###Code
from pyDOE2 import lhs
from egreedy.optimizer import perform_BO_iteration
# ---- instantiate the test problem
f = xSquared()
# ---- Generate initial training data by Latin hypercube sampling across the domain
n_training = 2 * f.dim
# LHS sample in [0, 1]^2 and rescale to problem domain
Xtr = lhs(f.dim, n_training, criterion='maximin')
Xtr = (f.ub - f.lb) * Xtr + f.lb
# expensively evaluate and ensure shape is (n_training, 1)
Ytr = np.reshape(f(Xtr), (n_training, 1))
# ---- Select an acquisition function optimiser.
# In this case we select e-greedy with Pareto front selection (e-PF)
# known as eFront.
#
# All the acquisition functions have the same parameters:
# lb : lower-bound constraints (numpy.ndarray)
# ub : upper-bound constraints (numpy.ndarray)
# acq_budget : max number of calls to the GP model
# cf : callable constraint function that returns True if
# the argument vector is VALID. Optional, has a value of None
# if not used
# acquisition_args : optional dictionary containing key:value pairs
# of arguments to a specific acquisition function.
# e.g. for an e-greedy method then the dict
# {'epsilon': 0.1} would dictate the epsilon value.
# e-greedy with Pareto front selection (e-PF), known as eFront
from egreedy.acquisition_functions import eFront
# instantiate the optimiser with a budget of 5000d and epsilon = 0.1
acq_budget = 5000 * f.dim
acquisition_args = {'epsilon': 0.1}
acq_func = eFront(lb=f.lb, ub=f.ub, cf=None, acq_budget=acq_budget,
acquisition_args=acquisition_args)
# ---- Perform the bayesian optimisation loop for a total budget of 20
# function evaluations (including those used for LHS sampling)
total_budget = 20
while Xtr.shape[0] < total_budget:
# perform one iteration of BO:
# this returns the new location and function value (Xtr, Ynew) as well
# as the trained model used to select the new location
    Xnew, Ynew, model = perform_BO_iteration(Xtr, Ytr, f, acq_func, verbose=True)
# augment the training data and repeat
Xtr = np.concatenate((Xtr, np.atleast_2d(Xnew)))
Ytr = np.concatenate((Ytr, np.atleast_2d(Ynew)))
print('Best function value so far: {:g}'.format(np.min(Ytr)))
print()
###Output
Training a GP model with 4 data points.
Optimising the acquisition function.
Expensively evaluating the new selected location.
New location : [2.74739209 1.85140059]
Function value: 10.9758
Best function value so far: 2.34029
Training a GP model with 5 data points.
Optimising the acquisition function.
Expensively evaluating the new selected location.
New location : [0.8734869 0.23262576]
Function value: 0.817094
Best function value so far: 0.817094
Training a GP model with 6 data points.
Optimising the acquisition function.
Expensively evaluating the new selected location.
New location : [0.48106594 0.1616895 ]
Function value: 0.257568
Best function value so far: 0.257568
Training a GP model with 7 data points.
Optimising the acquisition function.
Expensively evaluating the new selected location.
New location : [0.18353004 0.31916069]
Function value: 0.135547
Best function value so far: 0.135547
Training a GP model with 8 data points.
Optimising the acquisition function.
Expensively evaluating the new selected location.
New location : [0.13640659 0.16686483]
Function value: 0.0464506
Best function value so far: 0.0464506
Training a GP model with 9 data points.
Optimising the acquisition function.
Expensively evaluating the new selected location.
New location : [0.01887471 0.03552285]
Function value: 0.00161813
Best function value so far: 0.00161813
Training a GP model with 10 data points.
Optimising the acquisition function.
Expensively evaluating the new selected location.
New location : [-0.00498028 0.00452991]
Function value: 4.53233e-05
Best function value so far: 4.53233e-05
Training a GP model with 11 data points.
Optimising the acquisition function.
Expensively evaluating the new selected location.
New location : [-0.00296791 0.01886975]
Function value: 0.000364876
Best function value so far: 4.53233e-05
Training a GP model with 12 data points.
Optimising the acquisition function.
Expensively evaluating the new selected location.
New location : [-0.00228287 -0.0035852 ]
Function value: 1.80651e-05
Best function value so far: 1.80651e-05
Training a GP model with 13 data points.
Optimising the acquisition function.
Expensively evaluating the new selected location.
New location : [-0.00136503 0.01035887]
Function value: 0.00010917
Best function value so far: 1.80651e-05
Training a GP model with 14 data points.
Optimising the acquisition function.
Expensively evaluating the new selected location.
New location : [-0.00031969 -0.00034095]
Function value: 2.18449e-07
Best function value so far: 2.18449e-07
Training a GP model with 15 data points.
Optimising the acquisition function.
Expensively evaluating the new selected location.
New location : [0.0022843 0.00156171]
Function value: 7.65699e-06
Best function value so far: 2.18449e-07
Training a GP model with 16 data points.
Optimising the acquisition function.
Expensively evaluating the new selected location.
New location : [0.00435965 0.00438578]
Function value: 3.82417e-05
Best function value so far: 2.18449e-07
Training a GP model with 17 data points.
Optimising the acquisition function.
Expensively evaluating the new selected location.
New location : [-0.0038385 -0.01013353]
Function value: 0.000117422
Best function value so far: 2.18449e-07
Training a GP model with 18 data points.
Optimising the acquisition function.
Expensively evaluating the new selected location.
New location : [0.01889642 0.02033263]
Function value: 0.000770491
Best function value so far: 2.18449e-07
Training a GP model with 19 data points.
Optimising the acquisition function.
Expensively evaluating the new selected location.
New location : [ 0.00050108 -0.02413065]
Function value: 0.000582539
Best function value so far: 2.18449e-07
###Markdown
The plot below shows the difference between the best seen function value and the true minimum, i.e. $|f^\star - f_{min}|$, over each iteration.
###Code
fig, ax = plt.subplots(1, 1, figsize=(8, 4))
ax.semilogy(np.minimum.accumulate(np.abs(Ytr - f.yopt)))
ax.set_xlabel('Iteration', fontsize=15)
ax.set_ylabel('$|f^\star - f_{min}|$', fontsize=15)
plt.show()
###Output
_____no_output_____
###Markdown
New acquisition function We now detail how to create your own acquisition function class and integrate it into the testing suite. In a similar manner to the fitness function classes, the acquisition function classes are imported from the `egreedy.acquisition_functions` module, with the specific classes available determined by the `__init__.py` file in the same module. If, for example, your acquisition function class is called `greed` and is located in the file `gr.py`, you would place the python file in the directory `egreedy/acquisition_functions` and add the line:```from .gr import greed```to the `egreedy/acquisition_functions/__init__.py` file. The python file `egreedy/acquisition_functions/acq_func_optimisers.py` contains base classes for the acquisition function classes. We will now demonstrate how to implement two simple acquisition functions and then show how to optimise one of the test functions included in the suite. The `BaseOptimiser` class is the base acquisition function class that implements the standard interface for acquisition function optimizers. It only contains an initialisation function with several arguments:- lb: lower-bound constraint- ub: upper-bound constraint- acq_budget : maximum number of calls to the Gaussian Process- cf : callable constraint function that returns True if the argument decision vector is VALID (optional, default value: None)- acquisition_args : Optional dictionary containing additional arguments that are unpacked into key=value arguments for an internal acquisition function; e.g. {'epsilon':0.1}. The `ParetoFrontOptimiser` class implements the base class as well as an additional function named `get_front(model)` that takes in a GPRegression model from GPy and approximates its Pareto front of model prediction and uncertainty. It returns the decision vectors belonging to the members of the front, an array containing their corresponding predicted values, and an array containing the prediction uncertainty. We first create a simple acquisition function, extending the base class, that generates uniform samples in space and uses the Gaussian Process's mean prediction to select the best (lowest value) predicted location.
###Code
from egreedy.acquisition_functions.acq_func_optimisers import BaseOptimiser
class greedy_sample(BaseOptimiser):
"""Greedy function that uniformly samples the GP posterior
and returns the location with the best (lowest) mean predicted value.
"""
# note we do not need to implement an __init__ method because the
# base class already does this. Here we will include a commented
# version for clarity.
# def __init__(self, lb, ub, acq_budget, cf=None, acquisition_args={}):
# self.lb = lb
# self.ub = ub
# self.cf = cf
# self.acquisition_args = acquisition_args
# self.acq_budget = acq_budget
def __call__(self, model):
"""Returns the location with the best (lowest) predicted value
after uniformly sampling decision space.
"""
# generate samples
X = np.random.uniform(self.lb, self.ub,
                              size=(self.acq_budget, self.lb.size))
# evaluate them with the gp
mu, sigmasqr = model.predict(X, full_cov=False)
# find the index of the best value
argmin = np.argmin(mu.flat)
# return the best found value
return X[argmin, :]
from egreedy.acquisition_functions.acq_func_optimisers import ParetoFrontOptimiser
class greedy_pfront(ParetoFrontOptimiser):
"""Exploitative method that calculates the approximate Pareto front
of a GP model and returns the Pareto set member that has the best
(lowest) predicted value.
"""
# again here we do not need to implement an __init__ method.
def __call__(self, model):
"""Returns the location with the best (lowest) predicted value
from the approximate Pareto set of the GP's predicted value and
its corresponding uncertainty.
"""
# approximate the pareto set; here X are the locations of the
# members of the set and mu and sigma are their predicted values
# and uncertainty
X, mu, sigma = self.get_front(model)
# find the index of the best value
argmin = np.argmin(mu.flat)
# return the best found value
return X[argmin, :]
###Output
_____no_output_____
###Markdown
We now create a similar script to the one used above in the fitness function example. This time we will optimise the `push4` function included in the test suite and load the training data associated with the first optimisation run carried out by each of the techniques evaluated in the paper. Note that in this case the training data contains arguments to be passed into the function during instantiation. This is because the `push4` runs are evaluated on a *problem instance* basis.
###Code
from egreedy.optimizer import perform_BO_iteration
from egreedy import test_problems
# ---- optimisation run details
problem_name = 'push4'
run_no = 1
acq_budget = 5000 * 4 # because the problem dimensionality is 4
total_budget = 25
# ---- load the training data
data_file = f'../training_data/{problem_name:}_{run_no:}.npz'
with np.load(data_file, allow_pickle=True) as data:
Xtr = data['arr_0']
Ytr = data['arr_1']
if 'arr_2' in data:
f_optional_arguments = data['arr_2'].item()
else:
f_optional_arguments = {}
# ---- instantiate the test problem
f_class = getattr(test_problems, problem_name)
f = f_class(**f_optional_arguments)
# ---- instantiate the acquisition function we created earlier
# (greedy_sample takes no additional acquisition arguments)
acq_func = greedy_sample(lb=f.lb, ub=f.ub, cf=None, acq_budget=acq_budget,
                         acquisition_args={})
while Xtr.shape[0] < total_budget:
# perform one iteration of BO:
# this returns the new location and function value (Xtr, Ynew) as well
# as the trained model used to select the new location
Xnew, Ynew, model = perform_BO_iteration(Xtr, Ytr, f, acq_func, verbose=True)
# augment the training data and repeat
Xtr = np.concatenate((Xtr, np.atleast_2d(Xnew)))
Ytr = np.concatenate((Ytr, np.atleast_2d(Ynew)))
print('Best function value so far: {:g}'.format(np.min(Ytr)))
print()
fig, ax = plt.subplots(1, 1, figsize=(8, 4))
ax.plot(np.minimum.accumulate(np.abs(Ytr - f.yopt)))
ax.set_xlabel('Iteration', fontsize=15)
ax.set_ylabel('$|f^\star - f_{min}|$', fontsize=15)
plt.show()
###Output
_____no_output_____ |
demo/non_Gaussian_likelihood.ipynb | ###Markdown
Example 1 on heteroskedastic Gaussian likelihood
###Code
# Imports (assumed): numpy, matplotlib and the dgpsi package provide the classes used below
import numpy as np
import matplotlib.pyplot as plt
from dgpsi import dgp, kernel, combine, emulator, Hetero, Poisson, NegBin
n=12
X=np.linspace(0,1,n)[:,None]
#Create some replications of input positions so that each input position will have six different outputs. Note that SI has linear complexity with the number of replications.
for i in range(5):
X=np.concatenate((X,np.linspace(0,1,n)[:,None]),axis=0)
f1= lambda x: -1. if x<0.5 else 1. #True mean function, which is a step function
f2= lambda x: np.exp(1.5*np.sin((x-0.3)*7.)-6.5) #True variance function, which has higher values around 0.5 but low values around boundaries
Y=np.array([np.random.normal(f1(x),np.sqrt(f2(x))) for x in X]) #Generate stochastic outputs according to f1 and f2
z=np.linspace(0,1.,200)[:,None]
Yz=np.array([f1(x) for x in z]).flatten()
plt.plot(z,Yz) #Plot true mean function
plt.scatter(X,Y,color='r')
#Create a 2-layered DGP + Hetero model
layer1=[kernel(length=np.array([0.5]),name='matern2.5')]
layer2=[kernel(length=np.array([0.2]),name='matern2.5',scale_est=1,connect=np.arange(1)),
kernel(length=np.array([0.2]),name='matern2.5',scale_est=1,connect=np.arange(1))]
layer3=[Hetero()]
#Construct the DGP + Hetero model
all_layer=combine(layer1,layer2,layer3)
m=dgp(X,[Y],all_layer)
#Train the model
m.train(N=500)
#Construct the emulator
final_layer_obj=m.estimate()
emu=emulator(final_layer_obj)
#Make predictions across all layers so we can extract predictions for the mean and variance functions
mu,var=emu.predict(z, method='mean_var',full_layer=True)
#Visualize the overall model prediction
s=np.sqrt(var[-1])
u=mu[-1]+2*s
l=mu[-1]-2*s
p=plt.plot(z,mu[-1],color='r',alpha=1,lw=1)
p1=plt.plot(z,u,'--',color='g',lw=1)
p1=plt.plot(z,l,'--',color='g',lw=1)
plt.scatter(X,Y,color='black')
plt.plot(z,Yz)
#Visualize the prediction for the mean function
mu_mean=mu[-2][:,0]
var_mean=var[-2][:,0]
s=np.sqrt(var_mean)
u=mu_mean+2*s
l=mu_mean-2*s
p=plt.plot(z,mu_mean,color='r',alpha=1,lw=1)
p1=plt.plot(z,u,'--',color='g',lw=1)
p1=plt.plot(z,l,'--',color='g',lw=1)
plt.plot(z,Yz,color='black',lw=1)
#Visualize the prediction for the log(variance) function
mu_var=mu[-2][:,1]
var_var=var[-2][:,1]
s=np.sqrt(var_var)
u=mu_var+2*s
l=mu_var-2*s
p=plt.plot(z,mu_var,color='r',alpha=1,lw=1)
p1=plt.plot(z,u,'--',color='g',lw=1)
p1=plt.plot(z,l,'--',color='g',lw=1)
plt.plot(z,np.array([np.log(f2(x)) for x in z]).reshape(-1,1),color='black',lw=1)
###Output
_____no_output_____
###Markdown
Example 2 on heteroskedastic Gaussian likelihood
###Code
#Load and visualize the motorcycle dataset
X=np.loadtxt('./mc_input.txt').reshape(-1,1)
Y=np.loadtxt('./mc_output.txt').reshape(-1,1)
X=(X-np.min(X))/(np.max(X)-np.min(X))
Y=(Y-Y.mean())/Y.std()
plt.scatter(X,Y)
#Construct a 2-layered DGP + Hetero model
layer1=[kernel(length=np.array([0.5]),name='sexp')]
layer2=[]
for _ in range(2):
k=kernel(length=np.array([0.2]),name='sexp',scale_est=1,connect=np.arange(1))
layer2.append(k)
layer3=[Hetero()]
all_layer=combine(layer1,layer2,layer3)
m=dgp(X,[Y],all_layer)
#Train the model
m.train(N=500)
#Construct the emulator
final_layer_obj=m.estimate()
emu=emulator(final_layer_obj)
#Make predictions over [0,1]
z=np.linspace(0,1,100)[:,None]
mu,var=emu.predict(z, method='mean_var')
#Visualize the predictions
s=np.sqrt(var)
u=mu+2*s
l=mu-2*s
p=plt.plot(z,mu,color='r',alpha=1,lw=1)
p1=plt.plot(z,u,'--',color='g',lw=1)
p1=plt.plot(z,l,'--',color='g',lw=1)
plt.scatter(X,Y,color='black')
###Output
_____no_output_____
###Markdown
Example 3 on Poisson likelihood
###Code
#Generate some data with replications
n=10
X=np.linspace(0,.3,n)[:,None]
for _ in range(4):
X=np.concatenate((X,np.linspace(0,.3,n)[:,None]),axis=0)
X=np.concatenate((X,np.linspace(0.35,1,n)[:,None]),axis=0)
f= lambda x: np.exp(np.exp(-1.5*np.sin(1/((0.7*0.8*(1.5*x+0.1)+0.3)**2))))
Y=np.array([np.random.poisson(f(x)) for x in X]).reshape(-1,1)
z=np.linspace(0,1.,200)[:,None]
Yz=np.array([f(x) for x in z]).flatten()
test_Yz=np.array([np.random.poisson(f(x)) for x in z]).reshape(-1,1) #generate some testing output data
plt.plot(z,Yz)
plt.scatter(X,Y,color='r')
#Train a GP + Poisson model
layer1=[kernel(length=np.array([0.5]),name='matern2.5',scale_est=1)]
layer2=[Poisson()]
all_layer=combine(layer1,layer2)
m=dgp(X,[Y],all_layer)
m.train(N=500)
#Visualize the results
final_layer_obj=m.estimate()
emu=emulator(final_layer_obj)
mu,var=emu.predict(z, method='mean_var',full_layer=True) #Make mean-variance prediction
samp=emu.predict(z, method='sampling') #Draw some samples to obtain the quantiles of the overall model
quant=np.quantile(np.squeeze(samp), [0.05,0.5,0.95],axis=1) #Compute sample-based quantiles
fig, (ax1, ax2) = plt.subplots(1,2, figsize=(15,4))
ax1.set_title('Predicted and True Poisson Mean')
ax1.plot(z,Yz,color='black')
ax1.plot(z,mu[-1],'--',color='red',alpha=0.8,lw=3)
ax1.plot(z,quant[0,:],'--',color='b',lw=1)
ax1.plot(z,quant[1,:],'--',color='b',lw=1)
ax1.plot(z,quant[2,:],'--',color='b',lw=1)
mu_gp, var_gp=mu[-2], var[-2]
s=np.sqrt(var_gp)
u,l =mu_gp+2*s, mu_gp-2*s
ax2.set_title('Predicted and True logged Poisson Mean')
ax2.plot(z,mu_gp,color='r',alpha=1,lw=1)
ax2.plot(z,u,'--',color='g',lw=1)
ax2.plot(z,l,'--',color='g',lw=1)
ax2.plot(z,np.log(Yz),color='black',lw=1)
print('The negative log-likelihood of predictions is', emu.nllik(z,test_Yz)[0])
#Train a 2-layered DGP + Poisson model
layer1=[kernel(length=np.array([0.5]),name='matern2.5')]
layer2=[kernel(length=np.array([0.1]),name='matern2.5',scale_est=1,connect=np.arange(1))]
layer3=[Poisson()]
all_layer=combine(layer1,layer2,layer3)
m=dgp(X,[Y],all_layer)
m.train(N=500)
#Visualize the results
final_layer_obj=m.estimate()
emu=emulator(final_layer_obj)
mu,var=emu.predict(z, method='mean_var',full_layer=True) #Make mean-variance prediction
samp=emu.predict(z, method='sampling') #Draw some samples to obtain the quantiles of the overall model
quant=np.quantile(np.squeeze(samp), [0.05,0.5,0.95],axis=1) #Compute sample-based quantiles
fig, (ax1, ax2) = plt.subplots(1,2, figsize=(15,4))
ax1.set_title('Predicted and True Poisson Mean')
ax1.plot(z,Yz,color='black')
ax1.plot(z,mu[-1],'--',color='red',alpha=0.8,lw=3)
ax1.plot(z,quant[0,:],'--',color='b',lw=1)
ax1.plot(z,quant[1,:],'--',color='b',lw=1)
ax1.plot(z,quant[2,:],'--',color='b',lw=1)
mu_gp, var_gp=mu[-2], var[-2]
s=np.sqrt(var_gp)
u,l =mu_gp+2*s, mu_gp-2*s
ax2.set_title('Predicted and True logged Poisson Mean')
ax2.plot(z,mu_gp,color='r',alpha=1,lw=1)
ax2.plot(z,u,'--',color='g',lw=1)
ax2.plot(z,l,'--',color='g',lw=1)
ax2.plot(z,np.log(Yz),color='black',lw=1)
print('The negative log-likelihood of predictions is', emu.nllik(z,test_Yz)[0])
###Output
The negative log-likelihood of predictions is 1.7790101014101791
###Markdown
Example 4 on Negative Binomial likelihood The Negative Binomial pmf in dgpsi is defined by$$p_Y(y;\mu,\sigma)=\frac{\Gamma(y+\frac{1}{\sigma})}{\Gamma(1/{\sigma})\Gamma(y+1)}\left(\frac{\sigma\mu}{1+\sigma\mu}\right)^y\left(\frac{1}{1+\sigma\mu}\right)^{1/{\sigma}}$$with mean $0<\mu<\infty$ and dispersion $0<\sigma<\infty$, which correspond to numpy's negative binomial parameters $n$ and $p$ via $n=1/\sigma$ and $p=1/(1+\mu\sigma)$.
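As a quick sanity check of this mapping (a sketch using only numpy; `mu_check` and `sigma_check` are arbitrary illustrative values, not part of the example data below), the sample mean should approach $\mu$ and the sample variance should approach $\mu+\sigma\mu^2$:
###Code
# Sketch: verify the (mu, sigma) -> (n, p) mapping with numpy draws
mu_check, sigma_check = 3.0, 0.5
draws = np.random.negative_binomial(1/sigma_check, 1/(1+mu_check*sigma_check), size=100000)
print(draws.mean(), mu_check)                            # sample mean vs mu
print(draws.var(), mu_check + sigma_check*mu_check**2)   # sample variance vs mu + sigma*mu^2
###Output
_____no_output_____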
###Code
#Generate some data from the Negative Binomial distribution.
n=30
X=np.linspace(0,1,n)[:,None]
for _ in range(5):
X=np.concatenate((X,np.linspace(0,1,n)[:,None]),axis=0)
f1= lambda x: 1/np.exp(2) if x<0.5 else np.exp(2) #True mean function
f2= lambda x: np.exp(6*x**2-3) #True dispersion function
Y=np.array([np.random.negative_binomial(1/f2(x),1/(1+f1(x)*f2(x))) for x in X]).reshape(-1,1)
Xt=np.linspace(0,1.,200)[:,None]
Yt=np.array([f1(x) for x in Xt]).flatten()
plt.plot(Xt,Yt)
plt.scatter(X,Y,color='r')
#Train a 2-layered DGP (one GP in the first layer and two in the second corresponding to the mean and dispersion parameters) + NegBin model
layer1=[kernel(length=np.array([0.5]),name='matern2.5')]
layer2=[kernel(length=np.array([0.02]),name='matern2.5',scale_est=1,connect=np.arange(1)),
kernel(length=np.array([0.02]),name='matern2.5',scale_est=1,connect=np.arange(1))]
layer3=[NegBin()]
all_layer=combine(layer1,layer2,layer3)
m=dgp(X,[Y],all_layer)
m.train(N=500)
#Visualize the results
final_layer_obj=m.estimate()
emu=emulator(final_layer_obj)
mu,var=emu.predict(Xt, method='mean_var',full_layer=True) #Make mean-variance prediction
samp=emu.predict(Xt, method='sampling') #Draw some samples to obtain the quantiles of the overall model
quant=np.quantile(np.squeeze(samp), [0.05,0.5,0.95],axis=1) #Compute sample-based quantiles
fig, (ax1, ax2, ax3) = plt.subplots(1,3, figsize=(15,4))
ax1.set_title('Predicted and True NegBin Mean')
ax1.plot(Xt,Yt,color='black')
ax1.plot(Xt,mu[-1],'--',color='red',alpha=0.8,lw=3)
ax1.plot(Xt,quant[0,:],'--',color='b',lw=1)
ax1.plot(Xt,quant[1,:],'--',color='b',lw=1)
ax1.plot(Xt,quant[2,:],'--',color='b',lw=1)
mu_gp, var_gp=mu[-2][:,0], var[-2][:,0]
s=np.sqrt(var_gp)
u,l =mu_gp+2*s, mu_gp-2*s
ax2.set_title('Predicted and True logged NegBin Mean')
ax2.plot(Xt,mu_gp,color='r',alpha=1,lw=1)
ax2.plot(Xt,u,'--',color='g',lw=1)
ax2.plot(Xt,l,'--',color='g',lw=1)
ax2.plot(Xt,np.log(Yt),color='black',lw=1)
mu_gp, var_gp=mu[-2][:,1], var[-2][:,1]
s=np.sqrt(var_gp)
u,l =mu_gp+2*s, mu_gp-2*s
ax3.set_title('Predicted and True logged NegBin Dispersion')
ax3.plot(Xt,mu_gp,color='r',alpha=1,lw=1)
ax3.plot(Xt,u,'--',color='g',lw=1)
ax3.plot(Xt,l,'--',color='g',lw=1)
ax3.plot(Xt,np.array([np.log(f2(x)) for x in Xt]).reshape(-1,1),color='black',lw=1)
###Output
_____no_output_____ |
Assignment 3/From Scratch Implementation/_CAP5625_Assignment3_scratch_LogisticRidgeRegresion.ipynb | ###Markdown
**Assignment 3 (From Scratch)** **Penalized Logistic Ridge Regression CV with Batch Gradient Descent** - **Programmers:** - Shaun Pritchard - Ismael A Lopez- **Date:** 11-15-2021- **Assignment:** 3- **Prof:** M.DeGiorgio **Overview: Assignment 3**- In this assignment you will still be analyzing human genetic data from 𝑁 = 183 training observations (individuals) sampled across the world. The goal is to fit a model that can predict (classify) an individual’s ancestry from their genetic data that has been projected along the 𝑝 = 10 top principal components (proportion of variance explained is 0.2416), which we use as features rather than the raw genetic data.- Using ridge regression, fit a penalized (regularized) logistic (multinomial) regression with model parameters obtained by batch gradient descent. Based on K = 5 continental ancestries (African, European, East Asian, Oceanian, or Native American), predictions will be made. Ridge regression will permit parameter shrinkage (tuning parameter 𝜆 ≥ 0) to mitigate overfitting. In order to infer the best-fit model parameters on the training dataset, the tuning parameter will be selected using five-fold cross validation. After training, the model will be used to predict new test data points. **Imports**> Import libraries and data
###Code
#Math libs
from math import sqrt
from scipy import stats
from numpy import median
from decimal import *
import os
# Data Science libs
import numpy as np
import pandas as pd
# Graphics libs
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
#Timers
# !pip install pytictoc
# from pytictoc import TicToc
# Import training and test datasets
train_df = pd.read_csv('TrainingData_N183_p10.csv')
test_df = pd.read_csv('TestData_N111_p10.csv')
# Validate training data imported correctly
train_df.head(2)
# Validate testing data imported correctly
test_df.head(2)
###Output
_____no_output_____
###Markdown
**Data Pre-Processing**- Pre-process test and training datasets- Encode the categorical response (Ancestry) as integer codes- Validate correct output of the data
###Code
# recode the categories
data = train_df['Ancestry'].unique().tolist()
num_features = len(data)
train_df['Ancestry2'] = train_df['Ancestry'].apply(lambda x: data.index(x))
# Validate training data set
train_df.head(2)
# Shape of training data
train_df.shape
###Output
_____no_output_____
###Markdown
**Separate X and Y Predictors and Responses** - Separate predictors from responses - Validate correct output
###Code
# Keep the dependent categorical (Ancestry) labels from the training and test data sets for later use
Y_train_names = train_df['Ancestry'].tolist()
Y_test_names = test_df['Ancestry'].tolist()
# Separate training feature predictors from responses
X_train = np.float32(train_df.to_numpy()[:, :-2])
Y_train = train_df['Ancestry2'].to_numpy()
# Separate test feature predictors from responses
X_test = np.float32(test_df.to_numpy()[:, :-1])
X_train.shape
###Output
_____no_output_____
###Markdown
**Set Global Variables**- λ = tuning parameters- α = learning rate- k = number of folds- n_iters = number of iterations- X_p = predictor values from training data- Y_p = response values from training data
###Code
# Set local variables
# Tuning Parms
λ = 10 ** np.arange(-4., 4.)
# learning rate
α = 1e-4
# K-folds
k = 5
# Iterations
n_iters = 10000
# Set n x m matrix predictor variable
X_p = X_train
# Set n vector response variable
Y_p = Y_train
###Output
_____no_output_____
###Markdown
**Instantiate Data**- Set up the variables needed for ridge logistic regression- Set up the variables for batch gradient descent- Set up the variables for cross-validation
###Code
# One-hot encode the response variable for cross validation
def imputeResponse(response_vector, num_features):
response_vector = np.int64(response_vector)
X1 = response_vector.shape[0]
response_mat = np.zeros([X1, num_features])
response_mat[np.arange(X1), response_vector] = 1
return response_mat
# Method to handle randomization of training data predictors and responses
def randomizeData(X_p, Y_p):
data = np.concatenate((X_p, Y_p[:, None]), 1)
np.random.shuffle(data)
return data[:, :-1], data[:, -1]
# Randomize predictors and responses into new variables
x, y = randomizeData(X_p, Y_p)
# Set global variables for the number of samples (N) and number of features (M)
X1 = x.shape[0]
X2 = x.shape[1]
# Get the number of response classes = 5
num_features = np.unique(y).size
# One-hot encode the training response variable
y = imputeResponse(y, num_features)
# Store the 5-fold cross-validation errors in a (k x number of lambdas) matrix
CV = np.zeros([k, len(λ)])
# Number of validation sample index values based on k-folds
val_samples = int(np.ceil(X1 / k))
test_i = list(range(0, X1, val_samples))
# Create a zero array 𝛽 to store the trained coefficients for each fold and lambda
𝛽 = np.zeros([k, len(λ), X2 + 1, num_features])
###Output
_____no_output_____
###Markdown
**Implement logic**- Main functions to handle logic within the preceding algorithms
###Code
# Standardize X features
def standardize(x, mean_x, std_x):
return (x - mean_x) / std_x
# Add an intercept column of ones to X
def intercept(x):
col = np.ones([x.shape[0], 1])
return np.concatenate((col, x), 1)
# Predict class probabilities: standardize X, add the intercept column, and apply the softmax
def predict(x):
x = standardize(x, mean_x, std_x)
x = intercept(x)
X_p = np.exp(np.matmul(x, 𝛽x))
return X_p / np.sum(X_p, 1)[:, None]
# Splitting the data into k groups resampling method
def cv_folds(i_test):
if i_test + val_samples <= X1:
i_tests = np.arange(i_test, i_test + val_samples)
else:
i_tests = np.arange(i_test, X1)
x_test = x[i_tests]
x_train = np.delete(x, i_tests, axis = 0)
y_test = y[i_tests]
y_train = np.delete(y, i_tests, axis = 0)
return x_train, x_test, y_train, y_test
# Calculate model CV score
def score(x, y, 𝛽x):
    # Compute the unnormalized class probability matrix
U = np.exp(np.matmul(x, 𝛽x))
    # Normalize each row so the class probabilities sum to 1
P = U / np.sum(U, 1)[:, None]
    # Calculate the cross-entropy error
err = -(1 / x.shape[0]) * np.sum(np.sum(y * np.log10(P), 1))
return err
###Output
_____no_output_____
###Markdown
**Batch Gradient Descent**> Algorithm 1 is used for this computation.
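In matrix form, the update implemented in the function below is$$\mathbf{B} \leftarrow \mathbf{B} + \alpha\left[\tilde{\mathbf{X}}^{\top}(\mathbf{Y}-\mathbf{P}) - 2\lambda\tilde{\mathbf{B}}\right],$$where $\tilde{\mathbf{X}}$ is the $N\times(p+1)$ design matrix with an intercept column, $\mathbf{Y}$ is the $N\times K$ one-hot response matrix, $\mathbf{P}$ is the $N\times K$ matrix of softmax class probabilities, and $\tilde{\mathbf{B}}$ equals $\mathbf{B}$ with its intercept row set to zero so the intercept is not penalized.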
###Code
def BGD(x, y, 𝛽x, lamb):
    # Unnormalized class probability matrix
U = np.exp(np.dot(x, 𝛽x))
# Normalized class probability matrix
P = U / np.sum(U, 1)[:, None]
    # Copy of the coefficients with all non-intercept rows zeroed, so the ridge penalty excludes the intercept
Z = 𝛽x.copy()
Z[1:] = 0
# Update parameter matrix
𝛽x = 𝛽x + α * (np.matmul(np.transpose(x), y - P) - 2 * lamb * (𝛽x - Z))
return 𝛽x
###Output
_____no_output_____
###Markdown
**CV Ridge Penalized Logistic Regression**> - Compute ridge-penalized logistic regression with cross validation- Performing a ridge-penalized logistic regression fit to training data {(𝑥1, 𝑦1), (𝑥2, 𝑦2), … , (𝑥𝑁, 𝑦𝑁)} amounts to minimizing the cost function$$J(\mathbf{B}) = -\sum_{i=1}^{N}\sum_{k=1}^{K} y_{ik}\,\log p_k(\mathbf{x}_i;\mathbf{B}) + \lambda\sum_{j=1}^{p}\sum_{k=1}^{K}\beta_{jk}^{2},\qquad p_k(\mathbf{x};\mathbf{B})=\frac{\exp(\beta_{0k}+\mathbf{x}^{\top}\boldsymbol{\beta}_{k})}{\sum_{l=1}^{K}\exp(\beta_{0l}+\mathbf{x}^{\top}\boldsymbol{\beta}_{l})},$$where the intercepts $\beta_{0k}$ are excluded from the ridge penalty.
###Code
# Compute ridge-penalized logistic regression with cross validation
for i_lambda, lamb in enumerate(λ):
for i_fold, i_test in zip(range(k), test_i):
        # Split into the training and validation sets for this CV iteration
x_train, x_test, y_train, y_test = cv_folds(i_test)
        # Compute standardization statistics from the fold's training data
mean_x, std_x = np.mean(x_train, 0), np.std(x_train, 0)
        # Standardize the X training and validation sets
x_train = standardize(x_train, mean_x, std_x)
x_test = standardize(x_test, mean_x, std_x)
# Add training and test intercept column to the design matrix
x_train = intercept(x_train)
x_test = intercept(x_test)
# initialize Beta coef for lambdas and fold
𝛽x = np.zeros([X2 + 1, num_features])
        # Run batch gradient descent for this lambda and fold
for iter in range(n_iters):
𝛽x = BGD(x_train, y_train, 𝛽x, lamb)
        # Score the CV error of the model and store the value
CV[i_fold, i_lambda] = score(x_test, y_test, 𝛽x)
# Save the updated coefficient vectors 𝛽x
𝛽[i_fold, i_lambda] = 𝛽x
###Output
_____no_output_____
###Markdown
**Deliverable 1**> Illustrate the effect of the tuning parameter on the inferred ridge regression coefficients by generating five plots (one for each of the 𝐾 = 5 ancestry classes) of 10 lines (one for each of the 𝑝 = 10 features).
###Code
# Plot tuning parameter on the inferred ridge regression coefficients
𝛽μ = np.mean(𝛽, 0)
sns.set(rc = {'figure.figsize':(15,8)})
for i, c in enumerate(data):
𝛽μk = 𝛽μ[..., i]
sns.set_theme(style="whitegrid")
sns.set_palette("mako")
for j in range(1, 1 + X2):
sns.lineplot( x=λ, y=𝛽μk[:, j], palette='mako', label = 'PC{}'.format(j) )
sns.set()
plt.xscale('log')
plt.legend(bbox_to_anchor=(1.09, 1), loc='upper left')
plt.xlabel('Log Lambda')
plt.ylabel('Coefficient Values')
plt.suptitle('Inferred Ridge Regression Coefficient Tuning Parameters of' + ' ' + c + ' ' + 'Class')
    # Output Deliverable 1 (save one figure per ancestry class)
    plt.savefig("Assignment3_Deliverable1.{}.png".format(i))
    plt.show()
###Output
_____no_output_____
###Markdown
**Deliverable 2**> Illustrate the effect of the tuning parameter on the cross validation error by generating a plot with the 𝑦-axis as CV(5) error, and the 𝑥-axis the corresponding log-scaled tuning parameter value log10(𝜆) that generated the particular CV(5) error.
###Code
# Compute tuning parameter on the cross validation error
err = np.std(CV, 0) / np.sqrt(CV.shape[0])
sns.set(rc = {'figure.figsize':(15,8)})
sns.set_theme(style="whitegrid")
sns.set_palette("icefire")
sns.pointplot(x=λ, y=np.mean(CV, 0),yerr = err)
sns.set()
plt.xlabel('log10(lambda)')
plt.ylabel('CV(5) error')
plt.xscale('log')
plt.yscale('log')
plt.suptitle('Effect of the tuning parameter on the cross validation error log10(lambda)')
plt.savefig("Assignment3_Deliverable2.png")
plt.show()
###Output
_____no_output_____
###Markdown
**Retrain model with best lambda**
###Code
# Select the lambda with the smallest mean CV error
best_λ = λ[np.argmin(np.mean(CV, 0))]
# Set standardization statistics (mean and standard deviation)
mean_x, std_x = np.mean(x, 0), np.std(x, 0)
# Implement standardization of predictors and copy response variables
x = standardize(x, mean_x, std_x)
x = intercept(x)
y = y.copy()
# Initialize the coefficient matrix to zeros and retrain the model with batch gradient descent
𝛽x = np.zeros([X2 + 1, num_features])
for iter in range(n_iters):
𝛽x = BGD(x, y, 𝛽x, best_λ)
###Output
_____no_output_____
###Markdown
**Deliverable 3**> Indicate the value of 𝜆 that generated the smallest CV(5) error **Optimal lambda**
###Code
# Plot lowest optimal lambda
palette = sns.color_palette('mako')
sns.set(rc = {'figure.figsize':(15,8)})
sns.set_theme(style="whitegrid")
ak = Decimal(best_λ)
plt.plot(λ)
sns.set_palette("icefire")
sns.countplot(data=λ)
plt.xscale('log')
plt.yscale('log')
plt.xlabel('log10(lambda)')
plt.ylabel('Best Lambda')
plt.suptitle('Lowest optimal lambda value: log10(lambda) = {:.1f}, lambda = {}'.format(ak.log10(), best_λ))
print('Optimal lambda value:= {}'.format(best_λ))
plt.savefig("Assignment3_Deliverable3-1.png")
###Output
_____no_output_____
###Markdown
**Classifier accuracy on the training data**
###Code
#Implement Prediction function
ŷ_p = predict(X_train)
# Take the class with the highest predicted probability for each observation
ŷ0 = np.argmax(ŷ_p, 1)
# Compute the mean training accuracy
μ = np.mean(ŷ0 == Y_train)
sns.set(rc = {'figure.figsize':(15,8)})
sns.set_theme(style="whitegrid")
plt.plot(ŷ0)
sns.set_palette("icefire")
sns.boxplot(data=ŷ0 -μ**2)
plt.xscale('log')
plt.xlabel('log10(lambda)')
plt.ylabel('Best Lambda')
plt.suptitle('Classifier Training Accuracy:= {}'.format(μ))
plt.savefig("Assignment3_Deliverable3-2.png")
# print('Classifier Training Accuracy: {}'.format(μ))
###Output
_____no_output_____
###Markdown
**Retrain model on the entire dataset for optimal 𝜆**> - Given the optimal 𝜆, retrain your model on the entire dataset of 𝑁 = 183 observations to obtain an estimate of the (𝑝 + 1) × 𝐾 model parameter matrix as 𝐁̂ and make predictions of the probability for each of the 𝐾 = 5 classes for the 111 test individuals located in TestData_N111_p10.csv.- Add probability predictions to the test dataframe
###Code
# Create new test predictor and response variables ŷ
ŷ_test = predict(X_test)
Y_class = np.argmax(ŷ_test, 1)
# Re-label feature headers and add a new class prediction index column
new_colNames = ['{}_Probability'.format(c_name) for c_name in data] + ['ClassPredInd']
# Build an array of class probabilities with the predicted class index appended
i_prob = np.concatenate((ŷ_test, Y_class[:, None]), 1)
# Create a new dataframe for the class probabilities
df2 = pd.DataFrame(i_prob, columns = new_colNames)
# Concatenate the dependent Ancestry column to the dataframe
dep_preds = pd.concat([test_df['Ancestry'], df2], axis = 1)
# Add the predicted class name column
dep_preds['ClassPredName'] = dep_preds['ClassPredInd'].apply(lambda x: data[int(x)])
# Validate Probability predictions dataframe
dep_preds.head()
# Slice prediction and set new feature vector column variable
prob_1 = dep_preds.loc[:, 'Ancestry':'NativeAmerican_Probability']
# Unpivot (melt) the dataframe to long format
prob_2 = pd.melt(prob_1, id_vars = ['Ancestry'], var_name = 'Ancestry_Predictions', value_name = 'Probability')
# Strip the 'Probability' suffix from the prediction labels
prob_2['Ancestry_Predictions'] = prob_2['Ancestry_Predictions'].apply(lambda x: x.split('Prob')[0])
# Validate dataframe
prob_2.head(5)
# Validate dataframe features
print('Describe Columns:=', prob_2.columns, '\n')
print('Data Index values:=', prob_2.index, '\n')
print('Describe data:=', prob_2.describe(), '\n')
###Output
_____no_output_____
###Markdown
**Deliverable 4**> Given the optimal 𝜆, retrain your model on the entire dataset of 𝑁 = 183 observations to obtain an estimate of the (𝑝 + 1) × 𝐾 model parameter matrix as 𝐁̂ and make predictions of the probability for each of the 𝐾 = 5 classes for the 111 test individuals located in TestData_N111_p10.csv.
###Code
# Plot the probability predictions
sns.set(rc = {'figure.figsize':(15,8)})
sns.set_theme(style="whitegrid")
fig, ax = plt.subplots()
sns.barplot(data = prob_2[prob_2['Ancestry'] != 'Unknown'],color = 'r', x = 'Ancestry', y = 'Probability', hue = 'Ancestry_Predictions', palette = 'mako')
plt.xlabel('Ancestory Classes')
plt.ylabel('Probability')
plt.suptitle('Probabilty of Ancestor classes')
plt.savefig("Assignment3_Deliverable4.png")
plt.show()
###Output
_____no_output_____ |
final-project/Pyspark Linear Regression.ipynb | ###Markdown
Import Libraries
###Code
from pyspark.sql import SparkSession
spark= SparkSession.builder.appName('Customers').getOrCreate()
from pyspark.ml.regression import LinearRegression
dataset=spark.read.csv("Ecommerce_Customers.csv",inferSchema=True,header=True)
###Output
_____no_output_____
###Markdown
Perform EDA
###Code
dataset
dataset.show()
dataset.printSchema()
# Unlike sklearn, where a model can take the feature columns separately:
#   x1, x2, x3, x4, x5, y1 ----> model --> prediction
# Spark ML expects all features assembled into a single vector column:
#   [x1, x2, x3, x4, x5], y1 ----> model --> prediction
from pyspark.ml.linalg import Vectors
from pyspark.ml.feature import VectorAssembler
featureassembler=VectorAssembler(inputCols=["Avg Session Length","Time on App","Time on Website","Length of Membership"],outputCol="Independent Features")
output=featureassembler.transform(dataset)
output.show()
output.select("Independent Features").show()
output.columns
finalized_data=output.select("Independent Features","Yearly Amount Spent")
finalized_data.show()
###Output
+--------------------+-------------------+
|Independent Features|Yearly Amount Spent|
+--------------------+-------------------+
|[34.49726773,12.6...| 587.951054|
|[31.92627203,11.1...| 392.2049334|
|[33.00091476,11.3...| 487.5475049|
|[34.30555663,13.7...| 581.852344|
|[33.33067252,12.7...| 599.406092|
|[33.87103788,12.0...| 637.1024479|
|[32.0215955,11.36...| 521.5721748|
|[32.73914294,12.3...| 549.9041461|
|[33.9877729,13.38...| 570.200409|
|[31.93654862,11.8...| 427.1993849|
|[33.99257277,13.3...| 492.6060127|
|[33.87936082,11.5...| 522.3374046|
|[29.53242897,10.9...| 408.6403511|
|[33.19033404,12.9...| 573.4158673|
|[32.38797585,13.1...| 470.4527333|
|[30.73772037,12.6...| 461.7807422|
|[32.1253869,11.73...| 457.8476959|
|[32.33889932,12.0...| 407.7045475|
|[32.18781205,14.7...| 452.3156755|
|[32.61785606,13.9...| 605.0610388|
+--------------------+-------------------+
only showing top 20 rows
###Markdown
Create Model
###Code
train_data,test_data=finalized_data.randomSplit([0.75,0.25])
regressor=LinearRegression(featuresCol='Independent Features', labelCol='Yearly Amount Spent')
# fit the model
regressor=regressor.fit(train_data)
regressor.coefficients
regressor.intercept
###Output
_____no_output_____
###Markdown
Evaluate the model
###Code
pred_results=regressor.evaluate(test_data)
pred_results.predictions.show(40)
###Output
+--------------------+-------------------+------------------+
|Independent Features|Yearly Amount Spent| prediction|
+--------------------+-------------------+------------------+
|[29.53242897,10.9...| 408.6403511| 397.7314622481499|
|[30.39318454,11.8...| 319.9288698| 331.4354817055371|
|[30.73772037,12.6...| 461.7807422|451.00378856214934|
|[30.83643267,13.1...| 467.5019004| 471.6315593383845|
|[31.04722214,11.1...| 392.4973992| 387.6961791855133|
|[31.06621816,11.7...| 448.9332932|461.71709673379814|
|[31.35847719,12.8...| 495.1759504|491.12482994053585|
|[31.38958548,10.9...| 410.0696111| 409.3081755667929|
|[31.42522688,13.2...| 530.7667187| 534.6343118465913|
|[31.5261979,12.04...| 409.0945262|417.94636309938346|
|[31.53160448,13.3...| 436.5156057| 432.7242184024992|
|[31.57020083,13.3...| 545.9454921| 563.5592281627285|
|[31.62536013,13.1...| 376.3369008|380.96043463635397|
|[31.65480968,13.0...| 475.2634237| 468.5507644073118|
|[31.7207699,11.75...| 538.7749335| 545.5799494248888|
|[31.72420252,13.1...| 503.3878873| 509.5844377330677|
|[31.73663569,10.7...| 496.9334463|494.49593917588913|
|[31.82934646,11.2...| 385.152338|384.13548659460844|
|[31.86274111,14.0...| 556.2981412| 558.1705675935309|
|[31.86483255,13.4...| 439.8912805|450.32554539094735|
|[32.04781463,12.4...| 497.3895578|480.92538992642835|
|[32.05426185,13.1...| 561.8746577| 557.0238791456106|
|[32.0609144,12.62...| 627.6033187| 611.0063849335909|
|[32.07759004,10.3...| 401.0331352|402.78394552838677|
|[32.07894758,12.7...| 357.8637186|352.75949417674406|
|[32.08838063,11.9...| 512.1658664| 518.4098285688285|
|[32.11511907,11.9...| 350.0582002| 342.1792296532319|
|[32.1253869,11.73...| 457.8476959|437.83757808411997|
|[32.20465465,12.4...| 478.584286| 478.6402993700408|
|[32.21292383,11.7...| 513.1531119| 513.9194131613392|
|[32.21552742,12.2...| 438.417742| 445.7372873723375|
|[32.22729914,13.7...| 613.5993234| 621.4052644522508|
|[32.2559012,10.48...| 479.7319376| 478.3541922043364|
|[32.25997327,14.1...| 571.2160048| 573.522899819732|
|[32.29964716,12.1...| 547.1109824| 537.9549823334594|
|[32.37798966,11.9...| 408.2169018|435.56382374750706|
|[32.38696867,12.7...| 508.7719067|504.03454410803056|
|[32.39742194,12.0...| 483.7965221|481.15732246538573|
|[32.52976873,11.7...| 298.7620079| 305.9121405291926|
|[32.53379686,12.2...| 485.9231305| 500.660049606902|
+--------------------+-------------------+------------------+
only showing top 40 rows
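###Markdown
Beyond inspecting individual predictions, the summary object returned by `evaluate` above (`pred_results`) also exposes overall fit metrics. A minimal sketch:
###Code
# Overall fit metrics on the held-out test split
print("R2  :", pred_results.r2)
print("RMSE:", pred_results.rootMeanSquaredError)
print("MAE :", pred_results.meanAbsoluteError)
###Output
_____no_output_____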
|
_notebooks/2022-03-14-dh140Final.ipynb | ###Markdown
**G.G.: Good Game?** by Matthew Tran March 14, 2022 **Introduction** In the modern age, video games have become a pastime enjoyed by many people of various ages. A now lucrative industry, video games come in a variety of genres, experiences, and platforms. When asked about successful video games, a handful of titles might come to mind: ones that are iconic because of their characters, revolutionary because of the way they engage with storytelling, or perhaps nostalgic because of how long they have been around. This project seeks to define top performing video games and the traits that may have contributed to the success of these titles. Subsequently, I would like to conduct a more qualitative investigation on these titles, mainly examining reviews to paint a clearer picture of what consumers like about top games. **The Data** Initial exploration of defining what makes a good game will be conducted using the Video Games CORGIS dataset, which can be accessed [here.](https://corgis-edu.github.io/corgis/python/video_games/) This data was originally collected by Dr. Joe Cox, who conducted an empirical investigation of U.S. sales data of video games. Dr. Cox concluded that the major factors that predict a title's ability to attain "blockbuster" status were threefold: the company that produced the title, the console, and the critic reviews. I would like to use the data that Dr. Cox collected, which spans thousands of titles that were released between 2004 and 2010, and conduct my own analysis agnostic to his findings. The categories that I am interested in and their possible effects on the success of a game are: 1. Maximum number of players: how many people can play this game at one time? 2. Online Features: does the game support online play? 3. Genre: what genre does this game belong to? Within these categories, I would like to measure the success of a game using: 1. Review score: the typical review score out of 100 2. Sales: the total sales made on the game measured in millions of dollars 3. Completionist: the number of players who reported completing everything in the game **Data Exploration**
###Code
#hide
import pandas as pd
import seaborn as sns
#hide
import video_games
#hide
video_game = video_games.get_video_game()
#hide
df = pd.read_csv('video_games.csv')
#hide-input
df.head()
###Output
_____no_output_____
###Markdown
1. What are the top games by critic reviews?
###Code
#hide-input
df[['Title','Metrics.Review Score']].sort_values('Metrics.Review Score', ascending = False )
###Output
_____no_output_____
###Markdown
2. What are the top games by sales?
###Code
#hide-input
df[['Title', 'Metrics.Sales']].sort_values('Metrics.Sales', ascending = False)
###Output
_____no_output_____
###Markdown
3. What games have the most number of people who report completing the game? * will be skewed based on how many people played the game
###Code
#hide-input
df[['Title', 'Length.Completionists.Polled']].sort_values ('Length.Completionists.Polled', ascending = False)
###Output
_____no_output_____
###Markdown
4. What genre of game was popular on the market during this time period (2004-2010)?
###Code
#collapse-output
df['Metadata.Genres'].value_counts()
###Output
_____no_output_____
###Markdown
I would like to take the "top games" from questions 1-3 and get a closer look at these titles, since they are considered "top performing" in their respective categories.
###Code
#collapse-output
df.iloc[837]
#collapse-output
df.iloc[156]
#collapse-output
df.iloc[442]
#hide-input
df.iloc[[837,156,442]]
###Output
_____no_output_____
###Markdown
Observed similarities and differences: 1. Action as one of the genres, though none fall exclusively into action only. 2. All 3 were a sequel of some kind, and based off of a previously licensed entity. 3. Max players do not go above 2, two of the three games are only single-player. 4. All games came from different publishers. 5. All released for different consoles. Because I am interested in the intersection of video games and pedagogy, I wanted to see the games that were considered "Educational." * These were only the titles exclusively listed as 'Educational' as the genre
###Code
#hide-input
df[df['Metadata.Genres'] == 'Educational']
#collapse-output
df.iloc[549]
#collapse-output
df.iloc[1000]
###Output
_____no_output_____
###Markdown
Takeaways from initial data exploration: 1. Because of the saturation of Action games, I would like to take a closer look at the metrics for success in that specific genre, as well as the other genres that are well-represented in the market. 2. Because the games that were successful in these categories were all sequels of some kind, I think it would be interesting to investigate if there are any titles that were successful without being a sequel, which would speak to the degree to which a factor like nostalgia or investment in a story/universe contributes to a title's success. 3. Because these three games did not have a max player capacity above 2, are there any titles that support multiplayer that are also finding success? 4. Are there certain publishers or consoles that are finding more general success with their titles than others? **Further Exploration** Based on the preliminary findings from my first data exploration, I would like to take a closer look at the data in certain places. Defining Success Using the metrics I established previously, I would like to examine the top-performing games in the categories of critic reviews, sales, and number of completionists. 1. Critic Reviews
###Code
#hide
df_reviews = df[['Title','Metrics.Review Score']]
#hide
df_reviews_top = df_reviews[df_reviews['Metrics.Review Score'] > 90].sort_values('Metrics.Review Score', ascending = False)
#hide
df_reviews_top.index
#hide
df2 = df.iloc[df_reviews_top.index]
#hide-input
sns.regplot(x = df2['Metrics.Review Score'], y = df2['Metrics.Sales'])
###Output
_____no_output_____
###Markdown
Here, a successful game by critic review was defined as having a critic review score of over 90, of which there were 29 games. It does not seem to be the case, however, that a high critic score correlates very strongly with commercial success in sales. In fact, the games that received the highest critic scores were not the ones which had the highest number of sales, with a handful of games receiving more commercial success, and the highest seller (in this group) having the lowest critic score...
###Code
#hide-input
sns.regplot(x = df2['Metrics.Review Score'], y = df2['Length.Completionists.Polled'])
###Output
_____no_output_____
###Markdown
I observed an even weaker relationship between critic review scores and number of completionists in for the games. This could however be because the games which received the highest critic review scores, such as Grand Theft Auto IV, are known for being "open-world" games in which the player can freely navigate the world without the story being a main part of interacting with the game.
###Code
#collapse-output
df2[['Title', 'Metrics.Review Score', 'Metrics.Sales', 'Length.Completionists.Polled', 'Metadata.Genres']].sort_values('Metrics.Sales', ascending = False)
###Output
_____no_output_____
###Markdown
Notably, 27 out of the 29 titles that were considered top-performers as described by their critic review scores had Action as one of their genre descriptors. The two games that did not belong to this genre were considered as Role-Playing and Racing/ Driving games. 2. Commercial Sales
###Code
#hide
df_sales = df[['Title', 'Metrics.Sales']]
#hide
df['Metrics.Sales'].mean()
#hide
df_sales_top = df_sales[df_sales['Metrics.Sales'] > 4.69]
#hide
len(df_sales_top.index)
#hide
df3 = df.iloc[df_sales_top.index]
#hide-input
sns.regplot(x = df3['Metrics.Sales'], y =df3['Metrics.Review Score'] )
###Output
_____no_output_____
###Markdown
Very interestingly, for the top-performing games in terms of sales (14 games), there was actually a negative correlation between sales and critic scores. Shockingly, the game with the most sales had the lowest (sub-60) score of the group of games! However, the games with the highest critic scores in this set still had sales that were above the mean of the entire set, so these games were by no means unsuccessful.
###Code
#hide-input
sns.regplot(x = df3['Metrics.Sales'], y =df3['Length.Completionists.Polled'])
###Output
_____no_output_____
###Markdown
A similar negative relationship was observed between sales and number of completionist players. For similar reasons as with the critic scores grouping, the top game, Wii Play, is not a game that is well-known for having a definitive plot that players follow, but rather is a game that is often played socially with family and friends.
###Code
#hide-input
df3[['Title', 'Metrics.Review Score', 'Metrics.Sales', 'Length.Completionists.Polled', 'Metadata.Genres']].sort_values('Metrics.Sales', ascending = False)
###Output
_____no_output_____
###Markdown
The distribution of genres in this group was slightly more diverse than that of the critic scores group. While Action games still held a slight majority, at 8 out of 14 games being part of the Action genre, Role-Playing, Sports, and Driving games made up the remainder of this group. 3. Completionists (or not?) Following my analysis of the top-performing games under critic scores and commercial sales, I have decided not to continue with using number of completionists as a measure of success for a variety of reasons. Firstly, this number would already be skewed because of how the number of players would affect this figure, and completionist data as such would require standardization. While standardizing this data would not be very much work, I also chose not to use number of completionists in the remainder of my analysis because of how easily this number could be affected by the type of game. There are many games that are made simply to be enjoyed, and do not have the aspect of following a story or plot that other games have. In the former case, players would not be as motivated to "complete" the game, which would skew how the number of completionists is interpreted. Action Games and Reviews? Because of the overrepresentation of Action games in the games with high critic reviews, I wanted to explore the idea that critics tend to favor games that are of the Action genre.
###Code
#hide
df_action = df[df['Metadata.Genres'] == 'Action']
#collapse-output
df_action['Metrics.Review Score'].mean()
#hide
df_sports = df[df['Metadata.Genres'] == 'Sports']
#collapse-output
df_sports['Metrics.Review Score'].mean()
#hide
df_strategy = df[df['Metadata.Genres'] == 'Strategy']
#collapse-output
df_strategy['Metrics.Review Score'].mean()
###Output
_____no_output_____
###Markdown
Looking at the 3 most common genres and examining the mean critic review scores, there does not seem to be an inherent bias for Action games amongst critics, since Strategy games had a higher mean score, though I think this is one area of analysis that could benefit from more investigation. **Who's at the Top?** From both my own personal perspective, as well as how I assume businesses and consumers would define success, I think commercial sales is the best way to measure the success of a game. However, because I think critic reviews may encapsulate some measure of the quality of a game, I think it would be beneficial to include critic reviews as a measure of success in some way. Therefore, I decided that when choosing the "top games," I would choose those games that were present in both categories of top-performers: critic scores and sales. That is, games that both received above a 90 on critic scores and had sales above 4.69. To account for any phenomenon that goes beyond any conventional measure of success, I would like to include those titles that had extremely high sales, but perhaps were not deemed a "good game" by critics. These three games would be: Wii Play, Mario Kart Wii, and New Super Mario Bros, all titles that had commercial sales greater than 10 million dollars.
###Code
#hide
top_reviews = df2['Title'].tolist()
top_sales = df3['Title'].tolist()
#collapse-output
top_sales
#collapse-output
top_reviews
#collapse-output
print(set(top_sales).intersection(set(top_reviews)))
#hide
top_games = set(top_sales).intersection(set(top_reviews))
#hide
top_games_dict = {'Grand Theft Auto IV' : 837,
'Mario Kart DS' : 22,
'Halo 3' : 420,
'Call of Duty 4: Modern Warfare' : 421,
'Super Mario Galaxy' : 422,
'Super Smash Bros.: Brawl' : 835
}
#hide
target_indices = [837, 22, 420, 421, 422, 835, 156, 833, 157]
top_games = df.iloc[target_indices]
#hide
top_games = top_games[['Title', 'Metrics.Review Score', 'Metrics.Sales', 'Metadata.Genres', 'Metadata.Sequel?', 'Metadata.Publishers', 'Features.Max Players', 'Release.Console', 'Release.Year']]
#hide-input
top_games.sort_values('Metrics.Sales', ascending = False)
#hide-input
sns.countplot(x = top_games['Metadata.Genres'], palette = 'ch:.25')
#hide-input
sns.countplot(x = top_games['Metadata.Publishers'], palette = 'ch:.25')
#hide-input
sns.countplot(x = top_games['Features.Max Players'], palette = 'ch:.25')
#hide-input
sns.countplot(x = top_games['Release.Console'], palette = 'ch:.25')
###Output
_____no_output_____
###Markdown
**Discussion** Examining the commonalities among the top performing games, it is clear that Nintendo games have the highest sales. They make up 6 of the 9 games that I identified as top-performing games, and represent the 6 highest-earning games in the entire dataset. This seems to operate independently of critic reviews, as the three highest selling games did not receive scores above 90 from critics. I think that there are factors, especially metadata about each game beyond the scope of information that was included in this dataset, that contribute to why games from Nintendo, and especially those that came out at the top of this dataset, were considered top-performers by sales. Three of the top four games (Wii Play, Mario Kart Wii, and Mario Kart DS) are titles that do not have a strong storyline for the player to follow. Rather, they are multiplayer games that are centered around gaming as a social activity. With family or friends, players can compete on teams with or against each other. Because you are constantly playing with real people in a competitive environment, the gaming experience is kept dynamic and engaging, rather than relying on progressing through a story line. When considering what kinds of games are successful in the market, it may be helpful to consider whether a game is player-versus-player (PVP) or player-versus-environment (PVE). Wii Play, Mario Kart Wii, and Mario Kart DS are examples of PVP games; that is, players do not play by themselves against computers, but rather against other real players, and these kinds of games inherently carry with them a competitive aspect. In terms of motivation, players are driven to constantly return to the game in order to hone their skills. In many PVE games, players are instead motivated by the desire to progress in the game itself. The other game that was represented among the top-performing games, despite not having the same PVP quality as the others, was New Super Mario Bros. I think the reason that this title in particular was so successful is because of its recognisability. Just the name Mario in the gaming sphere is already enough for people, gamer or not, to have a mental image of what the game will entail. As a game that has had many remakes and iterations, I think that this game's success largely comes from its capacity to combine the nostalgia of players with the refreshing nature of a game remake or sequel. A game beloved by many, the Super Mario series of games is one that people are invested in because of their emotional attachment to the games and characters. When it comes to learning, motivation is a crucial part of pedagogy. In both the conventional sense and in the realm of possibly gamifying learning, I think that it would be helpful to incorporate a healthy amount of competition, whether it be against the self or against others. I think it is also important for students to have the ability to engage with other students as well, as this social aspect to learning and gaming is something that motivates students additionally. **Nintendo: A Closer Look** Looking at the top-performing games, it is clear to see that Nintendo has a firm grip on the gaming market when it comes to sales. As such, I would like to examine just what about these games makes them so desirable to players, and to do so I would like to look to Nintendo themselves to see how they market and describe these games.
###Code
#hide
from wordcloud import WordCloud, ImageColorGenerator
from PIL import Image
import matplotlib.pyplot as plt
#hide
myStopWords = list(punctuation) + stopwords.words('english')
#hide
super_mario_describe = '''
Bowser has taken over the Mushroom Kingdom, and it's up to Mario to put an end to his sinister reign! Battle Bowser's vile henchmen through 32 levels in the Original 1985 game mode. Move on to collecting special Red Coins and Yoshi Eggs in Challenge mode. Then, try to unlock a secret mode that's waiting to be found by super players like you! Every mode will give you the chance to beat your own score, and there's a lot more to do than just saving a princess. So get ready for a brick-smashin', pipe-warpin', turtle-stompin' good time!
Mario™ and Luigi™ star in their first ever Mushroom Kingdom adventure! Find out why Super Mario Bros. is instantly recognizable to millions of people across the globe, and what made it the best-selling game in the world for three decades straight. Jump over obstacles, grab coins, kick shells, and throw fireballs through eight action-packed worlds in this iconic NES classic. Only you and the Mario Bros. can rescue Princess Toadstool from the clutches of the evil Bowser.
Pick up items and throw them at your adversaries to clear levels in seven fantastical worlds. Even enemies can be picked up and tossed across the screen. Each character has a unique set of abilities: Luigi can jump higher and farther than any of the other characters, Toad can dig extremely fast and pull items out of the ground quicker than anyone, and the princess is the only one who can jump and hover temporarily. This unique installment in the Mario series will keep you coming back for more!
Relive the classic that brought renowned power-ups such as the Tanooki Suit to the world of Super Mario Bros.!
Bowser™ and the Koopalings are causing chaos yet again, but this time they’re going beyond the Mushroom Kingdom into the seven worlds that neighbor it. Now Mario™ and Luigi™ must battle a variety of enemies, including a Koopaling in each unique and distinctive world, on their way to ultimately taking on Bowser himself. Lucky for the brothers, they have more power-ups available than ever before. Fly above the action using the Super Leaf, swim faster by donning the Frog Suit, or defeat enemies using the Hammer Bros. Suit. Use the brand-new overworld map to take the chance to play a minigame in hopes of gaining extra lives or to find a Toad’s House where you can pick up additional items. All this (and more) combines into one of gaming’s most well-known and beloved titles—are you ready to experience gaming bliss?
'''
#hide-input
wc = WordCloud().generate_from_text(super_mario_describe)
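# Note: the myStopWords list built above is not passed in here; WordCloud accepts a
# stopwords argument (e.g. WordCloud(stopwords=set(myStopWords))) if common English
# words should be filtered out of the cloud.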
#Use matplotlib.pyplot to display the fitted wordcloud
#Turn axis off to get rid of axis numbers
plt.imshow(wc)
plt.axis('off')
plt.show()
#hide
mario_kart_describe = '''
Select one of eight characters from the Mario™ series—offering a variety of driving styles—and take on three championship cups in three different kart classes. Win enough, and you'll unlock a fourth circuit: the ultra-tough Special Cup. Crossing the finish line in first place isn't an easy task, though, as each track has unique obstacles to conquer and racers can obtain special power-ups that boost them to victory. With more than 15 tracks to master and nearly endless replay value, Super Mario Kart is classic gaming…with some banana peels thrown in for good measure!
The newest installment of the fan-favorite Mario Kart™ franchise brings Mushroom Kingdom racing fun into glorious 3D. For the first time, drivers explore new competitive kart possibilities, such as soaring through the skies or plunging into the depths of the sea. New courses, strategic new abilities and customizable karts bring the racing excitement to new heights.
FEATURES:
The Mario Kart franchise continues to evolve. New kart abilities add to the wild fun that the games are known for. On big jumps, a kart deploys a wing to let it glide over the track shortcut. When underwater, a propeller pops out to help the kart cruise across the sea floor.
Players can show their own style by customizing their vehicles with accessories that give them a competitive advantage. For instance, giant tires help a kart drive off-road, while smaller tires accelerate quickly on paved courses.
People can choose to race as one of their favorite Mushroom Kingdom characters or even as their Mii™ character.
New courses take players on wild rides over mountains, on city streets and through a dusty desert. Nintendo fans will recognize new courses on Wuhu Island and in the jungles from Donkey Kong Country™ Returns.
The game supports both SpotPass™ and StreetPass™ features.
Players can compete in local wireless matches or online over a broadband Internet connection.
The newest installment of the fan-favorite Mario Kart™ franchise brings Mushroom Kingdom racing fun into glorious 3D. For the first time, drivers explore new competitive kart possibilities, such as soaring through the skies or plunging into the depths of the sea. New courses, strategic new abilities and customizable karts bring the racing excitement to new heights.
FEATURES:
The Mario Kart franchise continues to evolve. New kart abilities add to the wild fun that the games are known for. On big jumps, a kart deploys a wing to let it glide over the track shortcut. When underwater, a propeller pops out to help the kart cruise across the sea floor.
Players can show their own style by customizing their vehicles with accessories that give them a competitive advantage. For instance, giant tires help a kart drive off-road, while smaller tires accelerate quickly on paved courses.
People can choose to race as one of their favorite Mushroom Kingdom characters or even as their Mii™ character.
New courses take players on wild rides over mountains, on city streets and through a dusty desert. Nintendo fans will recognize new courses on Wuhu Island and in the jungles from Donkey Kong Country™ Returns.
The game supports both SpotPass™ and StreetPass™ features.
Players can compete in local wireless matches or online over a broadband Internet connection.
'''
#hide-input
wc2 = WordCloud().generate_from_text(mario_kart_describe)
#Use matplotlib.pyplot to display the fitted wordcloud
#Turn axis off to get rid of axis numbers
plt.imshow(wc2)
plt.axis('off')
plt.show()
#hide
smash_bros_describe = '''
Super Smash Bros. for Nintendo 3DS is the first portable entry in the renowned series, in which game worlds collide. Up to four players battle each other locally or online using some of Nintendo’s most well-known and iconic characters across beautifully designed stages inspired by classic portable Nintendo games. It’s a genuine, massive Super Smash Bros. experience that’s available to play on the go, anytime, anywhere.
FEATURES:
Smash and crash through “Smash Run” mode, a new mode exclusive to the Nintendo 3DS version that gives up to four players five minutes to fight solo through a huge battlefield while taking down recognizable enemies from almost every major Nintendo franchise and multiple third-party partners. Defeated enemies leave behind power-ups to collect. Players who collect more power-ups have an advantage once time runs out and the battle with opponents begins.
Compete with classic characters from the Super Smash Bros. series like Mario, Link, Samus and Pikachu, along with new challengers like Mega Man, Little Mac and newly announced Palutena, the Goddess of Light from the Kid Icarus games. For the first time players can even compete as their own Mii characters.
Customize different aspects of your character when playing locally or online with friends in a variety of multiplayer modes.
View most elements of the high-energy action at silky-smooth 60 frames per second and in eye-popping stereoscopic 3D.
Fight against friends and family locally or online, or battle random challengers all over the world online in “For Fun” or “For Glory” modes.
Gaming icons clash in the ultimate brawl you can play anytime, anywhere! Smash rivals off the stage as new characters Simon Belmont and King K. Rool join Inkling, Ridley, and every fighter in Super Smash Bros. history. Enjoy enhanced speed and combat at new stages based on the Castlevania series, Super Mario Odyssey, and more!
Having trouble choosing a stage? Then select the Stage Morph option to transform one stage into another while battling—a series first! Plus, new echo fighters Dark Samus, Richter Belmont, and Chrom join the battle. Whether you play locally or online, savor the faster combat, new attacks, and new defensive options, like a perfect shield. Jam out to 900 different music compositions and go 1-on-1 with a friend, hold a 4-player free-for-all, kick it up to 8-player battles and more! Feel free to bust out your GameCube controllers—legendary couch competitions await—or play together anytime, anywhere!
'''
#hide-input
wc3 = WordCloud().generate_from_text(smash_bros_describe)
#Use matplotlib.pyplot to display the fitted wordcloud
#Turn axis off to get rid of axis numbers
plt.imshow(wc3)
plt.axis('off')
plt.show()
###Output
_____no_output_____ |
exercises/Jupyter/keyboard-shortcuts.ipynb | ###Markdown
Keyboard shortcuts In this notebook, you'll get some practice using keyboard shortcuts. These are key to becoming proficient at using notebooks and will greatly increase your work speed. First up, switching between edit mode and command mode. Edit mode allows you to type into cells while command mode will use key presses to execute commands such as creating new cells and opening the command palette. When you select a cell, you can tell which mode you're currently working in by the color of the box around the cell. In edit mode, the box and thick left border are colored green. In command mode, they are colored blue. Also in edit mode, you should see a cursor in the cell itself. By default, when you create a new cell or move to the next one, you'll be in command mode. To enter edit mode, press Enter/Return. To go back from edit mode to command mode, press Escape. > **Exercise:** Click on this cell, then press Shift + Enter to get to the next cell. Switch between edit and command mode a few times.
###Code
# mode practice
###Output
_____no_output_____
###Markdown
Help with commandsIf you ever need to look up a command, you can bring up the list of shortcuts by pressing `H` in command mode. The keyboard shortcuts are also available above in the Help menu. Go ahead and try it now. Creating new cellsOne of the most common commands is creating new cells. You can create a cell above the current cell by pressing `A` in command mode. Pressing `B` will create a cell below the currently selected cell. Above! > **Exercise:** Create a cell above this cell using the keyboard command. > **Exercise:** Create a cell below this cell using the keyboard command. And below! Switching between Markdown and codeWith keyboard shortcuts, it is quick and simple to switch between Markdown and code cells. To change from Markdown to a code cell, press `Y`. To switch from code to Markdown, press `M`.> **Exercise:** Switch the cell below between Markdown and code cells.
###Code
## Practice here
def fibo(n): # Recursive Fibonacci sequence!
if n == 0:
return 0
elif n == 1:
return 1
return fibo(n-1) + fibo(n-2)
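# Example: fibo(10) returns 55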
###Output
_____no_output_____
###Markdown
Line numbers A lot of times it is helpful to number the lines in your code for debugging purposes. You can turn on numbers by pressing `L` (in command mode of course) on a code cell. > **Exercise:** Turn line numbers on and off in the above code cell. Deleting cells Deleting cells is done by pressing `D` twice in a row, so `D`, `D`. This is to prevent accidental deletions: you have to press the button twice! > **Exercise:** Delete the cell below. Saving the notebook Notebooks are autosaved every once in a while, but you'll often want to save your work between those times. To save the notebook, press `S`. So easy! The Command Palette You can easily access the command palette by pressing Shift + Control/Command + `P`. > **Note:** This won't work in Firefox and Internet Explorer unfortunately. There is already a keyboard shortcut assigned to those keys in those browsers. However, it does work in Chrome and Safari. This will bring up the command palette where you can search for commands that aren't available through the keyboard shortcuts. For instance, there are buttons on the toolbar that move cells up and down (the up and down arrows), but there aren't corresponding keyboard shortcuts. To move a cell down, you can open up the command palette and type in "move" which will bring up the move commands. > **Exercise:** Use the command palette to move the cell below down one position.
###Code
# below this cell
# Move this cell down
###Output
_____no_output_____ |
ImageClassificationUsingCNN.ipynb | ###Markdown
###Code
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Conv2D, MaxPooling2D, Dropout, Flatten
###Output
_____no_output_____
###Markdown
2. Importing and Loading the data
###Code
from tensorflow.keras.datasets import fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
labelMap = ["T-shirt/top","Trouser","Pullover","Dress","Coat","Sandal","Shirt","Sneaker","Bag","Ankle boot"]
###Output
_____no_output_____
###Markdown
3. Explore the data
###Code
from tensorflow.keras.utils import to_categorical
print('Training data shape : ', train_images.shape, train_labels.shape)
print('Testing data shape : ', test_images.shape, test_labels.shape)
classes = np.unique(train_labels)
nClasses = len(classes)
print('Total number of outputs : ', nClasses)
print('Output Classes : ', classes)
plt.figure(figsize=[10,5])
# Display the first image in the training data
# plt.subplot(121);plt.imshow(train_images[0,:,:]);plt.title("Ground Truth : {}".format(train_labels[0]))
# plt.subplot(121);plt.imshow(test_images[0,:,:]);plt.title("Ground Truth : {}".format(test_labels[0]))
plt.subplot(121)
plt.imshow(train_images[0,:,:])
plt.title("Ground Truth : {}".format(train_labels[0]))
# Display the first image in testing data
plt.subplot(122)
plt.imshow(test_images[0,:,:])
plt.title("Ground Truth : {}".format(test_labels[0]))
###Output
_____no_output_____
###Markdown
4. Preprocess the data Perform normalization of data (i.e. convert the images to float and normalize the intensity values to lie between 0-1 and convert the labels to categorical variables to be used in Keras.
###Code
nDims = 1
nRows, nCols = train_images.shape[1:]
train_data = train_images.reshape(train_images.shape[0], nRows, nCols, nDims)
# Input shape to feed it to the neural network
test_data = test_images.reshape(test_images.shape[0], nRows, nCols, nDims)
input_shape = (nRows,nCols,nDims)
# Normalize the value between 0 and 1
# 1. Convert to float32
train_data = train_data.astype('float32')
test_data = test_data.astype('float32')
# 2. Scale the data to lie between 0 to 1
train_data /= 255
test_data /= 255
###Output
_____no_output_____
###Markdown
- The category labels are converted from integers to a one-hot (boolean) representation using `to_categorical` in Keras.
###Code
train_labels_one_hot = to_categorical(train_labels)
test_labels_one_hot = to_categorical(test_labels)
print('Original Label 0 :', train_labels[0])
print('After conversion to categorical (one-hot): ', train_labels_one_hot[0])
train_labels
###Output
Original Label 0 : 9
After conversion to categorical (one-hot): [0. 0. 0. 0. 0. 0. 0. 0. 0. 1.]
###Markdown
5. Model Architecture - The architecture is usually arrived at by trial and error - Alternatively, find a model built for a similar existing problem and try to reconfigure it for the problem you are trying to solve - For the latter, the existing problem and the new one should be similar. "For implementing a CNN, we will stack up Convolutional Layers, followed by Max Pooling layers. We will also include Dropout to avoid overfitting. Finally, we will add a fully connected (Dense) layer followed by a softmax layer. Given below is the model structure. We use 6 convolutional layers and 1 fully-connected layer. The first 2 convolutional layers have 32 filters / kernels with a window size of 3×3. The remaining conv layers have 64 filters. We also add a max pooling layer with window size 2×2 after each pair of conv layers. We add a dropout layer with a dropout ratio of 0.25 after every pooling layer. In the final line, we add the dense layer which performs the classification among 10 classes using a softmax layer."
###Code
def createModel():
    model = Sequential()
    # The first two layers have 32 filters of window size 3x3; bigger kernels are rarely
    # more efficient and can even produce worse results.
    model.add(Conv2D(32, (3, 3), padding='same', activation='relu', input_shape=input_shape))
    # Second conv layer to obtain hierarchical features
    model.add(Conv2D(32, (3, 3), activation='relu'))
    # Reduce the height and width to improve efficiency
    model.add(MaxPooling2D(pool_size=(2, 2)))
    # Dropout to prevent overfitting
    model.add(Dropout(0.25))
    model.add(Conv2D(64, (3, 3), padding='same', activation='relu'))
    model.add(Conv2D(64, (3, 3), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(0.25))
    model.add(Conv2D(64, (3, 3), padding='same', activation='relu'))
    model.add(Conv2D(64, (3, 3), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(0.25))
    model.add(Flatten())
    model.add(Dense(512, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(nClasses, activation='softmax'))
    return model
model1 = createModel()
batch_size = 256
epochs = 20
model1.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])
model1.summary()
###Output
Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_6 (Conv2D) (None, 28, 28, 32) 320
_________________________________________________________________
conv2d_7 (Conv2D) (None, 26, 26, 32) 9248
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 13, 13, 32) 0
_________________________________________________________________
dropout_4 (Dropout) (None, 13, 13, 32) 0
_________________________________________________________________
conv2d_8 (Conv2D) (None, 13, 13, 64) 18496
_________________________________________________________________
conv2d_9 (Conv2D) (None, 11, 11, 64) 36928
_________________________________________________________________
max_pooling2d_4 (MaxPooling2 (None, 5, 5, 64) 0
_________________________________________________________________
dropout_5 (Dropout) (None, 5, 5, 64) 0
_________________________________________________________________
conv2d_10 (Conv2D) (None, 5, 5, 64) 36928
_________________________________________________________________
conv2d_11 (Conv2D) (None, 3, 3, 64) 36928
_________________________________________________________________
max_pooling2d_5 (MaxPooling2 (None, 1, 1, 64) 0
_________________________________________________________________
dropout_6 (Dropout) (None, 1, 1, 64) 0
_________________________________________________________________
flatten_1 (Flatten) (None, 64) 0
_________________________________________________________________
dense_2 (Dense) (None, 512) 33280
_________________________________________________________________
dropout_7 (Dropout) (None, 512) 0
_________________________________________________________________
dense_3 (Dense) (None, 10) 5130
=================================================================
Total params: 177,258
Trainable params: 177,258
Non-trainable params: 0
_________________________________________________________________
###Markdown
6. Training the model
###Code
history = model1.fit(train_data, train_labels_one_hot, batch_size=batch_size, epochs=epochs, verbose=1,
                     validation_data=(test_data, test_labels_one_hot))
# test_loss,test_accuracy = model1.evaluate(test_data, test_labels_one_hot)
###Output
Epoch 1/20
235/235 [==============================] - 4s 17ms/step - loss: 0.8870 - accuracy: 0.6678 - val_loss: 0.5814 - val_accuracy: 0.7727
Epoch 2/20
235/235 [==============================] - 4s 16ms/step - loss: 0.5045 - accuracy: 0.8134 - val_loss: 0.4625 - val_accuracy: 0.8247
Epoch 3/20
235/235 [==============================] - 4s 15ms/step - loss: 0.4041 - accuracy: 0.8523 - val_loss: 0.3519 - val_accuracy: 0.8706
Epoch 4/20
235/235 [==============================] - 4s 16ms/step - loss: 0.3532 - accuracy: 0.8717 - val_loss: 0.2989 - val_accuracy: 0.8924
Epoch 5/20
235/235 [==============================] - 4s 15ms/step - loss: 0.3161 - accuracy: 0.8852 - val_loss: 0.2973 - val_accuracy: 0.8871
Epoch 6/20
235/235 [==============================] - 4s 15ms/step - loss: 0.2953 - accuracy: 0.8944 - val_loss: 0.2698 - val_accuracy: 0.9011
Epoch 7/20
235/235 [==============================] - 4s 15ms/step - loss: 0.2751 - accuracy: 0.9005 - val_loss: 0.2741 - val_accuracy: 0.9017
Epoch 8/20
235/235 [==============================] - 4s 16ms/step - loss: 0.2598 - accuracy: 0.9053 - val_loss: 0.3014 - val_accuracy: 0.8859
Epoch 9/20
235/235 [==============================] - 4s 16ms/step - loss: 0.2475 - accuracy: 0.9099 - val_loss: 0.2683 - val_accuracy: 0.9015
Epoch 10/20
235/235 [==============================] - 4s 16ms/step - loss: 0.2380 - accuracy: 0.9148 - val_loss: 0.2507 - val_accuracy: 0.9070
Epoch 11/20
235/235 [==============================] - 4s 16ms/step - loss: 0.2303 - accuracy: 0.9167 - val_loss: 0.2317 - val_accuracy: 0.9154
Epoch 12/20
235/235 [==============================] - 4s 16ms/step - loss: 0.2243 - accuracy: 0.9194 - val_loss: 0.2410 - val_accuracy: 0.9156
Epoch 13/20
235/235 [==============================] - 4s 15ms/step - loss: 0.2165 - accuracy: 0.9215 - val_loss: 0.2296 - val_accuracy: 0.9194
Epoch 14/20
235/235 [==============================] - 4s 15ms/step - loss: 0.2121 - accuracy: 0.9234 - val_loss: 0.2724 - val_accuracy: 0.9023
Epoch 15/20
235/235 [==============================] - 4s 15ms/step - loss: 0.2049 - accuracy: 0.9263 - val_loss: 0.2167 - val_accuracy: 0.9236
Epoch 16/20
235/235 [==============================] - 4s 15ms/step - loss: 0.1993 - accuracy: 0.9272 - val_loss: 0.2323 - val_accuracy: 0.9164
Epoch 17/20
235/235 [==============================] - 4s 15ms/step - loss: 0.1953 - accuracy: 0.9302 - val_loss: 0.2294 - val_accuracy: 0.9237
Epoch 18/20
235/235 [==============================] - 4s 15ms/step - loss: 0.1924 - accuracy: 0.9303 - val_loss: 0.2112 - val_accuracy: 0.9224
Epoch 19/20
235/235 [==============================] - 4s 15ms/step - loss: 0.1889 - accuracy: 0.9321 - val_loss: 0.2143 - val_accuracy: 0.9254
Epoch 20/20
235/235 [==============================] - 4s 15ms/step - loss: 0.1837 - accuracy: 0.9329 - val_loss: 0.2224 - val_accuracy: 0.9211
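###Markdown
 The commented-out `evaluate` call above can be run to get a single held-out score, as sketched below using the variables already defined in this notebook (exact numbers will differ between runs).
###Code
# Evaluate the trained model on the held-out test set
test_loss, test_accuracy = model1.evaluate(test_data, test_labels_one_hot, verbose=0)
print("Test loss: %.4f, Test accuracy: %.4f" % (test_loss, test_accuracy))
###Output
_____no_output_____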
###Markdown
7. Checking loss and accuracy curves 7.1 Training Loss vs Validation Loss - Training and validation loss are both going down
###Code
# Training vs validation loss curves
plt.figure(figsize=[8,6])
plt.plot(history.history['loss'],'r',linewidth=3.0)
plt.plot(history.history['val_loss'],'b',linewidth=3.0)
plt.legend(['Training loss', 'Validation Loss'],fontsize=18)
plt.xlabel('Epochs ',fontsize=16)
plt.ylabel('Loss',fontsize=16)
plt.title('Loss Curves',fontsize=16)
###Output
_____no_output_____
###Markdown
7.2 Training Accuracy vs Validation Accuracy - Training and validation accuracy are both increasing
###Code
plt.figure(figsize=[8,6])
plt.plot(history.history['accuracy'],'r',linewidth=3.0)
plt.plot(history.history['val_accuracy'],'b',linewidth=3.0)
plt.legend(['Training Accuracy', 'Validation Accuracy'],fontsize=18)
plt.xlabel('Epochs ',fontsize=16)
plt.ylabel('Accuracy',fontsize=16)
plt.title('Accuracy Curves',fontsize=16)
###Output
_____no_output_____
###Markdown
8. Draw Inference 8.1 On a centered Image
###Code
testSample = test_data[0,:]
plt.imshow(testSample.reshape(28,28));plt.show()
label = model1.predict_classes(testSample.reshape(1,28,28,nDims))[0]
print("Label = {}, Item = {}".format(label,labelMap[label]))
###Output
_____no_output_____
###Markdown
8.2 On a Shifted Up image
###Code
shiftUp = np.zeros(testSample.shape)
shiftUp[1:20,:] = testSample[6:25,:]
plt.imshow(shiftUp.reshape(28,28));plt.show()
label = model1.predict_classes(shiftUp.reshape(1,28,28,nDims))[0]
print("Label = {}, Item = {}".format(label,labelMap[label]))
###Output
_____no_output_____
###Markdown
8.3 On a shifted down image
###Code
shiftDown = np.zeros(testSample.shape)
shiftDown[10:27,:] = testSample[6:23,:]
plt.imshow(shiftDown.reshape(28,28));plt.show()
label = model1.predict_classes(shiftDown.reshape(1,28,28,nDims))[0]
print("Label = {}, Item = {}".format(label,labelMap[label]))
###Output
_____no_output_____
###Markdown
8.4 On a left-shifted image
###Code
# testSample.shape
# shiftDown = np.zeros((56,56,1))
# shiftDown[5:22,0:28] = testSample[6:23,0:28]
# plt.imshow(shiftDown);plt.show()
# label = model1.predict_classes(shiftDown.reshape(1,28,28,nDims))[0]
# print("Label = {}, Item = {}".format(label,labelMap[label]))
###Output
_____no_output_____ |
notebooks/Omics_terms.ipynb | ###Markdown
**Aims**: - extract the omics mentioned in multi-omics articles**NOTE**: the articles not in PMC/with no full text need to be analysed separately, or at least highlighted.
###Code
%run notebook_setup.ipynb
import pandas
pandas.set_option('display.max_colwidth', 100)
%vault from pubmed_derived_data import literature, literature_subjects
literature['title_abstract_text_subjects'] = (
literature['title']
+ ' ' + literature['abstract_clean'].fillna('')
+ ' ' + literature_subjects.apply(lambda x: ' '.join(x[x == True].index), axis=1)
+ ' ' + literature['full_text'].fillna('')
)
omics_features = literature.index.to_frame().drop(columns='uid').copy()
from functools import partial
from helpers.text_processing import check_usage
from pandas import Series
check_usage_in_input = partial(
check_usage,
data=literature,
column='title_abstract_text_subjects',
limit=5 # show only first 5 results
)
TERM_IN_AT_LEAST_N_ARTICLES = 5
###Output
_____no_output_____
###Markdown
Omics 1. Lookup by words which end with -ome
###Code
cellular_structures = {
# organelles
'peroxisome',
'proteasome',
'ribosome',
'exosome',
'nucleosome',
'polysome',
'autosome',
'autophagosome',
'endosome',
'lysosome',
# proteins and molecular complexes
'spliceosome',
'cryptochrome',
# chromosmes
'autosome',
'chromosome',
'x-chromosome',
'y-chromosome',
}
species = {
'trichome'
}
tools_and_methods = {
# dry lab
'dphenome',
'dgenome',
'reactome',
'rexposome',
'phytozome',
'rgenome',
'igenome', # iGenomes
# wet lab
'microtome'
}
not_an_ome = {
'outcome',
'middle-income',
'welcome',
'wellcome', # :)
'chrome',
'some',
'cumbersome',
'become',
'home',
'come',
'overcome',
'cytochrome',
'syndrome',
'ubiome',
    'biome', # this IS an ome, but more into environmental studies, rather than molecular biology!
'fluorochrome',
'post-genome',
'ubiquitin-proteasome', # UPS
*tools_and_methods,
*cellular_structures,
*species
}
from omics import get_ome_regexp
ome_re = get_ome_regexp()
get_ome_regexp??
ome_occurrences = (
literature['title_abstract_text_subjects'].str.lower()
.str.extractall(ome_re)[0]
.to_frame('term').reset_index()
)
ome_occurrences = ome_occurrences[~ome_occurrences.term.isin(not_an_ome)]
ome_occurrences.head(3)
###Output
_____no_output_____
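###Markdown
 The exact pattern returned by `get_ome_regexp` is not shown in this export. For orientation only, a rough stand-in with the same intent (a single capture group matching lower-cased words ending in "-ome", suitable for `str.extractall`) could look like the sketch below; this is an assumption, not the project's actual regexp.
###Code
import re
# Hypothetical stand-in for get_ome_regexp(): words ending in -ome, allowing internal
# hyphens (e.g. "x-chromosome", "host-microbiome"); one capture group for extractall.
ome_re_sketch = re.compile(r'\b([a-z][a-z-]*ome)\b')
###Output
_____no_output_____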
###Markdown
1.1 Harmonise hyphenation
###Code
from helpers.text_processing import report_hyphenation_trends, harmonise_hyphenation
hyphenation_rules = report_hyphenation_trends(ome_occurrences.term)
hyphenation_rules
ome_occurrences.term = harmonise_hyphenation(ome_occurrences.term, hyphenation_rules)
###Output
_____no_output_____
###Markdown
1.2 Fix typos
###Code
from helpers.text_processing import find_term_typos, create_typos_map
ome_counts = ome_occurrences.drop_duplicates(['uid', 'term']).term.sorted_value_counts()
potential_ome_typos = find_term_typos(ome_counts, TERM_IN_AT_LEAST_N_ARTICLES - 1)
potential_ome_typos
check_usage_in_input('1-metabolome')
check_usage_in_input('miRNAome')
check_usage_in_input('miRome')
check_usage_in_input('rexposome')
check_usage_in_input('glycol-proteome')
check_usage_in_input('rgenome')
check_usage_in_input('iGenomes')
check_usage_in_input('cancergenome')
is_typo_subset_or_variant = {
('transcritome', 'transcriptome'): True,
('transciptome', 'transcriptome'): True,
('tanscriptome', 'transcriptome'): True,
('trascriptome', 'transcriptome'): True,
('microbome', 'microbiome'): True,
('protenome', 'proteome'): True,
# (neither n- nor o- is frequent enough on its own)
('o-glycoproteome', 'glycoproteome'): True,
('n-glycoproteome', 'glycoproteome'): True,
('glycol-proteome', 'glycoproteome'): True, # note "glycol" instead of "glyco"
('mirome', 'mirnome'): True,
('1-metabolome', 'metabolome'): True
}
ome_typos_map = create_typos_map(potential_ome_typos, is_typo_subset_or_variant)
replaced = ome_occurrences.term[ome_occurrences.term.isin(ome_typos_map)]
replaced.value_counts()
len(replaced)
ome_occurrences.term = ome_occurrences.term.replace(ome_typos_map)
###Output
_____no_output_____
###Markdown
1.3 Replace synonymous and narrow terms
###Code
ome_replacements = {}
###Output
_____no_output_____
###Markdown
miRNAomics → miRNomics miRNAome is more popular name for -ome, while miRNomics is more popular for -omics.
###Code
ome_occurrences.term.value_counts().loc[['mirnome', 'mirnaome']]
###Output
_____no_output_____
###Markdown
As I use -omics later on, for consistency I will change miRNAome → miRNome
###Code
ome_replacements['miRNAome'] = 'miRNome'
###Output
_____no_output_____
###Markdown
Cancer genome → genome
###Code
ome_occurrences.term.value_counts().loc[['genome', 'cancer-genome']]
ome_replacements['cancer-genome'] = 'genome'
###Output
_____no_output_____
###Markdown
Host microbiome → microbiome
###Code
ome_occurrences.term.value_counts().loc[['microbiome', 'host-microbiome']]
ome_replacements['host-microbiome'] = 'microbiome'
###Output
_____no_output_____
###Markdown
Replace the values
###Code
ome_occurrences.term = ome_occurrences.term.replace(
{k.lower(): v.lower() for k, v in ome_replacements.items()}
)
###Output
_____no_output_____
###Markdown
1.4 Summarise popular \*ome terms
###Code
ome_counts = ome_occurrences.drop_duplicates(['uid', 'term']).term.sorted_value_counts()
ome_common_counts = ome_counts[ome_counts >= TERM_IN_AT_LEAST_N_ARTICLES]
ome_common_counts
ome_common_terms = Series(ome_common_counts.index)
ome_common_terms[ome_common_terms.str.endswith('some')]
###Output
_____no_output_____
###Markdown
2. Lookup by omics and adjectives
###Code
from omics import get_omics_regexp
omics_re = get_omics_regexp()
get_omics_regexp??
check_usage_in_input('integromics')
check_usage_in_input('meta-omics')
check_usage_in_input('post-genomic')
check_usage_in_input('3-omics')
multi_omic = {
'multi-omic',
'muti-omic',
'mutli-omic',
'multiomic',
'cross-omic',
'panomic',
'pan-omic',
'trans-omic',
'transomic',
'four-omic',
'multiple-omic',
'inter-omic',
'poly-omic',
'polyomic',
'integromic',
'integrated-omic',
'integrative-omic',
'3-omic'
}
tools = {
# MixOmics
'mixomic',
# MetaRbolomics
'metarbolomic',
# MinOmics
'minomic',
# LinkedOmics - TCGA portal
'linkedomic',
# Mergeomics - https://doi.org/10.1186/s12864-016-3198-9
'mergeomic'
}
vague = {
'single-omic'
}
adjectives = {
'economic',
'socio-economic',
'socioeconomic',
'taxonomic',
'syndromic',
'non-syndromic',
'agronomic',
'anatomic',
'autonomic',
'atomic',
'palindromic',
# temporal
'postgenomic',
'post-genomic'
}
not_an_omic = {
'non-omic', # this on was straightforward :)
*adjectives,
*multi_omic,
*tools,
*vague
}
omic_occurrences = (
literature['title_abstract_text_subjects'].str.lower()
.str.extractall(omics_re)[0]
.to_frame('term').reset_index()
)
omic_occurrences = omic_occurrences[~omic_occurrences.term.isin(not_an_omic)]
omic_occurrences.head(2)
###Output
_____no_output_____
###Markdown
2.1 Harmonise hyphenation
###Code
hyphenation_rules = report_hyphenation_trends(omic_occurrences.term)
hyphenation_rules
omic_occurrences.term = harmonise_hyphenation(omic_occurrences.term, hyphenation_rules)
###Output
_____no_output_____
###Markdown
2.2 Fix typos
###Code
omic_counts = omic_occurrences.drop_duplicates(['uid', 'term']).term.sorted_value_counts()
potential_omic_typos = find_term_typos(omic_counts, TERM_IN_AT_LEAST_N_ARTICLES - 1)
potential_omic_typos
check_usage_in_input('non-omic')
check_usage_in_input('C-metabolomics')
###Output
_____no_output_____
###Markdown
Not captured in the text abstract, but full version has 13C, so carbon-13, so type of metabolomics.
###Code
check_usage_in_input('miRNAomics')
check_usage_in_input('miRomics')
check_usage_in_input('MinOmics')
check_usage_in_input('onomic', words=True)
literature.loc[omic_occurrences[omic_occurrences.term == 'onomic'].uid].title_abstract_text_subjects
check_usage_in_input(r'\bonomic', words=False, highlight=' onomic')
check_usage_in_input(' ionomic', words=False)
check_usage_in_input('integratomic', words=False)
###Output
_____no_output_____
###Markdown
Note: integratomics has literally three hits in PubMed, two because of http://www.integratomics-time.com/
###Code
is_typo_subset_or_variant = {
('phoshphoproteomic', 'phosphoproteomic'): True,
('transriptomic', 'transcriptomic'): True,
('transcripomic', 'transcriptomic'): True,
('transciptomic', 'transcriptomic'): True,
('trancriptomic', 'transcriptomic'): True,
('trascriptomic', 'transcriptomic'): True,
('metageonomic', 'metagenomic'): True,
('metaobolomic', 'metabolomic'): True,
('metabotranscriptomic', 'metatranscriptomic'): False,
('mirnaomic', 'mirnomic'): True,
('metranscriptomic', 'metatranscriptomic'): True,
('metranscriptomic', 'transcriptomic'): False,
('miromic', 'mirnomic'): True,
('n-glycoproteomic', 'glycoproteomic'): True,
('onomic', 'ionomic'): False,
('c-metabolomic', 'metabolomic'): True,
('integratomic', 'interactomic'): False,
('pharmacoepigenomic', 'pharmacogenomic'): False,
('metobolomic', 'metabolomic'): True,
# how to treat single-cell?
('scepigenomic', 'epigenomic'): True,
#('epitranscriptomic', 'transcriptomic'): False
('epigenomomic', 'epigenomic'): True,
}
omic_typos_map = create_typos_map(potential_omic_typos, is_typo_subset_or_variant)
replaced = omic_occurrences.term[omic_occurrences.term.isin(omic_typos_map)]
replaced.value_counts()
len(replaced)
omic_occurrences.term = omic_occurrences.term.replace(omic_typos_map)
###Output
_____no_output_____
###Markdown
2.3 Popular *omic(s) terms:
###Code
omic_counts = omic_occurrences.drop_duplicates(['uid', 'term']).term.sorted_value_counts()
omic_counts[omic_counts >= TERM_IN_AT_LEAST_N_ARTICLES].add_suffix('s')
###Output
_____no_output_____
###Markdown
Crude overview
###Code
ome_terms = Series(ome_counts[ome_counts >= TERM_IN_AT_LEAST_N_ARTICLES].index)
omic_terms = Series(omic_counts[omic_counts >= TERM_IN_AT_LEAST_N_ARTICLES].index)
assert omics_features.index.name == 'uid'
for term in ome_terms:
mentioned_by_uid = set(ome_occurrences[ome_occurrences.term == term].uid)
omics_features['mentions_' + term] = omics_features.index.isin(mentioned_by_uid)
for term in omic_terms:
mentioned_by_uid = set(omic_occurrences[omic_occurrences.term == term].uid)
omics_features['mentions_' + term] = omics_features.index.isin(mentioned_by_uid)
from helpers.text_processing import prefix_remover
ome_terms_mentioned = omics_features['mentions_' + ome_terms].rename(columns=prefix_remover('mentions_'))
omic_terms_mentioned = omics_features['mentions_' + omic_terms].rename(columns=prefix_remover('mentions_'))
%R library(ComplexUpset);
%%R -i ome_terms_mentioned -w 800 -r 100
upset(ome_terms_mentioned, colnames(ome_terms_mentioned), min_size=10, width_ratio=0.1)
###Output
[1] "Dropping 22 empty groups"
###Markdown
Merge -ome and -omic terms
###Code
from warnings import warn
terms_associated_with_omic = {
omic + 's': [omic]
for omic in omic_terms
}
for ome in ome_terms:
assert ome.endswith('ome')
auto_generate_omic_term = ome[:-3] + 'omics'
omic = auto_generate_omic_term
if omic not in terms_associated_with_omic:
if omic in omic_counts.index:
warn(f'{omic} was removed at thresholding, but it is a frequent -ome!')
else:
print(f'Creating omic {omic}')
terms_associated_with_omic[omic] = []
terms_associated_with_omic[omic].append(ome)
from omics import add_entities_to_features
add_entities_to_omic_features = partial(
add_entities_to_features,
features=omics_features,
omics_terms=terms_associated_with_omic
)
omics = {k: [k] for k in terms_associated_with_omic}
add_entities_to_omic_features(omics, entity_type='ome_or_omic')
from omics import omics_by_entity, omics_by_entity_group
###Output
_____no_output_____
###Markdown
interactomics is a proper "omics", but it is difficult to assign to a single entity - by definition
###Code
check_usage_in_input('interactomics')
###Output
_____no_output_____
###Markdown
phylogenomics is not an omic on its own, but if used in context of metagenomics it can refer to actual omics data
###Code
check_usage_in_input('phylogenomics')
###Output
_____no_output_____
###Markdown
regulomics is both a name of a tool, group (@MIM UW), and omics:
###Code
check_usage_in_input('regulomics')
from functools import reduce
omics_mapped_to_entities = reduce(set.union, omics_by_entity.values())
set(terms_associated_with_omic) - omics_mapped_to_entities
assert omics_mapped_to_entities - set(terms_associated_with_omic) == set()
omics_mapped_to_entities_groups = reduce(set.union, omics_by_entity_group.values())
set(terms_associated_with_omic) - omics_mapped_to_entities_groups
add_entities_to_omic_features(omics_by_entity, entity_type='entity')
add_entities_to_omic_features(omics_by_entity_group, entity_type='entity_group')
###Output
_____no_output_____
###Markdown
Visualize the entities & entities groups
###Code
omic_entities = omics_features['entity_' + Series(list(omics_by_entity.keys()))].rename(columns=prefix_remover('entity_'))
omic_entities_groups = omics_features['entity_group_' + Series(list(omics_by_entity_group.keys()))].rename(columns=prefix_remover('entity_group_'))
%%R -i omic_entities -w 800 -r 100
upset(omic_entities, colnames(omic_entities), min_size=10, width_ratio=0.1)
%%R -i omic_entities_groups -w 800 -r 100
upset(omic_entities_groups, colnames(omic_entities_groups), min_size=10, width_ratio=0.1)
###Output
_____no_output_____
###Markdown
Number of omics mentioned in abstract vs the multi-omic term used
###Code
omes_or_omics_df = omics_features['ome_or_omic_' + Series(list(omics.keys()))].rename(columns=prefix_remover('ome_or_omic_'))
literature['omic_terms_detected'] = omes_or_omics_df.sum(axis=1)
lt = literature[['term', 'omic_terms_detected']]
literature.sort_values('omic_terms_detected', ascending=False)[['title', 'omic_terms_detected']].head(10)
%%R -i lt -w 800
(
ggplot(lt, aes(x=term, y=omic_terms_detected))
+ geom_violin(adjust=2)
+ geom_point()
+ theme_bw()
)
%vault store omics_features in pubmed_derived_data
###Output
_____no_output_____
###Markdown
Current limitations Patchy coverage Currently I only detected omic-describing terms in less than 70% of abstracts:
###Code
omic_entities.any(axis=1).mean()
###Output
_____no_output_____
###Markdown
Potential solution: select a random sample of 50 articles, annotate manually, and calculate sensitivity and specificity. If any omic is consistently omitted, reconsider how search terms are created. A minimal sketch of drawing such a sample is shown below.
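###Code
# Added sketch (not part of the original analysis): draw a random sample of articles to
# annotate manually; the manual labels could then be compared against the automatic
# detection in omic_entities to estimate sensitivity and specificity.
sample_for_annotation = literature.sample(n=50, random_state=0)[['title', 'abstract_clean']]
sample_for_annotation.head()
###Output
_____no_output_____
###Markdown
 Apostrophes Are we missing out on \*'omic terms, such as meta'omic used in [here](https://doi.org/10.1053/j.gastro.2014.01.049)?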
###Code
check_usage_in_input(
r'\w+\'omic',
words=False,
highlight='\'omic'
)
###Output
_____no_output_____
###Markdown
unlikely (but would be nice to get it in!) Fields of study
###Code
'genetics', 'epigenetics'
###Output
_____no_output_____ |
doc/source/Example 1.ipynb | ###Markdown
Example 1: The Necessity Gap for Minkowski Sums In this example, we look for an instance of the following problem. We are given two H-polytopes $\mathbb{A}, \mathbb{B}$. We manually find the H-polytope form of $\mathbb{C}:=\mathbb{A} \oplus \mathbb{B}$. Then we check both inclusions $\mathbb{C} \subseteq \mathbb{A} \oplus \mathbb{B}$ and $\mathbb{A} \oplus \mathbb{B} \subseteq \mathbb{C}$ using containment arguments. Let $\mathbb{A}$ be a triangle.
###Code
import numpy as np
import pypolycontain as pp
H=np.array([[1,1],[-1,1],[0,-1]])
h=np.array([[1,1,0]]).reshape(3,1)
A=pp.H_polytope(H,h)
pp.visualize([A],title=r'$\mathbb{A}$')
###Output
_____no_output_____
###Markdown
And let $\mathbb{B}$ be a tiny rectangle at the bottom of $\mathbb{A}$, as follows.
###Code
e=1/3
H=np.array([[1,0],[-1,0],[0,-1],[0,1]])
h=np.array([[e,e,1,0]]).reshape(4,1)
B=pp.H_polytope(H,h,color='green')
pp.visualize([A,B],title=r'$\mathbb{A}$ (top) and $\mathbb{B}$ (Bottom)')
###Output
_____no_output_____
###Markdown
The H-polytope form of the Minkowski sum $\mathbb{A} \oplus \mathbb{B}$ can be easily found. We call this H-polytope $C_H$.
###Code
H=np.array([[1,0],[-1,0],[0,-1],[1,1],[-1,1],[0,1]])
h=np.array([[1+e,1+e,1,1+e,1+e,1]]).reshape(6,1)
p_sum=pp.H_polytope(H,h,color='purple')
pp.visualize([p_sum],title=r"$\mathbb{A} \oplus \mathbb{B}$")
###Output
_____no_output_____
###Markdown
We can also call the AH-polytope form of $A\oplus B$.
###Code
C=pp.minkowski_sum(A,B)
pp.visualize([C],title=r"$\mathbb{A} \oplus \mathbb{B}$")
###Output
_____no_output_____
###Markdown
Now we run the following experiment. We find the largest$$\begin{array}{lll}\alpha^*_N = & \max. & \alpha \\& \text{subject to} & \alpha \mathbb{C} \subseteq_N (\mathbb{A} \oplus \mathbb{B})\end{array}$$where for $N=-1$ the condition is necessary and sufficient and for $N \ge 0$ it is only sufficient. What we expect is that for the necessary and sufficient condition, we obtain the largest possible $\alpha^*$, which is 1. However, as we drop necessity, we are going to observe conservativeness in the fact that $$\alpha^*_N \le 1, \quad N \ge 0.$$ Maximizing $\alpha$ with subset encoding: linear program We import the ```mathematicalprogram``` module from ```pydrake```. As the optimization solver, we import the Gurobi bindings of pydrake, but other solvers may also be used - they are often slower. A rough sketch of these imports is shown below; the exact module paths are an assumption and vary between pydrake releases.
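###Code
# Sketch of the solver imports described above (module paths are an assumption and
# differ between pydrake releases; pypolycontain sets the program up internally).
try:
    from pydrake.solvers import MathematicalProgram, GurobiSolver  # newer layout
except ImportError:
    from pydrake.solvers.mathematicalprogram import MathematicalProgram  # older layout
    from pydrake.solvers.gurobi import GurobiSolver
###Output
_____no_output_____
###Markdown
 The necessity-gap computation itself is then a single call: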
###Code
pp.necessity_gap_k(p_sum,C,[0,1,2])
###Output
==================================================
==================================================
Computing Necessity Gaps
==================================================
==================================================
k Theta.shape delta(X,Y,C)
0 (7, 9) 0.0
1 (7, 12) 0.18557078904567903
2 (7, 7) 0.2142857142857152
|
AAAI/Learnability/CIN/Linear/MNIST/MNIST_CIN_1k_Linear_m_5.ipynb | ###Markdown
generating CIN train and test data
###Code
m = 5
desired_num = 1000
np.random.seed(0)
bg_idx = np.random.randint(0,47335,m-1)
fg_idx = np.random.randint(0,12665)
bg_idx, fg_idx
for i in background_data[bg_idx]:
imshow(i)
imshow(torch.sum(background_data[bg_idx], axis = 0))
imshow(foreground_data[fg_idx])
tr_data = ( torch.sum(background_data[bg_idx], axis = 0) + foreground_data[fg_idx] )/m
tr_data.shape
imshow(tr_data)
foreground_label[fg_idx]
train_images =[] # list of CIN train images: each image is the average of (m-1) background images and one foreground image
train_label=[] # label of each CIN image = class of the foreground image it contains
for i in range(desired_num):
np.random.seed(i)
bg_idx = np.random.randint(0,47335,m-1)
fg_idx = np.random.randint(0,12665)
tr_data = ( torch.sum(background_data[bg_idx], axis = 0) + foreground_data[fg_idx] ) / m
label = (foreground_label[fg_idx].item())
train_images.append(tr_data)
train_label.append(label)
train_images = torch.stack(train_images)
train_images.shape, len(train_label)
imshow(train_images[0])
test_images =[] # list of CIN test images: foreground image only, scaled by 1/m
test_label=[] # label of each test image = class of the foreground image
for i in range(10000):
np.random.seed(i)
fg_idx = np.random.randint(0,12665)
tr_data = ( foreground_data[fg_idx] ) / m
label = (foreground_label[fg_idx].item())
test_images.append(tr_data)
test_label.append(label)
test_images = torch.stack(test_images)
test_images.shape, len(test_label)
imshow(test_images[0])
torch.sum(torch.isnan(train_images)), torch.sum(torch.isnan(test_images))
np.unique(train_label), np.unique(test_label)
###Output
_____no_output_____
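###Markdown
 As an added sanity check (not part of the original run), we can look at how many samples of each foreground class ended up in the train and test sets.
###Code
# Count the number of samples per class in each split
print("train class counts:", np.bincount(train_label))
print("test class counts:", np.bincount(test_label))
###Output
_____no_output_____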
###Markdown
creating dataloader
###Code
class CIN_Dataset(Dataset):
"""CIN_Dataset dataset."""
def __init__(self, list_of_images, labels):
"""
        Args:
            list_of_images (Tensor): stack of CIN images.
            labels (list): class label for each image.
"""
self.image = list_of_images
self.label = labels
def __len__(self):
return len(self.label)
def __getitem__(self, idx):
return self.image[idx] , self.label[idx]
batch = 250
train_data = CIN_Dataset(train_images, train_label)
train_loader = DataLoader( train_data, batch_size= batch , shuffle=True)
test_data = CIN_Dataset( test_images , test_label)
test_loader = DataLoader( test_data, batch_size= batch , shuffle=False)
train_loader.dataset.image.shape, test_loader.dataset.image.shape
###Output
_____no_output_____
###Markdown
model
###Code
class Classification(nn.Module):
def __init__(self):
super(Classification, self).__init__()
self.fc1 = nn.Linear(28*28, 2)
torch.nn.init.xavier_normal_(self.fc1.weight)
torch.nn.init.zeros_(self.fc1.bias)
def forward(self, x):
x = x.view(-1, 28*28)
x = self.fc1(x)
return x
###Output
_____no_output_____
###Markdown
training
###Code
torch.manual_seed(12)
classify = Classification().double()
classify = classify.to("cuda")
import torch.optim as optim
criterion = nn.CrossEntropyLoss()
optimizer_classify = optim.Adam(classify.parameters(), lr=0.001 ) #, momentum=0.9)
correct = 0
total = 0
count = 0
flag = 1
with torch.no_grad():
for data in train_loader:
inputs, labels = data
inputs = inputs.double()
inputs, labels = inputs.to("cuda"),labels.to("cuda")
outputs = classify(inputs)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the %d train images: %f %%' % ( desired_num , 100 * correct / total))
print("total correct", correct)
print("total train set images", total)
correct = 0
total = 0
count = 0
flag = 1
with torch.no_grad():
for data in test_loader:
inputs, labels = data
inputs = inputs.double()
inputs, labels = inputs.to("cuda"),labels.to("cuda")
outputs = classify(inputs)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the %d test images: %f %%' % ( 10000 , 100 * correct / total))
print("total correct", correct)
print("total test set images", total)
nos_epochs = 200
tr_loss = []
for epoch in range(nos_epochs): # loop over the dataset multiple times
epoch_loss = []
cnt=0
iteration = desired_num // batch
running_loss = 0
#training data set
for i, data in enumerate(train_loader):
inputs, labels = data
inputs = inputs.double()
inputs, labels = inputs.to("cuda"),labels.to("cuda")
inputs = inputs.double()
# zero the parameter gradients
optimizer_classify.zero_grad()
outputs = classify(inputs)
_, predicted = torch.max(outputs.data, 1)
# print(outputs)
# print(outputs.shape,labels.shape , torch.argmax(outputs, dim=1))
loss = criterion(outputs, labels)
loss.backward()
optimizer_classify.step()
running_loss += loss.item()
mini = 1
if cnt % mini == mini-1: # print every 40 mini-batches
# print('[%d, %5d] loss: %.3f' %(epoch + 1, cnt + 1, running_loss / mini))
epoch_loss.append(running_loss/mini)
running_loss = 0.0
cnt=cnt+1
tr_loss.append(np.mean(epoch_loss))
if(np.mean(epoch_loss) <= 0.001):
break;
else:
print('[Epoch : %d] loss: %.3f' %(epoch + 1, np.mean(epoch_loss) ))
print('Finished Training')
plt.plot(tr_loss)
correct = 0
total = 0
count = 0
flag = 1
with torch.no_grad():
for data in train_loader:
inputs, labels = data
inputs = inputs.double()
inputs, labels = inputs.to("cuda"),labels.to("cuda")
outputs = classify(inputs)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the %d train images: %f %%' % ( desired_num , 100 * correct / total))
print("total correct", correct)
print("total train set images", total)
correct = 0
total = 0
count = 0
flag = 1
with torch.no_grad():
for data in test_loader:
inputs, labels = data
inputs = inputs.double()
inputs, labels = inputs.to("cuda"),labels.to("cuda")
outputs = classify(inputs)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the %d test images: %f %%' % ( 10000 , 100 * correct / total))
print("total correct", correct)
print("total test set images", total)
###Output
_____no_output_____ |
[6w] [jsy07] [capstone_3w]optimization(D-9_D-0).ipynb | ###Markdown
Test output
###Code
l1=[1,2,3,4,5,6,7,8,9,10]
print(l1[4:6])
for line in lines[1:3]:
print(line)
###Output
[5, 6]
243,050120,2018-01-02,10250,12050,10150,11800,26086769,307823874200,0.145631,-0.00423729,10108,9308.5,8621.17,8720.42,88.8317,-13027931.942863163,300377268,-0.0063048,18544682818.902584,3787360.0,6437619.607910971,423.884,10797.5,1073.04,11611.7,7005.26,10379.8,8274.83,9327.33,13150,8020,10610,11.6865,9820,9964.87,10402.2,585.305,43.1332,0.824498,1.08595,0.66153,31.6801,139.587,-1158.5,181.035,10597.5,12990.0,99.3894,67.2394,0.617452,22.3595,45.645,73.6842,-26.3158,1259.47,9897.76,44.7853,6.24021,42.056
244,050120,2018-01-03,11950,12450,10900,11750,20460474,240410569500,-0.00423729,0.0723404,10538,9448.5,8675.83,8743.33,89.5348,-11047886.07189542,279916794,-0.000901337,15749296173.345072,4355959.690865422,3712347.1053974,422.088,10849.3,1107.11,11976.9,6920.06,10572.5,8362.5,9467.5,13150,8020,10610,11.5532,10120,10239.5,10676.9,684.352,44.7943,0.811475,1.12536,0.748867,31.8566,139.281,-1218.5,214.541,10715,12912.4,99.6947,66.769,0.606881,24.6104,44.5857,72.7096,-27.2904,1435.35,10067.0,44.1718,7.16213,36.4241
###Markdown
MIN_MAX SCALING
###Code
# Before min-max scaling, extract the min and max values over the entire period
min_list=[]
max_list=[]
for line in lines[1:]:
l_list=line.split(',')
    min_list.append(int(l_list[5])) # Low column
    max_list.append(int(l_list[4])) # High column
print(max(max_list))
all_max=max(max_list)
print(min(min_list))
all_min=min(min_list)
# min-max scaling
# x'= ( x - min(x) ) / ( max(x) - min(x) )
new_list=[]
new_list.append(lines[0])
for line in lines[1:]:
    l_list=line.split(',') # turn the row into a list here
mini_list=[]
    for l_num in range(len(l_list)): # iterate over the column indices, scaling every column except 0, 1, 2 and 8
if (l_num!=0) & (l_num!=1) & (l_num!=2) & (l_num!=8):
mini_list.append(str((float(l_list[l_num])- all_min)/(all_max-all_min)))
else:
mini_list.append(l_list[l_num])
new_row=",".join(mini_list)
new_list.append(new_row)
###Output
_____no_output_____
###Markdown
Filtering rows with trading value of at least 10 billion KRW; creating the column list for the data
###Code
Code_list=new_list[0].split(',') # header (column-name) list of the data
new_code=[]
new_code.append(Code_list[1])
new_code.append(Code_list[2])
for i in range(9, -1, -1):
for num1 in Code_list[3:]:
if num1=="Next_Change":
continue
else:
new_code.append('D-'+str(i)+'_'+num1)
new_code.append("Next_Change")
new_code_s=",".join(new_code)
###Output
_____no_output_____
###Markdown
Check the column names
###Code
print(new_code)
print(len(new_code_s.split(',')))
# uses new_list (the min-max scaled rows)
new_csv=[]
new_csv.append(new_code_s) # add the column list as the first row of the new data
standard=new_list[10].split(',') # rows must be at or after the 10th row, so store the 10th row as the reference
i=1 # index of the current row
for line in new_list[1:]:
new_data=[]
l_list=line.split(',')
date=l_list[2].replace("-","")
    # column 1 = stock code
    if int(date)>=int(standard[2].replace("-","")): # earliest date for which at least 9 earlier rows exist
now_code=l_list[1]
pre=new_list[i-9].split(",")
        if pre[1]==now_code: # check that the code of the row 9 steps earlier matches the current code
            if float(l_list[8])>=1.00E+10: # column 8: trading_value of at least 10 billion (KRW)
                new_data.extend(l_list[1:3]) # code and date
                for m in range(i-9,i): # i = index of the current row
                    # append the data from 9 days before up to the current day
                    d_list=(new_list[m]).split(',') # turn the data of row m into a list
                    # index..date columns of that day's data
                    # column 11 is Next_Change
new_data.extend(d_list[3:11])
new_data.extend(d_list[12:])
new_data.extend(l_list[3:11])
new_data.extend(l_list[12:])
new_data.append(l_list[11])
                new_data_str=",".join(new_data) # join the newly built row into a comma-separated string
                new_csv.append(new_data_str) # append the row that will be saved to the new file
i+=1
###Output
_____no_output_____
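###Markdown
 A quick added check (not in the original run): how many rows survived the 10-day-window and trading-value filters.
###Code
print(len(new_csv) - 1)  # number of generated samples, excluding the header row
###Output
_____no_output_____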
###Markdown
Test output
###Code
#print(len(new_csv[1].split(',')))
print(new_csv[1])
###Output
050120,2018-01-15,0.005914479678521959,0.006967815329193724,0.005855961031262417,0.006821518711044868,15.2655406508561,307823874200,-8.359644428995521e-05,-8.368414518593422e-05,0.0053635266145733666,0.004961310396364354,0.00501939015376945,-3.169855640349075e-05,-7.6238532205229355,175.77663022710502,-8.368535506481819e-05,10852.097440512918,2.2162281571834272,3.7671242285989823,0.00016436951716849334,0.006234869272267954,0.000544246826972649,0.00671132809825515,0.004015701723432681,0.005990436882664845,0.004758636913445654,0.005374545675852339,0.0076115204490486916,0.004609513844634162,0.006125146808656313,-7.684288386915933e-05,0.005662849495305927,0.0057476254595908265,0.006003545059650984,0.00025883090276131935,-5.844070042139278e-05,-8.319918050486377e-05,-8.304618233123077e-05,-8.329454717392971e-05,-6.514289961067544e-05,-1.9972414309681915e-06,-0.0007616201940829455,2.2257567485166982e-05,0.0061178319777488695,0.007517890613433424,-2.552033318177004e-05,-4.4334078275712944e-05,-8.332034102326878e-05,-7.059718864714836e-05,-5.6970829039527585e-05,-4.056266849712995e-05,-9.908131575667245e-05,0.000653343141058614,0.005708353595414947,-5.747391385001787e-05,-8.002997910299107e-05,-5.907106328967257e-05,0.0069092966819341815,0.0072018899182318945,0.006294850885708986,0.006792259387415097,11.973108926024823,240410569500,-8.368414518593422e-05,-8.363933295764362e-05,0.005445452720736726,0.004993296688956419,0.005032796775856611,-3.128711179460891e-05,-6.465157161714187,163.8034376194146,-8.36821930313654e-05,9216.27498985876,2.5489650045998125,2.1723316259917596,0.00016331852226371196,0.006265181931548397,0.000564184130093975,0.006925038198046999,0.0039658438359675505,0.006103202315933984,0.004809940211498095,0.00545657126371604,0.0076115204490486916,0.004609513844634162,0.006125146808656313,-7.69208892259563e-05,0.005838405437084554,0.0059083352205597075,0.006164295783672946,0.0003167918673124784,-5.746864717176452e-05,-8.320680138829638e-05,-8.302312013234577e-05,-8.324343874297265e-05,-6.503961419826234e-05,-2.1763084915823815e-06,-0.000796731382438671,4.186482543594929e-05,0.006186591388278832,0.007472480143160019,-2.5341675751686656e-05,-4.460934999242183e-05,-8.332652702947058e-05,-6.927999241598332e-05,-5.7590717069947924e-05,-4.1132991233321456e-05,-9.965163849286395e-05,0.0007562657378586973,0.005807390554036997,-5.7832925750955174e-05,-7.94904839901759e-05,-6.236677498468274e-05,0.006850778034674639,0.008196706921644117,0.006704481416525783,0.007289667889121208,35.49958305463828,764364560400,-8.363933295764362e-05,-8.370024291090477e-05,0.005563660388201002,0.0050321179595484,0.0050492814787896235,-3.0259933979262165e-05,-14.121948026407177,199.30310435571846,-8.369979641362618e-05,12210.338083047025,2.9517080164715006,2.517243449298065,0.00016331852226371196,0.006428156364166223,0.0006244992998243856,0.007224829227957635,0.0039024856965796436,0.006286775312387169,0.004866510187803895,0.005576633972298442,0.008196706921644117,0.004609513844634162,0.006555258866013949,-7.706701028816339e-05,0.006051530350403809,0.006120874947406366,0.0064036955696117346,0.0003973187926198623,-5.625520450219065e-05,-8.32428395120247e-05,-8.300830906272439e-05,-8.317922506096181e-05,-6.49510169663114e-05,1.6045227892093962e-05,-0.0010085688855182148,5.7966231229412415e-05,0.0065625736969213925,0.007472480143160019,-2.5252317777321335e-05,-4.240138291267203e-05,-8.327691023883216e-05,-6.736140004693196e-05,-5.6757118939735745e-05,-3.9959750874414886e-05,-9.847839813395739e-05,0.001080049413145746,0
.00589774334540573,-5.260920018468485e-05,-7.876381113223635e-05,-5.886613098696965e-05,0.007406705183640293,0.007640779772678463,0.006938556005563953,0.007055593300083038,8.154640792061594,170010147600,-8.370024291090477e-05,-8.365048760662829e-05,0.00566753098708669,0.005069177818857868,0.00506351906566787,-3.215043739762894e-05,-19.55843100889196,191.1483798819913,-8.371183814678465e-05,10000.019803587089,-1.3858602202591987,2.3090971742567192,0.00015547702353093325,0.006453670494371384,0.0006240721136993909,0.0074261918931777205,0.0039088466735367556,0.0064275126590463685,0.004951069633093934,0.005689282368273062,0.008196706921644117,0.004609513844634162,0.006722037010703646,-7.719738983425763e-05,0.0062528930156238935,0.006264655263723062,0.006564446293633698,0.0004362553449469441,-5.51283694305609e-05,-8.321678642507828e-05,-8.299935570969368e-05,-8.311619638155797e-05,-6.492708283958225e-05,-6.970156075084107e-06,0.00010474837859458105,7.232319614806855e-05,0.0065625736969213925,0.007472480143160019,-2.5207668049462304e-05,-4.464744563178779e-05,-8.332738315727999e-05,-6.674484757940542e-05,-5.6256491912430357e-05,-4.393135294527278e-05,-0.00010245000020481527,0.001363607370306585,0.005948771605816051,-5.373346043583518e-05,-7.846136920760486e-05,-6.376144990481943e-05,0.0072018899182318945,0.007757817067197548,0.007143371270972352,0.007435964507270064,9.638936435289787,211661434950,-8.365048760662829e-05,-8.37340363763615e-05,0.005791297926040623,0.00511248161782993,0.005081806142936477,-3.186176491069761e-05,-20.017431966842217,200.78739999894665,-8.371158973512705e-05,9466.497401980267,1.0257516732367915,0.2545905268048576,0.00015547702353093325,0.006490478723497636,0.000623674186898026,0.007677353927215677,0.0039052360730008418,0.0065770278027945,0.005048503180781072,0.005812774269584875,0.008196706921644117,0.004609513844634162,0.006722037010703646,-7.732057158673897e-05,0.006384559971957864,0.0064448341786351926,0.006797350509726677,0.0004911791917053328,-5.4049987798862045e-05,-8.32205743371154e-05,-8.300066067552756e-05,-8.305028448840366e-05,-6.495242141384563e-05,-6.643036836903266e-06,-1.901856035935131e-05,8.258268538561156e-05,0.0065625736969213925,0.007472480143160019,-2.5185372444856416e-05,-4.295350634956582e-05,-8.328931677723766e-05,-6.563515847142272e-05,-5.514375983479016e-05,-3.767524140404961e-05,-9.619388866359209e-05,0.0015509665232374622,0.005978733153212936,-6.811512022363486e-05,-7.803136833567701e-05,-6.750921814990956e-05,0.007406705183640293,0.007494483154529607,0.006528925474747155,0.006763000063785326,4.591443286360415,91801277100,-8.37340363763615e-05,-8.35066099738012e-05,0.005884342575183295,0.005141348866523062,0.005095511210124662,-3.278080026590874e-05,-22.382764034673468,196.19587303092067,-8.371609333022014e-05,7359.818330710005,-9.22955075200006,0.102555684881766,0.0001340738282957556,0.0064989639273502695,0.0006421192645142338,0.007769345240707678,0.003999322354064735,0.006720574044522157,0.0051060445666313795,0.005913309305576768,0.008196706921644117,0.004609513844634162,0.006722037010703646,-7.735258128678996e-05,0.006460166064217193,0.006493804177722,0.006867046218612792,0.0004739752945975,-5.408597676692666e-05,-8.320371921114523e-05,-8.30285155516231e-05,-8.300224067900356e-05,-6.521645755028068e-05,-3.640544527716479e-05,0.0009120131175399697,9.249223311254246e-05,0.0065625736969213925,0.007472480143160019,-2.517419538322984e-05,-4.874129315677087e-05,-8.352659584739975e-05,-6.687979157998592e-05,-5.710489526039921e-05,-4.87437509399557
7e-05,-0.00010726239819949825,0.0016042360478378237,0.005996639859274357,-7.474534147678828e-05,-7.82606912105577e-05,-7.233911322012315e-05,0.006733740740155554,0.008811152717869312,0.00664596276926624,0.008811152717869312,23.200553352328487,602627879200,-8.35066099738012e-05,-8.376828839393817e-05,0.006079502263793869,0.005205918341909241,0.00512652609317222,-3.2338340773979335e-05,0.8178724141341259,219.39651006491474,-8.366699975480686e-05,17908.73421707966,6.689921582695776,6.529362959387287,0.0001340738282957556,0.006683063591628791,0.000744930675884524,0.008298412330581201,0.003860598048871263,0.006982620546950388,0.005160952613355008,0.0060717778023556095,0.008811152717869312,0.004609513844634162,0.007029259908816244,-7.735258128678996e-05,0.006621092344180935,0.006850309885496563,0.0073190442500454984,0.0006175297289357741,-5.360085718114506e-05,-8.320033975926599e-05,-8.302766117937311e-05,-8.292600258535383e-05,-6.533525040421755e-05,4.924344166890496e-06,-1.4629661814885623e-05,0.00012879252038058188,0.006869796595033991,0.007472480143160019,-2.516857759309293e-05,-4.124891667354261e-05,-8.330232254659108e-05,-6.465210371610965e-05,-5.065777885452089e-05,-2.516301832160327e-05,-8.368166558114575e-05,0.0017133557693826927,0.006471460163138284,-6.745380099095478e-05,-7.709814220850544e-05,-7.021237002276961e-05,0.008050410303495261,0.011005601990102157,0.006733740740155554,0.007494483154529607,65.05996873933863,1439759321700,-8.376828839393817e-05,-8.364777444806675e-05,0.006214095152490817,0.005251170811835046,0.005146908138012719,-3.163371774232718e-05,-41.0701059632748,154.3364576439105,-8.373799668433346e-05,-5561.81610406617,7.492500293196455,-2.690322040721201,0.0001340738282957556,0.007207215115132512,0.0009908786984516552,0.008433414849808966,0.003994763751443216,0.007368258432390773,0.005133448849143024,0.0062508448629698095,0.011005601990102157,0.004609513844634162,0.008126484544932665,-7.69424240881478e-05,0.006733272590977478,0.006949440473954228,0.0074288252323044,0.0006169738017868083,-5.266479289958142e-05,-8.32186672144012e-05,-8.304083372687124e-05,-8.28658395641063e-05,-6.516981818841483e-05,5.6745532247578366e-06,-0.0007636683467370295,0.00014598412857249026,0.008018225047502513,0.007472480143160019,-2.516582721667173e-05,-4.890046387731682e-05,-8.353136160603258e-05,-6.652733376754169e-05,-5.860069040300036e-05,-6.0845114599992864e-05,-0.00011936376185953534,0.0018760024975758652,0.0065428529127949265,-7.104386148168044e-05,-7.720686985511367e-05,-5.971582174517821e-05,0.008869671365128855,0.008928190012388398,0.007318927212750979,0.007933373008976174,33.70424968268264,789063638200,-8.364777444806675e-05,-8.360905096213434e-05,0.006376484398636048,0.005305885747022718,0.005170807153553516,-4.028166195298963e-05,-49.03658475848438,188.04079100825874,-8.374491546103623e-05,-1156.0923835453605,-3.5625083825161044,-7.678695127832216,0.0001466851819666596,0.007413844458605957,0.0010290738195179584,0.008599958919909624,0.0041530040254977455,0.007607716736976821,0.005227178166458633,0.006417447451717727,0.011005601990102157,0.004609513844634162,0.008228892177636865,-7.67679800006671e-05,0.006933172290016075,0.007100769695767405,0.007584192240778485,0.0006435646751015444,-5.179561543183543e-05,-8.314195980120045e-05,-8.305194641798583e-05,-8.281227159440492e-05,-6.503048528928987e-05,-1.5961546226512807e-05,-0.0012010952350021097,0.00016547786034758907,0.008119169714025223,0.007472480143160019,-2.5164422769137496e-05,-4.7457335517249254e-05,-8.338316254666131e-05,-6.72
1615676443377e-05,-5.945436042922257e-05,-5.638488182451779e-05,-0.00011490352908406029,0.0019432170158181754,0.006644265728495714,-5.868934615359857e-05,-7.709100293353979e-05,-6.019707910024069e-05,0.007933373008976174,0.009337820543205195,0.007757817067197548,0.008928190012388398,34.09099005943739,897154258000,-8.360905096213434e-05,-8.372156464818297e-05,0.006588614494951889,0.0053783493879242085,0.005203723892637008,-3.674046453272567e-05,-32.62236406832369,222.1318647493617,-8.371731689661569e-05,7288.324424988746,1.9662157202846886,6.182183960467609,0.0001466851819666596,0.007607950811565859,0.0010624470040500756,0.008932578910932864,0.0042446266715120105,0.0078789506670248,0.005355041410720733,0.006616996038872767,0.011005601990102157,0.004638773168263933,0.008430781510682286,-7.69660071029934e-05,0.007238932221947184,0.007381951795849507,0.00789434107125406,0.0007354682106226559,-5.0920234987479935e-05,-8.31715269329148e-05,-8.303996179902707e-05,-8.274777234139545e-05,-6.488500793220264e-05,-1.345928886969251e-08,-0.0010738171772126047,0.00018607291306411243,0.008228892177636865,0.007472480143160019,-2.5163720545370382e-05,-4.457634547536745e-05,-8.325012742435441e-05,-6.65225352384664e-05,-5.712666419717975e-05,-4.6274907730722935e-05,-0.00010479355499026543,0.002049837991125062,0.006855927675633478,-4.349416013159673e-05,-7.646947638099619e-05,-6.094600074786831e-05,0.007986039791509764
###Markdown
Remove the `\n` (line-break) markers from every row string.
###Code
# Strip the stray newline characters out of every row string
error = "\n"
for i in range(len(new_csv)):
    new_csv[i] = new_csv[i].replace(error, "")
###Output
_____no_output_____
###Markdown
Store the newly created strings back in list form.
###Code
new_array=[]
for new in new_csv:
    new_array.append([new])
print(new_array[1])
###Output
['050120,2018-01-15,0.005914479678521959,0.006967815329193724,0.005855961031262417,0.006821518711044868,15.2655406508561,307823874200,-8.359644428995521e-05,-8.368414518593422e-05,0.0053635266145733666,0.004961310396364354,0.00501939015376945,-3.169855640349075e-05,-7.6238532205229355,175.77663022710502,-8.368535506481819e-05,10852.097440512918,2.2162281571834272,3.7671242285989823,0.00016436951716849334,0.006234869272267954,0.000544246826972649,0.00671132809825515,0.004015701723432681,0.005990436882664845,0.004758636913445654,0.005374545675852339,0.0076115204490486916,0.004609513844634162,0.006125146808656313,-7.684288386915933e-05,0.005662849495305927,0.0057476254595908265,0.006003545059650984,0.00025883090276131935,-5.844070042139278e-05,-8.319918050486377e-05,-8.304618233123077e-05,-8.329454717392971e-05,-6.514289961067544e-05,-1.9972414309681915e-06,-0.0007616201940829455,2.2257567485166982e-05,0.0061178319777488695,0.007517890613433424,-2.552033318177004e-05,-4.4334078275712944e-05,-8.332034102326878e-05,-7.059718864714836e-05,-5.6970829039527585e-05,-4.056266849712995e-05,-9.908131575667245e-05,0.000653343141058614,0.005708353595414947,-5.747391385001787e-05,-8.002997910299107e-05,-5.907106328967257e-05,0.0069092966819341815,0.0072018899182318945,0.006294850885708986,0.006792259387415097,11.973108926024823,240410569500,-8.368414518593422e-05,-8.363933295764362e-05,0.005445452720736726,0.004993296688956419,0.005032796775856611,-3.128711179460891e-05,-6.465157161714187,163.8034376194146,-8.36821930313654e-05,9216.27498985876,2.5489650045998125,2.1723316259917596,0.00016331852226371196,0.006265181931548397,0.000564184130093975,0.006925038198046999,0.0039658438359675505,0.006103202315933984,0.004809940211498095,0.00545657126371604,0.0076115204490486916,0.004609513844634162,0.006125146808656313,-7.69208892259563e-05,0.005838405437084554,0.0059083352205597075,0.006164295783672946,0.0003167918673124784,-5.746864717176452e-05,-8.320680138829638e-05,-8.302312013234577e-05,-8.324343874297265e-05,-6.503961419826234e-05,-2.1763084915823815e-06,-0.000796731382438671,4.186482543594929e-05,0.006186591388278832,0.007472480143160019,-2.5341675751686656e-05,-4.460934999242183e-05,-8.332652702947058e-05,-6.927999241598332e-05,-5.7590717069947924e-05,-4.1132991233321456e-05,-9.965163849286395e-05,0.0007562657378586973,0.005807390554036997,-5.7832925750955174e-05,-7.94904839901759e-05,-6.236677498468274e-05,0.006850778034674639,0.008196706921644117,0.006704481416525783,0.007289667889121208,35.49958305463828,764364560400,-8.363933295764362e-05,-8.370024291090477e-05,0.005563660388201002,0.0050321179595484,0.0050492814787896235,-3.0259933979262165e-05,-14.121948026407177,199.30310435571846,-8.369979641362618e-05,12210.338083047025,2.9517080164715006,2.517243449298065,0.00016331852226371196,0.006428156364166223,0.0006244992998243856,0.007224829227957635,0.0039024856965796436,0.006286775312387169,0.004866510187803895,0.005576633972298442,0.008196706921644117,0.004609513844634162,0.006555258866013949,-7.706701028816339e-05,0.006051530350403809,0.006120874947406366,0.0064036955696117346,0.0003973187926198623,-5.625520450219065e-05,-8.32428395120247e-05,-8.300830906272439e-05,-8.317922506096181e-05,-6.49510169663114e-05,1.6045227892093962e-05,-0.0010085688855182148,5.7966231229412415e-05,0.0065625736969213925,0.007472480143160019,-2.5252317777321335e-05,-4.240138291267203e-05,-8.327691023883216e-05,-6.736140004693196e-05,-5.6757118939735745e-05,-3.9959750874414886e-05,-9.847839813395739e-05,0.001080049413145746
,0.00589774334540573,-5.260920018468485e-05,-7.876381113223635e-05,-5.886613098696965e-05,0.007406705183640293,0.007640779772678463,0.006938556005563953,0.007055593300083038,8.154640792061594,170010147600,-8.370024291090477e-05,-8.365048760662829e-05,0.00566753098708669,0.005069177818857868,0.00506351906566787,-3.215043739762894e-05,-19.55843100889196,191.1483798819913,-8.371183814678465e-05,10000.019803587089,-1.3858602202591987,2.3090971742567192,0.00015547702353093325,0.006453670494371384,0.0006240721136993909,0.0074261918931777205,0.0039088466735367556,0.0064275126590463685,0.004951069633093934,0.005689282368273062,0.008196706921644117,0.004609513844634162,0.006722037010703646,-7.719738983425763e-05,0.0062528930156238935,0.006264655263723062,0.006564446293633698,0.0004362553449469441,-5.51283694305609e-05,-8.321678642507828e-05,-8.299935570969368e-05,-8.311619638155797e-05,-6.492708283958225e-05,-6.970156075084107e-06,0.00010474837859458105,7.232319614806855e-05,0.0065625736969213925,0.007472480143160019,-2.5207668049462304e-05,-4.464744563178779e-05,-8.332738315727999e-05,-6.674484757940542e-05,-5.6256491912430357e-05,-4.393135294527278e-05,-0.00010245000020481527,0.001363607370306585,0.005948771605816051,-5.373346043583518e-05,-7.846136920760486e-05,-6.376144990481943e-05,0.0072018899182318945,0.007757817067197548,0.007143371270972352,0.007435964507270064,9.638936435289787,211661434950,-8.365048760662829e-05,-8.37340363763615e-05,0.005791297926040623,0.00511248161782993,0.005081806142936477,-3.186176491069761e-05,-20.017431966842217,200.78739999894665,-8.371158973512705e-05,9466.497401980267,1.0257516732367915,0.2545905268048576,0.00015547702353093325,0.006490478723497636,0.000623674186898026,0.007677353927215677,0.0039052360730008418,0.0065770278027945,0.005048503180781072,0.005812774269584875,0.008196706921644117,0.004609513844634162,0.006722037010703646,-7.732057158673897e-05,0.006384559971957864,0.0064448341786351926,0.006797350509726677,0.0004911791917053328,-5.4049987798862045e-05,-8.32205743371154e-05,-8.300066067552756e-05,-8.305028448840366e-05,-6.495242141384563e-05,-6.643036836903266e-06,-1.901856035935131e-05,8.258268538561156e-05,0.0065625736969213925,0.007472480143160019,-2.5185372444856416e-05,-4.295350634956582e-05,-8.328931677723766e-05,-6.563515847142272e-05,-5.514375983479016e-05,-3.767524140404961e-05,-9.619388866359209e-05,0.0015509665232374622,0.005978733153212936,-6.811512022363486e-05,-7.803136833567701e-05,-6.750921814990956e-05,0.007406705183640293,0.007494483154529607,0.006528925474747155,0.006763000063785326,4.591443286360415,91801277100,-8.37340363763615e-05,-8.35066099738012e-05,0.005884342575183295,0.005141348866523062,0.005095511210124662,-3.278080026590874e-05,-22.382764034673468,196.19587303092067,-8.371609333022014e-05,7359.818330710005,-9.22955075200006,0.102555684881766,0.0001340738282957556,0.0064989639273502695,0.0006421192645142338,0.007769345240707678,0.003999322354064735,0.006720574044522157,0.0051060445666313795,0.005913309305576768,0.008196706921644117,0.004609513844634162,0.006722037010703646,-7.735258128678996e-05,0.006460166064217193,0.006493804177722,0.006867046218612792,0.0004739752945975,-5.408597676692666e-05,-8.320371921114523e-05,-8.30285155516231e-05,-8.300224067900356e-05,-6.521645755028068e-05,-3.640544527716479e-05,0.0009120131175399697,9.249223311254246e-05,0.0065625736969213925,0.007472480143160019,-2.517419538322984e-05,-4.874129315677087e-05,-8.352659584739975e-05,-6.687979157998592e-05,-5.710489526039921e-05,-4.874375093995
577e-05,-0.00010726239819949825,0.0016042360478378237,0.005996639859274357,-7.474534147678828e-05,-7.82606912105577e-05,-7.233911322012315e-05,0.006733740740155554,0.008811152717869312,0.00664596276926624,0.008811152717869312,23.200553352328487,602627879200,-8.35066099738012e-05,-8.376828839393817e-05,0.006079502263793869,0.005205918341909241,0.00512652609317222,-3.2338340773979335e-05,0.8178724141341259,219.39651006491474,-8.366699975480686e-05,17908.73421707966,6.689921582695776,6.529362959387287,0.0001340738282957556,0.006683063591628791,0.000744930675884524,0.008298412330581201,0.003860598048871263,0.006982620546950388,0.005160952613355008,0.0060717778023556095,0.008811152717869312,0.004609513844634162,0.007029259908816244,-7.735258128678996e-05,0.006621092344180935,0.006850309885496563,0.0073190442500454984,0.0006175297289357741,-5.360085718114506e-05,-8.320033975926599e-05,-8.302766117937311e-05,-8.292600258535383e-05,-6.533525040421755e-05,4.924344166890496e-06,-1.4629661814885623e-05,0.00012879252038058188,0.006869796595033991,0.007472480143160019,-2.516857759309293e-05,-4.124891667354261e-05,-8.330232254659108e-05,-6.465210371610965e-05,-5.065777885452089e-05,-2.516301832160327e-05,-8.368166558114575e-05,0.0017133557693826927,0.006471460163138284,-6.745380099095478e-05,-7.709814220850544e-05,-7.021237002276961e-05,0.008050410303495261,0.011005601990102157,0.006733740740155554,0.007494483154529607,65.05996873933863,1439759321700,-8.376828839393817e-05,-8.364777444806675e-05,0.006214095152490817,0.005251170811835046,0.005146908138012719,-3.163371774232718e-05,-41.0701059632748,154.3364576439105,-8.373799668433346e-05,-5561.81610406617,7.492500293196455,-2.690322040721201,0.0001340738282957556,0.007207215115132512,0.0009908786984516552,0.008433414849808966,0.003994763751443216,0.007368258432390773,0.005133448849143024,0.0062508448629698095,0.011005601990102157,0.004609513844634162,0.008126484544932665,-7.69424240881478e-05,0.006733272590977478,0.006949440473954228,0.0074288252323044,0.0006169738017868083,-5.266479289958142e-05,-8.32186672144012e-05,-8.304083372687124e-05,-8.28658395641063e-05,-6.516981818841483e-05,5.6745532247578366e-06,-0.0007636683467370295,0.00014598412857249026,0.008018225047502513,0.007472480143160019,-2.516582721667173e-05,-4.890046387731682e-05,-8.353136160603258e-05,-6.652733376754169e-05,-5.860069040300036e-05,-6.0845114599992864e-05,-0.00011936376185953534,0.0018760024975758652,0.0065428529127949265,-7.104386148168044e-05,-7.720686985511367e-05,-5.971582174517821e-05,0.008869671365128855,0.008928190012388398,0.007318927212750979,0.007933373008976174,33.70424968268264,789063638200,-8.364777444806675e-05,-8.360905096213434e-05,0.006376484398636048,0.005305885747022718,0.005170807153553516,-4.028166195298963e-05,-49.03658475848438,188.04079100825874,-8.374491546103623e-05,-1156.0923835453605,-3.5625083825161044,-7.678695127832216,0.0001466851819666596,0.007413844458605957,0.0010290738195179584,0.008599958919909624,0.0041530040254977455,0.007607716736976821,0.005227178166458633,0.006417447451717727,0.011005601990102157,0.004609513844634162,0.008228892177636865,-7.67679800006671e-05,0.006933172290016075,0.007100769695767405,0.007584192240778485,0.0006435646751015444,-5.179561543183543e-05,-8.314195980120045e-05,-8.305194641798583e-05,-8.281227159440492e-05,-6.503048528928987e-05,-1.5961546226512807e-05,-0.0012010952350021097,0.00016547786034758907,0.008119169714025223,0.007472480143160019,-2.5164422769137496e-05,-4.7457335517249254e-05,-8.338316254666131e-05,-6.
721615676443377e-05,-5.945436042922257e-05,-5.638488182451779e-05,-0.00011490352908406029,0.0019432170158181754,0.006644265728495714,-5.868934615359857e-05,-7.709100293353979e-05,-6.019707910024069e-05,0.007933373008976174,0.009337820543205195,0.007757817067197548,0.008928190012388398,34.09099005943739,897154258000,-8.360905096213434e-05,-8.372156464818297e-05,0.006588614494951889,0.0053783493879242085,0.005203723892637008,-3.674046453272567e-05,-32.62236406832369,222.1318647493617,-8.371731689661569e-05,7288.324424988746,1.9662157202846886,6.182183960467609,0.0001466851819666596,0.007607950811565859,0.0010624470040500756,0.008932578910932864,0.0042446266715120105,0.0078789506670248,0.005355041410720733,0.006616996038872767,0.011005601990102157,0.004638773168263933,0.008430781510682286,-7.69660071029934e-05,0.007238932221947184,0.007381951795849507,0.00789434107125406,0.0007354682106226559,-5.0920234987479935e-05,-8.31715269329148e-05,-8.303996179902707e-05,-8.274777234139545e-05,-6.488500793220264e-05,-1.345928886969251e-08,-0.0010738171772126047,0.00018607291306411243,0.008228892177636865,0.007472480143160019,-2.5163720545370382e-05,-4.457634547536745e-05,-8.325012742435441e-05,-6.65225352384664e-05,-5.712666419717975e-05,-4.6274907730722935e-05,-0.00010479355499026543,0.002049837991125062,0.006855927675633478,-4.349416013159673e-05,-7.646947638099619e-05,-6.094600074786831e-05,0.007986039791509764']
###Markdown
Save the file.
###Code
import numpy as np
np.savetxt("3w_scaling.csv",new_array,fmt='%s', delimiter=',')
# #### Memo ####
# Use the code as key and a list of its dates as value for position lookups? (takes a long time)
# 4 - Dictionary: key = Code, value = date? (keep the remaining columns in the value)
# 4 - Set the first row's high/low as max/min and update them whenever a later row is higher/lower to extract the max/min values
###Output
_____no_output_____ |
11 - Dictionaries.ipynb | ###Markdown
Dictionaries. Each key is separated from its value by a colon (:), the items are separated by commas, and the whole thing is enclosed in curly braces. An empty dictionary without any items is written with just two curly braces, like this: {}. Keys are unique within a dictionary while values may not be. The values of a dictionary can be of any type, but the keys must be of an immutable data type such as strings, numbers, or tuples. Accessing Values in Dictionary: To access dictionary elements, you can use the familiar square brackets along with the key to obtain its value. A minimal sketch (the example values below are assumed for illustration) of creating dictionaries with different key types is shown first:
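###Code
# Creating dictionaries: an empty one, and one mixing string, number and tuple keys
empty_dict = {}
mixed_keys = {'name': 'Zara', 7: 'a number key', ('x', 'y'): 'a tuple key'}
print(empty_dict)
print(mixed_keys[('x', 'y')])
###Output
_____no_output_____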
###Code
dict1 = {'Name': 'Zara', 'Age': 7, 'Class': 'First'}
print("dict1['Name']: ", dict1['Name'])
print("dict1['Age']: ", dict1['Age'])
###Output
dict1['Name']: Zara
dict1['Age']: 7
###Markdown
If we attempt to access a data item with a key that is not part of the dictionary, we get a `KeyError`.
###Code
dict1 = {'Name': 'Zara', 'Age': 7, 'Class': 'First'}
print("dict1['Alice']: ", dict1['Alice'])
###Output
_____no_output_____
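###Markdown
A hedged sketch (example values assumed) of two common ways to guard against a `KeyError`: check membership with `in`, or catch the exception.
###Code
dict1 = {'Name': 'Zara', 'Age': 7, 'Class': 'First'}
if 'Alice' in dict1:          # membership test avoids the exception
    print(dict1['Alice'])
try:
    print(dict1['Alice'])     # raises KeyError
except KeyError:
    print("Key 'Alice' not found")
###Output
_____no_output_____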
###Markdown
Updating Dictionary: You can update a dictionary by adding a new entry or a key-value pair, modifying an existing entry, or deleting an existing entry, as shown in a simple example given below.
###Code
dict1 = {'Name': 'Zara', 'Age': 7, 'Class': 'First'}
dict1['Age'] = 8; # update existing entry
dict1['School'] = "DPS School" # Add new entry
print("dict1['Age']: ", dict1['Age'])
print("dict1['School']: ", dict1['School'])
print(dict1)
###Output
{'Name': 'Zara', 'Age': 8, 'Class': 'First', 'School': 'DPS School'}
###Markdown
Delete Dictionary Elements: You can either remove individual dictionary elements or clear the entire contents of a dictionary. You can also delete the entire dictionary in a single operation. To explicitly remove an entire dictionary, just use the `del` statement.
###Code
dict1 = {'Name': 'Zara', 'Age': 7, 'Class': 'First'}
dict1
###Output
_____no_output_____
###Markdown
Remove entry with key 'Name'
###Code
del dict1['Name']
dict1
###Output
_____no_output_____
###Markdown
Remove all entries in dict
###Code
dict1.clear()
dict1
###Output
_____no_output_____
###Markdown
Delete entire dictionary
###Code
del dict1
###Output
_____no_output_____
###Markdown
Properties of Dictionary Keys: Dictionary values have no restrictions. They can be any arbitrary Python object, either standard objects or user-defined objects. However, the same is not true for the keys. There are two important points to remember about dictionary keys. First, more than one entry per key is not allowed; no duplicate keys are allowed. When duplicate keys are encountered during assignment, the last assignment wins.
###Code
dict1 = {'Name': 'Zara', 'Age': 7, 'Name': 'Manni'}
print("dict1['Name']: ", dict1['Name'])
###Output
dict1['Name']: Manni
###Markdown
Second, keys must be immutable. This means you can use strings, numbers or tuples as dictionary keys, but something like `['key']` (a list) is not allowed.
###Code
dict1 = {['Name']: 'Zara', 'Age': 7}
print("dict1['Name']: ", dict1['Name'])
###Output
_____no_output_____
###Markdown
Built-in Dictionary Functions: Python includes the following dictionary functions. `len(dict)` gives the total length of the dictionary. This is equal to the number of items in the dictionary.
###Code
dict1 = {'Name': 'Zara', 'Age': 7, 'Name': 'Manni'}
len(dict1)
###Output
_____no_output_____
###Markdown
`str(dict)` produces a printable string representation of a dictionary
###Code
dict1 = {'Name': 'Zara', 'Age': 7, 'Name': 'Manni'}
str(dict1)
###Output
_____no_output_____
###Markdown
`type(variable)` returns the type of the passed variable. If passed variable is dictionary, then it would return a dictionary type.
###Code
dict1 = {'Name': 'Zara', 'Age': 7}
type(dict1)
###Output
_____no_output_____
###Markdown
Python includes many dictionary methods. Here are some of them. `dict.get(key, default=None)` returns the value for `key` if the key is in the dictionary, otherwise it returns `default`.
###Code
dict1 = {'Name': 'Zara', 'Age': 7}
dict1.get('Sex', 'Female')
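# Without a default argument, a missing key simply returns None (illustrative calls, not in the original)
print(dict1.get('Name'))   # key present  -> 'Zara'
print(dict1.get('Sex'))    # key missing  -> None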
###Output
_____no_output_____
###Markdown
`dict.keys()` returns a view of the dictionary's keys (a list-like view object in Python 3)
###Code
dict1 = {'Name': 'Zara', 'Age': 7}
dict1.keys()
###Output
_____no_output_____
###Markdown
`dict.values()` returns a view of the dictionary's values
###Code
dict1 = {'Name': 'Zara', 'Age': 7}
dict1.values()
###Output
_____no_output_____
###Markdown
`dict.items()` returns a view of the dictionary's (key, value) pairs
###Code
dict1 = {'Name': 'Zara', 'Age': 7}
dict1.items()
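# The (key, value) pairs can be unpacked directly in a loop (illustrative addition, not in the original)
for key, value in dict1.items():
    print(key, '->', value)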
###Output
_____no_output_____ |
Day 3 Assignment 1 & 2 .ipynb | ###Markdown
Assignment 1
###Code
# Checking Altitude Program
###Output
_____no_output_____
###Markdown
num = input("Enter Altittude-")num = int(num)if num <= 1000: print("Safe to Land Plane")elif num > 1001 and num <= 5000: print("Bring down to 1000")elif num > 5000: print("Turn Around") num = input("Enter Altittude-")num = int(num)if num <= 1000: print("Safe to Land Plane")elif num > 1001 and num <= 5000: print("Bring down to 1000")elif num > 5000: print("Turn Around") num = input("Enter Altittude-")num = int(num)if num <= 1000: print("Safe to Land Plane")elif num > 1001 and num <= 5000: print("Bring down to 1000")elif num > 5000: print("Turn Around")
###Code
# Assignment 2
###Output
_____no_output_____
###Markdown
Prime Numbers Program
###Code
for num in range(2, 201):   # start at 2, since 1 is not a prime
    for i in range(2, num):
        if num % i == 0:    # found a divisor, so num is not prime
            break
    else:                   # inner loop ended without a break, so num is prime
        print(num)
###Output
2
3
5
7
11
13
17
19
23
29
31
37
41
43
47
53
59
61
67
71
73
79
83
89
97
101
103
107
109
113
127
131
137
139
149
151
157
163
167
173
179
181
191
193
197
199
|
notebooks/2020-04-23_th_load_data.ipynb | ###Markdown
Load Data: Read in the images and adjust the file names.
###Code
# OPTIONAL: Load the "autoreload" extension so that code can change
%load_ext autoreload
# OPTIONAL: always reload modules so that as you change code in src, it gets loaded
%autoreload 2
from src.data import make_dataset
import warnings
import pandas as pd # Deals with data
excel_readings = pd.read_excel('../data/raw/moroweg_strom_gas.xlsx')
excel_readings.head()
manual_readings = excel_readings.iloc[:,[0,1,3,5]]
manual_readings = manual_readings.melt(id_vars=["Date", "Kommentar"],
var_name="Meter Type",
value_name="Value")
manual_readings[["Meter Type", "Unit"]] = manual_readings['Meter Type'].str.split(' ',expand=True)
manual_readings = manual_readings[["Date", "Meter Type", "Unit", "Value", "Kommentar"]]
manual_readings
## Code Snippet for exifread for a sample image
import exifread
import os
#path_name = os.path.join(os.pardir, 'data', 'raw', '2017', '2017-03-03 15.06.47.jpg')
path_name = "../data/processed/gas/IMG_20200405_173910.jpg"  # forward slashes avoid invalid escape sequences
# Open image file for reading (binary mode)
f = open(path_name, 'rb')
# Return Exif tags
tags = exifread.process_file(f)
# Show Tags
for tag in tags.keys():
if tag not in ('JPEGThumbnail', 'TIFFThumbnail', 'Filename', 'EXIF MakerNote'):
print("Key: %s, value %s" % (tag, tags[tag]))
def extract_file_meta(file_path):
basename = os.path.basename(file_path)
# Open image file for reading (binary mode)
f = open(file_path, 'rb')
# Read EXIF
tags = exifread.process_file(f)
try:
exif_datetime = str(tags["EXIF DateTimeOriginal"])
except KeyError:
warnings.warn("File {file_path} does not appear to have a date in EXIF Tags.".format(file_path=file_path))
return()
#exif_datetime = "2020:01:01 00:00:00"
# Format Date
datetime = pd.to_datetime(exif_datetime, format = "%Y:%m:%d %H:%M:%S")
date = pd.to_datetime(datetime.date())
return(basename, datetime, date, file_path)
def meta_from_files(files):
files_meta = []
for file_path in files:
files_meta.append(extract_file_meta(file_path))
df = pd.DataFrame.from_records(files_meta, columns = ("Filename", "Datetime", "Date", "Filepath"))
return(df)
def meta_from_dir(dir_path):
files = [top + os.sep + f for top, dirs, files in os.walk(dir_path) for f in files]
files_meta = meta_from_files(files)
return(files_meta)
gas_dir = os.path.join(os.pardir, "data", "processed", "gas")
gas_files_meta = meta_from_dir(gas_dir)
gas_files_meta
strom_dir = os.path.join(os.pardir, "data", "processed", "strom")
strom_files_meta = meta_from_dir(strom_dir)
strom_files_meta
###Output
_____no_output_____
###Markdown
Add Flag if Picture has been Labelled
###Code
gas_label_dir = os.path.join(os.pardir, "data", "labelled", "gas", "vott-json-export")
gas_labelled_files = [os.path.basename(f) for top, dirs, files in os.walk(gas_label_dir) for f in files]
gas_files_meta["Labelled"] = gas_files_meta.apply(lambda row: True if row["Filename"] in gas_labelled_files else False, axis=1)
gas_files_meta
strom_label_dir = os.path.join(os.pardir, "data", "labelled", "strom", "vott-json-export")
strom_labelled_files = [os.path.basename(f) for top, dirs, files in os.walk(strom_label_dir) for f in files]
strom_files_meta["Labelled"] = strom_files_meta.apply(lambda row: True if row["Filename"] in strom_labelled_files else False, axis=1)
strom_files_meta
###Output
_____no_output_____
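###Markdown
A hedged alternative sketch (not part of the original notebook): the same flag can be computed with the vectorised `Series.isin` instead of a row-wise `apply` with a lambda.
###Code
# Equivalent, vectorised way to flag labelled files
gas_files_meta["Labelled"] = gas_files_meta["Filename"].isin(gas_labelled_files)
strom_files_meta["Labelled"] = strom_files_meta["Filename"].isin(strom_labelled_files)
strom_files_meta.head()
###Output
_____no_output_____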
###Markdown
Join Picture Data with Manual Readings: Strom (electricity)
###Code
manual_readings.head(2)
strom_readings_manual = manual_readings[manual_readings["Meter Type"] == "Strom"]
strom_readings_manual.head(2)
strom_files_meta.head(2)
strom = strom_files_meta.merge(strom_readings_manual, left_on="Date", right_on="Date")
strom
###Output
_____no_output_____
###Markdown
Gas
###Code
manual_readings.head(2)
gas_readings_manual = manual_readings[manual_readings["Meter Type"] == "Gas"]
gas_readings_manual.head(2)
gas_files_meta.head(2)
gas = gas_files_meta.merge(gas_readings_manual, left_on="Date", right_on="Date")
gas
###Output
_____no_output_____
###Markdown
Return a single dataframe at the end
###Code
dataset = pd.concat([strom, gas])
dataset
dataset.to_csv("../data/processed/dataset.csv")
###Output
_____no_output_____ |
locale/examples/01-filter/slicing.ipynb | ###Markdown
Slicing {slice_example}: Extract thin planar slices from a volume.
###Code
# sphinx_gallery_thumbnail_number = 2
import pyvista as pv
from pyvista import examples
import matplotlib.pyplot as plt
import numpy as np
###Output
_____no_output_____
###Markdown
PyVista meshes have several slicing filters bound directly to all datasets. These filters allow you to slice through a volumetric dataset to extract and view sections through the volume of data. One of the most common slicing filters used in PyVista is the `pyvista.DataSetFilters.slice_orthogonal`{.interpreted-text role="func"} filter, which creates three orthogonal slices through the dataset parallel to the three Cartesian planes. For example, let's slice through the sample geostatistical training image volume. First, load up the volume and preview it:
###Code
mesh = examples.load_channels()
# define a categorical colormap
cmap = plt.cm.get_cmap("viridis", 4)
mesh.plot(cmap=cmap)
###Output
_____no_output_____
###Markdown
Note that this dataset is a 3D volume and there might be regions within this volume that we would like to inspect. We can create slices through the mesh to gain further insight about the internals of the volume.
###Code
slices = mesh.slice_orthogonal()
slices.plot(cmap=cmap)
###Output
_____no_output_____
###Markdown
The orthogonal slices can be easily translated throughout the volume:
###Code
slices = mesh.slice_orthogonal(x=20, y=20, z=30)
slices.plot(cmap=cmap)
###Output
_____no_output_____
###Markdown
We can also add just a single slice of the volume by specifying the origin and normal of the slicing plane with the `pyvista.DataSetFilters.slice`{.interpreted-text role="func"} filter:
###Code
# Single slice - origin defaults to the center of the mesh
single_slice = mesh.slice(normal=[1, 1, 0])
p = pv.Plotter()
p.add_mesh(mesh.outline(), color="k")
p.add_mesh(single_slice, cmap=cmap)
p.show()
###Output
_____no_output_____
###Markdown
Adding slicing planes uniformly across an axial direction can also be automated with the `pyvista.DataSetFilters.slice_along_axis`{.interpreted-text role="func"} filter:
###Code
slices = mesh.slice_along_axis(n=7, axis="y")
slices.plot(cmap=cmap)
###Output
_____no_output_____
###Markdown
Slice Along Line: We can also slice a dataset along a `pyvista.Spline`{.interpreted-text role="func"} or `pyvista.Line`{.interpreted-text role="func"} using the `DataSetFilters.slice_along_line`{.interpreted-text role="func"} filter. First, define a line source through the dataset of interest. Please note that this type of slicing is computationally expensive and might take a while if there are a lot of points in the line - try to keep the resolution of the line low.
###Code
model = examples.load_channels()
def path(y):
"""Equation: x = a(y-h)^2 + k"""
a = 110.0 / 160.0 ** 2
x = a * y ** 2 + 0.0
return x, y
x, y = path(np.arange(model.bounds[2], model.bounds[3], 15.0))
zo = np.linspace(9.0, 11.0, num=len(y))
points = np.c_[x, y, zo]
spline = pv.Spline(points, 15)
spline
###Output
_____no_output_____
###Markdown
Then run the filter
###Code
slc = model.slice_along_line(spline)
slc
p = pv.Plotter()
p.add_mesh(slc, cmap=cmap)
p.add_mesh(model.outline())
p.show(cpos=[1, -1, 1])
###Output
_____no_output_____
###Markdown
Multiple Slices in Vector Direction: Slice a mesh along a vector direction perpendicularly.
###Code
mesh = examples.download_brain()
# Create vector
vec = np.random.rand(3)
# Normalize the vector
normal = vec / np.linalg.norm(vec)
# Make points along that vector for the extent of your slices
a = mesh.center + normal * mesh.length / 3.0
b = mesh.center - normal * mesh.length / 3.0
# Define the line/points for the slices
n_slices = 5
line = pv.Line(a, b, n_slices)
# Generate all of the slices
slices = pv.MultiBlock()
for point in line.points:
slices.append(mesh.slice(normal=normal, origin=point))
p = pv.Plotter()
p.add_mesh(mesh.outline(), color="k")
p.add_mesh(slices, opacity=0.75)
p.add_mesh(line, color="red", line_width=5)
p.show()
###Output
_____no_output_____
###Markdown
Slice At Different Bearings: From [pyvista-support#23](https://github.com/pyvista/pyvista-support/issues/23), an example of how to get many slices at different bearings all centered around a user-chosen location. Create a point to orient slices around:
###Code
ranges = np.array(model.bounds).reshape(-1, 2).ptp(axis=1)
point = np.array(model.center) - ranges*0.25
###Output
_____no_output_____
###Markdown
Now generate a few normal vectors to rotate a slice around the z-axis. Use the equation for a circle, since the rotation is about the z-axis.
###Code
increment = np.pi/6.
# use a container to hold all the slices
slices = pv.MultiBlock() # treat like a dictionary/list
for theta in np.arange(0, np.pi, increment):
normal = np.array([np.cos(theta), np.sin(theta), 0.0]).dot(np.pi/2.)
name = f'Bearing: {np.rad2deg(theta):.2f}'
slices[name] = model.slice(origin=point, normal=normal)
slices
###Output
_____no_output_____
###Markdown
And now display it!
###Code
p = pv.Plotter()
p.add_mesh(slices, cmap=cmap)
p.add_mesh(model.outline())
p.show()
###Output
_____no_output_____ |
src/ForestFiresAnalysis.ipynb | ###Markdown
Forest Fire Mini Project > In this file you can find the analysis for this project. For the final report, please visit `Report.ipynb`.
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression, Ridge, Lasso, ElasticNet
from sklearn.preprocessing import LabelEncoder
from sklearn.metrics import accuracy_score, mean_squared_error
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import RFE
from sklearn.svm import SVR
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
###Output
_____no_output_____
###Markdown
Introduction Data Information: The Forest Fires data is available at UCI; to reach it please click [here](http://archive.ics.uci.edu/ml/datasets/Forest+Fires). The citation to this data set: [Cortez and Morais, 2007] P. Cortez and A. Morais. A Data Mining Approach to Predict Forest Fires using Meteorological Data. In J. Neves, M. F. Santos and J. Machado Eds., New Trends in Artificial Intelligence, Proceedings of the 13th EPIA 2007 - Portuguese Conference on Artificial Intelligence, December, Guimarães, Portugal, pp. 512-523, 2007. APPIA, ISBN-13 978-989-95618-0-9. Available at: [http://www.dsi.uminho.pt/~pcortez/fires.pdf](http://www3.dsi.uminho.pt/pcortez/fires.pdf) Attributes: 1. `X` - x-axis spatial coordinate within the Montesinho park map: 1 to 9 2. `Y` - y-axis spatial coordinate within the Montesinho park map: 2 to 9 3. `month` - month of the year: 'jan' to 'dec' 4. `day` - day of the week: 'mon' to 'sun' 5. `FFMC` - FFMC index from the FWI system: 18.7 to 96.20 6. `DMC` - DMC index from the FWI system: 1.1 to 291.3 7. `DC` - DC index from the FWI system: 7.9 to 860.6 8. `ISI` - ISI index from the FWI system: 0.0 to 56.10 9. `temp` - temperature in Celsius degrees: 2.2 to 33.30 10. `RH` - relative humidity in %: 15.0 to 100 11. `wind` - wind speed in km/h: 0.40 to 9.40 12. `rain` - outside rain in mm/m2 : 0.0 to 6.4 13. `area` - the burned area of the forest (in ha): 0.00 to 1090.84 Model and Feature Selection Process: I will try to predict the `area` variable via regression models. - First, I fit the data with all features to a Random Forest Regression with a pruned `depth` hyperparameter. - Then I will use Lasso (L1 regularization) Regression and ElasticNet (L1+L2 regularization) Regression to select features. I will not use Ridge (L2 regularization) since it does not produce any exactly zero-weighted features. - As a last step, I will fit the data to a Random Forest Regression with a pruned `depth` hyperparameter on the features selected by Lasso and by ElasticNet.
###Code
# load the dataset:
forestfires = pd.read_csv("http://archive.ics.uci.edu/ml/machine-learning-databases/forest-fires/forestfires.csv")
# Write the data frame to a csv file:
forestfires.to_csv("../data/forestfires.csv", encoding='utf-8', index=False)
forestfires.head()
forestfires.describe()
forestfires.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 517 entries, 0 to 516
Data columns (total 13 columns):
X 517 non-null int64
Y 517 non-null int64
month 517 non-null object
day 517 non-null object
FFMC 517 non-null float64
DMC 517 non-null float64
DC 517 non-null float64
ISI 517 non-null float64
temp 517 non-null float64
RH 517 non-null int64
wind 517 non-null float64
rain 517 non-null float64
area 517 non-null float64
dtypes: float64(8), int64(3), object(2)
memory usage: 52.6+ KB
###Markdown
Response Variable and Predictors: **Response Variable:** `area`, which is the burned area of the forest. - The original paper used this variable after a log transformation, since *the variable is very skewed towards 0.0*. After fitting the models, the outputs were post-processed with the inverse of the ln(x+1) transform. **Predictors:** We need to assign dummy variables for the categorical variables `month` and `day`. As a hedged sketch (example values assumed, not from the original analysis), the forward transform and its inverse look like this, so predictions made on the log scale can be mapped back to hectares:
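###Code
# ln(x+1) and its inverse: np.log1p / np.expm1 (equivalent to the np.log(area + 1) used below)
sample_area = np.array([0.0, 1.5, 1090.84])   # assumed example values in hectares
log_area = np.log1p(sample_area)              # transform used for modelling
print(np.expm1(log_area))                     # inverse transform recovers the original areas
###Output
_____no_output_____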
###Code
# histogram for area response variable:
plt.hist(forestfires.area)
plt.title("Burned Area")
plt.savefig('../results/AreaBeforeTransformation.png')
## after log transformation :
plt.hist(np.log(forestfires.area+1))
plt.title("Log Transformed Burned Area")
plt.savefig('../results/AreaAfterTransformation.png')
###Output
_____no_output_____
###Markdown
> As we can see from the histograms, log transformation helps the area variable to spread out.
###Code
## Encode the categorical Variables :
# one way is:
#le = LabelEncoder()
#forestfires["month"] = le.fit_transform(forestfires["month"])
#forestfires["day"] = le.fit_transform(forestfires["day"])
forestfires = pd.get_dummies(forestfires, prefix='m', columns=['month'])
forestfires = pd.get_dummies(forestfires, prefix='d', columns=['day'])
## after encoding:
forestfires.head()
## X is perdictors' dataframe
## y is response' vector
X = forestfires.loc[:, forestfires.columns != "area"]
y = np.log(forestfires.area +1)
## save them into csv
X.to_csv("../results/X.csv", encoding='utf-8', index=False)
y.to_csv("../results/y.csv", encoding='utf-8', index=False)
## split the data:
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=19)
###Output
_____no_output_____
###Markdown
Random Forest with all features:
###Code
randomforestscores = []
randf = RandomForestRegressor(max_depth=5)
randf.fit(X_train,y_train)
randomforestscores.append(randf.score(X_test,y_test))
###Output
_____no_output_____
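###Markdown
A hedged sketch (not part of the original analysis): with only ~517 rows a single train/test split can be noisy, so cross-validation gives a steadier estimate of the R^2 score for the same depth-limited model.
###Code
# 5-fold cross-validated R^2 for the depth-limited random forest on all features
from sklearn.model_selection import cross_val_score
cv_scores = cross_val_score(RandomForestRegressor(max_depth=5), X, y, cv=5)
print(cv_scores.mean(), cv_scores.std())
###Output
_____no_output_____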
###Markdown
Lasso Regression (L1 Regularization):
###Code
alpha_settings = [10**-4,10**-3,10**-2,10**-1,1,10**1,10**2,10**3,10**4]
Train_errorL = []
Valid_errorL = []
Nonzeros = []
for alp in alpha_settings:
clfLasso = Lasso(alpha=alp)
clfLasso.fit(X_train, y_train)
Train_errorL.append(np.sqrt(mean_squared_error(y_train, clfLasso.predict(X_train))))
Valid_errorL.append(np.sqrt(mean_squared_error(y_test, clfLasso.predict(X_test))))
#Train_error.append(1- clf.score(X,y))
#Valid_error.append(1- clf.score(Xvalidate,yvalidate))
print("For alpha value of ", alp)
print("RMSE Training error:", np.sqrt(mean_squared_error(y_train, clfLasso.predict(X_train))))
print("RMSE Validation error:", np.sqrt(mean_squared_error(y_test, clfLasso.predict(X_test))))
print("Number of non-zero features",np.count_nonzero(clfLasso.coef_))
Nonzeros.append(np.count_nonzero(clfLasso.coef_))
print("-------------------------------------------")
plt.semilogx(alpha_settings, Train_errorL, label="training error")
plt.semilogx(alpha_settings, Valid_errorL, label="test error")
plt.legend()
plt.ylabel("RMSE")
plt.xlabel("Alpha")
plt.title("Lasso")
plt.savefig('../results/LassoError.png')
print("---Optimal alpha for Lasso is",alpha_settings[np.argmin(Valid_errorL)])
plt.figure()
plt.semilogx(alpha_settings,Nonzeros)
plt.title("Number of Non-zero Features vs Alpha")
plt.xlabel("Alpha")
plt.ylabel("Number of non-zero Features")
###Output
_____no_output_____
###Markdown
> Lasso gives `alpha = 0.01` as the optimum value. So, let's choose alpha as 0.01 and select the variables whose coefficients are non-zero at this value of alpha.
###Code
clfLasso = Lasso(alpha=0.01)
clfLasso.fit(X_train, y_train)
np.count_nonzero(clfLasso.coef_)
# Lasso Selected Variables:
rfe = RFE(Lasso(alpha=0.01),n_features_to_select = 18 )
rfe.fit(X_train,y_train)
rfe.score(X_test,y_test)
X.columns[rfe.support_]
Xlasso = X[(X.columns[rfe.support_])]
#save Xlasso into csv
Xlasso.to_csv("../results/Xlasso.csv", encoding='utf-8', index=False)
###Output
_____no_output_____
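###Markdown
A hedged cross-check (not part of the original workflow): the selected subset can also be read directly from the fitted Lasso coefficients, without running RFE; `clfLasso` and `X` are the objects defined above.
###Code
# Columns whose Lasso coefficients were not shrunk to exactly zero
lasso_kept = X.columns[clfLasso.coef_ != 0]
print(len(lasso_kept), "features kept by Lasso(alpha=0.01)")
print(list(lasso_kept))
###Output
_____no_output_____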
###Markdown
ElasticNet Regression (L1+L2 Regularization):
###Code
alpha_settings = [10**-4,10**-3,10**-2,10**-1,1,10**1,10**2,10**3,10**4]
Train_errorEN = []
Valid_errorEN = []
Nonzeros = []
for alp in alpha_settings:
clfElasticNet = ElasticNet(alpha=alp,normalize =True)
clfElasticNet.fit(X_train, y_train)
# mean_squared_err = lambda y, yhat: np.mean((y-yhat)**2)
Train_errorEN.append(np.sqrt(mean_squared_error(y_train, clfElasticNet.predict(X_train))))
Valid_errorEN.append(np.sqrt(mean_squared_error(y_test, clfElasticNet.predict(X_test))))
#Train_error.append(1- clf.score(X,y))
#Valid_error.append(1- clf.score(Xvalidate,yvalidate))
print("For alpha value of ", alp)
print("RMSE Training error:", np.sqrt(mean_squared_error(y_train, clfElasticNet.predict(X_train))))
print("RMSE Validation error:", np.sqrt(mean_squared_error(y_test, clfElasticNet.predict(X_test))))
print("Number of non-zero features",np.count_nonzero(clfElasticNet.coef_))
Nonzeros.append(np.count_nonzero(clfElasticNet.coef_))
print("-----------------------")
plt.semilogx(alpha_settings, Train_errorEN, label="training error")
plt.semilogx(alpha_settings, Valid_errorEN, label="test error")
plt.legend()
plt.ylabel("RMSE")
plt.xlabel("Alpha")
plt.title("ElasticNet")
plt.savefig('../results/ElasticNetError.png')
print("---Optimal alpha for Elastic Net is",alpha_settings[np.argmin(Valid_errorEN)])
plt.figure()
plt.semilogx(alpha_settings,Nonzeros)
plt.title("Number of Non-zero Features vs Alpha for ElasticNet")
plt.xlabel("Alpha")
plt.ylabel("Number of non-zero Features")
clfElasticNet = ElasticNet(alpha=0.01)
clfElasticNet.fit(X_train, y_train)
np.count_nonzero(clfElasticNet.coef_)
# ElasticNet Selected Variables:
rfe = RFE(ElasticNet(alpha=0.01), n_features_to_select=22)
rfe.fit(X_train,y_train)
rfe.score(X_test,y_test)
X.columns[rfe.support_]
XelasticNet = X[(X.columns[rfe.support_])]
#save Xlasso into csv
XelasticNet.to_csv("../results/XElasticNet.csv", encoding='utf-8', index=False)
###Output
_____no_output_____
###Markdown
Random Forest Regressions with only Selected Features from Lasso and ElasticNet
###Code
## for Lasso features:
X_train, X_test, y_train, y_test = train_test_split(Xlasso, y, test_size=0.33, random_state=19)
randf = RandomForestRegressor(max_depth=5)
randf.fit(X_train,y_train)
randomforestscores.append(randf.score(X_test,y_test))
## for ElasticNet features:
X_train, X_test, y_train, y_test = train_test_split(XelasticNet, y, test_size=0.33, random_state=19)
randf = RandomForestRegressor(max_depth=5)
randf.fit(X_train,y_train)
randomforestscores.append(randf.score(X_test,y_test))
randomforestscores = pd.DataFrame(randomforestscores)
randomforestscores = randomforestscores.rename(index={0: 'RandomForest with all features(29)'})
randomforestscores = randomforestscores.rename(index={1: 'RandomForest with Lasso features(18)'})
randomforestscores = randomforestscores.rename(index={2: 'RandomForest with ElasticNet features(22)'})
randomforestscores
pd.DataFrame(np.transpose(randomforestscores)).to_csv("../results/RandomForestScores.csv", encoding='utf-8', index=False)
###Output
_____no_output_____ |
GitHub_MD_rendering/__call__ method.ipynb | ###Markdown
Introduction **What?** The `__call__` method. Definition: - `__call__` is a built-in method which enables you to write classes whose instances behave like functions and can be called like a function. - In practice, `object()` is shorthand for `object.__call__()`. `__call__` vs. `__init__`: - `__init__()` is properly defined as the class constructor, which builds an instance of a class, whereas `__call__` makes such an instance callable as a function and therefore modifiable. - Technically, `__init__` is called once by `__new__` when the object is created, so that it can be initialised. - But there are many scenarios where you might want to redefine your object: say you are done with your object and find a need for a new one. With `__call__` you can redefine the same object as if it were new. A small hedged sketch first (assumed example, not from the original notes): `callable()` shows which objects can be called; a plain instance without `__call__` is not callable, while its class is. Example 1
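###Code
class Plain:
    pass

p = Plain()
print(callable(p))      # False: Plain defines no __call__
print(callable(Plain))  # True: calling the class constructs an instance
###Output
_____no_output_____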
###Code
class Example():
def __init__(self):
print("Instance created")
# Defining __call__ method
def __call__(self):
print("Instance is called via special method __call__")
e = Example()
e.__init__()
e.__call__()
###Output
Instance is called via special method __call__
###Markdown
Example 2
###Code
class Product():
def __init__(self):
print("Instance created")
# Defining __call__ method
def __call__(self, a, b):
print("Instance is called via special method __call__")
print(a*b)
p = Product()
p.__init__()
# Is being call like if p was a function
p(2,3)
# The cell above is equivalent to this call
p.__call__(2,3)
###Output
Instance is called via special method __call__
6
###Markdown
Example 3
###Code
class Stuff(object):
def __init__(self, x, y, Range):
super(Stuff, self).__init__()
self.x = x
self.y = y
self.Range = Range
def __call__(self, x, y):
self.x = x
self.y = y
print("__call with (%d, %d)" % (self.x, self.y))
    def __del__(self):  # __del__ is invoked with no extra arguments
del self.x
del self.y
del self.Range
s = Stuff(1, 2, 3)
s.x
s(7,8)
s.x
###Output
_____no_output_____
###Markdown
Example 4
###Code
class Sum():
def __init__(self, x, y):
self.x = x
self.y = y
print("__init__ with (%d, %d)" % (self.x, self.y))
def __call__(self, x, y):
self.x = x
self.y = y
print("__call__ with (%d, %d)" % (self.x, self.y))
def sum(self):
return self.x + self.y
sum_1 = Sum(2,2)
sum_1.sum()
sum_1 = Sum(2,2)
sum_1(3,3)
sum_1 = Sum(2,2)
# This is equivalent to
sum_1.__call__(3,3)
# You can also do this
sum_1 = Sum(2,2)(3,3)
###Output
__init__ with (2, 2)
__call__ with (3, 3)
|