path (stringlengths 7 to 265) | concatenated_notebook (stringlengths 46 to 17M)
---|---
skimage-learning/1.3scikit-image API-exposure.ipynb | ###Markdown
scikit-image API-exposure
###Code
from skimage import data
import skimage.color as color
import skimage.draw as draw
import skimage.exposure as exposure
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
1. Histogram
###Code
img = data.astronaut()
gray = color.rgb2gray(img)
hist,_ = exposure.histogram(gray)
plt.subplot(121)
plt.imshow(gray,'gray')
plt.title('SRC')
plt.subplot(122)
plt.bar(np.arange(0,256),hist)
plt.title('Histogram')
###Output
_____no_output_____
###Markdown
2. Histogram Equalization
###Code
gray = color.rgb2gray(img)
histeq = exposure.equalize_hist(gray)
plt.subplot(121)
plt.imshow(gray,'gray')
plt.title('SRC')
plt.subplot(122)
plt.imshow(histeq,'gray')
plt.title('Histogram Equalization')
plt.figure()  # start a new figure so the histograms do not overwrite the images above
plt.subplot(121)
plt.bar(np.arange(0,256),hist)
plt.title('Histogram:SRC')
plt.subplot(122)
histeq_h,_ = exposure.histogram(histeq)
plt.bar(np.arange(0,256),histeq_h)
plt.title('Equalize Histogram')
###Output
_____no_output_____
###Markdown
3. Adaptive Histogram Equalization
###Code
dst = exposure.equalize_adapthist(gray)
gray = color.rgb2gray(img)
histeq = exposure.equalize_hist(gray)
plt.subplot(121)
plt.imshow(gray,'gray')
plt.title('SRC')
plt.subplot(122)
plt.imshow(dst,'gray')
plt.title('Adaptive Histogram Equalization')
###Output
C:\Users\jenson\Anaconda3\lib\site-packages\skimage\util\dtype.py:130: UserWarning: Possible precision loss when converting from float64 to uint16
.format(dtypeobj_in, dtypeobj_out))
###Markdown
4. Intensity Rescaling
###Code
dst = exposure.rescale_intensity(gray)
plt.subplot(121)
plt.imshow(gray,'gray')
plt.title('SRC')
plt.subplot(122)
plt.imshow(dst,'gray')
plt.title('Rescale Intensity')
###Output
_____no_output_____
###Markdown
5. Cumulative Distribution
###Code
cdf = exposure.cumulative_distribution(gray)
hi = exposure.histogram(gray)
result = np.alltrue(cdf[0] == np.cumsum(hi[0])/float(gray.size))
print(result)
###Output
True
###Markdown
6. Gamma Correction
###Code
dst = exposure.adjust_gamma(img,2)
plt.subplot(121)
plt.imshow(img)
plt.title('SRC')
plt.subplot(122)
plt.imshow(dst)
plt.title('Gamma Adjust:2')
###Output
_____no_output_____
###Markdown
7. Sigmoid Correction
###Code
dst = exposure.adjust_sigmoid(img)
plt.subplot(121)
plt.imshow(img)
plt.title('SRC')
plt.subplot(122)
plt.imshow(dst)
plt.title('Sigmoid Adjust')
###Output
_____no_output_____
###Markdown
8. Logarithmic Adjustment
###Code
dst = exposure.adjust_log(img)
plt.subplot(121)
plt.imshow(img)
plt.title('SRC')
plt.subplot(122)
plt.imshow(dst)
plt.title('Log Adjust')
###Output
_____no_output_____
###Markdown
9. Checking Whether an Image Has Low Contrast
###Code
res = exposure.is_low_contrast(img)
print(res)
###Output
False
|
bnn/LFC-BNN_MNIST_Webcam.ipynb | ###Markdown
BNN on Pynq
This notebook covers how to use Binary Neural Networks on Pynq. It shows an example of handwritten digit recognition using a binarized neural network composed of 4 fully connected layers with 1024 neurons each, trained on the MNIST dataset of handwritten digits. In order to reproduce this notebook, you will need an external USB Camera connected to the PYNQ Board.
1. Import the package
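As a rough, floating-point sketch of the topology described above (4 fully connected layers of 1024 neurons), the Keras snippet below is illustration only: the 10-way output layer is an assumption for MNIST digits, and the real LFC network additionally binarizes its weights and activations, which is not shown here.
```python
# Float-precision sketch of the 4 x 1024 fully connected topology described above.
# The actual LFC network uses 1-bit weights/activations; this is illustration only.
import tensorflow as tf

lfc_sketch = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),    # 28x28 MNIST image -> 784 inputs
    tf.keras.layers.Dense(1024, activation="relu"),
    tf.keras.layers.Dense(1024, activation="relu"),
    tf.keras.layers.Dense(1024, activation="relu"),
    tf.keras.layers.Dense(1024, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),  # assumed 10-class digit output
])
lfc_sketch.summary()
```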
###Code
import bnn
###Output
_____no_output_____
###Markdown
2. Checking available parameters
By default, the following trained parameters are available for the LFC network using 1 bit for weights and 1 threshold for activation:
###Code
print(bnn.available_params(bnn.NETWORK_LFCW1A1))
###Output
['mnist', 'chars_merged']
###Markdown
Two sets of weights are available for the LFCW1A1 network: one for MNIST and one for character recognition (NIST).
3. Instantiate the classifier
Creating a classifier will automatically download the correct bitstream onto the device and load the weights trained on the specified dataset. This example works with the LFCW1A1 for inferring MNIST handwritten digits. Passing a runtime attribute allows choosing between hardware-accelerated or pure software inference.
###Code
hw_classifier = bnn.LfcClassifier(bnn.NETWORK_LFCW1A1,"mnist",bnn.RUNTIME_HW)
sw_classifier = bnn.LfcClassifier(bnn.NETWORK_LFCW1A1,"mnist",bnn.RUNTIME_SW)
print(hw_classifier.classes)
###Output
['0', '1', '2', '3', '4', '5', '6', '7', '8', '9']
###Markdown
4. Load the image from the camera
The image is captured from the external USB camera and stored locally. The image is then enhanced in contrast and brightness to remove background noise. The resulting image should show the digit on a white background:
###Code
import cv2
from PIL import Image as PIL_Image
from PIL import ImageEnhance
from PIL import ImageOps
#original captured image
orig_img_path = '/home/xilinx/jupyter_notebooks/BNN-PYNQ-master/notebooks/pictures/2.jpg'
img = PIL_Image.open(orig_img_path).convert("L")
#Image enhancement
contr = ImageEnhance.Contrast(img)
img = contr.enhance(3) # The enhancement values (contrast and brightness)
bright = ImageEnhance.Brightness(img) # depend on the background, external lights, etc.
img = bright.enhance(4.0)
#img = img.rotate(180) # Rotate the image (depending on camera orientation)
#Adding a border for future cropping
img = ImageOps.expand(img,border=80,fill='white')
img
###Output
_____no_output_____
###Markdown
5. Crop and scale the image
The center of mass of the image is evaluated to properly crop the image and extract the written digit only.
###Code
from PIL import Image as PIL_Image
import numpy as np
import math
from scipy import misc
#Find bounding box
inverted = ImageOps.invert(img)
box = inverted.getbbox()
img_new = img.crop(box)
width, height = img_new.size
ratio = min((28./height), (28./width))
background = PIL_Image.new('RGB', (28,28), (255,255,255))
if(height == width):
img_new = img_new.resize((28,28))
elif(height>width):
img_new = img_new.resize((int(width*ratio),28))
background.paste(img_new, (int((28-img_new.size[0])/2),int((28-img_new.size[1])/2)))
else:
img_new = img_new.resize((28, int(height*ratio)))
background.paste(img_new, (int((28-img_new.size[0])/2),int((28-img_new.size[1])/2)))
background
img_data=np.asarray(background)
img_data = img_data[:,:,0]
misc.imsave('/home/xilinx/img_webcam_mnist.png', img_data)
###Output
_____no_output_____
###Markdown
6. Convert to BNN input format
The image is resized to comply with the MNIST standard: 28x28 pixels with the colors inverted.
###Code
from array import *
from PIL import Image as PIL_Image
from PIL import ImageOps
img_load = PIL_Image.open('/home/xilinx/img_webcam_mnist.png').convert("L")
# Convert to BNN input format
# The image is resized to comply with the MNIST standard. The image is resized at 28x28 pixels and the colors inverted.
#Resize the image and invert it (white on black)
smallimg = ImageOps.invert(img_load)
smallimg = smallimg.rotate(0)
data_image = array('B')
pixel = smallimg.load()
for x in range(0,28):
for y in range(0,28):
if(pixel[y,x] == 255):
data_image.append(255)
else:
data_image.append(1)
# Setting up the header of the MNIST format file - Required as the hardware is designed for MNIST dataset
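# IDX header layout (per the MNIST/IDX file format): a 4-byte magic number 0x00000803
# (unsigned-byte data, 3 dimensions), followed by three 32-bit big-endian sizes:
# number of images (1 here), rows (28) and columns (28). The header[3] = 3 assignment
# below fixes the dimension-count byte of the magic number.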
hexval = "{0:#0{1}x}".format(1,6)
header = array('B')
header.extend([0,0,8,1,0,0])
header.append(int('0x'+hexval[2:][:2],16))
header.append(int('0x'+hexval[2:][2:],16))
header.extend([0,0,0,28,0,0,0,28])
header[3] = 3 # Set the dimension-count byte of the magic number (0x00000803 = image data)
data_image = header + data_image
output_file = open('/home/xilinx/img_webcam_mnist_processed', 'wb')
data_image.tofile(output_file)
output_file.close()
smallimg
###Output
_____no_output_____
###Markdown
7. Launching BNN in hardware
The image is passed to the PL (programmable logic) and the inference is performed. Use `classify_mnist` to classify a single MNIST-formatted picture.
###Code
class_out = hw_classifier.classify_mnist("/home/xilinx/img_webcam_mnist_processed")
print("Class number: {0}".format(class_out))
print("Class name: {0}".format(hw_classifier.class_name(class_out)))
###Output
Inference took 23.00 microseconds
Classification rate: 43478.26 images per second
Class number: 2
Class name: 2
###Markdown
8. Launching BNN in software
The inference on the same image is performed in software on the ARM core.
###Code
class_out=sw_classifier.classify_mnist("/home/xilinx/img_webcam_mnist_processed")
print("Class number: {0}".format(class_out))
print("Class name: {0}".format(hw_classifier.class_name(class_out)))
###Output
Inference took 79734.00 microseconds
Classification rate: 12.54 images per second
Class number: 2
Class name: 2
###Markdown
9. Reset the device
###Code
from pynq import Xlnk
xlnk = Xlnk()
xlnk.xlnk_reset()
###Output
_____no_output_____ |
notebooks/capstone_notebook.ipynb | ###Markdown
Udacity Data Engineering Capstone
Table of contents
1. [Imports](#imports)
2. [Step 1: Project Scope and Data Gathering](#step1)
   * [Scope](#scope)
   * [Data Description](#data_desc)
3. [Step 2: Data Exploration & Modeling](#step2)
4. [Step 3: Define the Data Model](#step3)
   * [3.1 Conceptual Data Model](#data_model)
   * [3.2 Mapping Out Data Pipelines](#pipeline_steps)
5. [Step 4: Run Pipelines to Model the Data](#step4)
6. [Step 5: Complete Project Write Up](#step5)
Imports
###Code
from datetime import datetime
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from pyspark.sql import SparkSession
from pyspark.sql.types import DateType
import pyspark.sql.functions as F
from pyspark.sql.functions import udf, rand
from pyspark.sql.functions import isnan, when, count, col
import configparser
import psycopg2
###Output
_____no_output_____
###Markdown
Step 1: Project Scope and Data Gathering
Scope
For my capstone project I developed a data pipeline that creates an analytics database for querying information about immigration into the U.S. The analytics tables are hosted in a Redshift database and the pipeline implementation was done using Apache Airflow.
Data Description
The following datasets were used to create the analytics database:
* I94 Immigration Data: This data comes from the US National Tourism and Trade Office, found [here](https://travel.trade.gov/research/reports/i94/historical/2016.html). Each report contains international visitor arrival statistics by world regions and select countries (including top 20), type of visa, mode of transportation, age groups, states visited (first intended address only), and the top ports of entry (for select countries).
* World Temperature Data: This dataset came from Kaggle, found [here](https://www.kaggle.com/berkeleyearth/climate-change-earth-surface-temperature-data).
* U.S. City Demographic Data: This dataset contains information about the demographics of all US cities and census-designated places with a population greater than or equal to 65,000. The dataset comes from OpenSoft, found [here](https://public.opendatasoft.com/explore/dataset/us-cities-demographics/export/).
* Airport Code Table: This is a simple table of airport codes and corresponding cities. An airport code may be either an IATA airport code, a three-letter code used in passenger reservation, ticketing and baggage-handling systems, or an ICAO airport code, a four-letter code used by ATC systems and for airports that do not have an IATA airport code (from Wikipedia). It comes from [here](https://datahub.io/core/airport-codes#data).
I94 Immigration Data pull
###Code
spark = SparkSession.builder.\
config("spark.jars.packages","saurfang:spark-sas7bdat:2.0.0-s_2.11")\
.enableHiveSupport().getOrCreate()
imm_data = spark.read.parquet("sas_data")
print(imm_data.count())
imm_data.limit(10).toPandas()
###Output
3096313
###Markdown
U.S. City Demographic Data pull
###Code
city_dem_data = pd.read_csv('data/us-cities-demographics.csv', sep=';')
print(city_dem_data.info())
city_dem_data.head()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 2891 entries, 0 to 2890
Data columns (total 12 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 City 2891 non-null object
1 State 2891 non-null object
2 Median Age 2891 non-null float64
3 Male Population 2888 non-null float64
4 Female Population 2888 non-null float64
5 Total Population 2891 non-null int64
6 Number of Veterans 2878 non-null float64
7 Foreign-born 2878 non-null float64
8 Average Household Size 2875 non-null float64
9 State Code 2891 non-null object
10 Race 2891 non-null object
11 Count 2891 non-null int64
dtypes: float64(6), int64(2), object(4)
memory usage: 271.2+ KB
None
###Markdown
Airport Code Data
###Code
airport_code_data = pd.read_csv('data/airport-codes_csv.csv')
print(airport_code_data.info())
airport_code_data.head()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 55075 entries, 0 to 55074
Data columns (total 12 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 ident 55075 non-null object
1 type 55075 non-null object
2 name 55075 non-null object
3 elevation_ft 48069 non-null float64
4 continent 27356 non-null object
5 iso_country 54828 non-null object
6 iso_region 55075 non-null object
7 municipality 49399 non-null object
8 gps_code 41030 non-null object
9 iata_code 9189 non-null object
10 local_code 28686 non-null object
11 coordinates 55075 non-null object
dtypes: float64(1), object(11)
memory usage: 5.0+ MB
None
###Markdown
World Temperature Data
###Code
temp_data = pd.read_csv('data/GlobalLandTemperaturesByCity.csv')
print(temp_data.info())
temp_data.head()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 8599212 entries, 0 to 8599211
Data columns (total 7 columns):
# Column Dtype
--- ------ -----
0 dt object
1 AverageTemperature float64
2 AverageTemperatureUncertainty float64
3 City object
4 Country object
5 Latitude object
6 Longitude object
dtypes: float64(2), object(5)
memory usage: 459.2+ MB
None
###Markdown
Step 2: Data Exploration & Modeling
Data Prep
###Code
def sas_program_file_value_parser(sas_source_file, value, columns):
"""Parses SAS Program file to return value as pandas dataframe
Args:
sas_source_file (str): SAS source code file.
value (str): sas value to extract.
columns (list): list of 2 containing column names.
Return:
pandas.DataFrame of the parsed (code, value) pairs, using the given column names.
"""
file_string = ''
with open(sas_source_file) as f:
file_string = f.read()
file_string = file_string[file_string.index(value):]
file_string = file_string[:file_string.index(';')]
line_list = file_string.split('\n')[1:]
codes = []
values = []
for line in line_list:
if '=' in line:
code, val = line.split('=')
code = code.strip()
val = val.strip()
if code[0] == "'":
code = code[1:-1]
if val[0] == "'":
val = val[1:-1]
codes.append(code)
values.append(val)
return pd.DataFrame(zip(codes,values), columns=columns)
i94cit_res = sas_program_file_value_parser('data/I94_SAS_Labels_Descriptions.SAS', 'i94cntyl', ['code', 'country'])
i94cit_res.head()
i94port = sas_program_file_value_parser('data/I94_SAS_Labels_Descriptions.SAS', 'i94prtl', ['code', 'port'])
i94port.head()
i94mode = sas_program_file_value_parser('data/I94_SAS_Labels_Descriptions.SAS', 'i94model', ['code', 'mode'])
i94mode.head()
i94addr = sas_program_file_value_parser('data/I94_SAS_Labels_Descriptions.SAS', 'i94addrl', ['code', 'addr'])
i94addr.head()
i94visa = sas_program_file_value_parser('data/I94_SAS_Labels_Descriptions.SAS', 'I94VISA', ['code', 'type'])
i94visa.head()
###Output
_____no_output_____
###Markdown
I94 Immigration Data prep
###Code
imm_data.printSchema()
imm_data.limit(10).toPandas()
###Output
_____no_output_____ |
Flopez_lstm_stock_predictor_closing.ipynb | ###Markdown
LSTM Stock Predictor Using Closing Prices
In this notebook, you will build and train a custom LSTM RNN that uses a 10 day window of Bitcoin closing prices to predict the 11th day closing price. You will need to:
1. Prepare the data for training and testing
2. Build and train a custom LSTM RNN
3. Evaluate the performance of the model
Data Preparation
In this section, you will need to prepare the training and testing data for the model. The model will use a rolling 10 day window to predict the 11th day closing price. You will need to:
1. Use the `window_data` function to generate the X and y values for the model.
2. Split the data into 70% training and 30% testing
3. Apply the MinMaxScaler to the X and y values
4. Reshape the X_train and X_test data for the model. Note: The required input format for the LSTM is:
```python
reshape((X_train.shape[0], X_train.shape[1], 1))
```
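As a quick, self-contained illustration of the rolling-window idea (a toy sketch using a made-up 6-value series, not the assignment data), each row of X holds `window` consecutive values and y holds the value that immediately follows that window:
```python
# Toy illustration of a rolling window: each row of X holds `window` consecutive
# values and y holds the value that immediately follows that window.
import numpy as np

series = np.arange(1, 7)   # [1, 2, 3, 4, 5, 6]
window = 3
X = np.array([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]
print(X)  # rows: [1 2 3], [2 3 4], [3 4 5]
print(y)  # [4 5 6]
```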
###Code
import numpy as np
import pandas as pd
import hvplot.pandas
# Set the random seed for reproducibility
# Note: This is for the homework solution, but it is good practice to comment this out and run multiple experiments to evaluate your model
from numpy.random import seed
seed(1)
from tensorflow import random
random.set_seed(2)
# Load the fear and greed sentiment data for Bitcoin
df = pd.read_csv('btc_sentiment.csv', index_col="date", infer_datetime_format=True, parse_dates=True)
df = df.drop(columns="fng_classification")
df.head()
# Load the historical closing prices for bitcoin
df2 = pd.read_csv('btc_historic.csv', index_col="Date", infer_datetime_format=True, parse_dates=True)['Close']
df2 = df2.sort_index()
df2.tail()
# Join the data into a single DataFrame
df = df.join(df2, how="inner")
df.tail()
df.head()
# This function accepts the column number for the features (X) and the target (y)
# It chunks the data up with a rolling window of Xt-n to predict Xt
# It returns a numpy array of X any y
def window_data(df, window, feature_col_number, target_col_number):
X = []
y = []
for i in range(len(df) - window - 1):
features = df.iloc[i:(i + window), feature_col_number]
target = df.iloc[(i + window), target_col_number]
X.append(features)
y.append(target)
return np.array(X), np.array(y).reshape(-1, 1)
# Predict Closing Prices using a 10 day window of previous closing prices
# Try a window size anywhere from 1 to 10 and see how the model performance changes
window_size = 1
# Column index 1 is the `Close` column
feature_column = 1
target_column = 1
X, y = window_data(df, window_size, feature_column, target_column)
# Use 70% of the data for training and the remaineder for testing
# YOUR CODE HERE!
split = int(0.7 * len(X))
X_train = X[: split - 1]
X_test = X[split:]
y_train = y[: split - 1]
y_test = y[split:]
# Use MinMaxScaler to scale the data between 0 and 1.
# YOUR CODE HERE!
# Importing the MinMaxScaler from sklearn
from sklearn.preprocessing import MinMaxScaler
# Create a MinMaxScaler object
scaler = MinMaxScaler()
# Fit the MinMaxScaler object with the features data X
scaler.fit(X)
# Scale the features training and testing sets
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)
# Fit the MinMaxScaler object with the target data Y
scaler.fit(y)
# Scale the target training and testing sets
y_train = scaler.transform(y_train)
y_test = scaler.transform(y_test)
# Reshape the features for the model
# YOUR CODE HERE!
X_train = X_train.reshape((X_train.shape[0], X_train.shape[1], 1))
X_test = X_test.reshape((X_test.shape[0], X_test.shape[1], 1))
# Print some sample data after reshaping the datasets
print (f"X_train sample values:\n{X_train[:3]} \n")
print (f"X_test sample values:\n{X_test[:3]}")
###Output
X_train sample values:
[[[0.60761794]]
[[0.58242373]]
[[0.62172321]]]
X_test sample values:
[[[0.03974167]]
[[0.04528668]]
[[0.04528668]]]
###Markdown
---
Build and Train the LSTM RNN
In this section, you will design a custom LSTM RNN and fit (train) it using the training data. You will need to:
1. Define the model architecture
2. Compile the model
3. Fit the model to the training data
Hints: You will want to use the same model architecture and random seed for both notebooks. This is necessary to accurately compare the performance of the FNG model vs the closing price model.
###Code
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, Dropout
# Build the LSTM model.
# The return sequences need to be set to True if you are adding additional LSTM layers, but
# You don't have to do this for the final layer.
# YOUR CODE HERE!
# Define the LSTM RNN model.
model = Sequential()
# Initial model setup
number_units = 30
dropout_fraction = 0.2
# Layer 1
model.add(LSTM(
units=number_units,
return_sequences=True,
input_shape=(X_train.shape[1], 1))
)
model.add(Dropout(dropout_fraction))
# Layer 2
model.add(LSTM(units=number_units, return_sequences=True))
model.add(Dropout(dropout_fraction))
# Layer 3
model.add(LSTM(units=number_units))
model.add(Dropout(dropout_fraction))
# Output layer
model.add(Dense(1))
# Compile the model
# YOUR CODE HERE!
model.compile(optimizer="adam", loss="mean_squared_error")
# Summarize the model
# YOUR CODE HERE!
model.summary()
# Train the model
# Use at least 10 epochs
# Do not shuffle the data
# Experiement with the batch size, but a smaller batch size is recommended
# YOUR CODE HERE!
model.fit(X_train, y_train, epochs=10, shuffle=False, batch_size=1, verbose=1)
###Output
Epoch 1/10
377/377 [==============================] - 1s 3ms/step - loss: 0.0509
Epoch 2/10
377/377 [==============================] - 1s 3ms/step - loss: 0.0246
Epoch 3/10
377/377 [==============================] - 1s 3ms/step - loss: 0.0194
Epoch 4/10
377/377 [==============================] - 1s 3ms/step - loss: 0.0123
Epoch 5/10
377/377 [==============================] - 1s 3ms/step - loss: 0.0079
Epoch 6/10
377/377 [==============================] - 1s 3ms/step - loss: 0.0060
Epoch 7/10
377/377 [==============================] - 1s 3ms/step - loss: 0.0043
Epoch 8/10
377/377 [==============================] - 1s 3ms/step - loss: 0.0041
Epoch 9/10
377/377 [==============================] - 1s 3ms/step - loss: 0.0035
Epoch 10/10
377/377 [==============================] - 1s 3ms/step - loss: 0.0034
###Markdown
---
Model Performance
In this section, you will evaluate the model using the test data. You will need to:
1. Evaluate the model using the `X_test` and `y_test` data.
2. Use the X_test data to make predictions
3. Create a DataFrame of Real (y_test) vs predicted values.
4. Plot the Real vs predicted values as a line chart
Hints: Remember to apply the `inverse_transform` function to the predicted and y_test values to recover the actual closing prices.
###Code
# Evaluate the model
# YOUR CODE HERE!
model.evaluate(X_test, y_test)
# Make some predictions
# YOUR CODE HERE!
predicted = model.predict(X_test)
# Recover the original prices instead of the scaled version
predicted_prices = scaler.inverse_transform(predicted)
real_prices = scaler.inverse_transform(y_test.reshape(-1, 1))
# Create a DataFrame of Real and Predicted values
stocks = pd.DataFrame({
"Real": real_prices.ravel(),
"Predicted": predicted_prices.ravel()
})
stocks.head()
# Plot the real vs predicted values as a line chart
# YOUR CODE HERE!
stocks.plot(title="Real Vs. Predicted BTC Prices")
###Output
_____no_output_____ |
notebooks/development/hyperparameter_search.ipynb | ###Markdown
Hyperparameter grid search
NB: the input data to the DNN is not normalised.
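If normalisation were desired, a minimal sketch of standardising the inputs might look like the following; this is not used in this notebook, and the array shape is a placeholder matching the default input length of 150 assumed by `create_fcn`:
```python
# Minimal standardisation sketch (not part of this notebook's pipeline).
import numpy as np
from sklearn.preprocessing import StandardScaler

X_train_demo = np.random.rand(20, 150)           # placeholder training data
scaler = StandardScaler().fit(X_train_demo)      # fit on the training split only
X_train_scaled = scaler.transform(X_train_demo)  # zero mean, unit variance per feature
```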
###Code
import sys
from pathlib import Path
from datetime import datetime
from dateutil.tz import gettz
import numpy as np
import pandas as pd
import tensorflow as tf
import tensorflow.keras as keras
from tensorflow.keras.models import Model, Sequential
from tensorflow.keras.layers import Input, Dense, Activation, Dropout
from tensorflow.keras import regularizers
from tensorflow.keras import utils
from tensorflow.keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import GridSearchCV
np.random.seed(757566)
###Output
_____no_output_____
###Markdown
User inputs
###Code
fname = 'GunPoint' # private_dog0_correct_plus
tensorboard_dir = '../logs/tensorboard'
logs_dir = '../logs'
timestamp = '{:%Y-%m-%dT%H:%M}'.format(datetime.now(gettz("Europe/London")))
logs_dir = logs_dir +'/' + timestamp
tensorboard_dir = tensorboard_dir +'/' + timestamp
if 'private' in fname:
fdir = '../data/private_data/private_events_dev2'
else:
fdir = '../data'
###Output
_____no_output_____
###Markdown
Utilities
###Code
def readucr(filename):
''' Load a dataset from a file in UCR format
space delimited, class labels in the first column.
Returns
X : DNN input data
Y : class labels
'''
data = np.loadtxt(Path(filename))
Y = data[:,0]
X = data[:,1:]
return X, Y
def prepare_data(y):
''' Return y as a categorical array'''
nb_classes = 2
y = (y - y.min())/(y.max()-y.min())*(nb_classes-1)
Y = utils.to_categorical(y, nb_classes)
return Y
###Output
_____no_output_____
###Markdown
Create model
###Code
# Hyperparameter grid search adapted from Machine Learning Mastery
# https://machinelearningmastery.com/grid-search-hyperparameters-deep-learning-models-python-keras/
# Use scikit-learn to grid search the batch size and epochs
# Function to create model, will be parse to KerasClassifier
def create_fcn(input_shape=(150,1), num_features0=100, num_features1=100, filter_size=10, pooling_size=3, dropout=0.5):
''' Return FCN model '''
nb_classes = 2
x = Input(shape=(input_shape))
conv_x = keras.layers.Conv1D(num_features0, filter_size, activation='relu')(x)
conv_x = keras.layers.Conv1D(num_features0, filter_size, activation='relu')(conv_x)
conv_x = keras.layers.MaxPooling1D(pooling_size)(conv_x)
conv_x = keras.layers.Conv1D(num_features1, filter_size, activation='relu')(conv_x)
conv_x = keras.layers.Conv1D(num_features1, filter_size, activation='relu')(conv_x)
full = keras.layers.GlobalAveragePooling1D()(conv_x)
y = Dropout(dropout,name='Dropout')(full)
out = Dense(nb_classes, activation='sigmoid')(full)
model = Model(x, out)
# Compile model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['acc'])
return model
###Output
_____no_output_____
###Markdown
Run
###Code
# load dataset
x_train, y_train = readucr(fdir+'/'+fname+'/'+fname+'_TRAIN.txt')
x_test, y_test = readucr(fdir+'/'+fname+'/'+fname+'_TEST.txt')
X = np.concatenate((x_train, x_test), axis=0)
Y = np.concatenate((y_train, y_test), axis=0)
X = X.reshape(X.shape + (1,))
input_shape = X.shape[1:]
print(input_shape)
Y = prepare_data(Y)
# Add callbacks
if False:
callbacks = []
tb_dir = tensorboard_dir+'/'+fname
Path(tb_dir).mkdir(parents=True, exist_ok=True)
callbacks.append(keras.callbacks.TensorBoard(log_dir=tb_dir, histogram_freq=0))
# define the grid search parameters
batch_size = 32
epochs = 10
num_features0 = 64
num_features1 = [64, 128, 256]
filter_size = 4
pooling_size = 4
dropout = [0.2, 0.5]
param_grid = dict(num_features1=num_features1, dropout=dropout)
# Create model and run the grid search
model = KerasClassifier(build_fn=create_fcn,
input_shape=input_shape,
num_features0=num_features0, filter_size=filter_size, pooling_size=pooling_size,
batch_size=batch_size, epochs=epochs,
verbose=1)
grid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=1, error_score=0) #fit_params={'callbacks': callbacks})
grid_result = grid.fit(X, Y)
# Summarise results
print('Best score:', grid_result.best_score_, 'using: ', grid_result.best_params_)
cv = pd.DataFrame(grid_result.cv_results_)
pd.set_option('display.max_colwidth', -1)
cv[['mean_test_score', 'std_test_score', 'params']]
cv
print('Completed at', '{:%Y-%m-%dT%H:%M}'.format(datetime.now(gettz("Europe/London"))))
Path(logs_dir+'/'+fname).mkdir(parents=True, exist_ok=True)
cv.to_csv(Path(logs_dir+'/'+fname+'/grid_search_summary.csv'))
print('Results saved to', Path(logs_dir+'/'+fname+'/grid_search_summary.csv'))
###Output
_____no_output_____ |
Comparing Results with that of Hybrid Model.ipynb | ###Markdown
Comparing Results
We need to compare our results from machine learning to the previous results achieved through the Hybrid Model.
###Code
#Read dataset
from pandas import read_csv
import numpy as np
fileName1 = 'Data/Results/MissingPPIsResultsFromHybridModel.txt'
fileName2 = 'Data/Results/PredictionResultsPPIs.txt'
names = ['Prot1', 'Prot2']
resultsFromHybridModel = read_csv(fileName1, delimiter='\t', names=names)
resultsFromMachineLearning = read_csv(fileName2, delimiter='\t', names=names)
HMData = np.array(resultsFromHybridModel.values)
MLData = np.array(resultsFromMachineLearning.values)
print 'Shape HMdata: ', HMData.shape
print 'Shape MLData: ', MLData.shape
# Intersect the two results to find PPI predictions common between both prediction methodologies
_MLData = ['\t'.join(ppi) for ppi in np.sort(MLData)]
_HMData = ['\t'.join(ppi) for ppi in np.sort(HMData)]
intersection = list(set(_MLData) & set(_HMData))
commonMissingPPIs = np.array([ppi.split('\t') for ppi in intersection])
print 'PPIs common in both HM and ML:\n', commonMissingPPIs
print 'Shape: ', commonMissingPPIs.shape
print ('Percentage of Common Missing PPIs to the Hybrid Model: %.2f%%' % (len(commonMissingPPIs) * 100 / len(HMData)))
###Output
Percentage of Common Missing PPIs to the Hybrid Model: 9.00%
|
notebooks/zinv/3_background_prediction/1_wjets/0_fit_test.ipynb | ###Markdown
Cross-check the results from Higgs combine against the results from the dftools.fitting implementation.
###Code
import copy
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import scipy.stats
import numdifftools
import iminuit
import dftools
df = pd.read_csv("/vols/build/cms/sdb15/ZinvWidth/HiggsCombine/CMSSW_10_2_13/src/HiggsAnalysis/CombinedLimit/data/tutorials/shapes/simple-shapes-df_input.csv")
df.columns = ["region", "process", "variation", "bin_min", "sum_w", "sum_ww"]
df["bin_max"] = df["bin_min"]+1.
df = df.set_index(["region", "process", "variation", "bin_min", "bin_max"])
df_data = df.loc[(
(df.index.get_level_values("region")=="bin1")
& (df.index.get_level_values("process")=="data_obs")
)].groupby(["region", "bin_min", "bin_max"]).sum()
df_mc = df.loc[(
(df.index.get_level_values("region")=="bin1")
& (df.index.get_level_values("process").isin(["signal", "background"]))
)].reset_index()
df_mc.loc[df_mc["variation"]=="nominal", "variation"] = ""
df_mc = df_mc.groupby(["region", "process", "variation", "bin_min", "bin_max"]).sum()
tdf_mc = df_mc.loc[
(df_mc.index.get_level_values("process")=="signal")
& (df_mc.index.get_level_values("variation")==""), :
].reset_index()
tdf_mc.loc[:,"variation"] = "lumiUp"
df_mc = pd.concat([df_mc, tdf_mc.set_index(["region", "process", "variation", "bin_min", "bin_max"])*1.1], sort=True)
tdf_mc.loc[:,"variation"] = "lumiDown"
df_mc = pd.concat([df_mc, tdf_mc.set_index(["region", "process", "variation", "bin_min", "bin_max"])/1.1], sort=True)
tdf_mc = df_mc.loc[
(df_mc.index.get_level_values("process")=="background")
& (df_mc.index.get_level_values("variation")==""), :
].reset_index()
tdf_mc.loc[:,"variation"] = "bgnormUp"
df_mc = pd.concat([df_mc, tdf_mc.set_index(["region", "process", "variation", "bin_min", "bin_max"])*1.3], sort=True)
tdf_mc.loc[:,"variation"] = "bgnormDown"
df_mc = pd.concat([df_mc, tdf_mc.set_index(["region", "process", "variation", "bin_min", "bin_max"])/1.3], sort=True)
df_data = df_data.iloc[:-1]
df_mc = pd.pivot_table(df_mc, index=["bin_min", "bin_max"], values=["sum_w", "sum_ww"], columns=["region", "process", "variation"]).iloc[:-1]
df_mc = df_mc.stack().stack().stack().reset_index().set_index(["region", "process", "variation", "bin_min", "bin_max"]).sort_index()
bins = (df_data.index.get_level_values("bin_min").values, df_data.index.get_level_values("bin_max").values)
config = {
"regions": {"bin1": ["signal", "background"]},
"parameters": [
{"name": "rsignal", "value": 0., "limit": (-5., 5.), "fixed": False, "constraint": "free"},
{"name": "rbackground", "value": 1., "limit": (None, None), "fixed": True, "constraint": "free"},
{"name": "alpha", "value": 0., "limit": (-3., 3.), "fixed": False, "constraint": "gaussian"},
{"name": "sigma", "value": 0., "limit": (-3., 3.), "fixed": False, "constraint": "gaussian"},
{"name": "lumi", "value": 0., "limit": (-3., 3.), "fixed": False, "constraint": "gaussian"},
{"name": "bgnorm", "value": 0., "limit": (-3., 3.), "fixed": False, "constraint": "gaussian"},
{"name": "bin1_mcstat_bin0", "value": 0., "limit": (-10., 10.), "fixed": False, "constraint": "gamma"},
{"name": "bin1_mcstat_bin1", "value": 0., "limit": (-10., 10.), "fixed": False, "constraint": "gamma"},
{"name": "bin1_mcstat_bin2", "value": 0., "limit": (-10., 10.), "fixed": False, "constraint": "gamma"},
{"name": "bin1_mcstat_bin3", "value": 0., "limit": (-10., 10.), "fixed": False, "constraint": "gamma"},
{"name": "bin1_mcstat_bin4", "value": 0., "limit": (-10., 10.), "fixed": False, "constraint": "gamma"},
{"name": "bin1_mcstat_bin5", "value": 0., "limit": (-10., 10.), "fixed": False, "constraint": "gamma"},
{"name": "bin1_mcstat_bin6", "value": 0., "limit": (-10., 10.), "fixed": False, "constraint": "gamma"},
{"name": "bin1_mcstat_bin7", "value": 0., "limit": (-10., 10.), "fixed": False, "constraint": "gamma"},
{"name": "bin1_mcstat_bin8", "value": 0., "limit": (-10., 10.), "fixed": False, "constraint": "gamma"},
{"name": "bin1_mcstat_bin9", "value": 0., "limit": (-10., 10.), "fixed": False, "constraint": "gamma"},
],
"scale_functions": {
("bin1", "signal"): "x, w, p: p['rsignal']",
("bin1", "background"): "x, w, p: p['rbackground']",
}
}
model = dftools.fitting.NLLModel(
df_data, df_mc, config, shape_type="shapeN",
)
minimizer = model.fit(migrad=False, minos=False)
minimizer.migrad()
minimizer.minos()
minimizer.migrad()
model = dftools.fitting.NLLModel(
df_data, df_mc, config, #shape_type="shapeN",
)
minimizer = model.fit(migrad=False, minos=False)
minimizer.migrad()
minimizer.minos()
minimizer.migrad()
model = dftools.fitting.NLLModel(
df_data, df_mc, bins, config, shape_type="shapeN"
)
minimizer = model.fit(migrad=False, minos=False)
minimizer.migrad()
minimizer.migrad()
minimizer.minos()
###Output
_____no_output_____ |
examples/Spotlight.ipynb | ###Markdown
Load the Hello World pdf
###Code
hello_world = os.path.join(sys.prefix, 'etc', 'sparclur', 'resources', 'hello_world_hand_edit.pdf')
#If the above does not load try the below. Otherwise any path to a PDF can be used here.
# hello_world = os.path.join(site.USER_BASE, 'etc', 'sparclur', 'resources', 'hello_world_hand_edit.pdf')
###Output
_____no_output_____
###Markdown
Set the parsers to run with Spotlight
###Code
parsers = ['Poppler', 'MuPDF', 'PDFCPU', 'Ghostscript', 'XPDF', 'PDFMiner', 'QPDF']
###Output
_____no_output_____
###Markdown
Instantiate Spotlight and generate the result
###Code
spotlight = Spotlight(num_workers=5, parsers=parsers)
spotlight_result = spotlight.run(hello_world)
###Output
mutool version 1.16.1
pdftocairo version 21.11.0
Copyright 2005-2021 The Poppler Developers - http://poppler.freedesktop.org
Copyright 1996-2011 Glyph & Cog, LLC
###Markdown
Display the report for the validity of the original file and all of the cleaned-up versions for each parser run through Spotlight
###Code
spotlight_result.validity_report()
###Output
_____no_output_____
###Markdown
Since Poppler rejects the cleaned-up version generated by MuPDF, the file's recovery is ambiguous. This can be seen in the call below.
###Code
print(spotlight_result.recoverable())
###Output
Recovery ambigous:
MuPDF: Rejected
###Markdown
Let's see why Poppler rejected the file in the first place. All tools came back valid except font extraction.
###Code
poppler = Poppler(hello_world)
poppler.validity
###Output
_____no_output_____
###Markdown
Here's the extracted font:
###Code
poppler.fonts
###Output
_____no_output_____
###Markdown
Let's clean up the file using Poppler and see what happens to the font:
###Code
poppler_cleanup = poppler.reforge
cleaned_poppler = Poppler(poppler_cleanup)
cleaned_poppler.validity
cleaned_poppler.fonts
###Output
_____no_output_____
###Markdown
Poppler embedded a system font into the cleaned up version. Now let's see what the MuPDF reforged document does for the font extraction:
###Code
mupdf_cleanup = MuPDF(hello_world).reforge
mu_poppler = Poppler(mupdf_cleanup)
mu_poppler.validity
###Output
mutool version 1.16.1
###Markdown
We can see that the MuPDF cleaning process left the font information untouched, which is why Poppler marked this version invalid.
###Code
mu_poppler.fonts
###Output
_____no_output_____
###Markdown
Spotlight also allows us to quickly explore the similarity between each of the reforged versions over each of the parsers. This similarity score is an average of the signatures derived from text extraction, rendering, and trace messages. So a high similarity score means that the reforging process produced the same text, rendering, and messages for each parser.
###Code
spotlight_result.sim_heatmap(report='sim', compare_orig=False, height=5, width=5)
###Output
_____no_output_____ |
Section 3/Part 1/Assignment_sec3_part1.ipynb | ###Markdown
SVMs, Clustering, Semi-Supervised Learning
###Code
%matplotlib inline
from matplotlib import pyplot as plt
from sklearn.datasets import make_blobs
X, Y = make_blobs(n_samples=200, n_features=2, centers=3)
plt.scatter(X[:, 0], X[:, 1], marker='o', c=Y, s=25, edgecolor='k') #c=Y changes color based on label
plt.show()
from sklearn import svm
from sklearn.model_selection import train_test_split
X_train, X_test, Y_train, Y_test = train_test_split(X, Y)
model = svm.SVC(gamma="scale") # Support Vector Classifier = Support Vector Machine
model.fit(X_train, Y_train)
predictions = model.predict(X_test)
plt.scatter(X_test[:, 0], X_test[:, 1], marker='o', c=predictions, s=25, edgecolor='k')
plt.show()
###Output
_____no_output_____
###Markdown
Assignment:
1. Generate a plot of time vs. number of training points for a Support Vector Classifier
2. Generate a plot of performance vs. number of training points for a Support Vector Classifier
3. Evaluate the accuracy (Cluster Purity) of [k-means](https://scikit-learn.org/stable/modules/clustering.html#k-means) on the blobs dataset.
4. After running k-means, randomly select a point from each cluster and provide its label. What average accuracy does this produce? (This can be computed analytically if we know all of the labels.)
Stretch Goals:
- Run all of the above steps with the Fish Dataset from yesterday.
- Evaluate the accuracy (Cluster Purity) of any [clustering algorithm](https://scikit-learn.org/stable/modules/clustering.html#clustering) on any [sklearn dataset](https://scikit-learn.org/stable/datasets/index.html)
- Consider: What can we do for inputs (e.g. images) where we don't have a good distance metric?
###Code
import time
import numpy as np
npts = X_train.shape[0]
duration = []
for n in range(2,npts):
start = time.time()
svc = svm.SVC(gamma="scale").fit(X_train[:n,:], Y_train[:n])
end = time.time()
duration.append(end - start)
fig, ax = plt.subplots()
_ = ax.plot(range(2,npts), duration)
_ = ax.set(xlabel='# of points', ylabel='time')
accuracy = []
for n in range(2,npts):
svc = svm.SVC(gamma="scale").fit(X_train[:n,:], Y_train[:n])
accuracy.append(svc.score(X_test, Y_test))
fig, ax = plt.subplots()
_ = ax.plot(range(2,npts), accuracy, '.-')
_ = ax.set(xlabel='# of points', ylabel='accuracy')
from sklearn.cluster import KMeans
km = KMeans(n_clusters=3).fit(X)
yhat = km.predict(X)
fig, axs = plt.subplots(1,2,figsize=(9.5,4.8))
_ = axs[0].scatter(X[:,0], X[:,1], c=yhat)
_ = axs[1].scatter(X[:,0], X[:,1], c=Y)
# Cluster purity: for each cluster, count the points whose true label matches the cluster's majority label
majority_total = 0
for l in set(km.labels_):
    labels_in_cluster = Y[km.labels_ == l]
    _, counts = np.unique(labels_in_cluster, return_counts=True)
    majority_total += counts.max()
    print('cluster {}: size {}, purity {:.3f}'.format(l, len(labels_in_cluster), counts.max() / len(labels_in_cluster)))
print('overall cluster purity: {:.3f}'.format(majority_total / len(Y)))
for l in set(km.labels_):
x = X[km.labels_==l,:]
y = Y[km.labels_==l]
i = np.random.randint(0,len(x),1).item()
with np.printoptions(precision=6):
print(x[i,:], y[i])
import pandas as pd
df = pd.read_csv('Fish.csv')
df.head()
X = df[['Weight','Length1','Length2','Length3','Height','Width']].values
y = df['Species']
X_train, X_test, y_train, y_test = train_test_split(X, y)
# svc = svm.SVC(gamma="scale").fit(X_train, y_train)
npts = len(X_train)
duration = []
for n in range(5,npts):
start = time.time()
svc = svm.SVC(gamma="scale").fit(X_train[:n,:], y_train[:n])
end = time.time()
duration.append(end - start)
fig, ax = plt.subplots()
_ = ax.plot(range(5,npts), duration)
_ = ax.set(xlabel='# of points', ylabel='time')
###Output
_____no_output_____ |
Part 3 Descriptive Statistics/Study materials/cancer-test-results-solutions.ipynb | ###Markdown
Cancer Test Results
###Code
import pandas as pd
df = pd.read_csv('cancer_test_data.csv')
df.head()
df.shape
# number of patients with cancer
df.has_cancer.sum()
# number of patients without cancer
(df.has_cancer == False).sum()
# proportion of patients with cancer
df.has_cancer.mean()
# proportion of patients without cancer
1 - df.has_cancer.mean()
# proportion of patients with cancer who test positive
(df.query('has_cancer')['test_result'] == 'Positive').mean()
# proportion of patients with cancer who test negative
(df.query('has_cancer')['test_result'] == 'Negative').mean()
# proportion of patients without cancer who test positive
(df.query('has_cancer == False')['test_result'] == 'Positive').mean()
# proportion of patients without cancer who test negative
(df.query('has_cancer == False')['test_result'] == 'Negative').mean()
###Output
_____no_output_____ |
src/Neuronal Links/Redes_Neuronales_Niyo.ipynb | ###Markdown
A Cataclysmic Attempt at Neural Networks xd
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
plt.style.use('fivethirtyeight')
from sklearn.datasets import make_blobs
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.model_selection import RandomizedSearchCV
from sklearn.compose import ColumnTransformer
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import KFold
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import RepeatedKFold
from sklearn.preprocessing import OneHotEncoder
from sklearn.model_selection import ParameterGrid
from sklearn.inspection import permutation_importance
import multiprocessing
import warnings
warnings.filterwarnings('ignore')
from sklearn.metrics import f1_score
labels = pd.read_csv('train_labels.csv')
labels.head()
values = pd.read_csv('train_values.csv')
values.head(10).T
values.isnull().values.any()
labels.isnull().values.any()
important_values = values.merge(labels, on="building_id")
important_values.drop(columns=["building_id"], inplace = True)
important_values
important_values["volume_percentage"] = important_values["area_percentage"] * important_values["height_percentage"]
superstructure_cols = [i for i in important_values.filter(regex='^has_superstructure*').columns]
important_values["any_superstructure"] = important_values[superstructure_cols[0]]
for c in superstructure_cols[1:]:
important_values["any_superstructure"] += important_values[c]
important_values
important_values["age"] = important_values["age"] ** 2
important_values["area_percentage"] = important_values["area_percentage"] ** 2
important_values["height_percentage"] = important_values["height_percentage"] ** 2
###Output
_____no_output_____
###Markdown
Training
###Code
X_train, X_test, y_train, y_test = train_test_split(important_values.drop(columns = 'damage_grade'),
important_values['damage_grade'],
test_size = 0.2,
random_state = 123)
#OneHotEncoding
def encode_and_bind(original_dataframe, feature_to_encode):
dummies = pd.get_dummies(original_dataframe[[feature_to_encode]])
res = pd.concat([original_dataframe, dummies], axis=1)
res = res.drop([feature_to_encode], axis=1)
return(res)
features_to_encode = ["geo_level_1_id", "land_surface_condition", "foundation_type", "roof_type",\
"position", "ground_floor_type", "other_floor_type",\
"plan_configuration", "legal_ownership_status"]
for feature in features_to_encode:
X_train = encode_and_bind(X_train, feature)
X_test = encode_and_bind(X_test, feature)
X_train.head()
# Número de neuronas
# ==============================================================================
param_grid = {'hidden_layer_sizes':[16, 40, 80, 120, 200]}
grid = GridSearchCV(
estimator = MLPClassifier(
learning_rate_init=0.001,
solver = 'lbfgs',
alpha = 0,
max_iter = 50000,
random_state = 123
),
param_grid = param_grid,
scoring = 'accuracy',
cv = 5,
refit = True,
return_train_score = True,
verbose = 1,
n_jobs = -1
)
modelo1 = grid.fit(X_train, y_train)
modelo1
y_preds = modelo1.predict(X_test)
f1_score(y_test, y_preds, average='micro')
param_grid = {'learning_rate_init':[0.0001, 0.001, 0.01, 0.1, 1]}
grid = GridSearchCV(
estimator = MLPClassifier(
hidden_layer_sizes=(20),
solver = 'adam',
alpha = 0,
max_iter = 5000,
random_state = 123
),
param_grid = param_grid,
scoring = 'accuracy',
cv = 5,
refit = True,
return_train_score = True,
verbose = 1,
n_jobs = 8
)
modelo2 = grid.fit(X_train, y_train)
y_preds = modelo2.predict(X_test)
f1_score(y_test, y_preds, average='micro')
###Output
_____no_output_____ |
01 Machine Learning/scikit_examples_jupyter/decomposition/plot_incremental_pca.ipynb | ###Markdown
Incremental PCA
Incremental principal component analysis (IPCA) is typically used as a replacement for principal component analysis (PCA) when the dataset to be decomposed is too large to fit in memory. IPCA builds a low-rank approximation for the input data using an amount of memory which is independent of the number of input data samples. It is still dependent on the input data features, but changing the batch size allows for control of memory usage.
This example serves as a visual check that IPCA is able to find a similar projection of the data to PCA (to a sign flip), while only processing a few samples at a time. This can be considered a "toy example", as IPCA is intended for large datasets which do not fit in main memory, requiring incremental approaches.
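As a small supplementary sketch of the out-of-core usage described above (not part of the original example), IncrementalPCA can also be fed one batch at a time via `partial_fit`, so memory use is bounded by the batch size; the random batches below are placeholders for chunks that would be streamed from disk:
```python
# Out-of-core sketch: feed IncrementalPCA one batch at a time with partial_fit.
# The random batches stand in for chunks that would be read from disk.
import numpy as np
from sklearn.decomposition import IncrementalPCA

rng = np.random.RandomState(0)
ipca_stream = IncrementalPCA(n_components=2)
for _ in range(20):
    batch = rng.rand(100, 10)        # 100 samples, 10 features per chunk
    ipca_stream.partial_fit(batch)   # memory footprint ~ one batch
print(ipca_stream.explained_variance_ratio_)
```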
###Code
print(__doc__)
# Authors: Kyle Kastner
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA, IncrementalPCA
iris = load_iris()
X = iris.data
y = iris.target
n_components = 2
ipca = IncrementalPCA(n_components=n_components, batch_size=10)
X_ipca = ipca.fit_transform(X)
pca = PCA(n_components=n_components)
X_pca = pca.fit_transform(X)
colors = ['navy', 'turquoise', 'darkorange']
for X_transformed, title in [(X_ipca, "Incremental PCA"), (X_pca, "PCA")]:
plt.figure(figsize=(8, 8))
for color, i, target_name in zip(colors, [0, 1, 2], iris.target_names):
plt.scatter(X_transformed[y == i, 0], X_transformed[y == i, 1],
color=color, lw=2, label=target_name)
if "Incremental" in title:
err = np.abs(np.abs(X_pca) - np.abs(X_ipca)).mean()
plt.title(title + " of iris dataset\nMean absolute unsigned error "
"%.6f" % err)
else:
plt.title(title + " of iris dataset")
plt.legend(loc="best", shadow=False, scatterpoints=1)
plt.axis([-4, 4, -1.5, 1.5])
plt.show()
###Output
_____no_output_____ |
Tensorflow/Courses/0_Intro_to_tf/8_week3_Assignment.ipynb | ###Markdown
Week 3: Improve MNIST with Convolutions
In the videos you looked at how you would improve Fashion MNIST using Convolutions. For this exercise see if you can improve MNIST to 99.5% accuracy or more by adding only a single convolutional layer and a single MaxPooling 2D layer to the model from the assignment of the previous week. You should stop training once the accuracy goes above this amount. It should happen in less than 10 epochs, so it's ok to hard code the number of epochs for training, but your training must end once it hits the above metric. If it doesn't, then you'll need to redesign your callback. When 99.5% accuracy has been hit, you should print out the string "Reached 99.5% accuracy so cancelling training!"
###Code
import os
import numpy as np
import tensorflow as tf
from tensorflow import keras
###Output
_____no_output_____
###Markdown
Begin by loading the data. A couple of things to notice:
- The file `mnist.npz` is already included in the current workspace under the `data` directory. By default `load_data` from Keras accepts a path relative to `~/.keras/datasets`, but in this case it is stored somewhere else, so you need to specify the full path.
- `load_data` returns the train and test sets in the form of the tuples `(x_train, y_train), (x_test, y_test)`, but in this exercise you will be needing only the train set, so you can ignore the second tuple.
###Code
# Load the data
# Get current working directory
current_dir = os.getcwd()
# Append data/mnist.npz to the previous path to get the full path
data_path = os.path.join(current_dir, "data/mnist.npz")
# Get only training set
(training_images, training_labels), _ = tf.keras.datasets.mnist.load_data(path=data_path)
###Output
_____no_output_____
###Markdown
One important step when dealing with image data is to preprocess the data. During the preprocessing step you can apply transformations to the dataset that will be fed into your convolutional neural network.
Here you will apply two transformations to the data:
- Reshape the data so that it has an extra dimension. The reason for this is that commonly you will use 3-dimensional arrays (without counting the batch dimension) to represent image data. The third dimension represents the color using RGB values. This data might be in black and white format so the third dimension doesn't really add any additional information for the classification process, but it is a good practice regardless.
- Normalize the pixel values so that these are values between 0 and 1. You can achieve this by dividing every value in the array by the maximum.
Remember that these tensors are of type `numpy.ndarray` so you can use functions like [reshape](https://numpy.org/doc/stable/reference/generated/numpy.reshape.html) or [divide](https://numpy.org/doc/stable/reference/generated/numpy.divide.html) to complete the `reshape_and_normalize` function below:
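As a side note (not part of the graded code), the same transformation described above can be written compactly with `reshape` and broadcasting; the function name below is made up and shown only for comparison:
```python
# Compact sketch of the transformation described above, for comparison only.
import numpy as np

def reshape_and_normalize_sketch(images):
    images = images.reshape(*images.shape, 1)  # append a trailing channel dimension
    return images / np.max(images)             # scale pixel values into [0, 1]
```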
###Code
# GRADED FUNCTION: reshape_and_normalize
def reshape_and_normalize(images):
### START CODE HERE
# Reshape the images to add an extra dimension
(size,len_x,len_y)=np.shape(images)
images_temp=np.zeros((size,len_x,len_y,1),dtype=float)
images_temp[:,:,:,0] = images
# Normalize pixel values
images_temp = images_temp/255.
### END CODE HERE
return images_temp
###Output
_____no_output_____
###Markdown
Test your function with the next cell:
###Code
# Reload the images in case you run this cell multiple times
(training_images, _), _ = tf.keras.datasets.mnist.load_data(path=data_path)
# Apply your function
training_images = reshape_and_normalize(training_images)
print(f"Maximum pixel value after normalization: {np.max(training_images)}\n")
print(f"Shape of training set after reshaping: {training_images.shape}\n")
print(f"Shape of one image after reshaping: {training_images[0].shape}")
###Output
Maximum pixel value after normalization: 1.0
Shape of training set after reshaping: (60000, 28, 28, 1)
Shape of one image after reshaping: (28, 28, 1)
###Markdown
**Expected Output:**
```
Maximum pixel value after normalization: 1.0
Shape of training set after reshaping: (60000, 28, 28, 1)
Shape of one image after reshaping: (28, 28, 1)
```
Now complete the callback that will ensure that training will stop after an accuracy of 99.5% is reached:
###Code
# GRADED CLASS: myCallback
### START CODE HERE
# Remember to inherit from the correct class
class myCallback(tf.keras.callbacks.Callback):
# Define the correct function signature for on_epoch_end
def on_epoch_end(self,epoch,logs={}):
if logs.get('accuracy') is not None and logs.get('accuracy') > 0.99: # @KEEP
print("\nReached 99% accuracy so cancelling training!")
# Stop training once the above condition is met
self.model.stop_training=True
### END CODE HERE
###Output
_____no_output_____
###Markdown
Finally, complete the `convolutional_model` function below. This function should return your convolutional neural network:
###Code
# GRADED FUNCTION: convolutional_model
def convolutional_model():
### START CODE HERE
# Define the model, it should have 5 layers:
# - A Conv2D layer with 32 filters, a kernel_size of 3x3, ReLU activation function
# and an input shape that matches that of every image in the training set
# - A MaxPooling2D layer with a pool_size of 2x2
# - A Flatten layer with no arguments
# - A Dense layer with 128 units and ReLU activation function
# - A Dense layer with 10 units and softmax activation function
callbacks=myCallback()
model = tf.keras.models.Sequential([
tf.keras.layers.Conv2D(32,(3,3),activation='relu',\
input_shape=(28,28,1)),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128,activation='relu'),
tf.keras.layers.Dense(10, activation='softmax')
])
### END CODE HERE
# Compile the model
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
return model
# Save your untrained model
model = convolutional_model()
# Instantiate the callback class
callbacks = myCallback()
# Train your model (this can take up to 5 minutes)
history = model.fit(training_images, training_labels, epochs=10, callbacks=[callbacks])
###Output
Epoch 1/10
1875/1875 [==============================] - 34s 18ms/step - loss: 0.1581 - accuracy: 0.9524
Epoch 2/10
1875/1875 [==============================] - 33s 18ms/step - loss: 0.0540 - accuracy: 0.9836
Epoch 3/10
1875/1875 [==============================] - 33s 18ms/step - loss: 0.0337 - accuracy: 0.9898
Epoch 4/10
1875/1875 [==============================] - ETA: 0s - loss: 0.0224 - accuracy: 0.9928
Reached 99% accuracy so cancelling training!
1875/1875 [==============================] - 33s 18ms/step - loss: 0.0224 - accuracy: 0.9928
###Markdown
If you see the message that you defined in your callback printed out after less than 10 epochs it means your callback worked as expected. You can also double check by running the following cell:
###Code
print(f"Your model was trained for {len(history.epoch)} epochs")
###Output
Your model was trained for 4 epochs
|
MM_material/lecture-materials/functions/self-study_notebooks/functions1.ipynb | ###Markdown
Functions I: basics
**Functions** are one of the most important constructs in computer programming. A function is a single command which, when executed, performs some operations and may return a value. Functions allow one to reuse, abstract and modularise pieces of code to perform a particular task: compared with manually copying and pasting sections of code this approach affords the following benefits.
1) *Efficiency*: there is no point in duplicating work! To perform a task it's easier to call a function than copy and paste a section of code. Furthermore, you can import already implemented functions into new programs to again save time and effort.
2) *Reliability*: using functions will make your code more readable (so long as you choose sensible function names!) and the program logic clearer. This naturally speeds up development time and makes mistakes less likely and easier to fix. In addition, compared with repeatedly copying and pasting sections of code, it is much easier to update and manage your code in a consistent and robust fashion using functions.
In short, functions speed up your code development and make your code easier to read, update, manage and debug. The purpose of this tutorial is to provide an overview of the basics of functions in Python: namely how to declare them, their arguments, variables with global vs local scope, and returning values.
###Code
def boldly_print(): # colon ends declaration and begins definition
print("To boldly go")
# return values are optional
boldly_print()
# ---
###Output
_____no_output_____
###Markdown
Arguments (parameters) of a functionJust as in C++, in Python we can pass *arguments* (or *parameters*) to functions in order to modify their behavior.
###Code
def boldly_print_2(k):
for i in range(k):
print("To boldly go")
boldly_print_2(3)
# ---
###Output
_____no_output_____
###Markdown
These arguments can be given *default* values, so that it is not necessary to specify each argument in each function call.
###Code
def boldly_print_3(k, verb="go"): # here we give the verb argument the default go
for i in range(k):
print("To boldly " + verb)
boldly_print_3(2)
# ---
###Output
_____no_output_____
###Markdown
Sometimes it is desirable to use *keyword arguments* when calling a function, so that your code clearly indicates which argument is being supplied which value:
###Code
boldly_print_3(3, "code") # positional approach, inputs must be in the order the function is defined in
# ---
boldly_print_3(k=3, verb="code") # same as above, but using keywords
# ---
###Output
_____no_output_____
###Markdown
If you use positional arguments when calling a function then the order in which the arguments are provided is very important! However, if all arguments are supplied as keyword arguments then the order of the arguments does not matter!
###Code
boldly_print_3(verb="code", k=3)
###Output
_____no_output_____
###Markdown
If one uses a mix of positional and keyword arguments, then keyword arguments must be supplied after all positional arguments:
###Code
boldly_print_3(k = 3,"sing")
# ---
###Output
_____no_output_____
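The call above raises a `SyntaxError` ("positional argument follows keyword argument"). Purely as an illustrative sketch (not part of the original cells), a corrected call simply supplies the positional argument first:
```python
boldly_print_3(3, verb="sing")   # positional argument first, then any keyword arguments
```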
###Markdown
In general, when choosing between calling a function with positional or keyword arguments, try to choose the option that best aids readability and clarity! Scope: The **global scope** is the set of all variables available for use outside of any function.
###Code
x = 3 # available in global scope
x
###Output
_____no_output_____
###Markdown
Functions create a **local scope**. This means: - Variables in the global scope are available within the function. - Variables created within the function are **not** available within the global scope.
###Code
# variables within the global scope are available within the function
def add_2_to_x():
print(x+2)
add_2_to_x()
# ---
# Local or function variables cannot be accessed outside of the function!
def print_y():
y = 2
print(y)
print_y()
y
# ---
###Output
2
###Markdown
Immutable variables in the global scope cannot be modified by functions, even if you use the same variable name: assigning to that name inside the function simply creates a new local variable, leaving the global one untouched.
###Code
# Try and change immutable global variable
def new_x():
x = 7
print(x)
new_x()
print(x)
# ---
###Output
7
3
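For completeness, Python does provide the `global` statement for the rare case where you genuinely want a function to rebind a global name. The snippet below is a minimal illustrative sketch (not part of the original notebook), and it is generally considered poor style:
```python
def set_x_globally():
    global x    # declare that x refers to the global (module-level) name
    x = 7       # this now rebinds the global x instead of creating a local variable

set_x_globally()
print(x)        # prints 7
```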
###Markdown
On the other hand, *mutable* variables in the global scope can be modified by functions. **Such scenarios need to be handled with care**: in particular, if a mutable global variable is accessed and modified by many different functions at runtime, its behaviour can become hard to predict, making erroneous output more likely. We'll discuss the topic of namespaces and the use of global variables in detail in the next tutorial.
###Code
# this works, but it's typically not a good idea.
animals = ["Beaver", "Ant", "Giraffe", "Python"]
def reverse_names():
for i in range(4):
animals[i] = animals[i][::-1]
reverse_names()
animals
###Output
_____no_output_____
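As a point of comparison, here is a minimal sketch (not part of the original example) of the usually safer pattern: pass the mutable object in as an argument and return a new value, so the caller decides what to overwrite (returning values is covered in the next section):
```python
def reversed_names(names):
    # build and return a new list instead of mutating a global in place
    return [name[::-1] for name in names]

animals = reversed_names(animals)   # the caller explicitly rebinds the global name
animals
```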
###Markdown
Return valuesSo far, we've seen examples of functions that print but do not *return* anything. Usually, you will want your function to have one or more return values. These allow the output of a function to be used in future computations.
###Code
def boldly_return(k = 1, verb = "go"):
return(["to boldly " + verb for i in range(k)])
x = boldly_return(k = 2, verb = "dance")
print(x)
###Output
['to boldly dance', 'to boldly dance']
###Markdown
Your function can return multiple values:
###Code
def double_your_number(j):
return(j, 2*j)
x, y = double_your_number(10)
print(x,y)
###Output
_____no_output_____
###Markdown
The `return` statement *immediately* terminates the function's execution and exits its local scope, handing control back to the caller (usually in the global scope). So, for example, a `return` statement can be used to terminate a `while` loop, similar to a `break` statement.
###Code
def largest_power_below(a, upper_bound):
i = 1
while True: # while loop will loop until return statement is reached
i *= a
if a*i >= upper_bound:
return(i)
largest_power_below(3, 10000)
###Output
_____no_output_____ |
Jupyter Notebook Assignments/Emojify_v2a.ipynb | ###Markdown
Emojify! Welcome to the second assignment of Week 2. You are going to use word vector representations to build an Emojifier. Have you ever wanted to make your text messages more expressive? Your emojifier app will help you do that. So rather than writing:>"Congratulations on the promotion! Let's get coffee and talk. Love you!" The emojifier can automatically turn this into:>"Congratulations on the promotion! 👍 Let's get coffee and talk. ☕️ Love you! ❤️"* You will implement a model which inputs a sentence (such as "Let's go see the baseball game tonight!") and finds the most appropriate emoji to be used with this sentence (⚾️). Using word vectors to improve emoji lookups* In many emoji interfaces, you need to remember that ❤️ is the "heart" symbol rather than the "love" symbol. * In other words, you'll have to remember to type "heart" to find the desired emoji, and typing "love" won't bring up that symbol.* We can make a more flexible emoji interface by using word vectors!* When using word vectors, you'll see that even if your training set explicitly relates only a few words to a particular emoji, your algorithm will be able to generalize and associate additional words in the test set to the same emoji. * This works even if those additional words don't even appear in the training set. * This allows you to build an accurate classifier mapping from sentences to emojis, even using a small training set. What you'll build1. In this exercise, you'll start with a baseline model (Emojifier-V1) using word embeddings.2. Then you will build a more sophisticated model (Emojifier-V2) that further incorporates an LSTM. Updates If you were working on the notebook before this update...* The current notebook is version "2a".* You can find your original work saved in the notebook with the previous version name ("v2") * To view the file directory, go to the menu "File->Open", and this will open a new tab that shows the file directory. List of updates* sentence_to_avg * Updated instructions. * Use separate variables to store the total and the average (instead of just `avg`). * Additional hint about how to initialize the shape of `avg` vector.* sentences_to_indices * Updated preceding text and instructions, added additional hints.* pretrained_embedding_layer * Additional instructions to explain how to implement each step.* Emoify_V2 * Modifies instructions to specify which parameters are needed for each Keras layer. * Remind users of Keras syntax. * Explanation of how to use the layer object that is returned by `pretrained_embedding_layer`. * Provides sample Keras code.* Spelling, grammar and wording corrections. Let's get started! Run the following cell to load the package you are going to use.
###Code
import numpy as np
from emo_utils import *
import emoji
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
1 - Baseline model: Emojifier-V1 1.1 - Dataset EMOJISETLet's start by building a simple baseline classifier. You have a tiny dataset (X, Y) where:- X contains 127 sentences (strings).- Y contains an integer label between 0 and 4 corresponding to an emoji for each sentence. **Figure 1**: EMOJISET - a classification problem with 5 classes. A few examples of sentences are given here. Let's load the dataset using the code below. We split the dataset between training (127 examples) and testing (56 examples).
###Code
X_train, Y_train = read_csv('data/train_emoji.csv')
X_test, Y_test = read_csv('data/tesss.csv')
maxLen = len(max(X_train, key=len).split())
###Output
_____no_output_____
###Markdown
Run the following cell to print sentences from X_train and corresponding labels from Y_train. * Change `idx` to see different examples. * Note that due to the font used by iPython notebook, the heart emoji may be colored black rather than red.
###Code
for idx in range(10):
print(X_train[idx], label_to_emoji(Y_train[idx]))
###Output
never talk to me again 😞
I am proud of your achievements 😄
It is the worst day in my life 😞
Miss you so much ❤️
food is life 🍴
I love you mum ❤️
Stop saying bullshit 😞
congratulations on your acceptance 😄
The assignment is too long 😞
I want to go play ⚾
###Markdown
1.2 - Overview of the Emojifier-V1In this part, you are going to implement a baseline model called "Emojifier-v1". **Figure 2**: Baseline model (Emojifier-V1). Inputs and outputs* The input of the model is a string corresponding to a sentence (e.g. "I love you"). * The output will be a probability vector of shape (1,5) (there are 5 emojis to choose from).* The (1,5) probability vector is passed to an argmax layer, which extracts the index of the emoji with the highest probability. One-hot encoding* To get our labels into a format suitable for training a softmax classifier, let's convert $Y$ from its current shape $(m, 1)$ into a "one-hot representation" $(m, 5)$. * Each row is a one-hot vector giving the label of one example. * Here, `Y_oh` stands for "Y-one-hot" in the variable names `Y_oh_train` and `Y_oh_test`:
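`convert_to_one_hot()` is provided in `emo_utils.py`; as a rough sketch, the conversion amounts to indexing the rows of an identity matrix with the labels (a later cell in this notebook uses exactly this `np.eye` trick):
```python
Y_example = np.array([0, 3, 1])
np.eye(5)[Y_example]   # each row is the one-hot encoding of the corresponding label
```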
###Code
Y_oh_train = convert_to_one_hot(Y_train, C = 5)
Y_oh_test = convert_to_one_hot(Y_test, C = 5)
###Output
_____no_output_____
###Markdown
Let's see what `convert_to_one_hot()` did. Feel free to change `idx` to print out different values.
###Code
idx = 50
print(f"Sentence '{X_train[50]}' has label index {Y_train[idx]}, which is emoji {label_to_emoji(Y_train[idx])}", )
print(f"Label index {Y_train[idx]} in one-hot encoding format is {Y_oh_train[idx]}")
###Output
Sentence 'I missed you' has label index 0, which is emoji ❤️
Label index 0 in one-hot encoding format is [ 1. 0. 0. 0. 0.]
###Markdown
All the data is now ready to be fed into the Emojify-V1 model. Let's implement the model! 1.3 - Implementing Emojifier-V1As shown in Figure 2 (above), the first step is to:* Convert each word in the input sentence into their word vector representations.* Then take an average of the word vectors. * Similar to the previous exercise, we will use pre-trained 50-dimensional GloVe embeddings. Run the following cell to load the `word_to_vec_map`, which contains all the vector representations.
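`read_glove_vecs()` is provided in `emo_utils.py`. For intuition, each line of the GloVe text file contains a word followed by its 50 float components, so a loader along these lines builds the three dictionaries. This is only a sketch under that assumption — the exact index bookkeeping in `emo_utils.py` may differ:
```python
def read_glove_vecs_sketch(glove_file):
    word_to_vec_map = {}
    with open(glove_file, encoding='utf-8') as f:
        for line in f:
            parts = line.strip().split()
            word_to_vec_map[parts[0]] = np.array(parts[1:], dtype=np.float64)
    # assign each word an integer index (the real helper's numbering may differ)
    word_to_index = {w: i for i, w in enumerate(sorted(word_to_vec_map))}
    index_to_word = {i: w for w, i in word_to_index.items()}
    return word_to_index, index_to_word, word_to_vec_map
```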
###Code
word_to_index, index_to_word, word_to_vec_map = read_glove_vecs('../../readonly/glove.6B.50d.txt')
###Output
_____no_output_____
###Markdown
You've loaded:- `word_to_index`: dictionary mapping from words to their indices in the vocabulary - (400,001 words, with the valid indices ranging from 0 to 400,000)- `index_to_word`: dictionary mapping from indices to their corresponding words in the vocabulary- `word_to_vec_map`: dictionary mapping words to their GloVe vector representation.Run the following cell to check if it works.
###Code
word = "cucumber"
idx = 289846
print("the index of", word, "in the vocabulary is", word_to_index[word])
print("the", str(idx) + "th word in the vocabulary is", index_to_word[idx])
###Output
the index of cucumber in the vocabulary is 113317
the 289846th word in the vocabulary is potatos
###Markdown
**Exercise**: Implement `sentence_to_avg()`. You will need to carry out two steps:1. Convert every sentence to lower-case, then split the sentence into a list of words. * `X.lower()` and `X.split()` might be useful. 2. For each word in the sentence, access its GloVe representation. * Then take the average of all of these word vectors. * You might use `numpy.zeros()`. Additional Hints* When creating the `avg` array of zeros, you'll want it to be a vector of the same shape as the other word vectors in the `word_to_vec_map`. * You can choose a word that exists in the `word_to_vec_map` and access its `.shape` field. * Be careful not to hard code the word that you access. In other words, don't assume that if you see the word 'the' in the `word_to_vec_map` within this notebook, that this word will be in the `word_to_vec_map` when the function is being called by the automatic grader. * Hint: you can use any one of the word vectors that you retrieved from the input `sentence` to find the shape of a word vector.
###Code
# GRADED FUNCTION: sentence_to_avg
def sentence_to_avg(sentence, word_to_vec_map):
"""
Converts a sentence (string) into a list of words (strings). Extracts the GloVe representation of each word
and averages its value into a single vector encoding the meaning of the sentence.
Arguments:
sentence -- string, one training example from X
word_to_vec_map -- dictionary mapping every word in a vocabulary into its 50-dimensional vector representation
Returns:
avg -- average vector encoding information about the sentence, numpy-array of shape (50,)
"""
### START CODE HERE ###
# Step 1: Split sentence into list of lower case words (≈ 1 line)
words = sentence.lower().split()
# Initialize the average word vector, should have the same shape as your word vectors.
avg = np.zeros((50,))
# Step 2: average the word vectors. You can loop over the words in the list "words".
for w in words:
avg += word_to_vec_map[w]
avg = avg / len(words)
### END CODE HERE ###
return avg
avg = sentence_to_avg("Morrocan couscous is my favorite dish", word_to_vec_map)
print("avg = \n", avg)
###Output
avg =
[-0.008005 0.56370833 -0.50427333 0.258865 0.55131103 0.03104983
-0.21013718 0.16893933 -0.09590267 0.141784 -0.15708967 0.18525867
0.6495785 0.38371117 0.21102167 0.11301667 0.02613967 0.26037767
0.05820667 -0.01578167 -0.12078833 -0.02471267 0.4128455 0.5152061
0.38756167 -0.898661 -0.535145 0.33501167 0.68806933 -0.2156265
1.797155 0.10476933 -0.36775333 0.750785 0.10282583 0.348925
-0.27262833 0.66768 -0.10706167 -0.283635 0.59580117 0.28747333
-0.3366635 0.23393817 0.34349183 0.178405 0.1166155 -0.076433
0.1445417 0.09808667]
###Markdown
**Expected Output**:```Pythonavg =[-0.008005 0.56370833 -0.50427333 0.258865 0.55131103 0.03104983 -0.21013718 0.16893933 -0.09590267 0.141784 -0.15708967 0.18525867 0.6495785 0.38371117 0.21102167 0.11301667 0.02613967 0.26037767 0.05820667 -0.01578167 -0.12078833 -0.02471267 0.4128455 0.5152061 0.38756167 -0.898661 -0.535145 0.33501167 0.68806933 -0.2156265 1.797155 0.10476933 -0.36775333 0.750785 0.10282583 0.348925 -0.27262833 0.66768 -0.10706167 -0.283635 0.59580117 0.28747333 -0.3366635 0.23393817 0.34349183 0.178405 0.1166155 -0.076433 0.1445417 0.09808667]``` ModelYou now have all the pieces to finish implementing the `model()` function. After using `sentence_to_avg()` you need to:* Pass the average through forward propagation* Compute the cost* Backpropagate to update the softmax parameters**Exercise**: Implement the `model()` function described in Figure (2). * The equations you need to implement in the forward pass and to compute the cross-entropy cost are below:* The variable $Y_{oh}$ ("Y one hot") is the one-hot encoding of the output labels. $$ z^{(i)} = W . avg^{(i)} + b$$$$ a^{(i)} = softmax(z^{(i)})$$$$ \mathcal{L}^{(i)} = - \sum_{k = 0}^{n_y - 1} Y_{oh,k}^{(i)} * log(a^{(i)}_k)$$**Note** It is possible to come up with a more efficient vectorized implementation. For now, let's use nested for loops to better understand the algorithm, and for easier debugging.We provided the function `softmax()`, which was imported earlier.
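`softmax()` is imported from `emo_utils.py`; a minimal, numerically stable version consistent with the equation above (a sketch, not necessarily the exact provided implementation) is:
```python
def softmax_sketch(z):
    e = np.exp(z - np.max(z))   # subtracting the max avoids overflow without changing the result
    return e / e.sum()
```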
###Code
# GRADED FUNCTION: model
def model(X, Y, word_to_vec_map, learning_rate = 0.01, num_iterations = 400):
"""
Model to train word vector representations in numpy.
Arguments:
X -- input data, numpy array of sentences as strings, of shape (m, 1)
Y -- labels, numpy array of integers between 0 and 7, numpy-array of shape (m, 1)
word_to_vec_map -- dictionary mapping every word in a vocabulary into its 50-dimensional vector representation
learning_rate -- learning_rate for the stochastic gradient descent algorithm
num_iterations -- number of iterations
Returns:
pred -- vector of predictions, numpy-array of shape (m, 1)
W -- weight matrix of the softmax layer, of shape (n_y, n_h)
b -- bias of the softmax layer, of shape (n_y,)
"""
np.random.seed(1)
# Define number of training examples
m = Y.shape[0] # number of training examples
n_y = 5 # number of classes
n_h = 50 # dimensions of the GloVe vectors
# Initialize parameters using Xavier initialization
W = np.random.randn(n_y, n_h) / np.sqrt(n_h)
b = np.zeros((n_y,))
# Convert Y to Y_onehot with n_y classes
Y_oh = convert_to_one_hot(Y, C = n_y)
# Optimization loop
for t in range(num_iterations): # Loop over the number of iterations
for i in range(m): # Loop over the training examples
### START CODE HERE ### (≈ 4 lines of code)
# Average the word vectors of the words from the i'th training example
avg = sentence_to_avg(X[i], word_to_vec_map)
# Forward propagate the avg through the softmax layer
z = np.dot(W, avg) + b
a = softmax(z)
# Compute cost using the i'th training label's one hot representation and "A" (the output of the softmax)
cost = -np.sum(Y_oh[i] * np.log(a))
### END CODE HERE ###
# Compute gradients
dz = a - Y_oh[i]
dW = np.dot(dz.reshape(n_y,1), avg.reshape(1, n_h))
db = dz
# Update parameters with Stochastic Gradient Descent
W = W - learning_rate * dW
b = b - learning_rate * db
if t % 100 == 0:
print("Epoch: " + str(t) + " --- cost = " + str(cost))
pred = predict(X, Y, W, b, word_to_vec_map) #predict is defined in emo_utils.py
return pred, W, b
print(X_train.shape)
print(Y_train.shape)
print(np.eye(5)[Y_train.reshape(-1)].shape)
print(X_train[0])
print(type(X_train))
Y = np.asarray([5,0,0,5, 4, 4, 4, 6, 6, 4, 1, 1, 5, 6, 6, 3, 6, 3, 4, 4])
print(Y.shape)
X = np.asarray(['I am going to the bar tonight', 'I love you', 'miss you my dear',
'Lets go party and drinks','Congrats on the new job','Congratulations',
'I am so happy for you', 'Why are you feeling bad', 'What is wrong with you',
'You totally deserve this prize', 'Let us go play football',
'Are you down for football this afternoon', 'Work hard play harder',
'It is suprising how people can be dumb sometimes',
'I am very disappointed','It is the best day in my life',
'I think I will end up alone','My life is so boring','Good job',
'Great so awesome'])
print(X.shape)
print(np.eye(5)[Y_train.reshape(-1)].shape)
print(type(X_train))
###Output
(132,)
(132,)
(132, 5)
never talk to me again
<class 'numpy.ndarray'>
(20,)
(20,)
(132, 5)
<class 'numpy.ndarray'>
###Markdown
Run the next cell to train your model and learn the softmax parameters (W,b).
###Code
pred, W, b = model(X_train, Y_train, word_to_vec_map)
print(pred)
###Output
Epoch: 0 --- cost = 1.95204988128
Accuracy: 0.348484848485
Epoch: 100 --- cost = 0.0797181872601
Accuracy: 0.931818181818
Epoch: 200 --- cost = 0.0445636924368
Accuracy: 0.954545454545
Epoch: 300 --- cost = 0.0343226737879
Accuracy: 0.969696969697
[[ 3.]
[ 2.]
[ 3.]
[ 0.]
[ 4.]
[ 0.]
[ 3.]
[ 2.]
[ 3.]
[ 1.]
[ 3.]
[ 3.]
[ 1.]
[ 3.]
[ 2.]
[ 3.]
[ 2.]
[ 3.]
[ 1.]
[ 2.]
[ 3.]
[ 0.]
[ 2.]
[ 2.]
[ 2.]
[ 1.]
[ 4.]
[ 3.]
[ 3.]
[ 4.]
[ 0.]
[ 3.]
[ 4.]
[ 2.]
[ 0.]
[ 3.]
[ 2.]
[ 2.]
[ 3.]
[ 4.]
[ 2.]
[ 2.]
[ 0.]
[ 2.]
[ 3.]
[ 0.]
[ 3.]
[ 2.]
[ 4.]
[ 3.]
[ 0.]
[ 3.]
[ 3.]
[ 3.]
[ 4.]
[ 2.]
[ 1.]
[ 1.]
[ 1.]
[ 2.]
[ 3.]
[ 1.]
[ 0.]
[ 0.]
[ 0.]
[ 3.]
[ 4.]
[ 4.]
[ 2.]
[ 2.]
[ 1.]
[ 2.]
[ 0.]
[ 3.]
[ 2.]
[ 2.]
[ 0.]
[ 3.]
[ 3.]
[ 1.]
[ 2.]
[ 1.]
[ 2.]
[ 2.]
[ 4.]
[ 3.]
[ 3.]
[ 2.]
[ 4.]
[ 0.]
[ 0.]
[ 3.]
[ 3.]
[ 3.]
[ 3.]
[ 2.]
[ 0.]
[ 1.]
[ 2.]
[ 3.]
[ 0.]
[ 2.]
[ 2.]
[ 2.]
[ 3.]
[ 2.]
[ 2.]
[ 2.]
[ 4.]
[ 1.]
[ 1.]
[ 3.]
[ 3.]
[ 4.]
[ 1.]
[ 2.]
[ 1.]
[ 1.]
[ 3.]
[ 1.]
[ 0.]
[ 4.]
[ 0.]
[ 3.]
[ 3.]
[ 4.]
[ 4.]
[ 1.]
[ 4.]
[ 3.]
[ 0.]
[ 2.]]
###Markdown
**Expected Output** (on a subset of iterations): **Epoch: 0** cost = 1.95204988128 Accuracy: 0.348484848485 **Epoch: 100** cost = 0.0797181872601 Accuracy: 0.931818181818 **Epoch: 200** cost = 0.0445636924368 Accuracy: 0.954545454545 **Epoch: 300** cost = 0.0343226737879 Accuracy: 0.969696969697 Great! Your model has pretty high accuracy on the training set. Lets now see how it does on the test set. 1.4 - Examining test set performance * Note that the `predict` function used here is defined in emo_util.spy.
###Code
print("Training set:")
pred_train = predict(X_train, Y_train, W, b, word_to_vec_map)
print('Test set:')
pred_test = predict(X_test, Y_test, W, b, word_to_vec_map)
###Output
Training set:
Accuracy: 0.977272727273
Test set:
Accuracy: 0.857142857143
###Markdown
**Expected Output**: **Train set accuracy** 97.7 **Test set accuracy** 85.7 * Random guessing would have had 20% accuracy given that there are 5 classes. (1/5 = 20%).* This is pretty good performance after training on only 127 examples. The model matches emojis to relevant wordsIn the training set, the algorithm saw the sentence >"*I love you*" with the label ❤️. * You can check that the word "adore" does not appear in the training set. * Nonetheless, lets see what happens if you write "*I adore you*."
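Before running that cell, you can also check directly how close the two embeddings are. The helper below is a small illustrative sketch built with numpy (it is not part of the assignment code):
```python
def cosine_sim(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# "adore" should be noticeably more similar to "love" than an unrelated word is
print(cosine_sim(word_to_vec_map["adore"], word_to_vec_map["love"]))
print(cosine_sim(word_to_vec_map["ball"], word_to_vec_map["love"]))
```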
###Code
X_my_sentences = np.array(["i adore you", "i love you", "funny lol", "lets play with a ball", "food is ready", "not feeling happy"])
Y_my_labels = np.array([[0], [0], [2], [1], [4],[3]])
pred = predict(X_my_sentences, Y_my_labels , W, b, word_to_vec_map)
print_predictions(X_my_sentences, pred)
###Output
Accuracy: 0.833333333333
i adore you ❤️
i love you ❤️
funny lol 😄
lets play with a ball ⚾
food is ready 🍴
not feeling happy 😄
###Markdown
Amazing! * Because *adore* has a similar embedding as *love*, the algorithm has generalized correctly even to a word it has never seen before. * Words such as *heart*, *dear*, *beloved* or *adore* have embedding vectors similar to *love*. * Feel free to modify the inputs above and try out a variety of input sentences. * How well does it work? Word ordering isn't considered in this model* Note that the model doesn't get the following sentence correct:>"not feeling happy" * This algorithm ignores word ordering, so is not good at understanding phrases like "not happy." Confusion matrix* Printing the confusion matrix can also help understand which classes are more difficult for your model. * A confusion matrix shows how often an example whose label is one class ("actual" class) is mislabeled by the algorithm with a different class ("predicted" class).
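Before moving on to the confusion matrix cell below, here is a quick check of the word-ordering point above (illustrative only, not part of the graded code): because the Emojifier-V1 features are just an average of word vectors, any permutation of the same words produces an identical input to the classifier.
```python
avg_a = sentence_to_avg("not feeling happy", word_to_vec_map)
avg_b = sentence_to_avg("happy not feeling", word_to_vec_map)
print(np.allclose(avg_a, avg_b))   # True: the model literally cannot distinguish these two inputs
```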
###Code
print(Y_test.shape)
print(' '+ label_to_emoji(0)+ ' ' + label_to_emoji(1) + ' ' + label_to_emoji(2)+ ' ' + label_to_emoji(3)+' ' + label_to_emoji(4))
print(pd.crosstab(Y_test, pred_test.reshape(56,), rownames=['Actual'], colnames=['Predicted'], margins=True))
plot_confusion_matrix(Y_test, pred_test)
###Output
(56,)
❤️ ⚾ 😄 😞 🍴
Predicted 0.0 1.0 2.0 3.0 4.0 All
Actual
0 6 0 0 1 0 7
1 0 8 0 0 0 8
2 2 0 16 0 0 18
3 1 1 2 12 0 16
4 0 0 1 0 6 7
All 9 9 19 13 6 56
###Markdown
What you should remember from this section- Even with 127 training examples, you can get a reasonably good model for Emojifying. - This is due to the generalization power word vectors give you. - Emojify-V1 will perform poorly on sentences such as *"This movie is not good and not enjoyable"* - It doesn't understand combinations of words. - It just averages all the words' embedding vectors together, without considering the ordering of words. **You will build a better algorithm in the next section!** 2 - Emojifier-V2: Using LSTMs in Keras: Let's build an LSTM model that takes word **sequences** as input!* This model will be able to account for the word ordering. * Emojifier-V2 will continue to use pre-trained word embeddings to represent words.* We will feed word embeddings into an LSTM.* The LSTM will learn to predict the most appropriate emoji. Run the following cell to load the Keras packages.
###Code
import numpy as np
np.random.seed(0)
from keras.models import Model
from keras.layers import Dense, Input, Dropout, LSTM, Activation
from keras.layers.embeddings import Embedding
from keras.preprocessing import sequence
from keras.initializers import glorot_uniform
np.random.seed(1)
###Output
Using TensorFlow backend.
###Markdown
2.1 - Overview of the modelHere is the Emojifier-v2 you will implement: **Figure 3**: Emojifier-V2. A 2-layer LSTM sequence classifier. 2.2 Keras and mini-batching * In this exercise, we want to train Keras using mini-batches. * However, most deep learning frameworks require that all sequences in the same mini-batch have the **same length**. * This is what allows vectorization to work: If you had a 3-word sentence and a 4-word sentence, then the computations needed for them are different (one takes 3 steps of an LSTM, one takes 4 steps) so it's just not possible to do them both at the same time. Padding handles sequences of varying length* The common solution to handling sequences of **different length** is to use padding. Specifically: * Set a maximum sequence length * Pad all sequences to have the same length. Example of padding* Given a maximum sequence length of 20, we could pad every sentence with "0"s so that each input sentence is of length 20. * Thus, the sentence "I love you" would be represented as $(e_{I}, e_{love}, e_{you}, \vec{0}, \vec{0}, \ldots, \vec{0})$. * In this example, any sentences longer than 20 words would have to be truncated. * One way to choose the maximum sequence length is to just pick the length of the longest sentence in the training set. 2.3 - The Embedding layer* In Keras, the embedding matrix is represented as a "layer".* The embedding matrix maps word indices to embedding vectors. * The word indices are positive integers. * The embedding vectors are dense vectors of fixed size. * When we say a vector is "dense", in this context, it means that most of the values are non-zero. As a counter-example, a one-hot encoded vector is not "dense."* The embedding matrix can be derived in two ways: * Training a model to derive the embeddings from scratch. * Using a pretrained embedding Using and updating pre-trained embeddings* In this part, you will learn how to create an [Embedding()](https://keras.io/layers/embeddings/) layer in Keras* You will initialize the Embedding layer with the GloVe 50-dimensional vectors. * In the code below, we'll show you how Keras allows you to either train or leave fixed this layer. * Because our training set is quite small, we will leave the GloVe embeddings fixed instead of updating them. Inputs and outputs to the embedding layer* The `Embedding()` layer's input is an integer matrix of size **(batch size, max input length)**. * This input corresponds to sentences converted into lists of indices (integers). * The largest integer (the highest word index) in the input should be no larger than the vocabulary size.* The embedding layer outputs an array of shape (batch size, max input length, dimension of word vectors).* The figure shows the propagation of two example sentences through the embedding layer. * Both examples have been zero-padded to a length of `max_len=5`. * The word embeddings are 50 units in length. * The final dimension of the representation is `(2,max_len,50)`. **Figure 4**: Embedding layer Prepare the input sentences**Exercise**: * Implement `sentences_to_indices`, which processes an array of sentences (X) and returns inputs to the embedding layer: * Convert each training sentences into a list of indices (the indices correspond to each word in the sentence) * Zero-pad all these lists so that their length is the length of the longest sentence. 
Additional Hints* Note that you may have considered using the `enumerate()` function in the for loop, but for the purposes of passing the autograder, please follow the starter code by initializing and incrementing `j` explicitly.
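As an aside, the `keras.preprocessing.sequence` module imported earlier provides `pad_sequences`, which performs this kind of zero-padding for you. The snippet below is shown purely for comparison (for the graded exercise, keep the explicit loop so the autograder passes):
```python
from keras.preprocessing.sequence import pad_sequences

seqs = [[155345, 225122], [220930, 286375, 69714]]   # word indices of two sentences
pad_sequences(seqs, maxlen=10, padding='post')        # zero-pads each row on the right to length 10
```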
###Code
for idx, val in enumerate(["I", "like", "learning"]):
print(idx,val)
# GRADED FUNCTION: sentences_to_indices
def sentences_to_indices(X, word_to_index, max_len):
"""
Converts an array of sentences (strings) into an array of indices corresponding to words in the sentences.
The output shape should be such that it can be given to `Embedding()` (described in Figure 4).
Arguments:
X -- array of sentences (strings), of shape (m, 1)
word_to_index -- a dictionary containing the each word mapped to its index
max_len -- maximum number of words in a sentence. You can assume every sentence in X is no longer than this.
Returns:
X_indices -- array of indices corresponding to words in the sentences from X, of shape (m, max_len)
"""
m = X.shape[0] # number of training examples
### START CODE HERE ###
# Initialize X_indices as a numpy matrix of zeros and the correct shape (≈ 1 line)
X_indices = np.zeros((m, max_len))
for i in range(m): # loop over training examples
# Convert the ith training sentence in lower case and split is into words. You should get a list of words.
sentence_words = X[i].lower().split()
# Initialize j to 0
j = 0
# Loop over the words of sentence_words
for w in sentence_words:
# Set the (i,j)th entry of X_indices to the index of the correct word.
X_indices[i, j] = word_to_index[w]
# Increment j to j + 1
j = j + 1
### END CODE HERE ###
return X_indices
###Output
_____no_output_____
###Markdown
Run the following cell to check what `sentences_to_indices()` does, and check your results.
###Code
X1 = np.array(["funny lol", "lets play baseball", "food is ready for you"])
X1_indices = sentences_to_indices(X1,word_to_index, max_len = 5)
print("X1 =", X1)
print("X1_indices =\n", X1_indices)
###Output
X1 = ['funny lol' 'lets play baseball' 'food is ready for you']
X1_indices =
[[ 155345. 225122. 0. 0. 0.]
[ 220930. 286375. 69714. 0. 0.]
[ 151204. 192973. 302254. 151349. 394475.]]
###Markdown
**Expected Output**:```PythonX1 = ['funny lol' 'lets play baseball' 'food is ready for you']X1_indices = [[ 155345. 225122. 0. 0. 0.] [ 220930. 286375. 69714. 0. 0.] [ 151204. 192973. 302254. 151349. 394475.]]``` Build embedding layer* Let's build the `Embedding()` layer in Keras, using pre-trained word vectors. * The embedding layer takes as input a list of word indices. * `sentences_to_indices()` creates these word indices.* The embedding layer will return the word embeddings for a sentence. **Exercise**: Implement `pretrained_embedding_layer()` with these steps:1. Initialize the embedding matrix as a numpy array of zeros. * The embedding matrix has a row for each unique word in the vocabulary. * There is one additional row to handle "unknown" words. * So vocab_len is the number of unique words plus one. * Each row will store the vector representation of one word. * For example, one row may be 50 positions long if using GloVe word vectors. * In the code below, `emb_dim` represents the length of a word embedding.2. Fill in each row of the embedding matrix with the vector representation of a word * Each word in `word_to_index` is a string. * word_to_vec_map is a dictionary where the keys are strings and the values are the word vectors.3. Define the Keras embedding layer. * Use [Embedding()](https://keras.io/layers/embeddings/). * The input dimension is equal to the vocabulary length (number of unique words plus one). * The output dimension is equal to the number of positions in a word embedding. * Make this layer's embeddings fixed. * If you were to set `trainable = True`, then it will allow the optimization algorithm to modify the values of the word embeddings. * In this case, we don't want the model to modify the word embeddings.4. Set the embedding weights to be equal to the embedding matrix. * Note that this is part of the code is already completed for you and does not need to be modified.
###Code
# GRADED FUNCTION: pretrained_embedding_layer
def pretrained_embedding_layer(word_to_vec_map, word_to_index):
"""
Creates a Keras Embedding() layer and loads in pre-trained GloVe 50-dimensional vectors.
Arguments:
word_to_vec_map -- dictionary mapping words to their GloVe vector representation.
word_to_index -- dictionary mapping from words to their indices in the vocabulary (400,001 words)
Returns:
embedding_layer -- pretrained layer Keras instance
"""
vocab_len = len(word_to_index) + 1 # adding 1 to fit Keras embedding (requirement)
emb_dim = word_to_vec_map["cucumber"].shape[0] # define dimensionality of your GloVe word vectors (= 50)
### START CODE HERE ###
# Step 1
# Initialize the embedding matrix as a numpy array of zeros.
# See instructions above to choose the correct shape.
emb_matrix = np.zeros((vocab_len, emb_dim))
# Step 2
# Set each row "idx" of the embedding matrix to be
# the word vector representation of the idx'th word of the vocabulary
for word, idx in word_to_index.items():
emb_matrix[idx, :] = word_to_vec_map[word]
# Step 3
# Define Keras embedding layer with the correct input and output sizes
# Make it non-trainable.
embedding_layer = Embedding(vocab_len, emb_dim, trainable=False)
### END CODE HERE ###
# Step 4 (already done for you; please do not modify)
# Build the embedding layer, it is required before setting the weights of the embedding layer.
embedding_layer.build((None,)) # Do not modify the "None". This line of code is complete as-is.
# Set the weights of the embedding layer to the embedding matrix. Your layer is now pretrained.
embedding_layer.set_weights([emb_matrix])
return embedding_layer
embedding_layer = pretrained_embedding_layer(word_to_vec_map, word_to_index)
print("weights[0][1][3] =", embedding_layer.get_weights()[0][1][3])
###Output
weights[0][1][3] = -0.3403
###Markdown
**Expected Output**:```Pythonweights[0][1][3] = -0.3403``` 2.3 Building the Emojifier-V2Lets now build the Emojifier-V2 model. * You feed the embedding layer's output to an LSTM network. **Figure 3**: Emojifier-v2. A 2-layer LSTM sequence classifier. **Exercise:** Implement `Emojify_V2()`, which builds a Keras graph of the architecture shown in Figure 3. * The model takes as input an array of sentences of shape (`m`, `max_len`, ) defined by `input_shape`. * The model outputs a softmax probability vector of shape (`m`, `C = 5`). * You may need to use the following Keras layers: * [Input()](https://keras.io/layers/core/input) * Set the `shape` and `dtype` parameters. * The inputs are integers, so you can specify the data type as a string, 'int32'. * [LSTM()](https://keras.io/layers/recurrent/lstm) * Set the `units` and `return_sequences` parameters. * [Dropout()](https://keras.io/layers/core/dropout) * Set the `rate` parameter. * [Dense()](https://keras.io/layers/core/dense) * Set the `units`, * Note that `Dense()` has an `activation` parameter. For the purposes of passing the autograder, please do not set the activation within `Dense()`. Use the separate `Activation` layer to do so. * [Activation()](https://keras.io/activations/). * You can pass in the activation of your choice as a lowercase string. * [Model](https://keras.io/models/model/) Set `inputs` and `outputs`. Additional Hints* Remember that these Keras layers return an object, and you will feed in the outputs of the previous layer as the input arguments to that object. The returned object can be created and called in the same line.```Python How to use Keras layers in two lines of codedense_object = Dense(units = ...)X = dense_object(inputs) How to use Keras layers in one line of codeX = Dense(units = ...)(inputs)```* The `embedding_layer` that is returned by `pretrained_embedding_layer` is a layer object that can be called as a function, passing in a single argument (sentence indices).* Here is some sample code in case you're stuck```Pythonraw_inputs = Input(shape=(maxLen,), dtype='int32')preprocessed_inputs = ... some pre-processingX = LSTM(units = ..., return_sequences= ...)(processed_inputs)X = Dropout(rate = ..., )(X)...X = Dense(units = ...)(X)X = Activation(...)(X)model = Model(inputs=..., outputs=...)...```
###Code
# GRADED FUNCTION: Emojify_V2
def Emojify_V2(input_shape, word_to_vec_map, word_to_index):
"""
Function creating the Emojify-v2 model's graph.
Arguments:
input_shape -- shape of the input, usually (max_len,)
word_to_vec_map -- dictionary mapping every word in a vocabulary into its 50-dimensional vector representation
word_to_index -- dictionary mapping from words to their indices in the vocabulary (400,001 words)
Returns:
model -- a model instance in Keras
"""
### START CODE HERE ###
# Define sentence_indices as the input of the graph.
# It should be of shape input_shape and dtype 'int32' (as it contains indices, which are integers).
sentence_indices = Input(input_shape, dtype='int32')
# Create the embedding layer pretrained with GloVe Vectors (≈1 line)
embedding_layer = pretrained_embedding_layer(word_to_vec_map, word_to_index)
# Propagate sentence_indices through your embedding layer
# (See additional hints in the instructions).
embeddings = embedding_layer(sentence_indices)
# Propagate the embeddings through an LSTM layer with 128-dimensional hidden state
# The returned output should be a batch of sequences.
X = LSTM(units = 128, return_sequences=True)(embeddings)
# Add dropout with a probability of 0.5
X = Dropout(0.5)(X)
# Propagate X trough another LSTM layer with 128-dimensional hidden state
# The returned output should be a single hidden state, not a batch of sequences.
X = LSTM(units = 128, return_sequences=False)(X)
# Add dropout with a probability of 0.5
X = Dropout(0.5)(X)
# Propagate X through a Dense layer with 5 units
X = Dense(5)(X)
# Add a softmax activation
X = Activation('softmax')(X)
# Create Model instance which converts sentence_indices into X.
model = Model(inputs=sentence_indices, outputs=X)
### END CODE HERE ###
return model
###Output
_____no_output_____
###Markdown
Run the following cell to create your model and check its summary. Because all sentences in the dataset are less than 10 words, we chose `max_len = 10`. You should see your architecture, it uses "20,223,927" parameters, of which 20,000,050 (the word embeddings) are non-trainable, and the remaining 223,877 are. Because our vocabulary size has 400,001 words (with valid indices from 0 to 400,000) there are 400,001\*50 = 20,000,050 non-trainable parameters.
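As a quick sanity check on these numbers (a side calculation, not required by the assignment), the standard Keras parameter-count formulas reproduce the summary below:
```python
emb    = 400001 * 50                    # Embedding: vocab_size * embedding_dim = 20,000,050 (non-trainable)
lstm_1 = 4 * ((50 + 128) * 128 + 128)   # LSTM: 4 * ((input_dim + units) * units + units) = 91,648
lstm_2 = 4 * ((128 + 128) * 128 + 128)  # second LSTM receives 128-dimensional inputs     = 131,584
dense  = 128 * 5 + 5                    # Dense: weights + biases                          = 645
print(lstm_1 + lstm_2 + dense)          # 223,877 trainable parameters
print(emb + lstm_1 + lstm_2 + dense)    # 20,223,927 total parameters
```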
###Code
model = Emojify_V2((maxLen,), word_to_vec_map, word_to_index)
model.summary()
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_2 (InputLayer) (None, 10) 0
_________________________________________________________________
embedding_3 (Embedding) (None, 10, 50) 20000050
_________________________________________________________________
lstm_3 (LSTM) (None, 10, 128) 91648
_________________________________________________________________
dropout_3 (Dropout) (None, 10, 128) 0
_________________________________________________________________
lstm_4 (LSTM) (None, 128) 131584
_________________________________________________________________
dropout_4 (Dropout) (None, 128) 0
_________________________________________________________________
dense_2 (Dense) (None, 5) 645
_________________________________________________________________
activation_2 (Activation) (None, 5) 0
=================================================================
Total params: 20,223,927
Trainable params: 223,877
Non-trainable params: 20,000,050
_________________________________________________________________
###Markdown
As usual, after creating your model in Keras, you need to compile it and define the loss, optimizer and metrics you want to use. Compile your model using `categorical_crossentropy` loss, the `adam` optimizer and `['accuracy']` metrics:
###Code
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
It's time to train your model. Your Emojifier-V2 `model` takes as input an array of shape (`m`, `max_len`) and outputs probability vectors of shape (`m`, `number of classes`). We thus have to convert X_train (array of sentences as strings) to X_train_indices (array of sentences as list of word indices), and Y_train (labels as indices) to Y_train_oh (labels as one-hot vectors).
###Code
X_train_indices = sentences_to_indices(X_train, word_to_index, maxLen)
Y_train_oh = convert_to_one_hot(Y_train, C = 5)
###Output
_____no_output_____
###Markdown
Fit the Keras model on `X_train_indices` and `Y_train_oh`. We will use `epochs = 50` and `batch_size = 32`.
###Code
model.fit(X_train_indices, Y_train_oh, epochs = 50, batch_size = 32, shuffle=True)
###Output
Epoch 1/50
132/132 [==============================] - 0s - loss: 1.6000 - acc: 0.2803
Epoch 2/50
132/132 [==============================] - 0s - loss: 1.5286 - acc: 0.3636
Epoch 3/50
132/132 [==============================] - 0s - loss: 1.4854 - acc: 0.3409
Epoch 4/50
132/132 [==============================] - 0s - loss: 1.4461 - acc: 0.4167
Epoch 5/50
132/132 [==============================] - 0s - loss: 1.3381 - acc: 0.5076
Epoch 6/50
132/132 [==============================] - 0s - loss: 1.2154 - acc: 0.5455
Epoch 7/50
132/132 [==============================] - 0s - loss: 1.1117 - acc: 0.5909
Epoch 8/50
132/132 [==============================] - 0s - loss: 1.0075 - acc: 0.5758
Epoch 9/50
132/132 [==============================] - 0s - loss: 0.9729 - acc: 0.6288
Epoch 10/50
132/132 [==============================] - 0s - loss: 0.7923 - acc: 0.6970
Epoch 11/50
132/132 [==============================] - 0s - loss: 0.7097 - acc: 0.7576
Epoch 12/50
132/132 [==============================] - 0s - loss: 0.6714 - acc: 0.7955
Epoch 13/50
132/132 [==============================] - 0s - loss: 0.5903 - acc: 0.8182
Epoch 14/50
132/132 [==============================] - 0s - loss: 0.5231 - acc: 0.8106
Epoch 15/50
132/132 [==============================] - 0s - loss: 0.4578 - acc: 0.8636
Epoch 16/50
132/132 [==============================] - 0s - loss: 0.3529 - acc: 0.9091
Epoch 17/50
132/132 [==============================] - 0s - loss: 0.3117 - acc: 0.8864
Epoch 18/50
132/132 [==============================] - 0s - loss: 0.2815 - acc: 0.9015 - ETA: 0s - loss: 0.3436 - acc: 0.8
Epoch 19/50
132/132 [==============================] - 0s - loss: 0.2520 - acc: 0.9015
Epoch 20/50
132/132 [==============================] - 0s - loss: 0.2380 - acc: 0.9167
Epoch 21/50
132/132 [==============================] - 0s - loss: 0.1921 - acc: 0.9318
Epoch 22/50
132/132 [==============================] - 0s - loss: 0.4650 - acc: 0.8485
Epoch 23/50
132/132 [==============================] - 0s - loss: 0.2880 - acc: 0.9015
Epoch 24/50
132/132 [==============================] - 0s - loss: 0.2805 - acc: 0.9318
Epoch 25/50
132/132 [==============================] - 0s - loss: 0.3026 - acc: 0.8939
Epoch 26/50
132/132 [==============================] - 0s - loss: 0.1841 - acc: 0.9242
Epoch 27/50
132/132 [==============================] - 0s - loss: 0.1874 - acc: 0.9394
Epoch 28/50
132/132 [==============================] - 0s - loss: 0.1435 - acc: 0.9545
Epoch 29/50
132/132 [==============================] - 0s - loss: 0.1184 - acc: 0.9697
Epoch 30/50
132/132 [==============================] - 0s - loss: 0.1137 - acc: 0.9773
Epoch 31/50
132/132 [==============================] - 0s - loss: 0.0765 - acc: 0.9848
Epoch 32/50
132/132 [==============================] - 0s - loss: 0.1013 - acc: 0.9621
Epoch 33/50
132/132 [==============================] - 0s - loss: 0.1275 - acc: 0.9621
Epoch 34/50
132/132 [==============================] - 0s - loss: 0.1997 - acc: 0.9167
Epoch 35/50
132/132 [==============================] - 0s - loss: 0.2904 - acc: 0.8864
Epoch 36/50
132/132 [==============================] - 0s - loss: 0.1984 - acc: 0.9242
Epoch 37/50
132/132 [==============================] - 0s - loss: 0.2348 - acc: 0.9167
Epoch 38/50
132/132 [==============================] - 0s - loss: 0.1885 - acc: 0.9394
Epoch 39/50
132/132 [==============================] - 0s - loss: 0.2271 - acc: 0.9242
Epoch 40/50
132/132 [==============================] - 0s - loss: 0.1165 - acc: 0.9470
Epoch 41/50
132/132 [==============================] - 0s - loss: 0.1492 - acc: 0.9394
Epoch 42/50
132/132 [==============================] - 0s - loss: 0.0709 - acc: 0.9848
Epoch 43/50
132/132 [==============================] - 0s - loss: 0.0584 - acc: 0.9848
Epoch 44/50
132/132 [==============================] - 0s - loss: 0.0448 - acc: 1.0000
Epoch 45/50
132/132 [==============================] - 0s - loss: 0.0595 - acc: 0.9848
Epoch 46/50
132/132 [==============================] - 0s - loss: 0.0565 - acc: 0.9848
Epoch 47/50
132/132 [==============================] - 0s - loss: 0.0207 - acc: 1.0000
Epoch 48/50
132/132 [==============================] - 0s - loss: 0.0491 - acc: 0.9848
Epoch 49/50
132/132 [==============================] - 0s - loss: 0.0951 - acc: 0.9470
Epoch 50/50
132/132 [==============================] - 0s - loss: 0.5430 - acc: 0.8939
###Markdown
Your model should perform around **90% to 100% accuracy** on the training set. The exact accuracy you get may be a little different. Run the following cell to evaluate your model on the test set.
###Code
X_test_indices = sentences_to_indices(X_test, word_to_index, max_len = maxLen)
Y_test_oh = convert_to_one_hot(Y_test, C = 5)
loss, acc = model.evaluate(X_test_indices, Y_test_oh)
print()
print("Test accuracy = ", acc)
###Output
32/56 [================>.............] - ETA: 0s
Test accuracy = 0.892857134342
###Markdown
You should get a test accuracy between 80% and 95%. Run the cell below to see the mislabelled examples.
###Code
# This code allows you to see the mislabelled examples
C = 5
y_test_oh = np.eye(C)[Y_test.reshape(-1)]
X_test_indices = sentences_to_indices(X_test, word_to_index, maxLen)
pred = model.predict(X_test_indices)
for i in range(len(X_test)):
x = X_test_indices
num = np.argmax(pred[i])
if(num != Y_test[i]):
print('Expected emoji:'+ label_to_emoji(Y_test[i]) + ' prediction: '+ X_test[i] + label_to_emoji(num).strip())
###Output
Expected emoji:😄 prediction: he got a very nice raise ❤️
Expected emoji:😄 prediction: you brighten my day ❤️
Expected emoji:😄 prediction: will you be my valentine ❤️
Expected emoji:⚾ prediction: what is your favorite baseball game 😄
Expected emoji:😞 prediction: go away ⚾
Expected emoji:😞 prediction: yesterday we lost again ⚾
###Markdown
Now you can try it on your own example. Write your own sentence below.
###Code
# Change the sentence below to see your prediction. Make sure all the words are in the Glove embeddings.
x_test = np.array(['not feeling happy'])
X_test_indices = sentences_to_indices(x_test, word_to_index, maxLen)
print(x_test[0] +' '+ label_to_emoji(np.argmax(model.predict(X_test_indices))))
###Output
not feeling happy 😄
|
AEGAN-mnist.ipynb | ###Markdown

###Code
gpu_info = !nvidia-smi
gpu_info = gpu_info[:10]
gpu_info = '\n'.join(gpu_info)
print(gpu_info)
from varname import nameof
from collections import defaultdict
import sys
import torch
import os
import numpy as np
import torch.nn as nn
import torch.optim as optim
from tqdm.notebook import tqdm_notebook as tq
from torch.autograd import Variable
from torchvision.transforms import transforms
from torchvision.utils import save_image, make_grid
from torch.optim.lr_scheduler import StepLR, ExponentialLR
from pytorch_model_summary import summary
from tensorboardX import SummaryWriter
from src.dataset.utils import SavePath
from src.dataset.dataset import Mnist
from src.config import TrainConfig
from src.pytorch_msssim import MSSSIM, ssim
base_path = !pwd
base_path = base_path[0] + '/'
args = TrainConfig( base_path, # project directory path
n_epochs = 200, # number of epochs to train (default: 100)
batch_size = 64, # input batch size for training (default: 128)
lr = 1e-4, # learning rate (default: 0.0001)
dim_h = 128, # hidden dimension (default: 128)')
n_z = 8, # hidden dimension of z (default: 8)
LAMBDA = 10, # regularization coef term (default: 10)
sigma = 1, # variance of hidden dimension (default: 1)
n_channel = 1, # input channels (default: 1)
img_size = 28 ) # image size
def unfreeze_params(module: nn.Module):
for p in module.parameters():
p.requires_grad = True
def freeze_params(module: nn.Module):
for p in module.parameters():
p.requires_grad = False
def save_models(model_path, epoch_no, models):
print("Saving models")
for model_name, model in models.items():
torch.save(model.state_dict(), model_path + model_name + "_" + "%d.pth" % epoch_no)
def save_values_to_tensorboard(writer, epoch_no, values_dict: dict):
for name, val in values_dict.items():
if type(val) == dict:
writer.add_scalars(name, val, epoch_no)
else:
writer.add_scalar(name, val, epoch_no)
def save_images_to_tensorboard(writer, epoch_no, image, imname='im'):
writer.add_image(imname +'_{}'.format(epoch_no), image, epoch_no)
sp = SavePath(args)
transform = None # dont normalize
cdl = Mnist(args)
train_loader = cdl.get_data_loader(True)
test_loader = cdl.get_data_loader(False)
type(train_loader)
from src.models.model import Encoder as ConvEncoder
from src.models.model import Decoder as ConvDecoder
class GanDiscriminator(nn.Module):
def __init__(self, args):
super(GanDiscriminator, self).__init__()
self.n_channel = args.n_channel
self.dim_h = args.dim_h
self.n_z = args.n_z
self.main = nn.Sequential(
nn.Conv2d(self.n_channel, self.dim_h, 4, 2, 1, bias=False),
nn.LeakyReLU(0.2, inplace=True),
nn.Conv2d(self.dim_h, self.dim_h * 2, 4, 2, 1, bias=False),
nn.BatchNorm2d(self.dim_h * 2),
nn.LeakyReLU(0.2, inplace=True),
nn.Conv2d(self.dim_h * 2, self.dim_h * 4, 4, 2, 1, bias=False),
nn.BatchNorm2d(self.dim_h * 4),
nn.LeakyReLU(0.2, inplace=True),
nn.Conv2d(self.dim_h * 4, self.dim_h, 4, 2, 1, bias=False),
nn.BatchNorm2d(self.dim_h),
nn.LeakyReLU(0.2, inplace=True),
)
self.fc = nn.Sequential(
nn.Linear(self.dim_h, 1),
nn.Sigmoid()
)
def forward(self, x):
x = self.main(x)
x = x.view(x.size(0), -1)  # flatten to (batch, dim_h); safer than squeeze(), which would also drop a batch dimension of 1
x = self.fc(x)
return x
class ZDiscriminator(nn.Module):
def __init__(self, args):
super(ZDiscriminator, self).__init__()
self.n_channel = args.n_channel
self.dim_h = args.dim_h
self.n_z = args.n_z
self.main = nn.Sequential(
nn.Linear(self.n_z, self.dim_h * 4),
nn.ReLU(True),
nn.Linear(self.dim_h * 4, self.dim_h * 4),
nn.ReLU(True),
nn.Linear(self.dim_h * 4, self.dim_h * 2),
nn.ReLU(True),
nn.Linear(self.dim_h * 2, self.dim_h // 2),
nn.ReLU(True),
nn.Linear(self.dim_h // 2, self.dim_h // 4),
nn.ReLU(True),
nn.Linear(self.dim_h // 4, 1),
nn.Sigmoid()
)
def forward(self, x):
x = self.main(x)
# x = x.squeeze()
return x
E, G = ConvEncoder(args).cuda(), ConvDecoder(args).cuda()
D_x = GanDiscriminator(args).cuda()
D_z = ZDiscriminator(args).cuda()
mse_loss_fn = nn.MSELoss().cuda()
mae_loss_fn = nn.L1Loss().cuda()
adversarial_loss_fn = nn.BCELoss().cuda()
opt_AE = optim.Adam(list(E.parameters()) + list(G.parameters()), lr = args.lr)
opt_E = optim.Adam(E.parameters(), lr = 0.2 * args.lr)
opt_G = optim.Adam(G.parameters(), lr = 0.2 * args.lr)
opt_D_x = optim.Adam(D_x.parameters(), lr = args.lr)
opt_D_z = optim.Adam(D_z.parameters(), lr = args.lr)
# scheduler_opt_encoder = ExponentialLR(opt_encoder, gamma=0.01)
# scheduler_opt_decoder = ExponentialLR(opt_decoder, gamma=0.01)
# scheduler_opt_discriminator = ExponentialLR(opt_discriminator, gamma=0.01)
# scheduler_opt_zdiscriminator = ExponentialLR(opt_zdiscriminator, gamma=0.01)
# print(summary(D_x, torch.zeros((1, 1, 28, 28)).cuda(), show_input=True, show_hierarchical=False))
def load_models(checkpoint_path, checkpoint):
lp = SavePath(args, checkpoint_path)
_, _, model_load_path = lp.get_save_paths()
conv_encoder.load_state_dict(torch.load(model_load_path + "/conv_encoder_{}.pth".format(checkpoint)))
conv_decoder.load_state_dict(torch.load(model_load_path + "/conv_decoder_{}.pth".format(checkpoint)))
gan_discriminator.load_state_dict(torch.load(model_load_path + "/gan_discriminator_{}.pth".format(checkpoint)))
checkpoint = 0
if checkpoint:
load_models('/home/pr/synth.data/Autoencoder/outs/Mon-Mar--1-12-35-57-2021/', checkpoint)
running_losses = defaultdict(list)
running_norms = defaultdict(list)
writer = SummaryWriter(log_dir = sp.results_path + "logs")
image_path, list_path, model_path = sp.get_save_paths()
# one = torch.tensor(1)
# mone = one * -1
# if torch.cuda.is_available():
# one, mone = one.cuda(), mone.cuda()
with torch.autograd.set_detect_anomaly(True):
for epoch in range(checkpoint, checkpoint+args.n_epochs):
pbar = tq(enumerate(train_loader))
for step, (images, _) in pbar:
current_batch_size = images.size()[0]
x = images.cuda()
E.zero_grad()
G.zero_grad()
D_z.zero_grad()
D_x.zero_grad()
# Adversarial ground truths
ones = Variable(torch.cuda.FloatTensor(current_batch_size, 1).fill_(1.0), requires_grad=False) # real
zeros = Variable(torch.cuda.FloatTensor(current_batch_size, 1).fill_(0.0), requires_grad=False) # fake
# ======== Train Generator ======== #
freeze_params(D_x)
freeze_params(D_z)
# RECONSTRUCTION LOSS
z = torch.randn(current_batch_size, args.n_z).cuda()
z_cap = E(x)
x_tilde = G(z_cap)
x_ae_loss = 10 * mae_loss_fn(x_tilde, x)
x_cap = G(z)
z_tilde = E(x_cap)
z_ae_loss = 2.5 * mse_loss_fn(z_tilde, z)
ae_loss = z_ae_loss + x_ae_loss
d_z_cap = D_z(z_cap)
d_z_cap_loss = adversarial_loss_fn(d_z_cap, ones)
d_x_cap = D_x(x_cap)
d_x_cap_loss = adversarial_loss_fn(d_x_cap, ones)
# d_z_tilde =
d_z_tilde_loss = adversarial_loss_fn(D_z(z_tilde), ones)
# d_x_tilde =
d_x_tilde_loss = adversarial_loss_fn(D_x(x_tilde), ones)
disc_loss = d_z_cap_loss + d_x_cap_loss + d_z_tilde_loss + d_x_tilde_loss
ae_disc_loss = ae_loss + 0.2 * disc_loss # essentially adjusting the learning rate
ae_disc_loss.backward()
opt_AE.step()
# ======== Train Discriminator ======== #
unfreeze_params(D_z)
unfreeze_params(D_x)
d_z = D_z(z)
d_z_cap = D_z(z_cap.detach())
d_z_tilde = D_z(z_tilde.detach())
d_z_loss = 2 * adversarial_loss_fn(d_z, ones) + adversarial_loss_fn(d_z_tilde, zeros) + adversarial_loss_fn(d_z_cap, zeros)
d_z_loss.backward()
opt_D_z.step()
d_x = D_x(x)
d_x_cap = D_x(x_cap.detach())
d_x_tilde = D_x(x_tilde.detach())
d_x_loss = 2 * adversarial_loss_fn(d_x, ones) + adversarial_loss_fn(d_x_tilde, zeros) + adversarial_loss_fn(d_x_cap, zeros)
d_x_loss.backward()
opt_D_x.step()
running_losses["x_ae_loss"].append(x_ae_loss.mean().item())
running_losses["z_ae_loss"].append(z_ae_loss.mean().item())
running_losses["d_z_cap_loss"].append(d_z_cap_loss.mean().item())
running_losses["d_x_cap_loss"].append(d_x_cap_loss.mean().item())
running_losses["d_x_tilde_loss"].append(d_x_tilde_loss.mean().item())
running_losses["d_z_tilde_loss"].append(d_z_tilde_loss.mean().item())
running_losses["d_x_loss"].append(d_x_loss.mean().item())
running_losses["d_z_loss"].append(d_z_loss.mean().item())
running_norms["z_cap"].append([torch.norm(z_cap, dim=1).mean().item()])
running_norms["z_tilde"].append([torch.norm(z_tilde, dim=1).mean().item()])
running_norms["z"].append([torch.norm(z, dim=1).mean().item()])
s = 'Losses:'
for k, v in running_losses.items():
s += '\t' + k[:-5] + ': ' + str(round(v[-1], 3))
pbar.set_description(s)
if (epoch + 1) % 1 == 0:
s = 'Epoch:[{}/{}]\tLosses:'.format(epoch+1, args.n_epochs)
for k, v in running_losses.items():
running_losses[k] = np.mean(v, axis=0).round(3).item()
s += '\t' + k[:-5] + ': ' + str(np.mean(v, axis=0).round(3).item())
s += '\nNorms:'
for k, v in running_norms.items():
running_norms[k] = np.mean(v, axis=0).round(3).item()
s += '\t' + k + ': ' + str(np.mean(v, axis=0).round(3).item())
print(s)
if (epoch + 1) % 1 == 0:
resize_shape = [args.batch_size, args.n_channel, args.img_size, args.img_size]
test_iter, train_iter = iter(test_loader), iter(train_loader)
test_images, _ = next(test_iter)
train_images, _ = next(train_iter)
test_x = Variable(test_images)
train_x = Variable(train_images)
test_z_cap = E(test_x.cuda())
test_x_tilde = G(test_z_cap)
test_x_loss = mae_loss_fn(test_x, test_x_tilde.cpu()).mean().item()
test_x_cap = G(torch.randn_like(test_z_cap)).cpu().view(resize_shape)
test_image_recon = torch.cat((test_x.view(resize_shape),
test_x_tilde.cpu().view(resize_shape).data), axis=3)
val_dict = {
"train_losses": dict(running_losses),
"test_recon_loss": test_x_loss,
"norms": dict(running_norms)
}
save_values_to_tensorboard(writer, epoch + 1, val_dict)
save_images_to_tensorboard(writer, epoch+1, make_grid(torch.cat((train_x, G(E(train_x.cuda())).cpu()), dim=3),
normalize=False), 'train')
save_images_to_tensorboard(writer, epoch+1, make_grid(test_image_recon, normalize=False), 'test')
save_images_to_tensorboard(writer, epoch+1, make_grid(test_x_cap, normalize=False), 'sample')
running_losses.clear()
running_norms.clear()
if (epoch + 1) % 25 == 0:
models = {nameof(E): E, nameof(G): G, nameof(D_x): D_x, nameof(D_z): D_z}
save_models(model_path, epoch+1, models)
models = {nameof(E): E, nameof(G): G, nameof(D_x): D_x, nameof(D_z): D_z}
save_models(model_path, 5, models)
test_x_cap.shape
# scratch cell: build a one-line summary string from the most recent epoch's losses
s = 'Losses:'
for k, v in running_losses.items():
    s = ' '.join((s, ' '.join((k, str(round(v, 3))))))
s
# scratch cell: side-by-side (input | reconstruction) tensor, using the variables from the last evaluation step
image = torch.cat((test_x.view(args.batch_size, 1, 28, 28),
                   test_x_tilde.cpu().view(args.batch_size, 1, 28, 28).data), axis=3)
image.shape
# scratch cell: disc_loss is a 0-dim tensor, so inspect its value with .item() rather than indexing
disc_loss.item()
# encoder block (used in encoder and discriminator)
class EncoderBlock(nn.Module):
def __init__(self, channel_in, channel_out):
super(EncoderBlock, self).__init__()
# convolution to halve the dimensions
self.conv = nn.Conv2d(in_channels=channel_in, out_channels=channel_out, kernel_size=5, padding=2, stride=2,
bias=False)
self.bn = nn.BatchNorm2d(num_features=channel_out, momentum=0.9)
def forward(self, ten, out=False,t = False):
# here we want to be able to take an intermediate output for reconstruction error
if out:
ten = self.conv(ten)
ten_out = ten
ten = self.bn(ten)
ten = F.relu(ten, False)
return ten, ten_out
else:
ten = self.conv(ten)
ten = self.bn(ten)
ten = F.relu(ten, True)
return ten
# decoder block (used in the decoder)
class DecoderBlock(nn.Module):
def __init__(self, channel_in, channel_out):
super(DecoderBlock, self).__init__()
# transpose convolution to double the dimensions
self.conv = nn.ConvTranspose2d(channel_in, channel_out, kernel_size=5, padding=2, stride=2, output_padding=1,
bias=False)
self.bn = nn.BatchNorm2d(channel_out, momentum=0.9)
def forward(self, ten):
ten = self.conv(ten)
ten = self.bn(ten)
ten = F.relu(ten, True)
return ten
class Encoder(nn.Module):
def __init__(self, channel_in=3, z_size=128):
super(Encoder, self).__init__()
self.size = channel_in
layers_list = []
# the first time 3->64, for every other double the channel size
for i in range(3):
if i == 0:
layers_list.append(EncoderBlock(channel_in=self.size, channel_out=64))
self.size = 64
else:
layers_list.append(EncoderBlock(channel_in=self.size, channel_out=self.size * 2))
self.size *= 2
# final shape Bx256x8x8
self.conv = nn.Sequential(*layers_list)
self.fc = nn.Sequential(nn.Linear(in_features=8 * 8 * self.size, out_features=1024, bias=False),
nn.BatchNorm1d(num_features=1024,momentum=0.9),
nn.ReLU(True))
# two linear to get the mu vector and the diagonal of the log_variance
self.l_mu = nn.Linear(in_features=1024, out_features=z_size)
self.l_var = nn.Linear(in_features=1024, out_features=z_size)
def forward(self, ten):
ten = self.conv(ten)
ten = ten.view(len(ten), -1)
ten = self.fc(ten)
mu = self.l_mu(ten)
logvar = self.l_var(ten)
return mu, logvar
def __call__(self, *args, **kwargs):
return super(Encoder, self).__call__(*args, **kwargs)
class Decoder(nn.Module):
def __init__(self, z_size, size):
super(Decoder, self).__init__()
# start from B*z_size
self.fc = nn.Sequential(nn.Linear(in_features=z_size, out_features=8 * 8 * size, bias=False),
nn.BatchNorm1d(num_features=8 * 8 * size,momentum=0.9),
nn.ReLU(True))
self.size = size
layers_list = []
layers_list.append(DecoderBlock(channel_in=self.size, channel_out=self.size))
layers_list.append(DecoderBlock(channel_in=self.size, channel_out=self.size//2))
self.size = self.size//2
layers_list.append(DecoderBlock(channel_in=self.size, channel_out=self.size//4))
self.size = self.size//4
# final conv to get 3 channels and tanh layer
layers_list.append(nn.Sequential(
nn.Conv2d(in_channels=self.size, out_channels=3, kernel_size=5, stride=1, padding=2),
nn.Tanh()
))
self.conv = nn.Sequential(*layers_list)
def forward(self, ten):
ten = self.fc(ten)
ten = ten.view(len(ten), -1, 8, 8)
ten = self.conv(ten)
return ten
def __call__(self, *args, **kwargs):
return super(Decoder, self).__call__(*args, **kwargs)
class Discriminator(nn.Module):
def __init__(self, channel_in=3,recon_level=3):
super(Discriminator, self).__init__()
self.size = channel_in
self.recon_levl = recon_level
# module list because we need to extract an intermediate output
self.conv = nn.ModuleList()
self.conv.append(nn.Sequential(
nn.Conv2d(in_channels=3, out_channels=32, kernel_size=5, stride=1, padding=2),
nn.ReLU(inplace=True)))
self.size = 32
self.conv.append(EncoderBlock(channel_in=self.size, channel_out=128))
self.size = 128
self.conv.append(EncoderBlock(channel_in=self.size, channel_out=256))
self.size = 256
self.conv.append(EncoderBlock(channel_in=self.size, channel_out=256))
# final fc to get the score (real or fake)
self.fc = nn.Sequential(
nn.Linear(in_features=8 * 8 * self.size, out_features=512, bias=False),
nn.BatchNorm1d(num_features=512,momentum=0.9),
nn.ReLU(inplace=True),
nn.Linear(in_features=512, out_features=1),
)
def forward(self, ten_orig, ten_predicted, ten_sampled, mode='REC'):
if mode == "REC":
ten = torch.cat((ten_orig, ten_predicted, ten_sampled), 0)
for i, lay in enumerate(self.conv):
# we take the layer at index recon_level as one of the outputs
if i == self.recon_levl:
ten, layer_ten = lay(ten, True)
# we need the layer representations just for the original and reconstructed,
# flatten, because it's a convolutional shape
layer_ten = layer_ten.view(len(layer_ten), -1)
return layer_ten
else:
ten = lay(ten)
else:
ten = torch.cat((ten_orig, ten_predicted, ten_sampled), 0)
for i, lay in enumerate(self.conv):
ten = lay(ten)
ten = ten.view(len(ten), -1)
ten = self.fc(ten)
return F.sigmoid(ten)
def __call__(self, *args, **kwargs):
return super(Discriminator, self).__call__(*args, **kwargs)
disc = Discriminator()
# print(summary(conv_encoder, torch.zeros((1, 3, 32, 32)).cuda(), show_input=False, show_hierarchical=False))
# print(summary(conv_decoder, torch.zeros((1, 1, 100)).cuda(), show_input=False, show_hierarchical=False))
print(summary(disc, torch.zeros((1, 1, 32, 32)).cuda(), show_input=False, show_hierarchical=False))
###Output
_____no_output_____ |
lab/part_iii_yaml/yaml_solution.ipynb | ###Markdown
Solution for YAML Structured Data --- Step 1:
###Code
# Determine the local YAML file name
# Import the OS module
import os
# List the contents of the current working directory
print(os.listdir())
# The name of the file is 'network_data.yaml'
yaml_file = 'network_data.yaml'
print(yaml_file)
###Output
_____no_output_____
###Markdown
--- Step 2:
###Code
# Import the YAML module
import yaml
###Output
_____no_output_____
###Markdown
--- Step 3:
###Code
# Use the context manager to read the YAML file and convert the contents to a Python object
with open(yaml_file, mode='rt', encoding='utf-8') as file:
yaml_data = file.read()
python_data = yaml.safe_load(yaml_data)
# Display the type of the 'python_data' object
type(python_data)
# Optionally, display the contents of the 'python_data' object
# print(python_data)
###Output
_____no_output_____
###Markdown
--- Step 4:
###Code
# Display the keys for the 'python_data' dictionary
print(python_data.keys())
# Display the sub-keys for the top-level 'python_data' dictionary key
print(python_data['devices'].keys())
###Output
_____no_output_____
###Markdown
--- Step 5
###Code
# Loops over the 'python_data' subkeys and displays the contents for any keys named 'data'
for host, attributes in python_data['devices'].items():
# The following line will result in an exception if the 'data' key isn't found in any iteration of the loop
# print(attributes['data'])
# Using 'get()' method will, instead, silently skip any iterations which don't contain the 'data' key
# The 'get()' method also allows you to specify a default value when a key is not found (not shown here)
print(attributes.get('data'))
###Output
_____no_output_____
###Markdown
--- Step 6:
###Code
# Create a dictionary object for the 'nxos2' device
nxos2 = {
'nxos2':{
'data': {
'role': 'distribution',
'site': 'atc56',
'type': 'network-device'
},
'groups': ['dna_3'],
'hostname': 'nxos2',
'platform': 'nxos',
'username': 'wwt',
'password': 'WWTwwt1!',
'port': '22'
}
}
# Display the contents of the 'nxos2' dictionary
print(nxos2.items())
###Output
_____no_output_____
###Markdown
--- Step 7:
###Code
# Add the 'nxos2' dictionary to the 'python_data' dictionary
python_data['devices'].update(nxos2)
# Import the Pretty Print (**pprint**) function from the Pretty Print (**pprint**) module
from pprint import pprint
# Display the 'python_data dictionary - option #1
pprint(python_data)
# Display the 'python_data dictionary - option #2
print(python_data.keys())
# Display the 'python_data dictionary - option #3
print(python_data.items())
###Output
_____no_output_____
###Markdown
--- Step 8
###Code
# Use the context manager to write a new YAML file with the YAML-converted contents of **python_data**
with open('new_network_data.yaml', mode='wt', encoding='utf-8') as file:
new_yaml_data = yaml.safe_dump(python_data)
file.write(new_yaml_data)
# List the contents of the current working directory
print(os.listdir())
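# Optional round-trip check (sketch): re-load the file just written and confirm the new 'nxos2' entry is present
with open('new_network_data.yaml', mode='rt', encoding='utf-8') as file:
    check_data = yaml.safe_load(file.read())
print('nxos2' in check_data['devices'])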
###Output
_____no_output_____ |
coursera-data-science-in-python/Course5-Networks/Course5-Assignment3.ipynb | ###Markdown
---_You are currently looking at **version 1.2** of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the [Jupyter Notebook FAQ](https://www.coursera.org/learn/python-social-network-analysis/resources/yPcBs) course resource._--- Assignment 3 In this assignment you will explore measures of centrality on two networks, a friendship network in Part 1, and a blog network in Part 2. Part 1 Answer questions 1-4 using the network `G1`, a network of friendships at a university department. Each node corresponds to a person, and an edge indicates friendship. *The network has been loaded as networkx graph object `G1`.*
###Code
import networkx as nx
G1 = nx.read_gml('friendships.gml')
###Output
_____no_output_____
###Markdown
Question 1 Find the degree centrality, closeness centrality, and normalized betweenness centrality (excluding endpoints) of node 100.*This function should return a tuple of floats `(degree_centrality, closeness_centrality, betweenness_centrality)`.*
###Code
def answer_one():
# Your Code Here
return # Your Answer Here
###Output
_____no_output_____
###Markdown
For Questions 2, 3, and 4, assume that you do not know anything about the structure of the network, except for all the centrality values of the nodes. That is, use one of the covered centrality measures to rank the nodes and find the most appropriate candidate. Question 2 Suppose you are employed by an online shopping website and are tasked with selecting one user in network G1 to send an online shopping voucher to. We expect that the user who receives the voucher will send it to their friends in the network. You want the voucher to reach as many nodes as possible. The voucher can be forwarded to multiple users at the same time, but the travel distance of the voucher is limited to one step, which means if the voucher travels more than one step in this network, it is no longer valid. Apply your knowledge in network centrality to select the best candidate for the voucher. *This function should return an integer, the name of the node.*
###Code
def answer_two():
# Your Code Here
return # Your Answer Here
###Output
_____no_output_____
###Markdown
Question 3 Now the limit of the voucher’s travel distance has been removed. Because the network is connected, regardless of who you pick, every node in the network will eventually receive the voucher. However, we now want to ensure that the voucher reaches the nodes in the lowest average number of hops. How would you change your selection strategy? Write a function to tell us who is the best candidate in the network under this condition.*This function should return an integer, the name of the node.*
###Code
def answer_three():
# Your Code Here
return # Your Answer Here
###Output
_____no_output_____
###Markdown
Question 4 Assume the restriction on the voucher’s travel distance is still removed, but now a competitor has developed a strategy to remove a person from the network in order to disrupt the distribution of your company’s voucher. Your competitor is specifically targeting people who are often bridges of information flow between other pairs of people. Identify the single riskiest person to be removed under your competitor’s strategy.*This function should return an integer, the name of the node.*
###Code
def answer_four():
# Your Code Here
return # Your Answer Here
###Output
_____no_output_____
###Markdown
Part 2 `G2` is a directed network of political blogs, where nodes correspond to a blog and edges correspond to links between blogs. Use your knowledge of PageRank and HITS to answer Questions 5-9.
###Code
G2 = nx.read_gml('blogs.gml')
###Output
_____no_output_____
###Markdown
Question 5 Apply the Scaled Page Rank Algorithm to this network. Find the Page Rank of node 'realclearpolitics.com' with damping value 0.85.*This function should return a float.*
###Code
def answer_five():
# Your Code Here
return # Your Answer Here
###Output
_____no_output_____
###Markdown
Question 6 Apply the Scaled Page Rank Algorithm to this network with damping value 0.85. Find the 5 nodes with highest Page Rank. *This function should return a list of the top 5 blogs in descending order of Page Rank.*
###Code
def answer_six():
# Your Code Here
return # Your Answer Here
###Output
_____no_output_____
###Markdown
Question 7 Apply the HITS Algorithm to the network to find the hub and authority scores of node 'realclearpolitics.com'. *Your result should return a tuple of floats `(hub_score, authority_score)`.*
###Code
def answer_seven():
# Your Code Here
return # Your Answer Here
###Output
_____no_output_____
###Markdown
Question 8 Apply the HITS Algorithm to this network to find the 5 nodes with highest hub scores.*This function should return a list of the top 5 blogs in descending order of hub scores.*
###Code
def answer_eight():
# Your Code Here
return # Your Answer Here
###Output
_____no_output_____
###Markdown
Question 9 Apply the HITS Algorithm to this network to find the 5 nodes with highest authority scores.*This function should return a list of the top 5 blogs in descending order of authority scores.*
###Code
def answer_nine():
# Your Code Here
return # Your Answer Here
###Output
_____no_output_____ |
Python Programs for YouTube/1_Introduction/Flow-Control/1_if-else/.ipynb_checkpoints/if-elif-else-checkpoint.ipynb | ###Markdown
Python if ... else Statement The **if…elif…else** statement is used in Python for decision making. if statement syntax if test expression: statement(s) The program evaluates the test expression and will execute statement(s) only if the test expression is True. If the test expression is False, the statement(s) is not executed. Python interprets non-zero values as True. None and 0 are interpreted as False. Flow Chart Example
###Code
num = 10
# try 0, -1 and None
if None:
print("Number is positive")
print("This will print always") #This print statement always print
#change number
###Output
This will print always
###Markdown
if ... else Statement Syntax: if test expression: Body of if else: Body of else Flow Chart  Example
###Code
num = 10
if num > 0:
print("Positive number")
else:
print("Negative Number")
###Output
Positive number
###Markdown
if...elif...else Statement Syntax: if test expression: Body of if elif test expression: Body of elif else: Body of else Flow Chart  Example:
###Code
num = 0
if num > 0:
print("Positive number")
elif num == 0:
print("ZERO")
else:
print("Negative Number")
###Output
ZERO
###Markdown
Nested if Statements We can have an if...elif...else statement inside another if...elif...else statement. This is called nesting in computer programming. Example:
###Code
num = 10.5
if num >= 0:
if num == 0:
print("Zero")
else:
print("Positive number")
else:
print("Negative Number")
###Output
Positive number
###Markdown
Python program to find the largest element among three Numbers
###Code
num1 = 10
num2 = 50
num3 = 15
if (num1 >= num2) and (num1 >= num3): #logical operator and
largest = num1
elif (num2 >= num1) and (num2 >= num3):
largest = num2
else:
largest = num3
print("Largest element among three numbers is: {}".format(largest))
###Output
Largest element among three numbers is: 50
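###Markdown
For comparison, the same result can be obtained without any if statements by using Python's built-in max() function.
###Code
print("Largest element among three numbers is: {}".format(max(num1, num2, num3)))
###Output
_____no_output_____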
|
archive/plots/AutoCorrelation.ipynb | ###Markdown
Autocorrelation Diagnostic (Alpha) Autocorrelation plots show an estimate of the independence of a time series signal with itself. As a diagnostic for Markov chain Monte Carlo chains they provide an estimate of the independence of each draw from previous draws in the same chain. High autocorrelation does not necessarily imply lack of convergence, only loss of efficiency. The higher the autocorrelation, the larger the sample size you will need to get the correct answer, but you can have convergence with high autocorrelation. In fact, Markov chain theory guarantees that under certain (mild) conditions you will get to the correct stationary distribution. Unfortunately, this guarantee requires infinite samples. So the trick is to guess when the chain has "practically reached infinity" for a given problem. Autocorrelation plots help the practitioner make that assessment. ArviZ Autocorrelation plots By default ArviZ will generate an autocorrelation plot for each variable and for each chain. The autocorrelation should decrease as the lag increases, which indicates a low level of correlation.
###Code
import arviz as az
import numpy as np
import matplotlib.pyplot as plt
az.style.use('arviz-darkgrid')
data = az.load_arviz_data('centered_eight')
ax = az.plot_autocorr(data, var_names="mu")
###Output
_____no_output_____
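###Markdown
The introduction above notes that higher autocorrelation means more draws are needed for the same amount of information. A quick way to quantify this is the effective sample size; the short sketch below assumes a reasonably recent ArviZ release where the helper is called `az.ess` (older releases expose it as `az.effective_sample_size`).
###Code
# Effective sample size for mu: the lower it is relative to the raw number of draws,
# the stronger the autocorrelation in the chains.
az.ess(data, var_names=["mu"])
###Output
_____no_output_____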
###Markdown
Autocorrelation derivation Autocorrelation at each lag is defined as $$\large \rho_{lag} = \frac{\sum_{i=1}^{N-lag} (\theta_{i} - \overline{\theta})(\theta_{i + lag} - \overline{\theta})}{\sum_{i=1}^{N}(\theta_{i} - \overline{\theta})^{2}}$$ This formula may be recognized as a special form of Pearson's correlation coefficient, where instead of measuring the correlation between two arbitrary sets, we are instead measuring the correlation of different time slices of the same chain. We can manually calculate the autocorrelation using numpy.
###Code
chain_0 = data.posterior.sel(chain=0)
mu_0 = chain_0["mu"]
lag = np.arange(0,100)
auto_corr = []
for shift in lag:
# Or None is to handle the case of indexing at -0
corr = np.corrcoef(mu_0[:-shift or None],mu_0[shift:])[0,1]
auto_corr.append(corr)
auto_corr[:5]
fig, ax = plt.subplots(figsize=(12,6))
ax.vlines(x=np.arange(1, 100),
ymin=0, ymax=auto_corr,
lw=1)
fig.suptitle("Autocorrelation of Mu Chain 0");
# Filter to just one chain
axes = az.plot_autocorr(chain_0, var_names="mu", figsize=(12,6))
axes[0][0].set_title("Autocorrelation of Mu Chain 0");
###Output
_____no_output_____
###Markdown
Note that this plot is identical to chain 0 for `mu` in the posterior data. Additional examples MCMC chains with high autocorrelation can indicate a lack of convergence to a stationary distribution. Take the following example, where we try to use PyMC3 to estimate the parameters of a generated distribution.
###Code
import pymc3 as pm
# Generate observed distribution with fixed parameters
SD = 2
MU = -5
obs = np.random.normal(loc=MU, scale=SD, size=10000)
# Attempt to use pymc3 to estimate mean of the distribution
with pm.Model() as model:
mu = pm.Normal("mu", mu=-5000, sd=1)
y = pm.Normal("y", mu=mu, sd=SD, observed=obs)
step = pm.Metropolis()
trace = pm.sample(100, step, chains=2)
###Output
Only 100 samples in chain.
Multiprocess sampling (2 chains in 2 jobs)
Metropolis: [mu]
Sampling 2 chains: 100%|██████████| 1200/1200 [00:00<00:00, 2729.68draws/s]
The gelman-rubin statistic is larger than 1.2 for some parameters.
The number of effective samples is smaller than 10% for some parameters.
###Markdown
In this example `mu` is the unknown parameter being estimated by PyMC3. The model is specified with a Normal prior with a mean of -5000, and only 100 steps are taken with a Metropolis-Hastings sampler.
###Code
axes = az.plot_autocorr(trace, max_lag=100)
###Output
_____no_output_____
###Markdown
When plotting the autocorrelation of the chains, however, a high degree of correlation is present, indicating that the sampling did not converge to a stationary distribution.
###Code
# Attempt to use pymc3 to estimate mean of the distribution
with pm.Model() as model:
mu = pm.Uniform("mu")
y = pm.Normal("y", mu=mu, sd=SD, observed=obs)
step = pm.Metropolis()
trace = pm.sample(5000, step, chains=2)
axes = az.plot_autocorr(trace)
###Output
Multiprocess sampling (2 chains in 2 jobs)
Metropolis: [mu]
Sampling 2 chains: 100%|██████████| 11000/11000 [00:02<00:00, 5021.88draws/s]
The number of effective samples is smaller than 25% for some parameters.
|
Trapezoid Rule.ipynb | ###Markdown
Basic Numerical Integration: the Trapezoid Rule A simple illustration of the trapezoid rule for definite integration:$$\int_{a}^{b} f(x)\, dx \approx \frac{1}{2} \sum_{k=1}^{N} \left( x_{k} - x_{k-1} \right) \left( f(x_{k}) + f(x_{k-1}) \right).$$First, we define a simple function and sample it between 0 and 10 at 200 points
###Code
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
def f(x):
return (x-3)*(x-5)*(x-7)+85
x = np.linspace(0, 10, 200)
y = f(x)
###Output
_____no_output_____
###Markdown
Choose a region to integrate over and take only a few points in that region
###Code
a, b = 1, 8 # the left and right boundaries
N = 5 # the number of points
xint = np.linspace(a, b, N)
yint = f(xint)
###Output
_____no_output_____
###Markdown
Plot both the function and the area below it in the trapezoid approximation
###Code
plt.plot(x, y, lw=2)
plt.axis([0, 9, 0, 140])
plt.fill_between(xint, 0, yint, facecolor='gray', alpha=0.4)
plt.text(0.5 * (a + b), 30,r"$\int_a^b f(x)dx$", horizontalalignment='center', fontsize=20);
###Output
_____no_output_____
###Markdown
Compute the integral both at high accuracy and with the trapezoid approximation
###Code
from __future__ import print_function
from scipy.integrate import quad
integral, error = quad(f, a, b)
integral_trapezoid = sum( (xint[1:] - xint[:-1]) * (yint[1:] + yint[:-1]) ) / 2
print("The integral is:", integral, "+/-", error)
print("The trapezoid approximation with", len(xint), "points is:", integral_trapezoid)
###Output
The integral is: 565.2499999999999 +/- 6.275535646693696e-12
The trapezoid approximation with 5 points is: 559.890625
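###Markdown
As a short follow-up sketch, the trapezoid error shrinks roughly like $1/N^{2}$ as the number of sample points grows, which we can check by repeating the approximation for a few values of N.
###Code
# Compare the trapezoid approximation against the high-accuracy result for increasing N
for n_points in (5, 10, 20, 40):
    xs = np.linspace(a, b, n_points)
    ys = f(xs)
    approx = sum((xs[1:] - xs[:-1]) * (ys[1:] + ys[:-1])) / 2
    print(n_points, "points -> error:", abs(integral - approx))
###Output
_____no_output_____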
|
pxl_scripts/colab_notebooks/User-Guide_Service-Stats.ipynb | ###Markdown
How to use Pixie to monitor the health of your services --- Setup Pixie **Prerequisites:** * You'll need to have Pixie running in a K8s environment already * You'll need a Google account to run this notebook
###Code
# Install CLI in this environment and authorize
# Notes:
## Hit enter when prompted for path
## Click on prompted auth URL and manually enter key
!bash -c "$(curl -fsSL https://withpixie.ai/install.sh)"
# Verify px can access your cluster and platform is healthy
!px get viziers
# Verify pems are installed across your nodes
!px get pems
# View scripts you can run
!px run -l
###Output
_____no_output_____
###Markdown
Get aggregate stats
###Code
# Get aggregate service stats for all your services
!px run px/service_stats
# Check what arguments you can pass the script
!px run px/service_stats --help
# Let's restrict start-time for results
!px run px/service_stats -- -start_time -10s
# Feel free to edit time window
# Let's restrict results to specific namespace
!px run px/service_stats -- -start_time -10s -service_name pl/
# Output the results in JSON format
!px run px/service_stats -o json
###Output
_____no_output_____
###Markdown
Access underlying requests
###Code
### View http events
!px run px/http_data
###Output
_____no_output_____
###Markdown
Note: We'll be adding additional scripts and live view support for raw request analysis soon. Visualize results **Terminal UI:** You can append your scripts with `-o live` or run `px live` to view an interactive interface in your terminal **Web Browser UI:** Click on the URL printed at the bottom to view the dashboard in the Pixie "Live" web UI Extending Pixie
###Code
# Transform data into json to use tools like jq to pipe it into other tools
!px run px/service_stats -o json
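# Example (sketch): pretty-print the JSON output with jq, assuming jq is installed on this machine
!px run px/service_stats -o json | jq '.'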
# Write custom scripts with the .pxl extension and execute them using px
!px run -f <add your script name>
###Output
_____no_output_____ |
DAY 001 ~ 100/DAY018_[Programmers] 해시 전화번호 목록 (Python).ipynb | ###Markdown
Monday, February 25, 2020 (re-solved Tuesday, March 23, 2021) Programmers Hash: Phone Number List Problem: https://programmers.co.kr/learn/courses/30/lessons/42577 Blog: https://somjang.tistory.com/entry/Programmers-%EC%A0%95%EB%A0%AC-%EC%A0%84%ED%99%94%EB%B2%88%ED%98%B8-%EB%AA%A9%EB%A1%9D-Python First attempt
###Code
def solution(phone_book):
answer = True
for i in range(len(phone_book)):
for j in range(i+1, len(phone_book)):
if phone_book[j].find(phone_book[i]) == 0 or phone_book[i].find(phone_book[j]) == 0:
answer = False
break
return answer
solution(['119', '97674223', '1195524421'])
###Output
_____no_output_____
###Markdown
--- Second attempt
###Code
def solution(phone_book):
answer = True
phone_book.sort()
for i in range(len(phone_book)-1):
if phone_book[i] in phone_book[i+1]:
answer = False
break
return answer
solution(['119', '97674223', '1195524421'])
###Output
_____no_output_____
###Markdown
--- Third attempt
###Code
def solution(phone_book):
answer = True
phone_book.sort()
for i in range(len(phone_book)-1):
if phone_book[i] == phone_book[i+1][:len(phone_book[i])]:
answer = False
break
return answer
solution(['119', '97674223', '1195524421'])
###Output
_____no_output_____ |
.ipynb_checkpoints/02-Homework_10-SQL-Alchemy_climate_starter-checkpoint.ipynb | ###Markdown
Reflect Tables into SQLAlchemy ORM
###Code
# General dependencies used throughout this notebook
import numpy as np
import pandas as pd
import datetime as dt
import matplotlib.pyplot as plt
# Python SQL toolkit and Object Relational Mapper
import sqlalchemy
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session
from sqlalchemy import create_engine, inspect, func
engine = create_engine("sqlite:///02-Homework_10-SQL-Alchemy_Resources_hawaii.sqlite")
# reflect an existing database into a new model
Base = automap_base()
# reflect the tables
Base.prepare(engine, reflect=True)
# We can view all of the classes that automap found
Base.classes.keys()
# Save references to each table
Measurement = Base.classes.measurement
Station = Base.classes.station
# Create our session (link) from Python to the DB
session = Session(engine)
###Output
_____no_output_____
###Markdown
Exploratory Climate Analysis
###Code
# Look at data column names and types
inspector = inspect(engine)
columns = inspector.get_columns('Measurement')
for c in columns:
print(c['name'], c["type"])
print(f"-------------------------------")
inspector = inspect(engine)
columns = inspector.get_columns('Station')
for c in columns:
print(c['name'], c["type"])
# Design a query to retrieve the last 12 months of precipitation data and plot the results
# Dates of my trip
#start_date = dt.date(2019,6,18)
#end_date = dt.date(2019,7,2)
# Find the last data point
last_data = session.query(func.max(Measurement.date)).scalar()
print(last_data)
print(type(last_data))
# Calculate the date 1 year ago from the last data point in the database
year = dt.timedelta(days=365)
year_prior = dt.datetime.strptime(last_data,'%Y-%m-%d') - year
print(year)
print(year_prior)
# Perform a query to retrieve the date and precipitation scores
precip_date = session.query(Measurement.date).filter(Measurement.date > year_prior).all()
precip = session.query(Measurement.prcp).filter(Measurement.date > year_prior).all()
# List comprehension
precip_date_list = list(np.ravel(precip_date))
print(len(precip_date_list))
print(precip_date_list)
precip_list = list(np.ravel(precip))
print(len(precip_list))
print(precip_list)
# Save the query results as a Pandas DataFrame and set the index to the date column
precip_data = {"Date": precip_date_list, 'Precipitation (inches)': precip_list}
precip_df = pd.DataFrame(precip_data, columns=['Date', 'Precipitation (inches)'])
precip_df.set_index('Date', inplace=True)
print(precip_df)
# Sort the dataframe by date
precip_df.sort_values('Date')
print(precip_df)
# Use Pandas Plotting with Matplotlib to plot the data
precip_df.plot(x_compat=True)
plt.title('One Year of Precipitation in Hawaii')
loc = np.arange(len(precip_df))
plt.xticks(loc)
plt.tight_layout()
plt.show()
# Use Pandas to calcualte the summary statistics for the precipitation data
precip_df.describe()
# Design a query to show how many stations are available in this dataset?
station = session.query(Station).count()
print(station)
# What are the most active stations? (i.e. what stations have the most rows)?
station_active = session.query(Measurement.station, func.count(Measurement.station)).\
group_by(Measurement.station).order_by(func.count(Measurement.station).desc()).all()
# List the stations and the counts in descending order.
print(station_active)
# Using the station id from the previous query, calculate the lowest temperature recorded,
# highest temperature recorded, and average temperature most active station?
most_active_station = session.query(func.min(Measurement.tobs), func.max(Measurement.tobs), func.avg(Measurement.tobs)).filter(Measurement.station == 'USC00519281').all()
print(most_active_station)
# Choose the station with the highest number of temperature observations.
# Query the last 12 months of temperature observation data for this station
# Perform a query to retrieve the precipitation observations for the most active station
most_active_station_data = session.query(Measurement.tobs).filter(Measurement.date > year_prior).all()
# List comprehension
most_active_station_data_list = list(np.ravel(most_active_station_data))
# Plot the results as a histogram
most_active_station_data_hist = plt.hist(most_active_station_data_list, bins=12, label='Temperature Observation')
# label the x axis
plt.xlabel('Temperature (F)')
#label the y axis
plt.ylabel('Frequency')
#set the title
plt.title(f'One Year of Temperature Observation Data from Hawaiis Most Active Station: USC00519281')
plt.legend()
plt.show()
# This function called `calc_temps` will accept start date and end date in the format '%Y-%m-%d'
# and return the minimum, average, and maximum temperatures for that range of dates
def calc_temps(start_date, end_date):
"""TMIN, TAVG, and TMAX for a list of dates.
Args:
start_date (string): A date string in the format %Y-%m-%d
end_date (string): A date string in the format %Y-%m-%d
Returns:
TMIN, TAVE, and TMAX
"""
return session.query(func.min(Measurement.tobs), func.avg(Measurement.tobs), func.max(Measurement.tobs)).\
filter(Measurement.date >= start_date).filter(Measurement.date <= end_date).all()
# function usage example
print(calc_temps('2012-02-28', '2012-03-05'))
# Use your previous function `calc_temps` to calculate the tmin, tavg, and tmax
# for your trip using the previous year's data for those same dates.
# Plot the results from your previous query as a bar chart.
# Use "Trip Avg Temp" as your Title
# Use the average temperature for the y value
# Use the peak-to-peak (tmax-tmin) value as the y error bar (yerr)
# Calculate the rainfall per weather station for your trip dates using the previous year's matching dates.
# Sort this in descending order by precipitation amount and list the station, name, latitude, longitude, and elevation
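# Sketch under assumptions: the dataset ends in 2017, so the "previous year's" matching dates
# for a June 18 - July 2 trip are taken here as 2016-06-18 to 2016-07-02, and the Station
# column names (station, name, latitude, longitude, elevation) are assumed from the
# standard hawaii.sqlite schema.
trip_tmin, trip_tavg, trip_tmax = calc_temps('2016-06-18', '2016-07-02')[0]
fig, ax = plt.subplots(figsize=(3, 6))
ax.bar(0, trip_tavg, yerr=(trip_tmax - trip_tmin), color='coral', alpha=0.6)
ax.set_title('Trip Avg Temp')
ax.set_ylabel('Temp (F)')
ax.set_xticks([])
plt.tight_layout()
plt.show()
# Rainfall per weather station for the same dates, sorted by total precipitation
station_rain = session.query(Station.station, Station.name, Station.latitude,
                             Station.longitude, Station.elevation,
                             func.sum(Measurement.prcp)).\
    filter(Measurement.station == Station.station).\
    filter(Measurement.date >= '2016-06-18').filter(Measurement.date <= '2016-07-02').\
    group_by(Station.station).order_by(func.sum(Measurement.prcp).desc()).all()
print(station_rain)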
###Output
_____no_output_____ |
examples/plot-chord.ipynb | ###Markdown
Guitar Chord Generator
###Code
import matplotlib.pyplot as plt
from guitar import Guitar
guitar = Guitar(key="C", scale="major", font_family="Comic Sans MS")
###Output
The current [34mfont.family[0m is [32m['Comic Sans MS'][0m.
###Markdown
plot guitar layout
###Code
ax = guitar.plot_chord_layout()
###Output
_____no_output_____
###Markdown
plot guitar strings
###Code
ax = guitar.plot_strings()
###Output
_____no_output_____
###Markdown
Sample Music- Music Name : [欲望に満ちた少年団(ONE OK ROCK)](https://www.ufret.jp/song.php?data=5012)- KEY: "B"
###Code
from guitar.utils import find_key_major_scale
find_key_major_scale(majors=["B","E","F#"], minors=["G#", 'D#', 'C#'])
from guitar import Guitar
guitar = Guitar(key="B", scale="major", name="欲望に満ちた少年団(ONE OK ROCK)")
ax = guitar.plot_strings()
###Output
_____no_output_____
###Markdown
plot chord
###Code
# One-by-One
ax = guitar.plot_chord(chode="G#", string=6, mode="minor")
#three-man cell
fig, axes = guitar.chord_layout_create(n=3)
guitar.plot_chord(chode="E", string=6, mode="major", ax=axes[0])
guitar.plot_chord(chode="F#", string=6, mode="sus4", ax=axes[1])
guitar.plot_chord(chode="C#", string=5, mode="7th", ax=axes[2])
plt.show()
###Output
_____no_output_____ |
Applied Math/Y1S4/Lineair programmeren/.ipynb_checkpoints/Opgave 34-checkpoint.ipynb | ###Markdown
The settings/functions below are used in this document:
###Code
# Remove the scientific notation for 'big' numbers.
options(scipen=999)
# Add the column and row numbers to the matrix for easier indexing.
# It preserves the original naming.
colrownames <- function(M) {
cn=colnames(M)
cn=paste(cn, ' (', c(1:length(M[1,])), ')', sep='')
colnames(M)=cn
rn=rownames(M)
rn=paste(rn, ' (', c(1:length(M[,1])), ')', sep='')
rownames(M)=rn
M
}
###Output
_____no_output_____
###Markdown
LP problem The LP problem for exercise 34 is as follows:$$ \begin{aligned}\min 3000x+4500y&+4000z\\x+y+z &= 100 \qquad&\text{(1)} \\x &\geq 30 \qquad&\text{(2)} \\-x+y &\leq 0 \qquad&\text{(3)} \\-y+2z &\leq 0 \qquad&\text{(4)}\\z &\leq 10 \qquad&\text{(5)}\end{aligned}$$ All possible combinations of $\leq, \geq, =$ occur here. The problem also has to be converted into a maximization problem. Because artificial variables are needed, a two-phase model is used. Canonical form First we set up the objective function: $d=3000x+4500y+4000z$. Since this is a minimization problem and we want to turn it into a maximization problem, we take $d^*=-d$. The objective function then becomes $-d=-3000x-4500y-4000z \iff d^*=-3000x-4500y-4000z$. Next we rewrite the other restrictions in canonical form, which gives the following equations:$$ \tag{0} d^*+3000x+4500y+4000z = 0 $$$$ \tag{1} x+y+z+A_1=100 $$$$ \tag{2} x-S_2+A_2=30 $$$$ \tag{3} -x+y+s_3=0 $$$$ \tag{4} -y+2z+s_4=0 $$$$ \tag{5} z+s_5=10 $$ Because artificial variables are used, we need an extra objective function, namely $A=-A_1-A_2$. This objective function is maximized up to $0$. In canonical form we write this as $A+A_1+A_2=0 \quad(0^*)$. However, $A_1$ and $A_2$ may each appear only once in the equations because they are basic variables. This problem is solved as follows: $(0^*)-(1)-(2)$. This gives $A+A_1+A_2-(x+y+z+A_1)-(x-S_2+A_2)=0-100-30$, i.e. $A-2x-y-z+S_2=-130$. Below are all the equations in canonical form:$$ \tag{0*} A-2x-y-z+S_2=-130 $$$$ \tag{0} d^*+3000x+4500y+4000z = 0 $$$$ \tag{1} x+y+z+A_1=100 $$$$ \tag{2} x-S_2+A_2=30 $$$$ \tag{3} -x+y+s_3=0 $$$$ \tag{4} -y+2z+s_4=0 $$$$ \tag{5} z+s_5=10 $$ Now we can convert this into a simplex tableau. Simplex First we set up the initial tableau:
###Code
A=c(1,0,0,0,0,0,0)
d=c(0,1,0,0,0,0,0)
x=c(-2,3000,1,1,-1,0,0)
y=c(-1,4500,1,0,1,-1,0)
z=c(-1,4000,1,0,0,2,1)
A1=c(0,0,1,0,0,0,0)
S2=c(1,0,0,-1,0,0,0)
A2=c(0,0,0,1,0,0,0)
s3=c(0,0,0,0,1,0,0)
s4=c(0,0,0,0,0,1,0)
s5=c(0,0,0,0,0,0,1)
RHS=c(-130,0,100,30,0,0,10)
M=cbind(A,d,x,y,z,A1,S2,A2,s3,s4,s5,RHS)
rownames(M)=c('A','d*','A1','A2','s3','s4', 's5')
M=colrownames(M)
# Iteration 1 (A)
M
###Output
_____no_output_____
###Markdown
What stands out in the initial tableau for the added variables, with the exception of equation $(0^*)$, is the following: 1. A slack variable (SlackN) $s_i$ is always $\geq$ 0 in the initial tableau. 2. A surplus variable (Surp.N) $S_i$ is always $\leq$ 0 in the initial tableau. 3. An artificial variable (Art.N) $A_i$ is always $\geq$ 0 in the initial tableau. This observation can be used to check the initial tableau for correctness before the pivoting starts. The most negative value is in this case $x=-2$; pivoting gives:
###Code
# Iteration 2 (A)
I2=M
I2[1,]=I2[1,]+2*I2[4,]
I2[2,]=I2[2,]-3000*I2[4,]
I2[3,]=I2[3,]-I2[4,]
I2[5,]=I2[5,]+I2[4,]
I2
###Output
_____no_output_____
###Markdown
The most negative values are in this case $y=-1$ and $z=-1$; both are equal, so it becomes $y$ because that is the first one.
###Code
# Iteration 3 (A)
I3=I2
I3[1,]=I3[1,]+I3[5,]
I3[2,]=I3[2,]-4500*I3[5,]
I3[3,]=I3[3,]-I3[5,]
I3[6,]=I3[6,]+I3[5,]
I3
###Output
_____no_output_____
###Markdown
The most negative value is now $S_2$; pivoting gives:
###Code
# Iteration 4 (A)
I4=I3
I4[1,]=I4[1,]+I4[3,]
I4[3,]=I4[3,]
I4[2,]=I4[2,]-7500/2*I4[3,]
I4[4,]=I4[4,]+1/2*I4[3,]
I4[5,]=I4[5,]+1/2*I4[3,]
I4[6,]=I4[6,]+1/2*I4[3,]
I4[3,]=I4[3,]/2
I4
###Output
_____no_output_____
###Markdown
Now the first phase is finished and $A=0$. Next we continue with optimizing the objective function $d^*$. Note that from now on we ignore all columns with $A$ and the first row. The most negative value is $A_1$, but that column no longer counts. The next value is $s_3$; pivoting gives:
###Code
# Iteration 1 (d*)
I5=I4
I5[5,]=I5[5,]*2
I5[2,]=I5[2,]+I5[5,]*750
I5
###Output
_____no_output_____ |
homework-05/05-homework (1)/02-Billionaires-silva.ipynb | ###Markdown
Homework 5, Part 2: Answer questions with pandas **Use the Excel file to answer the following questions.** This is a little more typical of what your data exploration will look like with pandas. 0) Setup Import pandas **with the correct name**.
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
1) Reading in an Excel file Use pandas to read in the `richpeople.xlsx` Excel file, saving it as a variable with the name we'll always use for a dataframe.> **TIP:** You will use `read_excel` instead of `read_csv`, *but you'll also need to install a new library*. You might need to restart your kernel afterward!
###Code
df = pd.read_excel("richpeople.xlsx")
###Output
_____no_output_____
###Markdown
2) Checking your data Display the number of rows and columns in your data. Also display the names and data types of each column.
###Code
df.count()
df.columns
df.dtypes
###Output
_____no_output_____
###Markdown
3) Who are the top 10 richest billionaires? Use the `networthusbillion` column.
###Code
df.sort_values(by='networthusbillion', ascending=False).head(10)
###Output
_____no_output_____
###Markdown
4) How many male billionaires are there compared to the number of female billionares? What percent is that? Do they have a different average wealth?> **TIP:** The last part uses `groupby`, but the count/percent part does not.> **TIP:** When I say "average," you can pick what kind of average you use.
###Code
df.groupby(by= 'gender').year.count()
df.gender.value_counts(normalize=True) * 100
df.groupby(by= 'gender').networthusbillion.mean()
###Output
_____no_output_____
###Markdown
5) What is the most common source/type of wealth? Is it different between males and females?> **TIP:** You know how to `groupby` and you know how to count how many times a value is in a column. Can you put them together???> **TIP:** Use percentages for this, it makes it a lot more readable.
###Code
df.groupby(by= ['gender','typeofwealth' ]).count()
df.groupby(by= ['gender','typeofwealth' ]).count()
###Output
_____no_output_____
###Markdown
6) What companies have the most billionaires? Graph the top 5 as a horizontal bar graph.> **TIP:** First find the answer to the question, then just try to throw `.plot()` on the end>> **TIP:** You can use `.head()` on *anything*, not just your basic `df`>> **TIP:** You might feel like you should use `groupby`, but don't! There's an easier way to count.>> **TIP:** Make the largest bar be at the top of the graph>> **TIP:** If your chart seems... weird, think about where in the process you're sorting vs using `head`
###Code
df.sort_values(by='networthusbillion', ascending=False).head(5).plot(kind='bar', x= 'name' , y='networthusbillion')
###Output
_____no_output_____
###Markdown
7) How much money do these billionaires have in total?
###Code
df.sort_values(by='networthusbillion', ascending=False).head(5).networthusbillion.sum()
###Output
_____no_output_____
###Markdown
8) What are the top 10 countries with the most money held by billionaires?I am **not** asking which country has the most billionaires - this is **total amount of money per country.**> **TIP:** Think about it in steps - "I want them organized by country," "I want their net worth," "I want to add it all up," and "I want 10 of them." Just chain it all together.
###Code
df.groupby(by= ['countrycode']).networthusbillion.sum().head(10)
###Output
_____no_output_____
###Markdown
9) How old is an average billionaire? How old are self-made billionaires vs. non self-made billionaires? 10) Who are the youngest billionaires? Who are the oldest? Make a graph of the distribution of ages.> **TIP:** You use `.plot()` to graph values in a column independently, but `.hist()` to draw a [histogram](https://www.mathsisfun.com/data/histograms.html) of the distribution of their values
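> **TIP:** The next cell is a minimal sketch for Question 9; it assumes the self-made flag is stored in a column named `selfmade`, which may be named differently in your copy of the spreadsheet.
###Code
# Question 9 (sketch): average billionaire age, overall and split by self-made status.
# ASSUMPTION: the self-made indicator column is called 'selfmade'.
print(df['age'].mean())
df.groupby('selfmade')['age'].mean()
###Output
_____no_output_____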
###Code
df.sort_values(by='age', ascending=True).head(1)
df.sort_values(by='age', ascending=False).head(1)
df['age'].hist()
###Output
_____no_output_____
###Markdown
11) Make a scatterplot of net worth compared to age
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
df.plot.scatter(x = 'age', y = 'networthusbillion')
###Output
_____no_output_____
###Markdown
13) Make a bar graph of the wealth of the top 10 richest billionaires> **TIP:** When you make your plot, you'll need to set the `x` and `y` or else your chart will look _crazy_>> **TIP:** x and y might be the opposite of what you expect them to be
###Code
df.sort_values(by='networthusbillion', ascending=False).head(10).plot(kind='bar', x='name', y= 'networthusbillion')
###Output
_____no_output_____ |
Chapter3_MLIntroduction/KnnSklearn/KnnSkLearn.ipynb | ###Markdown
Dataset preparation
###Code
iris = datasets.load_iris()
x = iris.data[:, :2] # :, -> every row; :2 -> only the first two columns
y = iris.target
class_names = iris.target_names
discription = iris.DESCR
print(f"class names:\n{class_names}")
print(f"description:\n{discription}")
print(f"X-shape:\n{x.shape}")
print(f"y-shape:\n{y.shape}")
print(f"X:\n{x}")
print(f"y:\n{y}")
###Output
X-shape:
(150, 2)
y-shape:
(150,)
X:
[[5.1 3.5]
[4.9 3. ]
[4.7 3.2]
[4.6 3.1]
[5. 3.6]
[5.4 3.9]
[4.6 3.4]
[5. 3.4]
[4.4 2.9]
[4.9 3.1]
[5.4 3.7]
[4.8 3.4]
[4.8 3. ]
[4.3 3. ]
[5.8 4. ]
[5.7 4.4]
[5.4 3.9]
[5.1 3.5]
[5.7 3.8]
[5.1 3.8]
[5.4 3.4]
[5.1 3.7]
[4.6 3.6]
[5.1 3.3]
[4.8 3.4]
[5. 3. ]
[5. 3.4]
[5.2 3.5]
[5.2 3.4]
[4.7 3.2]
[4.8 3.1]
[5.4 3.4]
[5.2 4.1]
[5.5 4.2]
[4.9 3.1]
[5. 3.2]
[5.5 3.5]
[4.9 3.6]
[4.4 3. ]
[5.1 3.4]
[5. 3.5]
[4.5 2.3]
[4.4 3.2]
[5. 3.5]
[5.1 3.8]
[4.8 3. ]
[5.1 3.8]
[4.6 3.2]
[5.3 3.7]
[5. 3.3]
[7. 3.2]
[6.4 3.2]
[6.9 3.1]
[5.5 2.3]
[6.5 2.8]
[5.7 2.8]
[6.3 3.3]
[4.9 2.4]
[6.6 2.9]
[5.2 2.7]
[5. 2. ]
[5.9 3. ]
[6. 2.2]
[6.1 2.9]
[5.6 2.9]
[6.7 3.1]
[5.6 3. ]
[5.8 2.7]
[6.2 2.2]
[5.6 2.5]
[5.9 3.2]
[6.1 2.8]
[6.3 2.5]
[6.1 2.8]
[6.4 2.9]
[6.6 3. ]
[6.8 2.8]
[6.7 3. ]
[6. 2.9]
[5.7 2.6]
[5.5 2.4]
[5.5 2.4]
[5.8 2.7]
[6. 2.7]
[5.4 3. ]
[6. 3.4]
[6.7 3.1]
[6.3 2.3]
[5.6 3. ]
[5.5 2.5]
[5.5 2.6]
[6.1 3. ]
[5.8 2.6]
[5. 2.3]
[5.6 2.7]
[5.7 3. ]
[5.7 2.9]
[6.2 2.9]
[5.1 2.5]
[5.7 2.8]
[6.3 3.3]
[5.8 2.7]
[7.1 3. ]
[6.3 2.9]
[6.5 3. ]
[7.6 3. ]
[4.9 2.5]
[7.3 2.9]
[6.7 2.5]
[7.2 3.6]
[6.5 3.2]
[6.4 2.7]
[6.8 3. ]
[5.7 2.5]
[5.8 2.8]
[6.4 3.2]
[6.5 3. ]
[7.7 3.8]
[7.7 2.6]
[6. 2.2]
[6.9 3.2]
[5.6 2.8]
[7.7 2.8]
[6.3 2.7]
[6.7 3.3]
[7.2 3.2]
[6.2 2.8]
[6.1 3. ]
[6.4 2.8]
[7.2 3. ]
[7.4 2.8]
[7.9 3.8]
[6.4 2.8]
[6.3 2.8]
[6.1 2.6]
[7.7 3. ]
[6.3 3.4]
[6.4 3.1]
[6. 3. ]
[6.9 3.1]
[6.7 3.1]
[6.9 3.1]
[5.8 2.7]
[6.8 3.2]
[6.7 3.3]
[6.7 3. ]
[6.3 2.5]
[6.5 3. ]
[6.2 3.4]
[5.9 3. ]]
y:
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2
2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
2 2]
###Markdown
Dataset split
###Code
num_samples = x.shape[0]
num_features = x.shape[1]
num_classes = len(np.unique(y))
print(f"number of samples:\n{num_samples}")
print(f"number of features:\n{num_features}")
print(f"number of classes:\n{num_classes}")
test_size = num_samples // 3 # integer division -> //
random_indices = np.random.permutation(num_samples)
x_train = x[random_indices[:-test_size]]
y_train = y[random_indices[:-test_size]]
x_test = x[random_indices[-test_size:]]
y_test = y[random_indices[-test_size:]]
print(f"X-train shape:\n{x_train.shape}")
print(f"y-train shape:\n{y_train.shape}")
print(f"X-test shape:\n{x_test.shape}")
print(f"y-test shape:\n{y_test.shape}")
###Output
X-train shape:
(100, 2)
y-train shape:
(100,)
X-test shape:
(50, 2)
y-test shape:
(50,)
###Markdown
KNN Model
###Code
from sklearn.neighbors import KNeighborsClassifier
clf = KNeighborsClassifier(n_neighbors=3)
clf.fit(x_train, y_train)
accuracy = clf.score(x_test, y_test)
print(f"accuracy: {accuracy*100.0:.4}%")
y_pred = clf.predict(x_test)
print(f"y_pred:\n{y_pred}")
###Output
accuracy: 58.0%
y_pred:
[0 0 0 2 1 0 1 1 0 1 2 1 2 2 0 2 2 2 2 1 2 1 2 1 0 1 2 1 0 1 1 2 0 1 0 2 2
2 1 2 2 2 1 2 0 1 1 0 2 2]
###Markdown
Try different hyperparameters
###Code
n_neighbors = [n_neighbor for n_neighbor in range(1, 11)]
weight_modes = ['uniform', 'distance']
# 10 x 2 = 20 Models
for n_neighbor in n_neighbors:
for weight_mode in weight_modes:
clf = KNeighborsClassifier(n_neighbors=n_neighbor, weights=weight_mode)
clf.fit(x_train, y_train)
accracy = clf.score(x_test, y_test)
print(f"Neighbors: {n_neighbor}\nweight mode: {weight_mode}\naccuracy: {accracy*100:.4}%\n\n")
clf = KNeighborsClassifier(n_neighbors=8, weights='uniform')
clf.fit(x_train, y_train)
accuracy = clf.score(x_test, y_test)
print(f"accuracy: {accuracy*100.0:.4}%")
y_pred = clf.predict(x_test)
print(f"y_pred:\n{y_pred}")
y_pred_proba = clf.predict_proba(x_test)
print(f"y_pred_proba:\n{y_pred_proba}")
from typing import Any
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
cmap_background = ListedColormap(
colors=[
'#FFAAAA',
'#AAFFAA',
'#AAAAFF']
)
cmap_points = [
'red',
'green',
'blue'
]
def make_mashgrid(
x0: np.ndarray,
x1: np.ndarray
) -> np.ndarray:
step_width = 0.05
offset = 0.1
x0_arange = np.arange(np.min(x0) - offset, np.max(x0) + offset, step_width)
x1_arange = np.arange(np.min(x1) - offset, np.max(x1) + offset, step_width)
xx0, xx1 = np.meshgrid(x0_arange, x1_arange)
return xx0, xx1
def plot_contours(
ax: plt.Axes,
clf: KNeighborsClassifier,
xx0: np.ndarray,
xx1: np.ndarray,
**params: Any
):
x_combinations = np.vstack([xx0.flatten(), xx1.flatten()]).T
z = clf.predict(x_combinations)
z = np.reshape(z, newshape=(xx0.shape))
ax.contourf(xx0, xx1, z, **params)
def plot_decision_border(
clf: KNeighborsClassifier,
x_train: np.ndarray,
y_train: np.ndarray,
x_test: np.ndarray,
y_test: np.ndarray) -> None:
fig, ax = plt.subplots()
X0 = x_train[:, 0]
X1 = x_train[:, 1]
xx0, xx1 = make_mashgrid(X0, X1)
plot_contours(
ax, clf, xx0, xx1, cmap=cmap_background, alpha=0.5
)
for index, point in enumerate(x_train):
plt.scatter(
x=point[0],
y=point[1],
color=cmap_points[y_train[index]],
s=15,
marker="o")
for index, point in enumerate(x_test):
plt.scatter(
x=point[0],
y=point[1],
color=cmap_points[y_test[index]],
s=40,
marker="*")
plt.show()
plot_decision_border(clf, x_train, y_train, x_test, y_test)
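# Alternative (sketch): the manual hyperparameter loop above can also be expressed with
# scikit-learn's built-in grid search (5-fold cross-validation on the training split).
from sklearn.model_selection import GridSearchCV
param_grid = {"n_neighbors": list(range(1, 11)), "weights": ["uniform", "distance"]}
grid = GridSearchCV(KNeighborsClassifier(), param_grid, cv=5)
grid.fit(x_train, y_train)
print(f"best params: {grid.best_params_}, best CV accuracy: {grid.best_score_*100:.4}%")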
###Output
_____no_output_____ |
Caravela_Hygroclip.ipynb | ###Markdown
Change date time to isoformat
###Code
dt = []
for i in tqdm(range(0, len(hc2a['PC Timestamp[UTC]']))):
dt.append(datetime.strptime(hc2a['PC Timestamp[UTC]'][i], '%Y/%m/%d %H:%M:%S.%f').isoformat())
hc2a['datetime_UTC'] = dt
subset = hc2a[(hc2a['datetime_UTC'] >= '2020-01-22 00:00:00.000')] #select data from Caravela's launch onwards
subset
# check all time zones are utc
subset['PC Time Zone'].unique()
subset = subset.drop(['PC Timestamp[UTC]', 'PC Time Zone'],axis=1) # remove the colums we dont need
subset
###Output
_____no_output_____
###Markdown
We need to turn the ' --.--' entries into something Python-friendly. Below we define a function to do this.
###Code
def get_better_value(value):
"""Create readable output for the given variable from hygroclip input"""
if isinstance(value, float):
#nothing to do
return value
elif value == ' --.--':
return np.nan
else:
return float(value)
orig_temp = subset['Temp'].to_numpy() #apply bad entry clean up to temperature
#Go through every element in list and use function to convert
better_temp = [get_better_value(i) for i in orig_temp]
better_temp[:5]
orig_hum = subset['Humidity'].to_numpy() #apply bad entry clean up to humidity
#Go through every element in list and use function to convert
better_hum = [get_better_value(i) for i in orig_hum]
better_hum[:5]
subset['Humidity'] = better_hum
subset['Temp'] = better_temp
subset.to_csv('../../Products/CARAVELA_Hygroclip.csv',index = None)#save file
###Output
_____no_output_____
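###Markdown
An equivalent, vectorised way to do the same cleanup is pandas' `pd.to_numeric` with `errors='coerce'`, which turns anything it cannot parse (such as the ' --.--' placeholders) into NaN. A quick sketch:
###Code
# Vectorised alternative to the list-comprehension cleanup above
subset['Temp'] = pd.to_numeric(subset['Temp'], errors='coerce')
subset['Humidity'] = pd.to_numeric(subset['Humidity'], errors='coerce')
###Output
_____no_output_____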
###Markdown
Check saved file
###Code
baa = pd.read_csv('../../Products/CARAVELA_Hygroclip.csv')# import file to test it
baa
x=[] # parse the timestamp
for i in tqdm(range(len(baa['datetime_UTC']))):
x.append(datetime.fromisoformat(baa['datetime_UTC'][i]))
baa['datetime_parsed'] = x
fig,ax = plt.subplots(1,1, figsize=(18, 15), sharex=True)
ax.scatter(baa['datetime_parsed'], baa['Temp'],s=4)
ax.set_ylabel('Temp C')
ax.set_xlabel('Date')
###Output
_____no_output_____ |
made/made.ipynb | ###Markdown
Load Data We will load the shapes data and convert it into binary form, i.e. pixel values will either be 1 or 0. We are using the "shapes" dataset, which has images across 9 different geometrical shapes. The pickled dataset has images of shape 20x20x1. We convert pixel values from the 0-255 range to {0,1} - i.e. each pixel is either "off" or "on".
###Code
def load_data():
with open('data/shapes.pkl', 'rb') as f:
data = pickle.load(f)
train_data, test_data = data['train'], data['test']
train_data = (train_data > 127.5).astype('uint8')
test_data = (test_data > 127.5).astype('uint8')
return train_data, test_data
###Output
_____no_output_____
###Markdown
Visualize data We now load and visualize the data. We have 10479 images of shape 20x20x1 in `train_data` and 4491 images of the same shape in `test_data`. In this notebook we will use the `train_data` images to train a MADE model and then use the trained model to create synthetic images, which should look similar to the training images of geometric figures.
###Code
def show_samples(samples, nrow=10, title='Samples'):
"""
samples: numpy array of shape (B x H x W x C)
"""
# the pickled image is of shape HxWxC
# make samples shape (B x C x H x W)
samples = (torch.FloatTensor(samples) / 255).permute(0, 3, 1, 2)
grid_img = make_grid(samples, nrow=nrow)
plt.figure()
plt.title(title)
# again cast the image back to HxWxC for plt.imshow to work
plt.imshow(grid_img.permute(1, 2, 0))
plt.axis('off')
plt.tight_layout()
plt.show()
# load data
train_data, test_data = load_data()
name = "Shapes dataset"
print (f'Train Shape: {train_data.shape}')
print (f'Test Shape: {test_data.shape}')
# sample 100 images from train data
idxs = np.random.choice(len(train_data), replace=False, size=(100,))
images = train_data[idxs] * 255
# show 100 samples in a 10x10 grid
show_samples(images, title=f'"{name}" Samples')
###Output
Train Shape: (10479, 20, 20, 1)
Test Shape: (4491, 20, 20, 1)
###Markdown
Simple routine to plot the train and test loss curves and to draw some samples from the trained model
###Code
def show_training_plot(train_losses, test_losses, title):
"""
test_losses: one loss value per epoch
train_losses: one loss value after every batch
"""
plt.figure()
n_epochs = len(test_losses) - 1
x_train = np.linspace(0, n_epochs, len(train_losses))
x_test = np.arange(n_epochs + 1)
plt.plot(x_train, train_losses, label='train loss')
plt.plot(x_test, test_losses, label='test loss')
plt.legend()
plt.title(title)
plt.xlabel('Epoch')
plt.ylabel('NLL')
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Code to train the model
###Code
def train(model, train_loader, optimizer, epoch, grad_clip=None):
model.train()
train_losses = []
for x in train_loader:
x = x.cuda().contiguous()
loss = model.loss(x)
optimizer.zero_grad()
loss.backward()
if grad_clip:
torch.nn.utils.clip_grad_norm_(model.parameters(), grad_clip)
optimizer.step()
train_losses.append(loss.item())
return train_losses
def eval_loss(model, data_loader):
model.eval()
total_loss = 0
with torch.no_grad():
for x in data_loader:
x = x.cuda().contiguous()
loss = model.loss(x)
total_loss += loss * x.shape[0]
avg_loss = total_loss / len(data_loader.dataset)
return avg_loss.item()
def train_epochs(model, train_loader, test_loader, train_args):
epochs, lr = train_args['epochs'], train_args['lr']
grad_clip = train_args.get('grad_clip', None)
optimizer = optim.Adam(model.parameters(), lr=lr)
train_losses = []
test_losses = [eval_loss(model, test_loader)]
for epoch in range(epochs):
model.train()
train_losses.extend(train(model, train_loader, optimizer, epoch, grad_clip))
test_loss = eval_loss(model, test_loader)
test_losses.append(test_loss)
print(f'Epoch {epoch}, Test loss {test_loss:.4f}')
return train_losses, test_losses
###Output
_____no_output_____
###Markdown
MADE network
###Code
# Code based one Andrej Karpathy's implementation: https://github.com/karpathy/pytorch-made
class MaskedLinear(nn.Linear):
def __init__(self, in_features, out_features, bias=True):
super().__init__(in_features, out_features, bias)
self.register_buffer('mask', torch.ones(out_features, in_features))
def set_mask(self, mask):
self.mask.data.copy_(torch.from_numpy(mask.astype(np.uint8).T))
def forward(self, input):
return F.linear(input, self.mask * self.weight, self.bias)
class MADE(nn.Module):
def __init__(self, input_shape, d, hidden_size=[512, 512, 512],
ordering=None):
super().__init__()
self.input_shape = input_shape
self.nin = np.prod(input_shape)
self.nout = self.nin * d
self.d = d
self.hidden_sizes = hidden_size
self.ordering = np.arange(self.nin) if ordering is None else ordering
# define a simple MLP neural net
self.net = []
hs = [self.nin] + self.hidden_sizes + [self.nout]
for h0, h1 in zip(hs, hs[1:]):
self.net.extend([
MaskedLinear(h0, h1),
nn.ReLU(),
])
self.net.pop() # pop the last ReLU for the output layer
self.net = nn.Sequential(*self.net)
self.m = {}
self.create_mask() # builds the initial self.m connectivity
def create_mask(self):
L = len(self.hidden_sizes)
# sample the order of the inputs and the connectivity of all neurons
self.m[-1] = self.ordering
for l in range(L):
self.m[l] = np.random.randint(self.m[l - 1].min(),
self.nin - 1, size=self.hidden_sizes[l])
# construct the mask matrices
masks = [self.m[l - 1][:, None] <= self.m[l][None, :] for l in range(L)]
masks.append(self.m[L - 1][:, None] < self.m[-1][None, :])
masks[-1] = np.repeat(masks[-1], self.d, axis=1)
# set the masks in all MaskedLinear layers
layers = [l for l in self.net.modules() if isinstance(l, MaskedLinear)]
for l, m in zip(layers, masks):
l.set_mask(m)
def forward(self, x):
batch_size = x.shape[0]
x = x.float()
x = x.view(batch_size, self.nin)
logits = self.net(x).view(batch_size, self.nin, self.d)
return logits.permute(0, 2, 1).contiguous().view(batch_size, self.d, *self.input_shape)
def loss(self, x):
return F.cross_entropy(self(x), x.long())
def sample(self, n):
samples = torch.zeros(n, self.nin).cuda()
self.inv_ordering = {x: i for i, x in enumerate(self.ordering)}
with torch.no_grad():
for i in range(self.nin):
logits = self(samples).view(n, self.d, self.nin)[:, :, self.inv_ordering[i]]
probs = F.softmax(logits, dim=1)
samples[:, self.inv_ordering[i]] = torch.multinomial(probs, 1).squeeze(-1)
samples = samples.view(n, *self.input_shape)
return samples.cpu().numpy()
###Output
_____no_output_____
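###Markdown
A quick sanity check of the masking logic (a sketch using a small throwaway model): with the default raster ordering, no output logit may depend on the last input pixel, so flipping that pixel must leave every logit unchanged.
###Code
_check_model = MADE((1, 20, 20), 2, hidden_size=[64, 64]).cuda()
x_a = torch.zeros(1, 1, 20, 20).cuda()
x_b = x_a.clone()
x_b.view(-1)[-1] = 1.0  # flip the last pixel in the default ordering
with torch.no_grad():
    max_diff = (_check_model(x_a) - _check_model(x_b)).abs().max().item()
print(max_diff)  # should be exactly 0.0 if the autoregressive masks are correct
###Output
_____no_output_____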
###Markdown
Code to run MADE over the Shapes dataset and produce results
###Code
def run_made():
train_data, test_data = load_data()
H = 20
W = 20
# train_data: A (n_train, H, W, 1) uint8 numpy array of binary images with values in {0, 1}
# test_data: An (n_test, H, W, 1) uint8 numpy array of binary images with values in {0, 1}
# transpose train and test data for Pytorch
train_data = np.transpose(train_data, (0, 3, 1, 2))
test_data = np.transpose(test_data, (0, 3, 1, 2))
model = MADE((1, H, W), 2, hidden_size=[512, 512]).cuda()
train_loader = data.DataLoader(train_data, batch_size=128, shuffle=True)
test_loader = data.DataLoader(test_data, batch_size=128)
train_losses, test_losses = train_epochs(model, train_loader, test_loader,
dict(epochs=20, lr=1e-3))
samples = model.sample(100)
samples = np.transpose(samples, (0, 2, 3, 1))
samples = samples * 255
show_training_plot(train_losses, test_losses, "Train Test loss Plot")
show_samples(samples, nrow=10, title='Samples')
run_made()
###Output
Epoch 0, Test loss 0.1738
Epoch 1, Test loss 0.1412
Epoch 2, Test loss 0.1266
Epoch 3, Test loss 0.1100
Epoch 4, Test loss 0.1000
Epoch 5, Test loss 0.0941
Epoch 6, Test loss 0.0892
Epoch 7, Test loss 0.0841
Epoch 8, Test loss 0.0796
Epoch 9, Test loss 0.0762
Epoch 10, Test loss 0.0736
Epoch 11, Test loss 0.0715
Epoch 12, Test loss 0.0693
Epoch 13, Test loss 0.0682
Epoch 14, Test loss 0.0660
Epoch 15, Test loss 0.0651
Epoch 16, Test loss 0.0637
Epoch 17, Test loss 0.0628
Epoch 18, Test loss 0.0619
Epoch 19, Test loss 0.0609
|
others/third_party/fairness_aware_learning/examples/discrete_fairness_aware_learning/Adult.ipynb | ###Markdown
We download and preprocess the dataset Adult from UCI as in https://github.com/jmikko/fair_ERM
###Code
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.utils.data as data_utils
from collections import namedtuple
# NOTE: chi_squared_kde (the fairness penalty used in the training loop below) comes from
# the fairness_aware_learning repository; its exact import path is not shown in this notebook.
from examples.data_loading import read_dataset
encoded_data, to_protect, encoded_data_test, to_protect_test = read_dataset(name='adult', fold=1)
encoded_data.head()
###Output
_____no_output_____
###Markdown
We define a very simple neural net
###Code
# Hyper Parameters
input_size = encoded_data.shape[1]-1
num_classes = 2
num_epochs = 40 #20
batch_size = 128
batchRenyi = 128.
learning_rate = 1e-3
lambda_renyi = 8. * batchRenyi/batch_size
class NetRegression(nn.Module):
def __init__(self, input_size, num_classes):
super(NetRegression, self).__init__()
size = 80
self.first = nn.Linear(input_size, size)
self.last = nn.Linear(size, num_classes)
def forward(self, x):
out = F.selu( self.first(x) )
out = self.last(out)
return out
cfg_factory=namedtuple('Config', 'model batch_size num_epochs lambda_renyi batchRenyi learning_rate input_size num_classes' )
config = cfg_factory(NetRegression, batch_size, num_epochs, lambda_renyi, batchRenyi, learning_rate, input_size, num_classes)
###Output
_____no_output_____
###Markdown
A few helper functions to compute performance metrics
###Code
def EntropyToProba(entropy): #Only for X Tensor of dimension 2
return entropy[:,1].exp() / entropy.exp().sum(dim=1)
def calc_accuracy(outputs,Y): #Care outputs are going to be in dimension 2
max_vals, max_indices = torch.max(outputs,1)
acc = (max_indices == Y).sum().numpy()/max_indices.size()[0]
return acc
def results_on_test(model, criterion, encoded_data_test, to_protect_test):
target = torch.tensor(encoded_data_test['Target'].values.astype(np.long)).long()
to_protect_test = torch.Tensor(to_protect_test)
data = torch.tensor(encoded_data_test.drop('Target', axis = 1).values.astype(np.float32))
outputs = model(data).detach()
loss = criterion(outputs, target)
p = EntropyToProba(outputs)
pt = torch.Tensor(to_protect_test)
ans = {}
balanced_acc = (calc_accuracy(outputs[to_protect_test==0],target[to_protect_test==0]) +
calc_accuracy(outputs[to_protect_test==1],target[to_protect_test==1]))/2
ans['loss'] = loss.item()
ans['accuracy'] = calc_accuracy(outputs,target)
ans['balanced_acc'] = balanced_acc
f = 0.5
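# p1/p0 below: rate of positive predictions within each protected group; their ratio is
# the disparate impact (di). o1/o2: true-positive rates per group; their absolute
# difference is the difference of equalized odds / equal-opportunity gap (deo).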
p1 = (((pt == 1.)*(p>f)).sum().float() / (pt == 1).sum().float())
p0 = (((pt == 0.)*(p>f)).sum().float() / (pt == 0).sum().float())
o1 = (((pt == 1.)*(p>f)*(target==1)).sum().float() / ((pt == 1)*(target==1)).sum().float())
o2 = (((pt == 0.)*(p>f)*(target==1)).sum().float() / ((pt == 0)*(target==1)).sum().float())
di = p1 / p0
deo = (o1 - o2).abs()
ans['di'] = di.item()
ans['deo'] = deo.item()
return ans
verbose = True
model = config.model(config.input_size, config.num_classes)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=config.learning_rate, weight_decay=0)
train_target = torch.tensor(encoded_data['Target'].values.astype(np.long)).long()
train_data = torch.tensor(encoded_data.drop('Target', axis = 1).values.astype(np.float32))
train_protect = torch.tensor(to_protect).float()
train_tensor = data_utils.TensorDataset(train_data, train_target)
train_loader = data_utils.DataLoader(dataset = train_tensor, batch_size = config.batch_size, shuffle = True)
for epoch in range(config.num_epochs):
for i, (x, y) in enumerate(train_loader):
optimizer.zero_grad()
outputs = model(x)
#Compute the usual loss of the prediction
loss = criterion(outputs, y)
if 1:
#Select a renyi regularization mini batch and compute the value of the model on it
frac=config.batchRenyi/train_data.shape[0]
foo = torch.bernoulli(frac*torch.ones(train_data.shape[0])).bool()
br = train_data[foo, : ]
pr = train_protect[foo]
yr = train_target[foo].float()
ren_outs = model(br)
#Compute the fairness penalty for positive labels only since we optimize for DEO
delta = EntropyToProba(ren_outs)
#r2 = chi_squared_kde( delta, pr[yr==1.])
r2 = chi_squared_kde(delta, pr, yr).mean()
loss += config.lambda_renyi*r2
#In Adam we trust
loss.backward()
optimizer.step()
if verbose:
print ('Epoch: [%d/%d], Batch: [%d/%d], Loss: %.4f, Accuracy: %.4f, Fairness penalty: %.4f' % (epoch+1, config.num_epochs, i, len(encoded_data)//batch_size,
loss.item(),calc_accuracy(outputs,y),
r2.item()
))
#print( results_on_test(model, criterion, encoded_data_test, to_protect_test) )
print("Results on test set")
results_on_test(model, criterion, encoded_data_test, to_protect_test)
###Output
Epoch: [1/20], Batch: [254/254], Loss: 0.6986, Accuracy: 0.7755, Fairness penalty: 0.0256
Epoch: [2/20], Batch: [254/254], Loss: 0.5739, Accuracy: 0.8980, Fairness penalty: 0.0350
Epoch: [3/20], Batch: [254/254], Loss: 0.5006, Accuracy: 0.7959, Fairness penalty: 0.0175
Epoch: [4/20], Batch: [254/254], Loss: 0.7839, Accuracy: 0.8367, Fairness penalty: 0.0543
Epoch: [5/20], Batch: [254/254], Loss: 0.6618, Accuracy: 0.8367, Fairness penalty: 0.0333
Epoch: [6/20], Batch: [254/254], Loss: 0.5163, Accuracy: 0.8776, Fairness penalty: 0.0244
Epoch: [7/20], Batch: [254/254], Loss: 0.6215, Accuracy: 0.7959, Fairness penalty: 0.0225
Epoch: [8/20], Batch: [254/254], Loss: 0.4454, Accuracy: 0.8571, Fairness penalty: 0.0094
Epoch: [9/20], Batch: [254/254], Loss: 0.4343, Accuracy: 0.8163, Fairness penalty: 0.0126
Epoch: [10/20], Batch: [254/254], Loss: 0.5537, Accuracy: 0.7959, Fairness penalty: 0.0134
Epoch: [11/20], Batch: [254/254], Loss: 0.5512, Accuracy: 0.7755, Fairness penalty: 0.0126
Epoch: [12/20], Batch: [254/254], Loss: 0.5787, Accuracy: 0.7755, Fairness penalty: 0.0194
Epoch: [13/20], Batch: [254/254], Loss: 0.8652, Accuracy: 0.7551, Fairness penalty: 0.0501
Epoch: [14/20], Batch: [254/254], Loss: 0.5866, Accuracy: 0.8571, Fairness penalty: 0.0265
Epoch: [15/20], Batch: [254/254], Loss: 0.3104, Accuracy: 0.9388, Fairness penalty: 0.0131
Epoch: [16/20], Batch: [254/254], Loss: 0.4977, Accuracy: 0.8571, Fairness penalty: 0.0177
Epoch: [17/20], Batch: [254/254], Loss: 0.5311, Accuracy: 0.8163, Fairness penalty: 0.0163
Epoch: [18/20], Batch: [254/254], Loss: 0.6447, Accuracy: 0.7143, Fairness penalty: 0.0120
Epoch: [19/20], Batch: [254/254], Loss: 0.5174, Accuracy: 0.8571, Fairness penalty: 0.0162
Epoch: [20/20], Batch: [254/254], Loss: 0.5040, Accuracy: 0.8367, Fairness penalty: 0.0217
Results on test set
|
2_ML_Linear_Regression.ipynb | ###Markdown
Linear RegressionLinear regression is one of the simplest forms of Machine Learning: you fit a best-fit line to a two-dimensional plot containing one independent variable (called a ```feature``` in Machine Learning) and a dependent variable (called a ```target``` or ```outcome``` in Machine Learning). DataThis uses the Boston housing dataset, which is included in ```scikit-learn```.
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn import datasets
boston = datasets.load_boston()
###Output
_____no_output_____
###Markdown
The features are in a numpy array called ```boston['data']``` and the targets are in a numpy array called ```boston['target']```. The target values represent median home prices in Boston, in thousands.
###Code
boston['feature_names']
###Output
_____no_output_____
###Markdown
Question to AnswerCan we predict the median home value based on the number of rooms in the home using linear regression? PreprocessingThe ```RM``` feature represents the average number of rooms per dwelling. Get that into its own array and plot it against the median home prices to see if there's a correlation.
###Code
rooms = boston.data[:, np.newaxis, 5] # RM is the feature column label
prices = boston.target
prices = prices.reshape(len(prices), 1)
# scatter plot
plt.scatter(x=rooms, y=prices)
plt.xlabel('Number of Rooms')
plt.ylabel('Median Home Value (1000s)')
plt.show()
###Output
_____no_output_____
###Markdown
Train the ModelThe data shows there is a positive correlation between number of rooms and median home prices. But can we use the number of rooms to predict a home's median price? When training a model, you need a training set that's a subset of all your data. The rest belongs to a test set that you use to run your trained model against to see how accurate it is. An 80/20 split is common (80% training, 20% testing).**In regression, the way you train a model is to fit a line.**
###Code
from sklearn import linear_model
from sklearn.model_selection import train_test_split
# 80/20 test/train split
X_train = rooms[:round(len(rooms)*.80)]
X_test = rooms[round(len(rooms)*.80)+1:]
y_train = prices[:round(len(prices)*.80)]
y_test = prices[round(len(prices)*.80)+1:]
# fit the line
regr = linear_model.LinearRegression()
regr.fit(X_train, y_train)
###Output
_____no_output_____
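###Markdown
Note that the split above simply takes the first 80% of rows as training data, even though `train_test_split` is imported. As an aside (a sketch, not part of the original flow), a shuffled split like the one below is usually preferable because the rows of the dataset are not necessarily in random order; the variable names here are illustrative.
###Code
# Hedged alternative: shuffle before splitting so train and test cover similar data
X_train_s, X_test_s, y_train_s, y_test_s = train_test_split(
    rooms, prices, test_size=0.20, random_state=42)
regr_shuffled = linear_model.LinearRegression()
regr_shuffled.fit(X_train_s, y_train_s)
###Output
_____no_output_____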
###Markdown
Test the ModelNow use the model against the test dataset.
###Code
pred = regr.predict(X_test)
plt.scatter(x=X_test, y=y_test)
plt.plot(X_test, pred, color='red')
plt.xlabel('# of rooms')
plt.ylabel("Median Home Value ($1000s)")
plt.show()
###Output
_____no_output_____
###Markdown
Evaluating the modelRegression lines seek to minimize the mean squared error between the fitted line and the actual points on the graph. With ```scikit-learn``` we can compute the mean squared error (MSE) and the variance score (R², the coefficient of determination), a statistical measure of how well the predictions fit the data.
###Code
# r2_score is the variance score (the R-squared coefficient of determination)
from sklearn.metrics import mean_squared_error, r2_score
# provide the actual test data and the predicted values
# so it can see how accurate it was
mse = mean_squared_error(y_test, pred)
r2 = r2_score(y_test, pred)
print("MSE: %.2f\tVariance: %.2f" % (mse, r2))
###Output
MSE: 72.90 Variance: -1.85
###Markdown
The closer to 0 the MSE is, the better. The closer to 1 the Variance is, the better. ConclusionThe MSE and Variance aren't that great, meaning we can't really determine with a lot of accuracy the median price of a home based on the number of rooms.
###Code
###Output
_____no_output_____ |
patrick_codes/unsupervised_sentiment_analysis.ipynb | ###Markdown
Unsupervised Sentiment Analysis > import libraries
###Code
import pandas as pd
import numpy as np
import seaborn as sns
###Output
_____no_output_____ |
visualizations/map_vis_test.ipynb | ###Markdown
In order to keep the world map visuals up and running we need a few things. 1) a more efficient way to get the Lat, Lon, and ISO (also known as the alpha-3 code) (https://en.wikipedia.org/wiki/List_of_ISO_3166_country_codes) 2) a script that goes through the dataset and adds the Lat, Lon, and ISO. 3) a way to break up the protected grounds. These are a few of the biggest issues with my visuals. Right now I'm manually inputting the fields, using Google Maps for Lat/Lon and Wikipedia for the ISO. These two visuals are intended to grow with the data, but only if the data can be altered to fit them.
###Code
import plotly.express as px
import geopandas as gpd
import pandas as pd
data = pd.read_csv(r"C:\Users\hambr\human-rights-first-asylum-ds-a\Visualizations\datav4.csv")
data.head()
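# --- Sketch of idea 2) from the note above (not part of the current workflow, untested) ---
# GeoPandas ships a Natural Earth country table with ISO alpha-3 codes and geometries,
# so Iso/Latitude/Longitude could in principle be filled automatically instead of by hand.
# Assumptions: the 'Country' values roughly match Natural Earth names, and the bundled
# naturalearth_lowres dataset is available in this GeoPandas version.
world = gpd.read_file(gpd.datasets.get_path("naturalearth_lowres"))
iso_lookup = world.set_index("name")

def country_info(country_name):
    try:
        row = iso_lookup.loc[country_name]
        centroid = row.geometry.centroid
        return pd.Series({"Iso": row["iso_a3"], "Latitude": centroid.y, "Longitude": centroid.x})
    except KeyError:
        # name not found in Natural Earth -- leave blank for manual fixing
        return pd.Series({"Iso": None, "Latitude": None, "Longitude": None})

# Would overwrite the hand-entered columns, so it is left commented out:
# data[["Iso", "Latitude", "Longitude"]] = data["Country"].apply(country_info)
# --- End of sketch ---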
#Added to the DATA the columns LATITUDE, LONGITUDE and ISO, so that you can plot them on a world map.
# Plots a world map with country boundaries and Fuchsia colored dots representing court cases.
# TO DO - Make it so dots don't overlap but instead show a dropdown menu(?) or all cases(?) on the dot.
# A new GeoPandas data source/API needs to be added so that the country names show up entirely in English.
# geopandas link - https://geopandas.org/getting_started.html
fig = px.scatter_mapbox(data, lat="Latitude", lon="Longitude", hover_name="Case ID", hover_data=["Judge's Name", "Hearing Date", "Country"],
color_discrete_sequence=["fuchsia"], zoom=3, height=300)
fig.update_layout(mapbox_style="open-street-map")
fig.update_layout(margin={"r":0,"t":0,"l":0,"b":0})
fig.show()
# Plots a world map and colors in countries according to "Protected Ground"
# TO DO- Show how many of each protected ground per country. Main color is the larges Protected Ground.
import plotly.graph_objects as go
fig = px.choropleth(data, locations="Iso",
color="Protected Ground", # shows the protected ground as the color
hover_name="Country", # hovering over the country will show its name
color_continuous_scale=px.colors.sequential.Plasma)
#This draws the world map
fig.update_geos(
resolution=50,
showcountries=True, countrycolor="RebeccaPurple",
showcoastlines=True, coastlinecolor="RebeccaPurple",
showland=True, landcolor="LightGreen",
showocean=True, oceancolor="LightBlue",
showlakes=True, lakecolor="LightBlue",
showrivers=False, rivercolor="Blue",
)
fig.update_layout(height=300, margin={"r":0,"t":0,"l":0,"b":0})
fig.show()
###Output
_____no_output_____ |
Part II - Learning in Deep Networks/regularization.ipynb | ###Markdown
###Code
import keras
from keras.layers import Dense, Input, BatchNormalization
from keras.initializers import constant
from keras.regularizers import l1, l2
from keras.models import Model
import keras.backend as K
import numpy as np
###Output
_____no_output_____
###Markdown
Batch NormalizationIn this very simple example we have a network with:- one input of shape (1,) (or simply stated the input is one number)- one Dense layer with only one unit (aka neuron)When we print the summary of the model we can see that there is a total of 2 parameters, both of which are trainable.The first of these parameters is the weights (*w*) of the unit and the second is the bias (*b*)the function inside the unit is:$$f(x)=wx+b$$---Now if we uncomment the commented line and re-run the model we will see that there is one more layer in the end of the model summary, the BatchNormalization layer.This time there is a total of 6 parameters:- the initial 2 trainable parameters of the Dense layer- 2 trainable parameters of the batch_normalization layer- 2 non-trainable parameters of the batch_normalization layer---These parameters come from the above function of the batch normalization layer:$$\hat h = \gamma \frac{h-\mu_B}{\sigma_B}+\beta$$where:- $h$ and $\hat h$ are the hidden values before and after the normalization- $\mu_B$ and $\sigma_B$ represent the mean and the standard deviation of $h$. The are estimated within a batch of M samples. These are the **non trainable** parameters (since they are computed from the batch)- $\gamma$ is a scale parameter and $\beta$ is a shift parameter. These are the **trainable** paramters. We can define if we want the layer to make use of them or not. By changing the values of the *center* and *scale* arguments to *False* the layer does not make use of these parameters and thus we do not have these 2 trainable parameters
###Code
K.clear_session()
input = Input([1])
output = Dense(1, kernel_initializer=constant(2), bias_initializer=constant(1))(input)
# output = BatchNormalization(center=False, scale=False)(output)
model = Model(input, output)
model.summary()
###Output
_____no_output_____
###Markdown
Now let's see how it works on a specific example. In order to simplify the model even more we will define a model with an input layer of 1 number and a Batch Normalization layer on top of it. This means that the input numbers will pass directly through the batch_norm layer and we will get its output. We define center and scale to be False so there are no trainable parameters. We also define the momentum and the epsilon to be 0 in order to get the results based on the formula presented above (otherwise the results will be different). It is recommended **not** to set these parameters to 0 when using this layer in real applications.
###Code
K.clear_session()
input = Input([1])
output = BatchNormalization(center=False, scale=False, momentum=0, epsilon=0)(input)
model = Model(input, output)
model.summary()
###Output
_____no_output_____
###Markdown
In our example we will use as input an array with 2 elements: 1 and 2. We reshape the array so that the model accepts the two numbers as a batch of two elements. When we get the output of the model, however, we see that the numbers remained unchanged...
###Code
x = 1, 2
x = np.reshape(np.array(x), (2, 1))
y_pred = model.predict(x)
print(*y_pred)
###Output
_____no_output_____
###Markdown
Now this happened because the (non trainable) weights of the model ($\mu_B$ and $\sigma_B$) were not calculated. The values of these parameters are the initial ones (0 and 1)
###Code
print(*model.get_weights())
###Output
_____no_output_____
###Markdown
All the parameters of the model, even the non-trainable ones, are calculated during the training phase of the model and are retained during the inference phase. Thus we have to "train" our model on our batch. In order to train the model we first have to compile it with a specific optimizer and loss function. The choice of these two arguments is arbitrary, as is the choice of the y values. Thus we can safely use 'sgd', 'mae' and 'x' without loss of generality.
###Code
model.compile('sgd', 'mae')
t = model.train_on_batch(x, x)
###Output
_____no_output_____
###Markdown
Now if we run again the previous cell and print the model's weights we will get the updated mean and standard deviation (actually the variance) based on the batch mean:$$\bar x=\frac{\sum^N_{i=1}x_i}{N}$$standard deviation:$$\sigma=\sqrt{\frac{\sum^N_{i=1}(x_i-\bar x)^2}{N-1}}$$variance:$$Var=\sigma^2$$ Now you can rerun the prediction cell and obtain the new outputs of the model. The two outputs have indeed mean = 0 and std = 1.If you want to check it you can set the h variable at the next model to be equal to y_pred
###Code
h = np.array([1, 2]) # y_pred
mean = np.sum(h) / len(h)
std = np.sqrt(np.sum(np.square(h - mean)) / (len(h) - 1))
var = std**2
print('mean: %.2f\nstd: %.3f\nvar: %.2f' % (mean, std, var))
###Output
_____no_output_____
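###Markdown
As a closing illustration for this section (a sketch, not part of the original notebook): if center and scale were enabled, the layer would also apply the trainable $\gamma$ and $\beta$ on top of the batch statistics computed above. The values chosen for gamma and beta below are simply their default initialisations.
###Code
# illustrative only: the full batch-normalization formula with the two trainable parameters
gamma, beta = 1.0, 0.0
h_hat = gamma * (h - mean) / std + beta
print(h_hat)  # standardized values, then scaled by gamma and shifted by beta
###Output
_____no_output_____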
###Markdown
L1 and L2 regularization In this example we define a simple model with an input layer of one number and one Dense layer with one unit (aka neuron).However, for the specific unit we set the values of the weight and the bias during the initialization:$$f(x)=wx+b$$where $w=2$ and $b=1$We also explicitly define the kernel, bias and activity regularizers to be None (which is their default value)
###Code
K.clear_session()
input = Input([1])
output = Dense(1, kernel_initializer=constant(2), bias_initializer=constant(1),
kernel_regularizer=None, bias_regularizer=None, activity_regularizer=None)(input)
model = Model(input, output)
model.summary()
###Output
_____no_output_____
###Markdown
now if we run the model on a specific number, let's say 2, we can see that we get the correct result (5)
###Code
x = 2
y_pred = model.predict(np.array((x,)))
y_pred[0, 0]
###Output
_____no_output_____
###Markdown
So when we compile the model and evaluate it with the correct numbers we see that the loss is equal to 0
###Code
x, y = 2, 5
x, y = np.array((x,)), np.array((y,))
model.compile('sgd', 'mae')
loss = model.evaluate(x, y, verbose=0)
print(loss)
###Output
_____no_output_____
###Markdown
Now let's make some changes. If we change the activity_regularizer for example to l1 norm with a factor of 1 we get the following model
###Code
K.clear_session()
input = Input([1])
output = Dense(1, kernel_initializer=constant(2), bias_initializer=constant(1),
kernel_regularizer=None, bias_regularizer=None, activity_regularizer=l1(1))(input)
model = Model(input, output)
model.summary()
###Output
_____no_output_____
###Markdown
Based on the summary nothing really changed, and if we predict on the same x the result will once again be 5.
###Code
x = 2
y_pred = model.predict(np.array((x,)))
y_pred[0, 0]
###Output
_____no_output_____
###Markdown
However, the loss this time is different. This happens because the new loss is:$$new\_loss=loss+regularization$$where in our case the regularization is:$$l_1(a)=\sum{w\cdot|a|}$$where $w$ is the argument we define in the $l1()$ function. Similarly, we have:$$l_2(a)=\sum{w\cdot a^2}$$
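As a quick sanity check (a sketch, not part of the original notebook): with x=2 the layer outputs $2\cdot2+1=5$, the MAE term is 0 because the target is also 5, and the l1 activity penalty with factor 1 adds $|5|=5$, so the evaluated loss below should come out close to 5.
###Code
# by-hand estimate of the regularized loss for this single-sample batch
activation = 2 * 2 + 1            # w*x + b
mae_term = abs(5 - activation)    # prediction equals the target, so 0
l1_penalty = 1 * abs(activation)  # activity_regularizer=l1(1)
print(mae_term + l1_penalty)      # expected ~5
###Output
_____no_output_____
###Markdown
The Keras evaluation below should agree with this rough estimate.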
###Code
x, y = 2, 5
x, y = np.array((x,)), np.array((y,))
model.compile('sgd', 'mae')
loss = model.evaluate(x, y, verbose=0)
print(loss)
###Output
_____no_output_____ |
.ipynb_checkpoints/movie_sentiment_analysis_bidirectional_lstm-checkpoint.ipynb | ###Markdown
Sentiment analysis on IMDB Movie Reviews Importing dataset from Kaggle
###Code
# from google.colab import files
# files.upload()
# !mkdir -p ~/.kaggle
# !cp kaggle.json ~/.kaggle/
# !chmod 600 ~/.kaggle/kaggle.json
# !kaggle datasets download -d mwallerphunware/imbd-movie-reviews-for-binary-sentiment-analysis
# from zipfile import ZipFile
# file_name = "/content/imbd-movie-reviews-for-binary-sentiment-analysis.zip"
# with ZipFile(file_name,"r") as zip:
# zip.extractall()
# print("Done")
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from warnings import simplefilter
simplefilter(action='ignore', category=FutureWarning)
###Output
_____no_output_____
###Markdown
Load the datasetThe dataset can be downloaded from https://www.kaggle.com/mwallerphunware/imbd-movie-reviews-for-binary-sentiment-analysis
###Code
df= pd.read_csv('/content/MovieReviewTrainingDatabase.csv')
print('Dataset shape:',df.shape)
df.head()
###Output
_____no_output_____
###Markdown
Sample positive movie review
###Code
print('Sentiment:',df.iloc[0]['sentiment'])
print('Review:')
print(df.iloc[0]['review'])
###Output
Sentiment: Positive
Review:
With all this stuff going down at the moment with MJ i've started listening to his music, watching the odd documentary here and there, watched The Wiz and watched Moonwalker again. Maybe i just want to get a certain insight into this guy who i thought was really cool in the eighties just to maybe make up my mind whether he is guilty or innocent. Moonwalker is part biography, part feature film which i remember going to see at the cinema when it was originally released. Some of it has subtle messages about MJ's feeling towards the press and also the obvious message of drugs are bad m'kay. Visually impressive but of course this is all about Michael Jackson so unless you remotely like MJ in anyway then you are going to hate this and find it boring. Some may call MJ an egotist for consenting to the making of this movie BUT MJ and most of his fans would say that he made it for the fans which if true is really nice of him. The actual feature film bit when it finally starts is only on for 20 minutes or so excluding the Smooth Criminal sequence and Joe Pesci is convincing as a psychopathic all powerful drug lord. Why he wants MJ dead so bad is beyond me. Because MJ overheard his plans? Nah, Joe Pesci's character ranted that he wanted people to know it is he who is supplying drugs etc so i dunno, maybe he just hates MJ's music. Lots of cool things in this like MJ turning into a car and a robot and the whole Speed Demon sequence. Also, the director must have had the patience of a saint when it came to filming the kiddy Bad sequence as usually directors hate working with one kid let alone a whole bunch of them performing a complex dance scene. Bottom line, this movie is for people who like MJ on one level or another (which i think is most people). If not, then stay away. It does try and give off a wholesome message and ironically MJ's bestest buddy in this movie is a girl! Michael Jackson is truly one of the most talented people ever to grace this planet but is he guilty? Well, with all the attention i've gave this subject....hmmm well i don't know because people can be different behind closed doors, i know this for a fact. He is either an extremely nice but stupid guy or one of the most sickest liars. I hope he is not the latter.
###Markdown
Sample negative movie review
###Code
print('Sentiment:',df.iloc[2]['sentiment'])
print('Review:')
print(df.iloc[2]['review'])
###Output
Sentiment: Negative
Review:
The film starts with a manager (Nicholas Bell) giving welcome investors (Robert Carradine) to Primal Park . A secret project mutating a primal animal using fossilized DNA, like'Jurassik Park', and some scientists resurrect one of nature's most fearsome predators, the Sabretooth tiger or Smilodon . Scientific ambition turns deadly, however, and when the high voltage fence is opened the creature escape and begins savagely stalking its prey - the human visitors , tourists and scientific.Meanwhile some youngsters enter in the restricted area of the security center and are attacked by a pack of large pre-historical animals which are deadlier and bigger . In addition , a security agent (Stacy Haiduk) and her mate (Brian Wimmer) fight hardly against the carnivorous Smilodons. The Sabretooths, themselves , of course, are the real star stars and they are astounding terrifyingly though not convincing. The giant animals savagely are stalking its prey and the group run afoul and fight against one nature's most fearsome predators. Furthermore a third Sabretooth more dangerous and slow stalks its victims. The movie delivers the goods with lots of blood and gore as beheading, hair-raising chills,full of scares when the Sabretooths appear with mediocre special effects.The story provides exciting and stirring entertainment but it results to be quite boring .The giant animals are majority made by computer generator and seem totally lousy .Middling performances though the players reacting appropriately to becoming food.Actors give vigorously physical performances dodging the beasts ,running,bound and leaps or dangling over walls . And it packs a ridiculous final deadly scene. No for small kids by realistic,gory and violent attack scenes . Other films about Sabretooths or Smilodon are the following :'Sabretooth(2002)'by James R Hickox with Vanessa Angel, David Keith and John Rhys Davies and the much better'10.000 BC(2006)' by Roland Emmerich with with Steven Strait, Cliff Curtis and Camilla Belle. This motion picture filled with bloody moments is badly directed by George Miller and with no originality because takes too many elements from previous films. Miller is an Australian director usually working for television (Tidal wave, Journey to the center of the earth, and many others) and occasionally for cinema ( The man from Snowy river, Zeus and Roxanne,Robinson Crusoe ). Rating : Below average, bottom of barrel.
###Markdown
Checking for null values
###Code
df.isna().sum()
###Output
_____no_output_____
###Markdown
Data Visualization* Sentiment distribution* Review length Sentiment distribution
###Code
plt.figure(figsize=(8,6))
sns.countplot(df['sentiment'])
plt.show()
###Output
_____no_output_____
###Markdown
Review length
###Code
review_length = [len(review) for review in df['review']]
df['review_length'] = review_length
plt.figure(figsize=(8,6))
sns.boxplot(x='sentiment',y='review_length',data=df)
plt.show()
# dropping too long reviews to speed up the process
df2 = df[df['review_length']<3000]
df2.shape
###Output
_____no_output_____
###Markdown
Data Preprocessing* Remove special characters and digits* Convert sentences into lower case* Tokenize by words / split by words* Remove stopwords and lemmatize words* Build corpus* One hot representation of corpus* Add pre-padding
###Code
# Importing essential libraries for data preprocessing and nlp
import re
import nltk
nltk.download('stopwords')
nltk.download('wordnet')
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from tensorflow.keras.preprocessing.text import one_hot
from tensorflow.keras.preprocessing.sequence import pad_sequences
corpus = []
lemmatizer = WordNetLemmatizer()
for review in df2['review']:
# Remove special characters and digits
removed_spchar_digits = re.sub('[^a-zA-Z]',' ',review)
# Convert sentences into lower case
lower_case = removed_spchar_digits.lower()
# Tokenize by words / split by words
tokenized_sentence = lower_case.split()
# Remove stopwords and lemmatize words
lemmetized_words = [lemmatizer.lemmatize(word) for word in tokenized_sentence if word not in stopwords.words('english')]
# Build corpus
review = ' '.join(lemmetized_words)
corpus.append(review)
corpus[:3]
# One hot representation of corpus
# Keras' one_hot hashes each word to an integer index in the range [1, vocabulary_size)
vocabulary_size = 10000
one_hot_representation = [one_hot(review,vocabulary_size) for review in corpus]
np.array(one_hot_representation[0])
lengths = [len(i) for i in one_hot_representation]
print(f'Maximum length in one hot list: {max(lengths)}')
# Add pre-padding
# Adds additional zeroes to have lengths of all one hot representations equal
sentence_length = 310
padded_onehot = pad_sequences(one_hot_representation,maxlen=sentence_length)
padded_onehot[0]
###Output
Maximum length in one hot list: 307
###Markdown
Model Building* Splitting the dataset into training and testing data* Model training* Model evaluation* Saving the model
###Code
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Embedding, Bidirectional, LSTM, Dense
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report, confusion_matrix
###Output
_____no_output_____
###Markdown
Splitting the dataset into training and testing data
###Code
X = padded_onehot
y = np.array(df2['sentiment'].replace({'Positive':1,'Negative':0}))
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=42,stratify=y)
print(f'X train shape: {X_train.shape}')
print(f'X test shape: {X_test.shape}')
###Output
X train shape: (16286, 310)
X test shape: (6981, 310)
###Markdown
Model training
###Code
# dimension of feature vector
embedding_vector_features = 40
model = Sequential()
# Embedding layer converts word into feature vectors
model.add(Embedding(vocabulary_size, embedding_vector_features, input_length=sentence_length))
# Bidirectional LSTM RNN
model.add(Bidirectional(LSTM(100,dropout=0.5)))
# Classification layer
model.add(Dense(1,activation='sigmoid'))
model.compile(loss='binary_crossentropy',optimizer='adam',metrics=['accuracy'])
model.fit(X_train,y_train,validation_split=0.2,batch_size=64,epochs=5)
model.summary()
###Output
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
embedding (Embedding) (None, 310, 40) 400000
_________________________________________________________________
bidirectional (Bidirectional (None, 200) 112800
_________________________________________________________________
dense (Dense) (None, 1) 201
=================================================================
Total params: 513,001
Trainable params: 513,001
Non-trainable params: 0
_________________________________________________________________
###Markdown
Model evaluation
###Code
# Evaluate our model in test data
model.evaluate(X_test,y_test)
y_predicted = [1 if y >= 0.5 else 0 for y in model.predict(X_test).flatten()]
y_predicted[:5]
confusion_matrix_result = confusion_matrix(y_test,y_predicted)
labels = ['Negative','Positive']
plt.figure(figsize=(8,6))
sns.heatmap(confusion_matrix_result,annot=True,cmap='Reds',fmt='.0f',xticklabels=labels,yticklabels=labels)
plt.title('Movie Review Sentiment')
plt.xlabel('Predicted values')
plt.ylabel('Actual values')
plt.show()
classification_report_result = classification_report(y_test,y_predicted)
print(classification_report_result)
###Output
precision recall f1-score support
0 0.88 0.81 0.84 3520
1 0.82 0.89 0.85 3461
accuracy 0.85 6981
macro avg 0.85 0.85 0.85 6981
weighted avg 0.85 0.85 0.85 6981
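###Markdown
As an optional sketch (not part of the original notebook), the trained model can score a brand-new review as long as the same cleaning, hashing and padding steps are applied. Keras' `one_hot` hashes words deterministically, so reusing `vocabulary_size` and `sentence_length` keeps the encoding consistent with training; the example review text below is made up.
###Code
def predict_sentiment(text):
    # same preprocessing chain as used for the training corpus
    cleaned = re.sub('[^a-zA-Z]', ' ', text).lower().split()
    cleaned = [lemmatizer.lemmatize(word) for word in cleaned
               if word not in stopwords.words('english')]
    encoded = one_hot(' '.join(cleaned), vocabulary_size)
    padded = pad_sequences([encoded], maxlen=sentence_length)
    return 'Positive' if model.predict(padded)[0][0] >= 0.5 else 'Negative'

predict_sentiment("An absolutely wonderful film with brilliant acting and a moving story.")
###Output
_____no_output_____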
###Markdown
Saving the model
###Code
model.save('movie_sentiment_analysis_model.h5')
###Output
_____no_output_____ |
notebooks/summary_plots.ipynb | ###Markdown
Combined histogram
###Code
kic_sn = np.load('kic7174505_sn.npy')
gj_sn = np.load('gj1243_sn.npy')
sun_sn = np.load('sun_sn.npy')
bins = np.logspace(-4, 1, 100)
#plt.figure(figsize=(12, 5))
alpha=0.8
plt.hist(kic_sn, bins, histtype='stepfilled', alpha=alpha, label='KIC 7174505',
log=True, color="#CC3F13")
plt.hist(gj_sn, bins, lw=3, histtype='stepfilled', alpha=alpha, label='GJ 1243',
log=True, color="#31C3FF")
plt.hist(sun_sn, bins, histtype='stepfilled', alpha=alpha, label='Sun',
log=True, color="#1335CC")
plt.annotate("For a starspot\ndistribution like:", xy=(1.0, 1.25),
textcoords='axes fraction', ha='right', va='bottom')
plt.legend(loc=(0.7, 1.01))
#plt.legend(loc=(1.01, 0.))
plt.gca().set_xscale("log")
plt.gca().spines['top'].set_visible(False)
plt.gca().spines['right'].set_visible(False)
plt.xlabel('$\sigma_{\mathrm{jitter}} \; / \; \sigma_{\mathrm{Gaia}}$\n(Signal-to-noise)',
fontsize=15)
plt.ylabel('Stars', fontsize=15)
plt.ylim([0.5, 1e3])
plt.setp(plt.gca().get_xticklabels(), fontsize=13)
plt.setp(plt.gca().get_yticklabels(), fontsize=13)
plt.savefig('summary.pdf', bbox_inches='tight')
plt.show()
###Output
/Users/bmmorris/anaconda/lib/python3.5/site-packages/matplotlib/text.py:2141: UserWarning: You have used the `textcoords` kwarg, but not the `xytext` kwarg. This can lead to surprising results.
warnings.warn("You have used the `textcoords` kwarg, but not "
|
notebooks/M4-Notebook2-linear_regression_without_sklearn.ipynb | ###Markdown
Linear regression without scikit-learnIn this notebook, we introduce linear regression. Before presenting the available scikit-learn classes, we will provide some insights with a simple example. We will use a dataset that contains information about penguins. NoteIf you want a deeper overview regarding this dataset, you can refer to the Appendix - Datasets description section at the end of this MOOC.
###Code
import pandas as pd
penguins = pd.read_csv("../datasets/penguins_regression.csv")
penguins.head()
###Output
_____no_output_____
###Markdown
This dataset contains measurements taken on penguins. We will formulate the following problem: using the flipper length of a penguin, we would like to infer its mass.
###Code
import seaborn as sns
feature_names = "Flipper Length (mm)"
target_name = "Body Mass (g)"
data, target = penguins[[feature_names]], penguins[target_name]
ax = sns.scatterplot(data=penguins, x=feature_names, y=target_name,
color="black", alpha=0.5)
ax.set_title("Flipper length in function of the body mass")
###Output
_____no_output_____
###Markdown
TipThe function scatterplot from seaborn takes as input the full dataframe, and the parameters x and y specify the names of the columns to be plotted. Note that this function returns a matplotlib axis (named ax in the example above) that can be further used to add elements on the same matplotlib axis (such as a title). In this problem, penguin mass is our target. It is a continuous variable that roughly varies between 2700 g and 6300 g. Thus, this is a regression problem (in contrast to classification). We also see that there is almost a linear relationship between the body mass of the penguin and its flipper length. The longer the flipper, the heavier the penguin. Thus, we could come up with a simple formula, where given a flipper length we could compute the body mass of a penguin using a linear relationship of the form `y = a * x + b` where `a` and `b` are the 2 parameters of our model.
###Code
def linear_model_flipper_mass(flipper_length, weight_flipper_length,
intercept_body_mass):
"""Linear model of the form y = a * x + b"""
body_mass = weight_flipper_length * flipper_length + intercept_body_mass
return body_mass
###Output
_____no_output_____
###Markdown
Using the model we defined above, we can check the body mass values predicted for a range of flipper lengths. We will set `weight_flipper_length` to be 45 and `intercept_body_mass` to be -5000.
###Code
import numpy as np
weight_flipper_length = 45
intercept_body_mass = -5000
flipper_length_range = np.linspace(data.min(), data.max(), num=300)
predicted_body_mass = linear_model_flipper_mass(
flipper_length_range, weight_flipper_length, intercept_body_mass)
###Output
_____no_output_____
###Markdown
We can now plot all samples and the linear model prediction.
###Code
label = "{0:.2f} (g / mm) * flipper length + {1:.2f} (g)"
ax = sns.scatterplot(data=penguins, x=feature_names, y=target_name,
color="black", alpha=0.5)
ax.plot(flipper_length_range, predicted_body_mass)
_ = ax.set_title(label.format(weight_flipper_length, intercept_body_mass))
###Output
_____no_output_____
###Markdown
The variable `weight_flipper_length` is a weight applied to the feature `flipper_length` in order to make the inference. When this coefficient is positive, it means that penguins with longer flipper lengths will have larger body masses. If the coefficient is negative, it means that penguins with shorter flipper lengths have larger body masses. Graphically, this coefficient is represented by the slope of the curve in the plot. Below we show what the curve would look like when the `weight_flipper_length` coefficient is negative.
###Code
weight_flipper_length = -40
intercept_body_mass = 13000
predicted_body_mass = linear_model_flipper_mass(
flipper_length_range, weight_flipper_length, intercept_body_mass)
###Output
_____no_output_____
###Markdown
We can now plot all samples and the linear model prediction.
###Code
ax = sns.scatterplot(data=penguins, x=feature_names, y=target_name,
color="black", alpha=0.5)
ax.plot(flipper_length_range, predicted_body_mass)
_ = ax.set_title(label.format(weight_flipper_length, intercept_body_mass))
###Output
_____no_output_____
###Markdown
In our case, this coefficient has a meaningful unit: g/mm. For instance, a coefficient of 40 g/mm means that for each additional millimeter in flipper length, the body weight predicted will increase by 40 g.
###Code
body_mass_180 = linear_model_flipper_mass(
flipper_length=180, weight_flipper_length=40, intercept_body_mass=0)
body_mass_181 = linear_model_flipper_mass(
flipper_length=181, weight_flipper_length=40, intercept_body_mass=0)
print(f"The body mass for a flipper length of 180 mm "
f"is {body_mass_180} g and {body_mass_181} g "
f"for a flipper length of 181 mm")
###Output
The body mass for a flipper length of 180 mm is 7200 g and 7240 g for a flipper length of 181 mm
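###Markdown
Hand-picking `weight_flipper_length` and `intercept_body_mass` is only meant to build intuition. As an aside (a sketch, assuming the CSV contains no missing values), NumPy can estimate both parameters at once by least squares; scikit-learn provides the same functionality through its `LinearRegression` estimator.
###Code
# least-squares estimate of the slope (g / mm) and intercept (g) -- illustrative sketch
slope, intercept = np.polyfit(data[feature_names], target, deg=1)
print(f"estimated slope: {slope:.2f} g/mm, estimated intercept: {intercept:.2f} g")
###Output
_____no_output_____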
###Markdown
We can also see that we have a parameter `intercept_body_mass` in our model. This parameter corresponds to the value on the y-axis if `flipper_length=0` (which in our case is only a mathematical consideration, as in our data, the value of `flipper_length` only goes from 170mm to 230mm). This y-value when x=0 is called the y-intercept. If `intercept_body_mass` is 0, the curve will pass through the origin:
###Code
weight_flipper_length = 25
intercept_body_mass = 0
# redefined the flipper length to start at 0 to plot the intercept value
flipper_length_range = np.linspace(0, data.max(), num=300)
predicted_body_mass = linear_model_flipper_mass(
flipper_length_range, weight_flipper_length, intercept_body_mass)
ax = sns.scatterplot(data=penguins, x=feature_names, y=target_name,
color="black", alpha=0.5)
ax.plot(flipper_length_range, predicted_body_mass)
_ = ax.set_title(label.format(weight_flipper_length, intercept_body_mass))
###Output
_____no_output_____
###Markdown
Otherwise, it will pass through the `intercept_body_mass` value:
###Code
weight_flipper_length = 45
intercept_body_mass = -5000
predicted_body_mass = linear_model_flipper_mass(
flipper_length_range, weight_flipper_length, intercept_body_mass)
ax = sns.scatterplot(data=penguins, x=feature_names, y=target_name,
color="black", alpha=0.5)
ax.plot(flipper_length_range, predicted_body_mass)
_ = ax.set_title(label.format(weight_flipper_length, intercept_body_mass))
###Output
_____no_output_____ |
HMM_Tagger_Pau.ipynb | ###Markdown
Project: Part of Speech Tagging with Hidden Markov Models --- IntroductionPart of speech tagging is the process of determining the syntactic category of a word from the words in its surrounding context. It is often used to help disambiguate natural language phrases because it can be done quickly with high accuracy. Tagging can be used for many NLP tasks like determining correct pronunciation during speech synthesis (for example, _dis_-count as a noun vs dis-_count_ as a verb), for information retrieval, and for word sense disambiguation.In this notebook, you'll use the [Pomegranate](http://pomegranate.readthedocs.io/) library to build a hidden Markov model for part of speech tagging using a "universal" tagset. Hidden Markov models have been able to achieve [>96% tag accuracy with larger tagsets on realistic text corpora](http://www.coli.uni-saarland.de/~thorsten/publications/Brants-ANLP00.pdf). Hidden Markov models have also been used for speech recognition and speech generation, machine translation, gene recognition for bioinformatics, and human gesture recognition for computer vision, and more. The notebook already contains some code to get you started. You only need to add some new functionality in the areas indicated to complete the project; you will not need to modify the included code beyond what is requested. Sections that begin with **'IMPLEMENTATION'** in the header indicate that you must provide code in the block that follows. Instructions will be provided for each section, and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully! **Note:** Once you have completed all of the code implementations, you need to finalize your work by exporting the iPython Notebook as an HTML document. Before exporting the notebook to html, all of the code cells need to have been run so that reviewers can see the final implementation and output. You must then **export the notebook** by running the last cell in the notebook, or by using the menu above and navigating to **File -> Download as -> HTML (.html)** Your submissions should include both the `html` and `ipynb` files. **Note:** Code and Markdown cells can be executed using the `Shift + Enter` keyboard shortcut. Markdown cells can be edited by double-clicking the cell to enter edit mode. The Road AheadYou must complete Steps 1-3 below to pass the project. The section on Step 4 includes references & resources you can use to further explore HMM taggers.- [Step 1](Step-1:-Read-and-preprocess-the-dataset): Review the provided interface to load and access the text corpus- [Step 2](Step-2:-Build-a-Most-Frequent-Class-tagger): Build a Most Frequent Class tagger to use as a baseline- [Step 3](Step-3:-Build-an-HMM-tagger): Build an HMM Part of Speech tagger and compare to the MFC baseline- [Step 4](Step-4:-[Optional]-Improving-model-performance): (Optional) Improve the HMM tagger **Note:** Make sure you have selected a **Python 3** kernel in Workspaces or the hmm-tagger conda environment if you are running the Jupyter server on your own machine.
###Code
# Jupyter "magic methods" -- only need to be run once per kernel restart
%load_ext autoreload
%aimport helpers, tests
%autoreload 1
# import python modules -- this cell needs to be run again if you make changes to any of the files
import matplotlib.pyplot as plt
import numpy as np
from IPython.core.display import HTML
from itertools import chain
from collections import Counter, defaultdict
from helpers import show_model, Dataset
from pomegranate import State, HiddenMarkovModel, DiscreteDistribution
###Output
_____no_output_____
###Markdown
Step 1: Read and preprocess the dataset---We'll start by reading in a text corpus and splitting it into a training and testing dataset. The data set is a copy of the [Brown corpus](https://en.wikipedia.org/wiki/Brown_Corpus) (originally from the [NLTK](https://www.nltk.org/) library) that has already been pre-processed to only include the [universal tagset](https://arxiv.org/pdf/1104.2086.pdf). You should expect to get slightly higher accuracy using this simplified tagset than the same model would achieve on a larger tagset like the full [Penn treebank tagset](https://www.ling.upenn.edu/courses/Fall_2003/ling001/penn_treebank_pos.html), but the process you'll follow would be the same.The `Dataset` class provided in helpers.py will read and parse the corpus. You can generate your own datasets compatible with the reader by writing them to the following format. The dataset is stored in plaintext as a collection of words and corresponding tags. Each sentence starts with a unique identifier on the first line, followed by one tab-separated word/tag pair on each following line. Sentences are separated by a single blank line.Example from the Brown corpus. ```b100-38532Perhaps ADVit PRONwas VERBright ADJ; .; .b100-35577...```
###Code
data = Dataset("tags-universal.txt", "brown-universal.txt", train_test_split=0.8)
print("There are {} sentences in the corpus.".format(len(data)))
print("There are {} sentences in the training set.".format(len(data.training_set)))
print("There are {} sentences in the testing set.".format(len(data.testing_set)))
assert len(data) == len(data.training_set) + len(data.testing_set), \
"The number of sentences in the training set + testing set should sum to the number of sentences in the corpus"
###Output
There are 57340 sentences in the corpus.
There are 45872 sentences in the training set.
There are 11468 sentences in the testing set.
###Markdown
The Dataset InterfaceYou can access (mostly) immutable references to the dataset through a simple interface provided through the `Dataset` class, which represents an iterable collection of sentences along with easy access to partitions of the data for training & testing. Review the reference below, then run and review the next few cells to make sure you understand the interface before moving on to the next step.```Dataset-only Attributes: training_set - reference to a Subset object containing the samples for training testing_set - reference to a Subset object containing the samples for testingDataset & Subset Attributes: sentences - a dictionary with an entry {sentence_key: Sentence()} for each sentence in the corpus keys - an immutable ordered (not sorted) collection of the sentence_keys for the corpus vocab - an immutable collection of the unique words in the corpus tagset - an immutable collection of the unique tags in the corpus X - returns an array of words grouped by sentences ((w11, w12, w13, ...), (w21, w22, w23, ...), ...) Y - returns an array of tags grouped by sentences ((t11, t12, t13, ...), (t21, t22, t23, ...), ...) N - returns the number of distinct samples (individual words or tags) in the datasetMethods: stream() - returns an flat iterable over all (word, tag) pairs across all sentences in the corpus __iter__() - returns an iterable over the data as (sentence_key, Sentence()) pairs __len__() - returns the nubmer of sentences in the dataset```For example, consider a Subset, `subset`, of the sentences `{"s0": Sentence(("See", "Spot", "run"), ("VERB", "NOUN", "VERB")), "s1": Sentence(("Spot", "ran"), ("NOUN", "VERB"))}`. The subset will have these attributes:```subset.keys == {"s1", "s0"} unorderedsubset.vocab == {"See", "run", "ran", "Spot"} unorderedsubset.tagset == {"VERB", "NOUN"} unorderedsubset.X == (("Spot", "ran"), ("See", "Spot", "run")) order matches .keyssubset.Y == (("NOUN", "VERB"), ("VERB", "NOUN", "VERB")) order matches .keyssubset.N == 7 there are a total of seven observations over all sentenceslen(subset) == 2 because there are two sentences```**Note:** The `Dataset` class is _convenient_, but it is **not** efficient. It is not suitable for huge datasets because it stores multiple redundant copies of the same data. Sentences`Dataset.sentences` is a dictionary of all sentences in the training corpus, each keyed to a unique sentence identifier. Each `Sentence` is itself an object with two attributes: a tuple of the words in the sentence named `words` and a tuple of the tag corresponding to each word named `tags`.
###Code
key = 'b100-38532'
print("Sentence: {}".format(key))
print("words:\n\t{!s}".format(data.sentences[key].words))
print("tags:\n\t{!s}".format(data.sentences[key].tags))
###Output
Sentence: b100-38532
words:
('Perhaps', 'it', 'was', 'right', ';', ';')
tags:
('ADV', 'PRON', 'VERB', 'ADJ', '.', '.')
###Markdown
**Note:** The underlying iterable sequence is **unordered** over the sentences in the corpus; it is not guaranteed to return the sentences in a consistent order between calls. Use `Dataset.stream()`, `Dataset.keys`, `Dataset.X`, or `Dataset.Y` attributes if you need ordered access to the data. Counting Unique ElementsYou can access the list of unique words (the dataset vocabulary) via `Dataset.vocab` and the unique list of tags via `Dataset.tagset`.
###Code
print("There are a total of {} samples of {} unique words in the corpus."
.format(data.N, len(data.vocab)))
print("There are {} samples of {} unique words in the training set."
.format(data.training_set.N, len(data.training_set.vocab)))
print("There are {} samples of {} unique words in the testing set."
.format(data.testing_set.N, len(data.testing_set.vocab)))
print("There are {} words in the test set that are missing in the training set."
.format(len(data.testing_set.vocab - data.training_set.vocab)))
assert data.N == data.training_set.N + data.testing_set.N, \
"The number of training + test samples should sum to the total number of samples"
###Output
There are a total of 1161192 samples of 56057 unique words in the corpus.
There are 928458 samples of 50536 unique words in the training set.
There are 232734 samples of 25112 unique words in the testing set.
There are 5521 words in the test set that are missing in the training set.
###Markdown
Accessing word and tag SequencesThe `Dataset.X` and `Dataset.Y` attributes provide access to ordered collections of matching word and tag sequences for each sentence in the dataset.
###Code
# accessing words with Dataset.X and tags with Dataset.Y
for i in range(2):
print("Sentence {}:".format(i + 1), data.X[i])
print()
print("Labels {}:".format(i + 1), data.Y[i])
print()
###Output
Sentence 1: ('Mr.', 'Podger', 'had', 'thanked', 'him', 'gravely', ',', 'and', 'now', 'he', 'made', 'use', 'of', 'the', 'advice', '.')
Labels 1: ('NOUN', 'NOUN', 'VERB', 'VERB', 'PRON', 'ADV', '.', 'CONJ', 'ADV', 'PRON', 'VERB', 'NOUN', 'ADP', 'DET', 'NOUN', '.')
Sentence 2: ('But', 'there', 'seemed', 'to', 'be', 'some', 'difference', 'of', 'opinion', 'as', 'to', 'how', 'far', 'the', 'board', 'should', 'go', ',', 'and', 'whose', 'advice', 'it', 'should', 'follow', '.')
Labels 2: ('CONJ', 'PRT', 'VERB', 'PRT', 'VERB', 'DET', 'NOUN', 'ADP', 'NOUN', 'ADP', 'ADP', 'ADV', 'ADV', 'DET', 'NOUN', 'VERB', 'VERB', '.', 'CONJ', 'DET', 'NOUN', 'PRON', 'VERB', 'VERB', '.')
###Markdown
Accessing (word, tag) SamplesThe `Dataset.stream()` method returns an iterator that chains together every pair of (word, tag) entries across all sentences in the entire corpus.
###Code
# use Dataset.stream() (word, tag) samples for the entire corpus
print("\nStream (word, tag) pairs:\n")
for i, pair in enumerate(data.stream()):
print("\t", pair)
if i > 5: break
###Output
Stream (word, tag) pairs:
('Mr.', 'NOUN')
('Podger', 'NOUN')
('had', 'VERB')
('thanked', 'VERB')
('him', 'PRON')
('gravely', 'ADV')
(',', '.')
###Markdown
For both our baseline tagger and the HMM model we'll build, we need to estimate the frequency of tags & words from the frequency counts of observations in the training corpus. In the next several cells you will complete functions to compute the counts of several sets of counts. Step 2: Build a Most Frequent Class tagger---Perhaps the simplest tagger (and a good baseline for tagger performance) is to simply choose the tag most frequently assigned to each word. This "most frequent class" tagger inspects each observed word in the sequence and assigns it the label that was most often assigned to that word in the corpus. IMPLEMENTATION: Pair CountsComplete the function below that computes the joint frequency counts for two input sequences.
###Code
def pair_counts(sequences_A, sequences_B):
"""Return a dictionary keyed to each unique value in the first sequence list
that counts the number of occurrences of the corresponding value from the
second sequences list.
For example, if sequences_A is tags and sequences_B is the corresponding
words, then if 1244 sequences contain the word "time" tagged as a NOUN, then
you should return a dictionary such that pair_counts[NOUN][time] == 1244
"""
# TODO: Finish this function!
#For a couple of lists, the function zip is used to obtain a list of tuples of the element-wise lists
ordered_tuple=zip(sequences_A,sequences_B)
#Create a nested dictionary that shows the result asked for, of the form
#{tag1:{word1:occurrences, word2:occurrences},tag2:{word1:occurrences, word2:occurrences},...}
tag_word_dict = defaultdict(Counter)
#for each tag we obtain the number of times that a word appears paired to a specific tag in the corpus
#we need the loop to group by , resulting in a nested dictionary as "dictionary:(dictionary:frequency)"
for i_a, i_b in ordered_tuple:
tag_word_dict[i_a][i_b] += 1
return tag_word_dict
# for the pair count to work, we need the data series to be ordered,
# that is the reason why we use data.training_set.Y and data.training_set.X
# the corpus sentences are flattened into a single list of ordered words
training_tags=[tag for sentence in data.training_set.Y for tag in sentence]
training_words=[word for sentence in data.training_set.X for word in sentence]
# Calculate C(t_i, w_i)
#pass the ordered series to calculate the pair count
emission_counts = pair_counts(training_tags, training_words)
assert len(emission_counts) == 12, \
"Uh oh. There should be 12 tags in your dictionary."
assert max(emission_counts["NOUN"], key=emission_counts["NOUN"].get) == 'time', \
"Hmmm...'time' is expected to be the most common NOUN."
HTML('<div class="alert alert-block alert-success">Your emission counts look good!</div>')
###Output
_____no_output_____
###Markdown
IMPLEMENTATION: Most Frequent Class TaggerUse the `pair_counts()` function and the training dataset to find the most frequent class label for each word in the training data, and populate the `mfc_table` below. The table keys should be words, and the values should be the appropriate tag string.The `MFCTagger` class is provided to mock the interface of Pomegranate HMM models so that they can be used interchangeably.
###Code
# Create a lookup table mfc_table where mfc_table[word] contains the tag label most frequently assigned to that word
from collections import namedtuple
FakeState = namedtuple("FakeState", "name")
class MFCTagger:
# NOTE: You should not need to modify this class or any of its methods
missing = FakeState(name="<MISSING>")
def __init__(self, table):
self.table = defaultdict(lambda: MFCTagger.missing)
self.table.update({word: FakeState(name=tag) for word, tag in table.items()})
def viterbi(self, seq):
"""This method simplifies predictions by matching the Pomegranate viterbi() interface"""
return 0., list(enumerate(["<start>"] + [self.table[w] for w in seq] + ["<end>"]))
# TODO: calculate the frequency of each tag being assigned to each word (hint: similar, but not
# the same as the emission probabilities) and use it to fill the mfc_table
#now the aim is to create the following nested dictionary words:{tags:occurrences} instead of tags:{words:occurrences}
#as the pair count function was defined in a generic way, the only thing needed is to pass the inputs in a reverse order
word_counts = pair_counts(training_words,training_tags)
#using the .most_common property, we obtain the most frequent tag for each word
#in order to do so, a dictionary comprehension is used to find the most common tag for each word
mfc_table = {k:word_counts[k].most_common(1)[0][0] for k in word_counts.keys()}
# DO NOT MODIFY BELOW THIS LINE
mfc_model = MFCTagger(mfc_table) # Create a Most Frequent Class tagger instance
assert len(mfc_table) == len(data.training_set.vocab), ""
assert all(k in data.training_set.vocab for k in mfc_table.keys()), ""
assert sum(int(k not in mfc_table) for k in data.testing_set.vocab) == 5521, ""
HTML('<div class="alert alert-block alert-success">Your MFC tagger has all the correct words!</div>')
###Output
_____no_output_____
###Markdown
Making Predictions with a ModelThe helper functions provided below interface with Pomegranate network models & the mocked MFCTagger to take advantage of the [missing value](http://pomegranate.readthedocs.io/en/latest/nan.html) functionality in Pomegranate through a simple sequence decoding function. Run these functions, then run the next cell to see some of the predictions made by the MFC tagger.
###Code
def replace_unknown(sequence):
"""Return a copy of the input sequence where each unknown word is replaced
by the literal string value 'nan'. Pomegranate will ignore these values
during computation.
"""
return [w if w in data.training_set.vocab else 'nan' for w in sequence]
def simplify_decoding(X, model):
"""X should be a 1-D sequence of observations for the model to predict"""
_, state_path = model.viterbi(replace_unknown(X))
return [state[1].name for state in state_path[1:-1]] # do not show the start/end state predictions
###Output
_____no_output_____
###Markdown
Example Decoding Sequences with MFC Tagger
###Code
for key in data.testing_set.keys[:3]:
print("Sentence Key: {}\n".format(key))
print("Predicted labels:\n-----------------")
print(simplify_decoding(data.sentences[key].words, mfc_model))
print()
print("Actual labels:\n--------------")
print(data.sentences[key].tags)
print("\n")
###Output
Sentence Key: b100-28144
Predicted labels:
-----------------
['CONJ', 'NOUN', 'NUM', '.', 'NOUN', 'NUM', '.', 'NOUN', 'NUM', '.', 'CONJ', 'NOUN', 'NUM', '.', '.', 'NOUN', '.', '.']
Actual labels:
--------------
('CONJ', 'NOUN', 'NUM', '.', 'NOUN', 'NUM', '.', 'NOUN', 'NUM', '.', 'CONJ', 'NOUN', 'NUM', '.', '.', 'NOUN', '.', '.')
Sentence Key: b100-23146
Predicted labels:
-----------------
['PRON', 'VERB', 'DET', 'NOUN', 'ADP', 'ADJ', 'ADJ', 'NOUN', 'VERB', 'VERB', '.', 'ADP', 'VERB', 'DET', 'NOUN', 'ADP', 'NOUN', 'ADP', 'DET', 'NOUN', '.']
Actual labels:
--------------
('PRON', 'VERB', 'DET', 'NOUN', 'ADP', 'ADJ', 'ADJ', 'NOUN', 'VERB', 'VERB', '.', 'ADP', 'VERB', 'DET', 'NOUN', 'ADP', 'NOUN', 'ADP', 'DET', 'NOUN', '.')
Sentence Key: b100-35462
Predicted labels:
-----------------
['DET', 'ADJ', 'NOUN', 'VERB', 'VERB', 'VERB', 'ADP', 'DET', 'ADJ', 'ADJ', 'NOUN', 'ADP', 'DET', 'ADJ', 'NOUN', '.', 'ADP', 'ADJ', 'NOUN', '.', 'CONJ', 'ADP', 'DET', '<MISSING>', 'ADP', 'ADJ', 'ADJ', '.', 'ADJ', '.', 'CONJ', 'ADJ', 'NOUN', 'ADP', 'ADV', 'NOUN', '.']
Actual labels:
--------------
('DET', 'ADJ', 'NOUN', 'VERB', 'VERB', 'VERB', 'ADP', 'DET', 'ADJ', 'ADJ', 'NOUN', 'ADP', 'DET', 'ADJ', 'NOUN', '.', 'ADP', 'ADJ', 'NOUN', '.', 'CONJ', 'ADP', 'DET', 'NOUN', 'ADP', 'ADJ', 'ADJ', '.', 'ADJ', '.', 'CONJ', 'ADJ', 'NOUN', 'ADP', 'ADJ', 'NOUN', '.')
###Markdown
Evaluating Model Accuracy The function below will evaluate the accuracy of the MFC tagger on the collection of all sentences from a text corpus.
###Code
def accuracy(X, Y, model):
"""Calculate the prediction accuracy by using the model to decode each sequence
in the input X and comparing the prediction with the true labels in Y.
The X should be an array whose first dimension is the number of sentences to test,
and each element of the array should be an iterable of the words in the sequence.
The arrays X and Y should have the exact same shape.
X = [("See", "Spot", "run"), ("Run", "Spot", "run", "fast"), ...]
Y = [(), (), ...]
"""
correct = total_predictions = 0
for observations, actual_tags in zip(X, Y):
# The model.viterbi call in simplify_decoding will return None if the HMM
# raises an error (for example, if a test sentence contains a word that
# is out of vocabulary for the training set). Any exception counts the
# full sentence as an error (which makes this a conservative estimate).
try:
most_likely_tags = simplify_decoding(observations, model)
correct += sum(p == t for p, t in zip(most_likely_tags, actual_tags))
except:
pass
total_predictions += len(observations)
return correct / total_predictions
###Output
_____no_output_____
###Markdown
Evaluate the accuracy of the MFC tagger Run the next cell to evaluate the accuracy of the tagger on the training and test corpus.
###Code
mfc_training_acc = accuracy(data.training_set.X, data.training_set.Y, mfc_model)
print("training accuracy mfc_model: {:.2f}%".format(100 * mfc_training_acc))
mfc_testing_acc = accuracy(data.testing_set.X, data.testing_set.Y, mfc_model)
print("testing accuracy mfc_model: {:.2f}%".format(100 * mfc_testing_acc))
assert mfc_training_acc >= 0.955, "Uh oh. Your MFC accuracy on the training set doesn't look right."
assert mfc_testing_acc >= 0.925, "Uh oh. Your MFC accuracy on the testing set doesn't look right."
HTML('<div class="alert alert-block alert-success">Your MFC tagger accuracy looks correct!</div>')
###Output
training accuracy mfc_model: 95.72%
testing accuracy mfc_model: 93.01%
###Markdown
Step 3: Build an HMM tagger---The HMM tagger has one hidden state for each possible tag, and is parameterized by two distributions: the emission probabilities giving the conditional probability of observing a given **word** from each hidden state, and the transition probabilities giving the conditional probability of moving between **tags** during the sequence. We will also estimate the starting probability distribution (the probability of each **tag** being the first tag in a sequence), and the terminal probability distribution (the probability of each **tag** being the last tag in a sequence). The maximum likelihood estimate of these distributions can be calculated from the frequency counts as described in the following sections where you'll implement functions to count the frequencies, and finally build the model. The HMM model will make predictions according to the formula: $$t_i^n = \underset{t_i^n}{\mathrm{argmax}} \prod_{i=1}^n P(w_i|t_i) P(t_i|t_{i-1})$$ Refer to Speech & Language Processing [Chapter 10](https://web.stanford.edu/~jurafsky/slp3/10.pdf) for more information. IMPLEMENTATION: Unigram Counts Complete the function below to estimate the co-occurrence frequency of each symbol over all of the input sequences. The unigram probabilities in our HMM model are estimated from the formula below, where N is the total number of samples in the input. (You only need to compute the counts for now.) $$P(tag_1) = \frac{C(tag_1)}{N}$$
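As a quick, self-contained illustration of the unigram MLE above (a toy sketch; `toy_tags` is made up and not part of the project data):

```python
# Toy example: unigram counts -> maximum likelihood probabilities P(tag) = C(tag)/N
from collections import Counter

toy_tags = ["NOUN", "VERB", "NOUN", "DET", "NOUN"]
counts = Counter(toy_tags)                      # C(tag)
N = sum(counts.values())                        # total number of tag samples
probs = {tag: c / N for tag, c in counts.items()}
print(probs)                                    # {'NOUN': 0.6, 'VERB': 0.2, 'DET': 0.2}
```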
###Code
def unigram_counts(sequences):
"""Return a dictionary keyed to each unique value in the input sequence list that
counts the number of occurrences of the value in the sequences list. The sequences
collection should be a 2-dimensional array.
For example, if the tag NOUN appears 275558 times over all the input sequences,
then you should return a dictionary such that your_unigram_counts[NOUN] == 275558.
"""
# TODO: Finish this function!
#simply apply Counter function to the sequence provided
uc=Counter(sequences)
return uc
# TODO: call unigram_counts with a list of tag sequences from the training set
#pass the input
tag_unigrams = unigram_counts(training_tags)
assert set(tag_unigrams.keys()) == data.training_set.tagset, \
"Uh oh. It looks like your tag counts doesn't include all the tags!"
assert min(tag_unigrams, key=tag_unigrams.get) == 'X', \
"Hmmm...'X' is expected to be the least common class"
assert max(tag_unigrams, key=tag_unigrams.get) == 'NOUN', \
"Hmmm...'NOUN' is expected to be the most common class"
HTML('<div class="alert alert-block alert-success">Your tag unigrams look good!</div>')
###Output
_____no_output_____
###Markdown
IMPLEMENTATION: Bigram Counts Complete the function below to estimate the co-occurrence frequency of each pair of symbols in each of the input sequences. These counts are used in the HMM model to estimate the bigram probability of two tags from the frequency counts according to the formula: $$P(tag_2|tag_1) = \frac{C(tag_1, tag_2)}{C(tag_1)}$$
###Code
def bigram_counts(sequences):
"""Return a dictionary keyed to each unique PAIR of values in the input sequences
list that counts the number of occurrences of pair in the sequences list. The input
should be a 2-dimensional array.
For example, if the pair of tags (NOUN, VERB) appear 61582 times, then you should
return a dictionary such that your_bigram_counts[(NOUN, VERB)] == 61582
"""
# TODO: Finish this function!
#training tags were an ordered sequence of the tags in the corpus, then
#to create a sequence of (tag_i|tag_i-1) tuples, using the zip function with
# two lists where the first/last element has been removed will do the work,
#as this will result in a ordered sequence of tags
bc=Counter(zip(sequences[:-1], sequences[1:]))
return bc
# TODO: call bigram_counts with a list of tag sequences from the training set
tag_bigrams = bigram_counts(training_tags)
assert len(tag_bigrams) == 144, \
"Uh oh. There should be 144 pairs of bigrams (12 tags x 12 tags)"
assert min(tag_bigrams, key=tag_bigrams.get) in [('X', 'NUM'), ('PRON', 'X')], \
"Hmmm...The least common bigram should be one of ('X', 'NUM') or ('PRON', 'X')."
assert max(tag_bigrams, key=tag_bigrams.get) in [('DET', 'NOUN')], \
"Hmmm...('DET', 'NOUN') is expected to be the most common bigram."
HTML('<div class="alert alert-block alert-success">Your tag bigrams look good!</div>')
###Output
_____no_output_____
###Markdown
IMPLEMENTATION: Sequence Starting Counts Complete the code below to estimate the bigram probabilities of a sequence starting with each tag.
###Code
def starting_counts(sequences):
"""Return a dictionary keyed to each unique value in the input sequences list
that counts the number of occurrences where that value is at the beginning of
a sequence.
For example, if 8093 sequences start with NOUN, then you should return a
dictionary such that your_starting_counts[NOUN] == 8093
"""
# TODO: Finish this function!
#extract the first tag of each sentence and count the occurrences
sc=Counter([sentence[0] for sentence in sequences])
return sc
# TODO: Calculate the count of each tag starting a sequence
#data.training_set.Y is split by sentences(what we need), opposite to training_tags where all sentences were flattened
tag_starts = starting_counts(data.training_set.Y)
assert len(tag_starts) == 12, "Uh oh. There should be 12 tags in your dictionary."
assert min(tag_starts, key=tag_starts.get) == 'X', "Hmmm...'X' is expected to be the least common starting bigram."
assert max(tag_starts, key=tag_starts.get) == 'DET', "Hmmm...'DET' is expected to be the most common starting bigram."
HTML('<div class="alert alert-block alert-success">Your starting tag counts look good!</div>')
###Output
_____no_output_____
###Markdown
IMPLEMENTATION: Sequence Ending Counts Complete the function below to estimate the bigram probabilities of a sequence ending with each tag.
###Code
def ending_counts(sequences):
"""Return a dictionary keyed to each unique value in the input sequences list
that counts the number of occurrences where that value is at the end of
a sequence.
For example, if 18 sequences end with DET, then you should return a
dictionary such that your_starting_counts[DET] == 18
"""
# TODO: Finish this function!
#extract the last tag of each sentence and count the occurencies
ec=Counter([sentence[-1] for sentence in sequences])
return ec
# TODO: Calculate the count of each tag ending a sequence
tag_ends = ending_counts(data.training_set.Y)
assert len(tag_ends) == 12, "Uh oh. There should be 12 tags in your dictionary."
assert min(tag_ends, key=tag_ends.get) in ['X', 'CONJ'], "Hmmm...'X' or 'CONJ' should be the least common ending bigram."
assert max(tag_ends, key=tag_ends.get) == '.', "Hmmm...'.' is expected to be the most common ending bigram."
HTML('<div class="alert alert-block alert-success">Your ending tag counts look good!</div>')
###Output
_____no_output_____
###Markdown
IMPLEMENTATION: Basic HMM Tagger Use the tag unigrams and bigrams calculated above to construct a hidden Markov tagger. - Add one state per tag - The emission distribution at each state should be estimated with the formula: $P(w|t) = \frac{C(t, w)}{C(t)}$ - Add an edge from the starting state `basic_model.start` to each tag - The transition probability should be estimated with the formula: $P(t|start) = \frac{C(start, t)}{C(start)}$ - Add an edge from each tag to the end state `basic_model.end` - The transition probability should be estimated with the formula: $P(end|t) = \frac{C(t, end)}{C(t)}$ - Add an edge between _every_ pair of tags - The transition probability should be estimated with the formula: $P(t_2|t_1) = \frac{C(t_1, t_2)}{C(t_1)}$
###Code
basic_model = HiddenMarkovModel(name="base-hmm-tagger")
# TODO: create states with emission probability distributions P(word | tag) and add to the model
# (Hint: you may need to loop & create/add new states)
#Create an emission probability for each word in the dataset
#for those not present, a 0 probability is assigned, following the video description
emission_probabilities={word_clas:{word:emission_counts[word_clas][word]/tag_unigrams[word_clas] for word in set(training_words)} for word_clas in tag_unigrams.keys()}
#generate a loop to create all model states
#Follow the same pipe as in the HMM warmup example
list_of_states=[]
for key,val in emission_probabilities.items():
#add all probabilities to the discrete distribution
globals()[f"{key}_emissions"] =DiscreteDistribution(val)
#Then we create the State based on that distribution
globals()[f"{key}_state"] = State(globals()[f"{key}_emissions"], name=key)
#create a list of all states to add it to the model.
#model.add_states() function accepts a list as an input
list_of_states.append(globals()[f"{key}_state"])
# add the states to the model
basic_model.add_states(list_of_states)
# TODO: add edges between states for the observed transition frequencies P(tag_i | tag_i-1)
# (Hint: you may need to loop & add transitions
#In this section, the aim is to try to mimic the warmup exercise of sunny-cloudy state for the current problem
#First, the starting probabilities are added to the basic_model
#generate a dictionary with the probabilities of all possible starting transitions probabilities
starting_probabilities={i:(j/sum(tag_starts.values())) for i,j in tag_starts.items()}
#create an edge for each possible start state
for state in starting_probabilities:
basic_model.add_transition(basic_model.start, globals()[f"{state}_state"], starting_probabilities[state])
#Second, the ending probabilities are added, in a similar way as we did for the starting probabilities
#generate a dictionary with the probabilities of all possible ending transitions probabilities
ending_probabilities={i:(j/sum(tag_ends.values())) for i,j in tag_ends.items()}
#create an ending edge for each possible ending state
for state in ending_probabilities:
basic_model.add_transition(globals()[f"{state}_state"], basic_model.end, ending_probabilities[state])
#Finally, all transition probabilities are added to the model
#generate a dictionary with the probabilities of all possible transitions between states observed in the data set
transition_probabilities={(pre,post):tag_bigrams[(pre,post)]/tag_unigrams[pre] for pre,post in tag_bigrams}
#the names pre and post are given to each element of the possible transition,
# pre for the sequence start and post to the sequence end
#add all states using a loop, all transitions are introduced without a specific order
for pre, post in transition_probabilities:
basic_model.add_transition(globals()[f"{pre}_state"],globals()[f"{post}_state"],transition_probabilities[(pre,post)])
# NOTE: YOU SHOULD NOT NEED TO MODIFY ANYTHING BELOW THIS LINE
# finalize the model
basic_model.bake()
assert all(tag in set(s.name for s in basic_model.states) for tag in data.training_set.tagset), \
"Every state in your network should use the name of the associated tag, which must be one of the training set tags."
assert basic_model.edge_count() == 168, \
("Your network should have an edge from the start node to each state, one edge between every " +
"pair of tags (states), and an edge from each state to the end node.")
HTML('<div class="alert alert-block alert-success">Your HMM network topology looks good!</div>')
hmm_training_acc = accuracy(data.training_set.X, data.training_set.Y, basic_model)
print("training accuracy basic hmm model: {:.2f}%".format(100 * hmm_training_acc))
hmm_testing_acc = accuracy(data.testing_set.X, data.testing_set.Y, basic_model)
print("testing accuracy basic hmm model: {:.2f}%".format(100 * hmm_testing_acc))
assert hmm_training_acc > 0.97, "Uh oh. Your HMM accuracy on the training set doesn't look right."
assert hmm_testing_acc > 0.955, "Uh oh. Your HMM accuracy on the testing set doesn't look right."
HTML('<div class="alert alert-block alert-success">Your HMM tagger accuracy looks correct! Congratulations, you\'ve finished the project.</div>')
###Output
training accuracy basic hmm model: 97.53%
testing accuracy basic hmm model: 95.96%
###Markdown
Example Decoding Sequences with the HMM Tagger
###Code
for key in data.testing_set.keys[:3]:
print("Sentence Key: {}\n".format(key))
print("Predicted labels:\n-----------------")
print(simplify_decoding(data.sentences[key].words, basic_model))
print()
print("Actual labels:\n--------------")
print(data.sentences[key].tags)
print("\n")
###Output
Sentence Key: b100-28144
Predicted labels:
-----------------
['CONJ', 'NOUN', 'NUM', '.', 'NOUN', 'NUM', '.', 'NOUN', 'NUM', '.', 'CONJ', 'NOUN', 'NUM', '.', '.', 'NOUN', '.', '.']
Actual labels:
--------------
('CONJ', 'NOUN', 'NUM', '.', 'NOUN', 'NUM', '.', 'NOUN', 'NUM', '.', 'CONJ', 'NOUN', 'NUM', '.', '.', 'NOUN', '.', '.')
Sentence Key: b100-23146
Predicted labels:
-----------------
['PRON', 'VERB', 'DET', 'NOUN', 'ADP', 'ADJ', 'ADJ', 'NOUN', 'VERB', 'VERB', '.', 'ADP', 'VERB', 'DET', 'NOUN', 'ADP', 'NOUN', 'ADP', 'DET', 'NOUN', '.']
Actual labels:
--------------
('PRON', 'VERB', 'DET', 'NOUN', 'ADP', 'ADJ', 'ADJ', 'NOUN', 'VERB', 'VERB', '.', 'ADP', 'VERB', 'DET', 'NOUN', 'ADP', 'NOUN', 'ADP', 'DET', 'NOUN', '.')
Sentence Key: b100-35462
Predicted labels:
-----------------
['DET', 'ADJ', 'NOUN', 'VERB', 'VERB', 'VERB', 'ADP', 'DET', 'ADJ', 'ADJ', 'NOUN', 'ADP', 'DET', 'ADJ', 'NOUN', '.', 'ADP', 'ADJ', 'NOUN', '.', 'CONJ', 'ADP', 'DET', 'NOUN', 'ADP', 'ADJ', 'ADJ', '.', 'ADJ', '.', 'CONJ', 'ADJ', 'NOUN', 'ADP', 'ADJ', 'NOUN', '.']
Actual labels:
--------------
('DET', 'ADJ', 'NOUN', 'VERB', 'VERB', 'VERB', 'ADP', 'DET', 'ADJ', 'ADJ', 'NOUN', 'ADP', 'DET', 'ADJ', 'NOUN', '.', 'ADP', 'ADJ', 'NOUN', '.', 'CONJ', 'ADP', 'DET', 'NOUN', 'ADP', 'ADJ', 'ADJ', '.', 'ADJ', '.', 'CONJ', 'ADJ', 'NOUN', 'ADP', 'ADJ', 'NOUN', '.')
###Markdown
Finishing the project---**Note:** **SAVE YOUR NOTEBOOK**, then run the next cell to generate an HTML copy. You will zip & submit both this file and the HTML copy for review.
###Code
!!jupyter nbconvert *.ipynb
###Output
_____no_output_____
###Markdown
Step 4: [Optional] Improving model performance---There are additional enhancements that can be incorporated into your tagger that improve performance on larger tagsets where the data sparsity problem is more significant. The data sparsity problem arises because the same amount of data split over more tags means there will be fewer samples in each tag, and there will be more missing data tags that have zero occurrences in the data. The techniques in this section are optional.- [Laplace Smoothing](https://en.wikipedia.org/wiki/Additive_smoothing) (pseudocounts) Laplace smoothing is a technique where you add a small, non-zero value to all observed counts to offset for unobserved values.- Backoff Smoothing Another smoothing technique is to interpolate between n-grams for missing data. This method is more effective than Laplace smoothing at combatting the data sparsity problem. Refer to chapters 4, 9, and 10 of the [Speech & Language Processing](https://web.stanford.edu/~jurafsky/slp3/) book for more information.- Extending to Trigrams HMM taggers have achieved better than 96% accuracy on this dataset with the full Penn treebank tagset using an architecture described in [this](http://www.coli.uni-saarland.de/~thorsten/publications/Brants-ANLP00.pdf) paper. Altering your HMM to achieve the same performance would require implementing deleted interpolation (described in the paper), incorporating trigram probabilities in your frequency tables, and re-implementing the Viterbi algorithm to consider three consecutive states instead of two. Obtain the Brown Corpus with a Larger TagsetRun the code below to download a copy of the brown corpus with the full NLTK tagset. You will need to research the available tagset information in the NLTK docs and determine the best way to extract the subset of NLTK tags you want to explore. If you write the following the format specified in Step 1, then you can reload the data using all of the code above for comparison.Refer to [Chapter 5](http://www.nltk.org/book/ch05.html) of the NLTK book for more information on the available tagsets.
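Before moving on, a minimal sketch of the Laplace (add-k) smoothing idea applied to the transition counts built above, with an illustrative pseudocount `k`; this is a sketch only, not part of the project code:

```python
# Hedged sketch: add-k smoothed transition probability using the count dicts built earlier.
def smoothed_transition(t1, t2, tag_bigrams, tag_unigrams, tagset, k=1e-3):
    # P(t2|t1) = (C(t1,t2) + k) / (C(t1) + k*|tagset|): unseen bigrams get a small,
    # non-zero probability instead of zero.
    return (tag_bigrams.get((t1, t2), 0) + k) / (tag_unigrams[t1] + k * len(tagset))

# Example call with the objects defined in this notebook:
# smoothed_transition('DET', 'NOUN', tag_bigrams, tag_unigrams, data.training_set.tagset)
```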
###Code
import nltk
from nltk import pos_tag, word_tokenize
from nltk.corpus import brown
nltk.download('brown')
training_corpus = nltk.corpus.brown
training_corpus.tagged_sents()[0]
###Output
[nltk_data] Downloading package brown to /root/nltk_data...
[nltk_data] Package brown is already up-to-date!
|
Prace_domowe/Praca_domowa3/Grupa1/SzczypekWojciech/pr_dom_3.ipynb | ###Markdown
1. Loading the data and libraries
###Code
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV, train_test_split
import matplotlib.pyplot as plt
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.metrics import precision_score
from sklearn import metrics
import warnings
import xgboost as xgb
warnings.filterwarnings('ignore')
pd.set_option('display.max_columns', None)
data = pd.read_csv("australia.csv")
data.head(3)
data.describe()
###Output
_____no_output_____
###Markdown
2. Splitting the data into training and test sets We split the data so that the ratio of the target labels is the same as in the full dataset. We do not need to encode anything, because the variables are not categorical.
###Code
target = "RainTomorrow"
X_train, X_test, Y_train, Y_test = train_test_split(
data.drop(target, axis = 1),
data[target],
stratify = data[target])
#train dataframe
dtrain = pd.concat([X_train,Y_train], axis=1)
#test dataframe
dtest = pd.concat([X_test,Y_test], axis=1)
predictors = [x for x in dtrain.columns if x not in [target]]
###Output
_____no_output_____
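As a quick check that the stratified split above preserved the target distribution (standard pandas calls, shown only as an optional sanity check):

```python
# The class shares in the full data, the training set and the test set should be nearly identical.
print(data[target].value_counts(normalize=True))
print(Y_train.value_counts(normalize=True))
print(Y_test.value_counts(normalize=True))
```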
###Markdown
3. Random Forest We start with a model that uses the Random Forest classifier. We use the GridSearchCV function to find optimal values of the following parameters: - n_estimators - the number of trees - max_features - the number of features considered when looking for the best split - max_depth - the maximum depth of the trees - criterion - the function used to measure the quality of a split
###Code
rf_model = Pipeline([
('classifier', RandomForestClassifier())
])
param_grid = {
'classifier__n_estimators': [50, 100],# the number of trees in the forest.
'classifier__max_features': ['sqrt', 'log2'], # the number of features to consider when looking for the
# best split:
## if “sqrt”, then max_features=sqrt(n_features)
## if “log2”, then max_features=log2(n_features).
'classifier__max_depth' : [6,8],
'classifier__criterion' :['gini', 'entropy'] # the function to measure the quality of a split.
# Supported criteria are “gini” for the Gini impurity
# and “entropy” for the information gain
}
rf_grid_search = GridSearchCV(rf_model, param_grid = param_grid, cv = 5)
rf_grid_search.fit(X_train, Y_train)
rf_grid_search.best_params_
rf_model = rf_grid_search.best_estimator_
rf_predict_class = rf_model.predict(X_test)
rf_predict_proba = rf_model.predict_proba(X_test)[:, 1]
###Output
_____no_output_____
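One optional way to inspect the full cross-validation results of the grid search above (cv_results_ is a standard GridSearchCV attribute; this snippet is only a usage example, not part of the original homework):

```python
# Rank the parameter combinations by mean cross-validated score.
cv_results = pd.DataFrame(rf_grid_search.cv_results_)
print(cv_results[['params', 'mean_test_score', 'std_test_score']]
      .sort_values('mean_test_score', ascending=False)
      .head())
```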
###Markdown
4. XGBoost The next classifier we use is XGBoost. Here we reduce the learning rate parameter. XGBoost builds and adds trees to the model one by one; each new tree is fitted to correct the prediction errors made by the previous trees. As a result, the model can quickly start to overfit. To prevent this, we can reduce the weight of the "corrections" contributed by a single tree, which is controlled by the learning_rate parameter: the smaller it is, the smaller the correction introduced by each tree, and therefore many more trees are needed. After lowering learning_rate we have to find a suitable number of trees (n_estimators) so that there are not too many of them. For this we use a function which, if adding another 50 trees does not improve the results, sets the n_estimators parameter to that optimal number.
###Code
# Function which finds the optimum number of trees (n_estimators parameter) using cv function of xgboost
# for given learning_rate and changes the alg parameter accrodingly
def modelfit(alg, dtrain, predictors, cv_folds = 5, early_stopping_rounds = 50):
xgb_param = alg.get_xgb_params()
xgtrain = xgb.DMatrix(dtrain[predictors].values, label = dtrain[target].values)
cvresult = xgb.cv(xgb_param, xgtrain, num_boost_round = alg.get_params()['n_estimators'],
nfold=cv_folds, metrics='auc', early_stopping_rounds = early_stopping_rounds)
alg.set_params(n_estimators = cvresult.shape[0])
# Our default classifier, whose parameters will be tuned in grid_search_cv
xgb1 = xgb.XGBClassifier(
learning_rate = 0.01,
n_estimators = 5000,
max_depth = 5,
min_child_weight = 1,
gamma = 0,
subsample = 0.8,
colsample_bytree = 0.8,
objective = 'binary:logistic',
nthread = 4,
scale_pos_weight = 1,
seed = 27)
# Let's assume fixed learning_rate = 0.01 and find the optimum n_estimators parameter value for it
modelfit(xgb1, dtrain, predictors)
xgb_model = Pipeline([
('classifier', xgb1)
])
xgb_model.fit(X_train,Y_train)
xgb_predict_class = xgb_model.predict(X_test)
xgb_predict_proba = xgb_model.predict_proba(X_test)[:, 1]
###Output
_____no_output_____
###Markdown
5. Logistic Regression Finally, we turn to logistic regression. Here we also use the GridSearchCV function to find optimal values of the following parameters: - C - controls the regularization strength (the larger the value, the weaker the regularization) - penalty - the type of regularization: L1 - Lasso Regression, L2 - Ridge Regression - class_weight - the weights assigned to the target classes
###Code
lr_model = Pipeline([
('estimator',LogisticRegression(max_iter=200,random_state=123,
multi_class='ovr',solver='saga'))
])
# Grid params search
param_grid = {'estimator__C': np.logspace(-4, 4, 20),#Inverse of regularization strength
'estimator__penalty':['l2','l1'],# Type of loss function
'estimator__class_weight':['balanced',None]# Type of class weight
# None - Every class have weight 1
# Auto Class weight calcultion
}
# Preparing and fitting grid search
lr_grid_search = GridSearchCV(lr_model, param_grid = param_grid, cv = 5)
lr_grid_search.fit(X_train,Y_train)
lr_grid_search.best_params_
lr_model = lr_grid_search.best_estimator_
lr_predict_class = lr_model.predict(X_test)
lr_predict_proba = lr_model.predict_proba(X_test)[:, 1]
###Output
_____no_output_____
###Markdown
6. Comparison of results
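A confusion matrix can complement the precision, recall and AUC comparison below; the following snippet is only a suggested addition and uses the predictions already computed above:

```python
# Print the confusion matrix of each fitted model on the test set.
from sklearn.metrics import confusion_matrix

for name, preds in [("Random Forest", rf_predict_class),
                    ("XGBoost", xgb_predict_class),
                    ("Logistic Regression", lr_predict_class)]:
    print(name)
    print(confusion_matrix(Y_test, preds))
```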
###Code
pd.DataFrame({"Encoder" : ["Random Forest", "XGBoost", "Logistic Regression"],
"Precision": [precision_score(Y_test, rf_predict_class, average='macro'), precision_score(Y_test, xgb_predict_class, average='macro'), precision_score(Y_test, lr_predict_class, average='macro')],
"Recall": [recall_score(Y_test, rf_predict_class), recall_score(Y_test, xgb_predict_class),recall_score(Y_test, lr_predict_class)]})
fpr_rf, tpr_rf, thresholds_rf = metrics.roc_curve(Y_test, rf_predict_proba)
fpr_xgb, tpr_xgb, thresholds_xgb = metrics.roc_curve(Y_test, xgb_predict_proba)
fpr_lr, tpr_lr, thresholds_lr = metrics.roc_curve(Y_test, lr_predict_proba)
pd.DataFrame({"Encoder" : ["Random Forest", "XGBoost", "Logistic Regression"],
"AUC": [metrics.auc(fpr_rf, tpr_rf), metrics.auc(fpr_xgb, tpr_xgb), metrics.auc(fpr_lr, tpr_lr
)]})
# ROC curve
plt.figure()
plt.plot([0, 1], [0, 1], 'k--')
plt.plot(fpr_rf, tpr_rf, label='Random Forest')
plt.plot(fpr_xgb, tpr_xgb, label='XGBoost')
plt.plot(fpr_lr, tpr_lr, label='Logistic Regression')
plt.xlabel('False positive rate')
plt.ylabel('True positive rate')
plt.title('ROC curve')
plt.legend(loc='best')
plt.show()
###Output
_____no_output_____ |
notebooks/2.0.3-jjf-initial-data-wrangling-zipcodeFREDdata.ipynb | ###Markdown
Data Wrangling FRED Data for Zipcode from FRED public data
###Code
#Import pandas, matplotlib.pyplot, and seaborn
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import os
import numpy as np
#change directory to get data
path= '/Users/josephfrasca/Coding_Stuff/Springboard/Capstone_2/data/raw'
os.chdir(path)
#load fred national economic data data
df_fred = pd.read_csv('NationalfredgraphForZipcodes.csv')
df_fred
###Output
_____no_output_____
###Markdown
Data Definition
###Code
df_fred.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 73 entries, 0 to 72
Data columns (total 7 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 DATE 73 non-null object
1 INTDSRUSM193N 73 non-null object
2 MEHOINUSA672N 73 non-null object
3 SPPOPGROWUSA 73 non-null object
4 UNRATE 73 non-null object
5 HOUST 73 non-null object
6 TLRESCONS 73 non-null object
dtypes: object(7)
memory usage: 4.1+ KB
###Markdown
Data Cleaning
###Code
#filter for data after 2011 and reset index
df_fred_2011_2019 = df_fred[df_fred['DATE'] > '2011-01']
df_fred_2011_2019 = df_fred_2011_2019.reset_index(drop=True)
#drop empty 2020 row
df_fred_2011_2019 = df_fred_2011_2019.drop(9)
df_fred_2011_2019.dtypes
#change dtypes to floats for economic data
date = df_fred_2011_2019['DATE']
floats = df_fred_2011_2019.drop('DATE', axis=1)
floats = floats.astype('float')
floats.dtypes
df = pd.concat([date, floats], axis=1)
df
df.nunique()
#rename columns
df = df.rename(columns = {'SPPOPGROWUSA':'uspop_growth', 'MEHOINUSA672N':'med_hIncome', 'UNRATE':'unemplt_rate', 'INTDSRUSM193N':'int_rate', 'HOUST':'newHouse_starts', 'TLRESCONS':'resConstruct_spending'})
df
###Output
_____no_output_____
###Markdown
Save Data
###Code
df.to_csv(r'/Users/josephfrasca/Coding_Stuff/Springboard/Capstone_2/data/interim/Annual_fredData_2011_2019', index=False)
###Output
_____no_output_____ |
Modelproject_final.ipynb | ###Markdown
1. Introduction Taxes play an important role for every individual in the economy. The size of the tax payment affects the disposable income, and thus the (possible) consumption level. The government collects taxes in order to finance its expenditures. The literature has estimated that an increase in the marginal tax rate induces people to work less. This implies that the government cannot set a too high marginal tax rate if it wants to maximize its revenue. In the end, both workers and the government benefit from an optimal tax system. In this project, we examine the change in behaviour of a worker due to a change in the marginal tax rate. We will further try to estimate the tax rate that maximizes the tax revenue by drawing a sample of agents with different abilities and preferences using `np.random`. In this project, we only consider changes on the intensive margin, i.e. how people adjust their work hours and/or effort due to changes in the marginal tax rate, and not whether they change employment status due to a change in the marginal tax rate. Feldstein (1995) argues that taxable income as an endogenous variable is better when estimating changes in behaviour due to changes in the marginal tax rate, since taxable income captures changes in hours worked and changes in work effort. We apply the same method as Feldstein (1995) and consider the change in taxable income from changes in the marginal tax rate. 2. Model description We consider the utility function given by $U_i = c_i - \frac{\alpha _i}{1+\frac{1}{\epsilon}} ( \frac{z _i}{ \alpha _i})^ {1+ \frac{1}{\epsilon}}$ where $c_i$ denotes consumption of the individual, while $z_i$ denotes the taxable income of the individual. $\alpha _i$ denotes the potential earnings level of the individual, and $\epsilon$ denotes the elasticity of taxable income with respect to the net-of-tax rate. The consumption level depends on the taxable income. Thus, we consider the budget constraint given by $c_i = z_i - T(z_i)$ where $T(z_i)$ is the tax payment. This implies that we assume that the individual uses all of his disposable income on consumption. The tax payment is given by $T(z_i) = m*z_i$ where $m$ is the marginal tax rate chosen by the government. Since the tax rate is independent of earnings, we consider a proportional tax system. We want to maximize the utility of the individual subject to his budget constraint, i.e. $\max\limits_{z_i} U_i=c_i - \frac{\alpha _i}{1+\frac{1}{\epsilon}} ( \frac{z _i}{ \alpha _i})^ {1+ \frac{1}{\epsilon}}$ s.t. $c_i = z_i - T(z_i)$ 3. Solutions First, we find the analytical solution, i.e. the optimal taxable income of the individual given the tax rate. Second, we show the graphical solution, where we plot the indifference curve, the budget constraint and the optimum of the individual. 3.1 Analytical solution We import the `sympy`-package in order to define the variables and parameters. Afterwards, we use the `.init_printing`-function, so the variables are written in a pretty way. Lastly, we define the variables and parameters.
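For reference, the maximization problem above can also be solved by hand; the sympy cells below reproduce exactly this derivation. Substituting the budget constraint and the proportional tax $T(z_i)=m z_i$ into the utility function and differentiating with respect to $z_i$ gives

$$\frac{\partial U_i}{\partial z_i} = (1-m) - \left(\frac{z_i}{\alpha_i}\right)^{1/\epsilon} = 0 \quad\Longrightarrow\quad z_i^{*} = \alpha_i (1-m)^{\epsilon}$$

which is the closed form used later in the notebook (e.g. `z_opt = alpha*(1-tax)**epsilon`).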
###Code
# 1) Import sympy in order to write nice equations.
import sympy as sm
# 2) Define that we want the variables printed in a pretty way.
sm.init_printing(use_unicode=True)
# 3) Define the different variables and parameters.
U = sm.symbols('U')
c = sm.symbols('c')
z = sm.symbols('z')
eps = sm.symbols('epsilon')
T_z = sm.symbols('T(z)')
mtax = sm.symbols('m')
alp = sm.symbols('alpha')
###Output
_____no_output_____
###Markdown
We define the different functions. In total, we consider three different functions: the utility function, the budget constraint and the tax payment. The last two functions are functions of taxable income.
###Code
# Define utility function.
utility = c - alp/(1+1/eps)*(z/alp)**(1+1/eps)
utility
# Define budget constraint.
budget_constraint = sm.Eq(c, z - T_z)
budget_constraint
# Define tax payment.
taxpay = sm.Eq(T_z, mtax*z)
taxpay
###Output
_____no_output_____
###Markdown
In the next step, we want to substitute the tax payment into the budget constraint.
###Code
# 1) Isolate T_z in the tax payment (this is done already, but to make sure, we run this line of code).
tax_solve = sm.solve(taxpay, T_z)
# 2) Define that we want the tax payment substituted into the budget constraint.
budget_constraint_sub = budget_constraint.subs(T_z, tax_solve[0])
# 3) Ensure that the budget constraint is correct after the substitution.
budget_constraint_sub
###Output
_____no_output_____
###Markdown
Now, we want to substitute the budget constraint into the utility function. This is done in the same way as before.
###Code
# 1) Isolate c in the budget constraint (this is done already, but to make sure, we run this line of code).
budget_constraint_solve = sm.solve(budget_constraint_sub, c)
# 2) Define that we want the budget constraint substituted into the utility function.
utility_sub = utility.subs(c, budget_constraint_solve[0])
# 3) Ensure that the utility function is correct after the substitution.
utility_sub
###Output
_____no_output_____
###Markdown
We have concluded that the utility function is defined in the right way. After substitution, the utility function depends only on the taxable income (and parameters, but these are fixed). In order to find the optimal level of taxable income, we differentiate the utility function with respect to taxable income. Afterwards, we isolate taxable income to derive the optimal level of taxable income ($z^*$).
###Code
# 1) Define that we want to calculate the derivate of the utility function with respect to taxable income.
foc = sm.diff(utility_sub, z)
# 2) Define that we want to isolate for taxable income to derive z*.
z_star = sm.solve(foc, z)
# 3) Define the optimal level of taxable income as a Python-function.
z_star_func = sm.lambdify((mtax, alp, eps), z_star)
# 4) Examine the optimal level of taxable income.
z_star
###Output
_____no_output_____
###Markdown
We note that the optimal level of taxable income depends on the marginal tax rate, the potential earnings and the elasticity of taxable income with respect to the net-of-tax rate. In the special case where the individual faces no taxes, i.e. where $m=0$, his taxable income will equal his potential earnings ($z _i ^* = \alpha _i$). Furthermore, we note that the taxable income is lowered more by an increase in the tax rate when the elasticity of taxable income is high. 3.2 Graphical solution We can show how people change behaviour due to a change in the marginal tax rate. As noted before, the marginal tax rate and the elasticity of taxable income affect how much people deviate from their potential earnings. This can be shown in a graph where one can examine the changes in taxable income when the parameters are changed. In order to do that, we need to define different functions. First, we import the `numpy`-package. Afterwards, we define a function for the budget constraint and the utility function. In the utility function, we insert the solution given by $z^*$ instead of $z$. We are then able to determine the optimal consumption level, which is a function of the optimal taxable income. Finally, we isolate consumption and keep utility fixed in order to draw an indifference curve.
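Since `z_star_func` has already been created with `sm.lambdify` above, these comparative statics can also be checked numerically; a small illustrative call, assuming the sympy solution simplifies to $\alpha(1-m)^{\epsilon}$ as derived:

```python
# Evaluate the optimal taxable income for a few tax rates with alpha=5000 and epsilon=0.8.
# z_star_func returns a list because z_star is a list of solutions.
for m in [0.0, 0.3, 0.6]:
    print(m, z_star_func(m, 5000, 0.8)[0])   # decreasing in m; equals 5000 when m=0
```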
###Code
# 1) Import the numpy-package.
import numpy as np
# 2) Define the budget constraint.
def budget_con_fuc(m):
return (1-m)*z
budget_con = np.array(range(2))
# 3) Define the utility function.
def value_of_choice(alpha, epsilon, m):
# The utility is
utility_f = (1-m)*(alpha*(1-m)**epsilon) - alpha/(1+1/epsilon)*(alpha*(1-m)**epsilon/alpha)**(1+1/epsilon)
return utility_f
# 3) Define the consumption level determined by the optimal taxable income.
def y(budget_con, m):
return budget_con*(1-m)
# 4) Define the consumption level as a function of the fixed utility and taxable income.
def utility_function(budget_con, alpha, epsilon, m):
u = value_of_choice(alpha, epsilon, m)
return u + alpha/(1+1/epsilon)*(budget_con/alpha)**(1+1/epsilon)
###Output
_____no_output_____
###Markdown
We want to plot the utility function, the budget constraint and the optimal taxable income. The optimal taxable income is given by the point where the budget constraint is tangent to the utility function (which is marked with a bullet on the graph). In order to do so, we import the `matplotlib`-package in order to plot the graphs. Furthermore, we import the `ipywidgets`-package in order to generate interactive sliders.
###Code
# Import packages in order to plot the graph and generate sliders.
import matplotlib.pyplot as plt
import ipywidgets as widgets
plt.style.use('ggplot')
# 1) Generate the graph.
def graph(alpha, epsilon, tax=0.3):
plt.figure(figsize=(12,6))
budget_con = np.arange(0.0, alpha, 1)
plt.plot(budget_con, utility_function(budget_con, alpha, epsilon, tax))
plt.plot(budget_con, y(budget_con, tax))
z_opt = (alpha*(1-tax)**epsilon)
print("The optimal taxable income is: " + str(round(z_opt)))
plt.plot(z_opt, (1-tax)*z_opt, 'ro')
plt.ylabel('Consumption')
plt.xlabel('Taxable income')
plt.legend(['Indifference curve','Budget constraint', 'Optimum'])
return
# 2) Generate the sliders.
widget_alpha = widgets.FloatSlider(
value = 5000,
min=100,
max=10000,
step=100,
description='α:',
readout_format='.0f',
width = 1000,
layout={'width': '500px'}
)
widget_epsilon = widgets.FloatSlider(
value = 0.8,
min=0.01,
max=2,
step=0.01,
description='$\epsilon$:',
readout_format='.2f',
width = 1000,
layout={'width': '500px'}
)
widget_m = widgets.FloatSlider(
value = 0.5,
min=0.0,
max=1,
step=0.01,
description='$m$:',
readout_format='.2f',
width = 1000,
layout={'width': '500px'}
)
# 3) Print the graph
widgets.interact(graph,
alpha=widget_alpha, epsilon=widget_epsilon, tax=widget_m
);
###Output
_____no_output_____
###Markdown
The graph confirms the conclusion from section 3.1. $z^*$ increases when $\alpha$ increases. This is obvious since one gets a higher income when his potential earnings increase, everything else equal. $z^*$ decreases when $m$ increases. When the marginal tax rate increases, leisure is cheaper, and hence people tend to substitute work for leisure, which lowers the taxable income. $z^*$ decreases when $\epsilon$ increases. When the elasticity of taxable income increases, people respond more to a change in the marginal tax rate. The greater the elasticity, the greater the response. Hence, an increase in the elasticity of taxable income lowers taxable income due to larger labor supply responses. 3.3 The tax rate that maximizes the tax revenue Until now we have only looked at the individual agent's optimization problem. We will examine how the government should tax the individuals to maximize the tax revenue. We do this considering the following draw of $N$ individuals with the preferences and abilities as follows $$ \begin{eqnarray*} & & \,\,\,\gamma^{j}=(|\alpha^{j}|,|\epsilon^{j}|)\\ & & \,\,\,\widehat{\gamma^{j}}=(\alpha^{j},\epsilon^{j}) \\ & & \,\,\,\widehat{\gamma^{j}} \sim \mathcal{N}(\mu,\Sigma) \\ & & \,\,\,\mu=(\mu_{\alpha},\mu_\epsilon) \\ & & \,\,\, \Sigma = \begin{bmatrix} \sigma_{\alpha}^2 & \sigma_{\alpha, \epsilon} \\ \sigma_{\alpha, \epsilon} & \sigma_{\epsilon}^2 \end{bmatrix} \end{eqnarray*} $$ Below we define a function which draws $N$ observations from the distribution defined above.
###Code
def draw(N = 500, mu_alpha = 5000, mu_epsilon = 0.9, sigma_alpha = 1000000, sigma_epsilon = 0.1, correlation = -0.5):
# 1) Define an array which contains mu_alpha and mu_epsilon
mu = np.array([mu_alpha,mu_epsilon])
# 2) Calculate the covariance between alpha and epsilon based on the correlation
sigma_alp_eps = correlation * (sigma_epsilon * sigma_alpha)**0.5
# 3) Define an array which contains the covariance matrix
Sigma = np.array([[sigma_alpha, sigma_alp_eps], [sigma_alp_eps, sigma_epsilon]])
# 4) Set the seed and draw the sample
seed = 2019
np.random.seed(seed)
draw = np.random.multivariate_normal(mu, Sigma, size=N)
# 5) Extract alphas and epsilons from the draw and make sure they are all positive
global alphas, epsilons
alphas = np.absolute(draw[:,0])
epsilons = np.absolute(draw[:,1])
return
# 6) Plot the distribution of alphas and epsilons
draw()
plt.figure(figsize=(12,6))
plt.scatter(alphas,epsilons)
plt.ylabel('Distribution of $\epsilon_i$')
plt.xlabel('Distribution of $α_i$')
plt.show()
###Output
_____no_output_____
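A quick optional sanity check of the sample drawn above; it only uses the global `alphas` and `epsilons` arrays set by `draw()` and standard numpy calls:

```python
# The sample moments should be close to the chosen mu and Sigma; the correlation is slightly
# attenuated because the absolute value is taken after drawing.
print(alphas.mean(), epsilons.mean())
print(np.corrcoef(alphas, epsilons)[0, 1])
```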
###Markdown
We will now define a function which returns the tax revenue as a function of $\alpha_i$, $\epsilon_i$ and the tax rate. We will then define another function, which returns the tax revenue times -1 as a function of only the tax rate given the values of $\alpha_i$ and $\epsilon_i$ from the draw. This function has the nice property that it is easy to minimize using `optimize.minimize`. The result from this minimization process will be the tax rate which maximizes the tax revenue.
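As a rough benchmark for the numerical optimization (a worked equation, not part of the original notebook): for a single agent with $z=\alpha(1-m)^{\epsilon}$, revenue is $R(m)=m\,\alpha(1-m)^{\epsilon}$ and

$$R'(m)=\alpha(1-m)^{\epsilon-1}\big[(1-m)-\epsilon m\big]=0 \quad\Longrightarrow\quad m^{*}=\frac{1}{1+\epsilon},$$

so with $\mu_\epsilon=0.9$ the single-agent rate is about $1/1.9\approx0.53$, broadly in line with the 0.56 found below for the heterogeneous sample.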
###Code
from scipy import optimize
# 1) Define function which returns the tax revenue
def tax_revenue(alpha, epsilon, tax):
# 1.1) Find the tax revenue from each agent
tax_indi = alpha*(1-tax)**epsilon * tax
# 1.2) Aggregate all the tax revenues
tax_revenue = sum(tax_indi)
return tax_revenue
# 2) Define function that return the tax revenue as a function of the tax rate for the alphas and epsilons drawn earlier
def tax_maximize(tax):
t = tax_revenue(alphas, epsilons, tax) * -1
return t
# 3) Define function which maximizes the tax revenue using the tax_maximize(tax) function
def find_maximizing_tax(initial_guess=0.1):
# 3.1) Set the lower and upper bound
low_bound = 0
high_bound = 1-1e-5
bounds = ((low_bound,high_bound),)
# 3.2) Use the optimize.minimize to find the maximizing tax rate
result = optimize.minimize(tax_maximize, initial_guess, bounds=bounds)
return result
draw()
# 4) Store the optimal tax rate and the corresponding tax revenue.
results_max = find_maximizing_tax(initial_guess=0.1)
max_tax_rate_old = results_max.x
max_tax_rate = max_tax_rate_old[0]
max_tax_rev = results_max.fun
###Output
_____no_output_____
###Markdown
Given the standard parameters from the draw, we print the maximizing tax rate and the corresponding tax revenue.
###Code
print(f'The optimal tax rate is given by {max_tax_rate:.2f}')
print(f'The corresponding tax revenue is given by {-max_tax_rev:.0f}')
###Output
The optimal tax rate is given by 0.56
The corresponding tax revenue is given by 710528
###Markdown
We will now use the tax_maximize function to plot a Laffer curve with sliders to change the parameters $\mu_\alpha$, $\mu_\epsilon$, $\sigma^2_\alpha$, $\sigma^2_\epsilon$, $\sigma_{\alpha,\epsilon}$
###Code
import matplotlib
# 1) Define the interval for which the tax rate can be in [0,1], and turn this into a dataframe.
tax_rate_int = np.arange(0.0, 1.0, 0.01)
tax_table = pd.DataFrame({'taxrate':tax_rate_int})
# 2) Define a function which return the tax revenue for differnet tax levels.
def tax_revenue_df(row):
t = tax_maximize(row['taxrate'])
return -t
# 3) Define function which plot the Laffer curve.
def Laffer(mu_alpha = 5000, mu_epsilon = 0.9, sigma_alpha = 50000, sigma_epsilon = 0.1, correlation = 1):
# 3.1) Make a draw given the parameters specified in this function
draw(mu_alpha = mu_alpha, mu_epsilon = mu_epsilon, sigma_alpha = sigma_alpha, sigma_epsilon = sigma_epsilon, correlation = correlation)
# 3.2) Calculate the tax revenues for different tax rates.
tax_table[z] = tax_table.apply(tax_revenue_df, axis=1)
# 3.3) Plot the tax revenues as a function of tax rates.
plt.figure(figsize=(12,6))
plt.plot(tax_table['taxrate'],tax_table[z], color='b')
t = find_maximizing_tax(0.1)
tax = t.x[0] * 100
plt.plot(t.x[0],-t.fun,'ro')
plt.axvline(x=t.x[0], linewidth=1, color='r', linestyle='dashed')
plt.axhline(y=-t.fun, linewidth=1, color='r', linestyle='dashed')
axes = plt.gca()
axes.get_yaxis().set_major_formatter(
matplotlib.ticker.FuncFormatter(lambda x, p: format(int(x), ',')))
print(f'The tax revenue maximizing tax rate is: {tax:.3f} pct. with a tax revenue of {-t.fun:,.0f}')
plt.show()
return
# 4) Customize widgets
widget_mua = widgets.FloatSlider(
value = 5000,
min=10,
max=10000,
step=100,
description='$\mu_a$:',
readout_format='.0f',
width = 1000,
layout={'width': '500px'}
)
widget_mue = widgets.FloatSlider(
value = 1,
min=0,
max=3,
step=0.01,
description='$\mu_{\epsilon}$:',
readout_format='.2f',
width = 1000,
layout={'width': '500px'}
)
widget_sigmaa = widgets.FloatSlider(
value = 50000,
min=1000,
max=100000,
step=1000,
description='$\sigma_a^2$:',
readout_format='.0f',
width = 1000,
layout={'width': '500px'}
)
widget_sigmae = widgets.FloatSlider(
value = 0.1,
min=0,
max=3,
step=0.01,
description='$\sigma^2_{ \epsilon}$:',
readout_format='.2f',
width = 1000,
layout={'width': '500px'}
)
widget_corr = widgets.FloatSlider(
value = -0.5,
min=-1,
max=1,
step=0.01,
description='$\sigma_{ a, \epsilon} $:',
readout_format='.2f',
width = 1000,
layout={'width': '500px'}
)
# 5) Call the Laffer curve with widgets
widgets.interact(Laffer,
mu_alpha = widget_mua, mu_epsilon = widget_mue, sigma_alpha = widget_sigmaa, sigma_epsilon = widget_sigmae, correlation = widget_corr,
);
###Output
_____no_output_____
###Markdown
The graph above shows that the tax revenue maximizing tax rate varies between around 25 pct. and 99.99 pct. depending on the parameters. The value of $\mu_\epsilon$ has a great influence on the tax revenue maximizing tax rate. 4. Further analysis Previously, we considered a proportional tax system where everyone faced the same marginal tax rate for any income level. However, most Western countries have a progressive tax system where people with high incomes face a higher marginal tax rate. Thus, we examine the optimal taxable income when the individual faces a tax system where the marginal tax rate to the right of a threshold is higher than to the left of the threshold. 4.1 Extension of the baseline model We consider the same equations as in section 2. However, the function which defines the tax payment is different now. The tax payment is given by $T_{new} (z_i) = min(z_i, K) * m + max(z_i - K, 0) * m_H$ where $K$ is the threshold, and $m_H$ is the marginal tax rate faced by high-income earners (people with an income above the threshold). We note that if $z_i \leq K$ then there is no change from before, and the optimal taxable income is identical to the solution in section 3. An individual with an income above the threshold faces the tax payment $T_{H} (z_i) = K * m + (z_i - K) * m_H$ 4.2 Analytical solution As mentioned in section 4.1, the introduction of the new tax system does not change anything from before for people with an income below the threshold. High-income earners respond in another way than before, and hence we calculate the optimal taxable income for them. The steps in this section are in most cases identical to the steps in section 3.1. First, we define the new symbols and the new tax system for high-income earners.
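To make the piecewise schedule above concrete, a minimal sketch in plain Python; the function name and the example numbers are illustrative only and mirror the $m=0.4$, $m_H=0.6$, $K=1000$ case used later in section 4.3:

```python
# Hedged sketch of the progressive tax payment T_new(z) = min(z, K)*m + max(z - K, 0)*m_H.
def tax_payment_piecewise(z, m, m_H, K):
    return min(z, K) * m + max(z - K, 0.0) * m_H

# An income of 1500 pays 1000*0.4 + 500*0.6 = 700.
print(tax_payment_piecewise(1500, 0.4, 0.6, 1000))
```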
###Code
# 1) Define symbols.
T_H_z = sm.symbols('T_{H}(z)')
m_H = sm.symbols('m_H')
Kn = sm.symbols('K')
# 2) Define new tax system.
taxhigh = sm.Eq(T_H_z, mtax*Kn + m_H*(z-Kn))
taxhigh
###Output
_____no_output_____
###Markdown
We insert the new function of tax payment into the budget constraint.
###Code
# 1) Isolate T_z in the tax payment (this is done already, but to make sure, we run this line of code).
tax_high_solve = sm.solve(taxhigh, T_H_z)
# 2) Define that we want the tax payment substituted into the budget constraint.
budget_constraint_high_sub = budget_constraint.subs(T_z, tax_high_solve[0])
# 3) Ensure that the budget constraint is correct after the substitution.
budget_constraint_high_sub
###Output
_____no_output_____
###Markdown
We insert the new budget constraint into the utility function.
###Code
# 1) Isolate c in the budget constraint (this is done already, but to make sure, we run this line of code).
budget_constraint_high_solve = sm.solve(budget_constraint_high_sub, c)
# 2) Define that we want the budget constraint substituted into the utility function.
utility_high_sub = utility.subs(c, budget_constraint_high_solve[0])
# 3) Ensure that the utility function is correct after the substitution.
utility_high_sub
###Output
_____no_output_____
###Markdown
We differentiate the utility function with respect to taxable income in order to derive the optimal level of taxable income.
###Code
# 1) Define that we want to calculate the derivate of the utility function with respect to taxable income.
foc_high = sm.diff(utility_high_sub, z)
# 2) Define that we want to isolate for taxable income to derive z*.
z_star_high = sm.solve(foc_high, z)
# 3) Define the optimal level of taxable income as a Python-function.
z_star_func_high = sm.lambdify((m_H, alp, eps), z_star_high)
# 4) Examine the optimal level of taxable income.
z_star_high
###Output
_____no_output_____
###Markdown
We denote this level of optimal taxable income as $z_{high} ^*$. We note that this is the same optimal taxable income as in section 3.1 except for the marginal tax rate. However, it is not possible in this function to show that high-income earners tend to earn an income which is identical to the threshold. This is called "bunching" since people tend to bunch at kink points. If people with potential earnings above the threshold choose an optimal taxable income such that $z_{high}^* < K$ then people will bunch at the kink. This is possible to show in a graphical analysis. Graphical analysis In this section we will analyze the behavior of the worker graphically. First, we will find the maximum utility of the worker, and then we will plot the indifference curve with the budget constraint the same way as in section 3.2.
###Code
# 1) Define a function that return the maximum utility level given alpha, epsilon, low tax rate, high tax rate and the threshold
def value_of_choice_2(alpha, epsilon, tax_low, tax_high, kink):
# 1.1) If the alpha is below the income threshold, the maximum utility is found using the old budget constraint without the tax rate for high income earners
if alpha < kink:
global z_opt_new
z_opt_new = (alpha*(1-tax_low)**epsilon)
tax_opt_new = z_opt_new * tax_low
utility_f = (1-tax_low)*(alpha*(1-tax_low)**epsilon) - alpha/(1+1/epsilon)*(alpha*(1-tax_low)**epsilon/alpha)**(1+1/epsilon)
return utility_f, z_opt_new, tax_opt_new
# 1.2) Find the maximum utility if alpha is above the threshold
if alpha >= kink:
# 1.3) Find the utility for the budget constaint only below the threshold
utility_l = (1-tax_low)*(alpha*(1-tax_low)**epsilon) - alpha/(1+1/epsilon)*(alpha*(1-tax_low)**epsilon/alpha)**(1+1/epsilon)
z_opt_low = (alpha*(1-tax_low)**epsilon)
tax_opt_low = z_opt_low * tax_low
# 1.4) If the optimal income earned on the low budget set is above the threshold the low utility should be evaluated in the threshold
if alpha*(1-tax_low)**epsilon>kink:
utility_l = (1-tax_low)*kink - alpha/(1+1/epsilon)*(kink/alpha)**(1+1/epsilon)
z_opt_low = kink
tax_opt_low = kink * tax_low
# 1.5) Find the utility for the budget constaint only above the threshold
utility_h = (1-tax_low)*kink+(1-tax_high)*(alpha*(1-tax_high)**epsilon-kink) - alpha/(1+1/epsilon)*(alpha*(1-tax_high)**epsilon/alpha)**(1+1/epsilon)
z_opt_high = (alpha*(1-tax_high)**epsilon)
tax_opt_high = kink * tax_low + (z_opt_high - kink) * tax_high
# 1.4) If the optimal income earned on the high budget set is below the threshold the high utility should be evaluated in the threshold
if alpha*(1-tax_high)**epsilon<kink:
utility_h = (1-tax_low)*kink - alpha/(1+1/epsilon)*(kink/alpha)**(1+1/epsilon)
z_opt_high = kink
# 1.5) return the values for the higest utility
utility_g = max(utility_l, utility_h)
if utility_g==utility_l:
z_opt_new=z_opt_low
tax_opt_new = tax_opt_low
else:
z_opt_new=z_opt_high
tax_opt_new = tax_opt_high
return utility_g, z_opt_new, tax_opt_new
# 2) Define the indifference curve
def indif_curve(budget_con, alpha, epsilon, tax_low, tax_high, k):
u = value_of_choice_2(alpha, epsilon, tax_low, tax_high, k)[0]
return u + alpha/(1+1/epsilon)*(budget_con/alpha)**(1+1/epsilon)
# 3) Define a function that makes a plot with the indifference curve, the budget constraint and the optimal behaviour.
def graph2(alpha, epsilon, tax_low=0.35, tax_high=0.7, kink=500):
plt.figure(figsize=(12,6))
budget_con = np.arange(0.0, alpha*1.1+100, 1)
# 3.1) Plot the indifference curve
plt.plot(budget_con, indif_curve(budget_con, alpha, epsilon, tax_low, tax_high, kink))
# 3.2) Plot the budget constraints
lower = np.arange(0.0, kink, 1)
plt.plot(lower, lower*(1-tax_low))
upper = np.arange(kink, alpha*1.1+100, 1)
plt.plot(upper, (1-tax_low)*kink+(upper-kink)*(1-tax_high), color='g')
plt.axvline(x=kink, color='black', linestyle=':')
print("The optimal taxable income is: "+str(round(z_opt_new)))
plt.plot(z_opt_new, indif_curve(z_opt_new, alpha, epsilon, tax_low, tax_high, kink), 'ro')
plt.ylabel('Consumption')
plt.xlabel('Taxable income')
plt.legend(['Indifference curve','Budget constraint (low marginal tax)', 'Budget constraint (high marginal tax)', 'Threshold', 'Optimum'])
return
# 4) Plot the graph with interactive widgets
widget_ml = widgets.FloatSlider(
value = 0.5,
min=0.0,
max=1,
step=0.01,
description='$m_l$:',
readout_format='.2f',
width = 1000,
layout={'width': '500px'}
)
widget_mh = widgets.FloatSlider(
value = 0.5,
min=0.0,
max=1,
step=0.01,
description='$m_h$:',
readout_format='.2f',
width = 1000,
layout={'width': '500px'}
)
widget_kink = widgets.FloatSlider(
value = 2000,
min=10,
max=5000,
step=10,
description='$K$:',
readout_format='.2f',
width = 1000,
layout={'width': '500px'}
)
widgets.interact(graph2,
alpha=widget_alpha, epsilon=widget_epsilon, tax_low=widget_ml, tax_high=widget_mh, kink=widget_kink
);
###Output
_____no_output_____
###Markdown
We note some of the same results as in the conclusion in section 3.2. In this extended model, we note that people with a potential income slightly above the threshold will bunch at the kink. This is due to the fact that people are unwilling to work at the high marginal tax rate, but would like to work at the low marginal tax rate. Thus, they maximize their utility by having a taxable income exactly equal to the threshold. 4.3 The tax rate that maximizes the tax revenue We will use the distribution of agents we defined in section 3.3 to calculate the tax revenue for different low and high taxes and different thresholds for the high tax. First, we make a new draw from the draw function and then define a function which returns the total tax revenue as well as the tax payment from each agent.
###Code
# 1) Draws from the draw function
N=500
mu_alpha_a=5000
draw(N=N, mu_alpha=mu_alpha_a, mu_epsilon = 0.9, sigma_alpha = 1000000, sigma_epsilon = 0.1, correlation = -0.5)
# 2) Convert the numpy array with the alphas and epsilons into a list
alphas_list = alphas.tolist()
epsilons_list = epsilons.tolist()
# 3) Define a function which return total tax revenue as well as tax payment from each agent
def calc_tau(alphas_list, epsilons_list, tax_low, tax_high, kink):
# 3.1) Create empty list with all the tax payments
tau_list = []
z_list = []
# 3.2) Calculate the tax payment from every agent using a for loop
for x in range(N):
tau_list.append(value_of_choice_2(alphas_list[x], epsilons_list[x], tax_low, tax_high, kink)[2])
z_list.append(value_of_choice_2(alphas_list[x], epsilons_list[x], tax_low, tax_high, kink)[1])
return sum(tau_list), tau_list, z_list
# 4) Define function which calculates tax revenue times minus 1 as a function of tax_low, tax_high, kink -
# given the current values of alphas_list and epsilons_list. The minimum of this function is the maximum tax revenue
def tax_kink_max(params):
tax_low, tax_high, kink = params
# 3.1) Create empty list with all the tax payments
tau_list = []
# 4.1) Calculate the tax payment from every agent using a for loop
for x in range(N):
tau_list.append(value_of_choice_2(alphas_list[x], epsilons_list[x], tax_low, tax_high, kink)[2])
tau = - sum(tau_list)
return tau
###Output
_____no_output_____
###Markdown
Below is a calculation of the tax revenue given $m=0.4$, $m_H=0.6$ and $K=1000$.
###Code
params = [0.4,0.6,1000]
print(f'The tax revenue is: {-tax_kink_max(params):,.0f}')
###Output
The tax revenue is: 609,344
###Markdown
We will now use the `optimize.minimize` function to find the tax rates and the threshold for the high tax that maximizes the tax revenue.
###Code
# 1) Set the initial guess and the bounds for the tax rates
initial_guess = [0.5, 0.5, 200]
low_bound_tax = 0
high_bound_tax = 1-1e-5
bnds = ((low_bound_tax, high_bound_tax), (low_bound_tax, high_bound_tax), (0,100000))
# 2) Find the tax rate and kink that maximizes the tax revenue
draw(N=N, mu_alpha = mu_alpha_a, mu_epsilon = 0.9, sigma_alpha = 1000000, sigma_epsilon = 0.1, correlation = -0.5)
alphas_list = alphas.tolist()
epsilons_list = epsilons.tolist()
result = optimize.minimize(tax_kink_max, initial_guess, bounds=bnds)
print(f'mu_alpha = {mu_alpha_a:.0f}, mu_epsilon = 0.9, sigma_alpha = 1000000, sigma_epsilon = 0.1, correlation = -0.5 \n')
print(f'The maximizing low tax rate is: {result.x[0]*100:.2f} pct.')
print(f'The maximizing high tax rate is: {result.x[1]*100:.2f} pct.')
print(f'The maximizing kink rate is: {result.x[2]:,.0f}')
print(f'The tax revenue is: {-result.fun:,.0f} \n \n')
###Output
mu_alpha = 5000, mu_epsilon = 0.9, sigma_alpha = 1000000, sigma_epsilon = 0.1, correlation = -0.5
The maximizing low tax rate is: 100.00 pct.
The maximizing high tax rate is: 13.27 pct.
The maximizing kink rate is: 1,753
The tax revenue is: 837,629
###Markdown
The result above shows that the government should tax all income below 1,752.5 at 100 pct. and then tax any income above 1,752.5 at 13.27 pct. The tax revenue is 837,629 compared to 710,528 in the linear tax system. [Note that this optimization is not completely accurate, in particular for extreme values of $\gamma$ and $\Sigma$.] We will now plot the tax revenue as a function of the low tax rate, the high tax rate and the kink in three different plots, respectively.
###Code
# 1) Define a function which return a list of tax revenues for differente low tax rates given the high tax rate and the kink
def laffer_fun_low(tax_high,kink):
x=0
listen = []
xlist = []
increment = 0.01
while x<1:
xlist.append(x)
listen.append(calc_tau(alphas_list, epsilons_list, x, tax_high, kink)[0])
x += increment
return listen, xlist
# 2) Define a function which return a list of tax revenues for differente high tax rates given the low tax rate and the kink
def laffer_fun_high(tax_low,kink):
x=0
listen = []
xlist = []
increment = 0.01
while x<1:
xlist.append(x)
listen.append(calc_tau(alphas_list, epsilons_list, tax_low, x, kink)[0])
x += increment
return listen, xlist
# 3) Define a function which return a list of tax revenues for differente kinks given the low tax rate and the high tax
def laffer_fun_kink(tax_low,tax_high):
x=0
listen = []
xlist = []
increment = 100
while x<10000:
xlist.append(x)
listen.append(calc_tau(alphas_list, epsilons_list, tax_low, tax_high, x)[0])
x += increment
return listen, xlist
# 4) Make list containing the tax revenues for all low tax rate for differente high tax rates and kinks
x = laffer_fun_low(0,4752)[1]
low_1 = laffer_fun_low(result.x[1],result.x[2])[0]
low_2 = laffer_fun_low(result.x[1],result.x[2]*1.15)[0]
low_3 = laffer_fun_low(0,mu_alpha_a/2)[0]
# 5) Make list containing the tax revenues for all high tax rate for differente low tax rates and kinks
high_1 = laffer_fun_high(result.x[0],result.x[2])[0]
high_2 = laffer_fun_high(result.x[0],result.x[2]*1.15)[0]
high_3 = laffer_fun_high(0.5,mu_alpha_a/2)[0]
# 6) Make list containing the tax revenues for all kinks for differente low tax rates and high tax rates
x_kink = laffer_fun_kink(result.x[0],result.x[1])[1]
kink_1 = laffer_fun_kink(result.x[0],result.x[1])[0]
kink_2 = laffer_fun_kink(0.4,0.6)[0]
kink_3 = laffer_fun_kink(1,0)[0]
# 7) Print the plots
import matplotlib
fig, (ax1,ax2) = plt.subplots(1, 2, figsize=(15, 6))
ax1.get_yaxis().set_major_formatter(
matplotlib.ticker.FuncFormatter(lambda x, p: format(int(x), ',')))
ax2.get_yaxis().set_major_formatter(
matplotlib.ticker.FuncFormatter(lambda x, p: format(int(x), ',')))
# 8) Tax revenue as a function of the low tax rate
ax1.plot(x,low_1, color='r')
ax1.plot(x,low_2, color='b', linestyle=':')
ax1.plot(x,low_3, color='b', linestyle='--')
ax1.legend([f'High tax = {result.x[1]:.2f}, kink = {result.x[2]:.0f}',f'High tax = {result.x[1]:.2f}, kink = {result.x[2]*1.15:.0f}',f'High tax = 0.0, kink = {mu_alpha_a/2:.0f}'])
ax1.set_ylabel('Tax revenue')
ax1.set_xlabel('Tax rate')
ax1.set_title('Laffer curve for low tax rate')
# 9) Tax revenue as a function of the low tax rate
ax2.plot(x,high_1, color='r')
ax2.plot(x,high_2, color='b', linestyle=':')
ax2.plot(x,high_3, color='b', linestyle='--')
ax2.legend([f'Low tax = {result.x[0]:.2f}, kink = {result.x[2]:.2f}',f'Low tax = {result.x[0]:.2f}, kink = {result.x[2]*1.15:.2f}',f'Low tax = 0.5, kink = {mu_alpha_a/2:.2f}'])
ax2.set_ylabel('Tax revenue')
ax2.set_xlabel('Tax rate')
ax2.set_title('Laffer curve for high tax rate')
plt.show()
# 10) Tax revenue as a function of the kink
fig, ax3 = plt.subplots(1, 1, figsize=(15, 6))
ax3.get_yaxis().set_major_formatter(
matplotlib.ticker.FuncFormatter(lambda x, p: format(int(x), ',')))
ax3.plot(x_kink,kink_1, color='r')
ax3.plot(x_kink,kink_2, color='b', linestyle=':')
ax3.plot(x_kink,kink_3, color='b', linestyle='--')
ax3.set_ylabel('Tax revenue')
ax3.set_xlabel('Threshold for high tax (kink)')
ax3.set_title('Laffer curve for kink')
ax3.legend([f'Low tax = {result.x[0]:.2f}, high tax = {result.x[1]:.2f}',f'Low tax = 0.4, high tax = 0.6',f'Low tax = 1.0, high tax = 0.0'])
plt.show()
###Output
_____no_output_____
###Markdown
To see how the tax rates affect the agents' behavior, we have made a histogram below which plots the distribution of the tax payments and the taxable income as a function of the three tax parameters. The function also prints the tax revenue.
###Code
# 1) Make a function which plots a histogram over the tax payments and the taxable income
def histo(tax_low=1, tax_high=0, kink=4000):
plt.figure(figsize=(12,8))
graf = calc_tau(alphas_list, epsilons_list, tax_low, tax_high, kink)
tax_max = max(graf[2])
bins = np.linspace(0, tax_max, 50)
plt.hist(graf[1], bins=bins, color='r', alpha=0.5)
plt.hist(graf[2], bins=bins, color='b', alpha=0.5)
plt.legend(['Tax payment', 'Taxable income'])
plt.show()
    print(f'The tax revenue is {graf[0]:,.0f}')
widgets.interact(histo,
tax_low=widget_ml, tax_high=widget_mh, kink=widget_kink
);
###Output
_____no_output_____ |
dl_tf_BDU/5.AE/ML0120EN-5.2-Review-DBNMNIST.ipynb | ###Markdown
Deep Belief Network One problem with traditional multilayer perceptrons/artificial neural networks is that backpropagation can often lead to “local minima”. This is when your “error surface” contains multiple grooves and you fall into a groove that is not the lowest possible groove as you perform gradient descent. __Deep belief networks__ solve this problem by using an extra step called __pre-training__. Pre-training is done before backpropagation and can lead to an error rate not far from optimal. This puts us in the “neighborhood” of the final solution. Then we use backpropagation to slowly reduce the error rate from there. DBNs can be divided into two major parts. The first is a stack of multiple layers of Restricted Boltzmann Machines (RBMs) used to pre-train our network. The second is a feed-forward backpropagation network, which further refines the results from the RBM stack. Let's begin by importing the necessary libraries and utility functions to implement a Deep Belief Network.
###Code
#urllib is used to download the utils file from deeplearning.net
from urllib import request
response = request.urlopen('http://deeplearning.net/tutorial/code/utils.py')
content = response.read()
target = open('utils.py', 'wb')
target.write(content)
target.close()
#Import the math function for calculations
import math
#Tensorflow library. Used to implement machine learning models
import tensorflow as tf
#Numpy contains helpful functions for efficient mathematical calculations
import numpy as np
#Image library for image manipulation
from PIL import Image
#import Image
#Utils file
from utils import tile_raster_images
###Output
_____no_output_____
###Markdown
Constructing the Layers of RBMs First of all, let's detail Restricted Boltzmann Machines. What are Restricted Boltzmann Machines? RBMs are shallow neural nets that learn to reconstruct data by themselves in an unsupervised fashion. How do they work? Simply put, an RBM takes the inputs and translates them to a set of numbers that represents them. Then, these numbers can be translated back to reconstruct the inputs. Through several forward and backward passes, the RBM is trained, and a trained RBM can reveal which features are the most important ones when detecting patterns. Why are RBMs important? They can automatically extract __meaningful__ features from a given input. What's the RBM's structure? It only possesses two layers: a visible input layer, and a hidden layer where the features are learned. To implement DBNs in TensorFlow, we will implement a class for the Restricted Boltzmann Machine (RBM). The class below implements an intuitive way of creating and using RBMs.
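In matrix form, the single-step contrastive-divergence updates that the `train` method below applies are roughly the following (a sketch of what the code computes, with $\eta$ the learning rate and $n$ the batch size):
$$h_0 \sim \sigma(v_0 W + b_h), \qquad v_1 \sim \sigma(h_0 W^T + b_v), \qquad h_1 = \sigma(v_1 W + b_h)$$
$$W \leftarrow W + \frac{\eta}{n}\left(v_0^T h_0 - v_1^T h_1\right), \qquad b_v \leftarrow b_v + \eta\,\overline{(v_0 - v_1)}, \qquad b_h \leftarrow b_h + \eta\,\overline{(h_0 - h_1)}$$
The reconstruction error reported after each epoch is the mean squared difference between $v_0$ and its reconstruction $v_1$.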
###Code
#Class that defines the behavior of the RBM
class RBM(object):
def __init__(self, input_size, output_size, epochs=5, learning_rate=1, batchsize=100):
#Defining the hyperparameters
self._input_size = input_size #Size of input
self._output_size = output_size #Size of output
self.epochs = epochs #Amount of training iterations
self.learning_rate = learning_rate #The step used in gradient descent
self.batchsize = batchsize #The size of how much data will be used for training per sub iteration
#Initializing weights and biases as matrices full of zeroes
self.w = np.zeros([input_size, output_size], np.float32) #Creates and initializes the weights with 0
self.hb = np.zeros([output_size], np.float32) #Creates and initializes the hidden biases with 0
self.vb = np.zeros([input_size], np.float32) #Creates and initializes the visible biases with 0
#Fits the result from the weighted visible layer plus the bias into a sigmoid curve
def prob_h_given_v(self, visible, w, hb):
#Sigmoid
return tf.nn.sigmoid(tf.matmul(visible, w) + hb)
#Fits the result from the weighted hidden layer plus the bias into a sigmoid curve
def prob_v_given_h(self, hidden, w, vb):
return tf.nn.sigmoid(tf.matmul(hidden, tf.transpose(w)) + vb)
#Generate the sample probability
def sample_prob(self, probs):
return tf.nn.relu(tf.sign(probs - tf.random_uniform(tf.shape(probs))))
#Training method for the model
def train(self, X):
#Create the placeholders for our parameters
_w = tf.placeholder("float", [self._input_size, self._output_size])
_hb = tf.placeholder("float", [self._output_size])
_vb = tf.placeholder("float", [self._input_size])
prv_w = np.zeros([self._input_size, self._output_size], np.float32) #Creates and initializes the weights with 0
prv_hb = np.zeros([self._output_size], np.float32) #Creates and initializes the hidden biases with 0
prv_vb = np.zeros([self._input_size], np.float32) #Creates and initializes the visible biases with 0
cur_w = np.zeros([self._input_size, self._output_size], np.float32)
cur_hb = np.zeros([self._output_size], np.float32)
cur_vb = np.zeros([self._input_size], np.float32)
v0 = tf.placeholder("float", [None, self._input_size])
#Initialize with sample probabilities
h0 = self.sample_prob(self.prob_h_given_v(v0, _w, _hb))
v1 = self.sample_prob(self.prob_v_given_h(h0, _w, _vb))
h1 = self.prob_h_given_v(v1, _w, _hb)
#Create the Gradients
positive_grad = tf.matmul(tf.transpose(v0), h0)
negative_grad = tf.matmul(tf.transpose(v1), h1)
#Update learning rates for the layers
update_w = _w + self.learning_rate *(positive_grad - negative_grad) / tf.to_float(tf.shape(v0)[0])
update_vb = _vb + self.learning_rate * tf.reduce_mean(v0 - v1, 0)
update_hb = _hb + self.learning_rate * tf.reduce_mean(h0 - h1, 0)
#Find the error rate
err = tf.reduce_mean(tf.square(v0 - v1))
#Training loop
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
#For each epoch
for epoch in range(self.epochs):
#For each step/batch
for start, end in zip(range(0, len(X), self.batchsize),range(self.batchsize,len(X), self.batchsize)):
batch = X[start:end]
#Update the rates
cur_w = sess.run(update_w, feed_dict={v0: batch, _w: prv_w, _hb: prv_hb, _vb: prv_vb})
cur_hb = sess.run(update_hb, feed_dict={v0: batch, _w: prv_w, _hb: prv_hb, _vb: prv_vb})
cur_vb = sess.run(update_vb, feed_dict={v0: batch, _w: prv_w, _hb: prv_hb, _vb: prv_vb})
prv_w = cur_w
prv_hb = cur_hb
prv_vb = cur_vb
error=sess.run(err, feed_dict={v0: X, _w: cur_w, _vb: cur_vb, _hb: cur_hb})
print('Epoch: {} --> Reconstruction error={}'.format(epoch, error))
self.w = prv_w
self.hb = prv_hb
self.vb = prv_vb
#Create expected output for our DBN
def rbm_outpt(self, X):
input_X = tf.constant(X)
_w = tf.constant(self.w)
_hb = tf.constant(self.hb)
out = tf.nn.sigmoid(tf.matmul(input_X, _w) + _hb)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
return sess.run(out)
###Output
_____no_output_____
###Markdown
The MNIST Dataset We will be using the MNIST dataset, a commonly used benchmarking dataset comprised of handwritten digits. We will load the labels using "One Hot Encoding", while the images themselves come in as vectors of pixel intensities with values varying from 0 to 1.
###Code
#Getting the MNIST data provided by Tensorflow
from tensorflow.examples.tutorials.mnist import input_data
#Loading in the mnist data
mnist = input_data.read_data_sets("../../data/MNIST/", one_hot=True)
trX, trY, teX, teY = mnist.train.images, mnist.train.labels, mnist.test.images,\
mnist.test.labels
###Output
Extracting ../../data/MNIST/train-images-idx3-ubyte.gz
Extracting ../../data/MNIST/train-labels-idx1-ubyte.gz
Extracting ../../data/MNIST/t10k-images-idx3-ubyte.gz
Extracting ../../data/MNIST/t10k-labels-idx1-ubyte.gz
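###Markdown
The `tile_raster_images` helper imported above is handy for eyeballing the data. As a small added sketch (assuming the imports and `trX` from the cells above), we can render a few training digits as one tiled image:
###Code
#Render the first 25 training digits (28x28 pixels each) as a single 5x5 tiled image
image = Image.fromarray(tile_raster_images(X=trX[0:25], img_shape=(28, 28), tile_shape=(5, 5), tile_spacing=(1, 1)))
image
###Output
_____no_output_____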
###Markdown
Creating the Deep Belief Network With the RBM class created and the MNIST dataset loaded in, we can start creating the DBN. For our example, we are going to use 3 RBMs: one with 500 hidden units, the second one with 200 and the last one with 50. We are generating a **deep hierarchical representation of the training data**. The cell below accomplishes this:
###Code
RBM_hidden_sizes = [500, 200 , 50 ] #create 3 layers of RBM with sizes 500, 200 and 50
#Since we are training, set input as training data
inpX = trX
#Create list to hold our RBMs
rbm_list = []
#Size of inputs is the number of inputs in the training set
input_size = inpX.shape[1]
#For each RBM we want to generate
for i, size in enumerate(RBM_hidden_sizes):
print('RBM: {} {} --> {}'.format(i, input_size, size))
rbm_list.append(RBM(input_size, size))
input_size = size
###Output
RBM: 0 784 --> 500
RBM: 1 500 --> 200
RBM: 2 200 --> 50
###Markdown
RBM Train We will now begin the pre-training step and train each of the RBMs in our stack by individually calling the train function, getting the current RBM's output and using it as the next RBM's input.
###Code
#For each RBM in our list
for rbm in rbm_list:
print('New RBM:')
#Train a new one
rbm.train(inpX)
#Return the output layer
inpX = rbm.rbm_outpt(inpX)
###Output
New RBM:
Epoch: 0 --> Reconstruction error=0.06143786013126373
Epoch: 1 --> Reconstruction error=0.052206024527549744
Epoch: 2 --> Reconstruction error=0.0491536483168602
Epoch: 3 --> Reconstruction error=0.047456011176109314
Epoch: 4 --> Reconstruction error=0.046113960444927216
New RBM:
Epoch: 0 --> Reconstruction error=0.03535943850874901
Epoch: 1 --> Reconstruction error=0.03123578242957592
Epoch: 2 --> Reconstruction error=0.029440326616168022
Epoch: 3 --> Reconstruction error=0.028372613713145256
Epoch: 4 --> Reconstruction error=0.027670882642269135
New RBM:
Epoch: 0 --> Reconstruction error=0.053318191319704056
Epoch: 1 --> Reconstruction error=0.05042925477027893
Epoch: 2 --> Reconstruction error=0.04974760860204697
Epoch: 3 --> Reconstruction error=0.04935634136199951
Epoch: 4 --> Reconstruction error=0.0488370917737484
###Markdown
Now we can use the learned representation of the input data for a supervised prediction task, e.g. with a linear classifier. Specifically, we use the output of the last hidden layer of the DBN to classify digits using a shallow Neural Network. Neural Network The class below implements the Neural Network that makes use of the pre-trained RBMs from above.
###Code
import numpy as np
import math
import tensorflow as tf
class NN(object):
def __init__(self, sizes, X, Y):
#Initialize hyperparameters
self._sizes = sizes
self._X = X
self._Y = Y
self.w_list = []
self.b_list = []
self._learning_rate = 1.0
self._momentum = 0.0
self._epoches = 10
self._batchsize = 100
input_size = X.shape[1]
#initialization loop
for size in self._sizes + [Y.shape[1]]:
#Define upper limit for the uniform distribution range
max_range = 4 * math.sqrt(6. / (input_size + size))
#Initialize weights through a random uniform distribution
self.w_list.append(
np.random.uniform( -max_range, max_range, [input_size, size]).astype(np.float32))
#Initialize bias as zeroes
self.b_list.append(np.zeros([size], np.float32))
input_size = size
#load data from rbm
def load_from_rbms(self, dbn_sizes,rbm_list):
#Check if expected sizes are correct
assert len(dbn_sizes) == len(self._sizes)
for i in range(len(self._sizes)):
#Check if for each RBN the expected sizes are correct
assert dbn_sizes[i] == self._sizes[i]
#If everything is correct, bring over the weights and biases
for i in range(len(self._sizes)):
self.w_list[i] = rbm_list[i].w
self.b_list[i] = rbm_list[i].hb
#Training method
def train(self):
#Create placeholders for input, weights, biases, output
_a = [None] * (len(self._sizes) + 2)
_w = [None] * (len(self._sizes) + 1)
_b = [None] * (len(self._sizes) + 1)
_a[0] = tf.placeholder("float", [None, self._X.shape[1]])
y = tf.placeholder("float", [None, self._Y.shape[1]])
        #Define variables and activation function
for i in range(len(self._sizes) + 1):
_w[i] = tf.Variable(self.w_list[i])
_b[i] = tf.Variable(self.b_list[i])
for i in range(1, len(self._sizes) + 2):
_a[i] = tf.nn.sigmoid(tf.matmul(_a[i - 1], _w[i - 1]) + _b[i - 1])
#Define the cost function
cost = tf.reduce_mean(tf.square(_a[-1] - y))
#Define the training operation (Momentum Optimizer minimizing the Cost function)
train_op = tf.train.MomentumOptimizer(
self._learning_rate, self._momentum).minimize(cost)
#Prediction operation
predict_op = tf.argmax(_a[-1], 1)
#Training Loop
with tf.Session() as sess:
#Initialize Variables
sess.run(tf.global_variables_initializer())
#For each epoch
for i in range(self._epoches):
#For each step
for start, end in zip(
range(0, len(self._X), self._batchsize), range(self._batchsize, len(self._X), self._batchsize)):
#Run the training operation on the input data
sess.run(train_op, feed_dict={
_a[0]: self._X[start:end], y: self._Y[start:end]})
for j in range(len(self._sizes) + 1):
#Retrieve weights and biases
self.w_list[j] = sess.run(_w[j])
self.b_list[j] = sess.run(_b[j])
print("Accuracy rating for epoch " + str(i) + ": " + str(np.mean(np.argmax(self._Y, axis=1) ==
sess.run(predict_op, feed_dict={_a[0]: self._X, y: self._Y}))))
###Output
_____no_output_____
###Markdown
Now let's execute our code:
###Code
nNet = NN(RBM_hidden_sizes, trX, trY)
nNet.load_from_rbms(RBM_hidden_sizes,rbm_list)
nNet.train()
###Output
Accuracy rating for epoch 0: 0.559854545455
Accuracy rating for epoch 1: 0.717472727273
Accuracy rating for epoch 2: 0.800781818182
Accuracy rating for epoch 3: 0.846836363636
Accuracy rating for epoch 4: 0.874945454545
Accuracy rating for epoch 5: 0.893654545455
Accuracy rating for epoch 6: 0.903690909091
Accuracy rating for epoch 7: 0.910690909091
Accuracy rating for epoch 8: 0.916072727273
Accuracy rating for epoch 9: 0.920254545455
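###Markdown
The accuracy above is measured on the training data only; the test arrays `teX`/`teY` loaded earlier are never used. As a small added check (a sketch, not part of the original notebook), we can reuse the fine-tuned weights stored in `nNet.w_list`/`nNet.b_list` to run a forward pass on the test set:
###Code
#Forward pass over the test set using the trained weights and biases (all TF1 constants, so no variable init needed)
test_input = tf.constant(teX, dtype=tf.float32)
activation = test_input
for w, b in zip(nNet.w_list, nNet.b_list):
    activation = tf.nn.sigmoid(tf.matmul(activation, tf.constant(w)) + tf.constant(b))
test_predictions = tf.argmax(activation, 1)
with tf.Session() as sess:
    test_accuracy = np.mean(np.argmax(teY, axis=1) == sess.run(test_predictions))
print("Accuracy rating for the test set: " + str(test_accuracy))
###Output
_____no_output_____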
|
doc/ipython_notebooks_src/tutorial-mayavi-mlab-visualising-magnetisation.ipynb | ###Markdown
tutorial-mayavi-mlab-visualising-magnetisation Author: Mark Vousden Last Modified: 28/03/2014, tested on Mark's So'ton Desktop and his SP35P2Pro This notebook is aimed at users of finmag who are not familiar with Mayavi's scripting capabilities and who want to streamline the process of creating pretty graphics. In this notebook we will create some sample data and use Mayavi to display, render and save the image, all without touching your mouse (hopefully!). We will be producing plots with glyphs in this tutorial, but the procedure is very similar for surfaces and other such objects. Much of this information has been obtained from the well-written Mayavi documentation[1].
###Code
import dolfin as df
import finmag
from mayavi import __version__ as mayavi_version
from mayavi import mlab
import IPython.display
import numpy as np
import os
title = "tutorial-visualising-magnetisation-using-mayavi-mlab"
# IPYTHON_TEST_IGNORE_OUTPUT
print "Mayavi Version: " + mayavi_version
###Output
Mayavi Version: 4.1.0
###Markdown
Firstly, lets create some simulation data. Pretty standard stuff.
###Code
# Create simulation object with single skyrmion magnetisation.
mesh = df.RectangleMesh(-100, -100, 100, 100, 20, 20)
sim = finmag.Simulation(mesh, 1e5, unit_length=1e-9)
sim.initialise_skyrmions(70)
# Obtain locations for sampling. "x", "y" and "z" are locations of vectors
# in 3D space. We set "z" to zero everywhere to show an example in 2D space
# for simplicity.
c = mesh.coordinates().transpose()
x, y = c[:]
z = np.zeros_like(x)
# Obtain vector directions. "u", "v" and "w" define the directions of the
# vectors given by the "x", "y" and "z" co-ordinates.
u = np.zeros_like(x)
v = np.zeros_like(x)
w = np.zeros_like(x)
for zI in xrange(len(x)):
u[zI], v[zI], w[zI] = sim.llg._m_field.f(x[zI], y[zI])
###Output
[2014-09-12 15:59:31] INFO: Finmag logging output will be written to file: '/home/mb4e10/finmag/doc/ipython_notebooks_src/unnamed.log' (any old content will be overwritten).
[2014-09-12 15:59:31] DEBUG: Creating DataWriter for file 'unnamed.ndt'
[2014-09-12 15:59:31] INFO: Creating Sim object 'unnamed' (rank=0/1).
[2014-09-12 15:59:31] INFO: <Mesh of topological dimension 2 (triangles) with 441 vertices and 800 cells, ordered>
[2014-09-12 15:59:31] DEBUG: Creating LLG object.
###Markdown
So now we have some data that precisely describes our field of vectors. Now for the interesting part! We are going to draw some vectors and render them to the "scene" (an object that Mayavi uses "under the hood" of mlab). Whenever data is modified using mlab, the scene is updated automatically for simplicity. So, let's draw some vectors:
###Code
# Set offscreen rendering. This will still open a window, but nothing will be shown there.
mlab.options.offscreen = True
# Plot 3D vector field, and assign a reference scale to each vector equal to
# its out-of-plane component. Also use a colourmap that isn't hideous[2], though
# you can use your own custom colormap via the 'color' kwarg.
vectors = mlab.quiver3d(x, y, z, u, v, w, scalars=w, colormap="PuOr")
# We temporarily disable rendering to speed things up. This will be enabled again below.
# See http://docs.enthought.com/mayavi/mayavi/tips.html#accelerating-a-mayavi-script
vectors.scene.disable_render = True
# Colour vectors by the reference scale we included.
vectors.glyph.color_mode = 'color_by_scalar'
# Change the vectors to cones, because cones are awesome. NB <!>: I'm sure that
# there is a more elegant way of doing this, but I haven't found it yet.
cone_source = vectors.glyph.glyph_source.glyph_dict["cone_source"]
vectors.glyph.glyph_source.glyph_source = cone_source
# Resize the cones.
vectors.glyph.glyph_source.glyph_source.height = 0.4
vectors.glyph.glyph_source.glyph_source.radius = 0.2
# Reposition the cones relative to one-another (so that the result is flat).
# Strange that this isn't the default setting, but there you go.
vectors.glyph.glyph_source.glyph_source.center = np.zeros(3)
# Set the resolution of objects.
vectors.glyph.glyph_source.glyph_source.resolution *= 6
# Re-enable rendering
vectors.scene.disable_render = False
# We also disable anti-aliasing for speed. This should not be done for high-quality plots.
# See http://docs.enthought.com/mayavi/mayavi/tips.html#accelerating-a-mayavi-script
vectors.scene.anti_aliasing_frames = 0
###Output
_____no_output_____
###Markdown
Now the cones are in the scene, we must look at them from a nice angle. Once there, we can save our image and enter interactive mode.
###Code
# Choose a useful camera angle. Angles given in degrees. Lots of options for
# this function, would recommend taking a look at the documentation especially
# if you want to create "rotating camera" animations.
mlab.view(azimuth=90., elevation=0., distance=350.)
# Save the image. The size, output format (deduced by the filename) and
# magnification can be customised here.
mlab.savefig(title + "-output.png")
# Enable interactivity.
#mlab.show()
###Output
_____no_output_____
###Markdown
This is what the generated image looks like (it would be slightly nicer with anti-aliasing enabled, but it already looks quite good).
###Code
IPython.display.Image(title + "-output.png")
###Output
_____no_output_____ |
CNN/CNN_implementation_1.ipynb | ###Markdown
Power Quality Classification using CNN This notebook focuses on developing a Convolutional Neural Network which classifies a power signal into its respective power quality condition. The dataset used here contains signals which belong to one of 5 classes (power quality conditions). The sampling rate of this data is 128, which means that each signal is characterized by 128 data points. The signals provided are in the time domain. The power quality condition with respect to the output class value is as follows:
1 - Normal
2 - 3rd harmonic wave
3 - 5th harmonic wave
4 - Voltage dip
5 - Transient
###Code
#importing the required libraries
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import datetime
import visualkeras
from scipy.fft import fft,fftfreq
from sklearn.preprocessing import StandardScaler
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Activation
from tensorflow.keras.optimizers import Adam
#loading the dataset using pandas
data1 = pd.read_csv("../Dataset/Train/Voltage_L1_DataSet1.csv")
out1 = pd.read_csv("../Dataset/Train/OutputFor_DataSet1.csv")
data2 = pd.read_csv("../Dataset/Test/Voltage_L1_DataSet2.csv")
out2 = pd.read_csv("../Dataset/Test/OutputFor_DataSet2.csv")
print("data1",data1.shape)
print("out1",out1.shape)
print("data2",data2.shape)
print("out2",out2.shape)
###Output
data1 (11899, 128)
out1 (5999, 1)
data2 (5999, 128)
out2 (5999, 1)
###Markdown
Data Preprocessing This segment of the notebook contains all the preprocessing steps which are performed on the data. Data cleaning
###Code
#dropna() function is used to remove all those rows which contains NA values
data1.dropna(axis=0,inplace=True)
#shape of the data frame after dropping the rows containing NA values
data1.shape
#here we are constructing the array which will finally contain the column names
header =[]
for i in range(1,data1.shape[1]+1):
header.append("Col"+str(i))
#assigning the column name array to the respectinve dataframes
data1.columns = header
data2.columns = header
data1.head()
data2.head()
#now we are combining the two dataframes to make a final dataframe
data = data1.append(data2, ignore_index = True)
data.head()
data.shape
#here we are giving a name to the output column
header_out = ["output"]
out1.columns = header_out
out2.columns = header_out
out2.head()
#now we are combining the output columns
output = out1.append(out2, ignore_index = True)
output.head()
output.shape
#now we are appending the output column to the original dataframe which contains the power signals
data['output'] = output
data.head()
data_arr = data.to_numpy()
###Output
_____no_output_____
###Markdown
Data transformation The data transformation steps employed here are as follows:1) Fourier Transform2) Normalization
###Code
#In this segment we are plotting one wave from each class after applying fourier transformation
yf = np.abs(fft(data_arr[0][0:128]))
xf = fftfreq(128,1/128)
plt.plot(xf, yf)
plt.show()
print("class",data_arr[0][128], "Normal wave")
yf = np.abs(fft(data_arr[1][0:128]))
xf = fftfreq(128,1/128)
plt.plot(xf, yf)
plt.show()
print("class",data_arr[1][128], "3rd harmonic wave")
yf = np.abs(fft(data_arr[3][0:128]))
xf = fftfreq(128,1/128)
plt.plot(xf, yf)
plt.show()
print("class",data_arr[3][128], "5th harmonic wave")
yf = np.abs(fft(data_arr[6][0:128]))
xf = fftfreq(128,1/128)
plt.plot(xf, yf)
plt.show()
print("class",data_arr[6][128], "Voltage dip")
yf = np.abs(fft(data_arr[8][0:128]))
xf = fftfreq(128,1/128)
plt.plot(xf, yf)
plt.show()
print("class",data_arr[8][128], "Transient wave")
#here we are splitting the dataset in the ratio of 60%,20%,20% (training set,validation set, test set)
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(data.loc[:,data.columns != 'output'],data['output'],test_size=0.2)
x_train, x_val, y_train, y_val = train_test_split(x_train, y_train, test_size=0.25, random_state=42)
#the Fourier transform above was only used for visualization; here we convert the train/test/validation splits to NumPy arrays
x_train = x_train.to_numpy()
x_test = x_test.to_numpy()
x_val = x_val.to_numpy()
#here we are performing normalization
transform = StandardScaler()
x_train_tr = transform.fit_transform(x_train)
x_test_tr = transform.fit_transform(x_test)
x_val_tr = transform.fit_transform(x_val)
###Output
_____no_output_____
###Markdown
Model creation and training
###Code
# get_dummies function is used here to perform one hot encoding of the y_* numpy arrays
y_train_hot = pd.get_dummies(y_train)
y_test_hot = pd.get_dummies(y_test)
y_val_hot = pd.get_dummies(y_val)
print("Training",x_train.shape)
print(y_train_hot.shape)
print("Validation",x_val.shape)
print(y_val_hot.shape)
print("Test",x_test.shape)
print(y_test_hot.shape)
#Reshaping the normalized data so that it can be used in a 1D CNN
x_train_re = x_train_tr.reshape(x_train_tr.shape[0],x_train_tr.shape[1], 1)
x_test_re = x_test_tr.reshape(x_test_tr.shape[0],x_test_tr.shape[1], 1)
x_val_re = x_val_tr.reshape(x_val_tr.shape[0],x_val_tr.shape[1], 1)
x_train_re.shape
#importing required modules for working with CNN
import tensorflow as tf
from tensorflow.keras.layers import Conv1D
from tensorflow.keras.layers import Convolution1D, ZeroPadding1D, MaxPooling1D, BatchNormalization, Activation, Dropout, Flatten, Dense
from tensorflow.keras.regularizers import l2
#initializing required parameters for the model
batch_size = 64
num_classes = 5
epochs = 20
input_shape=(x_train.shape[1], 1)
model = Sequential()
model.add(Conv1D(128, kernel_size=3,padding = 'same',activation='relu', input_shape=input_shape))
model.add(BatchNormalization())
model.add(MaxPooling1D(pool_size=(2)))
model.add(Conv1D(128,kernel_size=3,padding = 'same', activation='relu'))
model.add(BatchNormalization())
model.add(MaxPooling1D(pool_size=(2)))
model.add(Flatten())
model.add(Dense(16, activation='relu'))
model.add(Dense(num_classes, activation='softmax'))
model.summary()
#compiling the model
log_dir = "logs1/fit/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)
model.compile(loss=tf.keras.losses.categorical_crossentropy,
optimizer='adam',
metrics=['accuracy'])
#training the model
history = model.fit(x_train_re, y_train_hot, batch_size=batch_size, epochs=epochs, validation_data=(x_val_re, y_val_hot), callbacks=[tensorboard_callback])
%load_ext tensorboard
%tensorboard --logdir logs1/fit
print(model.metrics_names)
###Output
['loss', 'accuracy']
###Markdown
Model evaluation
###Code
np.mean(history.history['accuracy'])
pred_acc = model.evaluate(x_test_re,y_test_hot)
print("Test accuracy is {}".format(pred_acc))
#model.save('CNN_model_data1.h5')
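# --- Added sketch (not in the original notebook): per-class evaluation ---
# Assumes the trained model and the reshaped/one-hot test arrays defined above;
# confusion_matrix and classification_report come from scikit-learn.
from sklearn.metrics import confusion_matrix, classification_report
y_pred_classes = np.argmax(model.predict(x_test_re), axis=1)
y_true_classes = np.argmax(y_test_hot.to_numpy(), axis=1)
print(confusion_matrix(y_true_classes, y_pred_classes))
print(classification_report(y_true_classes, y_pred_classes))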
###Output
_____no_output_____ |
notebooks/CarlosFrederico/aula25/Lesson_25.ipynb | ###Markdown
1 - Introduction We learned that the **mean** takes into account each value in the distribution, and we saw that it's fairly **easy to define the mean algebraically**. These two properties make the mean far superior to the median. The **median comes in handy**, however, **when it's not possible or appropriate to compute the mean**.In this mission we'll explore a couple of cases where neither the **mean** nor the **median** are suitable for finding an average value, and we'll learn an alternative summary metric.We'll still be working with the same data set on house sale prices that we used in the last two missions:

| | Order | PID | MS SubClass | MS Zoning | Lot Frontage | Lot Area | Street | Alley | Lot Shape | Sale Condition | SalePrice |
|-------|-----|-------------|-----------|--------------|----------|--------|-------|-----------|----------------|-----------|--------|
| 0 | 1 | 526301100 | 20 | RL | 141.0 | 131770 | Pave | NaN | WD | Normal | 215000 |
| 1 | 2 | 526350040 | 20 | RH | 80.0 | 11622 | Pave | NaN | WD | Normal | 105000 |
| 2 | 3 | 526351010 | 20 | RL | 81.0 | 14267 | Pave | NaN | WD | Normal | 172000 |
| 3 | 4 | 526353030 | 20 | RL | 93.0 | 11160 | Pave | NaN | WD | Normal | 244000 |
| 4 | 5 | 527105010 | 60 | RL | 74.0 | 13830 | Pave | NaN | WD | Normal | 189900 |

Let's get familiar with a few parts of the data set which we're going to explore in this mission.**Exercise**- Read in the TSV file (**AmesHousing_1.txt**) as a pandas **DataFrame** and save it to a variable named **houses.**- Explore the **Land Slope** column to find its scale of measurement. Refer to the [documentation](https://s3.amazonaws.com/dq-content/307/data_description.txt) to find the data dictionary of this column. - Assign your answer as a string to the variable **scale_land**. Depending on the scale of measurement, choose between these following strings: **'nominal'**, **'ordinal'**, **'interval'**, and **'ratio'**.- Explore the **Roof Style** variable and find its scale of measurement. Assign your answer as a string to a variable named **scale_roof** (choose between the four strings listed above). - What measure of average would you choose for this column?- Explore the **Kitchen AbvGr** variable and determine whether it's continuous or discrete. Assign your answer as a string to a variable named **kitchen_variable** — the string should be either **'continuous'**, or **'discrete'**.
###Code
import pandas as pd
houses = pd.read_csv('AmesHousing_1.txt', sep='\t')
scale_land = 'ordinal'
scale_roof = 'nominal'
kitchen_variable = 'discrete'
###Output
_____no_output_____
###Markdown
2 - The Mode for Ordinal Variables In the last exercise, we found that the **Land Slope** variable is **ordinal**. You may have also found from your exploration that the values of this variable are represented using words:
###Code
houses["Land Slope"].unique()
###Output
_____no_output_____
###Markdown
As you may have already found in the [documentation](https://s3.amazonaws.com/dq-content/307/data_description.txt), **'Gtl'** means gentle slope, **'Mod'** means moderate slope, and **'Sev'** stands for 'Severe slope'.**We can't compute the mean** for this variable because its values are words, not numbers. Remember that the definition of the mean is $\displaystyle \frac{\sum_{i=1}^{n}}{n}$, so we can't compute the $\displaystyle \sum_{i=1}^{n} x_i$ part if the values are words. We learned previously that the **median** is a good workaround for **ordinal data**, but the values of this ordinal variable are not numbers. Can we still compute the **median**?If we sort the values of the **Land Slope** variable, we can find that the middle two values are **['Gtl', 'Gtl']** (the variable has an even number of values). Although we can't take their mean, it's intuitively clear that the average of two identical values is one of those values, so the **median** value should be **'Gtl'**.However, if the two middle values were **['Gtl', 'Mod']**, then it wouldn't be clear at all what to choose for the **median**. In cases like this, one workaround for finding an average value is to measure the most frequent value in the distribution. For the **Land Slope** variable, we can see that the value **'Gtl'** has the greatest frequency:
###Code
houses['Land Slope'].value_counts()
###Output
_____no_output_____
###Markdown
We call the most frequent value in the distribution **the mode**. So the mode of the **Land Slope** variable is **'Gtl'**. In other words, the typical house has a **gentle slope**. Very importantly, notice that the mode is the most frequent value in the distribution, not the frequency of that value — so **the mode is 'Gtl', not 2789**.Just like for the **median**, there's no standard notation for the **mode**. It's also worth noting that the **mode is not defined algebraically.****Exercise**- Write a function that takes in an array of values (including strings) and returns the **mode** of that array. Inside the function's definition: - Initialize an empty dictionary. - Loop through the values of the array that the function takes in. For each iteration of the loop: - If the value is already a key in the dictionary we initialized before the loop, increment its dictionary value by 1. - Else, define the value as a key in the dictionary, and set the initial dictionary value to 1. - You should end up with a dictionary containing the unique values of the array as dictionary keys and the count for each unique value as a dictionary value: **example_dictionary = {'unique_value1': 230, 'unique_value2': 23, 'unique_value3': 328}.** - Return the key with the highest count (this key is the mode of the array). For instance, for the **example_dictionary** above, you should return the string **unique_value3.** - You can use this [technique](https://stackoverflow.com/questions/268272/getting-key-with-maximum-value-in-dictionary/280156280156) to return the key corresponding to the highest value.- Using the function you wrote, measure the **mode** of the **Land Slope** variable, and assign the result to a variable named **mode_function.**- Using the **Series.mode()** method, measure the **mode** of the **Land Slope** variable, and assign the result to a variable named **mode_method.**- Compare the two modes using the == operator to check whether they are the same and assign the result of the comparison to a variable named **same.**
###Code
def get_mode(iterable):
values = {}
for it in iterable:
if it in values.keys():
values[it] += 1
else:
values[it] = 1
return max(values, key=values.get)
same = get_mode(houses['Land Slope']) == houses['Land Slope'].mode()[0]
same
houses['Land Slope'].value_counts()
###Output
_____no_output_____
###Markdown
3 - The Mode for Nominal Variables In the previous section, we learned that the **mode** is **ideal for ordinal data** represented **using words**. The mode is also a good choice for nominal data. Let's consider the **Roof Style** variable, which is measured on a **nominal scale** and describes the roof type of a house:
###Code
houses['Roof Style'].value_counts()
###Output
_____no_output_____
###Markdown
We obviously can't compute the **mean** for this variable because the **values are words**. Even if they were coded as numbers, it'd be completely wrong to compute the mean because in the case of nominal variables the numbers describe qualities, not quantities.In the previous section, we made the case that we could compute the **mean** for **ordinal variables** if the values are numbers. This reasoning doesn't extend to nominal variables because they don't describe quantities, like ordinal variables do.Because the **Roof Style** variable is **nominal**, there's also no inherent order of the values in the distribution. This means that we can't sort the values in an ascending or descending order. The first step in computing the **median** is to sort the values in ascending order, which means **we can't compute the median** for the **Roof Style** variable.**Exercise**- Edit the function you wrote to return both the **mode** of an array and the dictionary containing the count for each unique value in the array.- Use the edited function to return, at the same time, the mode of the **Roof Style** variable and the dictionary containing the counts for each unique value. - Assign the mode to a variable named **'mode'.** - Assign the dictionary to a variable named **value_counts.**- Inspect the content of **value_counts** and compare it to the value count we'd get by using the **Series.value_counts()** method. - This exercise is meant to give you a better understanding of what happens under the hood when we run **Series.value_counts().**
###Code
def mode(array):
counts = {}
for value in array:
if value in counts:
counts[value] += 1
else:
counts[value] = 1
return max(counts, key = counts.get), counts
mode(houses['Roof Style'])
mode, value_counts = mode(houses['Roof Style'])
###Output
_____no_output_____
###Markdown
4 - The Mode for Discrete Variables There are some cases where computing the **mean** and the **median** is possible and correct, but the **mode** is preferred nonetheless. **This is sometimes the case for discrete variables.**To remind you from the first course, variables measured on interval or ratio scales can also be classified as **discrete** or **continuous**. A variable is discrete if there's no possible intermediate value between any two adjacent values. Let's take for instance the **Kitchen AbvGr** variable, which describes the number of kitchens in a house:
###Code
houses['Kitchen AbvGr'].value_counts().sort_index()
###Output
_____no_output_____
###Markdown
Let's say we need to write an article about the house market in Ames, Iowa, and our main target audience are regular adult citizens from Ames. Among other aspects, we want to describe how many kitchens the typical house has. If we take the mean, we'd need to write that the typical house has 1.04 kitchens. This wouldn't make much sense for the regular reader, who expects the number of kitchens to be a whole number, not a decimal.The median is 1 — a value much easier to grasp by non-technical people compared to 1.04. But this is a lucky case because the middle two values in the sorted distribution could have been [1,2], and then the median would have been 1.5. The mode is a safer choice for cases like this because it guarantees a whole number from the distribution.The mode of the **Kitchen AbvGr** variable is 1. When we report this result, we should avoid technical jargon (like "mode" or "variable") and simply say that the typical house on the market has one kitchen.Note that the **mode** is also guaranteed to be a value from the distribution (this holds true for any kind of variable). This doesn't apply to the **mean** or the **median**, which can return values that are not present in the actual distribution. For instance, the **mean** of the **Kitchen AbvGr** is 1.04, but the value 1.04 is not present in the distribution.The **mean** and the **median** generally summarize the distribution of a discrete variable much better than the **mode**, and you should use the **mode** only if you need to communicate your results to a non-technical audience.**Exercise**- Explore the **Bedroom AbvGr** variable, and find whether it's **discrete** or **continuous**. You can refer to the [documentation](https://s3.amazonaws.com/dq-content/307/data_description.txt) for the possible values for this column. - If it's **discrete**, assign the string **'discrete'** to a variable named **bedroom_variable**, otherwise assign **'continuous'**. - If it's **discrete**, compute its mode using **Series.mode()** and assign the result to a variable named **bedroom_mode.**- Find whether the **SalePrice** variable is **discrete** or **continuous**. - If it's **discrete**, assign the string **'discrete'** to a variable named **price_variable**, otherwise assign **'continuous'.** - If it's **discrete**, compute its mode using **Series.mode()** and assign the result to a variable named **price_mode**.
###Code
#both are int64
bedroom_variable = 'discrete'
bedroom_mode = houses['Bedroom AbvGr'].mode()
bedroom_mode
price_variable = 'discrete'
price_mode = houses['SalePrice'].mode()
price_mode
###Output
_____no_output_____
###Markdown
5 - Special Cases There are distributions that can have more than one **mode**. Let's say we sampled the **Kitchen AbvGr** column and got this distribution of eight sample points:$$[0,1,1,1,2,2,2,3]$$The two most frequent values are 1 and 2 (both occur in the distribution three times), which means that this distribution has **two modes** (1 and 2). For this reason, we call this distribution **bimodal** (the prefix "bi-" means "twice"). If the distribution had only one mode, we'd call it **unimodal** (the prefix "uni-" means "only one").There's nothing wrong with having two modes. For our case above, the two modes tell us that the typical house on the market has either one or two kitchens.It's also possible to have a distribution with more than two modes. Let's say we sampled from another column, **Bedroom AbvGr**, and got this distribution of 10 sample points:$$[0,1,1,2,2,3,3,4,4,8]$$Note that this distribution has four modes: 1, 2, 3, and 4 (each occurs twice in the distribution). When a distribution has more than two modes, we say that the distribution is **multimodal** (the prefix "multi-" means many).We can also have cases when there is no mode at all. Let's say we sample again from the **Bedroom AbvGr** column and get this distribution of 8 sample points:$$[1,1,2,2,3,3,4,4]$$Each unique value occurs twice in the distribution above, so there's no value (or values) that occurs more often than others. For this reason, **this distribution doesn't have a mode**. Contextually, we could say that there's no typical house on the market with respect to the number of bedrooms.**Distributions without a mode are often specific to continuous variables**. It's quite rare to find two identical values in a continuous distribution (especially if we have decimal numbers), so the frequency of each unique value is usually 1. Even if we find identical values, their frequency is very likely to be too low to produce a meaningful mode value.The workaround is to organize the continuous variable in a grouped frequency table, and select for the mode the midpoint of the class interval (the bin) with the highest frequency. This method has its limitations, but it generally gives reasonable answers. Let's try to get a better grasp of how this works in the following exercise.**Exercise**- Using only what we learned in the previous lessons, we already created a grouped frequency table for the **SalePrice** variable (in the cell below).

```python
(0, 100000]         252
(100000, 200000]    1821
(200000, 300000]     627
(300000, 400000]     166
(400000, 500000]      47
(500000, 600000]      11
(600000, 700000]       4
(700000, 800000]       2
```

- Find the class interval with the highest frequency, then find its midpoint. For instance, the midpoint of the class interval **(0, 100000]** is 50000. - Assign the midpoint value to a variable named **mode**. Make sure the value you assign is of the **int** type.- Find the **mean** of the **SalePrice** column and assign it to a variable named **mean**.- Find the **median** of the **SalePrice** column and assign it to a variable named **median**.- Assess the truth value of the following sentences: - The **mode** is lower than the **median**, and the **median** is lower than the **mean**. - If you think this is true, assign the boolean **True** to a variable named **sentence_1**, otherwise assign **False**. - The **mean** is greater than the **median**, and the **median** is greater than the **mode**. - Assign **True** or **False** to a variable named **sentence_2**.
###Code
intervals = pd.interval_range(start = 0, end = 800000, freq = 100000)
gr_freq_table = pd.Series([0,0,0,0,0,0,0,0], index = intervals)
for value in houses['SalePrice']:
for interval in intervals:
if value in interval:
gr_freq_table.loc[interval] += 1
break
print(gr_freq_table)
mode = 150000
mean = houses["SalePrice"].mean()
median = houses["SalePrice"].median()
sentence_1 = mode < median and median < mean
sentence_2 = mean > median and median > mode
###Output
(0, 100000] 252
(100000, 200000] 1821
(200000, 300000] 627
(300000, 400000] 166
(400000, 500000] 47
(500000, 600000] 11
(600000, 700000] 4
(700000, 800000] 2
dtype: int64
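###Markdown
Before moving on, note that `Series.mode()` returns every value that ties for the highest count, so the bimodal and no-mode examples from this section are easy to check directly (a small added sketch using the illustrative lists above):
###Code
# Two modes (1 and 2) are returned for the bimodal sample
print(pd.Series([0,1,1,1,2,2,2,3]).mode().tolist())
# When every value ties, pandas reports all of them rather than "no mode"
print(pd.Series([1,1,2,2,3,3,4,4]).mode().tolist())
###Output
_____no_output_____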
###Markdown
6 - Skewed Distributions When we plot a histogram or a kernel density plot to visualize the shape of a distribution, the mode will always be the peak of the distribution. In the code below, we plot a kernel density plot to visualize the shape of the **SalePrice** variable and:- Set the limits of the x-axis using the **xlim** parameter — the lowest limit is the minimum value in the **SalePrice** variable, and the upper limit is the maximum value.- Plot a vertical line to indicate the position of the mode (note that our estimate of 150000 from the last exercise is quite close to the peak of the distribution).
###Code
import matplotlib.pyplot as plt
houses['SalePrice'].plot.kde(xlim = (houses['SalePrice'].min(),
houses['SalePrice'].max()))
plt.axvline(houses['SalePrice'].mode()[0], color = 'Green')
plt.show()
###Output
_____no_output_____
###Markdown
**This distribution is clearly right skewed**. Generally, the location of the mode, median and mean is predictable for a right-skewed distribution:- Most values are concentrated in the left body of the distribution where they will form a peak — this is where the mode will be.- Remember that the median divides a distribution in two halves of equal length. For this reason, **the median is usually positioned slightly right from the peak** (the mode) for a right-skewed distribution.- The mean takes into account each value in the distribution, and it will be affected by the outliers in the right tail. This will generally pull **the mean to the right of the median**.So in a **right-skewed distribution**, the mean will usually be to the right of the median, and the median will be to the right of the mode. This holds true for the distribution of the **SalePrice** variable:
###Code
houses['SalePrice'].plot.kde(xlim = (houses['SalePrice'].min(),
houses['SalePrice'].max()
))
plt.axvline(houses['SalePrice'].mode()[0], color = 'Green', label = 'Mode')
plt.axvline(houses['SalePrice'].median(), color = 'Black', label = 'Median')
plt.axvline(houses['SalePrice'].mean(), color = 'Orange', label = 'Mean')
plt.legend()
###Output
_____no_output_____
###Markdown
For a **left-skewed distribution**, the direction is simply reversed: the mean is positioned to the left of the median, and the median to the left of the mode. This is obvious on the distribution of the **Year Built** variable:
###Code
houses['Year Built'].plot.kde(xlim = (houses['Year Built'].min(),
houses['Year Built'].max()))
plt.axvline(houses['Year Built'].mode()[0], color = 'Green', label = 'Mode')
plt.axvline(houses['Year Built'].median(), color = 'Black', label = 'Median')
plt.axvline(houses['Year Built'].mean(), color = 'Orange', label = 'Mean')
plt.legend()
###Output
_____no_output_____
###Markdown
**Exercise**- In the code editor you can see the mean, mode and median for three distributions. Indicate whether the mean, median, and mode of each distribution suggest a left or a right skew. - If the values for **distribution_1** indicate a **right skew**, assign the string **'right skew'** to a variable named **shape_1**, otherwise assign **'left skew'**. - If the values for **distribution_2** indicate a **right skew**, assign the string **'right skew'** to a variable named **shape_2**, otherwise assign **'left skew'**. - If the values for **distribution_3** indicate a **right skew**, assign the string **'right skew'** to a variable named **shape_3**, otherwise assign **'left skew'**.
###Code
distribution_1 = {'mean': 3021 , 'median': 3001, 'mode': 2947}
distribution_2 = {'median': 924 , 'mode': 832, 'mean': 962}
distribution_3 = {'mode': 202, 'mean': 143, 'median': 199}
shape_1 = 'left_skew'
shape_2 = 'left_skew'
shape_3 = 'right_skew'
###Output
_____no_output_____
###Markdown
7 - Symmetrical Distributions The location of the **mean**, **median**, and **mode** are also predictable for **symmetrical distributions**. Remember from the last course that if the shape of a distribution is symmetrical, then we can divide the distribution in two halves that are mirror images of one another:The median divides the distribution in two equal halves. As a consequence, the median will always be at the center of a perfectly symmetrical distribution because only a line drawn at the center can divide the distribution in two equal halves.For a perfectly symmetrical distribution, the two equal halves will bear the same weight when computing the mean because the mean takes into account equally each value in the distribution. The mean is not pulled neither to the left, nor to the right, and stays instead in the center, at the same location as the median. **The mean and the median are always equal for any perfectly symmetrical distribution**.Although the mean and the median have a constant location for every symmetrical distribution (no matter the shape), the location of the mode can change. The mode is where the peak is, so for a normal distribution the mode will be at the center, right where the mean and the median are.It's possible to have a symmetrical distribution with more than one peak, which means that the mode won't be at the center:A uniform distribution doesn't have any peaks, which means it doesn't have any mode:In practice, we almost never work with perfectly symmetrical distributions, but many distributions are approximately symmetrical nonetheless. This means that the patterns outlined above are still relevant for practical purposes.**Exercise**- The distribution of the **Mo Sold** variable (which describes the month of a sale) is close to normal. - Plot a kernel density plot for this distribution using **Series.plot.kde().** - The lower boundary of the x-axis should be 1 and the upper one 12. You can use the **xlim** parameter of the **Series.plot.kde()** method. - Plot three vertical lines: - One for the **mode** — the color of the line should be **green** and its label should be **'Mode'**. - One for the **median** — the color of the line should be **orange** and its label should be **'Median'**. - One for the **mean** — the color of the line should be **black** and its label should be **'Mean'**. - Display all the labels using a **legend** (you can activate the legend using **plt.legend()**.- You should observe the **mean**, the **median**, and the **mode** clustered together in the center of the distribution.
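As a tiny numerical check of the claim that the mean and the median coincide for a perfectly symmetrical distribution (illustrative values only, added as a sketch):
###Code
import numpy as np
# a small sample that is symmetric around 6: both statistics should equal 6
symmetric_sample = [2, 4, 4, 6, 6, 6, 8, 8, 10]
print(np.mean(symmetric_sample), np.median(symmetric_sample))
###Output
_____no_output_____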
###Code
import matplotlib.pyplot as plt
import seaborn as sns
fig, ax = plt.subplots()
ax = sns.kdeplot(houses['Mo Sold'])
ax.axvline(houses['Mo Sold'].mode()[0], color = 'green', label = 'Mode')
ax.axvline(houses['Mo Sold'].median(), color = 'orange', label = 'Median')
ax.axvline(houses['Mo Sold'].mean(), color = 'black', label = 'Mean')
plt.legend()
###Output
_____no_output_____ |
notebooks/Kaggle_competition_quora.ipynb | ###Markdown
*df_train is unbalanced: only 6% of the dataset has positive values*
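A quick way to confirm the imbalance (a small added sketch, assuming `df_train` has been loaded earlier in the notebook):
###Code
# share of each target class in the training data
df_train['target'].value_counts(normalize=True)
###Output
_____no_output_____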
###Code
df_train.loc[df_train['target'] == 1].head(30)
X = df_train.loc[:, 'question_text']
y = df_train.loc[:, 'target']
X.shape, y.shape
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.5, random_state=42, stratify=y)
X_train.shape, y_train.shape, X_test.shape, y_test.shape
X_train.head()
df_train1 = df_train[df_train["target"]==1]
df_train0 = df_train[df_train["target"]==0]
len(df_train1['question_text'])
len(df_train0['question_text'])
len(df_train.loc[df_train['target'] == 1]['question_text'])
#text = " ".join(df_train['question_text'].astype(str))
#join the insincere questions into one string for the word cloud (str() on a Series would only keep its truncated repr)
text_insincere = " ".join(df_train1['question_text'].astype(str))
type(text_insincere)
df_train["quest_len"] = df_train["question_text"].apply(lambda x: len(x))
df_train["quest_len"][0]
sincere = df_train[df_train["target"] == 0]
insincere = df_train[df_train["target"] == 1]
plt.figure(figsize = (15, 8))
sns.distplot(sincere["quest_len"], bins = 50, label = "Sincere")
sns.distplot(insincere["quest_len"], bins = 50, label = "Insincere")
plt.legend(fontsize = 10)
plt.show()
#thanks to https://www.kaggle.com/kaosmonkey/visualize-sincere-vs-insincere-words
class Vocabulary(object):
def __init__(self):
self.vocab = {}
self.STOPWORDS = set()
self.STOPWORDS = set(stopwords.words('english'))
def build_vocab(self, lines):
for line in lines:
for word in line.split(' '):
word = word.lower()
if (word in self.STOPWORDS):
continue
if (word not in self.vocab):
self.vocab[word] = 0
self.vocab[word] +=1
def generate_ngrams(text, n_gram=1):
"""arg: text, n_gram"""
token = [token for token in text.lower().split(" ") if token != "" if token not in STOPWORDS]
ngrams = zip(*[token[i:] for i in range(n_gram)])
return [" ".join(ngram) for ngram in ngrams]
def horizontal_bar_chart(df, color):
trace = go.Bar(
y=df["word"].values[::-1],
x=df["wordcount"].values[::-1],
showlegend=False,
orientation = 'h',
marker=dict(
color=color,
),
)
return trace
sincere_vocab = Vocabulary()
sincere_vocab.build_vocab(df_train[df_train['target'] == 0]['question_text'])
sincere_vocabulary = sorted(sincere_vocab.vocab.items(), reverse=True, key=lambda kv: kv[1])
for word, count in sincere_vocabulary[:5]:
print(word, count)
insincere_vocab = Vocabulary()
insincere_vocab.build_vocab(df_train[df_train['target'] == 1]['question_text'])
insincere_vocabulary = sorted(insincere_vocab.vocab.items(), reverse=True, key=lambda kv: kv[1])
for word, count in insincere_vocabulary[:5]:
print(word, count)
df_sincere_vocab = pd.DataFrame(sincere_vocabulary, columns=['word_sincere', 'frequency'])
df_sincere_vocab.head()
df_insincere_vocab = pd.DataFrame(insincere_vocabulary, columns=['word_insincere', 'frequency'])
df_insincere_vocab.head()
ax1 = sns.barplot(y='word_sincere', x='frequency', data=df_sincere_vocab[:20])
ax1.set_xlabel('word_sincere');
ax2 = sns.barplot(y='word_insincere', x='frequency', data=df_insincere_vocab[:20])
ax2.set_xlabel('word_insincere');
stopwords = set(stopwords.words("english"))
wordcloud = WordCloud(background_color='black',
stopwords = stopwords,
random_state = 43,
width=800,
height=400)
wordcloud
wordcloud.generate(text_insincere)
plt.figure(figsize=(12,8))
plt.imshow(wordcloud);
df_train1.head()
text = df_train.loc[:, 'question_text']
text.head()
nlp = spacy.load('en_core_web_sm',disable=['parser','ner'])
nlp
%%time
spacy_docs = nlp.pipe(X_train)
spacy_docs
%%time
lst = list(spacy_docs)
lst[1]
lst[1][12].is_stop
spacy_docs = nlp.pipe(X_train)
lemmas = [[t.lemma_ if t.lemma_ != "-PRON-" else t.text for t in lst] for lst in spacy_docs]
lemmas[:1]
lemmas_as_strings = [" ".join(x) for x in lemmas]
lemmas_as_strings
## Get the bar chart from sincere questions ##
freq_dict = defaultdict(int)
for sent in sincere['question_text']:
for word in Vocabulary.generate_ngrams(sent, 2):
freq_dict[word] += 1
fd_sorted_sincere = pd.DataFrame(sorted(freq_dict.items(), key=lambda x: x[1]))
fd_sorted_sincere.columns = ["word", "wordcount"]
fd_sorted_sincere = fd_sorted_sincere.sort_values(['wordcount'], ascending=False)
fd_sorted_sincere.head(5)
## Get the bar chart from insincere questions ##
freq_dict = defaultdict(int)
for sent in insincere['question_text']:
for word in Vocabulary.generate_ngrams(sent, 2):
freq_dict[word] += 1
fd_sorted_insincere = pd.DataFrame(sorted(freq_dict.items(), key=lambda x: x[1]))
fd_sorted_insincere.columns = ["word", "wordcount"]
fd_sorted_insincere = fd_sorted_insincere.sort_values(['wordcount'], ascending=False)
fd_sorted_insincere.head(5)
ax = sns.barplot(x='wordcount', y='word', data=fd_sorted_insincere[:20])
ax.set_xlabel('bigram_insincere');
fd_sorted_insincere.shape, fd_sorted_sincere.shape
ax = sns.barplot(x='wordcount', y='word', data=fd_sorted_sincere[:20])
ax.set_xlabel('bigram_sincere');
###Output
_____no_output_____
###Markdown
Trigram
###Code
## Get the bar chart from insincere questions ##
freq_dict = defaultdict(int)
for sent in insincere['question_text']:
for word in Vocabulary.generate_ngrams(sent, 3):
freq_dict[word] += 1
fd_sorted_insincere = pd.DataFrame(sorted(freq_dict.items(), key=lambda x: x[1]))
fd_sorted_insincere.columns = ["word", "wordcount"]
fd_sorted_insincere = fd_sorted_insincere.sort_values(['wordcount'], ascending=False)
fd_sorted_insincere.head(10)
###Output
_____no_output_____
###Markdown
**CountVectorizer + Multinomial NaiveBayes**
###Code
cvec = CountVectorizer(stop_words='english')
tf = cvec.fit_transform(lemmas_as_strings)
tf
type(lemmas_as_strings)
mnb = MultinomialNB()
pipe = Pipeline([('cvec', cvec),('mnb', mnb)])
pipe
pipe.fit(lemmas_as_strings, y_train)
y_pred = pipe.predict(X_test)
y_pred
pipe.predict_proba?
pipe.score(X_test, y_test)
cr = classification_report(y_test, y_pred)
print(cr)
cm = confusion_matrix(y_test, y_pred)
cm
sns.heatmap(cm, cmap='Blues', xticklabels=y_test.unique(), yticklabels=y_test.unique(), annot=True, fmt='.0f');
###Output
_____no_output_____
###Markdown
f1-score @ 0.5 for the insincere target

TfidfVectorizer + Multinomial NaiveBayes
###Code
tfid = TfidfVectorizer(stop_words='english')
tfid_ft = tfid.fit_transform(lemmas_as_strings)
pipe = Pipeline([('tfid', tfid),('mnb', mnb)])
pipe
pipe.fit(lemmas_as_strings, y_train)
y_pred = pipe.predict(X_test)
pipe.score(X_test, y_test)
cr = classification_report(y_test, y_pred)
print(cr)
cm = confusion_matrix(y_test, y_pred)
cm
sns.heatmap(cm, cmap='Blues', xticklabels=y_test.unique(), yticklabels=y_test.unique(), annot=True, fmt='.0f');
lr = LogisticRegression()
pipe = Pipeline([('cvec', cvec),('lr', lr)])
pipe
pipe.fit(lemmas_as_strings, y_train)
y_pred = pipe.predict(X_test)
y_pred
pipe.score(X_test, y_test)
cr = classification_report(y_test, y_pred)
print(cr)
###Output
precision recall f1-score support
0 0.96 0.99 0.97 612656
1 0.66 0.35 0.46 40405
micro avg 0.95 0.95 0.95 653061
macro avg 0.81 0.67 0.72 653061
weighted avg 0.94 0.95 0.94 653061
###Markdown
**Resampling using the imbalanced-learn API**

1 - CondensedNearestNeighbour
###Code
cnn = CondensedNearestNeighbour(random_state=42)
svd = TruncatedSVD(n_components=20)
svd_ = svd.fit_transform(tf)
type(svd_)
df_svd_ = pd.DataFrame(svd_)
df_svd_.head()
X_target_0, y_target_1 = df_svd_, y_train
X_target_0, y_target_1
X_target_0.shape, y_target_1.shape
%%time
#X__target_res, y_target_res = cnn.fit_resample(X_target_0, y_target_1)
###Output
_____no_output_____
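###Markdown
CondensedNearestNeighbour was left commented out above because it is very slow on a dataset of this size. As an added illustration, not part of the original notebook, here is a minimal sketch using a much cheaper resampler from the same imbalanced-learn API, RandomUnderSampler, applied to the same `X_target_0` / `y_target_1` objects defined above.
###Code
from imblearn.under_sampling import RandomUnderSampler

# Randomly drop majority-class rows until both classes are balanced
rus = RandomUnderSampler(random_state=42)
X_target_res, y_target_res = rus.fit_resample(X_target_0, y_target_1)
print(X_target_res.shape, pd.Series(y_target_res).value_counts())
###Output
_____no_output_____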
###Markdown
**Meta Features:**
###Code
df_train["num_words"] = df_train["question_text"].apply(lambda x: len(str(x).split()))
df_test["num_words"] = df_test["question_text"].apply(lambda x: len(str(x).split()))
df_train["num_words"].head()
###Output
_____no_output_____ |
Machine learning predict the sale.ipynb | ###Markdown
1: Import the dataset
###Code
#Import the required libraries
import numpy as np
import pandas as pd
#Import the advertising dataset
adDatasets = pd.read_csv('Advertising Budget and Sales.csv', index_col=0)
adDatasets
###Output
_____no_output_____
###Markdown
2: Analyze the dataset
###Code
#View the initial few records of the dataset
adDatasets.head()
#Check the total number of elements in the dataset
adDatasets.size
###Output
_____no_output_____
###Markdown
3: Find the features or media channels used by the firm
###Code
#Check the number of observations (rows) and attributes (columns) in the dataset
adDatasets.shape
#View the names of each of the attributes
adDatasets.columns
###Output
_____no_output_____
###Markdown
4: Create objects to train and test the model; find the sales figures for each channel
###Code
#Create a feature object from the columns
x_feature = adDatasets[['TV Ad Budget ($)', 'Radio Ad Budget ($)', 'Newspaper Ad Budget ($)']]
#View the feature object
x_feature.head()
#Create a target object (Hint: use the sales column as it is the response of the dataset)
y_target = adDatasets[['Sales ($)']]
#View the target object
y_target.head()
#Verify if all the observations have been captured in the feature object
x_feature.shape
#Verify if all the observations have been captured in the target object
y_target.shape
###Output
_____no_output_____
###Markdown
5: Split the original dataset into training and testing datasets for the model
###Code
#Split the dataset (by default, 75% is the training data and 25% is the testing data)
from sklearn.model_selection import train_test_split
x_train,x_test,y_train,y_test = train_test_split(x_feature,y_target, random_state=1)
#Verify if the training and testing datasets are split correctly (Hint: use the shape() method)
print(x_train.shape,x_test.shape,y_train.shape,y_test.shape)
###Output
(150, 3) (50, 3) (150, 1) (50, 1)
###Markdown
6: Create a model to predict the sales outcome
###Code
#Create a linear regression model
from sklearn.linear_model import LinearRegression
leReg = LinearRegression()
leReg.fit(x_train, y_train)  # fit on the training split so the held-out test set gives an honest error estimate
#Print the intercept and coefficients
print("intercept: ",leReg.intercept_)
print("Coefficient: ",leReg.coef_)
#Predict the outcome for the testing dataset
y_pred = leReg.predict(x_test)
y_pred
###Output
_____no_output_____
###Markdown
7: Calculate the Mean Square Error (MSE)
###Code
#Import required libraries for calculating MSE (mean square error)
from sklearn import metrics
#Calculate the MSE
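# (Strictly speaking, taking np.sqrt of the MSE below yields the root mean
#  squared error, RMSE.)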
print("MSE: ",np.sqrt(metrics.mean_squared_error(y_test,y_pred)))
###Output
MSE: 1.343580430635202
|
lab work/Hands-on Tutorial--Accessing databases with SQL magic.ipynb | ###Markdown
Accessing Databases with SQL Magic

After using this notebook, you will know how to perform simplified database access using SQL "magic". You will connect to a Db2 database, issue SQL commands to create tables, insert data, and run queries, as well as retrieve results in a Python dataframe. To communicate with SQL Databases from within a JupyterLab notebook, we can use the SQL "magic" provided by the [ipython-sql](https://github.com/catherinedevlin/ipython-sql) extension. "Magic" is JupyterLab's term for special commands that start with "%". Below, we'll use the `load_ext` magic to load the ipython-sql extension. In the lab environment provided in the course, the ipython-sql extension is already installed and so is the ibm_db_sa driver.
###Code
%load_ext sql
###Output
_____no_output_____
###Markdown
Now we have access to SQL magic. With our first SQL magic command, we'll connect to a Db2 database. However, in order to do that, you'll first need to retrieve or create your credentials to access your Db2 database. This image shows the location of your connection string if you're using Db2 on IBM Cloud. If you're using another host the format is: username:password@hostname:port/database-name
###Code
# Enter your Db2 credentials in the connection string below
# Recall you created Service Credentials in Part III of the first lab of the course in Week 1
# i.e. from the uri field in the Service Credentials copy everything after db2:// (but remove the double quote at the end)
# for example, if your credentials are as in the screenshot above, you would write:
# %sql ibm_db_sa://my-username:[email protected]:50000/BLUDB
# Note the ibm_db_sa:// prefix instead of db2://
# This is because JupyterLab's ipython-sql extension uses sqlalchemy (a python SQL toolkit)
# which in turn uses IBM's sqlalchemy dialect: ibm_db_sa
%sql ibm_db_sa://
###Output
_____no_output_____
###Markdown
For convenience, we can use %%sql (two %'s instead of one) at the top of a cell to indicate we want the entire cell to be treated as SQL. Let's use this to create a table and fill it with some test data for experimenting.
###Code
%%sql
CREATE TABLE INTERNATIONAL_STUDENT_TEST_SCORES (
country VARCHAR(50),
first_name VARCHAR(50),
last_name VARCHAR(50),
test_score INT
);
INSERT INTO INTERNATIONAL_STUDENT_TEST_SCORES (country, first_name, last_name, test_score)
VALUES
('United States', 'Marshall', 'Bernadot', 54),
('Ghana', 'Celinda', 'Malkin', 51),
('Ukraine', 'Guillermo', 'Furze', 53),
('Greece', 'Aharon', 'Tunnow', 48),
('Russia', 'Bail', 'Goodwin', 46),
('Poland', 'Cole', 'Winteringham', 49),
('Sweden', 'Emlyn', 'Erricker', 55),
('Russia', 'Cathee', 'Sivewright', 49),
('China', 'Barny', 'Ingerson', 57),
('Uganda', 'Sharla', 'Papaccio', 55),
('China', 'Stella', 'Youens', 51),
('Poland', 'Julio', 'Buesden', 48),
('United States', 'Tiffie', 'Cosely', 58),
('Poland', 'Auroora', 'Stiffell', 45),
('China', 'Clarita', 'Huet', 52),
('Poland', 'Shannon', 'Goulden', 45),
('Philippines', 'Emylee', 'Privost', 50),
('France', 'Madelina', 'Burk', 49),
('China', 'Saunderson', 'Root', 58),
('Indonesia', 'Bo', 'Waring', 55),
('China', 'Hollis', 'Domotor', 45),
('Russia', 'Robbie', 'Collip', 46),
('Philippines', 'Davon', 'Donisi', 46),
('China', 'Cristabel', 'Radeliffe', 48),
('China', 'Wallis', 'Bartleet', 58),
('Moldova', 'Arleen', 'Stailey', 38),
('Ireland', 'Mendel', 'Grumble', 58),
('China', 'Sallyann', 'Exley', 51),
('Mexico', 'Kain', 'Swaite', 46),
('Indonesia', 'Alonso', 'Bulteel', 45),
('Armenia', 'Anatol', 'Tankus', 51),
('Indonesia', 'Coralyn', 'Dawkins', 48),
('China', 'Deanne', 'Edwinson', 45),
('China', 'Georgiana', 'Epple', 51),
('Portugal', 'Bartlet', 'Breese', 56),
('Azerbaijan', 'Idalina', 'Lukash', 50),
('France', 'Livvie', 'Flory', 54),
('Malaysia', 'Nonie', 'Borit', 48),
('Indonesia', 'Clio', 'Mugg', 47),
('Brazil', 'Westley', 'Measor', 48),
('Philippines', 'Katrinka', 'Sibbert', 51),
('Poland', 'Valentia', 'Mounch', 50),
('Norway', 'Sheilah', 'Hedditch', 53),
('Papua New Guinea', 'Itch', 'Jubb', 50),
('Latvia', 'Stesha', 'Garnson', 53),
('Canada', 'Cristionna', 'Wadmore', 46),
('China', 'Lianna', 'Gatward', 43),
('Guatemala', 'Tanney', 'Vials', 48),
('France', 'Alma', 'Zavittieri', 44),
('China', 'Alvira', 'Tamas', 50),
('United States', 'Shanon', 'Peres', 45),
('Sweden', 'Maisey', 'Lynas', 53),
('Indonesia', 'Kip', 'Hothersall', 46),
('China', 'Cash', 'Landis', 48),
('Panama', 'Kennith', 'Digance', 45),
('China', 'Ulberto', 'Riggeard', 48),
('Switzerland', 'Judy', 'Gilligan', 49),
('Philippines', 'Tod', 'Trevaskus', 52),
('Brazil', 'Herold', 'Heggs', 44),
('Latvia', 'Verney', 'Note', 50),
('Poland', 'Temp', 'Ribey', 50),
('China', 'Conroy', 'Egdal', 48),
('Japan', 'Gabie', 'Alessandone', 47),
('Ukraine', 'Devlen', 'Chaperlin', 54),
('France', 'Babbette', 'Turner', 51),
('Czech Republic', 'Virgil', 'Scotney', 52),
('Tajikistan', 'Zorina', 'Bedow', 49),
('China', 'Aidan', 'Rudeyeard', 50),
('Ireland', 'Saunder', 'MacLice', 48),
('France', 'Waly', 'Brunstan', 53),
('China', 'Gisele', 'Enns', 52),
('Peru', 'Mina', 'Winchester', 48),
('Japan', 'Torie', 'MacShirrie', 50),
('Russia', 'Benjamen', 'Kenford', 51),
('China', 'Etan', 'Burn', 53),
('Russia', 'Merralee', 'Chaperlin', 38),
('Indonesia', 'Lanny', 'Malam', 49),
('Canada', 'Wilhelm', 'Deeprose', 54),
('Czech Republic', 'Lari', 'Hillhouse', 48),
('China', 'Ossie', 'Woodley', 52),
('Macedonia', 'April', 'Tyer', 50),
('Vietnam', 'Madelon', 'Dansey', 53),
('Ukraine', 'Korella', 'McNamee', 52),
('Jamaica', 'Linnea', 'Cannam', 43),
('China', 'Mart', 'Coling', 52),
('Indonesia', 'Marna', 'Causbey', 47),
('China', 'Berni', 'Daintier', 55),
('Poland', 'Cynthia', 'Hassell', 49),
('Canada', 'Carma', 'Schule', 49),
('Indonesia', 'Malia', 'Blight', 48),
('China', 'Paulo', 'Seivertsen', 47),
('Niger', 'Kaylee', 'Hearley', 54),
('Japan', 'Maure', 'Jandak', 46),
('Argentina', 'Foss', 'Feavers', 45),
('Venezuela', 'Ron', 'Leggitt', 60),
('Russia', 'Flint', 'Gokes', 40),
('China', 'Linet', 'Conelly', 52),
('Philippines', 'Nikolas', 'Birtwell', 57),
('Australia', 'Eduard', 'Leipelt', 53)
###Output
_____no_output_____
###Markdown
Using Python Variables in your SQL StatementsYou can use python variables in your SQL statements by adding a ":" prefix to your python variable names.For example, if I have a python variable `country` with a value of `"Canada"`, I can use this variable in a SQL query to find all the rows of students from Canada.
###Code
country = "Canada"
%sql select * from INTERNATIONAL_STUDENT_TEST_SCORES where country = :country
###Output
_____no_output_____
###Markdown
Assigning the Results of Queries to Python VariablesYou can use the normal python assignment syntax to assign the results of your queries to python variables.For example, I have a SQL query to retrieve the distribution of test scores (i.e. how many students got each score). I can assign the result of this query to the variable `test_score_distribution` using the `=` operator.
###Code
test_score_distribution = %sql SELECT test_score as "Test Score", count(*) as "Frequency" from INTERNATIONAL_STUDENT_TEST_SCORES GROUP BY test_score;
test_score_distribution
###Output
_____no_output_____
###Markdown
Converting Query Results to DataFramesYou can easily convert a SQL query result to a pandas dataframe using the `DataFrame()` method. Dataframe objects are much more versatile than SQL query result objects. For example, we can easily graph our test score distribution after converting to a dataframe.
###Code
dataframe = test_score_distribution.DataFrame()
%matplotlib inline
# uncomment the following line if you get an module error saying seaborn not found
# !pip install seaborn
import seaborn
plot = seaborn.barplot(x='Test Score',y='Frequency', data=dataframe)
###Output
_____no_output_____
###Markdown
Now you know how to work with Db2 from within JupyterLab notebooks using SQL "magic"!
###Code
%%sql
-- Feel free to experiment with the data set provided in this notebook for practice:
SELECT country, first_name, last_name, test_score FROM INTERNATIONAL_STUDENT_TEST_SCORES;
###Output
_____no_output_____ |
Code/MusicalDataset1.ipynb | ###Markdown
Similarity Calculation
###Code
item_simpearson=productmat0.corr(method='pearson') #item-item similarity based on pearson method
print(item_simpearson.shape)
item_simpearson.head()
item_simcosine = cosine_similarity(train.T) #item-item similarity based on cosine method
print (item_simcosine.shape)
print (item_simcosine)
user_simpearson=(productmat0.T).corr(method='pearson') #user-user similarity based on pearson method
print(user_simpearson.shape)
user_simpearson.head()
user_simcosine = cosine_similarity(train) #user-user similarity based on cosine method
print (user_simcosine.shape)
print (user_simcosine)
###Output
(1429, 1429)
[[1. 0. 0. ... 0. 0. 0. ]
[0. 1. 0.22208408 ... 0.23690564 0.15843106 0. ]
[0. 0.22208408 1. ... 0.36366085 0. 0. ]
...
[0. 0.23690564 0.36366085 ... 1. 0. 0. ]
[0. 0.15843106 0. ... 0. 1. 0. ]
[0. 0. 0. ... 0. 0. 1. ]]
###Markdown
Memory based CF
###Code
# prediction using cosine similarity matrix as weights (memory based CF)
# and adjusting bias for individual user by mean-subtraction of rating
def predict(productmat, similarity, kind='user'):
if kind == 'user':
user_bias = productmat.mean(axis=1)
productmat = (productmat - user_bias[:, np.newaxis]).copy()
predt = similarity.dot(productmat) / np.array([np.abs(similarity).sum(axis=1)]).T
predt += user_bias[:, np.newaxis]
elif kind == 'item':
item_bias = productmat.mean(axis=0)
productmat = (productmat - item_bias[np.newaxis, :]).copy()
predt = productmat.dot(similarity) / np.array([np.abs(similarity).sum(axis=1)])
predt += item_bias[np.newaxis, :]
return predt
user_prediction = predict(train, user_simcosine, kind='user')
item_prediction = predict(train, item_simcosine, kind='item')
from sklearn.metrics import mean_squared_error
from math import sqrt
def rmse(prediction, ground_truth):
prediction = prediction[ground_truth.nonzero()].flatten()
ground_truth = ground_truth[ground_truth.nonzero()].flatten()
return sqrt(mean_squared_error(prediction, ground_truth))
print ('User-based bias-adjusted CF RMSE: %.3f' %rmse(user_prediction, val))
print ('Item-based bias-adjusted CF RMSE: %.3f' %rmse(item_prediction, val))
###Output
User-based bias-adjusted CF RMSE: 4.469
Item-based bias-adjusted CF RMSE: 4.475
###Markdown
Model based collaborative filtering(matrix factorization)
###Code
import scipy.sparse as sp
from scipy.sparse.linalg import svds
#SVD components from train matrix. Choose k.
u, s, vt = svds(train, k = 20)# k, dimensionality for rank matrix
s_diag_matrix=np.diag(s)
X_pred = np.dot(np.dot(u, s_diag_matrix), vt)
print ('matrix-factorization CF RMSE: %.3f' %rmse(X_pred, val))
###Output
matrix-factorization CF RMSE: 4.490
###Markdown
KNN BASED ALGORITHM
###Code
from scipy.sparse import csr_matrix
productmat1=csr_matrix(productmat0.values)
from sklearn.neighbors import NearestNeighbors
model_knn=NearestNeighbors(metric='cosine',algorithm='brute')
model_knn.fit(productmat1.T)
query_index = np.random.choice(productmat0.T.shape[0])
distances, indices = model_knn.kneighbors(productmat0.T.iloc[query_index, :].values.reshape(1, -1), n_neighbors = 10)
for i in range(0, len(distances.flatten())):
if i == 0:
print ('Recommendations for {0}:\n'.format(productmat0.T.index[query_index]))
else:
print ('{0}: {1}, with distance of {2}:'.format(i, productmat0.T.index[indices.flatten()[i]], distances.flatten()[i]))
useritem_mat = df.pivot_table(index='reviewerID', columns='asin', values='overall').fillna(0)
useritem_mat.head()
useritem_mat.shape
X = useritem_mat.values.T
X.shape
###Output
_____no_output_____
###Markdown
Matrix Factorization(SVD)
###Code
import sklearn
from sklearn.decomposition import TruncatedSVD
SVD = TruncatedSVD(n_components=10, random_state=17)
matrix = SVD.fit_transform(X)
matrix.shape
import warnings
warnings.filterwarnings("ignore",category =RuntimeWarning)
corr = np.corrcoef(matrix)
corr.shape
product_no = useritem_mat.columns
product_no_list = list(product_no)
sample_product = product_no_list.index("B0002CZT0M")
print(sample_product)
sample_one = corr[sample_product]
list(product_no[(sample_one<0.9) & (sample_one>0.8)])[:10]
###Output
_____no_output_____
###Markdown
NMF Algorithm
###Code
from sklearn.decomposition import NMF
nmf = NMF(n_components=100,solver="mu") #‘mu’ is a Multiplicative Update solver.
#beta_loss : float or string, default ‘frobenius’
W = nmf.fit_transform(productmat0)
H = nmf.components_
print(W)
import seaborn as sns
fig, ax = plt.subplots(figsize=(10,6))
sns.heatmap(W,vmin=0, vmax=1, ax=ax)
#sns.heatmap(W,cmap='RdBu')
print(H)
fig, ax = plt.subplots(figsize=(10,6))
sns.heatmap(H,vmin=0, vmax=1,ax=ax)
#sns.heatmap(H,cmap='RdBu')
Vt=np.matmul(W,H)
print(Vt)
fig, ax = plt.subplots(figsize=(10,6))
sns.heatmap(Vt,vmin=0, vmax=1,ax=ax)
print(type(Vt))
VtFinal=pd.DataFrame(data=Vt[0:,0:],index=(productmat0.index),columns=(productmat0.columns))
VtFinal.head(5)
key=input("Enter the CUSTOMER ID:") #Predicting PRODUCT ID for a single CUSTOMER ID as per keyboard input
N=VtFinal.loc[[key]].T
N_item=N.sort_values(by=key,ascending=False)
print("The list of PRODUCT to be recommended for--->:",N_item[:10])
import random #Predicting PRODUCT ID for randomly selected CUSTOMER ID
for i in range (5):
key=random.choice((productmat0.index))
N=VtFinal.loc[[key]].T
N_item=N.sort_values(by=key,ascending=False)
print(N_item[:10])
key=input("Enter the PRODUCT ID:") #Predicting CUSTOMER ID for a single PRODUCT ID ID as per keyboard input
M=((VtFinal[[key]]))
M_user=M.sort_values(by=key,ascending=False)
print("The list of CUSTOMER ID to whom product is recommended :\n",M_user[:10])
import random #Predicting CUSTOMER ID for randomly selected PRODUCT ID
for i in range (5):
key=random.choice((productmat0.columns))
M=((VtFinal[[key]]))
M_user=M.sort_values(by=key,ascending=False)
print(M_user[:10])
#s = s.replace(',', '')
import random
sample= random.choices(productmat0.index, k=10)
#sample
#type(sample)
acc=[]
count=0
for key in sample:
count=count+1 #key=input()
df0=productmat0.loc[[key]].T.sort_values(by=key, ascending=False)
df1=df0[df0[key]!=0]
df1=df1.drop([key],axis=1)
df1 = df1.rename_axis(None)
P=(df1.index).tolist()
print(count)
print("Actual Product purchased for "+key+": ", P)
N=VtFinal.loc[[key]].T
N_item=N.sort_values(by=key,ascending=False)
#print("The list of PRODUCT to be recommended for--->:",N_item[:15])
N_item1=N_item.drop([key],axis=1)
N_item1 = N_item1.rename_axis(None)
Q=(N_item1.index).tolist()[:10]
print("Predicted Product : ", Q)
match = set(P) & set(Q)
print ("Accurate Prediction :", match)
accuracy=(len(match)/len(P))*100
print(accuracy)
acc.append(accuracy)
print("NMF Avg Accuracy : \n\n\n" ,sum(acc)/len(sample))
import statistics
print("Standard deviation: " ,statistics.stdev(acc))
print("Mean accuracy: " ,statistics.mean(acc))
print("Min accuracy: " ,min(acc))
print("Max accuracy: " ,max(acc))
###Output
Standard deviation: 18.154733714957732
Mean accuracy: 63.714285714285715
Min accuracy: 28.57142857142857
Max accuracy: 83.33333333333334
|
day05/policy_gradients/policy_gradients_pytorch.ipynb | ###Markdown
Policy Gradients (PG) PyTorch Tutorial

Author: [Nir Ben-Zvi]([email protected]), on top of PyTorch's original tutorial. This tutorial is a Jupyter notebook version of PyTorch's [original example code](https://github.com/pytorch/examples/tree/master/reinforcement_learning), made to run inside a Docker container.

Task

The agent has to decide between two actions - moving the cart left or right - so that the pole attached to it stays upright. You can find an official leaderboard with various algorithms and visualizations at the [Gym website](http://gym.openai.com).

As the agent observes the current state of the environment and chooses an action, the environment transitions to a new state, and also returns a reward that indicates the consequences of the action. In this task, the environment terminates if the pole falls over too far.

The CartPole task is designed so that the inputs to the agent are 4 real values representing the environment state (position, velocity, etc.). This is a much simpler task, compared to one where the input is a raw input from the game screen - which allows us to quickly experience the agent's improvement on our screen.

Packages

Nothing really interesting here;
- `torch` is the main PyTorch module
- `torch.nn` for Neural Networks
- `torch.nn.functional`
- `torch.optim` is an optimization package
- `torch.autograd` for auto differentiation
- `torch.autograd.Variable` an auto-differentiable Variable (Tensor)
###Code
import argparse
import gym
import numpy as np
from itertools import count
from collections import namedtuple
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torch.autograd as autograd
from torch.autograd import Variable
# Now the jupyter/gym render part comes in
import matplotlib.pyplot as plt
import matplotlib
%matplotlib inline
# iPython
from IPython import display, get_ipython
###Output
_____no_output_____
###Markdown
Argument InputKeeping original code commented out, but we will have the parameters hard coded so we could run this in Jupyter.
###Code
def parse_args():
# parser = argparse.ArgumentParser(description='PyTorch REINFORCE example')
# parser.add_argument('--gamma', type=float, default=0.99, metavar='G',
# help='discount factor (default: 0.99)')
# parser.add_argument('--seed', type=int, default=543, metavar='N',
# help='random seed (default: 543)')
# parser.add_argument('--render', action='store_true',
# help='render the environment', default=False)
# parser.add_argument('--log_interval', type=int, default=10, metavar='N',
# help='interval between training status logs (default: 10)')
# args = parser.parse_args()
dictionary = {'gamma': 0.99, 'seed': 543, 'render': False, 'log_interval': 10}
args = namedtuple('GenericDict', dictionary.keys())(**dictionary)
return args
###Output
_____no_output_____
###Markdown
Defining the Policy Reason
###Code
class Policy(nn.Module):
"""
This defines the Policy Network
"""
def __init__(self):
super(Policy, self).__init__()
self.affine1 = nn.Linear(4, 128)
self.affine2 = nn.Linear(128, 2)
self.saved_actions = []
self.rewards = []
def forward(self, x):
"""
This is our network's forward pass; Backward pass is created implicitly
:param x:
:return:
"""
x = F.relu(self.affine1(x))
action_scores = self.affine2(x)
return F.softmax(action_scores)
###Output
_____no_output_____
###Markdown
Policy Gradient Framework

Selecting Actions

A state is given, for which the network (policy) computes the next action probability distribution. From this, a new action is sampled and is also appended to saved_actions.

When an Episode Ends

`reward` will denote a vector $\in\mathbb{R}^T$, such that $\mathrm{reward}_t$ is the normalized, discounted sum of rewards from timestep $t$ onward. The discount factor $\gamma$ is taken into account.

Following this, the REINFORCE algorithm is performed for every action. As we've seen in the theoretical part, this is an accepted way of differentiating stochastic units. `reinforce` is a method of `torch.autograd.Variable`. Note that each `action` instance is a sample drawn from the policy's probability distribution, and each one is reinforced with its corresponding normalized reward.

Following this, the optimizer is advanced as usual per neural networks.
###Code
def select_action(state, policy):
state = torch.from_numpy(state).float().unsqueeze(0)
probs = policy(Variable(state))
action = probs.multinomial()
policy.saved_actions.append(action)
return action.data
def finish_episode(policy, optimizer, gamma):
R = 0
rewards = []
for r in policy.rewards[::-1]:
R = r + gamma * R
rewards.insert(0, R)
rewards = torch.Tensor(rewards)
rewards = (rewards - rewards.mean()) / (rewards.std() + np.finfo(np.float32).eps)
for action, r in zip(policy.saved_actions, rewards):
action.reinforce(r)
optimizer.zero_grad()
autograd.backward(policy.saved_actions, [None for _ in policy.saved_actions])
optimizer.step()
del policy.rewards[:]
del policy.saved_actions[:]
###Output
_____no_output_____
###Markdown
Training the Policy Network

Initialization

We first load the `gym` environment, initialize a Policy instance and create an optimizer for it.

Training

Following this, we iterate for a number of episodes. As we've seen before, episodes are standalone interactions with the environment - each composed of $T$ timesteps. An environment interaction is roughly:
- Receive an action from the model (based on current state, $s_t$)
- Advance, receiving the tuple $(s_{t+1}, a_{t+1}, r_{t+1})$
- Append the reward to the running list of per-step rewards

We will stop when `running_reward` rises above some predefined threshold.
###Code
def main():
args = parse_args()
env = gym.make('CartPole-v0')
env.seed(args.seed)
torch.manual_seed(args.seed)
policy = Policy()
optimizer = optim.Adam(policy.parameters(), lr=1e-2)
def show_state(env, step=0, episode=0, info=""):
plt.figure(3)
plt.clf()
plt.imshow(env.render(mode='rgb_array'))
plt.title("{} | Episode: {:3d}, Step: {:4d}\n{}".format(env.spec.id, episode, step, info))
plt.axis('off')
display.clear_output(wait=True)
display.display(plt.gcf())
running_reward = 10
msgs = ['']
for i_episode in count(1):
state = env.reset()
frames = []
for t in range(10000): # Don't infinite loop while learning
action = select_action(state, policy)
state, reward, done, _ = env.step(action[0,0])
if args.render:
show_state(env, step=t, episode=i_episode, info='\n'.join(msgs))
policy.rewards.append(reward)
if done:
break
running_reward = running_reward * 0.99 + t * 0.01
finish_episode(policy, optimizer, args.gamma)
if i_episode % args.log_interval == 0:
msgs.append('Episode {}\tLast length: {:5d}\tAverage length: {:.2f}\n'.format(
i_episode, t, running_reward))
if not args.render:
print(msgs[-1])
if running_reward > env.spec.reward_threshold:
print("Solved! Running reward is now {} and "
"the last episode runs to {} time steps!".format(running_reward, t))
break
env.render(close=True)
env.close()
if __name__ == '__main__':
main()
###Output
Episode 10 Last length: 13 Average length: 10.64
Episode 20 Last length: 24 Average length: 11.37
Episode 30 Last length: 115 Average length: 15.63
Episode 40 Last length: 17 Average length: 19.16
Episode 50 Last length: 77 Average length: 22.33
Episode 60 Last length: 52 Average length: 24.56
Episode 70 Last length: 67 Average length: 28.63
|
EG_guessing_the_proof_last_iter_convergence.ipynb | ###Markdown
EG: SDP for the worst case of $\|F(x^{k+1})\|^2 - \|F(x^k)\|^2$

In fact, we study here a slightly different method:
$$x^{k+1} = x^k - \gamma F\left(x^k - \gamma F(x^k)\right).$$

Define parameters and the matrices for the problem, $\gamma = \frac{1}{5L}$
###Code
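# --- Added illustration (not part of the original SDP analysis) --------------
# Quick numerical sanity check of the update studied here,
#     x^{k+1} = x^k - gamma * F(x^k - gamma * F(x^k)),
# on a toy monotone, 1-Lipschitz operator F(x) = A x with A skew-symmetric.
# The choice of A, the starting point, and the horizon are arbitrary assumptions.
import numpy as np

A_toy = np.array([[0.0, 1.0], [-1.0, 0.0]])  # skew-symmetric => monotone, L = 1

def F_toy(x):
    return A_toy @ x

gamma_toy = 1.0 / 5.0
x_toy = np.array([1.0, 2.0])
for k in range(5):
    print(k, np.linalg.norm(F_toy(x_toy)) ** 2)  # ||F(x^k)||^2 should not increase
    x_toy = x_toy - gamma_toy * F_toy(x_toy - gamma_toy * F_toy(x_toy))
# ------------------------------------------------------------------------------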
# Lipschitz parameter and stepsize
L = 1.0
gamma1 = 1.0 / (5*L)
gamma2 = gamma1
# Matrices for SDP
M0 = 1.0*np.array([[0, 0, 0, 0],
[0, -1, 0, 0],
[0, 0, 0, 0],
[0, 0, 0, 1]])
M1 = 1.0*np.array([[0, 0, 0, 0],
[0, 1, -1.0/2, 0],
[0, -1.0/2, 0, 0],
[0, 0, 0, 0]])
M2 = 1.0*np.array([[0, 0, 0, 0],
[0, (L**2)*(gamma1**2)-1, 1, 0],
[0, 1, -1, 0],
[0, 0, 0, 0]])
M3 = 1.0*np.array([[0, 0, 0, 0],
[0, 0, 1.0/2, 0],
[0, 1.0/2, 0, -1.0/2],
[0, 0, -1.0/2, 0]])
M4 = 1.0*np.array([[0, 0, 0, 0],
[0, -1, 0, 1],
[0, 0, (L**2)*(gamma2**2), 0],
[0, 1, 0, -1]])
M5 = 1.0*np.array([[0, 0, 0, 0],
[0, 0, -gamma1/2, gamma1/2],
[0, -gamma1/2, gamma2, -gamma2/2],
[0, gamma1/2, -gamma2/2, 0]]) / gamma1
M6 = 1.0*np.array([[0, 0, 0, 0],
[0, (L**2)*(gamma1**2), -(L**2)*(gamma1*gamma2), 0],
[0, -(L**2)*(gamma1*gamma2), (L**2)*(gamma2**2)-1, 1],
[0, 0, 1, -1]])
M7 = 1.0*np.array([[0, 0.5, 0, 0],
[0.5, 0, 0, 0],
[0, 0, 0, 0],
[0, 0, 0, 0]])
M8 = 1.0*np.array([[L**2, 0, 0, 0],
[0, -1, 0, 0],
[0, 0, 0, 0],
[0, 0, 0, 0]])
M9 = 1.0*np.array([[0, 0, 0.5, 0],
[0, 0, -gamma1/2, 0],
[0.5, -gamma1/2, 0, 0],
[0, 0, 0, 0]])
M10 = 1.0*np.array([[L**2, -(L**2)*gamma1, 0, 0],
[-(L**2)*gamma1, (L**2)*(gamma1**2), 0, 0],
[0, 0, -1, 0],
[0, 0, 0, 0]])
M11 = 1.0*np.array([[0, 0, 0, 0.5],
[0, 0, 0, 0],
[0, 0, 0, -gamma2/2],
[0.5, 0, -gamma2/2, 0]])
M12 = 1.0*np.array([[(L**2), 0, -(L**2)*gamma2, 0],
[0, 0, 0, 0],
[-(L**2)*gamma2, 0, (L**2)*(gamma2**2), 0],
[0, 0, 0, -1]])
M13 = 1.0*np.array([[1, 0, 0, 0],
[0, 0, 0, 0],
[0, 0, 0, 0],
[0, 0, 0, 0]])
###Output
_____no_output_____
###Markdown
Define and solve SDP problem
###Code
%%time
G = cp.Variable((4,4), symmetric=True)
constraints = [G >> 0]
constraints += [cp.trace(M1 @ G) >= 0]
constraints += [cp.trace(M2 @ G) >= 0]
constraints += [cp.trace(M3 @ G) >= 0]
constraints += [cp.trace(M4 @ G) >= 0]
constraints += [cp.trace(M5 @ G) >= 0]
constraints += [cp.trace(M6 @ G) >= 0]
constraints += [cp.trace(M7 @ G) >= 0]
constraints += [cp.trace(M8 @ G) >= 0]
constraints += [cp.trace(M9 @ G) >= 0]
constraints += [cp.trace(M10 @ G) >= 0]
constraints += [cp.trace(M11 @ G) >= 0]
constraints += [cp.trace(M12 @ G) >= 0]
constraints += [cp.trace(M13 @ G) == 1]
prob = cp.Problem(cp.Maximize(cp.trace(M0 @ G)),
constraints)
prob.solve()
print("The optimal value is", prob.value)
print("G = ")
print(G.value)
###Output
The optimal value is 1.431010310248837e-10
G =
[[1. 0.07927615 0.07927615 0.07927615]
[0.07927615 0.30908499 0.30908499 0.30908499]
[0.07927615 0.30908499 0.30908499 0.30908499]
[0.07927615 0.30908499 0.30908499 0.30908499]]
###Markdown
Print dual variables
###Code
print("Dual variable 1 : ", constraints[1].dual_value)
print("Dual variable 2 : ", constraints[2].dual_value)
print("Dual variable 3 : ", constraints[3].dual_value)
print("Dual variable 4 : ", constraints[4].dual_value)
print("Dual variable 5 : ", constraints[5].dual_value)
print("Dual variable 6 : ", constraints[6].dual_value)
print("Dual variable 7 : ", constraints[7].dual_value)
print("Dual variable 8 : ", constraints[8].dual_value)
print("Dual variable 9 : ", constraints[9].dual_value)
print("Dual variable 10: ", constraints[10].dual_value)
print("Dual variable 11: ", constraints[11].dual_value)
print("Dual variable 12: ", constraints[12].dual_value)
print("Dual variable 13: ", constraints[13].dual_value)
###Output
Dual variable 1 : 4.77030987477042e-10
Dual variable 2 : 0.0
Dual variable 3 : 1.999999999801979
Dual variable 4 : 0.0
Dual variable 5 : 0.5803686163130072
Dual variable 6 : 1.337493848880875
Dual variable 7 : 0.0
Dual variable 8 : 0.0
Dual variable 9 : 0.0
Dual variable 10: 0.0
Dual variable 11: 0.0
Dual variable 12: 0.0
Dual variable 13: 1.6145668435654499e-10
###Markdown
Define parameters and the matrices for the problem $\gamma = \frac{1}{4L}$
###Code
# Lipschitz parameter and stepsize
L = 1.0
gamma1 = 1.0 / (4*L)
gamma2 = gamma1
# Matrices for SDP
M0 = 1.0*np.array([[0, 0, 0, 0],
[0, -1, 0, 0],
[0, 0, 0, 0],
[0, 0, 0, 1]])
M1 = 1.0*np.array([[0, 0, 0, 0],
[0, 1, -1.0/2, 0],
[0, -1.0/2, 0, 0],
[0, 0, 0, 0]])
M2 = 1.0*np.array([[0, 0, 0, 0],
[0, (L**2)*(gamma1**2)-1, 1, 0],
[0, 1, -1, 0],
[0, 0, 0, 0]])
M3 = 1.0*np.array([[0, 0, 0, 0],
[0, 0, 1.0/2, 0],
[0, 1.0/2, 0, -1.0/2],
[0, 0, -1.0/2, 0]])
M4 = 1.0*np.array([[0, 0, 0, 0],
[0, -1, 0, 1],
[0, 0, (L**2)*(gamma2**2), 0],
[0, 1, 0, -1]])
M5 = 1.0*np.array([[0, 0, 0, 0],
[0, 0, -gamma1/2, gamma1/2],
[0, -gamma1/2, gamma2, -gamma2/2],
[0, gamma1/2, -gamma2/2, 0]]) / gamma1
M6 = 1.0*np.array([[0, 0, 0, 0],
[0, (L**2)*(gamma1**2), -(L**2)*(gamma1*gamma2), 0],
[0, -(L**2)*(gamma1*gamma2), (L**2)*(gamma2**2)-1, 1],
[0, 0, 1, -1]])
M7 = 1.0*np.array([[0, 0.5, 0, 0],
[0.5, 0, 0, 0],
[0, 0, 0, 0],
[0, 0, 0, 0]])
M8 = 1.0*np.array([[L**2, 0, 0, 0],
[0, -1, 0, 0],
[0, 0, 0, 0],
[0, 0, 0, 0]])
M9 = 1.0*np.array([[0, 0, 0.5, 0],
[0, 0, -gamma1/2, 0],
[0.5, -gamma1/2, 0, 0],
[0, 0, 0, 0]])
M10 = 1.0*np.array([[L**2, -(L**2)*gamma1, 0, 0],
[-(L**2)*gamma1, (L**2)*(gamma1**2), 0, 0],
[0, 0, -1, 0],
[0, 0, 0, 0]])
M11 = 1.0*np.array([[0, 0, 0, 0.5],
[0, 0, 0, 0],
[0, 0, 0, -gamma2/2],
[0.5, 0, -gamma2/2, 0]])
M12 = 1.0*np.array([[(L**2), 0, -(L**2)*gamma2, 0],
[0, 0, 0, 0],
[-(L**2)*gamma2, 0, (L**2)*(gamma2**2), 0],
[0, 0, 0, -1]])
M13 = 1.0*np.array([[1, 0, 0, 0],
[0, 0, 0, 0],
[0, 0, 0, 0],
[0, 0, 0, 0]])
###Output
_____no_output_____
###Markdown
Define and solve SDP problem
###Code
%%time
G = cp.Variable((4,4), symmetric=True)
constraints = [G >> 0]
constraints += [cp.trace(M1 @ G) >= 0]
constraints += [cp.trace(M2 @ G) >= 0]
constraints += [cp.trace(M3 @ G) >= 0]
constraints += [cp.trace(M4 @ G) >= 0]
constraints += [cp.trace(M5 @ G) >= 0]
constraints += [cp.trace(M6 @ G) >= 0]
constraints += [cp.trace(M7 @ G) >= 0]
constraints += [cp.trace(M8 @ G) >= 0]
constraints += [cp.trace(M9 @ G) >= 0]
constraints += [cp.trace(M10 @ G) >= 0]
constraints += [cp.trace(M11 @ G) >= 0]
constraints += [cp.trace(M12 @ G) >= 0]
constraints += [cp.trace(M13 @ G) == 1]
prob = cp.Problem(cp.Maximize(cp.trace(M0 @ G)),
constraints)
prob.solve()
print("The optimal value is", prob.value)
print("G = ")
print(G.value)
###Output
The optimal value is -1.1110082932697107e-05
G =
[[1.0000053 0.09827903 0.09827698 0.09829571]
[0.09827903 0.30086963 0.30084578 0.30086077]
[0.09827698 0.30084578 0.30082181 0.30083944]
[0.09829571 0.30086077 0.30083944 0.30085852]]
###Markdown
Print dual variables
###Code
print("Dual variable 1 : ", constraints[1].dual_value)
print("Dual variable 2 : ", constraints[2].dual_value)
print("Dual variable 3 : ", constraints[3].dual_value)
print("Dual variable 4 : ", constraints[4].dual_value)
print("Dual variable 5 : ", constraints[5].dual_value)
print("Dual variable 6 : ", constraints[6].dual_value)
print("Dual variable 7 : ", constraints[7].dual_value)
print("Dual variable 8 : ", constraints[8].dual_value)
print("Dual variable 9 : ", constraints[9].dual_value)
print("Dual variable 10: ", constraints[10].dual_value)
print("Dual variable 11: ", constraints[11].dual_value)
print("Dual variable 12: ", constraints[12].dual_value)
print("Dual variable 13: ", constraints[13].dual_value)
###Output
Dual variable 1 : 4.001109602667928e-05
Dual variable 2 : 0.0
Dual variable 3 : 2.0000650018484327
Dual variable 4 : 0.0
Dual variable 5 : 0.5859157618454286
Dual variable 6 : 1.331117064465093
Dual variable 7 : 0.0
Dual variable 8 : 0.0
Dual variable 9 : 0.0
Dual variable 10: 0.0
Dual variable 11: 0.0
Dual variable 12: 0.0
Dual variable 13: -2.0038238481393826e-05
###Markdown
Define parameters and the matrices for the problem $\gamma = \frac{1}{3L}$
###Code
# Lipschitz parameter and stepsize
L = 1.0
gamma1 = 1.0 / (3*L)
gamma2 = gamma1
# Matrices for SDP
M0 = 1.0*np.array([[0, 0, 0, 0],
[0, -1, 0, 0],
[0, 0, 0, 0],
[0, 0, 0, 1]])
M1 = 1.0*np.array([[0, 0, 0, 0],
[0, 1, -1.0/2, 0],
[0, -1.0/2, 0, 0],
[0, 0, 0, 0]])
M2 = 1.0*np.array([[0, 0, 0, 0],
[0, (L**2)*(gamma1**2)-1, 1, 0],
[0, 1, -1, 0],
[0, 0, 0, 0]])
M3 = 1.0*np.array([[0, 0, 0, 0],
[0, 0, 1.0/2, 0],
[0, 1.0/2, 0, -1.0/2],
[0, 0, -1.0/2, 0]])
M4 = 1.0*np.array([[0, 0, 0, 0],
[0, -1, 0, 1],
[0, 0, (L**2)*(gamma2**2), 0],
[0, 1, 0, -1]])
M5 = 1.0*np.array([[0, 0, 0, 0],
[0, 0, -gamma1/2, gamma1/2],
[0, -gamma1/2, gamma2, -gamma2/2],
[0, gamma1/2, -gamma2/2, 0]]) / gamma1
M6 = 1.0*np.array([[0, 0, 0, 0],
[0, (L**2)*(gamma1**2), -(L**2)*(gamma1*gamma2), 0],
[0, -(L**2)*(gamma1*gamma2), (L**2)*(gamma2**2)-1, 1],
[0, 0, 1, -1]])
M7 = 1.0*np.array([[0, 0.5, 0, 0],
[0.5, 0, 0, 0],
[0, 0, 0, 0],
[0, 0, 0, 0]])
M8 = 1.0*np.array([[L**2, 0, 0, 0],
[0, -1, 0, 0],
[0, 0, 0, 0],
[0, 0, 0, 0]])
M9 = 1.0*np.array([[0, 0, 0.5, 0],
[0, 0, -gamma1/2, 0],
[0.5, -gamma1/2, 0, 0],
[0, 0, 0, 0]])
M10 = 1.0*np.array([[L**2, -(L**2)*gamma1, 0, 0],
[-(L**2)*gamma1, (L**2)*(gamma1**2), 0, 0],
[0, 0, -1, 0],
[0, 0, 0, 0]])
M11 = 1.0*np.array([[0, 0, 0, 0.5],
[0, 0, 0, 0],
[0, 0, 0, -gamma2/2],
[0.5, 0, -gamma2/2, 0]])
M12 = 1.0*np.array([[(L**2), 0, -(L**2)*gamma2, 0],
[0, 0, 0, 0],
[-(L**2)*gamma2, 0, (L**2)*(gamma2**2), 0],
[0, 0, 0, -1]])
M13 = 1.0*np.array([[1, 0, 0, 0],
[0, 0, 0, 0],
[0, 0, 0, 0],
[0, 0, 0, 0]])
###Output
_____no_output_____
###Markdown
Define and solve SDP problem
###Code
%%time
G = cp.Variable((4,4), symmetric=True)
constraints = [G >> 0]
constraints += [cp.trace(M1 @ G) >= 0]
constraints += [cp.trace(M2 @ G) >= 0]
constraints += [cp.trace(M3 @ G) >= 0]
constraints += [cp.trace(M4 @ G) >= 0]
constraints += [cp.trace(M5 @ G) >= 0]
constraints += [cp.trace(M6 @ G) >= 0]
constraints += [cp.trace(M7 @ G) >= 0]
constraints += [cp.trace(M8 @ G) >= 0]
constraints += [cp.trace(M9 @ G) >= 0]
constraints += [cp.trace(M10 @ G) >= 0]
constraints += [cp.trace(M11 @ G) >= 0]
constraints += [cp.trace(M12 @ G) >= 0]
constraints += [cp.trace(M13 @ G) == 1]
prob = cp.Problem(cp.Maximize(cp.trace(M0 @ G)),
constraints)
prob.solve()
print("The optimal value is", prob.value)
print("G = ")
print(G.value)
###Output
The optimal value is -2.887441768906207e-06
G =
[[1.00000046 0.12070312 0.12070496 0.12070714]
[0.12070312 0.32381443 0.32381085 0.32381296]
[0.12070496 0.32381085 0.32381018 0.32381151]
[0.12070714 0.32381296 0.32381151 0.32381154]]
###Markdown
Print dual variables
###Code
print("Dual variable 1 : ", constraints[1].dual_value)
print("Dual variable 2 : ", constraints[2].dual_value)
print("Dual variable 3 : ", constraints[3].dual_value)
print("Dual variable 4 : ", constraints[4].dual_value)
print("Dual variable 5 : ", constraints[5].dual_value)
print("Dual variable 6 : ", constraints[6].dual_value)
print("Dual variable 7 : ", constraints[7].dual_value)
print("Dual variable 8 : ", constraints[8].dual_value)
print("Dual variable 9 : ", constraints[9].dual_value)
print("Dual variable 10: ", constraints[10].dual_value)
print("Dual variable 11: ", constraints[11].dual_value)
print("Dual variable 12: ", constraints[12].dual_value)
print("Dual variable 13: ", constraints[13].dual_value)
###Output
Dual variable 1 : 2.755998537282654e-06
Dual variable 2 : 0.0
Dual variable 3 : 2.0000049348919617
Dual variable 4 : 0.0
Dual variable 5 : 0.43020469368251846
Dual variable 6 : 1.427219078631502
Dual variable 7 : 0.0
Dual variable 8 : 0.0
Dual variable 9 : 0.0
Dual variable 10: 0.0
Dual variable 11: 0.0
Dual variable 12: 0.0
Dual variable 13: -2.66371289206901e-06
###Markdown
Define parameters and the matrices for the problem $\gamma = \frac{1}{2L}$
###Code
# Lipschitz parameter and stepsize
L = 1.0
gamma1 = 1.0 / (2*L)
gamma2 = gamma1
# Matrices for SDP
M0 = 1.0*np.array([[0, 0, 0, 0],
[0, -1, 0, 0],
[0, 0, 0, 0],
[0, 0, 0, 1]])
M1 = 1.0*np.array([[0, 0, 0, 0],
[0, 1, -1.0/2, 0],
[0, -1.0/2, 0, 0],
[0, 0, 0, 0]])
M2 = 1.0*np.array([[0, 0, 0, 0],
[0, (L**2)*(gamma1**2)-1, 1, 0],
[0, 1, -1, 0],
[0, 0, 0, 0]])
M3 = 1.0*np.array([[0, 0, 0, 0],
[0, 0, 1.0/2, 0],
[0, 1.0/2, 0, -1.0/2],
[0, 0, -1.0/2, 0]])
M4 = 1.0*np.array([[0, 0, 0, 0],
[0, -1, 0, 1],
[0, 0, (L**2)*(gamma2**2), 0],
[0, 1, 0, -1]])
M5 = 1.0*np.array([[0, 0, 0, 0],
[0, 0, -gamma1/2, gamma1/2],
[0, -gamma1/2, gamma2, -gamma2/2],
[0, gamma1/2, -gamma2/2, 0]]) / gamma1
M6 = 1.0*np.array([[0, 0, 0, 0],
[0, (L**2)*(gamma1**2), -(L**2)*(gamma1*gamma2), 0],
[0, -(L**2)*(gamma1*gamma2), (L**2)*(gamma2**2)-1, 1],
[0, 0, 1, -1]])
M7 = 1.0*np.array([[0, 0.5, 0, 0],
[0.5, 0, 0, 0],
[0, 0, 0, 0],
[0, 0, 0, 0]])
M8 = 1.0*np.array([[L**2, 0, 0, 0],
[0, -1, 0, 0],
[0, 0, 0, 0],
[0, 0, 0, 0]])
M9 = 1.0*np.array([[0, 0, 0.5, 0],
[0, 0, -gamma1/2, 0],
[0.5, -gamma1/2, 0, 0],
[0, 0, 0, 0]])
M10 = 1.0*np.array([[L**2, -(L**2)*gamma1, 0, 0],
[-(L**2)*gamma1, (L**2)*(gamma1**2), 0, 0],
[0, 0, -1, 0],
[0, 0, 0, 0]])
M11 = 1.0*np.array([[0, 0, 0, 0.5],
[0, 0, 0, 0],
[0, 0, 0, -gamma2/2],
[0.5, 0, -gamma2/2, 0]])
M12 = 1.0*np.array([[(L**2), 0, -(L**2)*gamma2, 0],
[0, 0, 0, 0],
[-(L**2)*gamma2, 0, (L**2)*(gamma2**2), 0],
[0, 0, 0, -1]])
M13 = 1.0*np.array([[1, 0, 0, 0],
[0, 0, 0, 0],
[0, 0, 0, 0],
[0, 0, 0, 0]])
###Output
_____no_output_____
###Markdown
Define and solve SDP problem
###Code
%%time
G = cp.Variable((4,4), symmetric=True)
constraints = [G >> 0]
constraints += [cp.trace(M1 @ G) >= 0]
constraints += [cp.trace(M2 @ G) >= 0]
constraints += [cp.trace(M3 @ G) >= 0]
constraints += [cp.trace(M4 @ G) >= 0]
constraints += [cp.trace(M5 @ G) >= 0]
constraints += [cp.trace(M6 @ G) >= 0]
constraints += [cp.trace(M7 @ G) >= 0]
constraints += [cp.trace(M8 @ G) >= 0]
constraints += [cp.trace(M9 @ G) >= 0]
constraints += [cp.trace(M10 @ G) >= 0]
constraints += [cp.trace(M11 @ G) >= 0]
constraints += [cp.trace(M12 @ G) >= 0]
constraints += [cp.trace(M13 @ G) == 1]
prob = cp.Problem(cp.Maximize(cp.trace(M0 @ G)),
constraints)
prob.solve()
print("The optimal value is", prob.value)
print("G = ")
print(G.value)
###Output
The optimal value is -2.2439360946036047e-06
G =
[[1.00000069 0.16229743 0.16230239 0.16229271]
[0.16229743 0.29713577 0.29713687 0.29713668]
[0.16230239 0.29713687 0.29713931 0.29713707]
[0.16229271 0.29713668 0.29713707 0.29713353]]
###Markdown
Print dual variables
###Code
print("Dual variable 1 : ", constraints[1].dual_value)
print("Dual variable 2 : ", constraints[2].dual_value)
print("Dual variable 3 : ", constraints[3].dual_value)
print("Dual variable 4 : ", constraints[4].dual_value)
print("Dual variable 5 : ", constraints[5].dual_value)
print("Dual variable 6 : ", constraints[6].dual_value)
print("Dual variable 7 : ", constraints[7].dual_value)
print("Dual variable 8 : ", constraints[8].dual_value)
print("Dual variable 9 : ", constraints[9].dual_value)
print("Dual variable 10: ", constraints[10].dual_value)
print("Dual variable 11: ", constraints[11].dual_value)
print("Dual variable 12: ", constraints[12].dual_value)
print("Dual variable 13: ", constraints[13].dual_value)
###Output
Dual variable 1 : 0.0
Dual variable 2 : 0.0
Dual variable 3 : 2.0000119980201756
Dual variable 4 : 0.0
Dual variable 5 : 0.4212982062584506
Dual variable 6 : 1.3436820772438518
Dual variable 7 : 0.0
Dual variable 8 : 0.0
Dual variable 9 : 0.0
Dual variable 10: 0.0
Dual variable 11: 0.0
Dual variable 12: 0.0
Dual variable 13: -2.320994038404713e-06
|
06_q_learn_fin_2_mlp.ipynb | ###Markdown
Reinforcement Learning

© Dr Yves J Hilpisch | The Python Quants GmbH

[quants@dev Discord Server](https://discord.gg/uJPtp9Awaj) | [@quants_dev](https://twitter.com/quants_dev) | [email protected]

Imports
###Code
import os
import math
import random
import numpy as np
import pandas as pd
from pylab import plt, mpl
plt.style.use('seaborn')
mpl.rcParams['font.family'] = 'serif'
np.set_printoptions(precision=4, suppress=True)
os.environ['PYTHONHASHSEED'] = '0'
%config InlineBackend.figure_format = 'svg'
import warnings as w
w.simplefilter('ignore')
###Output
_____no_output_____
###Markdown
Improved Finance Environment
###Code
class observation_space:
def __init__(self, n):
self.shape = (n,)
class action_space:
def __init__(self, n):
self.n = n
def seed(self, seed):
pass
def sample(self):
return random.randint(0, self.n - 1)
class Finance:
url = 'http://hilpisch.com/aiif_eikon_eod_data.csv'
def __init__(self, symbol, features, window, lags,
leverage=1, min_performance=0.85,
start=0, end=None, mu=None, std=None):
self.symbol = symbol
self.features = features
self.n_features = len(features)
self.window = window
self.lags = lags
self.leverage = leverage
self.min_performance = min_performance
self.start = start
self.end = end
self.mu = mu
self.std = std
self.observation_space = observation_space(self.lags)
self.action_space = action_space(2)
self._get_data()
self._prepare_data()
def _get_data(self):
self.raw = pd.read_csv(self.url, index_col=0,
parse_dates=True).dropna()
def _prepare_data(self):
self.data = pd.DataFrame(self.raw[self.symbol])
self.data = self.data.iloc[self.start:]
self.data['r'] = np.log(self.data / self.data.shift(1))
self.data.dropna(inplace=True)
self.data['s'] = self.data[self.symbol].rolling(
self.window).mean()
self.data['m'] = self.data['r'].rolling(self.window).mean()
self.data['v'] = self.data['r'].rolling(self.window).std()
self.data.dropna(inplace=True)
if self.mu is None:
self.mu = self.data.mean()
self.std = self.data.std()
self.data_ = (self.data - self.mu) / self.std
self.data_['d'] = np.where(self.data['r'] > 0, 1, 0)
self.data_['d'] = self.data_['d'].astype(int)
if self.end is not None:
self.data = self.data.iloc[:self.end - self.start]
self.data_ = self.data_.iloc[:self.end - self.start]
def _get_state(self):
return self.data_[self.features].iloc[self.bar -
self.lags:self.bar]
def seed(self, seed):
random.seed(seed)
np.random.seed(seed)
def reset(self):
self.treward = 0
self.accuracy = 0
self.performance = 1
self.bar = self.lags
state = self.data_[self.features].iloc[self.bar-
self.lags:self.bar]
return state.values
def step(self, action):
correct = action == self.data_['d'].iloc[self.bar]
ret = self.data['r'].iloc[self.bar] * self.leverage
reward_1 = 1 if correct else 0
reward_2 = abs(ret) if correct else -abs(ret)
self.treward += reward_1
self.bar += 1
self.accuracy = self.treward / (self.bar - self.lags)
self.performance *= math.exp(reward_2)
if self.bar >= len(self.data):
done = True
elif reward_1 == 1:
done = False
elif (self.performance < self.min_performance and
self.bar > self.lags + 15):
done = True
else:
done = False
state = self._get_state()
info = {}
return state.values, reward_1 + reward_2 * 252, done, info
env = Finance('EUR=', ['EUR=', 'r', 'm'], 10, 5)
a = env.action_space.sample()
a
env.reset()
env.step(a)
###Output
_____no_output_____
###Markdown
Improved Financial QL Agent
###Code
from collections import deque
from sklearn.neural_network import MLPRegressor
class FQLAgent:
def __init__(self, hidden_layers, hidden_units, learning_rate,
learn_env, valid_env):
self.learn_env = learn_env
self.valid_env = valid_env
self.epsilon = 1.0
self.epsilon_min = 0.1
self.epsilon_decay = 0.99
self.learning_rate = learning_rate
self.gamma = 0.95
self.batch_size = 128
self.max_treward = 0
self.trewards = list()
self.averages = list()
self.performances = list()
self.aperformances = list()
self.vperformances = list()
self.memory = deque(maxlen=2000)
self.model = self._build_model(hidden_layers, hidden_units,
learning_rate)
def _build_model(self, hl, hu, lr):
model = MLPRegressor(hidden_layer_sizes=hl * [hu],
solver='adam', learning_rate='constant',
learning_rate_init=lr,
random_state=100, max_iter=500,
warm_start=True
)
model.fit(np.random.standard_normal((2, self.learn_env.lags *
self.learn_env.n_features)),
np.random.standard_normal((2, 2)))
return model
def act(self, state):
if random.random() <= self.epsilon:
return self.learn_env.action_space.sample()
action = self.model.predict(state.flatten().reshape(1, -1))[0]
return np.argmax(action)
def replay(self):
batch = random.sample(self.memory, self.batch_size)
for state, action, reward, next_state, done in batch:
if not done:
reward += self.gamma * np.amax(
self.model.predict(next_state.flatten().reshape(1, -1))[0])
target = self.model.predict(state.flatten().reshape(1, -1))
target[0, action] = reward
self.model.partial_fit(state.flatten().reshape(1, -1), target)
if self.epsilon > self.epsilon_min:
self.epsilon *= self.epsilon_decay
def learn(self, episodes):
for e in range(1, episodes + 1):
state = self.learn_env.reset()
state = np.reshape(state, [1, self.learn_env.lags,
self.learn_env.n_features])
for _ in range(10000):
action = self.act(state)
next_state, reward, done, info = \
self.learn_env.step(action)
next_state = np.reshape(next_state,
[1, self.learn_env.lags,
self.learn_env.n_features])
self.memory.append([state, action, reward,
next_state, done])
state = next_state
if done:
treward = _ + 1
self.trewards.append(treward)
av = sum(self.trewards[-25:]) / 25
perf = self.learn_env.performance
self.averages.append(av)
self.performances.append(perf)
self.aperformances.append(
sum(self.performances[-25:]) / 25)
self.max_treward = max(self.max_treward, treward)
templ = 'episode: {:2d}/{} | treward: {:4d} | '
templ += 'perf: {:5.3f} | av: {:5.1f} | max: {:4d}'
print(templ.format(e, episodes, treward, perf,
av, self.max_treward), end='\r')
break
self.validate(e, episodes)
if len(self.memory) > self.batch_size:
self.replay()
print()
def validate(self, e, episodes):
state = self.valid_env.reset()
for _ in range(10000):
action = np.argmax(self.model.predict(state.flatten().reshape(1, -1))[0])
next_state, reward, done, info = self.valid_env.step(action)
state = np.reshape(next_state, [self.valid_env.lags,
self.valid_env.n_features])
if done:
treward = _ + 1
perf = self.valid_env.performance
self.vperformances.append(perf)
if e % 20 == 0:
templ = 71 * '='
templ += '\nepisode: {:2d}/{} | VALIDATION | '
templ += 'treward: {:4d} | perf: {:5.3f} | '
templ += 'eps: {:.2f}\n'
templ += 71 * '='
print(templ.format(e, episodes, treward,
perf, self.epsilon))
break
symbol = 'EUR='
features = ['r', 's', 'm', 'v']
a = 0
b = 2000
c = 500
learn_env = Finance(symbol, features, window=10, lags=6,
leverage=1, min_performance=0.85,
start=a, end=a + b, mu=None, std=None)
learn_env.data.info()
valid_env = Finance(symbol, features, window=learn_env.window,
lags=learn_env.lags, leverage=learn_env.leverage,
min_performance=learn_env.min_performance,
start=a + b, end=a + b + c,
mu=learn_env.mu, std=learn_env.std)
valid_env.data.info()
agent = FQLAgent(2, 24, 0.0001, learn_env, valid_env)
episodes = 61
%time agent.learn(episodes)
agent.epsilon
plt.figure(figsize=(10, 6))
x = range(1, len(agent.averages) + 1)
y = np.polyval(np.polyfit(x, agent.averages, deg=3), x)
plt.plot(agent.averages, label='moving average')
plt.plot(x, y, 'r--', label='regression')
plt.xlabel('episodes')
plt.ylabel('total reward')
plt.legend();
plt.figure(figsize=(10, 6))
x = range(1, len(agent.performances) + 1)
y = np.polyval(np.polyfit(x, agent.performances, deg=3), x)
y_ = np.polyval(np.polyfit(x, agent.vperformances, deg=3), x)
plt.plot(agent.performances[:], label='training')
plt.plot(agent.vperformances[:], label='validation')
plt.plot(x, y, 'r--', label='regression (train)')
plt.plot(x, y_, 'r-.', label='regression (valid)')
plt.xlabel('episodes')
plt.ylabel('gross performance')
plt.legend();
###Output
_____no_output_____ |
sagemaker-python-sdk/tensorflow_distributed_mnist/tensorflow_distributed_mnist_neo.ipynb | ###Markdown
TensorFlow BYOM: Train with Custom Training Script, Compile with Neo, and Deploy on SageMaker

This notebook can be compared to the [TensorFlow MNIST distributed training notebook](https://github.com/awslabs/amazon-sagemaker-examples/blob/master/sagemaker-python-sdk/tensorflow_distributed_mnist/tensorflow_distributed_mnist.ipynb) in terms of its functionality. We will do the same classification task, but this time we will compile the trained model using the Neo API backend, to optimize for our choice of hardware. Finally, we set up a real-time hosted endpoint in SageMaker for our compiled model using the Neo Deep Learning Runtime.

Set up the environment
###Code
import os
import sagemaker
from sagemaker import get_execution_role
sagemaker_session = sagemaker.Session()
role = get_execution_role()
###Output
_____no_output_____
###Markdown
Download the MNIST dataset
###Code
import utils
from tensorflow.contrib.learn.python.learn.datasets import mnist
import tensorflow as tf
data_sets = mnist.read_data_sets('data', dtype=tf.uint8, reshape=False, validation_size=5000)
utils.convert_to(data_sets.train, 'train', 'data')
utils.convert_to(data_sets.validation, 'validation', 'data')
utils.convert_to(data_sets.test, 'test', 'data')
###Output
_____no_output_____
###Markdown
Upload the data

We use the ```sagemaker.Session.upload_data``` function to upload our datasets to an S3 location. The return value `inputs` identifies the location -- we will use this later when we start the training job.
###Code
inputs = sagemaker_session.upload_data(path='data', key_prefix='data/DEMO-mnist')
###Output
_____no_output_____
###Markdown
Construct a script for distributed training Here is the full code for the network model:
###Code
!cat 'mnist.py'
###Output
_____no_output_____
###Markdown
The script here is an adaptation of the [TensorFlow MNIST example](https://github.com/tensorflow/models/tree/master/official/mnist). It provides a ```model_fn(features, labels, mode)```, which is used for training, evaluation and inference. See the [TensorFlow MNIST distributed training notebook](https://github.com/awslabs/amazon-sagemaker-examples/blob/master/sagemaker-python-sdk/tensorflow_distributed_mnist/tensorflow_distributed_mnist.ipynb) for more details about the training script.

At the end of the training script, there are two additional functions, to be used with the Neo Deep Learning Runtime:
* `neo_preprocess(payload, content_type)`: Function that takes in the payload and Content-Type of each incoming request and returns a NumPy array
* `neo_postprocess(result)`: Function that takes the prediction results produced by the Deep Learning Runtime and returns the response body

(A sketch of what these two hooks might look like is shown after the next code cell.)

Create a training job using the sagemaker.TensorFlow estimator
###Code
from sagemaker.tensorflow import TensorFlow
mnist_estimator = TensorFlow(entry_point='mnist.py',
role=role,
framework_version='1.11.0',
training_steps=1000,
evaluation_steps=100,
train_instance_count=2,
train_instance_type='ml.c4.xlarge')
mnist_estimator.fit(inputs)
###Output
_____no_output_____
###Markdown
The **```fit```** method will create a training job on two **ml.c4.xlarge** instances. The logs above will show the instances doing training, evaluation, and incrementing the number of **training steps**. At the end of the training, the training job will generate a saved model for TF serving. Deploy the trained model to prepare for predictions (the old way)The deploy() method creates an endpoint which serves prediction requests in real time.
###Code
mnist_predictor = mnist_estimator.deploy(initial_instance_count=1,
instance_type='ml.m4.xlarge')
###Output
_____no_output_____
###Markdown
Invoking the endpoint
###Code
import numpy as np
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)
for i in range(10):
data = mnist.test.images[i].tolist()
tensor_proto = tf.make_tensor_proto(values=np.asarray(data), shape=[1, len(data)], dtype=tf.float32)
predict_response = mnist_predictor.predict(tensor_proto)
print("========================================")
label = np.argmax(mnist.test.labels[i])
print("label is {}".format(label))
prediction = predict_response['outputs']['classes']['int64_val'][0]
print("prediction is {}".format(prediction))
###Output
_____no_output_____
###Markdown
Deleting the endpoint
###Code
sagemaker.Session().delete_endpoint(mnist_predictor.endpoint)
###Output
_____no_output_____
###Markdown
Deploy the trained model using NeoNow the model is ready to be compiled by Neo to be optimized for our hardware of choice. We are using the ``TensorFlowEstimator.compile_model`` method to do this. For this example, our target hardware is ``'ml_c5'``. You can change this to other supported target hardware if you prefer. Compiling the modelThe ``input_shape`` is the definition for the model's input tensor and ``output_path`` is where the compiled model will be stored in S3. **Important. If the following command results in a permission error, scroll up and locate the value of the execution role returned by `get_execution_role()`. The role must have access to the S3 bucket specified in ``output_path``.**
###Code
output_path = '/'.join(mnist_estimator.output_path.split('/')[:-1])
optimized_estimator = mnist_estimator.compile_model(target_instance_family='ml_c5',
                                              input_shape={'data':[1, 784]},  # Batch size 1, flattened 28x28 MNIST images (784 features).
output_path=output_path,
framework='tensorflow', framework_version='1.11.0')
###Output
_____no_output_____
###Markdown
Deploying the compiled model
###Code
optimized_predictor = optimized_estimator.deploy(initial_instance_count = 1,
instance_type = 'ml.c5.4xlarge')
# The neo_preprocess() function expects an image in the request body
# But the MNIST example data is saved as NumPy array.
# So we convert it to PNG before invoking the endpoint
def png_serializer(data):
im = PIL.Image.fromarray(data.reshape((28,28))*255).convert('L')
f = io.BytesIO()
im.save(f, format='png')
f.seek(0)
return f.read()
optimized_predictor.content_type = 'application/x-image'
optimized_predictor.serializer = png_serializer
###Output
_____no_output_____
###Markdown
Invoking the endpoint
###Code
from tensorflow.examples.tutorials.mnist import input_data
from IPython import display
import PIL.Image
import io
mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)
for i in range(10):
data = mnist.test.images[i]
# Display image
im = PIL.Image.fromarray(data.reshape((28,28))*255).convert('L')
display.display(im)
# Invoke endpoint with image
predict_response = optimized_predictor.predict(data)
print("========================================")
label = np.argmax(mnist.test.labels[i])
print("label is {}".format(label))
prediction = predict_response
print("prediction is {}".format(prediction))
###Output
_____no_output_____
###Markdown
Deleting endpoint
###Code
sagemaker.Session().delete_endpoint(optimized_predictor.endpoint)
###Output
_____no_output_____ |
03_optimizers.ipynb | ###Markdown
Optimizers> This module implements interfaces and several known optimizers that can be tested against different functions
###Code
#exporti
def tuple_float_cast(_tuple):
x, y = _tuple
return np.round(float(x), 3), np.round(float(y), 3)
class History(list):
"""
This object stores the states through which an optimizer has passed through.
Normally we would have just a list for this but because we are storing `jax` states,
we need to subclass the `__repr__` method so we process the output a bit
(displaying the parameters of the state) and not the state in itself
"""
def __repr__(self):
if not hasattr(self, '_get_params'):
return super().__repr__()
else:
elements = [tuple_float_cast(self._get_params(state)) for state in self]
return str(elements)
#hide
h = History([(1, 2), (2, 3), (4, 5)])
assert str(h) == str([(1, 2), (2, 3), (4, 5)])
h._get_params = lambda x: x
assert str(h) == str([(1.0, 2.0), (2.0, 3.0), (4.0, 5.0)]), str(h)
#exporti
"""
Each class of optimizers has a special calling convention.
It's unfortunate that we can't just subclass the optimizers and inject our custom method
and we need to do this. This happens because the optimizers are written in a pure functional
style, and the methods are just functions that share state between them, back and forth.
"""
def _derivatives_based_update(i, state, update_fn, get_params_fn, grad_fn):
params = get_params_fn(state)
grads = grad_fn(*params)
return update_fn(i, grads, state)
def _derivatives_free_update(i, state, update_fn, function):
return update_fn(i, function, state)
#export
class optimize:
def __init__(self, function):
self.function = function
self.history = History()
def using(self, optimizer=(None, None, None), name='sgd', derivatives_based=True, render_decorator: Callable=None):
self.derivatives_based = derivatives_based
self.__init, self.__update, self._get_params = optimizer
self.render_decorator = render_decorator
# add this to the history object so it can extract the value for presenting them in __repr__
# otherwise we will see a list of `jax` states
self.history._get_params = self._get_params
        # functional polymorphism ?!
if derivatives_based:
self._update_fn = partial(
_derivatives_based_update,
update_fn=self.__update,
get_params_fn=self._get_params,
grad_fn=grad(self.function, argnums=(0, 1))
)
else:
self._update_fn = partial(
_derivatives_free_update,
update_fn=self.__update,
function=self.function
)
self.optimizer = optimizer
self.optimizer_name = name
return self
def start_from(self, params):
self.state = self.__init(tuple(params))
self.history.append(self.state)
return self
def update(self, nr_iterations=1):
# we add the initial state as state 0, but we haven't made any udpdates yet
# so even if we have something in history, the current_iteration is one behind
current_iteration = len(self.history) - 1
for i in range(nr_iterations):
self.state = self._update_fn(current_iteration + i, self.state)
self.history.append(self.state)
return self.history
from optimisations.functions import himmelblau
from jax.experimental.optimizers import sgd
(
optimize(himmelblau())
.using(sgd(step_size=0.001))
.start_from([1., 1.])
.update(10)
)
###Output
WARNING:absl:No GPU/TPU found, falling back to CPU. (Set TF_CPP_MIN_LOG_LEVEL=0 and rerun for more info.)
###Markdown
`JAX` is kind of rough, the optimizers (for now) sit inside the `experimental` submodule which means that their API might change in the future. An optimizer is a function that has some initialization parameters, and which returns 3 functions:* `init` - is a function to which you pass all the initial values of your hidden parameters and you get back a `state` object, which is a `pytree` structure (some internal representation). This is a bit confusing and I'm guessing this intermediate `pytree` thing might disappear from the API in the near future.* `update` - is the function that does a single update pass over all the parameters. It receives as inputs: * `i` - the count of the current iteration. This is useful because, depending on the optimizer implementation, you can have different learning properties at each iteration (like some annealing strategy for the learning rate, etc.) * `g` - the gradient values (you get these by extracting the params from the `state` object using the `get_params` function below; these are the variables that will get updated by the optimizer). Then pass these onto your gradient function and use its results as input to this function. * `state` - that `pytree` structure that you've got after calling `init` (and which you'll constantly replace with the result of this `update` function call)* `get_params` - a `utils` function that extracts the param object from a known `state` object (which is a `pytree`). So the full flow of the above, in code, is shown below:
###Code
from jax.experimental.optimizers import sgd
init, update, get_params = sgd(step_size=0.001) # instantiate the optimizer
state = init((1., 2.)) # initialize the optimizer state with some initial weights and get a state back
print(state)
print(get_params(state)) # you use this function to extract the weight values from the state object
grad_function = grad(himmelblau(), argnums=(0, 1)) # you build the function that will compute your gradients
# The argnum part is needed because we have to specify that there are two parameters the parent function uses, and we want the derivative to both of them.
state = update(0, grad_function(*get_params(state)), state) # you call update with an iteration number, the gradient of the params, and the previous state, and you get back a new state
print(state)
from jax import grad
from optimisations.functions import himmelblau
grad(himmelblau(), argnums=(0, 1))(*get_params(state))
###Output
_____no_output_____
###Markdown
And you can see the result of running 10 iterations of the above, in a loop. It moves in some direction, and I'm sure you're eager to see where on the graph...
###Code
grad_function = grad(himmelblau(), argnums=(0, 1))
def run():
state = init((1., 2.))
for i in range(10):
params = get_params(state)
yield params
state = update(i, grad_function(*params), state)
[(float(x), float(y)) for x, y in run()]
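# (Sketch) To actually see where those iterates go, one option is to plot the
# trajectory produced by run(); matplotlib is assumed to be available here.
import matplotlib.pyplot as plt

trajectory = [(float(x), float(y)) for x, y in run()]
plt.plot([p[0] for p in trajectory], [p[1] for p in trajectory], 'o-')
plt.xlabel('x')
plt.ylabel('y')
plt.title('sgd iterates on himmelblau')
plt.show()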
#exporti
import re
def heuristic_get_jax_optimizer_name(init):
"""
Tries to find the name of the optimiser used to instantiate the init function, by parting the
string representation of the given function.
JAX based optimisers usually have the following string representation:
function jax.experimental.optimizers.sgd.<locals>.init(x0)
function jax.experimental.optimizers.sgd.<locals>.update(i, g, x)
function jax.experimental.optimizers.sgd.<locals>.get_params(x)
"""
function_name = str(init)
result = re.search("function\s+([^\.]+)", function_name)
if result is not None:
return result.group(1)
else:
        return None
heuristic_get_jax_optimizer_name(init)
#exporti
def build_optimizer_params(elements_list):
optimizer_params = dict()
if len(elements_list) == 1:
optimizer_params['optimizer'] = (init, update, get_params) = elements_list[0]
assert callable(init), f"Expected {init} be a callable."
assert callable(update), f"Expected {update} be a callable."
assert callable(get_params), f"Expected {get_params} be a callable."
optimizer_params['name'] = heuristic_get_jax_optimizer_name(init)
elif len(elements_list) == 2:
optimizer_params = elements_list[1]
optimizer_params['optimizer'] = (init, update, get_params) = elements_list[0]
assert callable(init), f"Expected {init} be a callable."
assert callable(update), f"Expected {update} be a callable."
assert callable(get_params), f"Expected {get_params} be a callable."
elif len(elements_list) == 3:
optimizer_params['optimizer'] = (init, update, get_params) = elements_list
assert callable(init), f"Expected {init} be a callable."
assert callable(update), f"Expected {update} be a callable."
assert callable(get_params), f"Expected {get_params} be a callable."
optimizer_params['name'] = heuristic_get_jax_optimizer_name(init)
else:
        raise ValueError(f"""
            Unknown optimizer constructor list shape or size {len(elements_list)}. 
            Expected either 
                1 for [(init, update, get_params)] or 
                2 for [(init, update, get_params), {{other: configs}}] or
                3 for (init, update, get_params)
            Received {elements_list}
        """)
return optimizer_params
print(build_optimizer_params([sgd(step_size=0.01), {"name":"sgd", "derivative":True}]))
print(build_optimizer_params([sgd(step_size=0.01)]))
print(build_optimizer_params(sgd(step_size=0.01)))
#export
class optimize_multi:
def __init__(self, function):
self.function = function
def using(self, optimizers):
self.optimizers = optimizers
return self
def start_from(self, params):
self.params = params
return self
def tolist(self):
return [optimize(self.function).using(**build_optimizer_params(optimizer)).start_from(self.params) for optimizer in self.optimizers]
###Output
_____no_output_____
###Markdown
Use `optimize_multi` when you want to compare the performance of multiple optimizers on the same function.
###Code
from jax.experimental.optimizers import sgd, adam
from optimisations.functions import himmelblau
(optimizers) = (
optimize_multi(himmelblau())
.using([
sgd(step_size=0.01),
adam(step_size=0.3),
])
.start_from([-1., 1.])
.tolist()
)
optimizers
for optimizer in optimizers:
print(optimizer.optimizer_name, optimizer.update(10))
###Output
sgd [(-1.0, 1.0), (-1.22, 1.46), (-1.491, 1.977), (-1.805, 2.475), (-2.132, 2.846), (-2.419, 3.036), (-2.619, 3.103), (-2.728, 3.122), (-2.776, 3.128), (-2.795, 3.13), (-2.801, 3.131), (-2.804, 3.131), (-2.805, 3.131), (-2.805, 3.131), (-2.805, 3.131), (-2.805, 3.131), (-2.805, 3.131), (-2.805, 3.131), (-2.805, 3.131), (-2.805, 3.131), (-2.805, 3.131)]
adam [(-1.0, 1.0), (-1.3, 1.3), (-1.6, 1.6), (-1.9, 1.901), (-2.201, 2.202), (-2.501, 2.502), (-2.79, 2.796), (-3.044, 3.078), (-3.221, 3.332), (-3.297, 3.534), (-3.288, 3.662), (-3.219, 3.711), (-3.113, 3.695), (-2.988, 3.629), (-2.858, 3.529), (-2.737, 3.408), (-2.634, 3.278), (-2.557, 3.149), (-2.507, 3.031), (-2.486, 2.929), (-2.491, 2.85)]
|
Chapter02/.ipynb_checkpoints/Chapter02-checkpoint.ipynb | ###Markdown
Understanding Tensors
###Code
import torch
first_order_tensor = torch.tensor([1, 2, 3])
print(first_order_tensor)
print(first_order_tensor[0])
print(first_order_tensor[0:2])
print(first_order_tensor[1:])
second_order_tensor = torch.tensor([ [ 11, 22, 33 ],
[ 21, 22, 23 ]
])
print(second_order_tensor)
print(second_order_tensor[0, 1])
fourth_order_tensor = torch.tensor(
[
[
[
[1111, 1112],
[1121, 1122]
],
[
[1211, 1212],
[1221, 1222]
]
],
[
[
[2111, 2112],
[2121, 2122]
],
[
[2211, 2212],
[2221, 2222]
]
]
])
my_tensor = torch.tensor([1, 2, 3, 4, 5])
print(my_tensor.size())
my_tensor = torch.tensor([[11, 12, 13], [21, 22, 23]])
print(my_tensor.size())
print(fourth_order_tensor.size())
random_tensor = torch.rand([4, 2])
print(random_tensor)
random_tensor.view([2, 4])
random_tensor = torch.rand([4, 2, 4])
random_tensor.view([2, 4, -1])
random_tensor.view([2, -1, 4])
x = torch.tensor([5, 3])
y = torch.tensor([3, 2])
torch.add(x, y)
torch.sub(x, y)
torch.mul(x, y)
x + y
torch.div(x, y)
x.dtype
y.dtype
x_float = torch.tensor([5, 3], dtype = torch.float32)
y_float = torch.tensor([3, 2], dtype = torch.float32)
print(x_float / y_float)
torch.FloatTensor([5, 3])
x.type(torch.DoubleTensor)
###Output
_____no_output_____ |
Sentiment_Analysis_with_Logistic_Regression_(2).ipynb | ###Markdown
Import Dataset
###Code
import nltk
from nltk.corpus import twitter_samples
import numpy as np
nltk.download('twitter_samples')
positive_tweets =twitter_samples.strings('positive_tweets.json')
negative_tweets =twitter_samples.strings('negative_tweets.json')
example_postive_tweet=positive_tweets[1]
example_negative_tweet=negative_tweets[0]
test_pos = positive_tweets[4000:]
train_pos = positive_tweets[:4000]
test_neg = negative_tweets[4000:]
train_neg = negative_tweets[:4000]
train_x = train_pos + train_neg
test_x = test_pos + test_neg
train_y = np.append(np.ones((len(train_pos), 1)), np.zeros((len(train_neg), 1)), axis=0)
test_y = np.append(np.ones((len(test_pos), 1)), np.zeros((len(test_neg), 1)), axis=0)
print("positive tweet-> ",example_postive_tweet)
print("negative tweet-> ",example_negative_tweet)
###Output
positive tweet-> @Lamb2ja Hey James! How odd :/ Please call our Contact Centre on 02392441234 and we will be able to assist you :) Many thanks!
negative tweet-> hopeless for tmr :(
###Markdown
Feature Engineering Tokenizing
###Code
from nltk.tokenize import TweetTokenizer
tokenizer = TweetTokenizer()
tokens = tokenizer.tokenize(example_postive_tweet)
tokens
###Output
_____no_output_____
###Markdown
Removing Stopwords
###Code
from nltk.corpus import stopwords
nltk.download('stopwords')
stopwords_english = stopwords.words('english')
stopwords_english[:10]
import string
tweet_processsed=[word for word in tokens
if word not in stopwords_english and word not in string.punctuation]
tweet_processsed
###Output
_____no_output_____
###Markdown
Stemming
###Code
from nltk.stem import PorterStemmer
stemmer = PorterStemmer()
tweet_after_stem=[]
for word in tweet_processsed:
word=stemmer.stem(word)
tweet_after_stem.append(word)
tweet_after_stem
import re
import string
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from nltk.tokenize import TweetTokenizer
nltk.download('stopwords')
def text_process(tweet):
tweet = re.sub(r'^RT[\s]+', '', tweet)
tweet = re.sub(r'https?:\/\/.*[\r\n]*', '', tweet)
tweet = re.sub(r'#', '', tweet)
tweet = re.sub(r'@[\w\d]+', '', tweet)
tokenizer = TweetTokenizer()
tweet_tokenized = tokenizer.tokenize(tweet)
stopwords_english = stopwords.words('english')
tweet_processsed=[word for word in tweet_tokenized
if word not in stopwords_english and word not in
string.punctuation]
stemmer = PorterStemmer()
tweet_after_stem=[]
for word in tweet_processsed:
word=stemmer.stem(word)
tweet_after_stem.append(word)
return tweet_after_stem
print(text_process(example_postive_tweet))
###Output
['hey', 'jame', 'how', 'odd', ':/', 'pleas', 'call', 'contact', 'centr', '02392441234', 'abl', 'assist', ':)', 'mani', 'thank']
###Markdown
Word Encodings
###Code
pos_words=[]
for tweet in train_pos:
tweet=text_process(tweet)
for word in tweet:
pos_words.append(word)
freq_pos={}
for word in pos_words:
if (word,1) not in freq_pos:
freq_pos[(word,1)]=1
else:
freq_pos[(word,1)]=freq_pos[(word,1)]+1
neg_words=[]
for tweet in train_neg:
tweet=text_process(tweet)
for word in tweet:
neg_words.append(word)
freq_neg={}
for word in neg_words:
if (word,0) not in freq_neg:
freq_neg[(word,0)]=1
else:
freq_neg[(word,0)]=freq_neg[(word,0)]+1
freqs_dict = dict(freq_pos)
freqs_dict.update(freq_neg)
def features_extraction(tweet, freqs_dict):
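    # Build a 1x3 feature row: [bias term, summed positive-class counts, summed negative-class counts]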
word_l = text_process(tweet)
x = np.zeros((1, 3))
x[0,0] = 1
for word in word_l:
try:
x[0,1] += freqs_dict[(word,1)]
except:
x[0,1] += 0
try:
x[0,2] += freqs_dict[(word,0.0)]
except:
x[0,2] += 0
assert(x.shape == (1, 3))
return x
###Output
_____no_output_____
###Markdown
Logistic Regression
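The code below implements plain batch gradient descent on the binary cross-entropy cost; written out, the quantities computed by `sigmoid` and `gradientDescent` are$$h = \sigma(X\theta) = \frac{1}{1+e^{-X\theta}}, \qquad J(\theta) = -\frac{1}{m}\left( y^{T}\log h + (1-y)^{T}\log(1-h) \right), \qquad \theta \leftarrow \theta - \frac{\alpha}{m}\, X^{T}(h - y),$$where $m$ is the number of training tweets, $X$ is the $m\times 3$ feature matrix built with features_extraction, and $\alpha$ is the learning rate.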
###Code
def sigmoid(x):
h = 1/(1+np.exp(-x))
return h
def gradientDescent(x, y, theta, alpha, num_iters):
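    # Batch gradient descent on the binary cross-entropy cost; returns the final cost J and the parameters theta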
m = x.shape[0]
for i in range(0, num_iters):
z = np.dot(x,theta)
h = sigmoid(z)
J = -1/m*(np.dot(y.T,np.log(h))+np.dot((1-y).T,np.log(1-h)))
theta = theta-(alpha/m)*np.dot(x.T,h-y)
J = float(J)
return J, theta
X = np.zeros((len(train_x), 3))
for i in range(len(train_x)):
X[i, :]= features_extraction(train_x[i], freqs_dict)
Y = train_y
J, theta = gradientDescent(X, Y, np.zeros((3, 1)), 1e-9, 1500)
###Output
_____no_output_____
###Markdown
Prediction and Model Accuracy
###Code
def test_accuracy_with_rule_based_model(test_x, test_y):
y_hat1 = []
for each in test_x:
if each[1]>each[2]:
y_hat1.append(1)
else:
y_hat1.append(0)
m=len(y_hat1)
y_hat1=np.array(y_hat1)
y_hat1=y_hat1.reshape(m)
test_y=test_y.reshape(m)
c=y_hat1==test_y
j=0
j= len([x for x in c if x==True])
accuracy1 = j/m
return accuracy1
accuracy1 = test_accuracy_with_rule_based_model(test_x, test_y)
print(accuracy1*100,'%')
def predict(tweet, freqs_dict, theta):
x = features_extraction(tweet,freqs_dict)
y_pred = sigmoid(np.dot(x,theta))
return y_pred
def test_accuracy_with_logistic_regression(test_x, test_y, freqs_dict, theta):
y_hat = []
for tweet in test_x:
y_pred = predict(tweet, freqs_dict, theta)
if y_pred > 0.5:
y_hat.append(1)
else:
y_hat.append(0)
m=len(y_hat)
y_hat=np.array(y_hat)
y_hat=y_hat.reshape(m)
test_y=test_y.reshape(m)
c=y_hat==test_y
j=0
j= len([x for x in c if x==True])
accuracy = j/m
return accuracy
accuracy = test_accuracy_with_logistic_regression(test_x, test_y, freqs_dict, theta)
print(accuracy*100,'%')
def test_your_own_tweet(tweet, freqs_dict, theta):
y_pred = predict(tweet, freqs_dict, theta)
if y_pred > 0.5:
print("positve")
else:
print("negative")
tweet = "I'm happy, not sad"
test_your_own_tweet(tweet, freqs_dict, theta)
###Output
negative
|
Audio Visual/Problems/ObjectDetection_Video.ipynb | ###Markdown
Object Detection using OpenCV This Code Template is for Object Detection through webcam video capture using the OpenCV library in Python. Object Detection is a computer vision technique that deals with detecting objects (a face, an eye, any inanimate object) in an image or video. This technique draws a boundary or bounding box around each target object and may also include its target label. It has many real-life applications such as image retrieval and video surveillance. **Required Packages**
###Code
!pip install opencv-python
import os
import cv2
import numpy as np
import matplotlib.pyplot as plt
import imutils
from imutils.video import VideoStream
import time
###Output
_____no_output_____
###Markdown
**Initialization** Image Labels Here, the COCO dataset is used for image labeling since the YOLO model is trained on it. This dataset is popular for Object Detection and contains 80 labels including oven, toaster, bench, car, etc. The file is downloadable at [coco.names](https://opencv-tutorial.readthedocs.io/en/latest/_downloads/a9fb13cbea0745f3d11da9017d1b8467/coco.names)
###Code
classFile = '' # Path to labels
classes = []
with open(classFile, 'rt') as f:
classes = f.read().rstrip('\n').split('\n')
###Output
_____no_output_____
###Markdown
Model OpenCV uses the function cv2.dnn.readNet() to load pre-trained weights and a network configuration in a supported format and build an object detection model. This function automatically detects the origin framework of the trained model and calls an appropriate function such as readNetFromCaffe, readNetFromTensorflow, readNetFromTorch or readNetFromDarknet. Model Tuning Parameters1. **model**: const String & >Binary file containing the trained weights. 2. **config**: const String &>Text file containing the network configuration3. **framework**: const String & >Explicit framework name tag to determine a format.More details at the [API](https://docs.opencv.org/4.5.1/d6/d0f/group__dnn.html) YOLOIn this tutorial, the model used is YOLO — You Only Look Once. It is an extremely fast multi-object detection algorithm which uses a convolutional neural network (CNN) to detect and identify objects.>**config_path**: [YOLO Configuration File](https://opencv-tutorial.readthedocs.io/en/latest/_downloads/10e685aad953495a95c17bfecd1649e5/yolov3.cfg)>**weights_path**: [YOLO Weights](https://pjreddie.com/media/files/yolov3.weights) Paths to configuration and weight files
###Code
config_path = '' # configuration path
weights_path = '' # weights path
net = cv2.dnn.readNet(config_path, weights_path)
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_OPENCV)
# determine the output layer
layer_names = net.getLayerNames()
output_layers = [layer_names[i[0] - 1] for i in net.getUnconnectedOutLayers()]
###Output
_____no_output_____
###Markdown
BLOBOpenCV object detection models take input images as BLOBs. A binary large object (BLOB) is a collection of binary data stored as a single entity.cv2.dnn.blobFromImage() creates a 4-dimensional blob from an image. It optionally resizes and crops the image from the center, subtracts mean values, scales values by a scalefactor, and swaps the Blue and Red channels. Parameters1. **image**: InputArray >input image (with 1, 3 or 4 channels).2. **size**: const Size &>spatial size for the output image3. **scalefactor**: double >multiplier for image values.4. **swapRB**: bool >flag which indicates that swapping the first and last channels in a 3-channel image is necessary.5. **crop**: bool >flag which indicates whether the image will be cropped after resize or notMore details at the [API](https://docs.opencv.org/4.5.1/d6/d0f/group__dnn.html) **Inference**
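As a quick illustration of `cv2.dnn.blobFromImage()` and the parameters listed above, the minimal sketch below builds a blob from a dummy frame (an assumption, standing in for a webcam frame), using the same scale factor, size and channel-swap settings as the main loop further down.

```python
# Minimal sketch: build a blob from a dummy BGR frame (same settings as the loop below).
import cv2
import numpy as np

dummy_frame = np.zeros((400, 600, 3), dtype=np.uint8)   # stand-in for a webcam frame
blob = cv2.dnn.blobFromImage(image=dummy_frame,
                             scalefactor=1/255.0,
                             size=(256, 256),
                             swapRB=True,
                             crop=False)
print(blob.shape)   # (1, 3, 256, 256): batch, channels, height, width
```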
The input video stream is captured through the webcam and each frame is processed by the following code section [[Reference](https://opencv-tutorial.readthedocs.io/en/latest/yolo/yolo.html)], which draws a bounding box and a corresponding confidence score around each detected object. The confidence score is the probability that a bounding box contains an object.
cv2.dnn.NMSBoxes performs non-maximum suppression given boxes and corresponding scores.
Parameters
1. **bboxes**: const std::vector &
>a set of bounding boxes to apply NMS.
2. **scores**: const std::vector &
>a set of corresponding confidences.
3. **score_threshold**: const float
>a threshold used to filter boxes by score.
4. **nms_threshold**: const float
>a threshold used in non maximum suppression.
More details at the [API](https://docs.opencv.org/4.5.1/d6/d0f/group__dnn.html)
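To make the call signature concrete before the full loop, here is a small self-contained example on toy boxes (an assumption, in the same (x, y, w, h) format used below); the shape of the returned indices varies slightly between OpenCV versions, so they are flattened before use.

```python
# Minimal sketch: non-maximum suppression on three toy boxes, two of which overlap heavily.
import numpy as np
import cv2

toy_boxes = [[10, 10, 100, 100], [12, 12, 100, 100], [300, 300, 60, 60]]
toy_scores = [0.9, 0.75, 0.6]
kept = cv2.dnn.NMSBoxes(bboxes=toy_boxes,
                        scores=toy_scores,
                        score_threshold=0.4,
                        nms_threshold=0.3)
print(np.array(kept).flatten())   # expected to keep boxes 0 and 2
```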
###Code
# initialize the video stream and allow the camera sensor to warm up
print("[INFO] starting video stream...")
vs = VideoStream(src=0).start()
time.sleep(2.0)
while True:
# grab the frame from the threaded video stream and resize it
# to have a maximum width of 400 pixels
frame = vs.read()
frame = imutils.resize(frame, width=400)
boxes = []
confidences = []
classIDs = []
# grab the frame dimensions and convert it to a blob
(h, w) = frame.shape[:2]
blob = cv2.dnn.blobFromImage(image = frame,
scalefactor = 1/255.0,
size = (256, 256),
swapRB=True,
crop=False)
# pass the blob through the network and obtain the detections and
# predictions
net.setInput(blob)
outputs = net.forward(output_layers)
# random colors for bounding box
colors = np.random.randint(0, 255, size=(len(classes), 3), dtype='uint8') #np.full((len(classes), 3), 255, dtype='uint8')
# Bounding Box and Confidence Score
for output in outputs:
for detection in output:
scores = detection[5:]
classID = np.argmax(scores)
confidence = scores[classID]
if confidence > 0.5:
box = detection[:4] * np.array([w, h, w, h])
(centerX, centerY, width, height) = box.astype("int")
x = int(centerX - (width / 2))
y = int(centerY - (height / 2))
box = [x, y, int(width), int(height)]
boxes.append(box)
confidences.append(float(confidence))
classIDs.append(classID)
# Non Maximum Suppression
indices = cv2.dnn.NMSBoxes(bboxes = boxes,
scores = confidences,
score_threshold = 0.4,
nms_threshold = 0.3)
if len(indices) > 0:
for i in indices.flatten():
(x, y) = (boxes[i][0], boxes[i][1])
(w, h) = (boxes[i][2], boxes[i][3])
color = [int(c) for c in colors[classIDs[i]]]
cv2.rectangle(frame, (x, y), (x + w, y + h), color, 2)
text = "{}: {:.4f}".format(classes[classIDs[i]], confidences[i])
cv2.putText(frame, text, (x, y - 5), cv2.FONT_HERSHEY_DUPLEX, 0.7, color, 2)
# show the output frame
cv2.imshow("Frame", frame)
key = cv2.waitKey(1) & 0xFF
# if the `q` key was pressed, break from the loop
if key == ord("q"):
break
# do a bit of cleanup
cv2.destroyAllWindows()
vs.stop()
###Output
[INFO] starting video stream...
|
electrons.ipynb | ###Markdown
Partie I: Diagramme de bandes et calcul des coefficients de diffusion Le but de cette partie du TP est de calculer le diagramme de bandes du matériau semi-conducteur ainsi que les coefficients de diffusion des électrons de la bande de conduction et des trous dans la bande de valence de celui-ci. 1- Présentation du modèle continu: problème aux valeurs propres Nous commençons par présenter le modèle simplifié que nous allons considérer ici pour le calcul de ce diagramme de bandes. On notera dans la suite $$L^2_{\rm per}((0,2\pi); \mathbb{C}):= \left\{ f \in L^2_{\rm loc}(\mathbb{R}; \mathbb{C}), \; f \; 2\pi\mbox{-périodique} \right\},$$$$H^1_{\rm per}((0,2\pi); \mathbb{C}):= \left\{ f \in L^2_{\rm loc}(\mathbb{R}; \mathbb{C}), \; f' \in L^2_{\rm loc}(\mathbb{R}; \mathbb{C}), \; f \; 2\pi\mbox{-périodique} \right\},$$et$$L^\infty_{\rm per}((0,2\pi); \mathbb{R}):= \left\{ f \in L^\infty_{\rm loc}(\mathbb{R}; \mathbb{R}), \; f \; 2\pi\mbox{-périodique} \right\}.$$ On suppose que le semiconducteur est composé d'un arrangement infini périodique d'atomes, et on note $a>0$ la distance entre deux atomes. Nous négligeons dans cette partie l'influence des impuretés liées au dopage. Dans la suite, pour simplifier, nous supposerons que $a=2\pi$, ce qu'il est possible de faire quitte à rescaler toutes les quantités physiques de manière appropriée. On note $V_{\rm per} \in L^\infty_{\rm per}( (0,2\pi); \mathbb{R})$ le potentiel électrique généré par les atomes et les électrons dans le semiconducteur. Nous supposerons dans toute la suite que ce potentiel est bien connu. Le moment $q$ d'un électron est un élément $q\in [0,1/2]$. On admettra que, pour tout $q\in [0,1/2]$, il existe une suite croissante de réels $(\epsilon_n(q))_{n\in \mathbb{N}^*}$ tendant vers $+\infty$ et une base hilbertienne $(u_n^q)_{n\in\mathbb{N}^*}$ de $L^2_{\rm per}((0,2\pi); \mathbb{C})$ telle que pour tout $n\in \mathbb{N}^*$, $$- \partial_{yy} u_n^q(y) - 2 \, i \, q \, \partial_y u_n^q(y) + |q|^2 \, u_n^q(y) + V_{\rm per}(y) \, u_n^q(y) = \epsilon_n(q) \, u_n^q(y) \quad \mbox{ pour tout } y\in (0,2\pi).$$Attention: Notez bien que les fonctions $u_n^q(y)$ peuvent prendre des valeurs complexes, mais que les $\epsilon_n(q)$ sont des réels. On dira que la suite $(\epsilon_n(q))_{n\in \mathbb{N}^*}$ est la suite des valeurs propres de l'opérateur $-\partial_{yy} - 2 \, i \, q \, \partial_y + |q|^2 + V_{\rm per}(y)$ sur $L^2_{\rm per}((0,2\pi); \mathbb{C})$.Nous allons nous intéresser tout particulièrement aux deux plus petites valeurs propres $\epsilon_1(q)\leq \epsilon_2(q)$. Dans ce modèle simplifié, nous allons supposer que les états d'énergie admissibles de la bande de valence du semiconducteur est égal à l'ensemble des valeurs $\{ \epsilon_1(q) \}_{q\in [0,1/2]}$, et que les états d'énergie admissibles de la bande de conduction du semiconducteur est égal à l'ensemble des valeurs $\{ \epsilon_2(q) \}_{q\in [0,1/2]}$.Une expression simplifiée des coefficients de diffusion $\mu_n$ et $\mu_p$ est alors donnée par:$$\mu_p \approx 4 \int_{[0,1/2]} q \, \partial_q \epsilon_1(q) \, dq \quad \mbox{ et } \quad \mu_n \approx - 4 \int_{[0,1/2]} q \, \partial_q \epsilon_2(q) \, dq.$$Nous verrons dans la section I.2 une méthode numérique pour calculer une approximation de ces coefficients en utilisant une méthode de Galerkin pour résoudre le problème aux valeurs propres ci-dessus. Question 1) Soit $q\in [0,1/2]$. 
Par un calcul formel, montrer qu'une formulation variationnelle associée au problème aux valeurs propres ci-dessus peut s'écrire: chercher $(u_n^q, \epsilon_n(q))\in H^1_{\rm per}((0,2\pi);\mathbb{C}) \times \mathbb{R}$, solution de$$\forall v\in H^1_{\rm per}((0,2\pi); \mathbb{C}), \qquad a(u_n^q, v) = \epsilon_n(q) \, \langle u_n^q,v\rangle_{L^2_{\rm per}},$$où pour tout $v,w\in L^2_{\rm per}( (0,2\pi); \mathbb{C})$, $$\langle v,w\rangle_{L^2_{\rm per}}:= \int_0^{2\pi} \overline{v} \, w,$$et pour tout $v,w\in H^1_{\rm per}( (0,2\pi); \mathbb{C})$, $$a(v,w):= \int_0^{2\pi} \left( \overline{i \, q \, v + \partial_y v}\right) \left( i \, q \, w + \partial_y w\right) + V_{\rm per} \, \overline{v} \, w.$$ $\color{blue}{\textrm{Réponse 1}\\}$$\color{blue}{\textrm{On considère le problème suivant :}}$$\color{blue}{\left\{ \begin{array}{l} \textrm{Pour $q\in [0,1/2]$, chercher $(u_n^q, \epsilon_n(q))\in H^1_{\rm per}((0,2\pi);\mathbb{C})$} \\ \forall v\in H^1_{\rm per}((0,2\pi); \mathbb{C}) \ \ a(u_n^q, v) = \epsilon_n(q) \langle u_n^q,v\rangle_{L^2_{\rm per}} \end{array}\right. \\\textrm{où pour tout}\ v,w\in L^2_{\rm per}( (0,2\pi); \mathbb{C}), \langle v,w\rangle_{L^2_{\rm per}}:= \int_0^{2\pi} \overline{v} \, w,\\\textrm{et pour tout}\ v,w\in H^1_{\rm per}( (0,2\pi); \mathbb{C}), a(v,w):= \int_0^{2\pi} \left( \overline{i \, q \, v + \partial_y v}\right) \left( i \, q \, w + \partial_y w\right) + V_{\rm per} \, \overline{v} \, w.}$$\color{blue}{\textrm{On peut particulariser l'égalité}\ a(u_n^q, v) = \epsilon_n(q) \langle u_n^q,v\rangle_{L^2_{\rm per}}\ \textrm{pour des fonctions}\ \phi \in D((0,2\pi); \mathbb{C}) \subset H^1_{\rm per}((0,2\pi); \mathbb{C}) }$ $\color{blue}{\textrm{Soit}\ q\in [0,1/2], \ n \in \mathbb{N} \ \textrm{et} \ \phi \in H^1_{\rm per}((0,2\pi); \mathbb{C}) : \\}$$\color{blue}{\begin{array}{rcl} a(u_n^q, \phi) &=& \epsilon_n(q) \langle u_n^q,\phi\rangle_{L^2_{\rm per}}\\ \int_0^{2\pi} \left( \overline{i \, q \, u_n^q + \partial_y u_n^q}\right) \left( i \, q \, \phi + \partial_y \phi\right) + V_{\rm per} \, \overline{u_n^q} \, \phi &=&\epsilon_n(q) \int_0^{2\pi} \overline{u_n^q} \, \phi \\ \int_0^{2\pi}(|q|^2\overline{u_n^q}+2i\partial_y\overline{u_n^q}-\partial_{yy}\overline{u_n^q}+V_{\rm per} \, \overline{u_n^q} \,) \phi &=&\epsilon_n(q)\int_0^{2\pi} \overline{u_n^q} \, \phi \\\end{array}\\}$$\color{blue}{\textrm{Ainsi, on montre que pour q} \in [0,1/2], \ \textrm{pour tout n} \in \mathbb{N}^* \textrm{ et pour tout} \ y\in (0,2\pi):}\\$$\color{blue}{\begin{array}{lrcl} &|q|^2\overline{u_n^q}+2i\partial_y\overline{u_n^q}-\partial_{yy}\overline{u_n^q}+V_{\rm per} \, \overline{u_n^q} &=& \epsilon_n(q)\overline{u_n^q}\\ \textrm{i.e} &- \partial_{yy} u_n^q(y) - 2 \, i \, q \, \partial_y u_n^q(y) + |q|^2 \, u_n^q(y) + V_{\rm per}(y) \, u_n^q(y) &=& \epsilon_n(q) \, u_n^q(y)\end{array}}$$\color{blue}{\textrm{Afin de montrer cette dernière égalité on a utilisé la densité de} D((0,2\pi); \mathbb{C}) \ \textrm{dans } \ H^1_{\rm per}((0,2\pi); \mathbb{C}). \\}$$\color{blue}{\textrm{Pour montrer la réciproque il suffit de repasser par les mêmes étapes. On multiplie la dernière égalité à droite par v} \in H^1_{\rm per}((0,2\pi); \mathbb{C}) \ \textrm{et on intègre par rapport à y sur} \ (0,2\pi).}$ Question 2) Montrer que $a$ est une forme bilinéaire continue sur $H^1_{\rm per}((0,2\pi); \mathbb{C}) \times H^1_{\rm per}((0,2\pi); \mathbb{C})$, qui est de plus hermitienne, au sens où $\overline{a(v,w)} = a(w,v)$. 
$\color{blue}{\textrm{Réponse 2}\\}$$\color{blue}{(i)\ \textrm{Bilinéarité}\\}$$\color{blue}{\textrm{Soit}\ u,v,w \in H^1_{\rm per}((0,2\pi); \mathbb{C})\\}$$\color{blue}{\textrm{Soit}\ \lambda \in \mathbb{C}\\}$$\color{blue}{\begin{array}{ccl}a(\lambda u+v,w)&=&\int_0^{2\pi} \left( \overline{i \, q \, (\lambda u+v) + \partial_y (\lambda u+v)}\right) \left( i \, q \, w + \partial_y w\right) + V_{\rm per} \, \overline{(\lambda u+v)} \, w. \\a(\lambda u+v,w)&=&\overline{\lambda}\int_0^{2\pi} \left( \overline{i \, q \, u + \partial_y u}\right) \left( i \, q \, w + \partial_y w\right) + V_{\rm per} \, \overline{u} \, w + \int_0^{2\pi} \left( \overline{i \, q \, v + \partial_y v}\right) \left( i \, q \, w + \partial_y w\right) + V_{\rm per} \, \overline{v} \, w. \\a(\lambda u+v,w)&=&\overline{\lambda}a(u,w)+a(v,w)\\a(u,\lambda v+w)&=&\lambda a(u,v)+a(u,w)\end{array}\\}$$\color{blue}{(ii)\ \textrm{Continuité}\\}$$\color{blue}{\begin{array}{ccl}a(v,w)&=& \int_0^{2\pi} \left( \overline{i \, q \, v + \partial_y v}\right) \left( i \, q \, w + \partial_y w\right) + V_{\rm per} \, \overline{v} \, w.\\|a(v,w)|&\leq& \int_0^{2\pi} |i \, q \, v + \partial_y v| |i \, q \, w + \partial_y w|+ |V_{\rm per}| \, |v| \, |w|.\\ |a(v,w)|&\leq& \int_0^{2\pi}|q|^{2}|v||w|+|q||v||\partial_y w|+|q||w||\partial_y v|+\left|\partial_{y} v \right||\partial y w|+\left|V_{per}\right||v||w|\\|a(v,w)|&\leq&|q|^{2}\|v\|_{L^{2}}\|\omega\|_{L^{2}}+|q|\|v\|_{L^{2}}\left\|\partial_{y} w\right\|_{L^{2}}+|q|\|w\|_{L^{2}}\|\partial_y v\|_{L^{2}}+\left\|\partial_{y} v\right\|_{L^2}\|\partial_y w\|_{L^{2}}+\left\|v\right\|_{L^2}\|w\|_{L^{2}}\|V_{per}\|_{L^{\infty}}\\|a(v,w)|&\leq&(|q|^{2}+2|q|+1+\|V_{per}\|_{L^{\infty}})\left\|v\right\|_{H^1}\|w\|_{H^{1}}\end{array}\\}$$\color{blue}{(iii)\ \textrm{hermicité}\\}$$\color{blue}{\textrm{Soit}\ (v,w) \in H^1_{\rm per}((0,2\pi); \mathbb{C}) \times H^1_{\rm per}((0,2\pi); \mathbb{C})\\}$$\color{blue}{\begin{array}{lcclr}&\overline{a(v,w)}&=& \int_0^{2\pi}\overline{ \left( \overline{i \, q \, v + \partial_y v}\right) \left( i \, q \, w + \partial_y w\right) + V_{\rm per} \, \overline{v} \, w}\\&\overline{a(v,w)}&=& \int_0^{2\pi} \left( i \, q \, v + \partial_y v\right) \left(\overline{ i \, q \, w + \partial_y w}\right) + V_{\rm per} \, \overline{\overline{v} \, w}&\textrm{car} \ q \ \text{et}\ V_{\rm per}\ \textrm{sont à valeurs réelles.} \\&\overline{a(v,w)}&=& \int_0^{2\pi} \left( \overline{i \, q \, w + \partial_y w}\right) \left( i \, q \, v + \partial_y v\right) + V_{\rm per} \, \overline{w} \, v\\\textrm{Donc} &\overline{a(v,w)}&=&a(w,v)\end{array}\\}$$\color{blue}{(iv)\ \textrm{positivité}\\}$$\color{blue}{\textrm{Soit}\ v \in H^1_{\rm per}((0,2\pi); \mathbb{C})\\}$$\color{blue}{\begin{array}{lccll} &a(v,v)&=& \int_0^{2\pi} | i \, q \, v + \partial_y v|^2 + V_{\rm per} \, |v|^2\\ &a(v,v)&\geq&0&\textrm{car l'intégrande est une somme de carrés, elle est donc positive}\end{array}\\}$$\color{blue}{(v)}$$\color{blue}{\textrm{Soit}\ v \in H^1_{\rm per}((0,2\pi); \mathbb{C})\\}$$\color{blue}{\begin{array}{lccll} &a(v,v)=0&\Rightarrow& \int_0^{2\pi} | i \, q \, v + \partial_y v|^2 + V_{\rm per} \, |v|^2=0\\ &&\Rightarrow&\forall y\in (0,2\pi)\ | i \, q \, v + \partial_y v|^2 + V_{\rm per} \, |v|^2=0\\ &&\Rightarrow&\forall y\in (0,2\pi)\ | i \, q \, v + \partial_y v|^2=0 \ \textrm{et} \ V_{\rm per} \, |v|^2=0\\ &&\Rightarrow&\forall y\in (0,2\pi)\ v=0\end{array}\\}$$\color{blue}{\textrm{Par ailleurs}\ v=0 \Rightarrow a(v,v)=0\\ }$$\color{blue}{\textrm{Donc}\ v=0 \Leftrightarrow a(v,v)=0\\ }$ 
Question 3) Montrer que $\epsilon_n(q) = a(u_n^q, u_n^q)$. En déduire que pour tout $n\in \mathbb{N}^*$ et pour tout $q\in [0, 1/2]$, $$\epsilon_n(q) \geq - \|V_{\rm per}\|_{L^\infty}.$$ $\color{blue}{\textrm{Réponse 3}\\}$$\color{blue}{\textrm{On sait que}\ (u_n^q)_{n \in \mathbb{N}} \textrm{est une base hilbertienne de } \ L^2_{\rm per}((0,2\pi)), \ \textrm{donc} \ ||u_n^q||_{L^2_{\rm per}}=1 \\}$$\color{blue}{\textrm{Ainsi on montre que :}\\}$$\color{blue}{\begin{array}{ccl}a(u_n^q, u_n^q)&=&\epsilon_n(q)\langle u_n^q,u_n^q\rangle_{L^2_{\rm per}} \\a(u_n^q, u_n^q)&=&\epsilon_n(q)||u_n^q||^2_{L^2_{\rm per}} \\a(u_n^q, u_n^q)&=&\epsilon_n(q)\end{array}}$$\color{blue}{\begin{array}{ccll}a(u_n^q, u_n^q)&=& \int_0^{2\pi} \left( \overline{i \, q \, u_n^q + \partial_y u_n^q}\right) \left( i \, q \, u_n^q + \partial_y u_n^q\right) + V_{\rm per} \, \overline{u_n^q} \, u_n^q.\\a(u_n^q,u_n^q)&=& \int_0^{2\pi}|i \, q \, u_n^q + \partial_y u_n^q|^2+V_{\rm per} \, |u_n^q|^2 \\a(u_n^q,u_n^q)&\geq&\int_0^{2\pi}V_{\rm per} \, |u_n^q|^2 \\a(u_n^q,u_n^q)&\geq&-\int_0^{2\pi}||V_{\rm per}||_{L^{\infty}} \, |u_n^q|^2 & \textrm{car} \ \forall y, ||V_{\rm per}||_{L^{\infty}}\geq|V_{\rm per}(y)|\geq V_{\rm per}(y) \\ a(u_n^q,u_n^q)&\geq&-||V_{\rm per}||_{L^{\infty}}\end{array}}$ 2- Approximation numérique du problème aux valeurs propres par méthode de Galerkin Nous allons résoudre le problème aux valeurs propres ci-dessus par une méthode de Galerkin par modes de Fourier pour calculer des approximations numériques des quantités $\epsilon_1(q)$ et $\epsilon_2(q)$ pour $q\in [0,1/2]$. Pour tout $k\in \mathbb{Z}$ et pour tout $y\in (0,2\pi)$, on note $e_k(y):= \frac{1}{\sqrt{2\pi}}e^{i k y}$. On rappelle également que pour deux fonctions $u,v\in L^2_{\rm per}((0,2\pi); \mathbb{C})$, on a$$\langle u, v \rangle_{L^2}:= \int_0^{2\pi} \overline{u}(y) v(y)\,dy. $$Pour tout $L\in \mathbb{N}^*$, on définit $$V^L:= {\rm Vect}\left\{ e_k, -L \leq k \leq L\right\}.$$Pour tout $q\in [0,1/2]$ et $n\in \mathbb{N}^*$, on considère $(u_{n,L}^q, \epsilon_{n,L}(q))\in V^L \times \mathbb{R}$ l'approximation par méthode de Galerkin associée à l'espace de discrétisation $V^L$ du couple $(u_n^q, \epsilon_n(q))$ définie par $$a(u_{n,L}^q, v_L) = \epsilon_{n,L}(q) \, \langle u_{n,L}^q, v_L \rangle_{L^2_{\rm per}}, \quad \forall v_L \in V^L.$$ On introduit les matrices $H^L:=\left( H^L_{ij}\right)_{1\leq i,j \leq 2L+1}$, $Y^L:=\left( Y^L_{ij}\right)_{1\leq i,j \leq 2L+1}$, $Po^L:= \left(Po^L_{ij}\right)_{1\leq i,j \leq 2L+1}$ et $Id^L:= \left(Id^L_{ij}\right)_{1\leq i,j \leq 2L+1}$, qui sont toutes dans $\mathbb{C}^{(2L+1)\times (2L+1)}$, définies comme suit: pour tout $1\leq i,j \leq 2L+1$, $$H^L_{ij}:= \langle \partial_{y} e_{k_i}, \partial_y e_{k_j} \rangle_{L^2}, \quad Y^L_{ij}:= \langle -i \partial_{y} e_{k_i}, e_{k_j} \rangle_{L^2}, \quad Po^L_{ij} = \langle V_{\rm per}e_{k_i}, e_{k_j}\rangle_{L^2}, \quad Id^L_{ij} = \langle e_{k_i}, e_{k_j}\rangle_{L^2}, $$où $$k_i:= i - L -1 \quad \mbox{ et } \quad k_j := j - L -1.$$On remarque que lorsque $1\leq i,j \leq 2L+1$, on a alors $-L \leq k_i, k_j \leq L$. On définit également pour tout $q\in [0,1/2]$, $$A^L(q):= H^L + 2 \, q \, Y^L + |q|^2 \, Id^L + Po^L. $$ Question 4) Montrer que pour tout $1\leq i,j \leq 2L+1$, $$(A^L(q))_{ij} = a(e_{k_i}, e_{k_j}). $$En déduire que $A^L(q)$ est une matrice hermitienne. 
$\color{blue}{\textrm{Réponse 4}\\}$$\color{blue}{\textrm{Soit}\ i,j \in \mathbb{N}, \, 1\leq i,j \leq 2L+1\\}$$\color{blue}{\begin{array}{ccl}a(e_{k_i}, e_{k_j})&=&\int_0^{2\pi} \left( \overline{i \, q \, e_{k_i} + \partial_y e_{k_i}}\right) \left( i \, q \, e_{k_j} + \partial_y e_{k_j}\right) + V_{\rm per} \, \overline{e_{k_i}} \, e_{k_j} \\a(e_{k_i}, e_{k_j})&=&\int_0^{2\pi} \left(-i q \overline{\, e_{k_i}} + \partial_y \overline{e_{k_i}}\right) \left( i \, q \, e_{k_j} + \partial_y e_{k_j}\right) + Po^L_{ij} \\a(e_{k_i}, e_{k_j})&=&\int_0^{2\pi} |q|^2\overline{e_{k_i}}e_{k_j}-iq\partial_y\overline{e_{k_i}}e_{k_j} + iq\partial_y\overline{e_{k_i}}e_{k_j} + \partial_y\overline{e_{k_i}}\partial_ye_{k_j}+ Po^L_{ij} \\e_{k_i}, e_{k_j})&=&|q|^2Id^L_{ij}+ qY^L_{ij}+ H^L_{ij} + Po^L_{ij} + \int_0^{2\pi}-iq\partial_y\overline{e_{k_i}}e_{k_j} \\e_{k_i}, e_{k_j})&=&|q|^2Id^L_{ij}+ 2qY^L_{ij}+ H^L_{ij} + Po^L_{ij}\\e_{k_i}, e_{k_j})&=&(A^L(q))_{ij} \end{array}}$$\color{blue}{\textrm{L'avant dernière égalité est obtenue à l'aide d'une intégration par partie.}\\}$$\color{blue}{\textrm{Montrons que } \ A^L(q) \ \textrm{est hermitienne : }\\}$$\color{blue}{\textrm{Soit}\ i,j \in \mathbb{N}, \, 1\leq i,j \leq 2L+1\\}$$\color{blue}{\begin{array}{ccl}\overline{(A^L(q))_{ji}}&=&\overline{a(e_{k_j}, e_{k_i})}\\\overline{(A^L(q))_{ji}}&=&a(e_{k_i}, e_{k_j})\\\overline{(A^L(q))_{ji}}&=&(A^L(q))_{ij}\\\end{array}}$$\color{blue}{\textrm{Donc} \ A^L(q) \ \textrm{est hermitienne}\\}$ Question 5) Montrer que $(u_{n,L}^q, \epsilon_{n,L}^q)$ est solution du problème aux valeurs propres ci-dessus si et seulement si $$u_{n,L}^q = \sum_{i=1}^{2L+1} U_{n,L}^{i}(q) \, e_{k_i}$$où le vecteur $U_{n,L}(q):= (U_{n,L}^{i}(q))_{1\leq i \leq 2L+1} \in \mathbb{C}^{2L+1}$ est solution du problème aux valeurs propres matriciel$$A^L(q) \, U_{n,L}(q) = \epsilon_{n,L}(q) \, Id^L \, U_{n,L}(q). 
$$ $\color{blue}{\textrm{Réponse 5}\\}$$\color{blue}{\textrm{(i) Supposons que}\ u_{n,L}^q = \sum_{i=1}^{2L+1} U_{n,L}^{i}(q) \, e_{k_i} \ \textrm{où le vecteur} \ U_{n,L}(q) \ \textrm{est solution du problème aux valeurs propres matriciel précédent.} \\}$$\color{blue}{\textrm{Soit}\ j \in \mathbb{N}, \, 1\leq j \leq 2L+1\\}$$\color{blue}{\begin{array}{ccl}a(u_{n,L}^q, e_{k_j})&=&a(\sum_{i=1}^{2L+1} U_{n,L}^{i}(q) \, e_{k_i},e_{k_j})\\a(u_{n,L}^q, e_{k_j})&=&\sum_{i=1}^{2L+1} \overline{U_{n,L}^{i}(q)} a(e_{k_i},e_{k_j})\\a(u_{n,L}^q, e_{k_j})&=&\sum_{i=1}^{2L+1}\overline{U_{n,L}^{i}(q)}(A^L(q))_{ij}\\a(u_{n,L}^q, e_{k_j})&=&\overline{U_{n,L}(q)}^TA^L(q)_j\\a(u_{n,L}^q, e_{k_j})&=&\overline{A^L(q)U_{n,L}(q)}^T_j\\a(u_{n,L}^q, e_{k_j})&=&\overline{\epsilon_{n,L}(q) \, Id^L \, U_{n,L}(q)}^T_j\\a(u_{n,L}^q, e_{k_j})&=&\epsilon_{n,L}(q)\overline{U_{n,L}(q)}^T\overline{Id^L}^Ta(u_{n,L}^q, e_{k_j})&=&\epsilon_{n,L}(q)\sum_{i=1}^{2L+1}\overline{U_{n,L}^{i}(q)}\langle e_{k_i},e_{k_j} \rangle\\a(u_{n,L}^q, e_{k_j})&=&\epsilon_{n,L}(q)\langle\sum_{i=1}^{2L+1}U_{n,L}^{i}(q) e_{k_i},e_{k_j} \rangle\\a(u_{n,L}^q, e_{k_j})&=&\epsilon_{n,L}(q)\langle u_{n,L}^q,e_{k_j} \rangle\\\end{array}\\}$$\color{blue}{\textrm{(ii) Supposons que}\ (u_{n,L}^q, \epsilon_{n,L}^q) \ \textrm{ est solution du problème aux valeurs propres ci-dessus} \\}$$\color{blue}{(u_{n,L}^q, \epsilon_{n,L}(q))\in V^L \times \mathbb{R}}$$\color{blue}{\textrm{On peut donc écrire} \ u_{n,L}^q \ \textrm{sous la forme suivante :} \\}$$\color{blue}{u_{n,L}^q = \sum_{i=1}^{2L+1} W_{n,L}^{i}(q) \, e_{k_i}\ \textrm{avec} \ W_{n,L}(q):= (U_{n,L}^{i}(q))_{1\leq i \leq 2L+1} \in \mathbb{C}^{2L+1}\\} $$\color{blue}{\begin{array}{ccl}a(u_{n,L}^q, e_{k_j})&=&\epsilon_{n,L}(q)\langle u_{n,L}^q,e_{k_j} \rangle\\a(\sum_{i=1}^{2L+1} W_{n,L}^{i}(q) \, e_{k_i}, e_{k_j})&=&\epsilon_{n,L}(q)\langle\sum_{i=1}^{2L+1} W_{n,L}^{i}(q) \, e_{k_i},e_{k_j} \rangle\\\sum_{i=1}^{2L+1} \overline{W_{n,L}^{i}(q)} \, a(e_{k_i}, e_{k_j})&=&\epsilon_{n,L}(q)\sum_{i=1}^{2L+1} \overline{W_{n,L}^{i}(q)} \,\langle e_{k_i},e_{k_j} \rangle\\\sum_{i=1}^{2L+1} \overline{W_{n,L}^{i}(q)} \, (A^L(q))_{ij})&=&\epsilon_{n,L}(q)\sum_{i=1}^{2L+1} \overline{W_{n,L}^{i}(q)} \, Id^L_{ij}\\\overline{W_{n,L}(q)}^TA^L(q))_j&=&\epsilon_{n,L}(q)\overline{W_{n,L}(q)}^TId^L_{j}\\A^L(q)^T\overline{W_{n,L}(q)})_j&=&\epsilon_{n,L}(q)(Id^L)^T \overline{W_{n,L}(q)}_j\\A^L(q)W_{n,L}(q))_j&=&\epsilon_{n,L}(q)\overline{(Id^L)^T} W_{n,L}(q)_j\\A^L(q)W_{n,L}(q))_j&=&\epsilon_{n,L}(q)(Id^L) W_{n,L}(q)_j\\\end{array}\\}$$\color{blue}{\textrm{La dernière égalité provient du fait que} \ Id^L \ \textrm{est hermitienne}\\}$$\color{blue}{\textrm{Le vecteur} \ W_{n,L}(q)= (U_{n,L}^{i}(q))_{1\leq i \leq 2L+1} \in \mathbb{C}^{2L+1} \ \textrm{est solution du problème aux valeurs propres matriciel :}\\}$$\color{blue}{A^L(q) \, U_{n,L}(q) = \epsilon_{n,L}(q) \, Id^L \, U_{n,L}(q). }$ Question 6) Que valent les matrices $Id^L$, $H^L$ et $Y^L$? Remplir les lignes de code ci-dessous. $\color{blue}{\textrm{Réponse 6}\\}$$\color{blue}{\textrm{Soit}\ 1\leq i,j \leq 2L+1\\}$$\color{blue}{\begin{array}{ccl}(Id^L)_{ij}&=&\delta_{ij}k_i^2\\(H^L)_{ij}&=&\delta_{ij}k_i\\(Y^L)_{ij}&=&\delta_{ij}\\\end{array}}$
###Code
def getLaplacianMatrix(L): ## returns the matrix H^L
Hmat = np.zeros((2*L+1, 2*L+1), dtype = 'complex')
for k in range(-L, L+1):
Hmat[k+L,k+L] = k**2
return Hmat
def getiGradientMatrix(L): ## returns the matrix Y^L
Ymat = np.zeros((2*L+1, 2*L+1), dtype = 'complex')
for k in range(-L, L+1):
Ymat[k+L,k+L] = k
return Ymat
def getIdentityMatrix(L): ## returns the matrix I^L
Imat = np.zeros((2*L+1, 2*L+1), dtype = 'complex')
for k in range(-L, L+1):
Imat[k+L,k+L] = 1
return Imat
def getPotentialMatrix(V, L): # returns the matrix of multiplication by the potential V_{per},
    # i.e. the matrix Po^L
    # In this function, V is a vector defined as follows:
    # if V_{per}(y) = \sum_{p=1}^P a_p \cos(py) + b_p \sin(py) + V_0 with a_p, b_p and V_0 real numbers,
    # then the vector V is defined as V = [b_{P}, b_{P-1}, ..., b_1, V_0, a_1, ..., a_P]
P = int ( (len(V)-1)/2 )
Vmat = np.zeros((2*L+1,2*L+1), dtype='complex');
V0 = V[P]; # the constant coeff
for k in range(-L,L+1):
for l in range(-L,L+1):
kp= k+L
lp= l+L
if (k==l):
Vmat[kp,lp] = V0
else:
ind = abs(k-l)
if (ind <=P):
pind = P + ind
Vmat[kp,lp] = V[pind]/2
pind2 = P -ind
if (k>l):
b = 0.5*V[pind2]
Vmat[kp,lp] = Vmat[kp,lp] + 1j*b
else:
b = -0.5*V[pind2]
Vmat[kp,lp] = Vmat[kp,lp] + 1j*b
return Vmat
###Output
_____no_output_____
###Markdown
The goal of the code lines below is to set the value of $L$ and of $V_{\rm per}$, and to define the matrices $Id^L$, $Po^L$, $H^L$ and $Y^L$.
###Code
L = 100   ## Discretization in terms of Fourier modes
V = [0, 0, 1] ## Definition of the potential V_{per}(y) = cos y
IdL = getIdentityMatrix(L)
PoL = getPotentialMatrix(V, L)
HL = getLaplacianMatrix(L)
YL = getiGradientMatrix(L)
###Output
_____no_output_____
###Markdown
Question 7) Fill in the code lines below to define the function that maps $q$ to the matrix $A^L(q)$.
###Code
def getAL(q): ## Returns the matrix A^L(q)
Amat = HL+2*q*YL+q*q*IdL+PoL
return Amat
###Output
_____no_output_____
###Markdown
We then denote by $\epsilon_{1,L}(q) \leq \epsilon_{2,L}(q) \leq \cdots \leq \epsilon_{2L+1,L}(q)$ the eigenvalues of the matrix $A^L(q)$ (which is, as a reminder, of size $(2L+1) \times (2L+1)$) sorted in increasing order. In what follows we admit that $$\epsilon_{1,L}(q) \mathop{\longrightarrow}_{L\to +\infty} \epsilon_1(q) \quad \mbox{ and } \quad \epsilon_{2,L}(q) \mathop{\longrightarrow}_{L\to +\infty} \epsilon_2(q). $$We moreover denote by $U_{1,L}(q), \cdots, U_{2L+1,L}(q) \in \mathbb{C}^{2L+1}$ associated eigenvectors of norm $1$, so that$$\forall 1\leq n \leq 2L+1, \quad A^{L}(q) \, U_{n,L}(q) = \epsilon_{n,L}(q) \, Id^L \, U_{n,L}(q).$$ Question 8) What is the purpose of the function getSpectrumAL(q) defined in the code lines below? $\color{blue}{\textrm{Answer 8}\\}$$\color{blue}{\textrm{As indicated in the comments, the function getSpectrumAL(q) returns the eigenvalues} \ \epsilon_{1,L}(q), \epsilon_{2,L}(q) \ \textrm{and the eigenvectors}\ U_{1,L}(q), U_{2,L}(q) \\}$$\color{blue}{\textrm{This function is used to compute the diffusion coefficients in the next subsection.}\\}$
###Code
def getSpectrumAL(q):
    ## Returns the eigenvalues epsilon_{1,L}(q), epsilon_{2,L}(q) and the eigenvectors U_{1,L}(q) and U_{2,L}(q)
A = getAL(q)
eps, U = np.linalg.eigh(A)
return eps[0],eps[1],U[:,0],U[:,1]
###Output
_____no_output_____
###Markdown
3- Computation of the diffusion coefficients We will now use the values of $\epsilon_{1,L}(q)$ and $\epsilon_{2,L}(q)$ as approximations of $\epsilon_1(q)$ and $\epsilon_2(q)$ in order to approximate the values of $\mu_n$ and $\mu_p$. More precisely, we will compute $\epsilon_{1,L}(q)$ and $\epsilon_{2,L}(q)$, as well as $U_{1,L}(q)$ and $U_{2,L}(q)$, for a number $Q$ of values of $q\in [0,1/2]$. We will then use these values to compute approximations of the diffusion coefficients $\mu_n$ and $\mu_p$. Let $Q\in \mathbb{N}^*$. For all $0\leq l \leq Q-1$, we define $q_l:= \frac{l}{2(Q-1)}$ so that $q_0 = 0 < q_1 < \cdots < q_{Q-1} = \frac{1}{2}$. We set $\Delta q := \frac{1}{2(Q-1)}$.
###Code
Q = 50 # number of values of q for which we solve the full eigenvalue problem
qq = np.linspace(0, 1/2, Q)
###Output
_____no_output_____
###Markdown
Question 9) Plot the values of $\epsilon_{1,L}(q_l)$ and of $\epsilon_{2,L}(q_l)$ for all $0\leq l \leq Q-1$.
###Code
## We start by plotting the values of epsilon_1^L(q) and epsilon_2^L(q) as functions of q
eps1 = np.zeros(Q)
eps2 = np.zeros(Q)
for q in range(0,Q):
eps1q, eps2q, U1q, U2q = getSpectrumAL(qq[q])
eps1[q] = eps1q
eps2[q] = eps2q
plt.plot(qq,eps1, '+r', label ="$\epsilon_{1,L}(q)$")
plt.plot(qq,eps2, '+b',label ="$\epsilon_{2,L}(q)$")
plt.legend()
plt.grid() # a grid makes the plot easier to read
plt.show()
plt.close()
###Output
_____no_output_____
###Markdown
In the code lines below, the values of $\mu_n$ and $\mu_p$ are approximated respectively by:$$\mu_p \approx 4 \Delta q \sum_{l=1}^{Q-1} \frac{1}{2}(q_{l-1} + q_{l}) \frac{\epsilon_{1,L}(q_l) - \epsilon_{1,L}(q_{l-1})}{\Delta q},$$and $$\mu_n \approx - 4 \Delta q \sum_{l=1}^{Q-1} \frac{1}{2}(q_{l-1} + q_{l}) \frac{\epsilon_{2,L}(q_l) - \epsilon_{2,L}(q_{l-1})}{\Delta q}.$$
###Code
## Computation of the coefficients mu_n and mu_p
dq = (1/2)/(Q-1)
mun = 0
mup = 0
for q in range(0,Q-1):
mup = mup + dq*(eps1[q+1] - eps1[q])/dq*4*0.5*(qq[q] + qq[q+1])
mun = mun - dq*(eps2[q+1] - eps2[q])/dq*4*0.5*(qq[q] + qq[q+1])
print("mun = ")
print(mun)
print("mup = ")
print(mup)
###Output
mun =
0.27613877023220057
mup =
0.031456013755282376
###Markdown
Question 10) Justifier pourquoi ces formules d'approximation sont utilisées. On rappelle qu'au niveau continu les coefficients $\mu_n$ et $\mu_p$ sont donnés par les formules données au début de la partie I. $\color{blue}{\textrm{Réponse 10}\\}$$\color{blue}{\textrm{Comme cela est expliqué plus haut, on a discrétisé l'intervalle de valeurs pris par q. Cette discrétisation permet de calculer la valeur de} \epsilon_{1,L}(q),\epsilon_{2,L}(q) \ \textrm{ comme approximations de } \ \epsilon_1(q), \epsilon_2(q)\\}$$\color{blue}{\textrm{Enfin, ces valeurs approchées sont utilisées afin de calculer les approximations des coefficients de diffusion} \mu_n, \mu_p$. \\}$$\color{blue}{\textrm{Puisque les coefficients de diffusion s'expriment comme des intégrales, on peut évaluer leur approximation à l'aide d'une somme discrète.}}$$\color{blue}{\textrm{Ici, c'est la méthode des trapèzes qui est utilisée}\\}$ Question 11) Quelles sont les valeurs de $\mu_n$ et $\mu_p$ obtenues? $\color{blue}{\textrm{Réponse 11}\\}$$\color{blue}{\mu_n=0.276\\}$$\color{blue}{\mu_p=0.031}$ Partie II: Système d'équations de drift-diffusion 1 - Méthode de différences finies Le but de cette partie est de résoudre par une méthode de différences finies le système d'équations couplées:$$\left\{\begin{array}{l} - \partial_{xx} v(x,t) = n(x,t) - p(x,t) - c(x)\\ \partial_t n(x,t) - \mu_n \Big( \partial_x n(x,t) - \partial_x v(x,t) n(x,t) \Big) = - \left( n(x,t)p(x,t) - n_i^2\right)\\ \partial_t p(x,t) - \mu_p\Big( \partial_x p(x,t) + \partial_x v(x,t) p(x,t) \Big) = - \left( n(x,t)p(x,t) - n_i^2\right)\\\end{array}\right.$$présenté dans le fichier d'introduction, avec des conditions de bord périodiques en $x\in (-1,1)$ et des conditions initiales $n(x,0) = p(x,0) = n_i$. Nous conservons dans cette partie les mêmes notations que dans le fichier d'introduction. Nous utiliserons les valeurs des coefficients $\mu_n$ et $\mu_p$ calculées dans la partie I du TP.Pour simplifier, nous supposerons que $c_0 = 1$ et que $n_i = 1$. Quitte à changer les unités du problème de manière adéquate, on peut toujours se ramener à ce cas. Soit $X\in \mathbb{N}^*$. On note $\Delta x:= \frac{2}{X}$ et pour tout $i\in \mathbb{Z}$, on notera $x_i = -1 + i \Delta x$ de telle sorte que $x_0 = -1 < x_1 < \cdots < x_X = 1$. On notera également pour tout $i \in \mathbb{Z}$, $y_i:= \frac{x_{i-1} + x_i}{2}$.
###Code
X = 70 # Number of discretization points
xx = np.linspace(-1, 1, X+1)
dx = 2.0/X
xxplot = np.linspace(-1+0.5*dx, 1-0.5*dx, X)
###Output
_____no_output_____
###Markdown
Let us start by discretizing the above system of equations in space. For all $t>0$, we seek vectors $V(t):=(V_i(t))_{1\leq i \leq X}$, $N(t):=(N_i(t))_{1\leq i \leq X}$ and $P(t):= \left( P_i(t)\right)_{1\leq i \leq X}\in\mathbb{R}^X$ such that for all $1\leq i \leq X$, $$v(y_i,t) \approx V_i(t), \quad n(y_i,t) \approx N_i(t), \quad p(y_i,t) \approx P_i(t). $$We also denote by $C:=(C_i)_{1\leq i \leq X}\in \mathbb{R}^X$ the vector defined by$$\forall 1\leq i \leq X, \; C_i:= c(y_i).$$
###Code
# Definition of the vector C
C = np.zeros(X)
for i in range(0,X):
if (i<int(X/2)):
C[i] = 1
else:
C[i] = -1
###Output
_____no_output_____
###Markdown
At the initial time $t=0$, we assume that $n(x,0) = p(x,0) = 1$. We therefore define $N(0)$ and $P(0)$ as vectors of size $X$ whose components are all equal to $1$. For any vectors $U,V\in \mathbb{R}^X$, we will denote in the following by $U\odot V$ the vector of size $X$ such that $$\forall 1\leq i \leq X, \quad (U\odot V)_i = U_i V_i.$$ Using the following approximation formulas:\begin{align*}-\partial_{xx} v(y_i,t) & \approx \frac{- v(y_{i+1},t) + 2 v(y_i,t) - v(y_{i-1},t)}{\Delta x^2}\\\partial_x v(y_i,t) & \approx \frac{v(y_{i+1},t) - v(y_{i-1},t)}{2\Delta x}\\\partial_x n(y_i,t) & \approx \frac{n(y_{i+1},t) - n(y_{i-1},t)}{2\Delta x}\\\partial_x p(y_i,t) & \approx \frac{p(y_{i+1},t) - p(y_{i-1},t)}{2\Delta x}\end{align*}and using the periodic boundary conditions of the problem, we consider the following finite-difference scheme to approximate the system above: for all $1\leq i \leq X$, $$\left\{\begin{array}{l}\displaystyle \frac{- V_{i+1}(t) + 2 V_i(t) - V_{i-1}(t)}{\Delta x^2} = N_i(t) - P_i(t) - C_i \\\displaystyle \partial_t N_i(t) - \mu_n\left( \frac{N_{i+1}(t) - N_{i-1}(t)}{2\Delta x} - \frac{V_{i+1}(t) - V_{i-1}(t)}{2\Delta x} N_i(t) \right) = - \left( N_i(t) P_i(t) - 1 \right)\\\displaystyle \partial_t P_i(t) - \mu_p\left( \frac{P_{i+1}(t) - P_{i-1}(t)}{2\Delta x} + \frac{V_{i+1}(t) - V_{i-1}(t)}{2\Delta x} P_i(t) \right) = - \left( N_i(t) P_i(t) - 1 \right)\\\end{array}\right.$$To give a meaning to all the quantities used above, we rely on the periodic boundary conditions, so that $V_{X+1}(t) = V_1(t)$, $V_{-1}(t) = V_{X-1}(t)$, etc.

Question 1) Deduce that there exist matrices $D\in \mathbb{R}^{X\times X}$ and $G\in \mathbb{R}^{X\times X}$ such that the vectors $V(t)$, $N(t)$ and $P(t)$ solve the system$$\left\{\begin{array}{l}D V(t) = N(t) - P(t) - C \\ \displaystyle \frac{dN}{dt}(t) - \mu_n\left( G N(t) - (G V(t))\odot N(t) \right) = - \left( N(t) \odot P(t) - Z \right)\\ \displaystyle \frac{dP}{dt}(t) - \mu_p\left( G P(t) + (GV(t))\odot P(t) \right) = - \left( N(t)\odot P(t) - Z \right)\\\end{array}\right.$$where $Z\in \mathbb{R}^X$ is the vector whose coordinates are all equal to $1$. Give the expressions of the matrices $D$ and $G$, and complete the expressions in the code cells below.

$\color{blue}{\textrm{Answer 1}\\}$
$\color{blue}{\textrm{The three equations above hold for every } 1\leq i \leq X.\\}$
$\color{blue}{\textrm{Moreover, their left-hand sides are linear combinations of the components of the vectors } V(t) \ \textrm{and} \ N(t).\\}$
$\color{blue}{\textrm{We can therefore introduce a matrix}\ D \in \mathbb{R}^{X\times X} \ \textrm{such that for all}\ 2\leq i \leq X-1 :\\}$
$\color{blue}{\begin{array}{rcl}DV(t)_i&=&\frac{- V_{i+1}(t) + 2 V_i(t) - V_{i-1}(t)}{\Delta x^2}\\\sum_{k=1}^X D_{ik}V_k(t)&=&\frac{- V_{i+1}(t) + 2 V_i(t) - V_{i-1}(t)}{\Delta x^2}\\\end{array}}$
$\color{blue}{\textrm{It remains to examine the first and last rows. Using the periodicity identities given in the statement, one finds that } D_{11}=D_{XX}=\frac{2}{\Delta x^2}, \ D_{12}=D_{1X}=D_{X1}=D_{X,X-1}=-\frac{1}{\Delta x^2}. \\}$
$\color{blue}{\textrm{Hence, for}\ 1\leq i \leq X, \ 1\leq k \leq X :\\}$
$\color{blue}{D_{ik}=\left\{\begin{array}{cll}-\frac{1}{\Delta x^2}& \textrm{if}&k=i+1\\\frac{2}{\Delta x^2}& \textrm{if}&k=i\\-\frac{1}{\Delta x^2}& \textrm{if}&k=i-1\\-\frac{1}{\Delta x^2}& \textrm{if}&i=X \ \textrm{and} \ k=1\\-\frac{1}{\Delta x^2}& \textrm{if}&i=1 \ \textrm{and} \ k=X\\0 & \textrm{otherwise}\end{array}\right.}$
$\color{blue}{\textrm{Likewise, we can introduce a matrix}\ G \in \mathbb{R}^{X\times X} \ \textrm{such that for all}\ 2\leq i \leq X-1 :\\}$
$\color{blue}{\begin{array}{rcl}GV(t)_i&=&\frac{V_{i+1}(t) - V_{i-1}(t)}{2\Delta x}\\\sum_{k=1}^X G_{ik}V_k(t)&=&\frac{V_{i+1}(t) - V_{i-1}(t)}{2\Delta x}\\\end{array}}$
$\color{blue}{\textrm{It remains to examine the first and last rows. Using the same periodicity identities, one finds that } G_{12}=G_{X1}=\frac{1}{2 \Delta x}, \ G_{1X}=G_{X,X-1}=-\frac{1}{2 \Delta x}. \\}$
$\color{blue}{\textrm{Hence, for}\ 1\leq i \leq X, \ 1\leq k \leq X :\\}$
$\color{blue}{G_{ik}=\left\{\begin{array}{cll}\frac{1}{2 \Delta x}& \textrm{if}&k=i+1\\\frac{1}{2 \Delta x}& \textrm{if}&i=X \ \textrm{and} \ k=1\\-\frac{1}{2 \Delta x}& \textrm{if}&k=i-1\\-\frac{1}{2 \Delta x}& \textrm{if}&i=1 \ \textrm{and} \ k=X\\0 & \textrm{otherwise}\end{array}\right.}$
###Code
# Definition of the identity matrix of size X * X
I = np.eye(X)
# Definition of the vector Z
Z = np.ones(X)
# Definition of the matrix D
D=np.zeros((X,X))
for i in range (0,X-1):
D[i,i]=2/(dx*dx)
D[i,i+1]=-1/(dx*dx)
D[i+1,i]=-1/(dx*dx)
D[X-1,X-1]=2.0/(dx*dx)
D[0,X-1]=-1.0/(dx*dx)
D[X-1,0]=-1.0/(dx*dx)
# Definition of the matrix G:
G=np.zeros((X,X))
for i in range (1,X-1):
G[i,i-1]= -1/(2*dx)
G[i,i+1]= +1/(2*dx)
G[X-1,X-2]=-1.0/(2*dx)
G[X-1,0]=1.0/(2*dx)
G[0,X-1]=-1.0/(2*dx)
G[0,1]=1.0/(2*dx)
###Output
_____no_output_____
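###Markdown
As a quick sanity check (a sketch added here, assuming `xxplot` is the X-point periodic spatial grid with spacing $\Delta x$ used in the plots further below), we apply $D$ and $G$ to the smooth $2$-periodic function $u(x)=\sin(\pi x)$, for which $-u'' = \pi^2 u$ and $u' = \pi\cos(\pi x)$; both errors should be of order $\Delta x^2$.
###Code
# Apply D and G to u(x) = sin(pi*x) sampled on the grid and compare with the
# exact derivatives -u'' = pi^2 * u and u' = pi*cos(pi*x).
u = np.sin(np.pi * xxplot)
err_D = np.max(np.abs(np.dot(D, u) - np.pi**2 * u))
err_G = np.max(np.abs(np.dot(G, u) - np.pi * np.cos(np.pi * xxplot)))
print("max error of D (second derivative):", err_D)
print("max error of G (first derivative):", err_G)
###Output
_____no_output_____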
###Markdown
Unfortunately, the matrix $D$ is not invertible. Indeed, the potential $v(x,t)$ is only defined up to an additive constant: if $v(x,t)$ is a solution of the system above, then $v(x,t) + v_0$ is also a solution for every $v_0\in \mathbb{R}$. A way to work around this issue is to select the solution $v(x,t)$ such that $v(y_X,t) = 0$ for all $t>0$. In terms of vectors, this amounts to imposing $V_X(t) = 0$ for all $t>0$. We then write $\overline{D}:= (D_{ij})_{1\leq i,j \leq X-1} \in \mathbb{R}^{(X-1)\times (X-1)}$, and for all $t>0$, $$\overline{V}(t):= (V_i(t))_{1\leq i \leq X-1}, \quad \overline{P}(t):= (P_i(t))_{1\leq i \leq X-1},\quad \overline{N}(t):= (N_i(t))_{1\leq i \leq X-1},\quad \overline{C}:= (C_i)_{1\leq i \leq X-1}.$$In the following we take for granted that $\overline{D}$ is invertible.

Question 2) Show that $V(t)$ is a solution of the equation $$ DV(t) = N(t) - P(t) - C $$ with $V_X(t) = 0$ if and only if $$ \overline{D} \; \overline{V}(t) = \overline{N}(t) - \overline{P}(t) - \overline{C}. $$

$\color{blue}{\textrm{Answer 2}\\}$
$\color{blue}{\textrm{(i) If }\ V(t) \ \textrm{solves} \ DV(t) = N(t) - P(t) - C \ \textrm{with} \ V_X(t) = 0, \textrm{ then for every } 1\leq i \leq X-1 \textrm{ the } i\textrm{-th row reads } \sum_{k=1}^{X} D_{ik}V_k(t) = N_i(t) - P_i(t) - C_i. \textrm{ Since } V_X(t)=0, \textrm{ the term } D_{iX}V_X(t) \textrm{ vanishes, which is exactly } \overline{D} \; \overline{V}(t) = \overline{N}(t) - \overline{P}(t) - \overline{C}. \\}$
$\color{blue}{\textrm{(ii) Conversely, if } \overline{D} \; \overline{V}(t) = \overline{N}(t) - \overline{P}(t) - \overline{C}, \textrm{ extend } \overline{V}(t) \textrm{ by setting } V_X(t)=0. \textrm{ The first } X-1 \textrm{ rows of } DV(t) = N(t) - P(t) - C \textrm{ hold by the same computation. The last row is then redundant: every column of } D \textrm{ sums to zero, so the } X\textrm{-th row of } D \textrm{ is minus the sum of the other rows, and the } X\textrm{-th equation follows as soon as the right-hand side satisfies the discrete neutrality condition } \sum_i (N_i - P_i - C_i) = 0.\\}$
###Code
# Definition of the vector Cbar
Cbar = C[0:(X-1)]
# Definition of the matrix Dbar
Dbar = D[0:(X-1), 0:(X-1)]
###Output
_____no_output_____
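###Markdown
A quick numerical check (a sketch, using the `Dbar` just built) that $\overline{D}$ is indeed invertible, as admitted in the statement:
###Code
print("rank of Dbar:", np.linalg.matrix_rank(Dbar), "expected:", X - 1)
print("condition number of Dbar:", np.linalg.cond(Dbar))
###Output
_____no_output_____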
###Markdown
Question 3) What does the function compute\_Potential do in the code below?

$\color{blue}{\textrm{Answer 3}\\}$
$\color{blue}{\textrm{The function compute_Potential first assembles the right-hand side } rhs=\overline{N}(t) - \overline{P}(t) - \overline{C}.\\}$
$\color{blue}{\textrm{It then solves the linear system } \ \overline{D}Y=rhs, \ \textrm{where } Y \textrm{ represents the vector } \ \overline{V}.\\}$
$\color{blue}{\textrm{Finally, the full vector } V \textrm{ solving the problem is assembled from } \ \overline{V} \textrm{ by appending } V_X = 0.\\}$
###Code
def compute_Potential(N,P):
Nbar = N[0:(X-1)]
Pbar = P[0:(X-1)]
rhs = Nbar - Pbar - Cbar
y = np.linalg.solve(Dbar,rhs)
V = np.zeros(X)
V[0:(X-1)] = y
return V
###Output
_____no_output_____
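###Markdown
Residual check (a sketch): with $n = p = 1$ the right-hand side of the Poisson equation is $-C$, and the output of compute\_Potential should satisfy the first $X-1$ rows of $DV = N - P - C$ up to round-off, with $V_X = 0$.
###Code
Ntest, Ptest = np.ones(X), np.ones(X)
Vtest = compute_Potential(Ntest, Ptest)
residual = np.dot(D, Vtest) - (Ntest - Ptest - C)
print("max residual on the first X-1 rows:", np.max(np.abs(residual[0:(X-1)])))
print("V_X =", Vtest[X-1])
###Output
_____no_output_____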
###Markdown
Question 4) What does the function compute\_product do in the code below?

$\color{blue}{\textrm{Answer 4}\\}$
$\color{blue}{\textrm{The function compute_product takes two vectors}\ N,P \ \textrm{and returns the componentwise product} \ N\odot P \textrm{ (the same as the elementwise product N*P in NumPy).}\\}$
###Code
def compute_product(N,P):
R = np.zeros(X)
for i in range(0,X):
R[i] = N[i]*P[i]
return R
###Output
_____no_output_____
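###Markdown
Small check (a sketch): compute\_product is the componentwise product, so it should agree with NumPy's elementwise `*` operator.
###Code
a, b = np.random.rand(X), np.random.rand(X)
print(np.allclose(compute_product(a, b), a * b))
###Output
_____no_output_____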
###Markdown
It remains to discretize this system of equations in time as well. To this end, we introduce a time step $\Delta t>0$ and define, for every $m\in \mathbb{N}$, $t_m:= m\Delta t$. For every $m\in \mathbb{N}$, $V^m \in \mathbb{R}^X$ will be a vector approximating $V(t_m)$. Similarly, $N^m$ and $P^m$ will be vectors of $\mathbb{R}^X$ approximating $N(t_m)$ and $P(t_m)$ respectively. We therefore set $N^0:= N(0)$ and $P^0:=P(0)$.
###Code
#Definition of the time step
dt = 0.2
# Initial values of the vectors N(t) and P(t)
N0init = np.ones(X)
P0init = np.ones(X)
N0 = N0init
P0 = P0init
###Output
_____no_output_____
###Markdown
We will use a semi-implicit time discretization of the semi-discrete system obtained in Question 1, which reads as follows: for every $m\in \mathbb{N}$, $$\left\{\begin{array}{l}D V^m = N^m - P^m - C \\\displaystyle \frac{N^{m+1} - N^m}{\Delta t} - \mu_n\left( G N^{m+1} - (G V^m)\odot N^{m+1} \right) = - \left( N^m \odot P^m - Z \right)\\\displaystyle \frac{P^{m+1} - P^m}{\Delta t} - \mu_p\left( G P^{m+1} + (GV^m)\odot P^{m+1} \right) = - \left( N^m\odot P^m - Z \right)\\\end{array}\right.$$

Question 5) Show that $N^{m+1}$ and $P^{m+1}$ solve the equations above if and only if they solve linear systems of the form $$ K^m_n N^{m+1} = R_n^m \quad \text{and} \quad K^m_p P^{m+1} = R_p^m $$ where $K^m_n, K^m_p \in \mathbb{R}^{X \times X}$ are matrices and $R_n^m, R_p^m \in \mathbb{R}^X$ are vectors, whose expressions are to be given in terms of $G$, $V^m$, $N^m$, $P^m$, $Z$, $\mu_n$, $\mu_p$ and $\Delta t$.

$\color{blue}{\textrm{Answer 5}\\}$
$\color{blue}{N^{m+1}, P^{m+1} \ \textrm{solve the equations above if and only if they solve linear systems of the form } \\ \ K^m_n N^{m+1} = R_n^m \ \textrm{and} \ K^m_p P^{m+1} = R_p^m \ \textrm{with: }}$
$\color{blue}{\begin{array}{ccl} K^m_n&=&\frac{I_d^X}{\Delta t}-\mu_n(G-\textrm{Diag}(GV^m)) \\ K^m_p&=&\frac{I_d^X}{\Delta t}-\mu_p(G+\textrm{Diag}(GV^m)) \\ R_n^m&=&\frac{N^m}{\Delta t}-(N^m \odot P^m - Z )\\ R_p^m&=&\frac{P^m}{\Delta t}-(N^m \odot P^m - Z )\end{array}}$
$\color{blue}{\textrm{(Note the sign: the drift term enters with a } + \textrm{ for } p \textrm{ and a } - \textrm{ for } n \textrm{, as in the scheme above.)}\\}$
$\color{blue}{\textrm{Diag denotes the map sending a vector to the diagonal matrix} \\\textrm{whose } i\textrm{-th diagonal entry equals the } i\textrm{-th component of the vector.}}$
$\color{blue}{\textrm{(i) If the matrices and vectors have the form above, a direct computation shows that } N^{m+1}, P^{m+1} \ \textrm{solve the last two equations above.}\\}$
$\color{blue}{\textrm{(ii) Conversely, these two equations can be written in matrix form}\\\textrm{because their left-hand sides are linear combinations of the components of } N^{m+1}, P^{m+1}.\\}$

Let $M\in \mathbb{N}^*$. We define $V_{tab}, N_{tab}, P_{tab}\in \mathbb{R}^{X \times M}$ as the matrices $$V_{tab} = (V^0 | V^1|\cdots|V^{M-1}), \quad N_{tab} = (N^0 | N^1|\cdots|N^{M-1}), \quad P_{tab} = (P^0 | P^1|\cdots|P^{M-1}).$$
###Code
#Number of time steps to be computed
M = 2000
###Output
_____no_output_____
###Markdown
Question 6) Complete the expressions of $K^m_n$, $K^m_p$, $R_n^m$ and $R_p^m$ in the code below.
###Code
def resolution_probleme_DD(mun, mup):
#These tabs will contain the different values of the approximate solution at the different time steps
Ntab = np.zeros((X,M))
Ptab = np.zeros((X,M))
Vtab = np.zeros((X,M))
N = N0
P = P0
for it in range(0,M):
V = compute_Potential(N,P)
Ntab[:,it] = N
Ptab[:,it] = P
Vtab[:,it] = V
dV = np.dot(G,V)
dVmat = np.zeros((X,X))
for i in range(0,X):
dVmat[i,i] = dV[i]
Knmat = np.eye(X)/dt-mun*(G-dVmat)
Kpmat = np.eye(X)/dt-mup*(G+dVmat) # note the + sign: in the scheme, the drift term for P has the opposite sign to the one for N
Rn = N/dt-(compute_product(N,P)-Z)
Rp = P/dt-(compute_product(N,P)-Z)
Nnew = np.linalg.solve(Knmat, Rn)
Pnew = np.linalg.solve(Kpmat,Rp)
N = Nnew
P = Pnew
return Ntab, Ptab, Vtab
###Output
_____no_output_____
###Markdown
To solve the drift-diffusion system, we use the values of the parameters $\mu_n$ and $\mu_p$ computed at the end of part I of the TP.
###Code
# Diffusion coefficient values (computed at the end of part I) used to solve the full problem
Ntab, Ptab, Vtab = resolution_probleme_DD(mun, mup)
###Output
_____no_output_____
###Markdown
The goal of the code below is to display the time evolution of the functions $n(t,x)$, $p(t,x)$ and $v(t,x)$ computed with the scheme implemented above.
###Code
## Display the evolution of n(t,x), p(t,x) and v(t,x) over time
fig, (ax1, ax2, ax3) = plt.subplots(3,1)
plotN, = ax1.plot(xxplot,Ntab[:,0])
plotP, = ax2.plot(xxplot,Ptab[:,0])
plotV, = ax3.plot(xxplot,Vtab[:,0])
def animate(p):
N = Ntab[:,p]
plotN.set_ydata(N)
P = Ptab[:,p]
plotP.set_ydata(P)
V = Vtab[:,p]
plotV.set_ydata(V)
def init():
ax1.set_xlim(-1, 1)
ax1.set_ylim( 0.3, 2.2)
ax1.set_xlabel('x')
ax1.set_ylabel('n(x)')
ax2.set_xlim(-1, 1)
ax2.set_ylim( 0.3, 2.2)
ax2.set_xlabel('x')
ax2.set_ylabel('p(x)')
ax3.set_xlim(-1, 1)
ax3.set_ylim(-0.6, 1.0)
ax3.set_xlabel('x')
ax3.set_ylabel('v(x)')
return plotN,plotP,plotV,
step = 1
steps = np.arange(1,M,step)
ani1 = FuncAnimation(fig, animate,steps, init_func = init, interval = 100, blit = True)
plt.close()
###Output
_____no_output_____
###Markdown
2- Computation of the electric potential energy of the semiconductor material

Question 7) For every $0\leq m < M$, compute the value of the electric potential energy of the semiconductor material at time $t_m$, defined as $$E(t_m) = \int_{-1}^1 v(t_m, x) \ \big( p(t_m, x) - n(t_m, x) \big) \,dx.$$To do so, we use a Riemann sum to approximate the integral in $x$ above, together with the approximations of $v(t,x)$, $p(t,x)$ and $n(t,x)$ obtained with the finite-difference method.

$\color{blue}{\textrm{We compute the electric potential energy of the semiconductor material with the trapezoidal rule.}\\}$
###Code
def Ener(m): # electric potential energy at time t_m, approximated by a (trapezoidal) Riemann sum
Nm=Ntab[:,m]
Pm=Ptab[:,m]
Vm=Vtab[:,m]
E=0
for k in range (X-1):
E+=dx*(Vm[k]*(Pm[k]-Nm[k])+Vm[k+1]*(Pm[k+1]-Nm[k+1]))/2
return E
###Output
_____no_output_____
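###Markdown
Cross-check (a sketch, assuming `xxplot` is the spatial grid used in the plots above): the same integral evaluated with NumPy's built-in trapezoidal rule should match Ener.
###Code
m_check = 0
E_trapz = np.trapz(Vtab[:, m_check] * (Ptab[:, m_check] - Ntab[:, m_check]), xxplot)
print("Ener(0) =", Ener(m_check), " np.trapz =", E_trapz)
###Output
_____no_output_____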
###Markdown
Question 8) Plot the values of $E(t_m)$ for $0\leq m < M$. What do you observe?

$\color{blue}{\textrm{Answer 8}\\}$
$\color{blue}{\textrm{The electric potential energy of the material is plotted below. }\\}$
$\color{blue}{\textrm{We observe an oscillating transient regime with strong damping. }\\}$
$\color{blue}{\textrm{After a characteristic time of about one eighth of the total simulated time, the potential energy remains constant.}\\}$
###Code
ener_vec = np.zeros(M)
t_vec = np.zeros(M)
for m in range(0, M):
t_vec[m] = m*dt
ener_vec[m] = Ener(m)
plt.plot(t_vec,ener_vec, 'r', label ="potential energy")
plt.xlabel("time")
plt.ylabel("energy")
plt.grid()
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
TP1: Transport of electrons in a doped semiconductor material
###Code
import numpy as np
from numpy import linalg as LA
import matplotlib.pyplot as plt
import scipy.optimize
import math
import os
%matplotlib notebook
from matplotlib.animation import FuncAnimation
###Output
_____no_output_____ |
algoExpert/yongest_common_ancestor/solution.ipynb | ###Markdown
Youngest Common Ancestor[link](https://www.algoexpert.io/questions/Youngest%20Common%20Ancestor) My Solution
###Code
# This is an input class. Do not edit.
class AncestralTree:
def __init__(self, name):
self.name = name
self.ancestor = None
def getYoungestCommonAncestor(topAncestor, descendantOne, descendantTwo):
# O(d) time | O(d) space
oneAncestors = []
twoAncestors = []
cur = descendantOne
while cur is not None:
oneAncestors.append(cur)
cur = cur.ancestor
cur = descendantTwo
while cur is not None:
twoAncestors.append(cur)
cur = cur.ancestor
idxOne = len(oneAncestors) - 1
idxTwo = len(twoAncestors) - 1
while idxOne >= 0 and idxTwo >= 0:
if oneAncestors[idxOne] == twoAncestors[idxTwo]:
idxOne -= 1
idxTwo -= 1
else:
break
return oneAncestors[idxOne + 1]
# This is an input class. Do not edit.
class AncestralTree:
def __init__(self, name):
self.name = name
self.ancestor = None
def getYoungestCommonAncestor(topAncestor, descendantOne, descendantTwo):
# Write your code here.
# O(d) time | O(1) space
depthOne = getDepth(descendantOne, topAncestor)
depthTwo = getDepth(descendantTwo, topAncestor)
if depthTwo > depthOne:
deeper = descendantTwo
shallower = descendantOne
diff = depthTwo - depthOne
else:
deeper = descendantOne
shallower = descendantTwo
diff = depthOne - depthTwo
while diff > 0:
deeper = deeper.ancestor
diff -= 1
while deeper is not shallower:
deeper = deeper.ancestor
shallower = shallower.ancestor
return deeper
def getDepth(node, topNode):
depth = 0
cur = node
while cur is not topNode:
depth += 1
cur = cur.ancestor
return depth
###Output
_____no_output_____
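###Markdown
Usage sketch (not part of the AlgoExpert test harness; the tree below is made up for illustration): build a small ancestral tree by hand and query the youngest common ancestor.
###Code
# A -> (B, C), B -> (D, E); A is the top ancestor.
nodes = {name: AncestralTree(name) for name in "ABCDE"}
nodes["B"].ancestor = nodes["A"]
nodes["C"].ancestor = nodes["A"]
nodes["D"].ancestor = nodes["B"]
nodes["E"].ancestor = nodes["B"]
print(getYoungestCommonAncestor(nodes["A"], nodes["D"], nodes["E"]).name) # expected: B
print(getYoungestCommonAncestor(nodes["A"], nodes["D"], nodes["C"]).name) # expected: A
###Output
_____no_output_____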
###Markdown
Expert Solution
###Code
# This is an input class. Do not edit.
class AncestralTree:
def __init__(self, name):
self.name = name
self.ancestor = None
# O(d) time | O(1) space
def getYoungestCommonAncestor(topAncestor, descendantOne, descendantTwo):
depthOne = getDescendantDepth(descendantOne, topAncestor)
depthTwo = getDescendantDepth(descendantTwo, topAncestor)
if depthOne > depthTwo:
return backtrackAncestralTree(descendantOne, descendantTwo, depthOne - depthTwo)
else:
return backtrackAncestralTree(descendantTwo, descendantOne, depthTwo - depthOne)
def getDescendantDepth(descendant, topAncestor):
depth = 0
while descendant != topAncestor:
depth += 1
descendant = descendant.ancestor
return depth
def backtrackAncestralTree(lowerDescendant, higherDescendant, diff):
while diff > 0:
lowerDescendant = lowerDescendant.ancestor
diff -= 1
while lowerDescendant != higherDescendant:
lowerDescendant = lowerDescendant.ancestor
higherDescendant = higherDescendant.ancestor
return lowerDescendant
###Output
_____no_output_____ |
code/week3_stats/1_Basic_Statistics_Demo_toStudent.ipynb | ###Markdown
OUTLINES
Part1: Statistical Analysis
Part2: Inferential Statistics
Part3: A/B Testing

Part1: Statistical Analysis

Introduction to Statistics
###Code
import pandas
import matplotlib.pyplot as plt
import seaborn as sns
import scipy
%matplotlib inline
plt.rcParams['figure.figsize'] = (15,12)
#Data
url = "https://github.com/kaopanboonyuen/2110446_DataScience_2021s2/raw/main/datasets/pima-indians-diabetes.data.csv"
names = ['preg','plas','pres','skin','test','mass','pedi','age','class']
data = pandas.read_csv(url, names=names)
# Take a peek at your raw data.
data.head(10)
# Review the dimensions of your dataset.
data.shape
# Review the data types of attributes in your data.
data.dtypes
###Output
_____no_output_____
###Markdown
Descriptive Statistics
###Code
data.info()
# Summarize the distribution of instances across classes in your dataset.
data.describe(include="all")
# mean
data["age"].mean()
# std
data["age"].std()
# variance
data["age"].var()
# mode
data["age"].mode()
# median
data["age"].median()
# interquartile range (IQR)
from scipy.stats import iqr
iqr(data["age"])
# 10th percentile
data["age"].quantile(0.1)
# 50th percentile same as median
data["age"].quantile(0.5)
# 90th percentile
data["age"].quantile(0.9)
# Class Distribution
data.groupby('class').size()
# Histograms
data.hist(bins=20) # adjust bin, range
plt.show()
# Density distribution
data.plot(kind= 'density' , subplots=True, layout=(3,3), sharex=False)
plt.show()
# Box plot
# Box plot with points representing data that extend beyond the whiskers (outliers)
data.plot(kind= 'box' , subplots=True, layout=(3,3), sharex=False, sharey=False)
plt.show()
# don't show outliers
data.plot(kind= 'box' , subplots=True, layout=(3,3), sharex=False, sharey=False, showfliers=False)
plt.show()
# X% Truncated (Trimmed) Mean
# x% of observations from each end are removed before the mean is computed
from scipy import stats
# scipy.stats.trim_mean(a, proportiontocut, axis=0)[source]
# If proportiontocut = 0.1, slices off ‘leftmost’ and ‘rightmost’ 10% of scores.
stats.trim_mean(data["age"], 0.1)
# X% Winsorized Mean
# x% of observations from each end are replaced with the most extreme remaining values (on both ends) before the mean is computed
# https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.stats.mstats.winsorize.html
age = data["age"]
age.describe()
# The 10% of the lowest value and the 20% of the highest
stats.mstats.winsorize(age, limits=[0.1, 0.2], inplace=True)
age.describe()
###Output
_____no_output_____
###Markdown
Part2: Inferential Statistics
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
% matplotlib inline
np.random.seed(42)
coffee_full = pd.read_csv('https://github.com/kaopanboonyuen/2110446_DataScience_2021s2/raw/main/datasets/coffee_dataset.csv')
print(coffee_full.shape)
coffee_full.info()
# population
coffee_full["drinks_coffee"].mean()
# sample1
coffee_sample = coffee_full.sample(200)
coffee_sample["drinks_coffee"].mean()
# sample2
coffee_sample = coffee_full.sample(200)
coffee_sample["drinks_coffee"].mean()
# sample3
coffee_sample = coffee_full.sample(200)
coffee_sample["drinks_coffee"].mean()
# central limit theorem; is it normal distribution?
# population mean = 0.589778076664425
sample_means = []
for _ in range(10000):
coffee_sample = coffee_full.sample(200)
m = coffee_sample["drinks_coffee"].mean()
sample_means.append(m)
plt.hist(sample_means, bins=30)
plt.show()
# confidence interval
# population mean = 0.589778076664425
def mean_confidence_interval(data, confidence=0.95):
a = 1.0 * np.array(data)
n = len(a)
m = np.mean(a)
se = scipy.stats.sem(a)
h = se * scipy.stats.t.ppf((1 + confidence) / 2., n-1)
return m, h, m-h, m+h
coffee_sample = coffee_full.sample(200)
data = coffee_sample["drinks_coffee"]
m, bound, lower1, upper1 = mean_confidence_interval(data)
print(m, lower1, upper1)
# confidence interval with function
import numpy as np, scipy.stats as st
st.t.interval(0.95, len(data)-1, loc=np.mean(data), scale=st.sem(data))
###Output
_____no_output_____
###Markdown
Central Limit Theorem (optional)
* Vary number of samplings
* Vary sampling size
###Code
#---------------------------------------
# Generate simulated data from Gamma distribution
#---------------------------------------
import numpy as np
import random
import matplotlib.pyplot as plt
import scipy.stats as stats
%matplotlib inline
## Central limit theorom
# build gamma distribution as population
shape, scale = 2., 2. # mean=4, std=2*sqrt(2)
s = np.random.gamma(shape, scale, 1000000)
# plot
plt.figure(figsize=(20,10))
plt.hist(s, 200, density=True)
plt.show()
#---------------------------------------
# PART1: fixed sample size = 500 & vary the number samplings
#---------------------------------------
# sample from population with different number of sampling
# a list of sample mean
meansample = []
# number of sample
numofsample = [1000,2500,5000,10000,25000,50000]
# sample size
samplesize = 500
# for each number of sampling (1000 to 50000)
for i in numofsample:
# collect mean of each sample
eachmeansample = []
# for each sampling
for j in range(0,i):
# sampling 500 sample from population
rc = random.choices(s, k=samplesize)
# collect mean of each sample
eachmeansample.append(sum(rc)/len(rc))
# add mean of each sampling to the list
meansample.append(eachmeansample)
# plot
cols = 2
rows = 3
fig, ax = plt.subplots(rows, cols, figsize=(20,15))
n = 0
for i in range(0, rows):
for j in range(0, cols):
ax[i, j].hist(meansample[n], 200, density=True)
ax[i, j].set_title(label="number of sampling :" + str(numofsample[n]))
n += 1
#---------------------------------------
# PART1 (cont.): fixed sample size = 500 & vary the number samplings
# convert to z-score
#---------------------------------------
# use last sampling
sm = meansample[len(meansample)-1]
# calculate start deviation
std = np.std(sm)
# set population mean
mean = np.mean(sm)
# list of standarded sample
zn = []
# for each sample subtract with mean and devided by standard deviation
for i in sm:
zn.append((i-mean)/std)
# plot hist
plt.figure(figsize=(20,10))
plt.hist(zn, 200, density=True)
# compare with standard normal disrtibution line
mu = 0
sigma = 1
x = np.linspace(mu - 3*sigma, mu + 3*sigma, 100)
# draw standard normal disrtibution line
plt.plot(x, stats.norm.pdf(x, mu, sigma),linewidth = 5, color='red')
plt.show()
#---------------------------------------
# PART2: vary the number of sampling sizes & the number samplings = 500 times
#---------------------------------------
## sample with different sample size
# list of sample mean
meansample = []
# number of sampling
numofsample = 25000
# sample size
samplesize = [1,5,10,30,100,1000]
# for each sample size (1 to 1000)
for i in samplesize:
# collect mean of each sample
eachmeansample = []
# for each sampling
for j in range(0,numofsample):
# sampling i sample from population
rc = random.choices(s, k=i)
# collect mean of each sample
eachmeansample.append(sum(rc)/len(rc))
# add mean of each sampling to the list
meansample.append(eachmeansample)
# plot
cols = 2
rows = 3
fig, ax = plt.subplots(rows, cols, figsize=(20,15))
n = 0
for i in range(0, rows):
for j in range(0, cols):
ax[i, j].hist(meansample[n], 200, density=True)
ax[i, j].set_title(label="sample size :" + str(samplesize[n]))
n += 1
#---------------------------------------
# PART2 (cont.): vary the number of sampling sizes & the number samplings = 500 times
# With sample size = 1000 - check all the values
#---------------------------------------
## expect value of sample
# use last sampling (samplesize=1000)
sample = meansample[len(meansample)-1]
# expected value of sample equal to expect value of population
print("expected value of sample:", np.mean(sample))
print("expected value of population:", shape*scale)
print()
# standard deviation of sample equal to standard deviation of population divided by square root of n
print("standard deviation of sample:", np.std(sample))
print("standard deviation of population:", scale*np.sqrt(shape))
print("standard deviation of population divided by square root of sample size:", scale*np.sqrt(shape)/np.sqrt(1000))
#---------------------------------------
# PART3: increase sample size can reduce error
#---------------------------------------
## show that as the sample size increases, the sample mean gets closer to the population mean
# set expected values of population
mu = shape*scale # mean
# sample size
samplesize = []
# collect difference between sample mean and mu
diflist = []
# for each sample size
for n in range(10,20000,20):
# sample 10000 sample
rs = random.choices(s, k=n)
# start count
c = 0
# calculate mean
mean = sum(rs)/len(rs)
# collect difference between sample mean and mu
diflist.append(mean-mu)
samplesize.append(n)
# set figure size.
plt.figure(figsize=(20,10))
# plot each diference.
plt.scatter(samplesize,diflist, marker='o')
# show plot.
plt.show()
#---------------------------------------
# PART4: vary sample size
# for each sample size, trial 100 times
# count #error > 0.05
# Increase sample size can reduce prob of errors
#---------------------------------------
## show that as the sample size increases, the probability that the sample mean deviates from the population mean by more than the margin of error decreases
# margin of error
epsilon = 0.05
# list of probability of each sample size
proberror = []
# sample size for plotting
samplesize = []
# for each sample size
for n in range(100,10101,500):
# start count
c = 0
for i in range(0,100):
# sample 10000 sample
rs = random.choices(s, k=n)
# calculate mean
mean = sum(rs)/len(rs)
# check if the difference is larger than error
if abs(mean - mu) > epsilon:
# if larger count the sampling
c += 1
# calculate the probability
proberror.append(c/100)
# save sample size for plotting
samplesize.append(n)
# set figure size.
plt.figure(figsize=(20,10))
# plot each probability.
plt.plot(samplesize,proberror, marker='o')
# show plot.
plt.show()
###Output
_____no_output_____
###Markdown
Part3: A/B Testing

One sample t-test: we have a sample of 14 ages and we check whether the average age is 30 or not (see the Python code below).
###Code
mylist = [32,34,29,29,22,39,38,37,38,36,30,26,22,22]
df = pandas.DataFrame(data=mylist)
df.to_csv("ages.csv", sep=',',index=False,header=None)
!head ages.csv
from scipy.stats import ttest_1samp
import numpy as np
ages = np.genfromtxt('ages.csv')
print(ages)
ages_mean = np.mean(ages)
print(ages_mean)
t, pval = ttest_1samp(ages, 30) # Calculate the T-test for the mean of ONE group of scores.
print("t =", t, ", p-value =", pval)
if pval < 0.05: # alpha value is 0.05 or 5%
print("we are rejecting null hypothesis")
else:
print("we are accepting null hypothesis")
###Output
we are accepting null hypothesis
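###Markdown
For reference, a sketch of the same statistic computed by hand from its definition, t = (sample mean - 30) / (s / sqrt(n)), where s is the sample standard deviation (ddof=1) and the `ages` array is the one loaded above.
###Code
n = len(ages)
t_manual = (np.mean(ages) - 30) / (np.std(ages, ddof=1) / np.sqrt(n))
print("manual t =", t_manual)
###Output
_____no_output_____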
###Markdown
Two sample t-test. Example: is there a significant difference between the means of week1 and week2? (Python code given below.)
###Code
import numpy as np
np.random.seed(2019) #option for reproducibility
week1_list = np.random.randint(low=0, high=100, size=50).tolist()
np.random.seed(2020) #option for reproducibility
week2_list = np.random.randint(low=0, high=100, size=50).tolist()
df = pandas.DataFrame(data=week1_list)
df.to_csv("week1.csv", sep=',',index=False,header=None)
df = pandas.DataFrame(data=week2_list)
df.to_csv("week2.csv", sep=',',index=False,header=None)
from scipy.stats import ttest_ind
import numpy as np
week1 = np.genfromtxt("week1.csv", delimiter=",")
week2 = np.genfromtxt("week2.csv", delimiter=",")
print("week1 data :-\n")
print(week1)
print("\n")
print("week2 data :-\n")
print(week2)
from scipy import stats
# Levene's test checks the equal-variance assumption before running the t-test
# https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.stats.levene.html
stats.levene(week1,week2)
week1_mean = np.mean(week1)
week2_mean = np.mean(week2)
print("week1 mean value:",week1_mean)
print("week2 mean value:",week2_mean)
week1_std = np.std(week1)
week2_std = np.std(week2)
print("week1 std value:",week1_std)
print("week2 std value:",week2_std)
ttest,pval = ttest_ind(week1,week2,equal_var=True) # two independent samples of scores.
# ref: https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ttest_ind.html
print("p-value",pval)
if pval <0.05:
print("we reject null hypothesis")
else:
print("we accept null hypothesis")
###Output
we accept null hypothesis
###Markdown
Paired t-test
H0: the mean difference between the two samples is 0
H1: the mean difference between the two samples is not 0
Check the code below.
###Code
import pandas as pd
from scipy import stats
from statsmodels.stats import weightstats as stests
df = pd.read_csv("https://github.com/kaopanboonyuen/2110446_DataScience_2021s2/raw/main/datasets/blood_pressure.csv")
df.head()
df[['bp_before','bp_after']].describe()
ttest,pval = stats.ttest_rel(df['bp_before'], df['bp_after']) # return t-statistic and p-value
print(pval)
if pval<0.05:
print("reject null hypothesis")
else:
print("accept null hypothesis")
###Output
reject null hypothesis
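###Markdown
To complement the p-value, a sketch (using the df loaded above) of the mean paired difference and its 95% confidence interval:
###Code
diff = df['bp_before'] - df['bp_after']
m, se = diff.mean(), stats.sem(diff)
ci_low, ci_high = stats.t.interval(0.95, len(diff) - 1, loc=m, scale=se)
print("mean difference:", m, " 95% CI:", (ci_low, ci_high))
###Output
_____no_output_____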
###Markdown
Reference:1. https://reneshbedre.github.io/blog/anova.html2. https://medium.com/analytics-vidhya/illustration-with-python-central-limit-theorem-aa4d81f7b570
###Code
###Output
_____no_output_____ |
jupyter-notebooks/model-testing.ipynb | ###Markdown
Testing the different models
- Naive Forecast
- SARIMA
- LSTM
- LSTM SeqToSeq
- XGBoost
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
from river_forecast.training_data_access import get_combined_flow_split
train, validation, test = get_combined_flow_split()
fig, ax = plt.subplots(figsize=(17, 4))
ax.plot(train, label='train')
ax.plot(validation, label='validation')
ax.plot(test, label='test')
ax.legend()
ax.set_ylabel('Flow')
from river_forecast.forecast import SARIMAXForecast, NaiveForecast, LSTMForecast, LSTMSeq2SeqForecast, XGBForecast
models = {
"Naive" : NaiveForecast(),
"SARIMA": SARIMAXForecast(),
"LSTM": LSTMForecast(),
"LSTMSeq2Seq": LSTMSeq2SeqForecast(),
"XGBoost": XGBForecast()}
sns.set_style('darkgrid')
fig, axs = plt.subplots(1, 2,figsize=(10, 5))
for name, model in models.items():
errors = model.get_error_metrics()
ax = axs[0]
ax.plot(np.arange(1, 7), errors['mae'], '.-', label=name)
ax.set_xlabel('t (h)')
ax.set_ylabel('MAE ($m^3/s$)')
ax = axs[1]
ax.plot(errors['rmse'], '.-', label=name)
ax.set_xlabel('t (h)')
ax.set_ylabel('RMSE ($m^3/s$)')
axs[0].legend()
###Output
_____no_output_____
###Markdown
Alternative error comparison and prediction examples
###Code
input_length = 72
forecast_length = int(6)
n_possible_forecasts = len(validation) - input_length + 1 - forecast_length
n_forecasts = 20 # n_possible_forecasts
# Put this part in a module
# Short aliases for the models defined above; the mapping is inferred from the plot labels used further down
nf, sf, lstm, lstm2, bst = models['Naive'], models['SARIMA'], models['LSTM'], models['LSTMSeq2Seq'], models['XGBoost']
forecasts = np.zeros((5, n_forecasts, forecast_length))
real_values = np.zeros((n_forecasts, forecast_length))
real_recent_flows = np.zeros((n_forecasts, 12))
np.random.seed(5)
for i, j in enumerate(np.random.choice(n_possible_forecasts, size=n_forecasts, replace=False)):
if i%50 == 0:
print(i, 'out of', n_possible_forecasts)
recent_flow = validation.iloc[j:(j + input_length)]
real_recent_flows[i, :] = recent_flow.iloc[-12:]['discharge']
real_values[i, :] = validation.iloc[(j + input_length):(j + input_length + forecast_length)]['discharge']
forecasts[4, i, :] = bst.dynamic_forecast(recent_flow, n_hours=forecast_length)
forecasts[3, i, :] = lstm2.dynamic_forecast(recent_flow, n_hours=forecast_length)
forecasts[2, i, :] = lstm.dynamic_forecast(recent_flow, n_hours=forecast_length)
forecasts[1, i, :] = sf.dynamic_forecast(recent_flow, n_hours=forecast_length)
forecasts[0, i, :] = nf.dynamic_forecast(recent_flow, n_hours=forecast_length)
maes = np.mean(np.abs(forecasts - real_values[np.newaxis, ...]), axis=1)
fig, ax = plt.subplots()
ax.plot(np.arange(forecast_length) + 1, maes[0], marker='.', label='naive')
ax.plot(np.arange(forecast_length) + 1, maes[1], marker='.', label='SARIMAX')
ax.plot(np.arange(forecast_length) + 1, maes[2], marker='.', label='LSTM')
ax.plot(np.arange(forecast_length) + 1, maes[3], marker='.', label='LSTM2')
ax.plot(np.arange(forecast_length) + 1, maes[4], marker='.', label='XGB')
ax.set_ylabel('Error (MAE)')
ax.set_xlabel('Hours')
ax.legend()
fig, ax = plt.subplots()
ax.plot(np.arange(forecast_length) + 1, maes[0], marker='.', label='naive')
ax.plot(np.arange(forecast_length) + 1, maes[1], marker='.', label='SARIMAX')
ax.plot(np.arange(forecast_length) + 1, maes[2], marker='.', label='LSTM')
ax.plot(np.arange(forecast_length) + 1, maes[3], marker='.', label='LSTM2')
ax.set_ylabel('Error (MAE)')
ax.set_xlabel('Hours')
# ax.set_xlim([0.9, 6.1])
# ax.set_ylim([0, 3.5])
ax.legend()
rmses = np.sqrt(np.mean((forecasts - real_values[np.newaxis, ...])**2, axis=1))
fig, ax = plt.subplots()
ax.plot(np.arange(forecast_length) + 1, rmses[0], marker='.', label='naive')
ax.plot(np.arange(forecast_length) + 1, rmses[1], marker='.', label='SARIMAX')
ax.plot(np.arange(forecast_length) + 1, rmses[2], marker='.', label='LSTM')
ax.plot(np.arange(forecast_length) + 1, rmses[3], marker='.', label='LSTM2')
ax.plot(np.arange(forecast_length) + 1, rmses[4], marker='.', label='XGB')
ax.set_ylabel('Error (RMSD)')
ax.set_xlabel('Hours')
###Output
_____no_output_____
###Markdown
Plotting example curves
###Code
sns.set_style('ticks')
errors = forecasts[4, :, :] / real_values[:, :]
negative_ci = np.insert(np.percentile(errors, 20, axis=0), 0, 1)
positive_ci = np.insert(np.percentile(errors, 80, axis=0), 0, 1)
fig, axs = plt.subplots(12, 6, figsize=(15, 20), sharex=True)
colors = ['#feb24c','#f03b20']
for i, ax in enumerate(axs.flatten()):
last_value = real_recent_flows[i, -1]
ax.plot(np.arange(-11, 1), real_recent_flows[i, :], color=colors[0])
ax.plot(np.arange(0, 7), np.insert(real_values[i, :], 0, last_value), color=colors[0], linestyle='--', label='Real flow')
pred = np.insert(forecasts[4, i, :], 0, last_value)
ax.plot(np.arange(0, 7), pred, color=colors[1], linestyle='--', label='Forecast (60%)')
ax.fill_between(np.arange(0, 7), pred * negative_ci , pred * positive_ci, color=colors[1],
alpha=0.1)
for ax in axs[-1, :]:
ax.set_xlabel('Time (h)')
for ax in axs[:, 0]:
ax.set_ylabel('Flow (m^3/s)')
axs[0, 0].legend()
plt.tight_layout()
###Output
_____no_output_____ |
report_notebooks/encdec_noing10_200_512_04dra.ipynb | ###Markdown
Encoder-Decoder Analysis Model Architecture
###Code
report_file = '/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing10_200_512_04dra/encdec_noing10_200_512_04dra.json'
log_file = '/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing10_200_512_04dra/encdec_noing10_200_512_04dra_logs.json'
import json
import matplotlib.pyplot as plt
with open(report_file) as f:
report = json.loads(f.read())
with open(log_file) as f:
logs = json.loads(f.read())
print'Encoder: \n\n', report['architecture']['encoder']
print'Decoder: \n\n', report['architecture']['decoder']
###Output
Encoder:
nn.Sequential {
[input -> (1) -> (2) -> (3) -> output]
(1): nn.LookupTable
(2): nn.LSTM(200 -> 512)
(3): nn.Dropout(0.400000)
}
Decoder:
nn.gModule
###Markdown
Perplexity on Each Dataset
###Code
print('Train Perplexity: ', report['train_perplexity'])
print('Valid Perplexity: ', report['valid_perplexity'])
print('Test Perplexity: ', report['test_perplexity'])
###Output
('Train Perplexity: ', 71.711428171036)
('Valid Perplexity: ', 413.59050936172)
('Test Perplexity: ', 440.71114299039)
###Markdown
Loss vs. Epoch
###Code
%matplotlib inline
for k in logs.keys():
plt.plot(logs[k][0], logs[k][1], label=str(k) + ' (train)')
plt.plot(logs[k][0], logs[k][2], label=str(k) + ' (valid)')
plt.title('Loss v. Epoch')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Perplexity vs. Epoch
###Code
%matplotlib inline
for k in logs.keys():
plt.plot(logs[k][0], logs[k][3], label=str(k) + ' (train)')
plt.plot(logs[k][0], logs[k][4], label=str(k) + ' (valid)')
plt.title('Perplexity v. Epoch')
plt.xlabel('Epoch')
plt.ylabel('Perplexity')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Generations
###Code
def print_sample(sample, best_bleu=None):
enc_input = ' '.join([w for w in sample['encoder_input'].split(' ') if w != '<pad>'])
gold = ' '.join([w for w in sample['gold'].split(' ') if w != '<mask>'])
print('Input: '+ enc_input + '\n')
print('Gend: ' + sample['generated'] + '\n')
print('True: ' + gold + '\n')
if best_bleu is not None:
cbm = ' '.join([w for w in best_bleu['best_match'].split(' ') if w != '<mask>'])
print('Closest BLEU Match: ' + cbm + '\n')
print('Closest BLEU Score: ' + str(best_bleu['best_score']) + '\n')
print('\n')
for i, sample in enumerate(report['train_samples']):
print_sample(sample, report['best_bleu_matches_train'][i] if 'best_bleu_matches_train' in report else None)
for i, sample in enumerate(report['valid_samples']):
print_sample(sample, report['best_bleu_matches_valid'][i] if 'best_bleu_matches_valid' in report else None)
for i, sample in enumerate(report['test_samples']):
print_sample(sample, report['best_bleu_matches_test'][i] if 'best_bleu_matches_test' in report else None)
###Output
Input: smoked salmon , avocado , dill and parsley mayo sandwich
Gend: <beg> preheat oven to a . . . . . . . . . . . . . . . . . . . . .
True: mash the lemon juice and avocado with dill and parsley mayo . spoon over a slice of bread and <end>
Closest BLEU Match: heat the oil in a skillet over medium heat . <step> in a bowl , combine the coconut flour , <end>
Closest BLEU Score: 0
Input: fancy hot dogs
Gend: <beg> . . . . . . . . . . . . . . . . . . . . . . . . .
True: <step> 1 melt butter in a large skillet ( cast iron works well for this purpose <end>
Closest BLEU Match: heat the oil in a skillet over medium heat . <step> in a bowl , combine the coconut flour , <end>
Closest BLEU Score: 0
Input: healthy oatmeal cookies
Gend: <beg> . . . . . . . . . . . . . . . . . . . . . . . . .
True: preheat oven to 350 degrees . in a medium bowl , whisk together flours and baking powder ; set aside . <step> in <end>
Closest BLEU Match: heat the oil in a skillet over medium heat . <step> in a bowl , combine the coconut flour , <end>
Closest BLEU Score: 0
Input: mexican hummus
Gend: <beg> . . . . . . . . . . . . . . . . . . . . . . . . .
True: position the knife blade in a food processor bowl . drop the garlic through the food chute with the processor running ; process 3 seconds
Closest BLEU Match: heat the oil in a skillet over medium heat . <step> in a bowl , combine the coconut flour , <end>
Closest BLEU Score: 0
Input: saute ? ed mushrooms
Gend: <beg> . . . . . . . . . . . . . . . . . . . . . . . . .
True: 1 . cook shiitake mushrooms in a single layer in 1 1 / 2 tbsp . hot oil in a 10 - to <end>
Closest BLEU Match: heat the oil in a skillet over medium heat . <step> in a bowl , combine the coconut flour , <end>
Closest BLEU Score: 0
Input: saute ? ed mushrooms
Gend: <beg> . . . . . . . . . . . . . . . . . . . . . . . . .
True: 1 . cook shiitake mushrooms in a single layer in 1 1 / 2 tbsp . hot oil in a 10 - to <end>
Closest BLEU Match: heat the oil in a skillet over medium heat . <step> in a bowl , combine the coconut flour , <end>
Closest BLEU Score: 0
Input: marty 's loosemeat sandwich
Gend: <beg> . . . . . . . . . . . . . . . . . . . . . . . . .
True: in a medium skillet over medium heat , cook the ground beef until evenly browned ; drain <end>
Closest BLEU Match: heat the oil in a skillet over medium heat . <step> in a bowl , combine the coconut flour , <end>
Closest BLEU Score: 0
###Markdown
BLEU Analysis
###Code
def print_bleu(blue_struct):
print 'Overall Score: ', blue_struct['score'], '\n'
print '1-gram Score: ', blue_struct['components']['1']
print '2-gram Score: ', blue_struct['components']['2']
print '3-gram Score: ', blue_struct['components']['3']
print '4-gram Score: ', blue_struct['components']['4']
# Training Set BLEU Scores
print_bleu(report['train_bleu'])
# Validation Set BLEU Scores
print_bleu(report['valid_bleu'])
# Test Set BLEU Scores
print_bleu(report['test_bleu'])
# All Data BLEU Scores
print_bleu(report['combined_bleu'])
###Output
Overall Score: 0
1-gram Score: 4.6
2-gram Score: 0
3-gram Score: 0
4-gram Score: 0
###Markdown
N-pairs BLEU AnalysisThis analysis randomly samples 1000 pairs of generations/ground truths and treats them as translations, giving their BLEU score. We can expect very low scores in the ground truth and high scores can expose hyper-common generations
###Code
# Training Set BLEU n-pairs Scores
print_bleu(report['n_pairs_bleu_train'])
# Validation Set n-pairs BLEU Scores
print_bleu(report['n_pairs_bleu_valid'])
# Test Set n-pairs BLEU Scores
print_bleu(report['n_pairs_bleu_test'])
# Combined n-pairs BLEU Scores
print_bleu(report['n_pairs_bleu_all'])
# Ground Truth n-pairs BLEU Scores
print_bleu(report['n_pairs_bleu_gold'])
###Output
Overall Score: 9.89
1-gram Score: 24.8
2-gram Score: 10.6
3-gram Score: 6.7
4-gram Score: 5.5
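###Markdown
As a rough illustration of the n-pairs idea (a sketch, not the code that produced the numbers above; it assumes NLTK is installed and only uses the handful of samples stored in the report): score each stored generation against the gold of a randomly chosen other sample.
###Code
import random
from nltk.translate.bleu_score import sentence_bleu
samples = report['train_samples']
random.seed(0)
for s in samples:
    # pick some other sample's gold as the "reference" for this generation
    other = random.choice(samples)
    print(sentence_bleu([other['gold'].split()], s['generated'].split()))
###Output
_____no_output_____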
###Markdown
Alignment AnalysisThis analysis computs the average Smith-Waterman alignment score for generations, with the same intuition as N-pairs BLEU, in that we expect low scores in the ground truth and hyper-common generations to raise the scores
###Code
print 'Average (Train) Generated Score: ', report['average_alignment_train']
print 'Average (Valid) Generated Score: ', report['average_alignment_valid']
print 'Average (Test) Generated Score: ', report['average_alignment_test']
print 'Average (All) Generated Score: ', report['average_alignment_all']
print 'Average Gold Score: ', report['average_alignment_gold']
###Output
Average (Train) Generated Score: 56
Average (Valid) Generated Score: 56
Average (Test) Generated Score: 52
Average (All) Generated Score: 54.6666666667
Average Gold Score: 21.5428571429
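###Markdown
For intuition, here is a minimal Smith-Waterman local-alignment scorer over word tokens (a sketch, not the scoring code used to produce the report; the match/mismatch/gap values are arbitrary choices).
###Code
# Minimal Smith-Waterman local alignment over word tokens (illustrative sketch).
# Scoring: +2 for a match, -1 for a mismatch, -1 for a gap.
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    a, b = a.split(), b.split()
    H = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            diag = H[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            H[i][j] = max(0, diag, H[i-1][j] + gap, H[i][j-1] + gap)
            best = max(best, H[i][j])
    return best

# toy example strings
print(smith_waterman('preheat oven to 350 degrees', 'preheat the oven to 350 f'))
###Output
_____no_output_____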
|
notebooks/T10 - 3 - Plotly para dibujar_Py38.ipynb | ###Markdown
Charts with Plotly
###Code
#import plotly.plotly as py
#import plotly.graph_objs as go
#import plotly.tools as tls
import chart_studio.plotly as py
import plotly.graph_objects as go
from chart_studio import tools as tls
import warnings
warnings.filterwarnings('ignore')
#tls.set_credentials_file(username='JuanGabriel', api_key='6mEfSXf8XNyIzpxwb8z7') # previous version, must be changed
# To generate charts with plotly you need to create a user account with access to Plotly cloud.
# Once that user is created, fill in the username and api_key fields with the
# username and api_key provided by the application
tls.set_credentials_file(username='<username>', api_key='<api_key>')
import plotly
plotly.__version__
help(plotly)
import numpy as np
help(np.random)
###Output
Help on package numpy.random in numpy:
NAME
numpy.random
DESCRIPTION
========================
Random Number Generation
========================
Use ``default_rng()`` to create a `Generator` and call its methods.
=============== =========================================================
Generator
--------------- ---------------------------------------------------------
Generator Class implementing all of the random number distributions
default_rng Default constructor for ``Generator``
=============== =========================================================
============================================= ===
BitGenerator Streams that work with Generator
--------------------------------------------- ---
MT19937
PCG64
Philox
SFC64
============================================= ===
============================================= ===
Getting entropy to initialize a BitGenerator
--------------------------------------------- ---
SeedSequence
============================================= ===
Legacy
------
For backwards compatibility with previous versions of numpy before 1.17, the
various aliases to the global `RandomState` methods are left alone and do not
use the new `Generator` API.
==================== =========================================================
Utility functions
-------------------- ---------------------------------------------------------
random Uniformly distributed floats over ``[0, 1)``
bytes Uniformly distributed random bytes.
permutation Randomly permute a sequence / generate a random sequence.
shuffle Randomly permute a sequence in place.
choice Random sample from 1-D array.
==================== =========================================================
==================== =========================================================
Compatibility
functions - removed
in the new API
-------------------- ---------------------------------------------------------
rand Uniformly distributed values.
randn Normally distributed values.
ranf Uniformly distributed floating point numbers.
random_integers Uniformly distributed integers in a given range.
(deprecated, use ``integers(..., closed=True)`` instead)
random_sample Alias for `random_sample`
randint Uniformly distributed integers in a given range
seed Seed the legacy random number generator.
==================== =========================================================
==================== =========================================================
Univariate
distributions
-------------------- ---------------------------------------------------------
beta Beta distribution over ``[0, 1]``.
binomial Binomial distribution.
chisquare :math:`\chi^2` distribution.
exponential Exponential distribution.
f F (Fisher-Snedecor) distribution.
gamma Gamma distribution.
geometric Geometric distribution.
gumbel Gumbel distribution.
hypergeometric Hypergeometric distribution.
laplace Laplace distribution.
logistic Logistic distribution.
lognormal Log-normal distribution.
logseries Logarithmic series distribution.
negative_binomial Negative binomial distribution.
noncentral_chisquare Non-central chi-square distribution.
noncentral_f Non-central F distribution.
normal Normal / Gaussian distribution.
pareto Pareto distribution.
poisson Poisson distribution.
power Power distribution.
rayleigh Rayleigh distribution.
triangular Triangular distribution.
uniform Uniform distribution.
vonmises Von Mises circular distribution.
wald Wald (inverse Gaussian) distribution.
weibull Weibull distribution.
zipf Zipf's distribution over ranked data.
==================== =========================================================
==================== ==========================================================
Multivariate
distributions
-------------------- ----------------------------------------------------------
dirichlet Multivariate generalization of Beta distribution.
multinomial Multivariate generalization of the binomial distribution.
multivariate_normal Multivariate generalization of the normal distribution.
==================== ==========================================================
==================== =========================================================
Standard
distributions
-------------------- ---------------------------------------------------------
standard_cauchy Standard Cauchy-Lorentz distribution.
standard_exponential Standard exponential distribution.
standard_gamma Standard Gamma distribution.
standard_normal Standard normal distribution.
standard_t Standard Student's t-distribution.
==================== =========================================================
==================== =========================================================
Internal functions
-------------------- ---------------------------------------------------------
get_state Get tuple representing internal state of generator.
set_state Set state of generator.
==================== =========================================================
PACKAGE CONTENTS
_bit_generator
_bounded_integers
_common
_generator
_mt19937
_pcg64
_philox
_pickle
_sfc64
mtrand
setup
tests (package)
CLASSES
builtins.object
numpy.random._bit_generator.BitGenerator
numpy.random._mt19937.MT19937
numpy.random._pcg64.PCG64
numpy.random._philox.Philox
numpy.random._sfc64.SFC64
numpy.random._bit_generator.SeedSequence
numpy.random._generator.Generator
numpy.random.mtrand.RandomState
class BitGenerator(builtins.object)
| BitGenerator(seed=None)
|
| Base Class for generic BitGenerators, which provide a stream
| of random bits based on different algorithms. Must be overridden.
|
| Parameters
| ----------
| seed : {None, int, array_like[ints], SeedSequence}, optional
| A seed to initialize the `BitGenerator`. If None, then fresh,
| unpredictable entropy will be pulled from the OS. If an ``int`` or
| ``array_like[ints]`` is passed, then it will be passed to
| ~`numpy.random.SeedSequence` to derive the initial `BitGenerator` state.
| One may also pass in a `SeedSequence` instance.
|
| Attributes
| ----------
| lock : threading.Lock
| Lock instance that is shared so that the same BitGenerator can
| be used in multiple Generators without corrupting the state. Code that
| generates values from a bit generator should hold the bit generator's
| lock.
|
| See Also
| -------
| SeedSequence
|
| Methods defined here:
|
| __getstate__(...)
|
| __init__(self, /, *args, **kwargs)
| Initialize self. See help(type(self)) for accurate signature.
|
| __reduce__(...)
| Helper for pickle.
|
| __setstate__(...)
|
| random_raw(...)
| random_raw(self, size=None)
|
| Return randoms as generated by the underlying BitGenerator
|
| Parameters
| ----------
| size : int or tuple of ints, optional
| Output shape. If the given shape is, e.g., ``(m, n, k)``, then
| ``m * n * k`` samples are drawn. Default is None, in which case a
| single value is returned.
| output : bool, optional
| Output values. Used for performance testing since the generated
| values are not returned.
|
| Returns
| -------
| out : uint or ndarray
| Drawn samples.
|
| Notes
| -----
| This method directly exposes the the raw underlying pseudo-random
| number generator. All values are returned as unsigned 64-bit
| values irrespective of the number of bits produced by the PRNG.
|
| See the class docstring for the number of bits returned.
|
| ----------------------------------------------------------------------
| Static methods defined here:
|
| __new__(*args, **kwargs) from builtins.type
| Create and return a new object. See help(type) for accurate signature.
|
| ----------------------------------------------------------------------
| Data descriptors defined here:
|
| capsule
|
| cffi
| CFFI interface
|
| Returns
| -------
| interface : namedtuple
| Named tuple containing CFFI wrapper
|
| * state_address - Memory address of the state struct
| * state - pointer to the state struct
| * next_uint64 - function pointer to produce 64 bit integers
| * next_uint32 - function pointer to produce 32 bit integers
| * next_double - function pointer to produce doubles
| * bitgen - pointer to the bit generator struct
|
| ctypes
| ctypes interface
|
| Returns
| -------
| interface : namedtuple
| Named tuple containing ctypes wrapper
|
| * state_address - Memory address of the state struct
| * state - pointer to the state struct
| * next_uint64 - function pointer to produce 64 bit integers
| * next_uint32 - function pointer to produce 32 bit integers
| * next_double - function pointer to produce doubles
| * bitgen - pointer to the bit generator struct
|
| lock
|
| state
| Get or set the PRNG state
|
| The base BitGenerator.state must be overridden by a subclass
|
| Returns
| -------
| state : dict
| Dictionary containing the information required to describe the
| state of the PRNG
class Generator(builtins.object)
| Generator(bit_generator)
|
| Container for the BitGenerators.
|
| ``Generator`` exposes a number of methods for generating random
| numbers drawn from a variety of probability distributions. In addition to
| the distribution-specific arguments, each method takes a keyword argument
| `size` that defaults to ``None``. If `size` is ``None``, then a single
| value is generated and returned. If `size` is an integer, then a 1-D
| array filled with generated values is returned. If `size` is a tuple,
| then an array with that shape is filled and returned.
|
| The function :func:`numpy.random.default_rng` will instantiate
| a `Generator` with numpy's default `BitGenerator`.
|
| **No Compatibility Guarantee**
|
| ``Generator`` does not provide a version compatibility guarantee. In
| particular, as better algorithms evolve the bit stream may change.
|
| Parameters
| ----------
| bit_generator : BitGenerator
| BitGenerator to use as the core generator.
|
| Notes
| -----
| The Python stdlib module `random` contains pseudo-random number generator
| with a number of methods that are similar to the ones available in
| ``Generator``. It uses Mersenne Twister, and this bit generator can
| be accessed using ``MT19937``. ``Generator``, besides being
| NumPy-aware, has the advantage that it provides a much larger number
| of probability distributions to choose from.
|
| Examples
| --------
| >>> from numpy.random import Generator, PCG64
| >>> rg = Generator(PCG64())
| >>> rg.standard_normal()
| -0.203 # random
|
| See Also
| --------
| default_rng : Recommended constructor for `Generator`.
|
| Methods defined here:
|
| __getstate__(...)
|
| __init__(self, /, *args, **kwargs)
| Initialize self. See help(type(self)) for accurate signature.
|
| __reduce__(...)
| Helper for pickle.
|
| __repr__(self, /)
| Return repr(self).
|
| __setstate__(...)
|
| __str__(self, /)
| Return str(self).
|
| beta(...)
| beta(a, b, size=None)
|
| Draw samples from a Beta distribution.
|
| The Beta distribution is a special case of the Dirichlet distribution,
| and is related to the Gamma distribution. It has the probability
| distribution function
|
| .. math:: f(x; a,b) = \frac{1}{B(\alpha, \beta)} x^{\alpha - 1}
| (1 - x)^{\beta - 1},
|
| where the normalization, B, is the beta function,
|
| .. math:: B(\alpha, \beta) = \int_0^1 t^{\alpha - 1}
| (1 - t)^{\beta - 1} dt.
|
| It is often seen in Bayesian inference and order statistics.
|
| Parameters
| ----------
| a : float or array_like of floats
| Alpha, positive (>0).
| b : float or array_like of floats
| Beta, positive (>0).
| size : int or tuple of ints, optional
| Output shape. If the given shape is, e.g., ``(m, n, k)``, then
| ``m * n * k`` samples are drawn. If size is ``None`` (default),
| a single value is returned if ``a`` and ``b`` are both scalars.
| Otherwise, ``np.broadcast(a, b).size`` samples are drawn.
|
| Returns
| -------
| out : ndarray or scalar
| Drawn samples from the parameterized beta distribution.
|
| binomial(...)
| binomial(n, p, size=None)
|
| Draw samples from a binomial distribution.
|
| Samples are drawn from a binomial distribution with specified
| parameters, n trials and p probability of success where
| n an integer >= 0 and p is in the interval [0,1]. (n may be
| input as a float, but it is truncated to an integer in use)
|
| Parameters
| ----------
| n : int or array_like of ints
| Parameter of the distribution, >= 0. Floats are also accepted,
| but they will be truncated to integers.
| p : float or array_like of floats
| Parameter of the distribution, >= 0 and <=1.
| size : int or tuple of ints, optional
| Output shape. If the given shape is, e.g., ``(m, n, k)``, then
| ``m * n * k`` samples are drawn. If size is ``None`` (default),
| a single value is returned if ``n`` and ``p`` are both scalars.
| Otherwise, ``np.broadcast(n, p).size`` samples are drawn.
|
| Returns
| -------
| out : ndarray or scalar
| Drawn samples from the parameterized binomial distribution, where
| each sample is equal to the number of successes over the n trials.
|
| See Also
| --------
| scipy.stats.binom : probability density function, distribution or
| cumulative density function, etc.
|
| Notes
| -----
| The probability density for the binomial distribution is
|
| .. math:: P(N) = \binom{n}{N}p^N(1-p)^{n-N},
|
| where :math:`n` is the number of trials, :math:`p` is the probability
| of success, and :math:`N` is the number of successes.
|
| When estimating the standard error of a proportion in a population by
| using a random sample, the normal distribution works well unless the
| product p*n <=5, where p = population proportion estimate, and n =
| number of samples, in which case the binomial distribution is used
| instead. For example, a sample of 15 people shows 4 who are left
| handed, and 11 who are right handed. Then p = 4/15 = 27%. 0.27*15 = 4,
| so the binomial distribution should be used in this case.
|
| References
| ----------
| .. [1] Dalgaard, Peter, "Introductory Statistics with R",
| Springer-Verlag, 2002.
| .. [2] Glantz, Stanton A. "Primer of Biostatistics.", McGraw-Hill,
| Fifth Edition, 2002.
| .. [3] Lentner, Marvin, "Elementary Applied Statistics", Bogden
| and Quigley, 1972.
| .. [4] Weisstein, Eric W. "Binomial Distribution." From MathWorld--A
| Wolfram Web Resource.
| http://mathworld.wolfram.com/BinomialDistribution.html
| .. [5] Wikipedia, "Binomial distribution",
| https://en.wikipedia.org/wiki/Binomial_distribution
|
| Examples
| --------
| Draw samples from the distribution:
|
| >>> rng = np.random.default_rng()
| >>> n, p = 10, .5 # number of trials, probability of each trial
| >>> s = rng.binomial(n, p, 1000)
| # result of flipping a coin 10 times, tested 1000 times.
|
| A real world example. A company drills 9 wild-cat oil exploration
| wells, each with an estimated probability of success of 0.1. All nine
| wells fail. What is the probability of that happening?
|
| Let's do 20,000 trials of the model, and count the number that
| generate zero positive results.
|
| >>> sum(rng.binomial(9, 0.1, 20000) == 0)/20000.
| # answer = 0.38885, or 38%.
|
| bytes(...)
| bytes(length)
|
| Return random bytes.
|
| Parameters
| ----------
| length : int
| Number of random bytes.
|
| Returns
| -------
| out : str
| String of length `length`.
|
| Examples
| --------
| >>> np.random.default_rng().bytes(10)
| ' eh\x85\x022SZ\xbf\xa4' #random
|
| chisquare(...)
| chisquare(df, size=None)
|
| Draw samples from a chi-square distribution.
|
| When `df` independent random variables, each with standard normal
| distributions (mean 0, variance 1), are squared and summed, the
| resulting distribution is chi-square (see Notes). This distribution
| is often used in hypothesis testing.
|
| Parameters
| ----------
| df : float or array_like of floats
| Number of degrees of freedom, must be > 0.
| size : int or tuple of ints, optional
| Output shape. If the given shape is, e.g., ``(m, n, k)``, then
| ``m * n * k`` samples are drawn. If size is ``None`` (default),
| a single value is returned if ``df`` is a scalar. Otherwise,
| ``np.array(df).size`` samples are drawn.
|
| Returns
| -------
| out : ndarray or scalar
| Drawn samples from the parameterized chi-square distribution.
|
| Raises
| ------
| ValueError
| When `df` <= 0 or when an inappropriate `size` (e.g. ``size=-1``)
| is given.
|
| Notes
| -----
| The variable obtained by summing the squares of `df` independent,
| standard normally distributed random variables:
|
|         .. math:: Q = \sum_{i=1}^{\mathtt{df}} X^2_i
|
| is chi-square distributed, denoted
|
| .. math:: Q \sim \chi^2_k.
|
| The probability density function of the chi-squared distribution is
|
| .. math:: p(x) = \frac{(1/2)^{k/2}}{\Gamma(k/2)}
| x^{k/2 - 1} e^{-x/2},
|
| where :math:`\Gamma` is the gamma function,
|
|         .. math:: \Gamma(x) = \int_0^{\infty} t^{x - 1} e^{-t} dt.
|
| References
| ----------
| .. [1] NIST "Engineering Statistics Handbook"
| https://www.itl.nist.gov/div898/handbook/eda/section3/eda3666.htm
|
| Examples
| --------
| >>> np.random.default_rng().chisquare(2,4)
| array([ 1.89920014, 9.00867716, 3.13710533, 5.62318272]) # random
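|
|         The mean of a chi-square distribution equals its degrees of
|         freedom, so for a large sample the sample mean should be close
|         to ``df`` (an illustrative check):
|
|         >>> s = np.random.default_rng().chisquare(2, 100000)
|         >>> abs(s.mean() - 2) < 0.1  # may vary
|         True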
|
| choice(...)
|      choice(a, size=None, replace=True, p=None, axis=0, shuffle=True)
|
| Generates a random sample from a given 1-D array
|
| Parameters
| ----------
| a : 1-D array-like or int
| If an ndarray, a random sample is generated from its elements.
| If an int, the random sample is generated as if a were np.arange(a)
| size : int or tuple of ints, optional
| Output shape. If the given shape is, e.g., ``(m, n, k)``, then
| ``m * n * k`` samples are drawn from the 1-d `a`. If `a` has more
| than one dimension, the `size` shape will be inserted into the
| `axis` dimension, so the output ``ndim`` will be ``a.ndim - 1 +
| len(size)``. Default is None, in which case a single value is
| returned.
| replace : boolean, optional
| Whether the sample is with or without replacement
| p : 1-D array-like, optional
| The probabilities associated with each entry in a.
| If not given the sample assumes a uniform distribution over all
| entries in a.
| axis : int, optional
| The axis along which the selection is performed. The default, 0,
| selects by row.
| shuffle : boolean, optional
| Whether the sample is shuffled when sampling without replacement.
| Default is True, False provides a speedup.
|
| Returns
| -------
| samples : single item or ndarray
| The generated random samples
|
| Raises
| ------
| ValueError
| If a is an int and less than zero, if p is not 1-dimensional, if
| a is array-like with a size 0, if p is not a vector of
| probabilities, if a and p have different lengths, or if
| replace=False and the sample size is greater than the population
| size.
|
| See Also
| --------
| integers, shuffle, permutation
|
| Examples
| --------
| Generate a uniform random sample from np.arange(5) of size 3:
|
| >>> rng = np.random.default_rng()
| >>> rng.choice(5, 3)
| array([0, 3, 4]) # random
| >>> #This is equivalent to rng.integers(0,5,3)
|
| Generate a non-uniform random sample from np.arange(5) of size 3:
|
| >>> rng.choice(5, 3, p=[0.1, 0, 0.3, 0.6, 0])
| array([3, 3, 0]) # random
|
| Generate a uniform random sample from np.arange(5) of size 3 without
| replacement:
|
| >>> rng.choice(5, 3, replace=False)
| array([3,1,0]) # random
| >>> #This is equivalent to rng.permutation(np.arange(5))[:3]
|
| Generate a non-uniform random sample from np.arange(5) of size
| 3 without replacement:
|
| >>> rng.choice(5, 3, replace=False, p=[0.1, 0, 0.3, 0.6, 0])
| array([2, 3, 0]) # random
|
| Any of the above can be repeated with an arbitrary array-like
| instead of just integers. For instance:
|
| >>> aa_milne_arr = ['pooh', 'rabbit', 'piglet', 'Christopher']
| >>> rng.choice(aa_milne_arr, 5, p=[0.5, 0.1, 0.1, 0.3])
| array(['pooh', 'pooh', 'pooh', 'Christopher', 'piglet'], # random
| dtype='<U11')
|
| dirichlet(...)
| dirichlet(alpha, size=None)
|
| Draw samples from the Dirichlet distribution.
|
| Draw `size` samples of dimension k from a Dirichlet distribution. A
| Dirichlet-distributed random variable can be seen as a multivariate
| generalization of a Beta distribution. The Dirichlet distribution
| is a conjugate prior of a multinomial distribution in Bayesian
| inference.
|
| Parameters
| ----------
| alpha : sequence of floats, length k
| Parameter of the distribution (length ``k`` for sample of
| length ``k``).
| size : int or tuple of ints, optional
| Output shape. If the given shape is, e.g., ``(m, n)``, then
| ``m * n * k`` samples are drawn. Default is None, in which case a
| vector of length ``k`` is returned.
|
| Returns
| -------
| samples : ndarray,
| The drawn samples, of shape ``(size, k)``.
|
| Raises
| -------
| ValueError
| If any value in ``alpha`` is less than or equal to zero
|
| Notes
| -----
| The Dirichlet distribution is a distribution over vectors
| :math:`x` that fulfil the conditions :math:`x_i>0` and
| :math:`\sum_{i=1}^k x_i = 1`.
|
| The probability density function :math:`p` of a
| Dirichlet-distributed random vector :math:`X` is
| proportional to
|
| .. math:: p(x) \propto \prod_{i=1}^{k}{x^{\alpha_i-1}_i},
|
| where :math:`\alpha` is a vector containing the positive
| concentration parameters.
|
| The method uses the following property for computation: let :math:`Y`
| be a random vector which has components that follow a standard gamma
| distribution, then :math:`X = \frac{1}{\sum_{i=1}^k{Y_i}} Y`
| is Dirichlet-distributed
|
| References
| ----------
| .. [1] David McKay, "Information Theory, Inference and Learning
| Algorithms," chapter 23,
| http://www.inference.org.uk/mackay/itila/
| .. [2] Wikipedia, "Dirichlet distribution",
| https://en.wikipedia.org/wiki/Dirichlet_distribution
|
| Examples
| --------
| Taking an example cited in Wikipedia, this distribution can be used if
| one wanted to cut strings (each of initial length 1.0) into K pieces
| with different lengths, where each piece had, on average, a designated
| average length, but allowing some variation in the relative sizes of
| the pieces.
|
| >>> s = np.random.default_rng().dirichlet((10, 5, 3), 20).transpose()
|
| >>> import matplotlib.pyplot as plt
| >>> plt.barh(range(20), s[0])
| >>> plt.barh(range(20), s[1], left=s[0], color='g')
| >>> plt.barh(range(20), s[2], left=s[0]+s[1], color='r')
| >>> plt.title("Lengths of Strings")
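|
|         As noted above, the components of each draw sum to one; a quick
|         check on the transposed sample array ``s``:
|
|         >>> np.allclose(s.sum(axis=0), 1.0)
|         True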
|
| exponential(...)
| exponential(scale=1.0, size=None)
|
| Draw samples from an exponential distribution.
|
| Its probability density function is
|
| .. math:: f(x; \frac{1}{\beta}) = \frac{1}{\beta} \exp(-\frac{x}{\beta}),
|
| for ``x > 0`` and 0 elsewhere. :math:`\beta` is the scale parameter,
| which is the inverse of the rate parameter :math:`\lambda = 1/\beta`.
| The rate parameter is an alternative, widely used parameterization
| of the exponential distribution [3]_.
|
| The exponential distribution is a continuous analogue of the
| geometric distribution. It describes many common situations, such as
| the size of raindrops measured over many rainstorms [1]_, or the time
| between page requests to Wikipedia [2]_.
|
| Parameters
| ----------
| scale : float or array_like of floats
| The scale parameter, :math:`\beta = 1/\lambda`. Must be
| non-negative.
| size : int or tuple of ints, optional
| Output shape. If the given shape is, e.g., ``(m, n, k)``, then
| ``m * n * k`` samples are drawn. If size is ``None`` (default),
| a single value is returned if ``scale`` is a scalar. Otherwise,
| ``np.array(scale).size`` samples are drawn.
|
| Returns
| -------
| out : ndarray or scalar
| Drawn samples from the parameterized exponential distribution.
|
| References
| ----------
| .. [1] Peyton Z. Peebles Jr., "Probability, Random Variables and
| Random Signal Principles", 4th ed, 2001, p. 57.
| .. [2] Wikipedia, "Poisson process",
| https://en.wikipedia.org/wiki/Poisson_process
| .. [3] Wikipedia, "Exponential distribution",
| https://en.wikipedia.org/wiki/Exponential_distribution
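|
|         Examples
|         --------
|         A minimal sketch (the scale value of 3.0 below is an arbitrary
|         stand-in for a mean waiting time between events):
|
|         >>> rng = np.random.default_rng()
|         >>> s = rng.exponential(scale=3.0, size=10000)
|
|         The sample mean estimates the scale parameter:
|
|         >>> abs(s.mean() - 3.0) < 0.2  # may vary
|         True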
|
| f(...)
| f(dfnum, dfden, size=None)
|
| Draw samples from an F distribution.
|
| Samples are drawn from an F distribution with specified parameters,
| `dfnum` (degrees of freedom in numerator) and `dfden` (degrees of
| freedom in denominator), where both parameters must be greater than
| zero.
|
| The random variate of the F distribution (also known as the
| Fisher distribution) is a continuous probability distribution
| that arises in ANOVA tests, and is the ratio of two chi-square
| variates.
|
| Parameters
| ----------
| dfnum : float or array_like of floats
| Degrees of freedom in numerator, must be > 0.
| dfden : float or array_like of float
| Degrees of freedom in denominator, must be > 0.
| size : int or tuple of ints, optional
| Output shape. If the given shape is, e.g., ``(m, n, k)``, then
| ``m * n * k`` samples are drawn. If size is ``None`` (default),
| a single value is returned if ``dfnum`` and ``dfden`` are both scalars.
| Otherwise, ``np.broadcast(dfnum, dfden).size`` samples are drawn.
|
| Returns
| -------
| out : ndarray or scalar
| Drawn samples from the parameterized Fisher distribution.
|
| See Also
| --------
| scipy.stats.f : probability density function, distribution or
| cumulative density function, etc.
|
| Notes
| -----
| The F statistic is used to compare in-group variances to between-group
| variances. Calculating the distribution depends on the sampling, and
| so it is a function of the respective degrees of freedom in the
| problem. The variable `dfnum` is the number of samples minus one, the
| between-groups degrees of freedom, while `dfden` is the within-groups
| degrees of freedom, the sum of the number of samples in each group
| minus the number of groups.
|
| References
| ----------
| .. [1] Glantz, Stanton A. "Primer of Biostatistics.", McGraw-Hill,
| Fifth Edition, 2002.
| .. [2] Wikipedia, "F-distribution",
| https://en.wikipedia.org/wiki/F-distribution
|
| Examples
| --------
| An example from Glantz[1], pp 47-40:
|
| Two groups, children of diabetics (25 people) and children from people
| without diabetes (25 controls). Fasting blood glucose was measured,
| case group had a mean value of 86.1, controls had a mean value of
| 82.2. Standard deviations were 2.09 and 2.49 respectively. Are these
| data consistent with the null hypothesis that the parents diabetic
| status does not affect their children's blood glucose levels?
| Calculating the F statistic from the data gives a value of 36.01.
|
| Draw samples from the distribution:
|
| >>> dfnum = 1. # between group degrees of freedom
| >>> dfden = 48. # within groups degrees of freedom
| >>> s = np.random.default_rng().f(dfnum, dfden, 1000)
|
|         The lower bound for the top 1% of the samples is:
|
| >>> np.sort(s)[-10]
| 7.61988120985 # random
|
| So there is about a 1% chance that the F statistic will exceed 7.62,
| the measured value is 36, so the null hypothesis is rejected at the 1%
| level.
|
| gamma(...)
| gamma(shape, scale=1.0, size=None)
|
| Draw samples from a Gamma distribution.
|
| Samples are drawn from a Gamma distribution with specified parameters,
| `shape` (sometimes designated "k") and `scale` (sometimes designated
| "theta"), where both parameters are > 0.
|
| Parameters
| ----------
| shape : float or array_like of floats
| The shape of the gamma distribution. Must be non-negative.
| scale : float or array_like of floats, optional
| The scale of the gamma distribution. Must be non-negative.
| Default is equal to 1.
| size : int or tuple of ints, optional
| Output shape. If the given shape is, e.g., ``(m, n, k)``, then
| ``m * n * k`` samples are drawn. If size is ``None`` (default),
| a single value is returned if ``shape`` and ``scale`` are both scalars.
| Otherwise, ``np.broadcast(shape, scale).size`` samples are drawn.
|
| Returns
| -------
| out : ndarray or scalar
| Drawn samples from the parameterized gamma distribution.
|
| See Also
| --------
| scipy.stats.gamma : probability density function, distribution or
| cumulative density function, etc.
|
| Notes
| -----
| The probability density for the Gamma distribution is
|
| .. math:: p(x) = x^{k-1}\frac{e^{-x/\theta}}{\theta^k\Gamma(k)},
|
| where :math:`k` is the shape and :math:`\theta` the scale,
| and :math:`\Gamma` is the Gamma function.
|
| The Gamma distribution is often used to model the times to failure of
| electronic components, and arises naturally in processes for which the
| waiting times between Poisson distributed events are relevant.
|
| References
| ----------
| .. [1] Weisstein, Eric W. "Gamma Distribution." From MathWorld--A
| Wolfram Web Resource.
| http://mathworld.wolfram.com/GammaDistribution.html
| .. [2] Wikipedia, "Gamma distribution",
| https://en.wikipedia.org/wiki/Gamma_distribution
|
| Examples
| --------
| Draw samples from the distribution:
|
| >>> shape, scale = 2., 2. # mean=4, std=2*sqrt(2)
| >>> s = np.random.default_rng().gamma(shape, scale, 1000)
|
| Display the histogram of the samples, along with
| the probability density function:
|
| >>> import matplotlib.pyplot as plt
| >>> import scipy.special as sps # doctest: +SKIP
| >>> count, bins, ignored = plt.hist(s, 50, density=True)
| >>> y = bins**(shape-1)*(np.exp(-bins/scale) / # doctest: +SKIP
| ... (sps.gamma(shape)*scale**shape))
| >>> plt.plot(bins, y, linewidth=2, color='r') # doctest: +SKIP
| >>> plt.show()
|
| geometric(...)
| geometric(p, size=None)
|
| Draw samples from the geometric distribution.
|
| Bernoulli trials are experiments with one of two outcomes:
| success or failure (an example of such an experiment is flipping
| a coin). The geometric distribution models the number of trials
| that must be run in order to achieve success. It is therefore
| supported on the positive integers, ``k = 1, 2, ...``.
|
| The probability mass function of the geometric distribution is
|
| .. math:: f(k) = (1 - p)^{k - 1} p
|
| where `p` is the probability of success of an individual trial.
|
| Parameters
| ----------
| p : float or array_like of floats
| The probability of success of an individual trial.
| size : int or tuple of ints, optional
| Output shape. If the given shape is, e.g., ``(m, n, k)``, then
| ``m * n * k`` samples are drawn. If size is ``None`` (default),
| a single value is returned if ``p`` is a scalar. Otherwise,
| ``np.array(p).size`` samples are drawn.
|
| Returns
| -------
| out : ndarray or scalar
| Drawn samples from the parameterized geometric distribution.
|
| Examples
| --------
| Draw ten thousand values from the geometric distribution,
| with the probability of an individual success equal to 0.35:
|
| >>> z = np.random.default_rng().geometric(p=0.35, size=10000)
|
| How many trials succeeded after a single run?
|
| >>> (z == 1).sum() / 10000.
| 0.34889999999999999 #random
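|
|         This fraction estimates the probability of success on the very
|         first trial, which by the probability mass function above is
|         simply ``f(1) = p = 0.35``.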
|
| gumbel(...)
| gumbel(loc=0.0, scale=1.0, size=None)
|
| Draw samples from a Gumbel distribution.
|
| Draw samples from a Gumbel distribution with specified location and
| scale. For more information on the Gumbel distribution, see
| Notes and References below.
|
| Parameters
| ----------
| loc : float or array_like of floats, optional
| The location of the mode of the distribution. Default is 0.
| scale : float or array_like of floats, optional
| The scale parameter of the distribution. Default is 1. Must be non-
| negative.
| size : int or tuple of ints, optional
| Output shape. If the given shape is, e.g., ``(m, n, k)``, then
| ``m * n * k`` samples are drawn. If size is ``None`` (default),
| a single value is returned if ``loc`` and ``scale`` are both scalars.
| Otherwise, ``np.broadcast(loc, scale).size`` samples are drawn.
|
| Returns
| -------
| out : ndarray or scalar
| Drawn samples from the parameterized Gumbel distribution.
|
| See Also
| --------
| scipy.stats.gumbel_l
| scipy.stats.gumbel_r
| scipy.stats.genextreme
| weibull
|
| Notes
| -----
| The Gumbel (or Smallest Extreme Value (SEV) or the Smallest Extreme
| Value Type I) distribution is one of a class of Generalized Extreme
| Value (GEV) distributions used in modeling extreme value problems.
| The Gumbel is a special case of the Extreme Value Type I distribution
| for maximums from distributions with "exponential-like" tails.
|
| The probability density for the Gumbel distribution is
|
| .. math:: p(x) = \frac{e^{-(x - \mu)/ \beta}}{\beta} e^{ -e^{-(x - \mu)/
| \beta}},
|
| where :math:`\mu` is the mode, a location parameter, and
| :math:`\beta` is the scale parameter.
|
| The Gumbel (named for German mathematician Emil Julius Gumbel) was used
| very early in the hydrology literature, for modeling the occurrence of
| flood events. It is also used for modeling maximum wind speed and
| rainfall rates. It is a "fat-tailed" distribution - the probability of
| an event in the tail of the distribution is larger than if one used a
| Gaussian, hence the surprisingly frequent occurrence of 100-year
| floods. Floods were initially modeled as a Gaussian process, which
| underestimated the frequency of extreme events.
|
| It is one of a class of extreme value distributions, the Generalized
| Extreme Value (GEV) distributions, which also includes the Weibull and
| Frechet.
|
| The function has a mean of :math:`\mu + 0.57721\beta` and a variance
| of :math:`\frac{\pi^2}{6}\beta^2`.
|
| References
| ----------
| .. [1] Gumbel, E. J., "Statistics of Extremes,"
| New York: Columbia University Press, 1958.
| .. [2] Reiss, R.-D. and Thomas, M., "Statistical Analysis of Extreme
| Values from Insurance, Finance, Hydrology and Other Fields,"
| Basel: Birkhauser Verlag, 2001.
|
| Examples
| --------
| Draw samples from the distribution:
|
| >>> rng = np.random.default_rng()
| >>> mu, beta = 0, 0.1 # location and scale
| >>> s = rng.gumbel(mu, beta, 1000)
|
| Display the histogram of the samples, along with
| the probability density function:
|
| >>> import matplotlib.pyplot as plt
| >>> count, bins, ignored = plt.hist(s, 30, density=True)
| >>> plt.plot(bins, (1/beta)*np.exp(-(bins - mu)/beta)
| ... * np.exp( -np.exp( -(bins - mu) /beta) ),
| ... linewidth=2, color='r')
| >>> plt.show()
|
| Show how an extreme value distribution can arise from a Gaussian process
| and compare to a Gaussian:
|
| >>> means = []
| >>> maxima = []
| >>> for i in range(0,1000) :
| ... a = rng.normal(mu, beta, 1000)
| ... means.append(a.mean())
| ... maxima.append(a.max())
| >>> count, bins, ignored = plt.hist(maxima, 30, density=True)
| >>> beta = np.std(maxima) * np.sqrt(6) / np.pi
| >>> mu = np.mean(maxima) - 0.57721*beta
| >>> plt.plot(bins, (1/beta)*np.exp(-(bins - mu)/beta)
| ... * np.exp(-np.exp(-(bins - mu)/beta)),
| ... linewidth=2, color='r')
| >>> plt.plot(bins, 1/(beta * np.sqrt(2 * np.pi))
| ... * np.exp(-(bins - mu)**2 / (2 * beta**2)),
| ... linewidth=2, color='g')
| >>> plt.show()
|
| hypergeometric(...)
| hypergeometric(ngood, nbad, nsample, size=None)
|
| Draw samples from a Hypergeometric distribution.
|
| Samples are drawn from a hypergeometric distribution with specified
| parameters, `ngood` (ways to make a good selection), `nbad` (ways to make
| a bad selection), and `nsample` (number of items sampled, which is less
| than or equal to the sum ``ngood + nbad``).
|
| Parameters
| ----------
| ngood : int or array_like of ints
| Number of ways to make a good selection. Must be nonnegative and
| less than 10**9.
| nbad : int or array_like of ints
| Number of ways to make a bad selection. Must be nonnegative and
| less than 10**9.
| nsample : int or array_like of ints
| Number of items sampled. Must be nonnegative and less than
| ``ngood + nbad``.
| size : int or tuple of ints, optional
| Output shape. If the given shape is, e.g., ``(m, n, k)``, then
| ``m * n * k`` samples are drawn. If size is ``None`` (default),
| a single value is returned if `ngood`, `nbad`, and `nsample`
| are all scalars. Otherwise, ``np.broadcast(ngood, nbad, nsample).size``
| samples are drawn.
|
| Returns
| -------
| out : ndarray or scalar
| Drawn samples from the parameterized hypergeometric distribution. Each
| sample is the number of good items within a randomly selected subset of
| size `nsample` taken from a set of `ngood` good items and `nbad` bad items.
|
| See Also
| --------
| multivariate_hypergeometric : Draw samples from the multivariate
| hypergeometric distribution.
| scipy.stats.hypergeom : probability density function, distribution or
| cumulative density function, etc.
|
| Notes
| -----
| The probability density for the Hypergeometric distribution is
|
| .. math:: P(x) = \frac{\binom{g}{x}\binom{b}{n-x}}{\binom{g+b}{n}},
|
| where :math:`0 \le x \le n` and :math:`n-b \le x \le g`
|
| for P(x) the probability of ``x`` good results in the drawn sample,
| g = `ngood`, b = `nbad`, and n = `nsample`.
|
| Consider an urn with black and white marbles in it, `ngood` of them
| are black and `nbad` are white. If you draw `nsample` balls without
| replacement, then the hypergeometric distribution describes the
| distribution of black balls in the drawn sample.
|
| Note that this distribution is very similar to the binomial
| distribution, except that in this case, samples are drawn without
| replacement, whereas in the Binomial case samples are drawn with
| replacement (or the sample space is infinite). As the sample space
| becomes large, this distribution approaches the binomial.
|
| The arguments `ngood` and `nbad` each must be less than `10**9`. For
| extremely large arguments, the algorithm that is used to compute the
| samples [4]_ breaks down because of loss of precision in floating point
| calculations. For such large values, if `nsample` is not also large,
| the distribution can be approximated with the binomial distribution,
| `binomial(n=nsample, p=ngood/(ngood + nbad))`.
|
| References
| ----------
| .. [1] Lentner, Marvin, "Elementary Applied Statistics", Bogden
| and Quigley, 1972.
| .. [2] Weisstein, Eric W. "Hypergeometric Distribution." From
| MathWorld--A Wolfram Web Resource.
| http://mathworld.wolfram.com/HypergeometricDistribution.html
| .. [3] Wikipedia, "Hypergeometric distribution",
| https://en.wikipedia.org/wiki/Hypergeometric_distribution
| .. [4] Stadlober, Ernst, "The ratio of uniforms approach for generating
| discrete random variates", Journal of Computational and Applied
| Mathematics, 31, pp. 181-189 (1990).
|
| Examples
| --------
| Draw samples from the distribution:
|
| >>> rng = np.random.default_rng()
| >>> ngood, nbad, nsamp = 100, 2, 10
| # number of good, number of bad, and number of samples
| >>> s = rng.hypergeometric(ngood, nbad, nsamp, 1000)
| >>> from matplotlib.pyplot import hist
| >>> hist(s)
| # note that it is very unlikely to grab both bad items
|
| Suppose you have an urn with 15 white and 15 black marbles.
| If you pull 15 marbles at random, how likely is it that
| 12 or more of them are one color?
|
| >>> s = rng.hypergeometric(15, 15, 15, 100000)
| >>> sum(s>=12)/100000. + sum(s<=3)/100000.
| # answer = 0.003 ... pretty unlikely!
|
| integers(...)
| integers(low, high=None, size=None, dtype=np.int64, endpoint=False)
|
| Return random integers from `low` (inclusive) to `high` (exclusive), or
| if endpoint=True, `low` (inclusive) to `high` (inclusive). Replaces
| `RandomState.randint` (with endpoint=False) and
| `RandomState.random_integers` (with endpoint=True)
|
| Return random integers from the "discrete uniform" distribution of
| the specified dtype. If `high` is None (the default), then results are
| from 0 to `low`.
|
| Parameters
| ----------
| low : int or array-like of ints
| Lowest (signed) integers to be drawn from the distribution (unless
| ``high=None``, in which case this parameter is 0 and this value is
| used for `high`).
| high : int or array-like of ints, optional
| If provided, one above the largest (signed) integer to be drawn
| from the distribution (see above for behavior if ``high=None``).
| If array-like, must contain integer values
| size : int or tuple of ints, optional
| Output shape. If the given shape is, e.g., ``(m, n, k)``, then
| ``m * n * k`` samples are drawn. Default is None, in which case a
| single value is returned.
| dtype : dtype, optional
| Desired dtype of the result. Byteorder must be native.
| The default value is np.int64.
| endpoint : bool, optional
| If true, sample from the interval [low, high] instead of the
| default [low, high)
| Defaults to False
|
| Returns
| -------
| out : int or ndarray of ints
| `size`-shaped array of random integers from the appropriate
| distribution, or a single such random int if `size` not provided.
|
| Notes
| -----
| When using broadcasting with uint64 dtypes, the maximum value (2**64)
| cannot be represented as a standard integer type. The high array (or
| low if high is None) must have object dtype, e.g., array([2**64]).
|
| Examples
| --------
| >>> rng = np.random.default_rng()
| >>> rng.integers(2, size=10)
| array([1, 0, 0, 0, 1, 1, 0, 0, 1, 0]) # random
| >>> rng.integers(1, size=10)
| array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
|
| Generate a 2 x 4 array of ints between 0 and 4, inclusive:
|
| >>> rng.integers(5, size=(2, 4))
| array([[4, 0, 2, 1],
| [3, 2, 2, 0]]) # random
|
| Generate a 1 x 3 array with 3 different upper bounds
|
| >>> rng.integers(1, [3, 5, 10])
| array([2, 2, 9]) # random
|
| Generate a 1 by 3 array with 3 different lower bounds
|
| >>> rng.integers([1, 5, 7], 10)
| array([9, 8, 7]) # random
|
| Generate a 2 by 4 array using broadcasting with dtype of uint8
|
| >>> rng.integers([1, 3, 5, 7], [[10], [20]], dtype=np.uint8)
| array([[ 8, 6, 9, 7],
| [ 1, 16, 9, 12]], dtype=uint8) # random
|
| References
| ----------
| .. [1] Daniel Lemire., "Fast Random Integer Generation in an Interval",
| ACM Transactions on Modeling and Computer Simulation 29 (1), 2019,
| http://arxiv.org/abs/1805.10941.
|
| laplace(...)
| laplace(loc=0.0, scale=1.0, size=None)
|
| Draw samples from the Laplace or double exponential distribution with
| specified location (or mean) and scale (decay).
|
| The Laplace distribution is similar to the Gaussian/normal distribution,
| but is sharper at the peak and has fatter tails. It represents the
| difference between two independent, identically distributed exponential
| random variables.
|
| Parameters
| ----------
| loc : float or array_like of floats, optional
| The position, :math:`\mu`, of the distribution peak. Default is 0.
| scale : float or array_like of floats, optional
| :math:`\lambda`, the exponential decay. Default is 1. Must be non-
| negative.
| size : int or tuple of ints, optional
| Output shape. If the given shape is, e.g., ``(m, n, k)``, then
| ``m * n * k`` samples are drawn. If size is ``None`` (default),
| a single value is returned if ``loc`` and ``scale`` are both scalars.
| Otherwise, ``np.broadcast(loc, scale).size`` samples are drawn.
|
| Returns
| -------
| out : ndarray or scalar
| Drawn samples from the parameterized Laplace distribution.
|
| Notes
| -----
| It has the probability density function
|
| .. math:: f(x; \mu, \lambda) = \frac{1}{2\lambda}
| \exp\left(-\frac{|x - \mu|}{\lambda}\right).
|
| The first law of Laplace, from 1774, states that the frequency
| of an error can be expressed as an exponential function of the
| absolute magnitude of the error, which leads to the Laplace
| distribution. For many problems in economics and health
| sciences, this distribution seems to model the data better
| than the standard Gaussian distribution.
|
| References
| ----------
| .. [1] Abramowitz, M. and Stegun, I. A. (Eds.). "Handbook of
| Mathematical Functions with Formulas, Graphs, and Mathematical
| Tables, 9th printing," New York: Dover, 1972.
| .. [2] Kotz, Samuel, et. al. "The Laplace Distribution and
| Generalizations, " Birkhauser, 2001.
| .. [3] Weisstein, Eric W. "Laplace Distribution."
| From MathWorld--A Wolfram Web Resource.
| http://mathworld.wolfram.com/LaplaceDistribution.html
| .. [4] Wikipedia, "Laplace distribution",
| https://en.wikipedia.org/wiki/Laplace_distribution
|
| Examples
| --------
| Draw samples from the distribution
|
| >>> loc, scale = 0., 1.
| >>> s = np.random.default_rng().laplace(loc, scale, 1000)
|
| Display the histogram of the samples, along with
| the probability density function:
|
| >>> import matplotlib.pyplot as plt
| >>> count, bins, ignored = plt.hist(s, 30, density=True)
| >>> x = np.arange(-8., 8., .01)
| >>> pdf = np.exp(-abs(x-loc)/scale)/(2.*scale)
| >>> plt.plot(x, pdf)
|
| Plot Gaussian for comparison:
|
| >>> g = (1/(scale * np.sqrt(2 * np.pi)) *
| ... np.exp(-(x - loc)**2 / (2 * scale**2)))
| >>> plt.plot(x,g)
|
| logistic(...)
| logistic(loc=0.0, scale=1.0, size=None)
|
| Draw samples from a logistic distribution.
|
| Samples are drawn from a logistic distribution with specified
| parameters, loc (location or mean, also median), and scale (>0).
|
| Parameters
| ----------
| loc : float or array_like of floats, optional
| Parameter of the distribution. Default is 0.
| scale : float or array_like of floats, optional
| Parameter of the distribution. Must be non-negative.
| Default is 1.
| size : int or tuple of ints, optional
| Output shape. If the given shape is, e.g., ``(m, n, k)``, then
| ``m * n * k`` samples are drawn. If size is ``None`` (default),
| a single value is returned if ``loc`` and ``scale`` are both scalars.
| Otherwise, ``np.broadcast(loc, scale).size`` samples are drawn.
|
| Returns
| -------
| out : ndarray or scalar
| Drawn samples from the parameterized logistic distribution.
|
| See Also
| --------
| scipy.stats.logistic : probability density function, distribution or
| cumulative density function, etc.
|
| Notes
| -----
| The probability density for the Logistic distribution is
|
|         .. math:: P(x) = \frac{e^{-(x-\mu)/s}}{s(1+e^{-(x-\mu)/s})^2},
|
| where :math:`\mu` = location and :math:`s` = scale.
|
| The Logistic distribution is used in Extreme Value problems where it
| can act as a mixture of Gumbel distributions, in Epidemiology, and by
| the World Chess Federation (FIDE) where it is used in the Elo ranking
| system, assuming the performance of each player is a logistically
| distributed random variable.
|
| References
| ----------
| .. [1] Reiss, R.-D. and Thomas M. (2001), "Statistical Analysis of
| Extreme Values, from Insurance, Finance, Hydrology and Other
| Fields," Birkhauser Verlag, Basel, pp 132-133.
| .. [2] Weisstein, Eric W. "Logistic Distribution." From
| MathWorld--A Wolfram Web Resource.
| http://mathworld.wolfram.com/LogisticDistribution.html
| .. [3] Wikipedia, "Logistic-distribution",
| https://en.wikipedia.org/wiki/Logistic_distribution
|
| Examples
| --------
| Draw samples from the distribution:
|
| >>> loc, scale = 10, 1
| >>> s = np.random.default_rng().logistic(loc, scale, 10000)
| >>> import matplotlib.pyplot as plt
| >>> count, bins, ignored = plt.hist(s, bins=50)
|
| # plot against distribution
|
| >>> def logist(x, loc, scale):
| ... return np.exp((loc-x)/scale)/(scale*(1+np.exp((loc-x)/scale))**2)
| >>> lgst_val = logist(bins, loc, scale)
| >>> plt.plot(bins, lgst_val * count.max() / lgst_val.max())
| >>> plt.show()
|
| lognormal(...)
| lognormal(mean=0.0, sigma=1.0, size=None)
|
| Draw samples from a log-normal distribution.
|
| Draw samples from a log-normal distribution with specified mean,
| standard deviation, and array shape. Note that the mean and standard
| deviation are not the values for the distribution itself, but of the
| underlying normal distribution it is derived from.
|
| Parameters
| ----------
| mean : float or array_like of floats, optional
| Mean value of the underlying normal distribution. Default is 0.
| sigma : float or array_like of floats, optional
| Standard deviation of the underlying normal distribution. Must be
| non-negative. Default is 1.
| size : int or tuple of ints, optional
| Output shape. If the given shape is, e.g., ``(m, n, k)``, then
| ``m * n * k`` samples are drawn. If size is ``None`` (default),
| a single value is returned if ``mean`` and ``sigma`` are both scalars.
| Otherwise, ``np.broadcast(mean, sigma).size`` samples are drawn.
|
| Returns
| -------
| out : ndarray or scalar
| Drawn samples from the parameterized log-normal distribution.
|
| See Also
| --------
| scipy.stats.lognorm : probability density function, distribution,
| cumulative density function, etc.
|
| Notes
| -----
| A variable `x` has a log-normal distribution if `log(x)` is normally
| distributed. The probability density function for the log-normal
| distribution is:
|
| .. math:: p(x) = \frac{1}{\sigma x \sqrt{2\pi}}
| e^{(-\frac{(ln(x)-\mu)^2}{2\sigma^2})}
|
| where :math:`\mu` is the mean and :math:`\sigma` is the standard
| deviation of the normally distributed logarithm of the variable.
| A log-normal distribution results if a random variable is the *product*
| of a large number of independent, identically-distributed variables in
| the same way that a normal distribution results if the variable is the
| *sum* of a large number of independent, identically-distributed
| variables.
|
| References
| ----------
| .. [1] Limpert, E., Stahel, W. A., and Abbt, M., "Log-normal
| Distributions across the Sciences: Keys and Clues,"
| BioScience, Vol. 51, No. 5, May, 2001.
| https://stat.ethz.ch/~stahel/lognormal/bioscience.pdf
| .. [2] Reiss, R.D. and Thomas, M., "Statistical Analysis of Extreme
| Values," Basel: Birkhauser Verlag, 2001, pp. 31-32.
|
| Examples
| --------
| Draw samples from the distribution:
|
| >>> rng = np.random.default_rng()
| >>> mu, sigma = 3., 1. # mean and standard deviation
| >>> s = rng.lognormal(mu, sigma, 1000)
|
| Display the histogram of the samples, along with
| the probability density function:
|
| >>> import matplotlib.pyplot as plt
| >>> count, bins, ignored = plt.hist(s, 100, density=True, align='mid')
|
| >>> x = np.linspace(min(bins), max(bins), 10000)
| >>> pdf = (np.exp(-(np.log(x) - mu)**2 / (2 * sigma**2))
| ... / (x * sigma * np.sqrt(2 * np.pi)))
|
| >>> plt.plot(x, pdf, linewidth=2, color='r')
| >>> plt.axis('tight')
| >>> plt.show()
|
| Demonstrate that taking the products of random samples from a uniform
| distribution can be fit well by a log-normal probability density
| function.
|
| >>> # Generate a thousand samples: each is the product of 100 random
| >>> # values, drawn from a normal distribution.
| >>> rng = rng
| >>> b = []
| >>> for i in range(1000):
| ... a = 10. + rng.standard_normal(100)
| ... b.append(np.product(a))
|
| >>> b = np.array(b) / np.min(b) # scale values to be positive
| >>> count, bins, ignored = plt.hist(b, 100, density=True, align='mid')
| >>> sigma = np.std(np.log(b))
| >>> mu = np.mean(np.log(b))
|
| >>> x = np.linspace(min(bins), max(bins), 10000)
| >>> pdf = (np.exp(-(np.log(x) - mu)**2 / (2 * sigma**2))
| ... / (x * sigma * np.sqrt(2 * np.pi)))
|
| >>> plt.plot(x, pdf, color='r', linewidth=2)
| >>> plt.show()
|
| logseries(...)
| logseries(p, size=None)
|
| Draw samples from a logarithmic series distribution.
|
| Samples are drawn from a log series distribution with specified
| shape parameter, 0 < ``p`` < 1.
|
| Parameters
| ----------
| p : float or array_like of floats
| Shape parameter for the distribution. Must be in the range (0, 1).
| size : int or tuple of ints, optional
| Output shape. If the given shape is, e.g., ``(m, n, k)``, then
| ``m * n * k`` samples are drawn. If size is ``None`` (default),
| a single value is returned if ``p`` is a scalar. Otherwise,
| ``np.array(p).size`` samples are drawn.
|
| Returns
| -------
| out : ndarray or scalar
| Drawn samples from the parameterized logarithmic series distribution.
|
| See Also
| --------
| scipy.stats.logser : probability density function, distribution or
| cumulative density function, etc.
|
| Notes
| -----
| The probability mass function for the Log Series distribution is
|
| .. math:: P(k) = \frac{-p^k}{k \ln(1-p)},
|
| where p = probability.
|
| The log series distribution is frequently used to represent species
| richness and occurrence, first proposed by Fisher, Corbet, and
| Williams in 1943 [2]. It may also be used to model the numbers of
| occupants seen in cars [3].
|
| References
| ----------
| .. [1] Buzas, Martin A.; Culver, Stephen J., Understanding regional
| species diversity through the log series distribution of
| occurrences: BIODIVERSITY RESEARCH Diversity & Distributions,
| Volume 5, Number 5, September 1999 , pp. 187-195(9).
|         .. [2] Fisher, R.A., A.S. Corbet, and C.B. Williams. 1943. The
| relation between the number of species and the number of
| individuals in a random sample of an animal population.
| Journal of Animal Ecology, 12:42-58.
| .. [3] D. J. Hand, F. Daly, D. Lunn, E. Ostrowski, A Handbook of Small
| Data Sets, CRC Press, 1994.
| .. [4] Wikipedia, "Logarithmic distribution",
| https://en.wikipedia.org/wiki/Logarithmic_distribution
|
| Examples
| --------
| Draw samples from the distribution:
|
| >>> a = .6
| >>> s = np.random.default_rng().logseries(a, 10000)
| >>> import matplotlib.pyplot as plt
| >>> count, bins, ignored = plt.hist(s)
|
| # plot against distribution
|
| >>> def logseries(k, p):
| ... return -p**k/(k*np.log(1-p))
| >>> plt.plot(bins, logseries(bins, a) * count.max()/
| ... logseries(bins, a).max(), 'r')
| >>> plt.show()
|
| multinomial(...)
| multinomial(n, pvals, size=None)
|
| Draw samples from a multinomial distribution.
|
| The multinomial distribution is a multivariate generalization of the
| binomial distribution. Take an experiment with one of ``p``
| possible outcomes. An example of such an experiment is throwing a dice,
| where the outcome can be 1 through 6. Each sample drawn from the
| distribution represents `n` such experiments. Its values,
| ``X_i = [X_0, X_1, ..., X_p]``, represent the number of times the
| outcome was ``i``.
|
| Parameters
| ----------
| n : int or array-like of ints
| Number of experiments.
| pvals : sequence of floats, length p
| Probabilities of each of the ``p`` different outcomes. These
| must sum to 1 (however, the last element is always assumed to
| account for the remaining probability, as long as
|             ``sum(pvals[:-1]) <= 1``).
| size : int or tuple of ints, optional
| Output shape. If the given shape is, e.g., ``(m, n, k)``, then
| ``m * n * k`` samples are drawn. Default is None, in which case a
| single value is returned.
|
| Returns
| -------
| out : ndarray
| The drawn samples, of shape *size*, if that was provided. If not,
| the shape is ``(N,)``.
|
| In other words, each entry ``out[i,j,...,:]`` is an N-dimensional
| value drawn from the distribution.
|
| Examples
| --------
| Throw a dice 20 times:
|
| >>> rng = np.random.default_rng()
| >>> rng.multinomial(20, [1/6.]*6, size=1)
| array([[4, 1, 7, 5, 2, 1]]) # random
|
| It landed 4 times on 1, once on 2, etc.
|
| Now, throw the dice 20 times, and 20 times again:
|
| >>> rng.multinomial(20, [1/6.]*6, size=2)
| array([[3, 4, 3, 3, 4, 3],
| [2, 4, 3, 4, 0, 7]]) # random
|
| For the first run, we threw 3 times 1, 4 times 2, etc. For the second,
| we threw 2 times 1, 4 times 2, etc.
|
|         Now, do one experiment throwing the dice 10 times, and 10 times again,
| and another throwing the dice 20 times, and 20 times again:
|
| >>> rng.multinomial([[10], [20]], [1/6.]*6, size=2)
| array([[[2, 4, 0, 1, 2, 1],
| [1, 3, 0, 3, 1, 2]],
| [[1, 4, 4, 4, 4, 3],
| [3, 3, 2, 5, 5, 2]]]) # random
|
| The first array shows the outcomes of throwing the dice 10 times, and
| the second shows the outcomes from throwing the dice 20 times.
|
| A loaded die is more likely to land on number 6:
|
| >>> rng.multinomial(100, [1/7.]*5 + [2/7.])
| array([11, 16, 14, 17, 16, 26]) # random
|
| The probability inputs should be normalized. As an implementation
| detail, the value of the last entry is ignored and assumed to take
| up any leftover probability mass, but this should not be relied on.
| A biased coin which has twice as much weight on one side as on the
| other should be sampled like so:
|
| >>> rng.multinomial(100, [1.0 / 3, 2.0 / 3]) # RIGHT
| array([38, 62]) # random
|
| not like:
|
| >>> rng.multinomial(100, [1.0, 2.0]) # WRONG
| Traceback (most recent call last):
| ValueError: pvals < 0, pvals > 1 or pvals contains NaNs
|
| multivariate_hypergeometric(...)
| multivariate_hypergeometric(colors, nsample, size=None,
| method='marginals')
|
| Generate variates from a multivariate hypergeometric distribution.
|
| The multivariate hypergeometric distribution is a generalization
| of the hypergeometric distribution.
|
| Choose ``nsample`` items at random without replacement from a
| collection with ``N`` distinct types. ``N`` is the length of
| ``colors``, and the values in ``colors`` are the number of occurrences
| of that type in the collection. The total number of items in the
| collection is ``sum(colors)``. Each random variate generated by this
| function is a vector of length ``N`` holding the counts of the
| different types that occurred in the ``nsample`` items.
|
| The name ``colors`` comes from a common description of the
| distribution: it is the probability distribution of the number of
| marbles of each color selected without replacement from an urn
| containing marbles of different colors; ``colors[i]`` is the number
| of marbles in the urn with color ``i``.
|
| Parameters
| ----------
| colors : sequence of integers
| The number of each type of item in the collection from which
| a sample is drawn. The values in ``colors`` must be nonnegative.
| To avoid loss of precision in the algorithm, ``sum(colors)``
| must be less than ``10**9`` when `method` is "marginals".
| nsample : int
| The number of items selected. ``nsample`` must not be greater
| than ``sum(colors)``.
| size : int or tuple of ints, optional
| The number of variates to generate, either an integer or a tuple
| holding the shape of the array of variates. If the given size is,
| e.g., ``(k, m)``, then ``k * m`` variates are drawn, where one
| variate is a vector of length ``len(colors)``, and the return value
| has shape ``(k, m, len(colors))``. If `size` is an integer, the
| output has shape ``(size, len(colors))``. Default is None, in
| which case a single variate is returned as an array with shape
| ``(len(colors),)``.
| method : string, optional
| Specify the algorithm that is used to generate the variates.
| Must be 'count' or 'marginals' (the default). See the Notes
| for a description of the methods.
|
| Returns
| -------
| variates : ndarray
| Array of variates drawn from the multivariate hypergeometric
| distribution.
|
| See Also
| --------
| hypergeometric : Draw samples from the (univariate) hypergeometric
| distribution.
|
| Notes
| -----
| The two methods do not return the same sequence of variates.
|
| The "count" algorithm is roughly equivalent to the following numpy
| code::
|
| choices = np.repeat(np.arange(len(colors)), colors)
| selection = np.random.choice(choices, nsample, replace=False)
| variate = np.bincount(selection, minlength=len(colors))
|
| The "count" algorithm uses a temporary array of integers with length
| ``sum(colors)``.
|
| The "marginals" algorithm generates a variate by using repeated
| calls to the univariate hypergeometric sampler. It is roughly
| equivalent to::
|
| variate = np.zeros(len(colors), dtype=np.int64)
| # `remaining` is the cumulative sum of `colors` from the last
| # element to the first; e.g. if `colors` is [3, 1, 5], then
| # `remaining` is [9, 6, 5].
| remaining = np.cumsum(colors[::-1])[::-1]
| for i in range(len(colors)-1):
| if nsample < 1:
| break
| variate[i] = hypergeometric(colors[i], remaining[i+1],
| nsample)
| nsample -= variate[i]
| variate[-1] = nsample
|
| The default method is "marginals". For some cases (e.g. when
| `colors` contains relatively small integers), the "count" method
| can be significantly faster than the "marginals" method. If
| performance of the algorithm is important, test the two methods
| with typical inputs to decide which works best.
|
| .. versionadded:: 1.18.0
|
| Examples
| --------
| >>> colors = [16, 8, 4]
| >>> seed = 4861946401452
| >>> gen = np.random.Generator(np.random.PCG64(seed))
| >>> gen.multivariate_hypergeometric(colors, 6)
| array([5, 0, 1])
| >>> gen.multivariate_hypergeometric(colors, 6, size=3)
| array([[5, 0, 1],
| [2, 2, 2],
| [3, 3, 0]])
| >>> gen.multivariate_hypergeometric(colors, 6, size=(2, 2))
| array([[[3, 2, 1],
| [3, 2, 1]],
| [[4, 1, 1],
| [3, 2, 1]]])
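|
|         Whatever the random state, the counts in each variate always sum
|         to ``nsample`` (here 6):
|
|         >>> int(gen.multivariate_hypergeometric(colors, 6).sum())
|         6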
|
| multivariate_normal(...)
| multivariate_normal(mean, cov, size=None, check_valid='warn', tol=1e-8)
|
| Draw random samples from a multivariate normal distribution.
|
| The multivariate normal, multinormal or Gaussian distribution is a
| generalization of the one-dimensional normal distribution to higher
| dimensions. Such a distribution is specified by its mean and
| covariance matrix. These parameters are analogous to the mean
| (average or "center") and variance (standard deviation, or "width,"
| squared) of the one-dimensional normal distribution.
|
| Parameters
| ----------
| mean : 1-D array_like, of length N
| Mean of the N-dimensional distribution.
| cov : 2-D array_like, of shape (N, N)
| Covariance matrix of the distribution. It must be symmetric and
| positive-semidefinite for proper sampling.
| size : int or tuple of ints, optional
| Given a shape of, for example, ``(m,n,k)``, ``m*n*k`` samples are
| generated, and packed in an `m`-by-`n`-by-`k` arrangement. Because
| each sample is `N`-dimensional, the output shape is ``(m,n,k,N)``.
| If no shape is specified, a single (`N`-D) sample is returned.
| check_valid : { 'warn', 'raise', 'ignore' }, optional
| Behavior when the covariance matrix is not positive semidefinite.
| tol : float, optional
| Tolerance when checking the singular values in covariance matrix.
| cov is cast to double before the check.
| method : { 'svd', 'eigh', 'cholesky'}, optional
| The cov input is used to compute a factor matrix A such that
| ``A @ A.T = cov``. This argument is used to select the method
| used to compute the factor matrix A. The default method 'svd' is
| the slowest, while 'cholesky' is the fastest but less robust than
|          'svd'. The method `eigh` uses eigendecomposition to compute A
|          and is faster than 'svd' but slower than 'cholesky'.
|
| .. versionadded:: 1.18.0
|
| Returns
| -------
| out : ndarray
| The drawn samples, of shape *size*, if that was provided. If not,
| the shape is ``(N,)``.
|
| In other words, each entry ``out[i,j,...,:]`` is an N-dimensional
| value drawn from the distribution.
|
| Notes
| -----
| The mean is a coordinate in N-dimensional space, which represents the
| location where samples are most likely to be generated. This is
| analogous to the peak of the bell curve for the one-dimensional or
| univariate normal distribution.
|
| Covariance indicates the level to which two variables vary together.
| From the multivariate normal distribution, we draw N-dimensional
| samples, :math:`X = [x_1, x_2, ... x_N]`. The covariance matrix
| element :math:`C_{ij}` is the covariance of :math:`x_i` and :math:`x_j`.
| The element :math:`C_{ii}` is the variance of :math:`x_i` (i.e. its
| "spread").
|
| Instead of specifying the full covariance matrix, popular
| approximations include:
|
| - Spherical covariance (`cov` is a multiple of the identity matrix)
| - Diagonal covariance (`cov` has non-negative elements, and only on
| the diagonal)
|
| This geometrical property can be seen in two dimensions by plotting
| generated data-points:
|
| >>> mean = [0, 0]
| >>> cov = [[1, 0], [0, 100]] # diagonal covariance
|
| Diagonal covariance means that points are oriented along x or y-axis:
|
| >>> import matplotlib.pyplot as plt
| >>> x, y = np.random.default_rng().multivariate_normal(mean, cov, 5000).T
| >>> plt.plot(x, y, 'x')
| >>> plt.axis('equal')
| >>> plt.show()
|
| Note that the covariance matrix must be positive semidefinite (a.k.a.
| nonnegative-definite). Otherwise, the behavior of this method is
| undefined and backwards compatibility is not guaranteed.
|
| References
| ----------
| .. [1] Papoulis, A., "Probability, Random Variables, and Stochastic
| Processes," 3rd ed., New York: McGraw-Hill, 1991.
| .. [2] Duda, R. O., Hart, P. E., and Stork, D. G., "Pattern
| Classification," 2nd ed., New York: Wiley, 2001.
|
| Examples
| --------
| >>> mean = (1, 2)
| >>> cov = [[1, 0], [0, 1]]
| >>> rng = np.random.default_rng()
| >>> x = rng.multivariate_normal(mean, cov, (3, 3))
| >>> x.shape
| (3, 3, 2)
|
|         We can use a method other than the default to factorize cov:
|
| >>> y = rng.multivariate_normal(mean, cov, (3, 3), method='cholesky')
| >>> y.shape
| (3, 3, 2)
|
| The following is probably true, given that 0.6 is roughly twice the
| standard deviation:
|
| >>> list((x[0,0,:] - mean) < 0.6)
| [True, True] # random
|
| negative_binomial(...)
| negative_binomial(n, p, size=None)
|
| Draw samples from a negative binomial distribution.
|
| Samples are drawn from a negative binomial distribution with specified
| parameters, `n` successes and `p` probability of success where `n`
| is > 0 and `p` is in the interval [0, 1].
|
| Parameters
| ----------
| n : float or array_like of floats
| Parameter of the distribution, > 0.
| p : float or array_like of floats
| Parameter of the distribution, >= 0 and <=1.
| size : int or tuple of ints, optional
| Output shape. If the given shape is, e.g., ``(m, n, k)``, then
| ``m * n * k`` samples are drawn. If size is ``None`` (default),
| a single value is returned if ``n`` and ``p`` are both scalars.
| Otherwise, ``np.broadcast(n, p).size`` samples are drawn.
|
| Returns
| -------
| out : ndarray or scalar
| Drawn samples from the parameterized negative binomial distribution,
| where each sample is equal to N, the number of failures that
| occurred before a total of n successes was reached.
|
| Notes
| -----
| The probability mass function of the negative binomial distribution is
|
| .. math:: P(N;n,p) = \frac{\Gamma(N+n)}{N!\Gamma(n)}p^{n}(1-p)^{N},
|
| where :math:`n` is the number of successes, :math:`p` is the
| probability of success, :math:`N+n` is the number of trials, and
| :math:`\Gamma` is the gamma function. When :math:`n` is an integer,
| :math:`\frac{\Gamma(N+n)}{N!\Gamma(n)} = \binom{N+n-1}{N}`, which is
|         the more common form of this term in the pmf. The negative
| binomial distribution gives the probability of N failures given n
| successes, with a success on the last trial.
|
| If one throws a die repeatedly until the third time a "1" appears,
| then the probability distribution of the number of non-"1"s that
| appear before the third "1" is a negative binomial distribution.
|
| References
| ----------
| .. [1] Weisstein, Eric W. "Negative Binomial Distribution." From
| MathWorld--A Wolfram Web Resource.
| http://mathworld.wolfram.com/NegativeBinomialDistribution.html
| .. [2] Wikipedia, "Negative binomial distribution",
| https://en.wikipedia.org/wiki/Negative_binomial_distribution
|
| Examples
| --------
| Draw samples from the distribution:
|
| A real world example. A company drills wild-cat oil
| exploration wells, each with an estimated probability of
| success of 0.1. What is the probability of having one success
| for each successive well, that is what is the probability of a
| single success after drilling 5 wells, after 6 wells, etc.?
|
| >>> s = np.random.default_rng().negative_binomial(1, 0.1, 100000)
| >>> for i in range(1, 11): # doctest: +SKIP
| ... probability = sum(s<i) / 100000.
| ... print(i, "wells drilled, probability of one success =", probability)
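|
|         Analytically, the probability of at least one success within the
|         first ``i`` wells is ``1 - 0.9**i`` (about 0.41 for ``i = 5``),
|         which the printed estimates should approximate.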
|
| noncentral_chisquare(...)
| noncentral_chisquare(df, nonc, size=None)
|
| Draw samples from a noncentral chi-square distribution.
|
| The noncentral :math:`\chi^2` distribution is a generalization of
| the :math:`\chi^2` distribution.
|
| Parameters
| ----------
| df : float or array_like of floats
| Degrees of freedom, must be > 0.
|
| .. versionchanged:: 1.10.0
| Earlier NumPy versions required dfnum > 1.
| nonc : float or array_like of floats
| Non-centrality, must be non-negative.
| size : int or tuple of ints, optional
| Output shape. If the given shape is, e.g., ``(m, n, k)``, then
| ``m * n * k`` samples are drawn. If size is ``None`` (default),
| a single value is returned if ``df`` and ``nonc`` are both scalars.
| Otherwise, ``np.broadcast(df, nonc).size`` samples are drawn.
|
| Returns
| -------
| out : ndarray or scalar
| Drawn samples from the parameterized noncentral chi-square distribution.
|
| Notes
| -----
| The probability density function for the noncentral Chi-square
| distribution is
|
| .. math:: P(x;df,nonc) = \sum^{\infty}_{i=0}
| \frac{e^{-nonc/2}(nonc/2)^{i}}{i!}
| P_{Y_{df+2i}}(x),
|
| where :math:`Y_{q}` is the Chi-square with q degrees of freedom.
|
| References
| ----------
| .. [1] Wikipedia, "Noncentral chi-squared distribution"
| https://en.wikipedia.org/wiki/Noncentral_chi-squared_distribution
|
| Examples
| --------
| Draw values from the distribution and plot the histogram
|
| >>> rng = np.random.default_rng()
| >>> import matplotlib.pyplot as plt
| >>> values = plt.hist(rng.noncentral_chisquare(3, 20, 100000),
| ... bins=200, density=True)
| >>> plt.show()
|
| Draw values from a noncentral chisquare with very small noncentrality,
| and compare to a chisquare.
|
| >>> plt.figure()
| >>> values = plt.hist(rng.noncentral_chisquare(3, .0000001, 100000),
| ... bins=np.arange(0., 25, .1), density=True)
| >>> values2 = plt.hist(rng.chisquare(3, 100000),
| ... bins=np.arange(0., 25, .1), density=True)
| >>> plt.plot(values[1][0:-1], values[0]-values2[0], 'ob')
| >>> plt.show()
|
| Demonstrate how large values of non-centrality lead to a more symmetric
| distribution.
|
| >>> plt.figure()
| >>> values = plt.hist(rng.noncentral_chisquare(3, 20, 100000),
| ... bins=200, density=True)
| >>> plt.show()
|
| noncentral_f(...)
| noncentral_f(dfnum, dfden, nonc, size=None)
|
| Draw samples from the noncentral F distribution.
|
| Samples are drawn from an F distribution with specified parameters,
| `dfnum` (degrees of freedom in numerator) and `dfden` (degrees of
|      freedom in denominator), where both parameters must be > 0.
| `nonc` is the non-centrality parameter.
|
| Parameters
| ----------
| dfnum : float or array_like of floats
| Numerator degrees of freedom, must be > 0.
|
| .. versionchanged:: 1.14.0
| Earlier NumPy versions required dfnum > 1.
| dfden : float or array_like of floats
| Denominator degrees of freedom, must be > 0.
| nonc : float or array_like of floats
| Non-centrality parameter, the sum of the squares of the numerator
| means, must be >= 0.
| size : int or tuple of ints, optional
| Output shape. If the given shape is, e.g., ``(m, n, k)``, then
| ``m * n * k`` samples are drawn. If size is ``None`` (default),
| a single value is returned if ``dfnum``, ``dfden``, and ``nonc``
| are all scalars. Otherwise, ``np.broadcast(dfnum, dfden, nonc).size``
| samples are drawn.
|
| Returns
| -------
| out : ndarray or scalar
| Drawn samples from the parameterized noncentral Fisher distribution.
|
| Notes
| -----
| When calculating the power of an experiment (power = probability of
| rejecting the null hypothesis when a specific alternative is true) the
| non-central F statistic becomes important. When the null hypothesis is
| true, the F statistic follows a central F distribution. When the null
| hypothesis is not true, then it follows a non-central F statistic.
|
| References
| ----------
| .. [1] Weisstein, Eric W. "Noncentral F-Distribution."
| From MathWorld--A Wolfram Web Resource.
| http://mathworld.wolfram.com/NoncentralF-Distribution.html
| .. [2] Wikipedia, "Noncentral F-distribution",
| https://en.wikipedia.org/wiki/Noncentral_F-distribution
|
| Examples
| --------
| In a study, testing for a specific alternative to the null hypothesis
| requires use of the Noncentral F distribution. We need to calculate the
| area in the tail of the distribution that exceeds the value of the F
| distribution for the null hypothesis. We'll plot the two probability
| distributions for comparison.
|
| >>> rng = np.random.default_rng()
| >>> dfnum = 3 # between group deg of freedom
| >>> dfden = 20 # within groups degrees of freedom
| >>> nonc = 3.0
| >>> nc_vals = rng.noncentral_f(dfnum, dfden, nonc, 1000000)
| >>> NF = np.histogram(nc_vals, bins=50, density=True)
| >>> c_vals = rng.f(dfnum, dfden, 1000000)
| >>> F = np.histogram(c_vals, bins=50, density=True)
| >>> import matplotlib.pyplot as plt
| >>> plt.plot(F[1][1:], F[0])
| >>> plt.plot(NF[1][1:], NF[0])
| >>> plt.show()
|
| normal(...)
| normal(loc=0.0, scale=1.0, size=None)
|
| Draw random samples from a normal (Gaussian) distribution.
|
| The probability density function of the normal distribution, first
| derived by De Moivre and 200 years later by both Gauss and Laplace
| independently [2]_, is often called the bell curve because of
| its characteristic shape (see the example below).
|
| The normal distributions occurs often in nature. For example, it
| describes the commonly occurring distribution of samples influenced
| by a large number of tiny, random disturbances, each with its own
| unique distribution [2]_.
|
| Parameters
| ----------
| loc : float or array_like of floats
| Mean ("centre") of the distribution.
| scale : float or array_like of floats
| Standard deviation (spread or "width") of the distribution. Must be
| non-negative.
| size : int or tuple of ints, optional
| Output shape. If the given shape is, e.g., ``(m, n, k)``, then
| ``m * n * k`` samples are drawn. If size is ``None`` (default),
| a single value is returned if ``loc`` and ``scale`` are both scalars.
| Otherwise, ``np.broadcast(loc, scale).size`` samples are drawn.
|
| Returns
| -------
| out : ndarray or scalar
| Drawn samples from the parameterized normal distribution.
|
| See Also
| --------
| scipy.stats.norm : probability density function, distribution or
| cumulative density function, etc.
|
| Notes
| -----
| The probability density for the Gaussian distribution is
|
| .. math:: p(x) = \frac{1}{\sqrt{ 2 \pi \sigma^2 }}
| e^{ - \frac{ (x - \mu)^2 } {2 \sigma^2} },
|
| where :math:`\mu` is the mean and :math:`\sigma` the standard
| deviation. The square of the standard deviation, :math:`\sigma^2`,
| is called the variance.
|
| The function has its peak at the mean, and its "spread" increases with
| the standard deviation (the function reaches 0.607 times its maximum at
| :math:`x + \sigma` and :math:`x - \sigma` [2]_). This implies that
| :meth:`normal` is more likely to return samples lying close to the
| mean, rather than those far away.
|
| References
| ----------
| .. [1] Wikipedia, "Normal distribution",
| https://en.wikipedia.org/wiki/Normal_distribution
| .. [2] P. R. Peebles Jr., "Central Limit Theorem" in "Probability,
| Random Variables and Random Signal Principles", 4th ed., 2001,
| pp. 51, 51, 125.
|
| Examples
| --------
| Draw samples from the distribution:
|
| >>> mu, sigma = 0, 0.1 # mean and standard deviation
| >>> s = np.random.default_rng().normal(mu, sigma, 1000)
|
| Verify the mean and the variance:
|
| >>> abs(mu - np.mean(s))
| 0.0 # may vary
|
| >>> abs(sigma - np.std(s, ddof=1))
| 0.1 # may vary
|
| Display the histogram of the samples, along with
| the probability density function:
|
| >>> import matplotlib.pyplot as plt
| >>> count, bins, ignored = plt.hist(s, 30, density=True)
| >>> plt.plot(bins, 1/(sigma * np.sqrt(2 * np.pi)) *
| ... np.exp( - (bins - mu)**2 / (2 * sigma**2) ),
| ... linewidth=2, color='r')
| >>> plt.show()
|
| Two-by-four array of samples from N(3, 6.25):
|
| >>> np.random.default_rng().normal(3, 2.5, size=(2, 4))
| array([[-4.49401501, 4.00950034, -1.81814867, 7.29718677], # random
| [ 0.39924804, 4.68456316, 4.99394529, 4.84057254]]) # random
|
| pareto(...)
| pareto(a, size=None)
|
| Draw samples from a Pareto II or Lomax distribution with
| specified shape.
|
| The Lomax or Pareto II distribution is a shifted Pareto
| distribution. The classical Pareto distribution can be
| obtained from the Lomax distribution by adding 1 and
| multiplying by the scale parameter ``m`` (see Notes). The
| smallest value of the Lomax distribution is zero while for the
| classical Pareto distribution it is ``mu``, where the standard
| Pareto distribution has location ``mu = 1``. Lomax can also
| be considered as a simplified version of the Generalized
| Pareto distribution (available in SciPy), with the scale set
| to one and the location set to zero.
|
| The Pareto distribution must be greater than zero, and is
| unbounded above. It is also known as the "80-20 rule". In
| this distribution, 80 percent of the weights are in the lowest
| 20 percent of the range, while the other 20 percent fill the
| remaining 80 percent of the range.
|
| Parameters
| ----------
| a : float or array_like of floats
| Shape of the distribution. Must be positive.
| size : int or tuple of ints, optional
| Output shape. If the given shape is, e.g., ``(m, n, k)``, then
| ``m * n * k`` samples are drawn. If size is ``None`` (default),
| a single value is returned if ``a`` is a scalar. Otherwise,
| ``np.array(a).size`` samples are drawn.
|
| Returns
| -------
| out : ndarray or scalar
| Drawn samples from the parameterized Pareto distribution.
|
| See Also
| --------
| scipy.stats.lomax : probability density function, distribution or
| cumulative density function, etc.
| scipy.stats.genpareto : probability density function, distribution or
| cumulative density function, etc.
|
| Notes
| -----
| The probability density for the Pareto distribution is
|
| .. math:: p(x) = \frac{am^a}{x^{a+1}}
|
| where :math:`a` is the shape and :math:`m` the scale.
|
| The Pareto distribution, named after the Italian economist
| Vilfredo Pareto, is a power law probability distribution
| useful in many real world problems. Outside the field of
| economics it is generally referred to as the Bradford
| distribution. Pareto developed the distribution to describe
| the distribution of wealth in an economy. It has also found
| use in insurance, web page access statistics, oil field sizes,
| and many other problems, including the download frequency for
| projects in Sourceforge [1]_. It is one of the so-called
| "fat-tailed" distributions.
|
|
| References
| ----------
| .. [1] Francis Hunt and Paul Johnson, On the Pareto Distribution of
| Sourceforge projects.
| .. [2] Pareto, V. (1896). Course of Political Economy. Lausanne.
| .. [3] Reiss, R.D., Thomas, M.(2001), Statistical Analysis of Extreme
| Values, Birkhauser Verlag, Basel, pp 23-30.
| .. [4] Wikipedia, "Pareto distribution",
| https://en.wikipedia.org/wiki/Pareto_distribution
|
| Examples
| --------
| Draw samples from the distribution:
|
| >>> a, m = 3., 2. # shape and mode
| >>> s = (np.random.default_rng().pareto(a, 1000) + 1) * m
|
| Display the histogram of the samples, along with the probability
| density function:
|
| >>> import matplotlib.pyplot as plt
| >>> count, bins, _ = plt.hist(s, 100, density=True)
| >>> fit = a*m**a / bins**(a+1)
| >>> plt.plot(bins, max(count)*fit/max(fit), linewidth=2, color='r')
| >>> plt.show()
|
| permutation(...)
| permutation(x, axis=0)
|
| Randomly permute a sequence, or return a permuted range.
|
| Parameters
| ----------
| x : int or array_like
| If `x` is an integer, randomly permute ``np.arange(x)``.
| If `x` is an array, make a copy and shuffle the elements
| randomly.
| axis : int, optional
| The axis which `x` is shuffled along. Default is 0.
|
| Returns
| -------
| out : ndarray
| Permuted sequence or array range.
|
| Examples
| --------
| >>> rng = np.random.default_rng()
| >>> rng.permutation(10)
| array([1, 7, 4, 3, 0, 9, 2, 5, 8, 6]) # random
|
| >>> rng.permutation([1, 4, 9, 12, 15])
| array([15, 1, 9, 4, 12]) # random
|
| >>> arr = np.arange(9).reshape((3, 3))
| >>> rng.permutation(arr)
| array([[6, 7, 8], # random
| [0, 1, 2],
| [3, 4, 5]])
|
| >>> rng.permutation("abc")
| Traceback (most recent call last):
| ...
| numpy.AxisError: x must be an integer or at least 1-dimensional
|
| >>> arr = np.arange(9).reshape((3, 3))
| >>> rng.permutation(arr, axis=1)
| array([[0, 2, 1], # random
| [3, 5, 4],
| [6, 8, 7]])
|
| poisson(...)
| poisson(lam=1.0, size=None)
|
| Draw samples from a Poisson distribution.
|
| The Poisson distribution is the limit of the binomial distribution
| for large N.
|
| Parameters
| ----------
| lam : float or array_like of floats
| Expectation of interval, must be >= 0. A sequence of expectation
| intervals must be broadcastable over the requested size.
| size : int or tuple of ints, optional
| Output shape. If the given shape is, e.g., ``(m, n, k)``, then
| ``m * n * k`` samples are drawn. If size is ``None`` (default),
| a single value is returned if ``lam`` is a scalar. Otherwise,
| ``np.array(lam).size`` samples are drawn.
|
| Returns
| -------
| out : ndarray or scalar
| Drawn samples from the parameterized Poisson distribution.
|
| Notes
| -----
| The Poisson distribution
|
| .. math:: f(k; \lambda)=\frac{\lambda^k e^{-\lambda}}{k!}
|
| For events with an expected separation :math:`\lambda` the Poisson
| distribution :math:`f(k; \lambda)` describes the probability of
| :math:`k` events occurring within the observed
| interval :math:`\lambda`.
|
| Because the output is limited to the range of the C int64 type, a
| ValueError is raised when `lam` is within 10 sigma of the maximum
| representable value.
|
| References
| ----------
| .. [1] Weisstein, Eric W. "Poisson Distribution."
| From MathWorld--A Wolfram Web Resource.
| http://mathworld.wolfram.com/PoissonDistribution.html
| .. [2] Wikipedia, "Poisson distribution",
| https://en.wikipedia.org/wiki/Poisson_distribution
|
| Examples
| --------
| Draw samples from the distribution:
|
| >>> import numpy as np
| >>> rng = np.random.default_rng()
| >>> s = rng.poisson(5, 10000)
|
| Display histogram of the sample:
|
| >>> import matplotlib.pyplot as plt
| >>> count, bins, ignored = plt.hist(s, 14, density=True)
| >>> plt.show()
|
| Draw each 100 values for lambda 100 and 500:
|
| >>> s = rng.poisson(lam=(100., 500.), size=(100, 2))
|
| power(...)
| power(a, size=None)
|
| Draws samples in [0, 1] from a power distribution with positive
| exponent a - 1.
|
| Also known as the power function distribution.
|
| Parameters
| ----------
| a : float or array_like of floats
| Parameter of the distribution. Must be non-negative.
| size : int or tuple of ints, optional
| Output shape. If the given shape is, e.g., ``(m, n, k)``, then
| ``m * n * k`` samples are drawn. If size is ``None`` (default),
| a single value is returned if ``a`` is a scalar. Otherwise,
| ``np.array(a).size`` samples are drawn.
|
| Returns
| -------
| out : ndarray or scalar
| Drawn samples from the parameterized power distribution.
|
| Raises
| ------
| ValueError
| If a < 1.
|
| Notes
| -----
| The probability density function is
|
| .. math:: P(x; a) = ax^{a-1}, 0 \le x \le 1, a>0.
|
| The power function distribution is just the inverse of the Pareto
| distribution. It may also be seen as a special case of the Beta
| distribution.
|
| It is used, for example, in modeling the over-reporting of insurance
| claims.
|
| References
| ----------
| .. [1] Christian Kleiber, Samuel Kotz, "Statistical size distributions
| in economics and actuarial sciences", Wiley, 2003.
| .. [2] Heckert, N. A. and Filliben, James J. "NIST Handbook 148:
| Dataplot Reference Manual, Volume 2: Let Subcommands and Library
| Functions", National Institute of Standards and Technology
| Handbook Series, June 2003.
| https://www.itl.nist.gov/div898/software/dataplot/refman2/auxillar/powpdf.pdf
|
| Examples
| --------
| Draw samples from the distribution:
|
| >>> rng = np.random.default_rng()
| >>> a = 5. # shape
| >>> samples = 1000
| >>> s = rng.power(a, samples)
|
| Display the histogram of the samples, along with
| the probability density function:
|
| >>> import matplotlib.pyplot as plt
| >>> count, bins, ignored = plt.hist(s, bins=30)
| >>> x = np.linspace(0, 1, 100)
| >>> y = a*x**(a-1.)
| >>> normed_y = samples*np.diff(bins)[0]*y
| >>> plt.plot(x, normed_y)
| >>> plt.show()
|
| Compare the power function distribution to the inverse of the Pareto.
|
| >>> from scipy import stats # doctest: +SKIP
| >>> rvs = rng.power(5, 1000000)
| >>> rvsp = rng.pareto(5, 1000000)
| >>> xx = np.linspace(0,1,100)
| >>> powpdf = stats.powerlaw.pdf(xx,5) # doctest: +SKIP
|
| >>> plt.figure()
| >>> plt.hist(rvs, bins=50, density=True)
| >>> plt.plot(xx,powpdf,'r-') # doctest: +SKIP
| >>> plt.title('power(5)')
|
| >>> plt.figure()
| >>> plt.hist(1./(1.+rvsp), bins=50, density=True)
| >>> plt.plot(xx,powpdf,'r-') # doctest: +SKIP
| >>> plt.title('inverse of 1 + Generator.pareto(5)')
|
| >>> plt.figure()
| >>> plt.hist(1./(1.+rvsp), bins=50, density=True)
| >>> plt.plot(xx,powpdf,'r-') # doctest: +SKIP
| >>> plt.title('inverse of stats.pareto(5)')
|
| random(...)
| random(size=None, dtype=np.float64, out=None)
|
| Return random floats in the half-open interval [0.0, 1.0).
|
| Results are from the "continuous uniform" distribution over the
| stated interval. To sample :math:`Unif[a, b), b > a` multiply
| the output of `random` by `(b-a)` and add `a`::
|
| (b - a) * random() + a
|
| Parameters
| ----------
| size : int or tuple of ints, optional
| Output shape. If the given shape is, e.g., ``(m, n, k)``, then
| ``m * n * k`` samples are drawn. Default is None, in which case a
| single value is returned.
| dtype : dtype, optional
| Desired dtype of the result, only `float64` and `float32` are supported.
| Byteorder must be native. The default value is np.float64.
| out : ndarray, optional
| Alternative output array in which to place the result. If size is not None,
| it must have the same shape as the provided size and must match the type of
| the output values.
|
| Returns
| -------
| out : float or ndarray of floats
| Array of random floats of shape `size` (unless ``size=None``, in which
| case a single float is returned).
|
| Examples
| --------
| >>> rng = np.random.default_rng()
| >>> rng.random()
| 0.47108547995356098 # random
| >>> type(rng.random())
| <class 'float'>
| >>> rng.random((5,))
| array([ 0.30220482, 0.86820401, 0.1654503 , 0.11659149, 0.54323428]) # random
|
| Three-by-two array of random numbers from [-5, 0):
|
| >>> 5 * rng.random((3, 2)) - 5
| array([[-3.99149989, -0.52338984], # random
| [-2.99091858, -0.79479508],
| [-1.23204345, -1.75224494]])
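 |
 |      A small added sketch (not part of the upstream docstring) of the
 |      ``dtype`` and ``out`` arguments described above; the variable names
 |      are illustrative only:
 |
 |      >>> x = rng.random(3, dtype=np.float32)   # lower-precision draws
 |      >>> x.dtype
 |      dtype('float32')
 |      >>> buf = np.empty(4)
 |      >>> _ = rng.random(out=buf)               # fill an existing float64 array in place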
|
| rayleigh(...)
| rayleigh(scale=1.0, size=None)
|
| Draw samples from a Rayleigh distribution.
|
| The :math:`\chi` and Weibull distributions are generalizations of the
| Rayleigh.
|
| Parameters
| ----------
| scale : float or array_like of floats, optional
| Scale, also equals the mode. Must be non-negative. Default is 1.
| size : int or tuple of ints, optional
| Output shape. If the given shape is, e.g., ``(m, n, k)``, then
| ``m * n * k`` samples are drawn. If size is ``None`` (default),
| a single value is returned if ``scale`` is a scalar. Otherwise,
| ``np.array(scale).size`` samples are drawn.
|
| Returns
| -------
| out : ndarray or scalar
| Drawn samples from the parameterized Rayleigh distribution.
|
| Notes
| -----
| The probability density function for the Rayleigh distribution is
|
| .. math:: P(x;scale) = \frac{x}{scale^2}e^{\frac{-x^2}{2 \cdotp scale^2}}
|
| The Rayleigh distribution would arise, for example, if the East
| and North components of the wind velocity had identical zero-mean
| Gaussian distributions. Then the wind speed would have a Rayleigh
| distribution.
|
| References
| ----------
| .. [1] Brighton Webs Ltd., "Rayleigh Distribution,"
| https://web.archive.org/web/20090514091424/http://brighton-webs.co.uk:80/distributions/rayleigh.asp
| .. [2] Wikipedia, "Rayleigh distribution"
| https://en.wikipedia.org/wiki/Rayleigh_distribution
|
| Examples
| --------
| Draw values from the distribution and plot the histogram
|
| >>> from matplotlib.pyplot import hist
| >>> rng = np.random.default_rng()
| >>> values = hist(rng.rayleigh(3, 100000), bins=200, density=True)
|
| Wave heights tend to follow a Rayleigh distribution. If the mean wave
| height is 1 meter, what fraction of waves are likely to be larger than 3
| meters?
|
| >>> meanvalue = 1
| >>> modevalue = np.sqrt(2 / np.pi) * meanvalue
| >>> s = rng.rayleigh(modevalue, 1000000)
|
| The percentage of waves larger than 3 meters is:
|
| >>> 100.*sum(s>3)/1000000.
| 0.087300000000000003 # random
|
| shuffle(...)
| shuffle(x, axis=0)
|
| Modify a sequence in-place by shuffling its contents.
|
| The order of sub-arrays is changed but their contents remains the same.
|
| Parameters
| ----------
| x : array_like
| The array or list to be shuffled.
| axis : int, optional
| The axis which `x` is shuffled along. Default is 0.
| It is only supported on `ndarray` objects.
|
| Returns
| -------
| None
|
| Examples
| --------
| >>> rng = np.random.default_rng()
| >>> arr = np.arange(10)
| >>> rng.shuffle(arr)
| >>> arr
| [1 7 5 2 9 4 3 6 0 8] # random
|
| >>> arr = np.arange(9).reshape((3, 3))
| >>> rng.shuffle(arr)
| >>> arr
| array([[3, 4, 5], # random
| [6, 7, 8],
| [0, 1, 2]])
|
| >>> arr = np.arange(9).reshape((3, 3))
| >>> rng.shuffle(arr, axis=1)
| >>> arr
| array([[2, 0, 1], # random
| [5, 3, 4],
| [8, 6, 7]])
|
| standard_cauchy(...)
| standard_cauchy(size=None)
|
| Draw samples from a standard Cauchy distribution with mode = 0.
|
| Also known as the Lorentz distribution.
|
| Parameters
| ----------
| size : int or tuple of ints, optional
| Output shape. If the given shape is, e.g., ``(m, n, k)``, then
| ``m * n * k`` samples are drawn. Default is None, in which case a
| single value is returned.
|
| Returns
| -------
| samples : ndarray or scalar
| The drawn samples.
|
| Notes
| -----
| The probability density function for the full Cauchy distribution is
|
| .. math:: P(x; x_0, \gamma) = \frac{1}{\pi \gamma \bigl[ 1+
| (\frac{x-x_0}{\gamma})^2 \bigr] }
|
| and the Standard Cauchy distribution just sets :math:`x_0=0` and
| :math:`\gamma=1`
|
| The Cauchy distribution arises in the solution to the driven harmonic
| oscillator problem, and also describes spectral line broadening. It
| also describes the distribution of values at which a line tilted at
| a random angle will cut the x axis.
|
| When studying hypothesis tests that assume normality, seeing how the
| tests perform on data from a Cauchy distribution is a good indicator of
| their sensitivity to a heavy-tailed distribution, since the Cauchy looks
| very much like a Gaussian distribution, but with heavier tails.
|
| References
| ----------
| .. [1] NIST/SEMATECH e-Handbook of Statistical Methods, "Cauchy
| Distribution",
| https://www.itl.nist.gov/div898/handbook/eda/section3/eda3663.htm
| .. [2] Weisstein, Eric W. "Cauchy Distribution." From MathWorld--A
| Wolfram Web Resource.
| http://mathworld.wolfram.com/CauchyDistribution.html
| .. [3] Wikipedia, "Cauchy distribution"
| https://en.wikipedia.org/wiki/Cauchy_distribution
|
| Examples
| --------
| Draw samples and plot the distribution:
|
| >>> import matplotlib.pyplot as plt
| >>> s = np.random.default_rng().standard_cauchy(1000000)
| >>> s = s[(s>-25) & (s<25)] # truncate distribution so it plots well
| >>> plt.hist(s, bins=100)
| >>> plt.show()
|
| standard_exponential(...)
| standard_exponential(size=None, dtype=np.float64, method='zig', out=None)
|
| Draw samples from the standard exponential distribution.
|
| `standard_exponential` is identical to the exponential distribution
| with a scale parameter of 1.
|
| Parameters
| ----------
| size : int or tuple of ints, optional
| Output shape. If the given shape is, e.g., ``(m, n, k)``, then
| ``m * n * k`` samples are drawn. Default is None, in which case a
| single value is returned.
| dtype : dtype, optional
| Desired dtype of the result, only `float64` and `float32` are supported.
| Byteorder must be native. The default value is np.float64.
| method : str, optional
 |          Either 'inv' or 'zig'. 'inv' uses the classical inverse CDF
 |          method. 'zig' (the default) uses the much faster Ziggurat method
 |          of Marsaglia and Tsang.
| out : ndarray, optional
| Alternative output array in which to place the result. If size is not None,
| it must have the same shape as the provided size and must match the type of
| the output values.
|
| Returns
| -------
| out : float or ndarray
| Drawn samples.
|
| Examples
| --------
| Output a 3x8000 array:
|
| >>> n = np.random.default_rng().standard_exponential((3, 8000))
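 |
 |      A brief added sketch (illustrative, not from the upstream docstring)
 |      of the ``method`` and ``dtype`` options documented above:
 |
 |      >>> rng = np.random.default_rng()
 |      >>> s = rng.standard_exponential(1000, method='inv')   # inverse-CDF sampling
 |      >>> s32 = rng.standard_exponential(1000, dtype=np.float32)
 |      >>> s32.dtype
 |      dtype('float32')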
|
| standard_gamma(...)
| standard_gamma(shape, size=None, dtype=np.float64, out=None)
|
| Draw samples from a standard Gamma distribution.
|
| Samples are drawn from a Gamma distribution with specified parameters,
| shape (sometimes designated "k") and scale=1.
|
| Parameters
| ----------
| shape : float or array_like of floats
| Parameter, must be non-negative.
| size : int or tuple of ints, optional
| Output shape. If the given shape is, e.g., ``(m, n, k)``, then
| ``m * n * k`` samples are drawn. If size is ``None`` (default),
| a single value is returned if ``shape`` is a scalar. Otherwise,
| ``np.array(shape).size`` samples are drawn.
| dtype : dtype, optional
| Desired dtype of the result, only `float64` and `float32` are supported.
| Byteorder must be native. The default value is np.float64.
| out : ndarray, optional
| Alternative output array in which to place the result. If size is
| not None, it must have the same shape as the provided size and
| must match the type of the output values.
|
| Returns
| -------
| out : ndarray or scalar
| Drawn samples from the parameterized standard gamma distribution.
|
| See Also
| --------
| scipy.stats.gamma : probability density function, distribution or
| cumulative density function, etc.
|
| Notes
| -----
| The probability density for the Gamma distribution is
|
| .. math:: p(x) = x^{k-1}\frac{e^{-x/\theta}}{\theta^k\Gamma(k)},
|
| where :math:`k` is the shape and :math:`\theta` the scale,
| and :math:`\Gamma` is the Gamma function.
|
| The Gamma distribution is often used to model the times to failure of
| electronic components, and arises naturally in processes for which the
| waiting times between Poisson distributed events are relevant.
|
| References
| ----------
| .. [1] Weisstein, Eric W. "Gamma Distribution." From MathWorld--A
| Wolfram Web Resource.
| http://mathworld.wolfram.com/GammaDistribution.html
| .. [2] Wikipedia, "Gamma distribution",
| https://en.wikipedia.org/wiki/Gamma_distribution
|
| Examples
| --------
| Draw samples from the distribution:
|
| >>> shape, scale = 2., 1. # mean and width
| >>> s = np.random.default_rng().standard_gamma(shape, 1000000)
|
| Display the histogram of the samples, along with
| the probability density function:
|
| >>> import matplotlib.pyplot as plt
| >>> import scipy.special as sps # doctest: +SKIP
| >>> count, bins, ignored = plt.hist(s, 50, density=True)
| >>> y = bins**(shape-1) * ((np.exp(-bins/scale))/ # doctest: +SKIP
| ... (sps.gamma(shape) * scale**shape))
| >>> plt.plot(bins, y, linewidth=2, color='r') # doctest: +SKIP
| >>> plt.show()
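 |
 |      A minimal added sketch (not part of the upstream docstring) of the
 |      ``dtype`` and ``out`` arguments documented above:
 |
 |      >>> out = np.empty(5, dtype=np.float32)
 |      >>> _ = np.random.default_rng().standard_gamma(2.0, dtype=np.float32, out=out)
 |      >>> out.dtype                 # draws were written into ``out`` in place
 |      dtype('float32')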
|
| standard_normal(...)
| standard_normal(size=None, dtype=np.float64, out=None)
|
| Draw samples from a standard Normal distribution (mean=0, stdev=1).
|
| Parameters
| ----------
| size : int or tuple of ints, optional
| Output shape. If the given shape is, e.g., ``(m, n, k)``, then
| ``m * n * k`` samples are drawn. Default is None, in which case a
| single value is returned.
| dtype : dtype, optional
| Desired dtype of the result, only `float64` and `float32` are supported.
| Byteorder must be native. The default value is np.float64.
| out : ndarray, optional
| Alternative output array in which to place the result. If size is not None,
| it must have the same shape as the provided size and must match the type of
| the output values.
|
| Returns
| -------
| out : float or ndarray
| A floating-point array of shape ``size`` of drawn samples, or a
| single sample if ``size`` was not specified.
|
| See Also
| --------
| normal :
| Equivalent function with additional ``loc`` and ``scale`` arguments
| for setting the mean and standard deviation.
|
| Notes
| -----
| For random samples from :math:`N(\mu, \sigma^2)`, use one of::
|
| mu + sigma * gen.standard_normal(size=...)
| gen.normal(mu, sigma, size=...)
|
| Examples
| --------
| >>> rng = np.random.default_rng()
| >>> rng.standard_normal()
| 2.1923875335537315 #random
|
| >>> s = rng.standard_normal(8000)
| >>> s
| array([ 0.6888893 , 0.78096262, -0.89086505, ..., 0.49876311, # random
| -0.38672696, -0.4685006 ]) # random
| >>> s.shape
| (8000,)
| >>> s = rng.standard_normal(size=(3, 4, 2))
| >>> s.shape
| (3, 4, 2)
|
| Two-by-four array of samples from :math:`N(3, 6.25)`:
|
| >>> 3 + 2.5 * rng.standard_normal(size=(2, 4))
| array([[-4.49401501, 4.00950034, -1.81814867, 7.29718677], # random
| [ 0.39924804, 4.68456316, 4.99394529, 4.84057254]]) # random
|
| standard_t(...)
| standard_t(df, size=None)
|
| Draw samples from a standard Student's t distribution with `df` degrees
| of freedom.
|
| A special case of the hyperbolic distribution. As `df` gets
| large, the result resembles that of the standard normal
| distribution (`standard_normal`).
|
| Parameters
| ----------
| df : float or array_like of floats
| Degrees of freedom, must be > 0.
| size : int or tuple of ints, optional
| Output shape. If the given shape is, e.g., ``(m, n, k)``, then
| ``m * n * k`` samples are drawn. If size is ``None`` (default),
| a single value is returned if ``df`` is a scalar. Otherwise,
| ``np.array(df).size`` samples are drawn.
|
| Returns
| -------
| out : ndarray or scalar
| Drawn samples from the parameterized standard Student's t distribution.
|
| Notes
| -----
| The probability density function for the t distribution is
|
| .. math:: P(x, df) = \frac{\Gamma(\frac{df+1}{2})}{\sqrt{\pi df}
| \Gamma(\frac{df}{2})}\Bigl( 1+\frac{x^2}{df} \Bigr)^{-(df+1)/2}
|
| The t test is based on an assumption that the data come from a
| Normal distribution. The t test provides a way to test whether
| the sample mean (that is the mean calculated from the data) is
| a good estimate of the true mean.
|
| The derivation of the t-distribution was first published in
| 1908 by William Gosset while working for the Guinness Brewery
| in Dublin. Due to proprietary issues, he had to publish under
| a pseudonym, and so he used the name Student.
|
| References
| ----------
| .. [1] Dalgaard, Peter, "Introductory Statistics With R",
| Springer, 2002.
| .. [2] Wikipedia, "Student's t-distribution"
| https://en.wikipedia.org/wiki/Student's_t-distribution
|
| Examples
| --------
| From Dalgaard page 83 [1]_, suppose the daily energy intake for 11
| women in kilojoules (kJ) is:
|
| >>> intake = np.array([5260., 5470, 5640, 6180, 6390, 6515, 6805, 7515, \
| ... 7515, 8230, 8770])
|
| Does their energy intake deviate systematically from the recommended
| value of 7725 kJ?
|
| We have 10 degrees of freedom, so is the sample mean within 95% of the
| recommended value?
|
| >>> s = np.random.default_rng().standard_t(10, size=100000)
| >>> np.mean(intake)
| 6753.636363636364
| >>> intake.std(ddof=1)
| 1142.1232221373727
|
| Calculate the t statistic, setting the ddof parameter to the unbiased
| value so the divisor in the standard deviation will be degrees of
| freedom, N-1.
|
| >>> t = (np.mean(intake)-7725)/(intake.std(ddof=1)/np.sqrt(len(intake)))
| >>> import matplotlib.pyplot as plt
| >>> h = plt.hist(s, bins=100, density=True)
|
| For a one-sided t-test, how far out in the distribution does the t
| statistic appear?
|
| >>> np.sum(s<t) / float(len(s))
| 0.0090699999999999999 #random
|
 |      So the p-value is about 0.009: if the null hypothesis were true, a t
 |      statistic this extreme would be observed only about 0.9% of the time.
|
| triangular(...)
| triangular(left, mode, right, size=None)
|
| Draw samples from the triangular distribution over the
| interval ``[left, right]``.
|
| The triangular distribution is a continuous probability
| distribution with lower limit left, peak at mode, and upper
| limit right. Unlike the other distributions, these parameters
| directly define the shape of the pdf.
|
| Parameters
| ----------
| left : float or array_like of floats
| Lower limit.
| mode : float or array_like of floats
| The value where the peak of the distribution occurs.
| The value must fulfill the condition ``left <= mode <= right``.
| right : float or array_like of floats
| Upper limit, must be larger than `left`.
| size : int or tuple of ints, optional
| Output shape. If the given shape is, e.g., ``(m, n, k)``, then
| ``m * n * k`` samples are drawn. If size is ``None`` (default),
| a single value is returned if ``left``, ``mode``, and ``right``
| are all scalars. Otherwise, ``np.broadcast(left, mode, right).size``
| samples are drawn.
|
| Returns
| -------
| out : ndarray or scalar
| Drawn samples from the parameterized triangular distribution.
|
| Notes
| -----
| The probability density function for the triangular distribution is
|
| .. math:: P(x;l, m, r) = \begin{cases}
| \frac{2(x-l)}{(r-l)(m-l)}& \text{for $l \leq x \leq m$},\\
| \frac{2(r-x)}{(r-l)(r-m)}& \text{for $m \leq x \leq r$},\\
| 0& \text{otherwise}.
| \end{cases}
|
| The triangular distribution is often used in ill-defined
| problems where the underlying distribution is not known, but
| some knowledge of the limits and mode exists. Often it is used
| in simulations.
|
| References
| ----------
| .. [1] Wikipedia, "Triangular distribution"
| https://en.wikipedia.org/wiki/Triangular_distribution
|
| Examples
| --------
| Draw values from the distribution and plot the histogram:
|
| >>> import matplotlib.pyplot as plt
| >>> h = plt.hist(np.random.default_rng().triangular(-3, 0, 8, 100000), bins=200,
| ... density=True)
| >>> plt.show()
|
| uniform(...)
| uniform(low=0.0, high=1.0, size=None)
|
| Draw samples from a uniform distribution.
|
| Samples are uniformly distributed over the half-open interval
| ``[low, high)`` (includes low, but excludes high). In other words,
| any value within the given interval is equally likely to be drawn
| by `uniform`.
|
| Parameters
| ----------
| low : float or array_like of floats, optional
| Lower boundary of the output interval. All values generated will be
| greater than or equal to low. The default value is 0.
| high : float or array_like of floats
| Upper boundary of the output interval. All values generated will be
| less than high. The default value is 1.0.
| size : int or tuple of ints, optional
| Output shape. If the given shape is, e.g., ``(m, n, k)``, then
| ``m * n * k`` samples are drawn. If size is ``None`` (default),
| a single value is returned if ``low`` and ``high`` are both scalars.
| Otherwise, ``np.broadcast(low, high).size`` samples are drawn.
|
| Returns
| -------
| out : ndarray or scalar
| Drawn samples from the parameterized uniform distribution.
|
| See Also
| --------
| integers : Discrete uniform distribution, yielding integers.
| random : Floats uniformly distributed over ``[0, 1)``.
|
| Notes
| -----
| The probability density function of the uniform distribution is
|
| .. math:: p(x) = \frac{1}{b - a}
|
| anywhere within the interval ``[a, b)``, and zero elsewhere.
|
| When ``high`` == ``low``, values of ``low`` will be returned.
| If ``high`` < ``low``, the results are officially undefined
| and may eventually raise an error, i.e. do not rely on this
| function to behave when passed arguments satisfying that
| inequality condition.
|
| Examples
| --------
| Draw samples from the distribution:
|
| >>> s = np.random.default_rng().uniform(-1,0,1000)
|
| All values are within the given interval:
|
| >>> np.all(s >= -1)
| True
| >>> np.all(s < 0)
| True
|
| Display the histogram of the samples, along with the
| probability density function:
|
| >>> import matplotlib.pyplot as plt
| >>> count, bins, ignored = plt.hist(s, 15, density=True)
| >>> plt.plot(bins, np.ones_like(bins), linewidth=2, color='r')
| >>> plt.show()
|
| vonmises(...)
| vonmises(mu, kappa, size=None)
|
| Draw samples from a von Mises distribution.
|
| Samples are drawn from a von Mises distribution with specified mode
| (mu) and dispersion (kappa), on the interval [-pi, pi].
|
| The von Mises distribution (also known as the circular normal
| distribution) is a continuous probability distribution on the unit
| circle. It may be thought of as the circular analogue of the normal
| distribution.
|
| Parameters
| ----------
| mu : float or array_like of floats
| Mode ("center") of the distribution.
| kappa : float or array_like of floats
| Dispersion of the distribution, has to be >=0.
| size : int or tuple of ints, optional
| Output shape. If the given shape is, e.g., ``(m, n, k)``, then
| ``m * n * k`` samples are drawn. If size is ``None`` (default),
| a single value is returned if ``mu`` and ``kappa`` are both scalars.
| Otherwise, ``np.broadcast(mu, kappa).size`` samples are drawn.
|
| Returns
| -------
| out : ndarray or scalar
| Drawn samples from the parameterized von Mises distribution.
|
| See Also
| --------
| scipy.stats.vonmises : probability density function, distribution, or
| cumulative density function, etc.
|
| Notes
| -----
| The probability density for the von Mises distribution is
|
| .. math:: p(x) = \frac{e^{\kappa cos(x-\mu)}}{2\pi I_0(\kappa)},
|
| where :math:`\mu` is the mode and :math:`\kappa` the dispersion,
| and :math:`I_0(\kappa)` is the modified Bessel function of order 0.
|
| The von Mises is named for Richard Edler von Mises, who was born in
| Austria-Hungary, in what is now the Ukraine. He fled to the United
| States in 1939 and became a professor at Harvard. He worked in
| probability theory, aerodynamics, fluid mechanics, and philosophy of
| science.
|
| References
| ----------
| .. [1] Abramowitz, M. and Stegun, I. A. (Eds.). "Handbook of
| Mathematical Functions with Formulas, Graphs, and Mathematical
| Tables, 9th printing," New York: Dover, 1972.
| .. [2] von Mises, R., "Mathematical Theory of Probability
| and Statistics", New York: Academic Press, 1964.
|
| Examples
| --------
| Draw samples from the distribution:
|
| >>> mu, kappa = 0.0, 4.0 # mean and dispersion
| >>> s = np.random.default_rng().vonmises(mu, kappa, 1000)
|
| Display the histogram of the samples, along with
| the probability density function:
|
| >>> import matplotlib.pyplot as plt
| >>> from scipy.special import i0 # doctest: +SKIP
| >>> plt.hist(s, 50, density=True)
| >>> x = np.linspace(-np.pi, np.pi, num=51)
| >>> y = np.exp(kappa*np.cos(x-mu))/(2*np.pi*i0(kappa)) # doctest: +SKIP
| >>> plt.plot(x, y, linewidth=2, color='r') # doctest: +SKIP
| >>> plt.show()
|
| wald(...)
| wald(mean, scale, size=None)
|
| Draw samples from a Wald, or inverse Gaussian, distribution.
|
| As the scale approaches infinity, the distribution becomes more like a
| Gaussian. Some references claim that the Wald is an inverse Gaussian
| with mean equal to 1, but this is by no means universal.
|
| The inverse Gaussian distribution was first studied in relationship to
| Brownian motion. In 1956 M.C.K. Tweedie used the name inverse Gaussian
| because there is an inverse relationship between the time to cover a
| unit distance and distance covered in unit time.
|
| Parameters
| ----------
| mean : float or array_like of floats
| Distribution mean, must be > 0.
| scale : float or array_like of floats
| Scale parameter, must be > 0.
| size : int or tuple of ints, optional
| Output shape. If the given shape is, e.g., ``(m, n, k)``, then
| ``m * n * k`` samples are drawn. If size is ``None`` (default),
| a single value is returned if ``mean`` and ``scale`` are both scalars.
| Otherwise, ``np.broadcast(mean, scale).size`` samples are drawn.
|
| Returns
| -------
| out : ndarray or scalar
| Drawn samples from the parameterized Wald distribution.
|
| Notes
| -----
| The probability density function for the Wald distribution is
|
| .. math:: P(x;mean,scale) = \sqrt{\frac{scale}{2\pi x^3}}e^
| \frac{-scale(x-mean)^2}{2\cdotp mean^2x}
|
 |      As noted above, the inverse Gaussian distribution first arose
| from attempts to model Brownian motion. It is also a
| competitor to the Weibull for use in reliability modeling and
| modeling stock returns and interest rate processes.
|
| References
| ----------
| .. [1] Brighton Webs Ltd., Wald Distribution,
| https://web.archive.org/web/20090423014010/http://www.brighton-webs.co.uk:80/distributions/wald.asp
| .. [2] Chhikara, Raj S., and Folks, J. Leroy, "The Inverse Gaussian
 |             Distribution: Theory, Methodology, and Applications", CRC Press,
| 1988.
| .. [3] Wikipedia, "Inverse Gaussian distribution"
| https://en.wikipedia.org/wiki/Inverse_Gaussian_distribution
|
| Examples
| --------
| Draw values from the distribution and plot the histogram:
|
| >>> import matplotlib.pyplot as plt
| >>> h = plt.hist(np.random.default_rng().wald(3, 2, 100000), bins=200, density=True)
| >>> plt.show()
|
| weibull(...)
| weibull(a, size=None)
|
| Draw samples from a Weibull distribution.
|
| Draw samples from a 1-parameter Weibull distribution with the given
| shape parameter `a`.
|
| .. math:: X = (-ln(U))^{1/a}
|
| Here, U is drawn from the uniform distribution over (0,1].
|
| The more common 2-parameter Weibull, including a scale parameter
| :math:`\lambda` is just :math:`X = \lambda(-ln(U))^{1/a}`.
|
| Parameters
| ----------
| a : float or array_like of floats
| Shape parameter of the distribution. Must be nonnegative.
| size : int or tuple of ints, optional
| Output shape. If the given shape is, e.g., ``(m, n, k)``, then
| ``m * n * k`` samples are drawn. If size is ``None`` (default),
| a single value is returned if ``a`` is a scalar. Otherwise,
| ``np.array(a).size`` samples are drawn.
|
| Returns
| -------
| out : ndarray or scalar
| Drawn samples from the parameterized Weibull distribution.
|
| See Also
| --------
| scipy.stats.weibull_max
| scipy.stats.weibull_min
| scipy.stats.genextreme
| gumbel
|
| Notes
| -----
| The Weibull (or Type III asymptotic extreme value distribution
| for smallest values, SEV Type III, or Rosin-Rammler
| distribution) is one of a class of Generalized Extreme Value
| (GEV) distributions used in modeling extreme value problems.
| This class includes the Gumbel and Frechet distributions.
|
| The probability density for the Weibull distribution is
|
| .. math:: p(x) = \frac{a}
| {\lambda}(\frac{x}{\lambda})^{a-1}e^{-(x/\lambda)^a},
|
| where :math:`a` is the shape and :math:`\lambda` the scale.
|
| The function has its peak (the mode) at
| :math:`\lambda(\frac{a-1}{a})^{1/a}`.
|
| When ``a = 1``, the Weibull distribution reduces to the exponential
| distribution.
|
| References
| ----------
| .. [1] Waloddi Weibull, Royal Technical University, Stockholm,
| 1939 "A Statistical Theory Of The Strength Of Materials",
| Ingeniorsvetenskapsakademiens Handlingar Nr 151, 1939,
| Generalstabens Litografiska Anstalts Forlag, Stockholm.
| .. [2] Waloddi Weibull, "A Statistical Distribution Function of
| Wide Applicability", Journal Of Applied Mechanics ASME Paper
| 1951.
| .. [3] Wikipedia, "Weibull distribution",
| https://en.wikipedia.org/wiki/Weibull_distribution
|
| Examples
| --------
| Draw samples from the distribution:
|
| >>> rng = np.random.default_rng()
| >>> a = 5. # shape
| >>> s = rng.weibull(a, 1000)
|
| Display the histogram of the samples, along with
| the probability density function:
|
| >>> import matplotlib.pyplot as plt
| >>> x = np.arange(1,100.)/50.
| >>> def weib(x,n,a):
| ... return (a / n) * (x / n)**(a - 1) * np.exp(-(x / n)**a)
|
| >>> count, bins, ignored = plt.hist(rng.weibull(5.,1000))
| >>> x = np.arange(1,100.)/50.
| >>> scale = count.max()/weib(x, 1., 5.).max()
| >>> plt.plot(x, weib(x, 1., 5.)*scale)
| >>> plt.show()
|
| zipf(...)
| zipf(a, size=None)
|
| Draw samples from a Zipf distribution.
|
| Samples are drawn from a Zipf distribution with specified parameter
| `a` > 1.
|
| The Zipf distribution (also known as the zeta distribution) is a
 |      discrete probability distribution that satisfies Zipf's law: the
| frequency of an item is inversely proportional to its rank in a
| frequency table.
|
| Parameters
| ----------
| a : float or array_like of floats
| Distribution parameter. Must be greater than 1.
| size : int or tuple of ints, optional
| Output shape. If the given shape is, e.g., ``(m, n, k)``, then
| ``m * n * k`` samples are drawn. If size is ``None`` (default),
| a single value is returned if ``a`` is a scalar. Otherwise,
| ``np.array(a).size`` samples are drawn.
|
| Returns
| -------
| out : ndarray or scalar
| Drawn samples from the parameterized Zipf distribution.
|
| See Also
| --------
| scipy.stats.zipf : probability density function, distribution, or
| cumulative density function, etc.
|
| Notes
| -----
| The probability density for the Zipf distribution is
|
| .. math:: p(x) = \frac{x^{-a}}{\zeta(a)},
|
| where :math:`\zeta` is the Riemann Zeta function.
|
| It is named for the American linguist George Kingsley Zipf, who noted
| that the frequency of any word in a sample of a language is inversely
| proportional to its rank in the frequency table.
|
| References
| ----------
| .. [1] Zipf, G. K., "Selected Studies of the Principle of Relative
| Frequency in Language," Cambridge, MA: Harvard Univ. Press,
| 1932.
|
| Examples
| --------
| Draw samples from the distribution:
|
| >>> a = 2. # parameter
| >>> s = np.random.default_rng().zipf(a, 1000)
|
| Display the histogram of the samples, along with
| the probability density function:
|
| >>> import matplotlib.pyplot as plt
| >>> from scipy import special # doctest: +SKIP
|
| Truncate s values at 50 so plot is interesting:
|
| >>> count, bins, ignored = plt.hist(s[s<50],
| ... 50, density=True)
| >>> x = np.arange(1., 50.)
| >>> y = x**(-a) / special.zetac(a) # doctest: +SKIP
| >>> plt.plot(x, y/max(y), linewidth=2, color='r') # doctest: +SKIP
| >>> plt.show()
|
| ----------------------------------------------------------------------
| Static methods defined here:
|
| __new__(*args, **kwargs) from builtins.type
| Create and return a new object. See help(type) for accurate signature.
|
| ----------------------------------------------------------------------
| Data descriptors defined here:
|
| bit_generator
| Gets the bit generator instance used by the generator
|
| Returns
| -------
| bit_generator : BitGenerator
| The bit generator instance used by the generator
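 |
 |      A minimal added illustration (not in the upstream help text) of
 |      retrieving the underlying bit generator:
 |
 |      >>> from numpy.random import Generator, PCG64
 |      >>> pcg = PCG64(1234)
 |      >>> gen = Generator(pcg)
 |      >>> gen.bit_generator is pcg
 |      True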
|
| ----------------------------------------------------------------------
| Data and other attributes defined here:
|
| __pyx_vtable__ = <capsule object NULL>
class MT19937(numpy.random._bit_generator.BitGenerator)
| MT19937(seed=None)
|
| Container for the Mersenne Twister pseudo-random number generator.
|
| Parameters
| ----------
| seed : {None, int, array_like[ints], SeedSequence}, optional
| A seed to initialize the `BitGenerator`. If None, then fresh,
| unpredictable entropy will be pulled from the OS. If an ``int`` or
| ``array_like[ints]`` is passed, then it will be passed to
| `SeedSequence` to derive the initial `BitGenerator` state. One may also
| pass in a `SeedSequence` instance.
|
| Attributes
| ----------
| lock: threading.Lock
 |      Lock instance that is shared so that the same bit generator can
| be used in multiple Generators without corrupting the state. Code that
| generates values from a bit generator should hold the bit generator's
| lock.
|
| Notes
| -----
| ``MT19937`` provides a capsule containing function pointers that produce
| doubles, and unsigned 32 and 64- bit integers [1]_. These are not
 |  doubles, and unsigned 32- and 64-bit integers [1]_. These are not
| or similar object that supports low-level access.
|
| The Python stdlib module "random" also contains a Mersenne Twister
| pseudo-random number generator.
|
| **State and Seeding**
|
| The ``MT19937`` state vector consists of a 624-element array of
| 32-bit unsigned integers plus a single integer value between 0 and 624
| that indexes the current position within the main array.
|
| The input seed is processed by `SeedSequence` to fill the whole state. The
| first element is reset such that only its most significant bit is set.
|
| **Parallel Features**
|
| The preferred way to use a BitGenerator in parallel applications is to use
| the `SeedSequence.spawn` method to obtain entropy values, and to use these
| to generate new BitGenerators:
|
| >>> from numpy.random import Generator, MT19937, SeedSequence
| >>> sg = SeedSequence(1234)
| >>> rg = [Generator(MT19937(s)) for s in sg.spawn(10)]
|
| Another method is to use `MT19937.jumped` which advances the state as-if
| :math:`2^{128}` random numbers have been generated ([1]_, [2]_). This
| allows the original sequence to be split so that distinct segments can be
| used in each worker process. All generators should be chained to ensure
| that the segments come from the same sequence.
|
| >>> from numpy.random import Generator, MT19937, SeedSequence
| >>> sg = SeedSequence(1234)
| >>> bit_generator = MT19937(sg)
| >>> rg = []
| >>> for _ in range(10):
| ... rg.append(Generator(bit_generator))
| ... # Chain the BitGenerators
| ... bit_generator = bit_generator.jumped()
|
| **Compatibility Guarantee**
|
 |  ``MT19937`` makes a guarantee that a fixed seed will always produce
| the same random integer stream.
|
| References
| ----------
| .. [1] Hiroshi Haramoto, Makoto Matsumoto, and Pierre L'Ecuyer, "A Fast
| Jump Ahead Algorithm for Linear Recurrences in a Polynomial Space",
| Sequences and Their Applications - SETA, 290--298, 2008.
| .. [2] Hiroshi Haramoto, Makoto Matsumoto, Takuji Nishimura, François
| Panneton, Pierre L'Ecuyer, "Efficient Jump Ahead for F2-Linear
| Random Number Generators", INFORMS JOURNAL ON COMPUTING, Vol. 20,
| No. 3, Summer 2008, pp. 385-390.
|
| Method resolution order:
| MT19937
| numpy.random._bit_generator.BitGenerator
| builtins.object
|
| Methods defined here:
|
| __init__(self, /, *args, **kwargs)
| Initialize self. See help(type(self)) for accurate signature.
|
| __reduce_cython__(...)
|
| __setstate_cython__(...)
|
| jumped(...)
| jumped(jumps=1)
|
| Returns a new bit generator with the state jumped
|
 |      The state of the returned bit generator is jumped as-if
| 2**(128 * jumps) random numbers have been generated.
|
| Parameters
| ----------
| jumps : integer, positive
| Number of times to jump the state of the bit generator returned
|
| Returns
| -------
| bit_generator : MT19937
| New instance of generator jumped iter times
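 |
 |      A short added sketch (illustrative only) showing that ``jumped``
 |      returns a new, independent bit generator:
 |
 |      >>> from numpy.random import MT19937
 |      >>> bg = MT19937(1234)
 |      >>> far = bg.jumped(2)        # state as-if 2**(128 * 2) draws had occurred
 |      >>> far is bg
 |      False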
|
| ----------------------------------------------------------------------
| Static methods defined here:
|
| __new__(*args, **kwargs) from builtins.type
| Create and return a new object. See help(type) for accurate signature.
|
| ----------------------------------------------------------------------
| Data descriptors defined here:
|
| state
| Get or set the PRNG state
|
| Returns
| -------
| state : dict
| Dictionary containing the information required to describe the
| state of the PRNG
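 |
 |      A minimal save/restore sketch (added for illustration) using only
 |      the ``state`` property and ``random_raw`` documented in this help
 |      text:
 |
 |      >>> from numpy.random import MT19937
 |      >>> bg = MT19937(5)
 |      >>> saved = bg.state                    # snapshot of the full PRNG state
 |      >>> first = bg.random_raw(4)
 |      >>> bg.state = saved                    # restore; the raw stream repeats
 |      >>> bool((bg.random_raw(4) == first).all())
 |      True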
|
| ----------------------------------------------------------------------
| Data and other attributes defined here:
|
| __pyx_vtable__ = <capsule object NULL>
|
| ----------------------------------------------------------------------
| Methods inherited from numpy.random._bit_generator.BitGenerator:
|
| __getstate__(...)
|
| __reduce__(...)
| Helper for pickle.
|
| __setstate__(...)
|
| random_raw(...)
| random_raw(self, size=None)
|
| Return randoms as generated by the underlying BitGenerator
|
| Parameters
| ----------
| size : int or tuple of ints, optional
| Output shape. If the given shape is, e.g., ``(m, n, k)``, then
| ``m * n * k`` samples are drawn. Default is None, in which case a
| single value is returned.
| output : bool, optional
| Output values. Used for performance testing since the generated
| values are not returned.
|
| Returns
| -------
| out : uint or ndarray
| Drawn samples.
|
| Notes
| -----
 |      This method directly exposes the raw underlying pseudo-random
| number generator. All values are returned as unsigned 64-bit
| values irrespective of the number of bits produced by the PRNG.
|
| See the class docstring for the number of bits returned.
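 |
 |      A small added sketch (not in the upstream docstring) showing the
 |      dtype and shape of the raw output:
 |
 |      >>> from numpy.random import MT19937
 |      >>> raw = MT19937(1234).random_raw(3)
 |      >>> raw.dtype, raw.shape
 |      (dtype('uint64'), (3,))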
|
| ----------------------------------------------------------------------
| Data descriptors inherited from numpy.random._bit_generator.BitGenerator:
|
| capsule
|
| cffi
| CFFI interface
|
| Returns
| -------
| interface : namedtuple
| Named tuple containing CFFI wrapper
|
| * state_address - Memory address of the state struct
| * state - pointer to the state struct
| * next_uint64 - function pointer to produce 64 bit integers
| * next_uint32 - function pointer to produce 32 bit integers
| * next_double - function pointer to produce doubles
| * bitgen - pointer to the bit generator struct
|
| ctypes
| ctypes interface
|
| Returns
| -------
| interface : namedtuple
| Named tuple containing ctypes wrapper
|
| * state_address - Memory address of the state struct
| * state - pointer to the state struct
| * next_uint64 - function pointer to produce 64 bit integers
| * next_uint32 - function pointer to produce 32 bit integers
| * next_double - function pointer to produce doubles
| * bitgen - pointer to the bit generator struct
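 |
 |      A minimal added access sketch (illustrative only; it assumes, as the
 |      field description above suggests, that ``state_address`` is exposed
 |      as a plain Python integer):
 |
 |      >>> from numpy.random import MT19937
 |      >>> iface = MT19937(1).ctypes
 |      >>> isinstance(iface.state_address, int)
 |      True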
|
| lock
class PCG64(numpy.random._bit_generator.BitGenerator)
| PCG64(seed_seq=None)
|
| BitGenerator for the PCG-64 pseudo-random number generator.
|
| Parameters
| ----------
| seed : {None, int, array_like[ints], SeedSequence}, optional
| A seed to initialize the `BitGenerator`. If None, then fresh,
| unpredictable entropy will be pulled from the OS. If an ``int`` or
| ``array_like[ints]`` is passed, then it will be passed to
| `SeedSequence` to derive the initial `BitGenerator` state. One may also
| pass in a `SeedSequence` instance.
|
| Notes
| -----
| PCG-64 is a 128-bit implementation of O'Neill's permutation congruential
| generator ([1]_, [2]_). PCG-64 has a period of :math:`2^{128}` and supports
| advancing an arbitrary number of steps as well as :math:`2^{127}` streams.
| The specific member of the PCG family that we use is PCG XSL RR 128/64
| as described in the paper ([2]_).
|
| ``PCG64`` provides a capsule containing function pointers that produce
 |  doubles, and unsigned 32- and 64-bit integers. These are not
| directly consumable in Python and must be consumed by a ``Generator``
| or similar object that supports low-level access.
|
| Supports the method :meth:`advance` to advance the RNG an arbitrary number of
| steps. The state of the PCG-64 RNG is represented by 2 128-bit unsigned
| integers.
|
| **State and Seeding**
|
| The ``PCG64`` state vector consists of 2 unsigned 128-bit values,
| which are represented externally as Python ints. One is the state of the
| PRNG, which is advanced by a linear congruential generator (LCG). The
| second is a fixed odd increment used in the LCG.
|
| The input seed is processed by `SeedSequence` to generate both values. The
| increment is not independently settable.
|
| **Parallel Features**
|
| The preferred way to use a BitGenerator in parallel applications is to use
| the `SeedSequence.spawn` method to obtain entropy values, and to use these
| to generate new BitGenerators:
|
| >>> from numpy.random import Generator, PCG64, SeedSequence
| >>> sg = SeedSequence(1234)
| >>> rg = [Generator(PCG64(s)) for s in sg.spawn(10)]
|
| **Compatibility Guarantee**
|
 |  ``PCG64`` makes a guarantee that a fixed seed will always produce
| the same random integer stream.
|
| References
| ----------
| .. [1] `"PCG, A Family of Better Random Number Generators"
| <http://www.pcg-random.org/>`_
| .. [2] O'Neill, Melissa E. `"PCG: A Family of Simple Fast Space-Efficient
| Statistically Good Algorithms for Random Number Generation"
| <https://www.cs.hmc.edu/tr/hmc-cs-2014-0905.pdf>`_
|
| Method resolution order:
| PCG64
| numpy.random._bit_generator.BitGenerator
| builtins.object
|
| Methods defined here:
|
| __init__(self, /, *args, **kwargs)
| Initialize self. See help(type(self)) for accurate signature.
|
| __reduce_cython__(...)
|
| __setstate_cython__(...)
|
| advance(...)
| advance(delta)
|
| Advance the underlying RNG as-if delta draws have occurred.
|
| Parameters
| ----------
| delta : integer, positive
| Number of draws to advance the RNG. Must be less than the
| size state variable in the underlying RNG.
|
| Returns
| -------
| self : PCG64
| RNG advanced delta steps
|
| Notes
| -----
| Advancing a RNG updates the underlying RNG state as-if a given
| number of calls to the underlying RNG have been made. In general
 |      there is not a one-to-one relationship between the number of output
| random values from a particular distribution and the number of
| draws from the core RNG. This occurs for two reasons:
|
| * The random values are simulated using a rejection-based method
| and so, on average, more than one value from the underlying
| RNG is required to generate an single draw.
 |        RNG is required to generate a single draw.
| differs from the number of bits generated by the underlying
| RNG. For example, two 16-bit integer values can be simulated
| from a single draw of a 32-bit RNG.
|
| Advancing the RNG state resets any pre-computed random numbers.
| This is required to ensure exact reproducibility.
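 |
 |      A brief added sketch (illustrative only) of skipping ahead without
 |      drawing any values:
 |
 |      >>> from numpy.random import Generator, PCG64
 |      >>> bg = PCG64(1234)
 |      >>> _ = bg.advance(10**6)        # as-if one million draws had been consumed
 |      >>> rng = Generator(bg)          # continues from the advanced state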
|
| jumped(...)
| jumped(jumps=1)
|
| Returns a new bit generator with the state jumped.
|
| Jumps the state as-if jumps * 210306068529402873165736369884012333109
| random numbers have been generated.
|
| Parameters
| ----------
| jumps : integer, positive
| Number of times to jump the state of the bit generator returned
|
| Returns
| -------
| bit_generator : PCG64
| New instance of generator jumped iter times
|
| Notes
| -----
| The step size is phi-1 when multiplied by 2**128 where phi is the
| golden ratio.
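 |
 |      A short added sketch (illustrative only) of building several widely
 |      separated streams from one seed via ``jumped``:
 |
 |      >>> from numpy.random import Generator, PCG64
 |      >>> streams = [Generator(PCG64(42).jumped(j)) for j in range(1, 5)]
 |      >>> len(streams)
 |      4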
|
| ----------------------------------------------------------------------
| Static methods defined here:
|
| __new__(*args, **kwargs) from builtins.type
| Create and return a new object. See help(type) for accurate signature.
|
| ----------------------------------------------------------------------
| Data descriptors defined here:
|
| state
| Get or set the PRNG state
|
| Returns
| -------
| state : dict
| Dictionary containing the information required to describe the
| state of the PRNG
|
| ----------------------------------------------------------------------
| Data and other attributes defined here:
|
| __pyx_vtable__ = <capsule object NULL>
|
| ----------------------------------------------------------------------
| Methods inherited from numpy.random._bit_generator.BitGenerator:
|
| __getstate__(...)
|
| __reduce__(...)
| Helper for pickle.
|
| __setstate__(...)
|
| random_raw(...)
| random_raw(self, size=None)
|
| Return randoms as generated by the underlying BitGenerator
|
| Parameters
| ----------
| size : int or tuple of ints, optional
| Output shape. If the given shape is, e.g., ``(m, n, k)``, then
| ``m * n * k`` samples are drawn. Default is None, in which case a
| single value is returned.
| output : bool, optional
| Output values. Used for performance testing since the generated
| values are not returned.
|
| Returns
| -------
| out : uint or ndarray
| Drawn samples.
|
| Notes
| -----
 |      This method directly exposes the raw underlying pseudo-random
| number generator. All values are returned as unsigned 64-bit
| values irrespective of the number of bits produced by the PRNG.
|
| See the class docstring for the number of bits returned.
|
| ----------------------------------------------------------------------
| Data descriptors inherited from numpy.random._bit_generator.BitGenerator:
|
| capsule
|
| cffi
| CFFI interface
|
| Returns
| -------
| interface : namedtuple
| Named tuple containing CFFI wrapper
|
| * state_address - Memory address of the state struct
| * state - pointer to the state struct
| * next_uint64 - function pointer to produce 64 bit integers
| * next_uint32 - function pointer to produce 32 bit integers
| * next_double - function pointer to produce doubles
| * bitgen - pointer to the bit generator struct
|
| ctypes
| ctypes interface
|
| Returns
| -------
| interface : namedtuple
| Named tuple containing ctypes wrapper
|
| * state_address - Memory address of the state struct
| * state - pointer to the state struct
| * next_uint64 - function pointer to produce 64 bit integers
| * next_uint32 - function pointer to produce 32 bit integers
| * next_double - function pointer to produce doubles
| * bitgen - pointer to the bit generator struct
|
| lock
class Philox(numpy.random._bit_generator.BitGenerator)
| Philox(seed=None, counter=None, key=None)
|
| Container for the Philox (4x64) pseudo-random number generator.
|
| Parameters
| ----------
| seed : {None, int, array_like[ints], SeedSequence}, optional
| A seed to initialize the `BitGenerator`. If None, then fresh,
| unpredictable entropy will be pulled from the OS. If an ``int`` or
| ``array_like[ints]`` is passed, then it will be passed to
| `SeedSequence` to derive the initial `BitGenerator` state. One may also
| pass in a `SeedSequence` instance.
| counter : {None, int, array_like}, optional
| Counter to use in the Philox state. Can be either
| a Python int (long in 2.x) in [0, 2**256) or a 4-element uint64 array.
| If not provided, the RNG is initialized at 0.
| key : {None, int, array_like}, optional
| Key to use in the Philox state. Unlike ``seed``, the value in key is
| directly set. Can be either a Python int in [0, 2**128) or a 2-element
| uint64 array. `key` and ``seed`` cannot both be used.
|
| Attributes
| ----------
| lock: threading.Lock
 |      Lock instance that is shared so that the same bit generator can
| be used in multiple Generators without corrupting the state. Code that
| generates values from a bit generator should hold the bit generator's
| lock.
|
| Notes
| -----
| Philox is a 64-bit PRNG that uses a counter-based design based on weaker
| (and faster) versions of cryptographic functions [1]_. Instances using
| different values of the key produce independent sequences. Philox has a
| period of :math:`2^{256} - 1` and supports arbitrary advancing and jumping
| the sequence in increments of :math:`2^{128}`. These features allow
| multiple non-overlapping sequences to be generated.
|
| ``Philox`` provides a capsule containing function pointers that produce
| doubles, and unsigned 32- and 64-bit integers. These are not
| directly consumable in Python and must be consumed by a ``Generator``
| or similar object that supports low-level access.
|
| **State and Seeding**
|
| The ``Philox`` state vector consists of a 256-bit value encoded as
| a 4-element uint64 array and a 128-bit value encoded as a 2-element uint64
| array. The former is a counter which is incremented by 1 for every 4 64-bit
| randoms produced. The second is a key which determines the sequence
| produced. Using different keys produces independent sequences.
|
| The input ``seed`` is processed by `SeedSequence` to generate the key. The
| counter is set to 0.
|
| Alternately, one can omit the ``seed`` parameter and set the ``key`` and
| ``counter`` directly.
|
| **Parallel Features**
|
| The preferred way to use a BitGenerator in parallel applications is to use
| the `SeedSequence.spawn` method to obtain entropy values, and to use these
| to generate new BitGenerators:
|
| >>> from numpy.random import Generator, Philox, SeedSequence
| >>> sg = SeedSequence(1234)
| >>> rg = [Generator(Philox(s)) for s in sg.spawn(10)]
|
| ``Philox`` can be used in parallel applications by calling the ``jumped``
| method to advance the state as-if :math:`2^{128}` random numbers have
| been generated. Alternatively, ``advance`` can be used to advance the
| counter for any positive step in [0, 2**256). When using ``jumped``, all
| generators should be chained to ensure that the segments come from the same
| sequence.
|
| >>> from numpy.random import Generator, Philox
| >>> bit_generator = Philox(1234)
| >>> rg = []
| >>> for _ in range(10):
| ... rg.append(Generator(bit_generator))
| ... bit_generator = bit_generator.jumped()
|
| Alternatively, ``Philox`` can be used in parallel applications by using
| a sequence of distinct keys where each instance uses different key.
|
| >>> key = 2**96 + 2**33 + 2**17 + 2**9
| >>> rg = [Generator(Philox(key=key+i)) for i in range(10)]
|
| **Compatibility Guarantee**
|
| ``Philox`` makes a guarantee that a fixed ``seed`` will always produce
| the same random integer stream.
|
| Examples
| --------
| >>> from numpy.random import Generator, Philox
| >>> rg = Generator(Philox(1234))
| >>> rg.standard_normal()
| 0.123 # random
|
| References
| ----------
| .. [1] John K. Salmon, Mark A. Moraes, Ron O. Dror, and David E. Shaw,
| "Parallel Random Numbers: As Easy as 1, 2, 3," Proceedings of
| the International Conference for High Performance Computing,
| Networking, Storage and Analysis (SC11), New York, NY: ACM, 2011.
|
| Method resolution order:
| Philox
| numpy.random._bit_generator.BitGenerator
| builtins.object
|
| Methods defined here:
|
| __init__(self, /, *args, **kwargs)
| Initialize self. See help(type(self)) for accurate signature.
|
| __reduce_cython__(...)
|
| __setstate_cython__(...)
|
| advance(...)
| advance(delta)
|
| Advance the underlying RNG as-if delta draws have occurred.
|
| Parameters
| ----------
| delta : integer, positive
| Number of draws to advance the RNG. Must be less than the
| size state variable in the underlying RNG.
|
| Returns
| -------
| self : Philox
| RNG advanced delta steps
|
| Notes
| -----
| Advancing a RNG updates the underlying RNG state as-if a given
| number of calls to the underlying RNG have been made. In general
| there is not a one-to-one relationship between the number output
| random values from a particular distribution and the number of
| draws from the core RNG. This occurs for two reasons:
|
| * The random values are simulated using a rejection-based method
| and so, on average, more than one value from the underlying
| RNG is required to generate a single draw.
| * The number of bits required to generate a simulated value
| differs from the number of bits generated by the underlying
| RNG. For example, two 16-bit integer values can be simulated
| from a single draw of a 32-bit RNG.
|
| Advancing the RNG state resets any pre-computed random numbers.
| This is required to ensure exact reproducibility.
|
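
A small sketch of ``advance``, assuming an arbitrary seed; it only shows that the Philox counter moves forward as-if the requested draws had occurred:

    from numpy.random import Philox

    bg = Philox(1234)
    before = bg.state['state']['counter'].copy()   # 4-element uint64 counter
    bg.advance(10)                                 # as-if 10 draws happened
    after = bg.state['state']['counter']
    print(before, after)                           # counter has been incremented
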
| jumped(...)
| jumped(jumps=1)
|
| Returns a new bit generator with the state jumped
|
| The state of the returned bit generator is jumped as-if
| 2**(128 * jumps) random numbers have been generated.
|
| Parameters
| ----------
| jumps : integer, positive
| Number of times to jump the state of the bit generator returned
|
| Returns
| -------
| bit_generator : Philox
| New instance of the bit generator jumped ``jumps`` times
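
A short sketch of ``jumped``, assuming an arbitrary seed; the two resulting streams start 2**128 steps apart and therefore do not overlap:

    from numpy.random import Generator, Philox

    bg = Philox(99)
    bg_far = bg.jumped()                 # same key, counter advanced by 2**128
    g0, g1 = Generator(bg), Generator(bg_far)
    print(g0.integers(10, size=3), g1.integers(10, size=3))
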
|
| ----------------------------------------------------------------------
| Static methods defined here:
|
| __new__(*args, **kwargs) from builtins.type
| Create and return a new object. See help(type) for accurate signature.
|
| ----------------------------------------------------------------------
| Data descriptors defined here:
|
| state
| Get or set the PRNG state
|
| Returns
| -------
| state : dict
| Dictionary containing the information required to describe the
| state of the PRNG
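
A minimal sketch of saving and restoring the ``state`` dictionary to replay a stream (the seed value is arbitrary):

    import numpy as np
    from numpy.random import Generator, Philox

    bg = Philox(2021)
    saved = bg.state                         # plain dict describing the PRNG state
    first = Generator(bg).standard_normal(3)
    bg.state = saved                         # rewind to the saved state
    again = Generator(bg).standard_normal(3)
    assert np.allclose(first, again)
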
|
| ----------------------------------------------------------------------
| Data and other attributes defined here:
|
| __pyx_vtable__ = <capsule object NULL>
|
| ----------------------------------------------------------------------
| Methods inherited from numpy.random._bit_generator.BitGenerator:
|
| __getstate__(...)
|
| __reduce__(...)
| Helper for pickle.
|
| __setstate__(...)
|
| random_raw(...)
| random_raw(self, size=None)
|
| Return randoms as generated by the underlying BitGenerator
|
| Parameters
| ----------
| size : int or tuple of ints, optional
| Output shape. If the given shape is, e.g., ``(m, n, k)``, then
| ``m * n * k`` samples are drawn. Default is None, in which case a
| single value is returned.
| output : bool, optional
| Output values. Used for performance testing since the generated
| values are not returned.
|
| Returns
| -------
| out : uint or ndarray
| Drawn samples.
|
| Notes
| -----
| This method directly exposes the raw underlying pseudo-random
| number generator. All values are returned as unsigned 64-bit
| values irrespective of the number of bits produced by the PRNG.
|
| See the class docstring for the number of bits returned.
|
| ----------------------------------------------------------------------
| Data descriptors inherited from numpy.random._bit_generator.BitGenerator:
|
| capsule
|
| cffi
| CFFI interface
|
| Returns
| -------
| interface : namedtuple
| Named tuple containing CFFI wrapper
|
| * state_address - Memory address of the state struct
| * state - pointer to the state struct
| * next_uint64 - function pointer to produce 64 bit integers
| * next_uint32 - function pointer to produce 32 bit integers
| * next_double - function pointer to produce doubles
| * bitgen - pointer to the bit generator struct
|
| ctypes
| ctypes interface
|
| Returns
| -------
| interface : namedtuple
| Named tuple containing ctypes wrapper
|
| * state_address - Memory address of the state struct
| * state - pointer to the state struct
| * next_uint64 - function pointer to produce 64 bit integers
| * next_uint32 - function pointer to produce 32 bit integers
| * next_double - function pointer to produce doubles
| * bitgen - pointer to the bit generator struct
|
| lock
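
A brief sketch of inspecting the low-level ``ctypes`` interface from Python; the pointers are intended for consumption from C, so here they are only printed:

    from numpy.random import Philox

    bg = Philox(7)
    iface = bg.ctypes                # namedtuple described above
    print(hex(iface.state_address))  # memory address of the state struct
    print(iface.next_uint64)         # ctypes function pointer
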
class RandomState(builtins.object)
| RandomState(seed=None)
|
| Container for the slow Mersenne Twister pseudo-random number generator.
| Consider using a different BitGenerator with the Generator container
| instead.
|
| `RandomState` and `Generator` expose a number of methods for generating
| random numbers drawn from a variety of probability distributions. In
| addition to the distribution-specific arguments, each method takes a
| keyword argument `size` that defaults to ``None``. If `size` is ``None``,
| then a single value is generated and returned. If `size` is an integer,
| then a 1-D array filled with generated values is returned. If `size` is a
| tuple, then an array with that shape is filled and returned.
|
| **Compatibility Guarantee**
|
| A fixed bit generator using a fixed seed and a fixed series of calls to
| 'RandomState' methods using the same parameters will always produce the
| same results up to roundoff error except when the values were incorrect.
| `RandomState` is effectively frozen and will only receive updates that
| are required by changes in the internals of NumPy. More substantial
| changes, including algorithmic improvements, are reserved for
| `Generator`.
|
| Parameters
| ----------
| seed : {None, int, array_like, BitGenerator}, optional
| Random seed used to initialize the pseudo-random number generator or
| an instantiated BitGenerator. If an integer or array, used as a seed for
| the MT19937 BitGenerator. Values can be any integer between 0 and
| 2**32 - 1 inclusive, an array (or other sequence) of such integers,
| or ``None`` (the default). If `seed` is ``None``, then the `MT19937`
| BitGenerator is initialized by reading data from ``/dev/urandom``
| (or the Windows analogue) if available or seed from the clock
| otherwise.
|
| Notes
| -----
| The Python stdlib module "random" also contains a Mersenne Twister
| pseudo-random number generator with a number of methods that are similar
| to the ones available in `RandomState`. `RandomState`, besides being
| NumPy-aware, has the advantage that it provides a much larger number
| of probability distributions to choose from.
|
| See Also
| --------
| Generator
| MT19937
| numpy.random.BitGenerator
|
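
A minimal sketch of the compatibility guarantee in practice: two ``RandomState`` instances built from the same (arbitrary) seed produce identical streams:

    import numpy as np

    rs1 = np.random.RandomState(42)
    rs2 = np.random.RandomState(42)
    assert np.array_equal(rs1.rand(3), rs2.rand(3))
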
| Methods defined here:
|
| __getstate__(...)
|
| __init__(self, /, *args, **kwargs)
| Initialize self. See help(type(self)) for accurate signature.
|
| __reduce__(...)
| Helper for pickle.
|
| __repr__(self, /)
| Return repr(self).
|
| __setstate__(...)
|
| __str__(self, /)
| Return str(self).
|
| beta(...)
| beta(a, b, size=None)
|
| Draw samples from a Beta distribution.
|
| The Beta distribution is a special case of the Dirichlet distribution,
| and is related to the Gamma distribution. It has the probability
| distribution function
|
| .. math:: f(x; a,b) = \frac{1}{B(\alpha, \beta)} x^{\alpha - 1}
| (1 - x)^{\beta - 1},
|
| where the normalization, B, is the beta function,
|
| .. math:: B(\alpha, \beta) = \int_0^1 t^{\alpha - 1}
| (1 - t)^{\beta - 1} dt.
|
| It is often seen in Bayesian inference and order statistics.
|
| .. note::
| New code should use the ``beta`` method of a ``default_rng()``
| instance instead; see `random-quick-start`.
|
| Parameters
| ----------
| a : float or array_like of floats
| Alpha, positive (>0).
| b : float or array_like of floats
| Beta, positive (>0).
| size : int or tuple of ints, optional
| Output shape. If the given shape is, e.g., ``(m, n, k)``, then
| ``m * n * k`` samples are drawn. If size is ``None`` (default),
| a single value is returned if ``a`` and ``b`` are both scalars.
| Otherwise, ``np.broadcast(a, b).size`` samples are drawn.
|
| Returns
| -------
| out : ndarray or scalar
| Drawn samples from the parameterized beta distribution.
|
| See Also
| --------
| Generator.beta: which should be used for new code.
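
A small sketch for ``beta`` (the docstring above gives no example); the parameter values are arbitrary, and the sample mean is compared to the theoretical mean a/(a+b):

    import numpy as np

    a, b = 2.0, 5.0
    samples = np.random.beta(a, b, size=10000)
    print(samples.mean(), a / (a + b))   # both close to ~0.286
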
|
| binomial(...)
| binomial(n, p, size=None)
|
| Draw samples from a binomial distribution.
|
| Samples are drawn from a binomial distribution with specified
| parameters, n trials and p probability of success where
| n is an integer >= 0 and p is in the interval [0,1]. (n may be
| input as a float, but it is truncated to an integer in use)
|
| .. note::
| New code should use the ``binomial`` method of a ``default_rng()``
| instance instead; see `random-quick-start`.
|
| Parameters
| ----------
| n : int or array_like of ints
| Parameter of the distribution, >= 0. Floats are also accepted,
| but they will be truncated to integers.
| p : float or array_like of floats
| Parameter of the distribution, >= 0 and <=1.
| size : int or tuple of ints, optional
| Output shape. If the given shape is, e.g., ``(m, n, k)``, then
| ``m * n * k`` samples are drawn. If size is ``None`` (default),
| a single value is returned if ``n`` and ``p`` are both scalars.
| Otherwise, ``np.broadcast(n, p).size`` samples are drawn.
|
| Returns
| -------
| out : ndarray or scalar
| Drawn samples from the parameterized binomial distribution, where
| each sample is equal to the number of successes over the n trials.
|
| See Also
| --------
| scipy.stats.binom : probability density function, distribution or
| cumulative density function, etc.
| Generator.binomial: which should be used for new code.
|
| Notes
| -----
| The probability density for the binomial distribution is
|
| .. math:: P(N) = \binom{n}{N}p^N(1-p)^{n-N},
|
| where :math:`n` is the number of trials, :math:`p` is the probability
| of success, and :math:`N` is the number of successes.
|
| When estimating the standard error of a proportion in a population by
| using a random sample, the normal distribution works well unless the
| product p*n <=5, where p = population proportion estimate, and n =
| number of samples, in which case the binomial distribution is used
| instead. For example, a sample of 15 people shows 4 who are left
| handed, and 11 who are right handed. Then p = 4/15 = 27%. 0.27*15 = 4,
| so the binomial distribution should be used in this case.
|
| References
| ----------
| .. [1] Dalgaard, Peter, "Introductory Statistics with R",
| Springer-Verlag, 2002.
| .. [2] Glantz, Stanton A. "Primer of Biostatistics.", McGraw-Hill,
| Fifth Edition, 2002.
| .. [3] Lentner, Marvin, "Elementary Applied Statistics", Bogden
| and Quigley, 1972.
| .. [4] Weisstein, Eric W. "Binomial Distribution." From MathWorld--A
| Wolfram Web Resource.
| http://mathworld.wolfram.com/BinomialDistribution.html
| .. [5] Wikipedia, "Binomial distribution",
| https://en.wikipedia.org/wiki/Binomial_distribution
|
| Examples
| --------
| Draw samples from the distribution:
|
| >>> n, p = 10, .5 # number of trials, probability of each trial
| >>> s = np.random.binomial(n, p, 1000)
| # result of flipping a coin 10 times, tested 1000 times.
|
| A real world example. A company drills 9 wild-cat oil exploration
| wells, each with an estimated probability of success of 0.1. All nine
| wells fail. What is the probability of that happening?
|
| Let's do 20,000 trials of the model, and count the number that
| generate zero positive results.
|
| >>> sum(np.random.binomial(9, 0.1, 20000) == 0)/20000.
| # answer = 0.38885, or 38%.
|
| bytes(...)
| bytes(length)
|
| Return random bytes.
|
| .. note::
| New code should use the ``bytes`` method of a ``default_rng()``
| instance instead; see `random-quick-start`.
|
| Parameters
| ----------
| length : int
| Number of random bytes.
|
| Returns
| -------
| out : str
| String of length `length`.
|
| See Also
| --------
| Generator.bytes: which should be used for new code.
|
| Examples
| --------
| >>> np.random.bytes(10)
| ' eh\x85\x022SZ\xbf\xa4' #random
|
| chisquare(...)
| chisquare(df, size=None)
|
| Draw samples from a chi-square distribution.
|
| When `df` independent random variables, each with standard normal
| distributions (mean 0, variance 1), are squared and summed, the
| resulting distribution is chi-square (see Notes). This distribution
| is often used in hypothesis testing.
|
| .. note::
| New code should use the ``chisquare`` method of a ``default_rng()``
| instance instead; see `random-quick-start`.
|
| Parameters
| ----------
| df : float or array_like of floats
| Number of degrees of freedom, must be > 0.
| size : int or tuple of ints, optional
| Output shape. If the given shape is, e.g., ``(m, n, k)``, then
| ``m * n * k`` samples are drawn. If size is ``None`` (default),
| a single value is returned if ``df`` is a scalar. Otherwise,
| ``np.array(df).size`` samples are drawn.
|
| Returns
| -------
| out : ndarray or scalar
| Drawn samples from the parameterized chi-square distribution.
|
| Raises
| ------
| ValueError
| When `df` <= 0 or when an inappropriate `size` (e.g. ``size=-1``)
| is given.
|
| See Also
| --------
| Generator.chisquare: which should be used for new code.
|
| Notes
| -----
| The variable obtained by summing the squares of `df` independent,
| standard normally distributed random variables:
|
| .. math:: Q = \sum_{i=0}^{\mathtt{df}} X^2_i
|
| is chi-square distributed, denoted
|
| .. math:: Q \sim \chi^2_k.
|
| The probability density function of the chi-squared distribution is
|
| .. math:: p(x) = \frac{(1/2)^{k/2}}{\Gamma(k/2)}
| x^{k/2 - 1} e^{-x/2},
|
| where :math:`\Gamma` is the gamma function,
|
| .. math:: \Gamma(x) = \int_0^{\infty} t^{x - 1} e^{-t} dt.
|
| References
| ----------
| .. [1] NIST "Engineering Statistics Handbook"
| https://www.itl.nist.gov/div898/handbook/eda/section3/eda3666.htm
|
| Examples
| --------
| >>> np.random.chisquare(2,4)
| array([ 1.89920014, 9.00867716, 3.13710533, 5.62318272]) # random
|
| choice(...)
| choice(a, size=None, replace=True, p=None)
|
| Generates a random sample from a given 1-D array
|
| .. versionadded:: 1.7.0
|
| .. note::
| New code should use the ``choice`` method of a ``default_rng()``
| instance instead; see `random-quick-start`.
|
| Parameters
| ----------
| a : 1-D array-like or int
| If an ndarray, a random sample is generated from its elements.
| If an int, the random sample is generated as if a were np.arange(a)
| size : int or tuple of ints, optional
| Output shape. If the given shape is, e.g., ``(m, n, k)``, then
| ``m * n * k`` samples are drawn. Default is None, in which case a
| single value is returned.
| replace : boolean, optional
| Whether the sample is with or without replacement
| p : 1-D array-like, optional
| The probabilities associated with each entry in a.
| If not given the sample assumes a uniform distribution over all
| entries in a.
|
| Returns
| -------
| samples : single item or ndarray
| The generated random samples
|
| Raises
| ------
| ValueError
| If a is an int and less than zero, if a or p are not 1-dimensional,
| if a is an array-like of size 0, if p is not a vector of
| probabilities, if a and p have different lengths, or if
| replace=False and the sample size is greater than the population
| size
|
| See Also
| --------
| randint, shuffle, permutation
| Generator.choice: which should be used in new code
|
| Examples
| --------
| Generate a uniform random sample from np.arange(5) of size 3:
|
| >>> np.random.choice(5, 3)
| array([0, 3, 4]) # random
| >>> #This is equivalent to np.random.randint(0,5,3)
|
| Generate a non-uniform random sample from np.arange(5) of size 3:
|
| >>> np.random.choice(5, 3, p=[0.1, 0, 0.3, 0.6, 0])
| array([3, 3, 0]) # random
|
| Generate a uniform random sample from np.arange(5) of size 3 without
| replacement:
|
| >>> np.random.choice(5, 3, replace=False)
| array([3,1,0]) # random
| >>> #This is equivalent to np.random.permutation(np.arange(5))[:3]
|
| Generate a non-uniform random sample from np.arange(5) of size
| 3 without replacement:
|
| >>> np.random.choice(5, 3, replace=False, p=[0.1, 0, 0.3, 0.6, 0])
| array([2, 3, 0]) # random
|
| Any of the above can be repeated with an arbitrary array-like
| instead of just integers. For instance:
|
| >>> aa_milne_arr = ['pooh', 'rabbit', 'piglet', 'Christopher']
| >>> np.random.choice(aa_milne_arr, 5, p=[0.5, 0.1, 0.1, 0.3])
| array(['pooh', 'pooh', 'pooh', 'Christopher', 'piglet'], # random
| dtype='<U11')
|
| dirichlet(...)
| dirichlet(alpha, size=None)
|
| Draw samples from the Dirichlet distribution.
|
| Draw `size` samples of dimension k from a Dirichlet distribution. A
| Dirichlet-distributed random variable can be seen as a multivariate
| generalization of a Beta distribution. The Dirichlet distribution
| is a conjugate prior of a multinomial distribution in Bayesian
| inference.
|
| .. note::
| New code should use the ``dirichlet`` method of a ``default_rng()``
| instance instead; see `random-quick-start`.
|
| Parameters
| ----------
| alpha : sequence of floats, length k
| Parameter of the distribution (length ``k`` for sample of
| length ``k``).
| size : int or tuple of ints, optional
| Output shape. If the given shape is, e.g., ``(m, n)``, then
| ``m * n * k`` samples are drawn. Default is None, in which case a
| vector of length ``k`` is returned.
|
| Returns
| -------
| samples : ndarray,
| The drawn samples, of shape ``(size, k)``.
|
| Raises
| -------
| ValueError
| If any value in ``alpha`` is less than or equal to zero
|
| See Also
| --------
| Generator.dirichlet: which should be used for new code.
|
| Notes
| -----
| The Dirichlet distribution is a distribution over vectors
| :math:`x` that fulfil the conditions :math:`x_i>0` and
| :math:`\sum_{i=1}^k x_i = 1`.
|
| The probability density function :math:`p` of a
| Dirichlet-distributed random vector :math:`X` is
| proportional to
|
| .. math:: p(x) \propto \prod_{i=1}^{k}{x^{\alpha_i-1}_i},
|
| where :math:`\alpha` is a vector containing the positive
| concentration parameters.
|
| The method uses the following property for computation: let :math:`Y`
| be a random vector which has components that follow a standard gamma
| distribution, then :math:`X = \frac{1}{\sum_{i=1}^k{Y_i}} Y`
| is Dirichlet-distributed
|
| References
| ----------
| .. [1] David McKay, "Information Theory, Inference and Learning
| Algorithms," chapter 23,
| http://www.inference.org.uk/mackay/itila/
| .. [2] Wikipedia, "Dirichlet distribution",
| https://en.wikipedia.org/wiki/Dirichlet_distribution
|
| Examples
| --------
| Taking an example cited in Wikipedia, this distribution can be used if
| one wanted to cut strings (each of initial length 1.0) into K pieces
| with different lengths, where each piece had, on average, a designated
| average length, but allowing some variation in the relative sizes of
| the pieces.
|
| >>> s = np.random.dirichlet((10, 5, 3), 20).transpose()
|
| >>> import matplotlib.pyplot as plt
| >>> plt.barh(range(20), s[0])
| >>> plt.barh(range(20), s[1], left=s[0], color='g')
| >>> plt.barh(range(20), s[2], left=s[0]+s[1], color='r')
| >>> plt.title("Lengths of Strings")
|
| exponential(...)
| exponential(scale=1.0, size=None)
|
| Draw samples from an exponential distribution.
|
| Its probability density function is
|
| .. math:: f(x; \frac{1}{\beta}) = \frac{1}{\beta} \exp(-\frac{x}{\beta}),
|
| for ``x > 0`` and 0 elsewhere. :math:`\beta` is the scale parameter,
| which is the inverse of the rate parameter :math:`\lambda = 1/\beta`.
| The rate parameter is an alternative, widely used parameterization
| of the exponential distribution [3]_.
|
| The exponential distribution is a continuous analogue of the
| geometric distribution. It describes many common situations, such as
| the size of raindrops measured over many rainstorms [1]_, or the time
| between page requests to Wikipedia [2]_.
|
| .. note::
| New code should use the ``exponential`` method of a ``default_rng()``
| instance instead; see `random-quick-start`.
|
| Parameters
| ----------
| scale : float or array_like of floats
| The scale parameter, :math:`\beta = 1/\lambda`. Must be
| non-negative.
| size : int or tuple of ints, optional
| Output shape. If the given shape is, e.g., ``(m, n, k)``, then
| ``m * n * k`` samples are drawn. If size is ``None`` (default),
| a single value is returned if ``scale`` is a scalar. Otherwise,
| ``np.array(scale).size`` samples are drawn.
|
| Returns
| -------
| out : ndarray or scalar
| Drawn samples from the parameterized exponential distribution.
|
| See Also
| --------
| Generator.exponential: which should be used for new code.
|
| References
| ----------
| .. [1] Peyton Z. Peebles Jr., "Probability, Random Variables and
| Random Signal Principles", 4th ed, 2001, p. 57.
| .. [2] Wikipedia, "Poisson process",
| https://en.wikipedia.org/wiki/Poisson_process
| .. [3] Wikipedia, "Exponential distribution",
| https://en.wikipedia.org/wiki/Exponential_distribution
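
A small sketch for ``exponential`` (no example is given above); scale beta = 2.0 corresponds to rate lambda = 0.5, and the sample mean should be close to the scale:

    import numpy as np

    s = np.random.exponential(scale=2.0, size=10000)
    print(s.mean())     # close to 2.0
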
|
| f(...)
| f(dfnum, dfden, size=None)
|
| Draw samples from an F distribution.
|
| Samples are drawn from an F distribution with specified parameters,
| `dfnum` (degrees of freedom in numerator) and `dfden` (degrees of
| freedom in denominator), where both parameters must be greater than
| zero.
|
| The random variate of the F distribution (also known as the
| Fisher distribution) is a continuous probability distribution
| that arises in ANOVA tests, and is the ratio of two chi-square
| variates.
|
| .. note::
| New code should use the ``f`` method of a ``default_rng()``
| instance instead; see `random-quick-start`.
|
| Parameters
| ----------
| dfnum : float or array_like of floats
| Degrees of freedom in numerator, must be > 0.
| dfden : float or array_like of float
| Degrees of freedom in denominator, must be > 0.
| size : int or tuple of ints, optional
| Output shape. If the given shape is, e.g., ``(m, n, k)``, then
| ``m * n * k`` samples are drawn. If size is ``None`` (default),
| a single value is returned if ``dfnum`` and ``dfden`` are both scalars.
| Otherwise, ``np.broadcast(dfnum, dfden).size`` samples are drawn.
|
| Returns
| -------
| out : ndarray or scalar
| Drawn samples from the parameterized Fisher distribution.
|
| See Also
| --------
| scipy.stats.f : probability density function, distribution or
| cumulative density function, etc.
| Generator.f: which should be used for new code.
|
| Notes
| -----
| The F statistic is used to compare in-group variances to between-group
| variances. Calculating the distribution depends on the sampling, and
| so it is a function of the respective degrees of freedom in the
| problem. The variable `dfnum` is the number of samples minus one, the
| between-groups degrees of freedom, while `dfden` is the within-groups
| degrees of freedom, the sum of the number of samples in each group
| minus the number of groups.
|
| References
| ----------
| .. [1] Glantz, Stanton A. "Primer of Biostatistics.", McGraw-Hill,
| Fifth Edition, 2002.
| .. [2] Wikipedia, "F-distribution",
| https://en.wikipedia.org/wiki/F-distribution
|
| Examples
| --------
| An example from Glantz[1], pp 47-40:
|
| Two groups, children of diabetics (25 people) and children from people
| without diabetes (25 controls). Fasting blood glucose was measured,
| case group had a mean value of 86.1, controls had a mean value of
| 82.2. Standard deviations were 2.09 and 2.49 respectively. Are these
| data consistent with the null hypothesis that the parents diabetic
| status does not affect their children's blood glucose levels?
| Calculating the F statistic from the data gives a value of 36.01.
|
| Draw samples from the distribution:
|
| >>> dfnum = 1. # between group degrees of freedom
| >>> dfden = 48. # within groups degrees of freedom
| >>> s = np.random.f(dfnum, dfden, 1000)
|
| The lower bound for the top 1% of the samples is :
|
| >>> np.sort(s)[-10]
| 7.61988120985 # random
|
| So there is about a 1% chance that the F statistic will exceed 7.62,
| the measured value is 36, so the null hypothesis is rejected at the 1%
| level.
|
| gamma(...)
| gamma(shape, scale=1.0, size=None)
|
| Draw samples from a Gamma distribution.
|
| Samples are drawn from a Gamma distribution with specified parameters,
| `shape` (sometimes designated "k") and `scale` (sometimes designated
| "theta"), where both parameters are > 0.
|
| .. note::
| New code should use the ``gamma`` method of a ``default_rng()``
| instance instead; see `random-quick-start`.
|
| Parameters
| ----------
| shape : float or array_like of floats
| The shape of the gamma distribution. Must be non-negative.
| scale : float or array_like of floats, optional
| The scale of the gamma distribution. Must be non-negative.
| Default is equal to 1.
| size : int or tuple of ints, optional
| Output shape. If the given shape is, e.g., ``(m, n, k)``, then
| ``m * n * k`` samples are drawn. If size is ``None`` (default),
| a single value is returned if ``shape`` and ``scale`` are both scalars.
| Otherwise, ``np.broadcast(shape, scale).size`` samples are drawn.
|
| Returns
| -------
| out : ndarray or scalar
| Drawn samples from the parameterized gamma distribution.
|
| See Also
| --------
| scipy.stats.gamma : probability density function, distribution or
| cumulative density function, etc.
| Generator.gamma: which should be used for new code.
|
| Notes
| -----
| The probability density for the Gamma distribution is
|
| .. math:: p(x) = x^{k-1}\frac{e^{-x/\theta}}{\theta^k\Gamma(k)},
|
| where :math:`k` is the shape and :math:`\theta` the scale,
| and :math:`\Gamma` is the Gamma function.
|
| The Gamma distribution is often used to model the times to failure of
| electronic components, and arises naturally in processes for which the
| waiting times between Poisson distributed events are relevant.
|
| References
| ----------
| .. [1] Weisstein, Eric W. "Gamma Distribution." From MathWorld--A
| Wolfram Web Resource.
| http://mathworld.wolfram.com/GammaDistribution.html
| .. [2] Wikipedia, "Gamma distribution",
| https://en.wikipedia.org/wiki/Gamma_distribution
|
| Examples
| --------
| Draw samples from the distribution:
|
| >>> shape, scale = 2., 2. # mean=4, std=2*sqrt(2)
| >>> s = np.random.gamma(shape, scale, 1000)
|
| Display the histogram of the samples, along with
| the probability density function:
|
| >>> import matplotlib.pyplot as plt
| >>> import scipy.special as sps # doctest: +SKIP
| >>> count, bins, ignored = plt.hist(s, 50, density=True)
| >>> y = bins**(shape-1)*(np.exp(-bins/scale) / # doctest: +SKIP
| ... (sps.gamma(shape)*scale**shape))
| >>> plt.plot(bins, y, linewidth=2, color='r') # doctest: +SKIP
| >>> plt.show()
|
| geometric(...)
| geometric(p, size=None)
|
| Draw samples from the geometric distribution.
|
| Bernoulli trials are experiments with one of two outcomes:
| success or failure (an example of such an experiment is flipping
| a coin). The geometric distribution models the number of trials
| that must be run in order to achieve success. It is therefore
| supported on the positive integers, ``k = 1, 2, ...``.
|
| The probability mass function of the geometric distribution is
|
| .. math:: f(k) = (1 - p)^{k - 1} p
|
| where `p` is the probability of success of an individual trial.
|
| .. note::
| New code should use the ``geometric`` method of a ``default_rng()``
| instance instead; see `random-quick-start`.
|
| Parameters
| ----------
| p : float or array_like of floats
| The probability of success of an individual trial.
| size : int or tuple of ints, optional
| Output shape. If the given shape is, e.g., ``(m, n, k)``, then
| ``m * n * k`` samples are drawn. If size is ``None`` (default),
| a single value is returned if ``p`` is a scalar. Otherwise,
| ``np.array(p).size`` samples are drawn.
|
| Returns
| -------
| out : ndarray or scalar
| Drawn samples from the parameterized geometric distribution.
|
| See Also
| --------
| Generator.geometric: which should be used for new code.
|
| Examples
| --------
| Draw ten thousand values from the geometric distribution,
| with the probability of an individual success equal to 0.35:
|
| >>> z = np.random.geometric(p=0.35, size=10000)
|
| How many trials succeeded after a single run?
|
| >>> (z == 1).sum() / 10000.
| 0.34889999999999999 #random
|
| get_state(...)
| get_state()
|
| Return a tuple representing the internal state of the generator.
|
| For more details, see `set_state`.
|
| Returns
| -------
| out : {tuple(str, ndarray of 624 uints, int, int, float), dict}
| The returned tuple has the following items:
|
| 1. the string 'MT19937'.
| 2. a 1-D array of 624 unsigned integer keys.
| 3. an integer ``pos``.
| 4. an integer ``has_gauss``.
| 5. a float ``cached_gaussian``.
|
| If `legacy` is False, or the BitGenerator is not MT19937, then
| state is returned as a dictionary.
|
| legacy : bool
| Flag indicating whether to return a legacy tuple state when the BitGenerator
| is MT19937.
|
| See Also
| --------
| set_state
|
| Notes
| -----
| `set_state` and `get_state` are not needed to work with any of the
| random distributions in NumPy. If the internal state is manually altered,
| the user should know exactly what he/she is doing.
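
A minimal sketch of round-tripping the state through ``get_state`` / ``set_state``, shown here via the module-level aliases that operate on the global RandomState:

    import numpy as np

    state = np.random.get_state()
    x = np.random.random(4)
    np.random.set_state(state)     # restore the saved state
    y = np.random.random(4)
    assert np.array_equal(x, y)
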
|
| gumbel(...)
| gumbel(loc=0.0, scale=1.0, size=None)
|
| Draw samples from a Gumbel distribution.
|
| Draw samples from a Gumbel distribution with specified location and
| scale. For more information on the Gumbel distribution, see
| Notes and References below.
|
| .. note::
| New code should use the ``gumbel`` method of a ``default_rng()``
| instance instead; see `random-quick-start`.
|
| Parameters
| ----------
| loc : float or array_like of floats, optional
| The location of the mode of the distribution. Default is 0.
| scale : float or array_like of floats, optional
| The scale parameter of the distribution. Default is 1. Must be non-
| negative.
| size : int or tuple of ints, optional
| Output shape. If the given shape is, e.g., ``(m, n, k)``, then
| ``m * n * k`` samples are drawn. If size is ``None`` (default),
| a single value is returned if ``loc`` and ``scale`` are both scalars.
| Otherwise, ``np.broadcast(loc, scale).size`` samples are drawn.
|
| Returns
| -------
| out : ndarray or scalar
| Drawn samples from the parameterized Gumbel distribution.
|
| See Also
| --------
| scipy.stats.gumbel_l
| scipy.stats.gumbel_r
| scipy.stats.genextreme
| weibull
| Generator.gumbel: which should be used for new code.
|
| Notes
| -----
| The Gumbel (or Smallest Extreme Value (SEV) or the Smallest Extreme
| Value Type I) distribution is one of a class of Generalized Extreme
| Value (GEV) distributions used in modeling extreme value problems.
| The Gumbel is a special case of the Extreme Value Type I distribution
| for maximums from distributions with "exponential-like" tails.
|
| The probability density for the Gumbel distribution is
|
| .. math:: p(x) = \frac{e^{-(x - \mu)/ \beta}}{\beta} e^{ -e^{-(x - \mu)/
| \beta}},
|
| where :math:`\mu` is the mode, a location parameter, and
| :math:`\beta` is the scale parameter.
|
| The Gumbel (named for German mathematician Emil Julius Gumbel) was used
| very early in the hydrology literature, for modeling the occurrence of
| flood events. It is also used for modeling maximum wind speed and
| rainfall rates. It is a "fat-tailed" distribution - the probability of
| an event in the tail of the distribution is larger than if one used a
| Gaussian, hence the surprisingly frequent occurrence of 100-year
| floods. Floods were initially modeled as a Gaussian process, which
| underestimated the frequency of extreme events.
|
| It is one of a class of extreme value distributions, the Generalized
| Extreme Value (GEV) distributions, which also includes the Weibull and
| Frechet.
|
| The function has a mean of :math:`\mu + 0.57721\beta` and a variance
| of :math:`\frac{\pi^2}{6}\beta^2`.
|
| References
| ----------
| .. [1] Gumbel, E. J., "Statistics of Extremes,"
| New York: Columbia University Press, 1958.
| .. [2] Reiss, R.-D. and Thomas, M., "Statistical Analysis of Extreme
| Values from Insurance, Finance, Hydrology and Other Fields,"
| Basel: Birkhauser Verlag, 2001.
|
| Examples
| --------
| Draw samples from the distribution:
|
| >>> mu, beta = 0, 0.1 # location and scale
| >>> s = np.random.gumbel(mu, beta, 1000)
|
| Display the histogram of the samples, along with
| the probability density function:
|
| >>> import matplotlib.pyplot as plt
| >>> count, bins, ignored = plt.hist(s, 30, density=True)
| >>> plt.plot(bins, (1/beta)*np.exp(-(bins - mu)/beta)
| ... * np.exp( -np.exp( -(bins - mu) /beta) ),
| ... linewidth=2, color='r')
| >>> plt.show()
|
| Show how an extreme value distribution can arise from a Gaussian process
| and compare to a Gaussian:
|
| >>> means = []
| >>> maxima = []
| >>> for i in range(0,1000) :
| ... a = np.random.normal(mu, beta, 1000)
| ... means.append(a.mean())
| ... maxima.append(a.max())
| >>> count, bins, ignored = plt.hist(maxima, 30, density=True)
| >>> beta = np.std(maxima) * np.sqrt(6) / np.pi
| >>> mu = np.mean(maxima) - 0.57721*beta
| >>> plt.plot(bins, (1/beta)*np.exp(-(bins - mu)/beta)
| ... * np.exp(-np.exp(-(bins - mu)/beta)),
| ... linewidth=2, color='r')
| >>> plt.plot(bins, 1/(beta * np.sqrt(2 * np.pi))
| ... * np.exp(-(bins - mu)**2 / (2 * beta**2)),
| ... linewidth=2, color='g')
| >>> plt.show()
|
| hypergeometric(...)
| hypergeometric(ngood, nbad, nsample, size=None)
|
| Draw samples from a Hypergeometric distribution.
|
| Samples are drawn from a hypergeometric distribution with specified
| parameters, `ngood` (ways to make a good selection), `nbad` (ways to make
| a bad selection), and `nsample` (number of items sampled, which is less
| than or equal to the sum ``ngood + nbad``).
|
| .. note::
| New code should use the ``hypergeometric`` method of a ``default_rng()``
| instance instead; see `random-quick-start`.
|
| Parameters
| ----------
| ngood : int or array_like of ints
| Number of ways to make a good selection. Must be nonnegative.
| nbad : int or array_like of ints
| Number of ways to make a bad selection. Must be nonnegative.
| nsample : int or array_like of ints
| Number of items sampled. Must be at least 1 and at most
| ``ngood + nbad``.
| size : int or tuple of ints, optional
| Output shape. If the given shape is, e.g., ``(m, n, k)``, then
| ``m * n * k`` samples are drawn. If size is ``None`` (default),
| a single value is returned if `ngood`, `nbad`, and `nsample`
| are all scalars. Otherwise, ``np.broadcast(ngood, nbad, nsample).size``
| samples are drawn.
|
| Returns
| -------
| out : ndarray or scalar
| Drawn samples from the parameterized hypergeometric distribution. Each
| sample is the number of good items within a randomly selected subset of
| size `nsample` taken from a set of `ngood` good items and `nbad` bad items.
|
| See Also
| --------
| scipy.stats.hypergeom : probability density function, distribution or
| cumulative density function, etc.
| Generator.hypergeometric: which should be used for new code.
|
| Notes
| -----
| The probability density for the Hypergeometric distribution is
|
| .. math:: P(x) = \frac{\binom{g}{x}\binom{b}{n-x}}{\binom{g+b}{n}},
|
| where :math:`0 \le x \le n` and :math:`n-b \le x \le g`
|
| for P(x) the probability of ``x`` good results in the drawn sample,
| g = `ngood`, b = `nbad`, and n = `nsample`.
|
| Consider an urn with black and white marbles in it, `ngood` of them
| are black and `nbad` are white. If you draw `nsample` balls without
| replacement, then the hypergeometric distribution describes the
| distribution of black balls in the drawn sample.
|
| Note that this distribution is very similar to the binomial
| distribution, except that in this case, samples are drawn without
| replacement, whereas in the Binomial case samples are drawn with
| replacement (or the sample space is infinite). As the sample space
| becomes large, this distribution approaches the binomial.
|
| References
| ----------
| .. [1] Lentner, Marvin, "Elementary Applied Statistics", Bogden
| and Quigley, 1972.
| .. [2] Weisstein, Eric W. "Hypergeometric Distribution." From
| MathWorld--A Wolfram Web Resource.
| http://mathworld.wolfram.com/HypergeometricDistribution.html
| .. [3] Wikipedia, "Hypergeometric distribution",
| https://en.wikipedia.org/wiki/Hypergeometric_distribution
|
| Examples
| --------
| Draw samples from the distribution:
|
| >>> ngood, nbad, nsamp = 100, 2, 10
| # number of good, number of bad, and number of samples
| >>> s = np.random.hypergeometric(ngood, nbad, nsamp, 1000)
| >>> from matplotlib.pyplot import hist
| >>> hist(s)
| # note that it is very unlikely to grab both bad items
|
| Suppose you have an urn with 15 white and 15 black marbles.
| If you pull 15 marbles at random, how likely is it that
| 12 or more of them are one color?
|
| >>> s = np.random.hypergeometric(15, 15, 15, 100000)
| >>> sum(s>=12)/100000. + sum(s<=3)/100000.
| # answer = 0.003 ... pretty unlikely!
|
| laplace(...)
| laplace(loc=0.0, scale=1.0, size=None)
|
| Draw samples from the Laplace or double exponential distribution with
| specified location (or mean) and scale (decay).
|
| The Laplace distribution is similar to the Gaussian/normal distribution,
| but is sharper at the peak and has fatter tails. It represents the
| difference between two independent, identically distributed exponential
| random variables.
|
| .. note::
| New code should use the ``laplace`` method of a ``default_rng()``
| instance instead; see `random-quick-start`.
|
| Parameters
| ----------
| loc : float or array_like of floats, optional
| The position, :math:`\mu`, of the distribution peak. Default is 0.
| scale : float or array_like of floats, optional
| :math:`\lambda`, the exponential decay. Default is 1. Must be non-
| negative.
| size : int or tuple of ints, optional
| Output shape. If the given shape is, e.g., ``(m, n, k)``, then
| ``m * n * k`` samples are drawn. If size is ``None`` (default),
| a single value is returned if ``loc`` and ``scale`` are both scalars.
| Otherwise, ``np.broadcast(loc, scale).size`` samples are drawn.
|
| Returns
| -------
| out : ndarray or scalar
| Drawn samples from the parameterized Laplace distribution.
|
| See Also
| --------
| Generator.laplace: which should be used for new code.
|
| Notes
| -----
| It has the probability density function
|
| .. math:: f(x; \mu, \lambda) = \frac{1}{2\lambda}
| \exp\left(-\frac{|x - \mu|}{\lambda}\right).
|
| The first law of Laplace, from 1774, states that the frequency
| of an error can be expressed as an exponential function of the
| absolute magnitude of the error, which leads to the Laplace
| distribution. For many problems in economics and health
| sciences, this distribution seems to model the data better
| than the standard Gaussian distribution.
|
| References
| ----------
| .. [1] Abramowitz, M. and Stegun, I. A. (Eds.). "Handbook of
| Mathematical Functions with Formulas, Graphs, and Mathematical
| Tables, 9th printing," New York: Dover, 1972.
| .. [2] Kotz, Samuel, et. al. "The Laplace Distribution and
| Generalizations, " Birkhauser, 2001.
| .. [3] Weisstein, Eric W. "Laplace Distribution."
| From MathWorld--A Wolfram Web Resource.
| http://mathworld.wolfram.com/LaplaceDistribution.html
| .. [4] Wikipedia, "Laplace distribution",
| https://en.wikipedia.org/wiki/Laplace_distribution
|
| Examples
| --------
| Draw samples from the distribution
|
| >>> loc, scale = 0., 1.
| >>> s = np.random.laplace(loc, scale, 1000)
|
| Display the histogram of the samples, along with
| the probability density function:
|
| >>> import matplotlib.pyplot as plt
| >>> count, bins, ignored = plt.hist(s, 30, density=True)
| >>> x = np.arange(-8., 8., .01)
| >>> pdf = np.exp(-abs(x-loc)/scale)/(2.*scale)
| >>> plt.plot(x, pdf)
|
| Plot Gaussian for comparison:
|
| >>> g = (1/(scale * np.sqrt(2 * np.pi)) *
| ... np.exp(-(x - loc)**2 / (2 * scale**2)))
| >>> plt.plot(x,g)
|
| logistic(...)
| logistic(loc=0.0, scale=1.0, size=None)
|
| Draw samples from a logistic distribution.
|
| Samples are drawn from a logistic distribution with specified
| parameters, loc (location or mean, also median), and scale (>0).
|
| .. note::
| New code should use the ``logistic`` method of a ``default_rng()``
| instance instead; see `random-quick-start`.
|
| Parameters
| ----------
| loc : float or array_like of floats, optional
| Parameter of the distribution. Default is 0.
| scale : float or array_like of floats, optional
| Parameter of the distribution. Must be non-negative.
| Default is 1.
| size : int or tuple of ints, optional
| Output shape. If the given shape is, e.g., ``(m, n, k)``, then
| ``m * n * k`` samples are drawn. If size is ``None`` (default),
| a single value is returned if ``loc`` and ``scale`` are both scalars.
| Otherwise, ``np.broadcast(loc, scale).size`` samples are drawn.
|
| Returns
| -------
| out : ndarray or scalar
| Drawn samples from the parameterized logistic distribution.
|
| See Also
| --------
| scipy.stats.logistic : probability density function, distribution or
| cumulative density function, etc.
| Generator.logistic: which should be used for new code.
|
| Notes
| -----
| The probability density for the Logistic distribution is
|
| .. math:: P(x) = \frac{e^{-(x-\mu)/s}}{s(1+e^{-(x-\mu)/s})^2},
|
| where :math:`\mu` = location and :math:`s` = scale.
|
| The Logistic distribution is used in Extreme Value problems where it
| can act as a mixture of Gumbel distributions, in Epidemiology, and by
| the World Chess Federation (FIDE) where it is used in the Elo ranking
| system, assuming the performance of each player is a logistically
| distributed random variable.
|
| References
| ----------
| .. [1] Reiss, R.-D. and Thomas M. (2001), "Statistical Analysis of
| Extreme Values, from Insurance, Finance, Hydrology and Other
| Fields," Birkhauser Verlag, Basel, pp 132-133.
| .. [2] Weisstein, Eric W. "Logistic Distribution." From
| MathWorld--A Wolfram Web Resource.
| http://mathworld.wolfram.com/LogisticDistribution.html
| .. [3] Wikipedia, "Logistic-distribution",
| https://en.wikipedia.org/wiki/Logistic_distribution
|
| Examples
| --------
| Draw samples from the distribution:
|
| >>> loc, scale = 10, 1
| >>> s = np.random.logistic(loc, scale, 10000)
| >>> import matplotlib.pyplot as plt
| >>> count, bins, ignored = plt.hist(s, bins=50)
|
| # plot against distribution
|
| >>> def logist(x, loc, scale):
| ... return np.exp((loc-x)/scale)/(scale*(1+np.exp((loc-x)/scale))**2)
| >>> lgst_val = logist(bins, loc, scale)
| >>> plt.plot(bins, lgst_val * count.max() / lgst_val.max())
| >>> plt.show()
|
| lognormal(...)
| lognormal(mean=0.0, sigma=1.0, size=None)
|
| Draw samples from a log-normal distribution.
|
| Draw samples from a log-normal distribution with specified mean,
| standard deviation, and array shape. Note that the mean and standard
| deviation are not the values for the distribution itself, but of the
| underlying normal distribution it is derived from.
|
| .. note::
| New code should use the ``lognormal`` method of a ``default_rng()``
| instance instead; see `random-quick-start`.
|
| Parameters
| ----------
| mean : float or array_like of floats, optional
| Mean value of the underlying normal distribution. Default is 0.
| sigma : float or array_like of floats, optional
| Standard deviation of the underlying normal distribution. Must be
| non-negative. Default is 1.
| size : int or tuple of ints, optional
| Output shape. If the given shape is, e.g., ``(m, n, k)``, then
| ``m * n * k`` samples are drawn. If size is ``None`` (default),
| a single value is returned if ``mean`` and ``sigma`` are both scalars.
| Otherwise, ``np.broadcast(mean, sigma).size`` samples are drawn.
|
| Returns
| -------
| out : ndarray or scalar
| Drawn samples from the parameterized log-normal distribution.
|
| See Also
| --------
| scipy.stats.lognorm : probability density function, distribution,
| cumulative density function, etc.
| Generator.lognormal: which should be used for new code.
|
| Notes
| -----
| A variable `x` has a log-normal distribution if `log(x)` is normally
| distributed. The probability density function for the log-normal
| distribution is:
|
| .. math:: p(x) = \frac{1}{\sigma x \sqrt{2\pi}}
| e^{(-\frac{(ln(x)-\mu)^2}{2\sigma^2})}
|
| where :math:`\mu` is the mean and :math:`\sigma` is the standard
| deviation of the normally distributed logarithm of the variable.
| A log-normal distribution results if a random variable is the *product*
| of a large number of independent, identically-distributed variables in
| the same way that a normal distribution results if the variable is the
| *sum* of a large number of independent, identically-distributed
| variables.
|
| References
| ----------
| .. [1] Limpert, E., Stahel, W. A., and Abbt, M., "Log-normal
| Distributions across the Sciences: Keys and Clues,"
| BioScience, Vol. 51, No. 5, May, 2001.
| https://stat.ethz.ch/~stahel/lognormal/bioscience.pdf
| .. [2] Reiss, R.D. and Thomas, M., "Statistical Analysis of Extreme
| Values," Basel: Birkhauser Verlag, 2001, pp. 31-32.
|
| Examples
| --------
| Draw samples from the distribution:
|
| >>> mu, sigma = 3., 1. # mean and standard deviation
| >>> s = np.random.lognormal(mu, sigma, 1000)
|
| Display the histogram of the samples, along with
| the probability density function:
|
| >>> import matplotlib.pyplot as plt
| >>> count, bins, ignored = plt.hist(s, 100, density=True, align='mid')
|
| >>> x = np.linspace(min(bins), max(bins), 10000)
| >>> pdf = (np.exp(-(np.log(x) - mu)**2 / (2 * sigma**2))
| ... / (x * sigma * np.sqrt(2 * np.pi)))
|
| >>> plt.plot(x, pdf, linewidth=2, color='r')
| >>> plt.axis('tight')
| >>> plt.show()
|
| Demonstrate that taking the products of random samples from a uniform
| distribution can be fit well by a log-normal probability density
| function.
|
| >>> # Generate a thousand samples: each is the product of 100 random
| >>> # values, drawn from a normal distribution.
| >>> b = []
| >>> for i in range(1000):
| ... a = 10. + np.random.standard_normal(100)
| ... b.append(np.product(a))
|
| >>> b = np.array(b) / np.min(b) # scale values to be positive
| >>> count, bins, ignored = plt.hist(b, 100, density=True, align='mid')
| >>> sigma = np.std(np.log(b))
| >>> mu = np.mean(np.log(b))
|
| >>> x = np.linspace(min(bins), max(bins), 10000)
| >>> pdf = (np.exp(-(np.log(x) - mu)**2 / (2 * sigma**2))
| ... / (x * sigma * np.sqrt(2 * np.pi)))
|
| >>> plt.plot(x, pdf, color='r', linewidth=2)
| >>> plt.show()
|
| logseries(...)
| logseries(p, size=None)
|
| Draw samples from a logarithmic series distribution.
|
| Samples are drawn from a log series distribution with specified
| shape parameter, 0 < ``p`` < 1.
|
| .. note::
| New code should use the ``logseries`` method of a ``default_rng()``
| instance instead; see `random-quick-start`.
|
| Parameters
| ----------
| p : float or array_like of floats
| Shape parameter for the distribution. Must be in the range (0, 1).
| size : int or tuple of ints, optional
| Output shape. If the given shape is, e.g., ``(m, n, k)``, then
| ``m * n * k`` samples are drawn. If size is ``None`` (default),
| a single value is returned if ``p`` is a scalar. Otherwise,
| ``np.array(p).size`` samples are drawn.
|
| Returns
| -------
| out : ndarray or scalar
| Drawn samples from the parameterized logarithmic series distribution.
|
| See Also
| --------
| scipy.stats.logser : probability density function, distribution or
| cumulative density function, etc.
| Generator.logseries: which should be used for new code.
|
| Notes
| -----
| The probability density for the Log Series distribution is
|
| .. math:: P(k) = \frac{-p^k}{k \ln(1-p)},
|
| where p = probability.
|
| The log series distribution is frequently used to represent species
| richness and occurrence, first proposed by Fisher, Corbet, and
| Williams in 1943 [2]. It may also be used to model the numbers of
| occupants seen in cars [3].
|
| References
| ----------
| .. [1] Buzas, Martin A.; Culver, Stephen J., Understanding regional
| species diversity through the log series distribution of
| occurrences: BIODIVERSITY RESEARCH Diversity & Distributions,
| Volume 5, Number 5, September 1999 , pp. 187-195(9).
| .. [2] Fisher, R.A,, A.S. Corbet, and C.B. Williams. 1943. The
| relation between the number of species and the number of
| individuals in a random sample of an animal population.
| Journal of Animal Ecology, 12:42-58.
| .. [3] D. J. Hand, F. Daly, D. Lunn, E. Ostrowski, A Handbook of Small
| Data Sets, CRC Press, 1994.
| .. [4] Wikipedia, "Logarithmic distribution",
| https://en.wikipedia.org/wiki/Logarithmic_distribution
|
| Examples
| --------
| Draw samples from the distribution:
|
| >>> a = .6
| >>> s = np.random.logseries(a, 10000)
| >>> import matplotlib.pyplot as plt
| >>> count, bins, ignored = plt.hist(s)
|
| # plot against distribution
|
| >>> def logseries(k, p):
| ... return -p**k/(k*np.log(1-p))
| >>> plt.plot(bins, logseries(bins, a)*count.max()/
| ... logseries(bins, a).max(), 'r')
| >>> plt.show()
|
| multinomial(...)
| multinomial(n, pvals, size=None)
|
| Draw samples from a multinomial distribution.
|
| The multinomial distribution is a multivariate generalization of the
| binomial distribution. Take an experiment with one of ``p``
| possible outcomes. An example of such an experiment is throwing a die,
| where the outcome can be 1 through 6. Each sample drawn from the
| distribution represents `n` such experiments. Its values,
| ``X_i = [X_0, X_1, ..., X_p]``, represent the number of times the
| outcome was ``i``.
|
| .. note::
| New code should use the ``multinomial`` method of a ``default_rng()``
| instance instead; see `random-quick-start`.
|
| Parameters
| ----------
| n : int
| Number of experiments.
| pvals : sequence of floats, length p
| Probabilities of each of the ``p`` different outcomes. These
| must sum to 1 (however, the last element is always assumed to
| account for the remaining probability, as long as
| ``sum(pvals[:-1]) <= 1``).
| size : int or tuple of ints, optional
| Output shape. If the given shape is, e.g., ``(m, n, k)``, then
| ``m * n * k`` samples are drawn. Default is None, in which case a
| single value is returned.
|
| Returns
| -------
| out : ndarray
| The drawn samples, of shape *size*, if that was provided. If not,
| the shape is ``(N,)``.
|
| In other words, each entry ``out[i,j,...,:]`` is an N-dimensional
| value drawn from the distribution.
|
| See Also
| --------
| Generator.multinomial: which should be used for new code.
|
| Examples
| --------
| Throw a die 20 times:
|
| >>> np.random.multinomial(20, [1/6.]*6, size=1)
| array([[4, 1, 7, 5, 2, 1]]) # random
|
| It landed 4 times on 1, once on 2, etc.
|
| Now, throw the dice 20 times, and 20 times again:
|
| >>> np.random.multinomial(20, [1/6.]*6, size=2)
| array([[3, 4, 3, 3, 4, 3], # random
| [2, 4, 3, 4, 0, 7]])
|
| For the first run, we threw 3 times 1, 4 times 2, etc. For the second,
| we threw 2 times 1, 4 times 2, etc.
|
| A loaded die is more likely to land on number 6:
|
| >>> np.random.multinomial(100, [1/7.]*5 + [2/7.])
| array([11, 16, 14, 17, 16, 26]) # random
|
| The probability inputs should be normalized. As an implementation
| detail, the value of the last entry is ignored and assumed to take
| up any leftover probability mass, but this should not be relied on.
| A biased coin which has twice as much weight on one side as on the
| other should be sampled like so:
|
| >>> np.random.multinomial(100, [1.0 / 3, 2.0 / 3]) # RIGHT
| array([38, 62]) # random
|
| not like:
|
| >>> np.random.multinomial(100, [1.0, 2.0]) # WRONG
| Traceback (most recent call last):
| ValueError: pvals < 0, pvals > 1 or pvals contains NaNs
|
| multivariate_normal(...)
| multivariate_normal(mean, cov, size=None, check_valid='warn', tol=1e-8)
|
| Draw random samples from a multivariate normal distribution.
|
| The multivariate normal, multinormal or Gaussian distribution is a
| generalization of the one-dimensional normal distribution to higher
| dimensions. Such a distribution is specified by its mean and
| covariance matrix. These parameters are analogous to the mean
| (average or "center") and variance (standard deviation, or "width,"
| squared) of the one-dimensional normal distribution.
|
| .. note::
| New code should use the ``multivariate_normal`` method of a ``default_rng()``
| instance instead; see `random-quick-start`.
|
| Parameters
| ----------
| mean : 1-D array_like, of length N
| Mean of the N-dimensional distribution.
| cov : 2-D array_like, of shape (N, N)
| Covariance matrix of the distribution. It must be symmetric and
| positive-semidefinite for proper sampling.
| size : int or tuple of ints, optional
| Given a shape of, for example, ``(m,n,k)``, ``m*n*k`` samples are
| generated, and packed in an `m`-by-`n`-by-`k` arrangement. Because
| each sample is `N`-dimensional, the output shape is ``(m,n,k,N)``.
| If no shape is specified, a single (`N`-D) sample is returned.
| check_valid : { 'warn', 'raise', 'ignore' }, optional
| Behavior when the covariance matrix is not positive semidefinite.
| tol : float, optional
| Tolerance when checking the singular values in covariance matrix.
| cov is cast to double before the check.
|
| Returns
| -------
| out : ndarray
| The drawn samples, of shape *size*, if that was provided. If not,
| the shape is ``(N,)``.
|
| In other words, each entry ``out[i,j,...,:]`` is an N-dimensional
| value drawn from the distribution.
|
| See Also
| --------
| Generator.multivariate_normal: which should be used for new code.
|
| Notes
| -----
| The mean is a coordinate in N-dimensional space, which represents the
| location where samples are most likely to be generated. This is
| analogous to the peak of the bell curve for the one-dimensional or
| univariate normal distribution.
|
| Covariance indicates the level to which two variables vary together.
| From the multivariate normal distribution, we draw N-dimensional
| samples, :math:`X = [x_1, x_2, ... x_N]`. The covariance matrix
| element :math:`C_{ij}` is the covariance of :math:`x_i` and :math:`x_j`.
| The element :math:`C_{ii}` is the variance of :math:`x_i` (i.e. its
| "spread").
|
| Instead of specifying the full covariance matrix, popular
| approximations include:
|
| - Spherical covariance (`cov` is a multiple of the identity matrix)
| - Diagonal covariance (`cov` has non-negative elements, and only on
| the diagonal)
|
| This geometrical property can be seen in two dimensions by plotting
| generated data-points:
|
| >>> mean = [0, 0]
| >>> cov = [[1, 0], [0, 100]] # diagonal covariance
|
| Diagonal covariance means that points are oriented along x or y-axis:
|
| >>> import matplotlib.pyplot as plt
| >>> x, y = np.random.multivariate_normal(mean, cov, 5000).T
| >>> plt.plot(x, y, 'x')
| >>> plt.axis('equal')
| >>> plt.show()
|
| Note that the covariance matrix must be positive semidefinite (a.k.a.
| nonnegative-definite). Otherwise, the behavior of this method is
| undefined and backwards compatibility is not guaranteed.
|
| References
| ----------
| .. [1] Papoulis, A., "Probability, Random Variables, and Stochastic
| Processes," 3rd ed., New York: McGraw-Hill, 1991.
| .. [2] Duda, R. O., Hart, P. E., and Stork, D. G., "Pattern
| Classification," 2nd ed., New York: Wiley, 2001.
|
| Examples
| --------
| >>> mean = (1, 2)
| >>> cov = [[1, 0], [0, 1]]
| >>> x = np.random.multivariate_normal(mean, cov, (3, 3))
| >>> x.shape
| (3, 3, 2)
|
| The following is probably true, given that 0.6 is roughly twice the
| standard deviation:
|
| >>> list((x[0,0,:] - mean) < 0.6)
| [True, True] # random
|
| negative_binomial(...)
| negative_binomial(n, p, size=None)
|
| Draw samples from a negative binomial distribution.
|
| Samples are drawn from a negative binomial distribution with specified
| parameters, `n` successes and `p` probability of success where `n`
| is > 0 and `p` is in the interval [0, 1].
|
| .. note::
| New code should use the ``negative_binomial`` method of a ``default_rng()``
| instance instead; see `random-quick-start`.
|
| Parameters
| ----------
| n : float or array_like of floats
| Parameter of the distribution, > 0.
| p : float or array_like of floats
| Parameter of the distribution, >= 0 and <=1.
| size : int or tuple of ints, optional
| Output shape. If the given shape is, e.g., ``(m, n, k)``, then
| ``m * n * k`` samples are drawn. If size is ``None`` (default),
| a single value is returned if ``n`` and ``p`` are both scalars.
| Otherwise, ``np.broadcast(n, p).size`` samples are drawn.
|
| Returns
| -------
| out : ndarray or scalar
| Drawn samples from the parameterized negative binomial distribution,
| where each sample is equal to N, the number of failures that
| occurred before a total of n successes was reached.
|
| See Also
| --------
| Generator.negative_binomial: which should be used for new code.
|
| Notes
| -----
| The probability mass function of the negative binomial distribution is
|
| .. math:: P(N;n,p) = \frac{\Gamma(N+n)}{N!\Gamma(n)}p^{n}(1-p)^{N},
|
| where :math:`n` is the number of successes, :math:`p` is the
| probability of success, :math:`N+n` is the number of trials, and
| :math:`\Gamma` is the gamma function. When :math:`n` is an integer,
| :math:`\frac{\Gamma(N+n)}{N!\Gamma(n)} = \binom{N+n-1}{N}`, which is
 | the more common form of this term in the pmf. The negative
| binomial distribution gives the probability of N failures given n
| successes, with a success on the last trial.
|
| If one throws a die repeatedly until the third time a "1" appears,
| then the probability distribution of the number of non-"1"s that
| appear before the third "1" is a negative binomial distribution.
|
| References
| ----------
| .. [1] Weisstein, Eric W. "Negative Binomial Distribution." From
| MathWorld--A Wolfram Web Resource.
| http://mathworld.wolfram.com/NegativeBinomialDistribution.html
| .. [2] Wikipedia, "Negative binomial distribution",
| https://en.wikipedia.org/wiki/Negative_binomial_distribution
|
| Examples
| --------
| Draw samples from the distribution:
|
| A real world example. A company drills wild-cat oil
| exploration wells, each with an estimated probability of
| success of 0.1. What is the probability of having one success
| for each successive well, that is what is the probability of a
| single success after drilling 5 wells, after 6 wells, etc.?
|
| >>> s = np.random.negative_binomial(1, 0.1, 100000)
| >>> for i in range(1, 11): # doctest: +SKIP
| ... probability = sum(s<i) / 100000.
| ... print(i, "wells drilled, probability of one success =", probability)
|
| noncentral_chisquare(...)
| noncentral_chisquare(df, nonc, size=None)
|
| Draw samples from a noncentral chi-square distribution.
|
| The noncentral :math:`\chi^2` distribution is a generalization of
| the :math:`\chi^2` distribution.
|
| .. note::
| New code should use the ``noncentral_chisquare`` method of a ``default_rng()``
| instance instead; see `random-quick-start`.
|
| Parameters
| ----------
| df : float or array_like of floats
| Degrees of freedom, must be > 0.
|
| .. versionchanged:: 1.10.0
| Earlier NumPy versions required dfnum > 1.
| nonc : float or array_like of floats
| Non-centrality, must be non-negative.
| size : int or tuple of ints, optional
| Output shape. If the given shape is, e.g., ``(m, n, k)``, then
| ``m * n * k`` samples are drawn. If size is ``None`` (default),
| a single value is returned if ``df`` and ``nonc`` are both scalars.
| Otherwise, ``np.broadcast(df, nonc).size`` samples are drawn.
|
| Returns
| -------
| out : ndarray or scalar
| Drawn samples from the parameterized noncentral chi-square distribution.
|
| See Also
| --------
| Generator.noncentral_chisquare: which should be used for new code.
|
| Notes
| -----
| The probability density function for the noncentral Chi-square
| distribution is
|
| .. math:: P(x;df,nonc) = \sum^{\infty}_{i=0}
| \frac{e^{-nonc/2}(nonc/2)^{i}}{i!}
| P_{Y_{df+2i}}(x),
|
| where :math:`Y_{q}` is the Chi-square with q degrees of freedom.
|
| References
| ----------
| .. [1] Wikipedia, "Noncentral chi-squared distribution"
| https://en.wikipedia.org/wiki/Noncentral_chi-squared_distribution
|
| Examples
| --------
| Draw values from the distribution and plot the histogram
|
| >>> import matplotlib.pyplot as plt
| >>> values = plt.hist(np.random.noncentral_chisquare(3, 20, 100000),
| ... bins=200, density=True)
| >>> plt.show()
|
| Draw values from a noncentral chisquare with very small noncentrality,
| and compare to a chisquare.
|
| >>> plt.figure()
| >>> values = plt.hist(np.random.noncentral_chisquare(3, .0000001, 100000),
| ... bins=np.arange(0., 25, .1), density=True)
| >>> values2 = plt.hist(np.random.chisquare(3, 100000),
| ... bins=np.arange(0., 25, .1), density=True)
| >>> plt.plot(values[1][0:-1], values[0]-values2[0], 'ob')
| >>> plt.show()
|
| Demonstrate how large values of non-centrality lead to a more symmetric
| distribution.
|
| >>> plt.figure()
| >>> values = plt.hist(np.random.noncentral_chisquare(3, 20, 100000),
| ... bins=200, density=True)
| >>> plt.show()
|
| noncentral_f(...)
| noncentral_f(dfnum, dfden, nonc, size=None)
|
| Draw samples from the noncentral F distribution.
|
| Samples are drawn from an F distribution with specified parameters,
| `dfnum` (degrees of freedom in numerator) and `dfden` (degrees of
| freedom in denominator), where both parameters > 1.
| `nonc` is the non-centrality parameter.
|
| .. note::
| New code should use the ``noncentral_f`` method of a ``default_rng()``
| instance instead; see `random-quick-start`.
|
| Parameters
| ----------
| dfnum : float or array_like of floats
| Numerator degrees of freedom, must be > 0.
|
| .. versionchanged:: 1.14.0
| Earlier NumPy versions required dfnum > 1.
| dfden : float or array_like of floats
| Denominator degrees of freedom, must be > 0.
| nonc : float or array_like of floats
| Non-centrality parameter, the sum of the squares of the numerator
| means, must be >= 0.
| size : int or tuple of ints, optional
| Output shape. If the given shape is, e.g., ``(m, n, k)``, then
| ``m * n * k`` samples are drawn. If size is ``None`` (default),
| a single value is returned if ``dfnum``, ``dfden``, and ``nonc``
| are all scalars. Otherwise, ``np.broadcast(dfnum, dfden, nonc).size``
| samples are drawn.
|
| Returns
| -------
| out : ndarray or scalar
| Drawn samples from the parameterized noncentral Fisher distribution.
|
| See Also
| --------
| Generator.noncentral_f: which should be used for new code.
|
| Notes
| -----
| When calculating the power of an experiment (power = probability of
| rejecting the null hypothesis when a specific alternative is true) the
| non-central F statistic becomes important. When the null hypothesis is
| true, the F statistic follows a central F distribution. When the null
| hypothesis is not true, then it follows a non-central F statistic.
|
| References
| ----------
| .. [1] Weisstein, Eric W. "Noncentral F-Distribution."
| From MathWorld--A Wolfram Web Resource.
| http://mathworld.wolfram.com/NoncentralF-Distribution.html
| .. [2] Wikipedia, "Noncentral F-distribution",
| https://en.wikipedia.org/wiki/Noncentral_F-distribution
|
| Examples
| --------
| In a study, testing for a specific alternative to the null hypothesis
| requires use of the Noncentral F distribution. We need to calculate the
| area in the tail of the distribution that exceeds the value of the F
| distribution for the null hypothesis. We'll plot the two probability
| distributions for comparison.
|
| >>> dfnum = 3 # between group deg of freedom
| >>> dfden = 20 # within groups degrees of freedom
| >>> nonc = 3.0
| >>> nc_vals = np.random.noncentral_f(dfnum, dfden, nonc, 1000000)
| >>> NF = np.histogram(nc_vals, bins=50, density=True)
| >>> c_vals = np.random.f(dfnum, dfden, 1000000)
| >>> F = np.histogram(c_vals, bins=50, density=True)
| >>> import matplotlib.pyplot as plt
| >>> plt.plot(F[1][1:], F[0])
| >>> plt.plot(NF[1][1:], NF[0])
| >>> plt.show()
|
| normal(...)
| normal(loc=0.0, scale=1.0, size=None)
|
| Draw random samples from a normal (Gaussian) distribution.
|
| The probability density function of the normal distribution, first
| derived by De Moivre and 200 years later by both Gauss and Laplace
| independently [2]_, is often called the bell curve because of
| its characteristic shape (see the example below).
|
| The normal distributions occurs often in nature. For example, it
| describes the commonly occurring distribution of samples influenced
| by a large number of tiny, random disturbances, each with its own
| unique distribution [2]_.
|
| .. note::
| New code should use the ``normal`` method of a ``default_rng()``
| instance instead; see `random-quick-start`.
|
| Parameters
| ----------
| loc : float or array_like of floats
| Mean ("centre") of the distribution.
| scale : float or array_like of floats
| Standard deviation (spread or "width") of the distribution. Must be
| non-negative.
| size : int or tuple of ints, optional
| Output shape. If the given shape is, e.g., ``(m, n, k)``, then
| ``m * n * k`` samples are drawn. If size is ``None`` (default),
| a single value is returned if ``loc`` and ``scale`` are both scalars.
| Otherwise, ``np.broadcast(loc, scale).size`` samples are drawn.
|
| Returns
| -------
| out : ndarray or scalar
| Drawn samples from the parameterized normal distribution.
|
| See Also
| --------
| scipy.stats.norm : probability density function, distribution or
| cumulative density function, etc.
| Generator.normal: which should be used for new code.
|
| Notes
| -----
| The probability density for the Gaussian distribution is
|
| .. math:: p(x) = \frac{1}{\sqrt{ 2 \pi \sigma^2 }}
| e^{ - \frac{ (x - \mu)^2 } {2 \sigma^2} },
|
| where :math:`\mu` is the mean and :math:`\sigma` the standard
| deviation. The square of the standard deviation, :math:`\sigma^2`,
| is called the variance.
|
| The function has its peak at the mean, and its "spread" increases with
| the standard deviation (the function reaches 0.607 times its maximum at
| :math:`x + \sigma` and :math:`x - \sigma` [2]_). This implies that
| normal is more likely to return samples lying close to the mean, rather
| than those far away.
|
| References
| ----------
| .. [1] Wikipedia, "Normal distribution",
| https://en.wikipedia.org/wiki/Normal_distribution
| .. [2] P. R. Peebles Jr., "Central Limit Theorem" in "Probability,
| Random Variables and Random Signal Principles", 4th ed., 2001,
| pp. 51, 51, 125.
|
| Examples
| --------
| Draw samples from the distribution:
|
| >>> mu, sigma = 0, 0.1 # mean and standard deviation
| >>> s = np.random.normal(mu, sigma, 1000)
|
| Verify the mean and the variance:
|
| >>> abs(mu - np.mean(s))
| 0.0 # may vary
|
| >>> abs(sigma - np.std(s, ddof=1))
| 0.1 # may vary
|
| Display the histogram of the samples, along with
| the probability density function:
|
| >>> import matplotlib.pyplot as plt
| >>> count, bins, ignored = plt.hist(s, 30, density=True)
| >>> plt.plot(bins, 1/(sigma * np.sqrt(2 * np.pi)) *
| ... np.exp( - (bins - mu)**2 / (2 * sigma**2) ),
| ... linewidth=2, color='r')
| >>> plt.show()
|
| Two-by-four array of samples from N(3, 6.25):
|
| >>> np.random.normal(3, 2.5, size=(2, 4))
| array([[-4.49401501, 4.00950034, -1.81814867, 7.29718677], # random
| [ 0.39924804, 4.68456316, 4.99394529, 4.84057254]]) # random
|
| pareto(...)
| pareto(a, size=None)
|
| Draw samples from a Pareto II or Lomax distribution with
| specified shape.
|
| The Lomax or Pareto II distribution is a shifted Pareto
| distribution. The classical Pareto distribution can be
| obtained from the Lomax distribution by adding 1 and
| multiplying by the scale parameter ``m`` (see Notes). The
| smallest value of the Lomax distribution is zero while for the
| classical Pareto distribution it is ``mu``, where the standard
| Pareto distribution has location ``mu = 1``. Lomax can also
| be considered as a simplified version of the Generalized
| Pareto distribution (available in SciPy), with the scale set
| to one and the location set to zero.
|
 | The Pareto distribution takes positive values and is
 | unbounded above. It is associated with the "80-20 rule": for
 | suitable shape parameters, roughly 80 percent of the total
 | mass comes from the largest 20 percent of the values.
|
| .. note::
| New code should use the ``pareto`` method of a ``default_rng()``
| instance instead; see `random-quick-start`.
|
| Parameters
| ----------
| a : float or array_like of floats
| Shape of the distribution. Must be positive.
| size : int or tuple of ints, optional
| Output shape. If the given shape is, e.g., ``(m, n, k)``, then
| ``m * n * k`` samples are drawn. If size is ``None`` (default),
| a single value is returned if ``a`` is a scalar. Otherwise,
| ``np.array(a).size`` samples are drawn.
|
| Returns
| -------
| out : ndarray or scalar
| Drawn samples from the parameterized Pareto distribution.
|
| See Also
| --------
| scipy.stats.lomax : probability density function, distribution or
| cumulative density function, etc.
| scipy.stats.genpareto : probability density function, distribution or
| cumulative density function, etc.
| Generator.pareto: which should be used for new code.
|
| Notes
| -----
| The probability density for the Pareto distribution is
|
| .. math:: p(x) = \frac{am^a}{x^{a+1}}
|
| where :math:`a` is the shape and :math:`m` the scale.
|
| The Pareto distribution, named after the Italian economist
| Vilfredo Pareto, is a power law probability distribution
| useful in many real world problems. Outside the field of
| economics it is generally referred to as the Bradford
| distribution. Pareto developed the distribution to describe
| the distribution of wealth in an economy. It has also found
| use in insurance, web page access statistics, oil field sizes,
| and many other problems, including the download frequency for
| projects in Sourceforge [1]_. It is one of the so-called
| "fat-tailed" distributions.
|
| References
| ----------
| .. [1] Francis Hunt and Paul Johnson, On the Pareto Distribution of
| Sourceforge projects.
| .. [2] Pareto, V. (1896). Course of Political Economy. Lausanne.
| .. [3] Reiss, R.D., Thomas, M.(2001), Statistical Analysis of Extreme
| Values, Birkhauser Verlag, Basel, pp 23-30.
| .. [4] Wikipedia, "Pareto distribution",
| https://en.wikipedia.org/wiki/Pareto_distribution
|
| Examples
| --------
| Draw samples from the distribution:
|
| >>> a, m = 3., 2. # shape and mode
| >>> s = (np.random.pareto(a, 1000) + 1) * m
|
| Display the histogram of the samples, along with the probability
| density function:
|
| >>> import matplotlib.pyplot as plt
| >>> count, bins, _ = plt.hist(s, 100, density=True)
| >>> fit = a*m**a / bins**(a+1)
| >>> plt.plot(bins, max(count)*fit/max(fit), linewidth=2, color='r')
| >>> plt.show()
|
| permutation(...)
| permutation(x)
|
| Randomly permute a sequence, or return a permuted range.
|
| If `x` is a multi-dimensional array, it is only shuffled along its
| first index.
|
| .. note::
| New code should use the ``permutation`` method of a ``default_rng()``
| instance instead; see `random-quick-start`.
|
| Parameters
| ----------
| x : int or array_like
| If `x` is an integer, randomly permute ``np.arange(x)``.
| If `x` is an array, make a copy and shuffle the elements
| randomly.
|
| Returns
| -------
| out : ndarray
| Permuted sequence or array range.
|
| See Also
| --------
| Generator.permutation: which should be used for new code.
|
| Examples
| --------
| >>> np.random.permutation(10)
| array([1, 7, 4, 3, 0, 9, 2, 5, 8, 6]) # random
|
| >>> np.random.permutation([1, 4, 9, 12, 15])
| array([15, 1, 9, 4, 12]) # random
|
| >>> arr = np.arange(9).reshape((3, 3))
| >>> np.random.permutation(arr)
| array([[6, 7, 8], # random
| [0, 1, 2],
| [3, 4, 5]])
|
| poisson(...)
| poisson(lam=1.0, size=None)
|
| Draw samples from a Poisson distribution.
|
| The Poisson distribution is the limit of the binomial distribution
| for large N.
|
| .. note::
| New code should use the ``poisson`` method of a ``default_rng()``
| instance instead; see `random-quick-start`.
|
| Parameters
| ----------
| lam : float or array_like of floats
| Expectation of interval, must be >= 0. A sequence of expectation
| intervals must be broadcastable over the requested size.
| size : int or tuple of ints, optional
| Output shape. If the given shape is, e.g., ``(m, n, k)``, then
| ``m * n * k`` samples are drawn. If size is ``None`` (default),
| a single value is returned if ``lam`` is a scalar. Otherwise,
| ``np.array(lam).size`` samples are drawn.
|
| Returns
| -------
| out : ndarray or scalar
| Drawn samples from the parameterized Poisson distribution.
|
| See Also
| --------
| Generator.poisson: which should be used for new code.
|
| Notes
| -----
| The Poisson distribution
|
| .. math:: f(k; \lambda)=\frac{\lambda^k e^{-\lambda}}{k!}
|
| For events with an expected separation :math:`\lambda` the Poisson
| distribution :math:`f(k; \lambda)` describes the probability of
| :math:`k` events occurring within the observed
| interval :math:`\lambda`.
|
| Because the output is limited to the range of the C int64 type, a
| ValueError is raised when `lam` is within 10 sigma of the maximum
| representable value.
|
| References
| ----------
| .. [1] Weisstein, Eric W. "Poisson Distribution."
| From MathWorld--A Wolfram Web Resource.
| http://mathworld.wolfram.com/PoissonDistribution.html
| .. [2] Wikipedia, "Poisson distribution",
| https://en.wikipedia.org/wiki/Poisson_distribution
|
| Examples
| --------
| Draw samples from the distribution:
|
| >>> import numpy as np
| >>> s = np.random.poisson(5, 10000)
|
| Display histogram of the sample:
|
| >>> import matplotlib.pyplot as plt
| >>> count, bins, ignored = plt.hist(s, 14, density=True)
| >>> plt.show()
|
| Draw each 100 values for lambda 100 and 500:
|
| >>> s = np.random.poisson(lam=(100., 500.), size=(100, 2))
|
| power(...)
| power(a, size=None)
|
| Draws samples in [0, 1] from a power distribution with positive
| exponent a - 1.
|
| Also known as the power function distribution.
|
| .. note::
| New code should use the ``power`` method of a ``default_rng()``
| instance instead; see `random-quick-start`.
|
| Parameters
| ----------
| a : float or array_like of floats
| Parameter of the distribution. Must be non-negative.
| size : int or tuple of ints, optional
| Output shape. If the given shape is, e.g., ``(m, n, k)``, then
| ``m * n * k`` samples are drawn. If size is ``None`` (default),
| a single value is returned if ``a`` is a scalar. Otherwise,
| ``np.array(a).size`` samples are drawn.
|
| Returns
| -------
| out : ndarray or scalar
| Drawn samples from the parameterized power distribution.
|
| Raises
| ------
| ValueError
| If a < 1.
|
| See Also
| --------
| Generator.power: which should be used for new code.
|
| Notes
| -----
| The probability density function is
|
| .. math:: P(x; a) = ax^{a-1}, 0 \le x \le 1, a>0.
|
| The power function distribution is just the inverse of the Pareto
| distribution. It may also be seen as a special case of the Beta
| distribution.
|
| It is used, for example, in modeling the over-reporting of insurance
| claims.
|
| References
| ----------
| .. [1] Christian Kleiber, Samuel Kotz, "Statistical size distributions
| in economics and actuarial sciences", Wiley, 2003.
| .. [2] Heckert, N. A. and Filliben, James J. "NIST Handbook 148:
| Dataplot Reference Manual, Volume 2: Let Subcommands and Library
| Functions", National Institute of Standards and Technology
| Handbook Series, June 2003.
| https://www.itl.nist.gov/div898/software/dataplot/refman2/auxillar/powpdf.pdf
|
| Examples
| --------
| Draw samples from the distribution:
|
| >>> a = 5. # shape
| >>> samples = 1000
| >>> s = np.random.power(a, samples)
|
| Display the histogram of the samples, along with
| the probability density function:
|
| >>> import matplotlib.pyplot as plt
| >>> count, bins, ignored = plt.hist(s, bins=30)
| >>> x = np.linspace(0, 1, 100)
| >>> y = a*x**(a-1.)
| >>> normed_y = samples*np.diff(bins)[0]*y
| >>> plt.plot(x, normed_y)
| >>> plt.show()
|
| Compare the power function distribution to the inverse of the Pareto.
|
| >>> from scipy import stats # doctest: +SKIP
| >>> rvs = np.random.power(5, 1000000)
| >>> rvsp = np.random.pareto(5, 1000000)
| >>> xx = np.linspace(0,1,100)
| >>> powpdf = stats.powerlaw.pdf(xx,5) # doctest: +SKIP
|
| >>> plt.figure()
| >>> plt.hist(rvs, bins=50, density=True)
| >>> plt.plot(xx,powpdf,'r-') # doctest: +SKIP
| >>> plt.title('np.random.power(5)')
|
| >>> plt.figure()
| >>> plt.hist(1./(1.+rvsp), bins=50, density=True)
| >>> plt.plot(xx,powpdf,'r-') # doctest: +SKIP
| >>> plt.title('inverse of 1 + np.random.pareto(5)')
|
| >>> plt.figure()
| >>> plt.hist(1./(1.+rvsp), bins=50, density=True)
| >>> plt.plot(xx,powpdf,'r-') # doctest: +SKIP
| >>> plt.title('inverse of stats.pareto(5)')
|
| rand(...)
| rand(d0, d1, ..., dn)
|
| Random values in a given shape.
|
| .. note::
| This is a convenience function for users porting code from Matlab,
| and wraps `random_sample`. That function takes a
| tuple to specify the size of the output, which is consistent with
| other NumPy functions like `numpy.zeros` and `numpy.ones`.
|
| Create an array of the given shape and populate it with
| random samples from a uniform distribution
| over ``[0, 1)``.
|
| Parameters
| ----------
| d0, d1, ..., dn : int, optional
| The dimensions of the returned array, must be non-negative.
| If no argument is given a single Python float is returned.
|
| Returns
| -------
| out : ndarray, shape ``(d0, d1, ..., dn)``
| Random values.
|
| See Also
| --------
| random
|
| Examples
| --------
| >>> np.random.rand(3,2)
| array([[ 0.14022471, 0.96360618], #random
| [ 0.37601032, 0.25528411], #random
| [ 0.49313049, 0.94909878]]) #random
|
| randint(...)
| randint(low, high=None, size=None, dtype=int)
|
| Return random integers from `low` (inclusive) to `high` (exclusive).
|
| Return random integers from the "discrete uniform" distribution of
| the specified dtype in the "half-open" interval [`low`, `high`). If
| `high` is None (the default), then results are from [0, `low`).
|
| .. note::
| New code should use the ``integers`` method of a ``default_rng()``
| instance instead; see `random-quick-start`.
|
| Parameters
| ----------
| low : int or array-like of ints
| Lowest (signed) integers to be drawn from the distribution (unless
| ``high=None``, in which case this parameter is one above the
| *highest* such integer).
| high : int or array-like of ints, optional
| If provided, one above the largest (signed) integer to be drawn
| from the distribution (see above for behavior if ``high=None``).
| If array-like, must contain integer values
| size : int or tuple of ints, optional
| Output shape. If the given shape is, e.g., ``(m, n, k)``, then
| ``m * n * k`` samples are drawn. Default is None, in which case a
| single value is returned.
| dtype : dtype, optional
| Desired dtype of the result. Byteorder must be native.
| The default value is int.
|
| .. versionadded:: 1.11.0
|
| Returns
| -------
| out : int or ndarray of ints
| `size`-shaped array of random integers from the appropriate
| distribution, or a single such random int if `size` not provided.
|
| See Also
| --------
| random_integers : similar to `randint`, only for the closed
| interval [`low`, `high`], and 1 is the lowest value if `high` is
| omitted.
| Generator.integers: which should be used for new code.
|
| Examples
| --------
| >>> np.random.randint(2, size=10)
| array([1, 0, 0, 0, 1, 1, 0, 0, 1, 0]) # random
| >>> np.random.randint(1, size=10)
| array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
|
| Generate a 2 x 4 array of ints between 0 and 4, inclusive:
|
| >>> np.random.randint(5, size=(2, 4))
| array([[4, 0, 2, 1], # random
| [3, 2, 2, 0]])
|
| Generate a 1 x 3 array with 3 different upper bounds
|
| >>> np.random.randint(1, [3, 5, 10])
| array([2, 2, 9]) # random
|
| Generate a 1 by 3 array with 3 different lower bounds
|
| >>> np.random.randint([1, 5, 7], 10)
| array([9, 8, 7]) # random
|
| Generate a 2 by 4 array using broadcasting with dtype of uint8
|
| >>> np.random.randint([1, 3, 5, 7], [[10], [20]], dtype=np.uint8)
| array([[ 8, 6, 9, 7], # random
| [ 1, 16, 9, 12]], dtype=uint8)
|
| randn(...)
| randn(d0, d1, ..., dn)
|
| Return a sample (or samples) from the "standard normal" distribution.
|
| .. note::
| This is a convenience function for users porting code from Matlab,
| and wraps `standard_normal`. That function takes a
| tuple to specify the size of the output, which is consistent with
| other NumPy functions like `numpy.zeros` and `numpy.ones`.
|
| .. note::
| New code should use the ``standard_normal`` method of a ``default_rng()``
| instance instead; see `random-quick-start`.
|
| If positive int_like arguments are provided, `randn` generates an array
| of shape ``(d0, d1, ..., dn)``, filled
| with random floats sampled from a univariate "normal" (Gaussian)
| distribution of mean 0 and variance 1. A single float randomly sampled
| from the distribution is returned if no argument is provided.
|
| Parameters
| ----------
| d0, d1, ..., dn : int, optional
| The dimensions of the returned array, must be non-negative.
| If no argument is given a single Python float is returned.
|
| Returns
| -------
| Z : ndarray or float
| A ``(d0, d1, ..., dn)``-shaped array of floating-point samples from
| the standard normal distribution, or a single such float if
| no parameters were supplied.
|
| See Also
| --------
| standard_normal : Similar, but takes a tuple as its argument.
| normal : Also accepts mu and sigma arguments.
| Generator.standard_normal: which should be used for new code.
|
| Notes
| -----
| For random samples from :math:`N(\mu, \sigma^2)`, use:
|
| ``sigma * np.random.randn(...) + mu``
|
| Examples
| --------
| >>> np.random.randn()
| 2.1923875335537315 # random
|
| Two-by-four array of samples from N(3, 6.25):
|
| >>> 3 + 2.5 * np.random.randn(2, 4)
| array([[-4.49401501, 4.00950034, -1.81814867, 7.29718677], # random
| [ 0.39924804, 4.68456316, 4.99394529, 4.84057254]]) # random
|
| random(...)
| random(size=None)
|
| Return random floats in the half-open interval [0.0, 1.0). Alias for
| `random_sample` to ease forward-porting to the new random API.
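 |
 | A quick illustrative sketch (not from the NumPy docstring): because
 | `random` is an alias for `random_sample`, the two produce identical
 | draws from the same legacy seed.
 |
 | >>> np.random.seed(12345)
 | >>> a = np.random.random(3)
 | >>> np.random.seed(12345)
 | >>> b = np.random.random_sample(3)
 | >>> np.allclose(a, b)
 | True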
|
| random_integers(...)
| random_integers(low, high=None, size=None)
|
| Random integers of type `np.int_` between `low` and `high`, inclusive.
|
| Return random integers of type `np.int_` from the "discrete uniform"
| distribution in the closed interval [`low`, `high`]. If `high` is
| None (the default), then results are from [1, `low`]. The `np.int_`
| type translates to the C long integer type and its precision
| is platform dependent.
|
| This function has been deprecated. Use randint instead.
|
| .. deprecated:: 1.11.0
|
| Parameters
| ----------
| low : int
| Lowest (signed) integer to be drawn from the distribution (unless
| ``high=None``, in which case this parameter is the *highest* such
| integer).
| high : int, optional
| If provided, the largest (signed) integer to be drawn from the
| distribution (see above for behavior if ``high=None``).
| size : int or tuple of ints, optional
| Output shape. If the given shape is, e.g., ``(m, n, k)``, then
| ``m * n * k`` samples are drawn. Default is None, in which case a
| single value is returned.
|
| Returns
| -------
| out : int or ndarray of ints
| `size`-shaped array of random integers from the appropriate
| distribution, or a single such random int if `size` not provided.
|
| See Also
| --------
| randint : Similar to `random_integers`, only for the half-open
| interval [`low`, `high`), and 0 is the lowest value if `high` is
| omitted.
|
| Notes
| -----
| To sample from N evenly spaced floating-point numbers between a and b,
| use::
|
| a + (b - a) * (np.random.random_integers(N) - 1) / (N - 1.)
|
| Examples
| --------
| >>> np.random.random_integers(5)
| 4 # random
| >>> type(np.random.random_integers(5))
| <class 'numpy.int64'>
| >>> np.random.random_integers(5, size=(3,2))
| array([[5, 4], # random
| [3, 3],
| [4, 5]])
|
| Choose five random numbers from the set of five evenly-spaced
| numbers between 0 and 2.5, inclusive (*i.e.*, from the set
| :math:`{0, 5/8, 10/8, 15/8, 20/8}`):
|
| >>> 2.5 * (np.random.random_integers(5, size=(5,)) - 1) / 4.
| array([ 0.625, 1.25 , 0.625, 0.625, 2.5 ]) # random
|
| Roll two six sided dice 1000 times and sum the results:
|
| >>> d1 = np.random.random_integers(1, 6, 1000)
| >>> d2 = np.random.random_integers(1, 6, 1000)
| >>> dsums = d1 + d2
|
| Display results as a histogram:
|
| >>> import matplotlib.pyplot as plt
| >>> count, bins, ignored = plt.hist(dsums, 11, density=True)
| >>> plt.show()
|
| random_sample(...)
| random_sample(size=None)
|
| Return random floats in the half-open interval [0.0, 1.0).
|
| Results are from the "continuous uniform" distribution over the
| stated interval. To sample :math:`Unif[a, b), b > a` multiply
| the output of `random_sample` by `(b-a)` and add `a`::
|
| (b - a) * random_sample() + a
|
| .. note::
| New code should use the ``random`` method of a ``default_rng()``
| instance instead; see `random-quick-start`.
|
| Parameters
| ----------
| size : int or tuple of ints, optional
| Output shape. If the given shape is, e.g., ``(m, n, k)``, then
| ``m * n * k`` samples are drawn. Default is None, in which case a
| single value is returned.
|
| Returns
| -------
| out : float or ndarray of floats
| Array of random floats of shape `size` (unless ``size=None``, in which
| case a single float is returned).
|
| See Also
| --------
| Generator.random: which should be used for new code.
|
| Examples
| --------
| >>> np.random.random_sample()
| 0.47108547995356098 # random
| >>> type(np.random.random_sample())
| <class 'float'>
| >>> np.random.random_sample((5,))
| array([ 0.30220482, 0.86820401, 0.1654503 , 0.11659149, 0.54323428]) # random
|
| Three-by-two array of random numbers from [-5, 0):
|
| >>> 5 * np.random.random_sample((3, 2)) - 5
| array([[-3.99149989, -0.52338984], # random
| [-2.99091858, -0.79479508],
| [-1.23204345, -1.75224494]])
|
| rayleigh(...)
| rayleigh(scale=1.0, size=None)
|
| Draw samples from a Rayleigh distribution.
|
| The :math:`\chi` and Weibull distributions are generalizations of the
| Rayleigh.
|
| .. note::
| New code should use the ``rayleigh`` method of a ``default_rng()``
| instance instead; see `random-quick-start`.
|
| Parameters
| ----------
| scale : float or array_like of floats, optional
| Scale, also equals the mode. Must be non-negative. Default is 1.
| size : int or tuple of ints, optional
| Output shape. If the given shape is, e.g., ``(m, n, k)``, then
| ``m * n * k`` samples are drawn. If size is ``None`` (default),
| a single value is returned if ``scale`` is a scalar. Otherwise,
| ``np.array(scale).size`` samples are drawn.
|
| Returns
| -------
| out : ndarray or scalar
| Drawn samples from the parameterized Rayleigh distribution.
|
| See Also
| --------
| Generator.rayleigh: which should be used for new code.
|
| Notes
| -----
| The probability density function for the Rayleigh distribution is
|
| .. math:: P(x;scale) = \frac{x}{scale^2}e^{\frac{-x^2}{2 \cdotp scale^2}}
|
| The Rayleigh distribution would arise, for example, if the East
| and North components of the wind velocity had identical zero-mean
| Gaussian distributions. Then the wind speed would have a Rayleigh
| distribution.
|
| References
| ----------
| .. [1] Brighton Webs Ltd., "Rayleigh Distribution,"
| https://web.archive.org/web/20090514091424/http://brighton-webs.co.uk:80/distributions/rayleigh.asp
| .. [2] Wikipedia, "Rayleigh distribution"
| https://en.wikipedia.org/wiki/Rayleigh_distribution
|
| Examples
| --------
| Draw values from the distribution and plot the histogram
|
| >>> from matplotlib.pyplot import hist
| >>> values = hist(np.random.rayleigh(3, 100000), bins=200, density=True)
|
| Wave heights tend to follow a Rayleigh distribution. If the mean wave
| height is 1 meter, what fraction of waves are likely to be larger than 3
| meters?
|
| >>> meanvalue = 1
| >>> modevalue = np.sqrt(2 / np.pi) * meanvalue
| >>> s = np.random.rayleigh(modevalue, 1000000)
|
| The percentage of waves larger than 3 meters is:
|
| >>> 100.*sum(s>3)/1000000.
| 0.087300000000000003 # random
|
| seed(...)
| seed(self, seed=None)
|
| Reseed a legacy MT19937 BitGenerator
|
| Notes
| -----
| This is a convenience, legacy function.
|
| The best practice is to **not** reseed a BitGenerator, rather to
| recreate a new one. This method is here for legacy reasons.
| This example demonstrates best practice.
|
| >>> from numpy.random import MT19937
| >>> from numpy.random import RandomState, SeedSequence
| >>> rs = RandomState(MT19937(SeedSequence(123456789)))
| # Later, you want to restart the stream
| >>> rs = RandomState(MT19937(SeedSequence(987654321)))
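 |
 | For legacy code that does reseed, a minimal sketch (illustrative
 | only): seeding with the same value reproduces the same draws.
 |
 | >>> np.random.seed(42)
 | >>> first = np.random.rand(3)
 | >>> np.random.seed(42)
 | >>> second = np.random.rand(3)
 | >>> np.allclose(first, second)
 | True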
|
| set_state(...)
| set_state(state)
|
| Set the internal state of the generator from a tuple.
|
| For use if one has reason to manually (re-)set the internal state of
| the bit generator used by the RandomState instance. By default,
| RandomState uses the "Mersenne Twister"[1]_ pseudo-random number
| generating algorithm.
|
| Parameters
| ----------
| state : {tuple(str, ndarray of 624 uints, int, int, float), dict}
| The `state` tuple has the following items:
|
| 1. the string 'MT19937', specifying the Mersenne Twister algorithm.
| 2. a 1-D array of 624 unsigned integers ``keys``.
| 3. an integer ``pos``.
| 4. an integer ``has_gauss``.
| 5. a float ``cached_gaussian``.
|
| If state is a dictionary, it is directly set using the BitGenerators
| `state` property.
|
| Returns
| -------
| out : None
| Returns 'None' on success.
|
| See Also
| --------
| get_state
|
| Notes
| -----
| `set_state` and `get_state` are not needed to work with any of the
| random distributions in NumPy. If the internal state is manually altered,
| the user should know exactly what he/she is doing.
|
| For backwards compatibility, the form (str, array of 624 uints, int) is
| also accepted although it is missing some information about the cached
| Gaussian value: ``state = ('MT19937', keys, pos)``.
|
| References
| ----------
| .. [1] M. Matsumoto and T. Nishimura, "Mersenne Twister: A
| 623-dimensionally equidistributed uniform pseudorandom number
| generator," *ACM Trans. on Modeling and Computer Simulation*,
| Vol. 8, No. 1, pp. 3-30, Jan. 1998.
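 |
 | Examples
 | --------
 | A minimal round-trip sketch (illustrative): capture the state with
 | `get_state`, draw, restore it with `set_state`, and the same draws
 | are reproduced.
 |
 | >>> state = np.random.get_state()
 | >>> first = np.random.random_sample(3)
 | >>> np.random.set_state(state)
 | >>> second = np.random.random_sample(3)
 | >>> np.allclose(first, second)
 | True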
|
| shuffle(...)
| shuffle(x)
|
| Modify a sequence in-place by shuffling its contents.
|
| This function only shuffles the array along the first axis of a
| multi-dimensional array. The order of sub-arrays is changed but
| their contents remains the same.
|
| .. note::
| New code should use the ``shuffle`` method of a ``default_rng()``
| instance instead; see `random-quick-start`.
|
| Parameters
| ----------
| x : array_like
| The array or list to be shuffled.
|
| Returns
| -------
| None
|
| See Also
| --------
| Generator.shuffle: which should be used for new code.
|
| Examples
| --------
| >>> arr = np.arange(10)
| >>> np.random.shuffle(arr)
| >>> arr
| [1 7 5 2 9 4 3 6 0 8] # random
|
| Multi-dimensional arrays are only shuffled along the first axis:
|
| >>> arr = np.arange(9).reshape((3, 3))
| >>> np.random.shuffle(arr)
| >>> arr
| array([[3, 4, 5], # random
| [6, 7, 8],
| [0, 1, 2]])
|
| standard_cauchy(...)
| standard_cauchy(size=None)
|
| Draw samples from a standard Cauchy distribution with mode = 0.
|
| Also known as the Lorentz distribution.
|
| .. note::
| New code should use the ``standard_cauchy`` method of a ``default_rng()``
| instance instead; see `random-quick-start`.
|
| Parameters
| ----------
| size : int or tuple of ints, optional
| Output shape. If the given shape is, e.g., ``(m, n, k)``, then
| ``m * n * k`` samples are drawn. Default is None, in which case a
| single value is returned.
|
| Returns
| -------
| samples : ndarray or scalar
| The drawn samples.
|
| See Also
| --------
| Generator.standard_cauchy: which should be used for new code.
|
| Notes
| -----
| The probability density function for the full Cauchy distribution is
|
| .. math:: P(x; x_0, \gamma) = \frac{1}{\pi \gamma \bigl[ 1+
| (\frac{x-x_0}{\gamma})^2 \bigr] }
|
| and the Standard Cauchy distribution just sets :math:`x_0=0` and
| :math:`\gamma=1`
|
| The Cauchy distribution arises in the solution to the driven harmonic
| oscillator problem, and also describes spectral line broadening. It
| also describes the distribution of values at which a line tilted at
| a random angle will cut the x axis.
|
| When studying hypothesis tests that assume normality, seeing how the
| tests perform on data from a Cauchy distribution is a good indicator of
| their sensitivity to a heavy-tailed distribution, since the Cauchy looks
| very much like a Gaussian distribution, but with heavier tails.
|
| References
| ----------
| .. [1] NIST/SEMATECH e-Handbook of Statistical Methods, "Cauchy
| Distribution",
| https://www.itl.nist.gov/div898/handbook/eda/section3/eda3663.htm
| .. [2] Weisstein, Eric W. "Cauchy Distribution." From MathWorld--A
| Wolfram Web Resource.
| http://mathworld.wolfram.com/CauchyDistribution.html
| .. [3] Wikipedia, "Cauchy distribution"
| https://en.wikipedia.org/wiki/Cauchy_distribution
|
| Examples
| --------
| Draw samples and plot the distribution:
|
| >>> import matplotlib.pyplot as plt
| >>> s = np.random.standard_cauchy(1000000)
| >>> s = s[(s>-25) & (s<25)] # truncate distribution so it plots well
| >>> plt.hist(s, bins=100)
| >>> plt.show()
|
| standard_exponential(...)
| standard_exponential(size=None)
|
| Draw samples from the standard exponential distribution.
|
| `standard_exponential` is identical to the exponential distribution
| with a scale parameter of 1.
|
| .. note::
| New code should use the ``standard_exponential`` method of a ``default_rng()``
| instance instead; see `random-quick-start`.
|
| Parameters
| ----------
| size : int or tuple of ints, optional
| Output shape. If the given shape is, e.g., ``(m, n, k)``, then
| ``m * n * k`` samples are drawn. Default is None, in which case a
| single value is returned.
|
| Returns
| -------
| out : float or ndarray
| Drawn samples.
|
| See Also
| --------
| Generator.standard_exponential: which should be used for new code.
|
| Examples
| --------
| Output a 3x8000 array:
|
| >>> n = np.random.standard_exponential((3, 8000))
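 |
 | A quick sanity check (illustrative sketch): with the scale fixed at
 | 1, the sample mean of a large draw is close to 1.
 |
 | >>> s = np.random.standard_exponential(100000)
 | >>> round(float(s.mean()), 1)
 | 1.0 # may vary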
|
| standard_gamma(...)
| standard_gamma(shape, size=None)
|
| Draw samples from a standard Gamma distribution.
|
| Samples are drawn from a Gamma distribution with specified parameters,
| shape (sometimes designated "k") and scale=1.
|
| .. note::
| New code should use the ``standard_gamma`` method of a ``default_rng()``
| instance instead; see `random-quick-start`.
|
| Parameters
| ----------
| shape : float or array_like of floats
| Parameter, must be non-negative.
| size : int or tuple of ints, optional
| Output shape. If the given shape is, e.g., ``(m, n, k)``, then
| ``m * n * k`` samples are drawn. If size is ``None`` (default),
| a single value is returned if ``shape`` is a scalar. Otherwise,
| ``np.array(shape).size`` samples are drawn.
|
| Returns
| -------
| out : ndarray or scalar
| Drawn samples from the parameterized standard gamma distribution.
|
| See Also
| --------
| scipy.stats.gamma : probability density function, distribution or
| cumulative density function, etc.
| Generator.standard_gamma: which should be used for new code.
|
| Notes
| -----
| The probability density for the Gamma distribution is
|
| .. math:: p(x) = x^{k-1}\frac{e^{-x/\theta}}{\theta^k\Gamma(k)},
|
| where :math:`k` is the shape and :math:`\theta` the scale,
| and :math:`\Gamma` is the Gamma function.
|
| The Gamma distribution is often used to model the times to failure of
| electronic components, and arises naturally in processes for which the
| waiting times between Poisson distributed events are relevant.
|
| References
| ----------
| .. [1] Weisstein, Eric W. "Gamma Distribution." From MathWorld--A
| Wolfram Web Resource.
| http://mathworld.wolfram.com/GammaDistribution.html
| .. [2] Wikipedia, "Gamma distribution",
| https://en.wikipedia.org/wiki/Gamma_distribution
|
| Examples
| --------
| Draw samples from the distribution:
|
| >>> shape, scale = 2., 1. # mean and width
| >>> s = np.random.standard_gamma(shape, 1000000)
|
| Display the histogram of the samples, along with
| the probability density function:
|
| >>> import matplotlib.pyplot as plt
| >>> import scipy.special as sps # doctest: +SKIP
| >>> count, bins, ignored = plt.hist(s, 50, density=True)
| >>> y = bins**(shape-1) * ((np.exp(-bins/scale))/ # doctest: +SKIP
| ... (sps.gamma(shape) * scale**shape))
| >>> plt.plot(bins, y, linewidth=2, color='r') # doctest: +SKIP
| >>> plt.show()
|
| standard_normal(...)
| standard_normal(size=None)
|
| Draw samples from a standard Normal distribution (mean=0, stdev=1).
|
| .. note::
| New code should use the ``standard_normal`` method of a ``default_rng()``
| instance instead; see `random-quick-start`.
|
| Parameters
| ----------
| size : int or tuple of ints, optional
| Output shape. If the given shape is, e.g., ``(m, n, k)``, then
| ``m * n * k`` samples are drawn. Default is None, in which case a
| single value is returned.
|
| Returns
| -------
| out : float or ndarray
| A floating-point array of shape ``size`` of drawn samples, or a
| single sample if ``size`` was not specified.
|
| See Also
| --------
| normal :
| Equivalent function with additional ``loc`` and ``scale`` arguments
| for setting the mean and standard deviation.
| Generator.standard_normal: which should be used for new code.
|
| Notes
| -----
| For random samples from :math:`N(\mu, \sigma^2)`, use one of::
|
| mu + sigma * np.random.standard_normal(size=...)
| np.random.normal(mu, sigma, size=...)
|
| Examples
| --------
| >>> np.random.standard_normal()
| 2.1923875335537315 #random
|
| >>> s = np.random.standard_normal(8000)
| >>> s
| array([ 0.6888893 , 0.78096262, -0.89086505, ..., 0.49876311, # random
| -0.38672696, -0.4685006 ]) # random
| >>> s.shape
| (8000,)
| >>> s = np.random.standard_normal(size=(3, 4, 2))
| >>> s.shape
| (3, 4, 2)
|
| Two-by-four array of samples from :math:`N(3, 6.25)`:
|
| >>> 3 + 2.5 * np.random.standard_normal(size=(2, 4))
| array([[-4.49401501, 4.00950034, -1.81814867, 7.29718677], # random
| [ 0.39924804, 4.68456316, 4.99394529, 4.84057254]]) # random
|
| standard_t(...)
| standard_t(df, size=None)
|
| Draw samples from a standard Student's t distribution with `df` degrees
| of freedom.
|
| A special case of the hyperbolic distribution. As `df` gets
| large, the result resembles that of the standard normal
| distribution (`standard_normal`).
|
| .. note::
| New code should use the ``standard_t`` method of a ``default_rng()``
| instance instead; see `random-quick-start`.
|
| Parameters
| ----------
| df : float or array_like of floats
| Degrees of freedom, must be > 0.
| size : int or tuple of ints, optional
| Output shape. If the given shape is, e.g., ``(m, n, k)``, then
| ``m * n * k`` samples are drawn. If size is ``None`` (default),
| a single value is returned if ``df`` is a scalar. Otherwise,
| ``np.array(df).size`` samples are drawn.
|
| Returns
| -------
| out : ndarray or scalar
| Drawn samples from the parameterized standard Student's t distribution.
|
| See Also
| --------
| Generator.standard_t: which should be used for new code.
|
| Notes
| -----
| The probability density function for the t distribution is
|
| .. math:: P(x, df) = \frac{\Gamma(\frac{df+1}{2})}{\sqrt{\pi df}
| \Gamma(\frac{df}{2})}\Bigl( 1+\frac{x^2}{df} \Bigr)^{-(df+1)/2}
|
| The t test is based on an assumption that the data come from a
| Normal distribution. The t test provides a way to test whether
| the sample mean (that is the mean calculated from the data) is
| a good estimate of the true mean.
|
| The derivation of the t-distribution was first published in
| 1908 by William Gosset while working for the Guinness Brewery
| in Dublin. Due to proprietary issues, he had to publish under
| a pseudonym, and so he used the name Student.
|
| References
| ----------
| .. [1] Dalgaard, Peter, "Introductory Statistics With R",
| Springer, 2002.
| .. [2] Wikipedia, "Student's t-distribution"
| https://en.wikipedia.org/wiki/Student's_t-distribution
|
| Examples
| --------
| From Dalgaard page 83 [1]_, suppose the daily energy intake for 11
| women in kilojoules (kJ) is:
|
| >>> intake = np.array([5260., 5470, 5640, 6180, 6390, 6515, 6805, 7515, \
| ... 7515, 8230, 8770])
|
| Does their energy intake deviate systematically from the recommended
| value of 7725 kJ?
|
| We have 10 degrees of freedom, so is the sample mean within 95% of the
| recommended value?
|
| >>> s = np.random.standard_t(10, size=100000)
| >>> np.mean(intake)
| 6753.636363636364
| >>> intake.std(ddof=1)
| 1142.1232221373727
|
| Calculate the t statistic, setting the ddof parameter to the unbiased
| value so the divisor in the standard deviation will be degrees of
| freedom, N-1.
|
| >>> t = (np.mean(intake)-7725)/(intake.std(ddof=1)/np.sqrt(len(intake)))
| >>> import matplotlib.pyplot as plt
| >>> h = plt.hist(s, bins=100, density=True)
|
| For a one-sided t-test, how far out in the distribution does the t
| statistic appear?
|
| >>> np.sum(s<t) / float(len(s))
| 0.0090699999999999999 #random
|
 | So the one-sided p-value is about 0.009: under the null hypothesis,
 | a sample mean this far below the recommended value would occur less
 | than 1% of the time, which is strong evidence against the null.
|
| tomaxint(...)
| tomaxint(size=None)
|
| Return a sample of uniformly distributed random integers in the interval
| [0, ``np.iinfo(np.int_).max``]. The `np.int_` type translates to the C long
| integer type and its precision is platform dependent.
|
| Parameters
| ----------
| size : int or tuple of ints, optional
| Output shape. If the given shape is, e.g., ``(m, n, k)``, then
| ``m * n * k`` samples are drawn. Default is None, in which case a
| single value is returned.
|
| Returns
| -------
| out : ndarray
| Drawn samples, with shape `size`.
|
| See Also
| --------
| randint : Uniform sampling over a given half-open interval of integers.
| random_integers : Uniform sampling over a given closed interval of
| integers.
|
| Examples
| --------
| >>> rs = np.random.RandomState() # need a RandomState object
| >>> rs.tomaxint((2,2,2))
| array([[[1170048599, 1600360186], # random
| [ 739731006, 1947757578]],
| [[1871712945, 752307660],
| [1601631370, 1479324245]]])
| >>> rs.tomaxint((2,2,2)) < np.iinfo(np.int_).max
| array([[[ True, True],
| [ True, True]],
| [[ True, True],
| [ True, True]]])
|
| triangular(...)
| triangular(left, mode, right, size=None)
|
| Draw samples from the triangular distribution over the
| interval ``[left, right]``.
|
| The triangular distribution is a continuous probability
| distribution with lower limit left, peak at mode, and upper
| limit right. Unlike the other distributions, these parameters
| directly define the shape of the pdf.
|
| .. note::
| New code should use the ``triangular`` method of a ``default_rng()``
| instance instead; see `random-quick-start`.
|
| Parameters
| ----------
| left : float or array_like of floats
| Lower limit.
| mode : float or array_like of floats
| The value where the peak of the distribution occurs.
| The value must fulfill the condition ``left <= mode <= right``.
| right : float or array_like of floats
| Upper limit, must be larger than `left`.
| size : int or tuple of ints, optional
| Output shape. If the given shape is, e.g., ``(m, n, k)``, then
| ``m * n * k`` samples are drawn. If size is ``None`` (default),
| a single value is returned if ``left``, ``mode``, and ``right``
| are all scalars. Otherwise, ``np.broadcast(left, mode, right).size``
| samples are drawn.
|
| Returns
| -------
| out : ndarray or scalar
| Drawn samples from the parameterized triangular distribution.
|
| See Also
| --------
| Generator.triangular: which should be used for new code.
|
| Notes
| -----
| The probability density function for the triangular distribution is
|
| .. math:: P(x;l, m, r) = \begin{cases}
| \frac{2(x-l)}{(r-l)(m-l)}& \text{for $l \leq x \leq m$},\\
| \frac{2(r-x)}{(r-l)(r-m)}& \text{for $m \leq x \leq r$},\\
| 0& \text{otherwise}.
| \end{cases}
|
| The triangular distribution is often used in ill-defined
| problems where the underlying distribution is not known, but
| some knowledge of the limits and mode exists. Often it is used
| in simulations.
|
| References
| ----------
| .. [1] Wikipedia, "Triangular distribution"
| https://en.wikipedia.org/wiki/Triangular_distribution
|
| Examples
| --------
| Draw values from the distribution and plot the histogram:
|
| >>> import matplotlib.pyplot as plt
| >>> h = plt.hist(np.random.triangular(-3, 0, 8, 100000), bins=200,
| ... density=True)
| >>> plt.show()
|
| uniform(...)
| uniform(low=0.0, high=1.0, size=None)
|
| Draw samples from a uniform distribution.
|
| Samples are uniformly distributed over the half-open interval
| ``[low, high)`` (includes low, but excludes high). In other words,
| any value within the given interval is equally likely to be drawn
| by `uniform`.
|
| .. note::
| New code should use the ``uniform`` method of a ``default_rng()``
| instance instead; see `random-quick-start`.
|
| Parameters
| ----------
| low : float or array_like of floats, optional
| Lower boundary of the output interval. All values generated will be
| greater than or equal to low. The default value is 0.
| high : float or array_like of floats
| Upper boundary of the output interval. All values generated will be
| less than high. The default value is 1.0.
| size : int or tuple of ints, optional
| Output shape. If the given shape is, e.g., ``(m, n, k)``, then
| ``m * n * k`` samples are drawn. If size is ``None`` (default),
| a single value is returned if ``low`` and ``high`` are both scalars.
| Otherwise, ``np.broadcast(low, high).size`` samples are drawn.
|
| Returns
| -------
| out : ndarray or scalar
| Drawn samples from the parameterized uniform distribution.
|
| See Also
| --------
| randint : Discrete uniform distribution, yielding integers.
| random_integers : Discrete uniform distribution over the closed
| interval ``[low, high]``.
| random_sample : Floats uniformly distributed over ``[0, 1)``.
| random : Alias for `random_sample`.
| rand : Convenience function that accepts dimensions as input, e.g.,
| ``rand(2,2)`` would generate a 2-by-2 array of floats,
| uniformly distributed over ``[0, 1)``.
| Generator.uniform: which should be used for new code.
|
| Notes
| -----
| The probability density function of the uniform distribution is
|
| .. math:: p(x) = \frac{1}{b - a}
|
| anywhere within the interval ``[a, b)``, and zero elsewhere.
|
| When ``high`` == ``low``, values of ``low`` will be returned.
 | If ``high`` < ``low``, the results are officially undefined and may
 | eventually raise an error; do not rely on any particular behavior
 | when the arguments violate ``low <= high``.
|
| Examples
| --------
| Draw samples from the distribution:
|
| >>> s = np.random.uniform(-1,0,1000)
|
| All values are within the given interval:
|
| >>> np.all(s >= -1)
| True
| >>> np.all(s < 0)
| True
|
| Display the histogram of the samples, along with the
| probability density function:
|
| >>> import matplotlib.pyplot as plt
| >>> count, bins, ignored = plt.hist(s, 15, density=True)
| >>> plt.plot(bins, np.ones_like(bins), linewidth=2, color='r')
| >>> plt.show()
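 |
 | An edge-case sketch (illustrative): as noted above, when
 | ``high == low`` every draw equals ``low``.
 |
 | >>> np.random.uniform(2.0, 2.0, size=3)
 | array([2., 2., 2.])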
|
| vonmises(...)
| vonmises(mu, kappa, size=None)
|
| Draw samples from a von Mises distribution.
|
| Samples are drawn from a von Mises distribution with specified mode
| (mu) and dispersion (kappa), on the interval [-pi, pi].
|
| The von Mises distribution (also known as the circular normal
| distribution) is a continuous probability distribution on the unit
| circle. It may be thought of as the circular analogue of the normal
| distribution.
|
| .. note::
| New code should use the ``vonmises`` method of a ``default_rng()``
| instance instead; see `random-quick-start`.
|
| Parameters
| ----------
| mu : float or array_like of floats
| Mode ("center") of the distribution.
| kappa : float or array_like of floats
| Dispersion of the distribution, has to be >=0.
| size : int or tuple of ints, optional
| Output shape. If the given shape is, e.g., ``(m, n, k)``, then
| ``m * n * k`` samples are drawn. If size is ``None`` (default),
| a single value is returned if ``mu`` and ``kappa`` are both scalars.
| Otherwise, ``np.broadcast(mu, kappa).size`` samples are drawn.
|
| Returns
| -------
| out : ndarray or scalar
| Drawn samples from the parameterized von Mises distribution.
|
| See Also
| --------
| scipy.stats.vonmises : probability density function, distribution, or
| cumulative density function, etc.
| Generator.vonmises: which should be used for new code.
|
| Notes
| -----
| The probability density for the von Mises distribution is
|
| .. math:: p(x) = \frac{e^{\kappa cos(x-\mu)}}{2\pi I_0(\kappa)},
|
| where :math:`\mu` is the mode and :math:`\kappa` the dispersion,
| and :math:`I_0(\kappa)` is the modified Bessel function of order 0.
|
| The von Mises is named for Richard Edler von Mises, who was born in
 | Austria-Hungary, in what is now Ukraine. He fled to the United
| States in 1939 and became a professor at Harvard. He worked in
| probability theory, aerodynamics, fluid mechanics, and philosophy of
| science.
|
| References
| ----------
| .. [1] Abramowitz, M. and Stegun, I. A. (Eds.). "Handbook of
| Mathematical Functions with Formulas, Graphs, and Mathematical
| Tables, 9th printing," New York: Dover, 1972.
| .. [2] von Mises, R., "Mathematical Theory of Probability
| and Statistics", New York: Academic Press, 1964.
|
| Examples
| --------
| Draw samples from the distribution:
|
| >>> mu, kappa = 0.0, 4.0 # mean and dispersion
| >>> s = np.random.vonmises(mu, kappa, 1000)
|
| Display the histogram of the samples, along with
| the probability density function:
|
| >>> import matplotlib.pyplot as plt
| >>> from scipy.special import i0 # doctest: +SKIP
| >>> plt.hist(s, 50, density=True)
| >>> x = np.linspace(-np.pi, np.pi, num=51)
| >>> y = np.exp(kappa*np.cos(x-mu))/(2*np.pi*i0(kappa)) # doctest: +SKIP
| >>> plt.plot(x, y, linewidth=2, color='r') # doctest: +SKIP
| >>> plt.show()
|
| wald(...)
| wald(mean, scale, size=None)
|
| Draw samples from a Wald, or inverse Gaussian, distribution.
|
| As the scale approaches infinity, the distribution becomes more like a
| Gaussian. Some references claim that the Wald is an inverse Gaussian
| with mean equal to 1, but this is by no means universal.
|
| The inverse Gaussian distribution was first studied in relationship to
| Brownian motion. In 1956 M.C.K. Tweedie used the name inverse Gaussian
| because there is an inverse relationship between the time to cover a
| unit distance and distance covered in unit time.
|
| .. note::
| New code should use the ``wald`` method of a ``default_rng()``
| instance instead; see `random-quick-start`.
|
| Parameters
| ----------
| mean : float or array_like of floats
| Distribution mean, must be > 0.
| scale : float or array_like of floats
| Scale parameter, must be > 0.
| size : int or tuple of ints, optional
| Output shape. If the given shape is, e.g., ``(m, n, k)``, then
| ``m * n * k`` samples are drawn. If size is ``None`` (default),
| a single value is returned if ``mean`` and ``scale`` are both scalars.
| Otherwise, ``np.broadcast(mean, scale).size`` samples are drawn.
|
| Returns
| -------
| out : ndarray or scalar
| Drawn samples from the parameterized Wald distribution.
|
| See Also
| --------
| Generator.wald: which should be used for new code.
|
| Notes
| -----
| The probability density function for the Wald distribution is
|
| .. math:: P(x;mean,scale) = \sqrt{\frac{scale}{2\pi x^3}}e^
| \frac{-scale(x-mean)^2}{2\cdotp mean^2x}
|
 |          As noted above, the inverse Gaussian distribution first arose
| from attempts to model Brownian motion. It is also a
| competitor to the Weibull for use in reliability modeling and
| modeling stock returns and interest rate processes.
|
| References
| ----------
| .. [1] Brighton Webs Ltd., Wald Distribution,
| https://web.archive.org/web/20090423014010/http://www.brighton-webs.co.uk:80/distributions/wald.asp
| .. [2] Chhikara, Raj S., and Folks, J. Leroy, "The Inverse Gaussian
| Distribution: Theory : Methodology, and Applications", CRC Press,
| 1988.
| .. [3] Wikipedia, "Inverse Gaussian distribution"
| https://en.wikipedia.org/wiki/Inverse_Gaussian_distribution
|
| Examples
| --------
| Draw values from the distribution and plot the histogram:
|
| >>> import matplotlib.pyplot as plt
| >>> h = plt.hist(np.random.wald(3, 2, 100000), bins=200, density=True)
| >>> plt.show()
|
| weibull(...)
| weibull(a, size=None)
|
| Draw samples from a Weibull distribution.
|
| Draw samples from a 1-parameter Weibull distribution with the given
| shape parameter `a`.
|
| .. math:: X = (-ln(U))^{1/a}
|
| Here, U is drawn from the uniform distribution over (0,1].
|
| The more common 2-parameter Weibull, including a scale parameter
| :math:`\lambda` is just :math:`X = \lambda(-ln(U))^{1/a}`.
|
| .. note::
| New code should use the ``weibull`` method of a ``default_rng()``
| instance instead; see `random-quick-start`.
|
| Parameters
| ----------
| a : float or array_like of floats
| Shape parameter of the distribution. Must be nonnegative.
| size : int or tuple of ints, optional
| Output shape. If the given shape is, e.g., ``(m, n, k)``, then
| ``m * n * k`` samples are drawn. If size is ``None`` (default),
| a single value is returned if ``a`` is a scalar. Otherwise,
| ``np.array(a).size`` samples are drawn.
|
| Returns
| -------
| out : ndarray or scalar
| Drawn samples from the parameterized Weibull distribution.
|
| See Also
| --------
| scipy.stats.weibull_max
| scipy.stats.weibull_min
| scipy.stats.genextreme
| gumbel
| Generator.weibull: which should be used for new code.
|
| Notes
| -----
| The Weibull (or Type III asymptotic extreme value distribution
| for smallest values, SEV Type III, or Rosin-Rammler
| distribution) is one of a class of Generalized Extreme Value
| (GEV) distributions used in modeling extreme value problems.
| This class includes the Gumbel and Frechet distributions.
|
| The probability density for the Weibull distribution is
|
| .. math:: p(x) = \frac{a}
| {\lambda}(\frac{x}{\lambda})^{a-1}e^{-(x/\lambda)^a},
|
| where :math:`a` is the shape and :math:`\lambda` the scale.
|
| The function has its peak (the mode) at
| :math:`\lambda(\frac{a-1}{a})^{1/a}`.
|
| When ``a = 1``, the Weibull distribution reduces to the exponential
| distribution.
|
| References
| ----------
| .. [1] Waloddi Weibull, Royal Technical University, Stockholm,
| 1939 "A Statistical Theory Of The Strength Of Materials",
| Ingeniorsvetenskapsakademiens Handlingar Nr 151, 1939,
| Generalstabens Litografiska Anstalts Forlag, Stockholm.
| .. [2] Waloddi Weibull, "A Statistical Distribution Function of
| Wide Applicability", Journal Of Applied Mechanics ASME Paper
| 1951.
| .. [3] Wikipedia, "Weibull distribution",
| https://en.wikipedia.org/wiki/Weibull_distribution
|
| Examples
| --------
| Draw samples from the distribution:
|
| >>> a = 5. # shape
| >>> s = np.random.weibull(a, 1000)
|
| Display the histogram of the samples, along with
| the probability density function:
|
| >>> import matplotlib.pyplot as plt
| >>> x = np.arange(1,100.)/50.
| >>> def weib(x,n,a):
| ... return (a / n) * (x / n)**(a - 1) * np.exp(-(x / n)**a)
|
| >>> count, bins, ignored = plt.hist(np.random.weibull(5.,1000))
| >>> x = np.arange(1,100.)/50.
| >>> scale = count.max()/weib(x, 1., 5.).max()
| >>> plt.plot(x, weib(x, 1., 5.)*scale)
| >>> plt.show()
|
| zipf(...)
| zipf(a, size=None)
|
| Draw samples from a Zipf distribution.
|
| Samples are drawn from a Zipf distribution with specified parameter
| `a` > 1.
|
| The Zipf distribution (also known as the zeta distribution) is a
 |          discrete probability distribution that satisfies Zipf's law: the
| frequency of an item is inversely proportional to its rank in a
| frequency table.
|
| .. note::
| New code should use the ``zipf`` method of a ``default_rng()``
| instance instead; see `random-quick-start`.
|
| Parameters
| ----------
| a : float or array_like of floats
| Distribution parameter. Must be greater than 1.
| size : int or tuple of ints, optional
| Output shape. If the given shape is, e.g., ``(m, n, k)``, then
| ``m * n * k`` samples are drawn. If size is ``None`` (default),
| a single value is returned if ``a`` is a scalar. Otherwise,
| ``np.array(a).size`` samples are drawn.
|
| Returns
| -------
| out : ndarray or scalar
| Drawn samples from the parameterized Zipf distribution.
|
| See Also
| --------
| scipy.stats.zipf : probability density function, distribution, or
| cumulative density function, etc.
| Generator.zipf: which should be used for new code.
|
| Notes
| -----
| The probability density for the Zipf distribution is
|
| .. math:: p(x) = \frac{x^{-a}}{\zeta(a)},
|
| where :math:`\zeta` is the Riemann Zeta function.
|
| It is named for the American linguist George Kingsley Zipf, who noted
| that the frequency of any word in a sample of a language is inversely
| proportional to its rank in the frequency table.
|
| References
| ----------
| .. [1] Zipf, G. K., "Selected Studies of the Principle of Relative
| Frequency in Language," Cambridge, MA: Harvard Univ. Press,
| 1932.
|
| Examples
| --------
| Draw samples from the distribution:
|
| >>> a = 2. # parameter
| >>> s = np.random.zipf(a, 1000)
|
| Display the histogram of the samples, along with
| the probability density function:
|
| >>> import matplotlib.pyplot as plt
| >>> from scipy import special # doctest: +SKIP
|
| Truncate s values at 50 so plot is interesting:
|
| >>> count, bins, ignored = plt.hist(s[s<50], 50, density=True)
| >>> x = np.arange(1., 50.)
| >>> y = x**(-a) / special.zetac(a) # doctest: +SKIP
| >>> plt.plot(x, y/max(y), linewidth=2, color='r') # doctest: +SKIP
| >>> plt.show()
|
| ----------------------------------------------------------------------
| Static methods defined here:
|
| __new__(*args, **kwargs) from builtins.type
| Create and return a new object. See help(type) for accurate signature.
|
| ----------------------------------------------------------------------
| Data and other attributes defined here:
|
| __pyx_vtable__ = <capsule object NULL>
class SFC64(numpy.random._bit_generator.BitGenerator)
| SFC64(seed=None)
|
| BitGenerator for Chris Doty-Humphrey's Small Fast Chaotic PRNG.
|
| Parameters
| ----------
| seed : {None, int, array_like[ints], SeedSequence}, optional
| A seed to initialize the `BitGenerator`. If None, then fresh,
| unpredictable entropy will be pulled from the OS. If an ``int`` or
| ``array_like[ints]`` is passed, then it will be passed to
| `SeedSequence` to derive the initial `BitGenerator` state. One may also
| pass in a `SeedSequence` instance.
|
| Notes
| -----
| ``SFC64`` is a 256-bit implementation of Chris Doty-Humphrey's Small Fast
| Chaotic PRNG ([1]_). ``SFC64`` has a few different cycles that one might be
| on, depending on the seed; the expected period will be about
| :math:`2^{255}` ([2]_). ``SFC64`` incorporates a 64-bit counter which means
| that the absolute minimum cycle length is :math:`2^{64}` and that distinct
| seeds will not run into each other for at least :math:`2^{64}` iterations.
|
| ``SFC64`` provides a capsule containing function pointers that produce
 |  doubles, and unsigned 32 and 64-bit integers. These are not
| directly consumable in Python and must be consumed by a ``Generator``
| or similar object that supports low-level access.
|
| **State and Seeding**
|
| The ``SFC64`` state vector consists of 4 unsigned 64-bit values. The last
| is a 64-bit counter that increments by 1 each iteration.
|
| The input seed is processed by `SeedSequence` to generate the first
| 3 values, then the ``SFC64`` algorithm is iterated a small number of times
| to mix.
|
| **Compatibility Guarantee**
|
| ``SFC64`` makes a guarantee that a fixed seed will always produce the same
| random integer stream.
|
| References
| ----------
| .. [1] `"PractRand"
| <http://pracrand.sourceforge.net/RNG_engines.txt>`_
| .. [2] `"Random Invertible Mapping Statistics"
| <http://www.pcg-random.org/posts/random-invertible-mapping-statistics.html>`_
|
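 |  Examples
 |  --------
 |  A minimal usage sketch (assumes ``Generator`` and ``SFC64`` are
 |  importable from ``numpy.random``):
 |
 |  >>> from numpy.random import Generator, SFC64
 |  >>> rng = Generator(SFC64(1234))   # wrap the bit generator in a Generator
 |  >>> rng.standard_normal(2)         # doctest: +SKIP
 |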
| Method resolution order:
| SFC64
| numpy.random._bit_generator.BitGenerator
| builtins.object
|
| Methods defined here:
|
| __init__(self, /, *args, **kwargs)
| Initialize self. See help(type(self)) for accurate signature.
|
| __reduce_cython__(...)
|
| __setstate_cython__(...)
|
| ----------------------------------------------------------------------
| Static methods defined here:
|
| __new__(*args, **kwargs) from builtins.type
| Create and return a new object. See help(type) for accurate signature.
|
| ----------------------------------------------------------------------
| Data descriptors defined here:
|
| state
| Get or set the PRNG state
|
| Returns
| -------
| state : dict
| Dictionary containing the information required to describe the
| state of the PRNG
|
| ----------------------------------------------------------------------
| Data and other attributes defined here:
|
| __pyx_vtable__ = <capsule object NULL>
|
| ----------------------------------------------------------------------
| Methods inherited from numpy.random._bit_generator.BitGenerator:
|
| __getstate__(...)
|
| __reduce__(...)
| Helper for pickle.
|
| __setstate__(...)
|
| random_raw(...)
| random_raw(self, size=None)
|
| Return randoms as generated by the underlying BitGenerator
|
| Parameters
| ----------
| size : int or tuple of ints, optional
| Output shape. If the given shape is, e.g., ``(m, n, k)``, then
| ``m * n * k`` samples are drawn. Default is None, in which case a
| single value is returned.
| output : bool, optional
| Output values. Used for performance testing since the generated
| values are not returned.
|
| Returns
| -------
| out : uint or ndarray
| Drawn samples.
|
| Notes
| -----
 |          This method directly exposes the raw underlying pseudo-random
| number generator. All values are returned as unsigned 64-bit
| values irrespective of the number of bits produced by the PRNG.
|
| See the class docstring for the number of bits returned.
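 |
 |          Examples
 |          --------
 |          A minimal usage sketch (assumes ``SFC64`` is importable from
 |          ``numpy.random``):
 |
 |          >>> from numpy.random import SFC64
 |          >>> bg = SFC64(1234)
 |          >>> raw = bg.random_raw(2)   # two raw unsigned 64-bit values
 |          >>> raw.shape
 |          (2,)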
|
| ----------------------------------------------------------------------
| Data descriptors inherited from numpy.random._bit_generator.BitGenerator:
|
| capsule
|
| cffi
| CFFI interface
|
| Returns
| -------
| interface : namedtuple
| Named tuple containing CFFI wrapper
|
| * state_address - Memory address of the state struct
| * state - pointer to the state struct
| * next_uint64 - function pointer to produce 64 bit integers
| * next_uint32 - function pointer to produce 32 bit integers
| * next_double - function pointer to produce doubles
| * bitgen - pointer to the bit generator struct
|
| ctypes
| ctypes interface
|
| Returns
| -------
| interface : namedtuple
| Named tuple containing ctypes wrapper
|
| * state_address - Memory address of the state struct
| * state - pointer to the state struct
| * next_uint64 - function pointer to produce 64 bit integers
| * next_uint32 - function pointer to produce 32 bit integers
| * next_double - function pointer to produce doubles
| * bitgen - pointer to the bit generator struct
|
| lock
class SeedSequence(builtins.object)
| SeedSequence(entropy=None, *, spawn_key=(), pool_size=4)
|
| SeedSequence mixes sources of entropy in a reproducible way to set the
| initial state for independent and very probably non-overlapping
| BitGenerators.
|
| Once the SeedSequence is instantiated, you can call the `generate_state`
| method to get an appropriately sized seed. Calling `spawn(n) <spawn>` will
| create ``n`` SeedSequences that can be used to seed independent
| BitGenerators, i.e. for different threads.
|
| Parameters
| ----------
| entropy : {None, int, sequence[int]}, optional
| The entropy for creating a `SeedSequence`.
| spawn_key : {(), sequence[int]}, optional
| A third source of entropy, used internally when calling
| `SeedSequence.spawn`
| pool_size : {int}, optional
| Size of the pooled entropy to store. Default is 4 to give a 128-bit
| entropy pool. 8 (for 256 bits) is another reasonable choice if working
| with larger PRNGs, but there is very little to be gained by selecting
| another value.
| n_children_spawned : {int}, optional
| The number of children already spawned. Only pass this if
| reconstructing a `SeedSequence` from a serialized form.
|
| Notes
| -----
|
| Best practice for achieving reproducible bit streams is to use
| the default ``None`` for the initial entropy, and then use
| `SeedSequence.entropy` to log/pickle the `entropy` for reproducibility:
|
| >>> sq1 = np.random.SeedSequence()
| >>> sq1.entropy
| 243799254704924441050048792905230269161 # random
| >>> sq2 = np.random.SeedSequence(sq1.entropy)
| >>> np.all(sq1.generate_state(10) == sq2.generate_state(10))
| True
|
| Methods defined here:
|
| __init__(self, /, *args, **kwargs)
| Initialize self. See help(type(self)) for accurate signature.
|
| __reduce__ = __reduce_cython__(...)
|
| __repr__(self, /)
| Return repr(self).
|
| __setstate__ = __setstate_cython__(...)
|
| generate_state(...)
| generate_state(n_words, dtype=np.uint32)
|
| Return the requested number of words for PRNG seeding.
|
| A BitGenerator should call this method in its constructor with
| an appropriate `n_words` parameter to properly seed itself.
|
| Parameters
| ----------
| n_words : int
| dtype : np.uint32 or np.uint64, optional
| The size of each word. This should only be either `uint32` or
| `uint64`. Strings (`'uint32'`, `'uint64'`) are fine. Note that
| requesting `uint64` will draw twice as many bits as `uint32` for
| the same `n_words`. This is a convenience for `BitGenerator`s that
| express their states as `uint64` arrays.
|
| Returns
| -------
| state : uint32 or uint64 array, shape=(n_words,)
|
| spawn(...)
| spawn(n_children)
|
| Spawn a number of child `SeedSequence` s by extending the
| `spawn_key`.
|
| Parameters
| ----------
| n_children : int
|
| Returns
| -------
| seqs : list of `SeedSequence` s
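 |
 |          Examples
 |          --------
 |          A minimal usage sketch (assumes ``import numpy as np``):
 |
 |          >>> parent = np.random.SeedSequence(12345)
 |          >>> children = parent.spawn(3)   # three independent child sequences
 |          >>> len(children)
 |          3
 |          >>> rngs = [np.random.default_rng(child) for child in children]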
|
| ----------------------------------------------------------------------
| Static methods defined here:
|
| __new__(*args, **kwargs) from builtins.type
| Create and return a new object. See help(type) for accurate signature.
|
| ----------------------------------------------------------------------
| Data descriptors defined here:
|
| entropy
|
| n_children_spawned
|
| pool
|
| pool_size
|
| spawn_key
|
| state
|
| ----------------------------------------------------------------------
| Data and other attributes defined here:
|
| __pyx_vtable__ = <capsule object NULL>
FUNCTIONS
beta(...) method of numpy.random.mtrand.RandomState instance
beta(a, b, size=None)
Draw samples from a Beta distribution.
The Beta distribution is a special case of the Dirichlet distribution,
and is related to the Gamma distribution. It has the probability
distribution function
.. math:: f(x; a,b) = \frac{1}{B(\alpha, \beta)} x^{\alpha - 1}
(1 - x)^{\beta - 1},
where the normalization, B, is the beta function,
.. math:: B(\alpha, \beta) = \int_0^1 t^{\alpha - 1}
(1 - t)^{\beta - 1} dt.
It is often seen in Bayesian inference and order statistics.
.. note::
New code should use the ``beta`` method of a ``default_rng()``
instance instead; see `random-quick-start`.
Parameters
----------
a : float or array_like of floats
Alpha, positive (>0).
b : float or array_like of floats
Beta, positive (>0).
size : int or tuple of ints, optional
Output shape. If the given shape is, e.g., ``(m, n, k)``, then
``m * n * k`` samples are drawn. If size is ``None`` (default),
a single value is returned if ``a`` and ``b`` are both scalars.
Otherwise, ``np.broadcast(a, b).size`` samples are drawn.
Returns
-------
out : ndarray or scalar
Drawn samples from the parameterized beta distribution.
See Also
--------
Generator.beta: which should be used for new code.
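        Examples
        --------
        A minimal usage sketch (assumes ``import numpy as np``):
        >>> s = np.random.beta(2., 5., 1000)   # a=2, b=5
        >>> np.all(s >= 0)
        True
        >>> np.all(s <= 1)
        True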
binomial(...) method of numpy.random.mtrand.RandomState instance
binomial(n, p, size=None)
Draw samples from a binomial distribution.
Samples are drawn from a binomial distribution with specified
parameters, n trials and p probability of success where
        n is an integer >= 0 and p is in the interval [0,1]. (n may be
input as a float, but it is truncated to an integer in use)
.. note::
New code should use the ``binomial`` method of a ``default_rng()``
instance instead; see `random-quick-start`.
Parameters
----------
n : int or array_like of ints
Parameter of the distribution, >= 0. Floats are also accepted,
but they will be truncated to integers.
p : float or array_like of floats
Parameter of the distribution, >= 0 and <=1.
size : int or tuple of ints, optional
Output shape. If the given shape is, e.g., ``(m, n, k)``, then
``m * n * k`` samples are drawn. If size is ``None`` (default),
a single value is returned if ``n`` and ``p`` are both scalars.
Otherwise, ``np.broadcast(n, p).size`` samples are drawn.
Returns
-------
out : ndarray or scalar
Drawn samples from the parameterized binomial distribution, where
each sample is equal to the number of successes over the n trials.
See Also
--------
scipy.stats.binom : probability density function, distribution or
cumulative density function, etc.
Generator.binomial: which should be used for new code.
Notes
-----
The probability density for the binomial distribution is
.. math:: P(N) = \binom{n}{N}p^N(1-p)^{n-N},
where :math:`n` is the number of trials, :math:`p` is the probability
of success, and :math:`N` is the number of successes.
When estimating the standard error of a proportion in a population by
using a random sample, the normal distribution works well unless the
product p*n <=5, where p = population proportion estimate, and n =
number of samples, in which case the binomial distribution is used
instead. For example, a sample of 15 people shows 4 who are left
handed, and 11 who are right handed. Then p = 4/15 = 27%. 0.27*15 = 4,
so the binomial distribution should be used in this case.
References
----------
.. [1] Dalgaard, Peter, "Introductory Statistics with R",
Springer-Verlag, 2002.
.. [2] Glantz, Stanton A. "Primer of Biostatistics.", McGraw-Hill,
Fifth Edition, 2002.
.. [3] Lentner, Marvin, "Elementary Applied Statistics", Bogden
and Quigley, 1972.
.. [4] Weisstein, Eric W. "Binomial Distribution." From MathWorld--A
Wolfram Web Resource.
http://mathworld.wolfram.com/BinomialDistribution.html
.. [5] Wikipedia, "Binomial distribution",
https://en.wikipedia.org/wiki/Binomial_distribution
Examples
--------
Draw samples from the distribution:
>>> n, p = 10, .5 # number of trials, probability of each trial
>>> s = np.random.binomial(n, p, 1000)
# result of flipping a coin 10 times, tested 1000 times.
A real world example. A company drills 9 wild-cat oil exploration
wells, each with an estimated probability of success of 0.1. All nine
wells fail. What is the probability of that happening?
Let's do 20,000 trials of the model, and count the number that
generate zero positive results.
>>> sum(np.random.binomial(9, 0.1, 20000) == 0)/20000.
# answer = 0.38885, or 38%.
bytes(...) method of numpy.random.mtrand.RandomState instance
bytes(length)
Return random bytes.
.. note::
New code should use the ``bytes`` method of a ``default_rng()``
instance instead; see `random-quick-start`.
Parameters
----------
length : int
Number of random bytes.
Returns
-------
out : str
String of length `length`.
See Also
--------
Generator.bytes: which should be used for new code.
Examples
--------
>>> np.random.bytes(10)
' eh\x85\x022SZ\xbf\xa4' #random
chisquare(...) method of numpy.random.mtrand.RandomState instance
chisquare(df, size=None)
Draw samples from a chi-square distribution.
When `df` independent random variables, each with standard normal
distributions (mean 0, variance 1), are squared and summed, the
resulting distribution is chi-square (see Notes). This distribution
is often used in hypothesis testing.
.. note::
New code should use the ``chisquare`` method of a ``default_rng()``
instance instead; see `random-quick-start`.
Parameters
----------
df : float or array_like of floats
Number of degrees of freedom, must be > 0.
size : int or tuple of ints, optional
Output shape. If the given shape is, e.g., ``(m, n, k)``, then
``m * n * k`` samples are drawn. If size is ``None`` (default),
a single value is returned if ``df`` is a scalar. Otherwise,
``np.array(df).size`` samples are drawn.
Returns
-------
out : ndarray or scalar
Drawn samples from the parameterized chi-square distribution.
Raises
------
ValueError
When `df` <= 0 or when an inappropriate `size` (e.g. ``size=-1``)
is given.
See Also
--------
Generator.chisquare: which should be used for new code.
Notes
-----
The variable obtained by summing the squares of `df` independent,
standard normally distributed random variables:
        .. math:: Q = \sum_{i=1}^{\mathtt{df}} X^2_i
is chi-square distributed, denoted
.. math:: Q \sim \chi^2_k.
The probability density function of the chi-squared distribution is
.. math:: p(x) = \frac{(1/2)^{k/2}}{\Gamma(k/2)}
x^{k/2 - 1} e^{-x/2},
where :math:`\Gamma` is the gamma function,
        .. math:: \Gamma(x) = \int_0^{\infty} t^{x - 1} e^{-t} dt.
References
----------
.. [1] NIST "Engineering Statistics Handbook"
https://www.itl.nist.gov/div898/handbook/eda/section3/eda3666.htm
Examples
--------
>>> np.random.chisquare(2,4)
array([ 1.89920014, 9.00867716, 3.13710533, 5.62318272]) # random
choice(...) method of numpy.random.mtrand.RandomState instance
choice(a, size=None, replace=True, p=None)
Generates a random sample from a given 1-D array
.. versionadded:: 1.7.0
.. note::
New code should use the ``choice`` method of a ``default_rng()``
instance instead; see `random-quick-start`.
Parameters
----------
a : 1-D array-like or int
If an ndarray, a random sample is generated from its elements.
If an int, the random sample is generated as if a were np.arange(a)
size : int or tuple of ints, optional
Output shape. If the given shape is, e.g., ``(m, n, k)``, then
``m * n * k`` samples are drawn. Default is None, in which case a
single value is returned.
replace : boolean, optional
Whether the sample is with or without replacement
p : 1-D array-like, optional
The probabilities associated with each entry in a.
If not given the sample assumes a uniform distribution over all
entries in a.
Returns
-------
samples : single item or ndarray
The generated random samples
Raises
------
ValueError
If a is an int and less than zero, if a or p are not 1-dimensional,
if a is an array-like of size 0, if p is not a vector of
probabilities, if a and p have different lengths, or if
replace=False and the sample size is greater than the population
size
See Also
--------
randint, shuffle, permutation
Generator.choice: which should be used in new code
Examples
--------
Generate a uniform random sample from np.arange(5) of size 3:
>>> np.random.choice(5, 3)
array([0, 3, 4]) # random
>>> #This is equivalent to np.random.randint(0,5,3)
Generate a non-uniform random sample from np.arange(5) of size 3:
>>> np.random.choice(5, 3, p=[0.1, 0, 0.3, 0.6, 0])
array([3, 3, 0]) # random
Generate a uniform random sample from np.arange(5) of size 3 without
replacement:
>>> np.random.choice(5, 3, replace=False)
array([3,1,0]) # random
>>> #This is equivalent to np.random.permutation(np.arange(5))[:3]
Generate a non-uniform random sample from np.arange(5) of size
3 without replacement:
>>> np.random.choice(5, 3, replace=False, p=[0.1, 0, 0.3, 0.6, 0])
array([2, 3, 0]) # random
Any of the above can be repeated with an arbitrary array-like
instead of just integers. For instance:
>>> aa_milne_arr = ['pooh', 'rabbit', 'piglet', 'Christopher']
>>> np.random.choice(aa_milne_arr, 5, p=[0.5, 0.1, 0.1, 0.3])
array(['pooh', 'pooh', 'pooh', 'Christopher', 'piglet'], # random
dtype='<U11')
default_rng(...)
Construct a new Generator with the default BitGenerator (PCG64).
Parameters
----------
seed : {None, int, array_like[ints], SeedSequence, BitGenerator, Generator}, optional
A seed to initialize the `BitGenerator`. If None, then fresh,
unpredictable entropy will be pulled from the OS. If an ``int`` or
``array_like[ints]`` is passed, then it will be passed to
`SeedSequence` to derive the initial `BitGenerator` state. One may also
            pass in a `SeedSequence` instance.
Additionally, when passed a `BitGenerator`, it will be wrapped by
`Generator`. If passed a `Generator`, it will be returned unaltered.
Returns
-------
Generator
The initialized generator object.
Notes
-----
If ``seed`` is not a `BitGenerator` or a `Generator`, a new `BitGenerator`
is instantiated. This function does not manage a default global instance.
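        Examples
        --------
        A minimal usage sketch (assumes ``import numpy as np``):
        >>> rng = np.random.default_rng(12345)   # seeded, reproducible Generator
        >>> x = rng.random(3)                    # three floats in [0.0, 1.0)
        >>> x.shape
        (3,)
        >>> np.all(np.random.default_rng(12345).random(3) == x)   # same seed, same stream
        True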
dirichlet(...) method of numpy.random.mtrand.RandomState instance
dirichlet(alpha, size=None)
Draw samples from the Dirichlet distribution.
Draw `size` samples of dimension k from a Dirichlet distribution. A
Dirichlet-distributed random variable can be seen as a multivariate
generalization of a Beta distribution. The Dirichlet distribution
is a conjugate prior of a multinomial distribution in Bayesian
inference.
.. note::
New code should use the ``dirichlet`` method of a ``default_rng()``
instance instead; see `random-quick-start`.
Parameters
----------
alpha : sequence of floats, length k
Parameter of the distribution (length ``k`` for sample of
length ``k``).
size : int or tuple of ints, optional
Output shape. If the given shape is, e.g., ``(m, n)``, then
``m * n * k`` samples are drawn. Default is None, in which case a
vector of length ``k`` is returned.
Returns
-------
samples : ndarray,
The drawn samples, of shape ``(size, k)``.
Raises
-------
ValueError
If any value in ``alpha`` is less than or equal to zero
See Also
--------
Generator.dirichlet: which should be used for new code.
Notes
-----
The Dirichlet distribution is a distribution over vectors
:math:`x` that fulfil the conditions :math:`x_i>0` and
:math:`\sum_{i=1}^k x_i = 1`.
The probability density function :math:`p` of a
Dirichlet-distributed random vector :math:`X` is
proportional to
.. math:: p(x) \propto \prod_{i=1}^{k}{x^{\alpha_i-1}_i},
where :math:`\alpha` is a vector containing the positive
concentration parameters.
The method uses the following property for computation: let :math:`Y`
be a random vector which has components that follow a standard gamma
distribution, then :math:`X = \frac{1}{\sum_{i=1}^k{Y_i}} Y`
        is Dirichlet-distributed.
References
----------
.. [1] David McKay, "Information Theory, Inference and Learning
Algorithms," chapter 23,
http://www.inference.org.uk/mackay/itila/
.. [2] Wikipedia, "Dirichlet distribution",
https://en.wikipedia.org/wiki/Dirichlet_distribution
Examples
--------
Taking an example cited in Wikipedia, this distribution can be used if
one wanted to cut strings (each of initial length 1.0) into K pieces
with different lengths, where each piece had, on average, a designated
average length, but allowing some variation in the relative sizes of
the pieces.
>>> s = np.random.dirichlet((10, 5, 3), 20).transpose()
>>> import matplotlib.pyplot as plt
>>> plt.barh(range(20), s[0])
>>> plt.barh(range(20), s[1], left=s[0], color='g')
>>> plt.barh(range(20), s[2], left=s[0]+s[1], color='r')
>>> plt.title("Lengths of Strings")
exponential(...) method of numpy.random.mtrand.RandomState instance
exponential(scale=1.0, size=None)
Draw samples from an exponential distribution.
Its probability density function is
.. math:: f(x; \frac{1}{\beta}) = \frac{1}{\beta} \exp(-\frac{x}{\beta}),
for ``x > 0`` and 0 elsewhere. :math:`\beta` is the scale parameter,
which is the inverse of the rate parameter :math:`\lambda = 1/\beta`.
The rate parameter is an alternative, widely used parameterization
of the exponential distribution [3]_.
The exponential distribution is a continuous analogue of the
geometric distribution. It describes many common situations, such as
the size of raindrops measured over many rainstorms [1]_, or the time
between page requests to Wikipedia [2]_.
.. note::
New code should use the ``exponential`` method of a ``default_rng()``
instance instead; see `random-quick-start`.
Parameters
----------
scale : float or array_like of floats
The scale parameter, :math:`\beta = 1/\lambda`. Must be
non-negative.
size : int or tuple of ints, optional
Output shape. If the given shape is, e.g., ``(m, n, k)``, then
``m * n * k`` samples are drawn. If size is ``None`` (default),
a single value is returned if ``scale`` is a scalar. Otherwise,
``np.array(scale).size`` samples are drawn.
Returns
-------
out : ndarray or scalar
Drawn samples from the parameterized exponential distribution.
See Also
--------
Generator.exponential: which should be used for new code.
References
----------
.. [1] Peyton Z. Peebles Jr., "Probability, Random Variables and
Random Signal Principles", 4th ed, 2001, p. 57.
.. [2] Wikipedia, "Poisson process",
https://en.wikipedia.org/wiki/Poisson_process
.. [3] Wikipedia, "Exponential distribution",
https://en.wikipedia.org/wiki/Exponential_distribution
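        Examples
        --------
        A minimal usage sketch (assumes ``import numpy as np``):
        >>> s = np.random.exponential(scale=2.0, size=10000)   # scale beta = 2.0
        >>> np.all(s >= 0)   # support is the non-negative reals
        True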
f(...) method of numpy.random.mtrand.RandomState instance
f(dfnum, dfden, size=None)
Draw samples from an F distribution.
Samples are drawn from an F distribution with specified parameters,
`dfnum` (degrees of freedom in numerator) and `dfden` (degrees of
freedom in denominator), where both parameters must be greater than
zero.
The random variate of the F distribution (also known as the
Fisher distribution) is a continuous probability distribution
that arises in ANOVA tests, and is the ratio of two chi-square
variates.
.. note::
New code should use the ``f`` method of a ``default_rng()``
instance instead; see `random-quick-start`.
Parameters
----------
dfnum : float or array_like of floats
Degrees of freedom in numerator, must be > 0.
dfden : float or array_like of float
Degrees of freedom in denominator, must be > 0.
size : int or tuple of ints, optional
Output shape. If the given shape is, e.g., ``(m, n, k)``, then
``m * n * k`` samples are drawn. If size is ``None`` (default),
a single value is returned if ``dfnum`` and ``dfden`` are both scalars.
Otherwise, ``np.broadcast(dfnum, dfden).size`` samples are drawn.
Returns
-------
out : ndarray or scalar
Drawn samples from the parameterized Fisher distribution.
See Also
--------
scipy.stats.f : probability density function, distribution or
cumulative density function, etc.
Generator.f: which should be used for new code.
Notes
-----
The F statistic is used to compare in-group variances to between-group
variances. Calculating the distribution depends on the sampling, and
so it is a function of the respective degrees of freedom in the
problem. The variable `dfnum` is the number of samples minus one, the
between-groups degrees of freedom, while `dfden` is the within-groups
degrees of freedom, the sum of the number of samples in each group
minus the number of groups.
References
----------
.. [1] Glantz, Stanton A. "Primer of Biostatistics.", McGraw-Hill,
Fifth Edition, 2002.
.. [2] Wikipedia, "F-distribution",
https://en.wikipedia.org/wiki/F-distribution
Examples
--------
An example from Glantz[1], pp 47-40:
Two groups, children of diabetics (25 people) and children from people
without diabetes (25 controls). Fasting blood glucose was measured,
case group had a mean value of 86.1, controls had a mean value of
82.2. Standard deviations were 2.09 and 2.49 respectively. Are these
        data consistent with the null hypothesis that the parents' diabetic
status does not affect their children's blood glucose levels?
Calculating the F statistic from the data gives a value of 36.01.
Draw samples from the distribution:
>>> dfnum = 1. # between group degrees of freedom
>>> dfden = 48. # within groups degrees of freedom
>>> s = np.random.f(dfnum, dfden, 1000)
        The lower bound for the top 1% of the samples is:
>>> np.sort(s)[-10]
7.61988120985 # random
So there is about a 1% chance that the F statistic will exceed 7.62,
the measured value is 36, so the null hypothesis is rejected at the 1%
level.
gamma(...) method of numpy.random.mtrand.RandomState instance
gamma(shape, scale=1.0, size=None)
Draw samples from a Gamma distribution.
Samples are drawn from a Gamma distribution with specified parameters,
`shape` (sometimes designated "k") and `scale` (sometimes designated
"theta"), where both parameters are > 0.
.. note::
New code should use the ``gamma`` method of a ``default_rng()``
instance instead; see `random-quick-start`.
Parameters
----------
shape : float or array_like of floats
The shape of the gamma distribution. Must be non-negative.
scale : float or array_like of floats, optional
The scale of the gamma distribution. Must be non-negative.
Default is equal to 1.
size : int or tuple of ints, optional
Output shape. If the given shape is, e.g., ``(m, n, k)``, then
``m * n * k`` samples are drawn. If size is ``None`` (default),
a single value is returned if ``shape`` and ``scale`` are both scalars.
Otherwise, ``np.broadcast(shape, scale).size`` samples are drawn.
Returns
-------
out : ndarray or scalar
Drawn samples from the parameterized gamma distribution.
See Also
--------
scipy.stats.gamma : probability density function, distribution or
cumulative density function, etc.
Generator.gamma: which should be used for new code.
Notes
-----
The probability density for the Gamma distribution is
.. math:: p(x) = x^{k-1}\frac{e^{-x/\theta}}{\theta^k\Gamma(k)},
where :math:`k` is the shape and :math:`\theta` the scale,
and :math:`\Gamma` is the Gamma function.
The Gamma distribution is often used to model the times to failure of
electronic components, and arises naturally in processes for which the
waiting times between Poisson distributed events are relevant.
References
----------
.. [1] Weisstein, Eric W. "Gamma Distribution." From MathWorld--A
Wolfram Web Resource.
http://mathworld.wolfram.com/GammaDistribution.html
.. [2] Wikipedia, "Gamma distribution",
https://en.wikipedia.org/wiki/Gamma_distribution
Examples
--------
Draw samples from the distribution:
>>> shape, scale = 2., 2. # mean=4, std=2*sqrt(2)
>>> s = np.random.gamma(shape, scale, 1000)
Display the histogram of the samples, along with
the probability density function:
>>> import matplotlib.pyplot as plt
>>> import scipy.special as sps # doctest: +SKIP
>>> count, bins, ignored = plt.hist(s, 50, density=True)
>>> y = bins**(shape-1)*(np.exp(-bins/scale) / # doctest: +SKIP
... (sps.gamma(shape)*scale**shape))
>>> plt.plot(bins, y, linewidth=2, color='r') # doctest: +SKIP
>>> plt.show()
geometric(...) method of numpy.random.mtrand.RandomState instance
geometric(p, size=None)
Draw samples from the geometric distribution.
Bernoulli trials are experiments with one of two outcomes:
success or failure (an example of such an experiment is flipping
a coin). The geometric distribution models the number of trials
that must be run in order to achieve success. It is therefore
supported on the positive integers, ``k = 1, 2, ...``.
The probability mass function of the geometric distribution is
.. math:: f(k) = (1 - p)^{k - 1} p
where `p` is the probability of success of an individual trial.
.. note::
New code should use the ``geometric`` method of a ``default_rng()``
instance instead; see `random-quick-start`.
Parameters
----------
p : float or array_like of floats
The probability of success of an individual trial.
size : int or tuple of ints, optional
Output shape. If the given shape is, e.g., ``(m, n, k)``, then
``m * n * k`` samples are drawn. If size is ``None`` (default),
a single value is returned if ``p`` is a scalar. Otherwise,
``np.array(p).size`` samples are drawn.
Returns
-------
out : ndarray or scalar
Drawn samples from the parameterized geometric distribution.
See Also
--------
Generator.geometric: which should be used for new code.
Examples
--------
Draw ten thousand values from the geometric distribution,
with the probability of an individual success equal to 0.35:
>>> z = np.random.geometric(p=0.35, size=10000)
How many trials succeeded after a single run?
>>> (z == 1).sum() / 10000.
0.34889999999999999 #random
get_state(...) method of numpy.random.mtrand.RandomState instance
get_state()
Return a tuple representing the internal state of the generator.
For more details, see `set_state`.
Returns
-------
out : {tuple(str, ndarray of 624 uints, int, int, float), dict}
The returned tuple has the following items:
1. the string 'MT19937'.
2. a 1-D array of 624 unsigned integer keys.
3. an integer ``pos``.
4. an integer ``has_gauss``.
5. a float ``cached_gaussian``.
            If `legacy` is False, or the BitGenerator is not MT19937, then
state is returned as a dictionary.
legacy : bool
            Flag indicating whether to return a legacy tuple state when the BitGenerator
is MT19937.
See Also
--------
set_state
Notes
-----
`set_state` and `get_state` are not needed to work with any of the
random distributions in NumPy. If the internal state is manually altered,
the user should know exactly what he/she is doing.
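        Examples
        --------
        A minimal usage sketch (assumes ``import numpy as np``):
        >>> state = np.random.get_state()   # capture the global generator state
        >>> a = np.random.rand(5)
        >>> np.random.set_state(state)      # rewind to the captured state
        >>> b = np.random.rand(5)
        >>> np.all(a == b)                  # the stream is replayed exactly
        True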
gumbel(...) method of numpy.random.mtrand.RandomState instance
gumbel(loc=0.0, scale=1.0, size=None)
Draw samples from a Gumbel distribution.
Draw samples from a Gumbel distribution with specified location and
scale. For more information on the Gumbel distribution, see
Notes and References below.
.. note::
New code should use the ``gumbel`` method of a ``default_rng()``
instance instead; see `random-quick-start`.
Parameters
----------
loc : float or array_like of floats, optional
The location of the mode of the distribution. Default is 0.
scale : float or array_like of floats, optional
The scale parameter of the distribution. Default is 1. Must be non-
negative.
size : int or tuple of ints, optional
Output shape. If the given shape is, e.g., ``(m, n, k)``, then
``m * n * k`` samples are drawn. If size is ``None`` (default),
a single value is returned if ``loc`` and ``scale`` are both scalars.
Otherwise, ``np.broadcast(loc, scale).size`` samples are drawn.
Returns
-------
out : ndarray or scalar
Drawn samples from the parameterized Gumbel distribution.
See Also
--------
scipy.stats.gumbel_l
scipy.stats.gumbel_r
scipy.stats.genextreme
weibull
Generator.gumbel: which should be used for new code.
Notes
-----
The Gumbel (or Smallest Extreme Value (SEV) or the Smallest Extreme
Value Type I) distribution is one of a class of Generalized Extreme
Value (GEV) distributions used in modeling extreme value problems.
The Gumbel is a special case of the Extreme Value Type I distribution
for maximums from distributions with "exponential-like" tails.
The probability density for the Gumbel distribution is
.. math:: p(x) = \frac{e^{-(x - \mu)/ \beta}}{\beta} e^{ -e^{-(x - \mu)/
\beta}},
where :math:`\mu` is the mode, a location parameter, and
:math:`\beta` is the scale parameter.
The Gumbel (named for German mathematician Emil Julius Gumbel) was used
very early in the hydrology literature, for modeling the occurrence of
flood events. It is also used for modeling maximum wind speed and
rainfall rates. It is a "fat-tailed" distribution - the probability of
an event in the tail of the distribution is larger than if one used a
Gaussian, hence the surprisingly frequent occurrence of 100-year
floods. Floods were initially modeled as a Gaussian process, which
underestimated the frequency of extreme events.
It is one of a class of extreme value distributions, the Generalized
Extreme Value (GEV) distributions, which also includes the Weibull and
Frechet.
The function has a mean of :math:`\mu + 0.57721\beta` and a variance
of :math:`\frac{\pi^2}{6}\beta^2`.
References
----------
.. [1] Gumbel, E. J., "Statistics of Extremes,"
New York: Columbia University Press, 1958.
.. [2] Reiss, R.-D. and Thomas, M., "Statistical Analysis of Extreme
Values from Insurance, Finance, Hydrology and Other Fields,"
Basel: Birkhauser Verlag, 2001.
Examples
--------
Draw samples from the distribution:
>>> mu, beta = 0, 0.1 # location and scale
>>> s = np.random.gumbel(mu, beta, 1000)
Display the histogram of the samples, along with
the probability density function:
>>> import matplotlib.pyplot as plt
>>> count, bins, ignored = plt.hist(s, 30, density=True)
>>> plt.plot(bins, (1/beta)*np.exp(-(bins - mu)/beta)
... * np.exp( -np.exp( -(bins - mu) /beta) ),
... linewidth=2, color='r')
>>> plt.show()
Show how an extreme value distribution can arise from a Gaussian process
and compare to a Gaussian:
>>> means = []
>>> maxima = []
>>> for i in range(0,1000) :
... a = np.random.normal(mu, beta, 1000)
... means.append(a.mean())
... maxima.append(a.max())
>>> count, bins, ignored = plt.hist(maxima, 30, density=True)
>>> beta = np.std(maxima) * np.sqrt(6) / np.pi
>>> mu = np.mean(maxima) - 0.57721*beta
>>> plt.plot(bins, (1/beta)*np.exp(-(bins - mu)/beta)
... * np.exp(-np.exp(-(bins - mu)/beta)),
... linewidth=2, color='r')
>>> plt.plot(bins, 1/(beta * np.sqrt(2 * np.pi))
... * np.exp(-(bins - mu)**2 / (2 * beta**2)),
... linewidth=2, color='g')
>>> plt.show()
hypergeometric(...) method of numpy.random.mtrand.RandomState instance
hypergeometric(ngood, nbad, nsample, size=None)
Draw samples from a Hypergeometric distribution.
Samples are drawn from a hypergeometric distribution with specified
parameters, `ngood` (ways to make a good selection), `nbad` (ways to make
a bad selection), and `nsample` (number of items sampled, which is less
than or equal to the sum ``ngood + nbad``).
.. note::
New code should use the ``hypergeometric`` method of a ``default_rng()``
instance instead; see `random-quick-start`.
Parameters
----------
ngood : int or array_like of ints
Number of ways to make a good selection. Must be nonnegative.
nbad : int or array_like of ints
Number of ways to make a bad selection. Must be nonnegative.
nsample : int or array_like of ints
Number of items sampled. Must be at least 1 and at most
``ngood + nbad``.
size : int or tuple of ints, optional
Output shape. If the given shape is, e.g., ``(m, n, k)``, then
``m * n * k`` samples are drawn. If size is ``None`` (default),
a single value is returned if `ngood`, `nbad`, and `nsample`
are all scalars. Otherwise, ``np.broadcast(ngood, nbad, nsample).size``
samples are drawn.
Returns
-------
out : ndarray or scalar
Drawn samples from the parameterized hypergeometric distribution. Each
sample is the number of good items within a randomly selected subset of
size `nsample` taken from a set of `ngood` good items and `nbad` bad items.
See Also
--------
scipy.stats.hypergeom : probability density function, distribution or
cumulative density function, etc.
Generator.hypergeometric: which should be used for new code.
Notes
-----
The probability density for the Hypergeometric distribution is
.. math:: P(x) = \frac{\binom{g}{x}\binom{b}{n-x}}{\binom{g+b}{n}},
where :math:`0 \le x \le n` and :math:`n-b \le x \le g`
for P(x) the probability of ``x`` good results in the drawn sample,
g = `ngood`, b = `nbad`, and n = `nsample`.
Consider an urn with black and white marbles in it, `ngood` of them
are black and `nbad` are white. If you draw `nsample` balls without
replacement, then the hypergeometric distribution describes the
distribution of black balls in the drawn sample.
Note that this distribution is very similar to the binomial
distribution, except that in this case, samples are drawn without
replacement, whereas in the Binomial case samples are drawn with
replacement (or the sample space is infinite). As the sample space
becomes large, this distribution approaches the binomial.
References
----------
.. [1] Lentner, Marvin, "Elementary Applied Statistics", Bogden
and Quigley, 1972.
.. [2] Weisstein, Eric W. "Hypergeometric Distribution." From
MathWorld--A Wolfram Web Resource.
http://mathworld.wolfram.com/HypergeometricDistribution.html
.. [3] Wikipedia, "Hypergeometric distribution",
https://en.wikipedia.org/wiki/Hypergeometric_distribution
Examples
--------
Draw samples from the distribution:
>>> ngood, nbad, nsamp = 100, 2, 10
# number of good, number of bad, and number of samples
>>> s = np.random.hypergeometric(ngood, nbad, nsamp, 1000)
>>> from matplotlib.pyplot import hist
>>> hist(s)
# note that it is very unlikely to grab both bad items
Suppose you have an urn with 15 white and 15 black marbles.
If you pull 15 marbles at random, how likely is it that
12 or more of them are one color?
>>> s = np.random.hypergeometric(15, 15, 15, 100000)
>>> sum(s>=12)/100000. + sum(s<=3)/100000.
# answer = 0.003 ... pretty unlikely!
laplace(...) method of numpy.random.mtrand.RandomState instance
laplace(loc=0.0, scale=1.0, size=None)
Draw samples from the Laplace or double exponential distribution with
specified location (or mean) and scale (decay).
The Laplace distribution is similar to the Gaussian/normal distribution,
but is sharper at the peak and has fatter tails. It represents the
difference between two independent, identically distributed exponential
random variables.
.. note::
New code should use the ``laplace`` method of a ``default_rng()``
instance instead; see `random-quick-start`.
Parameters
----------
loc : float or array_like of floats, optional
The position, :math:`\mu`, of the distribution peak. Default is 0.
scale : float or array_like of floats, optional
:math:`\lambda`, the exponential decay. Default is 1. Must be non-
negative.
size : int or tuple of ints, optional
Output shape. If the given shape is, e.g., ``(m, n, k)``, then
``m * n * k`` samples are drawn. If size is ``None`` (default),
a single value is returned if ``loc`` and ``scale`` are both scalars.
Otherwise, ``np.broadcast(loc, scale).size`` samples are drawn.
Returns
-------
out : ndarray or scalar
Drawn samples from the parameterized Laplace distribution.
See Also
--------
Generator.laplace: which should be used for new code.
Notes
-----
It has the probability density function
.. math:: f(x; \mu, \lambda) = \frac{1}{2\lambda}
\exp\left(-\frac{|x - \mu|}{\lambda}\right).
The first law of Laplace, from 1774, states that the frequency
of an error can be expressed as an exponential function of the
absolute magnitude of the error, which leads to the Laplace
distribution. For many problems in economics and health
sciences, this distribution seems to model the data better
than the standard Gaussian distribution.
References
----------
.. [1] Abramowitz, M. and Stegun, I. A. (Eds.). "Handbook of
Mathematical Functions with Formulas, Graphs, and Mathematical
Tables, 9th printing," New York: Dover, 1972.
.. [2] Kotz, Samuel, et. al. "The Laplace Distribution and
Generalizations, " Birkhauser, 2001.
.. [3] Weisstein, Eric W. "Laplace Distribution."
From MathWorld--A Wolfram Web Resource.
http://mathworld.wolfram.com/LaplaceDistribution.html
.. [4] Wikipedia, "Laplace distribution",
https://en.wikipedia.org/wiki/Laplace_distribution
Examples
--------
Draw samples from the distribution
>>> loc, scale = 0., 1.
>>> s = np.random.laplace(loc, scale, 1000)
Display the histogram of the samples, along with
the probability density function:
>>> import matplotlib.pyplot as plt
>>> count, bins, ignored = plt.hist(s, 30, density=True)
>>> x = np.arange(-8., 8., .01)
>>> pdf = np.exp(-abs(x-loc)/scale)/(2.*scale)
>>> plt.plot(x, pdf)
Plot Gaussian for comparison:
>>> g = (1/(scale * np.sqrt(2 * np.pi)) *
... np.exp(-(x - loc)**2 / (2 * scale**2)))
>>> plt.plot(x,g)
logistic(...) method of numpy.random.mtrand.RandomState instance
logistic(loc=0.0, scale=1.0, size=None)
Draw samples from a logistic distribution.
Samples are drawn from a logistic distribution with specified
parameters, loc (location or mean, also median), and scale (>0).
.. note::
New code should use the ``logistic`` method of a ``default_rng()``
instance instead; see `random-quick-start`.
Parameters
----------
loc : float or array_like of floats, optional
Parameter of the distribution. Default is 0.
scale : float or array_like of floats, optional
Parameter of the distribution. Must be non-negative.
Default is 1.
size : int or tuple of ints, optional
Output shape. If the given shape is, e.g., ``(m, n, k)``, then
``m * n * k`` samples are drawn. If size is ``None`` (default),
a single value is returned if ``loc`` and ``scale`` are both scalars.
Otherwise, ``np.broadcast(loc, scale).size`` samples are drawn.
Returns
-------
out : ndarray or scalar
Drawn samples from the parameterized logistic distribution.
See Also
--------
scipy.stats.logistic : probability density function, distribution or
cumulative density function, etc.
Generator.logistic: which should be used for new code.
Notes
-----
The probability density for the Logistic distribution is
        .. math:: P(x) = \frac{e^{-(x-\mu)/s}}{s(1+e^{-(x-\mu)/s})^2},
where :math:`\mu` = location and :math:`s` = scale.
The Logistic distribution is used in Extreme Value problems where it
can act as a mixture of Gumbel distributions, in Epidemiology, and by
the World Chess Federation (FIDE) where it is used in the Elo ranking
system, assuming the performance of each player is a logistically
distributed random variable.
References
----------
.. [1] Reiss, R.-D. and Thomas M. (2001), "Statistical Analysis of
Extreme Values, from Insurance, Finance, Hydrology and Other
Fields," Birkhauser Verlag, Basel, pp 132-133.
.. [2] Weisstein, Eric W. "Logistic Distribution." From
MathWorld--A Wolfram Web Resource.
http://mathworld.wolfram.com/LogisticDistribution.html
.. [3] Wikipedia, "Logistic-distribution",
https://en.wikipedia.org/wiki/Logistic_distribution
Examples
--------
Draw samples from the distribution:
>>> loc, scale = 10, 1
>>> s = np.random.logistic(loc, scale, 10000)
>>> import matplotlib.pyplot as plt
>>> count, bins, ignored = plt.hist(s, bins=50)
# plot against distribution
>>> def logist(x, loc, scale):
... return np.exp((loc-x)/scale)/(scale*(1+np.exp((loc-x)/scale))**2)
>>> lgst_val = logist(bins, loc, scale)
>>> plt.plot(bins, lgst_val * count.max() / lgst_val.max())
>>> plt.show()
lognormal(...) method of numpy.random.mtrand.RandomState instance
lognormal(mean=0.0, sigma=1.0, size=None)
Draw samples from a log-normal distribution.
Draw samples from a log-normal distribution with specified mean,
standard deviation, and array shape. Note that the mean and standard
deviation are not the values for the distribution itself, but of the
underlying normal distribution it is derived from.
.. note::
New code should use the ``lognormal`` method of a ``default_rng()``
instance instead; see `random-quick-start`.
Parameters
----------
mean : float or array_like of floats, optional
Mean value of the underlying normal distribution. Default is 0.
sigma : float or array_like of floats, optional
Standard deviation of the underlying normal distribution. Must be
non-negative. Default is 1.
size : int or tuple of ints, optional
Output shape. If the given shape is, e.g., ``(m, n, k)``, then
``m * n * k`` samples are drawn. If size is ``None`` (default),
a single value is returned if ``mean`` and ``sigma`` are both scalars.
Otherwise, ``np.broadcast(mean, sigma).size`` samples are drawn.
Returns
-------
out : ndarray or scalar
Drawn samples from the parameterized log-normal distribution.
See Also
--------
scipy.stats.lognorm : probability density function, distribution,
cumulative density function, etc.
Generator.lognormal: which should be used for new code.
Notes
-----
A variable `x` has a log-normal distribution if `log(x)` is normally
distributed. The probability density function for the log-normal
distribution is:
.. math:: p(x) = \frac{1}{\sigma x \sqrt{2\pi}}
e^{(-\frac{(ln(x)-\mu)^2}{2\sigma^2})}
where :math:`\mu` is the mean and :math:`\sigma` is the standard
deviation of the normally distributed logarithm of the variable.
A log-normal distribution results if a random variable is the *product*
of a large number of independent, identically-distributed variables in
the same way that a normal distribution results if the variable is the
*sum* of a large number of independent, identically-distributed
variables.
References
----------
.. [1] Limpert, E., Stahel, W. A., and Abbt, M., "Log-normal
Distributions across the Sciences: Keys and Clues,"
BioScience, Vol. 51, No. 5, May, 2001.
https://stat.ethz.ch/~stahel/lognormal/bioscience.pdf
.. [2] Reiss, R.D. and Thomas, M., "Statistical Analysis of Extreme
Values," Basel: Birkhauser Verlag, 2001, pp. 31-32.
Examples
--------
Draw samples from the distribution:
>>> mu, sigma = 3., 1. # mean and standard deviation
>>> s = np.random.lognormal(mu, sigma, 1000)
Display the histogram of the samples, along with
the probability density function:
>>> import matplotlib.pyplot as plt
>>> count, bins, ignored = plt.hist(s, 100, density=True, align='mid')
>>> x = np.linspace(min(bins), max(bins), 10000)
>>> pdf = (np.exp(-(np.log(x) - mu)**2 / (2 * sigma**2))
... / (x * sigma * np.sqrt(2 * np.pi)))
>>> plt.plot(x, pdf, linewidth=2, color='r')
>>> plt.axis('tight')
>>> plt.show()
        Demonstrate that taking the products of random samples from a normal
distribution can be fit well by a log-normal probability density
function.
>>> # Generate a thousand samples: each is the product of 100 random
>>> # values, drawn from a normal distribution.
>>> b = []
>>> for i in range(1000):
... a = 10. + np.random.standard_normal(100)
... b.append(np.product(a))
>>> b = np.array(b) / np.min(b) # scale values to be positive
>>> count, bins, ignored = plt.hist(b, 100, density=True, align='mid')
>>> sigma = np.std(np.log(b))
>>> mu = np.mean(np.log(b))
>>> x = np.linspace(min(bins), max(bins), 10000)
>>> pdf = (np.exp(-(np.log(x) - mu)**2 / (2 * sigma**2))
... / (x * sigma * np.sqrt(2 * np.pi)))
>>> plt.plot(x, pdf, color='r', linewidth=2)
>>> plt.show()
logseries(...) method of numpy.random.mtrand.RandomState instance
logseries(p, size=None)
Draw samples from a logarithmic series distribution.
Samples are drawn from a log series distribution with specified
shape parameter, 0 < ``p`` < 1.
.. note::
New code should use the ``logseries`` method of a ``default_rng()``
instance instead; see `random-quick-start`.
Parameters
----------
p : float or array_like of floats
Shape parameter for the distribution. Must be in the range (0, 1).
size : int or tuple of ints, optional
Output shape. If the given shape is, e.g., ``(m, n, k)``, then
``m * n * k`` samples are drawn. If size is ``None`` (default),
a single value is returned if ``p`` is a scalar. Otherwise,
``np.array(p).size`` samples are drawn.
Returns
-------
out : ndarray or scalar
Drawn samples from the parameterized logarithmic series distribution.
See Also
--------
scipy.stats.logser : probability density function, distribution or
cumulative density function, etc.
Generator.logseries: which should be used for new code.
Notes
-----
The probability density for the Log Series distribution is
.. math:: P(k) = \frac{-p^k}{k \ln(1-p)},
where p = probability.
The log series distribution is frequently used to represent species
richness and occurrence, first proposed by Fisher, Corbet, and
Williams in 1943 [2]. It may also be used to model the numbers of
occupants seen in cars [3].
References
----------
.. [1] Buzas, Martin A.; Culver, Stephen J., Understanding regional
species diversity through the log series distribution of
occurrences: BIODIVERSITY RESEARCH Diversity & Distributions,
Volume 5, Number 5, September 1999 , pp. 187-195(9).
.. [2] Fisher, R.A,, A.S. Corbet, and C.B. Williams. 1943. The
relation between the number of species and the number of
individuals in a random sample of an animal population.
Journal of Animal Ecology, 12:42-58.
.. [3] D. J. Hand, F. Daly, D. Lunn, E. Ostrowski, A Handbook of Small
Data Sets, CRC Press, 1994.
.. [4] Wikipedia, "Logarithmic distribution",
https://en.wikipedia.org/wiki/Logarithmic_distribution
Examples
--------
Draw samples from the distribution:
>>> a = .6
>>> s = np.random.logseries(a, 10000)
>>> import matplotlib.pyplot as plt
>>> count, bins, ignored = plt.hist(s)
# plot against distribution
>>> def logseries(k, p):
... return -p**k/(k*np.log(1-p))
>>> plt.plot(bins, logseries(bins, a)*count.max()/
... logseries(bins, a).max(), 'r')
>>> plt.show()
multinomial(...) method of numpy.random.mtrand.RandomState instance
multinomial(n, pvals, size=None)
Draw samples from a multinomial distribution.
The multinomial distribution is a multivariate generalization of the
binomial distribution. Take an experiment with one of ``p``
possible outcomes. An example of such an experiment is throwing a dice,
where the outcome can be 1 through 6. Each sample drawn from the
distribution represents `n` such experiments. Its values,
``X_i = [X_0, X_1, ..., X_p]``, represent the number of times the
outcome was ``i``.
.. note::
New code should use the ``multinomial`` method of a ``default_rng()``
instance instead; see `random-quick-start`.
Parameters
----------
n : int
Number of experiments.
pvals : sequence of floats, length p
Probabilities of each of the ``p`` different outcomes. These
must sum to 1 (however, the last element is always assumed to
account for the remaining probability, as long as
``sum(pvals[:-1]) <= 1``).
size : int or tuple of ints, optional
Output shape. If the given shape is, e.g., ``(m, n, k)``, then
``m * n * k`` samples are drawn. Default is None, in which case a
single value is returned.
Returns
-------
out : ndarray
The drawn samples, of shape *size*, if that was provided. If not,
the shape is ``(N,)``.
In other words, each entry ``out[i,j,...,:]`` is an N-dimensional
value drawn from the distribution.
See Also
--------
Generator.multinomial: which should be used for new code.
Examples
--------
Throw a dice 20 times:
>>> np.random.multinomial(20, [1/6.]*6, size=1)
array([[4, 1, 7, 5, 2, 1]]) # random
It landed 4 times on 1, once on 2, etc.
Now, throw the dice 20 times, and 20 times again:
>>> np.random.multinomial(20, [1/6.]*6, size=2)
array([[3, 4, 3, 3, 4, 3], # random
[2, 4, 3, 4, 0, 7]])
For the first run, we threw 3 times 1, 4 times 2, etc. For the second,
we threw 2 times 1, 4 times 2, etc.
A loaded die is more likely to land on number 6:
>>> np.random.multinomial(100, [1/7.]*5 + [2/7.])
array([11, 16, 14, 17, 16, 26]) # random
The probability inputs should be normalized. As an implementation
detail, the value of the last entry is ignored and assumed to take
up any leftover probability mass, but this should not be relied on.
A biased coin which has twice as much weight on one side as on the
other should be sampled like so:
>>> np.random.multinomial(100, [1.0 / 3, 2.0 / 3]) # RIGHT
array([38, 62]) # random
not like:
>>> np.random.multinomial(100, [1.0, 2.0]) # WRONG
Traceback (most recent call last):
ValueError: pvals < 0, pvals > 1 or pvals contains NaNs
multivariate_normal(...) method of numpy.random.mtrand.RandomState instance
multivariate_normal(mean, cov, size=None, check_valid='warn', tol=1e-8)
Draw random samples from a multivariate normal distribution.
The multivariate normal, multinormal or Gaussian distribution is a
generalization of the one-dimensional normal distribution to higher
dimensions. Such a distribution is specified by its mean and
covariance matrix. These parameters are analogous to the mean
(average or "center") and variance (standard deviation, or "width,"
squared) of the one-dimensional normal distribution.
.. note::
New code should use the ``multivariate_normal`` method of a ``default_rng()``
instance instead; see `random-quick-start`.
Parameters
----------
mean : 1-D array_like, of length N
Mean of the N-dimensional distribution.
cov : 2-D array_like, of shape (N, N)
Covariance matrix of the distribution. It must be symmetric and
positive-semidefinite for proper sampling.
size : int or tuple of ints, optional
Given a shape of, for example, ``(m,n,k)``, ``m*n*k`` samples are
generated, and packed in an `m`-by-`n`-by-`k` arrangement. Because
each sample is `N`-dimensional, the output shape is ``(m,n,k,N)``.
If no shape is specified, a single (`N`-D) sample is returned.
check_valid : { 'warn', 'raise', 'ignore' }, optional
Behavior when the covariance matrix is not positive semidefinite.
tol : float, optional
Tolerance when checking the singular values in covariance matrix.
cov is cast to double before the check.
Returns
-------
out : ndarray
The drawn samples, of shape *size*, if that was provided. If not,
the shape is ``(N,)``.
In other words, each entry ``out[i,j,...,:]`` is an N-dimensional
value drawn from the distribution.
See Also
--------
Generator.multivariate_normal: which should be used for new code.
Notes
-----
The mean is a coordinate in N-dimensional space, which represents the
location where samples are most likely to be generated. This is
analogous to the peak of the bell curve for the one-dimensional or
univariate normal distribution.
Covariance indicates the level to which two variables vary together.
From the multivariate normal distribution, we draw N-dimensional
samples, :math:`X = [x_1, x_2, ... x_N]`. The covariance matrix
element :math:`C_{ij}` is the covariance of :math:`x_i` and :math:`x_j`.
The element :math:`C_{ii}` is the variance of :math:`x_i` (i.e. its
"spread").
Instead of specifying the full covariance matrix, popular
approximations include:
- Spherical covariance (`cov` is a multiple of the identity matrix)
- Diagonal covariance (`cov` has non-negative elements, and only on
the diagonal)
This geometrical property can be seen in two dimensions by plotting
generated data-points:
>>> mean = [0, 0]
>>> cov = [[1, 0], [0, 100]] # diagonal covariance
Diagonal covariance means that points are oriented along x or y-axis:
>>> import matplotlib.pyplot as plt
>>> x, y = np.random.multivariate_normal(mean, cov, 5000).T
>>> plt.plot(x, y, 'x')
>>> plt.axis('equal')
>>> plt.show()
Note that the covariance matrix must be positive semidefinite (a.k.a.
nonnegative-definite). Otherwise, the behavior of this method is
undefined and backwards compatibility is not guaranteed.
References
----------
.. [1] Papoulis, A., "Probability, Random Variables, and Stochastic
Processes," 3rd ed., New York: McGraw-Hill, 1991.
.. [2] Duda, R. O., Hart, P. E., and Stork, D. G., "Pattern
Classification," 2nd ed., New York: Wiley, 2001.
Examples
--------
>>> mean = (1, 2)
>>> cov = [[1, 0], [0, 1]]
>>> x = np.random.multivariate_normal(mean, cov, (3, 3))
>>> x.shape
(3, 3, 2)
The following is probably true, given that the comparison is one-sided
and each component has standard deviation 1:
>>> list((x[0,0,:] - mean) < 0.6)
[True, True] # random
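A further check, sketched here for illustration (the sample size and
covariance values below are arbitrary): the empirical covariance of a
large sample should be close to the requested ``cov``.
>>> samples = np.random.multivariate_normal([0, 0], [[1, 0.8], [0.8, 1]], 10000)
>>> np.round(np.cov(samples, rowvar=False), 1) # empirical covariance of the sample
array([[1. , 0.8], # random
       [0.8, 1. ]])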
negative_binomial(...) method of numpy.random.mtrand.RandomState instance
negative_binomial(n, p, size=None)
Draw samples from a negative binomial distribution.
Samples are drawn from a negative binomial distribution with specified
parameters, `n` successes and `p` probability of success where `n`
is > 0 and `p` is in the interval [0, 1].
.. note::
New code should use the ``negative_binomial`` method of a ``default_rng()``
instance instead; see `random-quick-start`.
Parameters
----------
n : float or array_like of floats
Parameter of the distribution, > 0.
p : float or array_like of floats
Parameter of the distribution, >= 0 and <=1.
size : int or tuple of ints, optional
Output shape. If the given shape is, e.g., ``(m, n, k)``, then
``m * n * k`` samples are drawn. If size is ``None`` (default),
a single value is returned if ``n`` and ``p`` are both scalars.
Otherwise, ``np.broadcast(n, p).size`` samples are drawn.
Returns
-------
out : ndarray or scalar
Drawn samples from the parameterized negative binomial distribution,
where each sample is equal to N, the number of failures that
occurred before a total of n successes was reached.
See Also
--------
Generator.negative_binomial: which should be used for new code.
Notes
-----
The probability mass function of the negative binomial distribution is
.. math:: P(N;n,p) = \frac{\Gamma(N+n)}{N!\Gamma(n)}p^{n}(1-p)^{N},
where :math:`n` is the number of successes, :math:`p` is the
probability of success, :math:`N+n` is the number of trials, and
:math:`\Gamma` is the gamma function. When :math:`n` is an integer,
:math:`\frac{\Gamma(N+n)}{N!\Gamma(n)} = \binom{N+n-1}{N}`, which is
the more common form of this term in the pmf. The negative
binomial distribution gives the probability of N failures given n
successes, with a success on the last trial.
If one throws a die repeatedly until the third time a "1" appears,
then the probability distribution of the number of non-"1"s that
appear before the third "1" is a negative binomial distribution.
References
----------
.. [1] Weisstein, Eric W. "Negative Binomial Distribution." From
MathWorld--A Wolfram Web Resource.
http://mathworld.wolfram.com/NegativeBinomialDistribution.html
.. [2] Wikipedia, "Negative binomial distribution",
https://en.wikipedia.org/wiki/Negative_binomial_distribution
Examples
--------
Draw samples from the distribution:
A real world example. A company drills wild-cat oil
exploration wells, each with an estimated probability of
success of 0.1. What is the probability of having one success
for each successive well, that is what is the probability of a
single success after drilling 5 wells, after 6 wells, etc.?
>>> s = np.random.negative_binomial(1, 0.1, 100000)
>>> for i in range(1, 11): # doctest: +SKIP
... probability = sum(s<i) / 100000.
... print(i, "wells drilled, probability of one success =", probability)
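As an additional illustrative check (the tolerance below is an
arbitrary, generous bound): the sample mean should be close to the
theoretical mean ``n*(1-p)/p``, which is 9 for ``n=1`` and ``p=0.1``.
>>> s = np.random.negative_binomial(1, 0.1, 100000)
>>> abs(s.mean() - 1 * (1 - 0.1) / 0.1) < 0.2 # compare to the theoretical mean of 9
True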
noncentral_chisquare(...) method of numpy.random.mtrand.RandomState instance
noncentral_chisquare(df, nonc, size=None)
Draw samples from a noncentral chi-square distribution.
The noncentral :math:`\chi^2` distribution is a generalization of
the :math:`\chi^2` distribution.
.. note::
New code should use the ``noncentral_chisquare`` method of a ``default_rng()``
instance instead; see `random-quick-start`.
Parameters
----------
df : float or array_like of floats
Degrees of freedom, must be > 0.
.. versionchanged:: 1.10.0
Earlier NumPy versions required dfnum > 1.
nonc : float or array_like of floats
Non-centrality, must be non-negative.
size : int or tuple of ints, optional
Output shape. If the given shape is, e.g., ``(m, n, k)``, then
``m * n * k`` samples are drawn. If size is ``None`` (default),
a single value is returned if ``df`` and ``nonc`` are both scalars.
Otherwise, ``np.broadcast(df, nonc).size`` samples are drawn.
Returns
-------
out : ndarray or scalar
Drawn samples from the parameterized noncentral chi-square distribution.
See Also
--------
Generator.noncentral_chisquare: which should be used for new code.
Notes
-----
The probability density function for the noncentral Chi-square
distribution is
.. math:: P(x;df,nonc) = \sum^{\infty}_{i=0}
\frac{e^{-nonc/2}(nonc/2)^{i}}{i!}
P_{Y_{df+2i}}(x),
where :math:`Y_{q}` is the Chi-square with q degrees of freedom.
References
----------
.. [1] Wikipedia, "Noncentral chi-squared distribution"
https://en.wikipedia.org/wiki/Noncentral_chi-squared_distribution
Examples
--------
Draw values from the distribution and plot the histogram
>>> import matplotlib.pyplot as plt
>>> values = plt.hist(np.random.noncentral_chisquare(3, 20, 100000),
... bins=200, density=True)
>>> plt.show()
Draw values from a noncentral chisquare with very small noncentrality,
and compare to a chisquare.
>>> plt.figure()
>>> values = plt.hist(np.random.noncentral_chisquare(3, .0000001, 100000),
... bins=np.arange(0., 25, .1), density=True)
>>> values2 = plt.hist(np.random.chisquare(3, 100000),
... bins=np.arange(0., 25, .1), density=True)
>>> plt.plot(values[1][0:-1], values[0]-values2[0], 'ob')
>>> plt.show()
Demonstrate how large values of non-centrality lead to a more symmetric
distribution.
>>> plt.figure()
>>> values = plt.hist(np.random.noncentral_chisquare(3, 20, 100000),
... bins=200, density=True)
>>> plt.show()
noncentral_f(...) method of numpy.random.mtrand.RandomState instance
noncentral_f(dfnum, dfden, nonc, size=None)
Draw samples from the noncentral F distribution.
Samples are drawn from an F distribution with specified parameters,
`dfnum` (degrees of freedom in numerator) and `dfden` (degrees of
freedom in denominator), where both parameters > 1.
`nonc` is the non-centrality parameter.
.. note::
New code should use the ``noncentral_f`` method of a ``default_rng()``
instance instead; see `random-quick-start`.
Parameters
----------
dfnum : float or array_like of floats
Numerator degrees of freedom, must be > 0.
.. versionchanged:: 1.14.0
Earlier NumPy versions required dfnum > 1.
dfden : float or array_like of floats
Denominator degrees of freedom, must be > 0.
nonc : float or array_like of floats
Non-centrality parameter, the sum of the squares of the numerator
means, must be >= 0.
size : int or tuple of ints, optional
Output shape. If the given shape is, e.g., ``(m, n, k)``, then
``m * n * k`` samples are drawn. If size is ``None`` (default),
a single value is returned if ``dfnum``, ``dfden``, and ``nonc``
are all scalars. Otherwise, ``np.broadcast(dfnum, dfden, nonc).size``
samples are drawn.
Returns
-------
out : ndarray or scalar
Drawn samples from the parameterized noncentral Fisher distribution.
See Also
--------
Generator.noncentral_f: which should be used for new code.
Notes
-----
When calculating the power of an experiment (power = probability of
rejecting the null hypothesis when a specific alternative is true) the
non-central F statistic becomes important. When the null hypothesis is
true, the F statistic follows a central F distribution. When the null
hypothesis is not true, then it follows a non-central F statistic.
References
----------
.. [1] Weisstein, Eric W. "Noncentral F-Distribution."
From MathWorld--A Wolfram Web Resource.
http://mathworld.wolfram.com/NoncentralF-Distribution.html
.. [2] Wikipedia, "Noncentral F-distribution",
https://en.wikipedia.org/wiki/Noncentral_F-distribution
Examples
--------
In a study, testing for a specific alternative to the null hypothesis
requires use of the Noncentral F distribution. We need to calculate the
area in the tail of the distribution that exceeds the value of the F
distribution for the null hypothesis. We'll plot the two probability
distributions for comparison.
>>> dfnum = 3 # between group deg of freedom
>>> dfden = 20 # within groups degrees of freedom
>>> nonc = 3.0
>>> nc_vals = np.random.noncentral_f(dfnum, dfden, nonc, 1000000)
>>> NF = np.histogram(nc_vals, bins=50, density=True)
>>> c_vals = np.random.f(dfnum, dfden, 1000000)
>>> F = np.histogram(c_vals, bins=50, density=True)
>>> import matplotlib.pyplot as plt
>>> plt.plot(F[1][1:], F[0])
>>> plt.plot(NF[1][1:], NF[0])
>>> plt.show()
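Continuing the sketch above (reusing ``c_vals`` and ``nc_vals`` from the
example; the 95% cutoff is an illustrative choice), the power of the
test can be estimated empirically as the fraction of noncentral samples
that exceed the critical value obtained under the null:
>>> crit = np.percentile(c_vals, 95) # approximate 5% critical value under the null
>>> power_estimate = np.mean(nc_vals > crit) # fraction of rejections under the alternative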
normal(...) method of numpy.random.mtrand.RandomState instance
normal(loc=0.0, scale=1.0, size=None)
Draw random samples from a normal (Gaussian) distribution.
The probability density function of the normal distribution, first
derived by De Moivre and 200 years later by both Gauss and Laplace
independently [2]_, is often called the bell curve because of
its characteristic shape (see the example below).
The normal distributions occurs often in nature. For example, it
describes the commonly occurring distribution of samples influenced
by a large number of tiny, random disturbances, each with its own
unique distribution [2]_.
.. note::
New code should use the ``normal`` method of a ``default_rng()``
instance instead; see `random-quick-start`.
Parameters
----------
loc : float or array_like of floats
Mean ("centre") of the distribution.
scale : float or array_like of floats
Standard deviation (spread or "width") of the distribution. Must be
non-negative.
size : int or tuple of ints, optional
Output shape. If the given shape is, e.g., ``(m, n, k)``, then
``m * n * k`` samples are drawn. If size is ``None`` (default),
a single value is returned if ``loc`` and ``scale`` are both scalars.
Otherwise, ``np.broadcast(loc, scale).size`` samples are drawn.
Returns
-------
out : ndarray or scalar
Drawn samples from the parameterized normal distribution.
See Also
--------
scipy.stats.norm : probability density function, distribution or
cumulative density function, etc.
Generator.normal: which should be used for new code.
Notes
-----
The probability density for the Gaussian distribution is
.. math:: p(x) = \frac{1}{\sqrt{ 2 \pi \sigma^2 }}
e^{ - \frac{ (x - \mu)^2 } {2 \sigma^2} },
where :math:`\mu` is the mean and :math:`\sigma` the standard
deviation. The square of the standard deviation, :math:`\sigma^2`,
is called the variance.
The function has its peak at the mean, and its "spread" increases with
the standard deviation (the function reaches 0.607 times its maximum at
:math:`\mu + \sigma` and :math:`\mu - \sigma` [2]_). This implies that
normal is more likely to return samples lying close to the mean, rather
than those far away.
References
----------
.. [1] Wikipedia, "Normal distribution",
https://en.wikipedia.org/wiki/Normal_distribution
.. [2] P. R. Peebles Jr., "Central Limit Theorem" in "Probability,
Random Variables and Random Signal Principles", 4th ed., 2001,
pp. 51, 51, 125.
Examples
--------
Draw samples from the distribution:
>>> mu, sigma = 0, 0.1 # mean and standard deviation
>>> s = np.random.normal(mu, sigma, 1000)
Verify the mean and the variance:
>>> abs(mu - np.mean(s))
0.0 # may vary
>>> abs(sigma - np.std(s, ddof=1))
0.1 # may vary
Display the histogram of the samples, along with
the probability density function:
>>> import matplotlib.pyplot as plt
>>> count, bins, ignored = plt.hist(s, 30, density=True)
>>> plt.plot(bins, 1/(sigma * np.sqrt(2 * np.pi)) *
... np.exp( - (bins - mu)**2 / (2 * sigma**2) ),
... linewidth=2, color='r')
>>> plt.show()
Two-by-four array of samples from N(3, 6.25):
>>> np.random.normal(3, 2.5, size=(2, 4))
array([[-4.49401501, 4.00950034, -1.81814867, 7.29718677], # random
[ 0.39924804, 4.68456316, 4.99394529, 4.84057254]]) # random
pareto(...) method of numpy.random.mtrand.RandomState instance
pareto(a, size=None)
Draw samples from a Pareto II or Lomax distribution with
specified shape.
The Lomax or Pareto II distribution is a shifted Pareto
distribution. The classical Pareto distribution can be
obtained from the Lomax distribution by adding 1 and
multiplying by the scale parameter ``m`` (see Notes). The
smallest value of the Lomax distribution is zero while for the
classical Pareto distribution it is ``mu``, where the standard
Pareto distribution has location ``mu = 1``. Lomax can also
be considered as a simplified version of the Generalized
Pareto distribution (available in SciPy), with the scale set
to one and the location set to zero.
The Pareto distribution must be greater than zero, and is
unbounded above. It is also known as the "80-20 rule". In
this distribution, 80 percent of the weights are in the lowest
20 percent of the range, while the other 20 percent fill the
remaining 80 percent of the range.
.. note::
New code should use the ``pareto`` method of a ``default_rng()``
instance instead; see `random-quick-start`.
Parameters
----------
a : float or array_like of floats
Shape of the distribution. Must be positive.
size : int or tuple of ints, optional
Output shape. If the given shape is, e.g., ``(m, n, k)``, then
``m * n * k`` samples are drawn. If size is ``None`` (default),
a single value is returned if ``a`` is a scalar. Otherwise,
``np.array(a).size`` samples are drawn.
Returns
-------
out : ndarray or scalar
Drawn samples from the parameterized Pareto distribution.
See Also
--------
scipy.stats.lomax : probability density function, distribution or
cumulative density function, etc.
scipy.stats.genpareto : probability density function, distribution or
cumulative density function, etc.
Generator.pareto: which should be used for new code.
Notes
-----
The probability density for the Pareto distribution is
.. math:: p(x) = \frac{am^a}{x^{a+1}}
where :math:`a` is the shape and :math:`m` the scale.
The Pareto distribution, named after the Italian economist
Vilfredo Pareto, is a power law probability distribution
useful in many real world problems. Outside the field of
economics it is generally referred to as the Bradford
distribution. Pareto developed the distribution to describe
the distribution of wealth in an economy. It has also found
use in insurance, web page access statistics, oil field sizes,
and many other problems, including the download frequency for
projects in Sourceforge [1]_. It is one of the so-called
"fat-tailed" distributions.
References
----------
.. [1] Francis Hunt and Paul Johnson, On the Pareto Distribution of
Sourceforge projects.
.. [2] Pareto, V. (1896). Course of Political Economy. Lausanne.
.. [3] Reiss, R.D., Thomas, M.(2001), Statistical Analysis of Extreme
Values, Birkhauser Verlag, Basel, pp 23-30.
.. [4] Wikipedia, "Pareto distribution",
https://en.wikipedia.org/wiki/Pareto_distribution
Examples
--------
Draw samples from the distribution:
>>> a, m = 3., 2. # shape and mode
>>> s = (np.random.pareto(a, 1000) + 1) * m
Display the histogram of the samples, along with the probability
density function:
>>> import matplotlib.pyplot as plt
>>> count, bins, _ = plt.hist(s, 100, density=True)
>>> fit = a*m**a / bins**(a+1)
>>> plt.plot(bins, max(count)*fit/max(fit), linewidth=2, color='r')
>>> plt.show()
permutation(...) method of numpy.random.mtrand.RandomState instance
permutation(x)
Randomly permute a sequence, or return a permuted range.
If `x` is a multi-dimensional array, it is only shuffled along its
first index.
.. note::
New code should use the ``permutation`` method of a ``default_rng()``
instance instead; see `random-quick-start`.
Parameters
----------
x : int or array_like
If `x` is an integer, randomly permute ``np.arange(x)``.
If `x` is an array, make a copy and shuffle the elements
randomly.
Returns
-------
out : ndarray
Permuted sequence or array range.
See Also
--------
Generator.permutation: which should be used for new code.
Examples
--------
>>> np.random.permutation(10)
array([1, 7, 4, 3, 0, 9, 2, 5, 8, 6]) # random
>>> np.random.permutation([1, 4, 9, 12, 15])
array([15, 1, 9, 4, 12]) # random
>>> arr = np.arange(9).reshape((3, 3))
>>> np.random.permutation(arr)
array([[6, 7, 8], # random
[0, 1, 2],
[3, 4, 5]])
poisson(...) method of numpy.random.mtrand.RandomState instance
poisson(lam=1.0, size=None)
Draw samples from a Poisson distribution.
The Poisson distribution is the limit of the binomial distribution
for large N.
.. note::
New code should use the ``poisson`` method of a ``default_rng()``
instance instead; see `random-quick-start`.
Parameters
----------
lam : float or array_like of floats
Expectation of interval, must be >= 0. A sequence of expectation
intervals must be broadcastable over the requested size.
size : int or tuple of ints, optional
Output shape. If the given shape is, e.g., ``(m, n, k)``, then
``m * n * k`` samples are drawn. If size is ``None`` (default),
a single value is returned if ``lam`` is a scalar. Otherwise,
``np.array(lam).size`` samples are drawn.
Returns
-------
out : ndarray or scalar
Drawn samples from the parameterized Poisson distribution.
See Also
--------
Generator.poisson: which should be used for new code.
Notes
-----
The Poisson distribution
.. math:: f(k; \lambda)=\frac{\lambda^k e^{-\lambda}}{k!}
For events with an expected separation :math:`\lambda` the Poisson
distribution :math:`f(k; \lambda)` describes the probability of
:math:`k` events occurring within the observed
interval :math:`\lambda`.
Because the output is limited to the range of the C int64 type, a
ValueError is raised when `lam` is within 10 sigma of the maximum
representable value.
References
----------
.. [1] Weisstein, Eric W. "Poisson Distribution."
From MathWorld--A Wolfram Web Resource.
http://mathworld.wolfram.com/PoissonDistribution.html
.. [2] Wikipedia, "Poisson distribution",
https://en.wikipedia.org/wiki/Poisson_distribution
Examples
--------
Draw samples from the distribution:
>>> import numpy as np
>>> s = np.random.poisson(5, 10000)
Display histogram of the sample:
>>> import matplotlib.pyplot as plt
>>> count, bins, ignored = plt.hist(s, 14, density=True)
>>> plt.show()
Draw each 100 values for lambda 100 and 500:
>>> s = np.random.poisson(lam=(100., 500.), size=(100, 2))
power(...) method of numpy.random.mtrand.RandomState instance
power(a, size=None)
Draws samples in [0, 1] from a power distribution with positive
exponent a - 1.
Also known as the power function distribution.
.. note::
New code should use the ``power`` method of a ``default_rng()``
instance instead; see `random-quick-start`.
Parameters
----------
a : float or array_like of floats
Parameter of the distribution. Must be non-negative.
size : int or tuple of ints, optional
Output shape. If the given shape is, e.g., ``(m, n, k)``, then
``m * n * k`` samples are drawn. If size is ``None`` (default),
a single value is returned if ``a`` is a scalar. Otherwise,
``np.array(a).size`` samples are drawn.
Returns
-------
out : ndarray or scalar
Drawn samples from the parameterized power distribution.
Raises
------
ValueError
If a < 1.
See Also
--------
Generator.power: which should be used for new code.
Notes
-----
The probability density function is
.. math:: P(x; a) = ax^{a-1}, 0 \le x \le 1, a>0.
The power function distribution is just the inverse of the Pareto
distribution. It may also be seen as a special case of the Beta
distribution.
It is used, for example, in modeling the over-reporting of insurance
claims.
References
----------
.. [1] Christian Kleiber, Samuel Kotz, "Statistical size distributions
in economics and actuarial sciences", Wiley, 2003.
.. [2] Heckert, N. A. and Filliben, James J. "NIST Handbook 148:
Dataplot Reference Manual, Volume 2: Let Subcommands and Library
Functions", National Institute of Standards and Technology
Handbook Series, June 2003.
https://www.itl.nist.gov/div898/software/dataplot/refman2/auxillar/powpdf.pdf
Examples
--------
Draw samples from the distribution:
>>> a = 5. # shape
>>> samples = 1000
>>> s = np.random.power(a, samples)
Display the histogram of the samples, along with
the probability density function:
>>> import matplotlib.pyplot as plt
>>> count, bins, ignored = plt.hist(s, bins=30)
>>> x = np.linspace(0, 1, 100)
>>> y = a*x**(a-1.)
>>> normed_y = samples*np.diff(bins)[0]*y
>>> plt.plot(x, normed_y)
>>> plt.show()
Compare the power function distribution to the inverse of the Pareto.
>>> from scipy import stats # doctest: +SKIP
>>> rvs = np.random.power(5, 1000000)
>>> rvsp = np.random.pareto(5, 1000000)
>>> xx = np.linspace(0,1,100)
>>> powpdf = stats.powerlaw.pdf(xx,5) # doctest: +SKIP
>>> plt.figure()
>>> plt.hist(rvs, bins=50, density=True)
>>> plt.plot(xx,powpdf,'r-') # doctest: +SKIP
>>> plt.title('np.random.power(5)')
>>> plt.figure()
>>> plt.hist(1./(1.+rvsp), bins=50, density=True)
>>> plt.plot(xx,powpdf,'r-') # doctest: +SKIP
>>> plt.title('inverse of 1 + np.random.pareto(5)')
>>> plt.figure()
>>> plt.hist(1./(1.+rvsp), bins=50, density=True)
>>> plt.plot(xx,powpdf,'r-') # doctest: +SKIP
>>> plt.title('inverse of stats.pareto(5)')
rand(...) method of numpy.random.mtrand.RandomState instance
rand(d0, d1, ..., dn)
Random values in a given shape.
.. note::
This is a convenience function for users porting code from Matlab,
and wraps `random_sample`. That function takes a
tuple to specify the size of the output, which is consistent with
other NumPy functions like `numpy.zeros` and `numpy.ones`.
Create an array of the given shape and populate it with
random samples from a uniform distribution
over ``[0, 1)``.
Parameters
----------
d0, d1, ..., dn : int, optional
The dimensions of the returned array, must be non-negative.
If no argument is given a single Python float is returned.
Returns
-------
out : ndarray, shape ``(d0, d1, ..., dn)``
Random values.
See Also
--------
random
Examples
--------
>>> np.random.rand(3,2)
array([[ 0.14022471, 0.96360618], #random
[ 0.37601032, 0.25528411], #random
[ 0.49313049, 0.94909878]]) #random
randint(...) method of numpy.random.mtrand.RandomState instance
randint(low, high=None, size=None, dtype=int)
Return random integers from `low` (inclusive) to `high` (exclusive).
Return random integers from the "discrete uniform" distribution of
the specified dtype in the "half-open" interval [`low`, `high`). If
`high` is None (the default), then results are from [0, `low`).
.. note::
New code should use the ``integers`` method of a ``default_rng()``
instance instead; see `random-quick-start`.
Parameters
----------
low : int or array-like of ints
Lowest (signed) integers to be drawn from the distribution (unless
``high=None``, in which case this parameter is one above the
*highest* such integer).
high : int or array-like of ints, optional
If provided, one above the largest (signed) integer to be drawn
from the distribution (see above for behavior if ``high=None``).
If array-like, must contain integer values
size : int or tuple of ints, optional
Output shape. If the given shape is, e.g., ``(m, n, k)``, then
``m * n * k`` samples are drawn. Default is None, in which case a
single value is returned.
dtype : dtype, optional
Desired dtype of the result. Byteorder must be native.
The default value is int.
.. versionadded:: 1.11.0
Returns
-------
out : int or ndarray of ints
`size`-shaped array of random integers from the appropriate
distribution, or a single such random int if `size` not provided.
See Also
--------
random_integers : similar to `randint`, only for the closed
interval [`low`, `high`], and 1 is the lowest value if `high` is
omitted.
Generator.integers: which should be used for new code.
Examples
--------
>>> np.random.randint(2, size=10)
array([1, 0, 0, 0, 1, 1, 0, 0, 1, 0]) # random
>>> np.random.randint(1, size=10)
array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
Generate a 2 x 4 array of ints between 0 and 4, inclusive:
>>> np.random.randint(5, size=(2, 4))
array([[4, 0, 2, 1], # random
[3, 2, 2, 0]])
Generate a 1 x 3 array with 3 different upper bounds
>>> np.random.randint(1, [3, 5, 10])
array([2, 2, 9]) # random
Generate a 1 by 3 array with 3 different lower bounds
>>> np.random.randint([1, 5, 7], 10)
array([9, 8, 7]) # random
Generate a 2 by 4 array using broadcasting with dtype of uint8
>>> np.random.randint([1, 3, 5, 7], [[10], [20]], dtype=np.uint8)
array([[ 8, 6, 9, 7], # random
[ 1, 16, 9, 12]], dtype=uint8)
randn(...) method of numpy.random.mtrand.RandomState instance
randn(d0, d1, ..., dn)
Return a sample (or samples) from the "standard normal" distribution.
.. note::
This is a convenience function for users porting code from Matlab,
and wraps `standard_normal`. That function takes a
tuple to specify the size of the output, which is consistent with
other NumPy functions like `numpy.zeros` and `numpy.ones`.
.. note::
New code should use the ``standard_normal`` method of a ``default_rng()``
instance instead; see `random-quick-start`.
If positive int_like arguments are provided, `randn` generates an array
of shape ``(d0, d1, ..., dn)``, filled
with random floats sampled from a univariate "normal" (Gaussian)
distribution of mean 0 and variance 1. A single float randomly sampled
from the distribution is returned if no argument is provided.
Parameters
----------
d0, d1, ..., dn : int, optional
The dimensions of the returned array, must be non-negative.
If no argument is given a single Python float is returned.
Returns
-------
Z : ndarray or float
A ``(d0, d1, ..., dn)``-shaped array of floating-point samples from
the standard normal distribution, or a single such float if
no parameters were supplied.
See Also
--------
standard_normal : Similar, but takes a tuple as its argument.
normal : Also accepts mu and sigma arguments.
Generator.standard_normal: which should be used for new code.
Notes
-----
For random samples from :math:`N(\mu, \sigma^2)`, use:
``sigma * np.random.randn(...) + mu``
Examples
--------
>>> np.random.randn()
2.1923875335537315 # random
Two-by-four array of samples from N(3, 6.25):
>>> 3 + 2.5 * np.random.randn(2, 4)
array([[-4.49401501, 4.00950034, -1.81814867, 7.29718677], # random
[ 0.39924804, 4.68456316, 4.99394529, 4.84057254]]) # random
random(...) method of numpy.random.mtrand.RandomState instance
random(size=None)
Return random floats in the half-open interval [0.0, 1.0). Alias for
`random_sample` to ease forward-porting to the new random API.
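A minimal usage sketch (illustrative only), showing the legacy call next
to the ``default_rng()`` replacement mentioned throughout this
documentation:
>>> x = np.random.random() # equivalent to np.random.random_sample()
>>> 0.0 <= x < 1.0
True
>>> rng = np.random.default_rng() # new-style Generator
>>> y = rng.random() # same call pattern, which eases forward-porting
>>> 0.0 <= y < 1.0
True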
random_integers(...) method of numpy.random.mtrand.RandomState instance
random_integers(low, high=None, size=None)
Random integers of type `np.int_` between `low` and `high`, inclusive.
Return random integers of type `np.int_` from the "discrete uniform"
distribution in the closed interval [`low`, `high`]. If `high` is
None (the default), then results are from [1, `low`]. The `np.int_`
type translates to the C long integer type and its precision
is platform dependent.
This function has been deprecated. Use randint instead.
.. deprecated:: 1.11.0
Parameters
----------
low : int
Lowest (signed) integer to be drawn from the distribution (unless
``high=None``, in which case this parameter is the *highest* such
integer).
high : int, optional
If provided, the largest (signed) integer to be drawn from the
distribution (see above for behavior if ``high=None``).
size : int or tuple of ints, optional
Output shape. If the given shape is, e.g., ``(m, n, k)``, then
``m * n * k`` samples are drawn. Default is None, in which case a
single value is returned.
Returns
-------
out : int or ndarray of ints
`size`-shaped array of random integers from the appropriate
distribution, or a single such random int if `size` not provided.
See Also
--------
randint : Similar to `random_integers`, only for the half-open
interval [`low`, `high`), and 0 is the lowest value if `high` is
omitted.
Notes
-----
To sample from N evenly spaced floating-point numbers between a and b,
use::
a + (b - a) * (np.random.random_integers(N) - 1) / (N - 1.)
Examples
--------
>>> np.random.random_integers(5)
4 # random
>>> type(np.random.random_integers(5))
<class 'numpy.int64'>
>>> np.random.random_integers(5, size=(3,2))
array([[5, 4], # random
[3, 3],
[4, 5]])
Choose five random numbers from the set of five evenly-spaced
numbers between 0 and 2.5, inclusive (*i.e.*, from the set
:math:`{0, 5/8, 10/8, 15/8, 20/8}`):
>>> 2.5 * (np.random.random_integers(5, size=(5,)) - 1) / 4.
array([ 0.625, 1.25 , 0.625, 0.625, 2.5 ]) # random
Roll two six sided dice 1000 times and sum the results:
>>> d1 = np.random.random_integers(1, 6, 1000)
>>> d2 = np.random.random_integers(1, 6, 1000)
>>> dsums = d1 + d2
Display results as a histogram:
>>> import matplotlib.pyplot as plt
>>> count, bins, ignored = plt.hist(dsums, 11, density=True)
>>> plt.show()
random_sample(...) method of numpy.random.mtrand.RandomState instance
random_sample(size=None)
Return random floats in the half-open interval [0.0, 1.0).
Results are from the "continuous uniform" distribution over the
stated interval. To sample :math:`Unif[a, b), b > a` multiply
the output of `random_sample` by `(b-a)` and add `a`::
(b - a) * random_sample() + a
.. note::
New code should use the ``random`` method of a ``default_rng()``
instance instead; see `random-quick-start`.
Parameters
----------
size : int or tuple of ints, optional
Output shape. If the given shape is, e.g., ``(m, n, k)``, then
``m * n * k`` samples are drawn. Default is None, in which case a
single value is returned.
Returns
-------
out : float or ndarray of floats
Array of random floats of shape `size` (unless ``size=None``, in which
case a single float is returned).
See Also
--------
Generator.random: which should be used for new code.
Examples
--------
>>> np.random.random_sample()
0.47108547995356098 # random
>>> type(np.random.random_sample())
<class 'float'>
>>> np.random.random_sample((5,))
array([ 0.30220482, 0.86820401, 0.1654503 , 0.11659149, 0.54323428]) # random
Three-by-two array of random numbers from [-5, 0):
>>> 5 * np.random.random_sample((3, 2)) - 5
array([[-3.99149989, -0.52338984], # random
[-2.99091858, -0.79479508],
[-1.23204345, -1.75224494]])
ranf(...)
This is an alias of `random_sample`. See `random_sample` for the complete
documentation.
rayleigh(...) method of numpy.random.mtrand.RandomState instance
rayleigh(scale=1.0, size=None)
Draw samples from a Rayleigh distribution.
The :math:`\chi` and Weibull distributions are generalizations of the
Rayleigh.
.. note::
New code should use the ``rayleigh`` method of a ``default_rng()``
instance instead; see `random-quick-start`.
Parameters
----------
scale : float or array_like of floats, optional
Scale, also equals the mode. Must be non-negative. Default is 1.
size : int or tuple of ints, optional
Output shape. If the given shape is, e.g., ``(m, n, k)``, then
``m * n * k`` samples are drawn. If size is ``None`` (default),
a single value is returned if ``scale`` is a scalar. Otherwise,
``np.array(scale).size`` samples are drawn.
Returns
-------
out : ndarray or scalar
Drawn samples from the parameterized Rayleigh distribution.
See Also
--------
Generator.rayleigh: which should be used for new code.
Notes
-----
The probability density function for the Rayleigh distribution is
.. math:: P(x;scale) = \frac{x}{scale^2}e^{\frac{-x^2}{2 \cdotp scale^2}}
The Rayleigh distribution would arise, for example, if the East
and North components of the wind velocity had identical zero-mean
Gaussian distributions. Then the wind speed would have a Rayleigh
distribution.
References
----------
.. [1] Brighton Webs Ltd., "Rayleigh Distribution,"
https://web.archive.org/web/20090514091424/http://brighton-webs.co.uk:80/distributions/rayleigh.asp
.. [2] Wikipedia, "Rayleigh distribution"
https://en.wikipedia.org/wiki/Rayleigh_distribution
Examples
--------
Draw values from the distribution and plot the histogram
>>> from matplotlib.pyplot import hist
>>> values = hist(np.random.rayleigh(3, 100000), bins=200, density=True)
Wave heights tend to follow a Rayleigh distribution. If the mean wave
height is 1 meter, what fraction of waves are likely to be larger than 3
meters?
>>> meanvalue = 1
>>> modevalue = np.sqrt(2 / np.pi) * meanvalue
>>> s = np.random.rayleigh(modevalue, 1000000)
The percentage of waves larger than 3 meters is:
>>> 100.*sum(s>3)/1000000.
0.087300000000000003 # random
sample(...)
This is an alias of `random_sample`. See `random_sample` for the complete
documentation.
seed(...) method of numpy.random.mtrand.RandomState instance
seed(self, seed=None)
Reseed a legacy MT19937 BitGenerator
Notes
-----
This is a convenience, legacy function.
The best practice is to **not** reseed a BitGenerator, rather to
recreate a new one. This method is here for legacy reasons.
This example demonstrates best practice.
>>> from numpy.random import MT19937
>>> from numpy.random import RandomState, SeedSequence
>>> rs = RandomState(MT19937(SeedSequence(123456789)))
# Later, you want to restart the stream
>>> rs = RandomState(MT19937(SeedSequence(987654321)))
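A minimal reproducibility sketch for the legacy global state (the seed
value is arbitrary): reseeding with the same value replays the same
sequence.
>>> np.random.seed(42)
>>> first = np.random.rand(3)
>>> np.random.seed(42) # reseeding restarts the legacy global stream
>>> second = np.random.rand(3)
>>> np.allclose(first, second)
True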
set_state(...) method of numpy.random.mtrand.RandomState instance
set_state(state)
Set the internal state of the generator from a tuple.
For use if one has reason to manually (re-)set the internal state of
the bit generator used by the RandomState instance. By default,
RandomState uses the "Mersenne Twister"[1]_ pseudo-random number
generating algorithm.
Parameters
----------
state : {tuple(str, ndarray of 624 uints, int, int, float), dict}
The `state` tuple has the following items:
1. the string 'MT19937', specifying the Mersenne Twister algorithm.
2. a 1-D array of 624 unsigned integers ``keys``.
3. an integer ``pos``.
4. an integer ``has_gauss``.
5. a float ``cached_gaussian``.
If state is a dictionary, it is directly set using the BitGenerators
`state` property.
Returns
-------
out : None
Returns 'None' on success.
See Also
--------
get_state
Notes
-----
`set_state` and `get_state` are not needed to work with any of the
random distributions in NumPy. If the internal state is manually altered,
the user should know exactly what he/she is doing.
For backwards compatibility, the form (str, array of 624 uints, int) is
also accepted although it is missing some information about the cached
Gaussian value: ``state = ('MT19937', keys, pos)``.
References
----------
.. [1] M. Matsumoto and T. Nishimura, "Mersenne Twister: A
623-dimensionally equidistributed uniform pseudorandom number
generator," *ACM Trans. on Modeling and Computer Simulation*,
Vol. 8, No. 1, pp. 3-30, Jan. 1998.
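A minimal round-trip sketch (illustrative only): a state captured with
`get_state` can be restored with `set_state` to replay the same draws.
>>> state = np.random.get_state() # snapshot the global RandomState
>>> first = np.random.rand(4)
>>> np.random.set_state(state) # rewind to the snapshot
>>> second = np.random.rand(4)
>>> np.allclose(first, second)
True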
shuffle(...) method of numpy.random.mtrand.RandomState instance
shuffle(x)
Modify a sequence in-place by shuffling its contents.
This function only shuffles the array along the first axis of a
multi-dimensional array. The order of sub-arrays is changed but
their contents remains the same.
.. note::
New code should use the ``shuffle`` method of a ``default_rng()``
instance instead; see `random-quick-start`.
Parameters
----------
x : array_like
The array or list to be shuffled.
Returns
-------
None
See Also
--------
Generator.shuffle: which should be used for new code.
Examples
--------
>>> arr = np.arange(10)
>>> np.random.shuffle(arr)
>>> arr
[1 7 5 2 9 4 3 6 0 8] # random
Multi-dimensional arrays are only shuffled along the first axis:
>>> arr = np.arange(9).reshape((3, 3))
>>> np.random.shuffle(arr)
>>> arr
array([[3, 4, 5], # random
[6, 7, 8],
[0, 1, 2]])
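A short sketch contrasting `shuffle` with `permutation` (array contents
are arbitrary): `shuffle` rearranges its argument in place and returns
None, while `permutation` leaves the input untouched and returns a
shuffled copy.
>>> arr = np.arange(5)
>>> out = np.random.permutation(arr) # shuffled copy; arr is unchanged
>>> result = np.random.shuffle(arr) # arr is now shuffled in place
>>> result is None
True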
standard_cauchy(...) method of numpy.random.mtrand.RandomState instance
standard_cauchy(size=None)
Draw samples from a standard Cauchy distribution with mode = 0.
Also known as the Lorentz distribution.
.. note::
New code should use the ``standard_cauchy`` method of a ``default_rng()``
instance instead; see `random-quick-start`.
Parameters
----------
size : int or tuple of ints, optional
Output shape. If the given shape is, e.g., ``(m, n, k)``, then
``m * n * k`` samples are drawn. Default is None, in which case a
single value is returned.
Returns
-------
samples : ndarray or scalar
The drawn samples.
See Also
--------
Generator.standard_cauchy: which should be used for new code.
Notes
-----
The probability density function for the full Cauchy distribution is
.. math:: P(x; x_0, \gamma) = \frac{1}{\pi \gamma \bigl[ 1+
(\frac{x-x_0}{\gamma})^2 \bigr] }
and the Standard Cauchy distribution just sets :math:`x_0=0` and
:math:`\gamma=1`
The Cauchy distribution arises in the solution to the driven harmonic
oscillator problem, and also describes spectral line broadening. It
also describes the distribution of values at which a line tilted at
a random angle will cut the x axis.
When studying hypothesis tests that assume normality, seeing how the
tests perform on data from a Cauchy distribution is a good indicator of
their sensitivity to a heavy-tailed distribution, since the Cauchy looks
very much like a Gaussian distribution, but with heavier tails.
References
----------
.. [1] NIST/SEMATECH e-Handbook of Statistical Methods, "Cauchy
Distribution",
https://www.itl.nist.gov/div898/handbook/eda/section3/eda3663.htm
.. [2] Weisstein, Eric W. "Cauchy Distribution." From MathWorld--A
Wolfram Web Resource.
http://mathworld.wolfram.com/CauchyDistribution.html
.. [3] Wikipedia, "Cauchy distribution"
https://en.wikipedia.org/wiki/Cauchy_distribution
Examples
--------
Draw samples and plot the distribution:
>>> import matplotlib.pyplot as plt
>>> s = np.random.standard_cauchy(1000000)
>>> s = s[(s>-25) & (s<25)] # truncate distribution so it plots well
>>> plt.hist(s, bins=100)
>>> plt.show()
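An illustrative sketch of the heavy-tail behaviour described above (the
sample size is arbitrary): the sample median is a stable estimate of the
mode, while the sample mean of Cauchy draws is itself Cauchy distributed
and never settles down.
>>> s = np.random.standard_cauchy(1000000)
>>> stable = np.median(s) # close to 0 on every run
>>> unstable = s.mean() # varies wildly between runs; the Cauchy mean is undefined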
standard_exponential(...) method of numpy.random.mtrand.RandomState instance
standard_exponential(size=None)
Draw samples from the standard exponential distribution.
`standard_exponential` is identical to the exponential distribution
with a scale parameter of 1.
.. note::
New code should use the ``standard_exponential`` method of a ``default_rng()``
instance instead; see `random-quick-start`.
Parameters
----------
size : int or tuple of ints, optional
Output shape. If the given shape is, e.g., ``(m, n, k)``, then
``m * n * k`` samples are drawn. Default is None, in which case a
single value is returned.
Returns
-------
out : float or ndarray
Drawn samples.
See Also
--------
Generator.standard_exponential: which should be used for new code.
Examples
--------
Output a 3x8000 array:
>>> n = np.random.standard_exponential((3, 8000))
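A small sketch of the scale relation stated above (the scale value and
tolerance are arbitrary): multiplying standard-exponential draws by
``scale`` gives draws from ``exponential(scale)``, whose mean equals
``scale``.
>>> scale = 2.5
>>> s = scale * np.random.standard_exponential(100000)
>>> abs(s.mean() - scale) < 0.05 # the mean of an exponential equals its scale
True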
standard_gamma(...) method of numpy.random.mtrand.RandomState instance
standard_gamma(shape, size=None)
Draw samples from a standard Gamma distribution.
Samples are drawn from a Gamma distribution with specified parameters,
shape (sometimes designated "k") and scale=1.
.. note::
New code should use the ``standard_gamma`` method of a ``default_rng()``
instance instead; see `random-quick-start`.
Parameters
----------
shape : float or array_like of floats
Parameter, must be non-negative.
size : int or tuple of ints, optional
Output shape. If the given shape is, e.g., ``(m, n, k)``, then
``m * n * k`` samples are drawn. If size is ``None`` (default),
a single value is returned if ``shape`` is a scalar. Otherwise,
``np.array(shape).size`` samples are drawn.
Returns
-------
out : ndarray or scalar
Drawn samples from the parameterized standard gamma distribution.
See Also
--------
scipy.stats.gamma : probability density function, distribution or
cumulative density function, etc.
Generator.standard_gamma: which should be used for new code.
Notes
-----
The probability density for the Gamma distribution is
.. math:: p(x) = x^{k-1}\frac{e^{-x/\theta}}{\theta^k\Gamma(k)},
where :math:`k` is the shape and :math:`\theta` the scale,
and :math:`\Gamma` is the Gamma function.
The Gamma distribution is often used to model the times to failure of
electronic components, and arises naturally in processes for which the
waiting times between Poisson distributed events are relevant.
References
----------
.. [1] Weisstein, Eric W. "Gamma Distribution." From MathWorld--A
Wolfram Web Resource.
http://mathworld.wolfram.com/GammaDistribution.html
.. [2] Wikipedia, "Gamma distribution",
https://en.wikipedia.org/wiki/Gamma_distribution
Examples
--------
Draw samples from the distribution:
>>> shape, scale = 2., 1. # mean and width
>>> s = np.random.standard_gamma(shape, 1000000)
Display the histogram of the samples, along with
the probability density function:
>>> import matplotlib.pyplot as plt
>>> import scipy.special as sps # doctest: +SKIP
>>> count, bins, ignored = plt.hist(s, 50, density=True)
>>> y = bins**(shape-1) * ((np.exp(-bins/scale))/ # doctest: +SKIP
... (sps.gamma(shape) * scale**shape))
>>> plt.plot(bins, y, linewidth=2, color='r') # doctest: +SKIP
>>> plt.show()
standard_normal(...) method of numpy.random.mtrand.RandomState instance
standard_normal(size=None)
Draw samples from a standard Normal distribution (mean=0, stdev=1).
.. note::
New code should use the ``standard_normal`` method of a ``default_rng()``
instance instead; see `random-quick-start`.
Parameters
----------
size : int or tuple of ints, optional
Output shape. If the given shape is, e.g., ``(m, n, k)``, then
``m * n * k`` samples are drawn. Default is None, in which case a
single value is returned.
Returns
-------
out : float or ndarray
A floating-point array of shape ``size`` of drawn samples, or a
single sample if ``size`` was not specified.
See Also
--------
normal :
Equivalent function with additional ``loc`` and ``scale`` arguments
for setting the mean and standard deviation.
Generator.standard_normal: which should be used for new code.
Notes
-----
For random samples from :math:`N(\mu, \sigma^2)`, use one of::
mu + sigma * np.random.standard_normal(size=...)
np.random.normal(mu, sigma, size=...)
Examples
--------
>>> np.random.standard_normal()
2.1923875335537315 #random
>>> s = np.random.standard_normal(8000)
>>> s
array([ 0.6888893 , 0.78096262, -0.89086505, ..., 0.49876311, # random
-0.38672696, -0.4685006 ]) # random
>>> s.shape
(8000,)
>>> s = np.random.standard_normal(size=(3, 4, 2))
>>> s.shape
(3, 4, 2)
Two-by-four array of samples from :math:`N(3, 6.25)`:
>>> 3 + 2.5 * np.random.standard_normal(size=(2, 4))
array([[-4.49401501, 4.00950034, -1.81814867, 7.29718677], # random
[ 0.39924804, 4.68456316, 4.99394529, 4.84057254]]) # random
standard_t(...) method of numpy.random.mtrand.RandomState instance
standard_t(df, size=None)
Draw samples from a standard Student's t distribution with `df` degrees
of freedom.
A special case of the hyperbolic distribution. As `df` gets
large, the result resembles that of the standard normal
distribution (`standard_normal`).
.. note::
New code should use the ``standard_t`` method of a ``default_rng()``
instance instead; see `random-quick-start`.
Parameters
----------
df : float or array_like of floats
Degrees of freedom, must be > 0.
size : int or tuple of ints, optional
Output shape. If the given shape is, e.g., ``(m, n, k)``, then
``m * n * k`` samples are drawn. If size is ``None`` (default),
a single value is returned if ``df`` is a scalar. Otherwise,
``np.array(df).size`` samples are drawn.
Returns
-------
out : ndarray or scalar
Drawn samples from the parameterized standard Student's t distribution.
See Also
--------
Generator.standard_t: which should be used for new code.
Notes
-----
The probability density function for the t distribution is
.. math:: P(x, df) = \frac{\Gamma(\frac{df+1}{2})}{\sqrt{\pi df}
\Gamma(\frac{df}{2})}\Bigl( 1+\frac{x^2}{df} \Bigr)^{-(df+1)/2}
The t test is based on an assumption that the data come from a
Normal distribution. The t test provides a way to test whether
the sample mean (that is the mean calculated from the data) is
a good estimate of the true mean.
The derivation of the t-distribution was first published in
1908 by William Gosset while working for the Guinness Brewery
in Dublin. Due to proprietary issues, he had to publish under
a pseudonym, and so he used the name Student.
References
----------
.. [1] Dalgaard, Peter, "Introductory Statistics With R",
Springer, 2002.
.. [2] Wikipedia, "Student's t-distribution"
https://en.wikipedia.org/wiki/Student's_t-distribution
Examples
--------
From Dalgaard page 83 [1]_, suppose the daily energy intake for 11
women in kilojoules (kJ) is:
>>> intake = np.array([5260., 5470, 5640, 6180, 6390, 6515, 6805, 7515, \
... 7515, 8230, 8770])
Does their energy intake deviate systematically from the recommended
value of 7725 kJ?
We have 10 degrees of freedom, so does the sample mean differ
significantly from the recommended value at the 95% confidence level?
>>> s = np.random.standard_t(10, size=100000)
>>> np.mean(intake)
6753.636363636364
>>> intake.std(ddof=1)
1142.1232221373727
Calculate the t statistic, setting the ddof parameter to the unbiased
value so the divisor in the standard deviation will be degrees of
freedom, N-1.
>>> t = (np.mean(intake)-7725)/(intake.std(ddof=1)/np.sqrt(len(intake)))
>>> import matplotlib.pyplot as plt
>>> h = plt.hist(s, bins=100, density=True)
For a one-sided t-test, how far out in the distribution does the t
statistic appear?
>>> np.sum(s<t) / float(len(s))
0.0090699999999999999 #random
So the p-value is about 0.009: if the null hypothesis were true, a t
statistic this far out in the tail would be seen in fewer than about 1%
of samples, which is strong evidence against the recommended value.
triangular(...) method of numpy.random.mtrand.RandomState instance
triangular(left, mode, right, size=None)
Draw samples from the triangular distribution over the
interval ``[left, right]``.
The triangular distribution is a continuous probability
distribution with lower limit left, peak at mode, and upper
limit right. Unlike the other distributions, these parameters
directly define the shape of the pdf.
.. note::
New code should use the ``triangular`` method of a ``default_rng()``
instance instead; see `random-quick-start`.
Parameters
----------
left : float or array_like of floats
Lower limit.
mode : float or array_like of floats
The value where the peak of the distribution occurs.
The value must fulfill the condition ``left <= mode <= right``.
right : float or array_like of floats
Upper limit, must be larger than `left`.
size : int or tuple of ints, optional
Output shape. If the given shape is, e.g., ``(m, n, k)``, then
``m * n * k`` samples are drawn. If size is ``None`` (default),
a single value is returned if ``left``, ``mode``, and ``right``
are all scalars. Otherwise, ``np.broadcast(left, mode, right).size``
samples are drawn.
Returns
-------
out : ndarray or scalar
Drawn samples from the parameterized triangular distribution.
See Also
--------
Generator.triangular: which should be used for new code.
Notes
-----
The probability density function for the triangular distribution is
.. math:: P(x;l, m, r) = \begin{cases}
\frac{2(x-l)}{(r-l)(m-l)}& \text{for $l \leq x \leq m$},\\
\frac{2(r-x)}{(r-l)(r-m)}& \text{for $m \leq x \leq r$},\\
0& \text{otherwise}.
\end{cases}
The triangular distribution is often used in ill-defined
problems where the underlying distribution is not known, but
some knowledge of the limits and mode exists. Often it is used
in simulations.
References
----------
.. [1] Wikipedia, "Triangular distribution"
https://en.wikipedia.org/wiki/Triangular_distribution
Examples
--------
Draw values from the distribution and plot the histogram:
>>> import matplotlib.pyplot as plt
>>> h = plt.hist(np.random.triangular(-3, 0, 8, 100000), bins=200,
... density=True)
>>> plt.show()
uniform(...) method of numpy.random.mtrand.RandomState instance
uniform(low=0.0, high=1.0, size=None)
Draw samples from a uniform distribution.
Samples are uniformly distributed over the half-open interval
``[low, high)`` (includes low, but excludes high). In other words,
any value within the given interval is equally likely to be drawn
by `uniform`.
.. note::
New code should use the ``uniform`` method of a ``default_rng()``
instance instead; see `random-quick-start`.
Parameters
----------
low : float or array_like of floats, optional
Lower boundary of the output interval. All values generated will be
greater than or equal to low. The default value is 0.
high : float or array_like of floats
Upper boundary of the output interval. All values generated will be
less than high. The default value is 1.0.
size : int or tuple of ints, optional
Output shape. If the given shape is, e.g., ``(m, n, k)``, then
``m * n * k`` samples are drawn. If size is ``None`` (default),
a single value is returned if ``low`` and ``high`` are both scalars.
Otherwise, ``np.broadcast(low, high).size`` samples are drawn.
Returns
-------
out : ndarray or scalar
Drawn samples from the parameterized uniform distribution.
See Also
--------
randint : Discrete uniform distribution, yielding integers.
random_integers : Discrete uniform distribution over the closed
interval ``[low, high]``.
random_sample : Floats uniformly distributed over ``[0, 1)``.
random : Alias for `random_sample`.
rand : Convenience function that accepts dimensions as input, e.g.,
``rand(2,2)`` would generate a 2-by-2 array of floats,
uniformly distributed over ``[0, 1)``.
Generator.uniform: which should be used for new code.
Notes
-----
The probability density function of the uniform distribution is
.. math:: p(x) = \frac{1}{b - a}
anywhere within the interval ``[a, b)``, and zero elsewhere.
When ``high`` == ``low``, values of ``low`` will be returned.
If ``high`` < ``low``, the results are officially undefined
and may eventually raise an error; do not rely on this
function to behave when the arguments violate ``low <= high``.
Examples
--------
Draw samples from the distribution:
>>> s = np.random.uniform(-1,0,1000)
All values are within the given interval:
>>> np.all(s >= -1)
True
>>> np.all(s < 0)
True
Display the histogram of the samples, along with the
probability density function:
>>> import matplotlib.pyplot as plt
>>> count, bins, ignored = plt.hist(s, 15, density=True)
>>> plt.plot(bins, np.ones_like(bins), linewidth=2, color='r')
>>> plt.show()
vonmises(...) method of numpy.random.mtrand.RandomState instance
vonmises(mu, kappa, size=None)
Draw samples from a von Mises distribution.
Samples are drawn from a von Mises distribution with specified mode
(mu) and dispersion (kappa), on the interval [-pi, pi].
The von Mises distribution (also known as the circular normal
distribution) is a continuous probability distribution on the unit
circle. It may be thought of as the circular analogue of the normal
distribution.
.. note::
New code should use the ``vonmises`` method of a ``default_rng()``
instance instead; see `random-quick-start`.
Parameters
----------
mu : float or array_like of floats
Mode ("center") of the distribution.
kappa : float or array_like of floats
Dispersion of the distribution, has to be >=0.
size : int or tuple of ints, optional
Output shape. If the given shape is, e.g., ``(m, n, k)``, then
``m * n * k`` samples are drawn. If size is ``None`` (default),
a single value is returned if ``mu`` and ``kappa`` are both scalars.
Otherwise, ``np.broadcast(mu, kappa).size`` samples are drawn.
Returns
-------
out : ndarray or scalar
Drawn samples from the parameterized von Mises distribution.
See Also
--------
scipy.stats.vonmises : probability density function, distribution, or
cumulative density function, etc.
Generator.vonmises: which should be used for new code.
Notes
-----
The probability density for the von Mises distribution is
.. math:: p(x) = \frac{e^{\kappa cos(x-\mu)}}{2\pi I_0(\kappa)},
where :math:`\mu` is the mode and :math:`\kappa` the dispersion,
and :math:`I_0(\kappa)` is the modified Bessel function of order 0.
The von Mises is named for Richard Edler von Mises, who was born in
Austria-Hungary, in what is now the Ukraine. He fled to the United
States in 1939 and became a professor at Harvard. He worked in
probability theory, aerodynamics, fluid mechanics, and philosophy of
science.
References
----------
.. [1] Abramowitz, M. and Stegun, I. A. (Eds.). "Handbook of
Mathematical Functions with Formulas, Graphs, and Mathematical
Tables, 9th printing," New York: Dover, 1972.
.. [2] von Mises, R., "Mathematical Theory of Probability
and Statistics", New York: Academic Press, 1964.
Examples
--------
Draw samples from the distribution:
>>> mu, kappa = 0.0, 4.0 # mean and dispersion
>>> s = np.random.vonmises(mu, kappa, 1000)
Display the histogram of the samples, along with
the probability density function:
>>> import matplotlib.pyplot as plt
>>> from scipy.special import i0 # doctest: +SKIP
>>> plt.hist(s, 50, density=True)
>>> x = np.linspace(-np.pi, np.pi, num=51)
>>> y = np.exp(kappa*np.cos(x-mu))/(2*np.pi*i0(kappa)) # doctest: +SKIP
>>> plt.plot(x, y, linewidth=2, color='r') # doctest: +SKIP
>>> plt.show()
wald(...) method of numpy.random.mtrand.RandomState instance
wald(mean, scale, size=None)
Draw samples from a Wald, or inverse Gaussian, distribution.
As the scale approaches infinity, the distribution becomes more like a
Gaussian. Some references claim that the Wald is an inverse Gaussian
with mean equal to 1, but this is by no means universal.
The inverse Gaussian distribution was first studied in relationship to
Brownian motion. In 1956 M.C.K. Tweedie used the name inverse Gaussian
because there is an inverse relationship between the time to cover a
unit distance and distance covered in unit time.
.. note::
New code should use the ``wald`` method of a ``default_rng()``
instance instead; see `random-quick-start`.
Parameters
----------
mean : float or array_like of floats
Distribution mean, must be > 0.
scale : float or array_like of floats
Scale parameter, must be > 0.
size : int or tuple of ints, optional
Output shape. If the given shape is, e.g., ``(m, n, k)``, then
``m * n * k`` samples are drawn. If size is ``None`` (default),
a single value is returned if ``mean`` and ``scale`` are both scalars.
Otherwise, ``np.broadcast(mean, scale).size`` samples are drawn.
Returns
-------
out : ndarray or scalar
Drawn samples from the parameterized Wald distribution.
See Also
--------
Generator.wald: which should be used for new code.
Notes
-----
The probability density function for the Wald distribution is
.. math:: P(x;mean,scale) = \sqrt{\frac{scale}{2\pi x^3}}e^
\frac{-scale(x-mean)^2}{2\cdotp mean^2x}
As noted above the inverse Gaussian distribution first arise
from attempts to model Brownian motion. It is also a
competitor to the Weibull for use in reliability modeling and
modeling stock returns and interest rate processes.
References
----------
.. [1] Brighton Webs Ltd., Wald Distribution,
https://web.archive.org/web/20090423014010/http://www.brighton-webs.co.uk:80/distributions/wald.asp
.. [2] Chhikara, Raj S., and Folks, J. Leroy, "The Inverse Gaussian
Distribution: Theory : Methodology, and Applications", CRC Press,
1988.
.. [3] Wikipedia, "Inverse Gaussian distribution"
https://en.wikipedia.org/wiki/Inverse_Gaussian_distribution
Examples
--------
Draw values from the distribution and plot the histogram:
>>> import matplotlib.pyplot as plt
>>> h = plt.hist(np.random.wald(3, 2, 100000), bins=200, density=True)
>>> plt.show()
weibull(...) method of numpy.random.mtrand.RandomState instance
weibull(a, size=None)
Draw samples from a Weibull distribution.
Draw samples from a 1-parameter Weibull distribution with the given
shape parameter `a`.
.. math:: X = (-ln(U))^{1/a}
Here, U is drawn from the uniform distribution over (0,1].
The more common 2-parameter Weibull, including a scale parameter
:math:`\lambda` is just :math:`X = \lambda(-ln(U))^{1/a}`.
.. note::
New code should use the ``weibull`` method of a ``default_rng()``
instance instead; see `random-quick-start`.
Parameters
----------
a : float or array_like of floats
Shape parameter of the distribution. Must be nonnegative.
size : int or tuple of ints, optional
Output shape. If the given shape is, e.g., ``(m, n, k)``, then
``m * n * k`` samples are drawn. If size is ``None`` (default),
a single value is returned if ``a`` is a scalar. Otherwise,
``np.array(a).size`` samples are drawn.
Returns
-------
out : ndarray or scalar
Drawn samples from the parameterized Weibull distribution.
See Also
--------
scipy.stats.weibull_max
scipy.stats.weibull_min
scipy.stats.genextreme
gumbel
Generator.weibull: which should be used for new code.
Notes
-----
The Weibull (or Type III asymptotic extreme value distribution
for smallest values, SEV Type III, or Rosin-Rammler
distribution) is one of a class of Generalized Extreme Value
(GEV) distributions used in modeling extreme value problems.
This class includes the Gumbel and Frechet distributions.
The probability density for the Weibull distribution is
.. math:: p(x) = \frac{a}
{\lambda}(\frac{x}{\lambda})^{a-1}e^{-(x/\lambda)^a},
where :math:`a` is the shape and :math:`\lambda` the scale.
The function has its peak (the mode) at
:math:`\lambda(\frac{a-1}{a})^{1/a}`.
When ``a = 1``, the Weibull distribution reduces to the exponential
distribution.
References
----------
.. [1] Waloddi Weibull, Royal Technical University, Stockholm,
1939 "A Statistical Theory Of The Strength Of Materials",
Ingeniorsvetenskapsakademiens Handlingar Nr 151, 1939,
Generalstabens Litografiska Anstalts Forlag, Stockholm.
.. [2] Waloddi Weibull, "A Statistical Distribution Function of
Wide Applicability", Journal Of Applied Mechanics ASME Paper
1951.
.. [3] Wikipedia, "Weibull distribution",
https://en.wikipedia.org/wiki/Weibull_distribution
Examples
--------
Draw samples from the distribution:
>>> a = 5. # shape
>>> s = np.random.weibull(a, 1000)
Display the histogram of the samples, along with
the probability density function:
>>> import matplotlib.pyplot as plt
>>> x = np.arange(1,100.)/50.
>>> def weib(x,n,a):
... return (a / n) * (x / n)**(a - 1) * np.exp(-(x / n)**a)
>>> count, bins, ignored = plt.hist(np.random.weibull(5.,1000))
>>> x = np.arange(1,100.)/50.
>>> scale = count.max()/weib(x, 1., 5.).max()
>>> plt.plot(x, weib(x, 1., 5.)*scale)
>>> plt.show()
zipf(...) method of numpy.random.mtrand.RandomState instance
zipf(a, size=None)
Draw samples from a Zipf distribution.
Samples are drawn from a Zipf distribution with specified parameter
`a` > 1.
The Zipf distribution (also known as the zeta distribution) is a
continuous probability distribution that satisfies Zipf's law: the
frequency of an item is inversely proportional to its rank in a
frequency table.
.. note::
New code should use the ``zipf`` method of a ``default_rng()``
instance instead; see `random-quick-start`.
Parameters
----------
a : float or array_like of floats
Distribution parameter. Must be greater than 1.
size : int or tuple of ints, optional
Output shape. If the given shape is, e.g., ``(m, n, k)``, then
``m * n * k`` samples are drawn. If size is ``None`` (default),
a single value is returned if ``a`` is a scalar. Otherwise,
``np.array(a).size`` samples are drawn.
Returns
-------
out : ndarray or scalar
Drawn samples from the parameterized Zipf distribution.
See Also
--------
scipy.stats.zipf : probability density function, distribution, or
cumulative density function, etc.
Generator.zipf: which should be used for new code.
Notes
-----
The probability density for the Zipf distribution is
.. math:: p(x) = \frac{x^{-a}}{\zeta(a)},
where :math:`\zeta` is the Riemann Zeta function.
It is named for the American linguist George Kingsley Zipf, who noted
that the frequency of any word in a sample of a language is inversely
proportional to its rank in the frequency table.
References
----------
.. [1] Zipf, G. K., "Selected Studies of the Principle of Relative
Frequency in Language," Cambridge, MA: Harvard Univ. Press,
1932.
Examples
--------
Draw samples from the distribution:
>>> a = 2. # parameter
>>> s = np.random.zipf(a, 1000)
Display the histogram of the samples, along with
the probability density function:
>>> import matplotlib.pyplot as plt
>>> from scipy import special # doctest: +SKIP
Truncate s values at 50 so plot is interesting:
>>> count, bins, ignored = plt.hist(s[s<50], 50, density=True)
>>> x = np.arange(1., 50.)
>>> y = x**(-a) / special.zetac(a) # doctest: +SKIP
>>> plt.plot(x, y/max(y), linewidth=2, color='r') # doctest: +SKIP
>>> plt.show()
DATA
__all__ = ['beta', 'binomial', 'bytes', 'chisquare', 'choice', 'dirich...
FILE
c:\programdata\anaconda3\lib\site-packages\numpy\random\__init__.py
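###Markdown
The help text above repeatedly notes that new code should draw samples from a Generator created with `np.random.default_rng()` rather than the legacy `np.random.*` functions. A minimal sketch (not part of the original notebook) of the same kinds of draws with that API:
###Code
import numpy as np

# A Generator from default_rng() is the recommended entry point for new code;
# the legacy np.random.* functions documented above keep working.
rng = np.random.default_rng(seed=42)

u = rng.uniform(-1, 0, 1000)   # same draw as np.random.uniform(-1, 0, 1000)
w = rng.weibull(5., 1000)      # Weibull with shape a = 5
z = rng.zipf(2., 1000)         # Zipf with parameter a = 2
###Output
_____no_output_____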
###Markdown
Simple scatter plots
###Code
N = 2000
random_x = np.random.randn(N)   # 2000 standard-normal x values
random_y = np.random.randn(N)   # 2000 standard-normal y values
trace = go.Scatter(x = random_x, y = random_y, mode = "markers")   # marker-only scatter trace
py.iplot([trace], filename = "basic-scatter")                      # render the figure inline
plot_url = py.plot([trace], filename = "basic-scatter-inline")     # also write the figure out and keep its URL/path
plot_url
###Output
_____no_output_____
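###Markdown
The `py.iplot`/`py.plot` helpers used above come from an import made earlier in the notebook (presumably `plotly.offline` or `chart_studio.plotly`; that import is not shown here). As a hedged alternative sketch, the same trace can be wrapped in a `go.Figure`, which in plotly 4+ renders itself with `fig.show()` and needs no plotting helper:
###Code
import numpy as np
import plotly.graph_objs as go

N = 2000
trace = go.Scatter(x=np.random.randn(N), y=np.random.randn(N), mode="markers")

# go.Figure wraps one or more traces plus an optional layout;
# fig.show() displays it in a notebook or browser.
fig = go.Figure(data=[trace])
fig.show()
###Output
_____no_output_____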
###Markdown
Combined charts
###Code
N = 200
rand_x = np.linspace(0,1, N)
rand_y0 = np.random.randn(N) + 3   # shifted up so the three traces do not overlap
rand_y1 = np.random.randn(N)
rand_y2 = np.random.randn(N) - 3   # shifted down
trace0 = go.Scatter(x = rand_x, y = rand_y0, mode="markers", name="Points")
trace1 = go.Scatter(x = rand_x, y = rand_y1, mode="lines", name="Lines")
trace2 = go.Scatter(x = rand_x, y = rand_y2, mode="lines+markers", name="Points and lines")
data = [trace0, trace1, trace2]
py.iplot(data, filename = "scatter-line-plot")
###Output
_____no_output_____
###Markdown
Styling charts
###Code
# Large, mostly opaque red markers with a dark outline
trace = go.Scatter(x = random_x, y = random_y, name = "Fancy styled points", mode="markers",
                   marker = dict(size = 12, color = "rgba(140,20,20,0.8)", line = dict(width=2, color="rgb(10,10,10)")))
layout = dict(title = "Styled Scatter Plot", xaxis = dict(zeroline = False), yaxis = dict(zeroline=False))
fig = dict(data = [trace], layout = layout)
py.iplot(fig)
# Smaller, mostly transparent blue markers, reusing the same layout
trace = go.Scatter(x = random_x, y = random_y, name = "Fancy styled points", mode="markers",
                   marker = dict(size = 8, color = "rgba(10,80,220,0.25)", line = dict(width=1, color="rgb(10,10,80)")))
fig = dict(data = [trace], layout = layout)
py.iplot(fig)
# The same sample shown as a histogram and as a box plot
trace = go.Histogram(x = random_x, name = "Fancy styled points")
fig = dict(data = [trace], layout = layout)
py.iplot(fig)
trace = go.Box(x = random_x, name = "Fancy styled points", fillcolor = "rgba(180,25,95,0.6)")
fig = dict(data = [trace], layout = layout)
py.iplot(fig, filename = "basic-scatter-inline")
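# Hedged sketch (not in the original notebook): go.Box also accepts
# precomputed statistics through the q1/median/q3 signature documented
# in the help text below, instead of raw samples passed in x or y.
box_stats = go.Box(name="Precomputed", q1=[1.0], median=[2.0], q3=[3.0],
                   lowerfence=[0.5], upperfence=[4.0])
py.iplot(dict(data=[box_stats], layout=layout))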
help(go.Box)
###Output
Help on class Box in module plotly.graph_objs._box:
class Box(plotly.basedatatypes.BaseTraceType)
| Box(arg=None, alignmentgroup=None, boxmean=None, boxpoints=None, customdata=None, customdatasrc=None, dx=None, dy=None, fillcolor=None, hoverinfo=None, hoverinfosrc=None, hoverlabel=None, hoveron=None, hovertemplate=None, hovertemplatesrc=None, hovertext=None, hovertextsrc=None, ids=None, idssrc=None, jitter=None, legendgroup=None, line=None, lowerfence=None, lowerfencesrc=None, marker=None, mean=None, meansrc=None, median=None, mediansrc=None, meta=None, metasrc=None, name=None, notched=None, notchspan=None, notchspansrc=None, notchwidth=None, offsetgroup=None, opacity=None, orientation=None, pointpos=None, q1=None, q1src=None, q3=None, q3src=None, quartilemethod=None, sd=None, sdsrc=None, selected=None, selectedpoints=None, showlegend=None, stream=None, text=None, textsrc=None, uid=None, uirevision=None, unselected=None, upperfence=None, upperfencesrc=None, visible=None, whiskerwidth=None, width=None, x=None, x0=None, xaxis=None, xcalendar=None, xperiod=None, xperiod0=None, xperiodalignment=None, xsrc=None, y=None, y0=None, yaxis=None, ycalendar=None, yperiod=None, yperiod0=None, yperiodalignment=None, ysrc=None, **kwargs)
|
| Base class for the all trace types.
|
| Specific trace type classes (Scatter, Bar, etc.) are code generated as
| subclasses of this class.
|
| Method resolution order:
| Box
| plotly.basedatatypes.BaseTraceType
| plotly.basedatatypes.BaseTraceHierarchyType
| plotly.basedatatypes.BasePlotlyType
| builtins.object
|
| Methods defined here:
|
| __init__(self, arg=None, alignmentgroup=None, boxmean=None, boxpoints=None, customdata=None, customdatasrc=None, dx=None, dy=None, fillcolor=None, hoverinfo=None, hoverinfosrc=None, hoverlabel=None, hoveron=None, hovertemplate=None, hovertemplatesrc=None, hovertext=None, hovertextsrc=None, ids=None, idssrc=None, jitter=None, legendgroup=None, line=None, lowerfence=None, lowerfencesrc=None, marker=None, mean=None, meansrc=None, median=None, mediansrc=None, meta=None, metasrc=None, name=None, notched=None, notchspan=None, notchspansrc=None, notchwidth=None, offsetgroup=None, opacity=None, orientation=None, pointpos=None, q1=None, q1src=None, q3=None, q3src=None, quartilemethod=None, sd=None, sdsrc=None, selected=None, selectedpoints=None, showlegend=None, stream=None, text=None, textsrc=None, uid=None, uirevision=None, unselected=None, upperfence=None, upperfencesrc=None, visible=None, whiskerwidth=None, width=None, x=None, x0=None, xaxis=None, xcalendar=None, xperiod=None, xperiod0=None, xperiodalignment=None, xsrc=None, y=None, y0=None, yaxis=None, ycalendar=None, yperiod=None, yperiod0=None, yperiodalignment=None, ysrc=None, **kwargs)
| Construct a new Box object
|
| Each box spans from quartile 1 (Q1) to quartile 3 (Q3). The
| second quartile (Q2, i.e. the median) is marked by a line
| inside the box. The fences grow outward from the boxes' edges,
| by default they span +/- 1.5 times the interquartile range
| (IQR: Q3-Q1), The sample mean and standard deviation as well as
| notches and the sample, outlier and suspected outliers points
| can be optionally added to the box plot. The values and
| positions corresponding to each boxes can be input using two
| signatures. The first signature expects users to supply the
| sample values in the `y` data array for vertical boxes (`x` for
| horizontal boxes). By supplying an `x` (`y`) array, one box per
| distinct `x` (`y`) value is drawn If no `x` (`y`) list is
| provided, a single box is drawn. In this case, the box is
| positioned with the trace `name` or with `x0` (`y0`) if
| provided. The second signature expects users to supply the
| boxes corresponding Q1, median and Q3 statistics in the `q1`,
| `median` and `q3` data arrays respectively. Other box features
| relying on statistics namely `lowerfence`, `upperfence`,
| `notchspan` can be set directly by the users. To have plotly
| compute them or to show sample points besides the boxes, users
| can set the `y` data array for vertical boxes (`x` for
| horizontal boxes) to a 2D array with the outer length
| corresponding to the number of boxes in the traces and the
| inner length corresponding the sample size.
|
| Parameters
| ----------
| arg
| dict of properties compatible with this constructor or
| an instance of :class:`plotly.graph_objs.Box`
| alignmentgroup
| Set several traces linked to the same position axis or
| matching axes to the same alignmentgroup. This controls
| whether bars compute their positional range dependently
| or independently.
| boxmean
| If True, the mean of the box(es)' underlying
| distribution is drawn as a dashed line inside the
| box(es). If "sd" the standard deviation is also drawn.
| Defaults to True when `mean` is set. Defaults to "sd"
| when `sd` is set Otherwise defaults to False.
| boxpoints
| If "outliers", only the sample points lying outside the
| whiskers are shown If "suspectedoutliers", the outlier
| points are shown and points either less than 4*Q1-3*Q3
| or greater than 4*Q3-3*Q1 are highlighted (see
| `outliercolor`) If "all", all sample points are shown
| If False, only the box(es) are shown with no sample
| points Defaults to "suspectedoutliers" when
| `marker.outliercolor` or `marker.line.outliercolor` is
| set. Defaults to "all" under the q1/median/q3
| signature. Otherwise defaults to "outliers".
| customdata
| Assigns extra data each datum. This may be useful when
| listening to hover, click and selection events. Note
| that, "scatter" traces also appends customdata items in
| the markers DOM elements
| customdatasrc
| Sets the source reference on Chart Studio Cloud for
| customdata .
| dx
| Sets the x coordinate step for multi-box traces set
| using q1/median/q3.
| dy
| Sets the y coordinate step for multi-box traces set
| using q1/median/q3.
| fillcolor
| Sets the fill color. Defaults to a half-transparent
| variant of the line color, marker color, or marker line
| color, whichever is available.
| hoverinfo
| Determines which trace information appear on hover. If
| `none` or `skip` are set, no information is displayed
| upon hovering. But, if `none` is set, click and hover
| events are still fired.
| hoverinfosrc
| Sets the source reference on Chart Studio Cloud for
| hoverinfo .
| hoverlabel
| :class:`plotly.graph_objects.box.Hoverlabel` instance
| or dict with compatible properties
| hoveron
| Do the hover effects highlight individual boxes or
| sample points or both?
| hovertemplate
| Template string used for rendering the information that
| appear on hover box. Note that this will override
| `hoverinfo`. Variables are inserted using %{variable},
| for example "y: %{y}". Numbers are formatted using
| d3-format's syntax %{variable:d3-format}, for example
| "Price: %{y:$.2f}". https://github.com/d3/d3-3.x-api-
| reference/blob/master/Formatting.md#d3_format for
| details on the formatting syntax. Dates are formatted
| using d3-time-format's syntax %{variable|d3-time-
| format}, for example "Day: %{2019-01-01|%A}".
| https://github.com/d3/d3-time-format#locale_format for
| details on the date formatting syntax. The variables
| available in `hovertemplate` are the ones emitted as
| event data described at this link
| https://plotly.com/javascript/plotlyjs-events/#event-
| data. Additionally, every attributes that can be
| specified per-point (the ones that are `arrayOk: true`)
| are available. Anything contained in tag `<extra>` is
| displayed in the secondary box, for example
| "<extra>{fullData.name}</extra>". To hide the secondary
| box completely, use an empty tag `<extra></extra>`.
| hovertemplatesrc
| Sets the source reference on Chart Studio Cloud for
| hovertemplate .
| hovertext
| Same as `text`.
| hovertextsrc
| Sets the source reference on Chart Studio Cloud for
| hovertext .
| ids
| Assigns id labels to each datum. These ids for object
| constancy of data points during animation. Should be an
| array of strings, not numbers or any other type.
| idssrc
| Sets the source reference on Chart Studio Cloud for
| ids .
| jitter
| Sets the amount of jitter in the sample points drawn.
| If 0, the sample points align along the distribution
| axis. If 1, the sample points are drawn in a random
| jitter of width equal to the width of the box(es).
| legendgroup
| Sets the legend group for this trace. Traces part of
| the same legend group hide/show at the same time when
| toggling legend items.
| line
| :class:`plotly.graph_objects.box.Line` instance or dict
| with compatible properties
| lowerfence
| Sets the lower fence values. There should be as many
| items as the number of boxes desired. This attribute
| has effect only under the q1/median/q3 signature. If
| `lowerfence` is not provided but a sample (in `y` or
| `x`) is set, we compute the lower as the last sample
| point below 1.5 times the IQR.
| lowerfencesrc
| Sets the source reference on Chart Studio Cloud for
| lowerfence .
| marker
| :class:`plotly.graph_objects.box.Marker` instance or
| dict with compatible properties
| mean
| Sets the mean values. There should be as many items as
| the number of boxes desired. This attribute has effect
| only under the q1/median/q3 signature. If `mean` is not
| provided but a sample (in `y` or `x`) is set, we
| compute the mean for each box using the sample values.
| meansrc
| Sets the source reference on Chart Studio Cloud for
| mean .
| median
| Sets the median values. There should be as many items
| as the number of boxes desired.
| mediansrc
| Sets the source reference on Chart Studio Cloud for
| median .
| meta
| Assigns extra meta information associated with this
| trace that can be used in various text attributes.
| Attributes such as trace `name`, graph, axis and
| colorbar `title.text`, annotation `text`
| `rangeselector`, `updatemenues` and `sliders` `label`
| text all support `meta`. To access the trace `meta`
| values in an attribute in the same trace, simply use
| `%{meta[i]}` where `i` is the index or key of the
| `meta` item in question. To access trace `meta` in
| layout attributes, use `%{data[n[.meta[i]}` where `i`
| is the index or key of the `meta` and `n` is the trace
| index.
| metasrc
| Sets the source reference on Chart Studio Cloud for
| meta .
| name
| Sets the trace name. The trace name appear as the
| legend item and on hover. For box traces, the name will
| also be used for the position coordinate, if `x` and
| `x0` (`y` and `y0` if horizontal) are missing and the
| position axis is categorical
| notched
| Determines whether or not notches are drawn. Notches
| displays a confidence interval around the median. We
| compute the confidence interval as median +/- 1.57 *
| IQR / sqrt(N), where IQR is the interquartile range and
| N is the sample size. If two boxes' notches do not
| overlap there is 95% confidence their medians differ.
| See https://sites.google.com/site/davidsstatistics/home
| /notched-box-plots for more info. Defaults to False
| unless `notchwidth` or `notchspan` is set.
| notchspan
| Sets the notch span from the boxes' `median` values.
| There should be as many items as the number of boxes
| desired. This attribute has effect only under the
| q1/median/q3 signature. If `notchspan` is not provided
| but a sample (in `y` or `x`) is set, we compute it as
| 1.57 * IQR / sqrt(N), where N is the sample size.
| notchspansrc
| Sets the source reference on Chart Studio Cloud for
| notchspan .
| notchwidth
| Sets the width of the notches relative to the box'
| width. For example, with 0, the notches are as wide as
| the box(es).
| offsetgroup
| Set several traces linked to the same position axis or
| matching axes to the same offsetgroup where bars of the
| same position coordinate will line up.
| opacity
| Sets the opacity of the trace.
| orientation
| Sets the orientation of the box(es). If "v" ("h"), the
| distribution is visualized along the vertical
| (horizontal).
| pointpos
| Sets the position of the sample points in relation to
| the box(es). If 0, the sample points are places over
| the center of the box(es). Positive (negative) values
| correspond to positions to the right (left) for
| vertical boxes and above (below) for horizontal boxes
| q1
| Sets the Quartile 1 values. There should be as many
| items as the number of boxes desired.
| q1src
| Sets the source reference on Chart Studio Cloud for q1
| .
| q3
| Sets the Quartile 3 values. There should be as many
| items as the number of boxes desired.
| q3src
| Sets the source reference on Chart Studio Cloud for q3
| .
| quartilemethod
| Sets the method used to compute the sample's Q1 and Q3
| quartiles. The "linear" method uses the 25th percentile
| for Q1 and 75th percentile for Q3 as computed using
| method #10 (listed on http://www.amstat.org/publication
| s/jse/v14n3/langford.html). The "exclusive" method uses
| the median to divide the ordered dataset into two
| halves if the sample is odd, it does not include the
| median in either half - Q1 is then the median of the
| lower half and Q3 the median of the upper half. The
| "inclusive" method also uses the median to divide the
| ordered dataset into two halves but if the sample is
| odd, it includes the median in both halves - Q1 is then
| the median of the lower half and Q3 the median of the
| upper half.
| sd
| Sets the standard deviation values. There should be as
| many items as the number of boxes desired. This
| attribute has effect only under the q1/median/q3
| signature. If `sd` is not provided but a sample (in `y`
| or `x`) is set, we compute the standard deviation for
| each box using the sample values.
| sdsrc
| Sets the source reference on Chart Studio Cloud for sd
| .
| selected
| :class:`plotly.graph_objects.box.Selected` instance or
| dict with compatible properties
| selectedpoints
| Array containing integer indices of selected points.
| Has an effect only for traces that support selections.
| Note that an empty array means an empty selection where
| the `unselected` are turned on for all points, whereas,
| any other non-array values means no selection all where
| the `selected` and `unselected` styles have no effect.
| showlegend
| Determines whether or not an item corresponding to this
| trace is shown in the legend.
| stream
| :class:`plotly.graph_objects.box.Stream` instance or
| dict with compatible properties
| text
| Sets the text elements associated with each sample
| value. If a single string, the same string appears over
| all the data points. If an array of string, the items
| are mapped in order to the this trace's (x,y)
| coordinates. To be seen, trace `hoverinfo` must contain
| a "text" flag.
| textsrc
| Sets the source reference on Chart Studio Cloud for
| text .
| uid
| Assign an id to this trace, Use this to provide object
| constancy between traces during animations and
| transitions.
| uirevision
| Controls persistence of some user-driven changes to the
| trace: `constraintrange` in `parcoords` traces, as well
| as some `editable: true` modifications such as `name`
| and `colorbar.title`. Defaults to `layout.uirevision`.
| Note that other user-driven trace attribute changes are
| controlled by `layout` attributes: `trace.visible` is
| controlled by `layout.legend.uirevision`,
| `selectedpoints` is controlled by
| `layout.selectionrevision`, and `colorbar.(x|y)`
| (accessible with `config: {editable: true}`) is
| controlled by `layout.editrevision`. Trace changes are
| tracked by `uid`, which only falls back on trace index
| if no `uid` is provided. So if your app can add/remove
| traces before the end of the `data` array, such that
| the same trace has a different index, you can still
| preserve user-driven changes if you give each trace a
| `uid` that stays with it as it moves.
| unselected
| :class:`plotly.graph_objects.box.Unselected` instance
| or dict with compatible properties
| upperfence
| Sets the upper fence values. There should be as many
| items as the number of boxes desired. This attribute
| has effect only under the q1/median/q3 signature. If
| `upperfence` is not provided but a sample (in `y` or
| `x`) is set, we compute the lower as the last sample
| point above 1.5 times the IQR.
| upperfencesrc
| Sets the source reference on Chart Studio Cloud for
| upperfence .
| visible
| Determines whether or not this trace is visible. If
| "legendonly", the trace is not drawn, but can appear as
| a legend item (provided that the legend itself is
| visible).
| whiskerwidth
| Sets the width of the whiskers relative to the box'
| width. For example, with 1, the whiskers are as wide as
| the box(es).
| width
| Sets the width of the box in data coordinate If 0
| (default value) the width is automatically selected
| based on the positions of other box traces in the same
| subplot.
| x
| Sets the x sample data or coordinates. See overview for
| more info.
| x0
| Sets the x coordinate for single-box traces or the
| starting coordinate for multi-box traces set using
| q1/median/q3. See overview for more info.
| xaxis
| Sets a reference between this trace's x coordinates and
| a 2D cartesian x axis. If "x" (the default value), the
| x coordinates refer to `layout.xaxis`. If "x2", the x
| coordinates refer to `layout.xaxis2`, and so on.
| xcalendar
| Sets the calendar system to use with `x` date data.
| xperiod
| Only relevant when the axis `type` is "date". Sets the
| period positioning in milliseconds or "M<n>" on the x
| axis. Special values in the form of "M<n>" could be
| used to declare the number of months. In this case `n`
| must be a positive integer.
| xperiod0
| Only relevant when the axis `type` is "date". Sets the
| base for period positioning in milliseconds or date
| string on the x0 axis. When `x0period` is round number
| of weeks, the `x0period0` by default would be on a
| Sunday i.e. 2000-01-02, otherwise it would be at
| 2000-01-01.
| xperiodalignment
| Only relevant when the axis `type` is "date". Sets the
| alignment of data points on the x axis.
| xsrc
| Sets the source reference on Chart Studio Cloud for x
| .
| y
| Sets the y sample data or coordinates. See overview for
| more info.
| y0
| Sets the y coordinate for single-box traces or the
| starting coordinate for multi-box traces set using
| q1/median/q3. See overview for more info.
| yaxis
| Sets a reference between this trace's y coordinates and
| a 2D cartesian y axis. If "y" (the default value), the
| y coordinates refer to `layout.yaxis`. If "y2", the y
| coordinates refer to `layout.yaxis2`, and so on.
| ycalendar
| Sets the calendar system to use with `y` date data.
| yperiod
| Only relevant when the axis `type` is "date". Sets the
| period positioning in milliseconds or "M<n>" on the y
| axis. Special values in the form of "M<n>" could be
| used to declare the number of months. In this case `n`
| must be a positive integer.
| yperiod0
| Only relevant when the axis `type` is "date". Sets the
| base for period positioning in milliseconds or date
| string on the y0 axis. When `y0period` is round number
| of weeks, the `y0period0` by default would be on a
| Sunday i.e. 2000-01-02, otherwise it would be at
| 2000-01-01.
| yperiodalignment
| Only relevant when the axis `type` is "date". Sets the
| alignment of data points on the y axis.
| ysrc
| Sets the source reference on Chart Studio Cloud for y
| .
|
| Returns
| -------
| Box
|
| ----------------------------------------------------------------------
| Readonly properties defined here:
|
| type
|
| ----------------------------------------------------------------------
| Data descriptors defined here:
|
| alignmentgroup
| Set several traces linked to the same position axis or matching
| axes to the same alignmentgroup. This controls whether bars
| compute their positional range dependently or independently.
|
| The 'alignmentgroup' property is a string and must be specified as:
| - A string
| - A number that will be converted to a string
|
| Returns
| -------
| str
|
| boxmean
| If True, the mean of the box(es)' underlying distribution is
| drawn as a dashed line inside the box(es). If "sd" the standard
| deviation is also drawn. Defaults to True when `mean` is set.
| Defaults to "sd" when `sd` is set Otherwise defaults to False.
|
| The 'boxmean' property is an enumeration that may be specified as:
| - One of the following enumeration values:
| [True, 'sd', False]
|
| Returns
| -------
| Any
|
| boxpoints
| If "outliers", only the sample points lying outside the
| whiskers are shown If "suspectedoutliers", the outlier points
| are shown and points either less than 4*Q1-3*Q3 or greater than
| 4*Q3-3*Q1 are highlighted (see `outliercolor`) If "all", all
| sample points are shown If False, only the box(es) are shown
| with no sample points Defaults to "suspectedoutliers" when
| `marker.outliercolor` or `marker.line.outliercolor` is set.
| Defaults to "all" under the q1/median/q3 signature. Otherwise
| defaults to "outliers".
|
| The 'boxpoints' property is an enumeration that may be specified as:
| - One of the following enumeration values:
| ['all', 'outliers', 'suspectedoutliers', False]
|
| Returns
| -------
| Any
|
| customdata
| Assigns extra data each datum. This may be useful when
| listening to hover, click and selection events. Note that,
| "scatter" traces also appends customdata items in the markers
| DOM elements
|
| The 'customdata' property is an array that may be specified as a tuple,
| list, numpy array, or pandas Series
|
| Returns
| -------
| numpy.ndarray
|
| customdatasrc
| Sets the source reference on Chart Studio Cloud for customdata
| .
|
| The 'customdatasrc' property must be specified as a string or
| as a plotly.grid_objs.Column object
|
| Returns
| -------
| str
|
| dx
| Sets the x coordinate step for multi-box traces set using
| q1/median/q3.
|
| The 'dx' property is a number and may be specified as:
| - An int or float
|
| Returns
| -------
| int|float
|
| dy
| Sets the y coordinate step for multi-box traces set using
| q1/median/q3.
|
| The 'dy' property is a number and may be specified as:
| - An int or float
|
| Returns
| -------
| int|float
|
| fillcolor
| Sets the fill color. Defaults to a half-transparent variant of
| the line color, marker color, or marker line color, whichever
| is available.
|
| The 'fillcolor' property is a color and may be specified as:
| - A hex string (e.g. '#ff0000')
| - An rgb/rgba string (e.g. 'rgb(255,0,0)')
| - An hsl/hsla string (e.g. 'hsl(0,100%,50%)')
| - An hsv/hsva string (e.g. 'hsv(0,100%,100%)')
| - A named CSS color:
| aliceblue, antiquewhite, aqua, aquamarine, azure,
| beige, bisque, black, blanchedalmond, blue,
| blueviolet, brown, burlywood, cadetblue,
| chartreuse, chocolate, coral, cornflowerblue,
| cornsilk, crimson, cyan, darkblue, darkcyan,
| darkgoldenrod, darkgray, darkgrey, darkgreen,
| darkkhaki, darkmagenta, darkolivegreen, darkorange,
| darkorchid, darkred, darksalmon, darkseagreen,
| darkslateblue, darkslategray, darkslategrey,
| darkturquoise, darkviolet, deeppink, deepskyblue,
| dimgray, dimgrey, dodgerblue, firebrick,
| floralwhite, forestgreen, fuchsia, gainsboro,
| ghostwhite, gold, goldenrod, gray, grey, green,
| greenyellow, honeydew, hotpink, indianred, indigo,
| ivory, khaki, lavender, lavenderblush, lawngreen,
| lemonchiffon, lightblue, lightcoral, lightcyan,
| lightgoldenrodyellow, lightgray, lightgrey,
| lightgreen, lightpink, lightsalmon, lightseagreen,
| lightskyblue, lightslategray, lightslategrey,
| lightsteelblue, lightyellow, lime, limegreen,
| linen, magenta, maroon, mediumaquamarine,
| mediumblue, mediumorchid, mediumpurple,
| mediumseagreen, mediumslateblue, mediumspringgreen,
| mediumturquoise, mediumvioletred, midnightblue,
| mintcream, mistyrose, moccasin, navajowhite, navy,
| oldlace, olive, olivedrab, orange, orangered,
| orchid, palegoldenrod, palegreen, paleturquoise,
| palevioletred, papayawhip, peachpuff, peru, pink,
| plum, powderblue, purple, red, rosybrown,
| royalblue, rebeccapurple, saddlebrown, salmon,
| sandybrown, seagreen, seashell, sienna, silver,
| skyblue, slateblue, slategray, slategrey, snow,
| springgreen, steelblue, tan, teal, thistle, tomato,
| turquoise, violet, wheat, white, whitesmoke,
| yellow, yellowgreen
|
| Returns
| -------
| str
|
| hoverinfo
| Determines which trace information appear on hover. If `none`
| or `skip` are set, no information is displayed upon hovering.
| But, if `none` is set, click and hover events are still fired.
|
| The 'hoverinfo' property is a flaglist and may be specified
| as a string containing:
| - Any combination of ['x', 'y', 'z', 'text', 'name'] joined with '+' characters
| (e.g. 'x+y')
| OR exactly one of ['all', 'none', 'skip'] (e.g. 'skip')
| - A list or array of the above
|
| Returns
| -------
| Any|numpy.ndarray
|
| hoverinfosrc
| Sets the source reference on Chart Studio Cloud for hoverinfo
| .
|
| The 'hoverinfosrc' property must be specified as a string or
| as a plotly.grid_objs.Column object
|
| Returns
| -------
| str
|
| hoverlabel
| The 'hoverlabel' property is an instance of Hoverlabel
| that may be specified as:
| - An instance of :class:`plotly.graph_objs.box.Hoverlabel`
| - A dict of string/value properties that will be passed
| to the Hoverlabel constructor
|
| Supported dict properties:
|
| align
| Sets the horizontal alignment of the text
| content within hover label box. Has an effect
| only if the hover label text spans more two or
| more lines
| alignsrc
| Sets the source reference on Chart Studio Cloud
| for align .
| bgcolor
| Sets the background color of the hover labels
| for this trace
| bgcolorsrc
| Sets the source reference on Chart Studio Cloud
| for bgcolor .
| bordercolor
| Sets the border color of the hover labels for
| this trace.
| bordercolorsrc
| Sets the source reference on Chart Studio Cloud
| for bordercolor .
| font
| Sets the font used in hover labels.
| namelength
| Sets the default length (in number of
| characters) of the trace name in the hover
| labels for all traces. -1 shows the whole name
| regardless of length. 0-3 shows the first 0-3
| characters, and an integer >3 will show the
| whole name if it is less than that many
| characters, but if it is longer, will truncate
| to `namelength - 3` characters and add an
| ellipsis.
| namelengthsrc
| Sets the source reference on Chart Studio Cloud
| for namelength .
|
| Returns
| -------
| plotly.graph_objs.box.Hoverlabel
|
| hoveron
| Do the hover effects highlight individual boxes or sample
| points or both?
|
| The 'hoveron' property is a flaglist and may be specified
| as a string containing:
| - Any combination of ['boxes', 'points'] joined with '+' characters
| (e.g. 'boxes+points')
|
| Returns
| -------
| Any
|
| hovertemplate
| Template string used for rendering the information that appear
| on hover box. Note that this will override `hoverinfo`.
| Variables are inserted using %{variable}, for example "y:
| %{y}". Numbers are formatted using d3-format's syntax
| %{variable:d3-format}, for example "Price: %{y:$.2f}".
| https://github.com/d3/d3-3.x-api-
| reference/blob/master/Formatting.md#d3_format for details on
| the formatting syntax. Dates are formatted using d3-time-
| format's syntax %{variable|d3-time-format}, for example "Day:
| %{2019-01-01|%A}". https://github.com/d3/d3-time-
| format#locale_format for details on the date formatting syntax.
| The variables available in `hovertemplate` are the ones emitted
| as event data described at this link
| https://plotly.com/javascript/plotlyjs-events/#event-data.
| Additionally, every attributes that can be specified per-point
| (the ones that are `arrayOk: true`) are available. Anything
| contained in tag `<extra>` is displayed in the secondary box,
| for example "<extra>{fullData.name}</extra>". To hide the
| secondary box completely, use an empty tag `<extra></extra>`.
|
| The 'hovertemplate' property is a string and must be specified as:
| - A string
| - A number that will be converted to a string
| - A tuple, list, or one-dimensional numpy array of the above
|
| Returns
| -------
| str|numpy.ndarray
|
| hovertemplatesrc
| Sets the source reference on Chart Studio Cloud for
| hovertemplate .
|
| The 'hovertemplatesrc' property must be specified as a string or
| as a plotly.grid_objs.Column object
|
| Returns
| -------
| str
|
| hovertext
| Same as `text`.
|
| The 'hovertext' property is a string and must be specified as:
| - A string
| - A number that will be converted to a string
| - A tuple, list, or one-dimensional numpy array of the above
|
| Returns
| -------
| str|numpy.ndarray
|
| hovertextsrc
| Sets the source reference on Chart Studio Cloud for hovertext
| .
|
| The 'hovertextsrc' property must be specified as a string or
| as a plotly.grid_objs.Column object
|
| Returns
| -------
| str
|
| ids
| Assigns id labels to each datum. These ids for object constancy
| of data points during animation. Should be an array of strings,
| not numbers or any other type.
|
| The 'ids' property is an array that may be specified as a tuple,
| list, numpy array, or pandas Series
|
| Returns
| -------
| numpy.ndarray
|
| idssrc
| Sets the source reference on Chart Studio Cloud for ids .
|
| The 'idssrc' property must be specified as a string or
| as a plotly.grid_objs.Column object
|
| Returns
| -------
| str
|
| jitter
| Sets the amount of jitter in the sample points drawn. If 0, the
| sample points align along the distribution axis. If 1, the
| sample points are drawn in a random jitter of width equal to
| the width of the box(es).
|
| The 'jitter' property is a number and may be specified as:
| - An int or float in the interval [0, 1]
|
| Returns
| -------
| int|float
|
| legendgroup
| Sets the legend group for this trace. Traces part of the same
| legend group hide/show at the same time when toggling legend
| items.
|
| The 'legendgroup' property is a string and must be specified as:
| - A string
| - A number that will be converted to a string
|
| Returns
| -------
| str
|
| line
| The 'line' property is an instance of Line
| that may be specified as:
| - An instance of :class:`plotly.graph_objs.box.Line`
| - A dict of string/value properties that will be passed
| to the Line constructor
|
| Supported dict properties:
|
| color
| Sets the color of line bounding the box(es).
| width
| Sets the width (in px) of line bounding the
| box(es).
|
| Returns
| -------
| plotly.graph_objs.box.Line
|
| lowerfence
| Sets the lower fence values. There should be as many items as
| the number of boxes desired. This attribute has effect only
| under the q1/median/q3 signature. If `lowerfence` is not
| provided but a sample (in `y` or `x`) is set, we compute the
| lower as the last sample point below 1.5 times the IQR.
|
| The 'lowerfence' property is an array that may be specified as a tuple,
| list, numpy array, or pandas Series
|
| Returns
| -------
| numpy.ndarray
|
| lowerfencesrc
| Sets the source reference on Chart Studio Cloud for lowerfence
| .
|
| The 'lowerfencesrc' property must be specified as a string or
| as a plotly.grid_objs.Column object
|
| Returns
| -------
| str
|
| marker
| The 'marker' property is an instance of Marker
| that may be specified as:
| - An instance of :class:`plotly.graph_objs.box.Marker`
| - A dict of string/value properties that will be passed
| to the Marker constructor
|
| Supported dict properties:
|
| color
| Sets themarkercolor. It accepts either a
| specific color or an array of numbers that are
| mapped to the colorscale relative to the max
| and min values of the array or relative to
| `marker.cmin` and `marker.cmax` if set.
| line
| :class:`plotly.graph_objects.box.marker.Line`
| instance or dict with compatible properties
| opacity
| Sets the marker opacity.
| outliercolor
| Sets the color of the outlier sample points.
| size
| Sets the marker size (in px).
| symbol
| Sets the marker symbol type. Adding 100 is
| equivalent to appending "-open" to a symbol
| name. Adding 200 is equivalent to appending
| "-dot" to a symbol name. Adding 300 is
| equivalent to appending "-open-dot" or "dot-
| open" to a symbol name.
|
| Returns
| -------
| plotly.graph_objs.box.Marker
|
| mean
| Sets the mean values. There should be as many items as the
| number of boxes desired. This attribute has effect only under
| the q1/median/q3 signature. If `mean` is not provided but a
| sample (in `y` or `x`) is set, we compute the mean for each box
| using the sample values.
|
| The 'mean' property is an array that may be specified as a tuple,
| list, numpy array, or pandas Series
|
| Returns
| -------
| numpy.ndarray
|
| meansrc
| Sets the source reference on Chart Studio Cloud for mean .
|
| The 'meansrc' property must be specified as a string or
| as a plotly.grid_objs.Column object
|
| Returns
| -------
| str
|
| median
| Sets the median values. There should be as many items as the
| number of boxes desired.
|
| The 'median' property is an array that may be specified as a tuple,
| list, numpy array, or pandas Series
|
| Returns
| -------
| numpy.ndarray
|
| mediansrc
| Sets the source reference on Chart Studio Cloud for median .
|
| The 'mediansrc' property must be specified as a string or
| as a plotly.grid_objs.Column object
|
| Returns
| -------
| str
|
| meta
| Assigns extra meta information associated with this trace that
| can be used in various text attributes. Attributes such as
| trace `name`, graph, axis and colorbar `title.text`, annotation
| `text` `rangeselector`, `updatemenues` and `sliders` `label`
| text all support `meta`. To access the trace `meta` values in
| an attribute in the same trace, simply use `%{meta[i]}` where
| `i` is the index or key of the `meta` item in question. To
| access trace `meta` in layout attributes, use
| `%{data[n[.meta[i]}` where `i` is the index or key of the
| `meta` and `n` is the trace index.
|
| The 'meta' property accepts values of any type
|
| Returns
| -------
| Any|numpy.ndarray
|
| metasrc
| Sets the source reference on Chart Studio Cloud for meta .
|
| The 'metasrc' property must be specified as a string or
| as a plotly.grid_objs.Column object
|
| Returns
| -------
| str
|
| name
| Sets the trace name. The trace name appear as the legend item
| and on hover. For box traces, the name will also be used for
| the position coordinate, if `x` and `x0` (`y` and `y0` if
| horizontal) are missing and the position axis is categorical
|
| The 'name' property is a string and must be specified as:
| - A string
| - A number that will be converted to a string
|
| Returns
| -------
| str
|
| notched
| Determines whether or not notches are drawn. Notches displays a
| confidence interval around the median. We compute the
| confidence interval as median +/- 1.57 * IQR / sqrt(N), where
| IQR is the interquartile range and N is the sample size. If two
| boxes' notches do not overlap there is 95% confidence their
| medians differ. See
| https://sites.google.com/site/davidsstatistics/home/notched-
| box-plots for more info. Defaults to False unless `notchwidth`
| or `notchspan` is set.
|
| The 'notched' property must be specified as a bool
| (either True, or False)
|
| Returns
| -------
| bool
|
| notchspan
| Sets the notch span from the boxes' `median` values. There
| should be as many items as the number of boxes desired. This
| attribute has effect only under the q1/median/q3 signature. If
| `notchspan` is not provided but a sample (in `y` or `x`) is
| set, we compute it as 1.57 * IQR / sqrt(N), where N is the
| sample size.
|
| The 'notchspan' property is an array that may be specified as a tuple,
| list, numpy array, or pandas Series
|
| Returns
| -------
| numpy.ndarray
|
| notchspansrc
| Sets the source reference on Chart Studio Cloud for notchspan
| .
|
| The 'notchspansrc' property must be specified as a string or
| as a plotly.grid_objs.Column object
|
| Returns
| -------
| str
|
| notchwidth
| Sets the width of the notches relative to the box' width. For
| example, with 0, the notches are as wide as the box(es).
|
| The 'notchwidth' property is a number and may be specified as:
| - An int or float in the interval [0, 0.5]
|
| Returns
| -------
| int|float
|
| offsetgroup
| Set several traces linked to the same position axis or matching
| axes to the same offsetgroup where bars of the same position
| coordinate will line up.
|
| The 'offsetgroup' property is a string and must be specified as:
| - A string
| - A number that will be converted to a string
|
| Returns
| -------
| str
|
| opacity
| Sets the opacity of the trace.
|
| The 'opacity' property is a number and may be specified as:
| - An int or float in the interval [0, 1]
|
| Returns
| -------
| int|float
|
| orientation
| Sets the orientation of the box(es). If "v" ("h"), the
| distribution is visualized along the vertical (horizontal).
|
| The 'orientation' property is an enumeration that may be specified as:
| - One of the following enumeration values:
| ['v', 'h']
|
| Returns
| -------
| Any
|
| pointpos
| Sets the position of the sample points in relation to the
| box(es). If 0, the sample points are places over the center of
| the box(es). Positive (negative) values correspond to positions
| to the right (left) for vertical boxes and above (below) for
| horizontal boxes
|
| The 'pointpos' property is a number and may be specified as:
| - An int or float in the interval [-2, 2]
|
| Returns
| -------
| int|float
|
| q1
| Sets the Quartile 1 values. There should be as many items as
| the number of boxes desired.
|
| The 'q1' property is an array that may be specified as a tuple,
| list, numpy array, or pandas Series
|
| Returns
| -------
| numpy.ndarray
|
| q1src
| Sets the source reference on Chart Studio Cloud for q1 .
|
| The 'q1src' property must be specified as a string or
| as a plotly.grid_objs.Column object
|
| Returns
| -------
| str
|
| q3
| Sets the Quartile 3 values. There should be as many items as
| the number of boxes desired.
|
| The 'q3' property is an array that may be specified as a tuple,
| list, numpy array, or pandas Series
|
| Returns
| -------
| numpy.ndarray
|
| q3src
| Sets the source reference on Chart Studio Cloud for q3 .
|
| The 'q3src' property must be specified as a string or
| as a plotly.grid_objs.Column object
|
| Returns
| -------
| str
|
| quartilemethod
| Sets the method used to compute the sample's Q1 and Q3
| quartiles. The "linear" method uses the 25th percentile for Q1
| and 75th percentile for Q3 as computed using method #10 (listed
| on http://www.amstat.org/publications/jse/v14n3/langford.html).
| The "exclusive" method uses the median to divide the ordered
| dataset into two halves if the sample is odd, it does not
| include the median in either half - Q1 is then the median of
| the lower half and Q3 the median of the upper half. The
| "inclusive" method also uses the median to divide the ordered
| dataset into two halves but if the sample is odd, it includes
| the median in both halves - Q1 is then the median of the lower
| half and Q3 the median of the upper half.
|
| The 'quartilemethod' property is an enumeration that may be specified as:
| - One of the following enumeration values:
| ['linear', 'exclusive', 'inclusive']
|
| Returns
| -------
| Any
|
| sd
| Sets the standard deviation values. There should be as many
| items as the number of boxes desired. This attribute has effect
| only under the q1/median/q3 signature. If `sd` is not provided
| but a sample (in `y` or `x`) is set, we compute the standard
| deviation for each box using the sample values.
|
| The 'sd' property is an array that may be specified as a tuple,
| list, numpy array, or pandas Series
|
| Returns
| -------
| numpy.ndarray
|
| sdsrc
| Sets the source reference on Chart Studio Cloud for sd .
|
| The 'sdsrc' property must be specified as a string or
| as a plotly.grid_objs.Column object
|
| Returns
| -------
| str
|
| selected
| The 'selected' property is an instance of Selected
| that may be specified as:
| - An instance of :class:`plotly.graph_objs.box.Selected`
| - A dict of string/value properties that will be passed
| to the Selected constructor
|
| Supported dict properties:
|
| marker
| :class:`plotly.graph_objects.box.selected.Marke
| r` instance or dict with compatible properties
|
| Returns
| -------
| plotly.graph_objs.box.Selected
|
| selectedpoints
| Array containing integer indices of selected points. Has an
| effect only for traces that support selections. Note that an
| empty array means an empty selection where the `unselected` are
| turned on for all points, whereas, any other non-array values
| means no selection all where the `selected` and `unselected`
| styles have no effect.
|
| The 'selectedpoints' property accepts values of any type
|
| Returns
| -------
| Any
|
| showlegend
| Determines whether or not an item corresponding to this trace
| is shown in the legend.
|
| The 'showlegend' property must be specified as a bool
| (either True, or False)
|
| Returns
| -------
| bool
|
| stream
| The 'stream' property is an instance of Stream
| that may be specified as:
| - An instance of :class:`plotly.graph_objs.box.Stream`
| - A dict of string/value properties that will be passed
| to the Stream constructor
|
| Supported dict properties:
|
| maxpoints
| Sets the maximum number of points to keep on
| the plots from an incoming stream. If
| `maxpoints` is set to 50, only the newest 50
| points will be displayed on the plot.
| token
| The stream id number links a data trace on a
| plot with a stream. See https://chart-
| studio.plotly.com/settings for more details.
|
| Returns
| -------
| plotly.graph_objs.box.Stream
|
| text
| Sets the text elements associated with each sample value. If a
| single string, the same string appears over all the data
| points. If an array of string, the items are mapped in order to
| the this trace's (x,y) coordinates. To be seen, trace
| `hoverinfo` must contain a "text" flag.
|
| The 'text' property is a string and must be specified as:
| - A string
| - A number that will be converted to a string
| - A tuple, list, or one-dimensional numpy array of the above
|
| Returns
| -------
| str|numpy.ndarray
|
| textsrc
| Sets the source reference on Chart Studio Cloud for text .
|
| The 'textsrc' property must be specified as a string or
| as a plotly.grid_objs.Column object
|
| Returns
| -------
| str
|
| uid
| Assign an id to this trace, Use this to provide object
| constancy between traces during animations and transitions.
|
| The 'uid' property is a string and must be specified as:
| - A string
| - A number that will be converted to a string
|
| Returns
| -------
| str
|
| uirevision
| Controls persistence of some user-driven changes to the trace:
| `constraintrange` in `parcoords` traces, as well as some
| `editable: true` modifications such as `name` and
| `colorbar.title`. Defaults to `layout.uirevision`. Note that
| other user-driven trace attribute changes are controlled by
| `layout` attributes: `trace.visible` is controlled by
| `layout.legend.uirevision`, `selectedpoints` is controlled by
| `layout.selectionrevision`, and `colorbar.(x|y)` (accessible
| with `config: {editable: true}`) is controlled by
| `layout.editrevision`. Trace changes are tracked by `uid`,
| which only falls back on trace index if no `uid` is provided.
| So if your app can add/remove traces before the end of the
| `data` array, such that the same trace has a different index,
| you can still preserve user-driven changes if you give each
| trace a `uid` that stays with it as it moves.
|
| The 'uirevision' property accepts values of any type
|
| Returns
| -------
| Any
|
| unselected
| The 'unselected' property is an instance of Unselected
| that may be specified as:
| - An instance of :class:`plotly.graph_objs.box.Unselected`
| - A dict of string/value properties that will be passed
| to the Unselected constructor
|
| Supported dict properties:
|
| marker
| :class:`plotly.graph_objects.box.unselected.Mar
| ker` instance or dict with compatible
| properties
|
| Returns
| -------
| plotly.graph_objs.box.Unselected
|
| upperfence
| Sets the upper fence values. There should be as many items as
| the number of boxes desired. This attribute has effect only
| under the q1/median/q3 signature. If `upperfence` is not
| provided but a sample (in `y` or `x`) is set, we compute the
| lower as the last sample point above 1.5 times the IQR.
|
| The 'upperfence' property is an array that may be specified as a tuple,
| list, numpy array, or pandas Series
|
| Returns
| -------
| numpy.ndarray
|
| upperfencesrc
| Sets the source reference on Chart Studio Cloud for upperfence
| .
|
| The 'upperfencesrc' property must be specified as a string or
| as a plotly.grid_objs.Column object
|
| Returns
| -------
| str
|
| visible
| Determines whether or not this trace is visible. If
| "legendonly", the trace is not drawn, but can appear as a
| legend item (provided that the legend itself is visible).
|
| The 'visible' property is an enumeration that may be specified as:
| - One of the following enumeration values:
| [True, False, 'legendonly']
|
| Returns
| -------
| Any
|
| whiskerwidth
| Sets the width of the whiskers relative to the box' width. For
| example, with 1, the whiskers are as wide as the box(es).
|
| The 'whiskerwidth' property is a number and may be specified as:
| - An int or float in the interval [0, 1]
|
| Returns
| -------
| int|float
|
| width
| Sets the width of the box in data coordinate If 0 (default
| value) the width is automatically selected based on the
| positions of other box traces in the same subplot.
|
| The 'width' property is a number and may be specified as:
| - An int or float in the interval [0, inf]
|
| Returns
| -------
| int|float
|
| x
| Sets the x sample data or coordinates. See overview for more
| info.
|
| The 'x' property is an array that may be specified as a tuple,
| list, numpy array, or pandas Series
|
| Returns
| -------
| numpy.ndarray
|
| x0
| Sets the x coordinate for single-box traces or the starting
| coordinate for multi-box traces set using q1/median/q3. See
| overview for more info.
|
| The 'x0' property accepts values of any type
|
| Returns
| -------
| Any
|
| xaxis
| Sets a reference between this trace's x coordinates and a 2D
| cartesian x axis. If "x" (the default value), the x coordinates
| refer to `layout.xaxis`. If "x2", the x coordinates refer to
| `layout.xaxis2`, and so on.
|
| The 'xaxis' property is an identifier of a particular
| subplot, of type 'x', that may be specified as the string 'x'
| optionally followed by an integer >= 1
| (e.g. 'x', 'x1', 'x2', 'x3', etc.)
|
| Returns
| -------
| str
|
| xcalendar
| Sets the calendar system to use with `x` date data.
|
| The 'xcalendar' property is an enumeration that may be specified as:
| - One of the following enumeration values:
| ['gregorian', 'chinese', 'coptic', 'discworld',
| 'ethiopian', 'hebrew', 'islamic', 'julian', 'mayan',
| 'nanakshahi', 'nepali', 'persian', 'jalali', 'taiwan',
| 'thai', 'ummalqura']
|
| Returns
| -------
| Any
|
| xperiod
| Only relevant when the axis `type` is "date". Sets the period
| positioning in milliseconds or "M<n>" on the x axis. Special
| values in the form of "M<n>" could be used to declare the
| number of months. In this case `n` must be a positive integer.
|
| The 'xperiod' property accepts values of any type
|
| Returns
| -------
| Any
|
| xperiod0
| Only relevant when the axis `type` is "date". Sets the base for
| period positioning in milliseconds or date string on the x0
| axis. When `x0period` is round number of weeks, the `x0period0`
| by default would be on a Sunday i.e. 2000-01-02, otherwise it
| would be at 2000-01-01.
|
| The 'xperiod0' property accepts values of any type
|
| Returns
| -------
| Any
|
| xperiodalignment
| Only relevant when the axis `type` is "date". Sets the
| alignment of data points on the x axis.
|
| The 'xperiodalignment' property is an enumeration that may be specified as:
| - One of the following enumeration values:
| ['start', 'middle', 'end']
|
| Returns
| -------
| Any
|
| xsrc
| Sets the source reference on Chart Studio Cloud for x .
|
| The 'xsrc' property must be specified as a string or
| as a plotly.grid_objs.Column object
|
| Returns
| -------
| str
|
| y
| Sets the y sample data or coordinates. See overview for more
| info.
|
| The 'y' property is an array that may be specified as a tuple,
| list, numpy array, or pandas Series
|
| Returns
| -------
| numpy.ndarray
|
| y0
| Sets the y coordinate for single-box traces or the starting
| coordinate for multi-box traces set using q1/median/q3. See
| overview for more info.
|
| The 'y0' property accepts values of any type
|
| Returns
| -------
| Any
|
| yaxis
| Sets a reference between this trace's y coordinates and a 2D
| cartesian y axis. If "y" (the default value), the y coordinates
| refer to `layout.yaxis`. If "y2", the y coordinates refer to
| `layout.yaxis2`, and so on.
|
| The 'yaxis' property is an identifier of a particular
| subplot, of type 'y', that may be specified as the string 'y'
| optionally followed by an integer >= 1
| (e.g. 'y', 'y1', 'y2', 'y3', etc.)
|
| Returns
| -------
| str
|
| ycalendar
| Sets the calendar system to use with `y` date data.
|
| The 'ycalendar' property is an enumeration that may be specified as:
| - One of the following enumeration values:
| ['gregorian', 'chinese', 'coptic', 'discworld',
| 'ethiopian', 'hebrew', 'islamic', 'julian', 'mayan',
| 'nanakshahi', 'nepali', 'persian', 'jalali', 'taiwan',
| 'thai', 'ummalqura']
|
| Returns
| -------
| Any
|
| yperiod
| Only relevant when the axis `type` is "date". Sets the period
| positioning in milliseconds or "M<n>" on the y axis. Special
| values in the form of "M<n>" could be used to declare the
| number of months. In this case `n` must be a positive integer.
|
| The 'yperiod' property accepts values of any type
|
| Returns
| -------
| Any
|
| yperiod0
| Only relevant when the axis `type` is "date". Sets the base for
| period positioning in milliseconds or date string on the y0
| axis. When `y0period` is round number of weeks, the `y0period0`
| by default would be on a Sunday i.e. 2000-01-02, otherwise it
| would be at 2000-01-01.
|
| The 'yperiod0' property accepts values of any type
|
| Returns
| -------
| Any
|
| yperiodalignment
| Only relevant when the axis `type` is "date". Sets the
| alignment of data points on the y axis.
|
| The 'yperiodalignment' property is an enumeration that may be specified as:
| - One of the following enumeration values:
| ['start', 'middle', 'end']
|
| Returns
| -------
| Any
|
| ysrc
| Sets the source reference on Chart Studio Cloud for y .
|
| The 'ysrc' property must be specified as a string or
| as a plotly.grid_objs.Column object
|
| Returns
| -------
| str
|
| ----------------------------------------------------------------------
| Methods inherited from plotly.basedatatypes.BaseTraceType:
|
| on_click(self, callback, append=False)
| Register function to be called when the user clicks on one or more
| points in this trace.
|
| Note: Callbacks will only be triggered when the trace belongs to a
| instance of plotly.graph_objs.FigureWidget and it is displayed in an
| ipywidget context. Callbacks will not be triggered on figures
| that are displayed using plot/iplot.
|
| Parameters
| ----------
| callback
| Callable function that accepts 3 arguments
|
| - this trace
| - plotly.callbacks.Points object
| - plotly.callbacks.InputDeviceState object
|
| append : bool
| If False (the default), this callback replaces any previously
| defined on_click callbacks for this trace. If True,
| this callback is appended to the list of any previously defined
| callbacks.
|
| Returns
| -------
| None
|
| Examples
| --------
|
| >>> import plotly.graph_objects as go
| >>> from plotly.callbacks import Points, InputDeviceState
| >>> points, state = Points(), InputDeviceState()
|
| >>> def click_fn(trace, points, state):
| ... inds = points.point_inds
| ... # Do something
|
| >>> trace = go.Scatter(x=[1, 2], y=[3, 0])
| >>> trace.on_click(click_fn)
|
| Note: The creation of the `points` and `state` objects is optional,
| it's simply a convenience to help the text editor perform completion
| on the arguments inside `click_fn`
|
| on_deselect(self, callback, append=False)
| Register function to be called when the user deselects points
| in this trace using doubleclick.
|
| Note: Callbacks will only be triggered when the trace belongs to a
| instance of plotly.graph_objs.FigureWidget and it is displayed in an
| ipywidget context. Callbacks will not be triggered on figures
| that are displayed using plot/iplot.
|
| Parameters
| ----------
| callback
| Callable function that accepts 3 arguments
|
| - this trace
| - plotly.callbacks.Points object
|
| append : bool
| If False (the default), this callback replaces any previously
| defined on_deselect callbacks for this trace. If True,
| this callback is appended to the list of any previously defined
| callbacks.
|
| Returns
| -------
| None
|
| Examples
| --------
|
| >>> import plotly.graph_objects as go
| >>> from plotly.callbacks import Points
| >>> points = Points()
|
| >>> def deselect_fn(trace, points):
| ... inds = points.point_inds
| ... # Do something
|
| >>> trace = go.Scatter(x=[1, 2], y=[3, 0])
| >>> trace.on_deselect(deselect_fn)
|
| Note: The creation of the `points` object is optional,
| it's simply a convenience to help the text editor perform completion
| on the `points` arguments inside `selection_fn`
|
| on_hover(self, callback, append=False)
| Register function to be called when the user hovers over one or more
| points in this trace
|
| Note: Callbacks will only be triggered when the trace belongs to a
| instance of plotly.graph_objs.FigureWidget and it is displayed in an
| ipywidget context. Callbacks will not be triggered on figures
| that are displayed using plot/iplot.
|
| Parameters
| ----------
| callback
| Callable function that accepts 3 arguments
|
| - this trace
| - plotly.callbacks.Points object
| - plotly.callbacks.InputDeviceState object
|
| append : bool
| If False (the default), this callback replaces any previously
| defined on_hover callbacks for this trace. If True,
| this callback is appended to the list of any previously defined
| callbacks.
|
| Returns
| -------
| None
|
| Examples
| --------
|
| >>> import plotly.graph_objects as go
| >>> from plotly.callbacks import Points, InputDeviceState
| >>> points, state = Points(), InputDeviceState()
|
| >>> def hover_fn(trace, points, state):
| ... inds = points.point_inds
| ... # Do something
|
| >>> trace = go.Scatter(x=[1, 2], y=[3, 0])
| >>> trace.on_hover(hover_fn)
|
| Note: The creation of the `points` and `state` objects is optional,
| it's simply a convenience to help the text editor perform completion
| on the arguments inside `hover_fn`
|
| on_selection(self, callback, append=False)
| Register function to be called when the user selects one or more
| points in this trace.
|
| Note: Callbacks will only be triggered when the trace belongs to a
| instance of plotly.graph_objs.FigureWidget and it is displayed in an
| ipywidget context. Callbacks will not be triggered on figures
| that are displayed using plot/iplot.
|
| Parameters
| ----------
| callback
| Callable function that accepts 4 arguments
|
| - this trace
| - plotly.callbacks.Points object
| - plotly.callbacks.BoxSelector or plotly.callbacks.LassoSelector
|
| append : bool
| If False (the default), this callback replaces any previously
| defined on_selection callbacks for this trace. If True,
| this callback is appended to the list of any previously defined
| callbacks.
|
| Returns
| -------
| None
|
| Examples
| --------
|
| >>> import plotly.graph_objects as go
| >>> from plotly.callbacks import Points
| >>> points = Points()
|
| >>> def selection_fn(trace, points, selector):
| ... inds = points.point_inds
| ... # Do something
|
| >>> trace = go.Scatter(x=[1, 2], y=[3, 0])
| >>> trace.on_selection(selection_fn)
|
| Note: The creation of the `points` object is optional,
| it's simply a convenience to help the text editor perform completion
| on the `points` arguments inside `selection_fn`
|
| on_unhover(self, callback, append=False)
| Register function to be called when the user unhovers away from one
| or more points in this trace.
|
| Note: Callbacks will only be triggered when the trace belongs to a
| instance of plotly.graph_objs.FigureWidget and it is displayed in an
| ipywidget context. Callbacks will not be triggered on figures
| that are displayed using plot/iplot.
|
| Parameters
| ----------
| callback
| Callable function that accepts 3 arguments
|
| - this trace
| - plotly.callbacks.Points object
| - plotly.callbacks.InputDeviceState object
|
| append : bool
| If False (the default), this callback replaces any previously
| defined on_unhover callbacks for this trace. If True,
| this callback is appended to the list of any previously defined
| callbacks.
|
| Returns
| -------
| None
|
| Examples
| --------
|
| >>> import plotly.graph_objects as go
| >>> from plotly.callbacks import Points, InputDeviceState
| >>> points, state = Points(), InputDeviceState()
|
| >>> def unhover_fn(trace, points, state):
| ... inds = points.point_inds
| ... # Do something
|
| >>> trace = go.Scatter(x=[1, 2], y=[3, 0])
| >>> trace.on_unhover(unhover_fn)
|
| Note: The creation of the `points` and `state` objects is optional,
| it's simply a convenience to help the text editor perform completion
| on the arguments inside `unhover_fn`
|
| ----------------------------------------------------------------------
| Methods inherited from plotly.basedatatypes.BasePlotlyType:
|
| __contains__(self, prop)
| Determine whether object contains a property or nested property
|
| Parameters
| ----------
| prop : str|tuple
| If prop is a simple string (e.g. 'foo'), then return true of the
| object contains an element named 'foo'
|
| If prop is a property path string (e.g. 'foo[0].bar'),
| then return true if the obejct contains the nested elements for
| each entry in the path string (e.g. 'bar' in obj['foo'][0])
|
| If prop is a property path tuple (e.g. ('foo', 0, 'bar')),
| then return true if the object contains the nested elements for
| each entry in the path string (e.g. 'bar' in obj['foo'][0])
|
| Returns
| -------
| bool
|
| __eq__(self, other)
| Test for equality
|
| To be considered equal, `other` must have the same type as this object
| and their `to_plotly_json` representaitons must be identical.
|
| Parameters
| ----------
| other
| The object to compare against
|
| Returns
| -------
| bool
|
| __getitem__(self, prop)
| Get item or nested item from object
|
| Parameters
| ----------
| prop : str|tuple
|
| If prop is the name of a property of this object, then the
| property is returned.
|
| If prop is a nested property path string (e.g. 'foo[1].bar'),
| then a nested property is returned (e.g. obj['foo'][1]['bar'])
|
| If prop is a path tuple (e.g. ('foo', 1, 'bar')), then a nested
| property is returned (e.g. obj['foo'][1]['bar']).
|
| Returns
| -------
| Any
|
| __iter__(self)
| Return an iterator over the object's properties
|
| __reduce__(self)
| Custom implementation of reduce is used to support deep copying
| and pickling
|
| __repr__(self)
| Customize object representation when displayed in the
| terminal/notebook
|
| __setattr__(self, prop, value)
| Parameters
| ----------
| prop : str
| The name of a direct child of this object
| value
| New property value
| Returns
| -------
| None
|
| __setitem__(self, prop, value)
| Parameters
| ----------
| prop : str
| The name of a direct child of this object
|
| Note: Setting nested properties using property path string or
| property path tuples is not supported.
| value
| New property value
|
| Returns
| -------
| None
|
| on_change(self, callback, *args, **kwargs)
| Register callback function to be called when certain properties or
| subproperties of this object are modified.
|
| Callback will be invoked whenever ANY of these properties is
| modified. Furthermore, the callback will only be invoked once even
| if multiple properties are modified during the same restyle /
| relayout / update operation.
|
| Parameters
| ----------
| callback : function
| Function that accepts 1 + len(`args`) parameters. First parameter
| is this object. Second through last parameters are the
| property / subpropery values referenced by args.
| args : list[str|tuple[int|str]]
| List of property references where each reference may be one of:
|
| 1) A property name string (e.g. 'foo') for direct properties
| 2) A property path string (e.g. 'foo[0].bar') for
| subproperties
| 3) A property path tuple (e.g. ('foo', 0, 'bar')) for
| subproperties
|
| append : bool
| True if callback should be appended to previously registered
| callback on the same properties, False if callback should replace
| previously registered callbacks on the same properties. Defaults
| to False.
|
| Examples
| --------
|
| Register callback that prints out the range extents of the xaxis and
| yaxis whenever either either of them changes.
|
| >>> import plotly.graph_objects as go
| >>> fig = go.Figure(go.Scatter(x=[1, 2], y=[1, 0]))
| >>> fig.layout.on_change(
| ... lambda obj, xrange, yrange: print("%s-%s" % (xrange, yrange)),
| ... ('xaxis', 'range'), ('yaxis', 'range'))
|
|
| Returns
| -------
| None
|
| pop(self, key, *args)
| Remove the value associated with the specified key and return it
|
| Parameters
| ----------
| key: str
| Property name
| dflt
| The default value to return if key was not found in object
|
| Returns
| -------
| value
| The removed value that was previously associated with key
|
| Raises
| ------
| KeyError
| If key is not in object and no dflt argument specified
|
| to_plotly_json(self)
| Return plotly JSON representation of object as a Python dict
|
| Returns
| -------
| dict
|
| update(self, dict1=None, overwrite=False, **kwargs)
| Update the properties of an object with a dict and/or with
| keyword arguments.
|
| This recursively updates the structure of the original
| object with the values in the input dict / keyword arguments.
|
| Parameters
| ----------
| dict1 : dict
| Dictionary of properties to be updated
| overwrite: bool
| If True, overwrite existing properties. If False, apply updates
| to existing properties recursively, preserving existing
| properties that are not specified in the update operation.
| kwargs :
| Keyword/value pair of properties to be updated
|
| Returns
| -------
| BasePlotlyType
| Updated plotly object
|
| ----------------------------------------------------------------------
| Readonly properties inherited from plotly.basedatatypes.BasePlotlyType:
|
| figure
| Reference to the top-level Figure or FigureWidget that this object
| belongs to. None if the object does not belong to a Figure
|
| Returns
| -------
| Union[BaseFigure, None]
|
| parent
| Return the object's parent, or None if the object has no parent
| Returns
| -------
| BasePlotlyType|BaseFigure
|
| plotly_name
| The plotly name of the object
|
| Returns
| -------
| str
|
| ----------------------------------------------------------------------
| Data descriptors inherited from plotly.basedatatypes.BasePlotlyType:
|
| __dict__
| dictionary for instance variables (if defined)
|
| __weakref__
| list of weak references to the object (if defined)
|
| ----------------------------------------------------------------------
| Data and other attributes inherited from plotly.basedatatypes.BasePlotlyType:
|
| __hash__ = None
###Markdown
Information shown on hover
###Code
import pandas as pd
data = pd.read_csv("../datasets/usa-population/usa_states_population.csv")
data
N = 53
c = ['hsl('+str(h)+', 50%, 50%)' for h in np.linspace(0,360,N)]
l = []
y = []
for i in range(int(N)):
y.append((2000+i))
trace0 = go.Scatter(
x = data["Rank"],
y = data["Population"]+ i*1000000,
mode = "markers",
marker = dict(size = 14, line = dict(width=1), color = c[i], opacity = 0.3),
#name = data["State"]
name = str(data["State"]) # correction: the trace name must be a string, not a pandas Series
)
l.append(trace0)
layout = go.Layout(title = "Población de los estados de USA",
hovermode = "closest",
xaxis = dict(title="ID", ticklen=5, zeroline=False, gridwidth=2),
yaxis = dict(title="Población", ticklen=5, gridwidth=2),
showlegend = False)
fig = go.Figure(data = l, layout = layout)
py.iplot(fig, filename = "basic-scatter-inline")
trace = go.Scatter(y = np.random.randn(1000),
mode = "markers", marker = dict(size = 16, color = np.random.randn(1000),
colorscale = "Viridis", showscale=True))
py.iplot([trace], filename = "basic-scatter-inline")
###Output
_____no_output_____
###Markdown
Very large datasets
###Code
#N = 100000 # with this value the chart cannot be generated: the Chart Studio subscription has a limited storage quota
N = 10000
trace = go.Scattergl(x = np.random.randn(N), y = np.random.randn(N), mode = "markers",
marker = dict(color="#BAD5FF", line = dict(width=1)))
py.iplot([trace], filename = "basic-scatter-inline")
###Output
_____no_output_____ |
002_Python_Functions_Built_in/057_Python_set().ipynb | ###Markdown
All the IPython Notebooks in this lecture series by Dr. Milan Parmar are available @ **[GitHub](https://github.com/milaan9/04_Python_Functions/tree/main/002_Python_Functions_Built_in)**

Python `set()`

The **`set()`** builtin creates a set in Python.

**Syntax**:

```python
set(iterable)
```

> Recommended Reading: **[Python sets](https://github.com/milaan9/02_Python_Datatypes/blob/main/006_Python_Sets.ipynb)**

`set()` Parameters

**`set()`** takes a single optional parameter:
* **iterable (optional)** - a sequence (**[string](https://github.com/milaan9/02_Python_Datatypes/blob/main/002_Python_String.ipynb)**, **[tuple](https://github.com/milaan9/02_Python_Datatypes/blob/main/004_Python_Tuple.ipynb)**, etc.) or collection (**[set](https://github.com/milaan9/02_Python_Datatypes/blob/main/006_Python_Sets.ipynb)**, **[dictionary](https://github.com/milaan9/02_Python_Datatypes/blob/main/005_Python_Dictionary.ipynb)**, etc.) or an iterator object to be converted into a set.

Return Value from `set()`

**`set()`** returns:
* an empty set if no parameters are passed
* a set constructed from the given **iterable** parameter
###Code
# Example 1: Create sets from string, tuple, list, and range
# empty set
print(set())
# from string
print(set('Python'))
# from tuple
print(set(('a', 'e', 'i', 'o', 'u')))
# from list
print(set(['a', 'e', 'i', 'o', 'u']))
# from range
print(set(range(5)))
###Output
set()
{'P', 't', 'o', 'y', 'h', 'n'}
{'e', 'u', 'o', 'a', 'i'}
{'e', 'u', 'o', 'a', 'i'}
{0, 1, 2, 3, 4}
###Markdown
>**Note:** We cannot create empty sets using **`{ }`** syntax as it creates an empty dictionary. To create an empty set, we use **`set()`**.
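To make the distinction concrete, a quick check (illustrative snippet, not part of the original lesson):

```python
# {} creates an empty dict, set() creates an empty set
print(type({}))     # <class 'dict'>
print(type(set()))  # <class 'set'>
```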
###Code
# Example 2: Create sets from another set, dictionary and frozen set
# from set
print(set({'a', 'e', 'i', 'o', 'u'}))
# from dictionary
print(set({'a':1, 'e': 2, 'i':3, 'o':4, 'u':5}))
# from frozen set
frozen_set = frozenset(('a', 'e', 'i', 'o', 'u'))
print(set(frozen_set))
# Example 3: Create set() for a custom iterable object
class PrintNumber:
def __init__(self, max):
self.max = max
def __iter__(self):
self.num = 0
return self
def __next__(self):
if(self.num >= self.max):
raise StopIteration
self.num += 1
return self.num
# print_num is an iterable
print_num = PrintNumber(5)
# creating a set
print(set(print_num))
###Output
{1, 2, 3, 4, 5}
|
notebooks/twitcher-keycloak-online-demo.ipynb | ###Markdown
Execute WPS request without token
###Code
base_url = 'http://demo-twitcher.cloud.dkrz.de/ows/proxy/emu'
# base_url already points at the Emu WPS endpoint, so only the query string is appended here
url = "{}?service=WPS&version=1.0.0&request=Execute&identifier=hello&DataInputs=name=Stranger".format(base_url)
url
import requests
resp = requests.get(url)
resp.ok
'AccessForbidden' in resp.text
###Output
_____no_output_____
###Markdown
Execute WPS request with token

Get a token from Keycloak via the Phoenix client: https://demo-phoenix.cloud.dkrz.de
###Code
access_token = ''
headers = {'Authorization': 'Bearer {}'.format(access_token)}
resp = requests.get(url, headers=headers)
resp.ok
'ProcessSucceeded' in resp.text
'Hello Stranger' in resp.text
###Output
_____no_output_____
###Markdown
Use Birdy
###Code
from birdy import WPSClient
emu = WPSClient(url=base_url, headers=headers)
response = emu.hello(name='Stranger')
response.get()
###Output
_____no_output_____
###Markdown
Keycloak client
###Code
keycloak_url = 'https://auth-test.ceda.ac.uk'
token_endpoint = '/auth/realms/master/protocol/openid-connect/token'
client_id = 'demo-1'
# copy secret from demo-1 client
client_secret = ''
###Output
_____no_output_____
###Markdown
Get OAuth access token from Keycloak
###Code
token_url = "{}{}".format(keycloak_url, token_endpoint)
token_url
from oauthlib.oauth2 import BackendApplicationClient
from requests_oauthlib import OAuth2Session
client = BackendApplicationClient(client_id=client_id)
oauth = OAuth2Session(client=client)
token = oauth.fetch_token(
token_url,
# scope='compute',
client_id=client_id,
client_secret=client_secret,
include_client_id=True,
verify=True)
token
token['access_token']
###Output
_____no_output_____
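Client-credentials access tokens are short-lived, so in practice they are simply re-fetched when needed. A minimal sketch, assuming the `oauth`, `token_url`, `client_id` and `client_secret` objects defined above:

```python
def get_bearer_headers():
    # fetch a fresh access token via the OAuth2 client-credentials flow
    fresh = oauth.fetch_token(
        token_url,
        client_id=client_id,
        client_secret=client_secret,
        include_client_id=True,
    )
    return {'Authorization': 'Bearer {}'.format(fresh['access_token'])}
```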
###Markdown
Run request with birdy
###Code
from birdy import WPSClient
headers = {'Authorization': 'Bearer {}'.format(token['access_token'])}
emu = WPSClient(url=base_url, headers=headers)
response = emu.hello(name='Stranger')
response.get()
###Output
_____no_output_____ |
Tpxyz/TP1_mde_mde.ipynb | ###Markdown
Name and surname

TP1: Probability and Statistics
###Code
from __future__ import print_function
import numpy as np
import pandas as pd
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
###Output
_____no_output_____
###Markdown
Probability - the frequentist approach

Definition by relative frequency:
* an experiment with a given sample space is performed many times under the same conditions.
* For each event E of the sample space, n(E) is the number of times the event E occurs during the first n repetitions of the experiment.
* `P(E)`, the probability of the event E, is defined as follows:

$$P(E)=\lim_{n\to\infty}\dfrac{n(E)}{n}$$

Simulation of a fair die
###Code
# seed the random number generator
np.random.seed(1)
# Example: sampling
#
# do not forget that Python arrays are zero-indexed,
# and the 2nd argument to NumPy arange must be incremented by 1
# if you want to include that value
n = 6
k = 200000
T=np.random.choice(np.arange(1, n+1), k, replace=True)
unique, counts = np.unique(T, return_counts=True)
dic=dict(zip(unique, counts))
df=pd.DataFrame(list(dic.items()),columns=['i','Occurence'])
df.set_index(['i'], inplace=True)
df['Freq']=df['Occurence']/k
df['P({i})']='{}'.format(1/6)
df
###Output
_____no_output_____
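The same limiting-frequency idea applies to any event; as an even simpler cross-check (illustrative snippet, not part of the original exercise), the relative frequency of heads in repeated fair coin flips approaches 1/2:

```python
import numpy as np

np.random.seed(0)
flips = np.random.choice(['H', 'T'], size=100000)
print(np.mean(flips == 'H'))  # close to 0.5 for large n
```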
###Markdown
Adding interactivity
###Code
def dice_sim(k=100):
n = 6
T=np.random.choice(np.arange(1, n+1), k, replace=True)
unique, counts = np.unique(T, return_counts=True)
dic=dict(zip(unique, counts))
df=pd.DataFrame(list(dic.items()),columns=['i','Occurence'])
df.set_index(['i'], inplace=True)
df['Freq']=df['Occurence']/k
df['P({i})']='{0:.3f}'.format(1/6)
return df
dice_sim(100)
interact(dice_sim,k=widgets.IntSlider(min=1000, max=50000, step=500, value=10));
###Output
_____no_output_____
###Markdown
The case of a loaded die
###Code
p=[0.1, 0.1, 0.1, 0.1,0.1,0.5]
sum(p)
def dice_sim(k=100,q=[[0.1, 0.1, 0.1, 0.1,0.1,0.5],[0.2, 0.1, 0.2, 0.1,0.1,0.3]]):
n = 6
qq=q
T=np.random.choice(np.arange(1, n+1), k, replace=True,p=qq)
unique, counts = np.unique(T, return_counts=True)
dic=dict(zip(unique, counts))
df=pd.DataFrame(list(dic.items()),columns=['i','Occurence'])
df.set_index(['i'], inplace=True)
df['Freq']=df['Occurence']/k
df['P({i})']=['{0:.3f}'.format(j) for j in q]
return df
interact(dice_sim,k=widgets.IntSlider(min=1000, max=50000, step=500, value=10));
###Output
_____no_output_____
###Markdown
Exercise 1: Test the previous interaction for several values of `p`.

Give your conclusion:
###Code
# Conclusion
###Output
_____no_output_____
###Markdown
Random permutations
###Code
np.random.seed(2)
m = 1
n = 10
v = np.arange(m, n+1)
print('v =', v)
np.random.shuffle(v)
print('v, shuffled =', v)
###Output
v = [ 1 2 3 4 5 6 7 8 9 10]
v, shuffled = [ 5 2 6 1 8 3 4 7 10 9]
###Markdown
Exercise 2

Check that the random permutations are uniform, i.e. that the probability of generating any given permutation of the elements of {1,2,3} is 1/6.

Indeed, the permutations of {1,2,3} are:
* 1 2 3
* 1 3 2
* 2 1 3
* 2 3 1
* 3 1 2
* 3 2 1
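A quick way to list these permutations and confirm there are 3! = 6 of them before running the simulation below (illustrative snippet):

```python
from itertools import permutations

perms = list(permutations([1, 2, 3]))
print(len(perms), perms)  # 6 equally likely permutations, each with probability 1/6
```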
###Code
k =10
m = 1
n = 3
v = np.arange(m, n+1)
T=[]
for i in range(k):
np.random.shuffle(v)
w=np.copy(v)
T.append(w)
TT=[str(i) for i in T]
TT
k =1000
m = 1
n = 3
v = np.arange(m, n+1)
T=[]
for i in range(k):
np.random.shuffle(v)
w=np.copy(v)
T.append(w)
TT=[str(i) for i in T]
unique, counts = np.unique(TT, return_counts=True)
dic=dict(zip(unique, counts))
df=pd.DataFrame(list(dic.items()),columns=['i','Occurence'])
df.set_index(['i'], inplace=True)
df['Freq']=df['Occurence']/k
df['P({i,j,k})']='{0:.3f}'.format(1/6)
df
###Output
_____no_output_____
###Markdown
Give your conclusion, explaining the script
###Code
## Explanation
###Output
_____no_output_____
###Markdown
Conditional probability

Recall that the frequentist interpretation of conditional probability, based on a large number `n` of repetitions of an experiment, is $P(A|B) \approx n_{AB}/n_{B}$, where $n_{AB}$ is the number of times $A \cap B$ occurs and $n_{B}$ is the number of times $B$ occurs. Let us try this by simulation and check the results of Example 2.2.5. We therefore use [`numpy.random.choice`](https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.random.choice.html) to simulate `n` families, each with two children.
###Code
np.random.seed(34)
n = 10**5
child1 = np.random.choice([1,2], n, replace=True)
child2 = np.random.choice([1,2], n, replace=True)
print('child1:\n{}\n'.format(child1))
print('child2:\n{}\n'.format(child2))
###Output
child1:
[2 1 1 ... 1 2 1]
child2:
[2 2 2 ... 2 2 1]
###Markdown
Here, `child1` is a NumPy array of length `n` in which each element is a 1 or a 2. Letting 1 stand for "girl" and 2 for "boy", this array represents the sex of the elder child in each of the `n` families. Similarly, `child2` represents the sex of the younger child in each family.
###Code
np.random.choice(["girl", "boy"], n, replace=True)
###Output
_____no_output_____
###Markdown
The cell above shows that we could sample the labels "girl" and "boy" directly, but it is more convenient to work with numeric values. Let $A$ be the event that both children are girls and $B$ the event that the elder child is a girl. Following the frequentist interpretation, we count the number of repetitions in which $B$ occurred and call it `n_b`, and we also count the number of repetitions in which $A \cap B$ occurred and call it `n_ab`. Finally, we divide `n_ab` by `n_b` to approximate $P(A|B)$.
###Code
n_b = np.sum(child1==1)
n_ab = np.sum((child1==1) & (child2==1))
print('P(both girls | elder is girl) = {:0.2F}'.format(n_ab / n_b))
###Output
P(both girls | elder is girl) = 0.50
###Markdown
The ampersand `&` is an element-wise $AND$, so `n_ab` is the number of families in which both the first and the second child are girls. When we ran this code, we obtained 0.50, confirming our answer $P(\text{both girls} \mid \text{elder is a girl}) = 1/2$.

Now let $A$ be the event that both children are girls and $B$ the event that at least one of the children is a girl. Then $A \cap B$ is the same as before, but `n_b` must count the number of families in which at least one child is a girl. This is accomplished with the element-wise $OR$ operator `|` (this is not a conditioning bar; it is an inclusive $OR$, returning `True` if at least one element is `True`).
###Code
n_b = np.sum((child1==1) | (child2==1))  # at least one of the two children is a girl (1 = girl)
n_ab = np.sum((child1==1) & (child2==1))
print('P(both girls | at least one girl) = {:0.2F}'.format(n_ab / n_b))
###Output
P(both girls | at least one girl) = 0.33
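As a cross-check on the simulated 0.33, the exact value 1/3 can be obtained by enumerating the four equally likely sex combinations (illustrative snippet):

```python
from fractions import Fraction

# the four equally likely (elder, younger) combinations, with 1 = girl, 2 = boy
outcomes = [(1, 1), (1, 2), (2, 1), (2, 2)]
b = [o for o in outcomes if 1 in o]         # at least one girl
ab = [o for o in b if o == (1, 1)]          # both girls
print(Fraction(len(ab), len(b)))            # 1/3
```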
|
Case_Study_4_1-Web.ipynb | ###Markdown
Movies Recommendation Systems

This python notebook will get you started with building your own Recommendation engine. We cover the following:
1. Getting the data
2. Working with the dataset
3. Recommender libraries in R, Python
4. Data partitions (Train, Test)
5. Integrating a Popularity Recommender
6. Integrating a Collaborative Filtering Recommender
7. Integrating an Item-Similarity Recommender
8. Getting Top-K recommendations
9. Evaluation: RMSE
10. Evaluation: Confusion Matrix/Precision-Recall

A. Data Set Overview

A1. Loading Libraries

First we will import the necessary modules. Make sure to download graphlab from https://turi.com/ and make sure to register for a license. You can also download it from the command prompt on Mac or Linux using 'sudo pip install graphlab-create'. You will also need to install scikit-learn, pandas, matplotlib, and numpy.
###Code
%matplotlib inline
import matplotlib as mpl
mpl.use('TkAgg')
from matplotlib import pyplot as plt
import pandas as pd
import numpy as np
from sklearn.cross_validation import train_test_split
import graphlab
###Output
_____no_output_____
###Markdown
A2. Working with the Dataset

We will first load the dataset into a pandas data frame and inspect the data. Make sure you have the data downloaded and that the 'ml-100k' folder is in the same directory as this notebook.

We use the dataset(s) provided by [MovieLens](https://grouplens.org/datasets/movielens/). MovieLens has several datasets and you can choose any; for this tutorial we will use the [100K dataset](https://grouplens.org/datasets/movielens/100k/). This dataset consists of:
* 100,000 ratings (1-5) from 943 users on 1682 movies.
* Each user has rated at least 20 movies.
* Simple demographic info for the users (age, gender, occupation, zip).

Download the "[*u.data*](http://files.grouplens.org/datasets/movielens/ml-100k/)" file. To view this file you can use Microsoft Excel, for example. It has the following tab-separated format: **user id** | **item id** | **rating** | **timestamp**. The timestamps are in Unix seconds since 1/1/1970 UTC, [EPOCH format](https://www.unixtimestamp.com/index.php).

In **Python**, you can convert the data to a [Pandas](https://pandas.pydata.org/) dataframe to organize the dataset. For plotting in Python, you can use [MatplotLib](https://matplotlib.org/).
###Code
col_names = ["user_id", "item_id", "rating", "timestamp"]
data = pd.read_table("ml-100k/u.data", names=col_names)
data = data.drop("timestamp", 1)
data.info()
plt.hist(data["rating"], bins=10)
plt.xlabel('Rating')
plt.ylabel('Number of Ratings')
plt.title('Histogram of Rating')
#plt.axis([0.5, 5.5, 0, 40000])
plt.show()
plt.hist(data["item_id"],bins=1800)
plt.xlabel('Item_id')
plt.ylabel('Number of Ratings')
plt.title('Histogram of Item_id')
#plt.axis([0.5, 5.5, 0, 40000])
plt.show()
plt.hist(data["user_id"],bins=1000)
plt.xlabel('User_id')
plt.ylabel('Number of Ratings')
plt.title('Histogram of User_id')
#plt.axis([0.5, 5.5, 0, 40000])
plt.show()
###Output
_____no_output_____
###Markdown
*Comments:*
1. There are many more ratings of 3, 4, and 5 than ratings of 1 and 2.
2. Certain users (user_id) have as few as about 10 ratings and as many as about 700; a large share of users have between 50 and 200 ratings.
3. Movies with item_id in the range 0-400 have more ratings than the rest.

A3. Calculating Sparsity

The sparsity of the dataset can be calculated as:

\begin{equation}
Sparsity = \frac{Number\:of\:Ratings\:in\:the\:Dataset}{(Number\:of\:Movies/Columns)*(Number\:of\:Users/Rows)}*100\%
\end{equation}
###Code
Number_Ratings = float(len(data))
Number_Movies = float(len(np.unique(data["item_id"])))
Number_Users = float(len(np.unique(data["user_id"])))
Sparsity = (Number_Ratings/(Number_Movies*Number_Users))*100.0
print "Sparsity of Dataset is", Sparsity, "Percent"
###Output
Sparsity of Dataset is 6.30466936422 Percent
###Markdown
*Comments:* If you want the data to be less sparse, a good way to achieve that is to subset the data so that you only keep users/movies that have at least a certain number of observations in the dataset.

A4. Subsetting the Data

Now we will subset the data. The criterion we currently use is to exclude a user if they have fewer than 50 ratings. This value can be changed via the RATINGS_CUTOFF variable.
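An equivalent, more compact way to apply the same cutoff with pandas (a sketch; the explicit loop in the next cell is what this notebook actually uses):

```python
RATINGS_CUTOFF = 50
# keep only users that contributed at least RATINGS_CUTOFF ratings
data = data.groupby("user_id").filter(lambda g: len(g) >= RATINGS_CUTOFF)
```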
###Code
users = data["user_id"]
ratings_count = {}
for user in users:
if user in ratings_count:
ratings_count[user] += 1
else:
ratings_count[user] = 1
RATINGS_CUTOFF = 50 # A user is not included if they have fewer than "RATINGS_CUTOFF" ratings
remove_users = []
for user,num_ratings in ratings_count.iteritems():
if num_ratings < RATINGS_CUTOFF:
remove_users.append(user)
data = data.loc[~data['user_id'].isin(remove_users)]
###Output
_____no_output_____
###Markdown
A5. Recalculate Sparsity

We will now recalculate the sparsity of the subsetted data.
###Code
Number_Ratings = float(len(data))
Number_Movies = float(len(np.unique(data["item_id"])))
Number_Users = float(len(np.unique(data["user_id"])))
Sparsity = (Number_Ratings/(Number_Movies*Number_Users))*100.0
print "Sparsity of Dataset is", Sparsity, "Percent"
###Output
Sparsity of Dataset is 9.26584192843 Percent
###Markdown
B. Recommenders

If you want to build your own Recommenders from scratch, you can consult the vast amounts of academic literature available freely. There are also several self-help guides which can be useful, such as these:
* Collaborative Filtering with R;
* [How to build a Recommender System](https://blogs.gartner.com/martin-kihn/how-to-build-a-recommender-system-in-python/);

On the other hand, why build a recommender from scratch when there is a vast array of publicly available Recommenders (in all sorts of programming environments) ready for use? Some examples are:
* [RecommenderLab for R](https://cran.r-project.org/web/packages/recommenderlab/vignettes/recommenderlab.pdf);
* [Graphlab-Create](https://github.com/apple/turicreate/) for Python (has a free license for personal and academic use);
* [Apache Spark's Recommendation module](https://spark.apache.org/docs/1.4.0/api/python/pyspark.mllib.html#module-pyspark.mllib.recommendation);
* [Apache Mahout](https://mahout.apache.org/users/recommender/userbased-5-minutes.md);

For this tutorial, we will reference **Graphlab-Create**.

B1. Preparing for Graphlab

Now we will turn the dataset into a graphlab SFrame so that we can use its different recommendation models.
###Code
sf = graphlab.SFrame(data) #Load dataset "data" with a graphlab SFrame
###Output
_____no_output_____
###Markdown
B2. Creating a Train/Test Split

We now split the data into training and test sets for the different models (a further validation split is made later for model selection).
###Code
sf_train, sf_test = sf.random_split(.7) #split dataset "data": 70% for train & 30% for test
###Output
_____no_output_____
###Markdown
B3. Popularity Recommender

Graphlab provides an easy to use popularity recommender. We create the model using the training data and test the RMSE on the test data.
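Conceptually, the popularity score is just the per-item mean rating (as the comment in the next cell notes); a sketch of the same idea in plain pandas:

```python
# items ranked by their mean rating across all users
item_mean_rating = data.groupby("item_id")["rating"].mean().sort_values(ascending=False)
print(item_mean_rating.head())
```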
###Code
#use training set (70% of dataset) to create the model
#the model computes the mean rating for each item and uses this to rank items for recommendations
popularity_recommender = graphlab.recommender.popularity_recommender.create(sf_train,target='rating')
#use test set (30% of dataset) to test the model
#by evaluating prediction error for each user-item pair in the given data set
popularity_recommender.evaluate_rmse(sf_test,'rating')
###Output
_____no_output_____
###Markdown
B4. Collaborative Filtering

For collaborative filtering we will use Graphlab's Factorization Recommender model. We initialize the model with the training data and use the validation data to determine the best regularization term to use.
###Code
sf_train_col, sf_validate_col = sf_train.random_split(.75) #further split train set: 75% for training & 25% for validating
#different Regularization terms are used for training the different models
#assign regularization_terms to find an optimal value for the dataset
regularization_terms = [10**-5,10**-4,10**-3,10**-2,10**-1]
#initialize best_regularization_term and best_RMSE
best_regularization_term=0
best_RMSE = np.inf
for regularization_term in regularization_terms:
#create a model with regularization_term and the train set
factorization_recommender = graphlab.recommender.factorization_recommender.create(sf_train_col,
target='rating',
regularization=regularization_term)
#evaluate the model with the validate set
evaluation = factorization_recommender.evaluate_rmse(sf_validate_col,'rating')
#update best_RMSE and best_regularization_term if the overall RMSE on the validation data set is lower than the previous one
if evaluation['rmse_overall'] < best_RMSE:
best_RMSE = evaluation['rmse_overall']
best_regularization_term = regularization_term
print "Best Regularization Term", best_regularization_term
print "Best Validation RMSE Achieved", best_RMSE
#run the test data set with the best model
factorization_recommender = graphlab.recommender.factorization_recommender.create(sf_train_col,
target='rating',
regularization=best_regularization_term)
print "Test RMSE on best model", factorization_recommender.evaluate_rmse(sf_test,'rating')['rmse_overall']
###Output
_____no_output_____
###Markdown
B5. Item-Item Similarity Recommender

Now let's test out the Item-Item Similarity Recommender provided by Graphlab.
###Code
item_similarity_recommender = graphlab.recommender.item_similarity_recommender.create(sf_train,target='rating')
print "Test RMSE on model", item_similarity_recommender.evaluate_rmse(sf_test,'rating')['rmse_overall']
###Output
_____no_output_____
###Markdown
*Comments:* The RMSE of the item-similarity recommender is much higher than that of the popularity recommender and of collaborative filtering.

B6. Top K Recommendations

Here we can calculate the top k recommendations for each user. We calculate the top 5 for each of the different models and print the Collaborative Filtering model's output to give you an idea of what these recommendations look like.
###Code
k=5
#top k recommendations from Popularity Recommender
popularity_top_k = popularity_recommender.recommend(k=k)
#top k recommendations from the Collaborative Filtering model
factorization_top_k = factorization_recommender.recommend(k=k)
#top k recommendations from Item-Item Similarity Recommender
item_similarity_top_k = item_similarity_recommender.recommend(k=k)
#print the Collaborative Filtering model
print factorization_top_k
###Output
+---------+---------+---------------+------+
| user_id | item_id | score | rank |
+---------+---------+---------------+------+
| 244 | 178 | 4.69396442393 | 1 |
| 244 | 483 | 4.69184857825 | 2 |
| 244 | 114 | 4.66390985468 | 3 |
| 244 | 408 | 4.64419347266 | 4 |
| 244 | 318 | 4.63560754279 | 5 |
| 298 | 64 | 4.69501933348 | 1 |
| 298 | 114 | 4.6440998007 | 2 |
| 298 | 169 | 4.6380979706 | 3 |
| 298 | 408 | 4.62438341867 | 4 |
| 298 | 12 | 4.5827265669 | 5 |
+---------+---------+---------------+------+
[2840 rows x 4 columns]
Note: Only the head of the SFrame is printed.
You can use print_rows(num_rows=m, num_columns=n) to print more rows and columns.
###Markdown
B7. Precision/Recall comparison between the three models

Here we calculate the precision/recall metrics for the three different models.
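For reference, the cutoff-based numbers reported below follow the usual precision@k / recall@k definitions; a minimal sketch for a single user (hypothetical helper, not part of Graphlab's API):

```python
def precision_recall_at_k(recommended, relevant, k):
    """Precision@k and recall@k for one user's ranked recommendation list."""
    top_k = recommended[:k]
    hits = len(set(top_k) & set(relevant))
    precision = hits / float(k)
    recall = hits / float(len(relevant)) if relevant else 0.0
    return precision, recall

# example: 2 of the top-5 recommended items are among the user's 4 relevant items
print(precision_recall_at_k([10, 4, 7, 99, 3], [4, 3, 55, 60], k=5))  # (0.4, 0.5)
```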
###Code
models = [popularity_recommender,factorization_recommender,item_similarity_recommender]
model_names = ['popularity_recommender','factorization_recommender','item_similarity_recommender']
precision_recall = graphlab.recommender.util.compare_models(sf_test,models,metric='precision_recall',model_names=model_names)
###Output
PROGRESS: Evaluate model popularity_recommender
Precision and recall summary statistics by cutoff
+--------+-------------------+-------------------+
| cutoff | mean_precision | mean_recall |
+--------+-------------------+-------------------+
| 1 | 0.0 | 0.0 |
| 2 | 0.0 | 0.0 |
| 3 | 0.000586854460094 | 0.000125754527163 |
| 4 | 0.00044014084507 | 0.000125754527163 |
| 5 | 0.000352112676056 | 0.000125754527163 |
| 6 | 0.000293427230047 | 0.000125754527163 |
| 7 | 0.000503018108652 | 0.000133981458847 |
| 8 | 0.000660211267606 | 0.000160258524224 |
| 9 | 0.000782472613459 | 0.000168485455907 |
| 10 | 0.00105633802817 | 0.00020315842176 |
+--------+-------------------+-------------------+
[10 rows x 3 columns]
PROGRESS: Evaluate model factorization_recommender
Precision and recall summary statistics by cutoff
+--------+----------------+------------------+
| cutoff | mean_precision | mean_recall |
+--------+----------------+------------------+
| 1 | 0.161971830986 | 0.00340268857593 |
| 2 | 0.12764084507 | 0.00533711835484 |
| 3 | 0.118544600939 | 0.00761945366216 |
| 4 | 0.103433098592 | 0.00876494784352 |
| 5 | 0.101056338028 | 0.0108703857913 |
| 6 | 0.106807511737 | 0.0135953573377 |
| 7 | 0.11569416499 | 0.0178900108791 |
| 8 | 0.116197183099 | 0.0207420083033 |
| 9 | 0.113654147105 | 0.0228071895557 |
| 10 | 0.111443661972 | 0.024663728299 |
+--------+----------------+------------------+
[10 rows x 3 columns]
PROGRESS: Evaluate model item_similarity_recommender
Precision and recall summary statistics by cutoff
+--------+----------------+-----------------+
| cutoff | mean_precision | mean_recall |
+--------+----------------+-----------------+
| 1 | 0.573943661972 | 0.0156045294534 |
| 2 | 0.543133802817 | 0.0289464127846 |
| 3 | 0.533450704225 | 0.0430832902986 |
| 4 | 0.516285211268 | 0.0550399361212 |
| 5 | 0.508450704225 | 0.0672602884222 |
| 6 | 0.49882629108 | 0.0785304620578 |
| 7 | 0.484406438632 | 0.0878912603022 |
| 8 | 0.473591549296 | 0.0977090443051 |
| 9 | 0.466549295775 | 0.10764380375 |
| 10 | 0.461795774648 | 0.118284684131 |
+--------+----------------+-----------------+
[10 rows x 3 columns]
|
presentations/2016-03-11(Nexa Wall Street Columns High Resolution - Visualizing Receptive Fields and Data Clusters).ipynb | ###Markdown
Nexa Wall Street Columns High Resolution (30 x 30): Visualizing Receptive Fields and Data Clusters.

In this notebook we study what the receptive fields obtained from the high-resolution Nexa wall street data look like.
###Code
import h5py
import numpy as np  # needed below for np.zeros
import sys
sys.path.append("../")
import matplotlib.pyplot as plt
%matplotlib inline
from visualization.data_clustering import visualize_data_cluster_text_to_image_columns
###Output
_____no_output_____
###Markdown
Mixed receptive fields
###Code
# First we load the file
file_location = '../results_database/text_wall_street_columns_30_semi_constantNdata.hdf5'
file_location = '../results_database/text_wall_street_columns_30_Ndata20.hdf5'
# The first one is with constant number of features. The second one is with Ndata
# equal to 20 for all the examples. Better for visualization
max_lag = 6
run_name = '/test' + str(max_lag)
f = h5py.File(file_location, 'r')
Nside = 30
# Nexa parameters
Nspatial_clusters = max_lag
Ntime_clusters = 20
Nembedding = 3
parameters_string = '/' + str(Nspatial_clusters)
parameters_string += '-' + str(Ntime_clusters)
parameters_string += '-' + str(Nembedding)
nexa = f[run_name + parameters_string]
cluster_to_index = nexa['cluster_to_index']
matrix = np.zeros((Nside, max_lag))
for cluster in cluster_to_index:
cluster_indexes = cluster_to_index[str(cluster)]
for index in cluster_indexes:
first_index = index // max_lag
second_index = index % max_lag
matrix[first_index, second_index] = cluster
fig = plt.figure(figsize=(16, 12))
ax = fig.add_subplot(111)
ax.imshow(matrix, origin='lower', interpolation='none')
cluster = 0
for data_center in range(Ntime_clusters):
fig = visualize_data_cluster_text_to_image_columns(nexa, f, run_name,
cluster, data_center, colorbar=True, Nside=Nside, Ncolumns=max_lag)
###Output
_____no_output_____
###Markdown
Independent Receptive Fields
###Code
# First we load the file
file_location = '../results_database/text_wall_street_columns_30_semi_constantNdata.hdf5'
file_location = '../results_database/text_wall_street_columns_30.hdf5'
file_location = '../results_database/text_wall_street_columns_30_Ndata20.hdf5'
max_lag = 6
run_name = '/indep' + str(max_lag)
f = h5py.File(file_location, 'r')
Nside = 30
# Nexa parameters
Nspatial_clusters = max_lag
Ntime_clusters = 20
Nembedding = 3
parameters_string = '/' + str(Nspatial_clusters)
parameters_string += '-' + str(Ntime_clusters)
parameters_string += '-' + str(Nembedding)
nexa = f[run_name + parameters_string]
cluster_to_index = nexa['cluster_to_index']
matrix = np.zeros((Nside, max_lag))
for cluster in cluster_to_index:
cluster_indexes = cluster_to_index[str(cluster)]
for index in cluster_indexes:
first_index = index // max_lag
second_index = index % max_lag
matrix[first_index, second_index] = cluster
fig = plt.figure(figsize=(16, 12))
ax = fig.add_subplot(111)
ax.imshow(matrix, origin='lower', interpolation='none')
cluster = 0
for data_center in range(Ntime_clusters):
fig = visualize_data_cluster_text_to_image_columns(nexa, f, run_name,
cluster, data_center, colorbar=True, Nside=Nside, Ncolumns=max_lag)
cluster = 1
for data_center in range(Ntime_clusters):
fig = visualize_data_cluster_text_to_image_columns(nexa, f, run_name,
cluster, data_center, colorbar=True, Nside=Nside, Ncolumns=max_lag)
###Output
_____no_output_____ |
AT-BLSTM-CCAT50.ipynb | ###Markdown
AT-BLSTM for CCAT50
###Code
# Start GPU
import os
# os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID" # see issue #152
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
from __future__ import print_function
import os
import re
import sys
import pandas as pd
import numpy as np
from numpy.random import seed
seed(1)
from tensorflow import set_random_seed
set_random_seed(2)
import pandas as pd
import keras.layers as layers
import tensorflow as tf
import tensorflow_hub as hub
import pydot
import itertools
import h5py
import keras.callbacks
import tensorflow as tf
from keras import optimizers
from keras import backend as K
from keras import regularizers
from keras.regularizers import l2
from AdamW import AdamW
from SGDW import SGDW
from keras.layers import Input, Embedding, LSTM, Dense, Activation, Dropout, Flatten, merge
from keras.models import Model
from keras.callbacks import CSVLogger
from keras.layers import Conv1D, MaxPooling1D,AveragePooling1D, GlobalMaxPooling1D, concatenate
from keras.layers import TimeDistributed, Bidirectional, Lambda
from keras.layers import Layer
from keras import initializers, regularizers, constraints
from keras.layers.normalization import BatchNormalization
from keras.models import Model
from keras.utils import plot_model
from keras.utils import np_utils
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import KFold
from sklearn.preprocessing import LabelEncoder
from sklearn import preprocessing
from keras.optimizers import SGD
from keras.callbacks import TensorBoard, LearningRateScheduler, ReduceLROnPlateau, ModelCheckpoint
# Load nltk
import nltk
from nltk.tokenize import sent_tokenize, word_tokenize
from nltk.corpus import stopwords
from nltk.stem.wordnet import WordNetLemmatizer
from nltk.util import ngrams
from nltk.classify.scikitlearn import SklearnClassifier
%matplotlib inline
batch_size = 128
maxlen = 512
max_sentences = 15
w_l2 = 1e-4
nb_classes = 50
def preProcessText(text):
#str(tweet.encode('utf-8'))
    text = str(text)  # ensure the input is a string
#Replace all words preceded by '@' with 'USER_NAME'
text = re.sub(r'@[^\s]+', 'USER_NAME', text)
#Replace all url's with 'URL'
text = re.sub(r'www.[^\s]+ | http[^\s]+',' URL ', text)
#Replace all hashtags with the word
text = text.strip('#')
#Replace words with long repeated characters with the shorter form
text = re.sub(r'(.)\1{2,}', r'\1', text)
#Remove any extra white space
text = re.sub(r'[\s]+', ' ', text)
return text
def striphtml(s):
p = re.compile(r'<.*?>')
return p.sub("", str(s))
def clean(s):
return re.sub(r'[^\x00-\x7f]', r'', s)
# Load dataset now
datasource = '../../data/ccat50/'
ccat50_train = pd.read_csv(datasource + 'cleantrainaugmented.csv')
ccat50_test = pd.read_csv(datasource + 'cleantest.csv', index_col=0)
ccat50_train.head(3)
ccat50_train['text'] = ccat50_train['text'].apply(preProcessText)
ccat50_train['augmented'] = ccat50_train['augmented'].apply(preProcessText)
trainx = pd.concat([ccat50_train['text'],ccat50_train['augmented']],ignore_index=True)
trainy = pd.concat([ccat50_train['label'],ccat50_train['label']],ignore_index=True)
# convert to dataframe
trainx = pd.DataFrame(trainx,columns=['text'])
trainy = pd.DataFrame(trainy,columns=['label'])
txt = ''
docs = []
sentences = []
labels = []
for cont, label in zip(trainx.text, trainy.label):
sentences = re.split(r'(?<!\w\.\w.)(?<![A-Z][a-z]\.)(?<=\.|\?)\s', clean(striphtml(cont)))
sentences = [sent.lower() for sent in sentences]
docs.append(sentences)
labels.append(label)
num_sent = []
for doc in docs:
num_sent.append(len(doc))
for s in doc:
txt += s
chars = set(txt)
print('total chars:', len(chars))
char_indices = dict((c, i) for i, c in enumerate(chars))
indices_char = dict((i, c) for i, c in enumerate(chars))
print (len(docs)), print(len(chars))
print('Doing One hot encoding for training sample and targets:')
x = np.ones((len(docs), max_sentences, maxlen), dtype=np.int64) * -1
y = np.array(labels)
for i, doc in enumerate(docs):
for j, sentence in enumerate(doc):
if j < max_sentences:
for t, char in enumerate(sentence[-maxlen:]):
x[i, j, (maxlen - 1 - t)] = char_indices[char]
print('Sample Training X_train:{}'.format(x[0]))
from sklearn.preprocessing import LabelEncoder, OneHotEncoder
label_encoder = LabelEncoder()
integer_encoded = label_encoder.fit_transform(y)
# binary encode
from sklearn.preprocessing import OneHotEncoder
onehot_encoder = OneHotEncoder(sparse=False)
integer_encoded = integer_encoded.reshape(len(integer_encoded), 1)
y = onehot_encoder.fit_transform(integer_encoded)
print('training label:', y.shape)
print(y.shape)
###Output
(5000, 50)
###Markdown
Test dataset
###Code
test_docs = []
sent = []
test_labels = []
for cont, label in zip(ccat50_test.text, ccat50_test.label):
sent = re.split(r'(?<!\w\.\w.)(?<![A-Z][a-z]\.)(?<=\.|\?)\s',
clean(striphtml(cont)))
sent = [sent.lower() for sent in sent]
test_docs.append(sent)
test_labels.append(label)
print('Doing One hot encoding for testing sample and targets:')
x_test = np.ones((len(test_docs), max_sentences, maxlen), dtype=np.int64) * -1
print(x_test.shape)
y_test = np.array(test_labels)
for i, doc in enumerate(test_docs):
for j, sentence in enumerate(doc):
if j < max_sentences:
for t, char in enumerate(sentence[-maxlen:]):
#print(t)
x_test[i, j, (maxlen - 1 - t)] = char_indices[char]
from sklearn.preprocessing import LabelEncoder, OneHotEncoder
label_encoder = LabelEncoder()
integer_encoded = label_encoder.fit_transform(y_test)
# binary encode
from sklearn.preprocessing import OneHotEncoder
onehot_encoder = OneHotEncoder(sparse=False)
integer_encoded = integer_encoded.reshape(len(integer_encoded), 1)
y_test = onehot_encoder.fit_transform(integer_encoded)
print('training label:', y_test.shape)
###Output
training label: (2500, 50)
###Markdown
Attention layer
###Code
def dot_product(x, kernel):
if K.backend() == 'tensorflow':
return K.squeeze(K.dot(x, K.expand_dims(kernel)), axis=-1)
else:
return K.dot(x, kernel)
class AttentionWithContext(Layer):
def __init__(self, return_coefficients=False,
W_regularizer=None, u_regularizer=None, b_regularizer=None,
W_constraint=None, u_constraint=None, b_constraint=None,
bias=True, **kwargs):
self.supports_masking = True
self.return_coefficients = return_coefficients
self.init = initializers.get('glorot_uniform')
self.W_regularizer = regularizers.get(W_regularizer)
self.u_regularizer = regularizers.get(u_regularizer)
self.b_regularizer = regularizers.get(b_regularizer)
self.W_constraint = constraints.get(W_constraint)
self.u_constraint = constraints.get(u_constraint)
self.b_constraint = constraints.get(b_constraint)
self.bias = bias
super(AttentionWithContext, self).__init__(**kwargs)
def build(self, input_shape):
assert len(input_shape) == 3
self.W = self.add_weight((input_shape[-1], input_shape[-1],),
initializer=self.init,
name='{}_W'.format(self.name),
regularizer=self.W_regularizer,
constraint=self.W_constraint)
if self.bias:
self.b = self.add_weight((input_shape[-1],),
initializer='zero',
name='{}_b'.format(self.name),
regularizer=self.b_regularizer,
constraint=self.b_constraint)
self.u = self.add_weight((input_shape[-1],),
initializer=self.init,
name='{}_u'.format(self.name),
regularizer=self.u_regularizer,
constraint=self.u_constraint)
super(AttentionWithContext, self).build(input_shape)
def compute_mask(self, input, input_mask=None):
# do not pass the mask to the next layers
return None
def call(self, x, mask=None):
uit = dot_product(x, self.W)
if self.bias:
uit += self.b
uit = K.tanh(uit)
ait = dot_product(uit, self.u)
a = K.exp(ait)
# apply mask after the exp. will be re-normalized next
if mask is not None:
# Cast the mask to floatX to avoid float64 upcasting in theano
a *= K.cast(mask, K.floatx())
# in some cases especially in the early stages of training the sum may be almost zero
# and this results in NaN's. A workaround is to add a very small positive number ε to the sum.
# a /= K.cast(K.sum(a, axis=1, keepdims=True), K.floatx())
a /= K.cast(K.sum(a, axis=1, keepdims=True) + K.epsilon(), K.floatx())
a = K.expand_dims(a)
weighted_input = x * a
if self.return_coefficients:
return [K.sum(weighted_input, axis=1), a]
else:
return K.sum(weighted_input, axis=1)
def compute_output_shape(self, input_shape):
if self.return_coefficients:
return [(input_shape[0], input_shape[-1]), (input_shape[0], input_shape[-1], 1)]
else:
return input_shape[0], input_shape[-1]
###Output
_____no_output_____
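The class above is only defined at this point; for reference, a minimal sketch of how such an attention layer is typically attached to a BiLSTM that returns sequences (illustrative wiring only, not the exact encoder built below; the input shape is a placeholder):

```python
# illustrative: attention-weighted pooling over a BiLSTM's timestep outputs
seq_input = Input(shape=(100, 128))                        # (timesteps, features) placeholder
h = Bidirectional(LSTM(64, return_sequences=True))(seq_input)
context = AttentionWithContext()(h)                        # weighted sum over timesteps
out = Dense(nb_classes, activation='softmax')(context)
attn_model = Model(inputs=seq_input, outputs=out)
```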
###Markdown
Precision, Recall and F-Score Script
###Code
def f1(y_true, y_pred):
def recall(y_true, y_pred):
true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
possible_positives = K.sum(K.round(K.clip(y_true, 0, 1)))
recall = true_positives / (possible_positives + K.epsilon())
return recall
def precision(y_true, y_pred):
true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))
precision = true_positives / (predicted_positives + K.epsilon())
return precision
precision = precision(y_true, y_pred)
recall = recall(y_true, y_pred)
return 2*((precision*recall)/(precision+recall+K.epsilon()))
###Output
_____no_output_____
###Markdown
TensorBoard
###Code
# tensorbd_dir = 'drive/My Drive/Colab Notebooks/imbd62/'
model_save_dir = '../../save_models/ccat50/'
tensorboard = TensorBoard(log_dir=model_save_dir+'./ccat50_SGDW_logs',
histogram_freq=0,
write_graph=True,
write_images=True)
###Output
_____no_output_____
###Markdown
Callback
###Code
reduce_lr_adam = ReduceLROnPlateau(monitor='val_loss',factor=0.5,
patience=5,min_lr=1e-4)
reduce_lr_sgd = ReduceLROnPlateau(monitor='val_loss',
factor=0.5,patience=5, min_lr=1e-5)
###Output
_____no_output_____
###Markdown
Earlystop and others
###Code
# EarlyStopping
earlystop = keras.callbacks.EarlyStopping(monitor='val_loss',
patience=5,verbose=1,
mode='auto')
num_epochs = 100
#checkpointer
checkpointer = ModelCheckpoint(model_save_dir+'ccat50_adamw_SGDW.hdf5',
monitor='val_acc',
verbose=1, save_best_only=True,
mode='max')
# CSV logger keras
csv_logger = CSVLogger(model_save_dir+'ccat50_adamw_SGDW.csv',
append=True, separator=';')
###Output
_____no_output_____
###Markdown
Convolutional layer with filter, windows and pooling
###Code
# Convolutional layer parameter
filter_length = [7, 3, 3]
nb_filter = [512, 256,128]
pool_length = 2
# sentence input
in_sentence = Input(shape=(maxlen,),
dtype='int64',name='main_input1')
# document input
document = Input(shape=(max_sentences, maxlen),
dtype='int64',name='main_input2')
def binarize(x, sz=30):
return tf.to_float(tf.one_hot(x, sz, on_value=1, off_value=0, axis=-1))
def binarize_outshape(in_shape):
return in_shape[0], in_shape[1], 30
embedded = Lambda(binarize, output_shape=binarize_outshape,
name='embed_input')(in_sentence)
from keras import initializers
from keras.initializers import glorot_normal, normal
###Output
_____no_output_____
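###Markdown
A tiny, illustrative check (the character ids below are made up, not taken from the data) of what `binarize` produces: each integer character id becomes a 30-dimensional one-hot vector, so a batch of shape (1, 3) comes out as (1, 3, 30).
###Code
# Toy check of the character one-hot "embedding" defined above
import numpy as np
from keras import backend as K
toy_ids = K.constant(np.array([[1, 4, 0]]), dtype='int64')
print(K.eval(binarize(toy_ids, sz=30)).shape) # (1, 3, 30)
###Output
_____no_output_____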
###Markdown
Encoding Layer
###Code
# embedded: encodes sentence
for i in range(len(nb_filter)):
embedded = Conv1D(filters=nb_filter[i],
kernel_size=filter_length[i],
padding='valid',
activation='relu',
kernel_initializer='glorot_normal',
strides=1)(embedded)
embedded = Dropout(0.3)(embedded)
embedded = MaxPooling1D(pool_size=pool_length)(embedded)
bi_lstm_sent = \
Bidirectional(LSTM(128, return_sequences=False))(embedded)
sent_encode = Dropout(0.3)(bi_lstm_sent)
encoder = Model(inputs=in_sentence, outputs=sent_encode)
encoder.summary()
###Output
WARNING:tensorflow:From /home/ds-nlp/anaconda3/lib/python3.7/site-packages/keras/backend/tensorflow_backend.py:66: The name tf.get_default_graph is deprecated. Please use tf.compat.v1.get_default_graph instead.
WARNING:tensorflow:From /home/ds-nlp/anaconda3/lib/python3.7/site-packages/keras/backend/tensorflow_backend.py:4479: The name tf.truncated_normal is deprecated. Please use tf.random.truncated_normal instead.
WARNING:tensorflow:From /home/ds-nlp/anaconda3/lib/python3.7/site-packages/keras/backend/tensorflow_backend.py:148: The name tf.placeholder_with_default is deprecated. Please use tf.compat.v1.placeholder_with_default instead.
WARNING:tensorflow:From /home/ds-nlp/anaconda3/lib/python3.7/site-packages/keras/backend/tensorflow_backend.py:3733: calling dropout (from tensorflow.python.ops.nn_ops) with keep_prob is deprecated and will be removed in a future version.
Instructions for updating:
Please use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`.
WARNING:tensorflow:From /home/ds-nlp/anaconda3/lib/python3.7/site-packages/keras/backend/tensorflow_backend.py:4267: The name tf.nn.max_pool is deprecated. Please use tf.nn.max_pool2d instead.
WARNING:tensorflow:From /home/ds-nlp/anaconda3/lib/python3.7/site-packages/keras/backend/tensorflow_backend.py:4432: The name tf.random_uniform is deprecated. Please use tf.random.uniform instead.
Model: "model_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
main_input1 (InputLayer) (None, 512) 0
_________________________________________________________________
embed_input (Lambda) (None, 512, 30) 0
_________________________________________________________________
conv1d_1 (Conv1D) (None, 506, 512) 108032
_________________________________________________________________
conv1d_2 (Conv1D) (None, 504, 256) 393472
_________________________________________________________________
conv1d_3 (Conv1D) (None, 502, 128) 98432
_________________________________________________________________
dropout_1 (Dropout) (None, 502, 128) 0
_________________________________________________________________
max_pooling1d_1 (MaxPooling1 (None, 251, 128) 0
_________________________________________________________________
bidirectional_1 (Bidirection (None, 256) 263168
_________________________________________________________________
dropout_2 (Dropout) (None, 256) 0
=================================================================
Total params: 863,104
Trainable params: 863,104
Non-trainable params: 0
_________________________________________________________________
###Markdown
Decoder Layer
###Code
def build_cnn():
encoded = TimeDistributed(encoder)(document)
# encoded: sentences to bi-lstm for document encoding
b_lstm_doc = \
Bidirectional(LSTM(128, return_sequences=False))(encoded)
# output = AttentionWithContext()(b_lstm_doc)
output = Dropout(0.7)(b_lstm_doc)
output = Dense(1024, activation='relu')(output)
output = Dropout(0.5)(output)
output = Dense(nb_classes, activation='softmax')(output)
model = Model(inputs=document, outputs=output)
model.summary()
return model
###Output
_____no_output_____
###Markdown
Training with AdamW (from: https://github.com/shaoanlu/AdamW-and-SGDW)
###Code
# Adam parameter
model_ccat50 = build_cnn()
b, B, T = batch_size, x.shape[0], num_epochs
wd = 0.005 * (b/B/T)**0.5
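# Note: wd follows the normalized weight decay rule used by AdamW
# (effective_decay = base_coefficient * sqrt(b / (B * T)), here 0.005 * sqrt(b/(B*T))),
# with b = batch size, B = number of training samples, T = number of epochs,
# so the decay automatically shrinks for longer runs and larger datasets.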
model_ccat50.compile(loss='categorical_crossentropy',
optimizer=AdamW(weight_decay=wd),metrics=['accuracy',f1])
# save model
model_ccat50.save_weights(model_save_dir+"saved_ccat50_model.h5")
k = 5
scores = []
num_validation_sample = len(x)//k
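# Manual k-fold split: fold i uses rows [i*n, (i+1)*n) as the validation slice
# (n = num_validation_sample) and concatenates everything before and after
# that slice as the training portion.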
for i in range(k):
print("-----------------------------")
print('Processing fold #', i)
print("-----------------------------")
val_data = x[i * num_validation_sample: (i + 1) * num_validation_sample]
val_lab = y[i * num_validation_sample: (i + 1) * num_validation_sample]
parttial_train_X_data = np.concatenate(
[x[:i * num_validation_sample],
x[(i + 1) * num_validation_sample:]], axis=0)
parttial_train_X_label = np.concatenate(
[y[:i * num_validation_sample],
y[(i + 1) * num_validation_sample:]], axis=0)
history_ccat50_atten = model_ccat50.fit(parttial_train_X_data, parttial_train_X_label,
validation_data=(val_data,val_lab),
batch_size=batch_size,epochs=num_epochs,verbose=1,
callbacks=[reduce_lr_adam,earlystop,csv_logger,checkpointer])
#Print average acc
average_acc = np.mean(history_ccat50_atten.history['acc'])
print(average_acc)
print("------------")
#Print average val_acc
average_val_acc = np.mean(history_ccat50_atten.history['val_acc'])
print(average_val_acc)
print("------------")
#Print average loss
average_loss = np.mean(history_ccat50_atten.history['loss'])
print(average_loss)
print("------------")
#Print average val_loss
average_val_loss = np.mean(history_ccat50_atten.history['val_loss'])
print(average_val_loss)
print("------------")
#Print average f1-score
average_f1 = np.mean(history_ccat50_atten.history['f1'])
print(average_f1)
print("------------")
#Print average val_f1-score
average_val_f1 = np.mean(history_ccat50_atten.history['val_f1'])
print(average_val_f1)
print("------------")
# Evaluate the Model on the 20% Validation Dataset
score = model_ccat50.evaluate(val_data, val_lab, verbose=1)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
# Evaluate the Model on the 20% Test Dataset
scorev = model_ccat50.evaluate(x_test, y_test, verbose=1)
print('Test loss: ', scorev[0])
print('Test accuracy:', scorev[1])
prediction_value = model_ccat50.predict(x_test)
predict_class = np.argmax(prediction_value, axis=-1)
y_test_integer = np.argmax(y_test, axis=1)
# Detailed analysis
from sklearn.metrics import classification_report, confusion_matrix
print(classification_report(y_test_integer, predict_class))
###Output
precision recall f1-score support
0 0.09 0.10 0.10 50
1 0.33 0.30 0.31 50
2 0.10 0.04 0.06 50
3 0.04 0.12 0.06 50
4 0.41 0.30 0.34 50
5 0.03 0.06 0.04 50
6 0.06 0.08 0.07 50
7 0.11 0.06 0.08 50
8 0.09 0.04 0.06 50
9 0.67 0.24 0.35 50
10 0.15 0.20 0.17 50
11 0.04 0.10 0.06 50
12 0.07 0.14 0.09 50
13 0.24 0.18 0.21 50
14 0.02 0.02 0.02 50
15 1.00 0.94 0.97 50
16 0.43 0.18 0.25 50
17 0.03 0.02 0.02 50
18 0.13 0.14 0.13 50
19 0.14 0.26 0.18 50
20 0.35 0.12 0.18 50
21 0.09 0.02 0.03 50
22 0.43 0.46 0.45 50
23 0.41 0.18 0.25 50
24 0.00 0.00 0.00 50
25 0.05 0.02 0.03 50
26 0.12 0.04 0.06 50
27 0.21 0.08 0.12 50
28 0.53 0.50 0.52 50
29 0.31 0.38 0.34 50
30 0.43 0.46 0.45 50
31 0.89 0.16 0.27 50
32 0.92 0.72 0.81 50
33 0.75 0.36 0.49 50
34 0.08 0.04 0.05 50
35 0.89 0.48 0.62 50
36 0.42 0.26 0.32 50
37 0.07 0.06 0.06 50
38 0.27 0.18 0.22 50
39 0.06 0.06 0.06 50
40 0.14 0.04 0.06 50
41 0.83 0.30 0.44 50
42 0.10 0.06 0.07 50
43 0.05 0.04 0.04 50
44 0.28 0.66 0.39 50
45 0.05 0.10 0.07 50
46 0.04 0.12 0.06 50
47 0.19 0.06 0.09 50
48 0.27 0.06 0.10 50
49 0.04 0.16 0.06 50
accuracy 0.19 2500
macro avg 0.27 0.19 0.21 2500
weighted avg 0.27 0.19 0.21 2500
###Markdown
Plot
###Code
modelsource = '../../save_models/ccat50/'
df_ccat50 = pd.read_csv(modelsource +'/ccat50_adamw.csv',engine='python',sep=';')
# df_movies.to_excel(modelsource+'/imbd622.xlsx',engine='xlsxwriter')
df_ccat50.head(100)
df_ccat50.plot('epoch','lr',kind='bar')
###Output
_____no_output_____
###Markdown
Training and validation accuracy for CCAT50
###Code
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
plt.plot(df_ccat50.index, df_ccat50['acc'], 'go-', color='red', label= 'acc',linewidth=3)
plt.plot(df_ccat50.index, df_ccat50['val_acc'], '--^', color='blue',label= 'val_acc',linewidth=3)
plt.legend(loc='lower right')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.savefig(modelsource + 'ccat50_acc.png')
plt.title('AT-BLSTM model accuracy for CCAT50')
###Output
_____no_output_____
###Markdown
Training and validation loss for CCAT50
###Code
plt.plot(df_ccat50.index, df_ccat50['loss'], 'go-', color='red', label= 'loss',linewidth=3)
plt.plot(df_ccat50.index, df_ccat50['val_loss'], '--^', color='blue',label= 'val_loss',linewidth=3)
plt.legend(loc='upper right')
plt.xlabel('Epoch')
plt.ylabel('loss')
plt.savefig(modelsource + 'ccat50_loss.png')
plt.title('AT-BLSTM model loss for CCAT50')
###Output
_____no_output_____
###Markdown
F1-Score
###Code
plt.plot(df_ccat50.index, df_ccat50['f1'], 'go-', color='red', label= 'f1',linewidth=3)
plt.plot(df_ccat50.index, df_ccat50['val_f1'], '--*', color='blue',label= 'val_f1',linewidth=3)
plt.legend(loc='lower right')
plt.xlabel('Epoch')
plt.ylabel('F1-score')
plt.savefig(modelsource + 'ccat50_F1-score.png')
plt.title('AT-BLSTM model F-score for CCAT50')
###Output
_____no_output_____
###Markdown
Model with cross-validation: training with SGDW
###Code
from keras.optimizers import SGD
model_SGDW = build_cnn()
b, B, T = batch_size, x.shape[0], num_epochs
wd = 0.0025 * (b/B/T)**0.5
model_SGDW.compile(loss='categorical_crossentropy',
optimizer=SGDW(weight_decay=wd, momentum=0.9),
metrics=['accuracy',f1])
# save model
model_SGDW.save_weights(model_save_dir+"saved_IMBD62_model_SGDW.h5")
k = 5
scores = []
num_validation_sample = len(x)//k
for i in range(k):
print("-----------------------------")
print('Processing fold #', i)
print("-----------------------------")
val_data = x[i * num_validation_sample: (i + 1) * num_validation_sample]
val_lab = y[i * num_validation_sample: (i + 1) * num_validation_sample]
parttial_train_X_data = np.concatenate(
[x[:i * num_validation_sample],
x[(i + 1) * num_validation_sample:]], axis=0)
parttial_train_X_label = np.concatenate(
[y[:i * num_validation_sample],
y[(i + 1) * num_validation_sample:]], axis=0)
history_imbd62_SGDW = model_SGDW.fit(parttial_train_X_data, parttial_train_X_label,
validation_data=(val_data,val_lab),
batch_size=batch_size,epochs=num_epochs,verbose=1,
callbacks=[reduce_lr_sgd,earlystop,csv_logger,checkpointer])
###Output
-----------------------------
Processing fold # 0
-----------------------------
WARNING:tensorflow:From /home/ds-nlp/anaconda3/lib/python3.7/site-packages/tensorflow/python/ops/math_grad.py:1250: add_dispatch_support.<locals>.wrapper (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.where in 2.0, which has the same broadcast rule as np.where
Train on 4000 samples, validate on 1000 samples
Epoch 1/100
4000/4000 [==============================] - 286s 71ms/step - loss: 3.9096 - acc: 0.0208 - f1: 0.0000e+00 - val_loss: 3.9450 - val_acc: 0.0000e+00 - val_f1: 0.0000e+00
Epoch 00001: val_acc improved from -inf to 0.00000, saving model to ../../save_models/ccat50/ccat50_adamw_SGDW.hdf5
Epoch 2/100
4000/4000 [==============================] - 280s 70ms/step - loss: 3.9004 - acc: 0.0263 - f1: 0.0000e+00 - val_loss: 3.9936 - val_acc: 0.0000e+00 - val_f1: 0.0000e+00
Epoch 00002: val_acc did not improve from 0.00000
Epoch 3/100
4000/4000 [==============================] - 279s 70ms/step - loss: 3.8927 - acc: 0.0238 - f1: 0.0000e+00 - val_loss: 4.0402 - val_acc: 0.0000e+00 - val_f1: 0.0000e+00
Epoch 00003: val_acc did not improve from 0.00000
Epoch 4/100
4000/4000 [==============================] - 278s 70ms/step - loss: 3.8854 - acc: 0.0243 - f1: 0.0000e+00 - val_loss: 4.0865 - val_acc: 0.0000e+00 - val_f1: 0.0000e+00
Epoch 00004: val_acc did not improve from 0.00000
Epoch 5/100
4000/4000 [==============================] - 278s 70ms/step - loss: 3.8794 - acc: 0.0248 - f1: 0.0000e+00 - val_loss: 4.1335 - val_acc: 0.0000e+00 - val_f1: 0.0000e+00
Epoch 00005: val_acc did not improve from 0.00000
Epoch 6/100
4000/4000 [==============================] - 278s 70ms/step - loss: 3.8759 - acc: 0.0225 - f1: 0.0000e+00 - val_loss: 4.1763 - val_acc: 0.0000e+00 - val_f1: 0.0000e+00
Epoch 00006: val_acc did not improve from 0.00000
Epoch 00006: early stopping
-----------------------------
Processing fold # 1
-----------------------------
Train on 4000 samples, validate on 1000 samples
Epoch 1/100
4000/4000 [==============================] - 278s 70ms/step - loss: 3.9666 - acc: 0.0185 - f1: 0.0000e+00 - val_loss: 3.8072 - val_acc: 0.0000e+00 - val_f1: 0.0000e+00
Epoch 00001: val_acc did not improve from 0.00000
Epoch 2/100
4000/4000 [==============================] - 278s 70ms/step - loss: 3.9467 - acc: 0.0223 - f1: 0.0000e+00 - val_loss: 3.8484 - val_acc: 0.0000e+00 - val_f1: 0.0000e+00
Epoch 00002: val_acc did not improve from 0.00000
Epoch 3/100
4000/4000 [==============================] - 278s 70ms/step - loss: 3.9321 - acc: 0.0250 - f1: 0.0000e+00 - val_loss: 3.8824 - val_acc: 0.0000e+00 - val_f1: 0.0000e+00
Epoch 00003: val_acc did not improve from 0.00000
Epoch 4/100
4000/4000 [==============================] - 278s 70ms/step - loss: 3.9219 - acc: 0.0220 - f1: 0.0000e+00 - val_loss: 3.9121 - val_acc: 0.0000e+00 - val_f1: 0.0000e+00
Epoch 00004: val_acc did not improve from 0.00000
Epoch 5/100
4000/4000 [==============================] - 278s 70ms/step - loss: 3.9140 - acc: 0.0243 - f1: 0.0000e+00 - val_loss: 3.9398 - val_acc: 0.0000e+00 - val_f1: 0.0000e+00
Epoch 00005: val_acc did not improve from 0.00000
Epoch 6/100
4000/4000 [==============================] - 278s 70ms/step - loss: 3.9073 - acc: 0.0267 - f1: 0.0000e+00 - val_loss: 3.9658 - val_acc: 0.0000e+00 - val_f1: 0.0000e+00
Epoch 00006: val_acc did not improve from 0.00000
Epoch 00006: early stopping
-----------------------------
Processing fold # 2
-----------------------------
Train on 4000 samples, validate on 1000 samples
Epoch 1/100
4000/4000 [==============================] - 279s 70ms/step - loss: 3.9290 - acc: 0.0123 - f1: 0.0000e+00 - val_loss: 3.8630 - val_acc: 0.0500 - val_f1: 0.0000e+00
Epoch 00001: val_acc improved from 0.00000 to 0.05000, saving model to ../../save_models/ccat50/ccat50_adamw_SGDW.hdf5
Epoch 2/100
4000/4000 [==============================] - 278s 70ms/step - loss: 3.9263 - acc: 0.0132 - f1: 0.0000e+00 - val_loss: 3.8783 - val_acc: 0.0500 - val_f1: 0.0000e+00
Epoch 00002: val_acc did not improve from 0.05000
Epoch 3/100
4000/4000 [==============================] - 279s 70ms/step - loss: 3.9211 - acc: 0.0150 - f1: 0.0000e+00 - val_loss: 3.8930 - val_acc: 0.0500 - val_f1: 0.0000e+00
Epoch 00003: val_acc did not improve from 0.05000
Epoch 4/100
4000/4000 [==============================] - 278s 69ms/step - loss: 3.9174 - acc: 0.0130 - f1: 0.0000e+00 - val_loss: 3.9070 - val_acc: 0.0500 - val_f1: 0.0000e+00
Epoch 00004: val_acc did not improve from 0.05000
Epoch 5/100
4000/4000 [==============================] - 278s 70ms/step - loss: 3.9133 - acc: 0.0160 - f1: 0.0000e+00 - val_loss: 3.9196 - val_acc: 0.0500 - val_f1: 0.0000e+00
Epoch 00005: val_acc did not improve from 0.05000
Epoch 6/100
4000/4000 [==============================] - 278s 69ms/step - loss: 3.9117 - acc: 0.0108 - f1: 0.0000e+00 - val_loss: 3.9313 - val_acc: 0.0500 - val_f1: 0.0000e+00
Epoch 00006: val_acc did not improve from 0.05000
Epoch 00006: early stopping
-----------------------------
Processing fold # 3
-----------------------------
Train on 4000 samples, validate on 1000 samples
Epoch 1/100
4000/4000 [==============================] - 278s 69ms/step - loss: 3.9185 - acc: 0.0255 - f1: 0.0000e+00 - val_loss: 3.8990 - val_acc: 0.0000e+00 - val_f1: 0.0000e+00
Epoch 00001: val_acc did not improve from 0.05000
Epoch 2/100
4000/4000 [==============================] - 278s 69ms/step - loss: 3.9156 - acc: 0.0217 - f1: 0.0000e+00 - val_loss: 3.9060 - val_acc: 0.0000e+00 - val_f1: 0.0000e+00
Epoch 00002: val_acc did not improve from 0.05000
Epoch 3/100
4000/4000 [==============================] - 278s 70ms/step - loss: 3.9143 - acc: 0.0220 - f1: 0.0000e+00 - val_loss: 3.9126 - val_acc: 0.0000e+00 - val_f1: 0.0000e+00
Epoch 00003: val_acc did not improve from 0.05000
Epoch 4/100
4000/4000 [==============================] - 279s 70ms/step - loss: 3.9125 - acc: 0.0225 - f1: 0.0000e+00 - val_loss: 3.9195 - val_acc: 0.0000e+00 - val_f1: 0.0000e+00
Epoch 00004: val_acc did not improve from 0.05000
Epoch 5/100
4000/4000 [==============================] - 279s 70ms/step - loss: 3.9110 - acc: 0.0170 - f1: 0.0000e+00 - val_loss: 3.9263 - val_acc: 0.0000e+00 - val_f1: 0.0000e+00
Epoch 00005: val_acc did not improve from 0.05000
Epoch 6/100
4000/4000 [==============================] - 279s 70ms/step - loss: 3.9090 - acc: 0.0248 - f1: 0.0000e+00 - val_loss: 3.9325 - val_acc: 0.0000e+00 - val_f1: 0.0000e+00
Epoch 00006: val_acc did not improve from 0.05000
Epoch 00006: early stopping
-----------------------------
Processing fold # 4
-----------------------------
Train on 4000 samples, validate on 1000 samples
Epoch 1/100
4000/4000 [==============================] - 279s 70ms/step - loss: 3.9251 - acc: 0.0135 - f1: 0.0000e+00 - val_loss: 3.8675 - val_acc: 0.0500 - val_f1: 0.0000e+00
Epoch 00001: val_acc did not improve from 0.05000
Epoch 2/100
4000/4000 [==============================] - 278s 70ms/step - loss: 3.9242 - acc: 0.0097 - f1: 0.0000e+00 - val_loss: 3.8713 - val_acc: 0.0500 - val_f1: 0.0000e+00
Epoch 00002: val_acc did not improve from 0.05000
Epoch 3/100
4000/4000 [==============================] - 278s 70ms/step - loss: 3.9237 - acc: 0.0125 - f1: 0.0000e+00 - val_loss: 3.8750 - val_acc: 0.0500 - val_f1: 0.0000e+00
Epoch 00003: val_acc did not improve from 0.05000
Epoch 4/100
4000/4000 [==============================] - 278s 70ms/step - loss: 3.9225 - acc: 0.0150 - f1: 0.0000e+00 - val_loss: 3.8787 - val_acc: 0.0500 - val_f1: 0.0000e+00
Epoch 00004: val_acc did not improve from 0.05000
Epoch 5/100
4000/4000 [==============================] - 279s 70ms/step - loss: 3.9222 - acc: 0.0118 - f1: 0.0000e+00 - val_loss: 3.8823 - val_acc: 0.0500 - val_f1: 0.0000e+00
Epoch 00005: val_acc did not improve from 0.05000
Epoch 6/100
4000/4000 [==============================] - 278s 69ms/step - loss: 3.9218 - acc: 0.0112 - f1: 0.0000e+00 - val_loss: 3.8857 - val_acc: 0.0500 - val_f1: 0.0000e+00
Epoch 00006: val_acc did not improve from 0.05000
Epoch 00006: early stopping
###Markdown
Result
###Code
#Print average acc
average_acc = np.mean(history_imbd62_SGDW.history['acc'])
print(average_acc)
print("------------")
#Print average val_acc
average_val_acc = np.mean(history_imbd62_SGDW.history['val_acc'])
print(average_val_acc)
print("------------")
#Print average loss
average_loss = np.mean(history_imbd62_SGDW.history['loss'])
print(average_loss)
print("------------")
#Print average val_loss
average_val_loss = np.mean(history_imbd62_SGDW.history['val_loss'])
print(average_val_loss)
print("------------")
#Print average f1-score
average_f1 = np.mean(history_imbd62_SGDW.history['f1'])
print(average_f1)
print("------------")
#Print average val_f1-score
average_val_f1 = np.mean(history_imbd62_SGDW.history['val_f1'])
print(average_val_f1)
print("------------")
# Evaluate the Model on the 20% Test Dataset
scorev = model_SGDW.evaluate(x_test, y_test, verbose=1)
print('Test loss: ', scorev[0])
print('Test accuracy:', scorev[1])
prediction_value = model_SGDW.predict(x_test)
predict_class = np.argmax(prediction_value, axis=-1)
y_test_integer = np.argmax(y_test, axis=1)
# Detailed analysis
from sklearn.metrics import classification_report, confusion_matrix
print(classification_report(y_test_integer, predict_class))
modelsource = '../../save_models/imbd62/'
df_movies_SGD = pd.read_csv(modelsource +'/imbd62_SGDW.csv',
engine='python',
sep=';')
df_movies_SGD.head(3)
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
plt.plot(df_movies_SGD.index, df_movies_SGD['acc'], 'go-', color='red', label= 'acc',linewidth=3)
plt.plot(df_movies_SGD.index, df_movies_SGD['val_acc'], '--^', color='blue',label= 'val_acc',linewidth=3)
plt.legend(loc='lower right')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.savefig(modelsource + 'imdb62_SGDW_acc.png')
plt.title('AT-BLSTM model accuracy for IMBD62')
plt.plot(df_movies_SGD.index, df_movies_SGD['loss'], 'go-', color='red', label= 'loss',linewidth=3)
plt.plot(df_movies_SGD.index, df_movies_SGD['val_loss'], '--^', color='blue',label= 'val_loss',linewidth=3)
plt.legend(loc='upper right')
plt.xlabel('Epoch')
plt.ylabel('loss')
plt.savefig(modelsource + 'imdb62_SGDW_loss.png')
plt.title('AT-BLSTM model loss for IMBD62')
plt.plot(df_movies_SGD.index, df_movies_SGD['f1'], 'go-', color='red', label= 'f1',linewidth=3)
plt.plot(df_movies_SGD.index, df_movies_SGD['val_f1'], '--*', color='blue',label= 'val_f1',linewidth=3)
plt.legend(loc='lower right')
plt.xlabel('Epoch')
plt.ylabel('F1-score')
plt.savefig(modelsource + 'imdb62_SGDW_F1-score.png')
plt.title('AT-BLSTM model F-score for IMBD62')
###Output
_____no_output_____
###Markdown
SGD without earlystop
###Code
model_SGD = build_cnn()
model_SGD.compile(loss='categorical_crossentropy',
optimizer=SGD(momentum=0.9),
metrics=['accuracy',f1])
# save model
model_SGD.save_weights(model_save_dir+"saved_IMBD62_SGD_model.h5")
history_imbd62_atten = model_SGD.fit(x_train, y_train,
validation_data=(x_test,y_test),
batch_size=512,epochs=80,
verbose=1,callbacks=[reduce_lr_adam,csv_logger])
###Output
_____no_output_____ |
Thinkful Program/Algorithms and Data Structures/Thinkful 29.5.ipynb | ###Markdown
Problem 1 If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23.Find the sum of all the multiples of 3 or 5 below 1000.
###Code
total_sum = 0
for i in range(1, 1000):
if (i % 3 == 0 or i % 5 == 0):
total_sum = total_sum + i
print(total_sum)
###Output
233168
###Markdown
Problem 2 Each new term in the Fibonacci sequence is generated by adding the previous two terms. By starting with 1 and 2, the first 10 terms will be:1, 2, 3, 5, 8, 13, 21, 34, 55, 89, ...By considering the terms in the Fibonacci sequence whose values do not exceed four million, find the sum of the even-valued terms.
###Code
n1 = 1
n2 = 1
summ = 0
while n1 < 4000000:
n1, n2 = n2 , n1+n2
if n1 % 2 == 0:
summ += n1
print(summ)
###Output
4613732
###Markdown
Problem 3 The prime factors of 13195 are 5, 7, 13 and 29.What is the largest prime factor of the number 600851475143 ?
###Code
num = 600851475143
i = 2
while i * i < num:
while num % i == 0:
num = num // i
i = i + 1
print(num)
###Output
6857
###Markdown
Problem 4 A palindromic number reads the same both ways. The largest palindrome made from the product of two 2-digit numbers is 9009 = 91 × 99.Find the largest palindrome made from the product of two 3-digit numbers.
###Code
n = 0
for a in range(999, 100, -1):
for b in range(a, 100, -1):
x = a * b
if x > n:
s = str(a * b)
if s == s[::-1]:
n = a * b
print(n)
###Output
906609
###Markdown
Problem 5 2520 is the smallest number that can be divided by each of the numbers from 1 to 10 without any remainder.What is the smallest positive number that is evenly divisible by all of the numbers from 1 to 20?
###Code
def gcd(x,y):
return y and gcd(y, x % y) or x
def lcm(x,y):
return x * y // gcd(x,y)
n = 1
for i in range(1, 21):
n = lcm(n, i)
print(n)
###Output
232792560
###Markdown
Problem 6 The sum of the squares of the first ten natural numbers is 1² + 2² + ... + 10² = 385. The square of the sum of the first ten natural numbers is (1 + 2 + ... + 10)² = 55² = 3025. Hence the difference between the sum of the squares of the first ten natural numbers and the square of the sum is 3025 − 385 = 2640. Find the difference between the sum of the squares of the first one hundred natural numbers and the square of the sum.
###Code
r = range(1, 101)
a = sum(r)
print(a * a - sum(i*i for i in r))
###Output
25164150
###Markdown
Problem 7 By listing the first six prime numbers: 2, 3, 5, 7, 11, and 13, we can see that the 6th prime is 13.What is the 10 001st prime number?
###Code
def func():
D = {}
q = 2
while 1:
if q not in D:
yield q
D[q*q] = [q]
else:
for p in D[q]:
D.setdefault(p + q,[]).append(p)
del D[q]
q += 1
def nth_prime(n):
for i, prime in enumerate(func()):
if i == n - 1:
return prime
print(nth_prime(10001))
###Output
104743
###Markdown
Problem 8 The four adjacent digits in the 1000-digit number that have the greatest product are 9 × 9 × 8 × 9 = 5832.73167176531330624919225119674426574742355349194934 96983520312774506326239578318016984801869478851843 85861560789112949495459501737958331952853208805511 12540698747158523863050715693290963295227443043557 66896648950445244523161731856403098711121722383113 62229893423380308135336276614282806444486645238749 30358907296290491560440772390713810515859307960866 70172427121883998797908792274921901699720888093776 65727333001053367881220235421809751254540594752243 52584907711670556013604839586446706324415722155397 53697817977846174064955149290862569321978468622482 83972241375657056057490261407972968652414535100474 82166370484403199890008895243450658541227588666881 16427171479924442928230863465674813919123162824586 17866458359124566529476545682848912883142607690042 24219022671055626321111109370544217506941658960408 07198403850962455444362981230987879927244284909188 84580156166097919133875499200524063689912560717606 05886116467109405077541002256983155200055935729725 71636269561882670428252483600823257530420752963450Find the thirteen adjacent digits in the 1000-digit number that have the greatest product. What is the value of this product?
###Code
import time
start = time.time()
num = '\
73167176531330624919225119674426574742355349194934\
96983520312774506326239578318016984801869478851843\
85861560789112949495459501737958331952853208805511\
12540698747158523863050715693290963295227443043557\
66896648950445244523161731856403098711121722383113\
62229893423380308135336276614282806444486645238749\
30358907296290491560440772390713810515859307960866\
70172427121883998797908792274921901699720888093776\
65727333001053367881220235421809751254540594752243\
52584907711670556013604839586446706324415722155397\
53697817977846174064955149290862569321978468622482\
83972241375657056057490261407972968652414535100474\
82166370484403199890008895243450658541227588666881\
16427171479924442928230863465674813919123162824586\
17866458359124566529476545682848912883142607690042\
24219022671055626321111109370544217506941658960408\
07198403850962455444362981230987879927244284909188\
84580156166097919133875499200524063689912560717606\
05886116467109405077541002256983155200055935729725\
71636269561882670428252483600823257530420752963450'
biggest = 0
i = 0
while i < len(num) - 12:
one = int(num[i])
two = int(num[i+1])
thr = int(num[i+2])
fou = int(num[i+3])
fiv = int(num[i+4])
six = int(num[i+5])
seven = int(num[i+6])
eight = int(num[i+7])
nine = int(num[i+8])
ten = int(num[i+9])
eleven = int(num[i+10])
twelve = int(num[i+11])
    thirteen = int(num[i+12])
product = one*two*thr*fou*fiv*six*seven*eight*nine*ten*eleven*twelve*thirteen
if product > biggest:
biggest = product
i = i + 1
print(biggest)
###Output
23514624000
###Markdown
Problem 9 A Pythagorean triplet is a set of three natural numbers, a < b < c, for which a² + b² = c². For example, 3² + 4² = 9 + 16 = 25 = 5². There exists exactly one Pythagorean triplet for which a + b + c = 1000. Find the product abc.
###Code
for a in range(1, 1000):
for b in range(a, 1000):
c = 1000 - a - b
if c > 0:
if c*c == a*a + b*b:
print (a*b*c)
###Output
31875000
###Markdown
Problem 10 The sum of the primes below 10 is 2 + 3 + 5 + 7 = 17.Find the sum of all the primes below two million.
###Code
def eratosthenes2(n):
#Declare a set - an unordered collection of unique elements
multiples = set()
#Iterate through [2,2000000]
for i in range(2, n+1):
#If i has not been eliminated already
if i not in multiples:
#Yay prime!
yield i
#Add multiples of the prime in the range to the 'invalid' set
multiples.update(range(i*i, n+1, i))
#Now sum it up
iter = 0
ml = list(eratosthenes2(2000000))
for x in ml:
iter = int(x) + iter
print(iter)
###Output
142913828922
|
Semana-7/main.ipynb | ###Markdown
Challenge 6. In this challenge, we will practice _feature engineering_, one of the most important and labor-intensive processes in ML. We will use the [Countries of the world](https://www.kaggle.com/fernandol/countries-of-the-world) _data set_, which contains data on the world's 227 countries, with information on population size, area, immigration and production sectors.> Note: Please do not modify the names of the answer functions. General _setup_
###Code
import pandas as pd
import numpy as np
import seaborn as sns
import sklearn as sk
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.preprocessing import KBinsDiscretizer, OneHotEncoder, StandardScaler
# # Algumas configurações para o matplotlib.
# %matplotlib inline
# from IPython.core.pylabtools import figsize
# figsize(12, 8)
# sns.set()
countries = pd.read_csv("countries.csv")
new_column_names = [
"Country", "Region", "Population", "Area", "Pop_density", "Coastline_ratio",
"Net_migration", "Infant_mortality", "GDP", "Literacy", "Phones_per_1000",
"Arable", "Crops", "Other", "Climate", "Birthrate", "Deathrate", "Agriculture",
"Industry", "Service"
]
countries.columns = new_column_names
countries.head(5)
###Output
_____no_output_____
###Markdown
Observations. This _data set_ still needs some initial adjustments. First, note that the numeric variables use a comma as the decimal separator and are encoded as strings. Fix this before continuing: convert these variables to numeric types appropriately. In addition, the `Country` and `Region` variables have extra spaces at the beginning and end of the string. You can use the `str.strip()` method to remove these spaces. Start your analysis from here
###Code
countries.shape
countries.info()
countries['Country'] = countries['Country'].str.strip()
countries['Region'] = countries['Region'].str.strip()
for column in countries.columns:
if countries[column].dtype == np.dtype('O'):
countries[column] = countries[column].str.replace(',', '.')
try:
countries[column] = pd.to_numeric(countries[column])
except:
pass
countries.head()
###Output
_____no_output_____
###Markdown
Question 1. Which regions (the `Region` variable) are present in the _data set_? Return a list of the unique regions in the _data set_, with leading and trailing spaces removed (but keep punctuation: period, hyphen, etc.), sorted in alphabetical order.
###Code
def q1():
return sorted(countries['Region'].unique())
q1()
###Output
_____no_output_____
###Markdown
Question 2. Discretizing the `Pop_density` variable into 10 bins with `KBinsDiscretizer`, using the `ordinal` encode and the `quantile` strategy, how many countries fall above the 90th percentile? Answer as a single integer scalar.
###Code
def q2():
discretizer = KBinsDiscretizer(n_bins=10, encode='ordinal', strategy='quantile')
pop_density = discretizer.fit_transform(countries[['Pop_density']])
return int(sum(pop_density[:,0] == 9))
q2()
###Output
_____no_output_____
###Markdown
Question 3. If we encoded the `Region` and `Climate` variables using _one-hot encoding_, how many new attributes would be created? Answer as a single scalar.
###Code
def q3():
one_hot_encoder = OneHotEncoder(sparse=False, dtype=np.int)
region_climate_one_hot = one_hot_encoder.fit_transform(countries[["Region", "Climate"]].fillna(0))
return region_climate_one_hot.shape[1]
q3()
###Output
_____no_output_____
###Markdown
Question 4. Apply the following _pipeline_: (1) fill the `int64` and `float64` variables with their respective medians; (2) standardize these variables. After applying the _pipeline_ described above to the data (only to variables of the specified types), apply the same _pipeline_ (or `ColumnTransformer`) to the data point below. What is the value of the `Arable` variable after the _pipeline_? Answer as a single float rounded to three decimal places.
###Code
test_country = [
'Test Country', 'NEAR EAST', -0.19032480757326514,
-0.3232636124824411, -0.04421734470810142, -0.27528113360605316,
0.13255850810281325, -0.8054845935643491, 1.0119784924248225,
0.6189182532646624, 1.0074863283776458, 0.20239896852403538,
-0.043678728558593366, -0.13929748680369286, 1.3163604645710438,
-0.3699637766938669, -0.6149300604558857, -0.854369594993175,
0.263445277972641, 0.5712416961268142
]
def q4():
test_country_df = pd.DataFrame([test_country], columns=countries.columns)
nums_pipeline = Pipeline(steps=[
('fill_median', SimpleImputer(missing_values=np.nan, strategy='median')),
('standardize', StandardScaler())])
nums = nums_pipeline.fit_transform(countries.iloc[:,2:])
nums = nums_pipeline.transform(test_country_df.iloc[:,2:])
return float(pd.DataFrame([nums[0]], columns=countries.columns[2:])['Arable'].round(3))
q4()
###Output
_____no_output_____
###Markdown
Question 5. Find the number of _outliers_ in the `Net_migration` variable according to the _boxplot_ method, that is, using the rule:$$x \notin [Q1 - 1.5 \times \text{IQR}, Q3 + 1.5 \times \text{IQR}] \Rightarrow x \text{ is an outlier}$$counting those in the lower group and in the upper group. Should you remove the observations considered _outliers_ by this method from the analysis? Answer as a tuple of three elements `(outliers_below, outliers_above, would_remove?)` ((int, int, bool)).
###Code
def q5():
q1 = countries['Net_migration'].quantile(0.25)
q3 = countries['Net_migration'].quantile(0.75)
iqr = q3 - q1
outlier_interval_iqr = [q1 - 1.5 * iqr, q3 + 1.5 * iqr]
outliers_inferior = countries[countries['Net_migration'] < outlier_interval_iqr[0]]
outliers_superior = countries[countries['Net_migration'] > outlier_interval_iqr[1]]
return (outliers_inferior.shape[0], outliers_superior.shape[0], False)
q5()
###Output
_____no_output_____
###Markdown
Question 6. For questions 6 and 7, use the `fetch_20newsgroups` test-dataset loader from `sklearn`. Consider loading the following categories and the `newsgroups` dataset:```categories = ['sci.electronics', 'comp.graphics', 'rec.motorcycles']newsgroup = fetch_20newsgroups(subset="train", categories=categories, shuffle=True, random_state=42)```Apply `CountVectorizer` to the `newsgroups` _data set_ and find the number of times the word _phone_ appears in the corpus. Answer as a single scalar.
###Code
def q6():
categories = ['sci.electronics', 'comp.graphics', 'rec.motorcycles']
newsgroups = fetch_20newsgroups(subset="train", categories=categories, shuffle=True, random_state=42)
count_vectorizer = CountVectorizer()
newsgroups_count = count_vectorizer.fit_transform(newsgroups.data)
count_df = pd.DataFrame(newsgroups_count.toarray(), columns=count_vectorizer.get_feature_names())
return int(count_df['phone'].sum())
q6()
###Output
_____no_output_____
###Markdown
Question 7. Apply `TfidfVectorizer` to the `newsgroups` _data set_ and find the TF-IDF of the word _phone_. Answer as a single scalar rounded to three decimal places.
###Code
def q7():
categories = ['sci.electronics', 'comp.graphics', 'rec.motorcycles']
newsgroups = fetch_20newsgroups(subset="train", categories=categories, shuffle=True, random_state=42)
count_vectorizer = CountVectorizer()
newsgroups_count = count_vectorizer.fit_transform(newsgroups.data)
words_idx = count_vectorizer.vocabulary_.get('phone')
tfidf_vectorizer = TfidfVectorizer()
tfidf_vectorizer.fit(newsgroups.data)
newsgroups_tfidf_vectorized = tfidf_vectorizer.transform(newsgroups.data)
return float(newsgroups_tfidf_vectorized[:, words_idx].toarray().sum().round(3))
q7()
###Output
_____no_output_____ |
SR-controlT2-CameraRecord.ipynb | ###Markdown
Display and record via HOZAN USB camera 1) Test for camera
###Code
# Test for the availability of camera
import cv2
import time
SRvideo = cv2.VideoCapture(1)
SRvideo.set(cv2.CAP_PROP_FPS,30)
# the fps of the camera is between 1 to 40, which can be configured.
if SRvideo.isOpened():
print('The camera is available')
print(SRvideo.get(5))
SRvideo.release()
# Open and display via the USB camera
import cv2
# read from the camera via creating a VideoCapture object
SRvideo = cv2.VideoCapture(1)
if not SRvideo.isOpened():
print("Cannot read the camera")
while SRvideo.isOpened():
ret, frame = SRvideo.read()
if ret == True:
# Display the resulting frame
frame = cv2.rotate(frame, cv2.ROTATE_180)
cv2.imshow('SRcrawling',frame)
# Press Q on keyboard to stop recording
if cv2.waitKey(1) & 0xFF == ord('q'):
break
else:
break
# Finally, release the capture and write object, then close windows
SRvideo.release()
cv2.destroyAllWindows()
# open the USB camera, show and record the video frame
import cv2
import numpy as np
# read from the camera via creating a VideoCapture object
SRvideo = cv2.VideoCapture(1)
if not SRvideo.isOpened():
print("Cannot read the camera")
# save the video with the configuration of video encode format, fps, frame size,
frame_width = int(SRvideo.get(3))
frame_height = int(SRvideo.get(4))
SRvideo.set(cv2.CAP_PROP_FPS,40)
# shift the width and height considering the camera rotation
SRwri = cv2.VideoWriter('SRdeformationTest.avi',cv2.VideoWriter_fourcc(*'MJPG'), 40, (frame_width,frame_height))
while SRvideo.isOpened():
ret, frame = SRvideo.read()
if ret == True:
# rotate the frame, display and video save
frame = cv2.rotate(frame, cv2.ROTATE_180)
cv2.imshow('SRcrawling',frame)
SRwri.write(frame)
# Press Q on keyboard to stop recording
if cv2.waitKey(1) & 0xFF == ord('q'):
break
else:
break
# Finally, release the capture and write object, then close windows
SRvideo.release()
SRwri.release()
cv2.destroyAllWindows()
###Output
_____no_output_____
###Markdown
2) Display video in GUI window
###Code
# Integration in a whole class
import cv2
import tkinter
from tkinter import ttk
import PIL.Image, PIL.ImageTk
import time
class Application:
def __init__(self, window, window_title, video_source):
# USB camera source
self.SRvideo = cv2.VideoCapture(video_source)
if self.SRvideo.isOpened():
print('camera is available')
# the size of HOZAN USB camera is 640*480
self.width = int(self.SRvideo.get(cv2.CAP_PROP_FRAME_WIDTH)) # cv2.CAP_PROP_FRAME_WIDTH = 3
self.height = int(self.SRvideo.get(cv2.CAP_PROP_FRAME_HEIGHT)) # cv2.CAP_PROP_FRAME_HEIGHT = 4
self.SRvideo.set(cv2.CAP_PROP_FPS, 40) # cv2.CAP_PROP_FPD = 5
print(self.SRvideo.get(5))
# shift the height and width considering the camera rotation
self.SRwri = cv2.VideoWriter('SRcrawling.avi', cv2.VideoWriter_fourcc(*'MJPG'), 40, (self.width, self.height))
# configure the GUI window
self.window = window
self.window.title(window_title)
self.window.geometry('1280x480')
# Canvas configuration: inlucding video frame, button and pressure sensor curves
self.canvas = tkinter.Canvas(window, width = self.width, height = self.height)
self.canvas.pack(side=tkinter.LEFT)
        # use the themed ttk button style; it looks better when a ttk.Frame widget is added first.
self.btn_snapshot = ttk.Button(window, text = 'Snapshot', width = 50, command = self.snapshot)
self.btn_snapshot.pack(anchor = tkinter.CENTER, expand = True)
self.btn_stop = ttk.Button(window, text = 'Stop',width = 50, command = self.stop)
self.btn_stop.pack(anchor = tkinter.CENTER, expand = True)
        # Update the frame every 10 milliseconds
self.delay = 10
self.update()
self.window.mainloop()
# function for the frame update
def update(self):
self.ret, self.frame = self.SRvideo.read()
self.frame = cv2.cvtColor(self.frame, cv2.COLOR_BGR2RGB)
self.frame = cv2.rotate(self.frame, cv2.ROTATE_180)
if self.ret:
self.photo = PIL.ImageTk.PhotoImage(PIL.Image.fromarray(self.frame))
self.canvas.create_image(0, 0, image = self.photo, anchor = tkinter.NW)
self.window.after(self.delay, self.update)
self.SRwri.write(cv2.cvtColor(self.frame, cv2.COLOR_RGB2BGR))
# function for snapshot
def snapshot(self):
self.ret, self.frame = self.SRvideo.read()
self.frame = cv2.cvtColor(self.frame, cv2.COLOR_BGR2RGB)
self.frame = cv2.rotate(self.frame, cv2.ROTATE_180)
if self.ret:
cv2.imwrite("SRcrawling-" + time.strftime("%Y-%m-%d-%H-%M-%S") + ".jpg", cv2.cvtColor(self.frame, cv2.COLOR_RGB2BGR))
def stop(self):
self.SRvideo.release()
self.SRwri.release()
cv2.destroyAllWindows()
# create a window and pass it to the Application object
Application(tkinter.Tk(), "SRcrawling", 1)
# use different class to express different function
import tkinter
import cv2
import PIL.Image, PIL.ImageTk
import time
# main class for the GUI display
class Application:
def __init__(self, window, window_title, video_source=1):
# open the HOZAN USB camera(1, the computer camera is 0)
self.SRvid = SRVideoCapture(video_source)
# configure the GUI window
self.window = window
self.window.title(window_title)
# self.window.geometry('1280*550')
# Create a canvas that can fit the above video source size
self.canvas = tkinter.Canvas(window, width = self.SRvid.width, height = self.SRvid.height)
self.canvas.pack()
# Button that lets the user take a snapshot
self.btn_snapshot=tkinter.Button(window, text="Snapshot", width=50, command=self.snapshot)
self.btn_snapshot.pack(anchor=tkinter.CENTER, expand=True)
# After it is called once, the update method will be automatically called every delay milliseconds
self.delay = 50
self.update()
self.window.mainloop()
# update GUI window
def update(self):
# Get a frame from the video source
ret, frame = self.SRvid.get_frame()
if ret:
self.photo = PIL.ImageTk.PhotoImage(image = PIL.Image.fromarray(frame))
self.canvas.create_image(0, 0, image = self.photo, anchor = tkinter.NW)
self.SRvid.SRwri.write(cv2.cvtColor(frame, cv2.COLOR_RGB2BGR))
self.window.after(self.delay, self.update)
# snapshot during locomotion
def snapshot(self):
# Get a frame from the video source
ret, frame = self.SRvid.get_frame()
if ret:
cv2.imwrite("SRcrawling-" + time.strftime("%d-%m-%Y-%H-%M-%S") + ".jpg", cv2.cvtColor(frame, cv2.COLOR_RGB2BGR))
# Video management class
class SRVideoCapture:
def __init__(self, video_source=1):
self.SRvid = cv2.VideoCapture(video_source)
if not self.SRvid.isOpened():
print('Cannot read the camera')
# the size of HOZAN USB camera is 640*480
self.width = int(self.SRvid.get(3)) # cv2.CAP_PROP_FRAME_WIDTH = 3
self.height = int(self.SRvid.get(4)) # cv2.CAP_PROP_FRAME_HEIGHT = 4
# shift the height and width considering the camera rotation
self.SRwri = cv2.VideoWriter('SRcrawling.avi', cv2.VideoWriter_fourcc(*'MJPG'), 20, (self.width, self.height))
# frame configuration
def get_frame(self):
ret, frame = self.SRvid.read()
frame = cv2.rotate(frame, cv2.ROTATE_180)
frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
if ret:
return (ret, frame)
else:
return (ret, None)
# Release the video source when the object is destroyed
def __del__(self):
if self.SRvid.isOpened():
self.SRvid.release()
self.SRwri.release()
# Create a window and pass it to the Application object
Application(tkinter.Tk(), "SRcrawling")
###Output
_____no_output_____ |
main/lc-quad/t5-2021-10-14/dataset-create.ipynb | ###Markdown
Goal: Trim + Replace LowerCase + Remove weirdly long Question + replaceQ/P
###Code
df_property = pd.read_csv('../../../data/kdwd/2021-10-11/property.csv')
df_item = pd.read_csv('../../../data/kdwd/2021-10-11/item.csv')
df_property2 = pd.read_csv('../../../data/kdwd/2021-10-11/property2.csv')
df_item2 = pd.read_csv('../../../data/kdwd/2021-10-11/item2.csv')
df_item3 = pd.read_csv('../../../data/kdwd/2021-10-11/item3.csv')
df_property[df_property.id==1082].en.iloc[0]
# id to str
couldnotfind = []
def encode_ids(x, df, df2):
try:
return df[df.id==int(x[1:])].en.iloc[0]
except:
try:
return df2[df2.id==int(x[1:])].en.iloc[0]
except:
try:
return df_item3[df_item3.id==int(x[1:])].en.iloc[0]
except:
couldnotfind.append(x)
return x
encode_ids('Q488651', df_item, df_item2)
def encode_props(qry):
qry = replace_all(qry, rep_dict).strip()
# Q
for m in re.finditer(":Q\d+", qry):
x = m.group(0)[1:]
newstring = f'<{encode_ids(x, df_item, df_item2)}>'
# newstring = encode_ids(x, df_item, df_item2).replace(" ", "_")
qry = qry.replace(x, newstring)
# P
for m in re.finditer(":P\d+", qry):
x = m.group(0)[1:]
newstring = f'<{encode_ids(x, df_property, df_property2)}>'
qry = qry.replace(x, newstring)
return qry
# Test
encode_props('SELECT ?obj WHERE { wd:Q1045 p:P1082 ?s . ?s ps:P1082 ?obj . ?s pq:P585 ?x filter(contains(YEAR(?x),\'2009\')) }')
# "select ?obj where [ wd:somalia p:population ?s . ?s ps:population ?obj . ?s pq:point_in_time ?x filter(contains(YEAR(?x),'2009')) ]"
encode_props("SELECT DISTINCT ?sbj ?sbj_label WHERE { ?sbj wdt:P31 wd:Q58863414 . ?sbj wdt:P2541 wd:Q62900839 . ?sbj rdfs:label ?sbj_label . FILTER(CONTAINS(lcase(?sbj_label), 'model')) . FILTER (lang(?sbj_label) = 'en') } LIMIT 25")
def prepare(ds):
col = 'translation'
df = pd.DataFrame(columns=[col])
for d in tqdm(ds):
try:
qry = encode_props(d['sparql_wikidata'])
if d['question'] is not None and d['question']!=[] and len(d['question'])<250:
df = df.append({col: {'en':replace_all(d['question'], rep_dict2).strip(), 'sparql': qry}}, ignore_index=True)
if d['paraphrased_question'] is not None and d['paraphrased_question']!=[] and len(d['paraphrased_question'])<250:
df = df.append({col: {'en':replace_all(d['paraphrased_question'], rep_dict2).strip(), 'sparql': qry}}, ignore_index=True)
except Exception as e: print(e)
return df
df_test = prepare(raw_datasets["test"])
df_train = prepare(raw_datasets["train"])
print(df_test.shape)
print(df_train.shape)
df_all = pd.concat([df_train, df_test])
print(df_all.shape)
# shuffling all
df_all = df_all.sample(frac = 1).reset_index(drop=True)
print(df_all.shape)
from sklearn.model_selection import train_test_split
df_train2, df_test2 = train_test_split(df_all, test_size=0.2)
print(df_test2.shape)
print(df_train2.shape)
ds_train = Dataset.from_pandas(df_train2)
ds_test = Dataset.from_pandas(df_test2)
pd.options.display.max_colwidth = 100
df_test.head()
df_test.iloc[0].translation
mother_ds = DatasetDict({'train': ds_train, 'test':ds_test})
ds_path='../../../data/dataset/lc-quad-wikidata-2021-10-15'
mother_ds.save_to_disk(ds_path)
df_train2.to_csv(f'{ds_path}/train.csv')
df_test2.to_csv(f'{ds_path}/test.csv')
###Output
_____no_output_____ |
Scipy_Tutorial.ipynb | ###Markdown
Section 4 of:https://deeplearningcourses.com/c/deep-learning-prerequisites-the-numpy-stack-in-python PDF and CDF
###Code
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm
x = np.linspace(-6, 6, 1000)
fx = norm.pdf(x, loc=0, scale=1)
plt.plot(x, fx);
Fx = norm.cdf(x, loc=0, scale=1)
plt.plot(x, Fx);
logfx = norm.logpdf(x, loc=0, scale=1)
plt.plot(x, logfx);
###Output
_____no_output_____
###Markdown
Convolution
###Code
from PIL import Image
!wget https://github.com/lazyprogrammer/machine_learning_examples/raw/master/cnn_class/lena.png
im = Image.open('lena.png')
gray = np.mean(im, axis=2)
x = np.linspace(-6, 6, 50)
fx = norm.pdf(x, loc=0, scale=1)
plt.plot(x, fx);
filt = np.outer(fx, fx)
plt.imshow(filt, cmap='gray');
from scipy.signal import convolve2d
out = convolve2d(gray, filt)
plt.subplot(1,2,1)
plt.imshow(gray, cmap='gray')
plt.subplot(1,2,2)
plt.imshow(out, cmap='gray');
###Output
_____no_output_____ |
ai/RIVA/English/Python/jupyter_notebook/Intro/riva-setup.ipynb | ###Markdown
Installing and setting up TLT. For ease of use, please install TLT inside a python virtual environment. We recommend performing this step first and then launching the notebook from the virtual environment. In addition to installing TLT python package, please make sure of the following software requirements: (1) python 3.6.9, (2) docker-ce > 19.03.5, (3) docker-API 1.40, (4) nvidia-container-toolkit > 1.3.0-1, (5) nvidia-container-runtime > 3.4.0-1, (6) nvidia-docker2 > 2.5.0-1, (7) nvidia-driver >= 455.23
###Code
##installing tlt
! pip install nvidia-pyindex
! pip install nvidia-tlt
##setting relevant paths
# NOTE: The following paths are set from the perspective of the TLT Docker.
# The data is saved here
DATA_DIR = "/data"
SPECS_DIR = "/specs"
RESULTS_DIR = "/results"
# Set your encryption key, and use the same key for all commands
KEY = 'tlt_encode'
###Output
_____no_output_____
###Markdown
Download AN4 Speech Data
###Code
! wget http://www.speech.cs.cmu.edu/databases/an4/an4_sphere.tar.gz
! tar -xvf an4_sphere.tar.gz
! mv an4 $DATA_DIR
###Output
_____no_output_____
###Markdown
Pre-Processing This step converts the .sph (Sphere) files into .wav files and splits the data into training and testing sets. It also generates a "meta-data" file to be consumed by the dataloader for training and testing.
###Code
! tlt speech_to_text dataset_convert \
-e $SPECS_DIR/speech_to_text/dataset_convert_an4.yaml \
source_data_dir=$DATA_DIR/an4 \
target_data_dir=$DATA_DIR/an4_converted
###Output
_____no_output_____
###Markdown
--- Riva ServiceMaker. ServiceMaker is the set of tools that aggregates all the necessary artifacts (models, files, configurations, and user settings) for Riva deployment to a target environment. It has two main components as shown below: 1. Riva-build. This step helps build a Riva-ready version of the model. Its only output is an intermediate format (called an RMIR) of an end-to-end pipeline for the supported services within Riva. Here we take an ASR QuartzNet model into consideration. `riva-build` is responsible for combining one or more exported models (.riva files) into a single file containing an intermediate format called Riva Model Intermediate Representation (.rmir). This file contains a deployment-agnostic specification of the whole end-to-end pipeline along with all the assets required for the final deployment and inference. Please check out the [documentation](https://docs.nvidia.com/deeplearning/riva/user-guide/docs/service-asr.htmlpipeline-configuration) to find out more.
###Code
# IMPORTANT: UPDATE THESE PATHS
# ServiceMaker Docker
RIVA_SM_CONTAINER = "<add container name>"
# Directory where the .riva model is stored $MODEL_LOC/*.riva
MODEL_LOC = "<add path to model location>"
# Name of the .riva file
MODEL_NAME = "<add model name>"
# Key that model is encrypted with, while exporting with TLT
KEY = "<add encryption key used for trained model>"
# Get the ServiceMaker docker
! docker pull $RIVA_SM_CONTAINER
# Syntax: riva-build <task-name> output-dir-for-rmir/model.rmir:key dir-for-riva/model.riva:key --acoustic_model_name=<quartznet/jasper>
! docker run --rm --gpus 0 -v $MODEL_LOC:/data $RIVA_SM_CONTAINER -- \
riva-build speech_recognition /data/asr.rmir:$KEY /data/$MODEL_NAME:$KEY --offline \
--decoder_type=greedy
###Output
_____no_output_____
###Markdown
2. Riva-deploy. The deployment tool takes as input one or more Riva Model Intermediate Representation (RMIR) files and a target model repository directory. It creates an ensemble configuration specifying the pipeline for execution and finally writes all those assets to the output model repository directory.
###Code
## FOR ASR build
# Syntax: riva-deploy -f dir-for-rmir/model.rmir:key output-dir-for-repository
! docker run --rm --gpus 0 -v $MODEL_LOC:/data $RIVA_SM_CONTAINER -- \
riva-deploy -f /data/asr.rmir:$KEY /data/models/
## FOR QA build
# Syntax: riva-build <task-name> output-dir-for-rmir/model.rmir:key dir-for-riva/model.riva:key
!docker run --rm --gpus 1 -v $MODEL_LOC:/data $RIVA_SM_CONTAINER -- \
riva-build qa /data/question-answering.rmir:$KEY /data/$MODEL_NAME:$KEY
###Output
_____no_output_____
###Markdown
`NOTE:` Above, qa-model.riva is the QA model obtained from `tlt question_answering export`. --- Start Riva Server. Once the model repository is generated, we are ready to start the Riva server. From this step onwards you need to download the Riva QuickStart Resource from NGC. Set the path to the directory here:
###Code
# Set the Riva QuickStart directory
RIVA_DIR = "<Path to the uncompressed folder downloaded from quickstart(include the folder name)>"
###Output
_____no_output_____
###Markdown
Next, we modify config.sh to enable relevant Riva services (asr for QuartzNet Model), provide the encryption key, and path to the model repository (`riva_model_loc`) generated in the previous step among other configurations. For instance, if above the model repository is generated at `$MODEL_LOC/models`, then you can specify `riva_model_loc` as the same directory as `MODEL_LOC`. Pretrained versions of models specified in models_asr/nlp/tts are fetched from NGC. Since we are using our custom model, we can comment it in models_asr (and any others that are not relevant to your use case). `NOTE:` You can perform this step of editing config.sh from outside this notebook. config.sh snippet``` Enable or Disable Riva Services service_enabled_asr=true MAKE CHANGES HERE service_enabled_nlp=false MAKE CHANGES HERE service_enabled_tts=false MAKE CHANGES HERE Specify one or more GPUs to use specifying more than one GPU is currently an experimental feature, and may result in undefined behaviours. gpus_to_use="device=0" Specify the encryption key to use to deploy models MODEL_DEPLOY_KEY="tlt_encode" MAKE CHANGES HERE Locations to use for storing models artifacts If an absolute path is specified, the data will be written to that location Otherwise, a docker volume will be used (default). riva_init.sh will create a `rmir` and `models` directory in the volume or path specified. RMIR ($riva_model_loc/rmir) Riva uses an intermediate representation (RMIR) for models that are ready to deploy but not yet fully optimized for deployment. Pretrained versions can be obtained from NGC (by specifying NGC models below) and will be downloaded to $riva_model_loc/rmir by `riva_init.sh` Custom models produced by NeMo or TLT and prepared using riva-build may also be copied manually to this location $(riva_model_loc/rmir). Models ($riva_model_loc/models) During the riva_init process, the RMIR files in $riva_model_loc/rmir are inspected and optimized for deployment. The optimized versions are stored in $riva_model_loc/models. The riva server exclusively uses these optimized versions. riva_model_loc="" MAKE CHANGES HERE (Replace with MODEL_LOC) ```
###Code
# Ensure you have permission to execute these scripts
! cd $RIVA_DIR && chmod +x ./riva_init.sh && chmod +x ./riva_start.sh
# Run Riva Init. This will fetch the containers/models
# YOU CAN SKIP THIS STEP IF YOU DID RIVA DEPLOY
! cd $RIVA_DIR && ./riva_init.sh config.sh
# Run Riva Start. This will deploy your model(s).
! cd $RIVA_DIR && ./riva_start.sh config.sh
# IMPORTANT: Set the name of the whl file
RIVA_API_WHL = "<add riva api .whl file name>"
# Install client API bindings
!cd $RIVA_DIR && pip install $RIVA_API_WHL
###Output
_____no_output_____ |
nbs/src/Markov/markov-waiting-time-formula/fix_recursion_algorithm_to_visit_all_states.ipynb | ###Markdown
Others
###Code
lambda_2 = 0.2
lambda_1 = 0.3
mu = 0.2
num_of_servers = 4
threshold = 3
system_capacity = 10
buffer_capacity = 10
num_of_trials = 10
seed_num = 0
runtime = 10000
output = "others"
accuracy = 10
plot_over = "lambda_2"
max_parameter_value = 1
x_axis, mean_sim, mean_markov, all_sim = abg.get_plot_comparing_times(
0.1,
lambda_1,
mu,
num_of_servers,
threshold,
num_of_trials,
seed_num,
runtime,
system_capacity,
buffer_capacity,
output,
plot_over,
max_parameter_value,
accuracy,
)
plot_over = "lambda_1"
max_parameter_value = 1
x_axis, mean_sim, mean_markov, all_sim = abg.get_plot_comparing_times(
lambda_2,
0.1,
mu,
num_of_servers,
threshold,
num_of_trials,
seed_num,
runtime,
system_capacity,
buffer_capacity,
output,
plot_over,
max_parameter_value,
accuracy,
)
plot_over = "mu"
max_parameter_value = 1
x_axis, mean_sim, mean_markov, all_sim = abg.get_plot_comparing_times(
lambda_2,
lambda_1,
0.1,
num_of_servers,
threshold,
num_of_trials,
seed_num,
runtime,
system_capacity,
buffer_capacity,
output,
plot_over,
max_parameter_value,
accuracy,
)
plot_over = "num_of_servers"
max_parameter_value = 10
x_axis, mean_sim, mean_markov, all_sim = abg.get_plot_comparing_times(
lambda_2,
lambda_1,
mu,
1,
threshold,
num_of_trials,
seed_num,
runtime,
system_capacity,
buffer_capacity,
output,
plot_over,
max_parameter_value,
accuracy,
)
plot_over = "threshold"
max_parameter_value = 10
x_axis, mean_sim, mean_markov, all_sim = abg.get_plot_comparing_times(
lambda_2,
lambda_1,
mu,
num_of_servers,
1,
num_of_trials,
seed_num,
runtime,
system_capacity,
buffer_capacity,
output,
plot_over,
max_parameter_value,
accuracy,
)
plot_over = "system_capacity"
max_parameter_value = 20
x_axis, mean_sim, mean_markov, all_sim = abg.get_plot_comparing_times(
lambda_2,
lambda_1,
mu,
num_of_servers,
threshold,
num_of_trials,
seed_num,
runtime,
11,
buffer_capacity,
output,
plot_over,
max_parameter_value,
accuracy,
)
plot_over = "buffer_capacity"
max_parameter_value = 20
x_axis, mean_sim, mean_markov, all_sim = abg.get_plot_comparing_times(
lambda_2,
lambda_1,
mu,
num_of_servers,
threshold,
num_of_trials,
seed_num,
runtime,
system_capacity,
11,
output,
plot_over,
max_parameter_value,
accuracy,
)
###Output
_____no_output_____
###Markdown
Ambulance
###Code
lambda_2 = 0.2
lambda_1 = 0.3
mu = 0.2
num_of_servers = 5
threshold = 6
system_capacity = 20
buffer_capacity = 20
num_of_trials = 10
seed_num = 0
runtime = 5000
output = "ambulance"
accuracy = 10
plot_over = "lambda_2"
max_parameter_value = 1
x_axis, mean_sim, mean_markov, all_sim = abg.get_plot_comparing_times(
0.1,
lambda_1,
mu,
num_of_servers,
threshold,
num_of_trials,
seed_num,
runtime,
system_capacity,
buffer_capacity,
output,
plot_over,
max_parameter_value,
accuracy,
)
plot_over = "lambda_1"
max_parameter_value = 1
x_axis, mean_sim, mean_markov, all_sim = abg.get_plot_comparing_times(
lambda_2,
0.1,
mu,
num_of_servers,
threshold,
num_of_trials,
seed_num,
runtime,
system_capacity,
buffer_capacity,
output,
plot_over,
max_parameter_value,
accuracy,
)
plot_over = "mu"
max_parameter_value = 0.5
x_axis, mean_sim, mean_markov, all_sim = abg.get_plot_comparing_times(
lambda_2,
lambda_1,
0.1,
num_of_servers,
threshold,
num_of_trials,
seed_num,
runtime,
system_capacity,
buffer_capacity,
output,
plot_over,
max_parameter_value,
accuracy,
)
plot_over = "num_of_servers"
max_parameter_value = 10
x_axis, mean_sim, mean_markov, all_sim = abg.get_plot_comparing_times(
lambda_2,
lambda_1,
mu,
3,
threshold,
num_of_trials,
seed_num,
runtime,
system_capacity,
buffer_capacity,
output,
plot_over,
max_parameter_value,
8,
)
plot_over = "threshold"
max_parameter_value = 14
x_axis, mean_sim, mean_markov, all_sim = abg.get_plot_comparing_times(
lambda_2,
lambda_1,
mu,
num_of_servers,
5,
num_of_trials,
seed_num,
runtime,
system_capacity,
buffer_capacity,
output,
plot_over,
max_parameter_value,
accuracy,
)
plot_over = "system_capacity"
max_parameter_value = 25
x_axis, mean_sim, mean_markov, all_sim = abg.get_plot_comparing_times(
lambda_2,
lambda_1,
mu,
num_of_servers,
threshold,
num_of_trials,
seed_num,
runtime,
6,
buffer_capacity,
output,
plot_over,
max_parameter_value,
accuracy,
)
plot_over = "buffer_capacity"
max_parameter_value = 25
x_axis, mean_sim, mean_markov, all_sim = abg.get_plot_comparing_times(
lambda_2,
lambda_1,
mu,
num_of_servers,
threshold,
num_of_trials,
seed_num,
runtime,
system_capacity,
6,
output,
plot_over,
max_parameter_value,
accuracy,
)
###Output
_____no_output_____ |
7.nlp.ipynb | ###Markdown
Create spark session
###Code
from pyspark.sql import SparkSession
spark = SparkSession.builder.appName('nlp').getOrCreate()
df = spark.createDataFrame(
[(1,'I really liked this movie'),
(2,'I would recommend this movie to my friends'),
(3,'movie was alright but acting was horrible'),
(4,'I am never watching that movie ever again')],
['user_id','review'])
df.show(5,False)
###Output
+-------+------------------------------------------+
|user_id|review |
+-------+------------------------------------------+
|1 |I really liked this movie |
|2 |I would recommend this movie to my friends|
|3 |movie was alright but acting was horrible |
|4 |I am never watching that movie ever again |
+-------+------------------------------------------+
###Markdown
Tokenization
###Code
from pyspark.ml.feature import Tokenizer
tokenization = Tokenizer(inputCol='review',outputCol='tokens')
tokenized_df = tokenization.transform(df)
tokenized_df.show(4,False)
###Output
+-------+------------------------------------------+---------------------------------------------------+
|user_id|review |tokens |
+-------+------------------------------------------+---------------------------------------------------+
|1 |I really liked this movie |[i, really, liked, this, movie] |
|2 |I would recommend this movie to my friends|[i, would, recommend, this, movie, to, my, friends]|
|3 |movie was alright but acting was horrible |[movie, was, alright, but, acting, was, horrible] |
|4 |I am never watching that movie ever again |[i, am, never, watching, that, movie, ever, again] |
+-------+------------------------------------------+---------------------------------------------------+
###Markdown
Stopwords Removal
###Code
from pyspark.ml.feature import StopWordsRemover
stopword_removal = StopWordsRemover(inputCol='tokens',outputCol='refined_tokens')
refined_df = stopword_removal.transform(tokenized_df)
refined_df.select(['user_id','tokens','refined_tokens']).show(10,False)
###Output
+-------+---------------------------------------------------+----------------------------------+
|user_id|tokens |refined_tokens |
+-------+---------------------------------------------------+----------------------------------+
|1 |[i, really, liked, this, movie] |[really, liked, movie] |
|2 |[i, would, recommend, this, movie, to, my, friends]|[recommend, movie, friends] |
|3 |[movie, was, alright, but, acting, was, horrible] |[movie, alright, acting, horrible]|
|4 |[i, am, never, watching, that, movie, ever, again] |[never, watching, movie, ever] |
+-------+---------------------------------------------------+----------------------------------+
###Markdown
Count Vectorizer
###Code
from pyspark.ml.feature import CountVectorizer
count_vec = CountVectorizer(inputCol='refined_tokens',outputCol='features')
cv_df = count_vec.fit(refined_df).transform(refined_df)
cv_df.select(['user_id','refined_tokens','features']).show(4,False)
count_vec.fit(refined_df).vocabulary
###Output
_____no_output_____
###Markdown
Tf-idf
###Code
from pyspark.ml.feature import HashingTF,IDF
hashing_vec = HashingTF(inputCol='refined_tokens',outputCol='tf_features')
hashing_df = hashing_vec.transform(refined_df)
hashing_df.select(['user_id','refined_tokens','tf_features']).show(4,False)
tf_idf_vec = IDF(inputCol='tf_features',outputCol='tf_idf_features')
tf_idf_df=tf_idf_vec.fit(hashing_df).transform(hashing_df)
tf_idf_df.select(['user_id','tf_idf_features']).show(4,False)
###Output
+-------+----------------------------------------------------------------------------------------------------+
|user_id|tf_idf_features |
+-------+----------------------------------------------------------------------------------------------------+
|1 |(262144,[14,32675,155321],[0.9162907318741551,0.9162907318741551,0.0]) |
|2 |(262144,[129613,155321,222394],[0.9162907318741551,0.0,0.9162907318741551]) |
|3 |(262144,[80824,155321,236263,240286],[0.9162907318741551,0.0,0.9162907318741551,0.9162907318741551])|
|4 |(262144,[63139,155321,203802,245806],[0.9162907318741551,0.0,0.9162907318741551,0.9162907318741551])|
+-------+----------------------------------------------------------------------------------------------------+
###Markdown
Classification
###Code
text_df = spark.read.csv('data/Movie_reviews.csv', inferSchema=True, header=True, sep=',')
text_df.printSchema()
text_df.count()
from pyspark.sql.functions import rand
text_df.orderBy(rand()).show(10,False)
text_df = text_df.filter(((text_df.Sentiment =='1') | (text_df.Sentiment =='0')))
text_df.count()
text_df.groupBy('Sentiment').count().show()
text_df = text_df.withColumn("Label", text_df.Sentiment.cast('float')).drop('Sentiment')
text_df.orderBy(rand()).show(10,False)
from pyspark.sql.functions import length
text_df = text_df.withColumn('length',length(text_df['Review']))
text_df.orderBy(rand()).show(10,False)
text_df.groupBy('Label').agg({'Length':'mean'}).show()
tokenization = Tokenizer(inputCol='Review',outputCol='tokens')
tokenized_df = tokenization.transform(text_df)
tokenized_df.show()
stopword_removal = StopWordsRemover(inputCol='tokens',outputCol='refined_tokens')
refined_text_df = stopword_removal.transform(tokenized_df)
refined_text_df.show()
from pyspark.sql.functions import udf
from pyspark.sql.types import IntegerType
from pyspark.sql.functions import *
len_udf = udf(lambda s: len(s), IntegerType())
refined_text_df = refined_text_df.withColumn("token_count", len_udf(col('refined_tokens')))
refined_text_df.orderBy(rand()).show(10)
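# (Added note) pyspark.sql.functions also ships a built-in size() function, so the same
# token_count column could be added without a Python UDF, e.g.:
# refined_text_df = refined_text_df.withColumn("token_count", size(col('refined_tokens')))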
count_vec = CountVectorizer(inputCol='refined_tokens',outputCol='features')
cv_text_df = count_vec.fit(refined_text_df).transform(refined_text_df)
cv_text_df.select(['refined_tokens','token_count','features','Label']).show(10)
model_text_df = cv_text_df.select(['features','token_count','Label'])
from pyspark.ml.feature import VectorAssembler
df_assembler = VectorAssembler(inputCols=['features','token_count'], outputCol='features_vec')
model_text_df = df_assembler.transform(model_text_df)
model_text_df.printSchema()
from pyspark.ml.classification import LogisticRegression
training_df,test_df = model_text_df.randomSplit([0.75,0.25])
training_df.groupBy('Label').count().show()
test_df.groupBy('Label').count().show()
log_reg = LogisticRegression(featuresCol='features_vec', labelCol='Label').fit(training_df)
results = log_reg.evaluate(test_df).predictions
results.show()
from pyspark.ml.evaluation import BinaryClassificationEvaluator
# confusion matrix
true_positives = results[(results.Label == 1) & (results.prediction == 1)].count()
true_negatives = results[(results.Label == 0) & (results.prediction == 0)].count()
false_positives = results[(results.Label == 0) & (results.prediction == 1)].count()
false_negatives = results[(results.Label == 1) & (results.prediction == 0)].count()
recall = float(true_positives)/(true_positives + false_negatives)
print(recall)
precision = float(true_positives) / (true_positives + false_positives)
print(precision)
accuracy = float((true_positives + true_negatives) /(results.count()))
print(accuracy)
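# (Added sketch) the BinaryClassificationEvaluator imported above is otherwise unused;
# it can report area under the ROC curve directly from the prediction DataFrame.
# Column names are the Spark ML defaults produced by LogisticRegression.
auc_evaluator = BinaryClassificationEvaluator(rawPredictionCol='rawPrediction', labelCol='Label')
print(auc_evaluator.evaluate(results))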
###Output
0.9801255230125523
0.9669762641898865
0.9710062535531552
|
Moringa_Data_Science_Prep_W3_Independent_Project_2022_02_Rehema_Owino_MTNDataAnalysis.ipynb | ###Markdown
###Code
# First, import numpy and pandas libraries
import numpy as np
import pandas as pd
###Output
_____no_output_____
###Markdown
**Loading the datasets**
###Code
# These are the datasets that will be used in this analysis:
# cells_geo_description.xlsx [Link] (Links to an external site.)
# cells_geo.csv [Link] (Links to an external site.)
# CDR_description.xlsx [Link] (Links to an external site.)
# CDR 20120507 [http://bit.ly/TelecomDataset1] (Links to an external site.)
# CDR 20120508 [http://bit.ly/TelecomDataset2] (Links to an external site.)
# CDR 20120509 [http://bit.ly/TelecomDataset3]
# loading and previewing the first dataset by creating a dataframe
cells_geo_desc = pd.read_excel("cells_geo_description.xlsx")
cells_geo_desc
# loading and previewing the second dataset by creating a dataframe
cells_geo = pd.read_csv("cells_geo.csv", delimiter = ';')
cells_geo
# loading and previewing the third dataset by creating a dataframe
cdr_desc = pd.read_excel("CDR_description.xlsx")
cdr_desc
# loading and previewing the forth dataset by creating a dataframe
dt1 = pd.read_csv("Telcom_dataset.csv")
dt1.head(5)
# loading and previewing the fifth dataset by creating a dataframe
dt2 = pd.read_csv("Telcom_dataset2.csv")
dt2.head(5)
# loading and previewing the last dataset by creating a dataframe
dt3 = pd.read_csv("Telcom_dataset3.csv")
dt3.head(5)
###Output
_____no_output_____
###Markdown
**Cleaning the datasets**
###Code
# The last 3 datasets have similar entries, therefore I will merge them by:
# 1. changing the column names
# 2. filling in missing values, if any
# 3. dropping unnecessary columns
# setting the column names to the same case and format
dt1.columns = dt1.columns.str.lower()
dt1.columns
columns = ['product', 'value', 'date_time', 'cell_on_site', 'dw_a_number', 'dw_b_number', 'country_a', 'country_b', 'cell_id', 'site_id']
# applying column name change to dataset 4
dt1.columns = columns
dt1.head(5)
# applying column name change to dataset 5
dt2.columns = columns
dt2.head(5)
# applying column name change to dataset 6
dt3.columns = columns
dt3.head(5)
# checking for missing values in dataset 4
dt1.isnull()
# checking for missing values in dataset 5
dt2.isnull()
# checking for missing values in dataset 6
dt3.isnull()
# merging the 3 datasets
dt_merge = [dt1, dt2, dt3]
merged = pd.concat(dt_merge)
merged
# Dropping unnecessary columns from the merged dataframe
# country_a and country_b columns are unnecessary because they are duplicate information which is not needed
# cell_on_site column is also not needed *from the description file*
df_merged = merged.drop(columns = ['country_a', 'country_b', 'cell_on_site'])
df_merged
# grouping values by product
product_value = df_merged['value'].groupby(df_merged['product'])
df_merged
# checking and dropping duplicates, if there are any
df_merged.duplicated()
# drop_duplicates() returns a new DataFrame, so reassign it to keep the result
df_merged = df_merged.drop_duplicates()
###Output
_____no_output_____
###Markdown
Analysis
###Code
# sum of all the values (billing price)
df_merged.value.sum()
# sum for products' price billing
df_merged.groupby(['product'])['value'].sum()
# summary statistics on value
df_merged['value'].describe()
# creating a pivot table of group counts by value and product
pd.pivot_table(df_merged, index = ['product', 'value'], aggfunc = 'count')
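# (Added sketch) a possible follow-up analysis: total billing value per site, top 10 sites.
# Assumes the 'site_id' column kept above is the site identifier of interest.
df_merged.groupby('site_id')['value'].sum().sort_values(ascending=False).head(10)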
###Output
_____no_output_____ |
WQU MScFE 630 - Computational Finance (C21-S1)/Group Work/Final Submissions (Linked to CV)/Submission 2/MScFE 630 - Group 14 - Submission 2/MScFE 630 - Group 14 - Submission 2 - Qs 5-7 v3 F.ipynb | ###Markdown
MScFE 630 - Group 14 - Submission 2 Code - Qs 5-7 Answer 5 - Volatility Smile Call Option Strike Closest to the Current Price of Facebook Stock
###Code
S0 = 324.76 #Facebook stock price at October 15, 2021 close
K = 325
###Output
_____no_output_____
###Markdown
Implied Volatility Corresponding to Call Option Strike of 325 for Various Maturities
###Code
import pandas as pd
calls_table_K325_detailed = pd.read_excel('Call Option Data.xlsx', sheet_name = 'K=325')
calls_table_K325_detailed
import warnings
warnings.filterwarnings('ignore')
calls_table_K325 = calls_table_K325_detailed[['Contract Name', 'Expire Date', 'Implied Volatility']]
calls_table_K325['Expire Date'] = pd.to_datetime(calls_table_K325['Expire Date'])
calls_table_K325 = calls_table_K325.set_index('Expire Date')
calls_table_K325
###Output
_____no_output_____
###Markdown
3 Closest Strikes Below 325 and their Implied Volatilities
###Code
xl_dict_below = {}
sheetnames_below = ['K=320', 'K=315', 'K=310']
for sheet_below in sheetnames_below:
xl_dict_below[sheet_below] = pd.read_excel('Call Option Data.xlsx', sheet_name=sheet_below)
xl_dict_below[sheet_below] = xl_dict_below[sheet_below][['Contract Name', 'Expire Date', 'Implied Volatility']]
xl_dict_below[sheet_below]['Expire Date'] = pd.to_datetime(xl_dict_below[sheet_below]['Expire Date'])
xl_dict_below[sheet_below] = xl_dict_below[sheet_below].set_index('Expire Date')
###Output
_____no_output_____
###Markdown
Strike = 320
###Code
xl_dict_below['K=320']
###Output
_____no_output_____
###Markdown
Strike = 315
###Code
xl_dict_below['K=315']
###Output
_____no_output_____
###Markdown
Strike = 310
###Code
xl_dict_below['K=310']
###Output
_____no_output_____
###Markdown
3 Closest Strikes Above 325 and their Implied Volatilities
###Code
xl_dict_above = {}
sheetnames_above = ['K=330', 'K=335', 'K=340']
for sheet_above in sheetnames_above:
xl_dict_above[sheet_above] = pd.read_excel('Call Option Data.xlsx', sheet_name=sheet_above)
xl_dict_above[sheet_above] = xl_dict_above[sheet_above][['Contract Name', 'Expire Date', 'Implied Volatility']]
xl_dict_above[sheet_above]['Expire Date'] = pd.to_datetime(xl_dict_above[sheet_above]['Expire Date'])#.dt.date
xl_dict_above[sheet_above] = xl_dict_above[sheet_above].set_index('Expire Date')
###Output
_____no_output_____
###Markdown
Strike = 330
###Code
xl_dict_above['K=330']
###Output
_____no_output_____
###Markdown
Strike = 335
###Code
xl_dict_above['K=335']
###Output
_____no_output_____
###Markdown
Strike = 340
###Code
xl_dict_above['K=340']
###Output
_____no_output_____
###Markdown
Facebook Option Volatility Smile for Calls Expiring on 22 October 2021
###Code
strikes = [310, 315, 320, 325, 330, 335, 340]
dataframes_below = ['K=310', 'K=315', 'K=320']
dataframes_above = ['K=330', 'K=335', 'K=340']
vol_2021_10_22 = []
for dataframe in dataframes_below:
vol_2021_10_22.append(xl_dict_below[dataframe].iloc[0,1])
vol_2021_10_22.append(calls_table_K325.iloc[0,1])
for dataframe in dataframes_above:
vol_2021_10_22.append(xl_dict_above[dataframe].iloc[0,1])
import matplotlib.pyplot as plt
plt.plot(strikes, vol_2021_10_22)
plt.xlabel('Strike Price')
plt.ylabel('Implied Volatility')
plt.title('Facebook Option Volatility Smile')
###Output
_____no_output_____
###Markdown
Answer 6 - Function for Implied Volatility - Newton-Raphson Algorithm
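For reference, the iteration implemented in the code below is the standard Newton–Raphson update applied to the Black–Scholes pricing error (same symbols as the functions that follow): $$\sigma_{n+1} = \sigma_n - \frac{f(\sigma_n)}{f'(\sigma_n)}, \qquad f(\sigma) = S_0 N(d_1) - K e^{-rT} N(d_2) - C_{\text{market}},$$ where $f'(\sigma)$ is evaluated through the chain rule on $d_1$ and $d_2$, which reduces to the option vega $S_0\,\varphi(d_1)\sqrt{T}$.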
###Code
#Inputs
r_f = 0.04 #1-month T-bill rate as on October 15
#Source: https://www.treasury.gov/resource-center/data-chart-center/interest-rates/Pages/TextView.aspx?data=billrates
v_0 = 0.1
threshold = 0.00000001
iterations = 100
import numpy as np
from scipy import stats
#Functions
def d1(current_stock_price, strike, risk_free_rate, volatility, time_to_maturity):
return (np.log(current_stock_price/strike) + (risk_free_rate + volatility**2/2)*time_to_maturity)/(
volatility*np.sqrt(time_to_maturity))
def d2(d_1, volatility, time_to_maturity):
return d_1 - volatility * np.sqrt(time_to_maturity)
def f_sigma(current_stock_price, d_1, d_2, strike, risk_free_rate, time_to_maturity, call_market_price):
return (current_stock_price * stats.norm.cdf(d_1) - strike * np.exp(-risk_free_rate*time_to_maturity)
* stats.norm.cdf(d_2) - call_market_price)
def Dd1_Dsigma(volatility, time_to_maturity, current_stock_price, strike, risk_free_rate):
return (volatility**2*time_to_maturity*np.sqrt(time_to_maturity)-
(np.log(current_stock_price/strike)+(risk_free_rate+volatility**2/2)*time_to_maturity)
*np.sqrt(time_to_maturity))/(volatility**2*time_to_maturity)
def Dd2_Dsigma(d1_derivative, time_to_maturity):
return d1_derivative - np.sqrt(time_to_maturity)
def Df_sigma_Dsigma(current_stock_price, d_1, d1_derivative, strike, risk_free_rate, time_to_maturity, d_2,
d2_derivative):
return (current_stock_price*stats.norm.pdf(d_1)*d1_derivative
- strike*np.exp(-risk_free_rate*time_to_maturity)*stats.norm.pdf(d_2)*d2_derivative)
#Algorithm
def implied_volatility_function(call_price, time_to_maturity, strike):
vol_sequence = []
vol_sequence.append(v_0)
for i in range(0,iterations):
d_1 = d1(S0, strike, r_f, vol_sequence[i], time_to_maturity)
d_2 = d2(d_1, vol_sequence[i], time_to_maturity)
d1_derivative = Dd1_Dsigma(vol_sequence[i], time_to_maturity, S0, strike, r_f)
d2_derivative = Dd2_Dsigma(d1_derivative, time_to_maturity)
vol_sequence.append(vol_sequence[i] - f_sigma(S0, d_1, d_2, strike, r_f, time_to_maturity, call_price)/
Df_sigma_Dsigma(S0, d_1, d1_derivative, strike, r_f, time_to_maturity, d_2, d2_derivative))
if abs(vol_sequence[i+1] - vol_sequence[i]) < threshold: break
return vol_sequence[-1]
###Output
_____no_output_____
###Markdown
Validating the Implied Volatility for 2 Options Contracts 325 Call 19 Nov 2021 - FB211119C00325000
###Code
vol_newton = implied_volatility_function(13.30, 1/12, 325)
print('Implied Volatility Based on Algorithm:', "{:.2%}".format(vol_newton))
print('Reported Implied Volatility: 34.31%')
print('Difference:', "{:.2%}".format(0.3431-vol_newton))
###Output
Implied Volatility Based on Algorithm: 34.48%
Reported Implied Volatility: 34.31%
Difference: -0.17%
###Markdown
325 Call 17 Dec 2021 - FB211217C00325000
###Code
vol_newton = implied_volatility_function(16.70, 2/12, 325)
print('Implied Volatility Based on Algorithm:', "{:.2%}".format(vol_newton))
print('Reported Implied Volatility: 31.69%')
print('Difference:', "{:.2%}".format(0.3169-vol_newton))
###Output
Implied Volatility Based on Algorithm: 29.83%
Reported Implied Volatility: 31.69%
Difference: 1.86%
###Markdown
Answer 7 - Volatility Skewness
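The quantity computed below is the finite-difference slope of the smile between adjacent strikes, $$\text{skew}_i \approx \frac{\sigma_{\text{imp}}(K_i) - \sigma_{\text{imp}}(K_{i-1})}{K_i - K_{i-1}},$$ using the implied volatilities gathered above for the 22 October 2021 expiry.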
###Code
df_vol_skewness = pd.DataFrame(list(zip(strikes, vol_2021_10_22)), columns = ['Strike Price', 'Implied Volatility'])
df_vol_skewness['Delta Strike Price'] = df_vol_skewness['Strike Price'].diff()
df_vol_skewness['Delta Implied Volatility'] = df_vol_skewness['Implied Volatility'].diff()
df_vol_skewness['Volatility Skewness'] = (df_vol_skewness['Delta Implied Volatility']
/ df_vol_skewness['Delta Strike Price'])
df_vol_skewness.drop([4, 5, 6])
###Output
_____no_output_____ |
Introduction to Computer_Vision/Image_representation&_Classification/Classification.ipynb | ###Markdown
Day and Night Image Classifier---The day/night image dataset consists of 200 RGB color images in two categories: day and night. There are equal numbers of each example: 100 day images and 100 night images.We'd like to build a classifier that can accurately label these images as day or night, and that relies on finding distinguishing features between the two types of images!*Note: All images come from the [AMOS dataset](http://cs.uky.edu/~jacobs/datasets/amos/) (Archive of Many Outdoor Scenes).* Import resourcesBefore you get started on the project code, import the libraries and resources that you'll need.
###Code
import cv2 # computer vision library
import helpers
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
%matplotlib inline
###Output
_____no_output_____
###Markdown
Training and Testing DataThe 200 day/night images are separated into training and testing datasets. * 60% of these images are training images, for you to use as you create a classifier.* 40% are test images, which will be used to test the accuracy of your classifier.First, we set some variables to keep track of some where our images are stored: image_dir_training: the directory where our training image data is stored image_dir_test: the directory where our test image data is stored
###Code
# Image data directories
image_dir_training = "day_night_images/training/"
image_dir_test = "day_night_images/test/"
###Output
_____no_output_____
###Markdown
Load the datasetsThese first few lines of code will load the training day/night images and store all of them in a variable, `IMAGE_LIST`. This list contains the images and their associated label ("day" or "night"). For example, the first image-label pair in `IMAGE_LIST` can be accessed by index: ``` IMAGE_LIST[0][:]```.
###Code
# Using the load_dataset function in helpers.py
# Load training data
IMAGE_LIST = helpers.load_dataset(image_dir_training)
###Output
_____no_output_____
###Markdown
Construct a `STANDARDIZED_LIST` of input images and output labels.This function takes in a list of image-label pairs and outputs a **standardized** list of resized images and numerical labels.
###Code
# Standardize all training images
STANDARDIZED_LIST = helpers.standardize(IMAGE_LIST)
###Output
_____no_output_____
###Markdown
Visualize the standardized dataDisplay a standardized image from STANDARDIZED_LIST.
###Code
# Display a standardized image and its label
# Select an image by index
image_num = 0
selected_image = STANDARDIZED_LIST[image_num][0]
selected_label = STANDARDIZED_LIST[image_num][1]
# Display image and data about it
plt.imshow(selected_image)
print("Shape: "+str(selected_image.shape))
print("Label [1 = day, 0 = night]: " + str(selected_label))
###Output
Shape: (600, 1100, 3)
Label [1 = day, 0 = night]: 1
###Markdown
Feature ExtractionCreate a feature that represents the brightness in an image. We'll be extracting the **average brightness** using HSV colorspace. Specifically, we'll use the V channel (a measure of brightness), add up the pixel values in the V channel, then divide that sum by the area of the image to get the average Value of the image. --- Find the average brightness using the V channelThis function takes in a **standardized** RGB image and returns a feature (a single value) that represent the average level of brightness in the image. We'll use this value to classify the image as day or night.
###Code
# Find the average Value or brightness of an image
def avg_brightness(rgb_image):
# Convert image to HSV
hsv = cv2.cvtColor(rgb_image, cv2.COLOR_RGB2HSV)
# Add up all the pixel values in the V channel
sum_brightness = np.sum(hsv[:,:,2])
area = 600*1100.0 # pixels
# find the avg
avg = sum_brightness/area
return avg
# Testing average brightness levels
# Look at a number of different day and night images and think about
# what average brightness value separates the two types of images
# As an example, a "night" image is loaded in and its avg brightness is displayed
image_num = 190
test_im = STANDARDIZED_LIST[image_num][0]
avg = avg_brightness(test_im)
print('Avg brightness: ' + str(avg))
plt.imshow(test_im)
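# (Added sketch) class-wise brightness summary to help choose a threshold;
# assumes labels in STANDARDIZED_LIST are encoded as 1 = day, 0 = night (as above).
day_avgs = [avg_brightness(item[0]) for item in STANDARDIZED_LIST if item[1] == 1]
night_avgs = [avg_brightness(item[0]) for item in STANDARDIZED_LIST if item[1] == 0]
print('Mean day brightness: ' + str(np.mean(day_avgs)))
print('Mean night brightness: ' + str(np.mean(night_avgs)))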
###Output
Avg brightness: 25.677469697
###Markdown
Classification and Visualizing ErrorIn this section, we'll turn our average brightness feature into a classifier that takes in a standardized image and returns a `predicted_label` for that image. This `estimate_label` function should return a value: 0 or 1 (night or day, respectively). --- TODO: Build a complete classifier Set a threshold that you think will separate the day and night images by average brightness.
###Code
# This function should take in RGB image input
def estimate_label(rgb_image):
## TODO: extract average brightness feature from an RGB image
# Use the avg brightness feature to predict a label (0, 1)
avg_b = avg_brightness(rgb_image)
predicted_label = 0
## TODO: set the value of a threshold that will separate day and night images
threshold = 100
## TODO: Return the predicted_label (0 or 1) based on whether the avg is
# above or below the threshold
if avg_b > threshold:
predicted_label = 1
return predicted_label
## Test out your code by calling the above function and seeing
# how some of your training data is classified
estimate_label(selected_image)
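# (Added sketch) rough accuracy of the brightness classifier over the training set;
# assumes each STANDARDIZED_LIST entry is an (image, label) pair as constructed above.
correct = sum(1 for item in STANDARDIZED_LIST if estimate_label(item[0]) == item[1])
print('Training accuracy: ' + str(correct / len(STANDARDIZED_LIST)))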
###Output
_____no_output_____ |
Conv2D/Conv2D.ipynb | ###Markdown
Conv2D layer import libraries
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
# Install TensorFlow
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
from pprint import pprint
import numpy as np
import seaborn as sns
import pandas as pd
import matplotlib.pyplot as plt
###Output
TensorFlow 2.x selected.
###Markdown
Defining Layer
###Code
# Put your answer here
from tensorflow.keras import layers
class Donv2D(layers.Layer):
def conv2d(self,kernel_size,kernel_count,images):
if(len(images.shape) == 3):
images = tf.reshape(images,(images.shape[0],images.shape[1],images.shape[2],1)) ## in case of input image
else:
images = tf.reshape(images,(images.shape[0],images.shape[1],images.shape[2],images.shape[3])) ## in case of in between layers
image_base = images.shape[0],images.shape[1]
patches = tf.image.extract_patches(
images=images,
sizes=[1, kernel_size[0],kernel_size[1], 1],
strides=[1,1,1, 1],
rates=[1, 1, 1, 1],
padding='VALID'
)
output_filters = (patches.shape[1],patches.shape[2])
patches = tf.reshape(patches,(patches.shape[0],patches.shape[1]*patches.shape[2],patches.shape[3]))
patches = tf.dtypes.cast(
patches,
tf.dtypes.float32,
)
    patches = tf.transpose(patches,(0,2,1))
    conv = tf.matmul(self.filters,patches)
    # matmul gives (batch, kernel_count, num_patches); move the channel axis last
    # before reshaping so each filter response lands in its own output channel
    conv = tf.transpose(conv,(0,2,1))
    conv = tf.reshape(conv,(image_base[0],output_filters[0],output_filters[1],kernel_count))
return conv
def __init__(self, kernel_count,kernel_size,input_depth=32):
super(Donv2D, self).__init__()
self.kernel_count = kernel_count
self.kernel_size = kernel_size
self.valid = False
w_init = tf.random_normal_initializer()
self.filters = self.add_weight(shape=(kernel_count,kernel_size[0]*kernel_size[1]*input_depth),
initializer='random_normal',
trainable=True)
def call(self, inputs):
return self.conv2d(self.kernel_size,self.kernel_count,inputs)
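# (Added sketch) quick shape check on random data: a 3x3 kernel with 8 filters over
# 28x28 single-channel images should give a (batch, 26, 26, 8) output with VALID padding.
_check = Donv2D(8, (3, 3), 1)(tf.random.normal((2, 28, 28, 1)))
print(_check.shape)  # expected: (2, 26, 26, 8)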
###Output
_____no_output_____
###Markdown
Defining Model
###Code
from tensorflow.keras.datasets import mnist
class Conv(tf.keras.Model):
def __init__(self,num_classes):
super(Conv, self).__init__()
self.block_1 = Donv2D(20,(3,3),1)
self.block_2 = Donv2D(32,(3,3),20)
self.block_3 = Donv2D(64,(3,3),32)
self.flatten = tf.keras.layers.Flatten()
self.global_pool = tf.keras.layers.GlobalAveragePooling2D()
self.classifier = tf.keras.layers.Dense(num_classes)
self.dense = tf.keras.layers.Dense(100)
self.sigmoid = tf.keras.layers.Activation('sigmoid')
self.relu = tf.keras.layers.Activation('relu')
def call(self, inputs):
x = self.block_1(inputs)
x = self.relu(x)
# print(x.shape)
# print("One is Done")
x = self.block_2(x)
x = self.relu(x)
# print(x.shape)
# print("Two is Done")
x = self.block_3(x)
x = self.relu(x)
# print(x.shape)
# print("Three is Done")
x = self.global_pool(x)
# print("Connected")
x = self.flatten(x)
x = self.relu(x)
# print(x.shape)
x = self.dense(x)
x = self.relu(x)
# print(x.shape)
x = self.classifier(x)
x = self.sigmoid(x)
# print(x.shape)
return x
###Output
_____no_output_____
###Markdown
Problem 1 - MNIST Digit Recognition: import data
###Code
(x_train, y_train),(x_test,y_test) = tf.keras.datasets.mnist.load_data()
y_train = tf.keras.utils.to_categorical(y_train, num_classes=10)
y_test = tf.keras.utils.to_categorical(y_test, num_classes=10)
x_train.shape
###Output
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11493376/11490434 [==============================] - 0s 0us/step
###Markdown
visualization
###Code
plt.imshow(x_train[0])
# y_train is one-hot encoded above, so recover the class index for the title
plt.title(str(np.argmax(y_train[0])))
###Output
_____no_output_____
###Markdown
Training Model
###Code
conv = Conv(10)
# Instantiate an optimizer.
optimizer = tf.keras.optimizers.RMSprop(learning_rate=1e-3)
# Instantiate a loss function.
loss_fn = tf.keras.losses.CategoricalCrossentropy()
acc = tf.keras.metrics.CategoricalAccuracy()
# Prepare the training dataset.
batch_size = 64
seen = True
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(batch_size)
epochs = 10
for epoch in range(epochs):
print('Start of epoch %d' % (epoch,))
# Iterate over the batches of the dataset.
for step, (x_batch_train, y_batch_train) in enumerate(train_dataset):
# print(y_batch_train.shape)
with tf.GradientTape() as tape:
y_pred = conv(x_batch_train)
loss_value = loss_fn(y_batch_train, y_pred)
a = acc(y_batch_train, y_pred)
grads = tape.gradient(loss_value, conv.trainable_weights)
optimizer.apply_gradients(zip(grads, conv.trainable_weights))
if step % 100 == 0:
print('Training loss (for one batch) at step %s: %s' % (step, float(loss_value)))
print('Training acc (for one batch) at step %s: %s' % (step, float(a)))
print('Seen so far: %s samples' % ((step + 1) * 64))
###Output
Start of epoch 0
Training loss (for one batch) at step 0: 2.4747869968414307
Training acc (for one batch) at step 0: 0.0625
Seen so far: 64 samples
Training loss (for one batch) at step 100: 0.9148193001747131
Training acc (for one batch) at step 100: 0.6155630946159363
Seen so far: 6464 samples
Training loss (for one batch) at step 200: 0.3151840567588806
Training acc (for one batch) at step 200: 0.7356964945793152
Seen so far: 12864 samples
Training loss (for one batch) at step 300: 0.3452046513557434
Training acc (for one batch) at step 300: 0.7872197031974792
Seen so far: 19264 samples
Training loss (for one batch) at step 400: 0.17904913425445557
Training acc (for one batch) at step 400: 0.8208385109901428
Seen so far: 25664 samples
Training loss (for one batch) at step 500: 0.19866925477981567
Training acc (for one batch) at step 500: 0.8424713015556335
Seen so far: 32064 samples
Training loss (for one batch) at step 600: 0.19249111413955688
Training acc (for one batch) at step 600: 0.8586470484733582
Seen so far: 38464 samples
Training loss (for one batch) at step 700: 0.20010143518447876
Training acc (for one batch) at step 700: 0.8699848651885986
Seen so far: 44864 samples
Training loss (for one batch) at step 800: 0.09158486127853394
Training acc (for one batch) at step 800: 0.8782771825790405
Seen so far: 51264 samples
Training loss (for one batch) at step 900: 0.10195309668779373
Training acc (for one batch) at step 900: 0.8867924809455872
Seen so far: 57664 samples
Start of epoch 1
Training loss (for one batch) at step 0: 0.08283492177724838
Training acc (for one batch) at step 0: 0.8902004361152649
Seen so far: 64 samples
Training loss (for one batch) at step 100: 0.12081103026866913
Training acc (for one batch) at step 100: 0.8967711925506592
Seen so far: 6464 samples
Training loss (for one batch) at step 200: 0.05446421355009079
Training acc (for one batch) at step 200: 0.9020366668701172
Seen so far: 12864 samples
Training loss (for one batch) at step 300: 0.01716001331806183
Training acc (for one batch) at step 300: 0.9067041873931885
Seen so far: 19264 samples
Training loss (for one batch) at step 400: 0.0460287407040596
Training acc (for one batch) at step 400: 0.9109777808189392
Seen so far: 25664 samples
Training loss (for one batch) at step 500: 0.028502332046628
Training acc (for one batch) at step 500: 0.9144399762153625
Seen so far: 32064 samples
Training loss (for one batch) at step 600: 0.07687141001224518
Training acc (for one batch) at step 600: 0.9176450371742249
Seen so far: 38464 samples
Training loss (for one batch) at step 700: 0.05170559883117676
Training acc (for one batch) at step 700: 0.9203253984451294
Seen so far: 44864 samples
Training loss (for one batch) at step 800: 0.17578867077827454
Training acc (for one batch) at step 800: 0.9225265979766846
Seen so far: 51264 samples
Training loss (for one batch) at step 900: 0.0359458401799202
Training acc (for one batch) at step 900: 0.9249132871627808
Seen so far: 57664 samples
Start of epoch 2
Training loss (for one batch) at step 0: 0.06016453728079796
Training acc (for one batch) at step 0: 0.9259728193283081
Seen so far: 64 samples
Training loss (for one batch) at step 100: 0.023505765944719315
Training acc (for one batch) at step 100: 0.928169310092926
Seen so far: 6464 samples
Training loss (for one batch) at step 200: 0.11723443120718002
Training acc (for one batch) at step 200: 0.9300487637519836
Seen so far: 12864 samples
Training loss (for one batch) at step 300: 0.031387344002723694
Training acc (for one batch) at step 300: 0.931891918182373
Seen so far: 19264 samples
Training loss (for one batch) at step 400: 0.08950603753328323
Training acc (for one batch) at step 400: 0.9336555600166321
Seen so far: 25664 samples
Training loss (for one batch) at step 500: 0.1682843267917633
Training acc (for one batch) at step 500: 0.9352838397026062
Seen so far: 32064 samples
Training loss (for one batch) at step 600: 0.07134267687797546
Training acc (for one batch) at step 600: 0.9368563294410706
Seen so far: 38464 samples
Training loss (for one batch) at step 700: 0.17884895205497742
Training acc (for one batch) at step 700: 0.9381975531578064
Seen so far: 44864 samples
Training loss (for one batch) at step 800: 0.020642656832933426
Training acc (for one batch) at step 800: 0.939350962638855
Seen so far: 51264 samples
Training loss (for one batch) at step 900: 0.06277105957269669
Training acc (for one batch) at step 900: 0.9406013488769531
Seen so far: 57664 samples
Start of epoch 3
Training loss (for one batch) at step 0: 0.3271031081676483
Training acc (for one batch) at step 0: 0.9411431550979614
Seen so far: 64 samples
Training loss (for one batch) at step 100: 0.009884987957775593
Training acc (for one batch) at step 100: 0.9423695802688599
Seen so far: 6464 samples
Training loss (for one batch) at step 200: 0.0861096903681755
Training acc (for one batch) at step 200: 0.9434368014335632
Seen so far: 12864 samples
Training loss (for one batch) at step 300: 0.03412575647234917
Training acc (for one batch) at step 300: 0.9445208311080933
Seen so far: 19264 samples
Training loss (for one batch) at step 400: 0.01749270409345627
Training acc (for one batch) at step 400: 0.9455762505531311
Seen so far: 25664 samples
Training loss (for one batch) at step 500: 0.05833020061254501
Training acc (for one batch) at step 500: 0.9464878439903259
Seen so far: 32064 samples
Training loss (for one batch) at step 600: 0.014883903786540031
Training acc (for one batch) at step 600: 0.9473872184753418
Seen so far: 38464 samples
Training loss (for one batch) at step 700: 0.12477491050958633
Training acc (for one batch) at step 700: 0.9482131600379944
Seen so far: 44864 samples
Training loss (for one batch) at step 800: 0.016410809010267258
Training acc (for one batch) at step 800: 0.9490020275115967
Seen so far: 51264 samples
Training loss (for one batch) at step 900: 0.04509979113936424
Training acc (for one batch) at step 900: 0.9497988820075989
Seen so far: 57664 samples
Start of epoch 4
Training loss (for one batch) at step 0: 0.011778784915804863
Training acc (for one batch) at step 0: 0.9501549601554871
Seen so far: 64 samples
Training loss (for one batch) at step 100: 0.10376890003681183
Training acc (for one batch) at step 100: 0.9508894085884094
Seen so far: 6464 samples
Training loss (for one batch) at step 200: 0.010521911084651947
Training acc (for one batch) at step 200: 0.9516459703445435
Seen so far: 12864 samples
Training loss (for one batch) at step 300: 0.0976869985461235
Training acc (for one batch) at step 300: 0.9523227214813232
Seen so far: 19264 samples
Training loss (for one batch) at step 400: 0.029451124370098114
Training acc (for one batch) at step 400: 0.9530082941055298
Seen so far: 25664 samples
Training loss (for one batch) at step 500: 0.010705593042075634
Training acc (for one batch) at step 500: 0.9537351727485657
Seen so far: 32064 samples
Training loss (for one batch) at step 600: 0.10438527166843414
Training acc (for one batch) at step 600: 0.9543962478637695
Seen so far: 38464 samples
Training loss (for one batch) at step 700: 0.07084573805332184
Training acc (for one batch) at step 700: 0.9549960494041443
Seen so far: 44864 samples
Training loss (for one batch) at step 800: 0.011644931510090828
Training acc (for one batch) at step 800: 0.9554905295372009
Seen so far: 51264 samples
Training loss (for one batch) at step 900: 0.10777392983436584
Training acc (for one batch) at step 900: 0.9560443758964539
Seen so far: 57664 samples
Start of epoch 5
Training loss (for one batch) at step 0: 0.2930106222629547
Training acc (for one batch) at step 0: 0.9562993049621582
Seen so far: 64 samples
Training loss (for one batch) at step 100: 0.014714164659380913
Training acc (for one batch) at step 100: 0.9568562507629395
Seen so far: 6464 samples
Training loss (for one batch) at step 200: 0.03289126604795456
Training acc (for one batch) at step 200: 0.9573521018028259
Seen so far: 12864 samples
Training loss (for one batch) at step 300: 0.1736689805984497
Training acc (for one batch) at step 300: 0.9578405618667603
Seen so far: 19264 samples
Training loss (for one batch) at step 400: 0.10588037967681885
Training acc (for one batch) at step 400: 0.9583712220191956
Seen so far: 25664 samples
Training loss (for one batch) at step 500: 0.07402394711971283
Training acc (for one batch) at step 500: 0.9588512778282166
Seen so far: 32064 samples
Training loss (for one batch) at step 600: 0.024070311337709427
Training acc (for one batch) at step 600: 0.9593664407730103
Seen so far: 38464 samples
Training loss (for one batch) at step 700: 0.02970535308122635
Training acc (for one batch) at step 700: 0.9598015546798706
Seen so far: 44864 samples
Training loss (for one batch) at step 800: 0.03170570731163025
Training acc (for one batch) at step 800: 0.9601752758026123
Seen so far: 51264 samples
Training loss (for one batch) at step 900: 0.05106952413916588
Training acc (for one batch) at step 900: 0.9606027007102966
Seen so far: 57664 samples
Start of epoch 6
Training loss (for one batch) at step 0: 0.12608285248279572
Training acc (for one batch) at step 0: 0.9608014225959778
Seen so far: 64 samples
Training loss (for one batch) at step 100: 0.15533384680747986
Training acc (for one batch) at step 100: 0.9611939787864685
Seen so far: 6464 samples
Training loss (for one batch) at step 200: 0.031242195516824722
Training acc (for one batch) at step 200: 0.9616509079933167
Seen so far: 12864 samples
Training loss (for one batch) at step 300: 0.1262103021144867
Training acc (for one batch) at step 300: 0.9620844721794128
Seen so far: 19264 samples
Training loss (for one batch) at step 400: 0.07261557877063751
Training acc (for one batch) at step 400: 0.9624906778335571
Seen so far: 25664 samples
Training loss (for one batch) at step 500: 0.022258084267377853
Training acc (for one batch) at step 500: 0.962868332862854
Seen so far: 32064 samples
Training loss (for one batch) at step 600: 0.19884449243545532
Training acc (for one batch) at step 600: 0.9632488489151001
Seen so far: 38464 samples
Training loss (for one batch) at step 700: 0.059015851467847824
Training acc (for one batch) at step 700: 0.9636075496673584
Seen so far: 44864 samples
Training loss (for one batch) at step 800: 0.009861024096608162
Training acc (for one batch) at step 800: 0.9639550447463989
Seen so far: 51264 samples
Training loss (for one batch) at step 900: 0.02905181236565113
Training acc (for one batch) at step 900: 0.9643469452857971
Seen so far: 57664 samples
Start of epoch 7
Training loss (for one batch) at step 0: 0.2512659728527069
Training acc (for one batch) at step 0: 0.9644768238067627
Seen so far: 64 samples
Training loss (for one batch) at step 100: 0.013335241936147213
Training acc (for one batch) at step 100: 0.9648387432098389
Seen so far: 6464 samples
Training loss (for one batch) at step 200: 0.03970073163509369
Training acc (for one batch) at step 200: 0.9651807546615601
Seen so far: 12864 samples
Training loss (for one batch) at step 300: 0.03974173590540886
Training acc (for one batch) at step 300: 0.965496838092804
Seen so far: 19264 samples
Training loss (for one batch) at step 400: 0.058757681399583817
Training acc (for one batch) at step 400: 0.9658218026161194
Seen so far: 25664 samples
Training loss (for one batch) at step 500: 0.0025968223344534636
Training acc (for one batch) at step 500: 0.9661375284194946
Seen so far: 32064 samples
Training loss (for one batch) at step 600: 0.025393841788172722
Training acc (for one batch) at step 600: 0.9664204716682434
Seen so far: 38464 samples
Training loss (for one batch) at step 700: 0.005815902724862099
Training acc (for one batch) at step 700: 0.9667128324508667
Seen so far: 44864 samples
Training loss (for one batch) at step 800: 0.05530892312526703
Training acc (for one batch) at step 800: 0.9669612050056458
Seen so far: 51264 samples
Training loss (for one batch) at step 900: 0.002984172198921442
Training acc (for one batch) at step 900: 0.9672510623931885
Seen so far: 57664 samples
Start of epoch 8
Training loss (for one batch) at step 0: 0.0009516647551208735
Training acc (for one batch) at step 0: 0.9673751592636108
Seen so far: 64 samples
Training loss (for one batch) at step 100: 0.0074845259077847
Training acc (for one batch) at step 100: 0.9676440358161926
Seen so far: 6464 samples
Training loss (for one batch) at step 200: 0.0045386748388409615
Training acc (for one batch) at step 200: 0.9679445028305054
Seen so far: 12864 samples
Training loss (for one batch) at step 300: 0.025977719575166702
Training acc (for one batch) at step 300: 0.968227207660675
Seen so far: 19264 samples
Training loss (for one batch) at step 400: 0.011723874136805534
Training acc (for one batch) at step 400: 0.968504786491394
Seen so far: 25664 samples
Training loss (for one batch) at step 500: 0.021272752434015274
Training acc (for one batch) at step 500: 0.9687578082084656
Seen so far: 32064 samples
Training loss (for one batch) at step 600: 0.0551300123333931
Training acc (for one batch) at step 600: 0.9689949750900269
Seen so far: 38464 samples
Training loss (for one batch) at step 700: 0.045192983001470566
Training acc (for one batch) at step 700: 0.9692111015319824
Seen so far: 44864 samples
Training loss (for one batch) at step 800: 0.008501145988702774
Training acc (for one batch) at step 800: 0.9694314002990723
Seen so far: 51264 samples
Training loss (for one batch) at step 900: 0.00011530512711033225
Training acc (for one batch) at step 900: 0.9696688055992126
Seen so far: 57664 samples
Start of epoch 9
Training loss (for one batch) at step 0: 0.007957170717418194
Training acc (for one batch) at step 0: 0.9697684049606323
Seen so far: 64 samples
Training loss (for one batch) at step 100: 0.00420426158234477
Training acc (for one batch) at step 100: 0.9699687361717224
Seen so far: 6464 samples
Training loss (for one batch) at step 200: 0.011408806778490543
Training acc (for one batch) at step 200: 0.970206081867218
Seen so far: 12864 samples
Training loss (for one batch) at step 300: 0.04517115280032158
Training acc (for one batch) at step 300: 0.9704325795173645
Seen so far: 19264 samples
Training loss (for one batch) at step 400: 0.03688410297036171
Training acc (for one batch) at step 400: 0.9706663489341736
Seen so far: 25664 samples
Training loss (for one batch) at step 500: 0.010057760402560234
Training acc (for one batch) at step 500: 0.9708983898162842
Seen so far: 32064 samples
Training loss (for one batch) at step 600: 0.0012681849766522646
Training acc (for one batch) at step 600: 0.9710630178451538
Seen so far: 38464 samples
Training loss (for one batch) at step 700: 0.031731873750686646
Training acc (for one batch) at step 700: 0.9712548851966858
Seen so far: 44864 samples
Training loss (for one batch) at step 800: 0.007447856478393078
Training acc (for one batch) at step 800: 0.9714374542236328
Seen so far: 51264 samples
Training loss (for one batch) at step 900: 0.0392756387591362
Training acc (for one batch) at step 900: 0.9716362357139587
Seen so far: 57664 samples
###Markdown
Evaluation
###Code
y_pred = conv(x_test) # sigmoid outputs for the test set
# CategoricalAccuracy is stateful and still holds the counts accumulated during training,
# so reset it before measuring test accuracy
acc.reset_states()
a = acc(y_test, y_pred)
print(a)
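# (Added sketch) test loss with the same loss function used during training
print('Test loss:', float(loss_fn(y_test, y_pred)))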
###Output
tf.Tensor(0.9718164, shape=(), dtype=float32)
|
aulas/projetos/projeto_pad.ipynb | ###Markdown
Data Science Project with Python Names: Eric Mizote Garcia, Flávio Bezerra Pereira, Frederico De Meira Bastone, Laura Ladeia Maciel 1 - Importing the libraries
###Code
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
###Output
_____no_output_____
###Markdown
2 - Importing the dataset we are going to analyze
###Code
url = 'https://raw.githubusercontent.com/flaviobp/python-analise-dados/main/aulas/projetos/metadados_telecomunicacoes/WA_Fn-UseC_-Telco-Customer-Churn.csv'
df = pd.read_csv(url)
###Output
_____no_output_____
###Markdown
**Telecommunications Variable Descriptions**>* CustomerID: A unique ID that identifies each customer.* Gender: The customer's gender: Male, Female* Age: The customer's current age, in years, at the time the fiscal quarter ended.* Senior Citizen: Indicates whether the customer is 65 or older: Yes, No* Married (Partner): Indicates whether the customer is married: Yes, No* Dependents: Indicates whether the customer lives with any dependents: Yes, No. Dependents can be children, parents, grandparents, etc.* Number of Dependents: Indicates the number of dependents who live with the customer.* Phone Service: Indicates whether the customer subscribes to home phone service with the company: Yes, No* Multiple Lines: Indicates whether the customer subscribes to multiple telephone lines with the company: Yes, No* Internet Service: Indicates whether the customer subscribes to Internet service with the company: No, DSL, Fiber Optic, Cable.* Online Security: Indicates whether the customer subscribes to an additional online security service provided by the company: Yes, No* Online Backup: Indicates whether the customer subscribes to an additional online backup service provided by the company: Yes, No* Device Protection Plan: Indicates whether the customer subscribes to an additional device protection plan for their Internet equipment provided by the company: Yes, No* Premium Tech Support: Indicates whether the customer subscribes to an additional technical support plan from the company with reduced wait times: Yes, No* Streaming TV: Indicates whether the customer uses their Internet service to stream television programming from a third-party provider: Yes, No. The company does not charge an additional fee for this service.* Streaming Movies: Indicates whether the customer uses their Internet service to stream movies from a third-party provider: Yes, No. The company does not charge an additional fee for this service.* Contract: Indicates the customer's current contract type: Month-to-Month, One Year, Two Years.* Paperless Billing: Indicates whether the customer has opted for paperless billing: Yes, No* Payment Method: Indicates how the customer pays their bill: Bank Withdrawal, Credit Card, Mailed Check* Monthly Charge: Indicates the customer's current total monthly charge for all of their services from the company.* Total Charges: Indicates the customer's total charges, calculated up to the end of the quarter specified above.* Tenure: Indicates the total number of months the customer has been with the company.* Churn: Yes = the customer left the company this quarter. No = the customer remained with the company. Directly related to the Churn value.
###Code
# Exibindo as primeiras linhas do Dataframe
df.head(10)
###Output
_____no_output_____
###Markdown
3 - Data preparation Once the data has been fully mapped, it is time to prepare it. At this stage it is important to run some checks, such as duplicate or missing records, data in odd formats (a negative age, for example), and highly discrepant values (the so-called outliers). Leaving this data untreated can lead to hasty conclusions. At this point it is always very important to stay close to the business area requesting the project because, in the case of outliers for example, what looks like a discrepant value is often a behaviour that, although unusual, does in fact occur in the context of the problem. **Dataframe Shape**
###Code
# numero de linhas (7043) e colunas (21) do Data Frame
df.shape
###Output
_____no_output_____
###Markdown
**Duplicate Records**
###Code
# registros duplicados
ids = df.customerID
df[ids.isin(ids[ids.duplicated()])].sort_values(by="customerID")
# contagem de valores distintos para a chave customerID
df['customerID'].nunique()
###Output
_____no_output_____
###Markdown
Conclusion: No duplicate records were found based on the customerID key; the number of rows (7043) equals the count of distinct values. **Missing Data**
###Code
df.isnull().sum()
###Output
_____no_output_____
###Markdown
Conclusion: No missing values were found. **Data Types**
###Code
df.info()
df.columns.to_series().groupby(df.dtypes).groups
###Output
_____no_output_____
###Markdown
Conclusion: 3 numeric columns and 18 text columns. **Checking the standardization of the text columns**
###Code
df_text = df.select_dtypes(exclude=[np.number])
df_text.head()
# customerID não possui valores duplicados (chave do df)
df_text.customerID.nunique()
# gender - valores corretos
df_text.gender.unique()
# Partner - valores corretos
df_text.Partner.unique()
# Dependents - valores corretos
df_text.Dependents.unique()
# PhoneService - valores corretos
df_text.PhoneService.unique()
# MultipleLines - valor 'No phone service' não mapeado na descricao
df_text.MultipleLines.value_counts(normalize=True)
## replace 'No phone service' para 'No'
df_text.MultipleLines = df_text.MultipleLines.replace({'No phone service':'No'})
df_text.MultipleLines.value_counts(normalize=True)
# InternetService - valor 'Cable' não encontrado
df_text.InternetService.unique()
# OnlineSecurity - valor 'No internet service' não mapeado na descrição
df_text.OnlineSecurity.value_counts(normalize=True)
## replace 'No internet service' para 'No'
df_text.OnlineSecurity = df_text.OnlineSecurity.replace({'No internet service':'No'})
df_text.OnlineSecurity.value_counts(normalize=True)
# OnlineBackup - valor 'No internet service' não mapeado na descrição
df_text.OnlineBackup.value_counts(normalize=True)
## replace 'No internet service' para 'No'
df_text.OnlineBackup = df_text.OnlineBackup.replace({'No internet service':'No'})
df_text.OnlineBackup.value_counts(normalize=True)
# DeviceProtection - valor 'No internet service' não mapeado na descrição
df_text.DeviceProtection.value_counts(normalize=True)
## replace 'No internet service' para 'No'
df_text.DeviceProtection = df_text.DeviceProtection.replace({'No internet service':'No'})
df_text.DeviceProtection.value_counts(normalize=True)
# TechSupport
df_text.TechSupport.value_counts(normalize=True)
## replace 'No internet service' para 'No'
df_text.TechSupport = df_text.TechSupport.replace({'No internet service':'No'})
df_text.TechSupport.value_counts(normalize=True)
# StreamingTV
df_text.StreamingTV.value_counts(normalize=True)
## replace 'No internet service' para 'No'
df_text.StreamingTV = df_text.StreamingTV.replace({'No internet service':'No'})
df_text.StreamingTV.value_counts(normalize=True)
# StreamingMovies
df_text.StreamingMovies.value_counts(normalize=True)
## replace 'No internet service' para 'No'
df_text.StreamingMovies = df_text.StreamingMovies.replace({'No internet service':'No'})
df_text.StreamingMovies.value_counts(normalize=True)
# Contract - valores corretos
df_text.Contract.value_counts(normalize=True)
# PaperlessBilling - valores corretos
df_text.PaperlessBilling.value_counts(normalize=True)
# PaymentMethod - the description lists three possibilities, but the observed distribution apparently makes sense
df_text.PaymentMethod.value_counts(normalize=True)
# TotalCharges - numeric column that needs to be dropped from df_text and converted
df_text.TotalCharges.nunique()
## convert string to float; if any conversion fails, place NaN
df_totalcharges = pd.to_numeric(df_text.TotalCharges, errors='coerce').round(2)
df_totalcharges.isnull().sum()
## df_totalcharges a ser analisado na parte de colunas numéricas
df_totalcharges.head()
## retirando a coluna TotalCharges no df_text
df_text = df_text.drop(['TotalCharges'], axis=1)
## retirando as linhas com missing values
df_text = df_text.drop(df_text[df_totalcharges.isnull()].index)
df = df.drop(df[df_totalcharges.isnull()].index)
df_totalcharges = df_totalcharges.drop(df_totalcharges[df_totalcharges.isnull()].index)
## formato dos dataframes apos a retirada de missing values
df_text.shape
df.shape
df_totalcharges.shape
## colunas texto
df_text.head()
# Churn - valores corretos
df_text.Churn.value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
**Analysis of the numeric columns**
###Code
df_number = df.select_dtypes(include=[np.number])
df_number.head()
# Concatenar coluna TotalCharges
df_number['TotalCharges'] = df_totalcharges
df_number.head()
# estatisticas basicas
df_number.describe()
# SeniorCitizen é variável categorica
df_number.SeniorCitizen.unique()
# boxplot
fig, axes = plt.subplots(2, 2, figsize=(10, 30))
axes[0,0].set_title('Boxplot tenure')
ax1 = sns.boxplot(ax=axes[0,0], y='tenure', palette='husl', data=df_number)
axes[0,1].set_title('Boxplot MonthlyCharges')
ax2 = sns.boxplot(ax=axes[0,1], y='MonthlyCharges', palette='husl', data=df_number)
axes[1,0].set_title('Boxplot TotalCharges')
ax3 = sns.boxplot(ax=axes[1,0], y='TotalCharges', palette='husl', data=df_number)
# IQR para tenure
q1, q3 = np.percentile(df_number['tenure'], [25, 75])
iqr = q3 - q1
# limite inferior: q1 -(1.5 * iqr)
# limite superior: q3 +(1.5 * iqr)
lower_bound = q1 -(1.5 * iqr)
upper_bound = q3 +(1.5 * iqr)
print('lower bound', lower_bound)
print('upper bound', upper_bound)
df_number['tenure'].sort_values(ascending=False) # no outliers
# IQR for MonthlyCharges
q1, q3 = np.percentile(df_number['MonthlyCharges'], [25, 75])
iqr = q3 - q1
# lower bound: q1 - (1.5 * iqr)
# upper bound: q3 + (1.5 * iqr)
lower_bound = q1 -(1.5 * iqr)
upper_bound = q3 +(1.5 * iqr)
print('lower bound', lower_bound)
print('upper bound', upper_bound)
df_number['MonthlyCharges'].sort_values(ascending=False) # no outliers
# IQR for TotalCharges
q1, q3 = np.percentile(df_number['TotalCharges'], [25, 75])
iqr = q3 - q1
# lower bound: q1 - (1.5 * iqr)
# upper bound: q3 + (1.5 * iqr)
lower_bound = q1 -(1.5 * iqr)
upper_bound = q3 +(1.5 * iqr)
print('lower bound', lower_bound)
print('upper bound', upper_bound)
df_number['TotalCharges'].sort_values(ascending=False) # no outliers
# Distribution of the variables
for i in df_number.columns:
plt.figure()
plt.title(i)
plt.hist(df_number[i])
###Output
_____no_output_____
###Markdown
Conclusion: Based on the boxplots and the IQR, we found no discrepant values that should be treated as outliers. From the unique-value analysis and the histograms we could verify that the SeniorCitizen column is actually a categorical column. 4 - Data exploration Once the data has been prepared and the inconsistencies removed, it is time to experiment with the data. This step is important so that the scientist can validate the hypotheses raised while understanding the problem, and at this point it is also possible to look for patterns and find relationships between the problem variables. **Assembling the final dataframe**
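As a hedged side note, a one-hot alternative to the LabelEncoder step used in the next cell could look like the sketch below (illustrative only; the notebook itself keeps the label-encoded version):

```python
# Hypothetical alternative: one-hot encode the text columns (after dropping the
# target and the customer id) instead of label encoding them.
X_onehot = pd.get_dummies(df_text.drop(['Churn', 'customerID'], axis=1), drop_first=True)
X_onehot.head()
```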
###Code
# Churn will be the target - it indicates whether the customer will leave the company this quarter
y = df_text.Churn.replace({'Yes':1, 'No':0})
# drop the Churn and customerID columns from df_text
df_text = df_text.drop(['Churn','customerID'], axis=1)
# first rows
df_text.head()
# Apply LabelEncoder to the categorical variables
from sklearn.preprocessing import LabelEncoder
df_text = df_text.apply(LabelEncoder().fit_transform)
# first rows after encoding
df_text.head()
# Feature dataframe: number of rows and columns
X = pd.concat([df_text, df_number], axis=1)
X.shape
# First rows
X.head()
# Correlation between the X variables and the target y
df_aux = pd.concat([X, y], axis=1)
corr = df_aux.corr(method='pearson')
plt.figure(figsize=(15,8))
absoluto = np.abs(corr)
# applying a cutoff to the values that are shown
corte = absoluto[(absoluto >= 0.7)]
matriz = np.triu(corte)
sns.heatmap(corte, mask= matriz, annot=True, linewidths=0.01, linecolor='gray', cmap='YlGnBu', fmt='.1g')
###Output
_____no_output_____
###Markdown
Conclusion: We found no leakage into the target variable Churn (no strong correlation with it). We did find a strong correlation between tenure (total number of months a customer has been with the company) and TotalCharges (the customer's total charges up to the end of the quarter).
###Code
# Showing the class balance of the response variable
y.value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
Conclusion: Although the literature considers a dataset imbalanced when one class represents less than 30%, the value of 26% for the variable that indicates customer churn is workable. We will therefore compare the results on the sample before and after balancing to determine the best approach. A sketch of such a comparison follows below.
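One way to run that before/after comparison under the same cross-validation is sketched here (illustrative only: the classifier settings are not the tuned ones used later, and `X`/`y` are the objects built in the cells above):

```python
# Hypothetical comparison sketch: same classifier, with and without SMOTE oversampling.
# Oversampling sits inside the pipeline so only the training folds are resampled.
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
base_clf = RandomForestClassifier(n_estimators=10, random_state=42)

plain = cross_val_score(base_clf, X, y, cv=cv, scoring='roc_auc')
balanced = cross_val_score(
    Pipeline([('smote', SMOTE(random_state=42)), ('rf', base_clf)]),
    X, y, cv=cv, scoring='roc_auc')

print('ROC AUC without SMOTE:', plain.mean())
print('ROC AUC with SMOTE   :', balanced.mean())
```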
###Code
# Feature selection via RFE
from sklearn.svm import SVC
from sklearn.feature_selection import RFE, RFECV
estimator= SVC(kernel='linear')
selector = RFE(estimator, step=100)
selector = selector.fit(X, y)
f = selector.get_support(1)
X_sel = X[X.columns[f]]
X_sel
###Output
_____no_output_____
###Markdown
Conclusion: Feature selection via RFE (~8 minutes of runtime) filtered the data down to 9 variables in X_sel. 5 - Choosing the model (without balancing) In this step you have the opportunity to run numerous tests with different modeling techniques. For example, if the problem at hand is a classification task, several machine learning techniques can be tried, such as decision trees, logistic regression, Random Forest, KNN, among others. For more details see (https://developers.google.com/machine-learning/crash-course). A quick way to screen several of these candidates is sketched below.
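A minimal screening sketch, assuming `X_sel` and `y` from the cells above (for the distance-based and linear models a scaler would normally be added; it is omitted here for brevity):

```python
# Hypothetical model-screening sketch: compare a few candidate classifiers
# under the same stratified cross-validation.
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

candidates = {
    'logistic_regression': LogisticRegression(max_iter=1000),
    'decision_tree': DecisionTreeClassifier(random_state=42),
    'knn': KNeighborsClassifier(),
    'random_forest': RandomForestClassifier(n_estimators=10, random_state=42),
}
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
for name, model in candidates.items():
    scores = cross_val_score(model, X_sel, y, cv=cv, scoring='roc_auc')
    print(f'{name:20s} ROC AUC = {scores.mean():.3f} +/- {scores.std():.3f}')
```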
###Code
# Split into train and test sets
from sklearn.model_selection import train_test_split
# split the data into train and test sets in a stratified way
X_train, X_test, y_train, y_test = train_test_split(X_sel, y,
test_size=0.3, #30% para teste train
stratify=y,
random_state=420) #semente
# Cross-validation with RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import StratifiedKFold
from sklearn.ensemble import RandomForestClassifier
from sklearn import metrics
# cross-validation with n_splits = 5
skf = StratifiedKFold(n_splits=5)
acc_dt = []
# take one fold for training and another for validation from X and y
for tr_idx, vl_idx in skf.split(X_train, y_train):
X_train_f, X_valid_f = X_train.iloc[tr_idx], X_train.iloc[vl_idx]
y_train_f, y_valid_f = y_train.iloc[tr_idx], y_train.iloc[vl_idx]
clf = RandomForestClassifier(random_state=42, n_estimators=10, max_leaf_nodes=30, min_samples_split=15, min_samples_leaf = 10).fit(X_train_f, y_train_f.values.ravel())
y_pred_f = clf.predict(X_valid_f)
acc_dt.append(accuracy_score(y_valid_f, y_pred_f))
print("Acurácia média =",np.mean(acc_dt)*100," %.")
###Output
Acurácia média = 76.0261441954521 %.
###Markdown
Conclusion: Using cross-validation with 5 splits, we obtain an average accuracy of 76% during the training phase.
###Code
# Grid search
from sklearn.model_selection import GridSearchCV
params = {'max_leaf_nodes': [30,20,4], 'min_samples_split':[20,15,3], 'min_samples_leaf':[30,20,10]}
# in RF, more estimators generally means a better model
grid_search_cv = GridSearchCV(RandomForestClassifier(n_estimators=10, random_state=42), params, verbose=1, cv=3, scoring='roc_auc')
grid_search_cv.fit(X_train, y_train)
grid_search_cv.best_params_
###Output
Fitting 3 folds for each of 27 candidates, totalling 81 fits
###Markdown
Conclusion: Based on the grid search, we configure the model to use the tuned hyperparameters, retrain it on the training set and then evaluate it on the test set. A shortcut using the fitted search object is sketched below.
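A small alternative sketch that avoids retyping the tuned values by hand (assuming the fitted `grid_search_cv`, `X_test` and `y_test` from the surrounding cells):

```python
# GridSearchCV refits the best configuration on the full training set by default
# (refit=True), so the tuned model can be reused directly.
best_model = grid_search_cv.best_estimator_
print('Best params  :', grid_search_cv.best_params_)
print('Test accuracy:', best_model.score(X_test, y_test))
```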
###Code
# train the model on the training set
clfAll = RandomForestClassifier(random_state=42, n_estimators=10, max_leaf_nodes=20, min_samples_split=20, min_samples_leaf = 20).fit(X_train, y_train.values.ravel())
y_pred = clfAll.predict(X_test)
print("Acurácia =",accuracy_score(y_test, y_pred)*100," %.")
###Output
Acurácia = 76.77725118483413 %.
###Markdown
Conclusion: We obtain an accuracy of 76% on the test set after training the model. 6 - Evaluating the results (without balancing) In the results-evaluation step we can implement some performance metrics that tell us whether the model is performing well or poorly. Among them are accuracy, F1-score and the ROC curve, among others. For more information see (https://medium.com/@MohammedS/performance-metrics-for-classification-problems-in-machine-learning-part-i-b085d432082b). The definitions used below are summarized next.
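For reference, the metrics reported by `classification_report` below follow the usual definitions, with TP, FP and FN read from the confusion matrix:

$$\text{precision} = \frac{TP}{TP + FP}, \qquad \text{recall} = \frac{TP}{TP + FN}, \qquad F_1 = 2 \cdot \frac{\text{precision} \cdot \text{recall}}{\text{precision} + \text{recall}}$$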
###Code
from sklearn import metrics
# Accuracy evaluation
metrics.accuracy_score(y_test, y_pred)
# ROC curve evaluation
y_pred_proba = clfAll.predict_proba(X_test)[:,1]
fpr, tpr, thresholds = metrics.roc_curve(y_test, y_pred_proba)
metrics.auc(fpr, tpr)
# metrics
print(metrics.classification_report(y_test, y_pred,target_names=['Não','Sim']))
###Output
precision recall f1-score support
Não 0.80 0.91 0.85 1549
Sim 0.60 0.37 0.46 561
accuracy 0.77 2110
macro avg 0.70 0.64 0.65 2110
weighted avg 0.75 0.77 0.75 2110
###Markdown
Conclusion: Precision of 60% for 'Yes' (i.e., for the customer leaving the company), with 77% accuracy and an ROC AUC of about 80%.
###Code
# Plot the ROC curve
from sklearn.metrics import precision_recall_curve, roc_auc_score, roc_curve, auc
plt.figure()
lw= 2
y_pred_proba = clf.predict_proba(X_test)[:,1]
fpr, tpr, _ = roc_curve(y_test, y_pred_proba)
auc_roc = roc_auc_score(y_test, y_pred_proba)
plt.plot([0,1], [0, 1], color='navy', lw=lw, linestyle = '--')
plt.plot(fpr, tpr, label='ROC curve (area = %0.2f)' %auc_roc, color='darkorange')
plt.xlabel('FPR')
plt.ylabel('TPR')
plt.title('ROC')
plt.show()
# Evaluation plots (from the course material)
from sklearn.metrics import confusion_matrix
th = 0.5
fig, axes = plt.subplots(ncols=2, nrows = 2, figsize=(15, 15))
precision, recall, thresholds = precision_recall_curve(y_test, y_pred_proba)
loc = np.argmin(np.abs(thresholds-th))
print('PR AUC: ', auc(recall, precision))
axes[0,0].plot(recall, precision)
axes[0,0].plot(recall[loc], precision[loc], 'ko')
axes[0,0].set_title('Curva de precisao recall')
axes[0,0].set_xlabel('Recall')
axes[0,0].set_title('Precisao')
fpr, tpr, thresholds = roc_curve(y_test, y_pred_proba)
loc = np.argmin(np.abs(thresholds-th))
print('ROC AUC: ', auc(fpr, tpr)*100)
axes[0,1].plot(fpr, tpr, color='blue', label='ROC')
axes[0,1].plot(fpr[loc], tpr[loc], color='red',marker= 'o', label='ROC')
plt.plot([0,1], [0, 1], color='grey', lw=lw, linestyle = '--')
axes[0,1].set_title('Taxa de FP')
axes[0,1].set_xlabel('Taxa de VP')
axes[0,1].set_title('Curva |ROC')
limiar =th
cm = confusion_matrix(y_test, (y_pred_proba>=th))
print(cm/np.sum(cm))
sns.heatmap(cm, vmax=np.max(cm), vmin=np.min(cm), annot=True, square=True, fmt='g', ax=axes[1,0])
axes[1,0].set_title('Matriz de confusao (Limiar={})'.format(limiar))
axes[1,0].set_xlabel('Previsao')
axes[1,0].set_ylabel('Verdadeiro')
lista_fn = []
lista_fp = []
x = []
for i in np.arange(0,1, 0.01):
cm =confusion_matrix(y_test, (y_pred_proba>=i))
lista_fn.append(cm[1,0]/(cm[1,0]+cm[1,1]))
lista_fp.append(cm[0,1]/(cm[0,1]+cm[0,0]))
x.append(i)
axes[1,1].axvline(th, color='k', linestyle=':')
axes[1,1].plot(x, lista_fn, label='FN')
axes[1,1].plot(x, lista_fp, label='FP')
axes[1,1].set_title('FP/FN limiar de decisao')
axes[1,1].legend()
###Output
PR AUC: 0.545423615567116
ROC AUC: 80.55165255256395
[[0.67156398 0.06255924]
[0.17061611 0.09526066]]
###Markdown
* **CONFUSION MATRIX**: The confusion matrix indicates that the number of false positives (360) is small relative to the true positives (1417), i.e., the model predicts well which customers will stay with the company during the quarter. However, the proportion of false negatives relative to the total number of negative outcomes is approximately 39.63%, so the diagnosis that a customer will leave is not reliable.* **PRECISION-RECALL**: The precision-recall curve indicates that the model is returning only moderately precise results (precision of 60%), i.e., it has a middling false-positive rate. In addition, the model does not recover most of the positive cases (recall of 37%), i.e., it has a somewhat high false-negative rate. Adjustments should therefore be made to improve these indicators. 7 - Choosing the model (with balancing)
###Code
from imblearn.over_sampling import SMOTE
sm = SMOTE(random_state=42)
X_sm, y_sm = sm.fit_resample(X_train, y_train)
print(f'''Shape de X antes do balanceamento: {X.shape}
Shape de X depois do balanceamento: {X_sm.shape}''')
print('\nNovo balanceamento da variável resposta (%):')
y_sm.value_counts(normalize=True) * 100
# Dividir em conjunto de teste e treino
from sklearn.model_selection import train_test_split
# divide os dados para treino e teste de modo estratificado
X_train, X_test, y_train, y_test = train_test_split(X_sm,
y_sm,
test_size=0.3, #30% para teste train
random_state=42) #semente
# Cross validation com RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import StratifiedKFold
from sklearn.ensemble import RandomForestClassifier
from sklearn import metrics
# cross validation com n_splits = 5
skf = StratifiedKFold(n_splits=5)
acc_dt = []
# pega um pedaco para treino outro para validacao em X e Y
for tr_idx, vl_idx in skf.split(X_train, y_train):
X_train_f, X_valid_f = X_train.iloc[tr_idx], X_train.iloc[vl_idx]
y_train_f, y_valid_f = y_train.iloc[tr_idx], y_train.iloc[vl_idx]
clf = RandomForestClassifier(random_state=42, n_estimators=10, max_leaf_nodes=30, min_samples_split=15, min_samples_leaf = 10).fit(X_train_f, y_train_f.values.ravel())
y_pred_f = clf.predict(X_valid_f)
acc_dt.append(accuracy_score(y_valid_f, y_pred_f))
print("Acurácia média =",np.mean(acc_dt)*100," %.")
# Grid search
from sklearn.model_selection import GridSearchCV
params = {'max_leaf_nodes': [30,20,4], 'min_samples_split':[20,15,3], 'min_samples_leaf':[30,20,10]}
# no RF quanto mais estimadores melhor o modelo
grid_search_cv = GridSearchCV(RandomForestClassifier(n_estimators=10, random_state=42), params, verbose=1, cv=3, scoring='roc_auc')
grid_search_cv.fit(X_train, y_train)
grid_search_cv.best_params_
###Output
Fitting 3 folds for each of 27 candidates, totalling 81 fits
###Markdown
Conclusion: Based on the grid search, we configure the model to use the tuned hyperparameters, retrain it on the training set and then evaluate it on the test set.
###Code
# treina o modelo com o conjunto de treino
clfAll = RandomForestClassifier(random_state=42, n_estimators=10, max_leaf_nodes=30, min_samples_split=10, min_samples_leaf = 20).fit(X_train, y_train.values.ravel())
y_pred = clfAll.predict(X_test)
print("Acurácia =",accuracy_score(y_test, y_pred)*100," %.")
###Output
Acurácia = 73.76671277086216 %.
###Markdown
8 - Evaluating the results (with balancing) In the results-evaluation step we can implement some performance metrics that tell us whether the model is performing well or poorly. Among them are accuracy, F1-score and the ROC curve, among others. For more information see (https://medium.com/@MohammedS/performance-metrics-for-classification-problems-in-machine-learning-part-i-b085d432082b)
###Code
from sklearn import metrics
# Avaliacao Acuracia
metrics.accuracy_score(y_test, y_pred)
# Avaliação curva ROC
y_pred_proba = clfAll.predict_proba(X_test)[:,1]
fpr, tpr, thresholds = metrics.roc_curve(y_test, y_pred_proba)
metrics.auc(fpr, tpr)
# metricas
print(metrics.classification_report(y_test, y_pred,target_names=['Não','Sim']))
###Output
precision recall f1-score support
Não 0.81 0.63 0.71 1103
Sim 0.69 0.85 0.76 1066
accuracy 0.74 2169
macro avg 0.75 0.74 0.74 2169
weighted avg 0.75 0.74 0.73 2169
###Markdown
Conclusion: Precision of 69% for 'Yes' (the customer leaving the company), with 74% accuracy and an ROC AUC of about 81%.
###Code
# Visualizar curva ROC
from sklearn.metrics import precision_recall_curve, roc_auc_score, roc_curve, auc
plt.figure()
lw= 2
y_pred_proba = clf.predict_proba(X_test)[:,1]
fpr, tpr, _ = roc_curve(y_test, y_pred_proba)
auc_roc = roc_auc_score(y_test, y_pred_proba)
plt.plot([0,1], [0, 1], color='navy', lw=lw, linestyle = '--')
plt.plot(fpr, tpr, label='ROC curve (area = %0.2f)' %auc_roc, color='darkorange')
plt.xlabel('FPR')
plt.ylabel('TPR')
plt.title('ROC')
plt.show()
# Graficos aula
from sklearn.metrics import confusion_matrix
th = 0.5
fig, axes = plt.subplots(ncols=2, nrows = 2, figsize=(15, 15))
precision, recall, thresholds = precision_recall_curve(y_test, y_pred_proba)
loc = np.argmin(np.abs(thresholds-th))
print('PR AUC: ', auc(recall, precision))
axes[0,0].plot(recall, precision)
axes[0,0].plot(recall[loc], precision[loc], 'ko')
axes[0,0].set_title('Curva de precisao recall')
axes[0,0].set_xlabel('Recall')
axes[0,0].set_title('Precisao')
fpr, tpr, thresholds = roc_curve(y_test, y_pred_proba)
loc = np.argmin(np.abs(thresholds-th))
print('ROC AUC: ', auc(fpr, tpr)*100)
axes[0,1].plot(fpr, tpr, color='blue', label='ROC')
axes[0,1].plot(fpr[loc], tpr[loc], color='red',marker= 'o', label='ROC')
plt.plot([0,1], [0, 1], color='grey', lw=lw, linestyle = '--')
axes[0,1].set_title('Taxa de FP')
axes[0,1].set_xlabel('Taxa de VP')
axes[0,1].set_title('Curva |ROC')
limiar =th
cm = confusion_matrix(y_test, (y_pred_proba>=th))
print(cm/np.sum(cm))
sns.heatmap(cm, vmax=np.max(cm), vmin=np.min(cm), annot=True, square=True, fmt='g', ax=axes[1,0])
axes[1,0].set_title('Matriz de confusao (Limiar={})'.format(limiar))
axes[1,0].set_xlabel('Previsao')
axes[1,0].set_ylabel('Verdadeiro')
lista_fn = []
lista_fp = []
x = []
for i in np.arange(0,1, 0.01):
cm =confusion_matrix(y_test, (y_pred_proba>=i))
lista_fn.append(cm[1,0]/(cm[1,0]+cm[1,1]))
lista_fp.append(cm[0,1]/(cm[0,1]+cm[0,0]))
x.append(i)
axes[1,1].axvline(th, color='k', linestyle=':')
axes[1,1].plot(x, lista_fn, label='FN')
axes[1,1].plot(x, lista_fp, label='FP')
axes[1,1].set_title('FP/FN limiar de decisao')
axes[1,1].legend()
###Output
PR AUC: 0.7769640782228534
ROC AUC: 81.57663986501082
[[0.32780083 0.18072845]
[0.07699401 0.41447672]]
###Markdown
* **CONFUSION MATRIX**: The confusion matrix indicates that the number of false positives (167) is small relative to the true positives (711), i.e., the model predicts well which customers will stay with the company during the quarter. The proportion of false negatives relative to the total number of negative outcomes is approximately 30.36%, which makes the diagnosis that a customer will stay more trustworthy than in the unbalanced model. The result therefore fulfills the purpose of the analysis (finding a way to retain customers), since it allows the company to identify dissatisfied customers well and adopt retention strategies before they potentially leave.* **PRECISION-RECALL**: The precision-recall curve indicates that the model returns better results than the unbalanced model did (precision of 69% versus 60% in the previous test). The recall indicator improved even more noticeably, going from 37% to 85%, so the model now has a low false-negative rate.* **CONCLUSION**: the model's accuracy dropped by about 3% after balancing; however, the improvements in precision, recall and the confusion matrix compensate for that loss, making the balanced model the more effective one.
###Code
# Comparison with a random model
y_test_df = y_test.to_frame()
y_test_df.loc[:,'pred'] = y_pred_proba
y_test_df
# Comparison with a random model
plt.figure(figsize=(8,8))
ax = plt.subplot(1,1,1)
aux = y_test_df.sort_values('pred', ascending=False).reset_index()
aux.loc[:, 'y_win'] = aux.Churn.cumsum().div(np.arange(1,aux.shape[0]+1))
aux.y_win.plot(color='b', label='Modelo')
aux2 = y_test_df.sort_values('pred', ascending=False).sample(frac=1).reset_index()
aux2.loc[:, 'y_win'] = aux2.Churn.cumsum().div(np.arange(1,aux.shape[0]+1))
aux2.y_win.plot(color='k', label='Aleatorio')
plt.ylabel('Qts clientes deixam a empresa')
plt.axvline(100, color='k', linestyle=':')
plt.legend();
###Output
_____no_output_____
###Markdown
Conclusion: The model's curve sits above the random curve, so we get more correct hits through the model than through random selection.
###Code
# Feature importances
imps= clf.feature_importances_
cols = X_test.columns
order = np.argsort(imps)[::-1]
for col, imp in zip(cols[order], imps[order]):
print(f'{col:30s} | {imp:7.3f}')
M = X_test['Contract'].max()
plt.figure(figsize=(15,5))
plt.hist(X_test.loc[y_test==0]['Contract'], bins=np.linspace(0,M, 20), color='black', density=True, rwidth=.3)
plt.hist(X_test.loc[y_test==1]['Contract'], bins=np.linspace(0,M, 20), color='green', density=True, rwidth=.3)
###Output
_____no_output_____ |
doc/rotation/kochanek-bartels.ipynb | ###Markdown
This notebook is part of https://github.com/AudioSceneDescriptionFormat/splines, see also https://splines.readthedocs.io/.[back to rotation splines](index.ipynb) Kochanek--Bartels-like Rotation Splines
###Code
import numpy as np
###Output
_____no_output_____
###Markdown
[helper.py](helper.py)
###Code
from helper import angles2quat, animate_rotations, display_animation
###Output
_____no_output_____
###Markdown
[splines.quaternion.KochanekBartels](../python-module/splines.quaternion.rstsplines.quaternion.KochanekBartels)
###Code
from splines.quaternion import KochanekBartels
rotations = [
angles2quat(0, 0, 0),
angles2quat(90, 0, 0),
angles2quat(90, 90, 0),
angles2quat(90, 90, 90),
]
###Output
_____no_output_____
###Markdown
Uniform Catmull--Rom
###Code
s = KochanekBartels(rotations)
times = np.linspace(s.grid[0], s.grid[-1], 100)
ani = animate_rotations(s.evaluate(times), figsize=(3, 2))
display_animation(ani, default_mode='reflect')
###Output
_____no_output_____
###Markdown
Non-Uniform Catmull--Rom
###Code
grid = 0, 0.2, 0.9, 1.2
s = KochanekBartels(rotations, grid)
times = np.linspace(s.grid[0], s.grid[-1], 100)
ani = animate_rotations(s.evaluate(times), figsize=(3, 2))
display_animation(ani, default_mode='reflect')
###Output
_____no_output_____
###Markdown
TCB
###Code
s = KochanekBartels(rotations, tcb=[0, 1, 1])
times = np.linspace(s.grid[0], s.grid[-1], 100)
ani = animate_rotations(s.evaluate(times), figsize=(3, 2))
display_animation(ani, default_mode='reflect')
###Output
_____no_output_____
###Markdown
Edge Cases* 0 or 1 quaternions: not allowed* 2 quaternions: Slerp* 180° rotations (dot product = 0)?* ... 2 quaternions:
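(Before the two-quaternion example below, here is a quick, illustrative probe of the first edge case; the exact exception type raised by `KochanekBartels` is not asserted here.)

```python
# Hypothetical sketch: confirm that 0 or 1 rotations are rejected.
for too_few in ([], [angles2quat(0, 0, 0)]):
    try:
        KochanekBartels(too_few)
    except Exception as e:
        print(f'{len(too_few)} rotation(s): {type(e).__name__}: {e}')
```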
###Code
rotations = [
angles2quat(0, 0, 0),
angles2quat(90, 90, 90),
]
s = KochanekBartels(rotations)
times = np.linspace(s.grid[0], s.grid[-1], 100)
ani = animate_rotations(s.evaluate(times), figsize=(3, 2))
display_animation(ani, default_mode='reflect')
###Output
_____no_output_____ |
src/Hospital_Billing/InductiveMiner_after.ipynb | ###Markdown
Inductive Miner Step 1: Handling and import event data
###Code
import pm4py
from pm4py.objects.log.importer.xes import importer as xes_importer
from pm4py.algo.discovery.inductive import algorithm as inductive_miner
log = xes_importer.apply("Hospital_Billing-3-filtering.xes")
###Output
_____no_output_____
###Markdown
Step 2: Mining event log - Process Discovery
###Code
from pm4py.algo.discovery.inductive import algorithm as inductive_miner
from pm4py.visualization.process_tree import visualizer as pt_visualizer
net, initial_marking, final_marking = inductive_miner.apply(log)
tree = inductive_miner.apply_tree(log)
###Output
_____no_output_____
###Markdown
Step 3: Visualize Petri net and Process Tree of Mined Process from log
###Code
pm4py.view_petri_net(net, initial_marking, final_marking)
gviz = pt_visualizer.apply(tree)
pt_visualizer.view(gviz)
###Output
_____no_output_____
###Markdown
Step 4: Convert Petri Net to BPMN
###Code
bpmn_graph = pm4py.convert_to_bpmn(*[net, initial_marking, final_marking])
pm4py.view_bpmn(bpmn_graph, "png")
###Output
_____no_output_____
###Markdown
Step 5: Log-Model Evaluation Replay Fitness
###Code
from pm4py.algo.evaluation.replay_fitness import algorithm as replay_fitness_evaluator
fitness = replay_fitness_evaluator.apply(log, net, initial_marking, final_marking, variant=replay_fitness_evaluator.Variants.TOKEN_BASED)
fitness
###Output
_____no_output_____
###Markdown
Precision
###Code
from pm4py.algo.evaluation.precision import algorithm as precision_evaluator
prec = precision_evaluator.apply(log, net, initial_marking, final_marking, variant=precision_evaluator.Variants.ETCONFORMANCE_TOKEN)
prec
###Output
_____no_output_____
###Markdown
F-measure
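The score computed below is the harmonic mean of the replay fitness and the precision obtained above:

$$F = \frac{2 \cdot \text{fitness} \cdot \text{precision}}{\text{fitness} + \text{precision}}$$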
###Code
def f_measure(f, p):
return (2*f*p)/(f+p)
f_measure(fitness['average_trace_fitness'], prec)
fps = pm4py.discovery.discover_footprints(log, *[net, initial_marking, final_marking])
fps
%reset -f
###Output
_____no_output_____ |
Pro Prog.ipynb | ###Markdown
Project for Programming Creating a dataset I decided to simulate a dataset about whiskeys  Research I looked at a number of existing datasets out there about wines. These datasets are interesting as wines, like spirits, are somewhat impervious to the usual regulations on listing their ingredients; even the alcohol content is observed to be more of an estimation than anything else. I looked at one particular dataset about wine quality. The author's inputs were physicochemical properties and the outputs are sensory data. Two sets were created using white and red wines. A panel of experts scored the quality of the wines between 0 and 10, 0 being the worst and 10 being excellent. A number of methods were applied, many beyond my limited abilities, but I found this dataset to be a good starting point as it is interesting/relatable Wine Dataset Reference https://www.kaggle.com/piyushgoyal443/red-wine-datasetwineQualityInfo.txt P. Cortez, A. Cerdeira, F. Almeida, T. Matos and J. Reis. Modeling wine preferences by data mining from physicochemical properties. In Decision Support Systems, Elsevier, 47(4):547-553. ISSN: 0167-9236 Whiskey Dataset https://www.kaggle.com/koki25ando/scotch-whisky-dataset This dataset looks at scotch whiskies, with a focus on flavours and preference by the expert panel. It is not a highly complex dataset, but having looked at a few I feel it is best to start with something simple for the purposes of this assignment. I also really liked this analysis - the author used R and ggplot. The plots look great and it is a nicely explained exercise for a total novice http://www.drbunsen.org/whisky-analysis/ Focus Rather than focusing on quality per se, this dataset will hopefully provide an overview of current preferences/tastes. Are increasingly sweet tastes killing tradition? A recent article noted how the traditional herbaceous flavours of gin are lessening in favour of sweetened pink gins: https://www.theguardian.com/food/2018/dec/06/pink-terror-how-sweet-gins-muscled-in-on-the-artisan-market-pink-gin It is generally accepted that aged whiskeys with a dark colour, along with the barrel conditions etc., are what make a great, high-quality whisky and are likely the preference of a panel of experts. These types of whiskies are often complex, hot, smoky and deep. They require little embellishment other than air and a little water. To try to ascertain what appeals to a more general audience, the panel for this simulated dataset have all identified as non-experts. The Whiskeys I took a list from a search located here: http://www.dcs.ed.ac.uk/home/jhb/whisky/spread.html I saved it to Excel and used RAND(0) to reshuffle it. The Variables - Brand taken from the dataset referenced above - Years Aged up to 7- VOL between 40 & 45 - Sweetness 1 to 10, 1 indicating little or none, 10 the strongest- Shade 1 to 10, 1 indicating little or none, 10 the strongest- Heat 1 to 10, 1 indicating little or none, 10 the strongest - Smoke 1 to 10, 1 indicating little or none, 10 the strongest- Age of Participant- Grade 1 to 10, 1 being the worst, 10 being excellent
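As a side note on the 1-to-10 sensory scales described above, one simple way to keep a normally distributed score inside those bounds is sketched here (illustrative only; the mean and spread used are assumptions, not the values used in the cell below):

```python
# Hypothetical helper: draw a 1-10 sensory score from a normal distribution,
# clipped to the scale limits and rounded to whole points.
import numpy as np

def bounded_score(mean=7, sd=1.5, size=101, rng=np.random.default_rng(45)):
    return np.clip(rng.normal(mean, sd, size), 1, 10).round().astype(int)

print(bounded_score(size=5))
```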
###Code
import pandas as pd
import numpy as np
np.random.seed(45)
pd.options.display.max_rows= 101
df = pd.DataFrame({'Brands':['Ardmore','LeaValley','Glengalwan','Tullibardine','Braeval','Glengoblin','Glengalwan',
'Glengalwan',
'Braeval',
'Tullibardine',
'Thomas Street',
'Braes of Glenlivet',
'Glen Albyn',
'Rosebank',
'Glentarras',
'Pitilie',
'Hillside',
'Campbeltown',
'Inisfinn',
'Scotia',
'Springbank',
'Dalaruan',
'Benmore',
'Scapa',
'Glenesk',
'Glenfarclas',
'Dufftown',
'Auchinblae',
'Glentauchers',
'Brora',
'Inchmurrin',
'Dean',
'Devanha',
'Lochnagar',
'Speyside',
'Kininvie',
'Bankier',
'Isla',
'Bowmore',
'Boness',
'Macallan',
'Glenlochy',
'Glenallachie',
'LomondLoch',
'BlairAthol',
'Imperial',
'Dalwhinnie',
'Avoniel',
'Glen Ord',
'Burnside',
'Yamazaki',
'Cameronbridge',
'Lochruan',
'Karuizawa',
'Tomintoul',
'Provanmill',
'Glenglassaugh',
'Bushmills',
'Mannochmore',
'Ballechin',
'Glenside',
'Lagavulin',
'Kirkliston',
'Longmorn',
'Vauxhall',
'Highland Park',
'Glenaden',
'Aberfeldy',
'AlltBhainne',
'Knockando',
'Bladnoch',
'Tomatin',
'Convalmore',
'Parkmore',
'Caol Ila',
'Glencraig',
'Glen',
'Clydesdale',
'Strathmore',
'GlenSpey',
'Tamnavulin',
'GlenScotia',
'Mortlach',
'Ladyburn',
'Glenfoyle',
'Strathclyde',
'Carsebridge',
'Ardlussa',
'Annandale',
'Port Ellen',
'Lochindaal',
'Inchgower',
'Abbey Street',
'Loch Katrine',
'Auchroisk',
'Adelphi',
'Dumbarton',
'Teaninich',
'Glenturret',
'Hokkaido',
'Craigellachie',
'Laphroaig',
'Tamdhu'],
'Years Aged':np.random.randint(low=3, high=12,size=101), #avoiding negative numbers used low/high
'VOL':np.random.randint(low=40,high=50,size=101),
'Sweetness':np.random.normal(7,1,size=101),
'Shade':np.random.randint(1,3,size=101)*2, #using randint*2- trying to manipulate a little
"Heat":np.random.randint(1,10,size=101),
"Smoke": np.random.randint(1,5,size=101),
"Age Participant":np.random.randint(low=25,high=35,size=101),
"Grade":np.random.randint(low=1,high=10,size=101)})
df.set_index(['VOL', 'Brands'])
df.columns
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns;sns.set()
ax=sns.scatterplot(x='Sweetness',y='Grade',data=df)
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns;sns.set()
ax=sns.violinplot(x='Grade',y='VOL',data=df)
df
import matplotlib.pyplot as plt
import seaborn as sns
w=sns.catplot("VOL","Years Aged",data=df, kind="bar",height=6,aspect=3,palette="autumn",legend=False)
# Show plot
plt.show()
df.boxplot(column='Years Aged',by='VOL')
plt.show()
np.random.multinomial(20, [1/6.]*6, size=1)
df.boxplot(column='Grade',by='VOL')
plt.show()
###Output
_____no_output_____ |
DA Exercises/Yelp.ipynb | ###Markdown
YELP Dataset
###Code
import pandas as pd
xls = pd.ExcelFile('yelp.xlsx')
df = xls.parse('yelp_data')
df.head()
###Output
_____no_output_____
###Markdown
Joining data
###Code
df_cities = xls.parse('cities')
df_cities.head()
df = pd.merge(left=df, right=df_cities, how='inner', left_on='city_id', right_on='id')
df.head()
df_states = xls.parse('states')
df_states.head()
df = pd.merge(left=df, right=df_states, how='inner', left_on='state_id', right_on='id')
df.shape
df.head()
atts = ['name', 'city', 'state']
df[atts].head(100)
del df['id_x']
del df['id_y']
df.head()
###Output
_____no_output_____
###Markdown
Slicing rows
###Code
df[100:200]
index = len(df) - 1
last_business = df[index:] #get a slice from provided start index all the way to end of dataframe
last_business['name']
df[-1:]['name']
###Output
_____no_output_____
###Markdown
Querying data using boolean indexing
###Code
pitts = df['city'] == 'Pittsburgh'
type(pitts)
pitts
df[pitts]
rest = df['name'] == 'The Dragon Chinese Cuisine'
df[rest]
df[rest]['take_out']
cat_0_bars = df["category_0"] == "Bars"
cat_1_bars = df["category_1"] == "Bars"
df[cat_0_bars | cat_1_bars]
cat_0_bars = df["category_0"] == "Bars"
cat_1_bars = df["category_1"] == "Bars"
carnegie = df["city"] == "Carnegie"
df[(cat_0_bars | cat_1_bars) & carnegie]
cat_0 = df["category_0"].isin(["Bars", "Restaurants"])
cat_1 = df["category_1"].isin(["Bars", "Restaurants"])
carnegie = df["city"] == "Carnegie"
df[(cat_0 | cat_1) & carnegie]
lv = df["city"] == "Las Vegas"
cat_0_bars = df["category_0"] == "Dive Bars"
cat_1_bars = df["category_1"] == "Dive Bars"
divebars_lv = df[lv &(cat_0_bars | cat_1_bars)]
len(divebars_lv)
stars = divebars_lv["stars"] >= 4.0
divebars_lv_4star_rating = divebars_lv[stars]
divebars_lv_4star_rating
import random
rand_int = random.randint(0, len(divebars_lv_4star_rating) - 1)
rand_divebar = divebars_lv_4star_rating[rand_int : rand_int + 1]
rand_divebar
rand_int = random.randint(0, len(divebars_lv_4star_rating) - 1)
rand_divebar = divebars_lv_4star_rating.iloc[rand_int]
rand_divebar
###Output
_____no_output_____
###Markdown
Updating & creating data
###Code
df["rating"] = df["stars"] * 2
df.head()
def convert_to_rating(x):
return (str(x) + " out of 10")
df["rating"] = df["rating"].apply(convert_to_rating)
df.head()
bars_rest = df["category_0"].isin(["Bars", "Restaurants"])
df_bars_rest = df[bars_rest]
df_bars_rest
###Output
_____no_output_____
###Markdown
Pivot tables
###Code
pivot_state_cat = pd.pivot_table(df_bars_rest, index = ["state", "city", "category_0"])
pivot_state_cat[["review_count", "stars"]]
###Output
_____no_output_____
###Markdown
Histograms
###Code
import matplotlib.pyplot as plt
%matplotlib inline
df_pitt = df[df["city"] == "Pittsburgh"]
df_pitt.head()
df_vegas = df[df["city"] == "Las Vegas"]
df_vegas.head()
pitt_stars = df_pitt["stars"]
vegas_stars = df_vegas["stars"]
vegas_stars.head()
plt.hist(
pitt_stars,
alpha = 0.3,
color = 'yellow',
label = 'Pittsburgh',
bins = 'auto'
)
plt.hist(
vegas_stars,
alpha = 0.3,
color = 'red',
label = 'Las Vegas',
bins = 'auto'
)
plt.xlabel("Rating")
plt.ylabel("Number of Rating Scores")
plt.legend(loc = 'best')
plt.title("Review distribution of Pittsburgh and Las Vegas")
plt.show()
plt.hist(
[pitt_stars, vegas_stars],
alpha = 0.7,
color = ['red', 'blue'],
label = ['Pittsburgh', 'Las Vegas'],
bins = 'auto'
)
plt.xlabel('Rating')
plt.ylabel('Number of Rating Scores')
plt.legend(loc = 'best')
plt.title('Review distribution of Pittsburgh and Las Vegas')
plt.show()
###Output
_____no_output_____
###Markdown
Scatterplots
###Code
df_health = df[df["category_0"] == "Health & Medical"]
df_fast = df[df["category_0"] == "Fast Food"]
df_break = df[df["category_0"] == "Breakfast & Brunch"]
df_break.head()
plt.scatter(
df_health["stars"], df_health["review_count"],
marker = "o",
color = 'r',
alpha = 0.7,
s = 124,
label = ['Health & Medical']
)
plt.scatter(
df_fast["stars"], df_fast["review_count"],
marker = "h",
color = 'b',
alpha = 0.7,
s = 124,
label = ['Fast Food']
)
plt.scatter(
df_break["stars"], df_break["review_count"],
marker = "^",
color = 'g',
alpha = 0.7,
s = 124,
label = ['Breakfast & Brunch']
)
plt.xlabel('Rating')
plt.ylabel('Review Count')
plt.legend(loc = 'upper left')
axes = plt.gca()
axes.set_yscale('log')
plt.show()
###Output
_____no_output_____ |
code/.ipynb_checkpoints/Rover_Lab_Notebook-Copy1-checkpoint.ipynb | ###Markdown
Rover Lab NotebookThis notebook contains the functions from the lesson and provides the scaffolding you need to test out your mapping methods. The steps you need to complete in this notebook for the project are the following:* First just run each of the cells in the notebook, examine the code and the results of each.**Note: For the online lab, data has been collected and provided for you. If you would like to try locally please do so! Please continue instructions from the continue point.*** Run the simulator in "Training Mode" and record some data. Note: the simulator may crash if you try to record a large (longer than a few minutes) dataset, but you don't need a ton of data, just some example images to work with. * Change the data directory path (2 cells below) to be the directory where you saved data* Test out the functions provided on your data**Continue Point*** Write new functions (or modify existing ones) to report and map out detections of obstacles and rock samples (yellow rocks)* Populate the `process_image()` function with the appropriate steps/functions to go from a raw image to a worldmap.* Run the cell that calls `process_image()` using `moviepy` functions to create video output* Once you have mapping working, move on to modifying `perception.py` and `decision.py` in the project to allow your rover to navigate and map in autonomous mode!**Note: If, at any point, you encounter frozen display windows or other confounding issues, you can always start again with a clean slate by going to the "Kernel" menu above and selecting "Restart & Clear Output".****Run the next cell to get code highlighting in the markdown cells.**
###Code
%%HTML
<style> code {background-color : orange !important;} </style>
%matplotlib inline
#%matplotlib qt # Choose %matplotlib qt to plot to an interactive window (note it may show up behind your browser)
# Make some of the relevant imports
import cv2 # OpenCV for perspective transform
import numpy as np
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
import scipy.misc # For saving images as needed
import glob # For reading in a list of images from a folder
###Output
_____no_output_____
###Markdown
Quick Look at the DataThere's some example data provided in the `test_dataset` folder. This basic dataset is enough to get you up and running but if you want to hone your methods more carefully you should record some data of your own to sample various scenarios in the simulator. Next, read in and display a random image from the `test_dataset` folder
###Code
path = './Sim Data/IMG/*'
img_list = glob.glob(path)
# Grab a random image and display it
idx = np.random.randint(0, len(img_list)-1)
image = mpimg.imread(img_list[idx])
plt.imshow(image)
###Output
_____no_output_____
###Markdown
Calibration DataRead in and display example grid and rock sample calibration images. You'll use the grid for perspective transform and the rock image for creating a new color selection that identifies these samples of interest.
###Code
# In the simulator you can toggle on a grid on the ground for calibration
# You can also toggle on the rock samples with the 0 (zero) key.
# Here's an example of the grid and one of the rocks
example_grid = './calibration_images/example_grid1.jpg'
example_rock = './calibration_images/example_rock2.jpg'
grid_img = mpimg.imread(example_grid)
rock_img = mpimg.imread(example_rock)
fig = plt.figure(figsize=(12,3))
plt.subplot(121)
plt.imshow(grid_img)
plt.subplot(122)
plt.imshow(rock_img)
###Output
_____no_output_____
###Markdown
Perspective TransformDefine the perspective transform function from the lesson and test it on an image.
###Code
# Define a function to perform a perspective transform
# I've used the example grid image above to choose source points for the
# grid cell in front of the rover (each grid cell is 1 square meter in the sim)
# Define a function to perform a perspective transform
def perspect_transform(img, src, dst):
M = cv2.getPerspectiveTransform(src, dst)
warped = cv2.warpPerspective(img, M, (img.shape[1], img.shape[0]))# keep same size as input image
return warped
# Define calibration box in source (actual) and destination (desired) coordinates
# These source and destination points are defined to warp the image
# to a grid where each 10x10 pixel square represents 1 square meter
# The destination box will be 2*dst_size on each side
dst_size = 5
# Set a bottom offset to account for the fact that the bottom of the image
# is not the position of the rover but a bit in front of it
# this is just a rough guess, feel free to change it!
bottom_offset = 6
source = np.float32([[14, 140], [301 ,140],[200, 96], [118, 96]])
destination = np.float32([[image.shape[1]/2 - dst_size, image.shape[0] - bottom_offset],
[image.shape[1]/2 + dst_size, image.shape[0] - bottom_offset],
[image.shape[1]/2 + dst_size, image.shape[0] - 2*dst_size - bottom_offset],
[image.shape[1]/2 - dst_size, image.shape[0] - 2*dst_size - bottom_offset],
])
warped = perspect_transform(grid_img, source, destination)
plt.imshow(warped)
# rock_warped = perspect_transform(rock_img, source, destination)
# plt.imshow(rock_warped)
#scipy.misc.imsave('../output/warped_example.jpg', warped)
###Output
_____no_output_____
###Markdown
Color ThresholdingDefine the color thresholding function from the lesson and apply it to the warped image**TODO:** Ultimately, you want your map to not just include navigable terrain but also obstacles and the positions of the rock samples you're searching for. Modify this function or write a new function that returns the pixel locations of obstacles (areas below the threshold) and rock samples (yellow rocks in calibration images), such that you can map these areas into world coordinates as well. **Suggestion:** Think about imposing a lower and upper boundary in your color selection to be more specific about choosing colors. Feel free to get creative and even bring in functions from other libraries. Here's an example of [color selection](http://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_colorspaces/py_colorspaces.html) using OpenCV. **Beware:** if you start manipulating images with OpenCV, keep in mind that it defaults to `BGR` instead of `RGB` color space when reading/writing images, so things can get confusing.
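One possible shape for those obstacle and rock-sample selections is sketched below; it is only an illustration (the yellow HSV bounds are rough guesses rather than calibrated values), and it assumes the `color_thresh` function defined in the next cell:

```python
# Hypothetical helpers for obstacle and rock-sample selection.
import cv2
import numpy as np

def obstacle_thresh(img, rgb_thresh=(160, 160, 160)):
    # Obstacles: everything at or below the navigable-terrain threshold.
    return 1 - color_thresh(img, rgb_thresh)

def yellow_rock_thresh(img, lower=(20, 100, 100), upper=(30, 255, 255)):
    # Select yellowish pixels with a lower/upper bound in HSV space.
    hsv = cv2.cvtColor(img, cv2.COLOR_RGB2HSV)
    mask = cv2.inRange(hsv, np.array(lower), np.array(upper))
    return (mask > 0).astype(np.uint8)
```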
###Code
# Identify pixels above the threshold
# Threshold of RGB > 160 does a nice job of identifying ground pixels only
def color_thresh(img, rgb_thresh=(160, 160, 160)):
# Create an array of zeros same xy size as img, but single channel
color_select = np.zeros_like(img[:,:,0])
# Require that each pixel be above all three threshold values in RGB
# above_thresh will now contain a boolean array with "True"
# where threshold was met
above_thresh = (img[:,:,0] > rgb_thresh[0]) \
& (img[:,:,1] > rgb_thresh[1]) \
& (img[:,:,2] > rgb_thresh[2])
# Index the array of zeros with the boolean array and set to 1
color_select[above_thresh] = 1
# Return the binary image
return color_select
def rock_thresh(img):
hls = cv2.cvtColor(img, cv2.COLOR_RGB2HLS)
H = hls[:,:,0]
L = hls[:,:,1]
S = hls[:,:,2]
thresh_H = (20, 255)
thresh_L = (64, 93)
thresh_S = (186, 255)
binary_H = np.zeros_like(H)
binary_H[(H > thresh_H[0]) & (H <= thresh_H[1])] = 1
binary_L = np.zeros_like(L)
binary_L[(L > thresh_L[0]) & (L <= thresh_L[1])] = 1
binary_S = np.zeros_like(S)
binary_S[(S > thresh_S[0]) & (S <= thresh_S[1])] = 1
res = binary_L & binary_S
ypos, xpos = res.nonzero()
return res, ypos, xpos
res, ypos, xpos = rock_thresh(rock_img)
threshed = color_thresh(warped)
print(xpos.min(), xpos.max())
print(ypos.min(), ypos.max())
xl = (xpos.max() - xpos.min())
yl = (ypos.max() - ypos.min())
if xl >= yl:
major = xl
minor = yl
else:
major = yl
minor = xl
xcenter = xpos.min() + ((xl)//2)
ycenter = ypos.min() + ((yl)//2)
print(xcenter, ycenter)
cv2.ellipse(res,(xcenter,ycenter),(major//2, minor//2),0,0,360,255,-1)
# plt.imshow(threshed, cmap='gray')
plt.imshow(res, cmap='gray')
#scipy.misc.imsave('../output/warped_threshed.jpg', threshed*255)
plt.imshow(rock_img, cmap='gray')
###Output
_____no_output_____
###Markdown
Coordinate TransformationsDefine the functions used to do coordinate transforms and apply them to an image.
###Code
# Define a function to convert from image coords to rover coords
def rover_coords(binary_img):
# Identify nonzero pixels
ypos, xpos = binary_img.nonzero()
# Calculate pixel positions with reference to the rover position being at the
# center bottom of the image.
x_pixel = -(ypos - binary_img.shape[0]).astype(np.float)
y_pixel = -(xpos - binary_img.shape[1]/2 ).astype(np.float)
return x_pixel, y_pixel
# Define a function to convert to radial coords in rover space
def to_polar_coords(x_pixel, y_pixel):
# Convert (x_pixel, y_pixel) to (distance, angle)
# in polar coordinates in rover space
# Calculate distance to each pixel
dist = np.sqrt(x_pixel**2 + y_pixel**2)
# Calculate angle away from vertical for each pixel
angles = np.arctan2(y_pixel, x_pixel)
return dist, angles
# Define a function to map rover space pixels to world space
def rotate_pix(xpix, ypix, yaw):
# Convert yaw to radians
yaw_rad = yaw * np.pi / 180
xpix_rotated = (xpix * np.cos(yaw_rad)) - (ypix * np.sin(yaw_rad))
ypix_rotated = (xpix * np.sin(yaw_rad)) + (ypix * np.cos(yaw_rad))
# Return the result
return xpix_rotated, ypix_rotated
def translate_pix(xpix_rot, ypix_rot, xpos, ypos, scale):
# Apply a scaling and a translation
xpix_translated = (xpix_rot / scale) + xpos
ypix_translated = (ypix_rot / scale) + ypos
# Return the result
return xpix_translated, ypix_translated
# Define a function to apply rotation and translation (and clipping)
# Once you define the two functions above this function should work
def pix_to_world(xpix, ypix, xpos, ypos, yaw, world_size, scale):
# Apply rotation
xpix_rot, ypix_rot = rotate_pix(xpix, ypix, yaw)
# Apply translation
xpix_tran, ypix_tran = translate_pix(xpix_rot, ypix_rot, xpos, ypos, scale)
# Perform rotation, translation and clipping all at once
x_pix_world = np.clip(np.int_(xpix_tran), 0, world_size - 1)
y_pix_world = np.clip(np.int_(ypix_tran), 0, world_size - 1)
# Return the result
return x_pix_world, y_pix_world
# Grab another random image
idx = np.random.randint(0, len(img_list)-1)
image = mpimg.imread(img_list[idx])
warped = perspect_transform(image, source, destination)
threshed = color_thresh(warped)
# Calculate pixel values in rover-centric coords and distance/angle to all pixels
xpix, ypix = rover_coords(threshed)
dist, angles = to_polar_coords(xpix, ypix)
mean_dir = np.mean(angles)
# Do some plotting
fig = plt.figure(figsize=(12,9))
plt.subplot(221)
plt.imshow(image)
plt.subplot(222)
plt.imshow(warped)
plt.subplot(223)
plt.imshow(threshed, cmap='gray')
plt.subplot(224)
plt.plot(xpix, ypix, '.')
plt.ylim(-160, 160)
plt.xlim(0, 160)
arrow_length = 100
x_arrow = arrow_length * np.cos(mean_dir)
y_arrow = arrow_length * np.sin(mean_dir)
plt.arrow(0, 0, x_arrow, y_arrow, color='red', zorder=2, head_width=10, width=2)
###Output
_____no_output_____
###Markdown
Read in saved data and ground truth map of the worldThe next cell is all setup to read your saved data into a `pandas` dataframe. Here you'll also read in a "ground truth" map of the world, where white pixels (pixel value = 1) represent navigable terrain. After that, we'll define a class to store telemetry data and pathnames to images. When you instantiate this class (`data = Databucket()`) you'll have a global variable called `data` that you can refer to for telemetry and map data within the `process_image()` function in the following cell.
###Code
# Import pandas and read in csv file as a dataframe
import pandas as pd
# Change this path to your data directory
df = pd.read_csv('./test_dataset/robot_log.csv')
csv_img_list = df["Path"].tolist() # Create list of image pathnames
# Read in ground truth map and create a 3-channel image with it
ground_truth = mpimg.imread('./calibration_images/map_bw.png')
ground_truth_3d = np.dstack((ground_truth*0, ground_truth*255, ground_truth*0)).astype(np.float)
# Creating a class to be the data container
# Will read in saved data from csv file and populate this object
# Worldmap is instantiated as 200 x 200 grids corresponding
# to a 200m x 200m space (same size as the ground truth map: 200 x 200 pixels)
# This encompasses the full range of output position values in x and y from the sim
class Databucket():
def __init__(self):
self.images = csv_img_list
self.xpos = df["X_Position"].values
self.ypos = df["Y_Position"].values
self.yaw = df["Yaw"].values
self.count = -1 # This will be a running index, setting to -1 is a hack
# because moviepy (below) seems to run one extra iteration
self.worldmap = np.zeros((200, 200, 3)).astype(np.float)
self.ground_truth = ground_truth_3d # Ground truth worldmap
# Instantiate a Databucket().. this will be a global variable/object
# that you can refer to in the process_image() function below
data = Databucket()
###Output
_____no_output_____
###Markdown
--- Write a function to process stored imagesModify the `process_image()` function below by adding in the perception step processes (functions defined above) to perform image analysis and mapping. The following cell is all set up to use this `process_image()` function in conjunction with the `moviepy` video processing package to create a video from the images you saved taking data in the simulator. In short, you will be passing individual images into `process_image()` and building up an image called `output_image` that will be stored as one frame of video. You can make a mosaic of the various steps of your analysis process and add text as you like (example provided below). To start with, you can simply run the next three cells to see what happens, but then go ahead and modify them such that the output video demonstrates your mapping process. Feel free to get creative!
###Code
# Define a function to pass stored images to
# reading rover position and yaw angle from csv file
# This function will be used by moviepy to create an output video
world_size = 200
scale = 10
def process_image(img):
# Example of how to use the Databucket() object defined above
# to print the current x, y and yaw values
# print(data.xpos[data.count], data.ypos[data.count], data.yaw[data.count])
count = data.count
# TODO:
# 1) Define source and destination points for perspective transform
source = np.float32([[14, 140], [301 ,140],[200, 96], [118, 96]])
destination = np.float32([[image.shape[1]/2 - dst_size, image.shape[0] - bottom_offset],
[image.shape[1]/2 + dst_size, image.shape[0] - bottom_offset],
[image.shape[1]/2 + dst_size, image.shape[0] - 2*dst_size - bottom_offset],
[image.shape[1]/2 - dst_size, image.shape[0] - 2*dst_size - bottom_offset],
])
# 2) Apply perspective transform
    warped = perspect_transform(img, source, destination)
# 3) Apply color threshold to identify navigable terrain/obstacles/rock samples
threshed = color_thresh(warped)
# 4) Convert thresholded image pixel values to rover-centric coords
xpix, ypix = rover_coords(threshed)
rover_xpos = data.xpos[count]
rover_ypos = data.ypos[count]
yaw = data.yaw[count]
global world_size, scale
# 5) Convert rover-centric pixel values to world coords
x_world, y_world = pix_to_world(xpix, ypix, rover_xpos, rover_ypos, yaw, world_size, scale)
dist, angles = to_polar_coords(xpix, ypix)
mean_dir = np.mean(angles)
# 6) Update worldmap (to be displayed on right side of screen)
# Example:
# data.worldmap[obstacle_y_world, obstacle_x_world, 0] += 1
# data.worldmap[rock_y_world, rock_x_world, 1] += 1
# data.worldmap[navigable_y_world, navigable_x_world, 2] += 1
data.worldmap[y_world, x_world, 2] += 1
# 7) Make a mosaic image, below is some example code
# First create a blank image (can be whatever shape you like)
output_image = np.zeros((img.shape[0] + data.worldmap.shape[0], img.shape[1]*2, 3))
# Next you can populate regions of the image with various output
# Here I'm putting the original image in the upper left hand corner
output_image[0:img.shape[0], 0:img.shape[1]] = img
# Let's create more images to add to the mosaic, first a warped image
warped = perspect_transform(img, source, destination)
# Add the warped image in the upper right hand corner
output_image[0:img.shape[0], img.shape[1]:] = warped
# Overlay worldmap with ground truth map
map_add = cv2.addWeighted(data.worldmap, 1, data.ground_truth, 0.5, 0)
# Flip map overlay so y-axis points upward and add to output_image
output_image[img.shape[0]:, 0:data.worldmap.shape[1]] = np.flipud(map_add)
# Then putting some text over the image
cv2.putText(output_image,"Populate this image with your analyses to make a video!", (20, 20),
cv2.FONT_HERSHEY_COMPLEX, 0.4, (255, 255, 255), 1)
data.count += 1 # Keep track of the index in the Databucket()
return output_image
###Output
_____no_output_____
###Markdown
Make a video from processed image dataUse the [moviepy](https://zulko.github.io/moviepy/) library to process images and create a video.
###Code
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from moviepy.editor import ImageSequenceClip
# Define pathname to save the output video
output = './output/test_mapping.mp4'
data = Databucket() # Re-initialize data in case you're running this cell multiple times
clip = ImageSequenceClip(data.images, fps=60) # Note: output video will be sped up because
# recording rate in simulator is fps=25
new_clip = clip.fl_image(process_image) #NOTE: this function expects color images!!
%time new_clip.write_videofile(output, audio=False)
###Output
_____no_output_____
###Markdown
This next cell should function as an inline video playerIf this fails to render the video, try running the following cell (alternative video rendering method). You can also simply have a look at the saved mp4 in your `/output` folder
###Code
output = './output/test_mapping.mp4'
from IPython.display import HTML
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(output))
###Output
_____no_output_____ |
tests/postresaxis/validate_rotating_ellipse/compareCoils.ipynb | ###Markdown
Let's take a look at what FOCUSADD's coils look like.
###Code
# `r` (the coil position array), `CoilSet` and `coil_data` are assumed to be
# provided by earlier cells / the FOCUSADD project modules.
import numpy as np
from mayavi import mlab

for ic in range(r.shape[0]):
for n in range(r.shape[2]):
for b in range(r.shape[3]):
p = mlab.plot3d(r[ic,:,n,b,0],r[ic,:,n,b,1],r[ic,:,n,b,2],tube_radius=0.004, line_width = 0.01, color=(0.0,0.0,0.8))
p
###Output
_____no_output_____
###Markdown
Now let's overlay the FOCUSAD (FORTRAN) coils. First, we need to read in the data from coils.out.txt
###Code
with open("coils.out.txt", 'r') as f:
_ = f.readline() #1
NC, NF = f.readline().split(" ")
NC = int(NC)
NF = int(NF)
_ = f.readline()
NN, NB, NFR = f.readline().split(" ") #4
NN = int(NN)
NB = int(NB)
NFR = int(NFR)
_ = f.readline()
_ = f.readline() #6
fc = np.zeros((6,NC,NF+1))
fr = np.zeros((2,NC,NFR))
for c in range(NC):
_ = f.readline() #coilnumber
_ = f.readline()
_ = f.readline() #NR
_ = f.readline()
_ = f.readline()
xc = np.asarray([float(txt) for txt in f.readline().split(" ") if txt != ''])
_ = f.readline()
xs = np.asarray([float(txt) for txt in f.readline().split(" ") if txt != ''])
_ = f.readline()
yc = np.asarray([float(txt) for txt in f.readline().split(" ") if txt != ''])
_ = f.readline()
ys = np.asarray([float(txt) for txt in f.readline().split(" ") if txt != ''])
_ = f.readline()
zc = np.asarray([float(txt) for txt in f.readline().split(" ") if txt != ''])
_ = f.readline()
zs = np.asarray([float(txt) for txt in f.readline().split(" ") if txt != ''])
_ = f.readline()
_ = f.readline()
_ = f.readline()
_ = f.readline()
fc[:,c,:] = np.concatenate((xc[np.newaxis],yc[np.newaxis],zc[np.newaxis],xs[np.newaxis],ys[np.newaxis],zs[np.newaxis]),axis=0)
params = (fc, fr)
_, _, r, _, _ = CoilSet.get_outputs(coil_data, True, params)
for ic in range(r.shape[0]):
for n in range(r.shape[2]):
for b in range(r.shape[3]):
p = mlab.plot3d(r[ic,:,n,b,0],r[ic,:,n,b,1],r[ic,:,n,b,2],tube_radius=0.004, line_width = 0.01, color=(0.0,0.8,0.0))
p
###Output
_____no_output_____ |
scripts/AVAL_5.ipynb | ###Markdown
Convolutional NN
###Code
import numpy as np
import gzip
import os
import pickle
from matplotlib import pyplot
from si.data.dataset import Dataset, summary
from si.util.util import to_categorical
###Output
_____no_output_____
###Markdown
Load the MNIST dataset
###Code
def load_mnist(sample_size=None):
DIR = os.path.dirname(os.path.realpath('.'))
filename = os.path.join(DIR, 'datasets/mnist.pkl.gz')
f = gzip.open(filename, 'rb')
data = pickle.load(f, encoding='bytes')
(x_train, y_train), (x_test, y_test) = data
if sample_size:
return Dataset(x_train[:sample_size],y_train[:sample_size]),Dataset(x_test,y_test)
else:
return Dataset(x_train,y_train),Dataset(x_test,y_test)
train,test = load_mnist(500)
def preprocess(train):
# reshape and normalize input data
train.X = train.X.reshape(train.X.shape[0], 28, 28, 1)
train.X = train.X.astype('float32')
train.X /= 255
train.Y = to_categorical(train.Y)
preprocess(train)
preprocess(test)
def plot_img(img,shape=(28,28)):
pic = (img*255).reshape(shape)
pic = pic.astype('int')
pyplot.imshow(pic, cmap=pyplot.get_cmap('gray'))
pyplot.show()
plot_img(test.X[0])
from si.supervised.nn import NN, Dense, Activation, Conv2D, Flatten
from si.util.activation import Tanh, Sigmoid
###Output
_____no_output_____
###Markdown
Build the model
###Code
net = NN(epochs=1000,lr=0.1,verbose=False)
net.add(Conv2D((28, 28,1), (3, 3), 1))
net.add(Activation(Tanh()))
net.add(Flatten())
net.add(Dense(26*26*1, 100))
net.add(Activation(Tanh()))
net.add(Dense(100, 10))
net.add(Activation(Sigmoid()))
###Output
_____no_output_____
###Markdown
Train the model
###Code
net.fit(train)
out = net.predict(test.X[0:3])
print("\n")
print("predicted values : ")
print(np.round(out), end="\n")
print("true values : ")
print(test.Y[0:3])
conv = net.layers[0]
plot_img(conv.forward(test.X[:1]),shape=(26,26))
###Output
_____no_output_____ |
notebooks/data_exp/sales-prediction.shops_items.python.1.0.ipynb | ###Markdown
Exploratory Shop Analysis
###Code
%store -r item_cat
%store -r item
%store -r sub
%store -r shops
%store -r sales_test
%store -r sales_train
%store -r __ipy
%store -r __da
__da
%%capture
__ipy
import plotly.express as px
from functools import partial
from basic_text_preprocessing import BasicPreprocessText
import googlemaps
gmaps = googlemaps.Client(key='AIzaSyCW4PTjjIz6yGUgAmqrG2cLy9euzbim23M')
new_shops = shops.copy()
cleaned_shop_name = BasicPreprocessText().vectorize_process_text(shops['shop_name'])
new_shops['shop_name'] = cleaned_shop_name
new_shops['city'] = new_shops['shop_name'].apply(lambda x: x.split()[0])
city = new_shops['city'] .value_counts()\
.to_frame().reset_index().rename(columns={'index': 'shop_name', 'city': 'count_shops'})
px.bar(city[city['count_shops'] > 1], x='shop_name', y='count_shops')
###Output
_____no_output_____
###Markdown
Cities where only one store is present
###Code
city[city['count_shops'] == 1].count()
def not_city_str(x, t):
return 1 if t in "".join(x.split()[1:]) else 0
new_shops['is_mal'] = new_shops['shop_name'].apply(partial(not_city_str, t='тц'))
new_shops['is_en_mal'] = new_shops['shop_name'].apply(partial(not_city_str, t='трк'))
def get_location(x):
loc = gmaps.geocode(x)
return loc[0]['geometry']['location'] if len(loc) != 0 else {'lat': 0, 'lng': 0}
locations = new_shops['shop_name'].progress_apply(get_location)
new_shops_with_coords = pd.concat([new_shops, pd.DataFrame.from_records(locations.values)], axis=1)
new_shops_with_coords.head()
plot_data = new_shops_with_coords[new_shops_with_coords['lng'] > 0]
data = [ go.Scattergeo(
lon = plot_data['lng'][new_shops_with_coords['lng'] > 0],
lat = plot_data['lat'],
text = "Shop location")]
layout = dict(
width=1000,
height=500,
title = 'Locations'
)
fig = go.Figure(data=data, layout=layout)
plotly.offline.iplot(fig)
###Output
_____no_output_____
###Markdown
Shop count: 60
###Code
print(f"Shop count: {sales_train['shop_id'].nunique()}")
###Output
Shop count: 60
###Markdown
Exploratory Item Analysis. Duplication of item_id: duplicate item_id values are absent (the check below returns 0).
###Code
item['item_id'].shape[0] - item['item_id'].unique().shape[0]
###Output
_____no_output_____
###Markdown
Preprocess names to find similar names of items:
###Code
cleaned_item_name = BasicPreprocessText().vectorize_process_text(item['item_name'])
cleaned_item_name_series = pd.Series(cleaned_item_name)
###Output
_____no_output_____
###Markdown
Duplication in item_name
###Code
item['item_id'].shape[0] - cleaned_item_name_series.unique().shape[0]
###Output
_____no_output_____
###Markdown
Item count:
###Code
print(f"Item count: {sales_train['item_id'].nunique()}")
###Output
_____no_output_____ |
Synthetic_ErdosRanyi/run_experiments.ipynb | ###Markdown
Load Data
###Code
import os
import pickle
import numpy as np
import networkx as nx
import scipy.sparse.linalg
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import dgl
from dgl.nn.pytorch import GraphConv
from tqdm import tqdm, trange

name = 'Caveman'
walk_len = 4 # set walk length for GKAT
num_classes = 2
num_features = 32
num_heads = 2
feature_drop = 0
atten_drop = 0
runtimes = 15
epsilon = 1e-4
start_tol = 499
tolerance = 80
max_epoch = 500
batch_size = 128
learning_rate = 0.001
h_size = 5
normalize = None
# load all train and validation graphs
train_graphs = pickle.load(open(f'graph_data/{name}/train_graphs.pkl', 'rb'))
val_graphs = pickle.load(open(f'graph_data/{name}/val_graphs.pkl', 'rb'))
# load all labels
train_labels = np.load(f'graph_data/{name}/train_labels.npy')
val_labels = np.load(f'graph_data/{name}/val_labels.npy')
# here we load the pre-calculated GKAT kernel
train_GKAT_kernel = pickle.load(open(f'graph_data/{name}/GKAT_dot_kernels_train_len={walk_len}.pkl', 'rb'))
val_GKAT_kernel = pickle.load(open(f'graph_data/{name}/GKAT_dot_kernels_val_len={walk_len}.pkl', 'rb'))
train_GAT_masking = pickle.load(open(f'graph_data/{name}/GAT_masking_train.pkl', 'rb'))
val_GAT_masking = pickle.load(open(f'graph_data/{name}/GAT_masking_val.pkl', 'rb'))
train_GKAT_kernel = [torch.from_numpy(g) for g in train_GKAT_kernel]
val_GKAT_kernel = [torch.from_numpy(g) for g in val_GKAT_kernel]
for bg in train_graphs:
bg.remove_nodes_from(list(nx.isolates(bg)))
for bg in val_graphs:
bg.remove_nodes_from(list(nx.isolates(bg)))
def generate_knn_degrees(bg, h_size):
bg_h = np.zeros([bg.number_of_nodes(), h_size])
degree_dict = bg.degree
for node in bg.nodes():
nbr_degrees = []
nbrs = bg.neighbors(node)
for nb in nbrs:
nbr_degrees.append( degree_dict[nb] )
nbr_degrees.sort(reverse = True)
if len(nbr_degrees)==0:
nbr_degrees.append(1e-3)
bg_h[node] = (nbr_degrees + h_size*[0])[:h_size]
return bg_h
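# generate_knn_degrees builds each node's feature vector from the degrees of its
# neighbours, sorted in descending order and zero-padded/truncated to h_size entries
# (isolated nodes get a small 1e-3 placeholder so the vector is not all zeros).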
h_size = 5
train_h = [generate_knn_degrees(bg, h_size) for bg in train_graphs]
val_h = [generate_knn_degrees(bg, h_size) for bg in val_graphs]
train_graphs = [ dgl.from_networkx(g) for g in train_graphs]
val_graphs = [ dgl.from_networkx(g) for g in val_graphs]
GKAT_masking = [train_GKAT_kernel, val_GKAT_kernel]
GAT_masking = [train_GAT_masking, val_GAT_masking]
###Output
_____no_output_____
###Markdown
GKAT and GAT
###Code
class GKATLayer(nn.Module):
def __init__(self, in_dim, out_dim, feat_drop=0., attn_drop=0., alpha=0.2, agg_activation=F.elu):
super(GKATLayer, self).__init__()
self.feat_drop = nn.Dropout(feat_drop)
self.fc = nn.Linear(in_dim, out_dim, bias=False)
#torch.nn.init.xavier_uniform_(self.fc.weight)
#torch.nn.init.zeros_(self.fc.bias)
self.attn_l = nn.Parameter(torch.ones(size=(out_dim, 1)))
self.attn_r = nn.Parameter(torch.ones(size=(out_dim, 1)))
self.attn_drop = nn.Dropout(attn_drop)
self.activation = nn.LeakyReLU(alpha)
self.softmax = nn.Softmax(dim = 1)
self.agg_activation=agg_activation
def forward(self, feat, bg, counting_attn):
self.g = bg
h = self.feat_drop(feat)
head_ft = self.fc(h).reshape((h.shape[0], -1))
a1 = torch.mm(head_ft, self.attn_l) # V x 1
a2 = torch.mm(head_ft, self.attn_r) # V x 1
a = self.attn_drop(a1 + a2.transpose(0, 1))
a = self.activation(a)
a_ = a #- maxes
a_nomi = torch.mul(torch.exp(a_), counting_attn.float())
a_deno = torch.sum(a_nomi, 1, keepdim=True)
a_nor = a_nomi/(a_deno+1e-9)
ret = torch.mm(a_nor, head_ft)
if self.agg_activation is not None:
ret = self.agg_activation(ret)
return ret
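# Note: GKATLayer is redefined below; because the later definition wins, the version
# actually used in the experiments is the masked scaled-dot-product attention variant
# rather than this additive (GAT-style) attention layer.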
class GKATLayer(nn.Module):
def __init__(self, in_dim, out_dim, feat_drop=0., attn_drop=0., alpha=0.2, agg_activation=F.elu):
super(GKATLayer, self).__init__()
self.feat_drop = feat_drop #nn.Dropout(feat_drop, training=self.training)
self.attn_drop = attn_drop #nn.Dropout(attn_drop)
self.fc_Q = nn.Linear(in_dim, out_dim, bias=False)
self.fc_K = nn.Linear(in_dim, out_dim, bias=False)
self.fc_V = nn.Linear(in_dim, out_dim, bias=False)
self.softmax = nn.Softmax(dim = 1)
self.agg_activation=agg_activation
def forward(self, feat, bg, counting_attn):
h = F.dropout(feat, p=self.feat_drop, training=self.training)
Q = self.fc_Q(h).reshape((h.shape[0], -1))
K = self.fc_K(h).reshape((h.shape[0], -1))
V = self.fc_V(h).reshape((h.shape[0], -1))
logits = F.dropout( torch.matmul( Q, torch.transpose(K,0,1) ) , p=self.attn_drop, training=self.training) / np.sqrt(Q.shape[1])
maxes = torch.max(logits, 1, keepdim=True)[0]
logits = logits - maxes
a_nomi = torch.mul(torch.exp( logits ), counting_attn.float())
a_deno = torch.sum(a_nomi, 1, keepdim=True)
a_nor = a_nomi/(a_deno+1e-9)
ret = torch.mm(a_nor, V)
if self.agg_activation is not None:
ret = self.agg_activation(ret)
return ret
class GKATClassifier_ER(nn.Module):
def __init__(self, in_dim, hidden_dim, num_heads, n_classes, feat_drop_=0., attn_drop_=0.,):
super(GKATClassifier_ER, self).__init__()
self.num_heads = num_heads
self.hidden_dim = hidden_dim
self.layers = nn.ModuleList([
nn.ModuleList([GKATLayer(in_dim, hidden_dim, feat_drop = feat_drop_, attn_drop = attn_drop_, agg_activation=F.elu) for _ in range(num_heads)]),
nn.ModuleList([GKATLayer(hidden_dim * num_heads, hidden_dim, feat_drop = feat_drop_, attn_drop = attn_drop_, agg_activation=F.elu) for _ in range(num_heads)]), ])
self.classify = nn.Linear(hidden_dim * num_heads, n_classes)
self.softmax = nn.Softmax(dim = 1)
def forward(self, bg, bg_h, counting_attn, normalize = 'normal'):
h = torch.tensor(bg_h).float()
num_nodes = h.shape[0]
if normalize == 'normal':
features = h.numpy() #.flatten()
mean_ = np.mean(features, -1).reshape(-1,1)
std_ = np.std(features, -1).reshape(-1,1)
h = (h - mean_)/std_
for i, gnn in enumerate(self.layers):
all_h = []
for j, att_head in enumerate(gnn):
all_h.append(att_head(h, bg, counting_attn))
h = torch.squeeze(torch.cat(all_h, dim=1))
bg.ndata['h'] = h
hg = dgl.mean_nodes(bg, 'h')
return self.classify(hg)
###Output
_____no_output_____
###Markdown
GCN
###Code
class GCNClassifier_ER(nn.Module):
def __init__(self, in_dim, hidden_dim, n_classes):
super(GCNClassifier_ER, self).__init__()
self.conv1 = GraphConv(in_dim, hidden_dim)
self.conv2 = GraphConv(hidden_dim, hidden_dim)
self.classify = nn.Linear(hidden_dim, n_classes)
def forward(self, g, bg_h, normalize = 'normal'):
h = torch.tensor(bg_h).float()
num_nodes = h.shape[0]
if normalize == 'normal':
features = h.numpy()
mean_ = np.mean(features, -1).reshape(-1,1)
std_ = np.std(features, -1).reshape(-1,1)
h = (h - mean_)/std_
# Perform graph convolution and activation function.
h = F.relu(self.conv1(g, h))
h = F.relu(self.conv2(g, h))
g.ndata['h'] = h
hg = dgl.mean_nodes(g, 'h')
return self.classify(hg)
###Output
_____no_output_____
###Markdown
SGC
###Code
def cal_Laplacian(graph):
N = nx.adjacency_matrix(graph).shape[0]
D = np.sum(nx.adjacency_matrix(graph), 1)
D_hat = np.diag((np.array(D).flatten()+1e-5)**(-0.5))
return np.identity(N) - np.dot(D_hat, nx.to_numpy_matrix(graph)).dot(D_hat)
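# cal_Laplacian returns the symmetric normalized Laplacian I - D^(-1/2) A D^(-1/2)
# (a small epsilon is added to the degrees to avoid division by zero); lmax_L and
# rescale_L below use it to prepare the input for the Chebyshev graph convolution.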
def rescale_L(L, lmax=2):
"""Rescale Laplacian eigenvalues to [-1,1]"""
M, M = L.shape
I = torch.diag(torch.ones(M))
L /= lmax * 2
L = torch.tensor(L)
L -= I
return L
def lmax_L(L):
"""Compute largest Laplacian eigenvalue"""
return scipy.sparse.linalg.eigsh(L, k=1, which='LM', return_eigenvectors=False)[0]
train_L_original = [cal_Laplacian(bg) for bg in train_graphs]
val_L_original = [cal_Laplacian(bg) for bg in val_graphs]
train_L_max = [lmax_L(L) for L in train_L_original]
val_L_max = [lmax_L(L) for L in val_L_original]
train_L = []
for iter, L in tqdm(enumerate(train_L_original)):
train_L.append(rescale_L(L, train_L_max[iter]))
val_L = []
for iter, L in tqdm(enumerate(val_L_original)):
val_L.append(rescale_L(L, val_L_max[iter]))
class Graph_ConvNet_LeNet5(nn.Module):
def __init__(self, net_parameters):
print('Graph ConvNet: LeNet5')
super(Graph_ConvNet_LeNet5, self).__init__()
# parameters
h_size, CL1_F, CL1_K, CL2_F, CL2_K, FC1_F = net_parameters
# graph CL1
self.cl1 = nn.Linear(h_size*CL1_K, CL1_F)
Fin = CL1_K; Fout = CL1_F;
scale = np.sqrt( 2.0/ (Fin+Fout) )
self.cl1.weight.data.uniform_(-scale, scale)
self.cl1.bias.data.fill_(0.0)
self.CL1_K = CL1_K; self.CL1_F = CL1_F;
# graph CL2
self.cl2 = nn.Linear(CL2_K*CL1_F, CL2_F)
Fin = CL2_K*CL1_F; Fout = CL2_F;
scale = np.sqrt( 2.0/ (Fin+Fout) )
self.cl2.weight.data.uniform_(-scale, scale)
self.cl2.bias.data.fill_(0.0)
self.CL2_K = CL2_K; self.CL2_F = CL2_F;
# FC1
self.fc1 = nn.Linear(CL2_F, FC1_F)
Fin = CL2_F; Fout = FC1_F;
scale = np.sqrt( 2.0/ (Fin+Fout) )
self.fc1.weight.data.uniform_(-scale, scale)
self.fc1.bias.data.fill_(0.0)
# nb of parameters
nb_param = h_size* CL1_K* CL1_F + CL1_F # CL1
nb_param += CL2_K* CL1_F* CL2_F + CL2_F # CL2
nb_param += CL2_F* FC1_F + FC1_F # FC1
print('nb of parameters=',nb_param,'\n')
def init_weights(self, W, Fin, Fout):
scale = np.sqrt( 2.0/ (Fin+Fout) )
W.uniform_(-scale, scale)
return W
def graph_conv_cheby(self, x, cl, L, Fout, K):
# parameters
# B = batch size
# V = nb vertices
# Fin = nb input features
# Fout = nb output features
# K = Chebyshev order & support size
B, V, Fin = x.size(); B, V, Fin = int(B), int(V), int(Fin)
# rescale Laplacian
# transform to Chebyshev basis
x0 = x.permute(1,2,0).contiguous().cuda() # V x Fin x B
x0 = x0.view([V, Fin*B]) # V x Fin*B
x = x0.unsqueeze(0) # 1 x V x Fin*B
def concat(x, x_):
x_ = x_.unsqueeze(0) # 1 x V x Fin*B
return torch.cat((x, x_), 0) # K x V x Fin*B
x1 = torch.mm(L.double().cuda(),x0.double()) # V x Fin*B
x = torch.cat((x, x1.unsqueeze(0)),0) # 2 x V x Fin*B
for k in range(2, K):
x2 = 2 * torch.mm(L.cuda(),x1) - x0
x = torch.cat((x, x2.unsqueeze(0)),0) # M x Fin*B
x0, x1 = x1, x2
x = x.view([K, V, Fin, B]) # K x V x Fin x B
x = x.permute(3,1,2,0).contiguous() # B x V x Fin x K
x = x.view([B*V, Fin*K]) # B*V x Fin*K
# Compose linearly Fin features to get Fout features
#print(x.shape)
x = cl(x.float()) # B*V x Fout
x = x.view([B, V, Fout]) # B x V x Fout
#print(x.shape)
return x
def forward(self, x, L):
# graph CL1
x = torch.tensor(x).unsqueeze(0) # B x V x Fin=1
x = self.graph_conv_cheby(x, self.cl1, L, self.CL1_F, self.CL1_K)
x = F.relu(x)
# graph CL2
x = self.graph_conv_cheby(x, self.cl2, L, self.CL2_F, self.CL2_K)
x = F.relu(x)
# FC1
x = self.fc1(x)
x = torch.mean(x, axis = 1)
return x
###Output
_____no_output_____
###Markdown
Start Training
###Code
all_GGG_train_losses = []
all_GGG_train_acc = []
all_GGG_val_losses = []
all_GGG_val_acc = []
GGG_test_acc_end = []
GGG_test_acc_ckpt = []
from prettytable import PrettyTable
def count_parameters(model):
table = PrettyTable(["Modules", "Parameters"])
total_params = 0
for name, parameter in model.named_parameters():
if not parameter.requires_grad: continue
param = parameter.numel()
table.add_row([name, param])
total_params+=param
print(table)
print(f"Total Trainable Params: {total_params}")
return total_params
for runtime in trange(runtimes):
for method in ['GAT', 'GKAT', 'GCN', 'ChebyGNN']:
ckpt_file = f'results/{name}/ckpt/{method}__ckpt.pt'
if method == 'GKAT':
num_features = 9
train_GGG_masking, val_GGG_masking = GKAT_masking
model = GKATClassifier_ER(h_size, num_features, num_heads, num_classes, feat_drop_ = feature_drop, attn_drop_ = atten_drop)
if method == 'GAT':
num_features = 9
train_GGG_masking, val_GGG_masking = GAT_masking
model = GKATClassifier_ER(h_size, num_features, num_heads, num_classes, feat_drop_ = feature_drop, attn_drop_ = atten_drop)
if method == 'GCN':
num_features = 32
model = GCNClassifier_ER(h_size, num_features, num_classes)
if method == 'ChebyGNN':
CL1_F = 32
CL1_K = 2
CL2_F = 32
CL2_K = 2
FC1_F = 2
net_parameters = [h_size, CL1_F, CL1_K, CL2_F, CL2_K, FC1_F]
# instantiate the object net of the class
model = Graph_ConvNet_LeNet5(net_parameters)
for p in model.parameters():
if p.dim() > 1:
nn.init.xavier_uniform(p)
count_parameters(model)
#model.apply(init_weights)
loss_func = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=learning_rate)
model.train()
epoch_train_losses_GGG = []
epoch_train_acc_GGG = []
epoch_val_losses_GGG = []
epoch_val_acc_GGG = []
num_batches = int(len(train_graphs)/batch_size)
epoch = 0
nan_found = 0
tol = 0
while True:
if nan_found:
break
epoch_loss = 0
epoch_acc = 0
''' Training '''
for iter in range(num_batches):
#for iter in range(2):
predictions = []
labels = torch.empty(batch_size)
rand_indices = np.random.choice(len(train_graphs), batch_size, replace=False)
for b in range(batch_size):
if method == 'GCN':
predictions.append(model(train_graphs[rand_indices[b]], train_h[rand_indices[b]][:,:h_size], normalize = normalize ))
elif method == 'GAT':
predictions.append(model(train_graphs[rand_indices[b]], train_h[rand_indices[b]][:,:h_size], train_GGG_masking[rand_indices[b]], normalize = normalize ))
elif method == 'GKAT':
predictions.append(model(train_graphs[rand_indices[b]], train_h[rand_indices[b]][:,:h_size], train_GGG_masking[rand_indices[b]], normalize = normalize ))
elif method == 'ChebyGNN':
predictions.append(model(train_h[rand_indices[b]], train_L[rand_indices[b]]))
if torch.isnan(predictions[b][0])[0]:
print('NaN found.')
break
labels[b] = train_labels[rand_indices[b]]
acc = 0
for k in range(len(predictions)):
if predictions[k][0][0]>predictions[k][0][1] and labels[k]==0:
acc += 1
elif predictions[k][0][0]<=predictions[k][0][1] and labels[k]==1:
acc += 1
acc /= len(predictions)
epoch_acc += acc
predictions = torch.squeeze(torch.stack(predictions))
if torch.any(torch.isnan(predictions)):
print('NaN found.')
nan_found = 1
break
loss = loss_func(predictions, labels.long())
optimizer.zero_grad()
loss.backward()
optimizer.step()
epoch_loss += loss.detach().item()
epoch_acc /= (iter + 1)
epoch_loss /= (iter + 1)
val_acc = 0
val_loss = 0
predictions_val = []
for b in range(len(val_graphs)):
if method == 'GCN':
predictions_val.append(model(val_graphs[b], val_h[b][:,:h_size], normalize = normalize ))
elif method == 'GAT':
predictions_val.append(model(val_graphs[b], val_h[b][:,:h_size], val_GGG_masking[b], normalize = normalize ))
elif method == 'GKAT':
predictions_val.append(model(val_graphs[b], val_h[b][:,:h_size], val_GGG_masking[b], normalize = normalize ))
elif method == 'ChebyGNN':
predictions_val.append(model(val_h[b], val_L[b]))
for k in range(len(predictions_val)):
if predictions_val[k][0][0]>predictions_val[k][0][1] and val_labels[k]==0:
val_acc += 1
elif predictions_val[k][0][0]<=predictions_val[k][0][1] and val_labels[k]==1:
val_acc += 1
val_acc /= len(val_graphs)
predictions_val = torch.squeeze(torch.stack(predictions_val))
loss = loss_func(predictions_val, torch.tensor(val_labels).long())
val_loss += loss.detach().item()
if len(epoch_val_losses_GGG) ==0:
try:
os.remove(f'{projectpath}{ckpt_file}')
except:
pass
torch.save(model, f'{projectpath}{ckpt_file}')
print('Epoch {}, acc{:.2f}, loss {:.4f}, tol {}, val_acc{:.2f}, val_loss{:.4f} -- checkpoint saved'.format(epoch, epoch_acc, epoch_loss, tol, val_acc, val_loss))
elif (np.min(epoch_val_losses_GGG) >= val_loss) and (np.max(epoch_val_acc_GGG) <= val_acc):
torch.save(model, f'{projectpath}{ckpt_file}')
print('Epoch {}, acc{:.2f}, loss {:.4f}, tol {}, val_acc{:.2f}, val_loss{:.4f} -- checkpoint saved'.format(epoch, epoch_acc, epoch_loss, tol, val_acc, val_loss))
else:
print('Epoch {}, acc{:.2f}, loss {:.4f}, tol {}, val_acc{:.2f}, val_loss{:.4f}'.format(epoch, epoch_acc, epoch_loss, tol, val_acc, val_loss))
if epoch > start_tol:
if np.min(epoch_val_losses_GGG) <= val_loss:
tol += 1
if tol == tolerance:
print('Loss do not decrease')
break
else:
if np.abs(epoch_val_losses_GGG[-1] - val_loss)<epsilon:
print('Converge steadily')
break
tol = 0
if epoch > max_epoch:
print("Reach Max Epoch Number")
break
epoch += 1
epoch_train_acc_GGG.append(epoch_acc)
epoch_train_losses_GGG.append(epoch_loss)
epoch_val_acc_GGG.append(val_acc)
epoch_val_losses_GGG.append(val_loss)
all_GGG_train_acc.append(epoch_train_acc_GGG)
all_GGG_train_losses.append(epoch_train_losses_GGG)
all_GGG_val_acc.append(epoch_val_acc_GGG)
all_GGG_val_losses.append(epoch_val_losses_GGG)
np.save(f'{projectpath}results/{name}/epoch_train_acc_{method}_run{runtime}.npy', epoch_train_acc_GGG)
np.save(f'{projectpath}results/{name}/epoch_val_acc_{method}_run{runtime}.npy', epoch_val_acc_GGG)
np.save(f'{projectpath}results/{name}/epoch_train_losses_{method}_run{runtime}.npy', epoch_train_losses_GGG)
np.save(f'{projectpath}results/{name}/epoch_val_losses_{method}_run{runtime}.npy', epoch_val_losses_GGG)
###Output
_____no_output_____ |
openbb_terminal/jupyter/reports/due_dilligence.ipynb | ###Markdown
Notebook setup
###Code
import io
import warnings
from datetime import datetime, timedelta
import matplotlib.pyplot as plt
import matplotlib_inline.backend_inline
from openbb_terminal import api as openbb
from openbb_terminal.helper_classes import TerminalStyle
%matplotlib inline
matplotlib_inline.backend_inline.set_matplotlib_formats("svg")
warnings.filterwarnings("ignore")
try:
theme = TerminalStyle("light", "light", "light")
except:
pass
stylesheet = openbb.widgets.html_report_stylesheet()
###Output
_____no_output_____
###Markdown
Select Ticker
###Code
# Parameters that will be replaced when calling this notebook
ticker = "AMC"
report_name = ""
ticker_data = openbb.stocks.load(ticker, start=datetime.now() - timedelta(days=365))
ticker_data = openbb.stocks.process_candle(ticker_data)
report_title = f"{ticker.upper()} Due Diligence report ({datetime.now().strftime('%Y-%m-%d %H:%M:%S')})"
report_title
overview = openbb.stocks.fa.models.yahoo_finance.get_info(ticker=ticker).transpose()[
"Long business summary"
][0]
overview
###Output
_____no_output_____
###Markdown
Data
###Code
(
df_year_estimates,
df_quarter_earnings,
df_quarter_revenues,
) = openbb.stocks.dd.models.business_insider.get_estimates(ticker)
###Output
_____no_output_____
###Markdown
1. Yearly Estimates
###Code
display_year = sorted(df_year_estimates.columns.tolist())[:3]
df_year_estimates = df_year_estimates[display_year].head(5)
df_year_estimates
###Output
_____no_output_____
###Markdown
2. Quarterly Earnings
###Code
df_quarter_earnings
###Output
_____no_output_____
###Markdown
3. Quarterly Revenues
###Code
df_quarter_revenues
###Output
_____no_output_____
###Markdown
4. SEC Filings
###Code
df_sec_filings = openbb.stocks.dd.models.marketwatch.get_sec_filings(ticker)[
["Type", "Category", "Link"]
].head(5)
df_sec_filings
###Output
_____no_output_____
###Markdown
5. Analyst Ratings
###Code
df_analyst = openbb.stocks.dd.models.finviz.get_analyst_data(ticker)
df_analyst["target_to"] = df_analyst["target_to"].combine_first(df_analyst["target"])
df_analyst = df_analyst[["category", "analyst", "rating", "target_to"]].rename(
columns={
"category": "Category",
"analyst": "Analyst",
"rating": "Rating",
"target_to": "Price Target",
}
)
df_analyst
###Output
_____no_output_____
###Markdown
Plots 1. Price history
###Code
fig, (candles, volume) = plt.subplots(nrows=2, ncols=1, figsize=(5, 3), dpi=150)
openbb.stocks.candle(
s_ticker=ticker,
df_stock=ticker_data,
use_matplotlib=True,
external_axes=[candles, volume],
)
candles.set_xticklabels("")
fig.tight_layout()
f = io.BytesIO()
fig.savefig(f, format="svg")
price_chart = f.getvalue().decode("utf-8")
###Output
_____no_output_____
###Markdown
2. Price Target
###Code
fig, ax = plt.subplots(figsize=(8, 3), dpi=150)
openbb.stocks.dd.pt(
ticker=ticker,
start="2021-10-25",
interval="1440min",
stock=ticker_data,
num=10,
raw=False,
external_axes=[ax],
)
fig.tight_layout()
f = io.BytesIO()
fig.savefig(f, format="svg")
price_target_chart = f.getvalue().decode("utf-8")
###Output
_____no_output_____
###Markdown
3. Ratings over time
###Code
fig, ax = plt.subplots(figsize=(8, 3), dpi=150)
openbb.stocks.dd.rot(
ticker=ticker,
num=10,
raw=False,
export="",
external_axes=[ax],
)
fig.tight_layout()
f = io.BytesIO()
fig.savefig(f, format="svg")
ratings_over_time_chart = f.getvalue().decode("utf-8")
###Output
_____no_output_____
###Markdown
Render the report template to a file
###Code
body = ""
# Title
body += openbb.widgets.h(1, report_title)
body += openbb.widgets.h(2, "Overview")
body += openbb.widgets.row([openbb.widgets.p(overview)])
# Analysts ratings
body += openbb.widgets.h(2, "Analyst assessments")
body += openbb.widgets.row([price_target_chart])
body += openbb.widgets.row([df_analyst.to_html()])
body += openbb.widgets.row([ratings_over_time_chart])
# Price history and yearly estimates
body += openbb.widgets.row(
[
openbb.widgets.h(3, "Price history") + price_chart,
openbb.widgets.h(3, "Estimates") + df_year_estimates.head().to_html(),
]
)
# Earnings and revenues
body += openbb.widgets.h(2, "Earnings and revenues")
body += openbb.widgets.row([df_quarter_earnings.head().to_html()])
body += openbb.widgets.row([df_quarter_revenues.head().to_html()])
# Sec filings and insider trading
body += openbb.widgets.h(2, "SEC filings")
body += openbb.widgets.row([df_sec_filings.to_html()])
report = openbb.widgets.html_report(title=report_name, stylesheet=stylesheet, body=body)
# to save the results
with open(report_name + ".html", "w") as fh:
fh.write(report)
###Output
_____no_output_____ |
Introduction/Week2/Exercise2-Question.ipynb | ###Markdown
Exercise 2. In the course you learned how to do classification using Fashion MNIST, a data set containing items of clothing. There's another, similar dataset called MNIST which has items of handwriting -- the digits 0 through 9. Write an MNIST classifier that trains to 99% accuracy or above, and does it without a fixed number of epochs -- i.e. you should stop training once you reach that level of accuracy. Some notes: 1. It should succeed in less than 10 epochs, so it is okay to change epochs= to 10, but nothing larger. 2. When it reaches 99% or greater it should print out the string "Reached 99% accuracy so cancelling training!" 3. If you add any additional variables, make sure you use the same names as the ones used in the class. I've started the code for you below -- how would you finish it?
###Code
import tensorflow as tf
from os import path, getcwd, chdir
# DO NOT CHANGE THE LINE BELOW. If you are developing in a local
# environment, then grab mnist.npz from the Coursera Jupyter Notebook
# and place it inside a local folder and edit the path to that location
path = f"{getcwd()}/../tmp2/mnist.npz"
# GRADED FUNCTION: train_mnist
def train_mnist():
# Please write your code only where you are indicated.
# please do not remove # model fitting inline comments.
# YOUR CODE SHOULD START HERE
class AccuracyCallback(tf.keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs={}):
if logs.get('acc') >= 0.99:
print("\nReached 99% accuracy so cancelling training!")
self.model.stop_training = True
# YOUR CODE SHOULD END HERE
mnist = tf.keras.datasets.mnist
(x_train, y_train),(x_test, y_test) = mnist.load_data(path=path)
# YOUR CODE SHOULD START HERE
x_train = x_train/255.0
x_test = x_test/255.0
# YOUR CODE SHOULD END HERE
model = tf.keras.models.Sequential([
# YOUR CODE SHOULD START HERE
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(256, activation=tf.nn.relu),
tf.keras.layers.Dense(10, activation=tf.nn.softmax)
# YOUR CODE SHOULD END HERE
])
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
# model fitting
history = model.fit(# YOUR CODE SHOULD START HERE
x_train, y_train, epochs=10, callbacks=[AccuracyCallback()]
# YOUR CODE SHOULD END HERE
)
# model fitting
return history.epoch, history.history['acc'][-1]
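# Note: this exercise assumes an older TensorFlow where the accuracy metric key is
# 'acc'; on TensorFlow 2.x the key is 'accuracy', so logs.get('acc') and
# history.history['acc'] would need to be renamed accordingly.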
train_mnist()
# Now click the 'Submit Assignment' button above.
# Once that is complete, please run the following two cells to save your work and close the notebook
%%javascript
<!-- Save the notebook -->
IPython.notebook.save_checkpoint();
%%javascript
IPython.notebook.session.delete();
window.onbeforeunload = null
setTimeout(function() { window.close(); }, 1000);
###Output
_____no_output_____ |
_notebooks/2021-11-05-decision-tree-classifier.ipynb | ###Markdown
Decision Tree Classifier Tutorial with Python> Learn about Decision Tree Classifier in Python- toc: false- badges: true- comments: true- categories: [classification, decision-tree]- image: images/DecisionTrees.png **Table of Contents**1. [Introduction to Decision Tree algorithm](1)2. [Classification and Regression Trees](2)3. [Decision Tree algorithm terminology](3)4. [Decision Tree algorithm intuition](4)5. [Attribute selection measures](5) - 5.1 [Information gain](5.1) - 5.2 [Gini index](5.2)6. [Overfitting in Decision-Tree algorithm](6)7. [Import libraries](7)8. [Import dataset](8)9. [Exploratory data analysis](9)10. [Declare feature vector and target variable](10)11. [Split data into separate training and test set](11)12. [Feature engineering](12)13. [Decision Tree classifier with criterion gini-index](13)14. [Decision Tree classifier with criterion entropy](14)15. [Confusion matrix](15)16. [Classification report](16)17. [Results and conclusion](17)18. [References](18) **1. Introduction to Decision Tree algorithm** A Decision Tree algorithm is one of the most popular machine learning algorithms. It uses a tree like structure and their possible combinations to solve a particular problem. It belongs to the class of supervised learning algorithms where it can be used for both classification and regression purposes. A decision tree is a structure that includes a root node, branches, and leaf nodes. Each internal node denotes a test on an attribute, each branch denotes the outcome of a test, and each leaf node holds a class label. The topmost node in the tree is the root node. We make some assumptions while implementing the Decision-Tree algorithm. These are listed below:-1. At the beginning, the whole training set is considered as the root.2. Feature values need to be categorical. If the values are continuous then they are discretized prior to building the model.3. Records are distributed recursively on the basis of attribute values.4. Order to placing attributes as root or internal node of the tree is done by using some statistical approach.I will describe Decision Tree terminology in later section. **2. Classification and Regression Trees (CART)** Nowadays, Decision Tree algorithm is known by its modern name **CART** which stands for **Classification and Regression Trees**. Classification and Regression Trees or **CART** is a term introduced by Leo Breiman to refer to Decision Tree algorithms that can be used for classification and regression modeling problems.The CART algorithm provides a foundation for other important algorithms like bagged decision trees, random forest and boosted decision trees. In this kernel, I will solve a classification problem. So, I will refer the algorithm also as Decision Tree Classification problem. **3. Decision Tree algorithm terminology** - In a Decision Tree algorithm, there is a tree like structure in which each internal node represents a test on an attribute, each branch represents the outcome of the test, and each leaf node represents a class label. The paths from the root node to leaf node represent classification rules.- We can see that there is some terminology involved in Decision Tree algorithm. The terms involved in Decision Tree algorithm are as follows:- **Root Node**- It represents the entire population or sample. This further gets divided into two or more homogeneous sets. **Splitting**- It is a process of dividing a node into two or more sub-nodes. 
Decision Node- When a sub-node splits into further sub-nodes, then it is called a decision node. Leaf/Terminal Node- Nodes that do not split are called Leaf or Terminal nodes. Pruning- When we remove sub-nodes of a decision node, this process is called pruning. It is the opposite process of splitting. Branch/Sub-Tree- A sub-section of an entire tree is called a branch or sub-tree. Parent and Child Node- A node, which is divided into sub-nodes is called the parent node of sub-nodes where sub-nodes are the children of a parent node. The above terminology is represented clearly in the following diagram:- Decision-Tree terminology **4. Decision Tree algorithm intuition** The Decision-Tree algorithm is one of the most frequently and widely used supervised machine learning algorithms that can be used for both classification and regression tasks. The intuition behind the Decision-Tree algorithm is very simple to understand.The Decision Tree algorithm intuition is as follows:-1. For each attribute in the dataset, the Decision-Tree algorithm forms a node. The most important attribute is placed at the root node. 2. For evaluating the task in hand, we start at the root node and we work our way down the tree by following the corresponding node that meets our condition or decision.3. This process continues until a leaf node is reached. It contains the prediction or the outcome of the Decision Tree. **5. Attribute selection measures** The primary challenge in the Decision Tree implementation is to identify the attributes which we consider as the root node and each level. This process is known as the **attributes selection**. There are different attributes selection measure to identify the attribute which can be considered as the root node at each level.There are 2 popular attribute selection measures. They are as follows:-- **Information gain**- **Gini index**While using **Information gain** as a criterion, we assume attributes to be categorical and for **Gini index** attributes are assumed to be continuous. These attribute selection measures are described below. **5.1 Information gain** By using information gain as a criterion, we try to estimate the information contained by each attribute. To understand the concept of Information Gain, we need to know another concept called **Entropy**. **Entropy**Entropy measures the impurity in the given dataset. In Physics and Mathematics, entropy is referred to as the randomness or uncertainty of a random variable X. In information theory, it refers to the impurity in a group of examples. **Information gain** is the decrease in entropy. Information gain computes the difference between entropy before split and average entropy after split of the dataset based on given attribute values. Entropy is represented by the following formula:- Here, **c** is the number of classes and **pi** is the probability associated with the ith class. The ID3 (Iterative Dichotomiser) Decision Tree algorithm uses entropy to calculate information gain. So, by calculating decrease in **entropy measure** of each attribute we can calculate their information gain. The attribute with the highest information gain is chosen as the splitting attribute at the node. **5.2 Gini index** Another attribute selection measure that **CART (Categorical and Regression Trees)** uses is the **Gini index**. It uses the Gini method to create split points. 
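For reference, the standard formulas behind these criteria (the figures referenced above are not reproduced here) are: $$E = -\sum_{i=1}^{c} p_i \log_2 p_i, \qquad IG = E(\text{parent}) - \sum_{v} \frac{|S_v|}{|S|} E(S_v), \qquad Gini = 1 - \sum_{i=1}^{c} p_i^{2}$$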
Gini index can be represented with the following diagram:- **Gini index**Here, again **c** is the number of classes and **pi** is the probability associated with the ith class. Gini index says, if we randomly select two items from a population, they must be of the same class and probability for this is 1 if the population is pure.It works with the categorical target variable “Success” or “Failure”. It performs only binary splits. The higher the value of Gini, higher the homogeneity. CART (Classification and Regression Tree) uses the Gini method to create binary splits.Steps to Calculate Gini for a split1. Calculate Gini for sub-nodes, using formula sum of the square of probability for success and failure (p^2+q^2).2. Calculate Gini for split using weighted Gini score of each node of that split.In case of a discrete-valued attribute, the subset that gives the minimum gini index for that chosen is selected as a splitting attribute. In the case of continuous-valued attributes, the strategy is to select each pair of adjacent values as a possible split-point and point with smaller gini index chosen as the splitting point. The attribute with minimum Gini index is chosen as the splitting attribute. **6. Overfitting in Decision Tree algorithm** Overfitting is a practical problem while building a Decision-Tree model. The problem of overfitting is considered when the algorithm continues to go deeper and deeper to reduce the training-set error but results with an increased test-set error. So, accuracy of prediction for our model goes down. It generally happens when we build many branches due to outliers and irregularities in data.Two approaches which can be used to avoid overfitting are as follows:-- Pre-Pruning- Post-Pruning **Pre-Pruning**In pre-pruning, we stop the tree construction a bit early. We prefer not to split a node if its goodness measure is below a threshold value. But it is difficult to choose an appropriate stopping point. **Post-Pruning**In post-pruning, we go deeper and deeper in the tree to build a complete tree. If the tree shows the overfitting problem then pruning is done as a post-pruning step. We use the cross-validation data to check the effect of our pruning. Using cross-validation data, we test whether expanding a node will result in improve or not. If it shows an improvement, then we can continue by expanding that node. But if it shows a reduction in accuracy then it should not be expanded. So, the node should be converted to a leaf node. **7. Import libraries**
###Code
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load in
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import matplotlib.pyplot as plt # data visualization
import seaborn as sns # statistical data visualization
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
###Output
_____no_output_____
###Markdown
**8. Import dataset**
###Code
data = 'car_evaluation.csv'
df = pd.read_csv(data, header=None)
###Output
_____no_output_____
###Markdown
**9. Exploratory data analysis** Now, I will explore the data to gain insights about the data.
###Code
# view dimensions of dataset
df.shape
###Output
_____no_output_____
###Markdown
We can see that there are 1728 instances and 7 variables in the data set. View top 5 rows of dataset
###Code
# preview the dataset
df.head()
###Output
_____no_output_____
###Markdown
Rename column names. We can see that the dataset does not have proper column names. The columns are merely labelled as 0,1,2.... and so on. We should give proper names to the columns. I will do it as follows:-
###Code
col_names = ['buying', 'maint', 'doors', 'persons', 'lug_boot', 'safety', 'class']
df.columns = col_names
col_names
# let's again preview the dataset
df.head()
###Output
_____no_output_____
###Markdown
We can see that the column names are renamed. Now, the columns have meaningful names. View summary of dataset
###Code
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1728 entries, 0 to 1727
Data columns (total 7 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 buying 1728 non-null object
1 maint 1728 non-null object
2 doors 1728 non-null object
3 persons 1728 non-null object
4 lug_boot 1728 non-null object
5 safety 1728 non-null object
6 class 1728 non-null object
dtypes: object(7)
memory usage: 94.6+ KB
###Markdown
Frequency distribution of values in variables. Now, I will check the frequency counts of categorical variables.
###Code
col_names = ['buying', 'maint', 'doors', 'persons', 'lug_boot', 'safety', 'class']
for col in col_names:
print(df[col].value_counts())
###Output
vhigh 432
high 432
med 432
low 432
Name: buying, dtype: int64
vhigh 432
high 432
med 432
low 432
Name: maint, dtype: int64
2 432
3 432
4 432
5more 432
Name: doors, dtype: int64
2 576
4 576
more 576
Name: persons, dtype: int64
small 576
med 576
big 576
Name: lug_boot, dtype: int64
low 576
med 576
high 576
Name: safety, dtype: int64
unacc 1210
acc 384
good 69
vgood 65
Name: class, dtype: int64
###Markdown
We can see that the `doors` and `persons` are categorical in nature. So, I will treat them as categorical variables. Summary of variables- There are 7 variables in the dataset. All the variables are of categorical data type.- These are given by `buying`, `maint`, `doors`, `persons`, `lug_boot`, `safety` and `class`.- `class` is the target variable. Explore `class` variable
###Code
df['class'].value_counts()
###Output
_____no_output_____
###Markdown
The `class` target variable is ordinal in nature. Missing values in variables
###Code
# check missing values in variables
df.isnull().sum()
###Output
_____no_output_____
###Markdown
We can see that there are no missing values in the dataset. I have checked the frequency distribution of values previously. It also confirms that there are no missing values in the dataset. **10. Declare feature vector and target variable**
###Code
X = df.drop(['class'], axis=1)
y = df['class']
###Output
_____no_output_____
###Markdown
**11. Split data into separate training and test set**
###Code
# split X and y into training and testing sets
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.33, random_state = 42)
# check the shape of X_train and X_test
X_train.shape, X_test.shape
###Output
_____no_output_____
###Markdown
**12. Feature Engineering** **Feature Engineering** is the process of transforming raw data into useful features that help us to understand our model better and increase its predictive power. I will carry out feature engineering on different types of variables.First, I will check the data types of variables again.
###Code
# check data types in X_train
X_train.dtypes
###Output
_____no_output_____
###Markdown
Encode categorical variables. Now, I will encode the categorical variables.
###Code
X_train.head()
###Output
_____no_output_____
###Markdown
We can see that all the variables are ordinal categorical data type.
###Code
# import category encoders
import category_encoders as ce
# encode variables with ordinal encoding
encoder = ce.OrdinalEncoder(cols=['buying', 'maint', 'doors', 'persons', 'lug_boot', 'safety'])
X_train = encoder.fit_transform(X_train)
X_test = encoder.transform(X_test)
X_train.head()
X_test.head()
###Output
_____no_output_____
###Markdown
We now have training and test set ready for model building. **13. Decision Tree Classifier with criterion gini index**
###Code
# import DecisionTreeClassifier
from sklearn.tree import DecisionTreeClassifier
# instantiate the DecisionTreeClassifier model with criterion gini index
clf_gini = DecisionTreeClassifier(criterion='gini', max_depth=3, random_state=0)
# fit the model
clf_gini.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Predict the Test set results with criterion gini index
###Code
y_pred_gini = clf_gini.predict(X_test)
###Output
_____no_output_____
###Markdown
Check accuracy score with criterion gini index
###Code
from sklearn.metrics import accuracy_score
print('Model accuracy score with criterion gini index: {0:0.4f}'. format(accuracy_score(y_test, y_pred_gini)))
###Output
Model accuracy score with criterion gini index: 0.8021
###Markdown
Here, **y_test** are the true class labels and **y_pred_gini** are the predicted class labels in the test-set. Compare the train-set and test-set accuracy. Now, I will compare the train-set and test-set accuracy to check for overfitting.
###Code
y_pred_train_gini = clf_gini.predict(X_train)
y_pred_train_gini
print('Training-set accuracy score: {0:0.4f}'. format(accuracy_score(y_train, y_pred_train_gini)))
###Output
Training-set accuracy score: 0.7865
###Markdown
Check for overfitting and underfitting
###Code
# print the scores on training and test set
print('Training set score: {:.4f}'.format(clf_gini.score(X_train, y_train)))
print('Test set score: {:.4f}'.format(clf_gini.score(X_test, y_test)))
###Output
Training set score: 0.7865
Test set score: 0.8021
###Markdown
Here, the training-set accuracy score is 0.7865 while the test-set accuracy is 0.8021. These two values are quite comparable. So, there is no sign of overfitting. Visualize decision-trees
###Code
plt.figure(figsize=(12,8))
from sklearn import tree
tree.plot_tree(clf_gini.fit(X_train, y_train))
###Output
_____no_output_____
###Markdown
Visualize decision-trees with graphviz
###Code
import graphviz
dot_data = tree.export_graphviz(clf_gini, out_file=None,
feature_names=X_train.columns,
class_names=y_train,
filled=True, rounded=True,
special_characters=True)
graph = graphviz.Source(dot_data)
graph
###Output
_____no_output_____
###Markdown
**14. Decision Tree Classifier with criterion entropy**
###Code
# instantiate the DecisionTreeClassifier model with criterion entropy
clf_en = DecisionTreeClassifier(criterion='entropy', max_depth=3, random_state=0)
# fit the model
clf_en.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Predict the Test set results with criterion entropy
###Code
y_pred_en = clf_en.predict(X_test)
###Output
_____no_output_____
###Markdown
Check accuracy score with criterion entropy
###Code
from sklearn.metrics import accuracy_score
print('Model accuracy score with criterion entropy: {0:0.4f}'. format(accuracy_score(y_test, y_pred_en)))
###Output
Model accuracy score with criterion entropy: 0.8021
###Markdown
Compare the train-set and test-set accuracy. Now, I will compare the train-set and test-set accuracy to check for overfitting.
###Code
y_pred_train_en = clf_en.predict(X_train)
y_pred_train_en
print('Training-set accuracy score: {0:0.4f}'. format(accuracy_score(y_train, y_pred_train_en)))
###Output
Training-set accuracy score: 0.7865
###Markdown
Check for overfitting and underfitting
###Code
# print the scores on training and test set
print('Training set score: {:.4f}'.format(clf_en.score(X_train, y_train)))
print('Test set score: {:.4f}'.format(clf_en.score(X_test, y_test)))
###Output
Training set score: 0.7865
Test set score: 0.8021
###Markdown
We can see that the training-set score and test-set score are the same as above. The training-set accuracy score is 0.7865 while the test-set accuracy is 0.8021. These two values are quite comparable. So, there is no sign of overfitting. Visualize decision-trees
###Code
plt.figure(figsize=(12,8))
from sklearn import tree
tree.plot_tree(clf_en.fit(X_train, y_train))
###Output
_____no_output_____
###Markdown
Visualize decision-trees with graphviz
###Code
import graphviz
dot_data = tree.export_graphviz(clf_en, out_file=None,
feature_names=X_train.columns,
class_names=y_train,
filled=True, rounded=True,
special_characters=True)
graph = graphviz.Source(dot_data)
graph
###Output
_____no_output_____
###Markdown
Now, based on the above analysis we can conclude that our classification model accuracy is very good. Our model is doing a very good job in terms of predicting the class labels. But it does not give the underlying distribution of values. Also, it does not tell anything about the type of errors our classifier is making. We have another tool called `Confusion matrix` that comes to our rescue. **15. Confusion matrix** A confusion matrix is a tool for summarizing the performance of a classification algorithm. A confusion matrix will give us a clear picture of classification model performance and the types of errors produced by the model. It gives us a summary of correct and incorrect predictions broken down by each category. The summary is represented in a tabular form. Four types of outcomes are possible while evaluating a classification model performance. These four outcomes are described below:- **True Positives (TP)** – True Positives occur when we predict an observation belongs to a certain class and the observation actually belongs to that class. **True Negatives (TN)** – True Negatives occur when we predict an observation does not belong to a certain class and the observation actually does not belong to that class. **False Positives (FP)** – False Positives occur when we predict an observation belongs to a certain class but the observation actually does not belong to that class. This type of error is called **Type I error.** **False Negatives (FN)** – False Negatives occur when we predict an observation does not belong to a certain class but the observation actually belongs to that class. This is a very serious error and it is called **Type II error.** These four outcomes are summarized in a confusion matrix given below.
###Code
# Print the Confusion Matrix and slice it into four pieces
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred_en)
print('Confusion matrix\n\n', cm)
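# Optional visualization (a sketch): plot the confusion matrix as a heatmap using the
# pandas/seaborn/matplotlib imports from the top of the notebook. confusion_matrix
# orders rows and columns by sorted label, which matches clf_en.classes_.
cm_df = pd.DataFrame(cm, index=clf_en.classes_, columns=clf_en.classes_)
sns.heatmap(cm_df, annot=True, fmt='d', cmap='YlGnBu')
plt.show()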
###Output
Confusion matrix
[[ 73 0 56 0]
[ 20 0 0 0]
[ 12 0 385 0]
[ 25 0 0 0]]
###Markdown
**16. Classification Report** **Classification report** is another way to evaluate the classification model performance. It displays the **precision**, **recall**, **f1** and **support** scores for the model. I have described these terms later. We can print a classification report as follows:-
###Code
from sklearn.metrics import classification_report
print(classification_report(y_test, y_pred_en))
###Output
precision recall f1-score support
acc 0.56 0.57 0.56 129
good 0.00 0.00 0.00 20
unacc 0.87 0.97 0.92 397
vgood 0.00 0.00 0.00 25
accuracy 0.80 571
macro avg 0.36 0.38 0.37 571
weighted avg 0.73 0.80 0.77 571
|
Daily/Knight Moves.ipynb | ###Markdown
Knights Tour = Visit All Squares on the Board

Given N, find the number of knight's tours on an NxN chessboard.

Backtracking:
> Backtracking is a form of recursion, but it involves choosing only one option out of many possibilities. We begin by choosing an option and backtrack from it if we reach a state where we conclude that this specific option does not give the required solution. We repeat these steps, going across each available option, until we get the desired solution.

```(python)
def permute(list, s):
    if list == 1:
        return s
    else:
        return [ y + x for y in permute(1, s) for x in permute(list - 1, s) ]

print(permute(1, ["a","b","c"]))
print(permute(2, ["a","b","c"]))

# when executed it becomes
# ['a', 'b', 'c']
# ['aa', 'ab', 'ac', 'ba', 'bb', 'bc', 'ca', 'cb', 'cc']
```
###Code
# r, c, are like coordinates
# board[r][c] is None checks if there's no value in board[r][c]
# is_valid_move() takes in the board state, the move, and the size of the board
def is_valid_move(board, move, n):
r, c = move
return 0 <= r < n and 0 <=c <n and board[r][c] is None
# these are just valid moves, where delta => movement in the position of a piece
def valid_moves(board, r, c, n):
    deltas = [
        (2, 1),
        (1, 2),
        (1, -2),
        (-2, 1),
        (-1, 2),
        (2, -1),
        (-1, -2),
        (-2, -1),
    ]
# this is an awesome piece of code here.
all_moves = [(r + r_delta, c + c_delta) for r_delta, c_delta in deltas]
# filter out the moves that aren't legal using.. "if is_valid_move(board, move, n)"
return [move for move in all_moves if is_valid_move(board, move, n)]
def knights_tours(n):
count = 0
for i in range(n):
for j in range(n):
# "for _ in" --> convention that indicates that the loop variable isn't used.
# [None for _ in range(n)] --> [None, None, None, ... n times]
# Then, [[None for _ in range(n)] for _ in range(n)] just creates
# multiple copies of [None, None, None ....], [None, None, None ...] etc.
# we use backtracking...
# for every possible square, initialize a knight
# try every valid move from that square
# once we've hit every single square, we can add to our count
board = [[None for _ in range(n)] for _ in range(n)] # this resets the board
board[i][j] = 0
count += knights_tours_helper(board, [(i,j)], n)
return count
# tours_helper is where all of the magic happens.
# the tour is just a sequence of tuples (r,c)
def knights_tours_helper(board, tour, n):
if len(tour) == n * n:
return 1
else:
count = 0
last_r, last_c = tour[-1]
for r, c in valid_moves(board, last_r, last_c, n):
tour.append((r,c))
board[r][c] = len(tour)
count += knights_tours_helper(board, tour, n)
tour.pop()
board[r][c] = None
return count
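# Example usage (the search is exponential, so anything beyond small n is slow):
# knights_tours(1)  # -> 1, the trivial single-square tour
# knights_tours(4)  # -> 0, no complete knight's tour exists on a 4x4 board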
###Output
_____no_output_____
###Markdown
Takes $O(N^2)$ space and $O(8^{N^2})$ time. Each step we have potentially 8 moves to check, and we have to do this for each square.
###Code
arr = [None for _ in range(5)]
print(arr)
arr2 = [[None for _ in range(5)] for _ in range(5)]
print(arr2)
# so you can see this code creates the board state.
# then board[i][j] = 0 means everything is set to 0..
for i in range(5):
for j in range(5):
board = [[None for _ in range(5)] for _ in range(5)]
board[i][j] = 0
print(board)
# look at what this creates. notice how board = [[None for _ in range(5)] for _ in range(5)]
# just creates the None None None ... etc. list of lists.
# and then when you set board[i][j] you do indeed set it to 0, but then on the next loop around
# you reset board again before board[i][j] gets assigned so what you're left with as an output here
# is just the board with position (4,4) = 0 and that's what you see in the output.
###Output
[[None, None, None, None, None], [None, None, None, None, None], [None, None, None, None, None], [None, None, None, None, None], [None, None, None, None, 0]]
|
docs/source/Notebooks/1_read_write_tfrecords.ipynb | ###Markdown
Writing and Reading TFRecords. Tensorflow-Transformers has off-the-shelf support for writing and reading TFRecords with very little effort. It also allows you to shard, shuffle and batch your data most of the time with minimal code. Here we will see how we can make use of these utilities to write and read TFRecords. For this example, we will be using the [**Squad Dataset**](https://huggingface.co/datasets/squad "Squad Dataset") and converting it to a text-to-text problem using the GPT2 tokenizer.
###Code
from tf_transformers.data import TFWriter, TFReader
from transformers import GPT2TokenizerFast
from datasets import load_dataset
import tempfile
import json
import glob
###Output
_____no_output_____
###Markdown
Load Data and Tokenizer. We will load the dataset and tokenizer, and then define the lengths for the examples. It is important to make sure we limit each length to within the allowed limit of the model.
###Code
# Load Dataset
dataset = load_dataset("squad")
tokenizer = GPT2TokenizerFast.from_pretrained('gpt2')
# Define length for examples
max_passage_length = 384
max_question_length = 64
max_answer_length = 40
###Output
_____no_output_____
###Markdown
Write TFRecord. To write a TFRecord, we need to provide a schema (**dict**). This schema supports **int**, **float**, **bytes**. **TFWriter** supports [**FixedLen**](https://www.tensorflow.org/api_docs/python/tf/io/FixedLenFeature) and [**VarLen**](https://www.tensorflow.org/api_docs/python/tf/io/VarLenFeature) feature types. The recommended and easiest option is to use **VarLen**; it is faster and easy to write and read. We can also pad accordingly after reading.
###Code
def parse_train(dataset, tokenizer, max_passage_length, max_question_length, max_answer_length, key):
"""Function o to parse examples
Args:
dataset (:obj:`dataet`): HF dataset
tokenizer (:obj:`tokenizer`): HF Tokenizer
max_passage_length (:obj:`int`): Passage Length
max_question_length (:obj:`int`): Question Length
max_answer_length (:obj:`int`): Answer Length
key (:obj:`str`): Key of dataset (`train`, `validation` etc)
"""
result = {}
    for item in dataset[key]:
        # Tokenize the passage (context) and the question; each segment ends with a BOS token
        passage_input_ids = tokenizer(item['context'], max_length=max_passage_length, truncation=True)['input_ids'] + [tokenizer.bos_token_id]
        question_input_ids = tokenizer(item['question'], max_length=max_question_length, truncation=True)['input_ids'] + \
            [tokenizer.bos_token_id]
        # Input is the passage (context) followed by the question
        # We mask these positions in the labels, as we don't want the model trained to predict the inputs
        input_ids = passage_input_ids + question_input_ids
labels_mask = [0] * len(input_ids)
# Answer part
answer_ids = tokenizer(item['answers']['text'][0], max_length=max_answer_length, truncation=True)['input_ids'] + \
[tokenizer.bos_token_id]
input_ids = input_ids + answer_ids
labels_mask = labels_mask + [1] * len(answer_ids)
# Shift positions to make proper training examples
labels = input_ids[1:]
labels_mask = labels_mask[1:]
input_ids = input_ids[:-1]
result = {}
result['input_ids'] = input_ids
result['labels'] = labels
result['labels_mask'] = labels_mask
yield result
# Write using TF Writer
schema = {
"input_ids": ("var_len", "int"),
"labels": ("var_len", "int"),
"labels_mask": ("var_len", "int"),
}
tfrecord_train_dir = tempfile.mkdtemp()
tfrecord_filename = 'squad'
tfwriter = TFWriter(schema=schema,
file_name=tfrecord_filename,
model_dir=tfrecord_train_dir,
tag='train',
overwrite=True
)
# Train dataset
train_parser_fn = parse_train(dataset, tokenizer, max_passage_length, max_question_length, max_answer_length, key='train')
tfwriter.process(parse_fn=train_parser_fn)
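# TFWriter writes one or more .tfrecord shard files plus a schema.json into
# tfrecord_train_dir; the reading section below globs those shards and reloads
# the schema from that file.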
###Output
Tag train
###Markdown
Read TFRecords. To read a TFRecord, we need to provide a schema (**dict**). This schema supports **int**, **float**, **bytes**. **TFReader** supports [**FixedLen**](https://www.tensorflow.org/api_docs/python/tf/io/FixedLenFeature) and [**VarLen**](https://www.tensorflow.org/api_docs/python/tf/io/VarLenFeature) feature types. We can also **auto_batch**, **shuffle**, and choose only the keys we need (not all keys in the TFRecords may be required while reading), all in a single function.
###Code
# Read TFRecord
schema = json.load(open("{}/schema.json".format(tfrecord_train_dir)))
all_files = glob.glob("{}/*.tfrecord".format(tfrecord_train_dir))
tf_reader = TFReader(schema=schema,
tfrecord_files=all_files)
x_keys = ['input_ids']
y_keys = ['labels', 'labels_mask']
batch_size = 16
train_dataset = tf_reader.read_record(auto_batch=True,
keys=x_keys,
batch_size=batch_size,
x_keys = x_keys,
y_keys = y_keys,
shuffle=True,
drop_remainder=True
)
for (batch_inputs, batch_labels) in train_dataset:
print(batch_inputs, batch_labels)
break
###Output
{'input_ids': <tf.Tensor: shape=(16, 187), dtype=int32, numpy=
array([[19895, 5712, 20221, ..., 12944, 343, 516],
[19895, 5712, 20221, ..., 12944, 343, 516],
[19895, 5712, 20221, ..., 12944, 343, 516],
...,
[19895, 5712, 20221, ..., 12944, 343, 516],
[19895, 5712, 20221, ..., 12944, 343, 516],
[19895, 5712, 20221, ..., 12944, 343, 516]], dtype=int32)>} {'labels': <tf.Tensor: shape=(16, 187), dtype=int32, numpy=
array([[ 5712, 20221, 11, ..., 343, 516, 50256],
[ 5712, 20221, 11, ..., 343, 516, 50256],
[ 5712, 20221, 11, ..., 343, 516, 50256],
...,
[ 5712, 20221, 11, ..., 343, 516, 50256],
[ 5712, 20221, 11, ..., 343, 516, 50256],
[ 5712, 20221, 11, ..., 343, 516, 50256]], dtype=int32)>, 'labels_mask': <tf.Tensor: shape=(16, 187), dtype=int32, numpy=
array([[0, 0, 0, ..., 1, 1, 1],
[0, 0, 0, ..., 1, 1, 1],
[0, 0, 0, ..., 1, 1, 1],
...,
[0, 0, 0, ..., 1, 1, 1],
[0, 0, 0, ..., 1, 1, 1],
[0, 0, 0, ..., 1, 1, 1]], dtype=int32)>}
|
jupyter/.ipynb_checkpoints/qm_explanation-checkpoint.ipynb | ###Markdown
$$\newcommand{\ket}[1]{\left|{#1}\right\rangle}$$$$\newcommand{\bra}[1]{\left\langle{#1}\right|}$$ Explanation of First-Generation QM Model Disclaimer: Ultimately, I would like to understand, and be able to explain, the quantum mechanics behind the entire process of simulating an NMR spectrum. For now, here is a "recipe" of the steps to arrive at the spin Hamiltonian, and how its eigensolution can be used to calculate frequencies and intensities. Two sources in particular enabled this: 1. Materials by Ilya Kuprov at SpinDynamics.org, particularly [Module I, Lecture 5](http://spindynamics.org/Spin-Dynamics---Part-I---Lecture-05.php) and the Matlab code of [Module II, Lecture 05](http://spindynamics.org/Spin-Dynamics---Part-II---Lecture-05.php) and [06](http://spindynamics.org/Spin-Dynamics---Part-II---Lecture-06.php). 2. [Materials](http://www.users.csbsju.edu/~frioux/workinprogress.htmlSpectroscopy) by Frank Rioux at St. John's University and College of St. Benedict. In particular, [*ABC Proton NMR Using Tensor Algebra*](http://www.users.csbsju.edu/~frioux/nmr/ABC-NMR-Tensor.pdf) was very helpful.
###Code
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:100% !important; }</style>"))
import numpy as np
from math import sqrt
from scipy.linalg import eigh
from scipy.sparse import kron, csc_matrix, csr_matrix, lil_matrix, bmat
import bokeh.io
import bokeh.plotting
###Output
_____no_output_____
###Markdown
Constructing the Hamiltonian From Scratch Start with the Pauli matrices:\begin{align}\sigma_x = \begin{pmatrix}0& \frac{1}{2}\\ \frac{1}{2}&0\end{pmatrix}, \sigma_y = \begin{pmatrix}0& -\frac{i}{2}\\ \frac{i}{2}&0\end{pmatrix}, \sigma_z = \begin{pmatrix}\frac{1}{2}& 0\\ 0&-\frac{1}{2}\end{pmatrix}\end{align}plus the identity matrix $I = \begin{pmatrix}1&0\\0&1\end{pmatrix}$
###Code
sigma_x = np.matrix([[0, 1 / 2], [1 / 2, 0]])
sigma_y = np.matrix([[0, -1j / 2], [1j / 2, 0]])
sigma_z = np.matrix([[1 / 2, 0], [0, -1 / 2]])
unit = np.matrix([[1, 0], [0, 1]])
###Output
_____no_output_____
###Markdown
The required inputs are a list of frequencies $\nu_i$ and a matrix of $J_{ij}$ coupling constants:
###Code
freqlist = [10.0, 20.0]
couplings = np.array([[0, 5], [5, 0]])
###Output
_____no_output_____
###Markdown
Let's break the original hamiltonian function down into smaller functions and explain their roles:
###Code
def hamiltonian(freqlist, couplings):
"""Calculate the Hamiltonian for a spin system (isotropic liquid).
Arguments:
freqlist: a list of n chemical shifts (in Hz)
couplings: an n x n array of J coupling constants (in Hz)
Return:
H: numpy.ndarray spin Hamiltonian
"""
Lx, Ly, Lz = create_krons(freqlist)
Lproduct = cartesian_products(Lx, Ly, Lz)
H_zeeman = hamiltonian_diagonal(freqlist, Lz)
H_J = hamiltonian_off_diagonal(couplings, Lproduct)
H = H_zeeman + H_J
return H
###Output
_____no_output_____
###Markdown
Step 1: Each spin gets its own $L_x$, $L_y$ and $L_z$ operators. These are formed from Kronecker products between $\sigma_{x/y/z}$ and $I$ operators. Each individual product, for n spins, uses one $\sigma_{x/y/z}$ and (n - 1) $I$ operators. They all differ in where in the sequence the $\sigma_{x/y/z}$ operator is placed. For 3 spins, and using $L_z$ for example:\begin{align}L_{z_1} &= \sigma_z \otimes I \otimes I\\L_{z_2} &= I \otimes \sigma_z \otimes I\\L_{z_3} &= I \otimes I \otimes \sigma_z\end{align} In the Python code, these individual $L_{x/y/z_n}$ operators get stored in a \[0, n\] array, Ln. *{I'm not sure why a 2-D array of one row was used. It's possible that this could be simplified to a regular 1-D array, but it's also possible that there was an issue with some other operation, such as creating the Cartesian products. Need to check this.}*
###Code
def create_krons(freqlist):
nspins = len(freqlist)
# The following empty arrays will be used to store the
# Cartesian spin operators.
Lx = np.empty((1, nspins), dtype='object')
Ly = np.empty((1, nspins), dtype='object')
Lz = np.empty((1, nspins), dtype='object')
for n in range(nspins):
Lx[0, n] = 1
Ly[0, n] = 1
Lz[0, n] = 1
for k in range(nspins):
if k == n: # Diagonal element
Lx[0, n] = np.kron(Lx[0, n], sigma_x)
Ly[0, n] = np.kron(Ly[0, n], sigma_y)
Lz[0, n] = np.kron(Lz[0, n], sigma_z)
else: # Off-diagonal element
Lx[0, n] = np.kron(Lx[0, n], unit)
Ly[0, n] = np.kron(Ly[0, n], unit)
Lz[0, n] = np.kron(Lz[0, n], unit)
return Lx, Ly, Lz
###Output
_____no_output_____
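###Markdown
As a quick sanity check of Step 1 (just an illustration using the objects defined above): for two spins, `create_krons` should reproduce $L_{z_1} = \sigma_z \otimes I$ and $L_{z_2} = I \otimes \sigma_z$ directly from `np.kron`.
###Code
# Illustrative check: compare create_krons output for two spins against the
# Kronecker products written out by hand.
Lx_chk, Ly_chk, Lz_chk = create_krons([10.0, 20.0])
print(np.allclose(Lz_chk[0, 0], np.kron(sigma_z, unit)))  # L_z1 = sigma_z kron I; expect True
print(np.allclose(Lz_chk[0, 1], np.kron(unit, sigma_z)))  # L_z2 = I kron sigma_z; expect True
###Output
_____no_output_____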
###Markdown
Step 2: Create the sums of cartesian products of $L$ operators.Eventually, the off-diagonal components of the Hamiltonian $H$ require calculating Cartesian products of the $L$ operators. In an attempt to hopefully "vectorize" these for faster computation, all of these products were calculated at once. Then, when a particular result is required (e.g. $L_{x_1}L_{x_2}+L_{y_1}L_{y_2}+L_{z_1}L_{z_2}$) it can be plucked like a bonbon out of a tray when desired.First, the $L_x, L_y, $ and $L_z$ operators were compiled into a 3 x n array of operators:```pythonLcol = np.vstack((Lx, Ly, Lz)).real```created:\begin{align}L_{col} = \begin{pmatrix}L_{x_1}& L_{x_2}&\dots & L_{x_n}\\ L_{y_1}& L_{y_2}&\dots & L_{y_n}\\L_{z_1}& L_{z_2}&\dots & L_{z_n}\end{pmatrix}\end{align}Its transpose created the n x 3 array of operators:\begin{align}L_{row} = \begin{pmatrix}L_{x_1}& L_{y_1}&L_{z_1}\\ L_{x_2}& L_{y_2}&L_{z_2}\\\vdots&\vdots&\vdots\\L_{x_n}& L_{y_n}&L_{z_n}\end{pmatrix}\end{align}The product of these two arrays gives an array of the Cartesian products:\begin{align}L_{product}&= L_{row} \cdot L_{col} \\&=\Tiny\begin{pmatrix}L_{x_1}L_{x_1}+L_{y_1}L_{y_1}+L_{z_1}L_{z_1}&L_{x_1}L_{x_2}+L_{y_1}L_{y_2}+L_{z_1}L_{z_2}&\dots&L_{x_1}L_{x_n}+L_{y_1}L_{y_n}+L_{z_1}L_{z_n}\\L_{x_2}L_{x_1}+L_{y_2}L_{y_1}+L_{z_2}L_{z_1}&L_{x_2}L_{x_2}+L_{y_2}L_{y_2}+L_{z_2}L_{z_2}&\dots&L_{x_2}L_{x_n}+L_{y_2}L_{y_n}+L_{z_2}L_{z_n}\\\vdots& &\ddots& \\L_{x_n}L_{x_1}+L_{y_n}L_{y_1}+L_{z_n}L_{z_1}&L_{x_n}L_{x_2}+L_{y_n}L_{y_2}+L_{z_n}L_{z_2}&\dots&L_{x_n}L_{x_n}+L_{y_n}L_{y_n}+L_{z_n}L_{z_n}\\\end{pmatrix}\end{align}
###Code
def cartesian_products(Lx, Ly, Lz):
Lcol = np.vstack((Lx, Ly, Lz)).real
Lrow = Lcol.T # As opposed to sparse version of code, this works!
Lproduct = np.dot(Lrow, Lcol)
return Lproduct
###Output
_____no_output_____
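###Markdown
Similarly, element $[0, 1]$ of $L_{product}$ should match $L_{x_1}L_{x_2}+L_{y_1}L_{y_2}+L_{z_1}L_{z_2}$ built by hand from the individual operators (again, just an illustrative check of the "bonbon tray").
###Code
# Illustrative check: pick the (0, 1) element of Lproduct and rebuild it
# directly from the individual spin operators.
Lx_chk, Ly_chk, Lz_chk = create_krons([10.0, 20.0])
Lprod_chk = cartesian_products(Lx_chk, Ly_chk, Lz_chk)
direct = (Lx_chk[0, 0] * Lx_chk[0, 1]
          + Ly_chk[0, 0] * Ly_chk[0, 1]
          + Lz_chk[0, 0] * Lz_chk[0, 1])
print(np.allclose(Lprod_chk[0, 1], direct))  # expect True
###Output
_____no_output_____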
###Markdown
Step 3: Add the Zeeman (on-diagonal) terms to the Hamiltonian.\begin{align}H_{Zeeman} = \sum_{i=1}^n \nu_i L_{z_i}\end{align}
###Code
def hamiltonian_diagonal(freqlist, Lz):
nspins = len(freqlist)
# Hamiltonian operator
H = np.zeros((2**nspins, 2**nspins))
# Add Zeeman interactions:
for n in range(nspins):
H = H + freqlist[n] * Lz[0, n]
return H
###Output
_____no_output_____
###Markdown
Step 4: Add the J-coupling (off-diagonal) terms to the Hamiltonian.\begin{align}H_J &= \sum_{i=1}^n \sum_{j=1}^n \frac{J_{ij}}{2} (L_{x_i}L_{x_j}+L_{y_i}L_{y_j}+L_{z_i}L_{z_j})\\H &= H_{Zeeman} + H_J\end{align}In an attempt to vectorize the calculation for better performance, this was accomplished by element-wise multiplication (`np.multiply`) of the matrix of $J$ coupling constants and the Cartesian products $L_{product}$.
###Code
def hamiltonian_off_diagonal(couplings, Lproduct):
nspins = len(couplings[0])
# Hamiltonian operator
H = np.zeros((2**nspins, 2**nspins))
# Scalar couplings
# Testing with MATLAB discovered J must be /2.
# Believe it is related to the fact that in the SpinDynamics.org simulation
# freqs are *2pi, but Js by pi only.
scalars = 0.5 * couplings
scalars = np.multiply(scalars, Lproduct)
for n in range(nspins):
for k in range(nspins):
H += scalars[n, k].real
return H
###Output
_____no_output_____
###Markdown
Extracting Signal Frequencies and Intensities From the Hamiltonian To simulate a "modern" NMR experiment, a 90° pulse and FID acquisition is simulated, followed by Fourier transform. This is the approach used in Kuprov's Matlab code, and should be the required approach for any experiment requiring a more elaborate pulse sequence. For a simple NMR spectrum, we can adopt a "continuous wave spectrometer" approach. We can find the resonance frequencies and their relative intensities directly from the spin Hamiltonian. The time-independent Schrödinger equation $H\Psi = E\Psi$ is solved for eigenvectors and corresponding eigenvalues. For each $\psi_i$, the eigenvectors are the coefficients $c_n$ for each pure spin state. For a two-spin system, for example, $\psi_i = c_1\ket{\alpha\alpha} + c_2\ket{\alpha\beta} + c_3\ket{\beta\alpha} + c_4\ket{\beta\beta}$, and the corresponding eigenvector would be \begin{bmatrix}c_1\\c_2\\c_3\\c_4\end{bmatrix} For a one-spin system, the two states for "spin-up" ($\ \ket{\uparrow}$ or $\ket{\alpha}$) and for "spin-down" ($\ \ket{\downarrow}$ or $\ket{\beta}$) are represented by vectors $\begin{bmatrix}1\\0\end{bmatrix}$ and $\begin{bmatrix}0\\1\end{bmatrix}$, respectively. For "pure" multiple-spin states, their vectors are obtained by taking tensor products of these vectors. For example:\begin{align}\ket{\alpha\alpha} &=\begin{bmatrix}1\\0\end{bmatrix}\otimes\begin{bmatrix}1\\0\end{bmatrix} = \begin{bmatrix}1\\0\\0\\0\end{bmatrix}\\\ket{\alpha\beta} &= \begin{bmatrix}1\\0\end{bmatrix}\otimes\begin{bmatrix}0\\1\end{bmatrix} = \begin{bmatrix}0\\1\\0\\0\end{bmatrix}\\\ket{\beta\alpha} &= \begin{bmatrix}0\\1\end{bmatrix}\otimes\begin{bmatrix}1\\0\end{bmatrix} = \begin{bmatrix}0\\0\\1\\0\end{bmatrix}\\\ket{\beta\beta} &= \begin{bmatrix}0\\1\end{bmatrix}\otimes\begin{bmatrix}0\\1\end{bmatrix} = \begin{bmatrix}0\\0\\0\\1\end{bmatrix}\end{align} A (coincidental?) consequence of this is that the index for $H$, expressed in binary form as a series of 0s and 1s, is the eigenvector for the associated pure spin state (cf. Rioux's *ABC Proton NMR Using Tensor Algebra*). Since allowed transitions change the total spin of a system by $\pm$ 1, this is analogous to transitions only being allowed between spin states whose binary indices differ at exactly one bit. In computing terms, if the Hamming weight of `m ^ n` (the XOR of the two indices) is exactly 1, the transition is allowed.
###Code
def popcount(n=0):
"""
Computes the popcount (binary Hamming weight) of integer n
input:
:param n: an integer
returns:
popcount of integer (binary Hamming weight)
"""
return bin(n).count('1')
def is_allowed(m=0, n=0):
"""
determines if a transition between two spin states is allowed or forbidden.
The transition is allowed if one and only one spin (i.e. bit) changes
input: integers whose binary codes for a spin state
:param n:
:param m:
output: 1 = allowed, 0 = forbidden
"""
return popcount(m ^ n) == 1
###Output
_____no_output_____
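###Markdown
A small illustration of the two ideas above: the pure-state basis vectors really are Kronecker products of the one-spin vectors, and `is_allowed` only permits transitions whose binary indices differ at a single bit.
###Code
# Basis vectors as Kronecker products of |alpha> = [1, 0] and |beta> = [0, 1]
alpha = np.array([1, 0])
beta = np.array([0, 1])
print(np.kron(alpha, beta))   # |alpha beta> -> index 1 -> [0, 1, 0, 0]
print(np.kron(beta, beta))    # |beta beta>  -> index 3 -> [0, 0, 0, 1]
# Allowed vs. forbidden transitions between those states
print(is_allowed(0, 1))  # |aa> -> |ab>: one spin flips  -> True
print(is_allowed(1, 2))  # |ab> -> |ba>: two spins flip  -> False
print(is_allowed(0, 3))  # |aa> -> |bb>: two spins flip  -> False
###Output
_____no_output_____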
###Markdown
Knowing this, we can create a transition probability matrix $T$, where $T_{ij} = 1$ if a transition between states $i$ and $j$ is allowed, and $0$ if not.
###Code
def transition_matrix(n):
"""
Creates a matrix of allowed transitions.
The integers 0-n, in their binary form, code for a spin state (alpha/beta).
The (i,j) cells in the matrix indicate whether a transition from spin state
i to spin state j is allowed or forbidden.
See the is_allowed function for more information.
input:
:param n: size of the n,n matrix (i.e. number of possible spin states)
:returns: a transition matrix that can be used to compute the intensity of
allowed transitions.
"""
# function was optimized by only calculating upper triangle and then adding
# the lower.
T = lil_matrix((n, n)) # sparse matrix created
for i in range(n - 1):
for j in range(i + 1, n):
if is_allowed(i, j):
T[i, j] = 1
T = T + T.T
return T
###Output
_____no_output_____
###Markdown
The eigenvector solutions for the Hamiltonian include two pure states ("all-up/$\alpha$" and "all-down/$\beta$"), plus mixed states. We can construct a matrix $V_{col}$ where each column of the matrix is an eigenvector solution, in their indexed order:\begin{align}V_{col} = \begin{pmatrix}\ket{\psi_1} &\ket{\psi_2} &\dots &\ket{\psi_n}\end{pmatrix}=\begin{pmatrix}\begin{bmatrix}c_1\\c_2\\\vdots\\c_n\end{bmatrix}_1&\begin{bmatrix}c_1\\c_2\\\vdots\\c_n\end{bmatrix}_2&\dots&\begin{bmatrix}c_1\\c_2\\\vdots\\c_n\end{bmatrix}_n\end{pmatrix}\end{align}and where its transpose $V_{row} = V_{col}^T$ has an eigenvector for each row:\begin{align}V_{row}=\begin{pmatrix}\bra{\psi_1} \\\bra{\psi_2} \\\vdots\\\bra{\psi_n} \\\end{pmatrix}=\begin{pmatrix}\begin{bmatrix}c_1&c_2&\dots&c_n\end{bmatrix}_1\\\begin{bmatrix}c_1&c_2&\dots&c_n\end{bmatrix}_2\\\vdots\\\begin{bmatrix}c_1&c_2&\dots&c_n\end{bmatrix}_n\\\end{pmatrix}\end{align} The intensity matrix $I$ can be obtained by taking $V_{row}\cdot T \cdot V_{col}$ and squaring it element-wise, so that $I_{ij}$ is the relative probability of a transition between the $\psi_i$ and $\psi_j$ states. The difference in energy between the two states gives the frequency in Hz.
###Code
def simsignals(H, nspins):
"""
Solves the spin Hamiltonian H and returns a list of (frequency, intensity)
tuples. Nuclei must be spin-1/2.
Inputs:
:param H: a Hamiltonian array
:param nspins: number of nuclei
:return spectrum: a list of (frequency, intensity) tuples.
"""
# This routine was optimized for speed by vectorizing the intensity
# calculations, replacing a nested-for signal-by-signal calculation.
# Considering that hamiltonian was dramatically faster when refactored to
# use arrays instead of sparse matrices, consider an array refactor to this
# function as well.
# The eigensolution calculation apparently must be done on a dense matrix,
# because eig functions on sparse matrices can't return all answers?!
# Using eigh so that answers have only real components and no residual small
# unreal components b/c of rounding errors
E, V = np.linalg.eigh(H) # V will be eigenvectors, v will be frequencies
print(E)
# Eigh still leaves residual 0j terms, so:
V = np.asmatrix(V.real)
print(V)
# Calculate signal intensities
Vcol = csc_matrix(V)
print('Vcol:')
print(Vcol.todense())
Vrow = csr_matrix(Vcol.T)
print('Vrow:')
print(Vrow.todense())
m = 2 ** nspins
T = transition_matrix(m)
print('T:')
print(T.todense())
print('T•Vcol:')
mid_t = T * Vcol
print(mid_t.todense())
I = Vrow * T * Vcol
print('I:')
print(I.todense())
print('I squared:')
I = np.square(I.todense())
print(I)
spectrum = []
for i in range(m - 1):
for j in range(i + 1, m):
if I[i, j] > 0.01: # consider making this minimum intensity
# cutoff a function arg, for flexibility
v = abs(E[i] - E[j])
spectrum.append((v, I[i, j]))
return spectrum
def hamiltonian_with_prints(freqlist, couplings):
# The following empty arrays will be used to store the
# Cartesian spin operators.
Lx = np.empty((1, nspins), dtype='object')
Ly = np.empty((1, nspins), dtype='object')
Lz = np.empty((1, nspins), dtype='object')
for n in range(nspins):
Lx[0, n] = 1
Ly[0, n] = 1
Lz[0, n] = 1
for k in range(nspins):
if k == n: # Diagonal element
Lx[0, n] = np.kron(Lx[0, n], sigma_x)
Ly[0, n] = np.kron(Ly[0, n], sigma_y)
Lz[0, n] = np.kron(Lz[0, n], sigma_z)
else: # Off-diagonal element
Lx[0, n] = np.kron(Lx[0, n], unit)
Ly[0, n] = np.kron(Ly[0, n], unit)
Lz[0, n] = np.kron(Lz[0, n], unit)
Lcol = np.vstack((Lx, Ly, Lz)).real
Lrow = Lcol.T # As opposed to sparse version of code, this works!
Lproduct = np.dot(Lrow, Lcol)
print(Lcol)
print('-'*10)
print(Lrow)
print('-'*10)
print(Lproduct)
# Hamiltonian operator
H = np.zeros((2**nspins, 2**nspins))
# Add Zeeman interactions:
for n in range(nspins):
H = H + freqlist[n] * Lz[0, n]
# Scalar couplings
# Testing with MATLAB discovered J must be /2.
# Believe it is related to the fact that in the SpinDynamics.org simulation
# freqs are *2pi, but Js by pi only.
scalars = 0.5 * couplings
scalars = np.multiply(scalars, Lproduct)
for n in range(nspins):
for k in range(nspins):
H += scalars[n, k].real
print('Lz: ', Lz)
return H
H = hamiltonian(freqlist, couplings)
H
np.set_printoptions(threshold=np.nan)
nspins = len(freqlist)
simsignals(H, nspins)
###Output
[-13.75 -6.84016994 4.34016994 16.25 ]
[[ 0. 0. -0. 1. ]
[ 0. -0.97324899 -0.22975292 0. ]
[ 0. 0.22975292 -0.97324899 0. ]
[ 1. 0. -0. 0. ]]
Vcol:
[[ 0. 0. 0. 1. ]
[ 0. -0.97324899 -0.22975292 0. ]
[ 0. 0.22975292 -0.97324899 0. ]
[ 1. 0. 0. 0. ]]
Vrow:
[[ 0. 0. 0. 1. ]
[ 0. -0.97324899 0.22975292 0. ]
[ 0. -0.22975292 -0.97324899 0. ]
[ 1. 0. 0. 0. ]]
T:
[[ 0. 1. 1. 0.]
[ 1. 0. 0. 1.]
[ 1. 0. 0. 1.]
[ 0. 1. 1. 0.]]
T•Vcol:
[[ 0. -0.74349607 -1.20300191 0. ]
[ 1. 0. 0. 1. ]
[ 1. 0. 0. 1. ]
[ 0. -0.74349607 -1.20300191 0. ]]
I:
[[ 0. -0.74349607 -1.20300191 0. ]
[-0.74349607 0. 0. -0.74349607]
[-1.20300191 0. 0. -1.20300191]
[ 0. -0.74349607 -1.20300191 0. ]]
I squared:
[[ 0. 0.5527864 1.4472136 0. ]
[ 0.5527864 0. 0. 0.5527864]
[ 1.4472136 0. 0. 1.4472136]
[ 0. 0.5527864 1.4472136 0. ]]
###Markdown
What happens if we only have one nucleus?
###Code
freqlist = [10.0]
couplings = np.array([[0]])
nspins = len(freqlist)
###Output
_____no_output_____
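###Markdown
Running the same machinery on this one-spin system (a quick sketch using the functions defined above): a lone spin-1/2 has only one allowed transition ($\ket{\alpha} \rightarrow \ket{\beta}$), so we should get a single line at 10 Hz with unit intensity.
###Code
# One uncoupled spin-1/2: expect a single signal, (10.0, 1.0)
H_1spin = hamiltonian(freqlist, couplings)
print(H_1spin)
print(simsignals(H_1spin, nspins))
###Output
_____no_output_____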
###Markdown
two uncoupled spins:
###Code
freqlist = [10.0, 20.0]
couplings = np.array([[0, 0], [0, 0]])
nspins = len(freqlist)
def spin2():
v = np.array([150-7.5, 150+7.5])
J = lil_matrix((2, 2))
J[0, 1] = 12
J = J + J.T
return v, J
v, J = spin2()
J = J.todense()
print(v)
print(J)
H = hamiltonian_with_prints(v, J)
H
###Output
[ 142.5 157.5]
[[ 0. 12.]
[ 12. 0.]]
[[ matrix([[ 0. , 0. , 0.5, 0. ],
[ 0. , 0. , 0. , 0.5],
[ 0.5, 0. , 0. , 0. ],
[ 0. , 0.5, 0. , 0. ]])
matrix([[ 0. , 0.5, 0. , 0. ],
[ 0.5, 0. , 0. , 0. ],
[ 0. , 0. , 0. , 0.5],
[ 0. , 0. , 0.5, 0. ]])]
[ matrix([[ 0.+0.j , 0.+0.j , 0.-0.5j, 0.+0.j ],
[ 0.+0.j , 0.+0.j , 0.+0.j , 0.-0.5j],
[ 0.+0.5j, 0.+0.j , 0.+0.j , 0.+0.j ],
[ 0.+0.j , 0.+0.5j, 0.+0.j , 0.+0.j ]])
matrix([[ 0.+0.j , 0.-0.5j, 0.+0.j , 0.-0.j ],
[ 0.+0.5j, 0.+0.j , 0.+0.j , 0.+0.j ],
[ 0.+0.j , 0.-0.j , 0.+0.j , 0.-0.5j],
[ 0.+0.j , 0.+0.j , 0.+0.5j, 0.+0.j ]])]
[ matrix([[ 0.5, 0. , 0. , 0. ],
[ 0. , 0.5, 0. , 0. ],
[ 0. , 0. , -0.5, -0. ],
[ 0. , 0. , -0. , -0.5]])
matrix([[ 0.5, 0. , 0. , 0. ],
[ 0. , -0.5, 0. , -0. ],
[ 0. , 0. , 0.5, 0. ],
[ 0. , -0. , 0. , -0.5]])]]
----------
[[ matrix([[ 0. , 0. , 0.5, 0. ],
[ 0. , 0. , 0. , 0.5],
[ 0.5, 0. , 0. , 0. ],
[ 0. , 0.5, 0. , 0. ]])
matrix([[ 0.+0.j , 0.+0.j , 0.-0.5j, 0.+0.j ],
[ 0.+0.j , 0.+0.j , 0.+0.j , 0.-0.5j],
[ 0.+0.5j, 0.+0.j , 0.+0.j , 0.+0.j ],
[ 0.+0.j , 0.+0.5j, 0.+0.j , 0.+0.j ]])
matrix([[ 0.5, 0. , 0. , 0. ],
[ 0. , 0.5, 0. , 0. ],
[ 0. , 0. , -0.5, -0. ],
[ 0. , 0. , -0. , -0.5]])]
[ matrix([[ 0. , 0.5, 0. , 0. ],
[ 0.5, 0. , 0. , 0. ],
[ 0. , 0. , 0. , 0.5],
[ 0. , 0. , 0.5, 0. ]])
matrix([[ 0.+0.j , 0.-0.5j, 0.+0.j , 0.-0.j ],
[ 0.+0.5j, 0.+0.j , 0.+0.j , 0.+0.j ],
[ 0.+0.j , 0.-0.j , 0.+0.j , 0.-0.5j],
[ 0.+0.j , 0.+0.j , 0.+0.5j, 0.+0.j ]])
matrix([[ 0.5, 0. , 0. , 0. ],
[ 0. , -0.5, 0. , -0. ],
[ 0. , 0. , 0.5, 0. ],
[ 0. , -0. , 0. , -0.5]])]]
----------
[[ matrix([[ 0.75+0.j, 0.00+0.j, 0.00+0.j, 0.00+0.j],
[ 0.00+0.j, 0.75+0.j, 0.00+0.j, 0.00+0.j],
[ 0.00+0.j, 0.00+0.j, 0.75+0.j, 0.00+0.j],
[ 0.00+0.j, 0.00+0.j, 0.00+0.j, 0.75+0.j]])
matrix([[ 0.25+0.j, 0.00+0.j, 0.00+0.j, 0.00+0.j],
[ 0.00+0.j, -0.25+0.j, 0.50+0.j, 0.00+0.j],
[ 0.00+0.j, 0.50+0.j, -0.25+0.j, 0.00+0.j],
[ 0.00+0.j, 0.00+0.j, 0.00+0.j, 0.25+0.j]])]
[ matrix([[ 0.25+0.j, 0.00+0.j, 0.00+0.j, 0.00+0.j],
[ 0.00+0.j, -0.25+0.j, 0.50+0.j, 0.00+0.j],
[ 0.00+0.j, 0.50+0.j, -0.25+0.j, 0.00+0.j],
[ 0.00+0.j, 0.00+0.j, 0.00+0.j, 0.25+0.j]])
matrix([[ 0.75+0.j, 0.00+0.j, 0.00+0.j, 0.00+0.j],
[ 0.00+0.j, 0.75+0.j, 0.00+0.j, 0.00+0.j],
[ 0.00+0.j, 0.00+0.j, 0.75+0.j, 0.00+0.j],
[ 0.00+0.j, 0.00+0.j, 0.00+0.j, 0.75+0.j]])]]
Lz: [[ matrix([[ 0.5, 0. , 0. , 0. ],
[ 0. , 0.5, 0. , 0. ],
[ 0. , 0. , -0.5, -0. ],
[ 0. , 0. , -0. , -0.5]])
matrix([[ 0.5, 0. , 0. , 0. ],
[ 0. , -0.5, 0. , -0. ],
[ 0. , 0. , 0.5, 0. ],
[ 0. , -0. , 0. , -0.5]])]]
|
notebooks/.ipynb_checkpoints/dynesty-delete-checkpoint.ipynb | ###Markdown
I'll define the conversions between solar mass -> kg and solar radius -> meters for convenience.
###Code
smass_kg = 1.9885e30 # Solar mass (kg)
srad_m = 696.34e6 # Solar radius (m)
###Output
_____no_output_____
###Markdown
The Sample I'm using the sample of "cool KOIs" from [Muirhead et al. 2013](https://iopscience.iop.org/article/10.1088/0067-0049/213/1/5), and their properties from spectroscopy published therein.
###Code
muirhead_data = pd.read_csv("datafiles/Muirhead2013_isochrones/muirhead_data_incmissing.txt", sep=" ")
###Output
_____no_output_____
###Markdown
I'm reading in a file containing data for all Kepler planets from the Exoplanet Archive (`planets`), then only taking these data for planets in the Muirhead et al. 2013 sample (`spectplanets`).
###Code
# ALL Kepler planets from exo archive
planets = pd.read_csv('datafiles/exoplanetarchive/cumulative_kois.csv')
# Take the Kepler planet archive entries for the planets in Muirhead et al. 2013 sample
# TODO: Vet based on KOI not KIC
spectplanets = planets[planets['kepid'].isin(list(muirhead_data['KIC']))]
# spectplanets.to_csv('spectplanets.csv')
###Output
_____no_output_____
###Markdown
Now, I'm reading in the entire Kepler/Gaia dataset from [gaia-kepler.fun]. I'm again matching these data with the objects in our sample (`muirhead_gaia`). I'm using the DR2 data with a 4 arcsecond search radius. Then I'll combine the spectroscopy data with the Kepler/Gaia data for our sample.
###Code
# Kepler-Gaia Data
kpgaia = Table.read('datafiles/Kepler-Gaia/kepler_dr2_4arcsec.fits', format='fits').to_pandas();
# Kepler-Gaia data for only the objects in our sample
muirhead_gaia = kpgaia[kpgaia['kepid'].isin(list(muirhead_data.KIC))]
muirhead_gaia = muirhead_gaia.rename(columns={"source_id": "m_source_id"})
# Combined spectroscopy data + Gaia/Kepler data for our sample
muirhead_comb = pd.merge(muirhead_data, muirhead_gaia, how='inner', left_on='KIC', right_on='kepid')
#muirhead_comb.to_csv('muirhead_comb.csv')
# Only targets from table above with published luminosities from Gaia
muirhead_comb_lums = muirhead_comb[muirhead_comb.lum_val.notnull()]
#muirhead_comb_lums.to_csv('muirhead_comb_lums.csv')
###Output
/Users/ssagear/anaconda3/lib/python3.8/site-packages/IPython/core/interactiveshell.py:3418: TableReplaceWarning: converted column 'r_result_flag' from integer to float
exec(code_obj, self.user_global_ns, self.user_ns)
/Users/ssagear/anaconda3/lib/python3.8/site-packages/IPython/core/interactiveshell.py:3418: TableReplaceWarning: converted column 'r_modality_flag' from integer to float
exec(code_obj, self.user_global_ns, self.user_ns)
/Users/ssagear/anaconda3/lib/python3.8/site-packages/IPython/core/interactiveshell.py:3418: TableReplaceWarning: converted column 'teff_err1' from integer to float
exec(code_obj, self.user_global_ns, self.user_ns)
/Users/ssagear/anaconda3/lib/python3.8/site-packages/IPython/core/interactiveshell.py:3418: TableReplaceWarning: converted column 'teff_err2' from integer to float
exec(code_obj, self.user_global_ns, self.user_ns)
###Markdown
Defining a "test planet" I'm going to pick a random planet from our sample to test how well `photoeccentric` works. Here, I'm picking Kepler-1582 b, a super-Earth orbiting an M dwarf [Exoplanet Catalog Entry](https://exoplanets.nasa.gov/exoplanet-catalog/2457/kepler-1582-b/). It has an orbital period of about 5 days.First, I'll use the spectroscopy data from Muirhead et al. 2013 and Gaia luminosities to constrain the mass and radius of the host star beyond the constraint published in the Exoplanet Archive. I'll do this by matching these data with stellar isochrones [MESA](https://iopscience.iop.org/article/10.3847/0004-637X/823/2/102) (check this ciation) and using the masses/radii from the matching isochrones to constrian the stellar density.
###Code
# Kepler ID for Kepler-1582 b
kepid = 5868793
kepname = spectplanets.loc[spectplanets['kepid'] == kepid].kepler_name.values[0]
kp1582b = muirhead_comb.loc[muirhead_comb['KIC'] == kepid]
kp1582b
# Read in MESA isochrones
#isochrones = pd.read_csv('datafiles/Muirhead2013_isochrones/isochrones_sdss_spitzer_lowmass.dat', sep='\s\s+', engine='python')
###Output
_____no_output_____
###Markdown
Using `ph.fit_isochrone_lum()` to match isochrones to stellar data
###Code
#iso_lums = ph.fit_isochrone_lum(kp1582b, muirhead_comb_lums, isochrones, gaia_lum=False, source='Muirhead')
# Write to csv, then read back in (prevents python notebook from lagging)
#iso_lums.to_csv("datafiles/isochrones/iso_lums_" + str(kepid) + ".csv")
isodf = pd.read_csv("datafiles/isochrones/iso_lums_" + str(kepid) + ".csv")
###Output
_____no_output_____
###Markdown
I'm determining the mass and radius constraints of this star based on the isochrones that were consistent with the observational data.
###Code
mstar = isodf["mstar"].mean()
mstar_err = isodf["mstar"].std()
rstar = isodf["radius"].mean()
rstar_err = isodf["radius"].std()
###Output
_____no_output_____
###Markdown
Using `ph.find_density_dist_symmetric()` to create a stellar density distribution from symmetric (Gaussian) distributions based on mstar and rstar (from isochrones). Note: this does not necessarily mean the resulting density distribution will appear symmetric.
###Code
rho_star, mass, radius = ph.find_density_dist_symmetric(mstar, mstar_err, rstar, rstar_err, arrlen)
plt.hist(rho_star, bins=20)
plt.xlabel('Stellar Density Histogram (kg m^-3)', fontsize=20)
###Output
_____no_output_____
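###Markdown
For reference, the quantity being sampled can be written out directly (an illustrative sketch of the idea, not necessarily what `ph.find_density_dist_symmetric()` does internally): draw Gaussian $M_\star$ and $R_\star$ and convert each pair to a bulk density $\rho_\star = M_\star / (\frac{4}{3}\pi R_\star^3)$ in kg m$^{-3}$.
###Code
# Illustrative re-computation of the stellar density distribution, assuming
# simple Gaussian draws for mass and radius (a sketch of the idea, not
# necessarily photoeccentric's implementation).
mass_draws = np.random.normal(mstar, mstar_err, arrlen) * smass_kg      # kg
radius_draws = np.random.normal(rstar, rstar_err, arrlen) * srad_m      # m
rho_sketch = mass_draws / ((4.0 / 3.0) * np.pi * radius_draws**3)       # kg m^-3
plt.hist(rho_sketch, bins=20)
plt.xlabel('Sketch of stellar density (kg m^-3)', fontsize=20)
###Output
_____no_output_____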
###Markdown
Creating a fake light curve based on a real planet I'm pulling the planet parameters of Kepler-1582 b from the exoplanet archive using `ph.planet_params_from_archive()`. This will give me the published period, Rp/Rs, and inclination constraints of this planet. (It will also return some other parameters, but we don't need those right now).I'm calculating a/Rs using `ph.calc_a()`, instead of using the a/Rs constraint from the Exoplanet Archive. The reason is because a/Rs must be consistent with the density calculated above from spectroscopy/Gaia for the photoeccentric effect to work correctly, and the published a/Rs is often inconsistent. a/Rs depends on the orbital period, Mstar, and Rstar.
###Code
period, period_uerr, period_lerr, rprs, rprs_uerr, rprs_lerr, a_arc, a_uerr_arc, a_lerr_arc, i, e_arc, w_arc = ph.planet_params_from_archive(spectplanets, kepname)
# We calculate a_rs to ensure that it's consistent with the spec/Gaia stellar density.
a_rs = ph.calc_a(period*86400.0, mstar*smass_kg, rstar*srad_m)
a_rs_err = np.mean((a_uerr_arc, a_lerr_arc))
print('Stellar mass (Msun): ', mstar, 'Stellar radius (Rsun): ', rstar)
print('Period (Days): ', period, 'Rp/Rs: ', rprs)
print('a/Rs: ', a_rs)
print('i (deg): ', i)
###Output
Stellar mass (Msun): 0.17438311970562165 Stellar radius (Rsun): 0.19884797856314
Period (Days): 4.83809469 Rp/Rs: 0.036066
a/Rs: 33.79155851141583
i (deg): 89.98
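###Markdown
As a cross-check, a/Rs here should just be Kepler's third law, $a = (G M_\star P^2 / 4\pi^2)^{1/3}$, divided by $R_\star$ (I'm assuming `ph.calc_a()` implements this relation; the direct computation below reproduces the value printed above).
###Code
# Direct Kepler's-third-law computation of a/Rs (illustrative cross-check).
G = 6.674e-11  # m^3 kg^-1 s^-2
a_m = (G * (mstar * smass_kg) * (period * 86400.0)**2 / (4.0 * np.pi**2))**(1.0 / 3.0)
print(a_m / (rstar * srad_m), a_rs)  # both should be ~33.8
###Output
_____no_output_____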
###Markdown
Now, I'll create a fake transit using `batman`. I'm creating a model with the period, Rp/Rs, a/Rs, and inclination specified by the Kepler catalog entry and the density constraints. I'll create the transit model with an $e$ and $w$ of my choice. This will allow me to test whether `photoeccentric` accurately recovers the $(e,w)$ combination I have input. I'll start with $e = 0.0$ and $w = 90.0$ degrees.
###Code
# 30 minute cadence
cadence = 0.02142857142857143
time = np.arange(-25, 25, cadence)
###Output
_____no_output_____
###Markdown
The light curve functions used here (`ph.integratedlc()` and `ph.integratedlc_fitter()`) evaluate the flux at every minute, then sum over every 30 minutes to simulate the Kepler integration time.
###Code
# Calculate flux from transit model
e = 0.0
w = 90.0
flux = ph.integratedlc(time, period, rprs, a_rs, e, i, w)
# Adding some gaussian noise
noise = np.random.normal(0,0.00006,len(time))
nflux = flux+noise
flux_err = np.array([0.00006]*len(nflux))
plt.errorbar(time, nflux, yerr=flux_err)
plt.xlabel('Time')
plt.ylabel('Flux')
#plt.xlim(-1, 1)
###Output
_____no_output_____
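###Markdown
The idea behind that integration, written out by hand (a generic sketch — `ph.integratedlc()` handles this internally and its exact implementation may differ): evaluate the model on a fine grid, then average each block of 30 one-minute samples down to a single 30-minute exposure.
###Code
# Generic sketch of long-cadence integration: average consecutive blocks of
# fine-grid model samples into one exposure each. `fine_flux` stands in for any
# finely-sampled model flux array (hypothetical input, for illustration only).
def bin_to_long_cadence(fine_flux, n_per_bin=30):
    """Average consecutive blocks of n_per_bin fine samples into one exposure."""
    n_bins = len(fine_flux) // n_per_bin
    return fine_flux[:n_bins * n_per_bin].reshape(n_bins, n_per_bin).mean(axis=1)
###Output
_____no_output_____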
###Markdown
Fitting the transit I'm using the Astropy BLS method to determine the period of the fake light curve. I'll use the power spectrum to create a PDF of the possible periods for this planet. I'm fitting the period and the other planet parameters separately.
###Code
periodPDF = ph.get_period_dist(time, nflux, 4, 6, arrlen)
print('Period fit: ', ph.mode(periodPDF))
pdist = periodPDF
per_guess = ph.mode(pdist)
###Output
Period fit: 4.828282828282829
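###Markdown
For reference, a bare-bones version of that step (an illustrative sketch — `ph.get_period_dist()` may differ in detail): run astropy's Box Least Squares on the light curve and treat the normalized power spectrum as a probability distribution over trial periods. The 0.1-day transit duration below is just an assumed search value.
###Code
# Sketch: BLS periodogram -> normalized power -> sample a period "PDF".
from astropy.timeseries import BoxLeastSquares
trial_periods = np.linspace(4, 6, 2000)
bls = BoxLeastSquares(time, nflux, flux_err)
power = bls.power(trial_periods, 0.1).power
prob = np.clip(power, 0, None)
prob = prob / prob.sum()
period_samples = np.random.choice(trial_periods, size=arrlen, p=prob)
print(trial_periods[np.argmax(power)])  # should land near the ~4.84-day input period
###Output
_____no_output_____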
###Markdown
Now, I'm fitting the transit shape with `emcee`. $Rp/Rs$, $a/Rs$, $i$, and $w$ are allowed to vary as free parameters. The transit fitter, `ph.planetlc_fitter`, fixes $e = 0.0$, even if the input eccentricity is not zero! This means that if e != 0, the transit fitter will fit the wrong values for $a/Rs$ and $i$ -- but they will be wrong in such a way that reveals the eccentricity of the orbit. More on that in the next section. I enter an initial guess based on what I estimate the fit parameters will be. For this one, I'll enter values pretty close to what I input.
###Code
ttimes = np.concatenate((-np.arange(0, time[-1], period)[1:], np.arange(0, time[-1], period)))
ttimes = np.sort(ttimes)
time1, nflux1, fluxerr1 = ph.get_transit_cutout_full(ttimes, 4, time, nflux, flux_err)
mid = ph.get_mid(time1)
ptime1 = ph.get_ptime(time1, mid, 29)
time = time1
flux = nflux1
flux_err = fluxerr1
ptime = ptime1
plt.errorbar(time, flux, yerr=flux_err)
rp = rprs
a = a_rs
inc = i
t0 = 0
u = np.array([0,1])
# u * width + offset from 0
u*15.+20.
# Define the dimensionality of our problem.
ndim = 5
def tfit_loglike(theta):
"""
Transit fit emcee function
model = ph.integratedlc_fitter()
gerr = sigma of g distribution
"""
per, rp, a, inc, t0 = theta
#per, rp = theta
#print(per,rp,a,inc,t0)
model = ph.integratedlc_fitter(time, per, rp, a, inc, t0)
sigma2 = flux_err ** 2
#print(-0.5 * np.sum((flux - model) ** 2 / sigma2 + np.log(sigma2)))
return -0.5 * np.sum((flux - model) ** 2 / sigma2 + np.log(sigma2))
# Define our uniform prior.
def tfit_prior_transform(utheta):
"""Transforms samples `u` drawn from the unit cube to samples to those
from our uniform prior within [-10., 10.) for each variable."""
uper, urp, ua, uinc, ut0 = utheta
#uper, urp = utheta
per = 3.*uper+3.
rp = urp
a = ua*15.+20.
inc = uinc*3.+87.
t0 = 2.*ut0-1.
return per,rp,a,inc,t0
dsampler = dynesty.DynamicNestedSampler(tfit_loglike, tfit_prior_transform, ndim=ndim, nlive=1500,
bound='multi', sample='rwalk')
dsampler.run_nested()
dres = dsampler.results
#ph.mode(dres.samples[:,4])
#fig, axes = dyplot.cornerplot(dres)
# plot proposals in corner format for 'cubes'
dyplot.cornerbound(dres, it=800, prior_transform=tfit_prior_transform)
fig, axes = dyplot.cornerplot(dres, labels=["period", "Rp/Rs", "a/Rs", "i", "t0"])
# Initial guess: period, rprs, a/Rs, i, t0
p0 = [per_guess, rprs, 35, 89.9, 0]
dr = 'e_' + str(0) + '_w_' + str(w)
direct = 'plots_tutorial_dynesty/' + dr + '/'
if not os.path.exists(direct):
os.mkdir(direct)
# EMCEE Transit Model Fitting
_, _, pdist, rdist, adist, idist, t0dist = ph.mcmc_fitter(p0, time1, ptime1, nflux1, fluxerr1, nwalk, nsteps, ndiscard, e, w, direct)
per_f = ph.mode(pdist)
rprs_f = ph.mode(rdist)
a_f = ph.mode(adist)
i_f = ph.mode(idist)
t0_f = ph.mode(t0dist)
###Output
_____no_output_____
###Markdown
Below, I print the original parameters and fit parameters, and overlay the fit light curve on the input light curve. Because I input $e = 0.0$, the transit fitter should return the exact same parameters I input (because the transit fitter always requires $e = 0.0$).
###Code
# Create a light curve with the fit parameters
fit1 = ph.integratedlc_fitter(time1, per_f, rprs_f, a_f, i_f, t0_f)
plt.errorbar(time1, nflux1, yerr=fluxerr1, c='blue', alpha=0.5, label='Original LC')
plt.plot(time1, fit1, c='red', alpha=1.0, label='Fit LC')
#plt.xlim(-0.1, 0.1)
plt.legend()
print('Stellar mass (Msun): ', mstar, 'Stellar radius (Rsun): ', rstar)
print('\n')
print('Input params:')
print('Rp/Rs: ', rprs)
print('a/Rs: ', a_rs)
print('i (deg): ', i)
print('\n')
print('Fit params:')
print('Rp/Rs: ', rprs_f)
print('a/Rs: ', a_f)
print('i (deg): ', i_f)
###Output
_____no_output_____
###Markdown
Determining T14 and T23
###Code
T14dist = ph.get_T14(pdist, rdist, adist, idist)
T14errs = ph.get_sigmas(T14dist)
T23dist = ph.get_T23(pdist, rdist, adist, idist)
T23errs = ph.get_sigmas(T23dist)
###Output
_____no_output_____
###Markdown
Get $g$ A crucial step to determining the $(e, w)$ distribution from the transit is calculating the total and full transit durations. T14 is the total transit duration (the time between first and fourth contact). T23 is the full transit duration (i.e. the time during which the entire planet disk is in front of the star, the time between second and third contact.)Here, I'm using equations 14 and 15 from [this textbook](https://sites.astro.caltech.edu/~lah/review/transits_occultations.winn.pdf). We calculate T14 and T23 assuming the orbit must be circular, and using the fit parameters assuming the orbit is circular. (If the orbit is not circular, T14 and T23 will not be correct -- but this is what we want, because they will differ from the true T14 and T23 in a way that reveals the eccentricity of the orbit.)
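For reference, the circular-orbit forms of those equations (Winn 2010, Eqs. 14-15), with $k = R_p/R_\star$ and $b = (a/R_\star)\cos i$:$$T_{14} = \frac{P}{\pi}\sin^{-1}\left[\frac{R_\star}{a}\frac{\sqrt{(1+k)^2 - b^2}}{\sin i}\right], \qquad T_{23} = \frac{P}{\pi}\sin^{-1}\left[\frac{R_\star}{a}\frac{\sqrt{(1-k)^2 - b^2}}{\sin i}\right]$$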
###Code
gs, rho_c = ph.get_g_distribution(rho_star, pdist, rdist, T14dist, T23dist)
g_mean = ph.mode(gs)
g_sigma = np.mean(np.abs(ph.get_sigmas(gs)))
###Output
_____no_output_____
###Markdown
Print $g$ and $\sigma_{g}$: Finally, we can use all the values above to determine $\rho_{circ}$. $\rho_{circ}$ is what we would calculate the stellar density to be if we knew that the orbit was definitely perfectly circular. We will compare $\rho_{circ}$ to $\rho_{star}$ (the true, observed stellar density we calculated from spectroscopy/Gaia), and get $g(e, w)$, which is also defined as $g(e, w) = \frac{1 + e\sin w}{\sqrt{1 - e^2}}$ (Dawson & Johnson 2012). Thus, if the orbit is circular $(e = 0)$, then $g$ should equal 1. If the orbit is not circular $(e != 0)$, then $\rho_{circ}$ should differ from $\rho_{star}$, and $g$ should be something other than 1. We can draw a $(e, w)$ distribution based on the value we calculate for $g(e,w)$! `ph.get_g_distribution()` will help us determine the value of g. This function takes the observed $\rho_{star}$ as well as the fit (circular) transit parameters and calculated transit durations, and calculates $\rho_{circ}$ and $g(e,w)$ based on equations 6 and 7 in [Dawson & Johnson 2012](https://arxiv.org/pdf/1203.5537.pdf).
###Code
g_mean
g_sigma
###Output
_____no_output_____
###Markdown
Dynesty
###Code
g = g_mean
gerr = g_sigma
###Output
_____no_output_____
###Markdown
The mean of $g$ is about 1.0, which means that $\rho_{circ}$ agrees with $\rho_{star}$ and the eccentricity of this transit must be zero, which is exactly what we input! We can take $g$ and $\sigma_{g}$ and use MCMC (`emcee`) to determine the surface of most likely $(e,w)$. `photoeccentric` has the probability function for $(e,w)$ from $g$ built into `ph.log_probability()`.
###Code
u = np.array([0,1])
m = 5.5 * u - 5.
b = 10. * u
lnf = 11. * u - 10.
m
b
lnf
# Define the dimensionality of our problem.
ndim = 2
def loglike(theta):
"""The log-likelihood function."""
w, e = theta
model = (1+e*np.sin(w*(np.pi/180.)))/np.sqrt(1-e**2)
sigma2 = gerr ** 2
return -0.5 * np.sum((g - model) ** 2 / sigma2 + np.log(sigma2))
# Define our uniform prior.
def prior_transform(utheta):
"""Transforms samples `u` drawn from the unit cube to samples to those
from our uniform prior within [-10., 10.) for each variable."""
uw, ue = utheta
w = 360.*uw-90.
e = 1. * ue
return w, e
dsampler = dynesty.DynamicNestedSampler(loglike, prior_transform, ndim=2,
bound='multi', sample='rstagger')
dsampler.run_nested()
dres = dsampler.results
# Plot a summary of the run.
#rfig, raxes = dyplot.runplot(results)
# Plot traces and 1-D marginalized posteriors.
#tfig, taxes = dyplot.traceplot(results)
truths = [w, e]
# Plot the 2-D marginalized posteriors.
fig, axes = dyplot.cornerplot(dres, truths=truths, show_titles=True, title_kwargs={'y': 1.04}, labels=["w", "e"],
fig=plt.subplots(2, 2, figsize=(8, 8)))
#Guesses
w_guess = 0.0
e_guess = 0.0
solnx = (w_guess, e_guess)
pos = solnx + 1e-4 * np.random.randn(32, 2)
nwalkers, ndim = pos.shape
sampler = emcee.EnsembleSampler(nwalkers, ndim, ph.log_probability, args=(g_mean, g_sigma), threads=4)
sampler.run_mcmc(pos, 5000, progress=True);
labels = ["w", "e"]
flat_samples = sampler.get_chain(discard=100, thin=15, flat=True)
fig = corner.corner(flat_samples, labels=labels, title_kwargs={"fontsize": 12}, truths=[w, e], plot_contours=True)
u = np.array([0,1])
360*u-90
g = 1.0
gerr = 0.05
# Define the dimensionality of our problem.
ndim = 2
# Define our 3-D correlated multivariate normal likelihood.
#C = np.identity(ndim) # set covariance to identity matrix
#C[C==0] = 0.95 # set off-diagonal terms
#Cinv = np.linalg.inv(C) # define the inverse (i.e. the precision matrix)
#lnorm = -0.5 * (np.log(2 * np.pi) * ndim + np.log(np.linalg.det(C))) # ln(normalization)
def loglike(theta):
"""The log-likelihood function."""
w, e = theta
model = (1+e*np.sin(w*(np.pi/180.)))/np.sqrt(1-e**2)
sigma2 = gerr ** 2
return -0.5 * np.sum((g - model) ** 2 / sigma2 + np.log(sigma2))
# Define our uniform prior.
def priortransform(utheta):
"""Transforms samples `u` drawn from the unit cube to samples to those
from our uniform prior within [-10., 10.) for each variable."""
uw, ue = utheta # copy u
w = 360. * uw - 90. # scale and shift to [-90., 270.)
e = ue
return w, e
# # Beta
# a, b = 2.31, 0.627 # shape parameters
# x[1] = scipy.stats.beta.ppf(u[1], a, b)
dsampler = dynesty.DynamicNestedSampler(loglike, priortransform, ndim=2,
bound='multi', sample='rstagger')
dsampler.run_nested()
dres = dsampler.results
from dynesty import utils as dyfunc
# Only the single dynamic run above is available here; dyfunc.merge_runs() would
# combine multiple runs (e.g. a static and a dynamic run) if we had more than one.
results = dres
from dynesty import plotting as dyplot
# Plot a summary of the run.
rfig, raxes = dyplot.runplot(results)
# Plot traces and 1-D marginalized posteriors.
tfig, taxes = dyplot.traceplot(results)
# Plot the 2-D marginalized posteriors.
cfig, caxes = dyplot.cornerplot(results, labels=["w", "e"])
from dynesty import utils as dyfunc
# Extract sampling results.
samples = results.samples # samples
weights = np.exp(results.logwt - results.logz[-1]) # normalized weights
# Compute 10%-90% quantiles.
quantiles = [dyfunc.quantile(samps, [0.1, 0.9], weights=weights)
for samps in samples.T]
# Compute weighted mean and covariance.
mean, cov = dyfunc.mean_and_cov(samples, weights)
# Resample weighted samples.
samples_equal = dyfunc.resample_equal(samples, weights)
# Generate a new set of results with statistical+sampling uncertainties.
results_sim = dyfunc.simulate_run(results)
print(dsampler.citations)
###Output
Code and Methods:
================
Speagle (2020): ui.adsabs.harvard.edu/abs/2019arXiv190402180S
Nested Sampling:
===============
Skilling (2004): ui.adsabs.harvard.edu/abs/2004AIPC..735..395S
Skilling (2006): projecteuclid.org/euclid.ba/1340370944
Bounding Method:
===============
Feroz, Hobson & Bridges (2009): ui.adsabs.harvard.edu/abs/2009MNRAS.398.1601F
Sampling Method:
===============
|
20171118_PyCuda/dot_product.ipynb | ###Markdown
Dot Product Implementation Use CPU
###Code
import numpy as np
N = 10
a_mat = np.random.randint(5, size=[N, N])
b_mat = np.random.randint(5, size=[N, N])
ret_mat_cpu = np.dot(a_mat, b_mat)
print("a_mat =")
print(a_mat)
print("\nb_mat =")
print(b_mat)
print("\nret_mat =")
print(ret_mat_cpu)
###Output
a_mat =
[[1 4 0 4 1 4 2 2 0 3]
[3 1 0 4 1 3 0 3 4 1]
[2 1 1 3 3 0 1 2 3 2]
[1 2 2 2 2 0 1 0 4 2]
[2 4 1 1 1 0 0 3 2 2]
[3 3 3 4 0 4 1 4 3 1]
[3 0 3 4 4 0 3 4 2 3]
[2 4 4 3 3 0 3 3 2 3]
[4 3 2 2 3 4 2 0 0 1]
[3 3 2 4 2 3 2 2 2 3]]
b_mat =
[[3 2 1 0 1 0 4 0 2 4]
[3 0 2 2 4 4 1 0 2 3]
[2 2 1 1 1 2 1 0 3 0]
[2 2 0 2 4 1 3 3 2 1]
[1 2 0 2 1 3 3 4 0 3]
[1 1 4 1 1 3 3 3 3 2]
[3 3 4 4 0 3 1 3 1 2]
[4 1 4 1 1 4 0 2 4 1]
[0 4 0 1 1 3 4 0 0 3]
[2 1 4 1 1 1 3 3 1 3]]
ret_mat =
[[48 27 53 35 43 52 46 47 43 46]
[38 39 33 23 35 45 56 34 38 46]
[35 37 25 26 29 40 47 34 26 42]
[26 35 19 24 27 37 43 23 19 38]
[39 23 31 20 31 42 33 19 31 39]
[57 44 52 33 46 63 58 38 59 51]
[58 52 46 40 35 56 59 54 45 53]
[62 47 50 44 45 66 54 45 48 56]
[44 33 40 31 34 46 53 39 39 51]
[55 44 51 38 45 58 63 48 48 58]]
###Markdown
Use GPU
###Code
import pycuda.driver as cuda
import pycuda.autoinit
from pycuda import driver, compiler
# Kernel code
kernel_code = """
__constant__ int n;
__global__ void mul(int* in_arr1, int* in_arr2, int* out_arr)
{
int col = threadIdx.x;
int row = threadIdx.y;
int sum = 0;
if ( col < n && row < n )
{
for ( int i=0 ; i<n ; i++ )
{
sum += in_arr1[row * n + i] * in_arr2[i * n + col];
}
}
out_arr[col + row*n] = sum;
}
"""
# Compile the kernel code
mod = compiler.SourceModule(kernel_code)
# Get kernel function
mul_func = mod.get_function("mul")
# Set the constant n in device constant memory
host_n = np.array([N], dtype=np.int)
device_n = mod.get_global("n")[0]
cuda.memcpy_htod(device_n, host_n[0])
# Run
ret_mat_gpu = np.zeros_like(a_mat)
mul_func(cuda.In(a_mat), cuda.In(b_mat), cuda.Out(ret_mat_gpu), block=(N, N, 1), grid=(1, 1))
# Print results
print("\nGPU result =")
print(ret_mat_gpu)
print("\nCPU result =")
print(ret_mat_cpu)
###Output
GPU result =
[[48 27 53 35 43 52 46 47 43 46]
[38 39 33 23 35 45 56 34 38 46]
[35 37 25 26 29 40 47 34 26 42]
[26 35 19 24 27 37 43 23 19 38]
[39 23 31 20 31 42 33 19 31 39]
[57 44 52 33 46 63 58 38 59 51]
[58 52 46 40 35 56 59 54 45 53]
[62 47 50 44 45 66 54 45 48 56]
[44 33 40 31 34 46 53 39 39 51]
[55 44 51 38 45 58 63 48 48 58]]
CPU result =
[[48 27 53 35 43 52 46 47 43 46]
[38 39 33 23 35 45 56 34 38 46]
[35 37 25 26 29 40 47 34 26 42]
[26 35 19 24 27 37 43 23 19 38]
[39 23 31 20 31 42 33 19 31 39]
[57 44 52 33 46 63 58 38 59 51]
[58 52 46 40 35 56 59 54 45 53]
[62 47 50 44 45 66 54 45 48 56]
[44 33 40 31 34 46 53 39 39 51]
[55 44 51 38 45 58 63 48 48 58]]
|