path | concatenated_notebook
---|---|
machine_learning/datascienceschool/0323_HyperParameterTuning.ipynb | ###Markdown
Hyperparameter Tuning- Hyperparameters are the settings you adjust to secure the model's performance. **Hyperparameter tuning** $\rightarrow$ Build Models $\rightarrow$ Models $\rightarrow$ Training Result $\rightarrow$ **Hyperparameter tuning** $\cdots$- Tuning targets - A good candidate for a decision tree: max_depth (it cannot be found by mathematical optimization) - You could simply loop over max_depth values and test each one (a sketch of that loop appears after the next code cell) - But there is an even simpler way. We practice with the wine data
###Code
import pandas as pd
red_url = 'https://raw.githubusercontent.com/PinkWink/ML_tutorial/master/dataset/winequality-red.csv'
white_url = 'https://raw.githubusercontent.com/PinkWink/ML_tutorial/master/dataset/winequality-white.csv'
red_wine = pd.read_csv(red_url, sep=';')
white_wine = pd.read_csv(white_url, sep=';')
red_wine['color'] = 1.
white_wine['color'] = 0.
wine = pd.concat([red_wine, white_wine])
wine['taste'] = [1. if grade > 5 else 0. for grade in wine['quality']]
X = wine.drop(['taste', 'quality'], axis=1)
y = wine.taste
###Output
_____no_output_____
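Before moving on to GridSearchCV, here is a minimal sketch of the "just loop over max_depth" idea mentioned above, using `cross_val_score`; `X` and `y` come from the cell above, and the depth list is only an illustrative choice.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Manually try a few depths; GridSearchCV below automates exactly this pattern.
for depth in [2, 4, 7, 10]:
    tree = DecisionTreeClassifier(max_depth=depth, random_state=13)
    scores = cross_val_score(tree, X, y, cv=5)
    print('max_depth=%d -> mean CV accuracy: %.4f' % (depth, scores.mean()))
```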
###Markdown
GridSearchCV- CV (Cross Validation)- Define the parameters whose results you want to examine in the `param_grid` argument- Increasing the `n_jobs` option lets GridSearchCV use more CPU cores in parallel; if you have many cores, raising `n_jobs` speeds up the search
###Code
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier
params = {'max_depth': [2, 4, 7, 10]}
wine_tree = DecisionTreeClassifier(max_depth=2, random_state=13)
gridsearch = GridSearchCV(estimator=wine_tree, param_grid=params, cv=5)
gridsearch.fit(X, y)
###Output
_____no_output_____
###Markdown
Checking the GridSearchCV results
###Code
import pprint
pp = pprint.PrettyPrinter(indent=4)
pp.pprint(gridsearch.cv_results_)
###Output
{ 'mean_fit_time': array([0.0057776 , 0.01001801, 0.01635695, 0.02265601]),
'mean_score_time': array([0.00080662, 0.00094981, 0.000804 , 0.00080271]),
'mean_test_score': array([0.68877944, 0.66353702, 0.65337848, 0.64383562]),
'param_max_depth': masked_array(data=[2, 4, 7, 10],
mask=[False, False, False, False],
fill_value='?',
dtype=object),
'params': [ {'max_depth': 2},
{'max_depth': 4},
{'max_depth': 7},
{'max_depth': 10}],
'rank_test_score': array([1, 2, 3, 4]),
'split0_test_score': array([0.55230769, 0.51230769, 0.50846154, 0.51615385]),
'split1_test_score': array([0.68846154, 0.63153846, 0.60307692, 0.60076923]),
'split2_test_score': array([0.71461538, 0.72384615, 0.68384615, 0.66769231]),
'split3_test_score': array([0.73210162, 0.73210162, 0.73672055, 0.70977675]),
'split4_test_score': array([0.75654854, 0.71802773, 0.73497689, 0.72496148]),
'std_fit_time': array([3.87495109e-04, 1.48430454e-05, 4.81547149e-04, 7.19014644e-04]),
'std_score_time': array([4.03320566e-04, 1.90665709e-05, 4.03350388e-04, 4.02129682e-04]),
'std_test_score': array([0.07178437, 0.08391641, 0.08725323, 0.07701473])}
###Markdown
The model with the best performance- max_depth=2
###Code
gridsearch.best_estimator_
gridsearch.best_score_
gridsearch.best_params_
###Output
_____no_output_____
###Markdown
Applying GridSearchCV to a Pipeline
###Code
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
estimators = [('scaler', StandardScaler()),
('clf', DecisionTreeClassifier(random_state=13))]
pipe = Pipeline(estimators)
param_grid = [{'clf__max_depth': [2, 4, 7, 10]}]
GridSearch = GridSearchCV(estimator=pipe, param_grid=param_grid, cv=5)
GridSearch.fit(X, y)
GridSearch.best_estimator_
GridSearch.best_score_
GridSearch.best_params_
GridSearch.cv_results_
###Output
_____no_output_____
###Markdown
Inspecting the tree
###Code
from graphviz import Source
from sklearn.tree import export_graphviz
Source(export_graphviz(GridSearch.best_estimator_['clf'], feature_names=X.columns, class_names=['bad taste', 'good taste'], rounded=False, filled=True))
###Output
_____no_output_____
###Markdown
Summarizing the performance results in a table
###Code
score_df = pd.DataFrame(GridSearch.cv_results_)
score_df[['params', 'rank_test_score', 'mean_test_score', 'std_test_score']]
###Output
_____no_output_____ |
notebooks/convolutional-neural-networks-for-visual-recognition/10_ConvolutionalNetworks.ipynb | ###Markdown
Convolutional NetworksSo far we have worked with deep fully-connected networks, using them to explore different optimization strategies and network architectures. Fully-connected networks are a good testbed for experimentation because they are very computationally efficient, but in practice all state-of-the-art results use convolutional networks instead.First you will implement several layer types that are used in convolutional networks. You will then use these layers to train a convolutional network on the CIFAR-10 dataset.
###Code
# As usual, a bit of setup
import numpy as np
import matplotlib.pyplot as plt
from classifiers.cnn import *
from utils.data_utils import get_CIFAR10_data
from utils.gradient_check import eval_numerical_gradient_array, eval_numerical_gradient
from utils.layers import *
from utils.fast_layers import *
from utils.solver import Solver
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
""" returns relative error """
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
cifar10_dataset = get_CIFAR10_data(cifar10_dir='./datasets/cifar-10-batches-py', channels_first=True)
X_train = cifar10_dataset['X_train']
y_train = cifar10_dataset['y_train']
X_dev = cifar10_dataset['X_dev']
y_dev = cifar10_dataset['y_dev']
X_val = cifar10_dataset['X_val']
y_val = cifar10_dataset['y_val']
X_test = cifar10_dataset['X_test']
y_test = cifar10_dataset['y_test']
# As a sanity check, we print out the size of the training and test data.
print('Training data shape: ', X_train.shape)
print('Training labels shape: ', y_train.shape)
print('Dev data shape: ', X_dev.shape)
print('Dev labels shape: ', y_dev.shape)
print('Validation data shape: ', X_val.shape)
print('Validation labels shape: ', y_val.shape)
print('Test data shape: ', X_test.shape)
print('Test labels shape: ', y_test.shape)
# Preprocessing: subtract the mean image
# first: compute the image mean based on the training data
mean_image = np.mean(X_train, axis=0)
# subtract the mean image from train and test data
X_train -= mean_image
X_val -= mean_image
X_test -= mean_image
X_dev -= mean_image
# Load the (preprocessed) CIFAR10 data.
data = {
'X_train': X_train, 'y_train': y_train,
'X_dev': X_dev, 'y_dev': y_dev,
'X_val': X_val, 'y_val': y_val,
'X_test': X_test, 'y_test': y_test
}
for k, v in list(data.items()):
print(('%s: ' % k, v.shape))
###Output
_____no_output_____
###Markdown
Convolution: Naive forward passThe core of a convolutional network is the convolution operation. In the file `utils/layers.py`, implement the forward pass for the convolution layer in the function `conv_forward_naive`. You don't have to worry too much about efficiency at this point; just write the code in whatever way you find most clear.You can test your implementation by running the following:
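For reference, here is a minimal sketch of one way such a naive forward pass could be written (not necessarily the exact solution expected in `utils/layers.py`), assuming the `(N, C, H, W)` layout and the `'stride'`/`'pad'` keys used in the test cell below.

```python
import numpy as np

def conv_forward_naive_sketch(x, w, b, conv_param):
    """Naive convolution: explicit loops over images, filters, and output positions."""
    stride, pad = conv_param['stride'], conv_param['pad']
    N, C, H, W = x.shape
    F, _, HH, WW = w.shape
    H_out = 1 + (H + 2 * pad - HH) // stride
    W_out = 1 + (W + 2 * pad - WW) // stride
    x_pad = np.pad(x, ((0, 0), (0, 0), (pad, pad), (pad, pad)), mode='constant')
    out = np.zeros((N, F, H_out, W_out))
    for n in range(N):                      # each image
        for f in range(F):                  # each filter
            for i in range(H_out):          # each output row
                for j in range(W_out):      # each output column
                    h0, w0 = i * stride, j * stride
                    window = x_pad[n, :, h0:h0 + HH, w0:w0 + WW]
                    out[n, f, i, j] = np.sum(window * w[f])
    out += b.reshape(1, F, 1, 1)            # bias may be shaped (F,) or (F, 1, 1, 1)
    cache = (x, w, b, conv_param)
    return out, cache
```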
###Code
x_shape = (2, 3, 4, 4)
w_shape = (3, 3, 4, 4)
b_shape = (3, 1, 1, 1)
x = np.linspace(-0.1, 0.5, num=np.prod(x_shape)).reshape(x_shape)
w = np.linspace(-0.2, 0.3, num=np.prod(w_shape)).reshape(w_shape)
b = np.linspace(-0.1, 0.2, num=np.prod(b_shape)).reshape(b_shape)
conv_param = {'stride': 2, 'pad': 1}
out, _ = conv_forward_naive(x, w, b, conv_param)
correct_out = np.array([[[[-0.08759809, -0.10987781],
[-0.18387192, -0.2109216 ]],
[[ 0.21027089, 0.21661097],
[ 0.22847626, 0.23004637]],
[[ 0.50813986, 0.54309974],
[ 0.64082444, 0.67101435]]],
[[[-0.98053589, -1.03143541],
[-1.19128892, -1.24695841]],
[[ 0.69108355, 0.66880383],
[ 0.59480972, 0.56776003]],
[[ 2.36270298, 2.36904306],
[ 2.38090835, 2.38247847]]]])
# Compare your output to ours; difference should be around e-8
print('Testing conv_forward_naive')
print('difference: ', rel_error(out, correct_out))
###Output
_____no_output_____
###Markdown
Aside: Image processing via convolutionsAs a fun way to both check your implementation and gain a better understanding of the type of operation that convolutional layers can perform, we will set up an input containing two images and manually set up filters that perform common image processing operations (grayscale conversion and edge detection). The convolution forward pass will apply these operations to each of the input images. We can then visualize the results as a sanity check.
###Code
from imageio import imread
from PIL import Image
kitten = imread('images/kitten.jpg')
puppy = imread('images/puppy.jpg')
# kitten is wide, and puppy is already square
d = kitten.shape[1] - kitten.shape[0]
kitten_cropped = kitten[:, d//2:-d//2, :]
img_size = 200 # Make this smaller if it runs too slow
resized_puppy = np.array(Image.fromarray(puppy).resize((img_size, img_size)))
resized_kitten = np.array(Image.fromarray(kitten_cropped).resize((img_size, img_size)))
x = np.zeros((2, 3, img_size, img_size))
x[0, :, :, :] = resized_puppy.transpose((2, 0, 1))
x[1, :, :, :] = resized_kitten.transpose((2, 0, 1))
# Set up a convolutional weights holding 2 filters, each 3x3
w = np.zeros((2, 3, 3, 3))
# The first filter converts the image to grayscale.
# Set up the red, green, and blue channels of the filter.
w[0, 0, :, :] = [[0, 0, 0], [0, 0.3, 0], [0, 0, 0]]
w[0, 1, :, :] = [[0, 0, 0], [0, 0.6, 0], [0, 0, 0]]
w[0, 2, :, :] = [[0, 0, 0], [0, 0.1, 0], [0, 0, 0]]
# Second filter detects horizontal edges in the blue channel.
w[1, 2, :, :] = [[1, 2, 1], [0, 0, 0], [-1, -2, -1]]
# Vector of biases. We don't need any bias for the grayscale
# filter, but for the edge detection filter we want to add 128
# to each output so that nothing is negative.
b = np.array([0, 128]).reshape(2, 1, 1, 1)
# Compute the result of convolving each input in x with each filter in w,
# offsetting by b, and storing the results in out.
out, _ = conv_forward_naive(x, w, b, {'stride': 1, 'pad': 1})
def imshow_no_ax(img, normalize=True):
""" Tiny helper to show images as uint8 and remove axis labels """
if normalize:
img_max, img_min = np.max(img), np.min(img)
img = 255.0 * (img - img_min) / (img_max - img_min)
plt.imshow(img.astype('uint8'))
plt.gca().axis('off')
# Show the original images and the results of the conv operation
plt.subplot(2, 3, 1)
imshow_no_ax(puppy, normalize=False)
plt.title('Original image')
plt.subplot(2, 3, 2)
imshow_no_ax(out[0, 0])
plt.title('Grayscale')
plt.subplot(2, 3, 3)
imshow_no_ax(out[0, 1])
plt.title('Edges')
plt.subplot(2, 3, 4)
imshow_no_ax(kitten_cropped, normalize=False)
plt.subplot(2, 3, 5)
imshow_no_ax(out[1, 0])
plt.subplot(2, 3, 6)
imshow_no_ax(out[1, 1])
plt.show()
###Output
_____no_output_____
###Markdown
Convolution: Naive backward passImplement the backward pass for the convolution operation in the function `conv_backward_naive` in the file `utils/layers.py`. Again, you don't need to worry too much about computational efficiency.When you are done, run the following to check your backward pass with a numeric gradient check.
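For reference, a minimal sketch of one way the naive backward pass could look, written against the cache format used in the forward sketch above; it simply reverses the same four loops.

```python
import numpy as np

def conv_backward_naive_sketch(dout, cache):
    """Naive convolution backward pass: accumulate gradients window by window."""
    x, w, b, conv_param = cache
    stride, pad = conv_param['stride'], conv_param['pad']
    N, C, H, W = x.shape
    F, _, HH, WW = w.shape
    _, _, H_out, W_out = dout.shape
    x_pad = np.pad(x, ((0, 0), (0, 0), (pad, pad), (pad, pad)), mode='constant')
    dx_pad = np.zeros_like(x_pad)
    dw = np.zeros_like(w)
    db = dout.sum(axis=(0, 2, 3)).reshape(b.shape)   # bias gradient per filter
    for n in range(N):
        for f in range(F):
            for i in range(H_out):
                for j in range(W_out):
                    h0, w0 = i * stride, j * stride
                    window = x_pad[n, :, h0:h0 + HH, w0:w0 + WW]
                    dw[f] += window * dout[n, f, i, j]
                    dx_pad[n, :, h0:h0 + HH, w0:w0 + WW] += w[f] * dout[n, f, i, j]
    dx = dx_pad[:, :, pad:H + pad, pad:W + pad]      # strip the padding again
    return dx, dw, db
```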
###Code
np.random.seed(231)
x = np.random.randn(4, 3, 5, 5)
w = np.random.randn(2, 3, 3, 3)
b = np.random.randn(2, 1, 1, 1)
dout = np.random.randn(4, 2, 5, 5)
conv_param = {'stride': 1, 'pad': 1}
dx_num = eval_numerical_gradient_array(lambda x: conv_forward_naive(x, w, b, conv_param)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: conv_forward_naive(x, w, b, conv_param)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: conv_forward_naive(x, w, b, conv_param)[0], b, dout)
out, cache = conv_forward_naive(x, w, b, conv_param)
dx, dw, db = conv_backward_naive(dout, cache)
# Your errors should be around e-8 or less.
print('Testing conv_backward_naive function')
print('dx error: ', rel_error(dx, dx_num))
print('dw error: ', rel_error(dw, dw_num))
print('db error: ', rel_error(db, db_num))
###Output
_____no_output_____
###Markdown
Max-Pooling: Naive forwardImplement the forward pass for the max-pooling operation in the function `max_pool_forward_naive` in the file `utils/layers.py`. Again, don't worry too much about computational efficiency.Check your implementation by running the following:
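A minimal sketch of one way the naive max-pooling forward pass could look, assuming the `pool_height`/`pool_width`/`stride` keys used in the test cell below.

```python
import numpy as np

def max_pool_forward_naive_sketch(x, pool_param):
    """Naive max pooling: take the max over each pooling window."""
    ph, pw = pool_param['pool_height'], pool_param['pool_width']
    stride = pool_param['stride']
    N, C, H, W = x.shape
    H_out = 1 + (H - ph) // stride
    W_out = 1 + (W - pw) // stride
    out = np.zeros((N, C, H_out, W_out))
    for i in range(H_out):
        for j in range(W_out):
            h0, w0 = i * stride, j * stride
            out[:, :, i, j] = x[:, :, h0:h0 + ph, w0:w0 + pw].max(axis=(2, 3))
    cache = (x, pool_param)
    return out, cache
```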
###Code
x_shape = (2, 3, 4, 4)
x = np.linspace(-0.3, 0.4, num=np.prod(x_shape)).reshape(x_shape)
pool_param = {'pool_width': 2, 'pool_height': 2, 'stride': 2}
out, _ = max_pool_forward_naive(x, pool_param)
correct_out = np.array([[[[-0.26315789, -0.24842105],
[-0.20421053, -0.18947368]],
[[-0.14526316, -0.13052632],
[-0.08631579, -0.07157895]],
[[-0.02736842, -0.01263158],
[ 0.03157895, 0.04631579]]],
[[[ 0.09052632, 0.10526316],
[ 0.14947368, 0.16421053]],
[[ 0.20842105, 0.22315789],
[ 0.26736842, 0.28210526]],
[[ 0.32631579, 0.34105263],
[ 0.38526316, 0.4 ]]]])
# Compare your output with ours. Difference should be on the order of e-8.
print('Testing max_pool_forward_naive function:')
print('difference: ', rel_error(out, correct_out))
###Output
_____no_output_____
###Markdown
Max-Pooling: Naive backwardImplement the backward pass for the max-pooling operation in the function `max_pool_backward_naive` in the file `utils/layers.py`. You don't need to worry about computational efficiency.Check your implementation with numeric gradient checking by running the following:
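A matching sketch of the backward pass, written against the cache from the forward sketch above: the upstream gradient is routed to the position(s) that held the maximum in each window.

```python
import numpy as np

def max_pool_backward_naive_sketch(dout, cache):
    x, pool_param = cache
    ph, pw = pool_param['pool_height'], pool_param['pool_width']
    stride = pool_param['stride']
    N, C, H, W = x.shape
    _, _, H_out, W_out = dout.shape
    dx = np.zeros_like(x)
    for n in range(N):
        for c in range(C):
            for i in range(H_out):
                for j in range(W_out):
                    h0, w0 = i * stride, j * stride
                    window = x[n, c, h0:h0 + ph, w0:w0 + pw]
                    mask = (window == window.max())   # in this sketch, ties all receive the gradient
                    dx[n, c, h0:h0 + ph, w0:w0 + pw] += mask * dout[n, c, i, j]
    return dx
```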
###Code
np.random.seed(231)
x = np.random.randn(3, 2, 8, 8)
dout = np.random.randn(3, 2, 4, 4)
pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}
dx_num = eval_numerical_gradient_array(lambda x: max_pool_forward_naive(x, pool_param)[0], x, dout)
out, cache = max_pool_forward_naive(x, pool_param)
dx = max_pool_backward_naive(dout, cache)
# Your error should be on the order of e-12
print('Testing max_pool_backward_naive function:')
print('dx error: ', rel_error(dx, dx_num))
###Output
_____no_output_____
###Markdown
Fast layersMaking convolution and pooling layers fast can be challenging. To spare you the pain, we've provided fast implementations of the forward and backward passes for convolution and pooling layers in the file `utils/fast_layers.py`.The fast convolution implementation depends on a Cython extension; to compile it you need to run the following from the `utils` directory:```bash python setup.py build_ext --inplace```The API for the fast versions of the convolution and pooling layers is exactly the same as the naive versions that you implemented above: the forward pass receives data, weights, and parameters and produces outputs and a cache object; the backward pass receives upstream derivatives and the cache object and produces gradients with respect to the data and weights.**NOTE:** The fast implementation for pooling will only perform optimally if the pooling regions are non-overlapping and tile the input. If these conditions are not met then the fast pooling implementation will not be much faster than the naive implementation.You can compare the performance of the naive and fast versions of these layers by running the following:
###Code
# Rel errors should be around e-9 or less
from utils.fast_layers import conv_forward_fast, conv_backward_fast
from time import time
np.random.seed(231)
x = np.random.randn(100, 3, 31, 31)
w = np.random.randn(25, 3, 3, 3)
b = np.random.randn(25, 1, 1, 1)
dout = np.random.randn(100, 25, 16, 16)
conv_param = {'stride': 2, 'pad': 1}
t0 = time()
out_naive, cache_naive = conv_forward_naive(x, w, b, conv_param)
t1 = time()
out_fast, cache_fast = conv_forward_fast(x, w, b, conv_param)
t2 = time()
print('Testing conv_forward_fast:')
print('Naive: %fs' % (t1 - t0))
print('Fast: %fs' % (t2 - t1))
print('Speedup: %fx' % ((t1 - t0) / (t2 - t1)))
print('Difference: ', rel_error(out_naive, out_fast))
t0 = time()
dx_naive, dw_naive, db_naive = conv_backward_naive(dout, cache_naive)
t1 = time()
dx_fast, dw_fast, db_fast = conv_backward_fast(dout, cache_fast)
t2 = time()
db_fast = db_fast.reshape(-1, 1, 1, 1)
print('\nTesting conv_backward_fast:')
print('Naive: %fs' % (t1 - t0))
print('Fast: %fs' % (t2 - t1))
print('Speedup: %fx' % ((t1 - t0) / (t2 - t1)))
print('\ndx difference: ', rel_error(dx_naive, dx_fast))
print('dw difference: ', rel_error(dw_naive, dw_fast))
print('db difference: ', rel_error(db_naive, db_fast))
# Relative errors should be close to 0.0
from utils.fast_layers import max_pool_forward_fast, max_pool_backward_fast
np.random.seed(231)
x = np.random.randn(100, 3, 32, 32)
dout = np.random.randn(100, 3, 16, 16)
pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}
t0 = time()
out_naive, cache_naive = max_pool_forward_naive(x, pool_param)
t1 = time()
out_fast, cache_fast = max_pool_forward_fast(x, pool_param)
t2 = time()
print('Testing pool_forward_fast:')
print('Naive: %fs' % (t1 - t0))
print('fast: %fs' % (t2 - t1))
print('speedup: %fx' % ((t1 - t0) / (t2 - t1)))
print('difference: ', rel_error(out_naive, out_fast))
t0 = time()
dx_naive = max_pool_backward_naive(dout, cache_naive)
t1 = time()
dx_fast = max_pool_backward_fast(dout, cache_fast)
t2 = time()
print('\nTesting pool_backward_fast:')
print('Naive: %fs' % (t1 - t0))
print('fast: %fs' % (t2 - t1))
print('speedup: %fx' % ((t1 - t0) / (t2 - t1)))
print('dx difference: ', rel_error(dx_naive, dx_fast))
###Output
_____no_output_____
###Markdown
Convolutional "sandwich" layersPreviously we introduced the concept of "sandwich" layers that combine multiple operations into commonly used patterns. In the file `utils/layer_utils.py` you will find sandwich layers that implement a few commonly used patterns for convolutional networks. Run the cells below to sanity check they're working.
###Code
from utils.layer_utils import conv_relu_pool_forward, conv_relu_pool_backward
np.random.seed(231)
x = np.random.randn(2, 3, 16, 16)
w = np.random.randn(3, 3, 3, 3)
b = np.random.randn(3,)
dout = np.random.randn(2, 3, 8, 8)
conv_param = {'stride': 1, 'pad': 1}
pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}
out, cache = conv_relu_pool_forward(x, w, b, conv_param, pool_param)
dx, dw, db = conv_relu_pool_backward(dout, cache)
dx_num = eval_numerical_gradient_array(lambda x: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], b, dout)
# Relative errors should be around e-8 or less
print('Testing conv_relu_pool')
print('dx error: ', rel_error(dx_num, dx))
print('dw error: ', rel_error(dw_num, dw))
print('db error: ', rel_error(db_num, db))
from utils.layer_utils import conv_relu_forward, conv_relu_backward
np.random.seed(231)
x = np.random.randn(2, 3, 8, 8)
w = np.random.randn(3, 3, 3, 3)
b = np.random.randn(3,)
dout = np.random.randn(2, 3, 8, 8)
conv_param = {'stride': 1, 'pad': 1}
out, cache = conv_relu_forward(x, w, b, conv_param)
dx, dw, db = conv_relu_backward(dout, cache)
dx_num = eval_numerical_gradient_array(lambda x: conv_relu_forward(x, w, b, conv_param)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: conv_relu_forward(x, w, b, conv_param)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: conv_relu_forward(x, w, b, conv_param)[0], b, dout)
# Relative errors should be around e-8 or less
print('Testing conv_relu:')
print('dx error: ', rel_error(dx_num, dx))
print('dw error: ', rel_error(dw_num, dw))
print('db error: ', rel_error(db_num, db))
###Output
_____no_output_____
###Markdown
Three-layer ConvNetNow that you have implemented all the necessary layers, we can put them together into a simple convolutional network.Open the file `classifiers/cnn.py` and complete the implementation of the `ThreeLayerConvNet` class. Remember you can use the fast/sandwich layers (already imported for you) in your implementation. Run the following cells to help you debug: Sanity check lossAfter you build a new network, one of the first things you should do is sanity check the loss. When we use the softmax loss, we expect the loss for random weights (and no regularization) to be about `log(C)` for `C` classes. When we add regularization the loss should go up slightly.
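As a quick back-of-the-envelope check: with `C = 10` CIFAR-10 classes and essentially random initial weights, the softmax loss should start near `ln(10)`.

```python
import numpy as np
print(np.log(10))  # ~2.3026; adding L2 regularization should push the initial loss slightly above this
```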
###Code
model = ThreeLayerConvNet()
N = 50
X = np.random.randn(N, 3, 32, 32)
y = np.random.randint(10, size=N)
loss, grads = model.loss(X, y)
print('Initial loss (no regularization): ', loss)
model.reg = 0.5
loss, grads = model.loss(X, y)
print('Initial loss (with regularization): ', loss)
###Output
_____no_output_____
###Markdown
Gradient checkAfter the loss looks reasonable, use numeric gradient checking to make sure that your backward pass is correct. When you use numeric gradient checking you should use a small amount of artificial data and a small number of neurons at each layer. Note: correct implementations may still have relative errors up to the order of e-2.
###Code
num_inputs = 2
input_dim = (3, 16, 16)
reg = 0.0
num_classes = 10
np.random.seed(231)
X = np.random.randn(num_inputs, *input_dim)
y = np.random.randint(num_classes, size=num_inputs)
model = ThreeLayerConvNet(num_filters=3, filter_size=3,
input_dim=input_dim, hidden_dim=7,
dtype=np.float64)
loss, grads = model.loss(X, y)
# Errors should be small, but correct implementations may have
# relative errors up to the order of e-2
for param_name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
param_grad_num = eval_numerical_gradient(f, model.params[param_name], verbose=False, h=1e-6)
e = rel_error(param_grad_num, grads[param_name])
print('%s max relative error: %e' % (param_name, rel_error(param_grad_num, grads[param_name])))
###Output
_____no_output_____
###Markdown
Overfit small dataA nice trick is to train your model with just a few training samples. You should be able to overfit small datasets, which will result in very high training accuracy and comparatively low validation accuracy.
###Code
np.random.seed(231)
num_train = 100
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
model = ThreeLayerConvNet(weight_scale=1e-2)
solver = Solver(model, small_data,
num_epochs=20, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=True, print_every=10)
solver.train()
###Output
_____no_output_____
###Markdown
Plotting the loss, training accuracy, and validation accuracy should show clear overfitting:
###Code
plt.subplot(2, 1, 1)
plt.plot(solver.loss_history, '-o')
plt.xlabel('iteration')
plt.ylabel('loss')
plt.subplot(2, 1, 2)
plt.plot(solver.train_acc_history, '-o')
plt.plot(solver.val_acc_history, '-o')
plt.legend(['train', 'val'], loc='upper left')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.show()
###Output
_____no_output_____
###Markdown
Train the netBy training the three-layer convolutional network for one epoch, you should achieve greater than 40% accuracy on the training set:
###Code
model = ThreeLayerConvNet(weight_scale=0.001, hidden_dim=500, reg=0.001)
solver = Solver(model, data,
num_epochs=1, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=True, print_every=100)
solver.train()
###Output
_____no_output_____
###Markdown
Visualize FiltersYou can visualize the first-layer convolutional filters from the trained network by running the following:
###Code
from utils.vis_utils import visualize_grid
grid = visualize_grid(model.params['W1'].transpose(0, 2, 3, 1))
plt.imshow(grid.astype('uint8'))
plt.axis('off')
plt.gcf().set_size_inches(5, 5)
plt.show()
###Output
_____no_output_____
###Markdown
Spatial Batch NormalizationWe already saw that batch normalization is a very useful technique for training deep fully-connected networks. As proposed in the original paper (link in `BatchNormalization.ipynb`), batch normalization can also be used for convolutional networks, but we need to tweak it a bit; the modification will be called "spatial batch normalization."Normally batch-normalization accepts inputs of shape `(N, D)` and produces outputs of shape `(N, D)`, where we normalize across the minibatch dimension `N`. For data coming from convolutional layers, batch normalization needs to accept inputs of shape `(N, C, H, W)` and produce outputs of shape `(N, C, H, W)` where the `N` dimension gives the minibatch size and the `(H, W)` dimensions give the spatial size of the feature map.If the feature map was produced using convolutions, then we expect every feature channel's statistics (e.g. mean and variance) to be relatively consistent both between different images, and different locations within the same image -- after all, every feature channel is produced by the same convolutional filter! Therefore spatial batch normalization computes a mean and variance for each of the `C` feature channels by computing statistics over the minibatch dimension `N` as well as the spatial dimensions `H` and `W`.[1] [Sergey Ioffe and Christian Szegedy, "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift", ICML 2015.](https://arxiv.org/abs/1502.03167) Spatial batch normalization: forwardIn the file `utils/layers.py`, implement the forward pass for spatial batch normalization in the function `spatial_batchnorm_forward`. Check your implementation by running the following:
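A minimal sketch of the usual reshape trick, assuming the `batchnorm_forward` from the earlier batch-normalization notebook is available in `utils/layers.py`: fold `N`, `H`, `W` into one batch axis so that each of the `C` channels is normalized over `N*H*W` values.

```python
from utils.layers import batchnorm_forward  # assumed available from the earlier notebook

def spatial_batchnorm_forward_sketch(x, gamma, beta, bn_param):
    N, C, H, W = x.shape
    # move channels last, then flatten (N, H, W) into a single batch axis of size N*H*W
    x_flat = x.transpose(0, 2, 3, 1).reshape(-1, C)
    out_flat, cache = batchnorm_forward(x_flat, gamma, beta, bn_param)
    out = out_flat.reshape(N, H, W, C).transpose(0, 3, 1, 2)
    return out, cache
```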
###Code
np.random.seed(231)
# Check the training-time forward pass by checking means and variances
# of features both before and after spatial batch normalization
N, C, H, W = 2, 3, 4, 5
x = 4 * np.random.randn(N, C, H, W) + 10
print('Before spatial batch normalization:')
print(' Shape: ', x.shape)
print(' Means: ', x.mean(axis=(0, 2, 3)))
print(' Stds: ', x.std(axis=(0, 2, 3)))
# Means should be close to zero and stds close to one
gamma, beta = np.ones(C), np.zeros(C)
bn_param = {'mode': 'train'}
out, _ = spatial_batchnorm_forward(x, gamma, beta, bn_param)
print('After spatial batch normalization:')
print(' Shape: ', out.shape)
print(' Means: ', out.mean(axis=(0, 2, 3)))
print(' Stds: ', out.std(axis=(0, 2, 3)))
# Means should be close to beta and stds close to gamma
gamma, beta = np.asarray([3, 4, 5]), np.asarray([6, 7, 8])
out, _ = spatial_batchnorm_forward(x, gamma, beta, bn_param)
print('After spatial batch normalization (nontrivial gamma, beta):')
print(' Shape: ', out.shape)
print(' Means: ', out.mean(axis=(0, 2, 3)))
print(' Stds: ', out.std(axis=(0, 2, 3)))
np.random.seed(231)
# Check the test-time forward pass by running the training-time
# forward pass many times to warm up the running averages, and then
# checking the means and variances of activations after a test-time
# forward pass.
N, C, H, W = 10, 4, 11, 12
bn_param = {'mode': 'train'}
gamma = np.ones(C)
beta = np.zeros(C)
for t in range(50):
x = 2.3 * np.random.randn(N, C, H, W) + 13
spatial_batchnorm_forward(x, gamma, beta, bn_param)
bn_param['mode'] = 'test'
x = 2.3 * np.random.randn(N, C, H, W) + 13
a_norm, _ = spatial_batchnorm_forward(x, gamma, beta, bn_param)
# Means should be close to zero and stds close to one, but will be
# noisier than training-time forward passes.
print('After spatial batch normalization (test-time):')
print(' means: ', a_norm.mean(axis=(0, 2, 3)))
print(' stds: ', a_norm.std(axis=(0, 2, 3)))
###Output
_____no_output_____
###Markdown
Spatial batch normalization: backwardIn the file `utils/layers.py`, implement the backward pass for spatial batch normalization in the function `spatial_batchnorm_backward`. Run the following to check your implementation using a numeric gradient check:
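The backward pass can mirror the same reshape trick, again assuming the vanilla `batchnorm_backward` from the earlier notebook is available in `utils/layers.py`.

```python
from utils.layers import batchnorm_backward  # assumed available from the earlier notebook

def spatial_batchnorm_backward_sketch(dout, cache):
    N, C, H, W = dout.shape
    # flatten the upstream gradient the same way the forward pass flattened x
    dout_flat = dout.transpose(0, 2, 3, 1).reshape(-1, C)
    dx_flat, dgamma, dbeta = batchnorm_backward(dout_flat, cache)
    dx = dx_flat.reshape(N, H, W, C).transpose(0, 3, 1, 2)
    return dx, dgamma, dbeta
```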
###Code
np.random.seed(231)
N, C, H, W = 2, 3, 4, 5
x = 5 * np.random.randn(N, C, H, W) + 12
gamma = np.random.randn(C)
beta = np.random.randn(C)
dout = np.random.randn(N, C, H, W)
bn_param = {'mode': 'train'}
fx = lambda x: spatial_batchnorm_forward(x, gamma, beta, bn_param)[0]
fg = lambda a: spatial_batchnorm_forward(x, gamma, beta, bn_param)[0]
fb = lambda b: spatial_batchnorm_forward(x, gamma, beta, bn_param)[0]
dx_num = eval_numerical_gradient_array(fx, x, dout)
da_num = eval_numerical_gradient_array(fg, gamma, dout)
db_num = eval_numerical_gradient_array(fb, beta, dout)
#You should expect errors of magnitudes between 1e-12~1e-06
_, cache = spatial_batchnorm_forward(x, gamma, beta, bn_param)
dx, dgamma, dbeta = spatial_batchnorm_backward(dout, cache)
print('dx error: ', rel_error(dx_num, dx))
print('dgamma error: ', rel_error(da_num, dgamma))
print('dbeta error: ', rel_error(db_num, dbeta))
###Output
_____no_output_____
###Markdown
Group NormalizationIn the previous notebook, we mentioned that Layer Normalization is an alternative normalization technique that mitigates the batch size limitations of Batch Normalization. However, as the authors of [2] observed, Layer Normalization does not perform as well as Batch Normalization when used with Convolutional Layers:>With fully connected layers, all the hidden units in a layer tend to make similar contributions to the final prediction, and re-centering and rescaling the summed inputs to a layer works well. However, the assumption of similar contributions is no longer true for convolutional neural networks. The large number of the hidden units whose receptive fields lie near the boundary of the image are rarely turned on and thus have very different statistics from the rest of the hidden units within the same layer.The authors of [3] propose an intermediary technique. In contrast to Layer Normalization, where you normalize over the entire feature per-datapoint, they suggest a consistent splitting of each per-datapoint feature into G groups, and a per-group per-datapoint normalization instead. **Visual comparison of the normalization techniques discussed so far (image edited from [3])**Even though an assumption of equal contribution is still being made within each group, the authors hypothesize that this is not as problematic, as innate grouping arises within features for visual recognition. One example they use to illustrate this is that many high-performance handcrafted features in traditional Computer Vision have terms that are explicitly grouped together. Take for example Histogram of Oriented Gradients [4]-- after computing histograms per spatially local block, each per-block histogram is normalized before being concatenated together to form the final feature vector.You will now implement Group Normalization. Note that this normalization technique that you are to implement in the following cells was introduced and published to ECCV just in 2018 -- this truly is still an ongoing and excitingly active field of research![2] [Ba, Jimmy Lei, Jamie Ryan Kiros, and Geoffrey E. Hinton. "Layer Normalization." stat 1050 (2016): 21.](https://arxiv.org/pdf/1607.06450.pdf)[3] [Wu, Yuxin, and Kaiming He. "Group Normalization." arXiv preprint arXiv:1803.08494 (2018).](https://arxiv.org/abs/1803.08494)[4] [N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In Computer Vision and Pattern Recognition (CVPR), 2005.](https://ieeexplore.ieee.org/abstract/document/1467360/) Group normalization: forwardIn the file `utils/layers.py`, implement the forward pass for group normalization in the function `spatial_groupnorm_forward`. Check your implementation by running the following:
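A minimal sketch of one way the forward pass could look, assuming `gamma`/`beta` shaped `(1, C, 1, 1)` as in the test cell below and an optional `eps` key in `gn_param`.

```python
import numpy as np

def spatial_groupnorm_forward_sketch(x, gamma, beta, G, gn_param):
    eps = gn_param.get('eps', 1e-5)
    N, C, H, W = x.shape
    # split the C channels into G contiguous groups and normalize each group per datapoint
    x_group = x.reshape(N, G, C // G, H, W)
    mean = x_group.mean(axis=(2, 3, 4), keepdims=True)
    var = x_group.var(axis=(2, 3, 4), keepdims=True)
    x_hat = ((x_group - mean) / np.sqrt(var + eps)).reshape(N, C, H, W)
    out = gamma * x_hat + beta                       # gamma, beta broadcast over (1, C, 1, 1)
    cache = (x_hat, gamma, 1.0 / np.sqrt(var + eps), G)
    return out, cache
```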
###Code
np.random.seed(231)
# Check the training-time forward pass by checking means and variances
# of features both before and after spatial batch normalization
N, C, H, W = 2, 6, 4, 5
G = 2
x = 4 * np.random.randn(N, C, H, W) + 10
x_g = x.reshape((N * G, -1))
print('Before spatial group normalization:')
print(' Shape: ', x.shape)
print(' Means: ', x_g.mean(axis=1))
print(' Stds: ', x_g.std(axis=1))
# Means should be close to zero and stds close to one
gamma, beta = np.ones((1,C,1,1)), np.zeros((1,C,1,1))
bn_param = {'mode': 'train'}
out, _ = spatial_groupnorm_forward(x, gamma, beta, G, bn_param)
out_g = out.reshape((N * G, -1))
print('After spatial group normalization:')
print(' Shape: ', out.shape)
print(' Means: ', out_g.mean(axis=1))
print(' Stds: ', out_g.std(axis=1))
###Output
_____no_output_____
###Markdown
Spatial group normalization: backwardIn the file `utils/layers.py`, implement the backward pass for spatial group normalization in the function `spatial_groupnorm_backward`. Run the following to check your implementation using a numeric gradient check:
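A matching sketch of the backward pass, written against the cache produced by the forward sketch above; per-group normalization behaves like layer normalization restricted to each group of `D = (C/G)*H*W` values.

```python
import numpy as np

def spatial_groupnorm_backward_sketch(dout, cache):
    x_hat, gamma, inv_std, G = cache                 # cache layout from the forward sketch above
    N, C, H, W = dout.shape
    D = (C // G) * H * W
    dbeta = dout.sum(axis=(0, 2, 3), keepdims=True)
    dgamma = (dout * x_hat).sum(axis=(0, 2, 3), keepdims=True)
    dxh = (dout * gamma).reshape(N, G, D)            # gradient w.r.t. the normalized values
    xh = x_hat.reshape(N, G, D)
    istd = inv_std.reshape(N, G, 1)
    dx = (istd / D) * (D * dxh
                       - dxh.sum(axis=2, keepdims=True)
                       - xh * (dxh * xh).sum(axis=2, keepdims=True))
    return dx.reshape(N, C, H, W), dgamma, dbeta
```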
###Code
np.random.seed(231)
N, C, H, W = 2, 6, 4, 5
G = 2
x = 5 * np.random.randn(N, C, H, W) + 12
gamma = np.random.randn(1,C,1,1)
beta = np.random.randn(1,C,1,1)
dout = np.random.randn(N, C, H, W)
gn_param = {}
fx = lambda x: spatial_groupnorm_forward(x, gamma, beta, G, gn_param)[0]
fg = lambda a: spatial_groupnorm_forward(x, gamma, beta, G, gn_param)[0]
fb = lambda b: spatial_groupnorm_forward(x, gamma, beta, G, gn_param)[0]
dx_num = eval_numerical_gradient_array(fx, x, dout)
da_num = eval_numerical_gradient_array(fg, gamma, dout)
db_num = eval_numerical_gradient_array(fb, beta, dout)
_, cache = spatial_groupnorm_forward(x, gamma, beta, G, gn_param)
dx, dgamma, dbeta = spatial_groupnorm_backward(dout, cache)
#You should expect errors of magnitudes between 1e-12~1e-07
print('dx error: ', rel_error(dx_num, dx))
print('dgamma error: ', rel_error(da_num, dgamma))
print('dbeta error: ', rel_error(db_num, dbeta))
###Output
_____no_output_____ |
DL_PyTorch/Part 9 - dataset loader - torchvision.ImageFolder.ipynb | ###Markdown
PyTorch
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
seed = 1
lr = 0.001
momentum = 0.5
batch_size = 64
test_batch_size = 64
epochs = 5
no_cuda = False
log_interval = 100
###Output
_____no_output_____
###Markdown
Model
###Code
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(3, 20, 5, 1)
self.conv2 = nn.Conv2d(20, 50, 5, 1)
self.fc1 = nn.Linear(4*4*50, 500)
self.fc2 = nn.Linear(500, 10)
def forward(self, x):
x = F.relu(self.conv1(x))
x = F.max_pool2d(x, 2, 2)
x = F.relu(self.conv2(x))
x = F.max_pool2d(x, 2, 2)
x = x.view(-1, 4*4*50)
x = F.relu(self.fc1(x))
x = self.fc2(x)
return F.log_softmax(x, dim=1)
###Output
_____no_output_____
###Markdown
Preprocess Why grayscale is not supported by ImageFolder's default loader: https://github.com/pytorch/vision/blob/master/torchvision/datasets/folder.py#L157
###Code
torch.manual_seed(seed)
use_cuda = not no_cuda and torch.cuda.is_available()
device = torch.device("cuda" if use_cuda else "cpu")
###Output
_____no_output_____
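The training loop below uses `train_loader` and `test_loader`, which are not defined in this excerpt. Here is a minimal sketch of how they might be built with `torchvision.datasets.ImageFolder` (the topic of this notebook); the `dataset/train` and `dataset/test` paths are placeholders for folders containing one sub-directory per class, and images are resized to 28x28 so they match the `4*4*50` flatten in the model above.

```python
import torch
from torchvision import datasets, transforms

batch_size, test_batch_size = 64, 64   # same values as the cells above

# ImageFolder's default loader returns 3-channel RGB images (see the link above),
# which is why the first conv layer takes 3 input channels.
transform = transforms.Compose([
    transforms.Resize((28, 28)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
])

# Placeholder paths: one sub-directory per class under each root.
train_dataset = datasets.ImageFolder(root='dataset/train', transform=transform)
test_dataset = datasets.ImageFolder(root='dataset/test', transform=transform)

train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=test_batch_size, shuffle=False)
```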
###Markdown
Optimization
###Code
model = Net().to(device)
optimizer = optim.SGD(model.parameters(), lr=lr, momentum=momentum)
###Output
_____no_output_____
###Markdown
Training
###Code
for epoch in range(1, epochs + 1):
# Train Mode
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
data, target = data.to(device), target.to(device)
optimizer.zero_grad() # zero the gradients before computing backpropagation
output = model(data)
loss = F.nll_loss(output, target) # https://pytorch.org/docs/stable/nn.html#nll-loss
loss.backward()
optimizer.step()
if batch_idx % log_interval == 0:
print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
epoch, batch_idx * len(data), len(train_loader.dataset),
100. * batch_idx / len(train_loader), loss.item()))
# Test mode
model.eval() # switch batch norm, dropout, etc. to evaluation mode
test_loss = 0
correct = 0
with torch.no_grad(): # turn off the autograd engine (no backpropagation / gradient tracking) to reduce memory usage and speed things up
for data, target in test_loader:
data, target = data.to(device), target.to(device)
output = model(data)
test_loss += F.nll_loss(output, target, reduction='sum').item() # sum up batch loss
pred = output.argmax(dim=1, keepdim=True) # get the index of the max log-probability
correct += pred.eq(target.view_as(pred)).sum().item() # count how many predictions match the target
test_loss /= len(test_loader.dataset)
print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
test_loss, correct, len(test_loader.dataset),
100. * correct / len(test_loader.dataset)))
###Output
Train Epoch: 1 [0/60000 (0%)] Loss: 2.232911
Train Epoch: 1 [200/60000 (0%)] Loss: 2.204920
Train Epoch: 1 [400/60000 (1%)] Loss: 1.901318
Train Epoch: 1 [600/60000 (1%)] Loss: 1.783645
Train Epoch: 1 [800/60000 (1%)] Loss: 1.861633
Train Epoch: 1 [1000/60000 (2%)] Loss: 1.308529
Train Epoch: 1 [1200/60000 (2%)] Loss: 0.398948
Train Epoch: 1 [1400/60000 (2%)] Loss: 0.448349
Train Epoch: 1 [1600/60000 (3%)] Loss: 0.500799
Train Epoch: 1 [1800/60000 (3%)] Loss: 0.487348
Train Epoch: 1 [2000/60000 (3%)] Loss: 0.096714
Train Epoch: 1 [2200/60000 (4%)] Loss: 0.804104
Train Epoch: 1 [2400/60000 (4%)] Loss: 0.181191
Train Epoch: 1 [2600/60000 (4%)] Loss: 0.217457
Train Epoch: 1 [2800/60000 (5%)] Loss: 0.034894
Train Epoch: 1 [3000/60000 (5%)] Loss: 0.100181
Train Epoch: 1 [3200/60000 (5%)] Loss: 0.260145
Train Epoch: 1 [3400/60000 (6%)] Loss: 0.011204
Train Epoch: 1 [3600/60000 (6%)] Loss: 0.010704
Train Epoch: 1 [3800/60000 (6%)] Loss: 0.085857
Train Epoch: 1 [4000/60000 (7%)] Loss: 0.006544
Train Epoch: 1 [4200/60000 (7%)] Loss: 0.468256
Train Epoch: 1 [4400/60000 (7%)] Loss: 0.092061
Train Epoch: 1 [4600/60000 (8%)] Loss: 0.007170
Train Epoch: 1 [4800/60000 (8%)] Loss: 0.086386
Train Epoch: 1 [5000/60000 (8%)] Loss: 0.008531
Train Epoch: 1 [5200/60000 (9%)] Loss: 0.045890
Train Epoch: 1 [5400/60000 (9%)] Loss: 0.010391
Train Epoch: 1 [5600/60000 (9%)] Loss: 0.171973
Train Epoch: 1 [5800/60000 (10%)] Loss: 0.010529
Train Epoch: 1 [6000/60000 (10%)] Loss: 0.911217
Train Epoch: 1 [6200/60000 (10%)] Loss: 0.190894
Train Epoch: 1 [6400/60000 (11%)] Loss: 0.284844
Train Epoch: 1 [6600/60000 (11%)] Loss: 0.004608
Train Epoch: 1 [6800/60000 (11%)] Loss: 0.030988
Train Epoch: 1 [7000/60000 (12%)] Loss: 0.002042
Train Epoch: 1 [7200/60000 (12%)] Loss: 0.594563
Train Epoch: 1 [7400/60000 (12%)] Loss: 0.027013
Train Epoch: 1 [7600/60000 (13%)] Loss: 0.009044
Train Epoch: 1 [7800/60000 (13%)] Loss: 0.234935
Train Epoch: 1 [8000/60000 (13%)] Loss: 0.000469
Train Epoch: 1 [8200/60000 (14%)] Loss: 0.005757
Train Epoch: 1 [8400/60000 (14%)] Loss: 0.021440
Train Epoch: 1 [8600/60000 (14%)] Loss: 0.012102
Train Epoch: 1 [8800/60000 (15%)] Loss: 2.327168
Train Epoch: 1 [9000/60000 (15%)] Loss: 0.772761
Train Epoch: 1 [9200/60000 (15%)] Loss: 0.004751
Train Epoch: 1 [9400/60000 (16%)] Loss: 0.064552
Train Epoch: 1 [9600/60000 (16%)] Loss: 0.006910
Train Epoch: 1 [9800/60000 (16%)] Loss: 0.000601
Train Epoch: 1 [10000/60000 (17%)] Loss: 0.146731
Train Epoch: 1 [10200/60000 (17%)] Loss: 0.058498
Train Epoch: 1 [10400/60000 (17%)] Loss: 0.046566
Train Epoch: 1 [10600/60000 (18%)] Loss: 0.001207
Train Epoch: 1 [10800/60000 (18%)] Loss: 0.019302
Train Epoch: 1 [11000/60000 (18%)] Loss: 0.003549
Train Epoch: 1 [11200/60000 (19%)] Loss: 0.023783
Train Epoch: 1 [11400/60000 (19%)] Loss: 0.178630
Train Epoch: 1 [11600/60000 (19%)] Loss: 0.037193
Train Epoch: 1 [11800/60000 (20%)] Loss: 0.020176
Train Epoch: 1 [12000/60000 (20%)] Loss: 0.175030
Train Epoch: 1 [12200/60000 (20%)] Loss: 0.003346
Train Epoch: 1 [12400/60000 (21%)] Loss: 0.067472
Train Epoch: 1 [12600/60000 (21%)] Loss: 0.005951
Train Epoch: 1 [12800/60000 (21%)] Loss: 0.276975
Train Epoch: 1 [13000/60000 (22%)] Loss: 0.008755
Train Epoch: 1 [13200/60000 (22%)] Loss: 0.000861
Train Epoch: 1 [13400/60000 (22%)] Loss: 0.001547
Train Epoch: 1 [13600/60000 (23%)] Loss: 1.374528
Train Epoch: 1 [13800/60000 (23%)] Loss: 0.340848
Train Epoch: 1 [14000/60000 (23%)] Loss: 0.013831
Train Epoch: 1 [14200/60000 (24%)] Loss: 0.022789
Train Epoch: 1 [14400/60000 (24%)] Loss: 0.000792
Train Epoch: 1 [14600/60000 (24%)] Loss: 0.416799
Train Epoch: 1 [14800/60000 (25%)] Loss: 0.410651
Train Epoch: 1 [15000/60000 (25%)] Loss: 0.151646
|
Train_and_Test_Notebooks/notebooks/ISPRS/ISPRS_CarTest.ipynb | ###Markdown
Segmentation of Road from Satellite imagery Importing Libraries
###Code
import warnings
warnings.filterwarnings('ignore')
import os
import cv2
#from google.colab.patches import cv2_imshow
import numpy as np
import tensorflow as tf
import pandas as pd
from keras.models import Model, load_model
from skimage.morphology import label
import pickle
from keras import backend as K
from matplotlib import pyplot as plt
from tqdm import tqdm_notebook
import random
from skimage.io import imread, imshow, imread_collection, concatenate_images
from matplotlib import pyplot as plt
import h5py
seed = 56
!pip install tensorflow==1.14.0
from google.colab import drive
drive.mount('/content/gdrive/')
base_path = "gdrive/My\ Drive/MapSegClean/"
%cd gdrive/My\ Drive/MapSegClean/
###Output
Drive already mounted at /content/gdrive/; to attempt to forcibly remount, call drive.mount("/content/gdrive/", force_remount=True).
/content/gdrive/My Drive/MapSegClean
###Markdown
Defining Custom Loss functions and accuracy Metric.
###Code
#Source: https://towardsdatascience.com/metrics-to-evaluate-your-semantic-segmentation-model-6bcb99639aa2
from keras import backend as K
def iou_coef(y_true, y_pred, smooth=1):
intersection = K.sum(K.abs(y_true * y_pred), axis=[1,2,3])
union = K.sum(y_true,[1,2,3])+K.sum(y_pred,[1,2,3])-intersection
iou = K.mean((intersection + smooth) / (union + smooth), axis=0)
return iou
def dice_coef(y_true, y_pred, smooth = 1):
y_true_f = K.flatten(y_true)
y_pred_f = K.flatten(y_pred)
intersection = K.sum(y_true_f * y_pred_f)
return (2. * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)
def soft_dice_loss(y_true, y_pred):
return 1-dice_coef(y_true, y_pred)
###Output
_____no_output_____
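As a quick sanity check of the metric above, a toy example (values chosen only for illustration): with a 4x4 all-ones mask and a constant 0.5 prediction, the intersection is 8 and the sums are 16 + 8, so `dice_coef` with `smooth=1` gives (2*8 + 1) / (16 + 8 + 1) = 0.68.

```python
import numpy as np
from keras import backend as K

y_true = K.constant(np.ones((1, 4, 4, 1)))          # toy ground-truth mask
y_pred = K.constant(np.full((1, 4, 4, 1), 0.5))     # toy prediction
print(K.eval(dice_coef(y_true, y_pred)))            # -> 0.68
print(K.eval(soft_dice_loss(y_true, y_pred)))       # -> 0.32
```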
###Markdown
Defining Our Model
###Code
from keras.models import Model, load_model
import tensorflow as tf
from keras.layers import Input
from keras.layers.core import Dropout, Lambda
from keras.layers.convolutional import Conv2D, Conv2DTranspose
from keras.layers.pooling import MaxPooling2D
from keras.layers.merge import concatenate
from keras import optimizers
from keras.layers import BatchNormalization
import keras
inputs = Input((256, 256, 3))
s = Lambda(lambda x: x / 255) (inputs)
conv1 = Conv2D(16, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same') (inputs)
conv1 = BatchNormalization() (conv1)
conv1 = Dropout(0.1) (conv1)
conv1 = Conv2D(16, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same') (conv1)
conv1 = BatchNormalization() (conv1)
pooling1 = MaxPooling2D((2, 2)) (conv1)
conv2 = Conv2D(32, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same') (pooling1)
conv2 = BatchNormalization() (conv2)
conv2 = Dropout(0.1) (conv2)
conv2 = Conv2D(32, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same') (conv2)
conv2 = BatchNormalization() (conv2)
pooling2 = MaxPooling2D((2, 2)) (conv2)
conv3 = Conv2D(64, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same') (pooling2)
conv3 = BatchNormalization() (conv3)
conv3 = Dropout(0.2) (conv3)
conv3 = Conv2D(64, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same') (conv3)
conv3 = BatchNormalization() (conv3)
pooling3 = MaxPooling2D((2, 2)) (conv3)
conv4 = Conv2D(128, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same') (pooling3)
conv4 = BatchNormalization() (conv4)
conv4 = Dropout(0.2) (conv4)
conv4 = Conv2D(128, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same') (conv4)
conv4 = BatchNormalization() (conv4)
pooling4 = MaxPooling2D(pool_size=(2, 2)) (conv4)
conv5 = Conv2D(256, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same') (pooling4)
conv5 = BatchNormalization() (conv5)
conv5 = Dropout(0.3) (conv5)
conv5 = Conv2D(256, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same') (conv5)
conv5 = BatchNormalization() (conv5)
upsample6 = Conv2DTranspose(128, (2, 2), strides=(2, 2), padding='same') (conv5)
upsample6 = concatenate([upsample6, conv4])
conv6 = Conv2D(128, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same') (upsample6)
conv6 = BatchNormalization() (conv6)
conv6 = Dropout(0.2) (conv6)
conv6 = Conv2D(128, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same') (conv6)
conv6 = BatchNormalization() (conv6)
upsample7 = Conv2DTranspose(64, (2, 2), strides=(2, 2), padding='same') (conv6)
upsample7 = concatenate([upsample7, conv3])
conv7 = Conv2D(64, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same') (upsample7)
conv7 = BatchNormalization() (conv7)
conv7 = Dropout(0.2) (conv7)
conv7 = Conv2D(64, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same') (conv7)
conv7 = BatchNormalization() (conv7)
upsample8 = Conv2DTranspose(32, (2, 2), strides=(2, 2), padding='same') (conv7)
upsample8 = concatenate([upsample8, conv2])
conv8 = Conv2D(32, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same') (upsample8)
conv8 = BatchNormalization() (conv8)
conv8 = Dropout(0.1) (conv8)
conv8 = Conv2D(32, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same') (conv8)
conv8 = BatchNormalization() (conv8)
upsample9 = Conv2DTranspose(16, (2, 2), strides=(2, 2), padding='same') (conv8)
upsample9 = concatenate([upsample9, conv1], axis=3)
conv9 = Conv2D(16, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same') (upsample9)
conv9 = BatchNormalization() (conv9)
conv9 = Dropout(0.1) (conv9)
conv9 = Conv2D(16, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same') (conv9)
conv9 = BatchNormalization() (conv9)
outputs = Conv2D(1, (1, 1), activation='sigmoid') (conv9)
model = Model(inputs=[inputs], outputs=[outputs])
model.summary()
###Output
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:66: The name tf.get_default_graph is deprecated. Please use tf.compat.v1.get_default_graph instead.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:541: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:4479: The name tf.truncated_normal is deprecated. Please use tf.random.truncated_normal instead.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:190: The name tf.get_default_session is deprecated. Please use tf.compat.v1.get_default_session instead.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:197: The name tf.ConfigProto is deprecated. Please use tf.compat.v1.ConfigProto instead.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:203: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:207: The name tf.global_variables is deprecated. Please use tf.compat.v1.global_variables instead.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:216: The name tf.is_variable_initialized is deprecated. Please use tf.compat.v1.is_variable_initialized instead.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:223: The name tf.variables_initializer is deprecated. Please use tf.compat.v1.variables_initializer instead.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:2041: The name tf.nn.fused_batch_norm is deprecated. Please use tf.compat.v1.nn.fused_batch_norm instead.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:148: The name tf.placeholder_with_default is deprecated. Please use tf.compat.v1.placeholder_with_default instead.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:3733: calling dropout (from tensorflow.python.ops.nn_ops) with keep_prob is deprecated and will be removed in a future version.
Instructions for updating:
Please use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:4267: The name tf.nn.max_pool is deprecated. Please use tf.nn.max_pool2d instead.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:4432: The name tf.random_uniform is deprecated. Please use tf.random.uniform instead.
Model: "model_1"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_1 (InputLayer) (None, 256, 256, 3) 0
__________________________________________________________________________________________________
conv2d_1 (Conv2D) (None, 256, 256, 16) 448 input_1[0][0]
__________________________________________________________________________________________________
batch_normalization_1 (BatchNor (None, 256, 256, 16) 64 conv2d_1[0][0]
__________________________________________________________________________________________________
dropout_1 (Dropout) (None, 256, 256, 16) 0 batch_normalization_1[0][0]
__________________________________________________________________________________________________
conv2d_2 (Conv2D) (None, 256, 256, 16) 2320 dropout_1[0][0]
__________________________________________________________________________________________________
batch_normalization_2 (BatchNor (None, 256, 256, 16) 64 conv2d_2[0][0]
__________________________________________________________________________________________________
max_pooling2d_1 (MaxPooling2D) (None, 128, 128, 16) 0 batch_normalization_2[0][0]
__________________________________________________________________________________________________
conv2d_3 (Conv2D) (None, 128, 128, 32) 4640 max_pooling2d_1[0][0]
__________________________________________________________________________________________________
batch_normalization_3 (BatchNor (None, 128, 128, 32) 128 conv2d_3[0][0]
__________________________________________________________________________________________________
dropout_2 (Dropout) (None, 128, 128, 32) 0 batch_normalization_3[0][0]
__________________________________________________________________________________________________
conv2d_4 (Conv2D) (None, 128, 128, 32) 9248 dropout_2[0][0]
__________________________________________________________________________________________________
batch_normalization_4 (BatchNor (None, 128, 128, 32) 128 conv2d_4[0][0]
__________________________________________________________________________________________________
max_pooling2d_2 (MaxPooling2D) (None, 64, 64, 32) 0 batch_normalization_4[0][0]
__________________________________________________________________________________________________
conv2d_5 (Conv2D) (None, 64, 64, 64) 18496 max_pooling2d_2[0][0]
__________________________________________________________________________________________________
batch_normalization_5 (BatchNor (None, 64, 64, 64) 256 conv2d_5[0][0]
__________________________________________________________________________________________________
dropout_3 (Dropout) (None, 64, 64, 64) 0 batch_normalization_5[0][0]
__________________________________________________________________________________________________
conv2d_6 (Conv2D) (None, 64, 64, 64) 36928 dropout_3[0][0]
__________________________________________________________________________________________________
batch_normalization_6 (BatchNor (None, 64, 64, 64) 256 conv2d_6[0][0]
__________________________________________________________________________________________________
max_pooling2d_3 (MaxPooling2D) (None, 32, 32, 64) 0 batch_normalization_6[0][0]
__________________________________________________________________________________________________
conv2d_7 (Conv2D) (None, 32, 32, 128) 73856 max_pooling2d_3[0][0]
__________________________________________________________________________________________________
batch_normalization_7 (BatchNor (None, 32, 32, 128) 512 conv2d_7[0][0]
__________________________________________________________________________________________________
dropout_4 (Dropout) (None, 32, 32, 128) 0 batch_normalization_7[0][0]
__________________________________________________________________________________________________
conv2d_8 (Conv2D) (None, 32, 32, 128) 147584 dropout_4[0][0]
__________________________________________________________________________________________________
batch_normalization_8 (BatchNor (None, 32, 32, 128) 512 conv2d_8[0][0]
__________________________________________________________________________________________________
max_pooling2d_4 (MaxPooling2D) (None, 16, 16, 128) 0 batch_normalization_8[0][0]
__________________________________________________________________________________________________
conv2d_9 (Conv2D) (None, 16, 16, 256) 295168 max_pooling2d_4[0][0]
__________________________________________________________________________________________________
batch_normalization_9 (BatchNor (None, 16, 16, 256) 1024 conv2d_9[0][0]
__________________________________________________________________________________________________
dropout_5 (Dropout) (None, 16, 16, 256) 0 batch_normalization_9[0][0]
__________________________________________________________________________________________________
conv2d_10 (Conv2D) (None, 16, 16, 256) 590080 dropout_5[0][0]
__________________________________________________________________________________________________
batch_normalization_10 (BatchNo (None, 16, 16, 256) 1024 conv2d_10[0][0]
__________________________________________________________________________________________________
conv2d_transpose_1 (Conv2DTrans (None, 32, 32, 128) 131200 batch_normalization_10[0][0]
__________________________________________________________________________________________________
concatenate_1 (Concatenate) (None, 32, 32, 256) 0 conv2d_transpose_1[0][0]
batch_normalization_8[0][0]
__________________________________________________________________________________________________
conv2d_11 (Conv2D) (None, 32, 32, 128) 295040 concatenate_1[0][0]
__________________________________________________________________________________________________
batch_normalization_11 (BatchNo (None, 32, 32, 128) 512 conv2d_11[0][0]
__________________________________________________________________________________________________
dropout_6 (Dropout) (None, 32, 32, 128) 0 batch_normalization_11[0][0]
__________________________________________________________________________________________________
conv2d_12 (Conv2D) (None, 32, 32, 128) 147584 dropout_6[0][0]
__________________________________________________________________________________________________
batch_normalization_12 (BatchNo (None, 32, 32, 128) 512 conv2d_12[0][0]
__________________________________________________________________________________________________
conv2d_transpose_2 (Conv2DTrans (None, 64, 64, 64) 32832 batch_normalization_12[0][0]
__________________________________________________________________________________________________
concatenate_2 (Concatenate) (None, 64, 64, 128) 0 conv2d_transpose_2[0][0]
batch_normalization_6[0][0]
__________________________________________________________________________________________________
conv2d_13 (Conv2D) (None, 64, 64, 64) 73792 concatenate_2[0][0]
__________________________________________________________________________________________________
batch_normalization_13 (BatchNo (None, 64, 64, 64) 256 conv2d_13[0][0]
__________________________________________________________________________________________________
dropout_7 (Dropout) (None, 64, 64, 64) 0 batch_normalization_13[0][0]
__________________________________________________________________________________________________
conv2d_14 (Conv2D) (None, 64, 64, 64) 36928 dropout_7[0][0]
__________________________________________________________________________________________________
batch_normalization_14 (BatchNo (None, 64, 64, 64) 256 conv2d_14[0][0]
__________________________________________________________________________________________________
conv2d_transpose_3 (Conv2DTrans (None, 128, 128, 32) 8224 batch_normalization_14[0][0]
__________________________________________________________________________________________________
concatenate_3 (Concatenate) (None, 128, 128, 64) 0 conv2d_transpose_3[0][0]
batch_normalization_4[0][0]
__________________________________________________________________________________________________
conv2d_15 (Conv2D) (None, 128, 128, 32) 18464 concatenate_3[0][0]
__________________________________________________________________________________________________
batch_normalization_15 (BatchNo (None, 128, 128, 32) 128 conv2d_15[0][0]
__________________________________________________________________________________________________
dropout_8 (Dropout) (None, 128, 128, 32) 0 batch_normalization_15[0][0]
__________________________________________________________________________________________________
conv2d_16 (Conv2D) (None, 128, 128, 32) 9248 dropout_8[0][0]
__________________________________________________________________________________________________
batch_normalization_16 (BatchNo (None, 128, 128, 32) 128 conv2d_16[0][0]
__________________________________________________________________________________________________
conv2d_transpose_4 (Conv2DTrans (None, 256, 256, 16) 2064 batch_normalization_16[0][0]
__________________________________________________________________________________________________
concatenate_4 (Concatenate) (None, 256, 256, 32) 0 conv2d_transpose_4[0][0]
batch_normalization_2[0][0]
__________________________________________________________________________________________________
conv2d_17 (Conv2D) (None, 256, 256, 16) 4624 concatenate_4[0][0]
__________________________________________________________________________________________________
batch_normalization_17 (BatchNo (None, 256, 256, 16) 64 conv2d_17[0][0]
__________________________________________________________________________________________________
dropout_9 (Dropout) (None, 256, 256, 16) 0 batch_normalization_17[0][0]
__________________________________________________________________________________________________
conv2d_18 (Conv2D) (None, 256, 256, 16) 2320 dropout_9[0][0]
__________________________________________________________________________________________________
batch_normalization_18 (BatchNo (None, 256, 256, 16) 64 conv2d_18[0][0]
__________________________________________________________________________________________________
conv2d_19 (Conv2D) (None, 256, 256, 1) 17 batch_normalization_18[0][0]
==================================================================================================
Total params: 1,946,993
Trainable params: 1,944,049
Non-trainable params: 2,944
__________________________________________________________________________________________________
###Markdown
HYPER_PARAMETERS
###Code
LEARNING_RATE = 0.0001
###Output
_____no_output_____
###Markdown
Initializing Callbacks
###Code
#from tensorboardcolab import TensorBoardColab, TensorBoardColabCallback
from keras.callbacks import EarlyStopping, ModelCheckpoint, ReduceLROnPlateau
from datetime import datetime
model_path = "/content/gdrive/My Drive/ISPRS/Models/cars.h5"
checkpointer = ModelCheckpoint(model_path,
monitor="val_loss",
mode="min",
save_best_only = True,
verbose=1)
earlystopper = EarlyStopping(monitor = 'val_loss',
min_delta = 0,
patience = 5,
verbose = 1,
restore_best_weights = True)
lr_reducer = ReduceLROnPlateau(monitor='val_loss',
factor=0.1,
patience=4,
verbose=1,
epsilon=1e-4)
###Output
_____no_output_____
###Markdown
Compiling the model
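`soft_dice_loss` and `iou_coef` are custom functions defined earlier in the notebook. For reference, a common formulation of a soft Dice loss and an IoU metric is sketched below; this is an illustrative approximation, not necessarily the exact definitions used by this model.
###Code
from keras import backend as K

def soft_dice_loss_sketch(y_true, y_pred, smooth=1.0):
    # flatten the masks and compute 1 - Dice coefficient
    y_true_f = K.flatten(y_true)
    y_pred_f = K.flatten(y_pred)
    intersection = K.sum(y_true_f * y_pred_f)
    return 1.0 - (2.0 * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)

def iou_coef_sketch(y_true, y_pred, smooth=1.0):
    # intersection over union for binary segmentation masks
    intersection = K.sum(y_true * y_pred)
    union = K.sum(y_true) + K.sum(y_pred) - intersection
    return (intersection + smooth) / (union + smooth)
###Output
_____no_output_____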
###Code
opt = keras.optimizers.adam(LEARNING_RATE)
model.compile(
optimizer=opt,
loss=soft_dice_loss,
metrics=[iou_coef])
import keras.losses, keras.metrics
keras.losses.soft_dice_loss = soft_dice_loss
keras.metrics.iou_coef = iou_coef
from keras.models import load_model
model = load_model('/content/gdrive/My Drive/MapSegClean/Models/model.h5',
custom_objects={'loss': soft_dice_loss})
###Output
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:541: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:4479: The name tf.truncated_normal is deprecated. Please use tf.random.truncated_normal instead.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:66: The name tf.get_default_graph is deprecated. Please use tf.compat.v1.get_default_graph instead.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:190: The name tf.get_default_session is deprecated. Please use tf.compat.v1.get_default_session instead.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:197: The name tf.ConfigProto is deprecated. Please use tf.compat.v1.ConfigProto instead.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:2041: The name tf.nn.fused_batch_norm is deprecated. Please use tf.compat.v1.nn.fused_batch_norm instead.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:3733: calling dropout (from tensorflow.python.ops.nn_ops) with keep_prob is deprecated and will be removed in a future version.
Instructions for updating:
Please use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:4267: The name tf.nn.max_pool is deprecated. Please use tf.nn.max_pool2d instead.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/optimizers.py:793: The name tf.train.Optimizer is deprecated. Please use tf.compat.v1.train.Optimizer instead.
###Markdown
Testing our Model On Test Images
###Code
model.load_weights("/content/gdrive/My Drive/ISPRS/Models/Final_unet_cars.h5")
"""Test"""
import cv2
import glob
import numpy as np
import h5py
#test_images = np.array([cv2.imread(file) for file in glob.glob("/home/bisag/Desktop/Road-Segmentation/I/")])
#test_masks = np.array([cv2.imread(file) for file in glob.glob("/home/bisag/Desktop/Road-Segmentation/M/")])
test_images = []
files = glob.glob ("/content/gdrive/My Drive/ISPRS/Test/*.png")
for myFile in files:
print(myFile)
image = cv2.imread (myFile)
test_images.append (image)
#test_images = cv2.imread("/home/bisag/Desktop/Road-Segmentation/I/1.png")
#test_masks = cv2.imread("/home/bisag/Desktop/Road-Segmentation/M/1.png")
test_images = np.array(test_images)
print(test_images.shape)
predictions = model.predict(test_images, verbose=1)*255
from google.colab.patches import cv2_imshow
cv2_imshow(np.squeeze(test_images[0]))
cv2_imshow(np.squeeze(predictions[0]))
thresh_val = 0.1
predicton_threshold = (predictions > thresh_val).astype(np.uint8)
import matplotlib
for i in range(len(predictions)):
cv2.imwrite( "/content/gdrive/My Drive/ISPRS/Results/" + str(i) + "Image.png" , np.squeeze(test_images[i][:,:,0]))
#cv2.imwrite( "/home/bisag/Desktop/Road-Segmentation/Results/" + str(i) + "Prediction.png" , np.squeeze(predictions[i][:,:,0]))
#cv2.imwrite( "/home/bisag/Desktop/Road-Segmentation/Results/" + str(i) + "Prediction_Threshold.png" , np.squeeze(predicton_threshold[i][:,:,0]))
#matplotlib.image.imsave('/home/bisag/Desktop/Road-Segmentation/Results/000.png', np.squeeze(predicton_threshold[0][:,:,0]))
matplotlib.image.imsave("/content/gdrive/My Drive/ISPRS/Results/" + str(i) + "Prediction.png" , np.squeeze(predictions[i][:,:,0]))
matplotlib.image.imsave( "/content/gdrive/My Drive/ISPRS/Results/" + str(i) + "Prediction_Threshold.png" , np.squeeze(predicton_threshold[i][:,:,0]))
#imshow(np.squeeze(predictions[0][:,:,0]))
imshow(np.squeeze(predictions[0][:,:,0]))
#import scipy.misc
#scipy.misc.imsave('/home/bisag/Desktop/Road-Segmentation/Results/00.png', np.squeeze(predictions[0][:,:,0]))
###Output
_____no_output_____ |
experiments/Applications_to_real_data_sets.ipynb | ###Markdown
Applications to real data sets
###Code
# binary classification using SGHMC
# libraries
import numpy as np
from matplotlib import pyplot as plt
from sklearn import datasets
from sklearn.model_selection import train_test_split
# load iris dataset
iris = datasets.load_iris()
idx = iris.target != 2
X = iris.data[idx].astype(np.float32)
y = iris.target[idx].astype(np.float32)
# split into the training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
# Confirm the features in the dataset
iris.feature_names
# plot the classes using the first two features (in 2D plot)
plt.scatter(X_train[:,0], X_train[:, 1], c=y_train,
cmap=plt.cm.Paired, s=100)
plt.xlabel(iris.feature_names[0])
plt.ylabel(iris.feature_names[1])
pass
# Stochastic Gradient Hamiltonian Monte Carlo
def logistic_function(x):
"""Logistic function"""
return 1 / (1 + np.exp(-x))
def LR_SGHMC(X,y):
nstep = int(1e4);
    # Data size and feature dimension
N, D = X.shape;
# Set minibatch size = sqrt(N)
tau = int(np.floor(np.sqrt(N)));
# Initial parameter distribution
theta = np.zeros(D);
sigma = 0.1;
SigmaStar = np.eye(D)*sigma;
invSigmaStar = np.linalg.inv(SigmaStar);
# Initialize the coefficients vector
beta0 = np.random.rand(D);
betaVec = np.zeros((nstep,D));
betaVec[0,:] = beta0;
epsilon = np.zeros(nstep);
z = np.zeros((nstep,D));
epsilon0 = 0.01;
# SGHMC
for t in range(nstep-1):
# random sample a minibatch
S = np.random.choice(N, tau, replace=False);
# parameters of sghmc
C = np.eye(D)
Bh = 0
# decay the epsilon
epsilon[t] = max(1/(t+1), epsilon0);
zCov = epsilon[t] * 2 * (C - Bh);
z[t,:] = np.random.multivariate_normal(np.zeros(D),zCov);
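        # gradR: gradient of the negative log of the Gaussian prior N(theta, SigmaStar)
        # gradL: minibatch gradient of the negative log-likelihood of the logistic
        #        regression model; it is rescaled by N/tau below to approximate the full-data gradient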
gradR = np.dot(invSigmaStar,(betaVec[t,:]-theta));
gradL = -np.dot(X[S,:].T,(y[S]-logistic_function(np.dot(X[S,:],betaVec[t,:]))));
betaVec[t+1,:] = betaVec[t,:]-epsilon[t]*(gradR+N/tau*gradL) - epsilon[t]*np.dot(C,betaVec[t,:]) + z[t,:];
# plot the convergence of each coefficient
fig = plt.figure(figsize=(10, 12));
ax1 = fig.add_subplot(411)
ax1.set_ylabel(r"$\beta_0$")
ax1.plot(range(nstep),betaVec[:,0]);
ax2 = fig.add_subplot(412)
ax2.set_ylabel(r"$\beta_1$")
ax2.plot(range(nstep),betaVec[:,1]);
ax3 = fig.add_subplot(413)
ax3.set_ylabel(r"$\beta_2$")
ax3.plot(range(nstep),betaVec[:,2]);
ax4 = fig.add_subplot(414)
ax4.set_ylabel(r"$\beta_3$")
ax4.plot(range(nstep),betaVec[:,3]);
plt.xlabel("iteration")
burn_from = int(0.9*nstep);
samples = betaVec[burn_from+1:,:];
return samples
samples_SGHMC = LR_SGHMC(X,y)
fig1, f1_axes = plt.subplots(ncols=2, nrows=2, constrained_layout=True, figsize=(8, 6))
f1_axes[0, 0].hist(samples_SGHMC[:,0],20)
f1_axes[0, 0].set_ylabel("Frequency")
f1_axes[0, 0].set_xlabel(r"$\beta_0$")
f1_axes[0, 1].hist(samples_SGHMC[:,1],20)
f1_axes[0, 1].set_xlabel(r"$\beta_1$")
f1_axes[1, 0].hist(samples_SGHMC[:,2],20)
f1_axes[1, 0].set_xlabel(r"$\beta_2$")
f1_axes[1, 1].hist(samples_SGHMC[:,3],20)
f1_axes[1, 1].set_xlabel(r"$\beta_3$")
pass
# Test accuracy
prob_test = np.mean(logistic_function(X_test @ samples_SGHMC.T), 1)
y_hat_test = prob_test > 0.5
from sklearn.metrics import accuracy_score
accuracy_score(y_test, y_hat_test)
###Output
_____no_output_____ |
source-code/pandas/pivot_versus_pivot_table.ipynb | ###Markdown
pivot versus pivot_table
###Code
import numpy as np
import pandas as pd
###Output
_____no_output_____
###Markdown
pandas has two functions to restructure dataframes. Although they are similar, each has its own applications. To experiment with them, we use the patient data set, consisting of experimental numerical data and categorical metadata.
###Code
experiment = pd.read_excel('data/patient_experiment.xlsx',
dtype={'dose': np.float32,
'temperature': np.float32})
metadata = pd.read_excel('data/patient_metadata.xlsx',
dtype={'gender': 'category',
'condition': 'category'})
###Output
_____no_output_____
###Markdown
We merge the dataframes. There will be missing data in each data column.
###Code
data = pd.merge(experiment, metadata, how='left', on='patient')
data.info()
data.head()
###Output
_____no_output_____
###Markdown
pivot Using the `pivot` method, all columns are taken into account, so when using the `'date'` column as the new index and `'patient'` as the second-level column, we get a new dataframe with $4 \times 9$ columns: the first-level columns will be `'dose'`, `'temperature'`, `'gender'` and `'condition'`, and the second level the `'patient'` ID.
###Code
time_series = data.pivot(index='date', columns='patient')
time_series.info()
###Output
<class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 7 entries, 2012-10-02 10:00:00 to 2012-10-02 16:00:00
Data columns (total 36 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 (dose, 1) 7 non-null float32
1 (dose, 2) 7 non-null float32
2 (dose, 3) 7 non-null float32
3 (dose, 4) 6 non-null float32
4 (dose, 5) 7 non-null float32
5 (dose, 6) 6 non-null float32
6 (dose, 7) 7 non-null float32
7 (dose, 8) 7 non-null float32
8 (dose, 9) 7 non-null float32
9 (temperature, 1) 7 non-null float32
10 (temperature, 2) 7 non-null float32
11 (temperature, 3) 6 non-null float32
12 (temperature, 4) 7 non-null float32
13 (temperature, 5) 7 non-null float32
14 (temperature, 6) 6 non-null float32
15 (temperature, 7) 7 non-null float32
16 (temperature, 8) 7 non-null float32
17 (temperature, 9) 7 non-null float32
18 (gender, 1) 7 non-null category
19 (gender, 2) 7 non-null category
20 (gender, 3) 7 non-null category
21 (gender, 4) 0 non-null category
22 (gender, 5) 7 non-null category
23 (gender, 6) 6 non-null category
24 (gender, 7) 7 non-null category
25 (gender, 8) 7 non-null category
26 (gender, 9) 7 non-null category
27 (condition, 1) 7 non-null category
28 (condition, 2) 7 non-null category
29 (condition, 3) 7 non-null category
30 (condition, 4) 0 non-null category
31 (condition, 5) 7 non-null category
32 (condition, 6) 6 non-null category
33 (condition, 7) 7 non-null category
34 (condition, 8) 7 non-null category
35 (condition, 9) 7 non-null category
dtypes: category(18), float32(18)
memory usage: 1.7 KB
###Markdown
The `'gender'` and `'condition'` columns in this dataframe contain identical values in every row. The optional `values` argument can be used to select only the columns of interest, e.g., we can discard `'dose'` and `'condition'`.
###Code
temp_gender_data = data.pivot(index='date', columns='patient',
values=['temperature', 'gender'])
temp_gender_data.info()
###Output
<class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 7 entries, 2012-10-02 10:00:00 to 2012-10-02 16:00:00
Data columns (total 18 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 (temperature, 1) 7 non-null object
1 (temperature, 2) 7 non-null object
2 (temperature, 3) 6 non-null object
3 (temperature, 4) 7 non-null object
4 (temperature, 5) 7 non-null object
5 (temperature, 6) 6 non-null object
6 (temperature, 7) 7 non-null object
7 (temperature, 8) 7 non-null object
8 (temperature, 9) 7 non-null object
9 (gender, 1) 7 non-null object
10 (gender, 2) 7 non-null object
11 (gender, 3) 7 non-null object
12 (gender, 4) 0 non-null object
13 (gender, 5) 7 non-null object
14 (gender, 6) 6 non-null object
15 (gender, 7) 7 non-null object
16 (gender, 8) 7 non-null object
17 (gender, 9) 7 non-null object
dtypes: object(18)
memory usage: 1.0+ KB
###Markdown
pivot_table The `pivot_table` method on the other hand will only take the numerical columns into account.
###Code
time_series_table = data.pivot_table(index='date', columns='patient')
###Output
_____no_output_____
###Markdown
This dataframe has just $2 \times 9$ columns, two top level columns `'dose'` and `'temperature'`, and the `'patient'` ID as second level.
###Code
time_series_table.info()
###Output
<class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 7 entries, 2012-10-02 10:00:00 to 2012-10-02 16:00:00
Data columns (total 18 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 (dose, 1) 7 non-null float32
1 (dose, 2) 7 non-null float32
2 (dose, 3) 7 non-null float32
3 (dose, 4) 6 non-null float32
4 (dose, 5) 7 non-null float32
5 (dose, 6) 6 non-null float32
6 (dose, 7) 7 non-null float32
7 (dose, 8) 7 non-null float32
8 (dose, 9) 7 non-null float32
9 (temperature, 1) 7 non-null float32
10 (temperature, 2) 7 non-null float32
11 (temperature, 3) 6 non-null float32
12 (temperature, 4) 7 non-null float32
13 (temperature, 5) 7 non-null float32
14 (temperature, 6) 6 non-null float32
15 (temperature, 7) 7 non-null float32
16 (temperature, 8) 7 non-null float32
17 (temperature, 9) 7 non-null float32
dtypes: float32(18)
memory usage: 560.0 bytes
###Markdown
The motivation for this implementation is that `pivot_table` is mainly intended to aggregate data. For instance, the cumulative dose can be computed.
###Code
dose_table = data.pivot_table(index='date',
values=['dose'],
columns='patient',
aggfunc=np.sum,
margins=True,)
dose_table
###Output
_____no_output_____
###Markdown
Note that the `margins` argument results in the computation of totals for rows and columns (according to the aggregation function). Compute the maximum temperature for each gender/condition.
###Code
data.pivot_table(index=['gender', 'condition'],
values='temperature',
aggfunc=np.max,)
###Output
_____no_output_____
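###Markdown
Another practical difference, shown as a small illustrative check: `pivot` requires every index/column pair to be unique, while `pivot_table` aggregates duplicates (its `aggfunc` defaults to the mean). Reusing the `experiment` dataframe loaded above:
###Code
duplicated = pd.concat([experiment, experiment.head(1)])
# pivot_table aggregates the duplicated (date, patient) measurement instead of failing
duplicated.pivot_table(index='date', columns='patient', values='temperature').head()
# the equivalent pivot call would raise "Index contains duplicate entries, cannot reshape"
###Output
_____no_output_____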
###Markdown
Compute the total dose and the maximum temperature for each patient grouped by gender.
###Code
data.pivot_table(index=['gender', 'patient'],
values=['temperature', 'dose'],
aggfunc={
'temperature': np.max,
'dose': np.sum,
},)
###Output
_____no_output_____ |
ExploratoryDataAnalysis/EDA_Mapas.ipynb | ###Markdown
Leafmap Test - Choropleth Setup Installations
###Code
!pip install plotly==4.14.3
!pip install -U kaleido
!pip install geopandas
###Output
_____no_output_____
###Markdown
Imports
###Code
import plotly
import plotly.express as px
import geopandas as gpd
import json
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Data Collecting Loading data on registrants by UF
###Code
inscritos_path = '/content/drive/MyDrive/Colab Notebooks/IC_Data_Science_ENEM/Mapas/CSV/inscritos_uf.csv'
inscritos_uf = pd.read_csv(inscritos_path).drop(columns='Unnamed: 0')
inscritos_uf.head()
###Output
_____no_output_____
###Markdown
Loading data on dropouts by UF
###Code
desistentes_path = '/content/drive/MyDrive/Colab Notebooks/IC_Data_Science_ENEM/Mapas/CSV/desistentes_uf.csv'
desistentes_uf = pd.read_csv(desistentes_path).drop(columns='Unnamed: 0')
desistentes_uf.head()
###Output
_____no_output_____
###Markdown
Loading data on internet availability by UF
###Code
internet_path = '/content/drive/MyDrive/Colab Notebooks/IC_Data_Science_ENEM/Mapas/CSV/internet_uf.csv'
internet_uf = pd.read_csv(internet_path).drop(columns='Unnamed: 0')
internet_uf.head()
###Output
_____no_output_____
###Markdown
Loading Maps Collecting the geometry data from a GeoJSON
###Code
uf_json_path = '/content/drive/MyDrive/Colab Notebooks/IC_Data_Science_ENEM/Mapas/SHP Brasil/uf_brasil.json'
with open(uf_json_path) as f:
uf_brasil = json.load(f)
uf_brasil['features'][0].keys()
uf_brasil['features'][0]['properties']
###Output
_____no_output_____
###Markdown
Building choropleths Registrants by UF
###Code
colorscale = [(0.00, '#caf0f8'), ((1/6), '#caf0f8'),
((1/6), '#90e0ef'), ((2/6), '#90e0ef'),
((2/6), '#00b4d8'), ((3/6), '#00b4d8'),
((3/6), '#0077b6'), ((4/6), '#0077b6'),
((4/6), '#023e8a'), ((5/6), '#023e8a'),
((5/6), '#03045e'), (1.00, '#03045e')]
fig = px.choropleth(
inscritos_uf,
geojson=uf_brasil,
locations='CO_UF_PROVA',
color='Total',
range_color=(101, 2900001),
color_continuous_scale=colorscale,
featureidkey="properties.CD_GEOCUF",
projection='mercator',
title='Concentração de Inscritos do ENEM<br>por UF entre os anos de 2017 e 2019'
)
fig.update_geos(fitbounds="locations", visible=False)
fig.update_layout(
title=dict(font=dict(size=24)),
autosize=True,
margin={"r":0,"t":70,"l":0,"b":0},
)
fig.update_coloraxes(
colorbar=dict(
title=dict(text='Total\n', font=dict(size=20)),
tickfont=dict(size=18),
x=0.8
)
)
fig.show()
fig_name = 'inscritos_choropleth'
fig_folder = 'Inscritos'
fig_path = f'/content/drive/MyDrive/Colab Notebooks/IC_Data_Science_ENEM/Imagens/{fig_folder}/{fig_name}.png'
fig.write_image(fig_path)
###Output
_____no_output_____
###Markdown
Dropouts by UF
###Code
#colorscale = [(0.00, '#590d22'), ((1/6), '#590d22'),
# ((1/6), '#800f2f'), ((2/6), '#800f2f'),
# ((2/6), '#a4133c'), ((3/6), '#a4133c'),
# ((3/6), '#ff4d6d'), ((4/6), '#ff4d6d'),
# ((4/6), '#ff758f'), ((5/6), '#ff758f'),
# ((5/6), '#ff8fa3'), (1.00, '#ff8fa3')]
fig = px.choropleth(
desistentes_uf,
geojson=uf_brasil,
locations='CodEstado',
color='Média',
range_color=(desistentes_uf['Média'].min(), desistentes_uf['Média'].max()),
color_continuous_scale='YlOrRd',
featureidkey="properties.CD_GEOCUF",
projection='mercator',
title='Desistência de Inscritos do ENEM<br>por UF entre os anos de 2017 e 2019'
)
fig.update_geos(fitbounds="locations", visible=False)
fig.update_layout(
title=dict(font=dict(size=24)),
autosize=True,
margin={"r":0,"t":70,"l":0,"b":0},
)
fig.update_coloraxes(
colorbar=dict(
title=dict(text='Desistência\n', font=dict(size=20)),
ticksuffix='%',
tickfont=dict(size=18),
x=0.8
)
)
fig.show()
fig_name = 'desistentes_choropleth'
fig_folder = 'Desistencia'
fig_path = f'/content/drive/MyDrive/Colab Notebooks/IC_Data_Science_ENEM/Imagens/{fig_folder}/{fig_name}.png'
fig.write_image(fig_path)
###Output
_____no_output_____
###Markdown
Internet Availability by UF
###Code
colorscale = [(0.00, '#f1f9f6'), ((1/6), '#f1f9f6'),
((1/6), '#c3e4d7'), ((2/6), '#c3e4d7'),
((2/6), '#41aa83'), ((3/6), '#41aa83'),
((3/6), '#58a261'), ((4/6), '#58a261'),
((4/6), '#028e5a'), ((5/6), '#028e5a'),
((5/6), '#073d20'), (1.00, '#073d20')]
fig = px.choropleth(
internet_uf,
geojson=uf_brasil,
locations='CodEstado',
color='Porcentagem',
range_color=(5,65),
color_continuous_scale=colorscale,
featureidkey="properties.CD_GEOCUF",
projection='mercator',
title='Proporção de Inscritos do ENEM sem Internet em Casa<br>por Estado entre os anos de 2017 e 2019'
)
fig.update_geos(fitbounds="locations", visible=False)
fig.update_layout(
title=dict(font=dict(size=24)),
autosize=True,
margin={"r":0,"t":70,"l":0,"b":0},
)
fig.update_coloraxes(
colorbar=dict(
title=dict(text='Porcentagem\n', font=dict(size=20)),
ticksuffix='%',
tickfont=dict(size=18),
tickvals=[None, 15, 25, 35, 45, 55, None],
x=0.8
)
)
fig.show()
fig_name = 'internet_choropleth'
fig_folder = 'Internet'
fig_path = f'/content/drive/MyDrive/Colab Notebooks/IC_Data_Science_ENEM/Imagens/{fig_folder}/{fig_name}.png'
fig.write_image(fig_path)
###Output
_____no_output_____ |
docs/tutorial/stateful_layer.ipynb | ###Markdown
Training static Digital Backpropogation (DBP) variants with adaptive equalizer layers[](https://colab.research.google.com/github/remifan/commplax/blob/master/docs/tutorial/stateful_layer.ipynb)[](https://mybinder.org/v2/gh/remifan/commplax/HEAD?labpath=docs%2Ftutorial%2Fstateful_layer.ipynb) ``` ▲ ▲ │ │ ┌───────┐ ┌───┴───┐ ┌──────┐ ┌───┴────┐──────►│ DBP ├──►│ FOE ├─┬─►│ Conv ├──►│ MIMO ├───┬─┬─► └───────┘ └───┬───┘ │ └──────┘ └───┬────┘ │ │ ▲ │ │ ▲ │ │ │ │ adaptive │ │ adaptive │ │ backprop │ │ backprop │ │ │ │ └─────┘ │ └────────┘ │ │ │ │ └───────────────────────┴──────────────────────┘``` Install and import dependencies
###Code
# install Jax if not found
try:
import jax
except ModuleNotFoundError:
%pip install --upgrade "jax[cpu]"
# %pip install --upgrade "jax[cuda]" -f https://storage.googleapis.com/jax-releases/jax_releases.html # Note: wheels only available on linux.
# install commplax if not found
try:
import commplax
except ModuleNotFoundError:
%pip install --upgrade https://github.com/remifan/commplax/archive/master.zip
# install data api if not found
try:
import labptptm2
except ModuleNotFoundError:
%pip install https://github.com/remifan/LabPtPTm2/archive/master.zip
# install GDBP if not found
try:
import gdbp
except ModuleNotFoundError:
%pip install https://github.com/remifan/gdbp_study/archive/master.zip
import numpy as np
from tqdm.auto import tqdm
from functools import partial
import matplotlib.pyplot as plt
from commplax import util, comm
from gdbp import gdbp_base as gb, data as gdat, aux
###Output
WARNING:absl:No GPU/TPU found, falling back to CPU. (Set TF_CPP_MIN_LOG_LEVEL=0 and rerun for more info.)
###Markdown
Loading data from LabPtPTm2 datasets
###Code
# get 2 received waveforms@0dBm with different random sequence
# Note: see https://github.com/remifan/LabPtPTm2 for details
ds_train, ds_test = gdat.load(1, 0, 4, 1)[0], gdat.load(2, 0, 4, 1)[0]
###Output
_____no_output_____
###Markdown
Initializing models
###Code
# Note: see `https://github.com/remifan/gdbp_study/blob/master/gdbp/gdbp_base.py` for model definition
def init_models(data: gdat.Input, **kwargs):
''' make CDC and DBP's derivatives
      (all methods have a trainable R-filter)
cdc: static D-filter, no NLC
dbp: static D-filter, scalar manually optimized NLC factor
fdbp: static D-filter, static N-filter scaled by manually optimized NLC factor
edbp: static D-filter, tap-by-tap optimizable/trainable N-filter
gdbp: tap-by-tap optimizable/trainable D-filter and N-filter
'''
mode = kwargs.get('mode', 'train')
steps = kwargs.get('steps', 3)
dtaps = kwargs.get('dtaps', 261)
ntaps = kwargs.get('ntaps', 41)
rtaps = kwargs.get('rtaps', 61)
xi = kwargs.get('xi', 1.1) # optimal xi for FDBP
fdbp_init = partial(gb.fdbp_init, data.a, steps=steps)
model_init = partial(gb.model_init, data)
comm_conf = {'mode': mode, 'steps': steps, 'dtaps': dtaps, 'rtaps': rtaps} # common configurations
# init. func.| define model structure parameters and some initial values | define static modules | identifier
cdc = model_init({**comm_conf, 'ntaps': 1, 'init_fn': fdbp_init(xi=0.0)}, [('fdbp_0',)], name='CDC')
dbp = model_init({**comm_conf, 'ntaps': 1, 'init_fn': fdbp_init(xi=0.15)}, [('fdbp_0',)], name='DBP')
fdbp = model_init({**comm_conf, 'ntaps': ntaps, 'init_fn': fdbp_init(xi=xi)}, [('fdbp_0',)], name='FDBP')
edbp = model_init({**comm_conf, 'ntaps': ntaps, 'init_fn': fdbp_init(xi=xi)}, [('fdbp_0', r'DConv_\d')], name='EDBP')
gdbp = model_init({**comm_conf, 'ntaps': ntaps, 'init_fn': fdbp_init(xi=xi)}, [], name='GDBP')
return cdc, dbp, fdbp, edbp, gdbp
models_train = init_models(ds_train)
models_test = init_models(ds_test, mode='test')
###Output
_____no_output_____
###Markdown
Training and testing all models
###Code
results = []
for model_train, model_test in tqdm(zip(models_train, models_test), total=5, desc='sweep models'):
# use trained params of the 3rd last batch, as tailing samples are corrupted by CD
params_queue = [None] * 3
for _, p, _ in gb.train(model_train, ds_train, n_iter=2000):
params_queue.append(p)
params = params_queue.pop(0)
results.append(gb.test(model_test, params, ds_test, metric_fn=comm.qamqot_local)[0])
###Output
_____no_output_____
###Markdown
Results
###Code
import matplotlib.pyplot as plt
labels = ['CDC', 'DBP', 'FDBP', 'EDBP', 'GDBP']
colors = plt.cm.RdPu(np.linspace(0.2, 0.8, len(labels)))
fig = plt.figure(dpi=100)
for r, l, c in zip(results, labels, colors):
plt.plot(r['SNR'][:, 2], label=l, color=c) # averaged SNR of Pol. X and Pol. Y
plt.title('local SNR every 1e4 symbols')
plt.xlabel('symbol index')
plt.ylabel('SNR (dB)')
plt.ylim([14.6, 16.])
plt.legend(loc='lower right')
###Output
_____no_output_____ |
Python_Class/.ipynb_checkpoints/Class_6-checkpoint.ipynb | ###Markdown
Exercise ** 1. Write a function called squared that takes in one int and retrun the second power of it.**
###Code
def squared(x):
return(x**2)
squared(2)
squared(3)
###Output
_____no_output_____
###Markdown
** 2. Write a function called concatenate that concatenates two words**
###Code
def concatenate(x,y):
return(x+y)
3 + 2
'hello' +'world'
concatenate('hello', 'world')
###Output
_____no_output_____
###Markdown
** 3.Write a function called average that calculates the average of a list of ints**
###Code
number_3 = [1,2,3,4,5,6,7]
def average(x):
n = len(x)
total = sum(x)
return(total/n)
average(number_3)
def average(x):
n = len(x)
total = 0
for number in x:
total = total+number
return(total/n)
average(number_3)
###Output
_____no_output_____
###Markdown
** 4.Write a function called variance that calculates the variance of a list of numbers**
###Code
def variance(numbers):
n = len(numbers)
total = sum(numbers)
average = total/n
square_total = 0
for num in numbers:
square_total = square_total + (num -average)**2
return(square_total / n)
variance(number_3)
###Output
_____no_output_____
###Markdown
Lambda Function lambda function is a type of anonymous function which is defined by the **lambda** keyword Syntax**lambda** arguments:expression
###Code
double = lambda x:x*2
double(5)
mutiply = lambda a,b : a*b
mutiply(5,6)
###Output
_____no_output_____
###Markdown
Write a lambda function that adds up all three of its arguments
###Code
add_up = lambda a,b,c:a+b+c
add_up(1,2,3)
add_up(1,2,4)
###Output
_____no_output_____
###Markdown
Map **map** applies a function to all the items in an input list SYNTAX:map(function_to_apply, list_of_inputs)
###Code
## Print out a list with its squared value
items = [1 ,2 ,3 ,4 ,5 ,6]
## Oldest way
squared = []
for num in items:
squared.append(num**2)
## Advanced way -- List comprehenshion
squared = [num**2 for num in items]
squared
## map way
list(map(lambda x:x**2, items))
dict_a = [{'name':'python','points':10}
,{'name':'Java', 'points':9}]
list(map(lambda x: x['name'], dict_a))
list(map(lambda x: x['points'], dict_a))
list(map(lambda x: x['name']=='python', dict_a))
## Write a function that add up list_a and list_b using map
list_a = [1,2,3]
list_b = [4,5,6]
list(map(lambda x,y:x+y,list_a,list_b))
###Output
_____no_output_____
###Markdown
Filter filter returns only those element for which the function_object return true SYNTAX:filter(function_object, iterable_list)
###Code
list_a = [1,2,3,4,5,6]
list(filter(lambda x:x%2==0, list_a))
list(filter(lambda x:x%2==1, list_a))
list_a
###Output
_____no_output_____
###Markdown
Apply a lambda function to list_a that triples each of its elements
###Code
list(map(lambda x: x*3, list_a))
###Output
_____no_output_____
###Markdown
From list_a, keep only the elements that are multiples of 3
###Code
list(filter(lambda x: x%3==0, list_a))
###Output
_____no_output_____
###Markdown
Nested Function Like nested loops, we simply create a function using def inside another function to nest two functions
###Code
def f1():
print("Hello")
def f2():
print("world")
return f2()
f1()
def f1():
x = 1
def f2(a):
print(a+x)
f2(2)
f1()
###Output
3
###Markdown
Global & Local Variables
###Code
#local
def f1():
local = 5
return local
##Global
glob = 5
def f1():
print(glob)
f1()
###Output
5
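###Markdown
To rebind a global variable inside a function, the global statement is needed; otherwise the assignment would create a new local variable. A short illustrative example:
###Code
counter = 0
def increase():
    global counter   # without this line, counter = counter + 1 raises UnboundLocalError
    counter = counter + 1
increase()
increase()
print(counter)       # 2
###Output
_____no_output_____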
###Markdown
Closure A closure is an inner function that remembers the variables of its enclosing scope, so it can keep using them even after the outer function has returned.
###Code
def power(exponent):
def exponent_of(base):
return(base**exponent)
return exponent_of
square = power(2)
square(2)
triple = power(3)
triple(2)
def myfunc(n):
return lambda a:a*n
mydouble = myfunc(2)
mydouble(5)
###Output
_____no_output_____
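###Markdown
Each call to power builds an independent closure: the inner function keeps its own captured exponent even after power has returned.
###Code
cube = power(3)
print(square(4), cube(4))                  # 16 64
print(cube.__closure__[0].cell_contents)   # the captured exponent: 3
###Output
_____no_output_____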
###Markdown
Game: Cows and Bulls. The player guesses a 4-digit secret number; a bull is a correct digit in the correct position, and a cow is a correct digit that appears in the number but in the wrong position.
###Code
# Example round: secret 2345, guess 1234  ->  3 cows, 0 bulls
# To play, call cow_game() after running the definition below.
def cow_game():
number = '1380'
    cow = 0
    bull = 0
    def calculate_cows_and_bulls(guess_number):
cows = 0
bulls = 0
for i in range(0,4):
if guess_number[i] == number[i]:
bulls +=1
elif guess_number[i] in number:
cows+=1
return cows, bulls
    while bull < 4:
        user_number = input("Guess the 4-digit number: ")
        cow, bull = calculate_cows_and_bulls(user_number)
print('Cows: ', cow)
print("Bulls: ", bull)
if bull ==4:
print('Congratulations!')
break
###Output
_____no_output_____ |
Probabilistic Machine Learning/Introduction to Probabilistic Machine Learning/.ipynb_checkpoints/Linear in the parameters regression-checkpoint.ipynb | ###Markdown
- Dataset $\mathcal{D}=\{x_i,y_i\}^N_{i=1}$ of N pairs of inputs $x_i$ and targets $y_i$. These data could be measurements from an experiment.- Goal: predict the target $y_*$ associated with any arbitrary input $x_*$. This is known as a **regression** task in machine learning. Generate Dataset$$y_i = (x_i+1)^3+\epsilon_i$$where $\epsilon_i \sim \mathcal{N}(0,10^2)$, matching the noise scale ($10\times$ a standard normal draw) used in the code below.
###Code
x = np.linspace(-4,2,20)
y = (x+1)**3+10*np.random.normal(0,1,20)
plt.plot(x,y,'b.')
###Output
_____no_output_____
###Markdown
Model of the dataAssume that the dataset can be fit with a $M^{th}$ order polynomial (**polynomial regression**),$$f_w(x) = w_0+w_1x+w_2x^2+w_3x^3+...+w_Mx^M=\sum^M_{j=1}w_j\phi_j(x)$$The $w_j$ are the weights of the polynomial, the parameters of the model, and $\phi_j(x)$ is a basis function of our linear in the parameters model. Fitting model parameters via the least squares approach- Measure the quality of the fit to the training data.- For each train point, measure the squared error $e^2_i = (y_i-f(x_i))^2$.- Find the parameters that minimize the sum of squared errors:$$E(\mathbf{w})=\sum^N_{i=1}e^2_i=\|\mathbf{e}\|^2 = \mathbf{e}^\top\mathbf{e}=(\mathbf{y}-\mathbf{f})^\top(\mathbf{y}-\mathbf{f})$$where $\mathbf{y} = [y_1,...y_N]^\top$ is a vector that stacks the N training targets, $\mathbf{f}=[f_\mathbf{W}(x_1),...,f_\mathbf{w}(x_N)]^\top$ stacks the prediction evaluated at the N training inputs.Therefore, \begin{align}\mathbf{y}&=\mathbf{f}+\mathbf{e}=\mathbf{\Phi w}+\mathbf{e} \\\begin{pmatrix} y_1\\y_2\\\vdots\\y_N\end{pmatrix}&=\begin{pmatrix}1&x_1&x_1^2&...&x_1^M\\1&x_2&x_2^2&...&x_2^M\\\vdots&\vdots&\vdots&\cdots&\vdots\\1&x_N&x_N^2&...&x_N^M\end{pmatrix}\begin{pmatrix}w_0\\w_1\\\vdots\\w_M\end{pmatrix} + \mathbf{e}\end{align} The sum of squared errors is a convex function of $\mathbf{w}$: $$E(\mathbf{w})=(\mathbf{y}-\mathbf{\Phi w})^\top(\mathbf{y}-\mathbf{\Phi w})$$To minimize the errors, find the weight vector $\mathbf{\hat{w}}$ that sets the gradient with respect to the weights to zero, $$\frac{\partial E(\mathbf{w})}{\partial \mathbf{w}}=-2\mathbf{\Phi}^\top(\mathbf{y}-\mathbf{\Phi w})=2\mathbf{\Phi^\top\Phi w}-2\mathbf{\Phi}^\top \mathbf{y}=0$$The weight vector is $$\mathbf{\hat{w}}=(\mathbf{\Phi^\top\Phi})^{-1}\mathbf{\Phi^\top y}$$
###Code
def polynomialFit(x,y,order=3):
for i in range(order+1):
if i == 0:
Phi = x**i
else:
Phi = np.vstack((Phi,x**i))
Phi = Phi.T
if order == 0:
Phi = Phi.reshape(-1,1)
w = mm(mm(inv(mm(Phi.T,Phi)),Phi.T),y)
f = mm(Phi,w)
dif = y-f
err = mm(dif.T,dif)
return f,err,w,Phi
f,err,w,Phi = polynomialFit(x,y)
plt.plot(x,y,'b.')
plt.plot(x,f,'r-')
print(w) # ideal: 1,3,3,1
print(err)
###Output
[-3.72267566 3.06389909 3.16488099 1.13750016]
2094.571187080396
###Markdown
M-th order Polynomial
###Code
errlist = []
plt.figure(figsize=(20,20))
for i in range(21):
plt.subplot(7,3,i+1)
f,err,w,Phi = polynomialFit(x,y,i)
errlist.append(err)
plt.plot(x,y,'b.')
plt.plot(x,f,'r-')
plt.title('Order '+str(i)+': '+str(err))
plt.plot(np.arange(16),errlist[:16])
###Output
_____no_output_____
###Markdown
The fit becomes very unstable beyond roughly order 15. This is likely due to the numerical instability of explicitly inverting the ill-conditioned matrix $\mathbf{\Phi^\top\Phi}$. It can be mitigated by solving the normal equations with an LU decomposition, a Cholesky decomposition, or a QR decomposition. LU decomposition
###Code
import scipy
from scipy.linalg import lu_factor,lu_solve
def polynomialFitLU(x,y,order=3):
for i in range(order+1):
if i == 0:
Phi = x**i
else:
Phi = np.vstack((Phi,x**i))
Phi = Phi.T
if order == 0:
Phi = Phi.reshape(-1,1)
lu,piv = lu_factor(mm(Phi.T,Phi))
tmp = lu_solve((lu,piv),Phi.T)
w = mm(tmp,y)
f = mm(Phi,w)
dif = y-f
err = mm(dif.T,dif)
return f,err,w,Phi
errlistLU = []
plt.figure(figsize=(20,20))
for i in range(21):
plt.subplot(7,3,i+1)
f,err,w,Phi = polynomialFitLU(x,y,i)
errlistLU.append(err)
plt.plot(x,y,'b.')
plt.plot(x,f,'r-')
plt.title('Order '+str(i)+': '+str(err))
plt.plot(np.arange(21),errlistLU)
###Output
_____no_output_____
###Markdown
Cholesky decomposition
###Code
from scipy.linalg import cho_factor,cho_solve
def polynomialFitChol(x,y,order=3):
for i in range(order+1):
if i == 0:
Phi = x**i
else:
Phi = np.vstack((Phi,x**i))
Phi = Phi.T
if order == 0:
Phi = Phi.reshape(-1,1)
c,low = cho_factor(mm(Phi.T,Phi))
tmp = cho_solve((c,low),Phi.T)
w = mm(tmp,y)
f = mm(Phi,w)
dif = y-f
err = mm(dif.T,dif)
return f,err,w,Phi
errlistChol = []
plt.figure(figsize=(20,20))
for i in range(21):
plt.subplot(7,3,i+1)
    f,err,w,Phi = polynomialFitChol(x,y,i)
errlistChol.append(err)
plt.plot(x,y,'b.')
plt.plot(x,f,'r-')
plt.title('Order '+str(i)+': '+str(err))
plt.plot(np.arange(21),errlistChol)
###Output
_____no_output_____
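###Markdown
The note above also mentions QR decomposition. As an illustrative sketch (not part of the original comparison), the least-squares weights can be computed from a thin QR factorization of $\mathbf{\Phi}$ without ever forming $\mathbf{\Phi^\top\Phi}$:
###Code
def polynomialFitQR(x, y, order=3):
    # design matrix with columns [1, x, x^2, ..., x^order], matching the fits above
    Phi = np.vander(x, order + 1, increasing=True)
    # thin QR factorization: Phi = Q R with R upper triangular
    Q, R = np.linalg.qr(Phi)
    # solve R w = Q^T y, which is better conditioned than inverting Phi^T Phi
    w = np.linalg.solve(R, Q.T @ y)
    f = Phi @ w
    dif = y - f
    err = dif @ dif
    return f, err, w, Phi
f, err, w, Phi = polynomialFitQR(x, y)
print(w, err)
###Output
_____no_output_____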
###Markdown
Comparison between inverse, LU decomposition and Cholesky decomposition
###Code
plt.plot(np.arange(21),errlist)
plt.plot(np.arange(21),errlistLU)
plt.plot(np.arange(21),errlistChol)
plt.plot(np.arange(21),errlistLU)
plt.plot(np.arange(21),errlistChol)
###Output
_____no_output_____ |
explore_opensmile.ipynb | ###Markdown
Explore openSMILEThis notebook will explore the [openSMILE](https://audeering.github.io/opensmile-python/) Python interface to extract vocal features from audio. To run the notebook, the `opensmile` package needs to be installed: run a command prompt as administrator and then enter `pip install opensmile`. When FFmpeg is not installed, openSMILE expects audio files in mono PCM `.wav` format. It extracts different pre-defined feature sets (see this [link](https://coder.social/audeering/opensmile-python) for a comparison) that can be applied at three different levels: low-level descriptors, functionals, and low-level descriptor deltas. It is also possible to supply a custom configuration file for feature extraction.
###Code
import opensmile
import pandas as pd
import matplotlib.pyplot as plt
###Output
SoX could not be found!
If you do not have SoX, proceed here:
- - - http://sox.sourceforge.net/ - - -
If you do (or think that you should) have SoX, double-check your
path variables.
###Markdown
Single SpeakerAs a test case for a single speaker, a part of the victory speech by the current president of the United States Joe Biden will be used ([link](https://www.englishspeecheschannel.com/english-speeches/joe-biden-speech/) to full speech). This audio file is located at `/audio`.
###Code
extractor = opensmile.Smile(
feature_set=opensmile.FeatureSet.eGeMAPSv02,
feature_level=opensmile.FeatureLevel.LowLevelDescriptors
)
extractor.feature_names
features_single_df = extractor.process_file("audio/test_audio_single.wav")
print(features_single_df["F0semitoneFrom27.5Hz_sma3nz"])
def plot_feature_over_time(features_df, feature_name):
plot_df = features_df.reset_index()
x = pd.to_timedelta(plot_df["start"])
y = plot_df[feature_name]
plt.plot(x, y)
plt.show()
plot_feature_over_time(features_single_df, "F0semitoneFrom27.5Hz_sma3nz")
###Output
_____no_output_____
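###Markdown
The introduction mentions three feature levels. As a short sketch, the same feature set can also be extracted at the functionals level, which summarizes every low-level descriptor over the whole file into a single row of statistics (88 features for eGeMAPSv02):
###Code
functionals_extractor = opensmile.Smile(
    feature_set=opensmile.FeatureSet.eGeMAPSv02,
    feature_level=opensmile.FeatureLevel.Functionals
)
functionals_df = functionals_extractor.process_file("audio/test_audio_single.wav")
functionals_df.shape
###Output
_____no_output_____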
###Markdown
Multiple speakersAs a test case for multiple speakers, a part of a presidential debate in the United States 2020 will be used ([link](https://www.englishspeecheschannel.com/english-speeches/us-presidential-debates-2020/) to full speech). The audio file is located at `/audio`.
###Code
features_multi_df = extractor.process_file("audio/test_audio_multi.wav")
plot_feature_over_time(features_multi_df, "F0semitoneFrom27.5Hz_sma3nz")
###Output
_____no_output_____
###Markdown
Custom Feature SetsopenSMILE also allows the user to supply a custom set of features. These can be specified in a `.config` file (see this [link](https://audeering.github.io/opensmile/reference.htmlconfiguration-files) for documentation). In this file, several components can be stacked to extract features from the raw audio stream. In this file, 5 components are included:- `cFramer`: Splits the audio signal into frames- `cTransformFFT`: Applies the FFT to the frames- `cFFTmagphase`: Computes the magnitude and phase of the FFT signal- `cAcf`: Computes the ACF from the magnitude spectrum- `cPitchACF`: Computes the fundamental frequency (F0) from the ACFEach component receives the output of the previous component as input.
###Code
config_str = """
;;; Source
[componentInstances:cComponentManager]
instance[dataMemory].type=cDataMemory
\{\cm[source{?}:include external source]}
;;; Main section
[componentInstances:cComponentManager]
instance[framer].type = cFramer
instance[fft_trans].type = cTransformFFT
instance[fft_magphase].type = cFFTmagphase
instance[acf].type = cAcf
instance[pitch_acf].type = cPitchACF
;;; Split input signal into frames
[framer:cFramer]
reader.dmLevel = wave
writer.dmLevel = frames
copyInputName = 1
frameMode = fixed
frameSize = 0.02
frameStep = 0.02
frameCenterSpecial = left
noPostEOIprocessing = 1
;;; Apply FFT to frame signal
[fft_trans:cTransformFFT]
reader.dmLevel = frames
writer.dmLevel = fft_trans
;;; Compute magnitude and phase of FFT signal
[fft_magphase:cFFTmagphase]
reader.dmLevel = fft_trans
writer.dmLevel = fft_magphase
;;; Compute autocorrelation function from magnitude signal
[acf:cAcf]
reader.dmLevel = fft_magphase
writer.dmLevel = acf
;;; Compute voice pitch from ACF
[pitch_acf:cPitchACF]
reader.dmLevel = acf
writer.dmLevel = pitch_acf
F0 = 1
;;; Sink
\{\cm[sink{?}:include external sink]}
"""
with open('custom.conf', 'w') as file:
file.write(config_str)
extractor_custom = opensmile.Smile(
feature_set="custom.conf",
feature_level="pitch_acf"
)
features_custom_df = extractor_custom.process_file("audio/test_audio_multi.wav")
print(features_custom_df)
plot_feature_over_time(features_custom_df, "F0")
###Output
_____no_output_____ |
Big-Data-Clusters/CU1/Public/content/repair/tsg075-networkplugin-cni-failed-to-setup-pod.ipynb | ###Markdown
TSG075 - FailedCreatePodSandBox due to NetworkPlugin cni failed to set up pod=============================================================================Description-----------> Error: Warning FailedCreatePodSandBox 58m kubelet,> rasha-virtual-machine Failed create pod sandbox: rpc error: code => Unknown desc = failed to set up sandbox container> “b76dc0446642bf06ef91b331be55814795410d58807eeffddf1fe3b5c9c572c0”> network for pod “mssql-controller-hfvxr”: NetworkPlugin cni failed to> set up pod “mssql-controller-hfvxr\_test” network: open> /run/flannel/subnet.env: no such file or directory Normal> SandboxChanged 34m (x325 over 59m) kubelet, virtual-machine Pod> sandbox changed, it will be killed and re-created. Warning> FailedCreatePodSandBox 4m5s (x831 over 58m) kubelet, virtual-machine> (combined from similar events): Failed create pod sandbox: rpc error:> code = Unknown desc = failed to set up sandbox container> “bee7d4eb0a74a4937de687a31676887b0c324e88a528639180a10bdbc33ce008”> network for pod “mssql-controller-hfvxr”: NetworkPlugin cni failed to> set up pod “mssql-controller-hfvxr\_test” network: open> /run/flannel/subnet.env: no such file or directory Instantiate Kubernetes client
###Code
# Instantiate the Python Kubernetes client into 'api' variable
import os
try:
from kubernetes import client, config
from kubernetes.stream import stream
if "KUBERNETES_SERVICE_PORT" in os.environ and "KUBERNETES_SERVICE_HOST" in os.environ:
config.load_incluster_config()
else:
config.load_kube_config()
api = client.CoreV1Api()
print('Kubernetes client instantiated')
except ImportError:
from IPython.display import Markdown
display(Markdown(f'HINT: Use [SOP059 - Install Kubernetes Python module](../install/sop059-install-kubernetes-module.ipynb) to resolve this issue.'))
raise
###Output
_____no_output_____
###Markdown
Common functionsDefine helper functions used in this notebook.
###Code
# Define `run` function for transient fault handling, suggestions on error, and scrolling updates on Windows
import sys
import os
import re
import json
import platform
import shlex
import shutil
import datetime
from subprocess import Popen, PIPE
from IPython.display import Markdown
retry_hints = {}
error_hints = {}
install_hint = {}
first_run = True
rules = None
def run(cmd, return_output=False, no_output=False, retry_count=0):
"""
Run shell command, stream stdout, print stderr and optionally return output
"""
MAX_RETRIES = 5
output = ""
retry = False
global first_run
global rules
if first_run:
first_run = False
rules = load_rules()
# shlex.split is required on bash and for Windows paths with spaces
#
cmd_actual = shlex.split(cmd)
# Store this (i.e. kubectl, python etc.) to support binary context aware error_hints and retries
#
user_provided_exe_name = cmd_actual[0].lower()
# When running python, use the python in the ADS sandbox ({sys.executable})
#
if cmd.startswith("python "):
cmd_actual[0] = cmd_actual[0].replace("python", sys.executable)
# On Mac, when ADS is not launched from terminal, LC_ALL may not be set, which causes pip installs to fail
# with:
#
# UnicodeDecodeError: 'ascii' codec can't decode byte 0xc5 in position 4969: ordinal not in range(128)
#
# Setting it to a default value of "en_US.UTF-8" enables pip install to complete
#
if platform.system() == "Darwin" and "LC_ALL" not in os.environ:
os.environ["LC_ALL"] = "en_US.UTF-8"
    # To aid supportability, determine which binary file will actually be executed on the machine
#
which_binary = None
# Special case for CURL on Windows. The version of CURL in Windows System32 does not work to
# get JWT tokens, it returns "(56) Failure when receiving data from the peer". If another instance
# of CURL exists on the machine use that one. (Unfortunately the curl.exe in System32 is almost
# always the first curl.exe in the path, and it can't be uninstalled from System32, so here we
# look for the 2nd installation of CURL in the path)
if platform.system() == "Windows" and cmd.startswith("curl "):
path = os.getenv('PATH')
for p in path.split(os.path.pathsep):
p = os.path.join(p, "curl.exe")
if os.path.exists(p) and os.access(p, os.X_OK):
if p.lower().find("system32") == -1:
cmd_actual[0] = p
which_binary = p
break
# Find the path based location (shutil.which) of the executable that will be run (and display it to aid supportability), this
# seems to be required for .msi installs of azdata.cmd/az.cmd. (otherwise Popen returns FileNotFound)
#
# NOTE: Bash needs cmd to be the list of the space separated values hence shlex.split.
#
if which_binary == None:
which_binary = shutil.which(cmd_actual[0])
if which_binary == None:
if user_provided_exe_name in install_hint and install_hint[user_provided_exe_name] is not None:
display(Markdown(f'HINT: Use [{install_hint[user_provided_exe_name][0]}]({install_hint[user_provided_exe_name][1]}) to resolve this issue.'))
raise FileNotFoundError(f"Executable '{cmd_actual[0]}' not found in path (where/which)")
else:
cmd_actual[0] = which_binary
start_time = datetime.datetime.now().replace(microsecond=0)
print(f"START: {cmd} @ {start_time} ({datetime.datetime.utcnow().replace(microsecond=0)} UTC)")
print(f" using: {which_binary} ({platform.system()} {platform.release()} on {platform.machine()})")
print(f" cwd: {os.getcwd()}")
# Command-line tools such as CURL and AZDATA HDFS commands output
# scrolling progress bars, which causes Jupyter to hang forever, to
# workaround this, use no_output=True
#
# Work around a infinite hang when a notebook generates a non-zero return code, break out, and do not wait
#
wait = True
try:
if no_output:
p = Popen(cmd_actual)
else:
p = Popen(cmd_actual, stdout=PIPE, stderr=PIPE, bufsize=1)
with p.stdout:
for line in iter(p.stdout.readline, b''):
line = line.decode()
if return_output:
output = output + line
else:
if cmd.startswith("azdata notebook run"): # Hyperlink the .ipynb file
regex = re.compile(' "(.*)"\: "(.*)"')
match = regex.match(line)
if match:
if match.group(1).find("HTML") != -1:
display(Markdown(f' - "{match.group(1)}": "{match.group(2)}"'))
else:
display(Markdown(f' - "{match.group(1)}": "[{match.group(2)}]({match.group(2)})"'))
wait = False
break # otherwise infinite hang, have not worked out why yet.
else:
print(line, end='')
if rules is not None:
apply_expert_rules(line)
if wait:
p.wait()
except FileNotFoundError as e:
if install_hint is not None:
display(Markdown(f'HINT: Use {install_hint} to resolve this issue.'))
raise FileNotFoundError(f"Executable '{cmd_actual[0]}' not found in path (where/which)") from e
exit_code_workaround = 0 # WORKAROUND: azdata hangs on exception from notebook on p.wait()
if not no_output:
for line in iter(p.stderr.readline, b''):
line_decoded = line.decode()
# azdata emits a single empty line to stderr when doing an hdfs cp, don't
# print this empty "ERR:" as it confuses.
#
if line_decoded == "":
continue
print(f"STDERR: {line_decoded}", end='')
if line_decoded.startswith("An exception has occurred") or line_decoded.startswith("ERROR: An error occurred while executing the following cell"):
exit_code_workaround = 1
if user_provided_exe_name in error_hints:
for error_hint in error_hints[user_provided_exe_name]:
if line_decoded.find(error_hint[0]) != -1:
display(Markdown(f'HINT: Use [{error_hint[1]}]({error_hint[2]}) to resolve this issue.'))
if rules is not None:
apply_expert_rules(line_decoded)
if user_provided_exe_name in retry_hints:
for retry_hint in retry_hints[user_provided_exe_name]:
if line_decoded.find(retry_hint) != -1:
if retry_count < MAX_RETRIES:
print(f"RETRY: {retry_count} (due to: {retry_hint})")
retry_count = retry_count + 1
output = run(cmd, return_output=return_output, retry_count=retry_count)
if return_output:
return output
else:
return
elapsed = datetime.datetime.now().replace(microsecond=0) - start_time
# WORKAROUND: We avoid infinite hang above in the `azdata notebook run` failure case, by inferring success (from stdout output), so
# don't wait here, if success known above
#
if wait:
if p.returncode != 0:
raise SystemExit(f'Shell command:\n\n\t{cmd} ({elapsed}s elapsed)\n\nreturned non-zero exit code: {str(p.returncode)}.\n')
else:
if exit_code_workaround !=0 :
raise SystemExit(f'Shell command:\n\n\t{cmd} ({elapsed}s elapsed)\n\nreturned non-zero exit code: {str(exit_code_workaround)}.\n')
print(f'\nSUCCESS: {elapsed}s elapsed.\n')
if return_output:
return output
def load_json(filename):
with open(filename, encoding="utf8") as json_file:
return json.load(json_file)
def load_rules():
try:
# Load this notebook as json to get access to the expert rules in the notebook metadata.
#
j = load_json("tsg075-networkplugin-cni-failed-to-setup-pod.ipynb")
except:
pass # If the user has renamed the book, we can't load ourself. NOTE: Is there a way in Jupyter, to know your own filename?
else:
if "metadata" in j and \
"azdata" in j["metadata"] and \
"expert" in j["metadata"]["azdata"] and \
"rules" in j["metadata"]["azdata"]["expert"]:
rules = j["metadata"]["azdata"]["expert"]["rules"]
rules.sort() # Sort rules, so they run in priority order (the [0] element). Lowest value first.
# print (f"EXPERT: There are {len(rules)} rules to evaluate.")
return rules
def apply_expert_rules(line):
global rules
for rule in rules:
# rules that have 9 elements are the injected (output) rules (the ones we want). Rules
# with only 8 elements are the source (input) rules, which are not expanded (i.e. TSG029,
# not ../repair/tsg029-nb-name.ipynb)
if len(rule) == 9:
notebook = rule[1]
cell_type = rule[2]
output_type = rule[3] # i.e. stream or error
output_type_name = rule[4] # i.e. ename or name
output_type_value = rule[5] # i.e. SystemExit or stdout
details_name = rule[6] # i.e. evalue or text
expression = rule[7].replace("\\*", "*") # Something escaped *, and put a \ in front of it!
# print(f"EXPERT: If rule '{expression}' satisfied', run '{notebook}'.")
if re.match(expression, line, re.DOTALL):
# print("EXPERT: MATCH: name = value: '{0}' = '{1}' matched expression '{2}', therefore HINT '{4}'".format(output_type_name, output_type_value, expression, notebook))
match_found = True
display(Markdown(f'HINT: Use [{notebook}]({notebook}) to resolve this issue.'))
print('Common functions defined successfully.')
# Hints for binary (transient fault) retry, (known) error and install guide
#
retry_hints = {'kubectl': ['A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond']}
error_hints = {'kubectl': [['no such host', 'TSG010 - Get configuration contexts', '../monitor-k8s/tsg010-get-kubernetes-contexts.ipynb'], ['no such host', 'TSG011 - Restart sparkhistory server', '../repair/tsg011-restart-sparkhistory-server.ipynb'], ['No connection could be made because the target machine actively refused it', 'TSG056 - Kubectl fails with No connection could be made because the target machine actively refused it', '../repair/tsg056-kubectl-no-connection-could-be-made.ipynb']]}
install_hint = {'kubectl': ['SOP036 - Install kubectl command line interface', '../install/sop036-install-kubectl.ipynb']}
###Output
_____no_output_____
###Markdown
ResolutionThis issue has been seen on single-node kubeadm installations when the host machine has been rebooted. To resolve the issue, delete the kube-flannel and coredns pods. The higher-level Kubernetes objects will re-create these pods. The following code cells will do this for you: Verify there are flannel and coredns pods in this Kubernetes cluster
###Code
run(f"kubectl get pods -n kube-system")
###Output
_____no_output_____
###Markdown
Delete them, so they can be re-created by the higher level Kubernetes objects
###Code
pod_list = api.list_namespaced_pod("kube-system")
for pod in pod_list.items:
if pod.metadata.name.find("kube-flannel-ds") != -1:
print(f"Deleting pod: {pod.metadata.name}")
run(f"kubectl delete pod/{pod.metadata.name} -n kube-system")
if pod.metadata.name.find("coredns-") != -1:
print(f"Deleting pod: {pod.metadata.name}")
run(f"kubectl delete pod/{pod.metadata.name} -n kube-system")
###Output
_____no_output_____
###Markdown
Verify the flannel and coredns pods have been re-created
###Code
run(f"kubectl get pods -n kube-system")
print('Notebook execution complete.')
###Output
_____no_output_____ |
WineReviewApp.ipynb | ###Markdown
Data Science ProjectWilliam Ponton2.17.19
###Code
# Import modules
import numpy as np
import pandas as pd
import seaborn as sns
import sklearn # scikit-learn
import requests
from bs4 import BeautifulSoup
# Import modules
%matplotlib inline
import matplotlib.pyplot as plt
df = pd.read_csv("data/winemag-data_first150k.csv")
df.head()
df.dtypes
%%time
plt.plot(df.index, df["price"])
df2 = pd.read_csv("data/winemag-data_first150k.csv")
df2.head()
df2.tail()
%%time
plt.plot(df2.index, df2["price"])
# Setting plot appearance
# See here for more options: https://matplotlib.org/users/customizing.html
%config InlineBackend.figure_format='retina'
sns.set() # Revert to matplotlib defaults
plt.rcParams['figure.figsize'] = (9, 6)
plt.rcParams['axes.labelpad'] = 10
sns.set_style("darkgrid")
# sns.set_context("poster", font_scale=1.0)
%%time
plt.plot(df2.index, df2["price"])
###Output
CPU times: user 68.1 ms, sys: 13 ms, total: 81 ms
Wall time: 98.2 ms
|
star_formation/bonnor_ebert.ipynb | ###Markdown
Introduction to the Interstellar Medium Jonathan Williams Figures 9.1, 9.2, and 9.3: Bonnor-Ebert profiles and mass
###Code
import numpy as np
import matplotlib.pyplot as plt
import scipy.integrate as integrate
import scipy.interpolate as interpolate
%matplotlib inline
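# lane_emden_integrate solves the isothermal Lane-Emden (Emden-Chandrasekhar) equation,
#   (1/x^2) d/dx (x^2 dy/dx) = exp(-y),  with y(0) = y'(0) = 0,
# rewritten below as y'' = exp(-y) - 2*y'/x and stepped outward on a logarithmic grid in x.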
def lane_emden_integrate(x):
# solve Lane-Emden equation
nsteps = x.size
y = np.zeros(nsteps)
yp = np.zeros(nsteps)
yp2 = np.zeros(nsteps)
# initial condition on d2y/dx2
# (verified that solutions are insensitive to this beyond x = 2)
yp2[0] = 1/3
# integrate outwards step by step
# (logarithmic steps)
for i in np.arange(1,nsteps):
dx = x[i] - x[i-1]
y[i] = y[i-1] + yp[i-1]*dx + yp2[i-1]*dx**2/2
yp[i] = yp[i-1] + yp2[i-1]*dx
yp2[i] = np.exp(-y[i]) - 2*yp[i]/x[i]
return(y,yp)
def plot_profiles():
# plot Bonnor-Ebert density profile
nsteps = 1000
xmax = 1e4
x = np.logspace(-2, np.log10(xmax), nsteps)
y,yp = lane_emden_integrate(x)
# scale for various physical parameters
r0 = 1.243e3 # radial scale factor in pc
fig = plt.figure(figsize=(6,4))
ax = fig.add_subplot(111)
ax.set_xlim(0.002,1.0)
ax.set_ylim(1e8,1e13)
ax.set_xscale('log')
ax.set_yscale('log')
ax.set_xlabel(r'${\rm Radius}\ {\rm (pc)}$', fontsize=14)
ax.set_ylabel(r'${\rm H_2\ density}\ {\rm (m^{-3})}$', fontsize=14)
T = 10 # isothermal temperature (K)
n_ext = 8e9/T # lower density limit from pressure equilibrium
n0 = np.array([1,0.2,5,25,125])*14.2*n_ext
ls = ['-','-','--','--','--']
lw = [2,2,2,2,2]
alpha = [1,0.3,0.3,0.3,0.3]
for i in range(len(n0)):
r = x * r0 * np.sqrt(T/n0[i])
n = n0[i] / np.exp(y)
if i == 0:
ax.plot(r, n, linestyle=ls[i], color='k', lw=lw[i], alpha=alpha[i], label='Critical')
else:
ax.plot(r, n, linestyle=ls[i], color='k', lw=lw[i], alpha=alpha[i])
# singular isothermal sphere
r = np.logspace(-3,1,2)
ax.plot(r,3.09e6*T/r**2, 'k--', lw=2, label='Singular')
ax.plot([0.2,10], [n_ext,n_ext], 'k:', label='Ambient density')
ax.legend()
ax.text(0.0027, 2.7e9, 'Stable', fontsize=10)
ax.text(0.0027, 6.9e10, 'Unstable', fontsize=10)
x_labels = ['0.01','0.1','1']
x_loc = np.array([float(x) for x in x_labels])
ax.set_xticks(x_loc)
ax.set_xticklabels(x_labels)
fig.tight_layout()
plt.savefig('bonnor_ebert_profiles.pdf')
def plot_mass():
# plot mass for given P_ext
nsteps = 10000
xmax = 1e4
x = np.logspace(-4, np.log10(xmax), nsteps)
y,yp = lane_emden_integrate(x)
T = 10 # isothermal temperature (K)
r0 = 1.243e3 # radial scale factor in pc
n_ext = 8e9/T # exterior density in m-3
n0 = np.logspace(np.log10(1.1*n_ext),12,300)
ndens = n0.size
r_ext = np.zeros(ndens)
m_ext = np.zeros(ndens)
m_tot = np.zeros(ndens)
for i in range(ndens):
y_ext = np.log(n0[i]/n_ext)
j = np.where(np.abs(y/y_ext - 1) < 0.1)[0]
ycubic = interpolate.UnivariateSpline(x[j],y[j]-y_ext)
x_ext = ycubic.roots()[0]
k = np.where(x < x_ext)[0]
m_ext[i] = 1.19e3 * integrate.simps(x[k]**2 / np.exp(y[k]), x[k]) * np.sqrt(T**3/n0[i])
# max pressure contrast
Pratio = n0/n_ext
imax = m_ext.argmax()
m_ext_max = m_ext[imax]
Pratio_max = Pratio[imax]
fig = plt.figure(figsize=(6,4))
ax1 = fig.add_subplot(111)
ax1.set_xlim(1,3e2)
ax1.set_xscale('log')
#ax1.set_yscale('log')
ax1.set_xlabel(r'$\rho_{\rm cen}/\rho_{\rm amb}$', fontsize=14)
ax1.set_ylim(0,6.5)
#ax1.set_yscale('log')
ax1.set_ylabel(r'${\rm Mass}\ (M_\odot)$', fontsize=14)
#mplot = ax1.plot(Pratio, m_ext, 'k-', lw=3, label='Mass')
ax1.plot(Pratio[0:imax-1], m_ext[0:imax-1], 'k-', lw=2, alpha=0.3, zorder=99)
ax1.plot(Pratio[imax+1:], m_ext[imax+1:], 'k--', lw=2, alpha=0.3, zorder=99)
ax1.plot(Pratio_max, m_ext_max, 'ko', markersize=4, zorder=999)
ax1.text(2.05, 3.2, 'Stable', fontsize=12, rotation=58, backgroundcolor='white', zorder=2)
ax1.text(50, 4.6, 'Unstable', fontsize=12, rotation=-21, zorder=2)
ax1.text(9.5, m_ext_max+0.15, r'$M_{\rm BE}$', fontsize=12)
# SIS
m_SIS = 1.06 * np.sqrt(1e10/n_ext) * (T/10)**1.5
ax1.plot([1,300], [m_SIS,m_SIS], 'k:', zorder=1)
ax1.text(150, m_SIS-0.33, r'$M_{\rm SIS}$', fontsize=12)
print(' M_SIS = {0:5.2f} Msun'.format(m_SIS))
print(' M_max = {0:5.2f} Msun'.format(m_ext_max))
print('M_max/M_SIS = {0:4.2f}'.format(m_ext_max/m_SIS))
print(' P_0/P_ext = {0:5.2f}'.format(Pratio_max))
ax1.plot([Pratio_max,Pratio_max], [0,10], 'k:')
#x_labels = ['1','10','100']
x_labels = ['1','3','10','30','100','300']
x_loc = np.array([float(x) for x in x_labels])
ax1.set_xticks(x_loc)
ax1.set_xticklabels(x_labels)
fig.tight_layout()
plt.savefig('bonnor_ebert_mass.pdf')
def plot_b68():
fig = plt.figure(figsize=(6,4))
ax = fig.add_subplot(111)
# observed profile
# data from Alves et al. Nature 2001
# Figure 2 digitized using https://apps.automeris.io/wpd/
r, Av = np.genfromtxt('Alves_Av.txt', unpack=True, delimiter=',')
ax.plot(r, Av, 'ko', markersize=3, label='Observations')
nsteps = 10000
xmax = 10
x = np.logspace(-2, np.log10(xmax), nsteps)
y, yp = lane_emden_integrate(x)
# set outer boundary
# (note that I find a value a bit lower than in Alves et al...)
xmax = 4.5
y[x > xmax] = 10
b = np.logspace(-2, np.log10(xmax)+0.5, 1000)
Av = np.zeros(b.size)
yinterp = interpolate.interp1d(x, y, kind='cubic',
bounds_error=False, fill_value='extrapolate')
for i in range(b.size):
b1 = b[i]
xpath = np.sqrt(x**2 + b1**2)
Av[i] = integrate.simps(np.exp(-yinterp(xpath)), xpath)
# manually scale axes to match Av
# this has physical significance but that's the point of the paper
# (this illustrative plot is only to show that an observed core does indeed look like a BE sphere)
Ascale = 35/Av.max()
Av *= Ascale
b *= 26
ax.plot(b, Av, 'k-', lw=2, alpha=0.5, label='Bonnor Ebert profile')
ax.set_xlim(8,150)
ax.set_ylim(0.3,45)
ax.set_xscale('log')
ax.set_yscale('log')
ax.set_xlabel("Projected radius ('')", fontsize=14)
ax.set_ylabel(r"${\rm A_V\ (mag)}$", fontsize=14)
ax.legend(loc=3, bbox_to_anchor=(0.04, 0.05))
ax.text(0.24, 0.24, 'B68 visual extinction', fontsize=12, ha='center', transform = ax.transAxes)
x_labels = ['10','30','100']
x_loc = np.array([float(x) for x in x_labels])
ax.set_xticks(x_loc)
ax.set_xticklabels(x_labels)
y_labels = ['1','3','10','30']
y_loc = np.array([float(y) for y in y_labels])
ax.set_yticks(y_loc)
ax.set_yticklabels(y_labels)
fig.tight_layout()
plt.savefig('b68_profile.pdf')
# Figure 9.1
plot_profiles()
# Figure 9.2
plot_mass()
# Figure 9.3
plot_b68()
###Output
_____no_output_____ |
Optuna.docset/Contents/Resources/Documents/_downloads/4239c2fc38c810c87be56aa03d0933e6/002_configurations.ipynb | ###Markdown
Pythonic Search SpaceFor hyperparameter sampling, Optuna provides the following features:- :func:`optuna.trial.Trial.suggest_categorical` for categorical parameters- :func:`optuna.trial.Trial.suggest_int` for integer parameters- :func:`optuna.trial.Trial.suggest_float` for floating point parametersWith optional arguments of ``step`` and ``log``, we can discretize or take the logarithm ofinteger and floating point parameters.
###Code
import optuna
def objective(trial):
# Categorical parameter
optimizer = trial.suggest_categorical("optimizer", ["MomentumSGD", "Adam"])
# Integer parameter
num_layers = trial.suggest_int("num_layers", 1, 3)
# Integer parameter (log)
num_channels = trial.suggest_int("num_channels", 32, 512, log=True)
# Integer parameter (discretized)
num_units = trial.suggest_int("num_units", 10, 100, step=5)
# Floating point parameter
dropout_rate = trial.suggest_float("dropout_rate", 0.0, 1.0)
# Floating point parameter (log)
learning_rate = trial.suggest_float("learning_rate", 1e-5, 1e-2, log=True)
# Floating point parameter (discretized)
drop_path_rate = trial.suggest_float("drop_path_rate", 0.0, 1.0, step=0.1)
###Output
_____no_output_____
###Markdown
Defining Parameter SpacesIn Optuna, we define search spaces using familiar Python syntax including conditionals and loops.Also, you can use branches or loops depending on the parameter values.For more various use, see `examples `_. - Branches:
###Code
import sklearn.ensemble
import sklearn.svm
def objective(trial):
classifier_name = trial.suggest_categorical("classifier", ["SVC", "RandomForest"])
if classifier_name == "SVC":
svc_c = trial.suggest_float("svc_c", 1e-10, 1e10, log=True)
classifier_obj = sklearn.svm.SVC(C=svc_c)
else:
rf_max_depth = trial.suggest_int("rf_max_depth", 2, 32, log=True)
classifier_obj = sklearn.ensemble.RandomForestClassifier(max_depth=rf_max_depth)
###Output
_____no_output_____ |
coursera-dlaicourse-handwriting-recognition.ipynb | ###Markdown
Exercise 2In the course you learned how to do classification using Fashion MNIST, a data set containing items of clothing. There's another, similar dataset called MNIST which has items of handwriting -- the digits 0 through 9.Write an MNIST classifier that trains to 99% accuracy or above, and does it without a fixed number of epochs -- i.e. you should stop training once you reach that level of accuracy.Some notes:1. It should succeed in less than 10 epochs, so it is okay to change epochs to 10, but nothing larger2. When it reaches 99% or greater it should print out the string "Reached 99% accuracy so cancelling training!"3. If you add any additional variables, make sure you use the same names as the ones used in the class
###Code
import tensorflow as tf
print(f"Tensor Flow Version: {tf.__version__}")
from os import path, getcwd, chdir
# DO NOT CHANGE THE LINE BELOW. If you are developing in a local
# environment, then grab mnist.npz from the Coursera Jupyter Notebook
# and place it inside a local folder and edit the path to that location
# path = f"{getcwd()}/../tmp2/mnist.npz"
# GRADED FUNCTION: train_mnist
def train_mnist():
class FitCallback(tf.keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs={}):
if((logs.get('accuracy') is not None and logs.get('accuracy') > 0.99) or
(logs.get('acc') is not None and logs.get('acc') > 0.99)):
print("\nReached 99% accuracy so cancelling training!")
self.model.stop_training = True
mnist = tf.keras.datasets.mnist
(x_train, y_train),(x_test, y_test) = mnist.load_data() # (path=path)
fit_callback = FitCallback()
x_train = x_train / 255.0
x_test = x_test / 255.0
model = tf.keras.models.Sequential([
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(1024, activation=tf.nn.relu),
tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
# model fitting
history = model.fit(
x_train, y_train, epochs=10, callbacks=[fit_callback]
)
train_mnist()
###Output
_____no_output_____ |
Day_003-1_build_dataframe_from_scratch.ipynb | ###Markdown
Method 1
###Code
import pandas as pd

data = {'weekday': ['Sun', 'Sun', 'Mon', 'Mon'],
'city': ['Austin', 'Dallas', 'Austin', 'Dallas'],
'visitor': [139, 237, 326, 456]}
visitors_1 = pd.DataFrame(data)
print(visitors_1)
###Output
city visitor weekday
0 Austin 139 Sun
1 Dallas 237 Sun
2 Austin 326 Mon
3 Dallas 456 Mon
###Markdown
Method 2
###Code
cities = ['Austin', 'Dallas', 'Austin', 'Dallas']
weekdays = ['Sun', 'Sun', 'Mon', 'Mon']
visitors = [139, 237, 326, 456]
list_labels = ['city', 'weekday', 'visitor']
list_cols = [cities, weekdays, visitors]
zipped = list(zip(list_labels, list_cols))
visitors_2 = pd.DataFrame(dict(zipped))
print(visitors_2)
###Output
city visitor weekday
0 Austin 139 Sun
1 Dallas 237 Sun
2 Austin 326 Mon
3 Dallas 456 Mon
###Markdown
A simple example: suppose you want to use pandas to compute the average number of visitors for each weekday in the data above. Through Google you found https://stackoverflow.com/questions/30482071/how-to-calculate-mean-values-grouped-on-another-column-in-pandas. When you want to test it, you can use visitors_1, the dataset with only 4 rows, to try out the code.
###Code
visitors_1.groupby(by="weekday")['visitor'].mean()
###Output
_____no_output_____ |
ann/.ipynb_checkpoints/Tensorflow_2_0_Fashion_MNIST_(ANN) (1)-checkpoint.ipynb | ###Markdown
Image source: https://www.kaggle.com/ Fashion MNIST An MNIST-like dataset of 70,000 28x28 labeled fashion images ContextFashion-MNIST is a dataset of Zalando's article images—consisting of a training set of 60,000 examples and a test set of 10,000 examples. Each example is a 28x28 grayscale image, associated with a label from 10 classes. Zalando intends Fashion-MNIST to serve as a direct drop-in replacement for the original MNIST dataset for benchmarking machine learning algorithms. It shares the same image size and structure of training and testing splits.The original MNIST dataset contains a lot of handwritten digits. Members of the AI/ML/Data Science community love this dataset and use it as a benchmark to validate their algorithms. In fact, MNIST is often the first dataset researchers try. "If it doesn't work on MNIST, it won't work at all", they said. "Well, if it does work on MNIST, it may still fail on others."Zalando seeks to replace the original MNIST dataset ContentEach image is 28 pixels in height and 28 pixels in width, for a total of 784 pixels in total. Each pixel has a single pixel-value associated with it, indicating the lightness or darkness of that pixel, with higher numbers meaning darker. This pixel-value is an integer between 0 and 255. The training and test data sets have 785 columns. The first column consists of the class labels (see above), and represents the article of clothing. The rest of the columns contain the pixel-values of the associated image.To locate a pixel on the image, suppose that we have decomposed x as x = i * 28 + j, where i and j are integers between 0 and 27. The pixel is located on row i and column j of a 28 x 28 matrix.For example, pixel31 indicates the pixel that is in the fourth column from the left, and the second row from the top, as in the ascii-diagram below. LabelsEach training and test example is assigned to one of the following labels:0 T-shirt/top1 Trouser2 Pullover3 Dress4 Coat5 Sandal6 Shirt7 Sneaker8 Bag9 Ankle boot TL;DREach row is a separate imageColumn 1 is the class label.Remaining columns are pixel numbers (784 total).Each value is the darkness of the pixel (1 to 255)https://www.kaggle.com/zalando-research/fashionmnist Installing dependencies
###Code
# no need to install it on Colab
#!pip install tensorflow-gpu==2.0.0.alpha0
###Output
_____no_output_____
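###Markdown
As a quick illustration of the pixel-index convention described above, decomposing x = i * 28 + j recovers the row and column of a flattened pixel (a tiny sanity-check snippet that simply restates the example from the text):
###Code
# pixel index x = i*28 + j  ->  i = x // 28 (row), j = x % 28 (column), both 0-based
i, j = divmod(31, 28)
print(i, j)  # 1 3 -> second row from the top, fourth column from the left
###Output
_____no_output_____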
###Markdown
Imports
###Code
import numpy as np
import datetime
import tensorflow as tf
from tensorflow.keras.datasets import fashion_mnist
print("Tensorflow Version: " + tf.__version__)
###Output
Tensorflow Version: 2.3.0
###Markdown
Data Preprocessing Loading the dataset
###Code
(X_train, y_train), (X_test, y_test) = fashion_mnist.load_data()
###Output
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/train-labels-idx1-ubyte.gz
32768/29515 [=================================] - 0s 0us/step
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/train-images-idx3-ubyte.gz
26427392/26421880 [==============================] - 0s 0us/step
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/t10k-labels-idx1-ubyte.gz
8192/5148 [===============================================] - 0s 0us/step
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/t10k-images-idx3-ubyte.gz
4423680/4422102 [==============================] - 0s 0us/step
###Markdown
Normalizing the imagesWe divide each pixel of the image in the training and test sets by the maximum number of pixels (255).In this way each pixel will be in the range [0, 1]. By normalizing images we make sure that our model (ANN) trains faster.
###Code
X_train = X_train / 255.0
X_test = X_test / 255.0
###Output
_____no_output_____
###Markdown
Reshaping the datasetSince we are building a fully connected network, we reshape the training set and the test set to be into the vector format.
###Code
# Since each image's dimension is 28x28, we reshape the full dataset to [-1 (all elements), height * width]
def reshape(data):
return data.reshape(-1, 28*28)
X_train = reshape(X_train)
X_test = reshape(X_test)
###Output
_____no_output_____
###Markdown
Building an Artificial Neural Network Defining the modelSimply define an object of the Sequential model.
###Code
model = tf.keras.models.Sequential()
###Output
_____no_output_____
###Markdown
Adding fully-connected hidden layer Adding a second layer with DropoutDropout is a regularization technique where we randomly set some neurons in a layer to zero, so those neurons are not updated during training. Because some percentage of neurons is not updated, training takes somewhat longer, but there is less chance of overfitting.
###Code
model.add(tf.keras.layers.Dense(units=128, activation='relu', input_shape=(784, )))
model.add(tf.keras.layers.Dropout(0.2))
model.add(tf.keras.layers.Dense(units=256, activation='relu'))
model.add(tf.keras.layers.Dropout(0.25))
model.add(tf.keras.layers.Dense(units=512, activation='relu'))
model.add(tf.keras.layers.Dropout(0.30))
###Output
_____no_output_____
###Markdown
Adding the output layer- units: number of classes (10 in the Fashion MNIST dataset)- activation: softmax
###Code
model.add(tf.keras.layers.Dense(units=10, activation='softmax'))
###Output
_____no_output_____
###Markdown
Compiling the model- Optimizer: Adam- Loss: Sparse softmax (categorical) crossentropy
###Code
model.compile(
optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['sparse_categorical_accuracy']
)
model.summary()
###Output
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense (Dense) (None, 128) 100480
_________________________________________________________________
dropout (Dropout) (None, 128) 0
_________________________________________________________________
dense_1 (Dense) (None, 256) 33024
_________________________________________________________________
dropout_1 (Dropout) (None, 256) 0
_________________________________________________________________
dense_2 (Dense) (None, 512) 131584
_________________________________________________________________
dropout_2 (Dropout) (None, 512) 0
_________________________________________________________________
dense_3 (Dense) (None, 10) 5130
=================================================================
Total params: 270,218
Trainable params: 270,218
Non-trainable params: 0
_________________________________________________________________
###Markdown
Training the model
###Code
model.fit(X_train, y_train, epochs=50, batch_size=120)
###Output
Epoch 1/50
500/500 [==============================] - 1s 3ms/step - loss: 0.1331 - sparse_categorical_accuracy: 0.9505
Epoch 2/50
500/500 [==============================] - 1s 3ms/step - loss: 0.1333 - sparse_categorical_accuracy: 0.9502
Epoch 3/50
500/500 [==============================] - 1s 3ms/step - loss: 0.1347 - sparse_categorical_accuracy: 0.9506
Epoch 4/50
500/500 [==============================] - 1s 3ms/step - loss: 0.1313 - sparse_categorical_accuracy: 0.9520
Epoch 5/50
500/500 [==============================] - 1s 3ms/step - loss: 0.1356 - sparse_categorical_accuracy: 0.9505
Epoch 6/50
500/500 [==============================] - 1s 3ms/step - loss: 0.1366 - sparse_categorical_accuracy: 0.9498
Epoch 7/50
500/500 [==============================] - 1s 3ms/step - loss: 0.1391 - sparse_categorical_accuracy: 0.9492
Epoch 8/50
500/500 [==============================] - 1s 3ms/step - loss: 0.1300 - sparse_categorical_accuracy: 0.9505
Epoch 9/50
500/500 [==============================] - 1s 3ms/step - loss: 0.1317 - sparse_categorical_accuracy: 0.9508
Epoch 10/50
500/500 [==============================] - 1s 3ms/step - loss: 0.1330 - sparse_categorical_accuracy: 0.9516
Epoch 11/50
500/500 [==============================] - 1s 3ms/step - loss: 0.1331 - sparse_categorical_accuracy: 0.9508
Epoch 12/50
500/500 [==============================] - 1s 3ms/step - loss: 0.1290 - sparse_categorical_accuracy: 0.9520
Epoch 13/50
500/500 [==============================] - 1s 3ms/step - loss: 0.1343 - sparse_categorical_accuracy: 0.9506
Epoch 14/50
500/500 [==============================] - 1s 3ms/step - loss: 0.1340 - sparse_categorical_accuracy: 0.9507
Epoch 15/50
500/500 [==============================] - 1s 3ms/step - loss: 0.1323 - sparse_categorical_accuracy: 0.9511
Epoch 16/50
500/500 [==============================] - 1s 3ms/step - loss: 0.1278 - sparse_categorical_accuracy: 0.9530
Epoch 17/50
500/500 [==============================] - 1s 3ms/step - loss: 0.1329 - sparse_categorical_accuracy: 0.9503
Epoch 18/50
500/500 [==============================] - 1s 3ms/step - loss: 0.1309 - sparse_categorical_accuracy: 0.9523
Epoch 19/50
500/500 [==============================] - 1s 3ms/step - loss: 0.1297 - sparse_categorical_accuracy: 0.9519
Epoch 20/50
500/500 [==============================] - 1s 3ms/step - loss: 0.1355 - sparse_categorical_accuracy: 0.9515
Epoch 21/50
500/500 [==============================] - 1s 3ms/step - loss: 0.1288 - sparse_categorical_accuracy: 0.9519
Epoch 22/50
500/500 [==============================] - 1s 3ms/step - loss: 0.1291 - sparse_categorical_accuracy: 0.9523
Epoch 23/50
500/500 [==============================] - 1s 3ms/step - loss: 0.1340 - sparse_categorical_accuracy: 0.9511
Epoch 24/50
500/500 [==============================] - 1s 3ms/step - loss: 0.1281 - sparse_categorical_accuracy: 0.9523
Epoch 25/50
500/500 [==============================] - 1s 3ms/step - loss: 0.1279 - sparse_categorical_accuracy: 0.9535
Epoch 26/50
500/500 [==============================] - 1s 3ms/step - loss: 0.1277 - sparse_categorical_accuracy: 0.9525
Epoch 27/50
500/500 [==============================] - 1s 3ms/step - loss: 0.1318 - sparse_categorical_accuracy: 0.9515
Epoch 28/50
500/500 [==============================] - 1s 3ms/step - loss: 0.1330 - sparse_categorical_accuracy: 0.9510
Epoch 29/50
500/500 [==============================] - 1s 3ms/step - loss: 0.1287 - sparse_categorical_accuracy: 0.9536
Epoch 30/50
500/500 [==============================] - 1s 3ms/step - loss: 0.1267 - sparse_categorical_accuracy: 0.9534
Epoch 31/50
500/500 [==============================] - 1s 3ms/step - loss: 0.1266 - sparse_categorical_accuracy: 0.9541
Epoch 32/50
500/500 [==============================] - 1s 3ms/step - loss: 0.1272 - sparse_categorical_accuracy: 0.9528
Epoch 33/50
500/500 [==============================] - 1s 3ms/step - loss: 0.1289 - sparse_categorical_accuracy: 0.9535
Epoch 34/50
500/500 [==============================] - 1s 3ms/step - loss: 0.1267 - sparse_categorical_accuracy: 0.9532
Epoch 35/50
500/500 [==============================] - 1s 3ms/step - loss: 0.1274 - sparse_categorical_accuracy: 0.9529
Epoch 36/50
500/500 [==============================] - 1s 3ms/step - loss: 0.1254 - sparse_categorical_accuracy: 0.9539
Epoch 37/50
500/500 [==============================] - 1s 3ms/step - loss: 0.1243 - sparse_categorical_accuracy: 0.9551
Epoch 38/50
500/500 [==============================] - 1s 3ms/step - loss: 0.1283 - sparse_categorical_accuracy: 0.9538
Epoch 39/50
500/500 [==============================] - 1s 3ms/step - loss: 0.1302 - sparse_categorical_accuracy: 0.9523
Epoch 40/50
500/500 [==============================] - 1s 3ms/step - loss: 0.1279 - sparse_categorical_accuracy: 0.9532
Epoch 41/50
500/500 [==============================] - 1s 3ms/step - loss: 0.1287 - sparse_categorical_accuracy: 0.9530
Epoch 42/50
500/500 [==============================] - 1s 3ms/step - loss: 0.1274 - sparse_categorical_accuracy: 0.9538
Epoch 43/50
500/500 [==============================] - 1s 3ms/step - loss: 0.1250 - sparse_categorical_accuracy: 0.9536
Epoch 44/50
500/500 [==============================] - 1s 3ms/step - loss: 0.1231 - sparse_categorical_accuracy: 0.9544
Epoch 45/50
500/500 [==============================] - 1s 3ms/step - loss: 0.1233 - sparse_categorical_accuracy: 0.9541
Epoch 46/50
500/500 [==============================] - 1s 3ms/step - loss: 0.1216 - sparse_categorical_accuracy: 0.9551
Epoch 47/50
500/500 [==============================] - 1s 3ms/step - loss: 0.1258 - sparse_categorical_accuracy: 0.9542
Epoch 48/50
500/500 [==============================] - 1s 3ms/step - loss: 0.1223 - sparse_categorical_accuracy: 0.9557
Epoch 49/50
500/500 [==============================] - 1s 3ms/step - loss: 0.1214 - sparse_categorical_accuracy: 0.9557
Epoch 50/50
500/500 [==============================] - 1s 3ms/step - loss: 0.1246 - sparse_categorical_accuracy: 0.9547
###Markdown
Model evaluation and prediction
###Code
test_loss, test_accuracy = model.evaluate(X_test, y_test)
print("Test accuracy: {}".format(test_accuracy))
###Output
Test accuracy: 0.8967999815940857
|
src/Projeto3.ipynb | ###Markdown
**Modeling and Simulation of the Physical World: Project 3** Francisco Janela | Nicolas Queiroga | Rafael Niccherri | Rodrigo Griner - 1C PET bottle rocket:At a science fair, a school decides to compete in a PET bottle rocket launch. With the intention of helping them, our project is based on that.For the project, we decided to model the equations that govern the launch of a PET bottle rocket, taking into account the launch force and angle, air resistance and variable mass.Figure 1: Sketch of a PET bottle rocket Questions:- **Question 1**: How does the rocket's launch angle influence the range? - **Question 2**: How does the mass of water ("propellant") influence the range?- **Question 3**: How does the mass of the rocket's nozzle influence the range? (0.1) Importing libraries and defining parametersFor this model we will use the following parameters:
###Code
## Importing libraries for our project:
import matplotlib.pyplot as plt
import numpy as np
from scipy.integrate import odeint
import math
%matplotlib inline
from ipywidgets import interactive
## Parameters and variables
# gravidade -> m/s2:
g = 9.81
# densidade da água -> kg/m3:
dw = 997
# densidade do ar -> kg/m3
dar = 1.27
# raio do bico da garrafa -> m:
rn = 0.01
#raio da garrafa -> m:
rg = 0.055
# massa seca -> kg
mS = 0.3
# massa de água para propulsão -> kg:
mP = 0.66
# massa inicial do foguete -> kg:
M = mS + mP
# pressão inicial -> pascal:
p0 = 517107
# pressão atmosférica -> pascal:
pout = 101325
# compartimento de propulsão - garrafa PET de 2L -> m3:
V = 0.002
# volume inicial de ar -> m3:
V0 = 0.002-(mP/dw)
# coeficiente adiabático:
gama = 1.4
# coeficiente de arrasto:
Ca = 0.9
# Área de secção transversal -> m2:
A = (math.pi*rg**2)
###Output
_____no_output_____
###Markdown
(0.2) Initial conditions and time arrayDefining the model's initial conditions and the time array (via numpy):
###Code
# condições iniciais:
x0=0
y0=0
vx0=0
vy0=0
m0 = M
X_0=[x0,y0,vx0,vy0,m0]
# lista de tempo utilizada:
dt=1e-5
lista_tempo = np.arange(0,10,dt)
###Output
_____no_output_____
###Markdown
(1) 1st iteration of the modelFor the first iteration we developed the model ignoring air resistance.Figure 2: Free-body diagram of the 1st iterationFigure 3: Diagram legendTo implement it with ODEINT, the two 2nd-order derivatives that govern the rocket's x and y were rewritten as four 1st-order ones, giving the following system of equations:$\frac{dx}{dt}=v_x$$\frac{dy}{dt}=v_y$$\frac{dv_x}{dt}=\frac{1}{m}\cdot[\pi\cdot r_n^2 \cdot d_w \cdot v_e^2 \cdot \cos \theta]$$\frac{dv_y}{dt}=\frac{1}{m}\cdot[\pi\cdot r_n^2 \cdot d_w \cdot v_e^2 \cdot \sin \theta - m \cdot g]$$\frac{dm}{dt}=-\pi \cdot r_n^2 \cdot d_w \cdot v_e$ (1.1) 1st model:
###Code
def modelo1 (X,t,teta):
x = X[0]
y = X[1]
vx = X[2]
vy = X[3]
m = X[4]
# velocidade:
v = math.sqrt(vx**2+vy**2)
# definindo os senos e cossenos do modelo
if v>0:
sen_t = vy/v
cos_t = vx/v
else:
sen_t = math.sin(teta)
cos_t = math.cos(teta)
# variando a pressão interna de ar:
pin = p0*((V0+(M-m)/dw)/V0)**(-gama)
# velocidade de escape do líquido:
ve = math.sqrt((2*(pin-pout))/dw)
# Thrust:
T = (math.pi*(rn**2)*dw*(ve**2))
#---------- derivadas do modelo ---------
if y >= 0:
# enquanto houver combustível para Thrust:
if (m > mS):
dxdt = vx
dydt = vy
dvxdt = (T*cos_t)/m
dvydt = (T*sen_t-m*g)/m
dmdt = -math.pi*(rn**2)*dw*ve
# quando acabar:
else:
dxdt = vx
dydt = vy
dvxdt = 0
dvydt = -g
dmdt = 0
else:
dxdt = 0
dydt = 0
dvxdt = 0
dvydt = 0
dmdt = 0
# formando a lista com todas as variações
dXdt = [dxdt,dydt,dvxdt,dvydt,dmdt]
return dXdt
###Output
_____no_output_____
###Markdown
(1.2) Applying ODEINT and plotting the resultsTo apply the ODEINT function and plot the results with an interactive slider we use the 'ipywidgets' library. Just move the slider and the launch angle changes in the visualization.
###Code
def funcao_interactive(teta):
# passando graus para radianos:
teta = math.radians(teta)
#---------- rodando ODEINT -----------
X = odeint(modelo1,X_0,lista_tempo,args=(teta,))
lista_x = X[:,0]
lista_y = X[:,1]
lista_vx = X[:,2]
lista_vy = X[:,3]
#-------- plotando o gráfico ---------
plt.plot(lista_x, lista_y, label='Sem resistência do ar')
plt.title('Gráfico de y(t) por x(t)')
plt.ylabel('y(t)')
plt.xlabel('x(t)')
plt.xticks([-10,0,10,20,30,40,50,60,70,80])
plt.yticks([0,5,10,15,20,25,30])
plt.legend(loc="best")
plt.grid(True)
plt.show()
interactive_plot = interactive(funcao_interactive,teta=(40,90,5))
output = interactive_plot.children[-1]
output.layout.height = '350px'
interactive_plot
###Output
_____no_output_____
###Markdown
(2) 2nd iteration of the modelFor the second iteration, air resistance was taken into account:Figure 4: Free-body diagram of the 2nd iterationFigure 5: Diagram legendThe equations for ODEINT are:$\frac{dx}{dt}=v_x$$\frac{dy}{dt}=v_y$$\frac{dv_x}{dt}=\frac{1}{m}\cdot[\pi\cdot r_n^2 \cdot d_w \cdot v_e^2 \cdot \cos \theta - \frac{1}{2}\cdot d_{ar} \cdot v^2 \cdot C_a \cdot A \cdot \cos\theta]$$\frac{dv_y}{dt}=\frac{1}{m}\cdot[\pi\cdot r_n^2 \cdot d_w \cdot v_e^2 \cdot \sin \theta - \frac{1}{2}\cdot d_{ar} \cdot v^2 \cdot C_a \cdot A \cdot \sin\theta - m \cdot g]$$\frac{dm}{dt}=-\pi \cdot r_n^2 \cdot d_w \cdot v_e$ (2.1) 2nd model:
###Code
def modelo2 (X,t,teta):
x = X[0]
y = X[1]
vx = X[2]
vy = X[3]
m = X[4]
# velocidade:
v = math.sqrt(vx**2+vy**2)
# definindo os senos e cossenos do modelo
if v>0:
sen_t = vy/v
cos_t = vx/v
else:
sen_t = math.sin(teta)
cos_t = math.cos(teta)
# variando a pressão interna de ar:
pin = p0*((V0+(M-m)/dw)/V0)**(-gama)
# velocidade de escape do líquido:
ve = math.sqrt((2*(pin-pout))/dw)
# Thrust:
T = (math.pi*(rn**2)*dw*(ve**2))
# Forças de resistência do ar em x e y
Frarx = 0.5*Ca*dar*A*vx*v
Frary = 0.5*Ca*dar*A*vy*v
#---------- derivadas do modelo ---------
if y >= 0:
if (m > mS):
dxdt = vx
dydt = vy
dvxdt = (T*cos_t-Frarx)/m
dvydt = (T*sen_t-Frary-m*g)/m
dmdt = -math.pi*(rn**2)*dw*ve
else:
dxdt = vx
dydt = vy
dvxdt = -Frarx/m
dvydt = (-Frary-m*g)/m
dmdt = 0
else:
dxdt = 0
dydt = 0
dvxdt = 0
dvydt = 0
dmdt = 0
dXdt = [dxdt,dydt,dvxdt,dvydt,dmdt]
return dXdt
###Output
_____no_output_____
###Markdown
(2.2) Applying ODEINT and plotting the resultsAs in the first iteration, we have an interactive slider to vary the launch angle
###Code
def funcao_interactive_2(angulo):
# de graus para radianos:
teta = math.radians(angulo)
#---------- rodando ODEINT -----------
X2 = odeint(modelo2,X_0,lista_tempo,args=(teta,))
lista_x2 = X2[:,0]
lista_y2 = X2[:,1]
lista_vx2 = X2[:,2]
lista_vy2 = X2[:,3]
#-------- plotando o gráfico ---------
plt.plot(lista_x2, lista_y2, 'r', label='Com resistência do ar')
plt.title('Gráfico de y(t) por x(t)')
plt.ylabel('y(t)')
plt.xlabel('x(t)')
plt.xticks([-10,0,10,20,30,40,50])
plt.yticks([0,5,10,15,20,25,30])
plt.legend()
plt.grid(True)
plt.show()
interactive_plot = interactive(funcao_interactive_2,angulo=(40,90,5))
output = interactive_plot.children[-1]
output.layout.height = '350px'
interactive_plot
###Output
_____no_output_____
###Markdown
(3) Model validationUsing a vertical-launch experiment that had already been carried out, and its launch parameters, we validate the model. We use only the ascent points, since the descent happens under a parachute.
###Code
#--------- validando o modelo com o lançamento em y ---------
# condições iniciais do lançamento experimental:
p0 = 482633
mP = 0.8
mS = 0.1
M = mS + mP
#condições iniciais:
x0=0
y0=0
vx0=0
vy0=0
m0 = M
X_0=[x0,y0,vx0,vy0,m0]
# lista de tempo e posição em y:
lista_tempoMedido = [0.107421875,0.15625,0.1953125,0.25390625,0.322265625,0.390625,0.44921875,0.52734375,0.595703125,0.673828125,0.732421875,0.810546875,0.947265625,1.07421875,1.220703125]# abertura do paraquedas : [1.46484375,1.767578125,2.03125,2.24609375,2.421875,2.65625,2.822265625,2.978515625,3.125,3.349609375,3.603515625,3.76953125,3.9453125,4.111328125,4.2578125,4.39453125,4.541015625,4.66796875,4.765625,4.86328125,4.98046875,5.05859375]
lista_yMedido = [-0.01245503,1.849296346,3.82402476,5.798532731,8.22439775,10.98886322,13.07623801,16.00989348,19.1129594,22.10304829,23.00532149,22.60940586,22.72072958,23.05789715,23.16911064]# abertura do paraquedas : [23.16635511,23.27580506,22.93422863,22.19816944,21.1803841,19.76690357,18.57992822,17.05446265,16.03700797,14.28503721,12.6456026,11.7407943,10.49727532,9.197433162,8.010678259,6.824033578,5.524411858,4.394310807,3.65957428,2.69910412,1.230512839,0.721730389,]
# rodando ODEINT para o lançamento vertival:
teta = math.radians(90)
X2 = odeint(modelo2,X_0,lista_tempo,args=(teta,))
lista_y2 = X2[:,1]
# plotando o gráfico:
plt.plot(lista_tempo, lista_y2, 'r', label='Modelo')
plt.plot(lista_tempoMedido,lista_yMedido,'bo',label='dados experimentais')
plt.title('Gráfico de y(t) por tempo')
plt.ylabel('y(t)')
plt.xlabel('tempo')
plt.yticks([0,5,10,15,20,25])
plt.legend()
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
(4) Answering the questions:With the model validated, we can now generate the plots that answer the questions posed at the start of the project. (4.1) Range as a function of the angle:To answer this question we run ODEINT for several launch angles and pick the one that gives the largest range, i.e. the best launch angle.The answer is printed to the terminal after the range-versus-angle plot.
###Code
# voltando para os padrões utilizados
# massa seca
mS = 0.3
# massa de água para propulsão:
mP = 0.66
# massa inicial do foguete:
M = mS + mP
# pressão inicial:
p0 = 517107
# lista de angulos de lançamento:
lista_angulos = np.arange(45,90,1)
lista_x_max = []
for angulo in lista_angulos:
teta = math.radians(angulo)
X2 = odeint(modelo2,X_0,lista_tempo,args=(teta,))
lista_x2 = X2[:,0]
lista_x_max.append(max(lista_x2))
ax=plt.axes()
plt.plot(lista_angulos,lista_x_max,'ro',markersize=4)
ax.set_facecolor('xkcd:ivory')
plt.title('Gráfico de x(t) pelo angulo de lançamento')
plt.ylabel('x(t)')
plt.xlabel('angulo')
plt.grid(True)
plt.show()
print('O ângulo que gera maior distância percorrida pelo foguete é {0} graus'.format(lista_angulos[lista_x_max.index(max(lista_x_max))]))
###Output
C:\Users\franc\Anaconda3\lib\site-packages\scipy\integrate\odepack.py:247: ODEintWarning: Excess work done on this call (perhaps wrong Dfun type). Run with full_output = 1 to get quantitative information.
warnings.warn(warning_msg, ODEintWarning)
###Markdown
(4.2) Range versus propellant mass:We run ODEINT while varying the mP fed into the model, to find which propellant mass gives the rocket the largest range. The answer is printed to the terminal after the plot.
###Code
# melhor angulo de lançamento para alcance
angulo = 66
lista_massa_propulsao = np.arange(0.01,1.5,0.01)
lista_x_max_2 = []
for mP in lista_massa_propulsao:
M = mS + mP
m0 = M
X_0=[x0,y0,vx0,vy0,m0]
teta = math.radians(angulo)
X2 = odeint(modelo2,X_0,lista_tempo,args=(teta,))
lista_x2 = X2[:,0]
lista_x_max_2.append(max(lista_x2))
ax=plt.axes()
plt.plot(lista_massa_propulsao,lista_x_max_2,'co',markersize=3)
ax.set_facecolor('xkcd:ivory')
plt.title('Gráfico de x(t) pela massa de propulsão')
plt.ylabel('x(t)')
plt.xlabel('massa de propulsão')
plt.grid(True)
plt.show()
print('A massa de propulsão que gera maior distância percorrida pelo foguete é {0} kg'.format(lista_massa_propulsao[lista_x_max_2.index(max(lista_x_max_2))]))
###Output
_____no_output_____
###Markdown
(4.3) Range versus dry mass:Now, with the ideal angle and the ideal propellant mass, we find the ideal dry mass for the rocket launch, so that it flies as far as possible and helps the school win the competition. Again, the answer is printed after the plot.
###Code
# melhor massa de propulsão para o foguete:
mP = 0.88
lista_massa_seca = np.arange(0.01,0.5,0.01)
lista_x_max_3 = []
for mS in lista_massa_seca:
M = mS + mP
m0 = M
X_0=[x0,y0,vx0,vy0,m0]
teta = math.radians(angulo)
X2 = odeint(modelo2,X_0,lista_tempo,args=(teta,))
lista_x2 = X2[:,0]
lista_x_max_3.append(max(lista_x2))
ax=plt.axes()
plt.plot(lista_massa_seca,lista_x_max_3,'bo',markersize=4)
ax.set_facecolor('xkcd:ivory')
plt.title('Gráfico de x(t) pela massa seca')
plt.ylabel('x(t)')
plt.xlabel('massa seca')
plt.grid(True)
plt.show()
print('A massa seca que gera maior distância percorrida pelo foguete é {0} kg'.format(lista_massa_seca[lista_x_max_3.index(max(lista_x_max_3))]))
###Output
_____no_output_____ |
Experiments/Data Visualize and Analyse/Analysis 1- based on behaviour.ipynb | ###Markdown
Cheating case
###Code
# df = pd.read_csv('/content/drive/MyDrive/Colab Notebooks/Research/Test Data/Test New/gazeData.csv',names=["X", "Y"])
# df=df[:-5]
# x_min = 17
# x_max =1485
# y_max =860
# y_min =6
# df = pd.read_csv('/content/drive/MyDrive/Colab Notebooks/Research/Test Data/webquiz_cheat1.csv',names=["X", "Y"])
# df = pd.read_csv('/content/drive/MyDrive/Colab Notebooks/Research/Test Data/Test New/gazeData.csv',names=["X", "Y"])
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv('/content/drive/MyDrive/Colab Notebooks/Research/Test Data/webquiz_cheat.csv',names=["X", "Y"])
# df=df[:-5]
df.head()
# x_min = -26
# x_max =1332
# y_max =822
# y_min =23
# x_min = -6
# x_max =1505
# y_max =839
# y_min =27
# x_min = 17
# x_max =1485
# y_max =860
# y_min =6
x_min = -6
x_max =1505
y_max =839
y_min =27
for index, row in df.iterrows():
if (row['X']>x_max or row['X']<x_min or row['Y']>y_max or row['Y']<y_min):
df.loc[index,'status']= 1
else:
df.loc[index,'status']= 0
df.head(4)
df= df.iloc[:1500]
df.head()
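# Run-length encode the in/out-of-bounds status: one row per consecutive segment, with its duration in samples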
data = {'status': [], 'elapased_time': []}
df_new = pd.DataFrame(data)
c=0
status = 0
for index, row in df.iterrows():
if (row['status'] == status):
c=c+1
else:
# print(status,c)
df_new=df_new.append({'status':status,'elapased_time':c},ignore_index=True)
c=1
status=row['status']
# status=df.iloc[index,2]
df_new=df_new.append({'status':status,'elapased_time':c},ignore_index=True)
new_dtypes = {"status": int, "elapased_time": int}
df_new = df_new.astype(new_dtypes)
df_new.head()
df_new=df_new.iloc[1:]
df_new.head()
# pd.set_option("display.max_rows", None, "display.max_columns", None)
# df[:-11]
# df_new['elapased_time'].sum()
# cheat
# x_min = -26.0
# x_max =1332
# y_max =822
# y_min =23
# cheat1
c=0
k=0
for index, row in df.iterrows():
if (row['X']>x_max or row['X']<x_min or row['Y']>y_max or row['Y']<y_min):
c=c+1
df.loc[index,'status']= 1
df.loc[index,'elapased_time']= c
# print('looked away')
k=0
else:
k=k+1
# df.loc[index,'status']= 'in'
df.loc[index,'status']= 0
df.loc[index,'elapased_time']=k
# if (c!=0):
# print (c)
# df.loc[index-1,'away_time']=c
c=0
# df.iloc[205:220]/
plt.figure(figsize=(40, 6))
plt.plot(df.index,df['status'])
# df['index_col'] = df.index
# x = df[['index_col','status']]
# x.head()
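# Grid search over OneClassSVM hyperparameters for the segment data in df_new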
from sklearn.model_selection import GridSearchCV
from sklearn.svm import OneClassSVM
scores = ['precision', 'recall']
# gammas = np.logspace(-9, 3, 13)
# nus = np.linspace(0.01, 0.99, 99)
# param_grid = {'gamma':gammas,
# 'nu':nus}
tuned_parameters = [{'kernel': ['rbf'], 'gamma': [1e-2, 1e-3, 1e-4, 1e-5],
'nu': [0.001, 0.10, 0.1, 10, 25, 50, 100, 1000]},
{'kernel': ['linear'], 'nu': [0.001, 0.10, 0.1, 10, 25, 50, 100, 1000]}
]
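# Note: OneClassSVM expects 0 < nu <= 1, so the grid values above 1 are not valid settings for that parameter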
for score in scores:
clf = GridSearchCV(OneClassSVM(), tuned_parameters, cv=10,
scoring='%s_macro' % score, return_train_score=True)
clf.fit(df_new[['status']], df_new[['elapased_time']])
resultDf = pd.DataFrame(clf.cv_results_)
print(resultDf[["mean_test_score", "std_test_score", "params"]].sort_values(by=["mean_test_score"], ascending=False).head())
print("Best parameters set found on development set:")
print()
print(clf.best_params_)
from sklearn.svm import OneClassSVM
svm = OneClassSVM(kernel='rbf', gamma=0.001, nu=0.1)
# svm = OneClassSVM(kernel='linear', gamma=0.01)
svm.fit(df_new)
pred = svm.predict(df_new)
from numpy import where
anom_index = where(pred==-1)
df_new = df_new.reset_index(drop=True)
values = df_new.loc[anom_index]
plt.scatter(df_new.iloc[:,0], df_new.iloc[:,1])
plt.scatter(values.iloc[:,0], values.iloc[:,1], color='r')
# plt.axvline(x=x_min,color='red')
# plt.axvline(x=x_max,color='red')
# plt.axhline(y=y_min,color='green')
# plt.axhline(y=y_max,color='green')
plt.show()
values
df_new['svm_p']=pred
df_new['cum_sum'] = df_new['elapased_time'].cumsum(axis = 0)
df_new.head(40)
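# Aggregate consecutive segments into alternating 'cheat' / 'non-cheat' blocks, based on the SVM prediction and the in/out status, accumulating their durations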
data = {'status': [], 'elapased_time': []}
results = pd.DataFrame(data)
t1=0
t2=0
for index, row in df_new.iterrows():
if (row['svm_p']== -1 and row['status']==0):
if t2!=0:
# cheat
results=results.append({'status':'cheat','elapased_time':t2},ignore_index=True)
t2=0
t1=t1+row['elapased_time']
else:
if t1!=0:
# non-cheat
results=results.append({'status':'non-cheat','elapased_time':t1},ignore_index=True)
t1=0
t2=t2+row['elapased_time']
results.head(25)
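# Despite the *_median names, the next lines compute the total duration (sum) of the non-cheat and cheat blocks, used for the time fractions below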
nc_median = results.loc[results['status'] == 'non-cheat']['elapased_time'].sum(axis=0)
nc_median
c_median = results.loc[results['status'] == 'cheat']['elapased_time'].sum(axis=0)
c_median
nc_count = results.loc[results['status'] == 'non-cheat'].shape[0]
c_count = results.loc[results['status'] == 'cheat'].shape[0]
nc_median*nc_count/(nc_median*nc_count+ c_median*c_count)
c_median*c_count/(nc_median*nc_count+ c_median*c_count)
df_for_graph = df_new.loc[df_new['svm_p'] == -1]
plt.figure(figsize=(40, 6))
plt.plot(df.index,df['status'])
for index, row in df_new.iterrows():
if row['svm_p'] == -1:
plt.plot([row['cum_sum']-row['elapased_time'],row['cum_sum']],[row['status'],row['status']],c='r',linestyle="--")
plt.figure(figsize=(150, 6))
plt.plot(df.index,df['status'])
plt.scatter(df_for_graph['cum_sum'],df_for_graph['svm_p']+1,c='r')
# index
values['status'].value_counts()
values['elapased_time'].value_counts()
df['status'].value_counts()
###Output
_____no_output_____
###Markdown
Normal case - the participant thinks and checks the time; not cheating, but sometimes looking away
###Code
df = pd.read_csv('/content/drive/MyDrive/Colab Notebooks/Research/Test Data/webquiz_normal.csv',names=["X", "Y"])
x_min = 49
x_max =1399
y_max =817
y_min =65
# x_min = -75
# x_max =1515
# y_max =805
# y_min =-23
for index, row in df.iterrows():
if (row['X']>x_max or row['X']<x_min or row['Y']>y_max or row['Y']<y_min):
df.loc[index,'status']= 1
else:
df.loc[index,'status']= 0
df.head()
df= df.iloc[:1000]
df.head()
# df = df.iloc[100:]
df = df.iloc[:-200]
plt.figure(figsize=(40, 6))
plt.plot(df.index,df['status'])
###Output
_____no_output_____
###Markdown
* first out - Looked time * second out - thinking* third out - looking away from the window
###Code
data = {'status': [], 'elapased_time': []}
df_new = pd.DataFrame(data)
c=0
status = 0
for index, row in df.iterrows():
if (df.iloc[index,2] == status):
c=c+1
else:
# print(status,c)
df_new=df_new.append({'status':status,'elapased_time':c},ignore_index=True)
c=1
status=df.iloc[index,2]
df_new=df_new.append({'status':status,'elapased_time':c},ignore_index=True)
new_dtypes = {"status": int, "elapased_time": int}
df_new = df_new.astype(new_dtypes)
df_new.head(10)
df_new=df_new.iloc[1:]
from sklearn.svm import OneClassSVM
svm = OneClassSVM(kernel='rbf', gamma=0.01, nu=0.001)
svm.fit(df_new)
pred = svm.predict(df_new)
from numpy import where
anom_index = where(pred==-1)
df_new = df_new.reset_index(drop=True)
values = df_new.loc[anom_index]
plt.scatter(df_new.iloc[:,0], df_new.iloc[:,1])
plt.scatter(values.iloc[:,0], values.iloc[:,1], color='r')
plt.show()
values
df_new['svm_p']=pred
df_new['cum_sum'] = df_new['elapased_time'].cumsum(axis = 0)
df_new.head()
plt.figure(figsize=(60, 6))
plt.plot(df.index,df['status'])
for index, row in df_new.iterrows():
if row['svm_p'] == -1:
plt.plot([row['cum_sum']-row['elapased_time'],row['cum_sum']],[row['status'],row['status']],c='r',linestyle="--")
###Output
_____no_output_____ |
Pong_game_play.ipynb | ###Markdown
Add noise v1
###Code
import datetime
import numpy as np
import random
import cv2
import matplotlib.pyplot as plt
import time
%matplotlib inline

# OpenAI baselines helpers used throughout this notebook (Atari wrappers, DQN, monitoring, logging)
from baselines import deepq, bench, logger
from baselines.common.atari_wrappers import make_atari, LazyFrames
def sp_noise(image, prob):
'''
Add salt and pepper noise to image
prob: Probability of the noise
'''
output = np.zeros(image.shape,np.uint8)
thres = 1 - prob
for i in range(image.shape[0]):
for j in range(image.shape[1]):
rdn = random.random()
if rdn < prob:
output[i][j] = 0
elif rdn > thres:
output[i][j] = 255
else:
output[i][j] = image[i][j]
return output
# image = cv2.imread('pong.jpg')
# for i in range(1000000):
# noise_img = sp_noise(image, 0.005)
# if i%1000 == 0:
# print(i, datetime.datetime.now().time())
image = cv2.imread('messigray.png')
start = time.time()
noise_img = sp_noise(image, 0.005)
end = time.time()
print('execution: ', end - start, 'seconds')
cv2.imwrite('sp_noise.jpg', noise_img)
#plt.imshow(image[:,:,::-1])
#plt.show()
plt.imshow(noise_img[:,:,::-1])
plt.show()
def show_random_env():
env = make_atari('PongNoFrameskip-v4')
env = deepq.wrap_atari_dqn(env)
obs = env.reset()
for i in range(15):
obs, rew, done, _ = env.step(1)
img = np.array(obs[None])[:,:,1].reshape((84,84))
print(img.shape)
plt.imshow(img)
cv2.imwrite('messigray.png',img)
plt.show()
obs_noice = sp_noise(img, 0.005)
plt.imshow(obs_noice)
plt.show()
show_random_env()
###Output
(84, 84)
###Markdown
Add noise v2
###Code
from PIL import Image
import numpy as np
import time
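# Vectorized (boolean-mask) salt-and-pepper noise; much faster than the per-pixel loops in sp_noise above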
def add_noise(image, paper_threshold, salt_threshold):
w, h = image.shape
random_paper = np.random.rand(w, h)
random_salt = np.random.rand(w, h)
image[random_paper < paper_threshold] = 0
image[random_salt < salt_threshold] = 255
return image
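# Apply the vectorized noise to each 84x84 frame stored in a baselines LazyFrames stack and rewrap the result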
def add_noise_frames(lazy_frames, paper_threshold, salt_threshold):
images = []
for image in lazy_frames._frames:
output = add_noise(image.reshape((84, 84)), paper_threshold, salt_threshold)
images.append(output.reshape((84, 84, 1)))
return LazyFrames(list(images))
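# PIL-based variant of the same noise, used by the timing benchmark in test_add_noise below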
def add_noise_t(image, paper_threshold, salt_threshold):
w, h = image.size
pixels = np.array(image)
random_paper = np.random.rand(w, h)
random_salt = np.random.rand(w, h)
pixels[random_paper < paper_threshold] = 0
pixels[random_salt < salt_threshold] = 255
return Image.fromarray(pixels)
def test_add_noise():
n = 10000
avg_agg = 0
median_agg = []
image = Image.open("messigray.png")
for i in range(n):
start = time.time()
noisy = add_noise_t(image, 0.01, 0.01)
duration = time.time() - start
avg_agg += duration
median_agg.append(duration)
noisy.save("out.png")
avg = avg_agg / n
median_agg.sort()
print("number or runs", n)
print("average duration:", avg, "seconds")
print("median duration: ", median_agg[n // 2], "seconds")
# test performance for reshaping
env = make_atari('PongNoFrameskip-v4')
env = deepq.wrap_atari_dqn(env)
obs = env.reset()
start = time.time()
a = obs._frames[0]
a.reshape((84, 84))
a.reshape((84, 84,1))
end = time.time()
print('test reshaping, execution: ', end - start, 'seconds')
test_add_noise()
###Output
number of runs 10000
average duration: 0.00032285914421081543 seconds
median duration: 0.00023698806762695312 seconds
test reshaping, execution: 5.9604644775390625e-06 seconds
###Markdown
Train and Test
###Code
import tensorflow as tf
from baselines.common.tf_util import get_session
def clean_session_fix():
get_session().close()
tf.reset_default_graph()
def play_pong():
logger.configure()
env = make_atari('PongNoFrameskip-v4')
env = bench.Monitor(env, logger.get_dir())
env = deepq.wrap_atari_dqn(env)
model = deepq.learn(
env,
"conv_only",
convs=[(32, 8, 4), (64, 4, 2), (64, 3, 1)],
hiddens=[256],
dueling=True,
total_timesteps=0,
load_path="models/pong_model_r.pkl"
)
while True:
obs, done = env.reset(), False
episode_rew = 0
while not done:
env.render()
#start = time.time()
obs_noice = add_noise_frames(obs, 0.005, 0.005)
# end = time.time()
# print('test reshaping, execution: ', end - start, 'seconds')
obs, rew, done, _ = env.step(model(obs_noice[None])[0])
episode_rew += rew
print("Episode reward", episode_rew)
def test_play_pong(use_noice=False, model_name='models/pong_model_d.pkl',
number_of_runs = 10, show_render = False):
env = make_atari('PongNoFrameskip-v4')
env = deepq.wrap_atari_dqn(env)
model = deepq.learn(
env,
"conv_only",
convs=[(32, 8, 4), (64, 4, 2), (64, 3, 1)],
hiddens=[256],
dueling=True,
total_timesteps=0,
load_path=model_name
)
rewards = []
for i in range(number_of_runs):
obs, done = env.reset(), False
episode_rew = 0
while not done:
if show_render:
env.render()
#start = time.time()
obs_updated = obs
if use_noice:
obs_updated = add_noise_frames(obs, 0.005, 0.005)
# end = time.time()
# print('test reshaping, execution: ', end - start, 'seconds')
obs, rew, done, _ = env.step(model(obs_updated[None])[0])
episode_rew += rew
rewards.append(episode_rew)
# print(datetime.datetime.now().time(), i, episode_rew)
return np.mean(rewards)
clean_session_fix()
avg_reward = test_play_pong(False, "models/pong_model_d.pkl")
print('DD', avg_reward)
clean_session_fix()
avg_reward = test_play_pong(True, "models/pong_model_d.pkl")
print('DR', avg_reward)
clean_session_fix()
avg_reward = test_play_pong(False, "models/pong_model_r.pkl")
print('RD', avg_reward)
clean_session_fix()
avg_reward = test_play_pong(True, "models/pong_model_r.pkl")
print('RR', avg_reward)
clean_session_fix()
test_play_pong(True, "models/pong_model_d.pkl", 1, True)
###Output
_____no_output_____ |
AI_Demo_Natural_Language_Processing_1.ipynb | ###Markdown
AI Demo - Natural Language Processing 'Is the movie review positive or negative?' Hi! Welcome to this demo. In the coming hour we will look, at high speed, at how we can teach a computer to recognize text. To be specific, we are going to teach the computer to indicate, based on a movie review, whether it is positive or negative.The steps we will follow are a simple summary of a very extensive tutorial on this topic at Tensorflow. If you ever want to work through the full code, you can find it at the link below:https://www.tensorflow.org/tutorials/keras/text_classification What are we going to teach the computer?To teach the computer to recognize text we need a dataset with a lot of text. We will use the IMDB Dataset for this.The IMDB dataset contains the text of 50000 movie reviews. There are movie reviews that are negative and reviews that are positive.We will train the computer with 25000 movie reviews and then check whether the computer can predict, for a 2nd set (the test set) of 25000 reviews, whether each review is positive or negative.To teach the computer something we have to write a small computer program. We call this computer program the 'model'.As soon as we actually let the computer learn, we are 'training' the 'model'. The startWe first load and set up a number of Python packages. We need these to build the machine learning model and to be able to use the data.
###Code
# TensorFlow is an Artificial Intelligence and Machine Learning library from Google (want to know more? https://www.tensorflow.org/learn )
import tensorflow as tf
from tensorflow import keras
# With tensorflow_datasets we can fetch the IMDB dataset in a moment
import tensorflow_datasets as tfds
tfds.disable_progress_bar()
# Numpy is used for various computations. We will need it later as well.
import numpy as np
###Output
_____no_output_____
###Markdown
Collecting dataWe will download the data for the movie reviews via the packages loaded earlier.The dataset has been fully prepared for us. Most importantly, all words have been converted to numbers. And each number refers to a word in what is called a vocabulary.A computer cannot work with the text directly... but if we convert the text to numbers in a clever way... then it can.
###Code
# Load the IMDB dataset
(train_data, test_data), info = tfds.load(
# Use the version pre-encoded with an ~8k vocabulary.
'imdb_reviews/subwords8k',
# Return the train/test datasets as a tuple.
split = (tfds.Split.TRAIN, tfds.Split.TEST),
# Return (example, label) pairs from the dataset (instead of a dictionary).
as_supervised=True,
# Also return the `info` structure.
with_info=True)
###Output
_____no_output_____
###Markdown
We can easily see how the text is converted into a series of numbers.
###Code
# Create an encoder... which converts text into numbers
encoder = info.features['text'].encoder
# And encode an example sentence
voorbeeld = 'Hallo allemaal. Welkom bij de tekst demo.'
voorbeeld_in_nummers = encoder.encode(voorbeeld)
print(f'Voorbeeld in nummers: {voorbeeld_in_nummers}')
###Output
_____no_output_____
###Markdown
Or now with the individual words....
###Code
print(f'Hallo = {encoder.encode("Hallo")}')
print(f'Hallo = {encoder.encode("Hallo ")}')
print(f'allemaal. = {encoder.encode("allemaal.")}')
###Output
_____no_output_____
###Markdown
And what we see is that a single word can still produce several numbers. What the encoder does is, for example, mark the end of a sentence or a space. Words are also sometimes split up.Take a look :-)
###Code
print(f'4313 = "{encoder.decode([4313])}"')
print(f'8040 = "{encoder.decode([8040])}"')
print(f'222 = "{encoder.decode([222])}"')
###Output
_____no_output_____
###Markdown
Let's look at a couple more examples from the IMDB dataset.For each example we show the full text as numbers, the text itself and the label.
###Code
for train_example, train_label in train_data.take(2):
print('Tekst nummers:', train_example.numpy())
print('Tekst:', encoder.decode(train_example))
print('Label:', train_label.numpy())
###Output
_____no_output_____
###Markdown
We now do the last few steps to prepare the data.We prepare the training data, which we use to train the model and teach it to recognize the text.And we prepare the test data, which we will use later to test the model and see how well we can predict, for unseen movie reviews, whether they are positive or negative.
###Code
BUFFER_SIZE = 1000
train_batches = (train_data.shuffle(BUFFER_SIZE).padded_batch(32))
test_batches = (test_data.padded_batch(32))
###Output
_____no_output_____
###Markdown
We will now set up a simple model with the help of a library called 'Tensorflow'. With Tensorflow you can build anything from simple models, like the one we are about to make, to the most complex AI systems imaginable.What we feed into the model are the sequences of numbers that represent the words. What we predict is whether the review is negative (0) or positive (1).
###Code
model = keras.Sequential([
keras.layers.Embedding(encoder.vocab_size, 16),
keras.layers.GlobalAveragePooling1D(),
keras.layers.Dense(1, activation='sigmoid')])
###Output
_____no_output_____
###Markdown
We can very easily inspect what our model looks like.
###Code
model.summary()
###Output
_____no_output_____
###Markdown
Training the modelWe will now train our model by letting it 'read' all the movie reviews 10 times. And each time it is also shown (via the label!) whether the review was positive or negative.The model will then learn, for example, whether certain words or phrases are used in a positive or a negative review.
###Code
# These lines are needed to complete the model..
loss = tf.keras.losses.BinaryCrossentropy(from_logits = False)
# If you want to play around with how the model learns...
# You can make the learning_rate larger or smaller, for example.
# If you make it smaller, learning takes longer.
# If you make it larger, it learns faster, and it may even end up learning worse.
optimizer = tf.keras.optimizers.Adam(learning_rate = 0.001)
# We 'compile' ==> 'prepare' the model with its optimizer, loss and metrics
model.compile(optimizer = optimizer, loss = loss, metrics = ['accuracy'])
# Here we start training the model
model.fit(train_batches,
epochs = 10,
validation_data = test_batches)
###Output
_____no_output_____
###Markdown
We have now trained our model. The numbers on the right ('loss' and 'accuracy') indicate how well the model has been trained.We always want the 'loss' as low as possible (towards 0) and the 'accuracy' as high as possible (towards 1).We can now test our model. Let's see how good (or perhaps how bad??) the model is.
###Code
# Test the model
loss, accuracy = model.evaluate(test_batches)
print("Loss: ", loss)
print("Accuracy: ", accuracy)
###Output
_____no_output_____
###Markdown
We see that our model predicts correctly roughly 85% to 86% of the time.I'm curious whether we can score even higher... let's try a 2nd model that can learn just a little better.
###Code
# Model 2
model2 = keras.Sequential([
keras.layers.Embedding(encoder.vocab_size, 16),
keras.layers.GlobalAveragePooling1D(),
keras.layers.Dense(32, activation='relu'),
keras.layers.Dense(1, activation='sigmoid')])
# Summary of Model 2
model2.summary()
# These lines are needed to complete the model..
loss = tf.keras.losses.BinaryCrossentropy(from_logits = False)
optimizer = tf.keras.optimizers.Adam(learning_rate = 0.0005)
# We 'compile' ==> 'prepare' the model with its optimizer, loss and metrics
model2.compile(optimizer = optimizer, loss = loss, metrics = ['accuracy'])
# Here we start training the 2nd model
model2.fit(train_batches,
epochs = 10,
validation_data = test_batches)
###Output
_____no_output_____
###Markdown
And if we now test our model again... we will see that it scores just a little better than the first model.Because we extended the model, it can, so to speak, learn more things.
###Code
# Test Model 2
loss, accuracy = model2.evaluate(test_batches)
print("Loss: ", loss)
print("Accuracy: ", accuracy)
# A separate example text.
voorbeeld_in_nummers = encoder.encode('This was probably the most terrible movie.. a complete disaster')
print(model2.predict([voorbeeld_in_nummers]))
###Output
_____no_output_____ |
11. PCAP Practice Exam.ipynb | ###Markdown
HyperLearning AI - Introduction to PythonAn introductory course to the Python 3 programming language, with a curriculum aligned to the Certified Associate in Python Programming (PCAP) examination syllabus (PCAP-31-02).https://knowledgebase.hyperlearning.ai/courses/introduction-to-python 11. PCAP Practice Examhttps://knowledgebase.hyperlearning.ai/en/courses/introduction-to-python/modules/11/pcap-practice-examIn this final module of our Introduction to Python course, we will consolidate everything that we have learnt by taking a practice Certified Associate in Python Programming (PCAP) examination paper (PCAP-31-02). Question 1
###Code
2 ** 3 ** 2 ** 1
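# ** is right-associative: 2 ** (3 ** (2 ** 1)) = 2 ** 9 = 512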
###Output
_____no_output_____
###Markdown
Question 2
###Code
print("Peter's sister's name's \"Anna\"")
print('Peter\'s sister\'s name\'s \"Anna\"')
###Output
Peter's sister's name's "Anna"
Peter's sister's name's "Anna"
###Markdown
Question 3
###Code
i = 250
while len(str(i)) > 72:
i *= 2
else:
i //= 2
print(i)
###Output
125
###Markdown
Question 4
###Code
n = 0
while n < 4:
n += 1
print(n, end=" ")
###Output
1 2 3 4
###Markdown
Question 5
###Code
x = 0
y = 2
z = len("Python")
x = y > z
print(x)
print(type(x))
###Output
False
<class 'bool'>
###Markdown
Question 6
###Code
Val = 1
Val2 = 0
Val = Val ^ Val2
Val2 = Val ^ Val2
Val = Val ^ Val2
print(Val)
###Output
0
###Markdown
Question 7
###Code
z, y, x = 2, 1, 0
x, z = z, y
y = y - z
x, y, z = y, z, x
print(x, y, z)
###Output
0 1 2
###Markdown
Question 8
###Code
a = 0
b = a ** 0
if b < a + 1:
c = 1
elif b == 1:
c = 2
else:
c = 3
print(a + b + c)
###Output
3
###Markdown
Question 9
###Code
i = 10
while i > 0 :
i -= 3
print("*")
if i <= 3:
break
else:
print("*")
###Output
*
*
*
###Markdown
Question 10
###Code
# Example 1
for i in range(1, 4, 2):
print("*")
# Example 2
for i in range(1, 4, 2):
print("*", end="")
# Example 3
for i in range(1, 4, 2):
print("*", end="**")
# Example 4
for i in range(1, 4, 2):
print("*", end="**")
print("***")
list(range(1, 4, 2))
###Output
_____no_output_____
###Markdown
Question 11
###Code
print('N/A')
###Output
N/A
###Markdown
Question 12
###Code
x = "20"
y = "30"
print(x > y)
###Output
False
###Markdown
Question 13
###Code
s = "Hello, Python!"
print(s[-14:15])
print(s[0:30])
###Output
Hello, Python!
###Markdown
Question 14
###Code
lst = ["A", "B", "C", 2, 4]
del lst[0:-2]
print(lst)
###Output
[2, 4]
###Markdown
Question 15
###Code
dict = { 'a': 1, 'b': 2, 'c': 3 }
for item in dict:
print(item)
###Output
a
b
c
###Markdown
Question 16
###Code
s = 'python'
for i in range(len(s)):
i = s[i].upper()
print(s, end="")
###Output
python
###Markdown
Question 17
###Code
lst = [i // i for i in range(0,4)]
sum = 0
for n in lst:
sum += n
print(sum)
###Output
_____no_output_____
###Markdown
Question 18
###Code
lst = [[c for c in range(r)] for r in range(3)]
for x in lst:
for y in x:
if y < 2:
print('*', end='')
###Output
***
###Markdown
Question 19
###Code
lst = [2 ** x for x in range(0, 11)]
print(lst[-1])
###Output
1024
###Markdown
Question 20
###Code
lst1 = "12,34"
lst2 = lst1.split(',')
print(len(lst1) < len(lst2))
###Output
False
###Markdown
Question 21
###Code
def fun(a, b=0, c=5, d=1):
return a ** b ** c
print(fun(b=2, a=2, c=3))
###Output
256
###Markdown
Question 22
###Code
x = 5
f = lambda x: 1 + 2
print(f(x))
###Output
3
###Markdown
Question 23
###Code
from math import pi as xyz
print(pi)
###Output
_____no_output_____
###Markdown
Question 24
###Code
print('N/A')
###Output
N/A
###Markdown
Question 25
###Code
from random import randint
for i in range(10):
print(random(1, 5))
###Output
_____no_output_____
###Markdown
Question 26
###Code
x = 1 # line 1
def a(x): # line 2
return 2 * x
# line 3
x = 2 + a(x) # line 4
print(a(x)) # line 5
###Output
8
###Markdown
Question 27
###Code
a = 'hello' # line 1
def x(a,b): # line 2
z = a[0] # line 3
return z # line 4
print(x(a)) # line 5
###Output
_____no_output_____
###Markdown
Question 28
###Code
s = 'SPAM'
def f(x):
return s + 'MAPS'
print(f(s))
###Output
SPAMMAPS
###Markdown
Question 29
###Code
print('N/A')
###Output
N/A
###Markdown
Question 30
###Code
def gen():
lst = range(5)
for i in lst:
yield i*i
for i in gen():
print(i, end="")
###Output
014916
###Markdown
Question 31
###Code
print('N/A')
###Output
N/A
###Markdown
Question 32
###Code
print('N/A')
###Output
N/A
###Markdown
Question 33
###Code
print('N/A')
###Output
N/A
###Markdown
Question 34
###Code
# Example 1
x = 1
y = 0
z = x%y
print(z)
# Example 2
x = 1
y = 0
z = x/y
print(z)
###Output
_____no_output_____
###Markdown
Question 35
###Code
x = 0
try:
print(x)
print(1 / x)
except ZeroDivisionError:
print("ERROR MESSAGE")
finally:
print(x + 1)
print(x + 2)
###Output
0
ERROR MESSAGE
1
2
###Markdown
Question 36
###Code
class A:
def a(self):
print("A", end='')
class B(A):
def a(self):
print("B", end='')
class C(B):
def b(self):
print("B", end='')
a = A()
b = B()
c = C()
a.a()
b.a()
c.b()
###Output
ABB
###Markdown
Question 37
###Code
try:
print("Hello")
raise Exception
print(1/0)
except Exception as e:
print(e)
###Output
Hello
###Markdown
Question 38
###Code
# Example 1
class CriticalError(Exception):
def __init__(self, message='ERROR MESSAGE A'):
Exception.__init__(self, message)
raise CriticalError
raise CriticalError("ERROR MESSAGE B")
# Example 2
class CriticalError(Exception):
def __init__(self, message='ERROR MESSAGE A'):
Exception.__init__(self, message)
raise CriticalError("ERROR MESSAGE B")
###Output
_____no_output_____
###Markdown
Question 39
###Code
file = open(test.txt)
print(file.readlines())
file.close()
###Output
_____no_output_____
###Markdown
Question 40
###Code
f = open("file.txt", "w")
f.close()
###Output
_____no_output_____ |
python/api/update-view-definition-with-polygon-hosted-feature-layer/Update View Definition of a Hosted Feature Layer with a Polygon.ipynb | ###Markdown
Update a Feature Layer View with a Polygon Area of InterestThis notebook will update the `viewLayerDefinition` of a hosted feature layer view to only show features that interset a polygon. In our example below, we are querying a layer of generalized U.S. States for boundary of Indiana, then applying that polygon as the spatial filter for another layer.**Please note this may have negative performance implications. Consider your polygons and the level of detail they have before implementing this approach.**
###Code
from arcgis.gis import GIS
from arcgis.features import FeatureLayer
from arcgis.geometry import Geometry, SpatialReference
from copy import deepcopy
gis = GIS("home")
###Output
_____no_output_____
###Markdown
Some basic setup of what state boundary we want to grab for our filter
###Code
state_abbr_to_use = 'IN'
state_abbr_field = 'STATE_ABBR'
where_clause = f"{state_abbr_field} = '{state_abbr_to_use}'"
where_clause
###Output
_____no_output_____
###Markdown
Get a reference to the State boundary layer and the hosted feature layer view we are going to update
###Code
fl = FeatureLayer.fromitem(gis.content.get('99fd67933e754a1181cc755146be21ca'))
flview_to_update = FeatureLayer.fromitem(gis.content.get('400ab9c2d7024058b5c6c2f38a714fd3'))
###Output
_____no_output_____
###Markdown
Here is our template JSON we will use. We'll replace the `rings` property with what comes back from our State boundary query
###Code
vld_base = {
"viewLayerDefinition": {
"filter": {
"operator": "esriSpatialRelIntersects",
"value": {
"geometryType": "esriGeometryPolygon",
"geometry": {
"rings": [
[
[-13478878.103229811, 5474302.767027485],
[-12940761.424102314, 5880336.2612782335],
[-12877165.816569064, 5469410.797217235],
[-13718584.62393206, 4857914.570935988],
[-13713692.654121809, 5430275.038735235],
[-13478878.103229811, 5474302.767027485]
]
],
"spatialReference": { "wkid": 102100, "latestWkid": 3857 }
}
}
}
}
}
###Output
_____no_output_____
###Markdown
Make a copy of our template JSON
###Code
vld_to_update = deepcopy(vld_base)
###Output
_____no_output_____
###Markdown
Execute our query to get Indiana's geometry
###Code
fset = fl.query(where=where_clause, out_fields=state_abbr_field)
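# Optional sanity check (added, not part of the original workflow): the WHERE clause should return exactly one state boundary feature
print(len(fset.features))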
geom = fset.features[0].geometry
###Output
_____no_output_____
###Markdown
Set our `rings` to be that of Indiana's
###Code
vld_to_update['viewLayerDefinition']['filter']['value']['geometry']['rings'] = geom['rings']
###Output
_____no_output_____
###Markdown
Tell the feature layer view to update its definition
###Code
flview_to_update.manager.update_definition(vld_to_update)
###Output
_____no_output_____ |
session-5/Session_5_ensemble.ipynb | ###Markdown
###Code
!pip install -U -q PyDrive
import tensorflow as tf
print(tf.__version__)
if tf.__version__.startswith('2')==False :
!pip uninstall tensorflow
!pip install tensorflow-gpu
print(tf.__version__)
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from oauth2client.client import GoogleCredentials
import zipfile, os
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.applications import xception, inception_v3, resnet_v2, vgg19,densenet
from tensorflow.keras.applications.mobilenet import preprocess_input
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense,Dropout, Conv2D, Flatten, MaxPool2D
from tensorflow.keras.activations import relu,softmax
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.utils import to_categorical
# data = > https://drive.google.com/file/d/1GEKK8oRNntFyR0ZxPdcvPut-15b7CvrW/view?usp=sharing
# small Data => https://drive.google.com/file/d/1OHGNsTfvVZvWYQ7B29SYcxrLGVdeCoQb/view?usp=sharing
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)
if not os.path.exists('MLIntroData'):
os.makedirs('MLIntroData')
# Download Zip
myzip = drive.CreateFile({'id': '1GEKK8oRNntFyR0ZxPdcvPut-15b7CvrW'})
myzip.GetContentFile('data.zip')
# 3. Unzip
zip_ref = zipfile.ZipFile('data.zip', 'r')
zip_ref.extractall('MLIntroData/data')
zip_ref.close()
if os.path.exists('MLIntroData'):
print(os.listdir("MLIntroData/data/data"))
#default sizes
Image_Width = 100
Image_Height = 100
Image_Depth = 3
targetSize = (Image_Width,Image_Height)
targetSize_withdepth = (Image_Width,Image_Height,Image_Depth)
epochs = 100
x_train = []
y_train = []
y_labels = []
#define the sub folders for both training and test
training = os.path.join("MLIntroData/data/data",'train')
train_data_generator = ImageDataGenerator(preprocessing_function=xception.preprocess_input,
width_shift_range=0.2,
height_shift_range=0.2,
zoom_range=0.2,
fill_mode='nearest')
train_generator = train_data_generator.flow_from_directory(training,
batch_size=20,
target_size=targetSize,
#seed=12
shuffle=False
)
y_train = train_generator.classes
for k in train_generator.class_indices.keys():
y_labels.append(k)
y_train = to_categorical(y_train)
print(len(y_train))
# NOW WE LOAD THE PRE_TRAINED MODEL
FEATURE_EXTRACTOR = vgg19.VGG19(weights='imagenet',include_top=False,input_shape=targetSize_withdepth)
model = Sequential()
model.add(FEATURE_EXTRACTOR)
model.add(Flatten())
features_x = model.predict_generator(train_generator)
print(features_x.shape)
FEATURE_EXTRACTOR1 = xception.Xception(weights='imagenet',include_top=False,input_shape=targetSize_withdepth)
model1 = Sequential()
model1.add(FEATURE_EXTRACTOR1)
model1.add(Flatten())
features_x1 = model1.predict_generator(train_generator)
print(features_x1.shape)
FEATURE_EXTRACTOR2 = resnet_v2.ResNet152V2(weights='imagenet',include_top=False,input_shape=targetSize_withdepth)
model2 = Sequential()
model2.add(FEATURE_EXTRACTOR2)
model2.add(Flatten())
features_x2 = model2.predict_generator(train_generator)
print(features_x2.shape)
FEATURE_EXTRACTOR3 = inception_v3.InceptionV3(weights='imagenet',include_top=False,input_shape=targetSize_withdepth)
model3 = Sequential()
model3.add(FEATURE_EXTRACTOR3)
model3.add(Flatten())
features_x3 = model3.predict_generator(train_generator)
print(features_x3.shape)
FEATURE_EXTRACTOR4 = densenet.DenseNet201(weights='imagenet',include_top=False,input_shape=targetSize_withdepth)
model4 = Sequential()
model4.add(FEATURE_EXTRACTOR4)
model4.add(Flatten())
features_x4 = model4.predict_generator(train_generator)
print(features_x4.shape)
import numpy as np
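# Added comment: feature-level ensemble, stacking the flattened features of the five pre-trained backbones side by side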
all_features = np.concatenate((features_x, features_x1,features_x2,features_x3,features_x4), axis=1)
print(all_features.shape)
model = Sequential()
#add our layers
model.add(Flatten(input_shape=all_features.shape[1:]))
model.add(Dense(128,activation=relu))
model.add(Dropout(0.1))
model.add(Dense(64,activation=relu))
model.add(Dense(len(y_labels),activation='softmax'))
history = model.compile(optimizer=Adam(lr=0.0001), loss="categorical_crossentropy", metrics=['accuracy'])
model.summary()
from tensorflow.keras.callbacks import Callback
class myCallBacks(Callback):
def on_epoch_end(self, epoch, logs={}):
if (logs.get('loss')<=self.loss) :
print("\n Reached {1} loss on epoch {0}, stopping training".format(epoch+1,self.loss))
self.model.stop_training = True
def __init__(self, loss=1E-4):
self.loss = loss
epochs = 500
callBack = myCallBacks(loss=1E-7)
model.fit(all_features,y_train,epochs=epochs,shuffle=True,verbose=2,callbacks=[callBack])
test_data_generator = ImageDataGenerator(preprocessing_function=xception.preprocess_input)
test_generator = test_data_generator.flow_from_directory("MLIntroData/data/data/test",
target_size=(100,100),
shuffle=False)
FEATURE_EXTRACTOR = vgg19.VGG19(weights='imagenet',include_top=False,input_shape=targetSize_withdepth)
model0 = Sequential()
model0.add(FEATURE_EXTRACTOR)
model0.add(Flatten())
features_x = model0.predict_generator(test_generator)
print(type(features_x).__name__)
print(features_x.shape)
FEATURE_EXTRACTOR1 = xception.Xception(weights='imagenet',include_top=False,input_shape=targetSize_withdepth)
model1 = Sequential()
model1.add(FEATURE_EXTRACTOR1)
model1.add(Flatten())
features_x1 = model1.predict_generator(test_generator)
print(type(features_x1).__name__)
print(features_x1.shape)
FEATURE_EXTRACTOR2 = resnet_v2.ResNet152V2(weights='imagenet',include_top=False,input_shape=targetSize_withdepth)
model2 = Sequential()
model2.add(FEATURE_EXTRACTOR2)
model2.add(Flatten())
features_x2 = model2.predict_generator(test_generator)
print(type(features_x2).__name__)
print(features_x2.shape)
FEATURE_EXTRACTOR3 = inception_v3.InceptionV3(weights='imagenet',include_top=False,input_shape=targetSize_withdepth)
model3 = Sequential()
model3.add(FEATURE_EXTRACTOR3)
model3.add(Flatten())
features_x3 = model3.predict_generator(test_generator)
print(type(features_x3).__name__)
print(features_x3.shape)
FEATURE_EXTRACTOR4 = densenet.DenseNet201(weights='imagenet',include_top=False,input_shape=targetSize_withdepth)
model4 = Sequential()
model4.add(FEATURE_EXTRACTOR4)
model4.add(Flatten())
features_x4 = model4.predict_generator(test_generator)
print(type(features_x4).__name__)
print(features_x4.shape)
all_features = np.concatenate((features_x, features_x1,features_x2,features_x3,features_x4), axis=1)
predictions = model.predict(all_features)
from sklearn.metrics import confusion_matrix,classification_report
row_index = predictions.argmax(axis=1)
filenames = test_generator.filenames
nb_samples = len(filenames)
y_true = test_generator.classes
target_names = test_generator.class_indices.keys()
print(target_names)
print(confusion_matrix(y_true, row_index))
print('Classification Report')
target_names = test_generator.class_indices.keys()
print(classification_report(test_generator.classes, row_index, target_names=target_names))
###Output
dict_keys(['bar_chart', 'bubble_chart', 'pie_chart', 'radar_chart', 'treemap_chart'])
[[19 0 0 0 0]
[ 0 19 0 0 0]
[ 0 0 18 0 0]
[ 0 1 0 17 0]
[ 0 0 0 0 19]]
Classification Report
precision recall f1-score support
bar_chart 1.00 1.00 1.00 19
bubble_chart 0.95 1.00 0.97 19
pie_chart 1.00 1.00 1.00 18
radar_chart 1.00 0.94 0.97 18
treemap_chart 1.00 1.00 1.00 19
accuracy 0.99 93
macro avg 0.99 0.99 0.99 93
weighted avg 0.99 0.99 0.99 93
|
notebooks/archive/Dan_Waters_Summarization_Preprocessing.ipynb | ###Markdown
Data acquisition
###Code
import re
import pandas as pd
# Google Drive IDs for downloading:
# cnn_stories: 1-cnX-wKYbPffFwYjjAqLMTeg06-ZA20l
# cnn: 1qSFGn8k8UGvYyativmDBAvK0VoCdL3qb
# dailymail_stories: 1DB4jQ0EppiTbU2VFypS737UzNcwPMgWl
# dailymail: 1MESLPrObd9Vd97rBb2J5jvs1CuvXywvD
DATASET_CNN_STORIES = 'cnn_stories'
DATASET_CNN = 'cnn'
DATASET_DM_STORIES = 'dailymail_stories'
DATASET_DM = 'dailymail'
g_ids = {
DATASET_CNN_STORIES: '1-cnX-wKYbPffFwYjjAqLMTeg06-ZA20l',
DATASET_CNN: '1qSFGn8k8UGvYyativmDBAvK0VoCdL3qb',
DATASET_DM_STORIES: '1DB4jQ0EppiTbU2VFypS737UzNcwPMgWl',
DATASET_DM: '1MESLPrObd9Vd97rBb2J5jvs1CuvXywvD'
}
# create directories
!mkdir ./data
# data helpers
def download_dataset(name):
assert name in g_ids.keys(), 'Dataset not found.'
!gdown --id {g_ids[name]} -d
def unzip_dataset(name):
assert name in g_ids.keys(), 'Dataset not found.'
!mkdir ./data/{name}
!tar -xf /content/{name}.tgz -C ./data/{name}/
download_dataset(DATASET_CNN_STORIES)
unzip_dataset(DATASET_CNN_STORIES)
!cat /content/data/cnn_stories/cnn/stories/00027e965c8264c35cc1bc55556db388da82b07f.story
def split_stories_and_highlights(story):
hl_token = '@highlight'
hl_index = story.find(hl_token)
story_text = story[:hl_index] # up to the first @highlight
highlights = story[hl_index:].split(hl_token)
# strip whitespace
story_text = story_text.strip()
highlights = [h.strip() for h in highlights if len(h) > 0]
return story_text, highlights
def remove_cnn_token(line):
cnn_token = '(CNN) -- '
if cnn_token in line:
line = line[line.find(cnn_token) + len(cnn_token):]
return line
def preprocess(story_file):
story_lines = []
with open(story_file) as file:
text = file.read()
story, highlights = split_stories_and_highlights(text)
for line in story.split('\n'):
# Strip the reporting office header
line = remove_cnn_token(line)
# Lowercase and kill punctuation
line = re.sub('[^a-zA-Z0-9 ]', '', line.lower())
story_lines.append(line)
return ' '.join(story_lines)
preprocess('/content/data/cnn_stories/cnn/stories/0005d61497d21ff37a17751829bd7e3b6e4a7c5c.story')
###Output
_____no_output_____ |
Task 1 - Prediction using supervised machine learning @ GRIP (3).ipynb | ###Markdown
Author: Raghul V Task: Prediction Using Supervised Machine Learning Graduate Rotational Internship Program @ THE SPARKS FOUNDATION This simple linear regression involves two variables: it predicts the percentage score of a student from the number of hours studied. Technical Requirements
###Code
# Importing the required libraries
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Extraction of data from source
###Code
# Reading the data from the remote link
url = r"https://raw.githubusercontent.com/AdiPersonalWorks/Random/master/student_scores%20-%20student_scores.csv"
s_data = pd.read_csv(url)
print("Import Affirmative")
s_data.head(10)
###Output
Import Affirmative
###Markdown
Data Visualization
###Code
# Plotting the distribution of scores obtained by the students
s_data.plot(x='Hours', y='Scores', style='o')
plt.title('Hours vs Percentage')
plt.xlabel('Hours Studied')
plt.ylabel('Percentage Score')
plt.show()
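# Optional check (added, not in the original task): quantify the linear trend with Pearson's r
print("Correlation between Hours and Scores:", s_data['Hours'].corr(s_data['Scores']))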
###Output
_____no_output_____
###Markdown
The above graph shows a positive linear relation between the number of study hours and the percentage score. Preprocessing the data Separating the data into "attributes" (input) and "labels" (output).
###Code
x = s_data.iloc[:, :-1].values
y = s_data.iloc[:, 1].values
###Output
_____no_output_____
###Markdown
Model Training Splitting the data into training and testing sets.
###Code
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size = 0.2, random_state = 0)
regressor = LinearRegression()
regressor.fit(x_train.reshape(-1,1), y_train)
print("Training complete.")
###Output
Training complete.
###Markdown
Plotting the line of regression Since the model is trained, we can now visualize the best-fit line of regression.
###Code
# Plotting the regression line
line = regressor.coef_*x+regressor.intercept_
# Plotting for the test data
plt.scatter(x, y)
plt.plot(x, line,color='orange');
plt.show()
###Output
_____no_output_____
###Markdown
Predictions Now that the algorithm is trained, it is time to make some predictions. - To do this, we use the test set data.
###Code
# Testing the data
print(x_test)
# Predicting the model
y_pred = regressor.predict(x_test)
###Output
[[1.5]
[3.2]
[7.4]
[2.5]
[5.9]]
###Markdown
Comparing the actual result with the predicted model result.
###Code
# Comparing Actual vs Predicted
df = pd.DataFrame({'Actual': y_test, 'Predicted': y_pred})
df
#Estimating training and test score
print("Training Score:",regressor.score(x_train,y_train))
print("Test Score:",regressor.score(x_test,y_test))
# Plotting the Bar graph to depict the difference between the actual and predicted value
df.plot(kind='bar',figsize=(5,5))
plt.grid(which='major', linewidth='0.5', color='blue')
plt.grid(which='minor', linewidth='0.5', color='orange')
plt.show()
# Testing the model with our own data
hours = 9.25
test = np.array([hours])
test = test.reshape(-1, 1)
own_pred = regressor.predict(test)
print("No of Hours = {}".format(hours))
print("Predicted Score = {}".format(own_pred[0]))
###Output
No of Hours = 9.25
Predicted Score = 93.69173248737538
###Markdown
Evaluation of Model Evaluate the performance of the algorithm. - This step is particularly important to compare how well different algorithms perform on a particular dataset. - Here, several error metrics have been calculated to compare the model's performance and assess its accuracy.
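For reference (standard definitions, added here for clarity): $MAE = \frac{1}{n}\sum_i |y_i - \hat{y}_i|$, $MSE = \frac{1}{n}\sum_i (y_i - \hat{y}_i)^2$, $RMSE = \sqrt{MSE}$, and $R^2$ is the proportion of variance in the target explained by the model.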
###Code
from sklearn import metrics
print('Mean Absolute Error:',metrics.mean_absolute_error(y_test, y_pred))
print('Mean Squared Error:', metrics.mean_squared_error(y_test, y_pred))
print('Root Mean Squared Error:', np.sqrt(metrics.mean_squared_error(y_test, y_pred)))
print('R-2:', metrics.r2_score(y_test, y_pred))
###Output
Mean Absolute Error: 4.183859899002975
Mean Squared Error: 21.5987693072174
Root Mean Squared Error: 4.6474476121003665
R-2: 0.9454906892105356
|
notebooks/2.8 Gradient Bandit Algorithms.ipynb | ###Markdown
Gradient Bandit Algorithms Iván Vallés Pérez - 2018 The idea of this notebook is to run a multi-armed bandit solution based on gradients: Gradient Bandit Algorithms. They will be compared with Sample Average Multi-Armed Bandits as a baseline.
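For reference (the standard formulation, added here): the gradient bandit keeps a numerical preference $H_t(a)$ for every action, turns the preferences into a softmax policy $\pi_t(a) = \frac{e^{H_t(a)}}{\sum_b e^{H_t(b)}}$, and after receiving reward $R_t$ for the chosen action $A_t$ updates all preferences with $H_{t+1}(a) = H_t(a) + \alpha\,(R_t - \bar{R}_t)\,(\mathbf{1}_{a=A_t} - \pi_t(a))$, where $\bar{R}_t$ is the running average reward; this is exactly what the gradient-bandit cell further below implements.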
###Code
import numpy as np
import random
import matplotlib.pyplot as plt
from IPython.display import clear_output
import pandas as pd
%matplotlib inline
class TestBed:
def __init__(self, n_actions, scale):
self.n_actions = n_actions
self.actions_q_values = np.random.normal(size=n_actions, scale=scale)
self.initial_action_q_values = self.actions_q_values
def reset_to_initial_q_values(self):
np.random.seed(655321)
self.actions_q_values = self.initial_action_q_values
def update_action_values(self):
self.actions_q_values = self.actions_q_values + np.random.normal(size=self.n_actions, scale=0.01)
def get_reward(self, action):
return(np.random.normal(self.actions_q_values[action]))
def get_optimal_reward(self):
return(np.max(self.actions_q_values))
def get_optimal_action(self):
return(np.argmax(self.actions_q_values))
env = TestBed(1000, scale=0.75)
print("Optimal action value achievable: {}\nOptimal action: {}"
.format(env.get_optimal_reward(), env.get_optimal_action()))
n = np.ones(env.n_actions)
q = np.zeros(env.n_actions)
rewards = [0]
q_error = [np.mean(np.abs(q-env.actions_q_values))]
actions = [1/env.n_actions]
epsilons = []
lamb = 1-1/100000
epsilon =0.5
for episode in range(100000):
epsilon = epsilon * lamb + 0.01 * (1-lamb) # Epsilon exponential decaying
rand_epsilon = random.random()
if rand_epsilon > epsilon:
# Greedy
action = np.argmax(q)
reward = env.get_reward(action)
else:
# Random
action = random.randint(0, env.n_actions-1)
reward = env.get_reward(action)
n[action] = n[action] + 1
q[action] = q[action] + (1.0/n[action])*(reward-q[action])
epsilons.append(epsilon)
actions.append(action)
rewards.append(reward)
q_error.append(np.mean(np.abs(q-env.actions_q_values)))
if episode % 2000==0:
clear_output(True)
plt.figure(figsize=(20,4))
plt.subplot(141)
plt.plot(pd.Series(rewards).ewm(span=2000).mean())
plt.title("Average reward")
plt.subplot(142)
plt.plot(pd.Series(q_error).ewm(span=2000).mean())
plt.title("Error in the q action-value function")
plt.subplot(143)
plt.plot(pd.Series(actions==env.get_optimal_action()).ewm(span=2000).mean())
plt.title("% optimal action was taken")
plt.subplot(144)
plt.plot(pd.Series(epsilons))
plt.title("Epsilon Value")
plt.show()
softmax = lambda x: np.exp(x)/np.sum(np.exp(x))
avg_reward=0
h = np.zeros(env.n_actions)
n = np.ones(env.n_actions)
actions = [1/env.n_actions]
confidence=[0]
rewards = [0]
alpha = 0.1
epsilons = []
lamb = 1-1/100000
for episode in range(100000):
pi = softmax(h)
action = np.random.choice(range(len(pi)), p=pi)
ohe_action = np.zeros(env.n_actions)
np.put(ohe_action, action, 1)
reward = env.get_reward(action)
n[action] = n[action] + 1
avg_reward = avg_reward + (1.0/(episode+1))*(reward-avg_reward)
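# Added comment: gradient-bandit preference update; the chosen action is pushed up (down) when the reward is above (below) the running average, and the other actions move the opposite way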
update = alpha*(reward-avg_reward)*(1-pi)*ohe_action \
- alpha*(reward-avg_reward)*pi*(1-ohe_action)
h = h+update
actions.append(action)
rewards.append(reward)
confidence.append(np.max(pi))
if episode % 2000==0:
clear_output(True)
plt.figure(figsize=(20,4))
plt.subplot(131)
plt.plot(pd.Series(rewards).ewm(span=2000).mean())
plt.title("Average reward")
plt.subplot(132)
plt.plot(pd.Series(confidence).ewm(span=2000).mean())
plt.title("Confidence in the probability distribution (max($\pi$))")
plt.subplot(133)
plt.plot(pd.Series(actions==env.get_optimal_action()).ewm(span=2000).mean())
plt.title("% optimal action was taken")
plt.show()
###Output
_____no_output_____ |
content/Pandas Operations/Split and Join Columns.ipynb | ###Markdown
Pandas Dataframe
###Code
from IPython.display import HTML
import pandas as pd
import numpy as np
sample_data = {'Name': ['Jason Miller', 'Molly Jacobson', 'Tina Milner', 'Jake', 'Amy Schumaker'],
'Age': [31, 25, 32, 29, 28],
'LanguagesKnown': ["C_Java_C++","Python_C_Java_Spring","C_Spring_C++",
"Jason_Java_Node.js","Angular.js_Java_C++_Python"],
'Salary': [125000, 94000, 57000, 62100, 70001]}
df = pd.DataFrame(sample_data, columns = ['Name', 'Age', 'LanguagesKnown', 'Salary'])
df
result = pd.concat([df, df['LanguagesKnown'].str.split('_',expand=True)], axis=1, ignore_index=False )
result
result.columns
print( pd.get_dummies(result[0]) )
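# Added comment: str.get_dummies('_') below one-hot encodes each '_'-separated language into its own 0/1 column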
dataFrame = df.LanguagesKnown.astype(str).str.get_dummies('_')
dataFrame
result = pd.concat([df, dataFrame], axis=1, ignore_index=False)
result
del result['LanguagesKnown']
result
###Output
_____no_output_____ |
notebooks/57-PRMT-2270--QA-cutoff-14-vs-28-recategorisation.ipynb | ###Markdown
PRMT-2270 14 vs 28 day cutoff with re-categorisation
###Code
import pandas as pd
import numpy as np
pd.set_option("display.max_rows", None, "display.max_columns", None)
transfer_file_14_day_cutoff = "s3://prm-gp2gp-data-sandbox-dev/transfers-sample-6/2021-5-transfers_14_day_conversation_cutoff.parquet"
transfers_raw_14_day_cutoff = pd.read_parquet(transfer_file_14_day_cutoff)
transfers_14_day_cutoff = transfers_raw_14_day_cutoff.copy()
transfers_14_day_cutoff["status"] = transfers_14_day_cutoff["status"].str.replace("_", " ").str.title()
transfer_file_28_day_cutoff = "s3://prm-gp2gp-data-sandbox-dev/transfers-sample-6/2021-5-transfers_28_day_conversation_cutoff.parquet"
transfers_raw_28_day_cutoff = pd.read_parquet(transfer_file_28_day_cutoff)
transfers_28_day_cutoff = transfers_raw_28_day_cutoff.copy()
transfers_28_day_cutoff["status"] = transfers_28_day_cutoff["status"].str.replace("_", " ").str.title()
outcome_counts_14_day_cutoff = transfers_14_day_cutoff.fillna("N/A").groupby(by=["status", "failure_reason"]).agg({"conversation_id": "count"})
outcome_counts_14_day_cutoff = outcome_counts_14_day_cutoff.rename({"conversation_id": "Number of transfers", "failure_reason": "Failure Reason"}, axis=1)
outcome_counts_14_day_cutoff["% of transfers"] = (outcome_counts_14_day_cutoff["Number of transfers"] / outcome_counts_14_day_cutoff["Number of transfers"].sum()).multiply(100)
outcome_counts_14_day_cutoff
outcome_counts_28_day_cutoff = transfers_28_day_cutoff.fillna("N/A").groupby(by=["status", "failure_reason"]).agg({"conversation_id": "count"})
outcome_counts_28_day_cutoff = outcome_counts_28_day_cutoff.rename({"conversation_id": "Number of transfers", "failure_reason": "Failure Reason"}, axis=1).astype('int32')
outcome_counts_28_day_cutoff["% of transfers"] = (outcome_counts_28_day_cutoff["Number of transfers"] / outcome_counts_28_day_cutoff["Number of transfers"].sum()).multiply(100)
outcome_counts_28_day_cutoff
# High level summary of diff based on status
transfers_28_day_cutoff.fillna("N/A").groupby(by=["status", "failure_reason"]).agg({"conversation_id": "count"}).rename(columns={"conversation_id": "total difference"}) - transfers_14_day_cutoff.fillna("N/A").groupby(by=["status", "failure_reason"]).agg({"conversation_id": "count"}).rename(columns={"conversation_id": "total difference"})
outcome = outcome_counts_14_day_cutoff.compare(outcome_counts_28_day_cutoff, keep_equal=True, keep_shape=True).round(2).rename(columns={"self":"14 day cutoff","other":"28 day cutoff"})
outcome["Difference"] = (outcome["Number of transfers"]["28 day cutoff"] - outcome["Number of transfers"]["14 day cutoff"]).astype('int32')
outcome["% Difference"] = (outcome["% of transfers"]["28 day cutoff"] - outcome["% of transfers"]["14 day cutoff"])
outcome[[
('Number of transfers', '14 day cutoff'),
('Number of transfers', '28 day cutoff'),
('Difference', ''),
('% of transfers', '14 day cutoff'),
('% of transfers', '28 day cutoff'),
('% Difference', '')
]]
###Output
_____no_output_____ |
books/oop.ipynb | ###Markdown
Object Oriented Programming
###Code
# Class
class Circus:
# mutable class variable, be careful: it is shared by all instances
animals = []
# immutable class variable
count_of_animal = 0
def welcome_new(self, animal):
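# Added note: 'animals' is the class-level list defined above, so the append below is shared by every Circus instance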
self.animals.append(animal)
self.count_of_animal = self.count_of_animal + 1
class Animal:
name_type = None
# this is the initializer, called when a new object of the class is created (with the args passed in)
def __init__(self, name):
self.name = name
print(f"New animal name {self.name} is born in the circus")
# Inheritance
class Dog(Animal):
name_type = "Dog"
def sound(self):
print("Woop Woop")
# Inheritance
class Elephant(Animal):
name_type = "Elephant"
def sound(self):
print("LOL I dont know how it sounds")
circus1 = Circus()
gogo = Elephant("GoGo")
circus1.welcome_new(gogo)
miumiu = Elephant("Miumiu")
circus1.welcome_new(miumiu)
circus2 = Circus()
lulu = Dog("LuLu")
circus2.welcome_new(lulu)
kiki = Dog("KiKi")
circus2.welcome_new(kiki)
print("========= Circus 1")
print(circus1.count_of_animal)
print(circus1.animals) # Weird ??? Nope, it's mutable class variable
print("========= Circus 2")
print(circus2.count_of_animal)
print(circus2.animals)
for a in circus2.animals:
a.sound()
# classmethod, staticmethod
class Tool:
secret_number = 42
# a classmethod needs to know which class it was called on, so it receives that class as `cls`
@classmethod
def get_secret(cls):
return cls.secret_number
class Hammer(Tool):
secret_number = 99
# a staticmethod doesn't need self or cls; it sits in the class just to keep related business logic together
@staticmethod
def hit(nail):
print(f"Hitted the nail {nail}")
print(Tool.get_secret())
print(Hammer.get_secret())
Hammer.hit("nail1")
Hammer.hit("nail2")
# Magic methods
class Num1:
def __init__(self, num):
self.num = num
def __add__(self, other_num):
return Num1(self.num + other_num.num)
def __sub__(self, other_num):
return Num1(self.num - other_num.num)
def __eq__(self, other_num):
return self.num == other_num.num
num1 = Num1(6)
num2 = Num1(9)
num3 = num1 + num2
num4 = num3 - num2
print(num1, num1.num)
print(num2, num2.num)
print(num3, num3.num)
print(num1 == num2)
print(num1 == num4)
###Output
<__main__.Num1 object at 0x7f0fe5df55b0> 6
<__main__.Num1 object at 0x7f0fe5df5d30> 9
<__main__.Num1 object at 0x7f0fe5df0610> 15
False
True
|
Demand Day Classification (StatsCan Weather Data).ipynb | ###Markdown
Read Data
###Code
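# Assumed imports/setup (added): the original import cell is not in this extract, so these are
# inferred from the names used below; `seed` is a placeholder value, not the author's original
import numpy as np, pandas as pd, seaborn as sns, matplotlib.pyplot as plt
import xgboost as xg
from xgboost import XGBClassifier
from sklearn.cross_validation import StratifiedKFold  # pre-0.20 API, matching StratifiedKFold(y, n_folds=...)
from sklearn.model_selection import GridSearchCV, validation_curve
from sklearn.metrics import make_scorer, roc_auc_score, recall_score, confusion_matrix, classification_report
seed = 0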
demand_data = pd.read_csv("data/ZonalDemands_2003-2017.csv")
weather_data = pd.read_csv("data/weather_data_2002_2018.csv",index_col=0)
demand_data['Date'] = pd.to_datetime(demand_data['Date']) + pd.to_timedelta(demand_data['Hour'], unit='h')
#remove zones
demand_data.drop(demand_data.columns[3:],axis = 1, inplace=True)
demand_data.head()
weather_data['Date/Time'] = pd.to_datetime(weather_data['Date/Time'])
weather_data.head()
###Output
_____no_output_____
###Markdown
Merge Datasets
###Code
weather_data = weather_data.rename(index=str, columns = {"Date/Time":"Date"})
data = demand_data.merge(right=weather_data, how='left', on='Date')
data.head(2)
data.drop('Time', axis = 1, inplace = True)
data.info(verbose=True)
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 128616 entries, 0 to 128615
Data columns (total 25 columns):
Date 128616 non-null datetime64[ns]
Hour 128616 non-null int64
Total Ontario 128616 non-null int64
Year 128616 non-null int64
Month 128616 non-null int64
Day 128616 non-null int64
Temp (°C) 126884 non-null float64
Temp Flag 6 non-null object
Dew Point Temp (°C) 126885 non-null float64
Dew Point Temp Flag 5 non-null object
Rel Hum (%) 126885 non-null float64
Rel Hum Flag 5 non-null object
Wind Dir (10s deg) 0 non-null float64
Wind Dir Flag 126883 non-null object
Wind Spd (km/h) 283 non-null float64
Wind Spd Flag 126883 non-null object
Visibility (km) 0 non-null float64
Visibility Flag 13295 non-null object
Stn Press (kPa) 123095 non-null float64
Stn Press Flag 3795 non-null object
Hmdx 21457 non-null float64
Hmdx Flag 0 non-null float64
Wind Chill 0 non-null float64
Wind Chill Flag 0 non-null float64
Weather 0 non-null float64
dtypes: datetime64[ns](1), float64(12), int64(5), object(7)
memory usage: 25.5+ MB
###Markdown
Create Dummies for Categorical Features
###Code
data_cat_features = [pcol for pcol in data.columns if data[pcol].dtype == 'object']
data_cat_features
data = pd.get_dummies(data, columns=data_cat_features)
data.info(verbose=True)
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 128616 entries, 0 to 128615
Data columns (total 25 columns):
Date 128616 non-null datetime64[ns]
Hour 128616 non-null int64
Total Ontario 128616 non-null int64
Year 128616 non-null int64
Month 128616 non-null int64
Day 128616 non-null int64
Temp (°C) 126884 non-null float64
Dew Point Temp (°C) 126885 non-null float64
Rel Hum (%) 126885 non-null float64
Wind Dir (10s deg) 0 non-null float64
Wind Spd (km/h) 283 non-null float64
Visibility (km) 0 non-null float64
Stn Press (kPa) 123095 non-null float64
Hmdx 21457 non-null float64
Hmdx Flag 0 non-null float64
Wind Chill 0 non-null float64
Wind Chill Flag 0 non-null float64
Weather 0 non-null float64
Temp Flag_M 128616 non-null uint8
Dew Point Temp Flag_M 128616 non-null uint8
Rel Hum Flag_M 128616 non-null uint8
Wind Dir Flag_M 128616 non-null uint8
Wind Spd Flag_M 128616 non-null uint8
Visibility Flag_M 128616 non-null uint8
Stn Press Flag_M 128616 non-null uint8
dtypes: datetime64[ns](1), float64(12), int64(5), uint8(7)
memory usage: 19.5 MB
###Markdown
Feature Creation/Engineering
###Code
#add day of week (pandas dayofweek: Mon=0 ... Sun=6)
data['Day of Week'] = data['Date'].apply(lambda x: x.dayofweek)
#add Heating/Cooling Degree Days
talpha = 14.5
tbeta = 14.5
data['CDD'] = (data['Temp (°C)']-talpha)
data['HDD'] = (tbeta-data['Temp (°C)'])
data.loc[data['CDD'] < 0, 'CDD'] = 0
data.loc[data['HDD'] < 0, 'HDD'] = 0
data.set_index('Date',drop=True, inplace = True)
data.head(1)
#add top five days (add 1 for whole day i.e 24 1's per day or 24*5 1's per year)
top_days = 5
data['topdays'] = 0
for year in range(data['Year'].min(),data['Year'].max()+1):
indices = data[data['Year'] == year].resample('D').max().nlargest(top_days,'Total Ontario').index
for i in range(len(indices)):
y = data[data.index == indices[i]]['Year'].values[0]
m = data[data.index == indices[i]]['Month'].values[0]
d = data[data.index == indices[i]]['Day'].values[0]
data.loc[data[(data['Year'] == y) & (data['Month'] == m) & (data['Day'] == d)].index, 'topdays'] = 1
#data[(data['topdays']==1) & (data['Year'] == 2017)]
sns.countplot(x='topdays',data=data)
###Output
_____no_output_____
###Markdown
severely imbalanced, will need to weight topdays more heavily
###Code
data.head(1)
###Output
_____no_output_____
###Markdown
Clean Data
###Code
#Remove Features witrh 80% Missing
data = data[data.columns[data.isnull().mean() < 0.80]]
#get target variable
y = data['topdays']
del data['topdays']
del data['Year']
data.head(1)
y[0:5]
###Output
_____no_output_____
###Markdown
XG Boost
###Code
data.info()
###Output
<class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 128616 entries, 2003-05-01 01:00:00 to 2018-01-01 00:00:00
Data columns (total 18 columns):
Hour 128616 non-null int64
Total Ontario 128616 non-null int64
Month 128616 non-null int64
Day 128616 non-null int64
Temp (°C) 126884 non-null float64
Dew Point Temp (°C) 126885 non-null float64
Rel Hum (%) 126885 non-null float64
Stn Press (kPa) 123095 non-null float64
Temp Flag_M 128616 non-null uint8
Dew Point Temp Flag_M 128616 non-null uint8
Rel Hum Flag_M 128616 non-null uint8
Wind Dir Flag_M 128616 non-null uint8
Wind Spd Flag_M 128616 non-null uint8
Visibility Flag_M 128616 non-null uint8
Stn Press Flag_M 128616 non-null uint8
Day of Week 128616 non-null int64
CDD 126884 non-null float64
HDD 126884 non-null float64
dtypes: float64(6), int64(5), uint8(7)
memory usage: 17.6 MB
###Markdown
GridSearch
###Code
cv = StratifiedKFold(y, n_folds=10, shuffle=True, random_state=seed)
params_grid = {
'max_depth': [2,3,4,5],
'n_estimators': [25,50,100],
'learning_rate': np.linspace(0.01, 2, 5),
'colsample_bytree': np.linspace(0.05, 1, 5),
}
params_fixed = {
'objective': 'binary:logistic',
'silent': 1,
'scale_pos_weight': float(np.sum(y == 0)) / np.sum(y == 1), #imbalanced set, this weights topdays more heavily
}
#score based on recall (imbalanced set)
scoring = {'AUC': make_scorer(roc_auc_score), 'Recall': make_scorer(recall_score)}
bst_grid = GridSearchCV(
estimator=XGBClassifier(**params_fixed, seed=seed),
param_grid=params_grid,
cv=cv,
scoring=scoring,
refit='AUC',
verbose = 10,
)
bst_grid.fit(data,y) #started 4:29
bst_grid.best_score_
bst_grid.best_params_
y_pred = bst_grid.best_estimator_.predict(data)
print(confusion_matrix(y,y_pred))
print(classification_report(y,y_pred))
#print out important features and plot
xg.plot_importance(bst_grid.best_estimator_)
###Output
_____no_output_____
###Markdown
10 Fold Cross Validation with Optimal Parameters
###Code
#10 fold cross-validation 10:07-10:16
cv = StratifiedKFold(y, n_folds=10, shuffle=True, random_state=seed)
default_params = {
'objective': 'binary:logistic',
'max_depth': 5,
'learning_rate': 0.5075,
'silent': 1.0,
'scale_pos_weight': float(np.sum(y == 0)) / np.sum(y == 1),
}
n_estimators_range = np.linspace(100, 200, 10).astype('int')
train_scores, test_scores = validation_curve(
XGBClassifier(**default_params),
data, y,
param_name = 'n_estimators',
param_range = n_estimators_range,
cv=cv,
scoring = 'roc_auc',
)
train_scores_mean = np.mean(train_scores, axis=1)
train_scores_std = np.std(train_scores, axis=1)
test_scores_mean = np.mean(test_scores, axis=1)
test_scores_std = np.std(test_scores, axis=1)
fig = plt.figure(figsize=(10, 6), dpi=100)
plt.title("Validation Curve with XGBoost (eta = 0.3)")
plt.xlabel("number of trees")
plt.ylabel("AUC")
plt.ylim(0.999, 1.0001)
plt.plot(n_estimators_range,
train_scores_mean,
label="Training score",
color="r")
plt.plot(n_estimators_range,
test_scores_mean,
label="Cross-validation score",
color="g")
plt.fill_between(n_estimators_range,
train_scores_mean - train_scores_std,
train_scores_mean + train_scores_std,
alpha=0.2, color="r")
plt.fill_between(n_estimators_range,
test_scores_mean - test_scores_std,
test_scores_mean + test_scores_std,
alpha=0.2, color="g")
plt.axhline(y=1, color='k', ls='dashed')
plt.legend(loc="best")
plt.show()
i = np.argmax(test_scores_mean)
print("Best cross-validation result ({0:.2f}) obtained for {1} trees".format(test_scores_mean[i], n_estimators_range[i]))
###Output
_____no_output_____ |
mathematics/linear_algebra/Eigenvalues_and_eigenvectors.ipynb | ###Markdown
Eigenvalues and eigenvectorsTo introduce eigenvalues and eigenvectors, let us begin with an example of matrix-vector multiplication. Consider the following square matrix $A \in \mathbb{R}^{2 \times 2}$ multiplying a vector $\mathbf{u}$:$$ A \mathbf{u} = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}\begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 2 \\ 1 \end{pmatrix} $$We see that the multiplication has rotated and extended the vector. Let us now consider a multiplication with a different vector $\mathbf{v}$:$$ A \mathbf{v} = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}\begin{pmatrix} 1 \\ 1 \end{pmatrix} = \begin{pmatrix} 3 \\ 3 \end{pmatrix} =3 \begin{pmatrix} 1 \\ 1 \end{pmatrix} $$Now the product is a vector which is not rotated, but is only scaled by a factor of 3. We call such vectors **eigenvectors** - an eigenvector (or *characteristic vector*) of a square matrix $A$ is a vector which when operated on by $A$ gives a scalar multiple of itself. These scalars are called **eigenvalues** (or *characteristic values*). We can write this as $A \mathbf{v} = \lambda \mathbf{v}$, where $\mathbf{v}$ is an eigenvector and $\lambda$ is an eigenvalue corresponding to that eigenvector.The above example has two eigenvectors: $\mathbf{v}_1 = (1, 1)^T$ and $\mathbf{v}_2 = (1, -1)^T$ with respective eigenvalues $\lambda_1 = 3$ and $\lambda = 1$. The figure below shows the effect of this transformation on point coordinates in the plane. Notice how the blue and purple vectors (which are parallel to eigenvectors) have their directions preserved, while every otherwise oriented vector (e.g. red vectors) are rotated.```{figure} linalgdata/Eigenvectorsgif.gif---name: eigvectors---source: [Wikipedia](https://en.wikipedia.org/wiki/Eigenvalues_and_eigenvectors)``` Another way of representing this is by transforming a circle. We can think of a circle as a collection of points, each representing a vector from the origin.Let us consider the effect of a transformation matrix $ D = \begin{pmatrix} 2 & 0.5 \\ 0.5 & 0.5 \end{pmatrix} $ on a circle.
###Code
import numpy as np
import matplotlib.pyplot as plt
theta = np.linspace(0, 2*np.pi, 500)
r = np.sqrt(0.6)
x1 = r*np.cos(theta)
x2 = r*np.sin(theta)
D = np.array([[2, 0.5],
[0.5, 0.5]])
ell = D @ np.array([x1, x2])
fig, ax = plt.subplots(1)
ax.plot(x1, x2, '-k')
ax.plot(ell[0, :], ell[1, :], '-r')
for i in [24, 165, 299]:
ax.plot(x1[i], x2[i], 'ko', zorder=10)
ax.plot(ell[0, i], ell[1, i], 'ro')
ax.plot([x1[i], ell[0, i]], [x2[i], ell[1, i]], '--k')
ax.quiver(0.95709203, 0.28978415, scale=4, alpha=0.5)
ax.quiver(-0.28978415, 0.95709203, scale=4, alpha=0.5)
ax.set_xlim(-2, 2)
ax.set_ylim(-1, 1)
ax.set_aspect(1)
plt.show()
###Output
_____no_output_____
###Markdown
The vectors now map an ellipse! Some vectors got rotated and squished, while some got rotated and elongated (scaled). However, there are some vectors which only got scaled and did not get rotated. These vectors are in the direction of the eigenvectors (grey arrows). Characteristic polynomialHow do we actually find eigenvectors and eigenvalues? Let us consider a general square matrix $A \in \mathbb{C}^{n \times n}$ with eigenvectors $\mathbf{x} \in \mathbb{C}^n$ and eigenvalues $\lambda \in \mathbb{C}$ such that:$$ A \mathbf{x} = \lambda \mathbf{x}. $$After subtracting the right hand side:$$ A \mathbf{x} - \lambda \mathbf{x} = \mathbf{0}$$$$ (A -\lambda I) \mathbf{x} = \mathbf{0} $$Therefore, we are solving a homogeneous system of linear equations, but we want to find non-trivial solutions ($ \mathbf{x} \neq \mathbf{0} $). Recall from the section on null spaces that a homogeneous system will have non-zero solutions iff the matrix of the system is singular, i.e.$$ \det(A - \lambda I) = 0. $$This is a polynomial of degree $n$ with roots $\lambda_1, \lambda_2, \dots, \lambda_k$, $k \leq n$. This polynomial is termed the **characteristic polynomial** of $A$, where the roots of the polynomial are the eigenvalues. The eigenvectors are then found by plugging each eigenvalue back in $ (A -\lambda I) \mathbf{x} = \mathbf{0} $ and solving it. ExampleLet us find the eigenvalues and eigenvectors of the following matrix $A \in \mathbb{R}^{3 \times 3}$:$$ A = \begin{pmatrix} 2 & 1 & 0 \\ 1 & 2 & 1 \\ 0 & 1 & 2 \end{pmatrix} $$The characteristic polynomial is:$$ \det (A - \lambda I) = \left | \begin{array}{ccc} 2 - \lambda & 1 & 0 \\ 1 & 2 - \lambda & 1 \\ 0 & 1 & 2 - \lambda \end{array} \right | \\= (2 - \lambda)[(2-\lambda)^2 - 1] - 1 \\= - \lambda^3 + 6 \lambda^2 - 10 \lambda + 4 = (2 - \lambda)(\lambda^2 - 4\lambda + 2) = 0$$The roots of this polynomial, which are the eigenvalues, are $ \lambda_{1, 2, 3} = 2, 2 \pm \sqrt{2} $. Now to find the eigenvectors we need to plug these values into $(A - \lambda I)\mathbf{x} = 0$.Consider first $\lambda = 2$:$$ (A - \lambda I)\mathbf{x} =\begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix},$$where $x_1$, $x_2$ and $x_3$ are entries of the eigenvector $\mathbf{x}$. The solution may be obvious to some, but let us calculate it by solving this system of linear equations. Let us write it with an augmented matrix and reduce it to RREF by swapping the 1st and 2nd row and subtracting the 1st row (2nd after swapping) from the last row:$$ \left ( \begin{array}{ccc|c} 0 & 1 & 0 & 0 \\ 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \end{array} \right ) \longrightarrow \left ( \begin{array}{ccc|c} 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{array} \right ). $$As expected, there is no unique solution because we required before that $(A - \lambda I) $ is singular. Therefore, we can parameterise the first equation: $x_1 = -x_3$ in terms of the free variable $x_3 = t, t \in \mathbb{R}$. We read from the second equation that $x_2 = 0$. The solution set is then $ \{ (-t, 0, t)^T, t \in \mathbb{R} \}$. If we let $t = 1$ then the eigenvector $\mathbf{x}_1$ corresponding to eigenvalue $ \lambda_1 = 2$ is $\mathbf{x}_1 = (-1, 0, 1)^T$. We do this because we only care about the direction of the eigenvector and can scale it arbitrarily.We leave it to the readers to convince themselves that the other two eigenvectors are $ (1, \sqrt{2}, 1)^T $ and $ (1, -\sqrt{2}, 1)^T $. 
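A quick numerical check of the worked example above (an optional addition, not in the original text): `np.linalg.eig` should return the eigenvalues $2+\sqrt{2}$, $2$ and $2-\sqrt{2}$ in some order, with the columns of the returned matrix parallel to the eigenvectors found by hand.
###Code
import numpy as np
A = np.array([[2, 1, 0],
[1, 2, 1],
[0, 1, 2]])
evals, evecs = np.linalg.eig(A)
print(evals)  # roughly 3.414, 2.0 and 0.586, in some order
print(evecs)  # columns are unit eigenvectors, each parallel to one of the vectors found above
###Output
_____no_output_____
###Markdown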
Example: Algebraic and geometric multiplicity```{index} Algebraic multiplicity``````{index} Geometric multiplicity```Now consider a matrix:$$ A = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -1 \end{pmatrix} \RightarrowA - \lambda I = \begin{pmatrix} 1-\lambda & 0 & 0 \\ 0 & 1-\lambda & 0 \\ 0 & 0 & -1 - \lambda \end{pmatrix}. $$The characteristic equation is $\det(A - \lambda I) = (\lambda - 1)(\lambda - 1)(\lambda + 1) = (\lambda - 1)^2(\lambda + 1) = 0 $.We see that the eigenvalues are $\lambda_1 = 1, \lambda_2 = -1$, where $\lambda_1$ is repeated twice. We therefore say that the **algebraic multiplicity**, which is the number of how many times an eigenvalue is repeated, of $\lambda_1$ is 2 and of $\lambda_2$ it is 1.Let us now find the eigenvectors corresponding to these eigenvalues. For $\lambda_1 = 1$:$$ (A - I)\mathbf{x} = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & -2 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix} $$The only constraint on our eigenvector is that $x_3 = 0$, whereas there are no constraints on $x_1$ and $x_2$ - they can be whatever we want. In cases like this, we still try to define as many linearly independent eigenvectors as possible, which does not have to be equal to the algebraic multiplicity of an eigenvalue. In our case, we can easily define two linearly independent vectors by choosing $x_1=1, x_2=0$ for one vector and $x_1=0, x_2=1$ for the other. Therefore, we managed to get two linearly independent eigenvectors corresponding to the same eigenvalue:$$ \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}, \quad \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}. $$The number of linearly independent eigenvectors corresponding to an eigenvalue $\lambda$ is called the **geometric multiplicity** of that eigenvalue. The algebraic multiplicity of $\lambda$ is equal or greater than its geometric multiplicity. An eigenvalue for which algebraic multiplicity $>$ geometric multiplicity is called, rather harshly, a *defective* eigenvalue.Now consider the non-repeated eigenvalue $\lambda_2 = -1$:$$ (A - I)\mathbf{x} = \begin{pmatrix} -2 & 0 & 0 \\ 0 & -2 & 0 \\ 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}. $$We have $x_1 = 0, x_2 = 0$ and there is no constraint on $x_3$, so now $x_3$ can be any number we want. For simplicity we choose it to be 1. Then the eigenvector is simply$$ \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} $$and we conclude that the geometric multiplicity of $\lambda_2$ is 1.Finally, we check if our findings agree with that of NumPy in Python:
###Code
A = np.array([[1, 0, 0],
[0, 1, 0],
[0, 0, -1]])
evals, evecs = np.linalg.eig(A)
print('A = \n', A)
print('Eigenvalues:', evals)
print('Eigenvectors: \n', evecs)
###Output
A =
[[ 1 0 0]
[ 0 1 0]
[ 0 0 -1]]
Eigenvalues: [ 1. 1. -1.]
Eigenvectors:
[[1. 0. 0.]
[0. 1. 0.]
[0. 0. 1.]]
###Markdown
Example: Fibonacci numbersThe [Fibonacci numbers](https://en.wikipedia.org/wiki/Fibonacci_number), often denoted by $F_n$, form a *Fibonacci sequence* where each number is the sum of the two preceding numbers. Let us write the beginning of that sequence, starting from 0 and 1:$$ 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, \dots $$We can express this with the help of a Fibonacci matrix:$$ \begin{pmatrix} F_n \\ F_{n-1} \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} F_{n-1} \\ F_{n-2} \end{pmatrix}.$$This is a normal system of equations where the first one is what we are after $ F_n = F_{n-1} + F_{n-2} $ and the second one is trivial $F_{n-1} = F_{n-1}$. Let us plot some of these points $(F_n, F_{n-1})$:
###Code
points = np.array([[1, 0], [1, 1], [2, 1], [3, 2], [5, 3], [8, 5], [13, 8], [21, 13]])
plt.plot(points[:, 0], points[:, 1], 'o', alpha=0.9)
plt.plot([0, 46368], [0, 28657])
plt.xlim(0, 25)
plt.ylim(0, 18)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
It looks like these points plot very closely onto the line which we also plotted in the figure. As it turns out, that line is an eigenvector of the Fibonacci matrix. The eigenvalue corresponding to that eigenvector is $\approx 1.618034$, the *golden ratio*.What that means is that each point on that line will get scaled by the golden ratio further along that line. For the first several elements of our sequence this will not be entirely precise because they do not lie exactly on that line. However, for $F_n$ where $n \to \infty$, this error goes to zero. Therefore, given a very large element in the Fibonacci sequence, say 196418, we can find the next term by multiplying it by the golden ratio: $196418 \cdot 1.618034 = 317811.002$. Indeed, the next element is $317811$.But what if we do not start our sequence from 0 and 1? Our findings would still be the same, because the eigenvalues and eigenvectors are properties of our operator, in this case the Fibonacci matrix. So no matter how we start our sequence, the elements of that sequence will be spaced out by the eigenvalue along the eigenvector line. Let us quickly show this for a selection of different points which will all move closer onto the eigenvector after each transformation:
###Code
import numpy as np
from matplotlib import pyplot as plt
from matplotlib.animation import FuncAnimation
def init():
point.set_data([], [])
return point,
def update_plot(i):
global points
if i > 0:
points = A @ points
point.set_data(points[0, :], points[1, :])
return point,
x = np.linspace(0, 100, 20)
x[0] += 0.001
y = np.linspace(0, 100, 20)
X, Y = np.meshgrid(x, y)
points = []
for i in range(len(X)):
for j in range(len(X)):
if Y[i, j] / X[i, j] <= 1:
points.append([X[i, j], Y[i, j]])
points = np.array(points).T
A = np.array([[1, 1],
[1, 0]])
fig = plt.figure(figsize=(10, 7))
ax = plt.axes(xlim=(0, 500), ylim=(0, 350))
ax.plot([0, 46368], [0, 28657], zorder=10)
point, = ax.plot([], [], 'o', alpha = 0.9)
anim = FuncAnimation(fig, update_plot, init_func=init, frames=7, interval=1000, blit=True)
plt.show()
# anim.save('fibonacci.mp4', writer='ffmpeg')
###Output
_____no_output_____ |
attack-master/Fairness_attack/results/plot_results_all_metrics.ipynb | ###Markdown
__This notebook has been made to evaluate which metric (max, mean, last) is the most comparable to the results of the authors. Please keep in mind that:__> This is only applicable if the provided datasets are used (german, compas and drug).> Make sure to run the attacks with epsilons from 0.0 up to 1, otherwise it will most likely throw errors.> Make sure to run it for all three the attacks. Since the code is written to plot 9 figures (mean, max and last) for all three the metrics.> Make sure to uncomment the code to evaluate (this one is not used to generate the results).
###Code
import glob
import csv
import pandas as pd
import numpy as np
import matplotlib.pylab as plt
methods = ["/IAF-", "/RAA-", "/NRAA-"]
folder_measures = ["test_accs", "parities and biases"]
measures = ["test_acc", "parity", "EO bias"]
time_and_it = "time_and_it"
time_and_it_columns = ["time_taken_seconds", "iteration"]
# # seed still needs to be implemented
# def get_test_dicts(dataset, data_choice, methods, folder_measure, measure):
# # make the dicts
# mean_dict = {"IAF":dict(), "RAA":dict(), "NRAA":dict()}
# max_dict = {"IAF":dict(), "RAA":dict(), "NRAA":dict()}
# last_dict = {"IAF":dict(), "RAA":dict(), "NRAA":dict()}
# # find all the files for the dataset
# for file_name in glob.glob("{}/{}/{}/*".format(data_choice, dataset, folder_measure)):
# for method in methods:
# if method in file_name:
# # strip the methods (/IAF-) etc.
# meth = method[1:-1]
# splits = file_name.split("_")
# # find the epsilon in the filename
# epsilon = [i for i in splits if "eps" in i][0].split("eps-")[1]
# data = pd.read_csv(file_name)
# # if there are nans in the data, skip them
# measured = data[~data[measure].isna()][measure]
# if measure == "test_acc":
# # calculate test error
# mean_dict[meth][epsilon] = np.mean(1 - measured)
# max_dict[meth][epsilon] = (1-measured).max()
# last_dict[meth][epsilon] = 1 - measured[measured.index[-1]]
# else:
# mean_dict[meth][epsilon] = np.mean(measured)
# max_dict[meth][epsilon] = measured.max()
# last_dict[meth][epsilon] = measured[measured.index[-1]]
# return mean_dict, max_dict, last_dict
# # same holds for this function, but this time we save the iterations and time taken
# # which you have to run once for the time and one for the iterations
# def get_time_and_it_dicts(dataset, data_choice, methods, time_and_it_folder, t_i_col):
# dict_ = {"IAF":dict(), "RAA":dict(), "NRAA":dict()}
# for file_name in glob.glob("{}/{}/{}/*".format(data_choice, dataset, time_and_it_folder)):
# for method in methods:
# if method in file_name:
# meth = method[1:-1]
# splits = file_name.split("_")
# epsilon = [i for i in splits if "eps" in i][0].split("eps-")[1]
# data = pd.read_csv(file_name)
# measured = data[~data[t_i_col].isna()][t_i_col]
# dict_[meth][epsilon] = measured[measured.index[-1]]
# return dict_
# def plot_seed(dataset, data_choice, methods, folder_measure, measure, time_and_it, t_i_col):
# # ta = test accuracy
# mean_ta_dict, max_ta_dict, last_ta_dict = get_test_dicts(dataset, data_choice, methods,
# folder_measure[0], measures[0])
# # concat all three dicts into one df
# acc_df = pd.concat({'mean_acc': pd.DataFrame(mean_ta_dict), 'max_acc': pd.DataFrame(max_ta_dict),
# 'last_acc': pd.DataFrame(last_ta_dict)}).unstack(0).sort_index(axis = 0)
# # p = parity
# mean_p_dict, max_p_dict, last_p_dict = get_test_dicts(dataset, data_choice, methods,
# folder_measure[1], measure[1])
# # concat all three dicts into one df
# p_df = pd.concat({'mean_par': pd.DataFrame(mean_p_dict), 'max_par': pd.DataFrame(max_p_dict),
# 'last_par': pd.DataFrame(last_p_dict)}).unstack(0).sort_index(axis = 0)
# # b = biases
# mean_b_dict, max_b_dict, last_b_dict = get_test_dicts(dataset, data_choice, methods,
# folder_measure[1], measure[2])
# # concat all three dicts into one df
# b_df = pd.concat({'mean_bias': pd.DataFrame(mean_b_dict), 'max_bias': pd.DataFrame(max_b_dict),
# 'last_bias': pd.DataFrame(last_b_dict)}).unstack(0).sort_index(axis = 0)
# # to be able to make a plot loop
# dfs = [acc_df, p_df, b_df]
# ylabels = ["Test error", "Statistical parity", "Equality of opportunity"]
# lines = ['b-s', 'g-^', 'r-D']
# fig, axs = plt.subplots(3,3, figsize=(15, 10))
# axs = axs.ravel()
# fig.suptitle("{} - {}".format(data_choice, dataset), fontsize=20)
# for i in range(9):
# a = 0
# for j in range(0,9,3):
# t = i % 3
# # makes sure everything works as it has to
# if i <= 2:
# j += 0
# elif i > 2 and i <= 5:
# j += 1
# elif i > 5:
# j += 2
# col = dfs[t].columns[j]
# column_data = dfs[t][col]
# axs[i].plot(column_data, lines[a], label="{}".format(column_data.name[0]))
# axs[i].set_title("{}".format(column_data.name[1]), fontweight='bold')
# axs[i].set_xlabel('Epsilon', fontweight='heavy')
# if "acc" in column_data.name[1]:
# axs[i].set_ylabel(ylabels[0], fontweight='heavy')
# elif "par" in column_data.name[1]:
# axs[i].set_ylabel(ylabels[1], fontweight='heavy')
# else:
# axs[i].set_ylabel(ylabels[2], fontweight='heavy')
# axs[i].legend(loc=9, ncol=3)
# axs[i].set_yticks([0, 0.2, 0.4, 0.6, 0.8, 1, 1.2])
# axs[i].set_yticklabels([0, 0.2, 0.4, 0.6, 0.8, 1, ""])
# a += 1
# plt.subplots_adjust(left=0.1,
# bottom=0.1,
# right=0.9,
# top=0.9,
# wspace=0.4,
# hspace=0.4)
# plt.show()
# # show time taken and number of iterations in a dataframe
# time_taken = get_time_and_it_dicts(dataset, data_choice, methods, time_and_it, t_i_col[0])
# last_iter = get_time_and_it_dicts(dataset, data_choice, methods, time_and_it, t_i_col[1])
# time_taken = pd.DataFrame(time_taken).sort_index(axis=0)
# last_iter = pd.DataFrame(last_iter).sort_index(axis=0)
# display(pd.concat({"Time taken in seconds": time_taken, "Number of iterations": last_iter}).unstack(0))
# print()
# methods = ["/IAF-", "/RAA-", "/NRAA-"]
# folder_measures = ["test_accs", "parities and biases"]
# measures = ["test_acc", "parity", "EO bias"]
# time_and_it = "time_and_it"
# time_and_it_columns = ["time_taken_seconds", "iteration"]
# plot_seed("compas", "Authors data seed 0", methods, folder_measures, measures,
# time_and_it, time_and_it_columns)
###Output
_____no_output_____ |
KUAKE-QTR.ipynb | ###Markdown
I. Data Loading and Processing. 1. Loading the data
###Code
train_data_df = pd.read_json('../data/source_datasets/KUAKE-QTR/KUAKE-QTR_train.json')
train_data_df = (train_data_df
.rename(columns={'query': 'text_a', 'title': 'text_b'})
.loc[:,['text_a', 'text_b', 'label']])
dev_data_df = pd.read_json('../data/source_datasets/KUAKE-QTR/KUAKE-QTR_dev.json')
dev_data_df = (dev_data_df
.rename(columns={'query': 'text_a', 'title': 'text_b'})
.loc[:,['text_a', 'text_b', 'label']])
tm_train_dataset = Dataset(train_data_df)
tm_dev_dataset = Dataset(dev_data_df)
###Output
_____no_output_____
###Markdown
2. Build the vocabulary and create the tokenizer
###Code
tokenizer = Tokenizer(vocab='hfl/chinese-macbert-base', max_seq_len=50)
###Output
_____no_output_____
###Markdown
3. Convert text to IDs
###Code
tm_train_dataset.convert_to_ids(tokenizer)
tm_dev_dataset.convert_to_ids(tokenizer)
###Output
_____no_output_____
###Markdown
II. Model Construction. 1. Model parameter settings
###Code
config = BertConfig.from_pretrained('hfl/chinese-macbert-base',
num_labels=len(tm_train_dataset.cat2id))
###Output
_____no_output_____
###Markdown
2. Model creation
###Code
torch.cuda.empty_cache()
dl_module = Bert.from_pretrained('hfl/chinese-macbert-base',
config=config)
dl_module.pooling = 'last_avg'
###Output
_____no_output_____
###Markdown
III. Task Construction. 1. Task parameters and required components
###Code
# Set the number of training epochs
num_epoches = 3
batch_size = 16
optimizer = get_default_model_optimizer(dl_module)
###Output
_____no_output_____
###Markdown
2. Task creation
###Code
model = Task(dl_module, optimizer, 'ce', cuda_device=0)
###Output
_____no_output_____
###Markdown
3. Training
###Code
model.fit(tm_train_dataset,
tm_dev_dataset,
lr=3e-5,
epochs=num_epoches,
batch_size=batch_size
)
###Output
_____no_output_____
###Markdown
IV. Model Validation and Saving
###Code
import json
from ark_nlp.model.tm.bert import Predictor
tm_predictor_instance = Predictor(model.module, tokenizer, tm_train_dataset.cat2id)
test_df = pd.read_json('../data/source_datasets/KUAKE-QTR/KUAKE-QTR_test.json')
submit = []
for _id, _text_a, _text_b in zip(test_df['id'], test_df['query'], test_df['title']):
_predict = tm_predictor_instance.predict_one_sample([_text_a, _text_b])[0]
submit.append({
'id': _id,
'query': _text_a,
'title': _text_b,
'label': _predict
})
output_path = '../data/output_datasets/KUAKE-QTR_test.json'
with open(output_path,'w', encoding='utf-8') as f:
f.write(json.dumps(submit, ensure_ascii=False))
###Output
_____no_output_____ |
coursera-week2-3-moving-data-around.ipynb | ###Markdown
Coursera Week2 Moving Data Around
###Code
import numpy as np
A = np.matrix('1 2; 3 4; 5 6')
A
np.size(A)
A.shape
np.shape(A)
import pandas as pd
import matplotlib.pyplot as plt
%pylab inline
df = pd.read_csv('ex1data1.txt', header=None);
df.head()
df.shape
df.plot()
###Output
_____no_output_____ |
notebooks_vacios/022-matplotlib-GeoData-cartopy.ipynb | ###Markdown
Plotting geographic data with `cartopy` Sometimes we need to plot data on a map. In these cases `basemap` is a good option within the Python ecosystem, but it will soon [be replaced](https://matplotlib.org/basemap/users/intro.htmlcartopy-new-management-and-eol-announcement) by [`cartopy`](http://scitools.org.uk/cartopy/docs/latest/index.html). Although it will keep being maintained until 2020 and `cartopy` has not yet incorporated all of `basemap`'s features, we will look ahead and build our first examples with the new library. If you are still interested in `basemap`, you can read [this post](http://jakevdp.github.io/blog/2015/08/14/out-of-core-dataframes-in-python/Diving-Into-the-Data:-Geography-of-Coffee) on Jake Vanderplas's blog or [this notebook](https://github.com/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/04.13-Geographic-Data-With-Basemap.ipynb) from his data science book. First of all, as always, we import the library and everything else we will need: Using different projections Mercator In this first example we will see how to create a map with a given projection and add the information we are interested in:
###Code
# Initialize a figure with the size we need
# if we do not want the default one
# Create axes with the projection we want,
# for example Mercator
# And add what we want to show on the map:
# Land
# Oceans
# Coastlines (we can change the colour)
# Borders
# Rivers and lakes
# Finally, we can draw the grid if we want
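# A possible way to fill in this cell (a sketch; figure size and colours are
# arbitrary choices, everything else follows the comments above):
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
import cartopy.feature as cfeature

fig = plt.figure(figsize=(12, 6))
ax = plt.axes(projection=ccrs.Mercator())
ax.add_feature(cfeature.LAND)
ax.add_feature(cfeature.OCEAN)
ax.add_feature(cfeature.COASTLINE, edgecolor='gray')
ax.add_feature(cfeature.BORDERS, linestyle=':')
ax.add_feature(cfeature.RIVERS)
ax.add_feature(cfeature.LAKES, alpha=0.5)
ax.gridlines()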
###Output
_____no_output_____
###Markdown
InterruptedGoodeHomolosine We will now look at another example using a different projection, and we will colour the map differently:
###Code
# Initialize a figure with the size we need
# if we do not want the default one
# Choose the InterruptedGoodeHomolosine projection
# And add what we want to show on the map
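# A possible way to fill in this cell (a sketch; the fill colours are arbitrary):
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
import cartopy.feature as cfeature

fig = plt.figure(figsize=(12, 6))
ax = plt.axes(projection=ccrs.InterruptedGoodeHomolosine())
ax.add_feature(cfeature.OCEAN, facecolor='lightblue')
ax.add_feature(cfeature.LAND, facecolor='wheat')
ax.add_feature(cfeature.COASTLINE)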
###Output
_____no_output_____
###Markdown
We may want to add labels to the axes. For that we can use the tools inside `cartopy.mpl.gridliner`. PlateCarree
###Code
# Import the axis formatters for latitude and longitude
# Choose the PlateCarree projection
# And add what we want to show on the map:
# Land
# Oceans
# Coastlines (we can change the colour)
# Borders
# Rivers and lakes
# Within the axes, get the gridlines and
# enable the option to show labels
# On the gridlines, set the formatters for x and y
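# A possible way to fill in this cell (a sketch):
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
import cartopy.feature as cfeature
from cartopy.mpl.gridliner import LONGITUDE_FORMATTER, LATITUDE_FORMATTER

fig = plt.figure(figsize=(12, 6))
ax = plt.axes(projection=ccrs.PlateCarree())
ax.add_feature(cfeature.LAND)
ax.add_feature(cfeature.OCEAN)
ax.add_feature(cfeature.COASTLINE, edgecolor='gray')
ax.add_feature(cfeature.BORDERS, linestyle=':')
ax.add_feature(cfeature.RIVERS)
gl = ax.gridlines(draw_labels=True)
gl.xformatter = LONGITUDE_FORMATTER
gl.yformatter = LATITUDE_FORMATTER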
###Output
_____no_output_____
###Markdown
Setting the extent of our plot. When we do not want to show the whole map but only need to plot a particular location, everything above still applies; we simply have to specify the area to display with the `set_extent` method and take a couple of precautions...
###Code
# Choose the projection
# Set the point and the extent of the map we want to see
# And add what we want to show on the map
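# A possible way to fill in this cell (a sketch; the location is a hypothetical
# example, roughly centred on Madrid):
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
import cartopy.feature as cfeature

lon, lat, delta = -3.7, 40.4, 3
fig = plt.figure(figsize=(10, 8))
ax = plt.axes(projection=ccrs.PlateCarree())
ax.set_extent([lon - delta, lon + delta, lat - delta, lat + delta], crs=ccrs.PlateCarree())
ax.add_feature(cfeature.COASTLINE)
ax.add_feature(cfeature.BORDERS, linestyle=':')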
###Output
_____no_output_____
###Markdown
As the previous figure shows, the plot we obtain is too coarse. This is because the default data are downloaded at a fairly coarse scale. Cartopy can use our own data stored locally or download data from several well-known databases. Here we will access Natural Earth Feature, which is what we have been using by default until now without knowing it. See http://www.naturalearthdata.com/
###Code
# Import NaturalEarthFeature
# Choose the projection
# Set the point and the extent of the map we want to see
# And add what we want to show on the map
# Until now we were using:
# ax.add_feature(cfeature.COASTLINE,
#                edgecolor=(0.3, 0.3, 0.3),
#                facecolor=cfeature.COLORS['land']
#               )
# But now we first download the feature we want to plot:
# And then add it to the map with whatever properties we find convenient.
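# A possible way to fill in this cell (a sketch; same hypothetical location as above,
# now using a higher-resolution '10m' coastline downloaded from Natural Earth):
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
from cartopy.feature import NaturalEarthFeature

lon, lat, delta = -3.7, 40.4, 3
fig = plt.figure(figsize=(10, 8))
ax = plt.axes(projection=ccrs.PlateCarree())
ax.set_extent([lon - delta, lon + delta, lat - delta, lat + delta], crs=ccrs.PlateCarree())
coastline_10m = NaturalEarthFeature(category='physical', name='coastline', scale='10m')
ax.add_feature(coastline_10m, facecolor='none', edgecolor='k')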
###Output
_____no_output_____
###Markdown
From Natural Earth Feature we can download not only physical features but also demographic datasets. Plotting data on the map Usually we do not just want to draw a map; we want to plot data on top of it. These data may come from the previous dataset or from any other source. In this example we will plot the meteorite landings recorded on Earth, collected in the dataset [www.kaggle.com/nasa/meteorite-landings](www.kaggle.com/nasa/meteorite-landings). All the information can be found at that link.
###Code
# Read the CSV we have already downloaded using pandas
# Create a map on which to plot the data:
# Choose the PlateCarree projection
# And draw the coastlines
# Now we can add the data on top of that map with a scatter plot
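# A possible way to fill in this cell (a sketch). The file name is hypothetical, and
# the column names 'reclong'/'reclat' are assumed from the Kaggle meteorite-landings
# dataset description:
import pandas as pd
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
import cartopy.feature as cfeature

meteorites = pd.read_csv('meteorite-landings.csv').dropna(subset=['reclong', 'reclat'])

fig = plt.figure(figsize=(12, 6))
ax = plt.axes(projection=ccrs.PlateCarree())
ax.add_feature(cfeature.COASTLINE, edgecolor='gray')
ax.scatter(meteorites['reclong'], meteorites['reclat'],
           s=2, color='crimson', alpha=0.5, transform=ccrs.PlateCarree())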
###Output
_____no_output_____
###Markdown
Almost any of the plots we have previously made with `matplotlib` are possible. --- Example from PythonDataScienceHandbook (Jake Vanderplas) https://github.com/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/04.13-Geographic-Data-With-Basemap.ipynb Example: Surface Temperature Data As an example of visualizing some more continuous geographic data, let's consider the "polar vortex" that hit the eastern half of the United States in January of 2014. A great source for any sort of climatic data is [NASA's Goddard Institute for Space Studies](http://data.giss.nasa.gov/). Here we'll use the GIS 250 temperature data, which we can download using shell commands (these commands may have to be modified on Windows machines). The data used here was downloaded on 6/12/2016, and the file size is approximately 9MB: The data comes in NetCDF format, which can be read in Python by the ``netCDF4`` library. You can install this library as shown here ```$ conda install netcdf4``` We read the data as follows:
###Code
# preserve
from netCDF4 import Dataset
from netCDF4 import date2index
from datetime import datetime
# preserve
data = Dataset('../data/gistemp250.nc')
###Output
_____no_output_____
###Markdown
The file contains many global temperature readings on a variety of dates; we need to select the index of the date we're interested in—in this case, January 15, 2014:
###Code
# preserve
timeindex = date2index(datetime(2014, 1, 15),
data.variables['time'])
###Output
_____no_output_____
###Markdown
Now we can load the latitude and longitude data, as well as the temperature anomaly for this index:
###Code
# preserve
lat = data.variables['lat'][:]
lon = data.variables['lon'][:]
lon, lat = np.meshgrid(lon, lat)
temp_anomaly = data.variables['tempanomaly'][timeindex]
###Output
_____no_output_____
###Markdown
Finally, we'll use the ``pcolormesh()`` method to draw a color mesh of the data.We'll look at North America, and use a shaded relief map in the background.Note that for this data we specifically chose a divergent colormap, which has a neutral color at zero and two contrasting colors at negative and positive values.We'll also lightly draw the coastlines over the colors for reference:
###Code
# preserve
fig = plt.figure(figsize=(8,4))
# Choose the projection
ax = plt.axes(projection=ccrs.PlateCarree())
# And what we want to show on the map
coastline = NaturalEarthFeature(category='physical', name='coastline', scale='50m')
# ax.add_feature(land, color=cfeature.COLORS['land'])
ax.add_feature(coastline, facecolor=cfeature.COLORS['land'], edgecolor='k', alpha=0.5)
ax.pcolormesh(lon, lat, temp_anomaly, cmap='RdBu_r')
###Output
_____no_output_____ |
Classification ML Comparison.ipynb | ###Markdown
ML Classification Template. Feature scaling is applied to improve the overall result even though some algorithms do not need it.
###Code
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
df = pd.read_csv("Data.csv")
df.head()
df.isnull().sum()
X = df.iloc[:, :-1]
y = df.iloc[:, -1]
###Output
_____no_output_____
###Markdown
Decision Trees
###Code
X = df.iloc[:, :-1]
y = df.iloc[:, -1]
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 0)
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
from sklearn.tree import DecisionTreeClassifier
classifier = DecisionTreeClassifier(criterion = 'entropy', random_state = 0)
classifier.fit(X_train, y_train)
from sklearn.metrics import confusion_matrix, accuracy_score, classification_report
y_pred = classifier.predict(X_test)
print(classification_report(y_test, y_pred))
print(confusion_matrix(y_test, y_pred))
print(accuracy_score(y_test, y_pred))
###Output
precision recall f1-score support
2 0.97 0.96 0.97 107
4 0.94 0.95 0.95 64
accuracy 0.96 171
macro avg 0.96 0.96 0.96 171
weighted avg 0.96 0.96 0.96 171
[[103 4]
[ 3 61]]
0.9590643274853801
###Markdown
K Nearst Neighbors
###Code
X = df.iloc[:, :-1]
y = df.iloc[:, -1]
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 0)
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
from sklearn.neighbors import KNeighborsClassifier
classifier = KNeighborsClassifier(n_neighbors = 5, metric = 'minkowski', p = 2)
classifier.fit(X_train, y_train)
from sklearn.metrics import confusion_matrix, accuracy_score, classification_report
y_pred = classifier.predict(X_test)
print(classification_report(y_test, y_pred))
print(confusion_matrix(y_test, y_pred))
print(accuracy_score(y_test, y_pred))
###Output
precision recall f1-score support
2 0.95 0.96 0.96 107
4 0.94 0.92 0.93 64
accuracy 0.95 171
macro avg 0.95 0.94 0.94 171
weighted avg 0.95 0.95 0.95 171
[[103 4]
[ 5 59]]
0.9473684210526315
###Markdown
Support Vector Machine (kernel='linear')
###Code
X = df.iloc[:, :-1]
y = df.iloc[:, -1]
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 0)
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
from sklearn.svm import SVC
classifier = SVC(kernel = 'linear', random_state = 0)
classifier.fit(X_train, y_train)
from sklearn.metrics import confusion_matrix, accuracy_score, classification_report
y_pred = classifier.predict(X_test)
print(classification_report(y_test, y_pred))
print(confusion_matrix(y_test, y_pred))
print(accuracy_score(y_test, y_pred))
###Output
precision recall f1-score support
2 0.95 0.95 0.95 107
4 0.92 0.92 0.92 64
accuracy 0.94 171
macro avg 0.94 0.94 0.94 171
weighted avg 0.94 0.94 0.94 171
[[102 5]
[ 5 59]]
0.9415204678362573
###Markdown
Support Vector Machine (Kernel="rbf")
###Code
X = df.iloc[:, :-1]
y = df.iloc[:, -1]
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 0)
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
from sklearn.svm import SVC
classifier = SVC(kernel = 'rbf', random_state = 0)
classifier.fit(X_train, y_train)
from sklearn.metrics import confusion_matrix, accuracy_score, classification_report
y_pred = classifier.predict(X_test)
print(classification_report(y_test, y_pred))
print(confusion_matrix(y_test, y_pred))
print(accuracy_score(y_test, y_pred))
###Output
precision recall f1-score support
2 0.97 0.95 0.96 107
4 0.92 0.95 0.94 64
accuracy 0.95 171
macro avg 0.95 0.95 0.95 171
weighted avg 0.95 0.95 0.95 171
[[102 5]
[ 3 61]]
0.9532163742690059
###Markdown
Logistic Regression
###Code
X = df.iloc[:, :-1]
y = df.iloc[:, -1]
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 0)
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression(random_state = 0)
classifier.fit(X_train, y_train)
from sklearn.metrics import confusion_matrix, accuracy_score, classification_report
y_pred = classifier.predict(X_test)
print(classification_report(y_test, y_pred))
print(confusion_matrix(y_test, y_pred))
print(accuracy_score(y_test, y_pred))
###Output
precision recall f1-score support
2 0.95 0.96 0.96 107
4 0.94 0.92 0.93 64
accuracy 0.95 171
macro avg 0.95 0.94 0.94 171
weighted avg 0.95 0.95 0.95 171
[[103 4]
[ 5 59]]
0.9473684210526315
###Markdown
Naive Bayes (GaussianNB)
###Code
X = df.iloc[:, :-1]
y = df.iloc[:, -1]
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 0)
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
from sklearn.naive_bayes import GaussianNB
classifier = GaussianNB()
classifier.fit(X_train, y_train)
from sklearn.metrics import confusion_matrix, accuracy_score, classification_report
y_pred = classifier.predict(X_test)
print(classification_report(y_test, y_pred))
print(confusion_matrix(y_test, y_pred))
print(accuracy_score(y_test, y_pred))
###Output
precision recall f1-score support
2 0.98 0.93 0.95 107
4 0.89 0.97 0.93 64
accuracy 0.94 171
macro avg 0.93 0.95 0.94 171
weighted avg 0.94 0.94 0.94 171
[[99 8]
[ 2 62]]
0.9415204678362573
###Markdown
Navie Bayes (MultinomialNB)
###Code
X = df.iloc[:, :-1]
y = df.iloc[:, -1]
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 0)
# MultinomialNB requires non-negative features, so use min-max scaling here
# instead of standardization (which produces negative values)
from sklearn.preprocessing import MinMaxScaler
sc = MinMaxScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
from sklearn.naive_bayes import MultinomialNB
classifier = MultinomialNB()
classifier.fit(X_train, y_train)
from sklearn.metrics import confusion_matrix, accuracy_score, classification_report
y_pred = classifier.predict(X_test)
print(classification_report(y_test, y_pred))
print(confusion_matrix(y_test, y_pred))
print(accuracy_score(y_test, y_pred))
###Output
_____no_output_____
###Markdown
Stochastic Gradient Descent
###Code
X = df.iloc[:, :-1]
y = df.iloc[:, -1]
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 0)
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
from sklearn.linear_model import SGDClassifier
# classifier = SGDClassifier(loss="hinge", penalty="l2", max_iter=5)
classifier = SGDClassifier()
classifier.fit(X_train, y_train)
from sklearn.metrics import confusion_matrix, accuracy_score, classification_report
y_pred = classifier.predict(X_test)
print(classification_report(y_test, y_pred))
print(confusion_matrix(y_test, y_pred))
print(accuracy_score(y_test, y_pred))
###Output
precision recall f1-score support
2 0.95 0.97 0.96 107
4 0.95 0.91 0.93 64
accuracy 0.95 171
macro avg 0.95 0.94 0.94 171
weighted avg 0.95 0.95 0.95 171
[[104 3]
[ 6 58]]
0.9473684210526315
###Markdown
XGBoost
###Code
X = df.iloc[:, :-1]
y = df.iloc[:, -1]
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 0)
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
from xgboost import XGBClassifier
classifier = XGBClassifier()
classifier.fit(X_train, y_train)
from sklearn.metrics import confusion_matrix, accuracy_score, classification_report
y_pred = classifier.predict(X_test)
print(classification_report(y_test, y_pred))
print(confusion_matrix(y_test, y_pred))
print(accuracy_score(y_test, y_pred))
###Output
[12:32:31] WARNING: C:/Users/Administrator/workspace/xgboost-win64_release_1.4.0/src/learner.cc:1095: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'binary:logistic' was changed from 'error' to 'logloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
precision recall f1-score support
2 0.95 0.96 0.96 107
4 0.94 0.92 0.93 64
accuracy 0.95 171
macro avg 0.95 0.94 0.94 171
weighted avg 0.95 0.95 0.95 171
[[103 4]
[ 5 59]]
0.9473684210526315
###Markdown
CatBoost (tunes itself, so no parameter tuning is needed; works well when there are many categorical variables)
###Code
# pip install catboost
X = df.iloc[:, :-1]
y = df.iloc[:, -1]
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 0)
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
from catboost import CatBoostClassifier
classifier = CatBoostClassifier()
classifier.fit(X_train, y_train)
from sklearn.metrics import confusion_matrix, accuracy_score, classification_report
y_pred = classifier.predict(X_test)
print(classification_report(y_test, y_pred))
print(confusion_matrix(y_test, y_pred))
print(accuracy_score(y_test, y_pred))
###Output
Learning rate set to 0.007741
0: learn: 0.6767912 total: 149ms remaining: 2m 29s
1: learn: 0.6653965 total: 155ms remaining: 1m 17s
2: learn: 0.6509535 total: 160ms remaining: 53.1s
3: learn: 0.6353293 total: 165ms remaining: 41.1s
4: learn: 0.6212601 total: 170ms remaining: 33.8s
5: learn: 0.6063374 total: 175ms remaining: 29s
6: learn: 0.5924529 total: 182ms remaining: 25.8s
7: learn: 0.5781366 total: 186ms remaining: 23s
8: learn: 0.5642849 total: 190ms remaining: 21s
9: learn: 0.5503309 total: 195ms remaining: 19.3s
10: learn: 0.5380644 total: 201ms remaining: 18.1s
11: learn: 0.5266812 total: 206ms remaining: 17s
12: learn: 0.5165041 total: 211ms remaining: 16s
13: learn: 0.5056042 total: 216ms remaining: 15.2s
14: learn: 0.4939845 total: 221ms remaining: 14.5s
15: learn: 0.4817636 total: 226ms remaining: 13.9s
16: learn: 0.4719479 total: 230ms remaining: 13.3s
17: learn: 0.4626034 total: 235ms remaining: 12.8s
18: learn: 0.4533419 total: 238ms remaining: 12.3s
19: learn: 0.4453692 total: 243ms remaining: 11.9s
20: learn: 0.4355885 total: 247ms remaining: 11.5s
21: learn: 0.4248365 total: 251ms remaining: 11.2s
22: learn: 0.4173125 total: 256ms remaining: 10.9s
23: learn: 0.4090736 total: 261ms remaining: 10.6s
24: learn: 0.4011153 total: 266ms remaining: 10.4s
25: learn: 0.3938866 total: 270ms remaining: 10.1s
26: learn: 0.3855890 total: 275ms remaining: 9.9s
27: learn: 0.3784071 total: 280ms remaining: 9.71s
28: learn: 0.3712019 total: 284ms remaining: 9.52s
29: learn: 0.3636089 total: 289ms remaining: 9.36s
30: learn: 0.3563297 total: 295ms remaining: 9.21s
31: learn: 0.3490327 total: 300ms remaining: 9.07s
32: learn: 0.3436578 total: 304ms remaining: 8.92s
33: learn: 0.3359463 total: 309ms remaining: 8.78s
34: learn: 0.3299830 total: 314ms remaining: 8.64s
35: learn: 0.3238590 total: 318ms remaining: 8.53s
36: learn: 0.3185455 total: 323ms remaining: 8.4s
37: learn: 0.3129120 total: 328ms remaining: 8.29s
38: learn: 0.3066220 total: 332ms remaining: 8.18s
39: learn: 0.3006019 total: 336ms remaining: 8.07s
40: learn: 0.2961250 total: 341ms remaining: 7.96s
41: learn: 0.2913075 total: 345ms remaining: 7.86s
42: learn: 0.2863332 total: 349ms remaining: 7.77s
43: learn: 0.2814019 total: 354ms remaining: 7.68s
44: learn: 0.2764050 total: 358ms remaining: 7.59s
45: learn: 0.2715972 total: 362ms remaining: 7.5s
46: learn: 0.2671359 total: 366ms remaining: 7.42s
47: learn: 0.2627735 total: 370ms remaining: 7.35s
48: learn: 0.2583273 total: 375ms remaining: 7.28s
49: learn: 0.2535858 total: 380ms remaining: 7.22s
50: learn: 0.2502071 total: 384ms remaining: 7.15s
51: learn: 0.2460575 total: 389ms remaining: 7.09s
52: learn: 0.2430172 total: 393ms remaining: 7.03s
53: learn: 0.2386936 total: 398ms remaining: 6.97s
54: learn: 0.2348157 total: 402ms remaining: 6.91s
55: learn: 0.2308893 total: 407ms remaining: 6.86s
56: learn: 0.2273545 total: 411ms remaining: 6.81s
57: learn: 0.2233972 total: 416ms remaining: 6.75s
58: learn: 0.2199507 total: 420ms remaining: 6.7s
59: learn: 0.2165792 total: 424ms remaining: 6.64s
60: learn: 0.2136361 total: 428ms remaining: 6.59s
61: learn: 0.2104832 total: 432ms remaining: 6.53s
62: learn: 0.2077612 total: 435ms remaining: 6.46s
63: learn: 0.2058127 total: 438ms remaining: 6.4s
64: learn: 0.2028870 total: 440ms remaining: 6.33s
65: learn: 0.2000411 total: 443ms remaining: 6.27s
66: learn: 0.1965699 total: 445ms remaining: 6.2s
67: learn: 0.1931033 total: 447ms remaining: 6.13s
68: learn: 0.1903893 total: 450ms remaining: 6.07s
69: learn: 0.1878969 total: 452ms remaining: 6.01s
70: learn: 0.1852373 total: 454ms remaining: 5.95s
71: learn: 0.1829545 total: 457ms remaining: 5.89s
72: learn: 0.1800534 total: 459ms remaining: 5.83s
73: learn: 0.1776111 total: 461ms remaining: 5.77s
74: learn: 0.1748191 total: 464ms remaining: 5.72s
75: learn: 0.1726427 total: 466ms remaining: 5.66s
76: learn: 0.1707997 total: 468ms remaining: 5.61s
77: learn: 0.1691747 total: 471ms remaining: 5.56s
78: learn: 0.1671018 total: 473ms remaining: 5.51s
79: learn: 0.1653273 total: 475ms remaining: 5.46s
80: learn: 0.1632592 total: 477ms remaining: 5.41s
81: learn: 0.1612505 total: 480ms remaining: 5.37s
82: learn: 0.1588900 total: 482ms remaining: 5.33s
83: learn: 0.1573289 total: 484ms remaining: 5.28s
84: learn: 0.1551463 total: 486ms remaining: 5.24s
85: learn: 0.1531162 total: 489ms remaining: 5.19s
86: learn: 0.1514024 total: 491ms remaining: 5.15s
87: learn: 0.1498585 total: 493ms remaining: 5.11s
88: learn: 0.1480691 total: 496ms remaining: 5.07s
89: learn: 0.1463901 total: 498ms remaining: 5.03s
90: learn: 0.1449645 total: 500ms remaining: 4.99s
91: learn: 0.1433896 total: 502ms remaining: 4.95s
92: learn: 0.1413951 total: 504ms remaining: 4.91s
93: learn: 0.1400074 total: 506ms remaining: 4.87s
94: learn: 0.1385256 total: 507ms remaining: 4.83s
95: learn: 0.1373312 total: 509ms remaining: 4.79s
96: learn: 0.1357847 total: 511ms remaining: 4.75s
97: learn: 0.1348862 total: 512ms remaining: 4.71s
98: learn: 0.1333012 total: 513ms remaining: 4.67s
99: learn: 0.1319540 total: 515ms remaining: 4.63s
100: learn: 0.1303619 total: 516ms remaining: 4.6s
101: learn: 0.1289833 total: 518ms remaining: 4.56s
102: learn: 0.1275370 total: 519ms remaining: 4.52s
103: learn: 0.1263970 total: 520ms remaining: 4.48s
104: learn: 0.1249636 total: 521ms remaining: 4.44s
105: learn: 0.1238581 total: 523ms remaining: 4.41s
106: learn: 0.1226217 total: 524ms remaining: 4.38s
107: learn: 0.1216122 total: 526ms remaining: 4.34s
108: learn: 0.1206190 total: 527ms remaining: 4.31s
109: learn: 0.1196763 total: 528ms remaining: 4.28s
110: learn: 0.1186479 total: 530ms remaining: 4.24s
111: learn: 0.1176246 total: 531ms remaining: 4.21s
112: learn: 0.1163901 total: 533ms remaining: 4.18s
113: learn: 0.1157427 total: 534ms remaining: 4.15s
114: learn: 0.1146711 total: 536ms remaining: 4.12s
115: learn: 0.1136418 total: 537ms remaining: 4.09s
116: learn: 0.1129285 total: 539ms remaining: 4.07s
117: learn: 0.1118440 total: 540ms remaining: 4.04s
118: learn: 0.1108543 total: 542ms remaining: 4.01s
119: learn: 0.1098521 total: 543ms remaining: 3.98s
120: learn: 0.1090616 total: 544ms remaining: 3.96s
121: learn: 0.1079260 total: 546ms remaining: 3.93s
122: learn: 0.1069743 total: 547ms remaining: 3.9s
123: learn: 0.1061025 total: 549ms remaining: 3.88s
124: learn: 0.1052943 total: 550ms remaining: 3.85s
125: learn: 0.1045611 total: 552ms remaining: 3.83s
126: learn: 0.1036001 total: 553ms remaining: 3.8s
127: learn: 0.1028067 total: 555ms remaining: 3.78s
128: learn: 0.1019559 total: 557ms remaining: 3.76s
129: learn: 0.1013763 total: 558ms remaining: 3.73s
130: learn: 0.1007284 total: 559ms remaining: 3.71s
131: learn: 0.1000668 total: 561ms remaining: 3.69s
132: learn: 0.0991710 total: 562ms remaining: 3.66s
133: learn: 0.0984423 total: 564ms remaining: 3.64s
134: learn: 0.0974067 total: 565ms remaining: 3.62s
135: learn: 0.0967251 total: 567ms remaining: 3.6s
136: learn: 0.0960641 total: 568ms remaining: 3.58s
137: learn: 0.0952613 total: 570ms remaining: 3.56s
138: learn: 0.0944353 total: 571ms remaining: 3.54s
139: learn: 0.0935053 total: 573ms remaining: 3.52s
140: learn: 0.0929169 total: 574ms remaining: 3.5s
141: learn: 0.0922753 total: 576ms remaining: 3.48s
142: learn: 0.0915795 total: 577ms remaining: 3.46s
143: learn: 0.0909980 total: 579ms remaining: 3.44s
144: learn: 0.0903962 total: 580ms remaining: 3.42s
145: learn: 0.0896000 total: 582ms remaining: 3.4s
146: learn: 0.0889082 total: 583ms remaining: 3.38s
147: learn: 0.0882921 total: 584ms remaining: 3.36s
148: learn: 0.0875563 total: 586ms remaining: 3.34s
149: learn: 0.0869580 total: 587ms remaining: 3.33s
150: learn: 0.0863392 total: 588ms remaining: 3.31s
151: learn: 0.0856373 total: 590ms remaining: 3.29s
152: learn: 0.0850605 total: 591ms remaining: 3.27s
153: learn: 0.0846771 total: 593ms remaining: 3.25s
154: learn: 0.0843193 total: 594ms remaining: 3.24s
155: learn: 0.0836269 total: 595ms remaining: 3.22s
156: learn: 0.0831832 total: 596ms remaining: 3.2s
157: learn: 0.0826911 total: 598ms remaining: 3.19s
158: learn: 0.0822052 total: 599ms remaining: 3.17s
159: learn: 0.0816328 total: 601ms remaining: 3.15s
160: learn: 0.0810829 total: 602ms remaining: 3.14s
161: learn: 0.0804743 total: 604ms remaining: 3.12s
162: learn: 0.0799808 total: 605ms remaining: 3.11s
163: learn: 0.0794385 total: 606ms remaining: 3.09s
164: learn: 0.0789257 total: 608ms remaining: 3.08s
165: learn: 0.0784677 total: 609ms remaining: 3.06s
166: learn: 0.0779413 total: 611ms remaining: 3.05s
167: learn: 0.0775829 total: 612ms remaining: 3.03s
168: learn: 0.0771927 total: 614ms remaining: 3.02s
169: learn: 0.0767859 total: 615ms remaining: 3s
170: learn: 0.0763417 total: 617ms remaining: 2.99s
171: learn: 0.0758498 total: 618ms remaining: 2.98s
172: learn: 0.0754488 total: 619ms remaining: 2.96s
173: learn: 0.0751407 total: 621ms remaining: 2.95s
174: learn: 0.0746840 total: 622ms remaining: 2.93s
175: learn: 0.0742753 total: 624ms remaining: 2.92s
176: learn: 0.0738132 total: 625ms remaining: 2.9s
177: learn: 0.0734111 total: 626ms remaining: 2.89s
178: learn: 0.0730830 total: 628ms remaining: 2.88s
179: learn: 0.0726199 total: 629ms remaining: 2.87s
180: learn: 0.0720206 total: 631ms remaining: 2.85s
181: learn: 0.0714278 total: 632ms remaining: 2.84s
182: learn: 0.0709469 total: 633ms remaining: 2.83s
183: learn: 0.0705823 total: 635ms remaining: 2.81s
184: learn: 0.0701621 total: 636ms remaining: 2.8s
185: learn: 0.0697256 total: 638ms remaining: 2.79s
186: learn: 0.0694213 total: 639ms remaining: 2.78s
187: learn: 0.0690216 total: 641ms remaining: 2.77s
188: learn: 0.0686556 total: 642ms remaining: 2.75s
189: learn: 0.0683192 total: 643ms remaining: 2.74s
190: learn: 0.0679752 total: 645ms remaining: 2.73s
191: learn: 0.0676349 total: 647ms remaining: 2.72s
192: learn: 0.0672463 total: 648ms remaining: 2.71s
193: learn: 0.0669648 total: 650ms remaining: 2.7s
194: learn: 0.0665695 total: 651ms remaining: 2.69s
195: learn: 0.0662483 total: 652ms remaining: 2.67s
196: learn: 0.0659071 total: 654ms remaining: 2.66s
197: learn: 0.0655722 total: 655ms remaining: 2.65s
198: learn: 0.0652576 total: 657ms remaining: 2.64s
199: learn: 0.0650229 total: 658ms remaining: 2.63s
200: learn: 0.0647396 total: 660ms remaining: 2.62s
201: learn: 0.0643653 total: 661ms remaining: 2.61s
202: learn: 0.0641658 total: 663ms remaining: 2.6s
203: learn: 0.0638325 total: 664ms remaining: 2.59s
204: learn: 0.0633962 total: 666ms remaining: 2.58s
205: learn: 0.0630423 total: 668ms remaining: 2.57s
206: learn: 0.0627900 total: 669ms remaining: 2.56s
207: learn: 0.0624750 total: 671ms remaining: 2.55s
208: learn: 0.0622083 total: 672ms remaining: 2.54s
209: learn: 0.0619973 total: 673ms remaining: 2.53s
210: learn: 0.0616541 total: 675ms remaining: 2.52s
211: learn: 0.0613959 total: 676ms remaining: 2.51s
212: learn: 0.0611366 total: 677ms remaining: 2.5s
213: learn: 0.0608261 total: 679ms remaining: 2.49s
214: learn: 0.0606650 total: 680ms remaining: 2.48s
215: learn: 0.0603632 total: 681ms remaining: 2.47s
216: learn: 0.0600457 total: 682ms remaining: 2.46s
217: learn: 0.0597078 total: 684ms remaining: 2.45s
218: learn: 0.0593487 total: 685ms remaining: 2.44s
219: learn: 0.0590563 total: 687ms remaining: 2.43s
220: learn: 0.0588247 total: 688ms remaining: 2.42s
221: learn: 0.0586249 total: 689ms remaining: 2.42s
222: learn: 0.0583579 total: 691ms remaining: 2.41s
223: learn: 0.0580068 total: 692ms remaining: 2.4s
224: learn: 0.0577663 total: 694ms remaining: 2.39s
225: learn: 0.0575567 total: 695ms remaining: 2.38s
226: learn: 0.0572760 total: 696ms remaining: 2.37s
227: learn: 0.0571322 total: 698ms remaining: 2.36s
228: learn: 0.0568170 total: 699ms remaining: 2.35s
###Markdown
LightGBM (Light Gradient Boosting Machine)
###Code
# pip install lightgbm
X = df.iloc[:, :-1]
y = df.iloc[:, -1]
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 0)
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
from lightgbm import LGBMClassifier
classifier = LGBMClassifier()
classifier.fit(X_train, y_train)
from sklearn.metrics import confusion_matrix, accuracy_score, classification_report
y_pred = classifier.predict(X_test)
print(classification_report(y_test, y_pred))
print(confusion_matrix(y_test, y_pred))
print(accuracy_score(y_test, y_pred))
#!/usr/bin/python
# -*- coding: utf-8 -*-
"""
=====================
Classifier comparison
=====================
A comparison of a several classifiers in scikit-learn on synthetic datasets.
The point of this example is to illustrate the nature of decision boundaries
of different classifiers.
This should be taken with a grain of salt, as the intuition conveyed by
these examples does not necessarily carry over to real datasets.
Particularly in high-dimensional spaces, data can more easily be separated
linearly and the simplicity of classifiers such as naive Bayes and linear SVMs
might lead to better generalization than is achieved by other classifiers.
The plots show training points in solid colors and testing points
semi-transparent. The lower right shows the classification accuracy on the test
set.
"""
print(__doc__)
# Code source: Gaël Varoquaux
# Andreas Müller
# Modified for documentation by Jaques Grobler
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.datasets import make_moons, make_circles, make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from tqdm.notebook import tqdm
h = .02 # step size in the mesh
names = ["Nearest Neighbors", "Linear SVM", "RBF SVM", "Gaussian Process",
"Decision Tree", "Random Forest", "Neural Net", "AdaBoost",
"Naive Bayes", "QDA"]
classifiers = [
KNeighborsClassifier(3),
SVC(kernel="linear", C=0.025),
SVC(gamma=2, C=1),
GaussianProcessClassifier(1.0 * RBF(1.0)),
DecisionTreeClassifier(max_depth=5),
RandomForestClassifier(max_depth=5, n_estimators=10, max_features=1),
MLPClassifier(alpha=1, max_iter=1000),
AdaBoostClassifier(),
GaussianNB(),
QuadraticDiscriminantAnalysis()]
X, y = make_classification(n_features=2, n_redundant=0, n_informative=2,
random_state=1, n_clusters_per_class=1)
rng = np.random.RandomState(2)
X += 2 * rng.uniform(size=X.shape)
linearly_separable = (X, y)
datasets = [make_moons(noise=0.3, random_state=0),
make_circles(noise=0.2, factor=0.5, random_state=1),
linearly_separable
]
figure = plt.figure(figsize=(27, 9))
i = 1
# iterate over datasets
for ds_cnt, ds in tqdm(enumerate(datasets)):
# preprocess dataset, split into training and test part
X, y = ds
X = StandardScaler().fit_transform(X)
X_train, X_test, y_train, y_test = \
train_test_split(X, y, test_size=.4, random_state=42)
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
# just plot the dataset first
cm = plt.cm.RdBu
cm_bright = ListedColormap(['#FF0000', '#0000FF'])
ax = plt.subplot(len(datasets), len(classifiers) + 1, i)
if ds_cnt == 0:
ax.set_title("Input data")
# Plot the training points
ax.scatter(X_train[:, 0], X_train[:, 1], c=y_train, cmap=cm_bright,
edgecolors='k')
# Plot the testing points
ax.scatter(X_test[:, 0], X_test[:, 1], c=y_test, cmap=cm_bright, alpha=0.6,
edgecolors='k')
ax.set_xlim(xx.min(), xx.max())
ax.set_ylim(yy.min(), yy.max())
ax.set_xticks(())
ax.set_yticks(())
i += 1
# iterate over classifiers
for name, clf in zip(names, classifiers):
ax = plt.subplot(len(datasets), len(classifiers) + 1, i)
clf.fit(X_train, y_train)
score = clf.score(X_test, y_test)
# Plot the decision boundary. For that, we will assign a color to each
# point in the mesh [x_min, x_max]x[y_min, y_max].
if hasattr(clf, "decision_function"):
Z = clf.decision_function(np.c_[xx.ravel(), yy.ravel()])
else:
Z = clf.predict_proba(np.c_[xx.ravel(), yy.ravel()])[:, 1]
# Put the result into a color plot
Z = Z.reshape(xx.shape)
ax.contourf(xx, yy, Z, cmap=cm, alpha=.8)
# Plot the training points
ax.scatter(X_train[:, 0], X_train[:, 1], c=y_train, cmap=cm_bright,
edgecolors='k')
# Plot the testing points
ax.scatter(X_test[:, 0], X_test[:, 1], c=y_test, cmap=cm_bright,
edgecolors='k', alpha=0.6)
ax.set_xlim(xx.min(), xx.max())
ax.set_ylim(yy.min(), yy.max())
ax.set_xticks(())
ax.set_yticks(())
if ds_cnt == 0:
ax.set_title(name)
ax.text(xx.max() - .3, yy.min() + .3, ('%.2f' % score).lstrip('0'),
size=15, horizontalalignment='right')
i += 1
plt.tight_layout()
# plt.show()
%%time
import time
from tqdm import tqdm, trange
for i in tqdm(range(3)):
time.sleep(1)
for i in trange(3):
time.sleep(1)
###Output
100%|████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:03<00:00, 1.01s/it]
100%|████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:03<00:00, 1.01s/it] |
dmu16/dmu16_allwise/WISE_2_GAIA-ELAIS-N1.ipynb | ###Markdown
Puts ALL WISE Astrometry reference catalogues into GAIA reference frame. The WISE catalogues were produced by ../dmu16_allwise/make_wise_samples_for_stacking.csh. In the catalogue, we keep: the position and the chi^2. This astrometric correction is adapted from master list code (dmu1_ml_XMM-LSS/1.8_SERVS.ipynb) written by Yannick Rohlly and Raphael Shirley
###Code
field="ELAIS-N1"
from herschelhelp_internal import git_version
print("This notebook was run with herschelhelp_internal version: \n{}".format(git_version()))
%matplotlib inline
#%config InlineBackend.figure_format = 'svg'
import matplotlib.pyplot as plt
plt.rc('figure', figsize=(10, 6))
from collections import OrderedDict
import os
from astropy import units as u
from astropy.coordinates import SkyCoord
from astropy.table import Column, Table
import numpy as np
from herschelhelp_internal.flagging import gaia_flag_column
from herschelhelp_internal.masterlist import nb_astcor_diag_plot, remove_duplicates
from herschelhelp_internal.utils import astrometric_correction, flux_to_mag
OUT_DIR = os.environ.get('TMP_DIR', "../dmu16_allwise/data/")
try:
os.makedirs(OUT_DIR)
except FileExistsError:
pass
RA_COL = "servs_ra"
DEC_COL = "servs_dec"
## I - Reading in WISE astrometric catalogue
wise = Table.read(f"../dmu16_allwise/data/Allwise_PSF_stack_{field}.fits")
wise_coords=SkyCoord(wise['ra'], wise['dec'])
epoch = 2009
wise[:10].show_in_notebook()
###Output
_____no_output_____
###Markdown
III - Astrometry correction. We match the astrometry to the Gaia one. We limit the Gaia catalogue to sources with a g band flux between the 30th and the 70th percentile. Some quick tests show that this gives the lowest dispersion in the results.
###Code
#gaia = Table.read("./dmu17_XMM-LSS/data/GAIA_XMM-LSS.fits")
print(f"../../dmu0/dmu0_GAIA/data/GAIA_{field}.fits")
gaia = Table.read(f"../../dmu0/dmu0_GAIA/data/GAIA_{field}.fits")
gaia_coords = SkyCoord(gaia['ra'], gaia['dec'])
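# The lines above do not apply the G-band flux cut described in the markdown
# (keeping sources between the 30th and 70th flux percentiles). Below is a sketch of
# how it could be done; the column name 'phot_g_mean_flux' is an assumption about
# this Gaia table, so the block is guarded and only runs if the column exists.
if 'phot_g_mean_flux' in gaia.colnames:
    g_flux = np.asarray(gaia['phot_g_mean_flux'], dtype=float)
    flux_lo, flux_hi = np.nanpercentile(g_flux, [30, 70])
    bright_cut = (g_flux >= flux_lo) & (g_flux <= flux_hi)
    # gaia_coords_bright could be passed to astrometric_correction instead of gaia_coords
    gaia_coords_bright = SkyCoord(gaia['ra'][bright_cut], gaia['dec'][bright_cut])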
nb_astcor_diag_plot(wise_coords.ra, wise_coords.dec,
gaia_coords.ra, gaia_coords.dec, near_ra0=True)
delta_ra, delta_dec = astrometric_correction(
wise_coords,
gaia_coords, near_ra0=True
)
print("RA correction: {}".format(delta_ra))
print("Dec correction: {}".format(delta_dec))
print( wise["ra"])
print(delta_ra.to(u.deg))
#wise["ra"] += delta_ra.to(u.deg)
wise["ra"] = wise["ra"]+ delta_ra.to(u.deg)
wise["dec"] = wise["dec"]+ delta_dec.to(u.deg)
nb_astcor_diag_plot(wise["ra"], wise["dec"],
gaia_coords.ra, gaia_coords.dec, near_ra0=True)
###Output
/Users/sjo/anaconda3/envs/herschelhelp_internal/lib/python3.6/site-packages/matplotlib/axes/_axes.py:6462: UserWarning: The 'normed' kwarg is deprecated, and has been replaced by the 'density' kwarg.
warnings.warn("The 'normed' kwarg is deprecated, and has been "
###Markdown
V - Saving to disk
###Code
wise.write(f"../dmu16_allwise/data/Allwise_PSF_stack_GAIA_{field}.fits", overwrite=True)
###Output
_____no_output_____ |
line-follower/src/old_lane_follower_past_project/pictures to numpy.ipynb | ###Markdown
Convert Pictures to Numpy
###Code
#Create references to important directories we will use over and over
import os, sys
current_dir = os.getcwd()
SCRIPTS_HOME_DIR = current_dir
DATA_HOME_DIR = current_dir+'/data'
from glob import glob
import numpy as np
import _pickle as pickle
import PIL
from PIL import Image
from tqdm import tqdm
from PIL import ImageOps
from PIL import Image
from tqdm import tqdm
import bcolz
import seaborn as sns
import matplotlib as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Iterate through pictures in a directory. This assumes X_train exists as ordered images and Y_train is a CSV file.
###Code
def folder_to_numpy(image_directory_full):
"""
Read sorted pictures (by filename) in a folder to a numpy array
USAGE:
data_folder = '/train/test1'
X_train = folder_to_numpy(data_folder)
Args:
data_folder (str): The relative folder from DATA_HOME_DIR
Returns:
picture_array (np array): The numpy array in tensorflow format
"""
# change directory
print ("Moving to directory: " + image_directory_full)
os.chdir(image_directory_full)
# read in filenames from directory
g = glob('*.png')
if len(g) == 0:
g = glob('*.jpg')
print ("Found {} pictures".format(len(g)))
# sort filenames
g.sort()
# open and convert images to numpy array
print("Starting pictures to numpy conversion")
picture_arrays = np.array([np.array(Image.open(image_path)) for image_path in g])
# reshape to tensorflow format
# picture_arrays = picture_arrays.reshape(*picture_arrays.shape, 1)
print ("Shape of output: {}".format(picture_arrays.shape))
# return array
return picture_arrays
data_folder = '/train/binary/forward'
X_train = folder_to_numpy(data_folder)
Y_train = np.arange(0,754).reshape(754,1)
# Y_train = np.random.rand(X_train.shape[0], 1)
# Y_train = genfromtxt('my_file.csv', delimiter=',')
Y_train.shape
def save_array(fname, arr):
c=bcolz.carray(arr, rootdir=fname, mode='w')
c.flush()
def load_array(fname):
return bcolz.open(fname)[:]
# save_array('test.bc', X_train)
# X_train = load_array('test.bc')
# from keras.preprocessing import image
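# flip4DArray below flips each image horizontally: for an array shaped
# (samples, height, width, channels), array[..., ::-1, :] reverses the width axis.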
def flip4DArray(array):
return array[..., ::-1,:] #[:,:,::-1] also works but is 50% slower
X_train_flip = flip4DArray(X_train)
X_train_flip.shape
X_train = X_train.reshape(X_train.shape[:-1])
X_train_flip = X_train_flip.reshape(X_train_flip.shape[:-1])
sns.heatmap(X_train[10], cmap='gray')
sns.heatmap(X_train_flip[10], cmap='gray')
gen = image.ImageDataGenerator()
train = gen.flow(X_train.reshape(*X_train.shape, 1), Y_train, shuffle=False, batch_size=64)
x, y = train.next()
print(x.shape, y.shape)
print(y)
###Output
_____no_output_____ |
lessons/Experimental Design/.ipynb_checkpoints/L2_Experiment_Size-checkpoint.ipynb | ###Markdown
Experiment Size. We can use the knowledge of our desired practical significance boundary to plan out our experiment. By knowing how many observations we need in order to detect our desired effect to our desired level of reliability, we can see how long we would need to run our experiment and whether or not it is feasible. Let's use the example from the video, where we have a baseline click-through rate of 10% and want to see a manipulation increase this baseline to 12%. How many observations would we need in each group in order to detect this change with power $1-\beta = .80$ (i.e. detect the 2% absolute increase 80% of the time), at a Type I error rate of $\alpha = .05$?
###Code
# import packages
import numpy as np
import scipy.stats as stats
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Method 1: Trial and ErrorOne way we could solve this is through trial and error. Every sample size will have a level of power associated with it; testing multiple sample sizes will gradually allow us to narrow down the minimum sample size required to obtain our desired power level. This isn't a particularly efficient method, but it can provide an intuition for how experiment sizing works.Fill in the `power()` function below following these steps:1. Under the null hypothesis, we should have a critical value for which the Type I error rate is at our desired alpha level. - `se_null`: Compute the standard deviation for the difference in proportions under the null hypothesis for our two groups. The base probability is given by `p_null`. Remember that the variance of the difference distribution is the sum of the variances for the individual distributions, and that _each_ group is assigned `n` observations. - `null_dist`: To assist in re-use, this should be a [scipy norm object](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.norm.html). Specify the center and standard deviation of the normal distribution using the "loc" and "scale" arguments, respectively. - `p_crit`: Compute the critical value of the distribution that would cause us to reject the null hypothesis. One of the methods of the `null_dist` object will help you obtain this value (passing in some function of our desired error rate `alpha`).2. The power is the proportion of the distribution under the alternative hypothesis that is past that previously-obtained critical value. - `se_alt`: Now it's time to make computations in the other direction. This will be standard deviation of differences under the desired detectable difference. Note that the individual distributions will have different variances now: one with `p_null` probability of success, and the other with `p_alt` probability of success. - `alt_dist`: This will be a scipy norm object like above. Be careful of the "loc" argument in this one. The way the `power` function is set up, it expects `p_alt` to be greater than `p_null`, for a positive difference. - `beta`: Beta is the probability of a Type-II error, or the probability of failing to reject the null for a particular non-null state. That means you should make use of `alt_dist` and `p_crit` here!The second half of the function has already been completed for you, which creates a visualization of the distribution of differences for the null case and for the desired detectable difference. Use the cells that follow to run the function and observe the visualizations, and to test your code against a few assertion statements. Check the following page if you need help coming up with the solution.
###Code
def power(p_null, p_alt, n, alpha = .05, plot = True):
"""
Compute the power of detecting the difference in two populations with
different proportion parameters, given a desired alpha rate.
Input parameters:
p_null: base success rate under null hypothesis
p_alt : desired success rate to be detected, must be larger than
p_null
n : number of observations made in each group
alpha : Type-I error rate
plot : boolean for whether or not a plot of distributions will be
created
Output value:
power : Power to detect the desired difference, under the null.
"""
# Compute the power
    # standard error of the difference under the null (both groups at p_null, n obs each)
    se_null = np.sqrt(p_null * (1 - p_null) + p_null * (1 - p_null)) / np.sqrt(n)
    null_dist = stats.norm(loc = 0, scale = se_null)
    p_crit = null_dist.ppf(1 - alpha)

    # standard error of the difference under the alternative (p_null vs p_alt)
    se_alt = np.sqrt(p_null * (1 - p_null) + p_alt * (1 - p_alt)) / np.sqrt(n)
    alt_dist = stats.norm(loc = p_alt - p_null, scale = se_alt)
    beta = alt_dist.cdf(p_crit)
if plot:
# Compute distribution heights
low_bound = null_dist.ppf(.01)
high_bound = alt_dist.ppf(.99)
x = np.linspace(low_bound, high_bound, 201)
y_null = null_dist.pdf(x)
y_alt = alt_dist.pdf(x)
# Plot the distributions
plt.plot(x, y_null)
plt.plot(x, y_alt)
plt.vlines(p_crit, 0, np.amax([null_dist.pdf(p_crit), alt_dist.pdf(p_crit)]),
linestyles = '--')
plt.fill_between(x, y_null, 0, where = (x >= p_crit), alpha = .5)
plt.fill_between(x, y_alt , 0, where = (x <= p_crit), alpha = .5)
plt.legend(['null','alt'])
plt.xlabel('difference')
plt.ylabel('density')
plt.show()
# return power
return (1 - beta)
power(.1, .12, 1000)
assert np.isclose(power(.1, .12, 1000, plot = False), 0.4412, atol = 1e-4)
assert np.isclose(power(.1, .12, 3000, plot = False), 0.8157, atol = 1e-4)
assert np.isclose(power(.1, .12, 5000, plot = False), 0.9474, atol = 1e-4)
print('You should see this message if all the assertions passed!')
###Output
_____no_output_____
###Markdown
Method 2: Analytic SolutionNow that we've got some intuition for power by using trial and error, we can now approach a closed-form solution for computing a minimum experiment size. The key point to notice is that, for an $\alpha$ and $\beta$ both < .5, the critical value for determining statistical significance will fall between our null click-through rate and our alternative, desired click-through rate. So, the difference between $p_0$ and $p_1$ can be subdivided into the distance from $p_0$ to the critical value $p^*$ and the distance from $p^*$ to $p_1$.Those subdivisions can be expressed in terms of the standard error and the z-scores:$$p^* - p_0 = z_{1-\alpha} SE_{0},$$$$p_1 - p^* = -z_{\beta} SE_{1};$$$$p_1 - p_0 = z_{1-\alpha} SE_{0} - z_{\beta} SE_{1}$$In turn, the standard errors can be expressed in terms of the standard deviations of the distributions, divided by the square root of the number of samples in each group:$$SE_{0} = \frac{s_{0}}{\sqrt{n}},$$$$SE_{1} = \frac{s_{1}}{\sqrt{n}}$$Substituting these values in and solving for $n$ will give us a formula for computing a minimum sample size to detect a specified difference, at the desired level of power:$$n = \lceil \big(\frac{z_{\alpha} s_{0} - z_{\beta} s_{1}}{p_1 - p_0}\big)^2 \rceil$$where $\lceil ... \rceil$ represents the ceiling function, rounding up decimal values to the next-higher integer. Implement the necessary variables in the function below, and test them with the cells that follow.
###Code
def experiment_size(p_null, p_alt, alpha = .05, beta = .20):
"""
Compute the minimum number of samples needed to achieve a desired power
level for a given effect size.
Input parameters:
p_null: base success rate under null hypothesis
p_alt : desired success rate to be detected
alpha : Type-I error rate
beta : Type-II error rate
Output value:
n : Number of samples required for each group to obtain desired power
"""
# Get necessary z-scores and standard deviations (@ 1 obs per group)
    z_null = stats.norm.ppf(1 - alpha)
    z_alt = stats.norm.ppf(beta)
    sd_null = np.sqrt(p_null * (1 - p_null) + p_null * (1 - p_null))
    sd_alt = np.sqrt(p_null * (1 - p_null) + p_alt * (1 - p_alt))

    # Compute and return minimum sample size
    n = ((z_null * sd_null - z_alt * sd_alt) / (p_alt - p_null)) ** 2
return np.ceil(n)
experiment_size(.1, .12)
assert np.isclose(experiment_size(.1, .12), 2863)
print('You should see this message if the assertion passed!')
###Output
_____no_output_____
###Markdown
Notes on InterpretationThe example explored above is a one-tailed test, with the alternative value greater than the null. The power computations performed in the first part will _not_ work if the alternative proportion is greater than the null, e.g. detecting a proportion parameter of 0.88 against a null of 0.9. You might want to try to rewrite the code to handle that case! The same issue should not show up for the second approach, where we directly compute the sample size.If you find that you need to do a two-tailed test, you should pay attention to two main things. First of all, the "alpha" parameter needs to account for the fact that the rejection region is divided into two areas. Secondly, you should perform the computation based on the worst-case scenario, the alternative case with the highest variability. Since, for the binomial, variance is highest when $p = .5$, decreasing as $p$ approaches 0 or 1, you should choose the alternative value that is closest to .5 as your reference when computing the necessary sample size.Note as well that the above methods only perform sizing for _statistical significance_, and do not take into account _practical significance_. One thing to realize is that if the true size of the experimental effect is the same as the desired practical significance level, then it's a coin flip whether the mean will be above or below the practical significance bound. This also doesn't even consider how a confidence interval might interact with that bound. In a way, experiment sizing is a way of checking on whether or not you'll be able to get what you _want_ from running an experiment, rather than checking if you'll get what you _need_. Alternative ApproachesThere are also tools and Python packages that can also help with sample sizing decisions, so you don't need to solve for every case on your own. The sample size calculator [here](http://www.evanmiller.org/ab-testing/sample-size.html) is applicable for proportions, and provides the same results as the methods explored above. (Note that the calculator assumes a two-tailed test, however.) Python package "statsmodels" has a number of functions in its [`power` module](https://www.statsmodels.org/stable/stats.htmlpower-and-sample-size-calculations) that perform power and sample size calculations. Unlike previously shown methods, differences between null and alternative are parameterized as an effect size (standardized difference between group means divided by the standard deviation). Thus, we can use these functions for more than just tests of proportions. If we want to do the same tests as before, the [`proportion_effectsize`](http://www.statsmodels.org/stable/generated/statsmodels.stats.proportion.proportion_effectsize.html) function computes [Cohen's h](https://en.wikipedia.org/wiki/Cohen%27s_h) as a measure of effect size. As a result, the output of the statsmodel functions will be different from the result expected above. This shouldn't be a major concern since in most cases, you're not going to be stopping based on an exact number of observations. You'll just use the value to make general design decisions.
###Code
# example of using statsmodels for sample size calculation
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize
# leave out the "nobs" parameter to solve for it
NormalIndPower().solve_power(effect_size = proportion_effectsize(.12, .1), alpha = .05, power = 0.8,
alternative = 'larger')
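# For a two-tailed test (a sketch): split alpha across both tails in the analytic
# function above, e.g. experiment_size(.1, .12, alpha = .05 / 2), or ask statsmodels
# for a 'two-sided' alternative:
NormalIndPower().solve_power(effect_size = proportion_effectsize(.12, .1), alpha = .05, power = 0.8,
                             alternative = 'two-sided')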
###Output
_____no_output_____ |
Ugeseddel 2.ipynb | ###Markdown
Exercise 2.1
###Code
class Dice(object):
def __init__(self):
pass
@staticmethod
def outcome_space():
return [i + 1 for i in range(6)]
class Coin(object):
def __init__(self):
pass
@staticmethod
def outcome_space():
return[0, 1]
red = Dice()
white = Dice()
outcomes = []
for i in red.outcome_space():
for j in white.outcome_space():
outcomes.append((i,j))
def Y_var(red, white):
return min(red, white)
def Z_var(red, white):
return max(red, white)
###Output
_____no_output_____
###Markdown
Marginal distribution of $Y$
###Code
Y_outcomes = []
for outcome in outcomes:
result = Y_var(outcome[0],outcome[1])
Y_outcomes.append(result)
_bins = [i*0.5+1 for i in range(13)]
plt.hist(Y_outcomes, bins=_bins)
plt.title("Marginale fordeling af Y")
###Output
_____no_output_____
###Markdown
The marginal distribution of Z
###Code
Z_outcomes = []
for outcome in outcomes:
result = Z_var(outcome[0],outcome[1])
Z_outcomes.append(result)
_bins = [i*0.5+ 1 for i in range(13)]
plt.hist(Z_outcomes, bins=_bins)
plt.title("Marginale fordeling af Z")
###Output
_____no_output_____
###Markdown
Exercise 2.3
###Code
Y_vals = [0,1,1,2,2,3,3,4,4,5]
PY_vals = [0,0,0.4,0.4,0.7,0.7,0.9,0.9,1,1]
plt.plot(Y_vals, PY_vals)
plt.title("Fordelingsfunktion af Y")
###Output
_____no_output_____
###Markdown
Exercise B.3. For simplicity we call the first die **red** and the second one **white**
###Code
red = Dice()
white = Dice()
outcomes = []
for i in red.outcome_space():
for j in white.outcome_space():
outcomes.append((i,j))
def Y(r, w):
return r + w
def Z(r, w):
return r - w
def W(r, w):
return (r-w)**2
def tabulate_pdf(func, dice1, dice2):
outcome_dict = dict()
for i in dice1.outcome_space():
for j in dice2.outcome_space():
try:
outcome_dict[i].append(func(i,j))
except:
outcome_dict[i] = []
outcome_dict[i].append(func(i,j))
dice2.outcome_space()
return pd.DataFrame(outcome_dict, index= dice2.outcome_space())
def tab_pdf_to_list(pdf, dice):
outcomes = list()
for i in dice.outcome_space():
outcomes = outcomes + list(pdf[i])
return outcomes
###Output
_____no_output_____
###Markdown
Joint PDF (distribution) and CDF (distribution function) for the variable Y
###Code
y_pdf = tabulate_pdf(Y, red, white)
y_pdf
_bins = [i * 0.5 + 1 for i in range(25)]
plt.hist(tab_pdf_to_list(y_pdf, red),bins = _bins, normed=False, cumulative = False)
plt.title("PDF - frekvenser ikke sandsynligheder op af Y aksen (transformer ved i * (1/36))")
plt.hist(tab_pdf_to_list(y_pdf, red),bins = _bins, normed=False, cumulative = True)
plt.title("CDF - frekvenser ikke sandsynligheder op af Y aksen (transformer ved i * (1/36))")
###Output
_____no_output_____
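###Markdown
For reference (an added check, not part of the original exercise): for $Y = r + w$ the exact PMF is $P(Y=k) = \frac{6 - |7-k|}{36}$ for $k = 2,\dots,12$, which should match the tabulated frequencies above.
###Code
y_values = tab_pdf_to_list(y_pdf, red)
for k in range(2, 13):
    exact = (6 - abs(7 - k)) / 36
    enumerated = y_values.count(k) / len(y_values)
    print(f"P(Y={k}): exact = {exact:.4f}, enumerated = {enumerated:.4f}")
###Output
_____no_output_____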
###Markdown
Joint PDF (distribution) and CDF (distribution function) for the variable Z
###Code
z_pdf = tabulate_pdf(Z, red, white)
z_pdf
_bins = [i * 0.5 for i in range(13)]
plt.hist(tab_pdf_to_list(z_pdf, red),bins = _bins, normed=False)
plt.title("PDF - frekvenser ikke sandsynligheder op af Y aksen (transformer ved i * (1/36))")
plt.hist(tab_pdf_to_list(z_pdf, red),bins = _bins, normed=False, cumulative = True)
plt.title("CDF - frekvenser ikke sandsynligheder op af Y aksen (transformer ved i * (1/36))")
###Output
_____no_output_____
###Markdown
Joint PDF (distribution) and CDF (distribution function) for the variable W
###Code
w_pdf = tabulate_pdf(W, red, white)
w_pdf
_bins = [i * 0.5 for i in range(51)]
plt.hist(tab_pdf_to_list(w_pdf, red),bins = _bins, normed=False)
plt.title("PDF - frekvenser ikke sandsynligheder op af Y aksen (transformer ved i * (1/36))")
plt.hist(tab_pdf_to_list(w_pdf, red),bins = _bins, normed=False, cumulative = True)
plt.title("CDF - frekvenser ikke sandsynligheder op af Y aksen (transformer ved i * (1/36))")
###Output
_____no_output_____
###Markdown
Exercise B.4
###Code
X_outcomes = []
for i in Dice().outcome_space():
for j in Coin().outcome_space():
X_outcomes.append(i + j)
_bins = [i*0.5 for i in range(17)]
plt.hist(X_outcomes, bins = _bins)
plt.title('PDF af X - frevenser op af y aksen (ikke ssh)')
plt.hist(X_outcomes, bins = _bins, cumulative=True)
plt.title('PDF af X - frevenser op af y aksen (ikke ssh)')
###Output
_____no_output_____
###Markdown
Exercise 2.4
###Code
X_outcomes = [1,2,3]
Y_outcomes = [1/x for x in X_outcomes]
vals, weights = hist_charter(X_outcomes)
plt.hist(vals, weights=weights, rwidth = 0.3)
plt.title("PDF af X")
plt.hist(vals, weights = weights, cumulative =True, bins = [0,1,2,3,4,5])
plt.title("CDF af X")
vals, weights = hist_charter(Y_outcomes)
plt.hist(vals, weights=weights, rwidth = 0.3)
plt.title("PDF af Y")
plt.hist(vals, weights = weights, cumulative =True,)
plt.title("CDF af Y")
###Output
_____no_output_____ |
Tests_for_Correctness.ipynb | ###Markdown
Inputs for testing:
###Code
#inputs with random numbers
ran_pos_range100 = random_numbers(100, 0, 100)
ran_neg_range100 = random_numbers(100, -100, 0)
ran_mixed_range100 = random_numbers(100,-50, 50)
ran_high_repetition_range10 = random_numbers(100, -5, 5)
#random numbers with Sentinel values
ran_pos_range100_s = random_numbers(100, 0, 100, True)
ran_neg_range100_s = random_numbers(100, -100, 0, True)
ran_mixed_range100_s = random_numbers(100,-50, 50, True)
ran_high_repetition_range10_s = random_numbers(100, -5, 5, True)
#Ordered numbers
ordered = gen_ordered(1000, 0)
ordered_neg_pos = gen_ordered(1000, -500)
ordered_sentinel = gen_ordered(1000, -500, sentinel=True)
#Empty list:
empty_list = []
#repeated number
repeated_number = gen_repeated_number(42, 1000)
# Test where we know it would fail:
char_list = ['a', 'c', 'd', 'h', 'i', 'a', 'w']
mixed_list = [2, 5, -1, "Yellow", "Green", "Blue", True, False, 54.20]
#Swap - Helper Method:
def swap(A, i, j):
temp = A[i]
A[i] = A[j]
A[j] = temp
###Output
_____no_output_____
###Markdown
Classic Quicksort implementation
###Code
def quick_sort(A, left, right):
if right - left >= 1:
p = A[right]; i = left-1; j = right
        #Currently it will not run a single iteration if j > i is not satisfied.
while j>i:
i+=1
while A[i] < p:
i+= 1
j-=1
while A[j] > p:
j-=1
if j>i:
swap(A, i, j)
swap(A, i, right)
quick_sort(A, left, i-1)
quick_sort(A,i+1,right)
return A
###Output
_____no_output_____
###Markdown
Alternative Classic Quicksort:
###Code
def quick_sort2(A, left, right):
if right - left >= 1:
p = A[right]; i = left-1; j = right
        #Here it will run at least once before it halts
while True: # difference is here
i+=1
while A[i] < p:
i+= 1
j-=1
while A[j] > p:
j-=1
            if j>i:
                swap(A, i, j)
            if j <= i:   # stop once the pointers have met or crossed
                break
swap(A, i, right)
quick_sort2(A, left, i-1)
quick_sort2(A,i+1,right)
return A
###Output
_____no_output_____
###Markdown
Dual Pivot Implementation:
###Code
def dual_pivot_quick_sort(A, left, right):
if right - left >= 1:
if A[left] > A[right]:
swap(A, left, right)
p = A[left]; q= A[right]
        # A[left] <= A[right] is guaranteed by the swap above, so p <= q here
l = left + 1; g = right-1; k = l
while k <= g:
if A[k] < p:
swap(A, k, l)
l+= 1
else:
if A[k] > q:
while (A[g] > q) & (k < g):
g-= 1
swap(A, k, g)
g-=1
if A[k] < p:
swap(A, k, l)
l+= 1
k += 1
l-= 1
g+= 1
swap(A, left, l)
swap(A, right, g)
dual_pivot_quick_sort(A, left, l-1)
dual_pivot_quick_sort(A, l+1, g-1)
dual_pivot_quick_sort(A, g+1, right)
return A
###Output
_____no_output_____
###Markdown
Tests
###Code
#All test lists are of length 100. Therefore we can enter 99 as our right pivot to start the algorithm
#simple method to check whether a list is sorted in non-decreasing order
def is_sorted(lst):
    prior = float('-inf')
    for i in lst:
        if i < prior:
            return False
        prior = i
    return True
if is_sorted(dual_pivot_quick_sort(ran_pos_range100, 0, len(ran_pos_range100)-1)):
print("Correct output for dual pivot with random posivtive integers of sample size 100 and range 100 in different values")
if is_sorted(dual_pivot_quick_sort(ran_neg_range100, 0, 99)):
print("Correct output for dual pivot with random negative integers of sample size 100 and range 100 in different values")
if is_sorted(dual_pivot_quick_sort(ran_mixed_range100, 0, 99)):
print("correct output")
if is_sorted(dual_pivot_quick_sort(ran_high_repetition_range10, 0, 99)):
print("correct output")
#Test
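# Extra sanity check (added; not part of the original tests): compare each
# implementation against Python's built-in sorted() on a fresh random list and
# report the result (or the exception) for every sort.
import random
_sample = [random.randint(-50, 50) for _ in range(100)]
for _name, _fn in [("quick_sort", quick_sort),
                   ("quick_sort2", quick_sort2),
                   ("dual_pivot_quick_sort", dual_pivot_quick_sort)]:
    try:
        _result = _fn(list(_sample), 0, len(_sample) - 1) == sorted(_sample)
    except Exception as exc:
        _result = f"raised {type(exc).__name__}"
    print(_name, "matches sorted():", _result)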
###Output
Correct output for dual pivot with random positive integers of sample size 100 and range 100 in different values
Correct output for dual pivot with random negative integers of sample size 100 and range 100 in different values
correct output
correct output
|
notebooks/pandas/dfassign.ipynb | ###Markdown
https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.assign.html
###Code
import pandas as pd
value = list(range(5, 0, -1))
material = ["cotton", "funnel", "plastic", "vinyl", "silk"]
df_ori = pd.DataFrame({"type": value, "material": material})
df_ori.head()
###Output
_____no_output_____
###Markdown
Create a new column that adds `_ori` to every item of the `material` column
###Code
df = df_ori
df = df.assign(material_processed = lambda x: x['material'] + "_ori", material_final = lambda x : x['material_processed'] + "_final")
df.head()
###Output
_____no_output_____
###Markdown
Direct assignment without a lambda raises an error here, because `df['material_processed']` is evaluated before `assign` runs and that column does not yet exist on `df`
###Code
df = df_ori
df = df.assign(material_processed = df['material'] + "_ori", material_final = df['material_processed'] + "_final")
df.head()
###Output
_____no_output_____
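###Markdown
An equivalent alternative (added for illustration) is to chain two separate `assign` calls, so that each new column only refers to columns that already exist at the time it is computed:
###Code
df = df_ori
df = df.assign(material_processed=df['material'] + "_ori")
df = df.assign(material_final=df['material_processed'] + "_final")
df.head()
###Output
_____no_output_____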
###Markdown
But this works: the plain expression for `material_processed` only uses the existing `material` column, and the lambda for `material_final` is evaluated after `material_processed` has been added
###Code
df = df_ori
df = df.assign(material_processed = df['material'] + "_ori", material_final = lambda x: x['material_processed'] + "_final")
df.head()
###Output
_____no_output_____ |
research/object_detection/tl_detector.ipynb | ###Markdown
Object Detection DemoWelcome to the object detection inference walkthrough! This notebook will walk you step by step through the process of using a pre-trained model to detect objects in an image. Make sure to follow the [installation instructions](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/installation.md) before you start. Imports
###Code
from distutils.version import LooseVersion, StrictVersion
import numpy as np
import os
import six.moves.urllib as urllib
import sys
import tarfile
import tensorflow as tf
import zipfile
from collections import defaultdict
from io import StringIO
from matplotlib import pyplot as plt
from PIL import Image
# This is needed since the notebook is stored in the object_detection folder.
sys.path.append("..")
from object_detection.utils import ops as utils_ops
print(LooseVersion(tf.__version__))
if LooseVersion(tf.__version__) < LooseVersion('1.4.0'):
raise ImportError('Please upgrade your tensorflow installation to v1.4.* or later!')
###Output
1.10.0
###Markdown
Env setup
###Code
# This is needed to display the images.
%matplotlib inline
###Output
_____no_output_____
###Markdown
Object detection importsHere are the imports from the object detection module.
###Code
from utils import label_map_util
from utils import visualization_utils as vis_util
###Output
_____no_output_____
###Markdown
Model preparation VariablesAny model exported using the `export_inference_graph.py` tool can be loaded here simply by changing `PATH_TO_FROZEN_GRAPH` to point to a new .pb file. By default we use an "SSD with Mobilenet" model here. See the [detection model zoo](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md) for a list of other models that can be run out-of-the-box with varying speeds and accuracies.
###Code
# What model to download.
MODEL_NAME = 'faster_rcnn_resnet50_coco_2018_01_28'
MODEL_FILE = MODEL_NAME + '.tar.gz'
DOWNLOAD_BASE = 'http://download.tensorflow.org/models/object_detection/'
# Path to frozen detection graph. This is the actual model that is used for the object detection.
PATH_TO_FROZEN_GRAPH = MODEL_NAME + '/frozen_inference_graph.pb'
# List of the strings that is used to add correct label for each box.
PATH_TO_LABELS = os.path.join('config', 'tl_light_map.pbtxt')
NUM_CLASSES = 4
###Output
_____no_output_____
###Markdown
Download Model
###Code
opener = urllib.request.URLopener()
opener.retrieve(DOWNLOAD_BASE + MODEL_FILE, MODEL_FILE)
tar_file = tarfile.open(MODEL_FILE)
for file in tar_file.getmembers():
file_name = os.path.basename(file.name)
if 'frozen_inference_graph.pb' in file_name:
tar_file.extract(file, os.getcwd())
###Output
_____no_output_____
###Markdown
Load a (frozen) Tensorflow model into memory.
###Code
detection_graph = tf.Graph()
with detection_graph.as_default():
od_graph_def = tf.GraphDef()
with tf.gfile.GFile(PATH_TO_FROZEN_GRAPH, 'rb') as fid:
serialized_graph = fid.read()
od_graph_def.ParseFromString(serialized_graph)
tf.import_graph_def(od_graph_def, name='')
###Output
_____no_output_____
###Markdown
Loading label mapLabel maps map indices to category names, so that when our convolution network predicts `5`, we know that this corresponds to `airplane`. Here we use internal utility functions, but anything that returns a dictionary mapping integers to appropriate string labels would be fine
###Code
label_map = label_map_util.load_labelmap(PATH_TO_LABELS)
categories = label_map_util.convert_label_map_to_categories(label_map, max_num_classes=NUM_CLASSES, use_display_name=True)
category_index = label_map_util.create_category_index(categories)
print(category_index)
###Output
{1: {'id': 1, 'name': 'Red'}, 2: {'id': 2, 'name': 'Yellow'}, 3: {'id': 3, 'name': 'Green'}, 4: {'id': 4, 'name': 'NoTrafficLight'}}
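###Markdown
As a small added illustration (not part of the original walkthrough), the category index built above can be used to translate a predicted class id back into its label name:
###Code
def class_id_to_name(class_id):
    """Look up the display name for a detection class id in the label map."""
    return category_index[class_id]['name']

print(class_id_to_name(1), class_id_to_name(3))
###Output
_____no_output_____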
###Markdown
Helper code
###Code
def load_image_into_numpy_array(image):
(im_width, im_height) = image.size
return np.array(image.getdata()).reshape(
(im_height, im_width, 3)).astype(np.uint8)
###Output
_____no_output_____
###Markdown
Detection
###Code
# For the sake of simplicity we will use only 2 images:
# image1.jpg
# image2.jpg
# If you want to test the code with your images, just add path to the images to the TEST_IMAGE_PATHS.
PATH_TO_TEST_IMAGES_DIR = 'test_images'
TEST_IMAGE_PATHS = [ os.path.join(PATH_TO_TEST_IMAGES_DIR, 'image{}.jpg'.format(i)) for i in range(1, 3) ]
# Size, in inches, of the output images.
IMAGE_SIZE = (12, 8)
def run_inference_for_single_image(image, graph):
with graph.as_default():
with tf.Session() as sess:
# Get handles to input and output tensors
ops = tf.get_default_graph().get_operations()
all_tensor_names = {output.name for op in ops for output in op.outputs}
tensor_dict = {}
for key in [
'num_detections', 'detection_boxes', 'detection_scores',
'detection_classes', 'detection_masks'
]:
tensor_name = key + ':0'
if tensor_name in all_tensor_names:
tensor_dict[key] = tf.get_default_graph().get_tensor_by_name(
tensor_name)
if 'detection_masks' in tensor_dict:
# The following processing is only for single image
detection_boxes = tf.squeeze(tensor_dict['detection_boxes'], [0])
detection_masks = tf.squeeze(tensor_dict['detection_masks'], [0])
# Reframe is required to translate mask from box coordinates to image coordinates and fit the image size.
real_num_detection = tf.cast(tensor_dict['num_detections'][0], tf.int32)
detection_boxes = tf.slice(detection_boxes, [0, 0], [real_num_detection, -1])
detection_masks = tf.slice(detection_masks, [0, 0, 0], [real_num_detection, -1, -1])
detection_masks_reframed = utils_ops.reframe_box_masks_to_image_masks(
detection_masks, detection_boxes, image.shape[0], image.shape[1])
detection_masks_reframed = tf.cast(
tf.greater(detection_masks_reframed, 0.5), tf.uint8)
# Follow the convention by adding back the batch dimension
tensor_dict['detection_masks'] = tf.expand_dims(
detection_masks_reframed, 0)
image_tensor = tf.get_default_graph().get_tensor_by_name('image_tensor:0')
# Run inference
output_dict = sess.run(tensor_dict,
feed_dict={image_tensor: np.expand_dims(image, 0)})
# all outputs are float32 numpy arrays, so convert types as appropriate
output_dict['num_detections'] = int(output_dict['num_detections'][0])
output_dict['detection_classes'] = output_dict[
'detection_classes'][0].astype(np.uint8)
output_dict['detection_boxes'] = output_dict['detection_boxes'][0]
output_dict['detection_scores'] = output_dict['detection_scores'][0]
if 'detection_masks' in output_dict:
output_dict['detection_masks'] = output_dict['detection_masks'][0]
return output_dict
for image_path in TEST_IMAGE_PATHS:
image = Image.open(image_path)
# the array based representation of the image will be used later in order to prepare the
# result image with boxes and labels on it.
image_np = load_image_into_numpy_array(image)
# Expand dimensions since the model expects images to have shape: [1, None, None, 3]
image_np_expanded = np.expand_dims(image_np, axis=0)
# Actual detection.
output_dict = run_inference_for_single_image(image_np, detection_graph)
# Visualization of the results of a detection.
vis_util.visualize_boxes_and_labels_on_image_array(
image_np,
output_dict['detection_boxes'],
output_dict['detection_classes'],
output_dict['detection_scores'],
category_index,
instance_masks=output_dict.get('detection_masks'),
use_normalized_coordinates=True,
line_thickness=8)
plt.figure(figsize=IMAGE_SIZE)
plt.imshow(image_np)
###Output
_____no_output_____ |
examples/Categorizing_by_topic_using_conversation_threads.ipynb | ###Markdown
Topic Categorization with conversation thread effectsWhen detecting topics in a document, common ways include simple *keyword* matching, *topic modeling*, and many others. While this works fine for large text documents like news articles, applying this type of approach to social media data has a serious *methodological* flaw: posts are not isolated but usually part of a conversation thread. Hence, if one post is detected as being on the topic, it is logical that another post that replies to it is also on the topic; however, this second post might not have used any of the monitored *keywords* and hence was not tagged as on the topic. This notebook walks through the problem and shows how **nlpru**'s `Recategorize_topics` method simplifies this analysis by taking thread effects into account.--------------- The problem space On Twitter, we can say that there are 3 potential scenarios for tweets in the context of a conversation thread: 1. **Topics flow down in threads, not up**: the first scenario is quite simple -- a tweet replies to another tweet. So for instance, if **tweet 1** is categorized as being on a certain topic, then logically every replying tweet is also on the topic (**tweets 2-5**). * If **tweet 3** is on the topic, then **tweet 4** is as well, but not the others (**1, 2, or 5**) * *NOTE*: This is obviously a bit of a simplification and depends on how a topic is defined. A reply can be on a separate topic; especially if the topics being analyzed are quite close logically, the transition is harder to determine. More on this later... 2. **Adding text/comment while retweeting also has to be taken into account**: if a user retweets a tweet, they have the option to *Retweet with comment* -- which has its own 'tweet' characteristics and is linked (and displayed) with the retweeted tweet as embedded below. In the Twitter API, the retweeted tweet is called a `quoted_status`. Hence if a *quoted tweet*, for instance **tweet 10**, is categorized on the topic, then the *commenting tweet*, or **tweet 9**, must also be on the topic; and 3. **As regularly retweeted tweets have their own unique tweet id, this also has to be taken into account**: Twitter stores a reply relationship in the original (i.e. retweeted) tweet, not the tweet that retweeted it. As we often investigate tweet topics by also including the tweets that retweeted others as copies of the original, we need to take this into account. * In other words, say we want to create a picture of the topics that were discussed during a particular day. We would pull all the tweets and then pull all the retweets that were made during that day, and plot them by topic per hour. This means that we pull the `retweet tweet id`, `retweet created at`, and `retweeted tweet text`. In this case, we need to check if the retweeted tweet was a reply to another tweet that was determined to be on a specific topic! Conceptual solution As such we have 4 inputs:* the original tweets pre-categorized, for instance using the `nlpru.Keyword_Match` method;* a list of tweets that reply to other tweets: `list_replies = [('twtid','inreplytotwtid'),...]`;* a list of retweeted tweet ids that retweeted a previous tweet: `list_rts = [('twtid','rttwtid'),...]`; and* a list of quotes, i.e. when one tweet quoted another: `list_quotes = [('twtid','qttwtid'),...]`The output should be like the pre-categorized list of tweets, but the topic labels should be changed. Conceptually, there are **two ways** to solve the problem:1. 
Using the tweets that **are categorized** about the topic * In other words, for each tweet that is about a topic, use the conversation thread linkages based on certain rules to tag all related or **downstream** tweets as also about the topic. This will mean that a conversation thread is *virtually* created for each tweet coded about the topic.2. The reverse, starting with the tweets **not categorized** as about the topic * This approach will require an iteration through all uncategorized tweets, checking for each one whether the rules and conversation thread linkages allow the tweet to be recategorized as about the topic. This step is repeated again and again until tweets are **no longer** being recategorized. While we need a thorough test of efficiency to know for sure, option 1 requires building a conversation thread *object* first, and using it to categorize tweets. As this method does not currently exist, this will be done in a later section. *(Note, some like @fionapigott have created [conversation thread builders](https://github.com/fionapigott/conversation-builder), but they only work with *replies*, and not *quotes*, as there is often an interrelation of quotes that starts separate conversation threads)*This workbook will hence follow the 2nd option in solving the problem Approach The solution will be based on the following steps/rules:1. Create a dictionary for each conversation relationship, such as `replies_dict = {'twtid':'inreplytotwtid',...}`2. Create a dictionary for all tweets within the sample, and a sub-dictionary that contains the necessary parameters (topic, etc). For instance `tweet_dict = {'twtid':{'topic':'protests','twt_text':'bla bla bla','userid':'123456'}, ...}`3. Iterate through all tweets **not** on the topic, and using the conversation relationships from 1, find the tweets that each tweet refers to (i.e. the *parent* to the *child*)4. Use the dictionary of tweets and topics from 2, to check which topic the *parent* tweet is on, and if it is on the topic, change the topic of the *child*5. Continue until no more tweets are recategorized with each loop > The full code of the solution can be seen in the [conversation.py](../nlpru/conversation.py) file Great! Now let's see how to use this method using **nlpru**. We first need to:1. Categorize some tweets about a topic (we duplicate the steps described in the [Topic categorization example](nlpru_topic_categorization_walkthrough.ipynb))2. Prepare the `replies`, `retweets`, and `quotes` lists 1. Get some data and pre-categorize tweets as about a topic-----------------------------
###Code
import pandas as pd
from nlpru import FindTopics
from pysqlc import DB
db = DB('kremlin_tweets_db')
def __get_data__(start_date, end_date):
"""
Collect the data -- as tweets and retweets are stored separately, collect via inner joins and then
use UNION to append.
"""
q = """
SELECT
tmast.twttext as twttext,
tsamp.twtid,
tmast.userid,
twt_createdat,
imrev3
FROM samp_twts_all_rus_twts_str tsamp
INNER JOIN twt_Master tmast
ON tsamp.twtid=tmast.twtid
LEFT JOIN meta_all_users_communities com
ON tmast.userid=com.userid
WHERE tmast.twt_lang='ru'
AND tmast.twt_createdat >= '{start}'
AND tmast.twt_createdat < '{end}'
UNION ALL
SELECT
tmast.twttext AS twttext,
tsamp.twtid,
trts.userid,
twt_createdat,
imrev3
FROM samp_twts_all_rus_twts_str tsamp
INNER JOIN twt_rtmaster trts
ON tsamp.twtid=trts.twtid
INNER JOIN twt_master tmast
ON trts.rttwtid=tmast.twtid
LEFT JOIN meta_all_users_communities com
ON tmast.userid=com.userid
WHERE tmast.twt_lang='ru'
AND tmast.twt_createdat >= '{start}'
AND tmast.twt_createdat < '{end}';
""".format(start=start_date,
end=end_date)
raw = db.query(q)
print("There are {:,} tweets in the captured sample!".format(len(raw)))
return raw
###Output
_____no_output_____
###Markdown
Specify *March 26th* as the day we want to focus on ([the day of massive protests in Russia](https://en.wikipedia.org/wiki/2017%E2%80%932018_Russian_protests26_March_2017))
###Code
start_date='2017-03-26'
end_date='2017-03-27'
raw = __get_data__(start_date=start_date, end_date=end_date)
###Output
There are 44,613 tweets in the captured sample!
###Markdown
Now add some keywords and classify this data as *about* a topic or *not* based on a match of **at least one** keyword:
###Code
#Lets say we pick the following keywords:
keywords1 = "россия, москва, митинг, навальный, задержать, против, акция, полицейский, димонответить, димон, протест, коррупция"
keywords = keywords1.split(", ")
T = FindTopics(
tweet_list=raw,
tweet_text_index=0,
tweet_id_index=1)
r = T.Keyword_Match({'protests':keywords})
###Output
_____no_output_____
###Markdown
`r` is output as the following key-value *dictionary* pair:```python'tweet id': { 'clean_words': ['...list of clean words...'], 'other': [ 'user id', datetime.datetime(time stamp tweet created at), int(community (imrev3))], 'text': '...text of the actual tweet...', 'topic': '...topic text label ...' }```To check what proportion of tweets is on the topic, let's convert it to a dataframe and calculate %s:
###Code
df = pd.DataFrame.from_dict(r, orient='index')
df.reset_index(inplace=True)
df[["index","topic"]].groupby("topic").count()/df["index"].count()*100
###Output
_____no_output_____
###Markdown
Hence, approximately 15% of them are categorized as being about the topic we picked based on keywords. -----------------------This is the benchmark -- from this, we can add conversation thread effects 2. Prepare the conversation thread linkages Get the *list of replies* for this sample and date range
###Code
repl_q = """
SELECT
repl.twtid,
inreplytotwtid
FROM meta_repliesmaster repl
INNER JOIN samp_twts_all_rus_twts_str samp
ON repl.twtid=samp.twtid
INNER JOIN twt_master tm
ON tm.twtid=repl.twtid
WHERE tm.twt_createdat >= '{start_date}'
AND tm.twt_createdat < '{end_date}'
""".format(start_date=start_date, end_date=end_date)
reply_list = db.query(repl_q)
###Output
_____no_output_____
###Markdown
Now get the *retweets* for this sample and date range
###Code
retweet_q = """
SELECT
rt.twtid,
rttwtid
FROM twt_rtmaster rt
INNER JOIN samp_twts_all_rus_twts_str samp
ON rt.twtid=samp.twtid
WHERE rt.rt_createdat >= '{start_date}'
AND rt.rt_createdat < '{end_date}'
""".format(start_date=start_date, end_date=end_date)
retweet_list = db.query(retweet_q)
###Output
_____no_output_____
###Markdown
Now get the *quotes* for this sample and the date range
###Code
quote_q = """
SELECT
qt.twtid,
qttwtid
FROM twt_qtmaster qt
INNER JOIN samp_twts_all_rus_twts_str samp
ON qt.twtid=samp.twtid
INNER JOIN twt_master tm
ON tm.twtid=qt.twtid
WHERE tm.twt_createdat >= '{start_date}'
AND tm.twt_createdat < '{end_date}'
""".format(start_date=start_date, end_date=end_date)
quote_list = db.query(quote_q)
###Output
_____no_output_____
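###Markdown
Before running the real method, here is a minimal sketch (illustrative only, not the actual `nlpru` implementation) of the recategorization loop described in the Approach section above. It assumes a `tweet_dict` shaped like `r`, and it collapses the reply, retweet and quote lists into a single child-to-parent lookup (a real implementation would handle tweets that appear in more than one list). All names in this cell are hypothetical.
###Code
def recategorize_sketch(tweet_dict, reply_list, retweet_list, quote_list, topic):
    # build a single child -> parent lookup from all three relationship lists
    parent_of = {}
    for child, parent in list(reply_list) + list(retweet_list) + list(quote_list):
        parent_of[child] = parent
    changed = True
    while changed:
        changed = False
        for twtid, meta in tweet_dict.items():
            if meta['topic'] == topic:
                continue
            parent = parent_of.get(twtid)
            if parent in tweet_dict and tweet_dict[parent]['topic'] == topic:
                meta['topic'] = topic  # inherit the parent's topic
                changed = True
    return tweet_dict
###Output
_____no_output_____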
###Markdown
Test the modelNow that all data is assembled, let's try to run the model and see what we get
###Code
from nlpru import Conversations
c = Conversations(
reply_list=reply_list,
retweet_list=retweet_list,
quote_list=quote_list)
t = c.Recategorize_topics(topic_for_which_to_check="protests", tweet_dict=r)
###Output
1 iteration completed, recategorized this round: 388
2 iteration completed, recategorized this round: 17
3 iteration completed, recategorized this round: 0
###Markdown
We see that only 3 iterations were required -- in fact the last one wasn't even needed! In the first round, 388 tweets needed recategorizing based on conversation effects. Only 17 were reclassified in the second round.
###Code
df_postconvos = pd.DataFrame.from_dict(t, orient='index')
df_postconvos.reset_index(inplace=True)
df_postconvos[["index","topic"]].groupby("topic").count()/df_postconvos["index"].count()*100
###Output
_____no_output_____ |
Probability_Distribution.ipynb | ###Markdown
2.2 Probability DistributionA probability distribution is a function that gives the probabilities of different possible outcomes of an experiment.--- 2.2.1 Probability AxiomsAn **experiment** is any activity or process whose outcome is subject to uncertainty.The **sample space** $S$ of an experiment is the set of all possible outcomes of that experiment. It is usually more meaningful to study collections of outcomes from $S$ than individual outcomes.An **event** is any subset of outcomes contained in the sample space $S$. An event is simple if it consists of only one outcome and compound if it consists of multiple.The **probability distribution** is a function which assigns to each event $A$ a number $P(A)$ which will give a precise measure of the chance that $A$ will occur.* $1 \geq P(A) \geq 0$* $P(S) = 1$* If $A_1,A_2,...$ is an infinite collection of disjoint events, then $P(A_1\cup A_2\cup ...) = \sum\limits_{i=1}^{\infty} P(A_i)$* For any event $A$, $P(A) + P(A') = 1$, from which $P(A) = 1 - P(A')$* When events $A$ and $B$ are mutually exclusive, $P(A\cup B) = P(A) + P(B)$* For any two events $A$ and $B$, $P(A\cup B) = P(A) + P(B) - P(A\cap B)$
###Code
import random
one, two, three, four, five = 0, 0, 0, 0, 0
for i in range(10000):
num = random.randint(1, 5)
if num == 1:
one += 1
elif num == 2:
two += 1
elif num == 3:
three += 1
elif num == 4:
four += 1
else:
five += 1
print("number of 1s:", one, "\nnumber of 2s: ", two, "\nnumber of 3s: ", three, "\nnumber of 4s: ", four, "\nnumber of 5s: ", five)
###Output
number of 1s: 2051
number of 2s: 1980
number of 3s: 1939
number of 4s: 2055
number of 5s: 1975
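###Markdown
The addition rule $P(A\cup B) = P(A) + P(B) - P(A\cap B)$ can also be checked by simulation (an added example; the events below are arbitrary choices): with a fair die, let $A$ = "roll is even" and $B$ = "roll is at least 4".
###Code
import random

trials = 10000
count_A, count_B, count_AB, count_AorB = 0, 0, 0, 0
for _ in range(trials):
    roll = random.randint(1, 6)
    in_A = roll % 2 == 0
    in_B = roll >= 4
    count_A += in_A
    count_B += in_B
    count_AB += in_A and in_B
    count_AorB += in_A or in_B
print("P(A) + P(B) - P(A and B) =", (count_A + count_B - count_AB) / trials)
print("P(A or B)                =", count_AorB / trials)
###Output
_____no_output_____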
###Markdown
Given a random number generator with 5 equally likely outcomes, each outcome should happen around 2,000 times if run 10,000 times. From the code above, we can see that the probability is roughly evenly distributed. 2.2.2 Conditional Probability**Conditional probability** is defined as the likelihood of an event or outcome happening based on the occurrence of a previous event or outcome. It's expressed as a ratio of unconditional probabilities: the numerator is the probability of the intersection of the two events, whereas the denominator is the probability of the conditioning event $B$. The conditional probability of $A$ given $B$ is proportional to $P(A\cap B)$.The conditional probability of $A$ given that $B$ has occurred is defined by $P(A|B) = \frac{P(A\cap B)}{P(B)}$Conditional probability also gives rise to the multiplication rule$P(A\cap B) = P(A|B) \cdot P(B)$. This is important because $P(A\cap B)$ is often desired, and $P(B)$ and $P(A|B)$ are specified in the problem.$A$ and $B$ are independent events if $P(A|B) = P(A)$ or $P(A\cap B) = P(A)\cdot P(B)$. This applies to collections of events as well.
###Code
A = 0.4
B = 0.8
cond = "{:.2f}".format(A * B / B)
print(cond)
A *= B
cond = "{:.2f}".format(A * B / B)
print(cond)
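# Added illustration (not in the original cell): estimate a conditional probability
# by simulation with two dice. Let A = "the sum is at least 10" and B = "the first
# die shows 6"; P(A|B) is then the share of A-outcomes among the trials where B occurred.
import random
trials = 100000
count_B, count_A_and_B = 0, 0
for _ in range(trials):
    d1, d2 = random.randint(1, 6), random.randint(1, 6)
    if d1 == 6:
        count_B += 1
        if d1 + d2 >= 10:
            count_A_and_B += 1
print("estimated P(A|B):", count_A_and_B / count_B)  # exact value is 3/6 = 0.5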
###Output
0.40
0.32
|
Prerequisite packages to have the SAME python and R environment.ipynb | ###Markdown
PREREQUISITE PACKAGES TO HAVE SAME ENVIRONMENT PYTHON:python 3.7.6 pandas 1.0.0 numpy 1.18.1 scikit-learn 0.21.2 imbalanced-learn 0.5.0 matplotlib 3.1.2 plotly 4.5.0 seaborn 0.10.0
###Code
conda install python=3.7.6
conda install scikit-learn==0.21.2
conda install -c anaconda pandas==1.0.0
conda install -c conda-forge imbalanced-learn==0.5.0
conda install -c anaconda numpy==1.18.1
conda install -c conda-forge matplotlib==3.1.2
conda install -c plotly plotly=4.5.0
conda install -c anaconda seaborn==0.10.0
###Output
_____no_output_____ |
steps/01_forecasting_api/forecasting-model-selection-and-evaluation.ipynb | ###Markdown
Forecasting: Model selection & evaluationReference issue: [622](https://github.com/alan-turing-institute/sktime/issues/622), [597](https://github.com/alan-turing-institute/sktime/issues/597)Contributors: @aiwalter, @mloning, @fkiraly, @pabworks, @ngupta23, @ViktorKaz IntroductionWe start by making a few conceptual points clarifying (i) the difference between model selection and model evaluation and (ii) different temporal cross-validation strategies. We then suggest possible design solutions. We conclude by highlighting a few technical challenges. Concepts Model selection vs model evaluationIn model evaluation, we are interested in estimating model performance, that is, how the model is likely to perform in deployment. To estimate model performance, we typically use cross-validation. Our estimates are only reliable if are our assumptions hold in deployment. With time series data, for example, we cannot plausibly assume that our observations are i.i.d., and have to replace traditional estimation techniques such as cross-validation based on random sampling with techniques that take into account the temporal dependency structure of the data (e.g. temporal cross-validation techniques like sliding windows). In model selection, we are interested in selecting the best model from a predefined set of possible models, based on the best estimated model performance. So, model selection involves model evaluation, but having selected the best model, we still need to evaluate it in order to estimate its performance in deployment.Literature references:* [On the use of cross-validation for time series predictor evaluation](https://www.sciencedirect.com/science/article/pii/S0020025511006773?casa_token=3s0uDvJVsyUAAAAA:OSzMrqFwpjP-Rz3WKaftf8O7ZYdynoszegwgTsb-pYXAv7sRDtRbhihRr3VARAUTCyCmxjAxXqk), comparative empirical analysis of CV approaches for forecasting model evaluation* [On Over-fitting in Model Selection and Subsequent Selection Bias in Performance Evaluation](https://jmlr.csail.mit.edu/papers/volume11/cawley10a/cawley10a.pdf) Different temporal cross-validation strategiesThere is a variety of different approaches to temporal cross-validation.Sampling: how is the data split into training and test windows* blocked cross-validation, random subsampling with some distance between training and test windows,* sliding windows, re-fitting the model for each training window (request [621](https://github.com/alan-turing-institute/sktime/issues/621)),* sliding windows with an initial window, using the initial window for training and subsequent windows for updating,* expanding windows, refitting the model for each training window.It is important to document clearly which software specification implements which (statistical) strategy. DesignSince there is a clear difference between the concepts of model selection and evaluation, there should arguably also be a clear difference for the software API (following domain-driven design principles, more [here](https://arxiv.org/abs/2101.04938)). Potential design solutions:1. Keep `ForecastingGridSearchCV` and add model evaluation functionality2. Factor out model evaluation (e.g. `Evaluator`) and reuse it both inside model selection and for model evaluation functionality3. Keep only `ForecatingGridSearchCV` and use inspection on CV results for model evaluation 1. Keep `ForecastingGridSearchCV` and add model evaluation functions see e.g. `cross_val_score` as in [`pmdarima`](https://alkaline-ml.com/pmdarima/auto_examples/model_selection/example_cross_validation.html)
###Code
def evaluate(forecaster, y, fh, cv=None, strategy="refit", scoring=None):
"""Evaluate forecaster using cross-validation"""
# check cv, compatibility with fh
# check strategy, e.g. assert strategy in ("refit", "update"), compatibility with cv
# check scoring
# pre-allocate score array
n_splits = cv.get_n_splits(y)
scores = np.empty(n_splits)
for i, (train, test) in enumerate(cv.split(y)):
# split data
y_train = y.iloc[train]
y_test = y.iloc[test]
# fit and predict
forecaster.fit(y_train, fh)
y_pred = forecaster.predict()
# score
scores[i] = scoring(y_test, y_pred)
# return scores, possibly aggregate
return scores
###Output
_____no_output_____
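###Markdown
A hypothetical usage sketch for the `evaluate` function above (the data set, splitter settings and metric are illustrative assumptions, not a fixed API proposal):
###Code
import numpy as np
from sktime.datasets import load_airline
from sktime.forecasting.naive import NaiveForecaster
from sktime.forecasting.model_selection import SlidingWindowSplitter
from sktime.performance_metrics.forecasting import smape_loss

y = load_airline()
fh = np.arange(1, 13)
cv = SlidingWindowSplitter(fh=fh, window_length=36)
scores = evaluate(NaiveForecaster(strategy="last"), y, fh, cv=cv, scoring=smape_loss)
scores
###Output
_____no_output_____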
###Markdown
2. Factor out model evaluation and reuse it both for model selection and model evaluation functionalityFor further modularizations, see current benchmarking module
###Code
# using evaluate function from above
class ForecastingGridSearchCV:
def fit(self, y, fh=None, X=None):
# note that fh is no longer optional in fit here
cv_results = np.empty(len(self.param_grid))
for i, params in enumerate(self.param_grid):
forecaster = clone(self.forecaster)
forecaster.set_params(**params)
scores = evaluate(forecaster, y, fh, cv=self.cv, strategy=self.strategy, scoring=self.scoring)
cv_results[i] = np.mean(scores)
# note we need to keep track of more than just scores, including fitted models if we do
# not want to refit after model selection
# select best params
return self
###Output
_____no_output_____ |
matplotlib/gallery_jupyter/axes_grid1/make_room_for_ylabel_using_axesgrid.ipynb | ###Markdown
Make Room For Ylabel Using Axesgrid
###Code
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import make_axes_locatable
from mpl_toolkits.axes_grid1.axes_divider import make_axes_area_auto_adjustable
plt.figure()
ax = plt.axes([0, 0, 1, 1])
ax.set_yticks([0.5])
ax.set_yticklabels(["very long label"])
make_axes_area_auto_adjustable(ax)
plt.figure()
ax1 = plt.axes([0, 0, 1, 0.5])
ax2 = plt.axes([0, 0.5, 1, 0.5])
ax1.set_yticks([0.5])
ax1.set_yticklabels(["very long label"])
ax1.set_ylabel("Y label")
ax2.set_title("Title")
make_axes_area_auto_adjustable(ax1, pad=0.1, use_axes=[ax1, ax2])
make_axes_area_auto_adjustable(ax2, pad=0.1, use_axes=[ax1, ax2])
fig = plt.figure()
ax1 = plt.axes([0, 0, 1, 1])
divider = make_axes_locatable(ax1)
ax2 = divider.new_horizontal("100%", pad=0.3, sharey=ax1)
ax2.tick_params(labelleft=False)
fig.add_axes(ax2)
divider.add_auto_adjustable_area(use_axes=[ax1], pad=0.1,
adjust_dirs=["left"])
divider.add_auto_adjustable_area(use_axes=[ax2], pad=0.1,
adjust_dirs=["right"])
divider.add_auto_adjustable_area(use_axes=[ax1, ax2], pad=0.1,
adjust_dirs=["top", "bottom"])
ax1.set_yticks([0.5])
ax1.set_yticklabels(["very long label"])
ax2.set_title("Title")
ax2.set_xlabel("X - Label")
plt.show()
###Output
_____no_output_____ |
Zawal_serce.ipynb | ###Markdown
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import plotly.figure_factory as ff
import seaborn as sns
from scipy import stats # test for normality of the distribution
from sklearn.preprocessing import scale,StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix,accuracy_score, classification_report
from mlxtend.plotting import plot_confusion_matrix
from google.colab import files
uploaded = files.upload()
serce = pd.read_csv('heart.csv')
serce.head()
###Output
_____no_output_____
###Markdown
**Variables:**
age - age in years
sex - (1 = male; 0 = female)
cp - chest pain type
trestbps - resting blood pressure (in mm Hg on admission to the hospital)
chol - serum cholesterol in mg/dl
fbs - (fasting blood sugar > 120 mg/dl) (1 = true; 0 = false)
restecg - resting electrocardiographic results
thalach - maximum heart rate achieved
exang - exercise-induced angina (1 = yes; 0 = no)
oldpeak - exercise-induced ST-segment depression in the ECG
slope - slope of the ST segment in the ECG
ca - number of major vessels (0-3) colored by fluoroscopy
thal - (thalassemia) 3 = normal; 6 = fixed defect; 7 = reversible defect
target - 1 or 0
###Code
serce.shape
nulls_summary = pd.DataFrame(serce.isnull().any(), columns=['Nulls'])
nulls_summary['Num_of_nulls [qty]'] = pd.DataFrame(serce.isnull().sum())
nulls_summary['Num_of_nulls [%]'] = round((serce.isnull().mean()*100),2)
print(nulls_summary)
serce.skew()
corr = serce.corr()
fig = plt.figure()
ax = fig.add_subplot(111)
cax = ax.matshow(corr,cmap='coolwarm', vmin=-1, vmax=1)
fig.colorbar(cax)
ticks = np.arange(0,len(serce.columns),1)
ax.set_xticks(ticks)
plt.xticks(rotation=90)
ax.set_yticks(ticks)
ax.set_xticklabels(serce.columns)
ax.set_yticklabels(serce.columns)
plt.figure(figsize=(100,100))
plt.show()
serce.hist(bins=20) # histogram for all variables
serce.select_dtypes([float, int]).apply(stats.normaltest) # the p-value is the value to look at
#check where the outliers occur
Q_first = serce.quantile(0.25)
Q_third = serce.quantile(0.75)
iqr = Q_third-Q_first
low_boundary = (Q_first - 1.5 * iqr)
upp_boundary = (Q_third + 1.5 * iqr)
num_of_outliers_L = (serce[iqr.index] < low_boundary).sum()
num_of_outliers_U = (serce[iqr.index] > upp_boundary).sum()
wartosci_odstajace = pd.DataFrame({'niska_granica':low_boundary, 'wysoka_granica':upp_boundary,\
'wartosci_odstajace_L':num_of_outliers_L, 'wartosci_odstajace_U':num_of_outliers_U})
wartosci_odstajace
# relationships between variables
# Case A - no outliers, normal distribution.
np.corrcoef(serce.select_dtypes(['float', 'int']), rowvar=0)
# Case B - outliers may be present, any distribution.
stats.spearmanr(serce.select_dtypes(['float', 'int']))[0]
serce.head()
#standardization - probably not needed here
serce_st = serce
#scaler = StandardScaler()
#serce_st[['trestbps', 'chol','thalach','oldpeak']] = scaler.fit_transform(serce_st[['trestbps', 'chol','thalach','oldpeak']])
serce_st.head()
serce_st.target.value_counts().plot(kind='pie')
data = serce_st.copy()
target = data.pop('target')
data.head()
target.head()
X_train, X_test, y_train, y_test = train_test_split(data, target, random_state=42)
print(f'X_train shape {X_train.shape}')
print(f'y_train shape {y_train.shape}')
print(f'X_test shape {X_test.shape}')
print(f'y_test shape {y_test.shape}')
print(f'\nTest ratio: {len(X_test) / len(data):.2f}')
print(f'\ny_train:\n{y_train.value_counts()}')
print(f'\ny_test:\n{y_test.value_counts()}')
#test_size=0.3 - test data is 30%
X_train, X_test, y_train, y_test = train_test_split(data, target, test_size=0.3, random_state=42)
print(f'X_train shape {X_train.shape}')
print(f'y_train shape {y_train.shape}')
print(f'X_test shape {X_test.shape}')
print(f'y_test shape {y_test.shape}')
print(f'\nTest ratio: {len(X_test) / len(data):.2f}')
print(f'\ny_train:\n{y_train.value_counts()}')
print(f'\ny_test:\n{y_test.value_counts()}')
#target, train_size=0.9 - training data is 90%
X_train, X_test, y_train, y_test = train_test_split(data, target, train_size=0.9, random_state=42)
print(f'X_train shape {X_train.shape}')
print(f'y_train shape {y_train.shape}')
print(f'X_test shape {X_test.shape}')
print(f'y_test shape {y_test.shape}')
print(f'\nTest ratio: {len(X_test) / len(data):.2f}')
print(f'\ny_train:\n{y_train.value_counts()}')
print(f'\ny_test:\n{y_test.value_counts()}')
#stratify - split preserving the proportions of the target variable
X_train, X_test, y_train, y_test = train_test_split(data, target, random_state=42, test_size=0.1, stratify=target)
print(f'X_train shape {X_train.shape}')
print(f'y_train shape {y_train.shape}')
print(f'X_test shape {X_test.shape}')
print(f'y_test shape {y_test.shape}')
print(f'\nTest ratio: {len(X_test) / len(data):.2f}')
print(f'\ny_train:\n{y_train.value_counts()}')
print(f'\ny_test:\n{y_test.value_counts()}')
X_train, X_test, y_train, y_test = train_test_split(data, target, random_state=40, test_size=0.25, stratify=target)
print(f'X_train shape {X_train.shape}')
print(f'y_train shape {y_train.shape}')
print(f'X_test shape {X_test.shape}')
print(f'y_test shape {y_test.shape}')
print(f'\nTest ratio: {len(X_test) / len(data):.2f}')
print(f'\ntarget:\n{target.value_counts() / len(target)}')
print(f'\ny_train:\n{y_train.value_counts() / len(y_train)}')
print(f'\ny_test:\n{y_test.value_counts() / len(y_test)}')
log_reg = LogisticRegression()
log_reg.fit(X_train, y_train)
y_pred = log_reg.predict(X_test)
y_pred[:30]
y_prob = log_reg.predict_proba(X_test)
y_prob[:30]
cm = confusion_matrix(y_test, y_pred)
plot_confusion_matrix(cm)
print(f'Accuracy: {accuracy_score(y_test, y_pred)}')
print(classification_report(y_test, y_pred))
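# Added illustration (not part of the original analysis): ROC AUC complements accuracy
# for this binary classifier; it is computed from the predicted probabilities (y_prob)
# rather than from the hard class labels.
from sklearn.metrics import roc_auc_score
print(f'ROC AUC: {roc_auc_score(y_test, y_prob[:, 1]):.3f}')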
def plot_confusion_matrix(cm):
# klasyfikacja binarna
cm = cm[::-1]
cm = pd.DataFrame(cm, columns=['pred_0', 'pred_1'], index=['true_1', 'true_0'])
fig = ff.create_annotated_heatmap(z=cm.values, x=list(cm.columns), y=list(cm.index),
colorscale='ice', showscale=True, reversescale=True)
fig.update_layout(width=500, height=500, title='Confusion Matrix', font_size=16)
fig.show()
plot_confusion_matrix(cm)
###Output
_____no_output_____ |
ipynb/Stage1.ipynb | ###Markdown
###Code
from google.colab import drive
drive.mount('/content/gdrive/')
# %tensorflow_version 2.x
from __future__ import absolute_import, division, print_function, unicode_literals
import tensorflow
from tensorflow import keras
from tensorflow.keras import backend
from tensorflow.keras.models import load_model
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from sklearn.preprocessing import label_binarize
from sklearn.metrics import roc_curve, auc, confusion_matrix, accuracy_score
from sklearn.svm import OneClassSVM
from sklearn.neighbors import LocalOutlierFactor
from keras.utils import plot_model
import matplotlib.pyplot as plt
from scipy import interp
import numpy as np
import tqdm
import math
import cv2
import os
! pip install git+https://github.com/divamgupta/image-segmentation-keras.git
# move dataset to colab space
!cp -r "/content/gdrive/My Drive/ECE1512/stage1/" /content/
# GENERATORS FOR model
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from sklearn.utils.class_weight import compute_class_weight
train_directory = '/content/stage1/train'
validation_directory = '/content/stage1/validation'
test_directory = '/content/stage1/test'
CLASSES = ['normal', 'pneumonia']
image_size = (299, 299)
# train image generator
train_datagen = ImageDataGenerator(rescale=1./255,
rotation_range=10,
horizontal_flip=True,
vertical_flip=True)
train_generator = train_datagen.flow_from_directory(train_directory,
class_mode='categorical',
interpolation='bilinear',
target_size=image_size,
batch_size=16,
shuffle=True,
classes=CLASSES)
unique, train_counts = np.unique(train_generator.labels, return_counts=True)
train_size = train_counts.sum()
# validation image generator
validation_datagen = ImageDataGenerator(rescale=1./255,
rotation_range=10,
horizontal_flip=True,
vertical_flip=True)
validation_generator = validation_datagen.flow_from_directory(validation_directory,
class_mode='categorical',
interpolation='bilinear',
target_size=image_size,
batch_size=16,
shuffle=True,
classes=CLASSES)
unique, validation_counts = np.unique(validation_generator.labels, return_counts=True)
validation_size = validation_counts.sum()
# test image generator
test_datagen = ImageDataGenerator(rescale=1./255,
rotation_range=10,
horizontal_flip=True,
vertical_flip=True)
test_generator = test_datagen.flow_from_directory(test_directory,
class_mode='categorical',
interpolation='bilinear',
target_size=image_size,
batch_size=16,
shuffle=False,
classes=CLASSES)
unique, test_counts = np.unique(test_generator.labels, return_counts=True)
test_size = test_counts.sum()
print(train_generator.class_indices)
print(validation_generator.class_indices)
print(test_generator.class_indices)
class_weights = compute_class_weight('balanced', np.unique(train_generator.classes), train_generator.classes)
print(class_weights)
###Output
Found 4108 images belonging to 2 classes.
Found 878 images belonging to 2 classes.
Found 879 images belonging to 2 classes.
{'normal': 0, 'pneumonia': 1}
{'normal': 0, 'pneumonia': 1}
{'normal': 0, 'pneumonia': 1}
[1.83885407 0.68672685]
###Markdown
**Inceptionv3**
###Code
# LOAD PRETRAINED MODEL InceptionV3
from keras.applications.inception_v3 import InceptionV3
from keras.applications.nasnet import NASNetLarge
# create the base pre-trained model
inceptionv3 = InceptionV3(weights='imagenet', include_top=True)
# nasnetlarge = NASNetLarge(weights='imagenet', include_top=False, pooling='avg', classes=2)
# BUILD NEW CLASSIFICATION MODEL BASED ON inceptionv3
import tensorflow
from keras.optimizers import RMSprop, Adam
from keras.models import Model
from keras.layers import Dense, GlobalAveragePooling2D, Activation, Input, Dense, Lambda
from keras import metrics
from keras.backend import resize_images
import cv2
y = inceptionv3.layers[-2].output
outputs = Dense(2, activation='sigmoid')(y)
# this is the model we will train
model1 = Model(inputs=inceptionv3.inputs, outputs=outputs)
# first: train only the top layers (which were randomly initialized)
# i.e. freeze all convolutional InceptionV3 layers
for layer in inceptionv3.layers:
layer.trainable = False
# note: this loop re-enables training for ALL layers of model1 (including the
# InceptionV3 base), so the whole network is fine-tuned rather than only the new top layer
for layer in model1.layers:
    layer.trainable = True
adam = Adam()
# compile the model (should be done *after* setting layers to non-trainable)
model1.compile(optimizer=adam, loss='categorical_crossentropy', metrics=[metrics.categorical_accuracy])
# model1.summary()
plot_model(inceptionv3, show_shapes=True)
# TRAIN model
from math import ceil, floor
from keras.callbacks import ModelCheckpoint
# train the model on the new data for a few epochs
steps_per_epoch = ceil(train_size/16)
validation_steps = ceil(validation_size/16)
history_model1 = model1.fit_generator(train_generator, epochs=17, verbose=1,
steps_per_epoch=steps_per_epoch,
validation_data=validation_generator,
validation_steps=validation_steps,
validation_freq=1,
class_weight=class_weights)
# Plot training & validation accuracy values
fig = plt.figure(figsize=(10, 8))
plt.plot(history_model1.history['categorical_accuracy'])
plt.plot(history_model1.history['val_categorical_accuracy'])
plt.title('Model accuracy',fontsize=20)
plt.ylabel('Accuracy',fontsize=18)
plt.xlabel('Epoch',fontsize=18)
plt.yticks(fontsize=16)
plt.xticks(fontsize=16)
plt.legend(['Train', 'Test'], loc='lower right',fontsize=18)
plt.show()
fig.savefig('/content/gdrive/My Drive/ECE1512/stage1/model1/model1_history_accuracy_17epoch.jpeg')
# Plot training & validation loss values
fig = plt.figure(figsize=(10, 8))
plt.plot(history_model1.history['loss'])
plt.plot(history_model1.history['val_loss'])
plt.title('Model loss',fontsize=20)
plt.ylabel('Loss',fontsize=18)
plt.xlabel('Epoch',fontsize=18)
plt.yticks(fontsize=16)
plt.xticks(fontsize=16)
plt.legend(['Train', 'Test'], loc='upper right',fontsize=18)
plt.show()
fig.savefig('/content/gdrive/My Drive/ECE1512/stage1/model1/model1_history_loss_17epoch.jpeg')
results = model1.predict_generator(test_generator)
print(results)
pred_scores = model1.predict(test_generator)
y_pred = np.argmax(pred_scores,axis=1)
print(y_pred)
print(test_generator.classes)
model1.save('/content/gdrive/My Drive/ECE1512/stage1/model1/model1_17epochs.h5')
import pandas as pd
hist_df = pd.DataFrame(history_model1.history)
# save to json:
hist_json_file = '/content/gdrive/My Drive/ECE1512/stage1/model1/history_model1_17epochs.json'
with open(hist_json_file, mode='w') as f:
hist_df.to_json(f)
import pandas as pd
import json
with open('/content/gdrive/My Drive/ECE1512/stage1/model1/history_model1.json', 'r') as f:
data = json.load(f)
history_model1 = pd.DataFrame(data)
eval_results = model1.evaluate_generator(test_generator)
print(eval_results)
from sklearn.metrics import classification_report, precision_score, precision_score, f1_score
pred = model1.predict(test_generator)
y_pred = np.argmax(pred, axis=1)
print(classification_report(test_generator.labels, y_pred, target_names=CLASSES))
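# Added illustration (not in the original run): print the raw confusion matrix for the
# test set, reusing the confusion_matrix import from the top of the notebook.
cm = confusion_matrix(test_generator.labels, y_pred)
print(cm)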
###Output
precision recall f1-score support
normal 0.94 0.76 0.84 238
pneumonia 0.92 0.98 0.95 641
accuracy 0.92 879
macro avg 0.93 0.87 0.90 879
weighted avg 0.92 0.92 0.92 879
|
experiments/tl_1/oracle.run1-oracle.run2/trials/1/trial.ipynb | ###Markdown
Transfer Learning Template
###Code
%load_ext autoreload
%autoreload 2
%matplotlib inline
import os, json, sys, time, random
import numpy as np
import torch
from torch.optim import Adam
from easydict import EasyDict
import matplotlib.pyplot as plt
from steves_models.steves_ptn import Steves_Prototypical_Network
from steves_utils.lazy_iterable_wrapper import Lazy_Iterable_Wrapper
from steves_utils.iterable_aggregator import Iterable_Aggregator
from steves_utils.ptn_train_eval_test_jig import PTN_Train_Eval_Test_Jig
from steves_utils.torch_sequential_builder import build_sequential
from steves_utils.torch_utils import get_dataset_metrics, ptn_confusion_by_domain_over_dataloader
from steves_utils.utils_v2 import (per_domain_accuracy_from_confusion, get_datasets_base_path)
from steves_utils.PTN.utils import independent_accuracy_assesment
from torch.utils.data import DataLoader
from steves_utils.stratified_dataset.episodic_accessor import Episodic_Accessor_Factory
from steves_utils.ptn_do_report import (
get_loss_curve,
get_results_table,
get_parameters_table,
get_domain_accuracies,
)
from steves_utils.transforms import get_chained_transform
###Output
_____no_output_____
###Markdown
Allowed ParametersThese are allowed parameters, not defaultsEach of these values needs to be present in the injected parameters (the notebook will raise an exception if they are not present)Papermill uses the cell tag "parameters" to inject the real parameters below this cell.Enable tags to see what I mean
###Code
required_parameters = {
"experiment_name",
"lr",
"device",
"seed",
"dataset_seed",
"n_shot",
"n_query",
"n_way",
"train_k_factor",
"val_k_factor",
"test_k_factor",
"n_epoch",
"patience",
"criteria_for_best",
"x_net",
"datasets",
"torch_default_dtype",
"NUM_LOGS_PER_EPOCH",
"BEST_MODEL_PATH",
}
from steves_utils.CORES.utils import (
ALL_NODES,
ALL_NODES_MINIMUM_1000_EXAMPLES,
ALL_DAYS
)
from steves_utils.ORACLE.utils_v2 import (
ALL_DISTANCES_FEET_NARROWED,
ALL_RUNS,
ALL_SERIAL_NUMBERS,
)
standalone_parameters = {}
standalone_parameters["experiment_name"] = "STANDALONE PTN"
standalone_parameters["lr"] = 0.001
standalone_parameters["device"] = "cuda"
standalone_parameters["seed"] = 1337
standalone_parameters["dataset_seed"] = 1337
standalone_parameters["n_way"] = 8
standalone_parameters["n_shot"] = 3
standalone_parameters["n_query"] = 2
standalone_parameters["train_k_factor"] = 1
standalone_parameters["val_k_factor"] = 2
standalone_parameters["test_k_factor"] = 2
standalone_parameters["n_epoch"] = 50
standalone_parameters["patience"] = 10
standalone_parameters["criteria_for_best"] = "source_loss"
standalone_parameters["datasets"] = [
{
"labels": ALL_SERIAL_NUMBERS,
"domains": ALL_DISTANCES_FEET_NARROWED,
"num_examples_per_domain_per_label": 100,
"pickle_path": os.path.join(get_datasets_base_path(), "oracle.Run1_framed_2000Examples_stratified_ds.2022A.pkl"),
"source_or_target_dataset": "source",
"x_transforms": ["unit_mag", "minus_two"],
"episode_transforms": [],
"domain_prefix": "ORACLE_"
},
{
"labels": ALL_NODES,
"domains": ALL_DAYS,
"num_examples_per_domain_per_label": 100,
"pickle_path": os.path.join(get_datasets_base_path(), "cores.stratified_ds.2022A.pkl"),
"source_or_target_dataset": "target",
"x_transforms": ["unit_power", "times_zero"],
"episode_transforms": [],
"domain_prefix": "CORES_"
}
]
standalone_parameters["torch_default_dtype"] = "torch.float32"
standalone_parameters["x_net"] = [
{"class": "nnReshape", "kargs": {"shape":[-1, 1, 2, 256]}},
{"class": "Conv2d", "kargs": { "in_channels":1, "out_channels":256, "kernel_size":(1,7), "bias":False, "padding":(0,3), },},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features":256}},
{"class": "Conv2d", "kargs": { "in_channels":256, "out_channels":80, "kernel_size":(2,7), "bias":True, "padding":(0,3), },},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features":80}},
{"class": "Flatten", "kargs": {}},
{"class": "Linear", "kargs": {"in_features": 80*256, "out_features": 256}}, # 80 units per IQ pair
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm1d", "kargs": {"num_features":256}},
{"class": "Linear", "kargs": {"in_features": 256, "out_features": 256}},
]
# Parameters relevant to results
# These parameters will basically never need to change
standalone_parameters["NUM_LOGS_PER_EPOCH"] = 10
standalone_parameters["BEST_MODEL_PATH"] = "./best_model.pth"
# Parameters
parameters = {
"experiment_name": "tl_1_oracle.run1-oracle.run2",
"device": "cuda",
"lr": 0.001,
"seed": 1337,
"dataset_seed": 1337,
"n_shot": 3,
"n_query": 2,
"train_k_factor": 3,
"val_k_factor": 2,
"test_k_factor": 2,
"torch_default_dtype": "torch.float32",
"n_epoch": 50,
"patience": 3,
"criteria_for_best": "target_loss",
"x_net": [
{"class": "nnReshape", "kargs": {"shape": [-1, 1, 2, 256]}},
{
"class": "Conv2d",
"kargs": {
"in_channels": 1,
"out_channels": 256,
"kernel_size": [1, 7],
"bias": False,
"padding": [0, 3],
},
},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features": 256}},
{
"class": "Conv2d",
"kargs": {
"in_channels": 256,
"out_channels": 80,
"kernel_size": [2, 7],
"bias": True,
"padding": [0, 3],
},
},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features": 80}},
{"class": "Flatten", "kargs": {}},
{"class": "Linear", "kargs": {"in_features": 20480, "out_features": 256}},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm1d", "kargs": {"num_features": 256}},
{"class": "Linear", "kargs": {"in_features": 256, "out_features": 256}},
],
"NUM_LOGS_PER_EPOCH": 10,
"BEST_MODEL_PATH": "./best_model.pth",
"n_way": 16,
"datasets": [
{
"labels": [
"3123D52",
"3123D65",
"3123D79",
"3123D80",
"3123D54",
"3123D70",
"3123D7B",
"3123D89",
"3123D58",
"3123D76",
"3123D7D",
"3123EFE",
"3123D64",
"3123D78",
"3123D7E",
"3124E4A",
],
"domains": [32, 38, 8, 44, 14, 50, 20, 26],
"num_examples_per_domain_per_label": 10000,
"pickle_path": "/mnt/wd500GB/CSC500/csc500-main/datasets/oracle.Run1_10kExamples_stratified_ds.2022A.pkl",
"source_or_target_dataset": "source",
"x_transforms": ["unit_mag"],
"episode_transforms": [],
"domain_prefix": "ORACLE.run1_",
},
{
"labels": [
"3123D52",
"3123D65",
"3123D79",
"3123D80",
"3123D54",
"3123D70",
"3123D7B",
"3123D89",
"3123D58",
"3123D76",
"3123D7D",
"3123EFE",
"3123D64",
"3123D78",
"3123D7E",
"3124E4A",
],
"domains": [32, 38, 8, 44, 14, 50, 20, 26],
"num_examples_per_domain_per_label": 10000,
"pickle_path": "/mnt/wd500GB/CSC500/csc500-main/datasets/oracle.Run2_10kExamples_stratified_ds.2022A.pkl",
"source_or_target_dataset": "target",
"x_transforms": ["unit_mag"],
"episode_transforms": [],
"domain_prefix": "ORACLE.run2_",
},
],
}
# Set this to True if you want to run this template directly
STANDALONE = False
if STANDALONE:
print("parameters not injected, running with standalone_parameters")
parameters = standalone_parameters
if not 'parameters' in locals() and not 'parameters' in globals():
raise Exception("Parameter injection failed")
#Use an easy dict for all the parameters
p = EasyDict(parameters)
supplied_keys = set(p.keys())
if supplied_keys != required_parameters:
print("Parameters are incorrect")
if len(supplied_keys - required_parameters)>0: print("Shouldn't have:", str(supplied_keys - required_parameters))
if len(required_parameters - supplied_keys)>0: print("Need to have:", str(required_parameters - supplied_keys))
raise RuntimeError("Parameters are incorrect")
###################################
# Set the RNGs and make it all deterministic
###################################
np.random.seed(p.seed)
random.seed(p.seed)
torch.manual_seed(p.seed)
torch.use_deterministic_algorithms(True)
###########################################
# The stratified datasets honor this
###########################################
torch.set_default_dtype(eval(p.torch_default_dtype))
###################################
# Build the network(s)
# Note: It's critical to do this AFTER setting the RNG
###################################
x_net = build_sequential(p.x_net)
start_time_secs = time.time()
p.domains_source = []
p.domains_target = []
train_original_source = []
val_original_source = []
test_original_source = []
train_original_target = []
val_original_target = []
test_original_target = []
# global_x_transform_func = lambda x: normalize(x.to(torch.get_default_dtype()), "unit_power") # unit_power, unit_mag
# global_x_transform_func = lambda x: normalize(x, "unit_power") # unit_power, unit_mag
def add_dataset(
labels,
domains,
pickle_path,
x_transforms,
episode_transforms,
domain_prefix,
num_examples_per_domain_per_label,
source_or_target_dataset:str,
iterator_seed=p.seed,
dataset_seed=p.dataset_seed,
n_shot=p.n_shot,
n_way=p.n_way,
n_query=p.n_query,
train_val_test_k_factors=(p.train_k_factor,p.val_k_factor,p.test_k_factor),
):
if x_transforms == []: x_transform = None
else: x_transform = get_chained_transform(x_transforms)
if episode_transforms == []: episode_transform = None
else: raise Exception("episode_transforms not implemented")
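    # prepend the domain prefix (e.g. "ORACLE.run1_") to the episode's domain label; the episode itself is left unchanged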
episode_transform = lambda tup, _prefix=domain_prefix: (_prefix + str(tup[0]), tup[1])
eaf = Episodic_Accessor_Factory(
labels=labels,
domains=domains,
num_examples_per_domain_per_label=num_examples_per_domain_per_label,
iterator_seed=iterator_seed,
dataset_seed=dataset_seed,
n_shot=n_shot,
n_way=n_way,
n_query=n_query,
train_val_test_k_factors=train_val_test_k_factors,
pickle_path=pickle_path,
x_transform_func=x_transform,
)
train, val, test = eaf.get_train(), eaf.get_val(), eaf.get_test()
train = Lazy_Iterable_Wrapper(train, episode_transform)
val = Lazy_Iterable_Wrapper(val, episode_transform)
test = Lazy_Iterable_Wrapper(test, episode_transform)
if source_or_target_dataset=="source":
train_original_source.append(train)
val_original_source.append(val)
test_original_source.append(test)
p.domains_source.extend(
[domain_prefix + str(u) for u in domains]
)
elif source_or_target_dataset=="target":
train_original_target.append(train)
val_original_target.append(val)
test_original_target.append(test)
p.domains_target.extend(
[domain_prefix + str(u) for u in domains]
)
else:
raise Exception(f"invalid source_or_target_dataset: {source_or_target_dataset}")
for ds in p.datasets:
add_dataset(**ds)
# from steves_utils.CORES.utils import (
# ALL_NODES,
# ALL_NODES_MINIMUM_1000_EXAMPLES,
# ALL_DAYS
# )
# add_dataset(
# labels=ALL_NODES,
# domains = ALL_DAYS,
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "cores.stratified_ds.2022A.pkl"),
# source_or_target_dataset="target",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"cores_{u}"
# )
# from steves_utils.ORACLE.utils_v2 import (
# ALL_DISTANCES_FEET,
# ALL_RUNS,
# ALL_SERIAL_NUMBERS,
# )
# add_dataset(
# labels=ALL_SERIAL_NUMBERS,
# domains = list(set(ALL_DISTANCES_FEET) - {2,62}),
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "oracle.Run2_framed_2000Examples_stratified_ds.2022A.pkl"),
# source_or_target_dataset="source",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"oracle1_{u}"
# )
# from steves_utils.ORACLE.utils_v2 import (
# ALL_DISTANCES_FEET,
# ALL_RUNS,
# ALL_SERIAL_NUMBERS,
# )
# add_dataset(
# labels=ALL_SERIAL_NUMBERS,
# domains = list(set(ALL_DISTANCES_FEET) - {2,62,56}),
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "oracle.Run2_framed_2000Examples_stratified_ds.2022A.pkl"),
# source_or_target_dataset="source",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"oracle2_{u}"
# )
# add_dataset(
# labels=list(range(19)),
# domains = [0,1,2],
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "metehan.stratified_ds.2022A.pkl"),
# source_or_target_dataset="target",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"met_{u}"
# )
# # from steves_utils.wisig.utils import (
# # ALL_NODES_MINIMUM_100_EXAMPLES,
# # ALL_NODES_MINIMUM_500_EXAMPLES,
# # ALL_NODES_MINIMUM_1000_EXAMPLES,
# # ALL_DAYS
# # )
# import steves_utils.wisig.utils as wisig
# add_dataset(
# labels=wisig.ALL_NODES_MINIMUM_100_EXAMPLES,
# domains = wisig.ALL_DAYS,
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "wisig.node3-19.stratified_ds.2022A.pkl"),
# source_or_target_dataset="target",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"wisig_{u}"
# )
###################################
# Build the dataset
###################################
train_original_source = Iterable_Aggregator(train_original_source, p.seed)
val_original_source = Iterable_Aggregator(val_original_source, p.seed)
test_original_source = Iterable_Aggregator(test_original_source, p.seed)
train_original_target = Iterable_Aggregator(train_original_target, p.seed)
val_original_target = Iterable_Aggregator(val_original_target, p.seed)
test_original_target = Iterable_Aggregator(test_original_target, p.seed)
# For CNN We only use X and Y. And we only train on the source.
# Properly form the data using a transform lambda and Lazy_Iterable_Wrapper. Finally wrap them in a dataloader
transform_lambda = lambda ex: ex[1] # Original is (<domain>, <episode>) so we strip down to episode only
train_processed_source = Lazy_Iterable_Wrapper(train_original_source, transform_lambda)
val_processed_source = Lazy_Iterable_Wrapper(val_original_source, transform_lambda)
test_processed_source = Lazy_Iterable_Wrapper(test_original_source, transform_lambda)
train_processed_target = Lazy_Iterable_Wrapper(train_original_target, transform_lambda)
val_processed_target = Lazy_Iterable_Wrapper(val_original_target, transform_lambda)
test_processed_target = Lazy_Iterable_Wrapper(test_original_target, transform_lambda)
datasets = EasyDict({
"source": {
"original": {"train":train_original_source, "val":val_original_source, "test":test_original_source},
"processed": {"train":train_processed_source, "val":val_processed_source, "test":test_processed_source}
},
"target": {
"original": {"train":train_original_target, "val":val_original_target, "test":test_original_target},
"processed": {"train":train_processed_target, "val":val_processed_target, "test":test_processed_target}
},
})
from steves_utils.transforms import get_average_magnitude, get_average_power
print(set([u for u,_ in val_original_source]))
print(set([u for u,_ in val_original_target]))
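# peek at one training episode: s_x/s_y are the support examples (n_shot per class), q_x/q_y the query examples; the final element is ignored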
s_x, s_y, q_x, q_y, _ = next(iter(train_processed_source))
print(s_x)
# for ds in [
# train_processed_source,
# val_processed_source,
# test_processed_source,
# train_processed_target,
# val_processed_target,
# test_processed_target
# ]:
# for s_x, s_y, q_x, q_y, _ in ds:
# for X in (s_x, q_x):
# for x in X:
# assert np.isclose(get_average_magnitude(x.numpy()), 1.0)
# assert np.isclose(get_average_power(x.numpy()), 1.0)
###################################
# Build the model
###################################
model = Steves_Prototypical_Network(x_net, device=p.device, x_shape=(2,256))
optimizer = Adam(params=model.parameters(), lr=p.lr)
###################################
# train
###################################
jig = PTN_Train_Eval_Test_Jig(model, p.BEST_MODEL_PATH, p.device)
jig.train(
train_iterable=datasets.source.processed.train,
source_val_iterable=datasets.source.processed.val,
target_val_iterable=datasets.target.processed.val,
num_epochs=p.n_epoch,
num_logs_per_epoch=p.NUM_LOGS_PER_EPOCH,
patience=p.patience,
optimizer=optimizer,
criteria_for_best=p.criteria_for_best,
)
total_experiment_time_secs = time.time() - start_time_secs
###################################
# Evaluate the model
###################################
source_test_label_accuracy, source_test_label_loss = jig.test(datasets.source.processed.test)
target_test_label_accuracy, target_test_label_loss = jig.test(datasets.target.processed.test)
source_val_label_accuracy, source_val_label_loss = jig.test(datasets.source.processed.val)
target_val_label_accuracy, target_val_label_loss = jig.test(datasets.target.processed.val)
history = jig.get_history()
total_epochs_trained = len(history["epoch_indices"])
val_dl = Iterable_Aggregator((datasets.source.original.val,datasets.target.original.val))
confusion = ptn_confusion_by_domain_over_dataloader(model, p.device, val_dl)
per_domain_accuracy = per_domain_accuracy_from_confusion(confusion)
# Add a key to per_domain_accuracy for if it was a source domain
for domain, accuracy in per_domain_accuracy.items():
per_domain_accuracy[domain] = {
"accuracy": accuracy,
"source?": domain in p.domains_source
}
# Do an independent accuracy assessment JUST TO BE SURE!
# _source_test_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.test, p.device)
# _target_test_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.test, p.device)
# _source_val_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.val, p.device)
# _target_val_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.val, p.device)
# assert(_source_test_label_accuracy == source_test_label_accuracy)
# assert(_target_test_label_accuracy == target_test_label_accuracy)
# assert(_source_val_label_accuracy == source_val_label_accuracy)
# assert(_target_val_label_accuracy == target_val_label_accuracy)
experiment = {
"experiment_name": p.experiment_name,
"parameters": dict(p),
"results": {
"source_test_label_accuracy": source_test_label_accuracy,
"source_test_label_loss": source_test_label_loss,
"target_test_label_accuracy": target_test_label_accuracy,
"target_test_label_loss": target_test_label_loss,
"source_val_label_accuracy": source_val_label_accuracy,
"source_val_label_loss": source_val_label_loss,
"target_val_label_accuracy": target_val_label_accuracy,
"target_val_label_loss": target_val_label_loss,
"total_epochs_trained": total_epochs_trained,
"total_experiment_time_secs": total_experiment_time_secs,
"confusion": confusion,
"per_domain_accuracy": per_domain_accuracy,
},
"history": history,
"dataset_metrics": get_dataset_metrics(datasets, "ptn"),
}
ax = get_loss_curve(experiment)
plt.show()
get_results_table(experiment)
get_domain_accuracies(experiment)
print("Source Test Label Accuracy:", experiment["results"]["source_test_label_accuracy"], "Target Test Label Accuracy:", experiment["results"]["target_test_label_accuracy"])
print("Source Val Label Accuracy:", experiment["results"]["source_val_label_accuracy"], "Target Val Label Accuracy:", experiment["results"]["target_val_label_accuracy"])
json.dumps(experiment)
###Output
_____no_output_____ |
notebooks/Coding-and-Soaring-The-Sailplane-Polar-Basic-Calculations.ipynb | ###Markdown
Coding and Soaring: Exploring sailplane performance in Python with glidepy The glider polar describes the relationship between speed and sink rate. Usually it is included in the glider's flight manual in the form of a rather crude graph. For example, this is the polar provided for the ASW27. Both unballasted and ballasted curves are shown. 
###Code
from IPython.display import Image, display
display(Image(filename='./asw27polar.png', embed=True))
###Output
_____no_output_____
###Markdown
We will create a mathematical representation of this curve using the Python programming language and the glidepy library. We can then easily analyze various aspects of sailplane performance. What is glidepy? glidepy is a Python library that performs common polar and speed-to-fly calculations useful for sailplane performance analysis and simulations.
###Code
import matplotlib.pyplot as plt
import numpy as np
import warnings
###Output
_____no_output_____
###Markdown
glidepy is imported like any other library or module.
###Code
# import the pyglider library
import glidepy as pg
warnings.simplefilter('ignore', np.RankWarning)
%matplotlib notebook
###Output
_____no_output_____
###Markdown
Let's define some useful unit conversion factors.
###Code
kmh_to_knots = 0.539957
ms_to_knots = 1.94384
knots_to_kmh = 1.852
nm_to_feet = 6076.12
nm_to_sm = 1.15078
###Output
_____no_output_____
###Markdown
Creating the asw27 Create an unballasted asw27 Glider object and initialize it with the polar data points and the reference weight. The first vector contains speeds in km/h. The second contains the corresponding sink rates in m/s. The weight is in lbs. This is the part that uniquely specifies the glider we are analyzing and its configuration (ballasted or not). For this, we need at least three points from the original polar and the associated reference weight. The rest is calculated by glidepy, including any data for other weights.
###Code
speeds = [90, 150.0, 200.0]
sink_rates = [-0.55, -1.11, -2.38]
ref_weight = 787
weight = 1102
asw27 = pg.Glider(speeds, sink_rates, ref_weight)
# A ballasted glider is created by specifying the ballasted weight also
asw27_wet = pg.Glider(speeds, sink_rates, ref_weight, weight)
###Output
_____no_output_____
###Markdown
glidepy creates a mathematical model of the sailplane polar. Now we can ask: What is the sink rate (in knots) at 100 knots? We can query the model of the polar.
###Code
asw27.polar(100) # speed and resulting sink rate in knots
###Output
_____no_output_____
###Markdown
We can set a non-zero airmass sink/lift, i.e., netto. In this case, use the sink_rate attribute to get the total sink rate. The polar attribute used above always stays the same as a reference. Plotting the polar Let's plot the polar in the speed range 40 to 140 knots in 5 knot increments.
###Code
speed_range = np.arange(40, 145, 5)
polar_graph = [asw27.polar(x) for x in speed_range]
# And for the ballasted glider too
polar_graph_wet = [asw27_wet.polar(x) for x in speed_range]
###Output
_____no_output_____
###Markdown
We will plot the dry and wet polars, together with the source data points from which the models were created.
###Code
fig, ax = plt.subplots()
ax.plot(speed_range, polar_graph, color="blue")
ax.scatter(asw27.speeds, asw27.sink_rates, color="blue")
ax.plot(speed_range, polar_graph_wet, color="red")
ax.scatter(asw27_wet.speeds, asw27_wet.sink_rates, color="red")
ax.set(title='ASW27 Polar',
ylabel='Sink Rate (knots)',
xlabel='Cruise Speed (knots)',
xticks=range(40,150,10),
ylim=(-11, 0),
xlim=(40, 140))
ax.legend(['Dry', 'Ballasted'])
plt.grid()
plt.show()
###Output
_____no_output_____ |
Pytorch/pytorch_basic/autoencoder_and_reuse.ipynb | ###Markdown
import deep learning libs
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import random
torch.__version__
device = 'cuda' if torch.cuda.is_available() else 'cpu'
device
###Output
_____no_output_____
###Markdown
seed settings
###Code
torch.manual_seed(1)
random.seed(1)
if device == 'cuda':
torch.cuda.manual_seed_all(1)
import os
import sys
import numpy as np
cwd = os.getcwd()
sys.version_info
###Output
_____no_output_____
###Markdown
Hyperparameter
###Code
EPOCH = 1
BATCH_SIZE = 128
LEARNING_RATE = 1e-3
###Output
_____no_output_____
###Markdown
MNIST Data load
###Code
from mnist import *
mymnist = MyMNIST(BATCH_SIZE)
###Output
_____no_output_____
###Markdown
Autoencoder settings
###Code
class CAE(nn.Module):
def __init__(self):
super().__init__()
self.encoder = nn.Sequential(
nn.Conv2d(1, 32, kernel_size= 3, stride= 1, padding = 1),
nn.ReLU(),
nn.Conv2d(32, 16, kernel_size= 3, stride= 2, padding = 1),
nn.ReLU(),
)
self.decoder = nn.Sequential(
nn.ConvTranspose2d(16, 32, kernel_size= 3, stride= 2, padding = 1, output_padding= 1),
nn.ReLU(),
nn.ConvTranspose2d(32, 1, kernel_size= 3, stride= 1, padding = 1),
nn.Sigmoid(),
)
def forward(self, x):
'''
torch.Size([1, 1, 28, 28])
torch.Size([1, 16, 14, 14])
torch.Size([1, 1, 28, 28])
'''
# print(x.size())
encoder = self.encoder(x)
# print(encoder.size())
decoder = self.decoder(encoder)
# print(decoder.size())
return decoder
model = CAE().to(device)
model
model.encoder[0].weight.data[0] # snapshot of the initial encoder weights - compare later to confirm they were updated
# mymnist.mnist_train.train_data.shape
# x_test_input= torch.randn(1, 1, 28, 28).to(device)
# cae(x_test_input)
%%time
criterion = nn.MSELoss().to(device) # Softmax is internally computed.
optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)
###Output
Wall time: 0 ns
###Markdown
Train model
###Code
%%time
model.train()
total_batch = len(mymnist.train_data_loader)
print('Learning started. It takes sometime.')
for epoch in range(EPOCH):
avg_cost = 0
for X, Y in mymnist.train_data_loader:
X = X.to(device)
optimizer.zero_grad()
hypothesis = model(X)
cost = criterion(hypothesis, X)
cost.backward()
optimizer.step()
avg_cost += cost / total_batch
print('[Epoch: {:>4}] cost = {:>.9}'.format(epoch + 1, avg_cost))
print('Learning Finished!')
###Output
Learning started. It takes sometime.
[Epoch: 1] cost = 0.0117324367
Learning Finished!
Wall time: 1min 24s
###Markdown
Test Model
###Code
out_img = torch.squeeze(hypothesis.cpu().data)
print(out_img.size())
import matplotlib.pyplot as plt
with torch.no_grad():
model.eval()
X_test = mymnist.mnist_test.data.view(len(mymnist.mnist_test), 1, 28, 28).float().to(device)
X_test /= 255.
ae_result = model(X_test[:10])
for i in range(len(ae_result)):
fig = plt.figure()
ax1 = fig.add_subplot(221)
ax2 = fig.add_subplot(222)
ax1.imshow(X_test[i].cpu().numpy().squeeze(), cmap = 'gray')
ax2.imshow(ae_result[i].cpu().numpy().squeeze(), cmap = 'gray')
plt.show()
model.encoder[0].weight.data[0] # check update weights
###Output
_____no_output_____
###Markdown
Reuse encoder part for training CNN classifier
###Code
encoder = model.encoder
encoder
###Output
_____no_output_____
###Markdown
Using the pretrained encoder weights (method 1)
###Code
class CNN(nn.Module):
def __init__(self):
super().__init__()
self.encoder = nn.Sequential(
nn.Conv2d(1, 32, kernel_size= 3, stride= 1, padding = 1),
nn.ReLU(),
nn.Conv2d(32, 16, kernel_size= 3, stride= 2, padding = 1),
nn.ReLU(),
)
self.fc = nn.Linear(14*14*16, 10)
def forward(self, x):
out = self.encoder(x)
out = out.view(out.size(0), -1)
out = self.fc(out)
return out
cnn_model = CNN().to(device)
cnn_dict = cnn_model.state_dict()
ae_net_dict = model.state_dict()
# keep only the keys that also exist in the CNN, i.e. drop the decoder layers
ae_net_dict = {k: v for k, v in ae_net_dict.items() if k in cnn_dict}
# overwrite the CNN's encoder entries with the pretrained autoencoder weights
cnn_dict.update(ae_net_dict)
cnn_model.load_state_dict(cnn_dict)
cnn_model
###Output
_____no_output_____
###Markdown
Reusing the updated encoder weights (method 2)
###Code
class CNN(nn.Module):
def __init__(self, encoder):
super().__init__()
self.encoder = encoder
self.fc = nn.Linear(14*14*16, 10)
def forward(self, x):
out = self.encoder(x)
out = out.view(out.size(0), -1)
out = self.fc(out)
return out
cnn_model = CNN(encoder).to(device)
cnn_model
cnn_model.encoder[0].weight.data[0] # check update weights
criterion = torch.nn.CrossEntropyLoss().to(device) # Softmax is internally computed.
optimizer = torch.optim.Adam(cnn_model.parameters(), lr=LEARNING_RATE)
###Output
_____no_output_____
###Markdown
Train CNN Classifier
###Code
%%time
# train my model
total_batch = len(mymnist.train_data_loader)
model.train() # set the model to train mode (dropout=True)
print('Learning started. It takes sometime.')
for epoch in range(EPOCH):
avg_cost = 0
for X, Y in mymnist.train_data_loader:
# image is already size of (28x28), no reshape
# label is not one-hot encoded
X = X.to(device)
Y = Y.to(device)
optimizer.zero_grad()
hypothesis = cnn_model(X)
cost = criterion(hypothesis, Y)
cost.backward()
optimizer.step()
avg_cost += cost / total_batch
print('[Epoch: {:>4}] cost = {:>.9}'.format(epoch + 1, avg_cost))
print('Learning Finished!')
model.encoder[0].weight.data[0] # check update weights
###Output
_____no_output_____
###Markdown
Test CNN Classifier
###Code
# Test model and check accuracy
with torch.no_grad():
model.eval() # set the model to evaluation mode (dropout=False)
accuracy_true = 0
for X, Y in mymnist.test_data_loader:
X = X.to(device)
Y = Y.to(device)
prediction = cnn_model(X)
correct_prediction = torch.sum(torch.argmax(prediction, 1) == Y)
accuracy_true += correct_prediction
print('Accuracy:', accuracy_true.item() / len(mymnist.mnist_test))
###Output
Accuracy: 0.9432
|
examples/prepare/zsub_prepare_BasicProcessing.ipynb | ###Markdown
How to Perform Basic Processing The purpose of this notebook is to illustrate how to use `ProcessStrings`, a module that processes user input data.
###Code
%load_ext autoreload
%autoreload 2
%config Completer.use_jedi=False
from os.path import join, expanduser, dirname
import pandas as pd
import sys
import os
import re
import warnings
warnings.filterwarnings(action='ignore')
home = expanduser('~')
src_path = '{}/zrp'.format(home)
sys.path.append(src_path)
from zrp.prepare.prepare import ProcessStrings
from zrp.prepare.utils import load_file
###Output
_____no_output_____
###Markdown
Load sample data for prediction. Load the processed list of New Jersey Mayors downloaded from https://www.nj.gov/dca/home/2022mayors.csv
###Code
nj_mayors = load_file("../2022-nj-mayors-sample.csv")
nj_mayors.shape
nj_mayors
###Output
_____no_output_____
###Markdown
ZRP Preprocessing To quickly process the data we will use `ProcessStrings`. There are other preprocessing classes that should be used on specific data, such as `ProcessGeo` on data we intend to geocode, `ProcessACS` on ACS data, and `ProcessGLookup` for geographic lookup tables; implementation is similar for each processing class. Input data into the prediction/modeling pipeline is tabular data with the following columns: first name, middle name, last name, house number, street address (street name), city, state, zip code, and zest key. The `ZEST_KEY` must be specified to establish correspondence between inputs and outputs; it's effectively used as an index for the data table.
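For illustration, here is a minimal single-row sketch of the expected input schema; apart from `ZEST_KEY`, the exact column names are assumptions based on the description above, so check the ZRP documentation for the precise spellings:

```python
import pandas as pd

# hypothetical one-row input table (all values are made up for illustration)
sample = pd.DataFrame([{
    "first_name": "Jane", "middle_name": "Q", "last_name": "Doe",
    "house_number": "12", "street_address": "Main St", "city": "Trenton",
    "state": "NJ", "zip_code": "08608",
    "ZEST_KEY": "1",  # unique key that ties outputs back to inputs
}])
```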
###Code
%%time
preprocess = ProcessStrings()
preprocess.fit(nj_mayors)
zrp_output = preprocess.transform(nj_mayors)
###Output
[Start] Validating input data
Number of observations: 462
Is key unique: True
(Warning!!) middle_name is 68.3982683982684% missing, this may impact the ability to return race approximations
[Completed] Validating input data
Formatting P1
Formatting P2
reduce whitespace
CPU times: user 78.8 ms, sys: 3.3 ms, total: 82.1 ms
Wall time: 84.5 ms
###Markdown
Inspect the output- Preview the data
###Code
zrp_output.shape
zrp_output.head()
###Output
_____no_output_____ |
handson-ml2/classification.ipynb | ###Markdown
**Cross validation**
###Code
from sklearn.model_selection import StratifiedKFold
from sklearn.base import clone
skfolds = StratifiedKFold(n_splits=3)
for train_index, test_index in skfolds.split(X_train, y_train_5):
clone_clf = clone(sgd_clf)
X_train_folds = X_train[train_index]
y_train_folds = y_train_5[train_index]
X_test_fold = X_train[test_index]
y_test_fold = y_train_5[test_index]
clone_clf.fit(X_train_folds, y_train_folds)
y_pred = clone_clf.predict(X_test_fold)
n_correct = sum(y_pred == y_test_fold)
print(n_correct / len(y_pred))
###Output
0.95035
0.96035
0.9604
###Markdown
or simply
###Code
from sklearn.model_selection import cross_val_score
cross_val_score(sgd_clf, X_train, y_train_5, cv=3, scoring="accuracy")
###Output
_____no_output_____
###Markdown
a very dumb classifier (always returns 0)
###Code
from sklearn.base import BaseEstimator
class Never5Classifier(BaseEstimator):
def fit(self, X, y=None):
return self
def predict(self, X):
return np.zeros((len(X), 1), dtype=bool)
never_5_clf = Never5Classifier()
cross_val_score(never_5_clf, X_train, y_train_5, cv=3, scoring="accuracy")
###Output
_____no_output_____
###Markdown
this shows that accuracy is generally not a good metric for skewed datasets: roughly 90% of the images are not 5s, so a model that always predicts "not 5" already reaches about 90% accuracy. cross_val_predict
###Code
from sklearn.model_selection import cross_val_predict
y_train_pred = cross_val_predict(sgd_clf, X_train, y_train_5, cv=2)
y_train_pred
from sklearn.metrics import confusion_matrix
confusion_matrix(y_train_5, y_train_pred)
###Output
_____no_output_____
###Markdown
Each row in a confusion matrix represents an actual class, while each column represents a predicted class. The first row of this matrix considers non-5 images (the negative class): 53,057 of them were correctly classified as non-5s (they are called true negatives), while the remaining 1,522 were wrongly classified as 5s (false positives). The second row considers the images of 5s (the positive class): 1,325 were wrongly classified as non-5s (false negatives), while the remaining 4,096 were correctly classified as 5s (true positives). A perfect classifier would have only true positives and true negatives, so its confusion matrix would have nonzero values only on its main diagonal (top left to bottom right):
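As a quick sketch of how these four counts can be pulled out of the matrix programmatically (scikit-learn orders the 2×2 matrix with actual classes as rows and predicted classes as columns, negatives first):

```python
# flatten the 2x2 confusion matrix into (TN, FP, FN, TP)
tn, fp, fn, tp = confusion_matrix(y_train_5, y_train_pred).ravel()
print(tn, fp, fn, tp)
```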
###Code
confusion_matrix(y_train_5, y_train_5)
from sklearn.metrics import precision_score, recall_score
print(precision_score(y_train_5, y_train_pred)) # == 4096 / (4096 + 1522)
recall_score(y_train_5, y_train_pred) # == 4096 / (4096 + 1325)
###Output
0.6934893928310168
###Markdown
It is often convenient to combine precision and recall into a single metric called the F1 score. The F1 score is the harmonic mean of precision and recall. Whereas the regular mean treats all values equally, the harmonic mean gives much more weight to low values. As a result, the classifier will only get a high F1 score if both recall and precision are high:$$ F_1 = \frac{2}{\frac{1}{precision}+\frac{1}{recall}} = \frac{TP}{TP+\frac{FP + FN}{2}} $$
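As a quick sanity check, plugging in the example counts quoted above (TP = 4,096, FP = 1,522, FN = 1,325) gives$$ F_1 = \frac{4096}{4096+\frac{1522 + 1325}{2}} \approx 0.742 $$which should be close to (though not necessarily identical to) the value returned by `f1_score` below, since the exact counts depend on the cross-validation run.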
###Code
from sklearn.metrics import f1_score
f1_score(y_train_5, y_train_pred)
###Output
_____no_output_____ |
Model backlog/Training/Classification/Google Colab/23-EfficientNetB3_300x300_Cyclical_triangular.ipynb | ###Markdown
Dependencies
###Code
from utillity_script_cloud_segmentation import *
from utillity_script_lr_schedulers import *
seed = 0
seed_everything(seed)
warnings.filterwarnings("ignore")
#@title
class LRFinder(Callback):
def __init__(self,
num_samples,
batch_size,
minimum_lr=1e-5,
maximum_lr=10.,
lr_scale='exp',
validation_data=None,
validation_sample_rate=5,
stopping_criterion_factor=4.,
loss_smoothing_beta=0.98,
save_dir=None,
verbose=True):
"""
This class uses the Cyclic Learning Rate history to find a
set of learning rates that can be good initializations for the
One-Cycle training proposed by Leslie Smith in the paper referenced
below.
A port of the Fast.ai implementation for Keras.
# Note
This requires that the model be trained for exactly 1 epoch. If the model
is trained for more epochs, then the metric calculations are only done for
the first epoch.
# Interpretation
Upon visualizing the loss plot, check where the loss starts to increase
        rapidly. Choose a learning rate somewhat prior to the corresponding
        position in the plot for faster convergence. This will be the maximum lr.
Choose the max value as this value when passing the `max_val` argument
to OneCycleLR callback.
Since the plot is in log-scale, you need to compute 10 ^ (-k) of the x-axis
# Arguments:
num_samples: Integer. Number of samples in the dataset.
batch_size: Integer. Batch size during training.
minimum_lr: Float. Initial learning rate (and the minimum).
maximum_lr: Float. Final learning rate (and the maximum).
lr_scale: Can be one of ['exp', 'linear']. Chooses the type of
scaling for each update to the learning rate during subsequent
batches. Choose 'exp' for large range and 'linear' for small range.
validation_data: Requires the validation dataset as a tuple of
(X, y) belonging to the validation set. If provided, will use the
validation set to compute the loss metrics. Else uses the training
batch loss. Will warn if not provided to alert the user.
validation_sample_rate: Positive or Negative Integer. Number of batches to sample from the
validation set per iteration of the LRFinder. Larger number of
samples will reduce the variance but will take longer time to execute
per batch.
If Positive > 0, will sample from the validation dataset
                If negative, will use the entire dataset
stopping_criterion_factor: Integer or None. A factor which is used
to measure large increase in the loss value during training.
Since callbacks cannot stop training of a model, it will simply
stop logging the additional values from the epochs after this
stopping criterion has been met.
If None, this check will not be performed.
loss_smoothing_beta: Float. The smoothing factor for the moving
average of the loss function.
save_dir: Optional, String. If passed a directory path, the callback
will save the running loss and learning rates to two separate numpy
arrays inside this directory. If the directory in this path does not
exist, they will be created.
verbose: Whether to print the learning rate after every batch of training.
# References:
            - [A disciplined approach to neural network hyper-parameters: Part 1 -- learning rate, batch size, momentum, and weight decay](https://arxiv.org/abs/1803.09820)
"""
super(LRFinder, self).__init__()
if lr_scale not in ['exp', 'linear']:
raise ValueError("`lr_scale` must be one of ['exp', 'linear']")
if validation_data is not None:
self.validation_data = validation_data
self.use_validation_set = True
if validation_sample_rate > 0 or validation_sample_rate < 0:
self.validation_sample_rate = validation_sample_rate
else:
                raise ValueError("`validation_sample_rate` must be a positive or negative integer other than 0")
else:
self.use_validation_set = False
self.validation_sample_rate = 0
self.num_samples = num_samples
self.batch_size = batch_size
self.initial_lr = minimum_lr
self.final_lr = maximum_lr
self.lr_scale = lr_scale
self.stopping_criterion_factor = stopping_criterion_factor
self.loss_smoothing_beta = loss_smoothing_beta
self.save_dir = save_dir
self.verbose = verbose
self.num_batches_ = num_samples // batch_size
self.current_lr_ = minimum_lr
if lr_scale == 'exp':
self.lr_multiplier_ = (maximum_lr / float(minimum_lr)) ** (
1. / float(self.num_batches_))
else:
extra_batch = int((num_samples % batch_size) != 0)
self.lr_multiplier_ = np.linspace(
minimum_lr, maximum_lr, num=self.num_batches_ + extra_batch)
# If negative, use entire validation set
if self.validation_sample_rate < 0:
self.validation_sample_rate = self.validation_data[0].shape[0] // batch_size
self.current_batch_ = 0
self.current_epoch_ = 0
self.best_loss_ = 1e6
self.running_loss_ = 0.
self.history = {}
def on_train_begin(self, logs=None):
self.current_epoch_ = 1
K.set_value(self.model.optimizer.lr, self.initial_lr)
warnings.simplefilter("ignore")
def on_epoch_begin(self, epoch, logs=None):
self.current_batch_ = 0
if self.current_epoch_ > 1:
warnings.warn(
"\n\nLearning rate finder should be used only with a single epoch. "
"Hereafter, the callback will not measure the losses.\n\n")
def on_batch_begin(self, batch, logs=None):
self.current_batch_ += 1
def on_batch_end(self, batch, logs=None):
if self.current_epoch_ > 1:
return
if self.use_validation_set:
X, Y = self.validation_data[0], self.validation_data[1]
# use 5 random batches from test set for fast approximate of loss
num_samples = self.batch_size * self.validation_sample_rate
if num_samples > X.shape[0]:
num_samples = X.shape[0]
idx = np.random.choice(X.shape[0], num_samples, replace=False)
x = X[idx]
y = Y[idx]
values = self.model.evaluate(x, y, batch_size=self.batch_size, verbose=False)
loss = values[0]
else:
loss = logs['loss']
# smooth the loss value and bias correct
running_loss = self.loss_smoothing_beta * loss + (
1. - self.loss_smoothing_beta) * loss
running_loss = running_loss / (
1. - self.loss_smoothing_beta**self.current_batch_)
# stop logging if loss is too large
if self.current_batch_ > 1 and self.stopping_criterion_factor is not None and (
running_loss >
self.stopping_criterion_factor * self.best_loss_):
if self.verbose:
print(" - LRFinder: Skipping iteration since loss is %d times as large as best loss (%0.4f)"
% (self.stopping_criterion_factor, self.best_loss_))
return
if running_loss < self.best_loss_ or self.current_batch_ == 1:
self.best_loss_ = running_loss
current_lr = K.get_value(self.model.optimizer.lr)
self.history.setdefault('running_loss_', []).append(running_loss)
if self.lr_scale == 'exp':
self.history.setdefault('log_lrs', []).append(np.log10(current_lr))
else:
self.history.setdefault('log_lrs', []).append(current_lr)
# compute the lr for the next batch and update the optimizer lr
if self.lr_scale == 'exp':
current_lr *= self.lr_multiplier_
else:
current_lr = self.lr_multiplier_[self.current_batch_ - 1]
K.set_value(self.model.optimizer.lr, current_lr)
# save the other metrics as well
for k, v in logs.items():
self.history.setdefault(k, []).append(v)
if self.verbose:
if self.use_validation_set:
print(" - LRFinder: val_loss: %1.4f - lr = %1.8f " %
(values[0], current_lr))
else:
print(" - LRFinder: lr = %1.8f " % current_lr)
def on_epoch_end(self, epoch, logs=None):
if self.save_dir is not None and self.current_epoch_ <= 1:
if not os.path.exists(self.save_dir):
os.makedirs(self.save_dir)
losses_path = os.path.join(self.save_dir, 'losses.npy')
lrs_path = os.path.join(self.save_dir, 'lrs.npy')
np.save(losses_path, self.losses)
np.save(lrs_path, self.lrs)
if self.verbose:
print("\tLR Finder : Saved the losses and learning rate values in path : {%s}"
% (self.save_dir))
self.current_epoch_ += 1
warnings.simplefilter("default")
def plot_schedule(self, clip_beginning=None, clip_endding=None):
"""
Plots the schedule from the callback itself.
# Arguments:
clip_beginning: Integer or None. If positive integer, it will
remove the specified portion of the loss graph to remove the large
loss values in the beginning of the graph.
clip_endding: Integer or None. If negative integer, it will
remove the specified portion of the ending of the loss graph to
remove the sharp increase in the loss values at high learning rates.
"""
try:
import matplotlib.pyplot as plt
plt.style.use('seaborn-white')
except ImportError:
print(
"Matplotlib not found. Please use `pip install matplotlib` first."
)
return
if clip_beginning is not None and clip_beginning < 0:
clip_beginning = -clip_beginning
if clip_endding is not None and clip_endding > 0:
clip_endding = -clip_endding
losses = self.losses
lrs = self.lrs
if clip_beginning:
losses = losses[clip_beginning:]
lrs = lrs[clip_beginning:]
if clip_endding:
losses = losses[:clip_endding]
lrs = lrs[:clip_endding]
plt.plot(lrs, losses)
plt.title('Learning rate vs Loss')
plt.xlabel('learning rate')
plt.ylabel('loss')
plt.show()
@classmethod
def restore_schedule_from_dir(cls,
directory,
clip_beginning=None,
clip_endding=None):
"""
Loads the training history from the saved numpy files in the given directory.
# Arguments:
directory: String. Path to the directory where the serialized numpy
arrays of the loss and learning rates are saved.
clip_beginning: Integer or None. If positive integer, it will
remove the specified portion of the loss graph to remove the large
loss values in the beginning of the graph.
clip_endding: Integer or None. If negative integer, it will
remove the specified portion of the ending of the loss graph to
remove the sharp increase in the loss values at high learning rates.
Returns:
tuple of (losses, learning rates)
"""
if clip_beginning is not None and clip_beginning < 0:
clip_beginning = -clip_beginning
if clip_endding is not None and clip_endding > 0:
clip_endding = -clip_endding
losses_path = os.path.join(directory, 'losses.npy')
lrs_path = os.path.join(directory, 'lrs.npy')
if not os.path.exists(losses_path) or not os.path.exists(lrs_path):
print("%s and %s could not be found at directory : {%s}" %
(losses_path, lrs_path, directory))
losses = None
lrs = None
else:
losses = np.load(losses_path)
lrs = np.load(lrs_path)
if clip_beginning:
losses = losses[clip_beginning:]
lrs = lrs[clip_beginning:]
if clip_endding:
losses = losses[:clip_endding]
lrs = lrs[:clip_endding]
return losses, lrs
@classmethod
def plot_schedule_from_file(cls,
directory,
clip_beginning=None,
clip_endding=None):
"""
Plots the schedule from the saved numpy arrays of the loss and learning
rate values in the specified directory.
# Arguments:
directory: String. Path to the directory where the serialized numpy
arrays of the loss and learning rates are saved.
clip_beginning: Integer or None. If positive integer, it will
remove the specified portion of the loss graph to remove the large
loss values in the beginning of the graph.
clip_endding: Integer or None. If negative integer, it will
remove the specified portion of the ending of the loss graph to
remove the sharp increase in the loss values at high learning rates.
"""
try:
import matplotlib.pyplot as plt
plt.style.use('seaborn-white')
except ImportError:
print("Matplotlib not found. Please use `pip install matplotlib` first.")
return
losses, lrs = cls.restore_schedule_from_dir(
directory,
clip_beginning=clip_beginning,
clip_endding=clip_endding)
if losses is None or lrs is None:
return
else:
plt.plot(lrs, losses)
plt.title('Learning rate vs Loss')
plt.xlabel('learning rate')
plt.ylabel('loss')
plt.show()
@property
def lrs(self):
return np.array(self.history['log_lrs'])
@property
def losses(self):
return np.array(self.history['running_loss_'])
base_path = '/content/drive/My Drive/Colab Notebooks/[Kaggle] Understanding Clouds from Satellite Images/'
data_path = base_path + 'Data/'
model_base_path = base_path + 'Models/files/classification/'
train_path = data_path + 'train.csv'
hold_out_set_path = data_path + 'hold-out.csv'
train_images_dest_path = 'train_images/'
###Output
_____no_output_____
###Markdown
Load data
###Code
train = pd.read_csv(train_path)
hold_out_set = pd.read_csv(hold_out_set_path)
X_train = hold_out_set[hold_out_set['set'] == 'train']
X_val = hold_out_set[hold_out_set['set'] == 'validation']
print('Complete set samples:', len(train))
print('Train samples: ', len(X_train))
print('Validation samples: ', len(X_val))
# Preprocecss data
train['image'] = train['Image_Label'].apply(lambda x: x.split('_')[0])
label_columns=['Fish', 'Flower', 'Gravel', 'Sugar']
for label in label_columns:
X_train[label].replace({0: 1, 1: 0}, inplace=True)
X_val[label].replace({0: 1, 1: 0}, inplace=True)
display(X_train.head())
###Output
Complete set samples: 22184
Train samples: 4420
Validation samples: 1105
###Markdown
Model parameters
###Code
BATCH_SIZE = 16
WARMUP_EPOCHS = 3
WARMUP_LEARNING_RATE = 1e-3
EPOCHS = 30
BASE_LEARNING_RATE = 10**(-5.6)
LEARNING_RATE = 10**(-2)
HEIGHT = 300
WIDTH = 300
CHANNELS = 3
N_CLASSES = 4
ES_PATIENCE = 8
STEP_SIZE_TRAIN = len(X_train)//BATCH_SIZE
STEP_SIZE_VALID = len(X_val)//BATCH_SIZE
CYCLE_SIZE = 6
STEP_SIZE = (CYCLE_SIZE // 2) * STEP_SIZE_TRAIN
model_name = '23-EfficientNetB3_%sx%s' % (HEIGHT, WIDTH)
model_path = model_base_path + '%s.h5' % (model_name)
###Output
_____no_output_____
###Markdown
Data generator
###Code
datagen=ImageDataGenerator(rescale=1./255.,
vertical_flip=True,
horizontal_flip=True,
zoom_range=[1, 1.1],
fill_mode='constant',
cval=0.)
test_datagen=ImageDataGenerator(rescale=1./255.)
train_generator=datagen.flow_from_dataframe(
dataframe=X_train,
directory=train_images_dest_path,
x_col="image",
y_col=label_columns,
target_size=(HEIGHT, WIDTH),
batch_size=BATCH_SIZE,
class_mode="other",
shuffle=True,
seed=seed)
valid_generator=test_datagen.flow_from_dataframe(
dataframe=X_val,
directory=train_images_dest_path,
x_col="image",
y_col=label_columns,
target_size=(HEIGHT, WIDTH),
batch_size=BATCH_SIZE,
class_mode="other",
shuffle=True,
seed=seed)
###Output
Found 4420 validated image filenames.
Found 1105 validated image filenames.
###Markdown
Model
###Code
def create_model(input_shape, N_CLASSES):
input_tensor = Input(shape=input_shape)
base_model = efn.EfficientNetB3(weights='imagenet',
include_top=False,
input_tensor=input_tensor,
pooling='avg')
x = base_model.output
final_output = Dense(N_CLASSES, activation='sigmoid')(x)
model = Model(input_tensor, final_output)
return model
###Output
_____no_output_____
###Markdown
Warmup top layers
###Code
model = create_model((None, None, CHANNELS), N_CLASSES)
metric_list = ['accuracy']
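# freeze every layer except the newly added Dense head, so only the head is trained during warm-up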
for layer in model.layers[:-1]:
layer.trainable = False
optimizer = optimizers.SGD(lr=WARMUP_LEARNING_RATE, momentum=0.9, nesterov=True)
model.compile(optimizer=optimizer, loss='binary_crossentropy', metrics=metric_list)
warmup_history = model.fit_generator(generator=train_generator,
steps_per_epoch=STEP_SIZE_TRAIN,
validation_data=valid_generator,
validation_steps=STEP_SIZE_VALID,
epochs=WARMUP_EPOCHS,
verbose=1).history
for layer in model.layers:
layer.trainable = True
warm_weights = model.get_weights()
###Output
_____no_output_____
###Markdown
Learning rate finder
###Code
#@title
lr_finder = LRFinder(num_samples=len(X_train), batch_size=BATCH_SIZE, minimum_lr=1e-6, maximum_lr=10, verbose=0)
optimizer = optimizers.SGD(lr=WARMUP_LEARNING_RATE, momentum=0.9, nesterov=True)
model.compile(optimizer=optimizer, loss='binary_crossentropy')
history = model.fit_generator(generator=train_generator,
steps_per_epoch=STEP_SIZE_TRAIN,
epochs=1,
callbacks=[lr_finder])
plt.rcParams.update({'font.size': 16})
plt.figure(figsize=(24, 8))
plt.axvline(x=np.log10(BASE_LEARNING_RATE), color='green')
plt.axvline(x=np.log10(LEARNING_RATE), color='red')
lr_finder.plot_schedule(clip_beginning=5)
###Output
_____no_output_____
###Markdown
Fine-tune all layers
###Code
model.set_weights(warm_weights)
checkpoint = ModelCheckpoint(model_path, monitor='val_loss', mode='min', save_best_only=True)
es = EarlyStopping(monitor='val_loss', mode='min', patience=ES_PATIENCE, restore_best_weights=True, verbose=1)
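# triangular cyclical LR: oscillates between BASE_LEARNING_RATE and LEARNING_RATE with a half-cycle of STEP_SIZE iterations (CYCLE_SIZE/2 epochs)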
cyclicalLR = CyclicLR(base_lr=BASE_LEARNING_RATE, max_lr=LEARNING_RATE, step_size=STEP_SIZE, mode='triangular')
callback_list = [checkpoint, es, cyclicalLR]
optimizer = optimizers.SGD(lr=LEARNING_RATE, momentum=0.9, nesterov=True)
model.compile(optimizer=optimizer, loss='binary_crossentropy', metrics=metric_list)
history = model.fit_generator(generator=train_generator,
steps_per_epoch=STEP_SIZE_TRAIN,
validation_data=valid_generator,
validation_steps=STEP_SIZE_VALID,
callbacks=callback_list,
epochs=EPOCHS,
verbose=1).history
###Output
Epoch 1/30
2/276 [..............................] - ETA: 45:28 - loss: 0.6397 - acc: 0.6719
###Markdown
Model loss graph
###Code
#@title
metrics_history = ['loss', 'acc']
for metric_hist in metrics_history:
history[metric_hist] = warmup_history[metric_hist] + history[metric_hist]
history['val_' + metric_hist] = warmup_history['val_' + metric_hist] + history['val_' + metric_hist]
plot_metrics(history, metric_list=metrics_history)
###Output
_____no_output_____
###Markdown
Scheduler learning rates
###Code
#@title
fig, ax1 = plt.subplots(1, 1, figsize=(20, 6))
plt.xlabel('Training Iterations')
plt.ylabel('Learning Rate')
plt.plot(cyclicalLR.history['lr'])
plt.show()
###Output
_____no_output_____ |
V2_Justes_Adams_Sprint_Challenge_1.ipynb | ###Markdown
Data Science Unit 1 Sprint Challenge 1 Loading, cleaning, visualizing, and analyzing data. In this sprint challenge you will look at a dataset of the survival of patients who underwent surgery for breast cancer. http://archive.ics.uci.edu/ml/datasets/Haberman%27s+Survival Data Set Information: The dataset contains cases from a study that was conducted between 1958 and 1970 at the University of Chicago's Billings Hospital on the survival of patients who had undergone surgery for breast cancer. Attribute Information: 1. Age of patient at time of operation (numerical) 2. Patient's year of operation (year - 1900, numerical) 3. Number of positive axillary nodes detected (numerical) 4. Survival status (class attribute) -- 1 = the patient survived 5 years or longer -- 2 = the patient died within 5 years. Sprint challenges are evaluated based on satisfactory completion of each part. It is suggested you work through it in order, getting each aspect reasonably working, before trying to deeply explore, iterate, or refine any given step. Once you get to the end, if you want to go back and improve things, go for it! Part 0 - Revert your version of Pandas right from the start. I don't want any of you to get stuck because of Pandas bugs, so right from the get-go revert back to version `0.23.4`. - Run the cell below - Then restart your runtime. Go to `Runtime` -> `Restart runtime...` in the top menu (or click the "RESTART RUNTIME" button that shows up in the output of the cell below).
###Code
!pip install pandas==0.23.4
###Output
Collecting pandas==0.23.4
[?25l Downloading https://files.pythonhosted.org/packages/e1/d8/feeb346d41f181e83fba45224ab14a8d8af019b48af742e047f3845d8cff/pandas-0.23.4-cp36-cp36m-manylinux1_x86_64.whl (8.9MB)
[K |████████████████████████████████| 8.9MB 4.8MB/s
[?25hRequirement already satisfied: python-dateutil>=2.5.0 in /usr/local/lib/python3.6/dist-packages (from pandas==0.23.4) (2.5.3)
Requirement already satisfied: numpy>=1.9.0 in /usr/local/lib/python3.6/dist-packages (from pandas==0.23.4) (1.16.5)
Requirement already satisfied: pytz>=2011k in /usr/local/lib/python3.6/dist-packages (from pandas==0.23.4) (2018.9)
Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.6/dist-packages (from python-dateutil>=2.5.0->pandas==0.23.4) (1.12.0)
[31mERROR: google-colab 1.0.0 has requirement pandas~=0.24.0, but you'll have pandas 0.23.4 which is incompatible.[0m
Installing collected packages: pandas
Found existing installation: pandas 0.24.2
Uninstalling pandas-0.24.2:
Successfully uninstalled pandas-0.24.2
Successfully installed pandas-0.23.4
###Markdown
Part 1 - Load and validate the data- Load the data as a `pandas` data frame.- Validate that it has the appropriate number of observations (you can check the raw file, and also read the dataset description from UCI).- Validate that you have no missing values.- Add informative names to the features.- The survival variable is encoded as 1 for surviving >5 years and 2 for not - change this to be 0 for not surviving and 1 for surviving >5 years (0/1 is a more traditional encoding of binary variables)At the end, print the first five rows of the dataset to demonstrate the above.
###Code
import pandas as pd
import numpy as np
from google.colab import files
uploaded = files.upload()
df = pd.read_csv('haberman.data', header=None)
df.head(10)
df.columns = ('Age', 'Year_of_Op', '#_Positive_Axillary_Nodes', 'Survival_Status')
df.head()
df.describe()
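# recode survival status: 2 (died within 5 years) -> 0, so 1 = survived 5+ years and 0 = did not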
df.Survival_Status = df.Survival_Status.replace(to_replace =2,
value =0)
df.head(10)
# checking for NaN values - there are none :)
print("missing values:", df.isnull().sum().sum())
df.head()
df.tail()
###Output
_____no_output_____
###Markdown
Part 2 - Examine the distribution and relationships of the features. Explore the data - create at least *2* tables (can be summary statistics or crosstabulations) and *2* plots illustrating the nature of the data. This is open-ended, so as a reminder - first *complete* this task as a baseline, then go on to the remaining sections, and *then* as time allows revisit and explore further. Hint - you may need to bin some variables depending on your chosen tables/plots.
###Code
df.describe()
SurvStatus = pd.cut(df['Survival_Status'], 5)
AgeBin = pd.cut(df['Age'], 5)
OpYear = pd.cut(df['Year_of_Op'], 5)
ct1 = pd.crosstab(AgeBin, SurvStatus)
ct1
ct1.plot(kind='bar')
ct1.plot()
###Output
_____no_output_____
###Markdown
Part 3 - DataFrame Filtering. Use DataFrame filtering to subset the data into two smaller dataframes. You should make one dataframe for individuals who survived >5 years and a second dataframe for individuals who did not. Create a graph with each of the dataframes (can be the same graph type) to show the differences in Age and Number of Positive Axillary Nodes Detected between the two groups.
###Code
df2 = df[df.Survival_Status == 0]
df2.head(10)
df3 = df[df.Survival_Status == 1]
df3.head(10)
AxNode2Bins = pd.cut(df2['#_Positive_Axillary_Nodes'], 3)
Age2Bins = pd.cut(df2['Age'], 3)
ct2 = pd.crosstab(AxNode2Bins, Age2Bins)
ct2
AxNode3Bins = pd.cut(df3['#_Positive_Axillary_Nodes'], 3)
Age3Bins = pd.cut(df3['Age'], 3)
ct3 = pd.crosstab(AxNode3Bins, Age3Bins)
ct3
ct2.plot(figsize=(15,10)) #group for not survived
ct3.plot(figsize=(15,10)) #group for survived
###Output
_____no_output_____
###Markdown
Part 4 - Analysis and Interpretation. Now that you've looked at the data, answer the following questions:- What is at least one feature that looks to have a positive relationship with survival? (As that feature goes up in value rate of survival increases)- What is at least one feature that looks to have a negative relationship with survival? (As that feature goes down in value rate of survival increases)- How are those two features related with each other, and what might that mean? Answer with text, but feel free to intersperse example code/results or refer to it from earlier.
###Code
###Output
_____no_output_____
###Markdown
**1: Age could have a positive relationship with survival: in both graphs, there are no people in the highest age group who were also in the largest axillary-node group. 2: Positive axillary nodes could have a negative relationship with survival: in both graphs, the number of those who survived decreases as the number of positive axillary nodes increases. 3: In the graph of those who did not survive, towards the highest axillary-node counts the number of people in the higher age group increases slightly while the other two decrease. This means that as you get older, you are a bit more likely to have more positive axillary nodes, and therefore to be more at risk of death, even if only slightly.**
###Code
df.head(10) # JUST SOME TESTS
SurvBins1 = pd.cut(df['Survival_Status'], 10)# just some tests
DateBins1 = pd.cut(df['Year_of_Op'], 10)
testct = pd.crosstab(SurvBins1, DateBins1)
testct
testct.plot(kind='bar', figsize=(15,10))
###Output
_____no_output_____ |
ICCT_en/examples/04/.ipynb_checkpoints/SS-21-Internal_stability_example_4-checkpoint.ipynb | ###Markdown
Internal stability example 4 How to use this notebook? Try to change the dynamic matrix $A$ of the stable linear system below in order to obtain a system with two divergent modes and then change the initial conditions in order to hide the divergent behaviour. $$\dot{x} = \underbrace{\begin{bmatrix}0&1\\-2&-2\end{bmatrix}}_{A}x$$ Try to answer: - Is it possible to achieve this? If yes, in which particular case?
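Before editing the widgets below, a minimal sketch for checking the eigenvalues of a candidate matrix with `numpy` (already used by this notebook) might look like this:

```python
import numpy

A = numpy.matrix([[0., 1.], [-2., -2.]])  # the stable system above
print(numpy.linalg.eig(A)[0])             # eigenvalues -1±1j: negative real parts -> stable
# for two divergent modes, both eigenvalues need a positive real part
```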
###Code
%matplotlib inline
import control as control
import numpy
import sympy as sym
from IPython.display import display, Markdown
import ipywidgets as widgets
import matplotlib.pyplot as plt
#matrixWidget is a matrix looking widget built with a VBox of HBox(es) that returns a numPy array as value !
class matrixWidget(widgets.VBox):
def updateM(self,change):
for irow in range(0,self.n):
for icol in range(0,self.m):
self.M_[irow,icol] = self.children[irow].children[icol].value
#print(self.M_[irow,icol])
self.value = self.M_
def dummychangecallback(self,change):
pass
def __init__(self,n,m):
self.n = n
self.m = m
self.M_ = numpy.matrix(numpy.zeros((self.n,self.m)))
self.value = self.M_
widgets.VBox.__init__(self,
children = [
widgets.HBox(children =
[widgets.FloatText(value=0.0, layout=widgets.Layout(width='90px')) for i in range(m)]
)
for j in range(n)
])
#fill in widgets and tell interact to call updateM each time a children changes value
for irow in range(0,self.n):
for icol in range(0,self.m):
self.children[irow].children[icol].value = self.M_[irow,icol]
self.children[irow].children[icol].observe(self.updateM, names='value')
#value = Unicode('[email protected]', help="The email value.").tag(sync=True)
self.observe(self.updateM, names='value', type= 'All')
def setM(self, newM):
#disable callbacks, change values, and reenable
self.unobserve(self.updateM, names='value', type= 'All')
for irow in range(0,self.n):
for icol in range(0,self.m):
self.children[irow].children[icol].unobserve(self.updateM, names='value')
self.M_ = newM
self.value = self.M_
for irow in range(0,self.n):
for icol in range(0,self.m):
self.children[irow].children[icol].value = self.M_[irow,icol]
for irow in range(0,self.n):
for icol in range(0,self.m):
self.children[irow].children[icol].observe(self.updateM, names='value')
self.observe(self.updateM, names='value', type= 'All')
#self.children[irow].children[icol].observe(self.updateM, names='value')
#overload class for state space systems that DOES NOT remove "useless" states (what "professor" of automatic control would do this?)
class sss(control.StateSpace):
def __init__(self,*args):
#call base class init constructor
control.StateSpace.__init__(self,*args)
#disable function below in base class
def _remove_useless_states(self):
pass
# Preparatory cell
A = numpy.matrix([[0.,1.],[-2.,-2.]])
X0 = numpy.matrix([[1.],[0.]])
Aw = matrixWidget(2,2)
Aw.setM(A)
X0w = matrixWidget(2,1)
X0w.setM(X0)
# Misc
#create dummy widget
DW = widgets.FloatText(layout=widgets.Layout(width='0px', height='0px'))
#create button widget
START = widgets.Button(
description='Test',
disabled=False,
button_style='', # 'success', 'info', 'warning', 'danger' or ''
tooltip='Test',
icon='check'
)
def on_start_button_clicked(b):
#This is a workaround to have intreactive_output call the callback:
# force the value of the dummy widget to change
if DW.value> 0 :
DW.value = -1
else:
DW.value = 1
pass
START.on_click(on_start_button_clicked)
# Main cell
def main_callback(A, X0, DW):
sols = numpy.linalg.eig(A)
sys = sss(A,[[0],[1]],[1,0],0)
pole = control.pole(sys)
if numpy.real(pole[0]) != 0:
p1r = abs(numpy.real(pole[0]))
else:
p1r = 1
if numpy.real(pole[1]) != 0:
p2r = abs(numpy.real(pole[1]))
else:
p2r = 1
if numpy.imag(pole[0]) != 0:
p1i = abs(numpy.imag(pole[0]))
else:
p1i = 1
if numpy.imag(pole[1]) != 0:
p2i = abs(numpy.imag(pole[1]))
else:
p2i = 1
print('A\'s eigenvalues are:',round(sols[0][0],4),'and',round(sols[0][1],4))
#T = numpy.linspace(0, 60, 1000)
T, yout, xout = control.initial_response(sys,X0=X0,return_x=True)
fig = plt.figure("Free response", figsize=(16,5))
ax = fig.add_subplot(121)
plt.plot(T,xout[0])
plt.grid()
ax.set_xlabel('time [s]')
ax.set_ylabel(r'$x_1$')
ax1 = fig.add_subplot(122)
plt.plot(T,xout[1])
plt.grid()
ax1.set_xlabel('time [s]')
ax1.set_ylabel(r'$x_2$')
alltogether = widgets.HBox([widgets.VBox([widgets.Label('$A$:',border=3),
Aw]),
widgets.Label(' ',border=3),
widgets.VBox([widgets.Label('$X_0$:',border=3),
X0w]),
START])
out = widgets.interactive_output(main_callback, {'A':Aw, 'X0':X0w, 'DW':DW})
out.layout.height = '350px'
display(out, alltogether)
#create dummy widget 2
DW2 = widgets.FloatText(layout=widgets.Layout(width='0px', height='0px'))
DW2.value = -1
#create button widget
START2 = widgets.Button(
description='Show answers',
disabled=False,
button_style='', # 'success', 'info', 'warning', 'danger' or ''
tooltip='Click for view the answers',
icon='check'
)
def on_start_button_clicked2(b):
#This is a workaround to have intreactive_output call the callback:
# force the value of the dummy widget to change
if DW2.value> 0 :
DW2.value = -1
else:
DW2.value = 1
pass
START2.on_click(on_start_button_clicked2)
def main_callback2(DW2):
if DW2 > 0:
        display(Markdown(r'''>Answer: The only initial condition that completely hides the divergent modes is the state-space origin.
$$ $$
Example:
$$
A = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}
$$'''))
else:
display(Markdown(''))
#create a graphic structure to hold all widgets
alltogether2 = widgets.VBox([START2])
out2 = widgets.interactive_output(main_callback2,{'DW2':DW2})
#out.layout.height = '300px'
display(out2,alltogether2)
###Output
_____no_output_____ |
filesystems.ipynb | ###Markdown
Show available columns
###Code
dataraw.columns
countfs = dataraw['id'].count()
display(Markdown("#### FileSystems Count: {}".format(countfs)))
###Output
_____no_output_____
###Markdown
Get Filesystems per pool
###Code
countfspool = dataraw[['id', 'pool']].rename(columns={'id': 'Filesystems'}).groupby('pool').count()
plt.figure(figsize=(16, 9))
# plot bar
ax1 = plt.subplot(221)
ax1 = countfspool.plot(kind='bar', legend=False, ax=ax1, fontsize=12, grid=True)
ax1.set_ylabel('FS Count')
ax1.set_xlabel('pool')
# plot table
ax2 = plt.subplot(222)
plt.axis('off')
tbl = table(ax2, countfspool, loc='center', bbox=[0.2, 0.2, 0.5, 0.5])
tbl.auto_set_font_size(False)
tbl.set_fontsize(14)
# plot pie
ax3 = plt.subplot(223)
ax3 = countfspool.plot(kind='pie', legend=False, subplots=True, ax=ax3, startangle=90)
plt.show()
###Output
_____no_output_____
###Markdown
Usage
###Code
sumspace = dataraw[['space_total', 'space_data']].sum()
sumspace.map(human_size)
spacepool = dataraw[['space_total', 'space_data', 'pool']].groupby('pool')
spacepool.sum().applymap(human_size)
###Output
_____no_output_____
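###Markdown
`human_size` is assumed to be defined earlier in the full notebook (outside this excerpt). Purely as an illustration of what such a byte-formatting helper usually does, here is a hypothetical sketch; the name `human_size_example` is made up so it does not clash with the real helper.
###Code
# Hypothetical sketch of a byte-formatting helper similar to the human_size used above
def human_size_example(nbytes):
    units = ['B', 'KB', 'MB', 'GB', 'TB', 'PB']
    value = float(nbytes)
    for unit in units:
        if abs(value) < 1024.0 or unit == units[-1]:
            return "%3.2f %s" % (value, unit)
        value /= 1024.0
print(human_size_example(3 * 1024**3))
###Output
_____no_output_____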
###Markdown
Graphical comparison between space total usage and space data usage
###Code
plt.figure(figsize=(16, 4))
# plot bar
ax = plt.subplot(111)
ax = spacepool.sum().plot(kind='barh', legend=True, ax=ax, fontsize=12, grid=True)
ax.set_xlabel("Space (GB)")
plt.show()
###Output
_____no_output_____
###Markdown
Get reservationsGet total reservations assigned and reservations per pool.
###Code
sumrestotal = dataraw[['reservation']].sum()
display(Markdown("#### Reservation total SUM: {}".format(human_size(sumrestotal.reservation))))
reservationpool = dataraw[['reservation', 'pool']].groupby('pool')
reservationpool.sum().applymap(human_size)
unusedrespool = dataraw[['space_unused_res', 'pool']].groupby('pool')
unusedrespool.sum().applymap(human_size)
###Output
_____no_output_____
###Markdown
Show percentage of unused reservation per filesystem in GBTable with top unused reservation and a graph with red bars for values equal to or above 50%.
###Code
InteractiveShell.ast_node_interactivity = "all"
def highlight(data):
if data.space_unused_res > 50:
return 'background-color: yellow'
unusedresfs = dataraw[['name', 'project', 'space_unused_res', 'reservation']]
percentage = unusedresfs.copy()
percentage['unused_percent'] = (unusedresfs['space_unused_res'] / unusedresfs['reservation']) * 100
percentage['used_percent'] = 100 - (unusedresfs['space_unused_res'] / unusedresfs['reservation']) * 100
percentage['space_unused_res'] = percentage['space_unused_res']
percentage['reservation'] = percentage['reservation']
percentage['space_unused_res'] = percentage['space_unused_res'].apply(human_size)
percentage['reservation'] = percentage['reservation'].apply(human_size)
percentage.set_index(['name', 'project'], inplace=True)
topunusedpercent = percentage.sort_values('unused_percent', ascending=False).dropna()
if topunusedpercent.empty:
display(Markdown("### No unused reservation to report"))
else:
topunusedpercent.style.bar()
InteractiveShell.ast_node_interactivity = "last_expr"
if not topunusedpercent.empty:
colors = []
for val in topunusedpercent['unused_percent'].values:
if val >= 50:
colors.append('r')
else:
colors.append('b')
def vertical_size(count):
if count <= 15:
return 10
else:
size = (count / 15) * 5
if size <= 10:
return 10
else:
return ((count / 15) * 5)
vertical = vertical_size(topunusedpercent.count()[0])
plt.figure(figsize=(16, vertical))
# plot bar
ax = plt.subplot(111)
ax = topunusedpercent['unused_percent'].plot(kind='barh', legend=False, ax=ax,
fontsize=12, grid=True, stacked=True, color=colors)
ax.set_ylabel('Unused Percentage')
ax.set_xlabel('filesystem')
plt.show()
###Output
_____no_output_____
###Markdown
Get quota information
###Code
InteractiveShell.ast_node_interactivity = "all"
quotaref = dataraw[['project', 'name', 'quota', 'space_data']]
quota = quotaref.copy()
# leave just row with non-zero quota values
quota = quota[~(quota == 0).any(axis=1)]
quota['unused_percent'] = 100 - (quota['space_data'] / quota['quota']) * 100
quota['used percent'] = (quota['space_data'] / quota['quota']) * 100
quota['quota'] = quota['quota'].apply(human_size) #/ (1024 * 1024 * 1024)
quota['space_data'] = quota['space_data'].apply(human_size)
#quota.set_index('name', inplace=True)
quota.set_index(['name', 'project'], inplace=True)
topquotaunused = quota.sort_values('unused_percent', ascending=False)
if topquotaunused.empty:
display(Markdown("### No unused quota to report"))
else:
topquotaunused.style.bar()
InteractiveShell.ast_node_interactivity = "last_expr"
if not topquotaunused.empty:
colors = []
for val in topquotaunused['unused_percent'].values:
if val >= 50:
colors.append('r')
else:
colors.append('b')
def vertical_size(count):
if count <= 15:
return 10
else:
size = (count / 15) * 5
if size <= 10:
return 10
else:
return ((count / 15) * 5)
vertical = vertical_size(topquotaunused.count()[0])
plt.figure(figsize=(16, vertical))
# plot bar
ax = plt.subplot(111)
ax = topquotaunused['unused_percent'].plot(kind='barh', legend=False, ax=ax,
fontsize=12, grid=True, stacked=True, color=colors)
ax.set_ylabel('Unused Quota Percentage')
ax.set_xlabel('filesystem')
plt.show()
###Output
_____no_output_____
###Markdown
Get filesystems using compression
###Code
compress = dataraw[['name', 'project', 'pool', 'compressratio']]
compress = compress[compress['compressratio'] != 100]
compress.set_index('name')
###Output
_____no_output_____ |
geeks_for_geeks/Lesson03_Data_types/05.Dictionary.ipynb | ###Markdown
3.5 DictionaryA dictionary in Python is a collection of data values, used to store data like a map; unlike other data types that hold only a single value as an element, a dictionary holds **key:value pairs. Key-value pairs are provided in the dictionary to make it more optimized**. Since Python 3.7, dictionaries maintain insertion order, so a dictionary is an ordered collection of data values. Note – Keys in a dictionary don't allow polymorphism. 3.5.1 Creating a DictionaryIn Python, a dictionary can be created by placing a sequence of elements within curly {} braces, separated by commas. A dictionary holds pairs of values, one being the Key and the other the corresponding Value. ```text{k1:v1, k2:v2, ... kn:vn}```- Values in a dictionary can be of any data type and can be duplicated- Keys can't be repeated and must be immutable- Keys are case sensitive: the same name with a different case is treated as a distinct key.
###Code
# with Integer Keys
Dict = {1: 'Geeks', 2: 'For', 3: 'Geeks'}
print("\nDictionary with the use of Integer Keys: ")
print(Dict)
# Creating a Dictionary
# with Mixed keys
Dict = {'Name': 'Geeks', 1: [1, 2, 3, 4]}
print("\nDictionary with the use of Mixed Keys: ")
print(Dict)
###Output
Dictionary with the use of Integer Keys:
{1: 'Geeks', 2: 'For', 3: 'Geeks'}
Dictionary with the use of Mixed Keys:
{'Name': 'Geeks', 1: [1, 2, 3, 4]}
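###Markdown
The section above states that keys are case sensitive but the cell does not show it, so here is a small added illustration (not part of the original section): 'Name' and 'name' are stored as two distinct keys.
###Code
# Keys are case sensitive: 'Name' and 'name' are treated as different keys
Dict = {'Name': 'Geeks', 'name': 'For', 1: 'Geeks'}
print(Dict)
###Output
{'Name': 'Geeks', 'name': 'For', 1: 'Geeks'}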
###Markdown
A dictionary can also be created with the built-in dict() function. An empty dictionary can be created by just placing two curly braces {}.
###Code
# Creating an empty Dictionary
Dict = {}
print("Empty Dictionary: ")
print(Dict)
# Creating a Dictionary with dict() method
Dict = dict({1: 'Geeks', 2: 'For', 3:'Geeks'})
print("\nDictionary with the use of dict(): ")
print(Dict)
# Creating a Dictionary with each item as a Pair
Dict = dict([(1, 'Geeks'), (2, 'For')])
print("\nDictionary with each item as a pair: ")
print(Dict)
###Output
Empty Dictionary:
{}
Dictionary with the use of dict():
{1: 'Geeks', 2: 'For', 3: 'Geeks'}
Dictionary with each item as a pair:
{1: 'Geeks', 2: 'For'}
###Markdown
As we said before, values in a dictionary can be of any data type, so a value can itself be another dictionary.
###Code
# Creating a Nested Dictionary
# as shown in the below image
Dict = {1: 'Geeks', 2: 'For',
3:{'A' : 'Welcome', 'B' : 'To', 'C' : 'Geeks'}}
print(Dict)
###Output
{1: 'Geeks', 2: 'For', 3: {'A': 'Welcome', 'B': 'To', 'C': 'Geeks'}}
###Markdown
3.5.2 Adding elements to a DictionaryOne value at a time can be added to a dictionary by defining the value along with its key, e.g. Dict[Key] = 'Value'. Updating an existing value in a dictionary can also be done with the built-in update() method. Nested key values can be added to an existing dictionary as well. **While adding a value, if the key already exists, the value gets updated; otherwise a new key with the value is added to the dictionary.**
###Code
# Creating an empty Dictionary
Dict = {}
print("Empty Dictionary: ")
print(Dict)
# Adding elements one at a time
Dict[0] = 'Geeks'
Dict[2] = 'For'
Dict[3] = 1
print("\nDictionary after adding 3 elements: ")
print(Dict)
# Adding set of values to a single Key
Dict['Value_set'] = 2, 3, 4
print("\nDictionary after adding 3 elements: ")
print(Dict)
# Updating existing Key's Value
Dict[2] = 'Welcome'
print("\nUpdated key value: ")
print(Dict)
# Adding Nested Key value to Dictionary
Dict[5] = {'Nested' :{'1' : 'Life', '2' : 'Geeks'}}
print("\nAdding a Nested Key: ")
print(Dict)
###Output
Adding a Nested Key:
{0: 'Geeks', 2: 'Welcome', 3: 1, 'Value_set': (2, 3, 4), 5: {'Nested': {'1': 'Life', '2': 'Geeks'}}}
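###Markdown
The text above mentions the built-in update() method but the cell never calls it, so here is a small added example (not from the original section) of merging another dictionary into Dict with update().
###Code
# update() merges another dictionary (or an iterable of key/value pairs) into Dict,
# overwriting the values of keys that already exist
Dict = {0: 'Geeks', 2: 'For', 3: 'Geeks'}
Dict.update({2: 'Welcome', 'new_key': 'To'})
print(Dict)
###Output
{0: 'Geeks', 2: 'Welcome', 3: 'Geeks', 'new_key': 'To'}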
###Markdown
3.5.3 Accessing elements from a DictionaryTo access the items of a dictionary, refer to the key name:- the key can be used inside square brackets - or use the get() method
###Code
# Python program to demonstrate
# accessing a element from a Dictionary
# Creating a Dictionary
Dict = {1: 'Geeks', 'name': 'For', 3: 'Geeks'}
# accessing a element using key
print("Accessing a element using key:")
print(Dict['name'])
# accessing a element using key
print("Accessing a element using key:")
print(Dict[1])
# accessing a element using get()
# method
print("Accessing a element using get:")
print(Dict.get(3))
print(Dict.get('name'))
###Output
Accessing a element using get:
Geeks
For
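###Markdown
A small added note (not in the original section): unlike square-bracket access, get() does not raise a KeyError for a missing key; it returns None, or a default value if one is supplied as the second argument.
###Code
Dict = {1: 'Geeks', 'name': 'For', 3: 'Geeks'}
# A missing key with [] would raise a KeyError; get() returns None or a default instead
print(Dict.get('missing'))
print(Dict.get('missing', 'not found'))
###Output
None
not found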
###Markdown
Accessing an element of a nested dictionaryIn order to access the value of any key in the nested dictionary, use indexing [] syntax.
###Code
# Creating a Dictionary
Dict = {'Dict1': {1: 'Geeks'},
'Dict2': {'Name': 'For'}}
# Accessing element using key
print(Dict['Dict1'])
print(Dict['Dict1'][1])
print(Dict['Dict2']['Name'])
###Output
{1: 'Geeks'}
Geeks
For
###Markdown
3.5.4 Removing Elements from a Dictionary- del dict_name[key]- pop(key)- popitem()- clear() 3.5.4.1 Using the del keywordIn a Python dictionary, keys can be deleted using the del keyword. With del, specific entries as well as the whole dictionary can be deleted. Items in a nested dictionary can also be deleted by using del with the nested key and the particular inner key to be removed. Note: **del Dict** without a key deletes the entire dictionary object, so printing it after deletion will raise an error.
###Code
# Initial Dictionary
Dict = { 5 : 'Welcome', 6 : 'To', 7 : 'Geeks',
'A' : {1 : 'Geeks', 2 : 'For', 3 : 'Geeks'},
'B' : {1 : 'Geeks', 2 : 'Life'}}
print("Initial Dictionary: ")
print(Dict)
# Deleting a Key value
del Dict[6]
print("\nDeleting a specific key: ")
print(Dict)
# Deleting a Key from
# Nested Dictionary
del Dict['A'][2]
print("\nDeleting a key from Nested Dictionary: ")
print(Dict)
###Output
Initial Dictionary:
{5: 'Welcome', 6: 'To', 7: 'Geeks', 'A': {1: 'Geeks', 2: 'For', 3: 'Geeks'}, 'B': {1: 'Geeks', 2: 'Life'}}
Deleting a specific key:
{5: 'Welcome', 7: 'Geeks', 'A': {1: 'Geeks', 2: 'For', 3: 'Geeks'}, 'B': {1: 'Geeks', 2: 'Life'}}
Deleting a key from Nested Dictionary:
{5: 'Welcome', 7: 'Geeks', 'A': {1: 'Geeks', 3: 'Geeks'}, 'B': {1: 'Geeks', 2: 'Life'}}
###Markdown
3.5.4.2 Using the pop() methodpop(key) deletes the specified key from the dictionary and returns the deleted value (an optional second argument can be supplied as a default to avoid a KeyError when the key is missing).
###Code
# Creating a Dictionary
Dict = {1: 'Geeks', 'name': 'For', 3: 'Geeks'}
print('Dictionary before deletion: ' + str(Dict))
# Deleting a key
# using pop() method
pop_ele = Dict.pop(1)
print('\nDictionary after deletion: ' + str(Dict))
print('\nValue associated to poped key is: ' + str(pop_ele))
###Output
Dictionary before deletion: {1: 'Geeks', 'name': 'For', 3: 'Geeks'}
Dictionary after deletion: {'name': 'For', 3: 'Geeks'}
Value associated to poped key is: Geeks
###Markdown
3.5.4.3 Using the popitem() methodpopitem() takes no arguments: it removes the last inserted item from the dictionary and returns it as a (key, value) pair (before Python 3.7 it removed an arbitrary pair).
###Code
# Creating Dictionary
Dict = {1: 'Geeks', 'name': 'For', 3: 'Geeks'}
print('Dictionary before deletion: ' + str(Dict))
# Deleting an arbitrary key
# using popitem() function
pop_ele = Dict.popitem()
print("\nDictionary after deletion: " + str(Dict))
print("\nThe arbitrary pair returned is: " + str(pop_ele))
###Output
Dictionary before deletion: {1: 'Geeks', 'name': 'For', 3: 'Geeks'}
Dictionary after deletion: {1: 'Geeks', 'name': 'For'}
The arbitrary pair returned is: (3, 'Geeks')
###Markdown
3.5.4.4 Using clear() methodAll the items from a dictionary can be deleted at once by using clear() method.
###Code
# Creating a Dictionary
Dict = {1: 'Geeks', 'name': 'For', 3: 'Geeks'}
# Deleting entire Dictionary
Dict.clear()
print("\nDeleting Entire Dictionary: ")
print(Dict)
###Output
Deleting Entire Dictionary:
{}
|
src/robust-pca.ipynb | ###Markdown
Robust Principal Component Analysis Classifying faces.
###Code
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from mpl_toolkits.mplot3d import Axes3D
from matplotlib.image import imread
import os
import scipy.io
## set plotting parameters as default for the rest of the notebook
plt.rcParams["figure.figsize"] = [10,4]
plt.rc('font', family='serif')
plt.rc('xtick', labelsize=13)
plt.rc('ytick', labelsize=13)
plt.rcParams.update({'legend.fontsize': 11})
plt.rcParams.update({'axes.labelsize': 15})
plt.rcParams.update({'font.size': 15})
# play with O(n^2) and O(n*log(n))
# Quick refresher that the DFT and FFT scale with O(n^2) and O(n*log(n)), respectively
nf = np.linspace(1,100)
plt.plot(nf, nf**2, label=r"$O(n^2)$")
plt.plot(nf, nf*np.log(nf), label=r"$O(n \log{n})$")
plt.xlabel("number of computations")
plt.title("time to compute")
plt.legend()
###Output
_____no_output_____
###Markdown
Understand $O(n^2)$ vs $O(n \log{n})$ time complexity Eigenfaces Import the **.mat faces dataset, then span an eigenface space and use it to classify people and also use it to represent other pictures, e.g. al botoncito.** Find the PCA using:\begin{align*}{\bf B} &= {\bf X - \bar{X}} \\\rightarrow {\bf B} &= {\bf U\Sigma V^*} \end{align*}
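(Added note, not from the original notebook: with the rank-$r$ truncation ${\bf B} \approx {\bf U}_r {\bf \Sigma}_r {\bf V}_r^*$, each centered face ${\bf x} - \bar{\bf x}$ has PCA coordinates $\boldsymbol{\alpha} = {\bf U}_r^* ({\bf x} - \bar{\bf x})$ and is approximated by ${\bf x} \approx \bar{\bf x} + {\bf U}_r \boldsymbol{\alpha}$; the classifier cells further below rely on this idea by projecting the test faces onto selected columns of ${\bf U}$.)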
###Code
mat_contents = scipy.io.loadmat(os.path.join('/', "home", "igodlab", "Documents", "DataDriven", "DATA", 'allFaces.mat')) ## loads the **.mat file as a Python dictionary
faces = mat_contents['faces'] ## images of faces (each of them is flattened)
m = int(mat_contents['m']) ## actual shape of each image
n = int(mat_contents['n']) ## actual shape of each image
ntot = int(mat_contents["person"]) ## total #of people = 38
nfaces = mat_contents["nfaces"][0] ## #of pictures for the same person, total=38 people
print("'faces' matrix contains pictures as the columns. Every person has 59 to 64 different \
pictures so the total number of columns is the sum of 'nfaces' vector")
faces.shape
## example plot one of the faces
nper = 34 ## #of person
npic = 44
ith = sum(nfaces[:nper-1])+(npic-1) ## 44-th picture of person: nper=34
ith_face = np.reshape(faces[:,ith], (m,n)).T ## reshape and transpose to get the rigth format
plt.imshow(ith_face)
plt.axis("off")
plt.set_cmap("gray")
plt.show()
## compute the eigenface space
nper_train = int(0.95*len(nfaces))
ntrain = sum(nfaces[:nper_train])
Xtrain = faces[:, :ntrain] ## training set
avg_face = np.tile(np.mean(Xtrain, axis=1), (np.shape(Xtrain)[1], 1)).T
B = Xtrain - avg_face
U, S, VT = np.linalg.svd(B, full_matrices=False)
## plot the average face and the first 7 modes
fig, axes = plt.subplots(2,4,figsize=(15,8))
for i in range(4):
if i == 0:
axes[0,0].imshow(np.reshape(avg_face[:,0], (m,n)).T)
axes[0,0].set_title("Average face")
axes[0,0].axis("off")
else:
axes[0,i].imshow(np.reshape(U[:,i], (m,n)).T)
axes[0,i].set_title(r"$u_{:.0g}$".format(i))
axes[0,i].axis("off")
axes[1,i].imshow(np.reshape(U[:,i+4], (m,n)).T)
axes[1,i].set_title(r"$u_{:.0g}$".format(i+4))
axes[1,i].axis("off")
## import this function for case (iii) from github, same authors of the paper referenced
from OptHT import optht
### optimal hard thereshold, method 3
#gamma = 1
beta = np.shape(B)[1]/np.shape(B)[0]
lmbda = (2*(beta+1)+8*beta/((beta+1)+(beta**2+14*beta+1)**(1/2)))**(1/2)
#tau = lmbda*np.sqrt(np.shape(faces)[0])*gamma
r_opt = optht(beta, S)
tau = 1264.0306430252317 ## define the cutoff value
r = len(S)-1 ## use the total number minus 1 because the last singular value is extremely small
## plot
plt.figure(figsize=(14,4))
plt.subplot(1,2,1)
plt.semilogy(S[:r],'.')
plt.hlines(tau, 0, r, linestyle="--", color="r")
plt.semilogy(S[:r_opt], "r.")
plt.xlim(0.0-50, r+50)
plt.ylabel(r"$\sigma_r$")
plt.xlabel(r"$r$")
plt.subplot(1,2,2)
plt.plot(np.cumsum(S[:r])/sum(S[:r]), ".")
plt.plot(np.cumsum(S[:r_opt])/sum(S[:r]), "r.")
plt.vlines(r_opt, 0, sum(S[:r_opt])/sum(S[:r]), linestyle="--", color="r")
plt.hlines(sum(S[:r_opt])/sum(S[:r]), 0.0, r_opt, linestyle="--", color="r")
plt.xlim(0.0-50, r+50)
plt.ylabel(r"cumsum[$\sigma_r$]")
plt.xlabel(r"$r$")
## show noisy eigenface-space U's
n_ht = 800
plt.imshow(np.reshape(U[:,n_ht], (m,n)).T)
plt.axis("off")
plt.show()
###Output
_____no_output_____
###Markdown
Example of an eigenface (PCA) past the threshold, in this case number 800
###Code
## built classifier prototype
Xtest = faces[:,ntrain:] ## collection set of faces for the two people of the test set
## plot
fig2 = plt.figure()
axes = fig2.add_subplot(111, projection='3d')
pcax = [3,4, 5] ## 3 PCA axis
for j in range(np.shape(Xtest)[1]):
x = U[:,pcax[0]].T @ Xtest[:,j]
y = U[:,pcax[1]].T @ Xtest[:,j]
z = U[:,pcax[2]].T @ Xtest[:,j]
if (j >= 0) and (j < nfaces[nper_train]):
axes.scatter(x,y,z, marker="s", color="purple", s=40)
else:
axes.scatter(x,y,z, marker="o", color="b", s=40)
axes.view_init(elev=0, azim=0) ## fix the 3D view
axes.scatter([], [], [], marker='s',color='purple', label="person 37")
axes.scatter([], [], [], marker='o',color='b', label="person 38")
axes.set_xlabel("PC"+str(pcax[0]+1))
axes.set_ylabel("PC"+str(pcax[1]+1))
axes.set_zlabel("PC"+str(pcax[2]+1))
axes.legend()
U.T.shape
###Output
_____no_output_____ |
DSNP_3_0_Sobre_o_Matplotlib.ipynb | ###Markdown
###Code
# import packages
import numpy as np
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Getting to Know MatplotlibMany people use it, few understand it. Matplotlib is the main visualization library for Python. Built on top of `numpy` *arrays* and designed to integrate with the main *Data Science* tools, Matplotlib was created in 2002 by John Hunter.John was a neurobiologist who analyzed electrocorticography signals together with a team of researchers. Since they used proprietary *software* and had only one license, he created Matplotlib to meet that original need, inspired by the scripted interface that MATLAB provided.When I said in the first line that many people use the library but few really understand it, I meant that they are unaware of how the architecture of `matplotlib` was designed. Matplotlib's ArchitectureBasically, the architecture of `matplotlib` is composed of 3 layers:1. ***Scripting Layer***2. ***Artist Layer***3. ***Backend Layer***To understand how the package works, you need to understand that Matplotlib's architecture was designed to allow its users to create, render and update `Figure` objects. These *Figures* are displayed on the screen and interact with events such as keyboard and mouse *inputs*. This kind of interaction is handled in the ***backend*** layer.Matplotlib lets you create a chart composed of multiple different objects. It is as if it did not generate a single thing, but an image made up of several isolated pieces, such as the x-axis, the y-axis, the title and the legends, among others. The ability to change all of these objects is provided by the ***artist*** layer. Look at the code below and see how we are dealing with these "multiple objects": we plot the data on the Cartesian plane, create a title and give *labels* to the x and y axes.
###Code
# generate demo values
np.random.seed(42)
x = np.arange(10)
y = np.random.normal(size=10)
# plot the values
plt.plot(x, y)
plt.title("Exemplo")
plt.xlabel("Eixo x")
plt.ylabel("Eixo y")
plt.show()
###Output
_____no_output_____
###Markdown
For you, the user, to communicate with these two layers and manipulate the `Figures`, there is a third layer, the ***scripting*** layer. It abstracts, at a higher level, all contact with Matplotlib and lets us create our *plots* in a simple and direct way. I want to ask you to [read this article](https://realpython.com/python-matplotlib-guide/) from the ***Real Python*** *blog*. It is one of the best articles on matplotlib I have ever come across, and it explains several concepts of the tool. You do not need to know the details of `matplotlib`'s architecture, but you do need a general idea of how it works. If you want to go even deeper, I recommend the book [*Mastering matplotlib*](https://learning.oreilly.com/library/view/mastering-matplotlib/9781783987542/). Getting to Know Matplotlib More CloselyIf you remember the previous lessons, plotting a chart is very simple and direct. Something like `plt.plot(x, y)` will readily give you a chart. However, this abstraction hides an important secret: the hierarchy of 3 objects behind every *plot*. I will use the images from the article [*Python Plotting With Matplotlib*](https://realpython.com/python-matplotlib-guide/) to make this concept easier to understand.The "outermost" object of each plot is the `Figure` object. Inside a `Figure` there is an `Axes` object. Inside each `Axes` live the smaller objects, such as legends, titles, texts and *tick marks*.As Brad Solomon said in the article, people's main confusion is not understanding that an individual *plot* (or chart) is contained inside an `Axes` object. A `Figure` is not the *plot* itself; it can contain one or more *plots* (and each *plot* is an `Axes`).As I said, each `Axes` is composed of the smaller objects that make up the *plot* itself. The vast majority of people (myself included) know only the main ones, such as the title, axes, labels and legend. However, to see the complete anatomy inside an `Axes`, you can use the code below, made available in the [official `matplotlib` documentation](https://matplotlib.org/examples/showcase/anatomy.html).
###Code
#@title
# This figure shows the name of several matplotlib elements composing a figure
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.ticker import AutoMinorLocator, MultipleLocator, FuncFormatter
np.random.seed(19680801)
X = np.linspace(0.5, 3.5, 100)
Y1 = 3+np.cos(X)
Y2 = 1+np.cos(1+X/0.75)/2
Y3 = np.random.uniform(Y1, Y2, len(X))
fig = plt.figure(figsize=(6, 6))
ax = fig.add_subplot(1, 1, 1, aspect=1)
def minor_tick(x, pos):
if not x % 1.0:
return ""
return "%.2f" % x
ax.xaxis.set_major_locator(MultipleLocator(1.000))
ax.xaxis.set_minor_locator(AutoMinorLocator(4))
ax.yaxis.set_major_locator(MultipleLocator(1.000))
ax.yaxis.set_minor_locator(AutoMinorLocator(4))
ax.xaxis.set_minor_formatter(FuncFormatter(minor_tick))
ax.set_xlim(0, 4)
ax.set_ylim(0, 4)
ax.tick_params(which='major', width=1.0)
ax.tick_params(which='major', length=10)
ax.tick_params(which='minor', width=1.0, labelsize=10)
ax.tick_params(which='minor', length=5, labelsize=10, labelcolor='0.25')
ax.grid(linestyle="--", linewidth=0.5, color='.25', zorder=-10)
ax.plot(X, Y1, c=(0.25, 0.25, 1.00), lw=2, label="Blue signal", zorder=10)
ax.plot(X, Y2, c=(1.00, 0.25, 0.25), lw=2, label="Red signal")
ax.plot(X, Y3, linewidth=0,
marker='o', markerfacecolor='w', markeredgecolor='k')
ax.set_title("Anatomy of a figure", fontsize=20, verticalalignment='bottom')
ax.set_xlabel("X axis label")
ax.set_ylabel("Y axis label")
ax.legend()
def circle(x, y, radius=0.15):
from matplotlib.patches import Circle
from matplotlib.patheffects import withStroke
circle = Circle((x, y), radius, clip_on=False, zorder=10, linewidth=1,
edgecolor='black', facecolor=(0, 0, 0, .0125),
path_effects=[withStroke(linewidth=5, foreground='w')])
ax.add_artist(circle)
def text(x, y, text):
ax.text(x, y, text, backgroundcolor="white",
ha='center', va='top', weight='bold', color='blue')
# Minor tick
circle(0.50, -0.10)
text(0.50, -0.32, "Minor tick label")
# Major tick
circle(-0.03, 4.00)
text(0.03, 3.80, "Major tick")
# Minor tick
circle(0.00, 3.50)
text(0.00, 3.30, "Minor tick")
# Major tick label
circle(-0.15, 3.00)
text(-0.15, 2.80, "Major tick label")
# X Label
circle(1.80, -0.27)
text(1.80, -0.45, "X axis label")
# Y Label
circle(-0.27, 1.80)
text(-0.27, 1.6, "Y axis label")
# Title
circle(1.60, 4.13)
text(1.60, 3.93, "Title")
# Blue plot
circle(1.75, 2.80)
text(1.75, 2.60, "Line\n(line plot)")
# Red plot
circle(1.20, 0.60)
text(1.20, 0.40, "Line\n(line plot)")
# Scatter plot
circle(3.20, 1.75)
text(3.20, 1.55, "Markers\n(scatter plot)")
# Grid
circle(3.00, 3.00)
text(3.00, 2.80, "Grid")
# Legend
circle(3.70, 3.80)
text(3.70, 3.60, "Legend")
# Axes
circle(0.5, 0.5)
text(0.5, 0.3, "Axes")
# Figure
circle(-0.3, 0.65)
text(-0.3, 0.45, "Figure")
color = 'blue'
ax.annotate('Spines', xy=(4.0, 0.35), xycoords='data',
xytext=(3.3, 0.5), textcoords='data',
weight='bold', color=color,
arrowprops=dict(arrowstyle='->',
connectionstyle="arc3",
color=color))
ax.annotate('', xy=(3.15, 0.0), xycoords='data',
xytext=(3.45, 0.45), textcoords='data',
weight='bold', color=color,
arrowprops=dict(arrowstyle='->',
connectionstyle="arc3",
color=color))
ax.text(4.0, -0.4, "Made with http://matplotlib.org",
fontsize=10, ha="right", color='.5')
plt.show()
###Output
_____no_output_____
###Markdown
Object-Oriented ApproachAccording to the official Matplotlib documentation, the library plots your data on `Figure` objects (windows, Jupyter widgets, and so on), and these objects can contain one or more `Axes` objects (an area where points can be specified in terms of x-y coordinates, polar coordinates, x-y-z, among others).With this object-oriented approach, more customizations and chart types are possible.The "object-oriented" style explicitly creates the two objects, `Figure` and `Axes`, and calls methods on them.The simplest way to create a figure with an *axes* is to use `pyplot.subplots`, or `plt.subplots` for short.Then you can use the `Axes.plot` method to draw data on a given *axes*.
###Code
fig, ax = plt.subplots()
###Output
_____no_output_____
###Markdown
What we did above was create a Figure that will hold all the *plots* (`Axes`). In this case, since we did not specify anything, only 1 `Figure` and 1 `Axes` (*plot*) were created.From this point on, manipulation and customization happen directly on the `ax` variable, by calling its methods.
###Code
x = np.arange(0, 100)
# plot the values
fig, ax = plt.subplots()
ax.plot(x, x, label="linear")
ax.plot(x, x**2, label="quadrado")
ax.set_title("Abordagem OO")
ax.set_ylabel("Valores em Y")
ax.set_xlabel("Valores em X")
ax.legend()
# fig.show()
fig.show()
###Output
_____no_output_____
###Markdown
Done! We have the same chart, but we gain much greater control over how to manipulate it. See the example below, where I create 2 `Axes` objects to plot 2 different charts in a single `Figure`.
###Code
# plot the 2 charts
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(8,3))
# chart 1
ax[0].plot(x, x)
ax[0].set_title("Linear")
ax[0].set_xlabel("x")
ax[0].set_ylabel("y")
# chart 2
ax[1].plot(x, x**2)
ax[1].set_title("Quadrado")
ax[1].set_xlabel("x")
ax[1].set_ylabel("y")
fig.tight_layout();
###Output
_____no_output_____
###Markdown
What most people do whenever they need something is copy and paste ready-made code from Stack Overflow. There is nothing wrong with that, as long as you know the basic concepts.If you take the time to study the original documentation and look at practical examples like the one from the *blog* I suggested, you will feel much more confident and able to produce visually striking charts. Pyplot Interface ApproachWhile above we had to explicitly create figures and *axes*, there is another approach that delegates all the responsibility of creating and managing them.For that, you use the `pyplot` functions to plot. See how to use this *pyplot-style*.
###Code
x = np.arange(10)
plt.plot(x, x)
###Output
_____no_output_____
###Markdown
If I want to add a title to the chart, I can do it directly, following the same approach:
###Code
plt.plot(x, x)
plt.title("Linear")
plt.xlabel("Eixo x")
plt.ylabel("Eixo y")
###Output
_____no_output_____
###Markdown
Similarly, if I want to keep adding objects to the `Axes`, I just keep adding new function calls sequentially:
###Code
plt.plot(x, x, label="Linear")
plt.plot(x, x**2, label="Quadrado")
plt.title("Abordagem Pyplot")
plt.ylabel("Valores de y")
plt.xlabel("Valores de x")
plt.legend()
plt.show()
###Output
_____no_output_____ |
spider_classifier.ipynb | ###Markdown
The Amazing Australian Spider Classifier! Do you need to know whether you're being scared by a dangerous spider or just a normal house spider, and you need an answer *fast*? Then you've come to the right place. Take a pic of the potentially vicious killer and click 'upload' to classify it. (Important: this only handles the Sydney funnel-web spider, redback spider, mouse spider, black house spider and trapdoor spider. It will **not** give a sensible answer for Spider-Man, a spider fish, a spider crawler bot, or hot dogs.)----
###Code
path = Path()
learn_inf = load_learner(path/'export.pkl', cpu=True)
btn_upload = widgets.FileUpload()
out_pl = widgets.Output()
lbl_pred = widgets.Label()
def on_data_change(change):
lbl_pred.value = ''
img = PILImage.create(btn_upload.data[-1])
out_pl.clear_output()
with out_pl: display(img.to_thumb(128,128))
pred,pred_idx,probs = learn_inf.predict(img)
lbl_pred.value = f'Prediction: {pred}; Probability: {probs[pred_idx]:.04f}'
btn_upload.observe(on_data_change, names=['data'])
display(VBox([widgets.Label('Select your spider!'), btn_upload, out_pl, lbl_pred]))
###Output
_____no_output_____ |
scratch/039_new_particle.ipynb | ###Markdown
w subdivide
###Code
image_path= '/home/naka/art/wigglesphere.jpg'
filename = 'vp_test12.svg'
paper_size:str = '11x14 inches'
border:float=20 # mm
image_rescale_factor:float=0.04
smooth_disk_size:int=1
hist_clip_limit=0.1
hist_nbins=32
intensity_min=0.
intensity_max=1.
hatch_spacing_min=0.35 # mm
hatch_spacing_max=1.1 # mm
pixel_width=1 # mm
pixel_height=1 # mm
angle_jitter='ss.norm(loc=10, scale=0).rvs' # degrees
pixel_rotation='0' # degrees
merge_tolerances=[0.3, 0.4,] # mm
simplify_tolerances=[0.2,] # mm
savedir='/home/naka/art/plotter_svgs'
# make page
paper = Paper(paper_size)
drawbox = paper.get_drawbox(border)
# load
img = rgb2gray(io.imread(Path(image_path)))
xgen = ss.uniform(loc=0.45, scale=0.0).rvs
split_func = functools.partial(gp.split_along_longest_side_of_min_rectangle, xgen=xgen)
splits = gp.recursive_split_frac_buffer(
drawbox,
split_func=split_func,
p_continue=0.8,
depth=0,
depth_limit=3,
buffer_frac=-0.0
)
# split_func = functools.partial(gp.random_bezier_subdivide, x0=0.19, x1=0.85, n_eval_points=50)
# splits = gp.recursive_split_frac_buffer(
# drawbox,
# split_func=split_func,
# p_continue=0.7,
# depth=0,
# depth_limit=8,
# buffer_frac=-0.0
# )
bps = MultiPolygon([p for p in splits])
sk = vsketch.Vsketch()
sk.size(paper.page_format_mm)
sk.scale('1mm')
sk.penWidth('0.5mm')
sk.geometry(bps.boundary)
# tolerance=0.5
sk.display()
# make pixel polys
prms = []
for bp in tqdm(bps):
a = np.random.uniform(0, 240)
prm = {
'geometry':bp,
'raw_pixel_width':pixel_width,
'raw_pixel_height':pixel_height,
'angle':a,
'group': 'raw_hatch_pixel',
'intensity': 1,
}
prms.append(prm)
raw_hatch_pixels = geopandas.GeoDataFrame(prms)
# rescale polys to fit in drawbox
bbox = box(*raw_hatch_pixels.total_bounds)
_, transform = gp.make_like(bbox, drawbox, return_transform=True)
A = gp.AffineMatrix(**transform)
scaled_hatch_pixels = raw_hatch_pixels.copy()
scaled_hatch_pixels['geometry'] = scaled_hatch_pixels.affine_transform(A.A_flat)
scaled_hatch_pixels['scaled_pixel_height'] = scaled_hatch_pixels['geometry'].apply(gp.get_height)
scaled_hatch_pixels['scaled_pixel_width'] = scaled_hatch_pixels['geometry'].apply(gp.get_width)
new_drawbox = so.unary_union(scaled_hatch_pixels.geometry)
db = gp.Poly(new_drawbox)
scaled_hatch_pixels['angle'] = np.interp(scaled_hatch_pixels.geometry.centroid.y, [db.bottom, db.top], [0, 680]) + np.random.randn(len(scaled_hatch_pixels)) * 5
scaled_hatch_pixels['angle'] = scaled_hatch_pixels['angle'] // 5 * 5
# scaled_hatch_pixels['angle'] = np.interp(scaled_hatch_pixels['angle'], xp=[0, 180], fp=[30, 150])
qpg = NoisyQuantizedPiecewiseGrid(scaled_hatch_pixels, xstep=5, ystep=5, noise_scale=0.0001, noise_mult=1, verbose=True)
qpg.make_grid()
# qpg = QuantizedPiecewiseGrid(scaled_hatch_pixels, xstep=5, ystep=5)
# qpg.make_grid()
# # evenly spaced grid
# bins, grid = gp.overlay_grid(new_drawbox, xstep=2.5, ystep=2.5, flatmesh=True)
# xs, ys = grid
# pts = [Point(x,y) for x,y in zip(xs, ys)]
# # random
# pts = gp.get_random_points_in_polygon(new_drawbox, 4000)
# n_points = 5000
# pts = []
# pix_p = np.interp(scaled_hatch_pixels['intensity'], [0, 1], [0.9, 0.1])
# pix_p /= pix_p.sum()
# for ii in range(n_points):
# pix = np.random.choice(scaled_hatch_pixels.index, p=pix_p)
# pt = gp.get_random_point_in_polygon(scaled_hatch_pixels.loc[pix, 'geometry'])
# pts.append(pt)
# # circle
# rad = 50
# n_points = 100
# circ = new_drawbox.centroid.buffer(rad).boundary
# pts = [circ.interpolate(d, normalized=True) for d in np.linspace(0, 1, n_points)]
def get_random_line_in_polygon(polygon, max_dist=None, min_dist=None):
pt0 = gp.get_random_point_in_polygon(polygon)
pt1 = gp.get_random_point_in_polygon(polygon)
if max_dist is not None:
while pt0.distance(pt1) > max_dist:
pt1 = gp.get_random_point_in_polygon(polygon)
if min_dist is not None:
while pt0.distance(pt1) < min_dist:
pt1 = gp.get_random_point_in_polygon(polygon)
return LineString([pt0, pt1])
qpg = NoisyQuantizedPiecewiseGrid(scaled_hatch_pixels, xstep=5, ystep=5, noise_scale=0.1, noise_mult=0.8, verbose=False)
qpg.make_grid()
poly = new_drawbox
pts = []
lss = []
n_lines = 900
for ii in tqdm(range(n_lines)):
ls = get_random_line_in_polygon(poly, min_dist = 10, max_dist=400)
new_pts = [ls.interpolate(d) for d in np.linspace(0, ls.length, np.random.randint(1,32))]
vps = [VectorParticle(pos=pt, vector=np.random.uniform(-1,1,size=2), grid=qpg, stepsize=1, momentum_factor=np.random.uniform(0,0)) for pt in new_pts]
for vp in vps:
for ii in range(10):
vp.step()
vps = [vp for vp in vps if len(vp.pts) > 1]
ls = gp.merge_LineStrings([LineString(vp.pts) for vp in vps])
lss.append(ls)
blss = gp.merge_LineStrings(lss).buffer(0.1, cap_style=2, join_style=2)
poly = new_drawbox
pts = []
lss = []
n_lines = 900
for ii in tqdm(range(n_lines)):
ls = get_random_line_in_polygon(poly, min_dist = 10, max_dist=400)
new_pts = [ls.interpolate(d) for d in np.linspace(0, ls.length, np.random.randint(1,32))]
vps = [VectorParticle(pos=pt, vector=np.random.uniform(-1,1,size=2), grid=qpg, stepsize=1, momentum_factor=np.random.uniform(0,0)) for pt in new_pts]
for vp in vps:
for ii in range(10):
vp.step()
vps = [vp for vp in vps if len(vp.pts) > 1]
ls = gp.merge_LineStrings([LineString(vp.pts) for vp in vps])
lss.append(ls)
blss2 = gp.merge_LineStrings(lss).buffer(0.1, cap_style=2, join_style=2)
sk = vsketch.Vsketch()
sk.size(paper.page_format_mm)
sk.scale('1mm')
# for ii, ls in enumerate(lss):
# sk.stroke(ii + 1)
# sk.geometry(ls)
sk.stroke(1)
sk.geometry(blss)
sk.stroke(2)
sk.geometry(blss2)
sk.display()
merge_tolerances = [0.2, 0.3, 0.4, 0.5, 1]
simplify_tolerances = [0.2]
sk.vpype('splitall')
for tolerance in tqdm(merge_tolerances):
sk.vpype(f'linemerge --tolerance {tolerance}mm')
for tolerance in tqdm(simplify_tolerances):
sk.vpype(f'linesimplify --tolerance {tolerance}mm')
sk.vpype('linesort')
sk.display()
savepath = Path(savedir).joinpath(filename).as_posix()
sk.save(savepath)
###Output
_____no_output_____
###Markdown
w subdivide
###Code
image_path= '/home/naka/art/wigglesphere.jpg'
filename = 'vp_test14.svg'
paper_size:str = '11x14 inches'
border:float=20 # mm
image_rescale_factor:float=0.04
smooth_disk_size:int=1
hist_clip_limit=0.1
hist_nbins=32
intensity_min=0.
intensity_max=1.
hatch_spacing_min=0.35 # mm
hatch_spacing_max=1.1 # mm
pixel_width=1 # mm
pixel_height=1 # mm
angle_jitter='ss.norm(loc=10, scale=0).rvs' # degrees
pixel_rotation='0' # degrees
merge_tolerances=[0.3, 0.4,] # mm
simplify_tolerances=[0.2,] # mm
savedir='/home/naka/art/plotter_svgs'
# make page
paper = Paper(paper_size)
drawbox = paper.get_drawbox(border)
# load
img = rgb2gray(io.imread(Path(image_path)))
xgen = ss.uniform(loc=0.5, scale=0.05).rvs
split_func = functools.partial(gp.split_along_longest_side_of_min_rectangle, xgen=xgen)
splits = gp.recursive_split_frac_buffer(
drawbox,
split_func=split_func,
p_continue=1,
depth=0,
depth_limit=7,
buffer_frac=-0.0
)
# split_func = functools.partial(gp.random_bezier_subdivide, x0=0.19, x1=0.85, n_eval_points=50)
# splits = gp.recursive_split_frac_buffer(
# drawbox,
# split_func=split_func,
# p_continue=0.7,
# depth=0,
# depth_limit=8,
# buffer_frac=-0.0
# )
bps = MultiPolygon([p for p in splits])
sk = vsketch.Vsketch()
sk.size(paper.page_format_mm)
sk.scale('1mm')
sk.penWidth('0.5mm')
sk.geometry(bps.boundary)
# tolerance=0.5
sk.display()
all_bps = gp.Shape(bps)
# make pixel polys
prms = []
for bp in tqdm(bps):
# a = np.random.uniform(0, 240)
dist_from_center = bp.centroid.distance(bps.centroid)
a = np.interp(dist_from_center, [0, 150], [0, 1020])
prm = {
'geometry':bp,
'raw_pixel_width':pixel_width,
'raw_pixel_height':pixel_height,
'angle':a,
'group': 'raw_hatch_pixel',
'magnitude': np.random.uniform(0.3, 2),
}
prms.append(prm)
raw_hatch_pixels = geopandas.GeoDataFrame(prms)
# rescale polys to fit in drawbox
bbox = box(*raw_hatch_pixels.total_bounds)
_, transform = gp.make_like(bbox, drawbox, return_transform=True)
A = gp.AffineMatrix(**transform)
scaled_hatch_pixels = raw_hatch_pixels.copy()
scaled_hatch_pixels['geometry'] = scaled_hatch_pixels.affine_transform(A.A_flat)
scaled_hatch_pixels['scaled_pixel_height'] = scaled_hatch_pixels['geometry'].apply(gp.get_height)
scaled_hatch_pixels['scaled_pixel_width'] = scaled_hatch_pixels['geometry'].apply(gp.get_width)
new_drawbox = so.unary_union(scaled_hatch_pixels.geometry)
db = gp.Poly(new_drawbox)
# scaled_hatch_pixels['angle'] = np.interp(scaled_hatch_pixels.geometry.centroid.y, [db.bottom, db.top], [0, 680]) + np.random.randn(len(scaled_hatch_pixels)) * 5
scaled_hatch_pixels['angle'] = scaled_hatch_pixels['angle'] // 5 * 5
# scaled_hatch_pixels['angle'] = np.interp(scaled_hatch_pixels['angle'], xp=[0, 180], fp=[30, 150])
def get_random_line_in_polygon(polygon, max_dist=None, min_dist=None):
pt0 = gp.get_random_point_in_polygon(polygon)
pt1 = gp.get_random_point_in_polygon(polygon)
if max_dist is not None:
while pt0.distance(pt1) > max_dist:
pt1 = gp.get_random_point_in_polygon(polygon)
if min_dist is not None:
while pt0.distance(pt1) < min_dist:
pt1 = gp.get_random_point_in_polygon(polygon)
return LineString([pt0, pt1])
qpg = NoisyQuantizedPiecewiseGrid(scaled_hatch_pixels, xstep=5, ystep=5, noise_scale=0.1, noise_mult=0.5, verbose=False)
qpg.make_grid()
poly = new_drawbox
pts = []
lss = []
n_lines = 6000
for ii in tqdm(range(n_lines)):
ls = get_random_line_in_polygon(poly, min_dist = 10, max_dist=400)
new_pts = [ls.interpolate(d) for d in np.linspace(0, ls.length, np.random.randint(1,2))]
vps = [VectorParticle(pos=pt, grid=qpg, stepsize=1, momentum_factor=np.random.uniform(0,0)) for pt in new_pts]
for vp in vps:
for ii in range(15):
vp.step()
vps = [vp for vp in vps if len(vp.pts) > 1]
ls = gp.merge_LineStrings([LineString(vp.pts) for vp in vps])
lss.append(ls)
blss = gp.merge_LineStrings(lss).buffer(0.2, cap_style=2, join_style=2)
sk = vsketch.Vsketch()
sk.size(paper.page_format_mm)
sk.scale('1mm')
# for ii, ls in enumerate(lss):
# sk.stroke(ii + 1)
# sk.geometry(ls)
sk.stroke(1)
sk.geometry(blss)
sk.display()
merge_tolerances = [0.2, 0.3, 0.4, 0.5, 1]
simplify_tolerances = [0.2]
sk.vpype('splitall')
for tolerance in tqdm(merge_tolerances):
sk.vpype(f'linemerge --tolerance {tolerance}mm')
for tolerance in tqdm(simplify_tolerances):
sk.vpype(f'linesimplify --tolerance {tolerance}mm')
sk.vpype('linesort')
sk.display()
savepath = Path(savedir).joinpath(filename).as_posix()
sk.save(savepath)
###Output
_____no_output_____
###Markdown
spiral start
###Code
image_path= '/home/naka/art/wigglesphere.jpg'
filename = 'vp_test15.svg'
paper_size:str = '11x14 inches'
border:float=20 # mm
image_rescale_factor:float=0.04
smooth_disk_size:int=1
hist_clip_limit=0.1
hist_nbins=32
intensity_min=0.
intensity_max=1.
hatch_spacing_min=0.35 # mm
hatch_spacing_max=1.1 # mm
pixel_width=1 # mm
pixel_height=1 # mm
angle_jitter='ss.norm(loc=10, scale=0).rvs' # degrees
pixel_rotation='0' # degrees
merge_tolerances=[0.3, 0.4,] # mm
simplify_tolerances=[0.2,] # mm
savedir='/home/naka/art/plotter_svgs'
# make page
paper = Paper(paper_size)
drawbox = paper.get_drawbox(border)
# load
img = rgb2gray(io.imread(Path(image_path)))
xgen = ss.uniform(loc=0.5, scale=0.05).rvs
split_func = functools.partial(gp.split_along_longest_side_of_min_rectangle, xgen=xgen)
splits = gp.recursive_split_frac_buffer(
drawbox,
split_func=split_func,
p_continue=1,
depth=0,
depth_limit=7,
buffer_frac=-0.0
)
# split_func = functools.partial(gp.random_bezier_subdivide, x0=0.19, x1=0.85, n_eval_points=50)
# splits = gp.recursive_split_frac_buffer(
# drawbox,
# split_func=split_func,
# p_continue=0.7,
# depth=0,
# depth_limit=8,
# buffer_frac=-0.0
# )
bps = MultiPolygon([p for p in splits])
sk = vsketch.Vsketch()
sk.size(paper.page_format_mm)
sk.scale('1mm')
sk.penWidth('0.5mm')
sk.geometry(bps.boundary)
# tolerance=0.5
sk.display()
all_bps = gp.Shape(bps)
# make pixel polys
prms = []
for bp in tqdm(bps):
# a = np.random.uniform(0, 240)
dist_from_center = bp.centroid.distance(bps.centroid)
a = np.interp(dist_from_center, [0, 150], [0, 1020])
prm = {
'geometry':bp,
'raw_pixel_width':pixel_width,
'raw_pixel_height':pixel_height,
'angle':a,
'group': 'raw_hatch_pixel',
'magnitude': np.random.uniform(0.3, 2),
}
prms.append(prm)
raw_hatch_pixels = geopandas.GeoDataFrame(prms)
# rescale polys to fit in drawbox
bbox = box(*raw_hatch_pixels.total_bounds)
_, transform = gp.make_like(bbox, drawbox, return_transform=True)
A = gp.AffineMatrix(**transform)
scaled_hatch_pixels = raw_hatch_pixels.copy()
scaled_hatch_pixels['geometry'] = scaled_hatch_pixels.affine_transform(A.A_flat)
scaled_hatch_pixels['scaled_pixel_height'] = scaled_hatch_pixels['geometry'].apply(gp.get_height)
scaled_hatch_pixels['scaled_pixel_width'] = scaled_hatch_pixels['geometry'].apply(gp.get_width)
new_drawbox = so.unary_union(scaled_hatch_pixels.geometry)
db = gp.Poly(new_drawbox)
# scaled_hatch_pixels['angle'] = np.interp(scaled_hatch_pixels.geometry.centroid.y, [db.bottom, db.top], [0, 680]) + np.random.randn(len(scaled_hatch_pixels)) * 5
scaled_hatch_pixels['angle'] = scaled_hatch_pixels['angle'] // 5 * 5
# scaled_hatch_pixels['angle'] = np.interp(scaled_hatch_pixels['angle'], xp=[0, 180], fp=[30, 150])
qpg = NoisyQuantizedPiecewiseGrid(scaled_hatch_pixels, xstep=5, ystep=5, noise_scale=0.1, noise_mult=0.5, verbose=False)
qpg.make_grid()
spiral_angle_max = np.pi * 200
spiral_angle_min = 0
spiral_angle_spacing = np.pi * 0.053
sp_angle_range = np.arange(spiral_angle_min, spiral_angle_max, spiral_angle_spacing)
spiral_distances = np.linspace(0, 100, len(sp_angle_range))
start_points = [Point(np.cos(a) * d, np.sin(a) * d) for a, d in zip(sp_angle_range, spiral_distances)]
start_points = gp.make_like(MultiPoint(start_points), db.p)
poly = new_drawbox
pts = []
lss = []
n_steps = 8
for pt in tqdm(start_points):
vp = VectorParticle(pos=pt, grid=qpg, stepsize=1, momentum_factor=np.random.uniform(0,0))
for ii in range(n_steps):
vp.step()
if len(vp.pts) > 1:
ls = gp.merge_LineStrings([LineString(vp.pts)])
lss.append(ls)
for ls in lss:
ls
blss = gp.merge_LineStrings(lss).buffer(0.25, cap_style=2, join_style=2)
sk = vsketch.Vsketch()
sk.size(paper.page_format_mm)
sk.scale('1mm')
# for ii, ls in enumerate(lss):
# sk.stroke(ii + 1)
# sk.geometry(ls)
sk.stroke(1)
sk.geometry(blss)
sk.display()
merge_tolerances = [0.2, 0.3, 0.4, 0.5, 1]
simplify_tolerances = [0.2]
sk.vpype('splitall')
for tolerance in tqdm(merge_tolerances):
sk.vpype(f'linemerge --tolerance {tolerance}mm')
for tolerance in tqdm(simplify_tolerances):
sk.vpype(f'linesimplify --tolerance {tolerance}mm')
sk.vpype('linesort')
sk.display()
filename = 'vp_test17.svg'
savepath = Path(savedir).joinpath(filename).as_posix()
sk.save(savepath)
###Output
_____no_output_____
###Markdown
spiral start buffer shaded
###Code
image_path= '/home/naka/art/wigglesphere.jpg'
filename = 'vp_test18.svg'
paper_size:str = '11x14 inches'
border:float=20 # mm
image_rescale_factor:float=0.04
smooth_disk_size:int=1
hist_clip_limit=0.1
hist_nbins=32
intensity_min=0.
intensity_max=1.
hatch_spacing_min=0.35 # mm
hatch_spacing_max=1.1 # mm
pixel_width=1 # mm
pixel_height=1 # mm
angle_jitter='ss.norm(loc=10, scale=0).rvs' # degrees
pixel_rotation='0' # degrees
merge_tolerances=[0.3, 0.4,] # mm
simplify_tolerances=[0.2,] # mm
savedir='/home/naka/art/plotter_svgs'
# make page
paper = Paper(paper_size)
drawbox = paper.get_drawbox(border)
# load
img = rgb2gray(io.imread(Path(image_path)))
xgen = ss.uniform(loc=0.5, scale=0.05).rvs
split_func = functools.partial(gp.split_along_longest_side_of_min_rectangle, xgen=xgen)
splits = gp.recursive_split_frac_buffer(
drawbox,
split_func=split_func,
p_continue=1,
depth=0,
depth_limit=7,
buffer_frac=-0.0
)
# split_func = functools.partial(gp.random_bezier_subdivide, x0=0.19, x1=0.85, n_eval_points=50)
# splits = gp.recursive_split_frac_buffer(
# drawbox,
# split_func=split_func,
# p_continue=0.7,
# depth=0,
# depth_limit=8,
# buffer_frac=-0.0
# )
bps = MultiPolygon([p for p in splits])
sk = vsketch.Vsketch()
sk.size(paper.page_format_mm)
sk.scale('1mm')
sk.penWidth('0.5mm')
sk.geometry(bps.boundary)
# tolerance=0.5
sk.display()
all_bps = gp.Shape(bps)
# make pixel polys
prms = []
for bp in tqdm(bps):
# a = np.random.uniform(0, 240)
dist_from_center = bp.centroid.distance(bps.centroid)
a = np.interp(dist_from_center, [0, 150], [0, 1020])
prm = {
'geometry':bp,
'raw_pixel_width':pixel_width,
'raw_pixel_height':pixel_height,
'angle':a,
'group': 'raw_hatch_pixel',
'magnitude': np.random.uniform(0.3, 2),
}
prms.append(prm)
raw_hatch_pixels = geopandas.GeoDataFrame(prms)
# rescale polys to fit in drawbox
bbox = box(*raw_hatch_pixels.total_bounds)
_, transform = gp.make_like(bbox, drawbox, return_transform=True)
A = gp.AffineMatrix(**transform)
scaled_hatch_pixels = raw_hatch_pixels.copy()
scaled_hatch_pixels['geometry'] = scaled_hatch_pixels.affine_transform(A.A_flat)
scaled_hatch_pixels['scaled_pixel_height'] = scaled_hatch_pixels['geometry'].apply(gp.get_height)
scaled_hatch_pixels['scaled_pixel_width'] = scaled_hatch_pixels['geometry'].apply(gp.get_width)
new_drawbox = so.unary_union(scaled_hatch_pixels.geometry)
db = gp.Poly(new_drawbox)
# scaled_hatch_pixels['angle'] = np.interp(scaled_hatch_pixels.geometry.centroid.y, [db.bottom, db.top], [0, 680]) + np.random.randn(len(scaled_hatch_pixels)) * 5
scaled_hatch_pixels['angle'] = scaled_hatch_pixels['angle'] // 5 * 5
# scaled_hatch_pixels['angle'] = np.interp(scaled_hatch_pixels['angle'], xp=[0, 180], fp=[30, 150])
qpg = NoisyQuantizedPiecewiseGrid(scaled_hatch_pixels, xstep=5, ystep=5, noise_scale=0.1, noise_mult=0.5, verbose=False)
qpg.make_grid()
spiral_angle_max = np.pi * 200
spiral_angle_min = 0
spiral_angle_spacing = np.pi * 0.063
sp_angle_range = np.arange(spiral_angle_min, spiral_angle_max, spiral_angle_spacing)
spiral_distances = np.linspace(0, 100, len(sp_angle_range))
start_points = [Point(np.cos(a) * d, np.sin(a) * d) for a, d in zip(sp_angle_range, spiral_distances)]
start_points = gp.make_like(MultiPoint(start_points), db.p)
poly = new_drawbox
pts = []
lss = []
n_steps = 5
for pt in tqdm(start_points):
vp = VectorParticle(pos=pt, grid=qpg, stepsize=1, momentum_factor=np.random.uniform(0,0))
for ii in range(n_steps):
vp.step()
if len(vp.pts) > 1:
ls = gp.merge_LineStrings([LineString(vp.pts)])
lss.append(ls)
buffer_gen = ss.uniform(loc=1, scale=1.1).rvs
d_buffer_gen = functools.partial(np.random.uniform, low=-0.35, high=-0.25)
d_translate_factor_gen = ss.uniform(loc=0.6, scale=0.8).rvs
fills = []
all_polys = Polygon()
for ii, l in enumerate(tqdm(lss[:])):
p = l.buffer(0.1, cap_style=2, join_style=3)
p = p.buffer(buffer_gen(), cap_style=2, join_style=2)
angles_gen = gp.make_callable(sp_angle_range[ii]-90)
stp = gp.ScaleTransPrms(d_buffer=d_buffer_gen(),angles=angles_gen(),d_translate_factor=d_translate_factor_gen(), n_iters=300)
stp.d_buffers += np.random.uniform(-0.05, 0.05, size=stp.d_buffers.shape)
P = gp.Poly(p)
P.fill_scale_trans(**stp.prms)
visible_area = p.difference(all_polys)
visible_fill = P.fill.intersection(visible_area.buffer(1e-6))
fills.append(visible_fill)
all_polys = so.unary_union([all_polys, p])
blss = gp.merge_LineStrings([f for f in fills if f.length > 0.1])
sk = vsketch.Vsketch()
sk.size(paper.page_format_mm)
sk.scale('1mm')
# for ii, ls in enumerate(lss):
# sk.stroke(ii + 1)
# sk.geometry(ls)
sk.stroke(1)
sk.geometry(blss)
sk.display()
merge_tolerances = [0.2, 0.3, 0.4, 0.5, 1]
simplify_tolerances = [0.2]
sk.vpype('splitall')
for tolerance in tqdm(merge_tolerances):
sk.vpype(f'linemerge --tolerance {tolerance}mm')
for tolerance in tqdm(simplify_tolerances):
sk.vpype(f'linesimplify --tolerance {tolerance}mm')
sk.vpype('linesort')
sk.display()
filename = 'vp_test28.svg'
savepath = Path(savedir).joinpath(filename).as_posix()
sk.save(savepath)
###Output
_____no_output_____ |
notebooks/old/testing_hyperparams-prod.ipynb | ###Markdown
This notebook investigates several strategies to assess how to select hyperparameters for tikhonet.
###Code
from astropy.io import fits as fits
from matplotlib import pyplot as plt
import matplotlib
matplotlib.rcParams['figure.figsize']=[12,8]
matplotlib.rcParams['figure.figsize']=[12,8]
## Set up the sys.path in order to be able to import our modules
import os
import sys
module_path = os.path.abspath(os.path.join('..'))
if module_path not in sys.path:
sys.path.append(module_path)
"""
Based on code written by alechat
"""
import os
import numpy as np
from subprocess import Popen, PIPE
def fits2npy(fits_file, idx_hdu):
'''Read .fits containing the psf'''
data = fits.getdata(fits_file, idx_hdu)
nb_gal_row = data.shape[0]//96
data_list = []
idx_list = range(0, 10000)
for i in idx_list:
y = (96*i)%(nb_gal_row*96)
x = i//nb_gal_row * 96
data_list.append(data[x:x+96,y:y+96])
return np.asarray(data_list)
def StampCollection2Mosaic(stamplist,gal_dim=96,nb_gal=10000):
nb_gal_row = int(np.sqrt(nb_gal)) #nb galaxies per row
mosaic=np.empty((nb_gal_row*gal_dim,nb_gal_row*gal_dim))
for i in range(nb_gal):
y = (gal_dim*i)%(nb_gal_row*gal_dim)
x = i//nb_gal_row * gal_dim
mosaic[x:x+gal_dim,y:y+gal_dim]=stamplist[i,:,:,0]
return mosaic
def compute_pixel_error(target_file, hdu_target, reconst_file, gal_dim=96, nb_gal=10000,xslice=slice(28,69,1),yslice=slice(28,69,1)):
'''
X: ground truth
Y: estimated images
'''
nb_gal_row = int(np.sqrt(nb_gal)) #nb galaxies per row
X = fits.getdata(target_file,hdu_target)
Y = fits.getdata(reconst_file)
DIFF=X-Y
err = []
for i in range(nb_gal):
y = (gal_dim*i)%(nb_gal_row*gal_dim)
x = i//nb_gal_row * gal_dim
if gal_dim == 96:
err.append((np.linalg.norm((DIFF[x:x+gal_dim,y:y+gal_dim])[xslice,
yslice])**2)/(np.linalg.norm(X[x:x+gal_dim,y:y+gal_dim][xslice, yslice])**2))
else:
err.append((np.linalg.norm(DIFF[x:x+gal_dim,y:y+gal_dim])**2)/(np.linalg.norm(X[x:x+gal_dim,y:y+gal_dim])**2))
return err
def generate_shape_txt(gal_file, psf_file, output_file, gal_dim=96, mosaic_size=100, save_weights='', weights_input=''):
print('Computing ellipticity for file: %s'%(gal_file))
print('Saving result in: %s'%(output_file))
executable = '/data/shapelens_v2/shapelens-CEA-master/bin/get_shapes'
if weights_input in '-o-i':
cmd = '%s %s %s -p %s -g %d -s %d -T %s | tee %s'%(executable, weights_input, save_weights, psf_file, mosaic_size, gal_dim, gal_file, output_file)
else:
cmd = '%s -p %s -g %d -s %d -T %s | tee %s'%(executable, psf_file, mosaic_size, gal_dim, gal_file, output_file)
print(cmd)
cmd_file = 'get_shape.cmd'
try:
os.remove(cmd_file)
except OSError:
pass
f = open(cmd_file, 'w')
f.write('#! /bin/bash\n')
f.write('source /home/fsureau/.bashrc\n')
f.write(cmd)
f.close()
os.system('chmod 777 '+cmd_file)
p = Popen('./'+cmd_file, stdout=PIPE, stderr=PIPE)
stdout, stderr = p.communicate()
return stdout, stderr
def get_target_shape(gal_file, output_file, idx=4):
tmp_file = 'tmp'+str(np.random.randint(999))+'.fits'
tmp_psf_file = 'tmp_psf'+str(np.random.randint(999))+'.fits'
try:
os.remove(tmp_file)
except OSError:
pass
try:
os.remove(tmp_psf_file)
except OSError:
pass
images = fits.getdata(gal_file, idx)
psfs = fits.getdata(gal_file, 3)
fits.writeto(tmp_file, images)
fits.writeto(tmp_psf_file, psfs)
stdout, stderr = generate_shape_txt(tmp_file, tmp_psf_file, output_file)
try:
os.remove(tmp_file)
except OSError:
pass
try:
os.remove(tmp_psf_file)
except OSError:
pass
return stdout, stderr
def get_ellipticity(file_name):
ellip = []
with open(file_name, 'r') as f:
data = f.readlines()
for l in data:
ellip.append(np.array(l.split('\t')[3:5]).astype(np.float32))
return np.asarray(ellip)
def compute_ellipticity_error(fileX, fileY, plot_hist=False, max_idx=10000):
ellipX = get_ellipticity(fileX)[:max_idx]
ellipY = get_ellipticity(fileY)[:max_idx]
err = []
for i in range(len(ellipY)):
if (np.abs(ellipX[i]) > 1).any() or (np.abs(ellipY[i]) > 1).any():
continue
err.append(np.linalg.norm(ellipX[i]-ellipY[i]))
if plot_hist:
plt.figure()
plt.hist(err, 100, range=(0,0.6))
plt.show()
print('Total samples = %d' %len(err))
return err
def oracle_ellip(input_file, output_file, noise_std = 0):
data = fits.getdata(input_file, 1)
psf = fits.getdata(input_file, 3)
if noise_std != 0:
noise = np.random.normal(0, noise_std, size=data.shape)
data += noise
tmp_file = 'tmp'+str(np.random.randint(999))+'.fits'
tmp_psf_file = 'tmp_psf'+str(np.random.randint(999))+'.fits'
try:
os.remove(tmp_file)
except OSError:
pass
try:
os.remove(tmp_psf_file)
except OSError:
pass
fits.writeto(tmp_file, data)
fits.writeto(tmp_psf_file, psf)
generate_shape_txt(tmp_file, tmp_psf_file, output_file)
try:
os.remove(tmp_file)
except OSError:
pass
try:
os.remove(tmp_psf_file)
except OSError:
pass
from skimage import restoration
import copy
def dirac2d(ndim,shape,is_real=True):
impr = np.zeros([3] * ndim)
impr[(slice(1, 2), ) * ndim] = 1.0
return restoration.uft.ir2tf(impr, shape, is_real=is_real), impr
def correct_pixel_window_function(fpsf, size_img):
"""
Correct for pixel window effect (beware of aliasing)
This is useful for convolution with band limited signal sampled higher than Nyquist frequency,
to better approximate continuous convolution followed by sampling with discrete convolution.
@param fpsf fourier transform to be corrected for sampling effect
@param size_img size of input image (to check if real or complex transform)
@return the fourier transform with extra phase (same size as fpsf)
"""
mult_x=np.array(np.fft.fftfreq(size_img[0]),dtype=np.float64)
if fpsf.shape[1] != size_img[1]:
mult_y=np.array(np.fft.rfftfreq(size_img[1]),dtype=np.float64)
else:
mult_y=np.array(np.fft.fftfreq(size_img[1]),dtype=np.float64)
pwf_x=np.array([np.sinc(kx) for kx in mult_x],dtype=np.float64)
pwf_y=np.array([np.sinc(ky) for ky in mult_y],dtype=np.float64)
return copy.deepcopy(fpsf / np.outer(pwf_x, pwf_y))
def perform_shift_in_frequency(fpsf, size_img, shift):
"""
Add phase to fourier transform to shift signal centered in *shift* to 0
@param fpsf fourier transform needing extra phase factor
@param size_img size of input image (to check if real or complex transform)
@param shift, shift in [x,y] for array[x,y]
@return the fourier transform with extra phase (same size as fpsf)
"""
phase_factor= np.float64(2. * np.pi) * shift.astype(np.float64)
if phase_factor[0] ==0.:
kx_ft=np.zeros(size_img[0])+1.
else :
kx_ft=np.exp(np.fft.fftfreq(size_img[0],d=1./phase_factor[0])*1j)
if phase_factor[1] ==0.:
ky_ft=np.zeros(fpsf.shape[1],dtype=np.float64)+1.
else:
if fpsf.shape[1] != size_img[1]:
ky_ft=np.exp(np.fft.rfftfreq(size_img[1],d=1./phase_factor[1])*1j)
else:
ky_ft=np.exp(np.fft.fftfreq(size_img[1],d=1./phase_factor[1])*1j)
return copy.deepcopy(np.outer(kx_ft,ky_ft)*fpsf)
def recenter_psf(psf,param):
fpsf=np.fft.fft2(psf)
fpsf_ctr=perform_shift_in_frequency(fpsf, psf.shape, param)
return np.real(np.fft.ifft2(fpsf_ctr))
%load_ext line_profiler
%load_ext Cython
import line_profiler
#Set compiler directives (cf. http://docs.cython.org/src/reference/compilation.html)
from Cython.Compiler.Options import get_directive_defaults
directive_defaults = get_directive_defaults()
directive_defaults['profile'] = True
directive_defaults['linetrace'] = True
directive_defaults['binding'] = True
###Output
_____no_output_____
###Markdown
Cython versions
###Code
%%cython -f --compile-args=-DCYTHON_TRACE=1 --compile-args=-fopenmp --link-args=-fopenmp
#-a --compile-args=-fopenmp --link-args=-fopenmp
# cython: profile=True, linetrace=True, binding=True
#--annotate
import cython
from cython.parallel import prange
cimport numpy as cnp
import numpy as np
from libc.math cimport pow
@cython.boundscheck(False) # Deactivate bounds checking
@cython.wraparound(False) # Deactivate negative indexing.
@cython.cdivision(True)
cpdef cy_sure_proj_risk_est_1d(double tau, double[::1] psf_ps,double[::1] y_ps, double[::1] reg_ps,
Py_ssize_t Ndata, double sigma2):
cdef Py_ssize_t kx
cdef double den=0.
cdef double risk=0.
for kx in range(Ndata):
den=psf_ps[kx]+tau*reg_ps[kx]
risk+=psf_ps[kx]*y_ps[kx]/pow(den,2.0)+2.0*(sigma2-y_ps[kx])/den
return risk
@cython.boundscheck(False) # Deactivate bounds checking
@cython.wraparound(False) # Deactivate negative indexing.
@cython.cdivision(True)
cpdef cy_sure_pred_risk_est_1d(double tau, double[::1] psf_ps, double[::1] y_ps, double[::1] reg_ps,
Py_ssize_t Ndata, double sigma2):
cdef Py_ssize_t kx
cdef double wiener=0., wiener2=0.
cdef double risk=0.
for kx in range(Ndata):
wiener=psf_ps[kx]/(psf_ps[kx]+tau*reg_ps[kx])
wiener2=pow(wiener,2.0)
risk+=wiener2*y_ps[kx]+2*(sigma2-y_ps[kx])*wiener
return risk
@cython.boundscheck(False) # Deactivate bounds checking
@cython.wraparound(False) # Deactivate negative indexing.
@cython.cdivision(True)
cpdef cy_gcv_risk_est_1d(double tau,double[::1] psf_ps, double[::1] y_ps, double[::1] reg_ps,
Py_ssize_t Ndata, double sigma2):
cdef Py_ssize_t kx
cdef double wiener=0., wiener2=0.
cdef double den=0., num=0.
cdef double risk=0.
for kx in range(Ndata):
wiener=psf_ps[kx]/(psf_ps[kx]+tau*reg_ps[kx])
num+=y_ps[kx]*pow(1.0-wiener,2.0)
den+=(1.0-wiener)
return num/pow(den,2.0)
@cython.boundscheck(False) # Deactivate bounds checking
@cython.wraparound(False) # Deactivate negative indexing.
@cython.cdivision(True)
cpdef cy_pereyra_hyper(double tau0, double alpha, double beta, double[::1] psf_ps,
double[::1] y_ps, double[::1] reg_ps,
Py_ssize_t Ndata,Py_ssize_t Nit, double sigma2):
cdef Py_ssize_t kx,kit
cdef double deconvf2=0.
cdef double hyp_cur=tau0*sigma2
for kit in range(Nit):
deconvf2=0
for kx in range(Ndata):
deconvf2+=psf_ps[kx]*reg_ps[kx]*y_ps[kx]/pow(psf_ps[kx]+hyp_cur*reg_ps[kx],2.0)
hyp_cur=(Ndata/2.0 + alpha - 1.0)/(deconvf2+beta)*sigma2
return hyp_cur
###Output
_____no_output_____
###Markdown
Python Versions
###Code
def proj_sure(h2,y2,x,reg2,sigma2):
den=h2+x*reg2
den2=den**2
return np.sum(h2*y2/den2+2.0*(sigma2-y2)/den)
def pred_risk_est(h2,y2,x,reg2,sigma2):
wiener_f=h2/(h2+x*reg2)
wiener_f2=wiener_f**2
t1=np.sum(wiener_f2 * y2)
t2=2.0*(sigma2) * np.sum(wiener_f)
t3=-2* np.sum(wiener_f*y2)
return t1+t2+t3
def gcv(h2,y2,x,reg2):
wiener_f=h2/(h2+x*reg2)
res=np.sum(y2*(1.0-wiener_f)**2)
tr=np.sum(1.0-wiener_f)**2
return res/tr
import scipy.optimize
def pred_sure_list(h2,y2,xlist,reg2,sigma2):
return [pred_risk_est(h2,y2,x,reg2,sigma2) for x in xlist]
def proj_sure_list(h2,y2,xlist,reg2,sigma2):
return [proj_sure(h2,y2,x,reg2,sigma2) for x in xlist]
def gcv_list(h2,y2,xlist,reg2):
return [gcv(h2,y2,x,reg2) for x in xlist]
def lambda_pereyra_fourier(h2,y2,x,sigma2,reg2,nit=10,alpha=1,beta=1):
tau_list=[x]
tau_cur=x
n_im=np.size(y2)
num_f=h2*reg2*y2
for kit in range(nit):
deconvf2=num_f/(h2+tau_cur*sigma2*reg2)**2
tau_cur=(n_im/2.0 + alpha - 1.0)/(np.sum(deconvf2)+beta)
tau_list.append(tau_cur)
return np.array(tau_list)*sigma2
def min_risk_est_1d(h2,y2,reg2,sigma2,method,risktype="SureProj",tau0=1.0):
bounds=scipy.optimize.Bounds(1e-4,np.inf,keep_feasible=True)
if(risktype is "SureProj"):
if method is "Powell":
return scipy.optimize.minimize(cy_sure_proj_risk_est_1d, tau0, args=(h2,y2,reg2, y2.size,sigma2), method='Powell',
bounds=bounds,options={'xtol': 1e-4, 'maxiter': 100, 'disp': False})
elif method is "Brent" or "golden":
return scipy.optimize.minimize_scalar(cy_sure_proj_risk_est_1d, args=(h2,y2,reg2, y2.size,sigma2), method=method,
bounds=bounds,options={'xtol': 1e-4, 'maxiter': 100})
else:
raise ValueError("Optim. Method {0} is not supported".format(method))
elif(risktype is "SurePred"):
if method is "Powell":
return scipy.optimize.minimize(cy_sure_pred_risk_est_1d, tau0, args=(h2,y2,reg2, y2.size,sigma2), method='Powell',
bounds=bounds,options={'xtol': 1e-4, 'maxiter': 100, 'disp': False})
elif method is "Brent" or "golden":
return scipy.optimize.minimize_scalar(cy_sure_pred_risk_est_1d, args=(h2,y2,reg2, y2.size,sigma2), method=method,
bounds=bounds,options={'xtol': 1e-4, 'maxiter': 100})
else:
raise ValueError("Optim. Method {0} is not supported".format(method))
elif(risktype is "GCV"):
if method is "Powell":
return scipy.optimize.minimize(cy_gcv_risk_est_1d, tau0, args=(h2,y2,reg2, y2.size,sigma2), method='Powell',
bounds=bounds,options={'xtol': 1e-4, 'maxiter': 100, 'disp': False})
elif method is "Brent" or "golden":
return scipy.optimize.minimize_scalar(cy_gcv_risk_est_1d, args=(h2,y2,reg2, y2.size,sigma2), method=method,
bounds=bounds,options={'xtol': 1e-4, 'maxiter': 100})
else:
raise ValueError("Optim. Method {0} is not supported".format(method))
else:
raise ValueError("Risk {0} is not supported".format(risktype))
from skimage import restoration
write_path="/data/DeepDeconv/benchmark/euclidpsf/"
testset_file = 'image-shfl-0-multihdu.fits'
target_name=testset_file.replace('.fits','-target_fwhm0p07.fits')
data_path='/data/DeepDeconv/data/vsc_euclidpsfs/reshuffle/'
#ref=(slice(96,192),slice(96,192)) #for centering
ref=(slice(96,192),slice(0,96)) #for spiral
image=fits.getdata(data_path+testset_file,0)[ref]
psf=fits.getdata(data_path+testset_file,1)[ref]
target=fits.getdata(data_path+testset_file,2)[ref]
psf_ctr=recenter_psf(psf,np.array([-0.5,-0.5]))
psf_tar=fits.getdata('/data/DeepDeconv/data/gauss_fwhm0p07/starfield_image-000-0.fits')
plt.imshow(image)
import scipy.signal
from DeepDeconv.utils.deconv_utils import FISTA,tikhonov
from DeepDeconv.utils.data_utils import add_noise
np.random.seed(0)
SNR_SIMU=1000
noisy_im,SNR_list,sigma_list=add_noise(image,SNR=SNR_SIMU)
yf=restoration.uft.ufft2(noisy_im)
trans_func = restoration.uft.ir2tf(psf_ctr, image.shape, is_real=False)
deconv_im0=np.real(restoration.wiener(noisy_im,trans_func,1/SNR_list[0], is_real=False,clip=False))
tfdirac,imdirac=dirac2d(noisy_im.ndim,noisy_im.shape,is_real=False)
lap_tf, lap_ker = restoration.uft.laplacian(image.ndim, image.shape, is_real=False)
fullh=np.abs(trans_func)
lst_nonz=np.where(fullh>0)
trans_func_ps=np.abs(trans_func)**2
reg_dirac_ps=np.abs(tfdirac)**2
reg_lap_ps=np.abs(lap_tf)**2
im_ps=np.abs(noisy_im)**2
sigma2=sigma_list[0]**2
h2=np.abs(trans_func)**2 #This is |h_w|^2
l2=np.abs(lap_tf)**2 #This is |l_w|^2 in case of laplacian
d2=np.abs(tfdirac)**2 #This is 1 (tikhonov:Dirac kernel)
y2=np.abs(restoration.uft.ufft2(noisy_im))**2 #This is the FFT of noisy image
lst_nonz=np.where(trans_func_ps>1e-8)
h2_nonz=np.abs(trans_func[lst_nonz])**2 #This is |h_w|^2
l2_nonz=np.abs(lap_tf[lst_nonz])**2 #This is |l_w|^2 in case of laplacian
d2_nonz=np.abs(tfdirac[lst_nonz])**2 #This is 1 (tikhonov:Dirac kernel)
y2_nonz=np.abs(restoration.uft.ufft2(noisy_im)[lst_nonz])**2 #This is the FFT of noisy image
# profile = line_profiler.LineProfiler(cy_sure_proj_risk_est_1d)
# profile.runcall(min_sure_proj_risk_est_1d, y2_nonz,h2_nonz,d2_nonz, y2_nonz.size,sigma_list[0]**2,"Brent")
# profile.print_stats()
# %lprun -f cy_sure_proj_risk_est_1d cy_sure_proj_risk_est_1d(y2_nonz,h2_nonz,d2_nonz, y2_nonz.size,sigma_list[0]**2)
###Output
_____no_output_____
###Markdown
Test speed of multidim vs scale minimization.
###Code
tic = timeit.default_timer()
print(scipy.optimize.minimize(cy_sure_proj_risk_est_1d, 1.0, args=(h2_nonz,y2_nonz,d2_nonz, y2_nonz.size,sigma_list[0]**2), method='Powell',
bounds=(0,None),options={'xtol': 0.001, 'maxiter': 100, 'disp': True}))
toc = timeit.default_timer()
print("CYTHON MIN=",toc-tic)
tic = timeit.default_timer()
print(scipy.optimize.minimize_scalar(cy_sure_proj_risk_est_1d, args=(h2_nonz,y2_nonz,d2_nonz, y2_nonz.size,sigma_list[0]**2), method='brent',
bounds=(0,None),options={'xtol': 0.001, 'maxiter': 100}))
toc = timeit.default_timer()
print("CYTHON2 MIN=",toc-tic)
###Output
Optimization terminated successfully.
Current function value: -10.777978
Iterations: 5
Function evaluations: 91
direc: array([[1.]])
fun: array(-10.77797793)
message: 'Optimization terminated successfully.'
nfev: 91
nit: 5
status: 0
success: True
x: array(0.00147857)
CYTHON MIN= 0.009287958033382893
fun: -10.777978031994426
nfev: 25
nit: 21
success: True
x: 0.0014799029962370754
CYTHON2 MIN= 0.001506648026406765
###Markdown
Test speed and results for different risk minimization
###Code
def manual_deconv_l2(noisy_im,trans_func,trans_reg,hyp_param):
hfstar=np.conj(trans_func)
h2=np.abs(trans_func)**2
d2=np.abs(trans_reg)**2
filter_f=hfstar/(h2+hyp_param*d2)#/SNR_list[0]
yf=restoration.uft.ufft2(noisy_im)
sol=np.real(restoration.uft.uifft2(filter_f*yf))
return sol
import timeit
check_hyper=10**np.arange(-5.0,2.5,0.001)
sigma2=sigma_list[0]**2
for reg in ["TIKHO","WIENER"]:
if reg is "TIKHO":
reg2=d2
reg2_nonz=d2_nonz
print("TIKHO SNR {0}:".format(SNR_SIMU))
else:
reg2=l2
reg2_nonz=l2_nonz
print("WIENER SNR {0}:".format(SNR_SIMU))
print("\t TEST SURE PROJ")
tic = timeit.default_timer()
py_sure_proj_risk=proj_sure_list(h2_nonz,y2_nonz,check_hyper,reg2_nonz,sigma2)
py_sure_proj_min_risk=check_hyper[np.argmin(py_sure_proj_risk)]
toc = timeit.default_timer()
print("\t\t PYTHON=",toc-tic,np.min(py_sure_proj_risk),py_sure_proj_min_risk)
tic = timeit.default_timer()
cy_sure_proj_risk= min_risk_est_1d(h2_nonz,y2_nonz,reg2_nonz,sigma2,"Brent",risktype="SureProj",tau0=1.0)
toc = timeit.default_timer()
print("\t\t CYTHON=",toc-tic,cy_sure_proj_risk.fun,cy_sure_proj_risk.x)
print("\t TEST SURE PRED")
tic = timeit.default_timer()
py_sure_pred_risk=pred_sure_list(h2,y2,check_hyper,reg2,sigma2)
py_sure_pred_min_risk=check_hyper[np.argmin(py_sure_pred_risk)]
toc = timeit.default_timer()
print("\t\t PYTHON=",toc-tic,np.min(py_sure_pred_risk),py_sure_pred_min_risk)
tic = timeit.default_timer()
cy_sure_pred_risk= min_risk_est_1d(h2.flatten(),y2.flatten(),reg2.flatten(),sigma2,"Brent",risktype="SurePred",tau0=1.0)
toc = timeit.default_timer()
print("\t\t CYTHON=",toc-tic,cy_sure_pred_risk.fun,cy_sure_pred_risk.x)
print("\t TEST GCV")
tic = timeit.default_timer()
py_gcv_risk=gcv_list(h2,y2,check_hyper,reg2)
py_gcv_min_risk=check_hyper[np.argmin(py_gcv_risk)]
toc = timeit.default_timer()
print("\t\t PYTHON=",toc-tic,np.min(py_gcv_risk),py_gcv_min_risk)
tic = timeit.default_timer()
cy_gcv_risk= min_risk_est_1d(h2.flatten(),y2.flatten(),reg2.flatten(),sigma2,"Brent",risktype="GCV",tau0=1.0)
toc = timeit.default_timer()
print("\t\t CYTHON=",toc-tic,cy_gcv_risk.fun,cy_gcv_risk.x)
print("\t TEST Pereyra")
tau0=1.0
alpha_per=1.0
beta_per=1.0
nit_per=100
tic = timeit.default_timer()
py_per_risk=lambda_pereyra_fourier(h2,y2,tau0,sigma2,reg2,nit=nit_per,alpha=alpha_per,beta=beta_per)
py_per_min_risk=py_per_risk[-1]
toc = timeit.default_timer()
print("\t\t PYTHON=",toc-tic,py_per_min_risk)
tic = timeit.default_timer()
cy_per_risk= cy_pereyra_hyper(tau0,alpha_per,beta_per,h2.flatten(),y2.flatten(),reg2.flatten(),h2.size,nit_per,sigma2)
toc = timeit.default_timer()
print("\t\t CYTHON=",toc-tic,cy_per_risk,"\n")
if reg is "TIKHO":
deconv_sure_proj_tikho=manual_deconv_l2(noisy_im,trans_func,tfdirac,cy_sure_proj_risk.x)
deconv_sure_pred_tikho=manual_deconv_l2(noisy_im,trans_func,tfdirac,cy_sure_pred_risk.x)
deconv_tikho_gcv=manual_deconv_l2(noisy_im,trans_func,tfdirac,cy_gcv_risk.x)
deconv_tikho_per=manual_deconv_l2(noisy_im,trans_func,tfdirac,cy_per_risk)
else:
deconv_sure_proj_wiener=manual_deconv_l2(noisy_im,trans_func,lap_tf,cy_sure_proj_risk.x)
deconv_sure_pred_wiener=manual_deconv_l2(noisy_im,trans_func,lap_tf,cy_sure_pred_risk.x)
deconv_wiener_gcv=manual_deconv_l2(noisy_im,trans_func,lap_tf,cy_gcv_risk.x)
deconv_wiener_per=manual_deconv_l2(noisy_im,trans_func,lap_tf,cy_per_risk)
###Output
TIKHO SNR 1000:
TEST SURE PROJ
PYTHON= 0.6483417088165879 -10.777978076722507 0.0014791083881706755
CYTHON= 0.0014067478477954865 -10.777978095581187 0.0014794472993206098
TEST SURE PRED
PYTHON= 1.4184552738443017 -4.117302247335232 0.0024547089156895414
CYTHON= 0.002738378942012787 -4.117302248029835 0.0024564598387320294
TEST GCV
PYTHON= 1.3873953707516193 8.177611717928745e-10 0.002437810818373227
CYTHON= 0.0022709928452968597 8.177611655678411e-10 0.0024387218094157445
TEST Pereyra
PYTHON= 0.013039573095738888 0.0015343027654104795
CYTHON= 0.0043977489694952965 0.0015343027654104759
WIENER SNR 1000:
TEST SURE PROJ
PYTHON= 0.6701111681759357 -10.63985974298706 8.016780633882364e-05
CYTHON= 0.0016797389835119247 -10.63986004684533 8.023950959543858e-05
TEST SURE PRED
PYTHON= 1.413931267336011 -4.116754480511776 0.00017619760464133175
CYTHON= 0.0027490947395563126 -4.116754481162996 0.00017604851328202983
TEST GCV
PYTHON= 1.3827669592574239 8.395200698854426e-10 0.00018793168168051096
CYTHON= 0.0017383592203259468 8.395200422711281e-10 0.00018772112272045206
TEST Pereyra
PYTHON= 0.013496358878910542 0.0014867202443582634
CYTHON= 0.004087153822183609 0.0014867202443582623
###Markdown
Results Obtained through prototypeTikho SNR : 20.0 SURE PROJ= 5.0118723362713204 SURE PRED= 0.758577575029002 GCV= 0.6456542290345031 PEREYRA GAMMA= 47.408643953151675 Wiener SNR: 20.0 SURE PROJ= 97.72372209554754 SURE PRED= 97.72372209554754 GCV= 97.72372209554754 PEREYRA GAMMA= 46.96075963415518
###Code
hyp_param=1.0/(SNR_list[0]) #Choice of Alexis (incorrect)
skimage_tikho=restoration.wiener(noisy_im,trans_func,hyp_param,reg=tfdirac, is_real=False,clip=False)
skimage_wiener=restoration.wiener(noisy_im,trans_func,hyp_param,reg=lap_tf, is_real=False,clip=False)
plt.figure()
plt.subplot(221),plt.imshow(np.abs(target)),plt.colorbar(),plt.title('Target')
plt.subplot(222),plt.imshow(np.abs(noisy_im)),plt.colorbar(),plt.title('Noisy')
plt.subplot(223),plt.imshow(np.abs(skimage_tikho)),plt.colorbar(),plt.title('Alexis Tikho')
plt.subplot(224),plt.imshow(np.abs(skimage_wiener)),plt.colorbar(),plt.title('Alexis Wiener')
plt.figure()
plt.subplot(221),plt.imshow(np.abs(deconv_sure_proj_tikho)),plt.colorbar(),plt.title('Sure Proj Tikho')
plt.subplot(222),plt.imshow(np.abs(deconv_sure_proj_wiener)),plt.colorbar(),plt.title('Sure Proj Wiener')
plt.subplot(223),plt.imshow(np.abs(deconv_sure_pred_tikho)),plt.colorbar(),plt.title('Sure Pred Tikho')
plt.subplot(224),plt.imshow(np.abs(deconv_sure_pred_wiener)),plt.colorbar(),plt.title('Sure Pred Wiener')
plt.figure()
plt.subplot(221),plt.imshow(np.abs(deconv_tikho_gcv)),plt.colorbar(),plt.title('GCV Tikho')
plt.subplot(222),plt.imshow(np.abs(deconv_wiener_gcv)),plt.colorbar(),plt.title('GCV Wiener')
plt.subplot(223),plt.imshow(np.abs(deconv_tikho_per)),plt.colorbar(),plt.title('Pereyra Tikho')
plt.subplot(224),plt.imshow(np.abs(deconv_wiener_per)),plt.colorbar(),plt.title('Pereyra Wiener')
plt.figure()
plt.subplot(221),plt.imshow(np.abs(skimage_tikho)-target),plt.colorbar(),plt.title('Alexis Tikho')
plt.subplot(222),plt.imshow(np.abs(skimage_wiener)-target),plt.colorbar(),plt.title('Alexis Wiener')
plt.subplot(223),plt.imshow(np.abs(deconv_sure_proj_tikho)-target),plt.colorbar(),plt.title('Sure Proj Tikho')
plt.subplot(224),plt.imshow(np.abs(deconv_sure_proj_wiener)-target),plt.colorbar(),plt.title('Sure Proj Wiener')
plt.figure()
plt.subplot(221),plt.imshow(np.abs(deconv_sure_pred_tikho)-target),plt.colorbar(),plt.title('Sure Pred Tikho')
plt.subplot(222),plt.imshow(np.abs(deconv_sure_pred_wiener)-target),plt.colorbar(),plt.title('Sure Pred Wiener')
plt.subplot(223),plt.imshow(np.abs(deconv_tikho_gcv)-target),plt.colorbar(),plt.title('GCV Tikho')
plt.subplot(224),plt.imshow(np.abs(deconv_wiener_gcv)-target),plt.colorbar(),plt.title('GCV Wiener')
plt.figure()
plt.subplot(221),plt.imshow(np.abs(deconv_tikho_per)-target),plt.colorbar(),plt.title('Pereyra Tikho')
plt.subplot(222),plt.imshow(np.abs(deconv_wiener_per)-target),plt.colorbar(),plt.title('Pereyra Wiener')
###Output
_____no_output_____ |
notebooks/advesarial/seq2seqUNSUP.ipynb | ###Markdown
Data
###Code
#start = time.time()
dataPath = '/data/fs4/datasets/pcaps/smallFlows.pcap'
#pcaps = rdpcap(dataPath)
#sessionPrep = pcaps.sessions()
#end = time.time()
#print end - start
def parse_header(line): # pragma: no cover
ret_dict = {}
h = line.split()
if h[2] == 'IP6':
"""
Conditional formatting based on ethernet type.
IPv4 format: 0.0.0.0.port
IPv6 format (one of many): 0:0:0:0:0:0.port
"""
ret_dict['src_port'] = h[3].split('.')[-1]
ret_dict['src_ip'] = h[3].split('.')[0]
ret_dict['dest_port'] = h[5].split('.')[-1].split(':')[0]
ret_dict['dest_ip'] = h[5].split('.')[0]
else:
if len(h[3].split('.')) > 4:
ret_dict['src_port'] = h[3].split('.')[-1]
ret_dict['src_ip'] = '.'.join(h[3].split('.')[:-1])
else:
ret_dict['src_ip'] = h[3]
ret_dict['src_port'] = ''
if len(h[5].split('.')) > 4:
ret_dict['dest_port'] = h[5].split('.')[-1].split(':')[0]
ret_dict['dest_ip'] = '.'.join(h[5].split('.')[:-1])
else:
ret_dict['dest_ip'] = h[5].split(':')[0]
ret_dict['dest_port'] = ''
return ret_dict
def parse_data(line): # pragma: no cover
ret_str = ''
h, d = line.split(':', 1)
ret_str = d.strip().replace(' ', '')
return ret_str
def process_packet(output): # pragma: no cover
# TODO!! throws away the first packet!
ret_header = {}
ret_dict = {}
ret_data = ''
hasHeader = False
for line in output:
line = line.strip()
if line:
if not line.startswith('0x'):
# header line
if ret_dict and ret_data:
# about to start new header, finished with hex
ret_dict['data'] = ret_data
yield ret_dict
ret_dict.clear()
ret_header.clear()
ret_data = ''
hasHeader = False
# parse next header
try:
ret_header = parse_header(line)
ret_dict.update(ret_header)
hasHeader = True
except:
ret_header.clear()
ret_dict.clear()
ret_data = ''
hasHeader = False
else:
# hex data line
if hasHeader:
data = parse_data(line)
ret_data = ret_data + data
else:
continue
def is_clean_packet(packet): # pragma: no cover
"""
Returns whether or not the parsed packet is valid
or not. Checks that both the src and dest
ports are integers. Checks that src and dest IPs
are valid address formats. Checks that packet data
is hex. Returns True if all tests pass, False otherwise.
"""
if not packet['src_port'].isdigit(): return False
if not packet['dest_port'].isdigit(): return False
if packet['src_ip'].isalpha(): return False
if packet['dest_ip'].isalpha(): return False
if 'data' in packet:
try:
int(packet['data'], 16)
except:
return False
return True
def order_keys(hexSessionDict):
"""
Returns list of the hex sessions in (rough) time order.
"""
orderedKeys = []
for key in sorted(hexSessionDict.keys(), key=lambda key: hexSessionDict[key][1]):
orderedKeys.append(key)
return orderedKeys
def read_pcap(path): # pragma: no cover
print 'starting reading pcap file'
hex_sessions = {}
proc = subprocess.Popen('tcpdump -nn -tttt -xx -r '+path,
shell=True,
stdout=subprocess.PIPE)
insert_num = 0 # keeps track of insertion order into dict
for packet in process_packet(proc.stdout):
if not is_clean_packet(packet):
continue
if 'data' in packet:
key = (packet['src_ip']+":"+packet['src_port'], packet['dest_ip']+":"+packet['dest_port'])
rev_key = (key[1], key[0])
if key in hex_sessions:
hex_sessions[key][0].append(packet['data'])
elif rev_key in hex_sessions:
hex_sessions[rev_key][0].append(packet['data'])
else:
hex_sessions[key] = ([packet['data']], insert_num)
insert_num += 1
print 'finished reading pcap file'
return hex_sessions
def pickleFile(thing2save, file2save2=None, filePath='/work/notebooks/drawModels/', fileName='myModels'): # pragma: no cover
if file2save2 is None:
f = file(filePath+fileName+'.pickle', 'wb')
else:
f = file(filePath+file2save2, 'wb')
cPickle.dump(thing2save, f, protocol=cPickle.HIGHEST_PROTOCOL)
f.close()
def loadFile(filePath): # pragma: no cover
file2open = file(filePath, 'rb')
loadedFile = cPickle.load(file2open)
file2open.close()
return loadedFile
def removeBadSessionizer(hexSessionDict, saveFile=False, dataPath=None, fileName=None): # pragma: no cover
for ses in hexSessionDict.keys():
paclens = []
for pac in hexSessionDict[ses][0]:
paclens.append(len(pac))
if np.min(paclens)<80:
del hexSessionDict[ses]
if saveFile:
print 'pickling sessions'
pickleFile(hexSessionDict, filePath=dataPath, fileName=fileName)
return hexSessionDict
# Making the hex dictionary
def hexTokenizer(): # pragma: no cover
hexstring = '''0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F,
10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 1A, 1B,
1C, 1D, 1E, 1F, 20, 21, 22, 23, 24, 25, 26, 27,
28, 29, 2A, 2B, 2C, 2D, 2E, 2F, 30, 31, 32, 33,
34, 35, 36, 37, 38, 39, 3A, 3B, 3C, 3D, 3E, 3F,
40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 4A, 4B,
4C, 4D, 4E, 4F, 50, 51, 52, 53, 54, 55, 56, 57,
58, 59, 5A, 5B, 5C, 5D, 5E, 5F, 60, 61, 62, 63,
64, 65, 66, 67, 68, 69, 6A, 6B, 6C, 6D, 6E, 6F,
70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 7A, 7B,
7C, 7D, 7E, 7F, 80, 81, 82, 83, 84, 85, 86, 87,
88, 89, 8A, 8B, 8C, 8D, 8E, 8F, 90, 91, 92, 93,
94, 95, 96, 97, 98, 99, 9A, 9B, 9C, 9D, 9E, 9F,
A0, A1, A2, A3, A4, A5, A6, A7, A8, A9, AA, AB,
AC, AD, AE, AF, B0, B1, B2, B3, B4, B5, B6, B7,
B8, B9, BA, BB, BC, BD, BE, BF, C0, C1, C2, C3,
C4, C5, C6, C7, C8, C9, CA, CB, CC, CD, CE, CF,
D0, D1, D2, D3, D4, D5, D6, D7, D8, D9, DA, DB,
DC, DD, DE, DF, E0, E1, E2, E3, E4, E5, E6, E7,
E8, E9, EA, EB, EC, ED, EE, EF, F0, F1, F2, F3,
F4, F5, F6, F7, F8, F9, FA, FB, FC, FD, FE, FF'''.replace('\t', '')
hexList = [x.strip() for x in hexstring.lower().split(',')]
hexList.append('<EOP>') # End Of Packet token
hexDict = {}
for key, val in enumerate(hexList):
if len(val) == 1:
val = '0'+val
hexDict[val] = key #dictionary k=hex, v=int
return hexDict
hexSessions = read_pcap(dataPath)
hexSessions = removeBadSessionizer(hexSessions)
hexSessionsKeys = order_keys(hexSessions)
hexDict = hexTokenizer()
###Output
starting reading pcap file
finished reading pcap file
###Markdown
Dictionary of IP communications
###Code
def oneHot(index, granular = 'hex'):
if granular == 'hex':
vecLen = 257
else:
vecLen = 17
zeroVec = np.zeros(vecLen)
zeroVec[index] = 1.0
return zeroVec
def oneSessionEncoder(sessionPackets, hexDict, maxPackets=2, packetTimeSteps=100,
packetReverse=False, charLevel=False, padOldTimeSteps=True): # pragma: no cover
sessionCollect = []
packetCollect = []
if charLevel:
vecLen = 17
else:
vecLen = 257
if len(sessionPackets) > maxPackets: #crop the number of sessions to maxPackets
sessionList = copy(sessionPackets[:maxPackets])
else:
sessionList = copy(sessionPackets)
for packet in sessionList:
packet = packet[32:36]+packet[44:46]+packet[46:48]+packet[52:60]+packet[60:68]+\
packet[68:70]+packet[70:72]+packet[72:74]
packet = [hexDict[packet[i:i+2]] for i in xrange(0,len(packet)-2+1,2)]
if len(packet) >= packetTimeSteps: #crop packet to length packetTimeSteps
packet = packet[:packetTimeSteps]
packet = packet+[256] #add <EOP> end of packet token
else:
packet = packet+[256] #add <EOP> end of packet token
packetCollect.append(packet)
pacMat = np.array([oneHot(x) for x in packet]) #one hot encoding of packet into a matrix
pacMatLen = len(pacMat)
#padding packet
if packetReverse:
pacMat = pacMat[::-1]
if pacMatLen < packetTimeSteps:
#pad by stacking zeros on top of data so that earlier timesteps do not have information
#padding the packet such that zeros are after the actual info for better translation
if padOldTimeSteps:
pacMat = np.vstack( ( np.zeros((packetTimeSteps-pacMatLen,vecLen)), pacMat) )
else:
pacMat = np.vstack( (pacMat, np.zeros((packetTimeSteps-pacMatLen,vecLen))) )
if pacMatLen > packetTimeSteps:
pacMat = pacMat[:packetTimeSteps, :]
sessionCollect.append(pacMat)
#padding session
sessionCollect = np.asarray(sessionCollect, dtype=theano.config.floatX)
numPacketsInSession = sessionCollect.shape[0]
if numPacketsInSession < maxPackets:
#pad sessions to fit the
sessionCollect = np.vstack( (sessionCollect,np.zeros((maxPackets-numPacketsInSession,
packetTimeSteps, vecLen))) )
return sessionCollect, packetCollect
###Output
_____no_output_____
###Markdown
Learning functions
###Code
def floatX(X):
return np.asarray(X, dtype=theano.config.floatX)
def dropout(X, p=0.):
if p != 0:
retain_prob = 1 - p
X = X / retain_prob * srng.binomial(X.shape, p=retain_prob, dtype=theano.config.floatX)
return X
# Gradient clipping
def clip_norm(g, c, n):
'''n is the norm, c is the threashold, and g is the gradient'''
if c > 0:
g = T.switch(T.ge(n, c), g*c/n, g)
return g
def clip_norms(gs, c):
norm = T.sqrt(sum([T.sum(g**2) for g in gs]))
return [clip_norm(g, c, norm) for g in gs]
# Regularizers
def max_norm(p, maxnorm = 0.):
if maxnorm > 0:
norms = T.sqrt(T.sum(T.sqr(p), axis=0))
desired = T.clip(norms, 0, maxnorm)
p = p * (desired/ (1e-7 + norms))
return p
def gradient_regularize(p, g, l1 = 0., l2 = 0.):
g += p * l2
g += T.sgn(p) * l1
return g
def weight_regularize(p, maxnorm = 0.):
p = max_norm(p, maxnorm)
return p
def Adam(params, cost, lr=0.0002, b1=0.1, b2=0.001, e=1e-8, l1 = 0., l2 = 0., maxnorm = 0., c = 8):
updates = []
grads = T.grad(cost, params)
grads = clip_norms(grads, c)
i = theano.shared(floatX(0.))
i_t = i + 1.
fix1 = 1. - b1**(i_t)
fix2 = 1. - b2**(i_t)
lr_t = lr * (T.sqrt(fix2) / fix1)
for p, g in zip(params, grads):
m = theano.shared(p.get_value() * 0.)
v = theano.shared(p.get_value() * 0.)
m_t = (b1 * g) + ((1. - b1) * m)
v_t = (b2 * T.sqr(g)) + ((1. - b2) * v)
g_t = m_t / (T.sqrt(v_t) + e)
g_t = gradient_regularize(p, g_t, l1=l1, l2=l2)
p_t = p - (lr_t * g_t)
p_t = weight_regularize(p_t, maxnorm=maxnorm)
updates.append((m, m_t))
updates.append((v, v_t))
updates.append((p, p_t))
updates.append((i, i_t))
#if iteration%100 == 0:
# updates.append((lr, lr*0.93))
#else:
# updates.append((lr, lr))
return updates
def RMSprop(cost, params, lr = 0.001, l1 = 0., l2 = 0., maxnorm = 0., rho=0.9, epsilon=1e-6, c = 8):
grads = T.grad(cost, params)
grads = clip_norms(grads, c)
updates = []
for p, g in zip(params, grads):
g = gradient_regularize(p, g, l1 = l1, l2 = l2)
acc = theano.shared(p.get_value() * 0.)
acc_new = rho * acc + (1 - rho) * g ** 2
updates.append((acc, acc_new))
updated_p = p - lr * (g / T.sqrt(acc_new + epsilon))
updated_p = weight_regularize(updated_p, maxnorm = maxnorm)
updates.append((p, updated_p))
return updates
###Output
_____no_output_____
###Markdown
Unsupervised feature extractor Initialization for both the unsupervised net and the classifier
###Code
X = T.tensor4('inputs', dtype=theano.config.floatX)
Y = T.matrix('targets')
wtstd = 0.2
dimIn = 257 #hex has 256 characters + the <EOP> character
dim = 100 #dimension reduction size
rnnType = 'gru' #gru or lstm
bidirectional = False
linewt_init = IsotropicGaussian(wtstd)
line_bias = Constant(1.0)
rnnwt_init = IsotropicGaussian(wtstd)
rnnbias_init = Constant(0.0)
packetReverse = False
###ENCODER
if rnnType == 'gru':
rnn = GatedRecurrent(dim=dim, weights_init = rnnwt_init, biases_init = rnnbias_init, name = 'gru')
dimMultiplier = 2
else:
rnn = LSTM(dim=dim, weights_init = rnnwt_init, biases_init = rnnbias_init, name = 'lstm')
dimMultiplier = 4
fork = Fork(output_names=['linear', 'gates'],
name='fork', input_dim=dimIn, output_dims=[dim, dim * dimMultiplier],
weights_init = linewt_init, biases_init = line_bias)
###CONTEXT
if rnnType == 'gru':
rnnContext = GatedRecurrent(dim=dim, weights_init = rnnwt_init,
biases_init = rnnbias_init, name = 'gruContext')
else:
rnnContext = LSTM(dim=dim, weights_init = rnnwt_init, biases_init = rnnbias_init,
name = 'lstmContext')
forkContext = Fork(output_names=['linearContext', 'gatesContext'],
name='forkContext', input_dim=dim, output_dims=[dim, dim * dimMultiplier],
weights_init = linewt_init, biases_init = line_bias)
if bidirectional:
dimDec = dim*2
if rnnType == 'gru':
rnnContextRev = GatedRecurrent(dim=dim, weights_init = rnnwt_init,
biases_init = rnnbias_init, name = 'gruContextRev')
else:
rnnContextRev = LSTM(dim=dim, weights_init = rnnwt_init, biases_init = rnnbias_init,
name = 'lstmContextRev')
rnnContextRev.initialize()
else:
dimDec = dim
###DECODER
if rnnType == 'gru':
rnnDec = GatedRecurrent(dim=dimIn, weights_init = rnnwt_init,
biases_init = rnnbias_init, name = 'gruDecoder')
else:
rnnDec = LSTM(dim=dimIn, weights_init = rnnwt_init, biases_init = rnnbias_init, name = 'lstmDecoder')
forkDec = Fork(output_names=['declinear', 'decgates'],
name='forkDec', input_dim=dimDec, output_dims=[dim, dimIn*dimMultiplier],
weights_init = linewt_init, biases_init = line_bias)
forkFinal = Fork(output_names=['finallinear', 'finalgates'],
name='forkFinal', input_dim=dim, output_dims=[dimIn, dimIn*dimMultiplier],
weights_init = linewt_init, biases_init = line_bias)
#initialize the weights in all the functions
fork.initialize()
rnn.initialize()
forkContext.initialize()
rnnContext.initialize()
forkDec.initialize()
forkFinal.initialize()
rnnDec.initialize()
###Output
_____no_output_____
###Markdown
Unsupervised graph ctxtest = theano.function([X], [data1, hContext], allow_input_downcast=True)testones = np.ones((12,16,1,257))testones[0] = testones[0]+3testones[-1] = testones[-1]-3ctxtest(testones)[0].shape(4, 2, 1, 100)ctxtest(testones)[1][:,:-1].reshape((4, maxPackets, packetTimeSteps, dim)) = (4, 3, 16, 100)[:,1:,:-1] = (4, 2, 15, 100)ctxtest(testones)[0].reshape((4, maxPackets, packetTimeSteps, dim))[:,1:,:-1]data1, (batch_size, maxPackets, packetTimeSteps, dim))[:,1:,:-1]),(12, 16, 1, 100)ctxtest(testones)[0]START HERE!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Transform hContext then add it to the concatenated zeros stuff(4, 2, 16, 100) (batch_size, maxPackets-1, 1, dim)(np.concatenate((np.zeros((4,2,1,100)),ctxtest(testones)[0].reshape((4, maxPackets, packetTimeSteps, dim))[:,1:,:-1]), axis = 2) + ctxtest(testones)[1][:,:-1])from blocks.bricks.recurrent import BaseRecurrent, recurrentfrom blocks.bricks.recurrent import SimpleRecurrentfrom blocks import initializationfrom blocks.bricks import Identityfrom blocks.initialization import Identity@recurrentrnntest = SimpleRecurrent( dim=6, activation=Identity(), name='second_recurrent_layer', weights_init=initialization.Identity())rnntest.initialize()
###Code
#This section can be skipped if you want to go directly to the classifier
learning_rate = theano.shared(np.array(0.001, dtype=theano.config.floatX))
learning_decay = np.array(0.9, dtype=theano.config.floatX)
batch_size = 20
maxPackets = 6
packetTimeSteps = 16
attention = False
def onestepEnc(X):
data1, data2 = fork.apply(X)
if rnnType == 'gru':
hEnc = rnn.apply(data1, data2)
else:
hEnc, _ = rnn.apply(data2)
return hEnc, data1
[hEnc, data1], _ = theano.scan(onestepEnc, X) #(mini*numPackets, packetLen, 1, hexdictLen)
hEncReshape = T.reshape(hEnc[:,-1], (-1, maxPackets, 1, dim)) #[:,-1] takes the last rep for each packet
#(mini, numPackets, 1, dimReduced)
def onestepContext(hEncReshape):
data3, data4 = forkContext.apply(hEncReshape)
if rnnType == 'gru':
hContext = rnnContext.apply(data3, data4)
else:
hContext, _ = rnnContext.apply(data4)
if bidirectional:
data3 = data3[::-1]
data4 = data4[::-1]
if rnnType == 'gru':
hContextRev = rnnContextRev.apply(data3, data4)
else:
hinitContext, _ = rnnContextRev.apply(data4)
hContextRev = hinitContext
hContext = T.concatenate((hContext, hContextRev), axis=2)
return hContext
hContext, _ = theano.scan(onestepContext, hEncReshape)
test5 = theano.function([X,e], data5, allow_input_downcast=True)
test5(testones, floatX(np.random.randn(10, 100))).shape
testones.shape
if attention:
print 'what are you doing?!?!'
else:
zdata5, _ = forkDec.apply(hContext)
#VAE
#TODO: I wonder if this is correct...
def floatX(X):
return np.asarray(X, dtype=theano.config.floatX)
e = T.matrix()
zdim = 100
zfork = Fork(output_names=['mu', 'sigma'],
name='vae', input_dim=dim, output_dims=[zdim, zdim],
weights_init = linewt_init, biases_init = line_bias)
zfork.initialize()
mu, sigma = zfork.apply(T.reshape(zdata5[:,1:], (-1, 100)))
log_sigma = 0.5 * sigma # ? on the 0.5
z = mu + T.exp(log_sigma) * e # e is a vector of random numbers (noise)
data5 = T.reshape(z, (-1, maxPackets-1, 1, dim))
data5 = T.addbroadcast(data5, 2)
data7 = T.concatenate((data5, T.reshape(data1, (-1, maxPackets, packetTimeSteps, dim))[:,1:,:-1]), axis = 2)
#NONVAE data7 = T.concatenate((data5[:,1:], T.reshape(data1, (-1, maxPackets, packetTimeSteps, dim))[:,1:,:-1]), axis = 2)# + data5[:,:-1]
#decoding data needs to be one timestep (next packet in session) ahead, thus data1 we ignore the first packet
#and the last hidden state of the context RNN.
#THINK about L2 pooling before cat
#THINK should we concatenate with X instead of data5
#if packetReverse:
# data1 = data1[:,::-1]
#do we need data5??
#data7 = T.concatenate((data5[:,:-1],
# T.reshape(data1, (batch_size, maxPackets, packetTimeSteps, dim))[:,1:,:-1]),
# axis = 2)
#data1 is the original embedding of X, data5 is transformed context output
#get rid of first packet in data 5
#get rid of last context vector
#predicts all but the first packet, i.e. session[1:]
#output.shape = (20, 2, 100, 257), (minibatch,
def onestepDec(data7):
data8, data9 = forkFinal.apply(data7) #forkFinal transforms back to original dimIn
if rnnType == 'gru':
hDec = rnnDec.apply(data8, data9)
else:
hDec, _ = rnnDec.apply(data9)
#hDec = hinit #hDec shape = (batch_size*(maxPackets-1), packetTimeSteps, 257)
return hDec
hDec, _ = theano.scan(onestepDec, data7)
hDecReshape = T.reshape(hDec, (-1, packetTimeSteps, dimIn))
softmax = NDimensionalSoftmax()
softout = softmax.apply(hDecReshape, extra_ndim = 1)
predX = T.reshape(T.reshape(X,(-1, maxPackets, packetTimeSteps, dimIn))[:,1:,:,:],
(-1, packetTimeSteps, dimIn))
precost = predX*T.log(softout) + (1-predX)*T.log(1-softout)
precost2 = -T.sum(T.sum(precost, axis = 2), axis = 1)
#precost2 = -T.mean(T.sum(T.sum(precost, axis = 2), axis = 1))
#before
params
#after
params
recon_cost = T.mean(precost2)#T.sum(T.sqr(X-out))
kl_cost = 0.5 * T.sum(1 + 2 * log_sigma - mu ** 2 - T.exp(2 * log_sigma))
cost = recon_cost - kl_cost
#cost = T.mean(precost2)
#cost = T.mean(BinaryCrossEntropy().apply(predX, softout))
cg = ComputationGraph([cost])
learning_rate = theano.shared(np.array(0.0001, dtype=theano.config.floatX))
learning_decay = np.array(0.9, dtype=theano.config.floatX)
params = VariableFilter(roles = [PARAMETER])(cg.variables)
updates = Adam(params, cost, learning_rate, c=1) #c is gradient clipping parameter
#updates = RMSprop(cost, params, learning_rate, c=1)
#gradients = T.grad(cost, params)
#gradients = clip_norms(gradients, 1)
#gradientFun = theano.function([X], gradients, allow_input_downcast=True)
testcost = theano.function([X,e], cost, allow_input_downcast=True)
testcost(testones, floatX(np.random.randn(10, 100)))
print "compiling you beautiful person"
train = theano.function([X, e], [cost, hContext], updates = updates, allow_input_downcast=True)
predict = theano.function([X, e], [softout, hContext], allow_input_downcast=True)
print "finished compiling"
###Output
compiling you beautiful person
###Markdown
Unsupervised training
###Code
#randomize data
hexSessionsKeys = hexSessions.keys()
#random.shuffle(hexSessionsKeys)
trainPercent = 0.9
trainIndex = int(len(hexSessionsKeys)*trainPercent)
padOldTimeSteps = False
runname = 'hred'
epochCost = []
gradNorms = []
contextCollect = []
epochs = 80
iteration = 0
for epoch in xrange(epochs):
costCollect = []
for start, end in zip(range(0, trainIndex,batch_size), range(batch_size, trainIndex, batch_size)):
trainingSessions = []
for trainKey in range(start, end):
sessionForEncoding = list(hexSessions[hexSessions.keys()[trainKey]][0])
#encode a normal session
#oneHotSes = oneSessionEncoder(sessionForEncoding,hexDict = hexDict, packetReverse=packetReverse,
# padOldTimeSteps = padOldTimeSteps, maxPackets = maxPackets,
# packetTimeSteps = packetTimeSteps)
#trainingSessions.append(oneHotSes[0])
#trainingTargets.append(normalTarget)
#encode an abby normal session
oneHotSes = oneSessionEncoder(sessionForEncoding,
hexDict = hexDict,
packetReverse=packetReverse,
padOldTimeSteps = padOldTimeSteps,
maxPackets = maxPackets,
packetTimeSteps = packetTimeSteps)
trainingSessions.append(oneHotSes[0])
sessionsMinibatch = np.asarray(trainingSessions).reshape((batch_size*maxPackets, packetTimeSteps, 1, dimIn))
costfun = train(sessionsMinibatch, np.random.randn(zdim, batch_size))
costCollect.append(costfun[0])
if iteration == 0:
print 'you are amazing'
iteration+=1
#if iteration%80 == 0:
# learning_rate.set_value(learning_rate.get_value() * learning_decay)
# print ' learning rate: ', learning_rate.get_value()
####SAVE COST TO FILE
if epoch%2 == 0:
print(' ')
print 'Epoch: ', epoch
epochCost.append(np.mean(costCollect))
contextCollect.append(costfun[1][:4])
print 'Epoch cost average: ', epochCost[-1]
#grads = gradientFun(inputs, outputs)
#for gra in grads:
# print ' gradient norms: ', np.linalg.norm(gra)
#np.savetxt(runname+"_COST.csv", epochCost, delimiter=",")
#without adding the hcontext to every decoder input
predict(sessionsMinibatch)[1]
predict(sessionsMinibatch)[1][:,-1]
tester = predict(sessionsMinibatch)[1][:,-1] #new
import matplotlib.pyplot as plt
t = range(len(tester[0].flatten()))
plt.plot(t, tester[0].flatten(), 'r', t, tester[4].flatten(), 'b')
plt.show()
#Three huge extension:
# CHECK 1) give encoder to all decoding time steps
# 2) give predicted output to next time step as well
# CHECK, but not tested 3) use bidirectional to use give beginning and end of encoder to decoder
#81 to 70
#normal 92 to 78
#294.692
###Output
_____no_output_____ |
contents/neural_networks/pytorch1.ipynb | ###Markdown
A brief PyTorch tutorial. [PyTorch](https://pytorch.org) is a high-level Python library that provides 1. a tensor class for high-performance computation with automatic differentiation, 2. a platform for building and training neural networks. In this tutorial we first review in detail how tensors are created and manipulated, and then how the `torch.nn` submodule is used to build artificial neural networks. **Installation** The recommended way to install the library is to create a development environment with `conda` and run `conda install pytorch torchvision cudatoolkit=11.3 ignite -c pytorch` if you have a GPU, or `conda install pytorch torchvision cpuonly ignite -c pytorch` if you do not. :::{seealso}If you have not used `conda` before, I recommend reviewing [this guide](https://phuijse.github.io/PythonBook/contents/preliminaries/env_management.htmlconda):::
###Code
import torch
torch.__version__
###Output
_____no_output_____
###Markdown
The `Tensor` object. The [`torch.Tensor`](https://pytorch.org/docs/stable/tensors.html) class is very similar in use to the `ndarray` of [*NumPy*](https://numpy.org/). A tensor is an n-dimensional array or matrix with a defined type that supports [SIMD](https://es.wikipedia.org/wiki/SIMD) vectorized operations and broadcasting. Below we review the most fundamental operations on tensors. **Creating tensors** A tensor can be created using torch constructors, or from Python lists or NumPy *ndarray*s. For example, to create a vector of length 10 filled with zeros:
###Code
torch.zeros(10)
###Output
_____no_output_____
###Markdown
A vector of length 10 filled with ones:
###Code
torch.ones(10)
###Output
_____no_output_____
###Markdown
A vector with 10 numbers starting at zero and ending at nine:
###Code
torch.linspace(0, 9, steps=10)
###Output
_____no_output_____
###Markdown
A tensor built from a list:
###Code
una_lista = [0, 1, 2, 3, 4, 5, 6]
torch.Tensor(una_lista)
###Output
_____no_output_____
###Markdown
A tensor built from an ndarray:
###Code
import numpy as np
numpy_array = np.random.randn(10)
torch.from_numpy(numpy_array)
###Output
_____no_output_____
###Markdown
**From PyTorch to NumPy** To convert a pytorch tensor into a numpy ndarray, use the `numpy()` method:
###Code
data = torch.randn(5)
data
data.numpy()
###Output
_____no_output_____
###Markdown
**Important tensor attributes** A tensor has a specific size (dimensions) and type. These are queried with the `ndim`/`shape` and `dtype` attributes:
###Code
a = torch.randn(10, 20, 30)
a.ndim, a.shape, a.dtype
###Output
_____no_output_____
###Markdown
A tensor can live in system memory ('cpu') or in device memory ('gpu'); this is queried with the `device` attribute:
###Code
a.device
###Output
_____no_output_____
###Markdown
When creating a tensor you can specify its type and device:
###Code
a = torch.zeros(10, dtype=torch.int32, device='cpu')
display(a)
###Output
_____no_output_____
###Markdown
**Manipulating tensors** Consider the following one-dimensional tensor:
###Code
a = torch.linspace(0, 9, 10)
a
###Output
_____no_output_____
###Markdown
We can reorganize the dimensions of the tensor with the `reshape` method:
###Code
b = a.reshape(2, 5)
b
###Output
_____no_output_____
###Markdown
We can transpose it with the `transpose()` method or its alias `T`:
###Code
b.T
###Output
_____no_output_____
###Markdown
We can convert a tensor of arbitrary dimension into a one-dimensional tensor with `flatten()`:
###Code
b.flatten()
###Output
_____no_output_____
###Markdown
We can add a dimension at an arbitrary position with `unsqueeze(d)`:
###Code
c = b.unsqueeze(1)
c, c.shape
###Output
_____no_output_____
###Markdown
**Computing with tensors** A tensor supports arithmetic and logical operations. :::{note}If the tensor lives in system memory, the operations are carried out by the CPU:::
###Code
data = torch.linspace(0, 5, steps=6)
data
###Output
_____no_output_____
###Markdown
Some examples of arithmetic operations:
###Code
data + 5
2*data
data.pow(2)
data.log()
###Output
_____no_output_____
###Markdown
A logical operation can be used to filter a tensor:
###Code
mask = data > 3
mask
data[mask]
###Output
_____no_output_____
###Markdown
An example of broadcasting:
###Code
data2 = torch.ones(6)
data.unsqueeze(1), data2.unsqueeze(0), data.unsqueeze(1)*data2.unsqueeze(0)
###Output
_____no_output_____
###Markdown
**Computing on the GPU** Using the `to` method we can move a tensor between GPU ('device') and CPU ('host') memory: ```python data = torch.zeros(10); data = data.to('cuda') ``` :::{important}When all the tensors involved in an operation live in device memory, the computation is performed by the GPU::: The following note describes the options PyTorch offers for moving data between GPU and CPU: https://pytorch.org/docs/stable/notes/cuda.html :::{note}A *Graphical Processing Unit* (GPU), or video card, is hardware for computing over three-dimensional meshes, rendering images and other graphics tasks. Unlike the CPU, the GPU specializes in parallel computation and has thousands of cores (NVIDIA RTX 2080: 2944 cores)::: A minimal host/device sketch follows in the next cell. Automatic differentiation with tensors: in general, neural networks are trained with **gradient descent**, so we need to compute the derivatives of the cost function with respect to all of the network's parameters. PyTorch ships with an automatic differentiation system called [`autograd`](https://pytorch.org/docs/stable/autograd.html). To differentiate a function in pytorch: 1. its inputs must be tensors with the attribute `requires_grad=True`, 2. we then call the function's `backward()` method, 3. the result is stored in the `grad` attribute of the input (leaf node).
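###Code
# A minimal host <-> device sketch (an illustration added here, not from the original notebook):
# it only uses the GPU when torch.cuda.is_available() reports one; 'cuda' and 'cpu' are
# PyTorch's standard device strings.
device = 'cuda' if torch.cuda.is_available() else 'cpu'
data = torch.zeros(10).to(device)
data.device
###Output
_____no_output_____
###Markdown
**Example**: differentiating a function with `autograd` and plotting it together with its gradient.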
###Code
%matplotlib inline
import matplotlib.pyplot as plt
x = torch.linspace(0, 10, steps=1000, requires_grad=True)
y = 5*x - 20
y.backward(torch.ones_like(x))
fig, ax = plt.subplots(figsize=(6, 3))
ax.plot(x.detach().numpy(), y.detach().numpy(), label='y')
ax.plot(x.detach().numpy(), x.grad.detach().numpy(), label='dy/dx')
plt.legend();
x = torch.linspace(0, 10, steps=1000, requires_grad=True)
y = torch.sin(2.0*np.pi*x)*torch.exp(-(x-5).pow(2)/3)
y.backward(torch.ones_like(x))
fig, ax = plt.subplots(figsize=(6, 3))
ax.plot(x.detach().numpy(), y.detach().numpy(), label='y')
ax.plot(x.detach().numpy(), x.grad.detach().numpy(), label='dy/dx')
plt.legend();
###Output
_____no_output_____
###Markdown
Compared with the derivative computed "by hand":
###Code
dydx = 2*torch.pi*torch.cos(2.0*np.pi*x)*torch.exp(-(x-5).pow(2)/3) - 2/3*(x-5)*torch.sin(2.0*np.pi*x)*torch.exp(-(x-5).pow(2)/3)
torch.sum(torch.pow(x.grad.detach() - dydx, 2))
###Output
_____no_output_____
###Markdown
Computation graph. When we chain operations, PyTorch internally builds a "computation graph" $$x \to z = f_1(x) \to y = f_2(z)$$ The `backward()` method computes the gradients and stores them in the leaf nodes that have `requires_grad=True`. For example, y.backward stores dy/dx in x.grad, and z.backward stores dz/dx in x.grad. :::{note}`backward()` implements the chain rule of derivatives::: `backward` takes one input: the derivative of the stage above it in the chain. By default it uses `torch.ones([1])`, i.e. it assumes it is at the top of the graph and that the output is scalar (one-dimensional).
###Code
x = torch.linspace(0, 10, steps=1000, requires_grad=True) # Nodo hoja
x.grad_fn
z = torch.sin(2*x)
z.grad_fn
y = z.pow(2)/2
y.grad_fn
fig, ax = plt.subplots(figsize=(6, 3), tight_layout=True)
ax.plot(x.detach().numpy(), z.detach().numpy(), label='z')
ax.plot(x.detach().numpy(), y.detach().numpy(), label='y')
# Derivada dy/dx
y.backward(torch.ones_like(x), retain_graph=True)
ax.plot(x.detach().numpy(), x.grad.detach().numpy(), label='dy/dx')
# Borro el resultado en x.grad
x.grad = None
# Derivada dz/dx
z.backward(torch.ones_like(x))
ax.plot(x.detach().numpy(), x.grad.detach().numpy(), label='dz/dx')
plt.legend();
###Output
_____no_output_____
###Markdown
:::{note}The `detach()` method returns a copy of the tensor that has been "detached" from the graph::: Building neural networks. PyTorch provides the tensor class and the autograd functionality; these powerful tools give us everything needed to build and train artificial neural networks. To make these tasks even easier, PyTorch has high-level modules that implement 1. a base neural-network model: `torch.nn.Module`, 2. different types of layers, activation functions and cost functions: [`torch.nn`](https://pytorch.org/docs/stable/nn.html), 3. different gradient-descent-based optimization algorithms: [`torch.optim`](https://pytorch.org/docs/stable/optim.html). A neural network in PyTorch is a Python class that inherits from [`torch.nn.Module`](https://pytorch.org/docs/stable/generated/torch.nn.Module.html). At a minimum this class must implement the `__init__` and `forward` functions: the constructor defines the layers that will be used; inheriting from `nn.Module` registers the layer parameters with PyTorch's state machine; the `forward` function takes the input data as argument and returns the model prediction, i.e. it defines how the layers are connected. :::{note}The `forward()` function acts like Python's `__call__()`, i.e. if we create an object `model` that inherits from `nn.Module`, calling `model.forward(x)` is equivalent to `model(x)`::: Fully-connected layer. A fully-connected layer, also called a dense layer, implements the operation $$z = wx + b$$ where $x$ is the data entering the layer and $w/b$ are the layer's parameters (weights and biases). This layer is implemented in Pytorch as [`torch.nn.Linear`](https://pytorch.org/docs/stable/generated/torch.nn.Linear.htmltorch.nn.Linear). Its constructor expects the input and output dimensionality (number of neurons) of the layer. For example, to create a layer like the one in the original notebook's diagram (3 inputs, 2 outputs) we would use ```python dense = torch.nn.Linear(3, 2) ``` Once created, it can be evaluated with `dense(data)` or `dense.forward(data)`; a quick evaluation is sketched in the next cell. :::{note}Layers are themselves instances of `torch.nn.Module`, i.e. a module can contain other nested modules::: Activation functions. The most common activation functions from the literature are implemented as classes in [`torch.nn`](https://pytorch.org/docs/stable/nn.htmlnon-linear-activations-weighted-sum-nonlinearity).
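###Code
# Sketch of the dense layer described above (added for illustration; the names are arbitrary):
# 5 examples with 3 features each go in, and 2 outputs per example come out.
dense = torch.nn.Linear(3, 2)
dense(torch.rand(5, 3)).shape
###Output
_____no_output_____
###Markdown
Let's look at a few examples to learn how to use them: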
###Code
data = torch.linspace(-5, 5, steps=100)
activation = torch.nn.Sigmoid()
fig, ax = plt.subplots()
ax.plot(data.detach(), activation(data).detach());
activation = torch.nn.ReLU()
fig, ax = plt.subplots()
ax.plot(data.detach(), activation(data).detach());
activation = torch.nn.Tanh()
fig, ax = plt.subplots()
ax.plot(data.detach(), activation(data).detach());
###Output
_____no_output_____
###Markdown
Multilayer perceptron in Pytorch. Let's use what we have learned to implement a multilayer perceptron with one hidden layer and a sigmoid activation function:
###Code
import torch
import torch.nn as nn
class MultiLayerPerceptron(nn.Module):
def __init__(self, input_dim, hidden_dim, output_dim):
super(type(self), self).__init__()
self.hidden = nn.Linear(input_dim, hidden_dim)
self.output = nn.Linear(hidden_dim, output_dim)
self.activation = nn.Sigmoid()
def forward(self, x):
x = self.activation(self.hidden(x))
return self.output(x)
###Output
_____no_output_____
###Markdown
Creating a `Linear` layer registers its `weight` and `bias` parameters in the graph. Initially the parameters have random values:
###Code
model = MultiLayerPerceptron(input_dim=2, output_dim=1, hidden_dim=2)
model.hidden.weight, model.hidden.bias
model.output.weight, model.output.bias
###Output
_____no_output_____
###Markdown
The model is evaluated on a data tensor by calling its `forward` function:
###Code
X = 10*torch.rand(10000, 2) - 5
Y = model(X)
###Output
_____no_output_____
###Markdown
PyTorch also supports a "more functional" way of creating models using [`torch.nn.Sequential`](https://pytorch.org/docs/stable/nn.htmlsequential). The previous model would be:
###Code
model = nn.Sequential(nn.Linear(2, 2),
nn.Sigmoid(),
nn.Linear(2, 1))
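# Quick check (added sketch): the sequential model is used exactly like the class-based one above.
model(torch.rand(5, 2)).shape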
###Output
_____no_output_____ |
1_Useful_Methods/6_pd.to_datetime.ipynb | ###Markdown
Pandas to_datetime
###Code
import pandas as pd
import numpy as np
airbnb = pd.read_csv('ny_airbnb_data/AB_NYC_2019.csv')
airbnb.head()
airbnb1 = airbnb.copy()
airbnb.info()
airbnb1.isnull().sum()
###Output
_____no_output_____
###Markdown
We'll be converting the last_review column to datetime.
###Code
# It has 10052 null values
airbnb1['last_review'] = pd.to_datetime(airbnb1['last_review'])
airbnb1.info()
airbnb1.isnull().sum()
airbnb1.head()
airbnb1['last_review'].dt.year.head()
airbnb1['last_review'].dt.month.head()
airbnb1['last_review'].dt.day.head()
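# Two extra options worth knowing (a sketch, not part of the original notebook): an explicit
# format string speeds up parsing of the ISO-style YYYY-MM-DD values in this column, and
# errors='coerce' turns unparseable entries into NaT instead of raising an exception.
pd.to_datetime(airbnb['last_review'], format='%Y-%m-%d', errors='coerce').head()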
###Output
_____no_output_____ |
Class_Contents/Excercises/02_functions_02_exercises_2.ipynb | ###Markdown
Chapter 2: Functions & Modularization Exercises 2a Solutions The exercises below assume that you have read Chapter 2 in the book.The `...`'s in the code cells indicate where you need to fill in code snippets. The number of `...`'s within a code cell give you a rough idea of how many lines of code are needed to solve the task. You should not need to create any additional code cells for your final solution. However, you may want to use temporary code cells to try out some ideas. Volume of a Sphere **Q1**: The [volume of a sphere ](https://en.wikipedia.org/wiki/Sphere) is defined as $\frac{4}{3} * \pi * r^3$. Calculate this value for $r=10.0$ and round it to 10 digits after the comma. Use the [standard library ](https://docs.python.org/3/library/index.html) to obtain a good approximation of $\pi$.
###Code
import math
r = 10.0
W = round(4/3 * math.pi * r ** 3, 10)
W
###Output
_____no_output_____
###Markdown
**Q2**: Encapsulate the logic into a function `sphere_volume()` that takes one *positional* argument `radius` and one *keyword-only* argument `digits` defaulting to `5`. The volume should be returned as a `float` object under *all* circumstances.
###Code
def sphere_volume(radius, digits = 5):
"""Calculate the volume of a sphere.
Args:
radius (float): radius of the sphere
digits (optional, int): number of digits
for rounding the resulting volume
Returns:
volume (float)
"""
    W = round(4/3 * math.pi * radius ** 3, digits)
return W
###Output
_____no_output_____
###Markdown
**Q3**: Evaluate the function with `radius = 100.0` and 1, 5, 10, 15, and 20 digits respectively.
###Code
radius = 100.0
sphere_volume(radius,1)
sphere_volume(radius,5)
sphere_volume(radius,10)
sphere_volume(radius,15)
sphere_volume(radius,20)
###Output
_____no_output_____
###Markdown
**Q4**: What observation do you make? **Q5**: Using the [range() ](https://docs.python.org/3/library/functions.htmlfunc-range) built-in, write a `for`-loop and calculate the volume of a sphere with `radius = 42.0` for all `digits` from `1` through `20`. Print out each volume on a separate line.Note: This is the first task where you need to use the built-in [print() ](https://docs.python.org/3/library/functions.htmlprint) function.
###Code
radius = 42.0
for i in range(1,21):
print(i)
print(sphere_volume(radius,i))
###Output
1
4188.8
2
4188.79
3
4188.79
4
4188.7902
5
4188.7902
6
4188.790205
7
4188.7902048
8
4188.79020479
9
4188.790204786
10
4188.7902047864
11
4188.79020478639
12
4188.790204786391
13
4188.790204786391
14
4188.790204786391
15
4188.790204786391
16
4188.790204786391
17
4188.790204786391
18
4188.790204786391
19
4188.790204786391
20
4188.790204786391
###Markdown
**Q6**: What lesson do you learn about the `float` type? Exercise 2b A. HackerRank exercises:1. https://www.hackerrank.com/challenges/write-a-function/problem2. https://www.hackerrank.com/challenges/python-mutations/problem3. https://www.hackerrank.com/challenges/text-wrap/problem4. https://www.hackerrank.com/challenges/map-and-lambda-expression/problem5. https://www.hackerrank.com/challenges/reduce-function/problem B. Numpy exercises:Q1. Import numpy as np and print the version number. Q2. Create a 1D array of numbers from 0 to 9 How to extract items that satisfy a given condition from 1D array?Q3. Extract all odd numbers from arrDesired output: array([1, 3, 5, 7, 9])
###Code
import numpy as np  # numpy was never imported in this notebook; it is needed below (and np.__version__ answers Q1)
arr = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
# Your code below
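# One possible solution
arr[arr % 2 == 1]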
###Output
_____no_output_____
###Markdown
How to replace items that satisfy a condition with another value in numpy array?Q4. Replace all odd numbers in arr with -1Desired Output: array([ 0, -1, 2, -1, 4, -1, 6, -1, 8, -1])
###Code
arr = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
# Your code below
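# One possible solution (modifies arr in place)
arr[arr % 2 == 1] = -1
arr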
###Output
_____no_output_____
###Markdown
How to reshape an array?Q5. Convert a 1D array to a 2D array with 2 rows
###Code
arr = np.arange(10)
# Your code below
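# One possible solution (-1 lets numpy infer the number of columns)
arr.reshape(2, -1)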
###Output
_____no_output_____
###Markdown
How to stack two arrays vertically?Q6. Stack arrays a and b vertically in 2 ways. Then do it horizontally.Hint: use concatenate, vstack and hstack
###Code
a = np.arange(10).reshape(2,-1)
b = np.repeat(1, 10).reshape(2,-1)
# Your code below
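# Possible solutions: stack vertically in two ways, then horizontally
np.concatenate([a, b], axis=0)
np.vstack([a, b])
np.hstack([a, b])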
###Output
_____no_output_____
###Markdown
How to import a dataset with numbers and texts keeping the text intact in python numpy?Q7. Import the iris dataset keeping the text intact (2D array). Print the first 3 rows.
###Code
url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data'
# Your code below: use genfromtxt
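# One possible solution: dtype='object' keeps the species strings intact
iris = np.genfromtxt(url, delimiter=',', dtype='object')
iris[:3]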
###Output
_____no_output_____
###Markdown
How to compute the mean, median, standard deviation of a numpy array?Q8. Find the mean, median, standard deviation of iris's sepallength (1st column)
###Code
url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data'
sepallength = np.genfromtxt(url, delimiter=',', dtype='float', usecols=[0])
# Your code below:
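# One possible solution
np.mean(sepallength), np.median(sepallength), np.std(sepallength)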
###Output
_____no_output_____ |
seq2seq/Name-Classification.ipynb | ###Markdown
Loading Data
###Code
from __future__ import unicode_literals, print_function, division
from io import open
import glob
import unicodedata
import string
import torch
from torch import nn
def findFiles(path): return glob.glob(path)
print(findFiles('data/names/*.txt'))
all_letters = string.ascii_letters + " .,;'"
n_letters = len(all_letters)
# Turn a Unicode string to plain ASCII, thanks to http://stackoverflow.com/a/518232/2809427
def unicodeToAscii(s):
return ''.join(
c for c in unicodedata.normalize('NFD', s)
if unicodedata.category(c) != 'Mn'
and c in all_letters
)
print(unicodeToAscii('Ślusàrski'))
print(unicodeToAscii(u'Málaga'))
# Build the category_lines dictionary, a list of names per language
category_lines = {}
all_categories = []
# Read a file and split into lines
def readLines(filename):
lines = open(filename, encoding='utf-8').read().strip().split('\n')
return [unicodeToAscii(line) for line in lines]
for filename in findFiles('data/names/*.txt'):
category = filename.split('/')[-1].split('.')[0]
all_categories.append(category)
lines = readLines(filename)
category_lines[category] = lines
n_categories = len(all_categories)
print (all_categories)
print(category_lines['Arabic'][200:215])
all_letters
###Output
_____no_output_____
###Markdown
Turning Names into Tensors
###Code
# Find letter index from all_letters, e.g. "a" = 0
def letterToIndex(letter):
return all_letters.find(letter)
# Just for demonstration, turn a letter into a <1 x n_letters> Tensor
def letterToTensor(letter):
tensor = torch.zeros(1, n_letters)
tensor[0][letterToIndex(letter)] = 1
return tensor
# Turn a line into a <line_length x 1 x n_letters>,
# or an array of one-hot letter vectors
def lineToTensor(line):
tensor = torch.zeros(len(line), 1, n_letters)
for li, letter in enumerate(line):
tensor[li][0][letterToIndex(letter)] = 1
return tensor
# print(letterToTensor('J'))
print(lineToTensor('Jones').size())
###Output
torch.Size([5, 1, 57])
###Markdown
Model
###Code
import torch.nn as nn
import torch.nn.functional as F
class RNN(nn.Module):
def __init__(self, input_size, hidden_size, output_size):
super(RNN, self).__init__()
self.hidden_size = hidden_size
self.i2h = nn.Linear(input_size + hidden_size, hidden_size)
self.i2o = nn.Linear(input_size + hidden_size, output_size)
# self.softmax = nn.LogSoftmax(dim=1)
def forward(self, input, hidden):
combined = torch.cat((input, hidden), 1)
hidden = self.i2h(combined)
output = self.i2o(combined)
# output = self.softmax(output)
output = F.log_softmax(output, dim=1)
return output, hidden
def initHidden(self):
return torch.zeros(1, self.hidden_size).cuda()
n_hidden = 128
rnn = RNN(n_letters, n_hidden, n_categories).cuda()
input = lineToTensor('Binu').cuda()
hidden = torch.zeros(1, n_hidden).cuda()
output, next_hidden = rnn(input[2], hidden)
print(output)
def categoryFromOutput(output):
# top_n, top_i = output.data.topk(1) # Tensor out of Variable with .data
# category_i = top_i[0][0]
val, ind = torch.max(output, 1)
category_i = int(ind)
return all_categories[category_i], category_i
import random
def randomChoice(l):
return l[random.randint(0, len(l) - 1)]
def randomTrainingExample():
category = randomChoice(all_categories)
line = randomChoice(category_lines[category])
category_tensor = torch.LongTensor([all_categories.index(category)]).cuda()
line_tensor = lineToTensor(line).cuda()
return category, line, category_tensor, line_tensor
for i in range(10):
category, line, category_tensor, line_tensor = randomTrainingExample()
print('category =', category, '/ line =', line)
criterion = nn.NLLLoss()
###Output
_____no_output_____
###Markdown
Train
###Code
learning_rate = 0.001 # If you set this too high, it might explode. If too low, it might not learn
def train(category_tensor, line_tensor):
hidden = rnn.initHidden()
rnn.zero_grad()
for i in range(line_tensor.size()[0]):
output, hidden = rnn(line_tensor[i], hidden)
loss = criterion(output, category_tensor)
loss.backward()
# Add parameters' gradients to their values, multiplied by learning rate
for p in rnn.parameters():
        p.data.add_(p.grad.data, alpha=-learning_rate)
return output, loss.item()
import time
import math
n_iters = 100000
print_every = 5000
plot_every = 1000
# Keep track of losses for plotting
current_loss = 0
all_losses = []
def timeSince(since):
now = time.time()
s = now - since
m = math.floor(s / 60)
s -= m * 60
return '%dm %ds' % (m, s)
start = time.time()
for iter in range(1, n_iters + 1):
category, line, category_tensor, line_tensor = randomTrainingExample()
output, loss = train(category_tensor, line_tensor)
current_loss += loss
# Print iter number, loss, name and guess
if iter % print_every == 0:
guess, guess_i = categoryFromOutput(output)
correct = '✓' if guess == category else '✗ (%s)' % category
print('%d %d%% (%s) %.4f %s / %s %s' % (iter, iter / n_iters * 100, timeSince(start), loss, line, guess, correct))
# Add current loss avg to list of losses
if iter % plot_every == 0:
all_losses.append(current_loss / plot_every)
current_loss = 0
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
plt.figure()
plt.plot(all_losses)
###Output
_____no_output_____
###Markdown
Evaluate
###Code
# Keep track of correct guesses in a confusion matrix
confusion = torch.zeros(n_categories, n_categories)
n_confusion = 10000
# Just return an output given a line
def evaluate(line_tensor):
hidden = rnn.initHidden()
for i in range(line_tensor.size()[0]):
output, hidden = rnn(line_tensor[i], hidden)
return output
# Go through a bunch of examples and record which are correctly guessed
for i in range(n_confusion):
category, line, category_tensor, line_tensor = randomTrainingExample()
output = evaluate(line_tensor)
guess, guess_i = categoryFromOutput(output)
category_i = all_categories.index(category)
confusion[category_i][guess_i] += 1
# Normalize by dividing every row by its sum
for i in range(n_categories):
confusion[i] = confusion[i] / confusion[i].sum()
# Set up plot
fig = plt.figure()
ax = fig.add_subplot(111)
cax = ax.matshow(confusion.numpy())
fig.colorbar(cax)
# Set up axes
ax.set_xticklabels([''] + all_categories, rotation=90)
ax.set_yticklabels([''] + all_categories)
# Force label at every tick
ax.xaxis.set_major_locator(ticker.MultipleLocator(1))
ax.yaxis.set_major_locator(ticker.MultipleLocator(1))
# sphinx_gallery_thumbnail_number = 2
plt.show()
###Output
_____no_output_____
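###Markdown
As a quick spot check on the trained RNN (a sketch that only reuses `evaluate`, `lineToTensor` and `all_categories` defined above; the names passed in are arbitrary examples):
###Code
def predict(input_line, n_predictions=3):
    # score a single name and print the top guessed languages with their log-probabilities
    with torch.no_grad():
        output = evaluate(lineToTensor(input_line).cuda())
        topv, topi = output.topk(n_predictions, 1, True)
        for i in range(n_predictions):
            print('%s -> %s (%.2f)' % (input_line, all_categories[topi[0][i].item()], topv[0][i].item()))

predict('Dovesky')
predict('Jackson')
predict('Satoshi')
###Output
_____no_output_____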
###Markdown
Use GRU
###Code
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
class Encoder(nn.Module):
def __init__(self, input_size, hidden_size, output_size):
super(Encoder, self).__init__()
self.h0 = torch.randn(1, 1, hidden_size).cuda()
self.gru = nn.GRU(input_size, hidden_size)
self.fc = nn.Linear(hidden_size, output_size)
def forward(self, inp):
# inp.shape = [seql, 1, n_letters], h0.shape = [1, 1, hidden_size]
output, h = self.gru(inp, self.h0) # output.shape = [seql, 1, hidden_size]
h = h.contiguous()
h = h.view(-1, output.shape[2])
h = self.fc(h)
return F.log_softmax(h, dim=1)
n_hidden = 128
model = Encoder(n_letters, n_hidden, n_categories).cuda()
learning_rate = 0.001 # If you set this too high, it might explode. If too low, it might not learn
optimizer = optim.SGD(model.parameters(), lr=learning_rate, momentum=0.9)
inp = lineToTensor('Binu').cuda()
output = model(inp)
output
###Output
_____no_output_____
###Markdown
Train
###Code
criterion = nn.NLLLoss()
def train(category_tensor, line_tensor):
output = model(line_tensor)
output = output.view(1, n_categories)
loss = criterion(output, category_tensor)
# clear previous gradients, compute gradients of all variables wrt loss
optimizer.zero_grad()
loss.backward()
optimizer.step()
return output, loss.item()
import time
import math
n_iters = 100000
print_every = 5000
plot_every = 1000
# Keep track of losses for plotting
current_loss = 0
all_losses = []
def timeSince(since):
now = time.time()
s = now - since
m = math.floor(s / 60)
s -= m * 60
return '%dm %ds' % (m, s)
start = time.time()
model.train()
for iter in range(1, n_iters + 1):
category, line, category_tensor, line_tensor = randomTrainingExample()
output, loss = train(category_tensor, line_tensor)
current_loss += loss
# Print iter number, loss, name and guess
if iter % print_every == 0:
guess, guess_i = categoryFromOutput(output)
correct = '✓' if guess == category else '✗ (%s)' % category
print('%d %d%% (%s) %.4f %s / %s %s' % (iter, iter / n_iters * 100, timeSince(start), loss, line, guess, correct))
# Add current loss avg to list of losses
if iter % plot_every == 0:
all_losses.append(current_loss / plot_every)
current_loss = 0
# run for another set of iterations
start = time.time()
model.train()
for iter in range(1, n_iters + 1):
category, line, category_tensor, line_tensor = randomTrainingExample()
output, loss = train(category_tensor, line_tensor)
current_loss += loss
# Print iter number, loss, name and guess
if iter % print_every == 0:
guess, guess_i = categoryFromOutput(output)
correct = '✓' if guess == category else '✗ (%s)' % category
print('%d %d%% (%s) %.4f %s / %s %s' % (iter, iter / n_iters * 100, timeSince(start), loss, line, guess, correct))
# Add current loss avg to list of losses
if iter % plot_every == 0:
all_losses.append(current_loss / plot_every)
current_loss = 0
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
plt.figure()
plt.plot(all_losses)
###Output
_____no_output_____
###Markdown
Evaluate
###Code
# Keep track of correct guesses in a confusion matrix
confusion = torch.zeros(n_categories, n_categories)
n_confusion = 10000
# Just return an output given a line
def evaluate(line_tensor):
output = model(line_tensor)
    output = output.view(1, n_categories)
return output
# Go through a bunch of examples and record which are correctly guessed
for i in range(n_confusion):
category, line, category_tensor, line_tensor = randomTrainingExample()
output = evaluate(line_tensor)
guess, guess_i = categoryFromOutput(output)
category_i = all_categories.index(category)
confusion[category_i][guess_i] += 1
# Normalize by dividing every row by its sum
for i in range(n_categories):
confusion[i] = confusion[i] / confusion[i].sum()
# Set up plot
fig = plt.figure()
ax = fig.add_subplot(111)
cax = ax.matshow(confusion.numpy())
fig.colorbar(cax)
# Set up axes
ax.set_xticklabels([''] + all_categories, rotation=90)
ax.set_yticklabels([''] + all_categories)
# Force label at every tick
ax.xaxis.set_major_locator(ticker.MultipleLocator(1))
ax.yaxis.set_major_locator(ticker.MultipleLocator(1))
# sphinx_gallery_thumbnail_number = 2
plt.show()
###Output
_____no_output_____ |
tui_examples/hydrogeo_salome.ipynb | ###Markdown
Importing libraries
###Code
import numpy as np
import shapefile
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import path
import scipy.interpolate as inter
# Open shapefile
sf = shapefile.Reader("TikunaAquifer.shp")
# Get points ~ has only one shape!!
shp = sf.shapes()[0]
# Collect points
pts = np.asarray(shp.points)
plt.plot(pts[:,0],pts[:,1],'-o')
def CartGrid(x, y, z=None):
"""Build a cartisian grid data (nodes and connections). Returns a tuple with:
(ndarray nodes coordinate, ndarray cells connectivities)"""
if z is None:
nodes = np.array([[i, j, 0.] for j in y for i in x])
nx = x.size
ny = y.size
i, j = np.mgrid[0:nx, 0:ny]
ij = np.ravel_multi_index(
[list(i.ravel()), list(j.ravel())], (nx+1, ny+1), order='F')
cells = np.array([[i, i+1, i+1+nx+1, i+nx+1]
for i in ij], dtype='uint64')
else:
nodes = np.array([[i, j, k] for k in z for j in y for i in x])
nx = x.size - 1
ny = y.size - 1
nz = z.size - 1
i, j, k = np.mgrid[0:nx, 0:ny, 0:nz]
ijk = np.ravel_multi_index(
[list(i.ravel()), list(j.ravel()), list(
k.ravel())], (nx + 1, ny + 1, nz + 1),
order='F')
cells = np.array([[i, i+1, i+1+(nx+1), i+(nx+1),
i+(nx+1)*(ny+1), i+1+(nx+1) *
(ny+1), i+1+(nx+1)+(nx+1)*(ny+1),
i+(nx+1)+(nx+1)*(ny+1)]
for i in ijk], dtype='uint64')
return (nodes, cells)
dx = 15000 # grid spacing in x (map units)
dy = 15000 # grid spacing in y (map units)
nz = 3
x = np.linspace(shp.bbox[0], shp.bbox[2], int(np.floor((shp.bbox[2]-shp.bbox[0])/dx)))
y = np.linspace(shp.bbox[1], shp.bbox[3], int(np.floor((shp.bbox[3]-shp.bbox[1])/dy)))
z = np.linspace(0,1,nz+1)
(nodes, cells) = CartGrid(x, y, z)
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.plot3D(nodes[:,0],nodes[:,1],nodes[:,2],'+')
cell_center = np.zeros((cells.shape[0], 3))
print("compute cell centers")
for c in range(cells.shape[0]):
cell_center[c, :] = np.mean(nodes[cells[c, :], :], axis=0)
p = path.Path(pts)
msk = p.contains_points(cell_center[:,[0,1]])
cnodes = cells[msk]
vnodes = np.unique(cnodes.reshape(cnodes.size))
idx = np.zeros((int(vnodes.max()+1),))
idx[vnodes] = np.arange(0, vnodes.size)
vert = nodes[vnodes]
hexa = np.reshape(idx[cnodes].ravel(), (cnodes.shape[0],8))
# plot nodes
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.plot3D(vert[:,0],vert[:,1],vert[:,2],'r.')
def find_indexes(b):
"""This function is similar to the 'find' a MATLAB function"""
return [i for (i, vals) in enumerate(b) if vals]
# top horizon
top = vert[:,-1] == 1
D = np.loadtxt('Tikuna_top_horizon.txt', skiprows=1, usecols=[1,2,3])
xt = D[:,0]; xx = [np.min(x), np.max(x)]
yt = D[:,1]; yy = [np.min(y), np.max(y)]
zt = D[:,2]
# base horizon
base = vert[:,-1] == 0
D = np.loadtxt('Tikuna_base_horizon.txt', skiprows=1, usecols=[1,2,3])
xb = D[:,0]; xx = [np.min(x), np.max(x)]
yb = D[:,1]; yy = [np.min(y), np.max(y)]
zb = D[:,2]
# interpolate verticies
zfun = inter.Rbf(xt, yt, zt, function= 'linear',smooth= 100)
vert[top,-1] = zfun(vert[top,0], vert[top,1])
zfun = inter.Rbf(xb, yb, zb, function= 'linear',smooth= 100)
vert[base,-1] = zfun(vert[base,0], vert[base,1])
vmsk = np.zeros((vert.shape[0],), dtype=bool)
# mark horizons (top and base)
horz = np.zeros((vert.shape[0],))
horz[top] = 1; horz[base] = -1
for i in range(vert.shape[0]):
if vmsk[i]:
continue
vmsk[i] = True
# diff of nodes (same pillar has dx=dy=0)
dx = np.abs(vert[i, :] - vert[:, ])[:, 0:2]
# check for pillar
msk = np.array(find_indexes((dx[:, 0] < 1e-9) & (dx[:, 1] < 1e-9)))
hh = horz[msk]
top = np.argmax(hh)
base = np.argmin(hh)
if np.abs(vert[msk[0],-1] - vert[msk[-1],-1]) > 1e-9:
# sort
z_linspace = np.linspace(vert[msk[0],-1], vert[msk[-1],-1], len(msk))
vert[msk, -1] = z_linspace
else:
# sort
z_linspace = np.linspace(vert[msk[0],-1] - 5, vert[msk[-1],-1] + 5, len(msk))
vert[msk, -1] = z_linspace
if vert[msk[top],-1] < vert[msk[base],-1]:
vert[msk] = np.flipud(vert[msk])
vmsk[msk] = True
# plot new verticies
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.plot3D(vert[:,0],vert[:,1],vert[:,2],'k+')
def write_unv(fname, nodes, cells, mat=None):
"""
Write the UNV (Universal) file dataset format
reference in: https://docs.plm.automation.siemens.com/tdoc/nx/12/nx_help#uid:xid1128419:index_advanced:xid1404601:xid1404604
"""
# consts
sep = " -1"
si, coordsys, vertices, elements = 164, 2420, 2411, 2412
# settings
if mat is None:
mat = np.zeros((cells.shape[0],), dtype=np.int64) + 1
# write unv file
print("-- writing file: {}".format(fname))
with open(fname, "w") as unv:
# unit system (164)
unv.write('{}\n'.format(sep))
unv.write('{:6g}\n'.format(si)) # unv code
unv.write('{:10d}{:20s}{:10d}\n'.format(1, "SI: Meters (newton)", 2))
unv.write('{:25.17E}{:25.17E}{:25.17E}\n{:25.17E}\n'.format(
1, 1, 1, 273.15))
unv.write('{}\n'.format(sep))
# coordinate system (2420)
unv.write('{}\n'.format(sep))
unv.write('{:6g}\n'.format(coordsys)) # unv code
unv.write('{:10d}\n'.format(1)) # coordsys label (uid)
unv.write('{:40s}\n'.format("SMESH_Mesh from Salome Geomechanics"))
# coordsys label, coordsys type (0: cartesian), coordsys color
unv.write('{:10d}{:10d}{:10d}\n'.format(1, 0, 0))
unv.write('{:40s}\n'.format("Global cartesian coord. system"))
unv.write('{:25.16E}{:25.16E}{:25.16E}\n'.format(1, 0, 0))
unv.write('{:25.16E}{:25.16E}{:25.16E}\n'.format(0, 1, 0))
unv.write('{:25.16E}{:25.16E}{:25.16E}\n'.format(0, 0, 1))
unv.write('{:25.16E}{:25.16E}{:25.16E}\n'.format(0, 0, 0))
unv.write('{}\n'.format(sep))
# write nodes coordinates
unv.write('{}\n'.format(sep))
unv.write('{:6g}\n'.format(vertices)) # unv code
for n in range(nodes.shape[0]):
# node-id, coordinate system label, displ. coord. system, color(11)
unv.write('{:10d}{:10d}{:10d}{:10d}\n'.format(n + 1, 1, 1, 11))
unv.write('{:25.16E}{:25.16E}{:25.16E}'.format(
nodes[n, 0], nodes[n, 1], nodes[n, 2]*50))
unv.write('\n')
unv.write('{}\n'.format(sep))
# write cells connectivities
unv.write('{}\n'.format(sep))
unv.write('{:6g}\n'.format(elements)) # unv code
for c in range(cells.shape[0]):
# node-id, coordinate system label, displ. coord. system, color(11)
unv.write('{:10d}{:10d}{:10d}{:10d}{:10d}{:10d}\n'.format(
c + 1, 115, mat[c], mat[c], mat[c], 8))
unv.write('{:10d}{:10d}{:10d}{:10d}{:10d}{:10d}{:10d}{:10d}'.format(
cells[c, 0], cells[c, 1], cells[c, 2], cells[c, 3],
cells[c, 4], cells[c, 5], cells[c, 6], cells[c, 7]))
unv.write('\n')
unv.write('{}\n'.format(sep))
# write cells regions
unv.write('{}\n'.format(sep))
unv.write('{:6g}\n'.format(2467)) # unv code
regions = np.unique(mat)
for region in regions:
ind = find_indexes(mat == region)
unv.write('{:10d}{:10d}{:10d}{:10d}{:10d}{:10d}{:10d}{:10d}\n'.format(
region, 0, 0, 0, 0, 0, 0, len(ind)))
unv.write('Region_{}\n'.format(region))
i = 0
for c in range(len(ind)):
unv.write('{:10d}{:10d}{:10d}{:10d}'.format( 8, ind[c] + 1, 0, 0))
i += 1
if i == 2:
i = 0
unv.write('\n')
if i == 1:
unv.write('\n')
unv.write('{}\n'.format(sep))
write_unv('tikuna.unv', vert, np.int64(hexa)+1)
# Open shapefile
sf = shapefile.Reader("afloramentos_simp.shp")
# Get points ~ has only one shape!!
shp = sf.shapes()[6]
# Collect points
pts2 = np.asarray(shp.points)
plt.plot(pts[:,0],pts[:,1],'-.r',pts2[:,0],pts2[:,1],'-.')
###Output
_____no_output_____ |
FACEBOOK Leads and Campaign Analysis.ipynb | ###Markdown
It shows that there is a strong relationship between Spent, Clicks and Impressions.
###Code
sns.countplot(x='xyz_campaign_id',data=fb)
plt.show()
###Output
_____no_output_____
###Markdown
It is clearly visible that campaign_c has more ads than the other campaigns.
###Code
plt.bar(fb['xyz_campaign_id'],fb['Approved_Conversion'])
plt.title('Company VS Approved_Conversion')
plt.ylabel('Approved_Conversion')
plt.show()
###Output
_____no_output_____
###Markdown
From the above graph, it is clear that campaign_c has the highest number of approved conversions, meaning more people bought the product after seeing ads from campaign_c.
###Code
sns.countplot(fb['gender'])
plt.show()
sns.barplot(x='xyz_campaign_id',y='Approved_Conversion',hue='gender',data=fb,ci=None)
plt.show()
###Output
_____no_output_____
###Markdown
It shows that there is no significant difference between genders in any of the three campaigns. Now let's see the distribution across age groups.
###Code
sns.countplot(fb['age'])
plt.show()
###Output
_____no_output_____
###Markdown
The involvement of the 30-34 age group is higher than that of the other age groups.
###Code
sns.barplot(fb['xyz_campaign_id'],fb['Approved_Conversion'],hue=fb['age'],ci=None)
plt.show()
###Output
_____no_output_____
###Markdown
It's interesting to note that in campaign_b and campaign_c the 30-34 age group shows the most interest, whereas in campaign_a the 40-44 age group shows the most interest.
###Code
fig=plt.figure(figsize=(15,6))
sns.countplot(fb['interest'])
plt.show()
sns.scatterplot(fb['interest'],fb['Approved_Conversion'])
plt.show()
###Output
_____no_output_____
###Markdown
It's interesting that, although the count of interest values above 100 is lower, there is still a rise in the number of users with interest above 100 who actually bought the product.
###Code
g=sns.FacetGrid(fb,col='gender')
g.map(sns.scatterplot,'interest','Approved_Conversion',alpha=.4)
g.add_legend()
plt.show()
g=sns.FacetGrid(fb,col='age')
g.map(sns.scatterplot,'interest','Approved_Conversion',alpha=.4)
g.add_legend()
plt.show()
sns.histplot(fb['Spent'],bins=25)
plt.show()
sns.scatterplot(fb['Spent'],fb['Approved_Conversion'])
plt.show()
###Output
_____no_output_____
###Markdown
We can see that as the amount spent increases, the number of products bought increases.
###Code
g=sns.FacetGrid(fb,col='gender')
g.map(sns.scatterplot,'Spent','Approved_Conversion',alpha=.4)
g.add_legend()
plt.show()
g=sns.FacetGrid(fb,col='age')
g.map(sns.scatterplot,'Spent','Approved_Conversion',alpha=.4)
g.add_legend()
plt.show()
###Output
_____no_output_____
###Markdown
**Impressions**
###Code
sns.histplot(fb['Impressions'],bins=25)
plt.ticklabel_format(useOffset=False,style='Plain',axis='x')
plt.show()
sns.scatterplot(fb['Impressions'],fb['Approved_Conversion'])
plt.ticklabel_format(useOffset=False,style='Plain',axis='x')
plt.show()
###Output
_____no_output_____
###Markdown
There is a sudden rise in approved conversions after a certain number of impressions. **People who actually bought the product after clicking the ad**
###Code
g=sns.FacetGrid(fb,col='gender')
g.map(sns.scatterplot,'Clicks','Approved_Conversion',alpha=.4)
g.add_legend()
plt.show()
###Output
_____no_output_____
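###Markdown
To put numbers behind the pattern in these plots, a quick aggregate over the existing columns (a sketch, not part of the original analysis) can help:
###Code
# total clicks and approved purchases split by gender
fb.groupby('gender')[['Clicks', 'Approved_Conversion']].sum()
###Output
_____no_output_____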
###Markdown
It shows that men click on ads more than women, but women are more likely to purchase the product after clicking the ad.
###Code
g=sns.FacetGrid(fb,col='age')
g.map(sns.scatterplot,'Clicks','Approved_Conversion',alpha=.4)
g.add_legend()
plt.show()
###Output
_____no_output_____
###Markdown
It looks like the 30-34 age group has a greater tendency to buy the product after clicking the ad. **After Enquiring About The Product**
###Code
g=sns.FacetGrid(fb,col='gender')
g.map(sns.scatterplot,'Total_Conversion','Approved_Conversion',alpha=.4)
g.add_legend()
plt.show()
###Output
_____no_output_____
###Markdown
It seems that women tend to buy more products after enquiring, but men tend to enquire more about the product.
###Code
g=sns.FacetGrid(fb,col='age')
g.map(sns.scatterplot,'Total_Conversion','Approved_Conversion',alpha=.4)
g.add_legend()
plt.show()
###Output
_____no_output_____
###Markdown
It seems the 30-34 age group is more likely to buy the product after enquiring. **Campaign_c (Campaign with the most Approved Conversions)**
###Code
camp_c=fb[['xyz_campaign_id','fb_campaign_id','Approved_Conversion']]
camp_c=camp_c[camp_c['xyz_campaign_id']=='campaign_c']
camp_c.head()
sns.scatterplot(camp_c['fb_campaign_id'],camp_c['Approved_Conversion'])
plt.show()
###Output
_____no_output_____
###Markdown
**MODELLING**
###Code
fb['xyz_campaign_id'].replace({'campaign_a':916,'campaign_b':936,'campaign_c':1178},inplace=True)
from sklearn.preprocessing import LabelEncoder,StandardScaler
encoder=LabelEncoder()
encoder.fit(fb['gender'])
fb['gender']=encoder.fit_transform(fb['gender'])
encoder.fit(fb['age'])
fb['age']=encoder.fit_transform(fb['age'])
x=np.array(fb.drop(fb[['Approved_Conversion','Total_Conversion']],axis=1))
y=np.array(fb['Total_Conversion'])
y.reshape(len(y),1)
std_scalar=StandardScaler()
x=std_scalar.fit_transform(x)
from sklearn.model_selection import train_test_split
x_train,x_test,y_train,y_test=train_test_split(x,y,test_size=0.3,random_state=42)
from sklearn.ensemble import RandomForestRegressor
rfr=RandomForestRegressor(n_estimators=10,random_state=0)
rfr.fit(x_train,y_train)
y_pred=rfr.predict(x_test)
y_pred=np.round(y_pred)
from sklearn import metrics
print('Mean absolute error',metrics.mean_absolute_error(y_test,y_pred))
print('Root mean squared error',np.sqrt(metrics.mean_squared_error(y_test,y_pred)))
print('r2_Score',metrics.r2_score(y_test,y_pred))
###Output
Mean absolute error 1.0
Root mean squared error 2.4725902660786128
r2_Score 0.7655555120959068
|
notebooks/render-frames-roi.ipynb | ###Markdown
This notebook contains code for rendering .png images, according to a predefined fsleyes orthographic view, of T2-highres-based MNI-normalized tumor regions overlaid on an MNI brain regions template.
###Code
%run utils.py
%run visualiztion.py
%run search.py
###Output
_____no_output_____
###Markdown
1. Specify epi_corrections output directory
###Code
output_directory_suffix = "2019_07_02"
# On local file system:
corrections_base_directory = "../../epi_corrections_out_" + output_directory_suffix
# On samba share (remote file system):
#corrections_base_directory = "/run/user/1000/gvfs/smb-share:server=192.168.1.207,share=hdd3tb1/data/IVS_EPI_BASELINE/epi_corrections_out_" + output_directory_suffix
corrections_base_directory
###Output
_____no_output_____
###Markdown
2. Specify output render directory (full path)
###Code
render_dir_full = [str(Path.joinpath(Path.cwd().parent.parent, *Path(relative).parts[2:])) for relative in [corrections_base_directory]][0] + "/render_rois"
render_dir_full
###Output
_____no_output_____
###Markdown
3. Find the MNI-normalized, ONCOHabitats tumor ROIs
###Code
ONCOHabitats_results_folder = "ONCOHabitats_results"
segment_files_relative = find_segment_files(corrections_base_directory + "/" + ONCOHabitats_results_folder)
segment_paths_relative = [Path(file) for file in segment_files_relative]
segments_files_full = [str(Path.joinpath(Path.cwd().parent.parent, *relative.parts[2:])) for relative in segment_paths_relative]
###Output
_____no_output_____
###Markdown
4. Find the corresponding MNI label files (should be identical to each other, but all found for convenience..)
###Code
[raw_label_files_e1, \
topup_label_files_e1, \
epic_label_files_e1, \
raw_label_files_e2, \
topup_label_files_e2, \
epic_label_files_e2] = find_label_files(corrections_base_directory)
[raw_label_files_e1_full, \
topup_label_files_e1_full, \
epic_label_files_e1_full, \
raw_label_files_e2_full, \
topup_label_files_e2_full, \
epic_label_files_e2_full] = \
[[str(Path.joinpath(Path.cwd().parent.parent, *Path(relative).parts[2:])) for relative in raw_label_files_e1], \
[str(Path.joinpath(Path.cwd().parent.parent, *Path(relative).parts[2:])) for relative in topup_label_files_e1], \
[str(Path.joinpath(Path.cwd().parent.parent, *Path(relative).parts[2:])) for relative in epic_label_files_e1], \
[str(Path.joinpath(Path.cwd().parent.parent, *Path(relative).parts[2:])) for relative in raw_label_files_e2], \
[str(Path.joinpath(Path.cwd().parent.parent, *Path(relative).parts[2:])) for relative in topup_label_files_e2], \
[str(Path.joinpath(Path.cwd().parent.parent, *Path(relative).parts[2:])) for relative in epic_label_files_e2]]
# A check
print("Equal number of detected label files for raw (uncorrected), topup, and epic correction methods: %r" % \
(len(raw_label_files_e1_full) == \
len(raw_label_files_e2_full) == \
len(topup_label_files_e1_full) == \
len(topup_label_files_e2_full) == \
len(epic_label_files_e1_full) == \
len(epic_label_files_e2_full)))
print("Number of subject label files: %i" % len(raw_label_files_e1_full))
###Output
Equal number of detected label files for raw (uncorrected), topup, and epic correction methods: True
Number of subject label files: 45
###Markdown
5. For the raw, topup and epic Gradient Echo (GE) and Spin Echo (SE) nrCBV data + MNI regions file, render .png images in a predefined orthographic view, then make a video. __Note:__ The following cell must run within a fsl X environment
###Code
%%bash -s "{" ".join(segments_files_full)}" "{" ".join(topup_label_files_e2_full)}" "{" ".join([render_dir_full])}"
IFS=' ' read -r -a segments_files_full <<< "$1"
IFS=' ' read -r -a topup_label_files_e2_full <<< "$2"
IFS=' ' read -r -a render_dir_full <<< "$3"
render() {
local -n _segments_files=$1
local -n _label_files=$2
local -n _render_dir=$3
# Remove existing render directory.
if [ -d "$_render_dir" ]; then rm -rd "$_render_dir"; fi
mkdir -p "$_render_dir"
for index in "${!_segments_files[@]}"
do
segments_file="${_segments_files[index]}"
label_file="${_label_files[index]}"
segments_file_name="$(basename $segments_file)"
label_file_name="$(basename $label_file)"
render_command='fsleyes
render
--outfile "'$_render_dir'/'output_$(printf "%03d\n" $index)'"
--scene ortho
--worldLoc 9.918212890625e-05 -18.000099182128906 7.999900817871094
--displaySpace
'$segments_file'
--xcentre -0.00000 -0.11019
--ycentre -0.00000 -0.11019
--zcentre -0.00000 -0.00000
--xzoom 803.0573770888577
--yzoom 809.4712464952298
--zzoom 803.057377088858
--hideLabels
--labelSize 14
--layout horizontal
--hideCursor
--bgColour 0.0 0.0 0.0
--fgColour 1.0 1.0 1.0
--cursorColour 0.0 1.0 0.0
--colourBarLocation top
--colourBarLabelSide top-left
--performance 3
'$segments_file'
--name '$segments_file_name'
--overlayType volume
--alpha 100.0
--brightness 64.94720130619707
--contrast 79.89440261239416
--cmap cool
--negativeCmap greyscale
--displayRange 0.0 12.0
--clippingRange 0.0 30.140860195159913
--gamma 0.0
--cmapResolution 256
--interpolation none
--numSteps 100
--blendFactor 0.1
--smoothing 0
--resolution 100
--numInnerSteps 10
--clipMode intersection
--volume 0
'$label_file'
--name '$label_file_name'
--overlayType volume
--alpha 37.33333333798995
--brightness 50.0
--contrast 50.0
--cmap greyscale
--negativeCmap greyscale
--displayRange 0.0 207.0
--clippingRange 0.0 209.07
--gamma 0.0
--cmapResolution 256
--interpolation none
--numSteps 100
--blendFactor 0.1
--smoothing 0
--resolution 100
--numInnerSteps 10
--clipMode intersection
--volume 0'
echo $render_command >> "$_render_dir/commands.txt"
#echo $render_command
eval $render_command
echo "--Finished rendering volume $index to .png--"
done
ffmpeg -framerate 1 -i "$_render_dir"/output_%03d.png -vf "pad=ceil(iw/2)*2:ceil(ih/2)*2" -c:v libx264 -preset slow -profile:v high -level:v 4.0 -pix_fmt yuv420p -crf 1 "$_render_dir"/video.mp4
echo "--Finished rendering pngs to video--"
}
render segments_files_full topup_label_files_e2_full render_dir_full
###Output
--Finished rendering volume 0 to .png--
--Finished rendering volume 1 to .png--
--Finished rendering volume 2 to .png--
--Finished rendering volume 3 to .png--
--Finished rendering volume 4 to .png--
--Finished rendering volume 5 to .png--
--Finished rendering volume 6 to .png--
--Finished rendering volume 7 to .png--
--Finished rendering volume 8 to .png--
--Finished rendering volume 9 to .png--
--Finished rendering volume 10 to .png--
--Finished rendering volume 11 to .png--
--Finished rendering volume 12 to .png--
--Finished rendering volume 13 to .png--
--Finished rendering volume 14 to .png--
--Finished rendering volume 15 to .png--
--Finished rendering volume 16 to .png--
--Finished rendering volume 17 to .png--
--Finished rendering volume 18 to .png--
--Finished rendering volume 19 to .png--
--Finished rendering volume 20 to .png--
--Finished rendering volume 21 to .png--
--Finished rendering volume 22 to .png--
--Finished rendering volume 23 to .png--
--Finished rendering volume 24 to .png--
--Finished rendering volume 25 to .png--
--Finished rendering volume 26 to .png--
--Finished rendering volume 27 to .png--
--Finished rendering volume 28 to .png--
--Finished rendering volume 29 to .png--
--Finished rendering volume 30 to .png--
--Finished rendering volume 31 to .png--
--Finished rendering volume 32 to .png--
--Finished rendering volume 33 to .png--
--Finished rendering volume 34 to .png--
--Finished rendering volume 35 to .png--
--Finished rendering volume 36 to .png--
--Finished rendering volume 37 to .png--
--Finished rendering volume 38 to .png--
--Finished rendering volume 39 to .png--
--Finished rendering volume 40 to .png--
--Finished rendering volume 41 to .png--
--Finished rendering volume 42 to .png--
--Finished rendering volume 43 to .png--
--Finished rendering volume 44 to .png--
--Finished rendering pngs to video--
|
notebooks/process_screening_data.ipynb | ###Markdown
Summary

This notebook reworks code Shanna wrote to process the screening data. It does the following:
1. Load data from DrugBank, ReFRAME, and Broad
2. Create RDKit molecules from the SMILES, then standardize and sanitize the molecules by removing salts
3. Calculate Morgan fingerprints
4. Save fingerprints, molecule names, and source dataset
###Code
import numpy as np
import pandas as pd
import rdkit
from molvs import Standardizer
from rdkit.Chem import PandasTools, SaltRemover, rdMolDescriptors, MolFromSmiles
from rdkit import RDLogger
RDLogger.DisableLog('rdApp.*') # suppresses annoying RDKIT errors and warnings
from tqdm import tqdm
tqdm.pandas()
###Output
/Users/schu3/.conda/envs/covid/lib/python3.7/site-packages/tqdm/std.py:668: FutureWarning: The Panel class is removed from pandas. Accessing it from the top-level namespace will also be removed in the next version
from pandas import Panel
###Markdown
1. Load Data
###Code
drugbank = PandasTools.LoadSDF('../data/screening_data/drugbank.sdf') #auto-sanitize function; don't need to do again
reframe = pd.read_csv('../data/screening_data/reframe.csv', encoding='latin1')
broad = pd.read_csv('../data/screening_data/broad.csv', delimiter="\t")
print('len(drugbank) =', len(drugbank))
print('len(reframe) =', len(reframe))
print('len(broad) =', len(broad))
# combine into one dataframe
screening_data = pd.DataFrame(columns=['source', 'name', 'smiles'])
screening_data.source = ['drugbank']*len(drugbank) + ['reframe']*len(reframe) + ['broad']*len(broad)
screening_data.name = pd.concat([drugbank.GENERIC_NAME, reframe.Name, broad.pert_iname], ignore_index=True)
screening_data.smiles = pd.concat([drugbank.SMILES, reframe.SMILES, broad.smiles], ignore_index=True)
print(f"Dropping {screening_data['smiles'].isna().sum()} rows with missing SMILES")
screening_data.dropna(inplace=True)
###Output
Dropping 6 rows with missing SMILES
###Markdown
2. Create, standardize and sanitize molecules
###Code
screening_data['rdkit_mol'] = screening_data['smiles'].progress_apply(MolFromSmiles)
print(f"Dropping {screening_data['rdkit_mol'].isna().sum()} rows which failed molecule creation")
screening_data.dropna(inplace=True)
# standardize molecules
screening_data['rdkit_mol'] = screening_data['rdkit_mol'].progress_apply(Standardizer().standardize)
# remove salts
screening_data['rdkit_mol'] = screening_data['rdkit_mol'].progress_apply(SaltRemover.SaltRemover().StripMol)
###Output
100%|██████████| 21011/21011 [00:16<00:00, 1288.47it/s]
###Markdown
3. Calculate Morgan Fingerprints
###Code
bis=[]
def calculate_morgan_fingerprint(mol):
bi={}
fp = rdMolDescriptors.GetMorganFingerprintAsBitVect(mol, radius=2, useChirality=True, bitInfo=bi)
bis.append(bi)
bit_string = fp.ToBitString()
return np.array([int(char) for char in bit_string], dtype=np.uint8)
screening_data['morgan_fingerprint'] = screening_data['rdkit_mol'].progress_apply(calculate_morgan_fingerprint)
###Output
100%|██████████| 21011/21011 [00:13<00:00, 1502.15it/s]
###Markdown
4. Save Results
###Code
screening_data['bitinfo']=bis
assert not screening_data.isna().values.any() # confirm clean data
#screening_data.drop(columns=['smiles', 'rdkit_mol']).to_pickle('../processed_data/screening_data_processed.pkl')
screening_data.to_pickle('../processed_data/screening_data_processed.pkl')
###Output
_____no_output_____ |
t.ipynb | ###Markdown
Test of the third-party time modules maya, when and pendulum (with a note on moment)
###Code
# -*- coding:utf-8 -*-
# import lib
import maya
import pendulum
import when
# import moment:require more than one package
# current date and year and week
##when
whToday = when.today()
whTodayYear = whToday.year
#no week num
whTodayWeekday = whToday.isoweekday()
whTodayFmt=when.format(whToday,"%Y-%m-%d-%A")
print(whToday)
print(whTodayYear)
print(whTodayWeekday)
print(whTodayFmt)
##maya
myToday = maya.now()
myTodayYear = myToday.year
myTodayWeekNum = myToday.week
myTodayWeekday = myToday.weekday
myTodayFmt = format(myToday,"%Y-%m-%d-%A")
print(myToday)
print(myTodayYear)
print(myTodayWeekNum)
print(myTodayWeekday)
print(myTodayFmt)
## hard to code
##pendulum
pdToday = pendulum.today()
pdTodayYear = pdToday.year
pdTodayWeekNum = pdToday.week_of_year
pdTodayWeekDay = pdToday.isoweekday()
pdTodayFmt=format(pdToday,"%Y-%m-%d-%A")
print(pdToday)
print(pdTodayYear)
print(pdTodayWeekNum)
print(pdTodayWeekDay)
print(pdTodayFmt)
#current time and minute
## when
whNow = when.now()
whNowMin = whNow.minute
whNowFmt = format(whNow,"%H:%M:%S")
print(whNow)
print(whNowMin)
print(whNowFmt)
## maya
myNow = maya.now()
myNowMin = myNow.minute
print(myNow)
print(myNowMin)
## pendulum
pdNow = pendulum.now()
pdNowMin = pdNow.minute
print(pdNow)
print(pdNowMin)
#time interval
##maya
myDateInt = maya.now()-maya.when("yesterday")
print(myDateInt)
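# a comparable interval with pendulum (a sketch; assumes pendulum.yesterday() is available, as in pendulum 2.x)
pdDateInt = pendulum.now() - pendulum.yesterday()
print(pdDateInt)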
###Output
_____no_output_____ |
MLFlow-Faction-Example.ipynb | ###Markdown
This example notebook shows how to perform some of the steps from Databricks MLflow Model Registry example, available at: https://docs.databricks.com/_static/notebooks/mlflow/mlflow-model-registry-example.html

The "Faction CCV Settings" cell is Copyright 2021 Faction Group, LLC, under the terms of the MIT license https://opensource.org/licenses/MIT

All other code is Copyright the original author(s) and is reproduced here for context.
###Code
import pandas as pd
wind_farm_data = pd.read_csv("https://github.com/dbczumar/model-registry-demo-notebook/raw/master/dataset/windfarm_data.csv", index_col=0)
def get_training_data():
training_data = pd.DataFrame(wind_farm_data["2014-01-01":"2018-01-01"])
X = training_data.drop(columns="power")
y = training_data["power"]
return X, y
def get_validation_data():
validation_data = pd.DataFrame(wind_farm_data["2018-01-01":"2019-01-01"])
X = validation_data.drop(columns="power")
y = validation_data["power"]
return X, y
def get_weather_and_forecast():
format_date = lambda pd_date : pd_date.date().strftime("%Y-%m-%d")
today = pd.Timestamp('today').normalize()
week_ago = today - pd.Timedelta(days=5)
week_later = today + pd.Timedelta(days=5)
past_power_output = pd.DataFrame(wind_farm_data)[format_date(week_ago):format_date(today)]
weather_and_forecast = pd.DataFrame(wind_farm_data)[format_date(week_ago):format_date(week_later)]
if len(weather_and_forecast) < 10:
past_power_output = pd.DataFrame(wind_farm_data).iloc[-10:-5]
weather_and_forecast = pd.DataFrame(wind_farm_data).iloc[-10:]
return weather_and_forecast.drop(columns="power"), past_power_output["power"]
wind_farm_data["2019-01-01":"2019-01-14"]
import tensorflow as tf
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential
import mlflow
# See https://www.mlflow.org/docs/latest/tracking.html for details on tracking
# We are using a remote MLFLOW server with postgres with initdb pointed at the CCV + artifacts pointed at ccv, spawned with:
# mlflow server --host 0.0.0.0 --backend-store-uri postgresql://mlflow_user:mlflow@localhost/mlflow_db --default-artifact-root file:///ccvs/multicloud/premier/artifacts
# see https://towardsdatascience.com/setup-mlflow-in-production-d72aecde7fef for an example of mflow+postgresql, but note both our backend-store and default-artifact are CCV-based
# postgres initialized with
# initdb -D /ccvs/multicloud/premier/tracking/
tracking_uri = "http://10.220.200.60:5000"
mlflow.set_tracking_uri(tracking_uri)
def train_keras_model(X, y):
model = Sequential()
model.add(Dense(100, input_shape=(X_train.shape[-1],), activation="relu", name="hidden_layer"))
model.add(Dense(1))
model.compile(loss="mse", optimizer="adam")
model.fit(X_train, y_train, epochs=100, batch_size=64, validation_split=.2)
return model
import mlflow
import mlflow.keras
import mlflow.tensorflow
X_train, y_train = get_training_data()
mlflow.set_experiment("FCTNDEMO")
with mlflow.start_run():
# Automatically capture the model's parameters, metrics, artifacts,
# and source code with the `autolog()` function
print(mlflow.get_tracking_uri())
mlflow.tensorflow.autolog()
train_keras_model(X_train, y_train)
run_id = mlflow.active_run().info.run_id
model_name = "power-forecasting-model" # Replace this with the name of your registered model, if necessary.
# The default path where the MLflow autologging function stores the model
artifact_path = "model"
model_uri = "runs:/{run_id}/{artifact_path}".format(run_id=run_id, artifact_path=artifact_path)
mr_uri = mlflow.get_registry_uri()
print("Current model registry uri: {}".format(mr_uri))
# Get the current tracking uri
tracking_uri = mlflow.get_tracking_uri()
print("Current tracking uri: {}".format(tracking_uri))
model_details = mlflow.register_model(model_uri=model_uri, name=model_name)
import time
from mlflow.tracking.client import MlflowClient
from mlflow.entities.model_registry.model_version_status import ModelVersionStatus
def wait_until_ready(model_name, model_version):
client = MlflowClient()
for _ in range(10):
model_version_details = client.get_model_version(
name=model_name,
version=model_version,
)
status = ModelVersionStatus.from_string(model_version_details.status)
print("Model status: %s" % ModelVersionStatus.to_string(status))
if status == ModelVersionStatus.READY:
break
time.sleep(1)
wait_until_ready(model_details.name, model_details.version)
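# Optional sanity check (a sketch, not part of the original example): once the version is READY,
# the registered model can be loaded back through the registry URI and used for scoring.
import mlflow.pyfunc
model_version_uri = "models:/{name}/{version}".format(name=model_details.name, version=model_details.version)
loaded_model = mlflow.pyfunc.load_model(model_version_uri)
print(loaded_model.predict(get_validation_data()[0])[:5])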
###Output
_____no_output_____ |
docs/notebooks/setting-up-shapeworks-environment.ipynb | ###Markdown
Setting Up ShapeWorks Environment

Before you start!
- This [notebook](setting-up-shapeworks-environment.ipynb) assumes that the shapeworks conda environment has been activated using `conda activate shapeworks` on the terminal.

In this notebook, you will learn:
1. How to set up the shapeworks environment
2. How to import shapeworks and test if setting up the environment was successful

We will also define modular/generic helper functions as we walk through these items to reuse functionalities without duplicating code.

Notebook keyboard shortcuts
- `Esc + H`: displays a complete list of keyboard shortcuts
- `Esc + A`: insert new cell above the current cell
- `Esc + B`: insert new cell below the current cell
- `Esc + D + D`: delete current cell
- `Esc + Z`: undo
- `Shift + enter`: run current cell and move to next
- To show a function's argument list (i.e., signature), use `(` then `shift-tab`
- Use `shift-tab-tab` to show more help for a function
- To show the help of a function, use `help(function)` or `function?`
- To show all functions supported by an object, use `dot-tab` after the variable name

1. Setting up `shapeworks` environment

To set up the shapeworks environment, please make sure to add the following paths to both your `PYTHONPATH` and your system `PATH`.
- shapeworks bin directory
- shapeworks dependencies bin directory

This can be done either by running the following commands on the terminal
```
export PYTHONPATH=/path/to/build/bin:$PYTHONPATH
export PATH=/path/to/build/bin:$PATH
```
or by appending these paths in python (see below).

What paths do we need?

In this notebook, we assume the following.
- This notebook is located in `Examples/Python/notebooks/tutorials`
- You have built shapeworks from source in the `build` directory within the shapeworks code directory
- You have built shapeworks dependencies (using `build_dependencies.sh`) in the same parent directory of the shapeworks code

**Note:** If you run from a ShapeWorks installation, you don't need to set the dependencies path and the `shapeworks_bin_dir` would be set as `../../../../bin`.
###Code
# import relevant libraries
# and indicate the bin directories for shapeworks and its dependencies
import os
import sys
import platform
# paths to be set
shapeworks_bin_dir = "../../../../build/bin"
dependencies_bin_dir = "../../../../../shapeworks-dependencies/bin"
###Output
_____no_output_____
###Markdown
Define helper functions to set up the shapeworks environment

Below, we will define the following helper functions:
- a helper function to print out the updated python path
- a helper function to print out the updated system path
- a helper function to add the shapeworks and dependencies bin directories to both paths
###Code
# helper function to print out python path
def print_python_path():
syspath = sys.path.copy()
print("\nPython path:")
for curpath in syspath:
if curpath != "":
print(curpath)
# helper function to print out system path
def print_env_path():
syspath = os.environ["PATH"].split(os.pathsep)
print("\nSystem path:")
for curpath in syspath:
if curpath != "":
print(curpath)
# helper function to add shapeworks bin directory to the path
def setup_shapeworks_env(shapeworks_bin_dir, # path to the binary directory of shapeworks
dependencies_bin_dir, # path to the binary directory of shapeworks dependencies used when running build_dependencies.sh
verbose = True):
# add shapeworks (and studio on mac) directory to python path
sys.path.append(shapeworks_bin_dir)
if platform.system() == "Darwin": # MacOS
sys.path.append(shapeworks_bin_dir + "/ShapeWorksStudio.app/Contents/MacOS")
# add shapeworks and studio to the system path
os.environ["PATH"] = shapeworks_bin_dir + os.pathsep + os.environ["PATH"]
os.environ["PATH"] = dependencies_bin_dir + os.pathsep + os.environ["PATH"]
if platform.system() == "Darwin": # MacOS
os.environ["PATH"] = shapeworks_bin_dir + "/ShapeWorksStudio.app/Contents/MacOS" + os.pathsep + os.environ["PATH"]
if verbose:
print_python_path()
print_env_path()
###Output
_____no_output_____
###Markdown
Set your shapeworks environment

Now, call your `setup_shapeworks_env` helper function to set up your shapeworks environment.
###Code
# set up shapeworks environment
setup_shapeworks_env(shapeworks_bin_dir,
dependencies_bin_dir,
verbose = False)
###Output
_____no_output_____
###Markdown
2. Importing `shapeworks` library and test environment setup
###Code
# let's import shapeworks library to test whether shapeworks is now set
# if the error is not printed, we are done with the setup
try:
import shapeworks as sw
except ImportError:
print('ERROR: shapeworks library failed to import')
else:
print('SUCCESS: shapeworks library is successfully imported!!!')
###Output
_____no_output_____ |
sessions/Sessions3_4_WebScrapingBased_DrillDown.ipynb | ###Markdown
We load the 2Day sampled conversations observed in the last two weeks
###Code
Path='A_PATH_GOES_HERE'
VacunasRelatedConversationsShortTerm= pd.concat(map(pd.read_feather, glob.glob(Path)),ignore_index=True)
end_time=datetime.datetime.now()-datetime.timedelta(minutes=20)
start_time=end_time-datetime.timedelta(days = 15)
VacunasRelatedConversationsShortTerm[VacunasRelatedConversationsShortTerm['collected_at'].between(start_time, end_time)]
ListofTweetsToDrill=VacunasRelatedConversationsShortTerm[VacunasRelatedConversationsShortTerm['collected_at'].between(start_time, end_time)]['TweetId'].drop_duplicates(keep='first', inplace=False)
len(ListofTweetsToDrill)
###Output
_____no_output_____
###Markdown
Drilling down on each tweet observed. We select the tweet IDs collected above and look up their full Tweet records with the Twitter API.
###Code
# This is where you initialize the client with your own bearer token
client = Twarc2(bearer_token=BearerToken)
# The tweet_lookup function allows looking up full Tweet objects for a list of Tweet IDs
lookup = client.tweet_lookup(tweet_ids=ListofTweetsToDrill)
ListofTweets=[]
for page in lookup:
# The Twitter API v2 returns the Tweet information and the user, media etc. separately
# so we use expansions.flatten to get all the information in a single JSON
result = expansions.flatten(page)
for tweet in result:
# Here we are printing the full Tweet object JSON to the console
ListofTweets.append(tweet)
ListofTweetsDataFrame=pd.json_normalize(ListofTweets)
ListofTweetsDataFrame
ListofTweetsDataFrame[['public_metrics.retweet_count','created_at','text']].sort_values(by='public_metrics.retweet_count',ascending=False)
ListofTweetsDataFrame['processed_at']=ListofTweetsDataFrame['__twarc.retrieved_at']
OutputPath='A_PATH_GOES_HERE'
ListofTweetsDataFrame.to_feather(OutputPath)
###Output
_____no_output_____ |
notebooks/cosmology/10_peak_count_statistics.ipynb | ###Markdown
optimal filter
###Code
# load optimal wavelet for prediction on heldout dataset
scores = pkl.load(open('results/scores_new.pkl', 'rb'))
row, col = np.unravel_index(np.argmin(scores, axis=None), scores.shape)
bd_opt = bds[row]
idx1, idx2 = list(dics[0]['wt'].keys())[col + 1] ## NEED TO CHECK
# idx2 = 4
wt = dics[0]['wt'][(idx1, idx2)]
lamL1wave = dics[0]['lamL1wave'][(idx1, idx2)]
lamL1attr = dics[0]['lamL1attr'][(idx1, idx2)]
print('lambda: {} gamma: {}'.format(lamL1wave, lamL1attr))
# AWD prediction performance
filt = get_2dfilts(wt)
h = filt[0][0]
g = filt[0][1]
kernels = extract_patches(h, g)
pcw1 = PeakCount(peak_counting_method='custom',
bins=np.linspace(0, bd_opt, 23),
kernels=kernels)
pcw1.fit(test_loader)
# original wavelet prediction performance
filt = get_2dfilts(wt_o)
h = filt[0][0]
g = filt[0][1]
kernels = extract_patches(h, g)
pcw2 = PeakCount(peak_counting_method='custom',
bins=np.linspace(0, bds[np.argmin(scores[:, 0])], 23),
kernels=kernels)
pcw2.fit(test_loader)
keys = list(pcw1.peak_list.keys())
X1 = []
X2 = []
for k in keys:
a = pcw1.peak_list[k]
b = pcw2.peak_list[k]
X1 += a
X2 += b
X1 = np.stack(X1, axis=0)
X2 = np.stack(X2, axis=0)
###Output
_____no_output_____
###Markdown
PCA
###Code
fig = plt.figure(constrained_layout=True, dpi=200, figsize=(4,4))
spec = gridspec.GridSpec(ncols=2, nrows=2, figure=fig)
colors = ['pink', 'lightblue']
n = 2000
# run pca
pca = PCA(n_components=6)
d = np.concatenate((X1,X2), axis=0)
embedding = pca.fit_transform(d)
# embedding1 vs embedding2
f_ax1 = fig.add_subplot(spec[0, 0])
h1 = plt.scatter(embedding[:n, 0], embedding[:n, 1], marker=".", s=5, alpha=0.2) #, cmap='Blues')
h2 = plt.scatter(embedding[n:, 0], embedding[n:, 1], marker=".", s=5, c='pink', alpha=0.2) #y_test, cmap='Reds')
plt.gca().set_aspect('equal', 'datalim')
blue_patch = mpatches.Patch(color='lightblue', label='AWD')
red_patch = mpatches.Patch(color='pink', label='DB5')
plt.legend((h1, h2),
('AWD', 'DB5'),
scatterpoints=1,
loc='lower left',
ncol=3,
fontsize=7,
handles=(blue_patch, red_patch))
plt.title("Peak Count Histograms PC1 vs PC2", fontsize=6)
plt.xticks([-1000, 0, 1000], fontsize=6)
plt.yticks([-1000, 0, 1000], fontsize=6)
plt.xlabel('PC1', fontsize=6)
plt.ylabel('PC2', fontsize=6)
# embedding1 vs embedding2
f_ax2 = fig.add_subplot(spec[0, 1])
b1 = plt.scatter(embedding[:n, 0], embedding[:n, 2], marker=".", s=5, alpha=0.2) #, cmap='Blues')
b2 = plt.scatter(embedding[n:, 0], embedding[n:, 2], marker=".", s=5, c='pink', alpha=0.2) #y_test, cmap='Reds')
plt.gca().set_aspect('equal', 'datalim')
# plt.legend()
plt.title("Peak Count Histograms PC1 vs PC3", fontsize=6)
plt.xticks([-1000, 0, 1000], fontsize=6)
plt.yticks([-1000, 0, 1000], fontsize=6)
# plt.colorbar(b1)
plt.xlabel('PC1', fontsize=6)
plt.ylabel('PC3', fontsize=6)
# embedding1 vs embedding2
f_ax3 = fig.add_subplot(spec[1, 0])
plt.scatter(embedding[:n, 0], embedding[:n, 3], marker=".", s=5, alpha=0.2) #, cmap='Blues')
plt.scatter(embedding[n:, 0], embedding[n:, 3], marker=".", s=5, c='pink', alpha=0.2) #y_test, cmap='Reds')
plt.gca().set_aspect('equal', 'datalim')
# plt.legend()
plt.title("Peak Count Histograms PC1 vs PC4", fontsize=6)
plt.xticks([-1000, 0, 1000], fontsize=6)
plt.yticks([-1000, 0, 1000], fontsize=6)
plt.xlabel('PC1', fontsize=6)
plt.ylabel('PC4', fontsize=6)
# embedding1 vs embedding2
f_ax4 = fig.add_subplot(spec[1, 1])
r1 = plt.scatter(embedding[:n, 0], embedding[:n, 4], marker=".", s=5, alpha=0.2) #, cmap='Blues')
r2 = plt.scatter(embedding[n:, 0], embedding[n:, 4], marker=".", s=5, c='pink', alpha=0.2) #y_test, cmap='Reds')
plt.gca().set_aspect('equal', 'datalim')
plt.title("Peak Count Histograms PC1 vs PC5", fontsize=6)
plt.xticks([-1000, 0, 1000], fontsize=6)
plt.yticks([-1000, 0, 1000], fontsize=6)
# plt.colorbar(r2)
plt.xlabel('PC1', fontsize=6)
plt.ylabel('PC5', fontsize=6)
plt.tight_layout()
plt.show()
# run pca
pcas = []
pcas_db5 = []
for idx in range(4):
pca = PCA(n_components=10)
pca.fit_transform(X1)
pcas.append(deepcopy(pca))
pca = PCA(n_components=10)
pca.fit_transform(X2)
pcas_db5.append(deepcopy(pca))
# embedding = pca.fit_transform(d)
fig = plt.figure(constrained_layout=True, dpi=200, figsize=(2,2))
spec = gridspec.GridSpec(ncols=1, nrows=1, figure=fig)
n = 2000
# plot 1
f_ax1 = fig.add_subplot(spec[0, 0])
plt.plot(pcas[0].explained_variance_ratio_*100, ".", alpha=.5, label='AWD')
plt.plot(pcas_db5[0].explained_variance_ratio_*100, ".", color="pink", alpha=.5, label='DB5')
plt.xlabel("PC")
plt.ylabel("Percentage varaince explained (%)", fontsize=6)
plt.title("Peak Count Histograms Scree plot", fontsize=6)
plt.legend()
plt.show()
###Output
_____no_output_____ |
Sentiment-Analysis-with-BernoulliNB.ipynb | ###Markdown
Get All the necessary packages
###Code
from collections import Counter
import joblib
import numpy as np
import pandas as pd
from tqdm.notebook import tqdm
import matplotlib.pyplot as plt
from sklearn.pipeline import Pipeline
from sklearn.naive_bayes import BernoulliNB
from sklearn.model_selection import StratifiedKFold
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.metrics import (
classification_report, confusion_matrix,
f1_score as calculate_f1_score, accuracy_score as calculate_accuracy_score
)
###Output
_____no_output_____
###Markdown
Get All the necessary utilities
###Code
## utilities
from utils import CleanTextTransformer, load_imdb_sentiment_analysis_dataset
###Output
[nltk_data] Downloading package stopwords to
[nltk_data] C:\Users\christian\AppData\Roaming\nltk_data...
[nltk_data] Package stopwords is already up-to-date!
[nltk_data] Downloading package wordnet to
[nltk_data] C:\Users\christian\AppData\Roaming\nltk_data...
[nltk_data] Package wordnet is already up-to-date!
[nltk_data] Downloading package punkt to
[nltk_data] C:\Users\christian\AppData\Roaming\nltk_data...
[nltk_data] Package punkt is already up-to-date!
###Markdown
Load Data
###Code
(X_train, y_train), (X_test, y_test) = load_imdb_sentiment_analysis_dataset(imdb_data_path='aclImdb')
###Output
loading train: pos ...
###Markdown
Visualize dataset size
###Code
keys, values, labels = [], [], []
count = Counter(y_train)
for key, value in count.items():
keys.append(key)
values.append(value)
labels.append("positive" if value else "negative")
print(count)
print()
barlist = plt.bar(keys, values)
plt.title("Frequency of Sentiments")
plt.xticks(keys, labels)
plt.ylabel('Number of Reviews')
plt.xlabel('Sentiment expressed in Reviews')
barlist[0].set_color('red')
barlist[1].set_color('green')
plt.show()
###Output
Counter({0: 12500, 1: 12500})
###Markdown
Using CountVectorizer Create pipeline
###Code
pipeNB = Pipeline([
("clean_text", CleanTextTransformer()),
('count', CountVectorizer(stop_words="english")),
('classifier', BernoulliNB())
])
###Output
_____no_output_____
###Markdown
Fit the model
###Code
pipeNB.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Save model instance
###Code
joblib.dump(pipeNB, "models/bernoulli_naive_bayes_with_count_vectorizer.joblib")
###Output
_____no_output_____
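###Markdown
As a quick sanity check (a sketch; the review strings below are made-up examples, and the pipeline is assumed to accept raw text, as it did during training), the saved pipeline can be loaded back and used directly:
###Code
loaded_pipe = joblib.load("models/bernoulli_naive_bayes_with_count_vectorizer.joblib")
# 1 = positive sentiment, 0 = negative sentiment
print(loaded_pipe.predict([
    "A wonderful movie with great acting and a touching story.",
    "Absolutely terrible, I wasted two hours of my life.",
]))
###Output
_____no_output_____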
###Markdown
Evaluate model get the prediction (of unseen data)
###Code
y_pred = pipeNB.predict(X_test)
###Output
_____no_output_____
###Markdown
evaluate fitted model
###Code
print("Classification Report")
print("===================================")
print(classification_report(y_test, y_pred))
print("Confusion Matrix")
print("===================================")
print(confusion_matrix(y_test, y_pred))
###Output
Confusion Matrix
===================================
[[11126 1374]
[ 3185 9315]]
###Markdown
perform cross validation
###Code
accuracy, f1_score = [], []
skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=100)
for train_index, test_index in tqdm(skf.split(X_train, y_train), total=10):
X_train_fold, X_test_fold = X_train[train_index], X_train[test_index]
y_train_fold, y_test_fold = y_train[train_index], y_train[test_index]
pipeNB.fit(X_train_fold, y_train_fold)
y_pred = pipeNB.predict(X_test_fold)
accuracy.append(calculate_accuracy_score(y_test_fold, y_pred))
f1_score.append(calculate_f1_score(y_test_fold, y_pred))
# make as array
f1_score = np.array(f1_score)
accuracy = np.array(accuracy)
print('\nModel Metrics ==> ')
print("================================================")
print(f'{"descr":5s} | {"accuracy":^10s} | {"f1_score":^10s}')
print("================================================")
print(f'{"Max":5s} | {accuracy.max():^10.2f} | {f1_score.max():^10.2f}')
print(f'{"Min":5s} | {accuracy.min():^10.2f} | {f1_score.min():^10.2f}')
print(f'{"Mean":5s} | {accuracy.mean():^10.2f} | {f1_score.mean():^10.2f}')
###Output
_____no_output_____
###Markdown
Using TfidfVectorizer Create pipeline
###Code
pipeNB = Pipeline([
("clean_text", CleanTextTransformer()),
('tfidf', TfidfVectorizer(stop_words="english")),
('classifier', BernoulliNB())
])
###Output
_____no_output_____
###Markdown
Fit the model
###Code
pipeNB.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Save model instance
###Code
joblib.dump(pipeNB, "models/bernoulli_naive_bayes_with_tfidf_vectorizer.joblib")
###Output
_____no_output_____
###Markdown
Evaluate model get the prediction (of unseen data)
###Code
y_pred = pipeNB.predict(X_test)
###Output
_____no_output_____
###Markdown
evaluate fitted model
###Code
print("Classification Report")
print("===================================")
print(classification_report(y_test, y_pred))
print("Confusion Matrix")
print("===================================")
print(confusion_matrix(y_test, y_pred))
###Output
Confusion Matrix
===================================
[[11126 1374]
[ 3185 9315]]
###Markdown
perform cross validation
###Code
accuracy, f1_score = [], []
skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=100)
for train_index, test_index in tqdm(skf.split(X_train, y_train), total=10):
X_train_fold, X_test_fold = X_train[train_index], X_train[test_index]
y_train_fold, y_test_fold = y_train[train_index], y_train[test_index]
pipeNB.fit(X_train_fold, y_train_fold)
y_pred = pipeNB.predict(X_test_fold)
accuracy.append(calculate_accuracy_score(y_test_fold, y_pred))
f1_score.append(calculate_f1_score(y_test_fold, y_pred))
# make as array
f1_score = np.array(f1_score)
accuracy = np.array(accuracy)
print('\nModel Metrics ==> ')
print("================================================")
print(f'{"descr":5s} | {"accuracy":^10s} | {"f1_score":^10s}')
print("================================================")
print(f'{"Max":5s} | {accuracy.max():^10.2f} | {f1_score.max():^10.2f}')
print(f'{"Min":5s} | {accuracy.min():^10.2f} | {f1_score.min():^10.2f}')
print(f'{"Mean":5s} | {accuracy.mean():^10.2f} | {f1_score.mean():^10.2f}')
###Output
_____no_output_____ |
A - 柱状图/高级条形图 - 01/高级柱状图MA_A_07 2.ipynb | ###Markdown
Matplotlib Gallery: Advanced Bar Charts. WeChat public account: 可视化图鉴. Note: the code was tested in the following environment: - Python 3.7.1 - Matplotlib == 3.0.2 - pandas == 1.2.0 - numpy == 1.15.4. Because of version differences there may be some syntax differences; if you get an error, first check your spelling and whether the versions match!
###Code
import matplotlib.pyplot as plt
from matplotlib.offsetbox import TextArea, DrawingArea, OffsetImage, AnnotationBbox
import matplotlib.image as mpimg
import seaborn as sns
import matplotlib as mpl
import pandas as pd  # used below for pd.read_excel, but missing from the original imports
WRYH = mpl.font_manager.FontProperties(fname = '/Users/liuhuanshuo/Desktop/可视化图鉴/font/WeiRuanYaHei-1.ttf') #微软雅黑字体
df = pd.read_excel("店铺数量.xlsx").loc[0:3]
plt.rcParams['font.sans-serif'] = ['SimHei']
x = ['北京','上海','广州','深圳']
y1 = list(df['沙县小吃'])
y2 = list(df['兰州拉面'])
y3 = list(df['星巴克'])
y4 = list(df['瑞幸咖啡'])
y5 = list(df['肯德基'])
y6 = list(df['麦当劳'])
sns.set_palette(palette="pastel",n_colors = 6)
WRYH = mpl.font_manager.FontProperties(fname = '/Users/liuhuanshuo/Desktop/可视化图鉴/font/WeiRuanYaHei-1.ttf') #微软雅黑字体
plt.figure(figsize=(10,7),dpi = 120)#设置画布的尺寸
plt.bar(x, y1, label="沙县小吃",edgecolor = 'black',width = 0.8,linewidth = 1.3)
plt.bar(x, y2, label="兰州拉面",edgecolor = 'black',bottom = y1,width = 0.8,linewidth = 1.3)
plt.bar(x, y3, label="星巴克",edgecolor = 'black',bottom = [y1[i]+y2[i] for i in range(4)],width = 0.8,linewidth = 1.3)
plt.bar(x, y4, label="瑞幸咖啡",edgecolor = 'black',bottom = [y1[i]+y2[i]+y3[i] for i in range(4)],width = 0.8,linewidth = 1.3)
plt.bar(x, y5, label="肯德基",edgecolor = 'black',bottom = [y1[i]+y2[i]+y3[i]+y4[i] for i in range(4)],width = 0.8,linewidth = 1.3)
plt.bar(x, y6, label="麦当劳",edgecolor = 'black',bottom = [y1[i]+y2[i]+y3[i]+y4[i]+y5[i] for i in range(4)],width = 0.8,linewidth = 1.3)
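# Note: the running bottoms written out above could also be built in one step, e.g.
# (sketch, assumes numpy is imported as np):
#   bottoms = np.cumsum([y1, y2, y3, y4, y5], axis=0)
# where bottoms[i-1] would serve as the `bottom` argument of the (i+1)-th plt.bar call.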
plt.legend(loc=0,ncol = 6,fontsize = 13, bbox_to_anchor=(1.05, 0)) # 设置图例位置
plt.rcParams['xtick.top'] = True
plt.rcParams['xtick.labeltop'] = True
plt.rcParams['xtick.bottom'] = False
plt.xticks(fontsize=13,fontproperties = WRYH)
plt.yticks([])
#调整坐标轴
ax = plt.gca()
ax.spines['right'].set_color('None')#隐藏边缘线
ax.spines['left'].set_color('None')#隐藏边缘线
ax.spines['bottom'].set_color('None')#隐藏边缘线
ax.spines['top'].set_linewidth(1.2)#隐藏边缘线
ax.tick_params(which='major',length=0) #不显示刻度线
plt.gca().invert_yaxis() #反转坐标轴
plt.text(-1,-3000,"一线城市快餐品牌分布图",family = 'Songti SC',fontsize = 18)
plt.text(-1,-2000,"数据来源:大众点评采集(截止2020年12月)",fontproperties = WRYH,fontsize = 14)
plt.text(-1,-1000,"公众号:可视化图鉴(id:zaoqi-data)",fontproperties = WRYH,fontsize = 13)
#添加文字,即数值标注
for j in range(4):
flag = 0
for i in range(6):
y_position = flag + df.loc[j][i+1]/2
plt.text(j,y_position,int(df.loc[j][i+1]),bbox=dict(boxstyle='round', fc='w', ec='black',lw=1 ,alpha=1),fontsize = 11,
verticalalignment = 'center',horizontalalignment = 'center')
flag = flag + df.loc[j][i+1]
#以下为添加图片,可以忽略
arr_lena = mpimg.imread('/Users/liuhuanshuo/Downloads/带二维码logo.jpg')
imagebox = OffsetImage(arr_lena, zoom=0.15)
a1 = AnnotationBbox(imagebox, (3, 0), frameon = False,xycoords='data',
boxcoords=("axes fraction", "data"),
box_alignment=(7,-0.3))
ax.add_artist(a1)
plt.show()
###Output
_____no_output_____ |
Programmeerelementen/Structuren/0300_Keuzestructuur.ipynb | ###Markdown
CONDITIONAL STRUCTURE In this notebook you will get to know the conditional choice. 1. Conditional structure Example 1.1 Run the following code.
###Code
# keuzestructuur
getal = float(input("Geef een getal: ")) # na input() steeds aangezien als string, dus typecasting gebruiken
if getal > 0:
print(getal, "is strikt positief.")
elif getal < 0:
print(getal, "is strikt negatief.")
else:
print(getal, "is nul.")
###Output
Geef een getal: 0
0.0 is nul.
###Markdown
The keyword combination if - elif - else stands for 'if - else if - otherwise'. Exercise 1.1 Write a script that asks the user for a word and then reports whether the word contains exactly 8 letters, or more or fewer than 8.
###Code
# voorbeeldscript
woord = input("Geef een woord: ") # na input() steeds aangezien als string, dus typecasting gebruiken
if len(woord) > 8:
print(woord, "heeft meer dan 8 letters.")
elif len(woord) < 8:
print(woord, "heeft minder dan 8 letters.")
else:
print(woord, "heeft 8 letters.")
###Output
Geef een woord: boterhammetje
boterhammetje heeft meer dan 8 letters.
###Markdown
Exercise 1.2 Write a script that asks the user for the slopes of two lines (in the plane) and then reports whether the lines are parallel or intersect.
###Code
# voorbeeldscript
rico1 = float(input("Geef de rico van twee rechten in. Eerste rico: ")) # na input() steeds aangezien als string, dus typecasting gebruiken
rico2 = float(input("Tweede rico: ")) # na input() steeds aangezien als string, dus typecasting gebruiken
if rico1 == rico2:
print("De rechten zijn evenwijdig.")
else:
print("De rechten snijden.")
###Output
Geef de rico van twee rechten in. Eerste rico: -2
Tweede rico: 7
De rechten snijden.
###Markdown
Exercise 1.3 Write a script that asks the user for the slope of a line and then reports whether the line rises, falls, or is parallel to the x-axis.
###Code
# voorbeeldscript
rico = float(input("Geef de rico van een rechte in: ")) # na input() steeds aangezien als string, dus typecasting gebruiken
if rico > 0:
print("De rechte stijgt.")
elif rico < 0:
print("De rechte daalt.")
else:
print("De rechte is evenwijdig met de x-as.")
###Output
Geef de rico van een rechte in: -3
De rechte daalt.
###Markdown
2. Nested structure When structures are interwoven, we speak of nested structures. Example 2.1 Run the following code.
###Code
# geneste structuur: herhalings- en keuzestructuur
getallen = [3, 7, 1, 13, 2, 4, 5, 11, 15]
for getal in getallen:
if getal % 2 == 0:
print("Het getal", getal, "is even.")
else:
print("Het getal", getal, "is oneven.")
###Output
Het getal 3 is oneven.
Het getal 7 is oneven.
Het getal 1 is oneven.
Het getal 13 is oneven.
Het getal 2 is even.
Het getal 4 is even.
Het getal 5 is oneven.
Het getal 11 is oneven.
Het getal 15 is oneven.
###Markdown
Exercise 2.1 Extend the code so that a list of the even numbers and a list of the odd numbers are shown.
###Code
# voorbeeldscript
getallen = [3, 7, 1, 13, 2, 4, 5, 11, 15]
even = []
oneven = []
for getal in getallen:
if getal % 2 == 0:
print("Het getal", getal, "is even.")
even.append(getal)
else:
print("Het getal", getal, "is oneven.")
oneven.append(getal)
print(even)
print(oneven)
###Output
Het getal 3 is oneven.
Het getal 7 is oneven.
Het getal 1 is oneven.
Het getal 13 is oneven.
Het getal 2 is even.
Het getal 4 is even.
Het getal 5 is oneven.
Het getal 11 is oneven.
Het getal 15 is oneven.
[2, 4]
[3, 7, 1, 13, 5, 11, 15]
###Markdown
Example 2.2 Run the following code.
###Code
# geneste structuur
wiskundigen = ["Fermat", "Gauss", "Euler", "Fibonacci", "Galois", "Noether", "Nightingale", "Lovelace"]
aantal_vrouwen = 0
for persoon in wiskundigen:
geslacht = "vrouwelijk"
if persoon[0] in ["N", "L"]:
aantal_vrouwen += 1
else:
geslacht = "mannelijk"
print(persoon, "is van het geslacht:", geslacht)
print("Er zijn", aantal_vrouwen, "vrouwelijke wiskundigen in de lijst.")
###Output
Fermat is van het geslacht: mannelijk
Gauss is van het geslacht: mannelijk
Euler is van het geslacht: mannelijk
Fibonacci is van het geslacht: mannelijk
Galois is van het geslacht: mannelijk
Noether is van het geslacht: vrouwelijk
Nightingale is van het geslacht: vrouwelijk
Lovelace is van het geslacht: vrouwelijk
Er zijn 3 vrouwelijke wiskundigen in de lijst.
###Markdown
Exercise 2.2 Extend the code so that the number of male mathematicians is also shown, without counting them explicitly.
###Code
# voorbeeldscript
wiskundigen = ["Fermat", "Gauss", "Euler", "Fibonacci", "Galois", "Noether", "Nightingale", "Lovelace"]
aantal_vrouwen = 0
for persoon in wiskundigen:
geslacht = "vrouwelijk"
if persoon[0] in ["N", "L"]:
aantal_vrouwen += 1
else:
geslacht = "mannelijk"
print(persoon, "is van het geslacht:", geslacht)
aantal_mannen = len(wiskundigen) - aantal_vrouwen
print("Er zijn", aantal_vrouwen, "vrouwelijke wiskundigen in de lijst.")
print(aantal_mannen)
###Output
Fermat is van het geslacht: mannelijk
Gauss is van het geslacht: mannelijk
Euler is van het geslacht: mannelijk
Fibonacci is van het geslacht: mannelijk
Galois is van het geslacht: mannelijk
Noether is van het geslacht: vrouwelijk
Nightingale is van het geslacht: vrouwelijk
Lovelace is van het geslacht: vrouwelijk
Er zijn 3 vrouwelijke wiskundigen in de lijst.
5
###Markdown
Challenge 2.1 Write a script that prints 12 multiples of 3. The script must start from the given list and must 'clean up' and extend that list.
###Code
lijst = [1, 3, 6, 7, 8, 9]
###Output
_____no_output_____
###Markdown
Approach: When Python iterates over a list in a for loop, it uses the index of each element in that list. If you remove an element, the remaining elements get a new index, so the element following the removed one ends up at an index the loop has already visited. Because the loop variable (the iterator) is still advanced, that next element is skipped. To avoid this, first store the elements to be removed in a new list, and afterwards remove all elements of that list one by one (a small illustration of this pitfall is shown in the next code cell). You can also add extra print statements to your script so you can see what happens; afterwards you can 'remove' them again by placing a # in front of them.
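###Code
# Minimal illustration (not part of the exercise) of the pitfall described above:
# removing elements while iterating shifts later elements onto indices the loop
# has already passed, so some elements are skipped.
voorbeeld = [1, 1, 2, 3]
for getal in voorbeeld:
    if getal % 2 != 0:
        voorbeeld.remove(getal)
# voorbeeld is now [1, 2]: the second 1 was skipped by the loop and therefore never removed
###Output
_____no_output_____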
###Code
# voorbeeldscript
lijst = [1, 3, 6, 7, 8, 9]
# opkuisen; na opkuis is het eerste element in de lijst 1*3
verwijderen = []
for getal in lijst:
print("getal =", getal)
if getal % 3 != 0:
verwijderen.append(getal)
print(getal, "wordt verwijderd.")
print(verwijderen)
for getal in verwijderen:
lijst.remove(getal)
# aantal elementen in lijst nagaan
aantal = len(lijst)
print("aantal veelvouden =", aantal)
# aanvullen tot er 12 elementen in lijst zijn
for i in range(aantal+1, 13):
# print("i =", i)
veelvoud = 3 * i
lijst.append(veelvoud)
print(lijst)
###Output
getal = 1
1 wordt verwijderd.
getal = 3
getal = 6
getal = 7
7 wordt verwijderd.
getal = 8
8 wordt verwijderd.
getal = 9
[1, 7, 8]
aantal veelvouden = 3
[3, 6, 9, 12, 15, 18, 21, 24, 27, 30, 33, 36]
###Markdown
Challenge 2.2 Complete and improve the code so that: - every multiple of 3 is removed from the list getallen, unless it is also a multiple of 5; - every multiple of 7 is removed from the list getallen, unless it is also a multiple of 2. Make a second list containing the remaining numbers, each multiplied by 10. Finally print both lists. Tip: first look for the syntax errors, and only then complete the code until it does what it is supposed to do.
###Code
getallen = [3, 7, 1, 25, 10, 8, 30, 40, 28, 49, 33, 13, 37, 101, 56, 111, 2, 4, 5, 11, 15]
lijst2 = []
for getal in getallen:
print(getal)
if getal % 3 == 0:
if getal % 5 != 0
getallen.remove(getal)
else:
lijst2.append(getal * 10)
elif getal % == 0:
if getal % != 0:
getallen.remove(getal)
else
lijst2.append(getal * 10)
else:
lijst2.append(getal * 10)
print getallen
# voorbeeldscript
# syntax
getallen = [3, 7, 1, 25, 10, 8, 30, 40, 28, 49, 33, 13, 37, 101, 56, 111, 2, 4, 5, 11, 15]
lijst2 = []
for getal in getallen:
print("getal =", getal)
if getal % 3 == 0:
if getal % 5 != 0:
getallen.remove(getal)
else:
lijst2.append(getal * 10)
elif getal % 7 == 0:
if getal % 2 != 0:
getallen.remove(getal)
else:
lijst2.append(getal * 10)
else:
lijst2.append(getal * 10)
print (getallen)
# voorbeeldscript
# code vervolledigen
getallen = [3, 7, 1, 25, 10, 8, 30, 40, 28, 49, 33, 13, 37, 101, 56, 111, 2, 4, 5, 11, 15]
lijst2 = []
verwijderen = []
for getal in getallen:
print("getal =", getal)
if getal % 3 == 0:
if getal % 5 != 0:
verwijderen.append(getal)
print(getal, "wordt verwijderd.")
else:
lijst2.append(getal * 10)
print(getal, "wordt niet verwijderd.")
elif getal % 7 == 0:
if getal % 2 != 0:
verwijderen.append(getal)
print(getal, "wordt verwijderd.")
else:
lijst2.append(getal * 10)
print(getal, "wordt niet verwijderd.")
else:
lijst2.append(getal * 10)
print(getal, "wordt niet verwijderd.")
# verwijderen
for getal in verwijderen:
getallen.remove(getal)
print(getallen)
print(lijst2)
###Output
getal = 3
3 wordt verwijderd.
getal = 7
7 wordt verwijderd.
getal = 1
1 wordt niet verwijderd.
getal = 25
25 wordt niet verwijderd.
getal = 10
10 wordt niet verwijderd.
getal = 8
8 wordt niet verwijderd.
getal = 30
30 wordt niet verwijderd.
getal = 40
40 wordt niet verwijderd.
getal = 28
28 wordt niet verwijderd.
getal = 49
49 wordt verwijderd.
getal = 33
33 wordt verwijderd.
getal = 13
13 wordt niet verwijderd.
getal = 37
37 wordt niet verwijderd.
getal = 101
101 wordt niet verwijderd.
getal = 56
56 wordt niet verwijderd.
getal = 111
111 wordt verwijderd.
getal = 2
2 wordt niet verwijderd.
getal = 4
4 wordt niet verwijderd.
getal = 5
5 wordt niet verwijderd.
getal = 11
11 wordt niet verwijderd.
getal = 15
15 wordt niet verwijderd.
[1, 25, 10, 8, 30, 40, 28, 13, 37, 101, 56, 2, 4, 5, 11, 15]
[10, 250, 100, 80, 300, 400, 280, 130, 370, 1010, 560, 20, 40, 50, 110, 150]
|
toy_simulations/deconfounder_pca_logistic.ipynb | ###Markdown
The deconfounder: a PCA factor model + a logistic outcome model
###Code
import numpy.random as npr
import statsmodels.api as sm
import scipy
import numpy as np
from sklearn import linear_model
from sklearn.decomposition import PCA
from sklearn.datasets import make_spd_matrix
from scipy import stats
stats.chisqprob = lambda chisq, df: stats.chi2.sf(chisq, df)
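# shim: newer SciPy releases removed stats.chisqprob, which older statsmodels versions still call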
# import time
# timenowseed = int(time.time())
# npr.seed(timenowseed)
# print(timenowseed)
npr.seed(1534727263)
n = 10000 # number of data points
d = 3 # number of causes (=2) + number of confounders (=1)
###Output
_____no_output_____
###Markdown
A simulated dataset simulate correlated causes
###Code
corrcoef = 0.4
stdev = np.ones(d)
corr = np.eye(d) * (1-corrcoef) + np.ones([d,d]) * corrcoef
print("correlation \n", corr)
b = np.matmul(stdev[:,np.newaxis], stdev[:,np.newaxis].T)
cov = np.multiply(b, corr)
mean = np.zeros(d)
# cov = make_spd_matrix(3)
print("covariance \n", cov)
X = npr.multivariate_normal(mean, cov, n)
###Output
correlation
[[1. 0.4 0.4]
[0.4 1. 0.4]
[0.4 0.4 1. ]]
covariance
[[1. 0.4 0.4]
[0.4 1. 0.4]
[0.4 0.4 1. ]]
###Markdown
simulate the outcome
###Code
coef = np.array([0.2, 1.0, 0.9])
assert len(coef) == d
intcpt = 0.4
y = npr.binomial(1, np.exp(intcpt+coef.dot(X.T))/(1+np.exp(intcpt+coef.dot(X.T))))
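# y ~ Bernoulli(sigmoid(intcpt + coef . X)); the third column of X (coefficient 0.9) will act
# as the unobserved confounder once it is dropped from the observed causes below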
###Output
_____no_output_____
###Markdown
noncausal estimation: classical logistic regression
###Code
obs_n = d - 1
obs_X = X[:,:obs_n]
#ignore confounder
x2 = sm.add_constant(obs_X)
models = sm.Logit(y,x2)
result = models.fit()
print(result.summary())
###Output
Optimization terminated successfully.
Current function value: 0.540806
Iterations 6
Logit Regression Results
==============================================================================
Dep. Variable: y No. Observations: 10000
Model: Logit Df Residuals: 9997
Method: MLE Df Model: 2
Date: Thu, 20 Sep 2018 Pseudo R-squ.: 0.2107
Time: 01:54:58 Log-Likelihood: -5408.1
converged: True LL-Null: -6851.6
LLR p-value: 0.000
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
const 0.3640 0.024 15.325 0.000 0.317 0.411
x1 0.4353 0.027 16.407 0.000 0.383 0.487
x2 1.1362 0.031 36.899 0.000 1.076 1.197
==============================================================================
###Markdown
* The true causal coefficient is (0.2, 1.0). * But with the classical logistic regression, none of the confidence intervals include the truth. causal inference: the deconfounder with a PCA factor model fit a PCA
###Code
n_comp = 1
eps = 0.1
pca = PCA(n_components=n_comp)
pca.fit(obs_X)
pca.components_
print(pca.explained_variance_ratio_)
###Output
[0.70374746]
###Markdown
compute the substitute confounder Z and the reconstructed causes A
###Code
Z = obs_X.dot(pca.components_.T) + npr.normal(scale=eps,size=(n,1))
A = np.dot(pca.transform(obs_X)[:,:n_comp], pca.components_[:n_comp,:]) + npr.normal(scale=eps,size=(n,obs_n))
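# Z: substitute confounder = noisy projection of the observed causes onto the first principal component
# A: noisy reconstruction of the observed causes from that same principal component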
X_pca_A = np.hstack((obs_X, A))
X_pca_Z = np.hstack((obs_X, Z))
###Output
_____no_output_____
###Markdown
causal estimation with the reconstructed causes A
###Code
x2 = sm.add_constant(X_pca_A)
models = sm.Logit(y,x2)
result = models.fit()
print(result.summary())
###Output
Optimization terminated successfully.
Current function value: 0.540465
Iterations 6
Logit Regression Results
==============================================================================
Dep. Variable: y No. Observations: 10000
Model: Logit Df Residuals: 9995
Method: MLE Df Model: 4
Date: Thu, 20 Sep 2018 Pseudo R-squ.: 0.2112
Time: 01:54:58 Log-Likelihood: -5404.7
converged: True LL-Null: -6851.6
LLR p-value: 0.000
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
const 0.3572 0.024 14.809 0.000 0.310 0.404
x1 0.1832 0.173 1.061 0.288 -0.155 0.521
x2 0.8884 0.171 5.199 0.000 0.553 1.223
x3 0.6134 0.241 2.550 0.011 0.142 1.085
x4 -0.1160 0.234 -0.496 0.620 -0.574 0.342
==============================================================================
###Markdown
* The true causal coefficient is (0.2, 1.0). * But with the deconfounder, both of the confidence intervals (for x1, x2) include the truth. causal estimation with the substitute confounder Z
###Code
x2 = sm.add_constant(X_pca_Z)
models = sm.Logit(y,x2)
result = models.fit()
print(result.summary())
###Output
Optimization terminated successfully.
Current function value: 0.540708
Iterations 6
Logit Regression Results
==============================================================================
Dep. Variable: y No. Observations: 10000
Model: Logit Df Residuals: 9996
Method: MLE Df Model: 3
Date: Thu, 20 Sep 2018 Pseudo R-squ.: 0.2108
Time: 01:54:58 Log-Likelihood: -5407.1
converged: True LL-Null: -6851.6
LLR p-value: 0.000
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
const 0.3638 0.024 15.314 0.000 0.317 0.410
x1 0.2002 0.170 1.176 0.239 -0.133 0.534
x2 0.9060 0.167 5.411 0.000 0.578 1.234
x3 -0.3298 0.236 -1.398 0.162 -0.792 0.133
==============================================================================
###Markdown
* The true causal coefficient is (0.2, 1.0). * But with the deconfounder, both of the confidence intervals (for x1, x2) include the truth. The oracle case: when the confounder is observed
###Code
# oracle
x2 = sm.add_constant(X)
models = sm.Logit(y,x2)
result = models.fit()
print(result.summary())
###Output
Optimization terminated successfully.
Current function value: 0.496111
Iterations 6
Logit Regression Results
==============================================================================
Dep. Variable: y No. Observations: 10000
Model: Logit Df Residuals: 9996
Method: MLE Df Model: 3
Date: Thu, 20 Sep 2018 Pseudo R-squ.: 0.2759
Time: 01:54:58 Log-Likelihood: -4961.1
converged: True LL-Null: -6851.6
LLR p-value: 0.000
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
const 0.4149 0.025 16.480 0.000 0.366 0.464
x1 0.2405 0.028 8.474 0.000 0.185 0.296
x2 1.0272 0.032 31.862 0.000 0.964 1.090
x3 0.8601 0.031 27.771 0.000 0.799 0.921
==============================================================================
|
pygt_example/reswnet.ipynb | ###Markdown
PyGreentea Network Generator Load the dependencies
###Code
%matplotlib inline
from __future__ import print_function
import h5py
import numpy as np
from numpy import float32, int32, uint8, dtype
import sys
import matplotlib.pyplot as plt
import copy
pygt_path = '../PyGreentea'
import sys, os
sys.path.append(os.path.join(os.path.dirname(os.getcwd()), pygt_path))
import math
import PyGreentea as pygt
###Output
/media/c_drive/Users/Fabian/Documents/ETH/BachelorThesis/eth_bsc/caffe_gt/python/caffe/pycaffe.py:13: RuntimeWarning: to-Python converter for boost::shared_ptr<caffe::Net<float> > already registered; second conversion method ignored.
from ._caffe import \
/media/c_drive/Users/Fabian/Documents/ETH/BachelorThesis/eth_bsc/caffe_gt/python/caffe/pycaffe.py:13: RuntimeWarning: to-Python converter for boost::shared_ptr<caffe::Blob<float> > already registered; second conversion method ignored.
from ._caffe import \
/media/c_drive/Users/Fabian/Documents/ETH/BachelorThesis/eth_bsc/caffe_gt/python/caffe/pycaffe.py:13: RuntimeWarning: to-Python converter for boost::shared_ptr<caffe::Solver<float> > already registered; second conversion method ignored.
from ._caffe import \
###Markdown
Load the default network template
###Code
netconf = pygt.netgen.NetConf()
###Output
_____no_output_____
###Markdown
Set the memory limits for the GPU
###Code
# We use cuDNN, so:
netconf.ignore_conv_buffer = True
# 4 GB total, ignore convolution buffer. Let's keep 0.5 GB for implementation dependent buffers.
netconf.mem_global_limit = 3.5 * 1024 * 1024 * 1024
# 3.5 GB convolution buffer limit (kept in line with the global limit above)
netconf.mem_buf_limit = 3.5 * 1024 * 1024 * 1024
###Output
_____no_output_____
###Markdown
Explore possible network input/output shapes for the chosen settings
###Code
# We test memory usage for training
mode = pygt.netgen.caffe_pb2.TRAIN
# The minimum we're interested in
shape_min = [100,100,100]
# And maximum
shape_max = [200,200,200]
# We want Z == X == Y constrained
constraints = [None, lambda x: x[0], lambda x: x[1]]
# Create a network with no context loss (at least for training)
netconf.u_netconfs[0].use_deconvolution_uppath = True
# Create a W-Net (two U-Nets concatenated)
netconf.u_netconfs += [copy.deepcopy(netconf.u_netconfs[0])]
# Run a shortcut (deep residual, additive) over the first U-Net
netconf.u_netconfs[0].bridge = True
# Run a shortcut (deep residual, additive) over the second U-Net
netconf.u_netconfs[1].bridge = True
# Compute (can be quite intensive)
inshape, outshape, fmaps = pygt.netgen.compute_valid_io_shapes(netconf,mode,shape_min,shape_max,constraints=constraints)
###Output
++++ Valid: [100] => [100]
-- Invalid: [101] => []
-- Invalid: [102] => []
-- Invalid: [103] => []
-- Invalid: [104] => []
-- Invalid: [105] => []
-- Invalid: [106] => []
-- Invalid: [107] => []
++++ Valid: [108] => [108]
-- Invalid: [109] => []
-- Invalid: [110] => []
-- Invalid: [111] => []
-- Invalid: [112] => []
-- Invalid: [113] => []
-- Invalid: [114] => []
-- Invalid: [115] => []
++++ Valid: [116] => [116]
-- Invalid: [117] => []
-- Invalid: [118] => []
-- Invalid: [119] => []
-- Invalid: [120] => []
-- Invalid: [121] => []
-- Invalid: [122] => []
-- Invalid: [123] => []
++++ Valid: [124] => [124]
-- Invalid: [125] => []
-- Invalid: [126] => []
-- Invalid: [127] => []
-- Invalid: [128] => []
-- Invalid: [129] => []
-- Invalid: [130] => []
-- Invalid: [131] => []
++++ Valid: [132] => [132]
-- Invalid: [133] => []
-- Invalid: [134] => []
-- Invalid: [135] => []
-- Invalid: [136] => []
-- Invalid: [137] => []
-- Invalid: [138] => []
-- Invalid: [139] => []
++++ Valid: [140] => [140]
-- Invalid: [141] => []
-- Invalid: [142] => []
-- Invalid: [143] => []
-- Invalid: [144] => []
-- Invalid: [145] => []
-- Invalid: [146] => []
-- Invalid: [147] => []
++++ Valid: [148] => [148]
-- Invalid: [149] => []
-- Invalid: [150] => []
-- Invalid: [151] => []
-- Invalid: [152] => []
-- Invalid: [153] => []
-- Invalid: [154] => []
-- Invalid: [155] => []
++++ Valid: [156] => [156]
-- Invalid: [157] => []
-- Invalid: [158] => []
-- Invalid: [159] => []
-- Invalid: [160] => []
-- Invalid: [161] => []
-- Invalid: [162] => []
-- Invalid: [163] => []
++++ Valid: [164] => [164]
-- Invalid: [165] => []
-- Invalid: [166] => []
-- Invalid: [167] => []
-- Invalid: [168] => []
-- Invalid: [169] => []
-- Invalid: [170] => []
-- Invalid: [171] => []
++++ Valid: [172] => [172]
-- Invalid: [173] => []
-- Invalid: [174] => []
-- Invalid: [175] => []
-- Invalid: [176] => []
-- Invalid: [177] => []
-- Invalid: [178] => []
-- Invalid: [179] => []
++++ Valid: [180] => [180]
-- Invalid: [181] => []
-- Invalid: [182] => []
-- Invalid: [183] => []
-- Invalid: [184] => []
-- Invalid: [185] => []
-- Invalid: [186] => []
-- Invalid: [187] => []
++++ Valid: [188] => [188]
-- Invalid: [189] => []
-- Invalid: [190] => []
-- Invalid: [191] => []
-- Invalid: [192] => []
-- Invalid: [193] => []
-- Invalid: [194] => []
-- Invalid: [195] => []
++++ Valid: [196] => [196]
-- Invalid: [197] => []
-- Invalid: [198] => []
-- Invalid: [199] => []
-- Invalid: [200] => []
++++ Valid: [100, 100] => [100, 100]
++++ Valid: [108, 108] => [108, 108]
++++ Valid: [116, 116] => [116, 116]
++++ Valid: [124, 124] => [124, 124]
++++ Valid: [132, 132] => [132, 132]
++++ Valid: [140, 140] => [140, 140]
++++ Valid: [148, 148] => [148, 148]
++++ Valid: [156, 156] => [156, 156]
++++ Valid: [164, 164] => [164, 164]
++++ Valid: [172, 172] => [172, 172]
++++ Valid: [180, 180] => [180, 180]
++++ Valid: [188, 188] => [188, 188]
++++ Valid: [196, 196] => [196, 196]
++++ Valid: [100, 100, 100] => [100, 100, 100]
++++ Valid: [108, 108, 108] => [108, 108, 108]
++++ Valid: [116, 116, 116] => [116, 116, 116]
++++ Valid: [124, 124, 124] => [124, 124, 124]
++++ Valid: [132, 132, 132] => [132, 132, 132]
++++ Valid: [140, 140, 140] => [140, 140, 140]
++++ Valid: [148, 148, 148] => [148, 148, 148]
++++ Valid: [156, 156, 156] => [156, 156, 156]
++++ Valid: [164, 164, 164] => [164, 164, 164]
++++ Valid: [172, 172, 172] => [172, 172, 172]
++++ Valid: [180, 180, 180] => [180, 180, 180]
++++ Valid: [188, 188, 188] => [188, 188, 188]
++++ Valid: [196, 196, 196] => [196, 196, 196]
2 in [1, 1]
4 in [1, 1]
8 in [1, 1]
16 in [1, 1]
12 in [8, 16]
14 in [13, 16]
13 in [13, 13]
12 in [13, 12]
Current shape: 0, [100, 100, 100], 12
2 in [1, 1]
4 in [1, 1]
8 in [1, 1]
16 in [1, 1]
12 in [8, 16]
9 in [8, 11]
10 in [10, 11]
9 in [10, 9]
Current shape: 1, [108, 108, 108], 9
2 in [1, 1]
4 in [1, 1]
8 in [1, 1]
6 in [4, 8]
7 in [7, 8]
8 in [8, 8]
7 in [8, 7]
Current shape: 2, [116, 116, 116], 7
2 in [1, 1]
4 in [1, 1]
8 in [1, 1]
6 in [4, 8]
7 in [7, 8]
6 in [7, 6]
Current shape: 3, [124, 124, 124], 6
2 in [1, 1]
4 in [1, 1]
8 in [1, 1]
6 in [4, 8]
4 in [4, 5]
5 in [5, 5]
Current shape: 4, [132, 132, 132], 5
2 in [1, 1]
4 in [1, 1]
8 in [1, 1]
6 in [4, 8]
4 in [4, 5]
5 in [5, 5]
4 in [5, 4]
Current shape: 5, [140, 140, 140], 4
2 in [1, 1]
4 in [1, 1]
3 in [2, 4]
4 in [4, 4]
3 in [4, 3]
Current shape: 6, [148, 148, 148], 3
2 in [1, 1]
4 in [1, 1]
3 in [2, 4]
4 in [4, 4]
3 in [4, 3]
Current shape: 7, [156, 156, 156], 3
2 in [1, 1]
4 in [1, 1]
3 in [2, 4]
2 in [2, 2]
Current shape: 8, [164, 164, 164], 2
2 in [1, 1]
4 in [1, 1]
3 in [2, 4]
2 in [2, 2]
Current shape: 9, [172, 172, 172], 2
2 in [1, 1]
1 in [1, 2]
2 in [2, 2]
1 in [2, 1]
Current shape: 10, [180, 180, 180], 1
2 in [1, 1]
1 in [1, 2]
2 in [2, 2]
1 in [2, 1]
Current shape: 11, [188, 188, 188], 1
2 in [1, 1]
1 in [1, 2]
2 in [2, 2]
1 in [2, 1]
Current shape: 12, [196, 196, 196], 1
###Markdown
Visualization
###Code
plt.figure()
# Combined output size versus feature map count
plt.scatter([x[0]*x[1]*x[2] for x in outshape], fmaps, alpha = 0.5)
plt.ylabel('Feature maps')
plt.xlabel('Combined output size')
plt.show()
###Output
_____no_output_____
###Markdown
Pick parameters, actually generate and store the network
###Code
netconf.input_shape = inshape[0]
netconf.output_shape = outshape[0]
netconf.fmap_start = fmaps[0]
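# index 0 picks the smallest valid configuration found above ([100]^3 input/output with 12 feature maps)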
print ('Input shape: %s' % netconf.input_shape)
print ('Output shape: %s' % netconf.output_shape)
print ('Feature maps: %s' % netconf.fmap_start)
netconf.loss_function = "euclid"
train_net_conf_euclid, test_net_conf = pygt.netgen.create_nets(netconf)
netconf.loss_function = "malis"
train_net_conf_malis, test_net_conf = pygt.netgen.create_nets(netconf)
with open('net_train_euclid.prototxt', 'w') as f:
print(train_net_conf_euclid, file=f)
with open('net_train_malis.prototxt', 'w') as f:
print(train_net_conf_malis, file=f)
with open('net_test.prototxt', 'w') as f:
print(test_net_conf, file=f)
###Output
Input shape: [100, 100, 100]
Output shape: [100, 100, 100]
Feature maps: 12
Shape: [0]
f: 1 w: [100, 100, 100] d: [1, 1, 1]
WM: 0
CM: 0
AM: 0
Shape: [1]
f: 12 w: [98, 98, 98] d: [1, 1, 1]
WM: 1296
CM: 108000000
AM: 96000000
Shape: [2]
f: 12 w: [96, 96, 96] d: [1, 1, 1]
WM: 15552
CM: 1219784832
AM: 90354432
Shape: [3]
f: 12 w: [48, 48, 48] d: [1, 1, 1]
WM: 0
CM: 0
AM: 0
Shape: [4]
f: 36 w: [46, 46, 46] d: [1, 1, 1]
WM: 46656
CM: 143327232
AM: 31850496
Shape: [5]
f: 36 w: [44, 44, 44] d: [1, 1, 1]
WM: 139968
CM: 378442368
AM: 28032768
Shape: [6]
f: 36 w: [22, 22, 22] d: [1, 1, 1]
WM: 0
CM: 0
AM: 0
Shape: [7]
f: 108 w: [20, 20, 20] d: [1, 1, 1]
WM: 419904
CM: 41399424
AM: 9199872
Shape: [8]
f: 108 w: [18, 18, 18] d: [1, 1, 1]
WM: 1259712
CM: 93312000
AM: 6912000
Shape: [9]
f: 108 w: [9, 9, 9] d: [1, 1, 1]
WM: 0
CM: 0
AM: 0
Shape: [10]
f: 324 w: [7, 7, 7] d: [1, 1, 1]
WM: 3779136
CM: 8503056
AM: 1889568
Shape: [11]
f: 324 w: [9, 9, 9] d: [1, 1, 1]
WM: 11337408
CM: 12002256
AM: 889056
Shape: [12]
f: 324 w: [18, 18, 18] d: [1, 1, 1]
WM: 0
CM: 0
AM: 0
Shape: [13]
f: 108 w: [18, 18, 18] d: [1, 1, 1]
WM: 139968
CM: 60466176
AM: 0
Shape: [14]
f: 216 w: [18, 18, 18] d: [1, 1, 1]
WM: 0
CM: 0
AM: 0
Shape: [15]
f: 108 w: [20, 20, 20] d: [1, 1, 1]
WM: 2519424
CM: 136048896
AM: 5038848
Shape: [16]
f: 108 w: [22, 22, 22] d: [1, 1, 1]
WM: 1259712
CM: 93312000
AM: 6912000
Shape: [17]
f: 108 w: [44, 44, 44] d: [1, 1, 1]
WM: 0
CM: 0
AM: 0
Shape: [18]
f: 36 w: [44, 44, 44] d: [1, 1, 1]
WM: 15552
CM: 294395904
AM: 0
Shape: [19]
f: 72 w: [44, 44, 44] d: [1, 1, 1]
WM: 0
CM: 0
AM: 0
Shape: [20]
f: 36 w: [46, 46, 46] d: [1, 1, 1]
WM: 279936
CM: 662390784
AM: 24532992
Shape: [21]
f: 36 w: [48, 48, 48] d: [1, 1, 1]
WM: 139968
CM: 378442368
AM: 28032768
Shape: [22]
f: 36 w: [96, 96, 96] d: [1, 1, 1]
WM: 0
CM: 0
AM: 0
Shape: [23]
f: 12 w: [96, 96, 96] d: [1, 1, 1]
WM: 1728
CM: 1019215872
AM: 0
Shape: [24]
f: 24 w: [96, 96, 96] d: [1, 1, 1]
WM: 0
CM: 0
AM: 0
Shape: [25]
f: 12 w: [98, 98, 98] d: [1, 1, 1]
WM: 31104
CM: 2293235712
AM: 84934656
Shape: [26]
f: 12 w: [100, 100, 100] d: [1, 1, 1]
WM: 15552
CM: 1219784832
AM: 90354432
Shape: [27]
f: 12 w: [98, 98, 98] d: [1, 1, 1]
WM: 15552
CM: 1296000000
AM: 96000000
Shape: [28]
f: 12 w: [96, 96, 96] d: [1, 1, 1]
WM: 15552
CM: 1219784832
AM: 90354432
Shape: [29]
f: 12 w: [48, 48, 48] d: [1, 1, 1]
WM: 0
CM: 0
AM: 0
Shape: [30]
f: 36 w: [46, 46, 46] d: [1, 1, 1]
WM: 46656
CM: 143327232
AM: 31850496
Shape: [31]
f: 36 w: [44, 44, 44] d: [1, 1, 1]
WM: 139968
CM: 378442368
AM: 28032768
Shape: [32]
f: 36 w: [22, 22, 22] d: [1, 1, 1]
WM: 0
CM: 0
AM: 0
Shape: [33]
f: 108 w: [20, 20, 20] d: [1, 1, 1]
WM: 419904
CM: 41399424
AM: 9199872
Shape: [34]
f: 108 w: [18, 18, 18] d: [1, 1, 1]
WM: 1259712
CM: 93312000
AM: 6912000
Shape: [35]
f: 108 w: [9, 9, 9] d: [1, 1, 1]
WM: 0
CM: 0
AM: 0
Shape: [36]
f: 324 w: [7, 7, 7] d: [1, 1, 1]
WM: 3779136
CM: 8503056
AM: 1889568
Shape: [37]
f: 324 w: [9, 9, 9] d: [1, 1, 1]
WM: 11337408
CM: 12002256
AM: 889056
Shape: [38]
f: 324 w: [18, 18, 18] d: [1, 1, 1]
WM: 0
CM: 0
AM: 0
Shape: [39]
f: 108 w: [18, 18, 18] d: [1, 1, 1]
WM: 139968
CM: 60466176
AM: 0
Shape: [40]
f: 216 w: [18, 18, 18] d: [1, 1, 1]
WM: 0
CM: 0
AM: 0
Shape: [41]
f: 108 w: [20, 20, 20] d: [1, 1, 1]
WM: 2519424
CM: 136048896
AM: 5038848
Shape: [42]
f: 108 w: [22, 22, 22] d: [1, 1, 1]
WM: 1259712
CM: 93312000
AM: 6912000
Shape: [43]
f: 108 w: [44, 44, 44] d: [1, 1, 1]
WM: 0
CM: 0
AM: 0
Shape: [44]
f: 36 w: [44, 44, 44] d: [1, 1, 1]
WM: 15552
CM: 294395904
AM: 0
Shape: [45]
f: 72 w: [44, 44, 44] d: [1, 1, 1]
WM: 0
CM: 0
AM: 0
Shape: [46]
f: 36 w: [46, 46, 46] d: [1, 1, 1]
WM: 279936
CM: 662390784
AM: 24532992
Shape: [47]
f: 36 w: [48, 48, 48] d: [1, 1, 1]
WM: 139968
CM: 378442368
AM: 28032768
Shape: [48]
f: 36 w: [96, 96, 96] d: [1, 1, 1]
WM: 0
CM: 0
AM: 0
Shape: [49]
f: 12 w: [96, 96, 96] d: [1, 1, 1]
WM: 1728
CM: 1019215872
AM: 0
Shape: [50]
f: 24 w: [96, 96, 96] d: [1, 1, 1]
WM: 0
CM: 0
AM: 0
Shape: [51]
f: 12 w: [98, 98, 98] d: [1, 1, 1]
WM: 31104
CM: 2293235712
AM: 84934656
Shape: [52]
f: 12 w: [100, 100, 100] d: [1, 1, 1]
WM: 15552
CM: 1219784832
AM: 90354432
Shape: [53]
f: 12 w: [100, 100, 100] d: [1, 1, 1]
WM: 0
CM: 0
AM: 0
Shape: [54]
f: 3 w: [100, 100, 100] d: [1, 1, 1]
WM: 144
CM: 48000000
AM: 0
Max. memory requirements: 4876627616 B
Weight memory: 42819552 B
Max. conv buffer: 2293235712 B
Shape: [0]
f: 1 w: [100, 100, 100] d: [1, 1, 1]
WM: 0
CM: 0
AM: 0
Shape: [1]
f: 12 w: [98, 98, 98] d: [1, 1, 1]
WM: 1296
CM: 108000000
AM: 96000000
Shape: [2]
f: 12 w: [96, 96, 96] d: [1, 1, 1]
WM: 15552
CM: 1219784832
AM: 90354432
Shape: [3]
f: 12 w: [48, 48, 48] d: [1, 1, 1]
WM: 0
CM: 0
AM: 0
Shape: [4]
f: 36 w: [46, 46, 46] d: [1, 1, 1]
WM: 46656
CM: 143327232
AM: 31850496
Shape: [5]
f: 36 w: [44, 44, 44] d: [1, 1, 1]
WM: 139968
CM: 378442368
AM: 28032768
Shape: [6]
f: 36 w: [22, 22, 22] d: [1, 1, 1]
WM: 0
CM: 0
AM: 0
Shape: [7]
f: 108 w: [20, 20, 20] d: [1, 1, 1]
WM: 419904
CM: 41399424
AM: 9199872
Shape: [8]
f: 108 w: [18, 18, 18] d: [1, 1, 1]
WM: 1259712
CM: 93312000
AM: 6912000
Shape: [9]
f: 108 w: [9, 9, 9] d: [1, 1, 1]
WM: 0
CM: 0
AM: 0
Shape: [10]
f: 324 w: [7, 7, 7] d: [1, 1, 1]
WM: 3779136
CM: 8503056
AM: 1889568
Shape: [11]
f: 324 w: [9, 9, 9] d: [1, 1, 1]
WM: 11337408
CM: 12002256
AM: 889056
Shape: [12]
f: 324 w: [18, 18, 18] d: [1, 1, 1]
WM: 0
CM: 0
AM: 0
Shape: [13]
f: 108 w: [18, 18, 18] d: [1, 1, 1]
WM: 139968
CM: 60466176
AM: 0
Shape: [14]
f: 216 w: [18, 18, 18] d: [1, 1, 1]
WM: 0
CM: 0
AM: 0
Shape: [15]
f: 108 w: [20, 20, 20] d: [1, 1, 1]
WM: 2519424
CM: 136048896
AM: 5038848
Shape: [16]
f: 108 w: [22, 22, 22] d: [1, 1, 1]
WM: 1259712
CM: 93312000
AM: 6912000
Shape: [17]
f: 108 w: [44, 44, 44] d: [1, 1, 1]
WM: 0
CM: 0
AM: 0
Shape: [18]
f: 36 w: [44, 44, 44] d: [1, 1, 1]
WM: 15552
CM: 294395904
AM: 0
Shape: [19]
f: 72 w: [44, 44, 44] d: [1, 1, 1]
WM: 0
CM: 0
AM: 0
Shape: [20]
f: 36 w: [46, 46, 46] d: [1, 1, 1]
WM: 279936
CM: 662390784
AM: 24532992
Shape: [21]
f: 36 w: [48, 48, 48] d: [1, 1, 1]
WM: 139968
CM: 378442368
AM: 28032768
Shape: [22]
f: 36 w: [96, 96, 96] d: [1, 1, 1]
WM: 0
CM: 0
AM: 0
Shape: [23]
f: 12 w: [96, 96, 96] d: [1, 1, 1]
WM: 1728
CM: 1019215872
AM: 0
Shape: [24]
f: 24 w: [96, 96, 96] d: [1, 1, 1]
WM: 0
CM: 0
AM: 0
Shape: [25]
f: 12 w: [98, 98, 98] d: [1, 1, 1]
WM: 31104
CM: 2293235712
AM: 84934656
Shape: [26]
f: 12 w: [100, 100, 100] d: [1, 1, 1]
WM: 15552
CM: 1219784832
AM: 90354432
Shape: [27]
f: 12 w: [98, 98, 98] d: [1, 1, 1]
WM: 15552
CM: 1296000000
AM: 96000000
Shape: [28]
f: 12 w: [96, 96, 96] d: [1, 1, 1]
WM: 15552
CM: 1219784832
AM: 90354432
Shape: [29]
f: 12 w: [48, 48, 48] d: [1, 1, 1]
WM: 0
CM: 0
AM: 0
Shape: [30]
f: 36 w: [46, 46, 46] d: [1, 1, 1]
WM: 46656
CM: 143327232
AM: 31850496
Shape: [31]
f: 36 w: [44, 44, 44] d: [1, 1, 1]
WM: 139968
CM: 378442368
AM: 28032768
Shape: [32]
f: 36 w: [22, 22, 22] d: [1, 1, 1]
WM: 0
CM: 0
AM: 0
Shape: [33]
f: 108 w: [20, 20, 20] d: [1, 1, 1]
WM: 419904
CM: 41399424
AM: 9199872
Shape: [34]
f: 108 w: [18, 18, 18] d: [1, 1, 1]
WM: 1259712
CM: 93312000
AM: 6912000
Shape: [35]
f: 108 w: [9, 9, 9] d: [1, 1, 1]
WM: 0
CM: 0
AM: 0
Shape: [36]
f: 324 w: [7, 7, 7] d: [1, 1, 1]
WM: 3779136
CM: 8503056
AM: 1889568
Shape: [37]
f: 324 w: [9, 9, 9] d: [1, 1, 1]
WM: 11337408
CM: 12002256
AM: 889056
Shape: [38]
f: 324 w: [18, 18, 18] d: [1, 1, 1]
WM: 0
CM: 0
AM: 0
Shape: [39]
f: 108 w: [18, 18, 18] d: [1, 1, 1]
WM: 139968
CM: 60466176
AM: 0
Shape: [40]
f: 216 w: [18, 18, 18] d: [1, 1, 1]
WM: 0
CM: 0
AM: 0
Shape: [41]
f: 108 w: [20, 20, 20] d: [1, 1, 1]
WM: 2519424
CM: 136048896
AM: 5038848
Shape: [42]
f: 108 w: [22, 22, 22] d: [1, 1, 1]
WM: 1259712
CM: 93312000
AM: 6912000
Shape: [43]
f: 108 w: [44, 44, 44] d: [1, 1, 1]
WM: 0
CM: 0
AM: 0
Shape: [44]
f: 36 w: [44, 44, 44] d: [1, 1, 1]
WM: 15552
CM: 294395904
AM: 0
Shape: [45]
f: 72 w: [44, 44, 44] d: [1, 1, 1]
WM: 0
CM: 0
AM: 0
Shape: [46]
f: 36 w: [46, 46, 46] d: [1, 1, 1]
WM: 279936
CM: 662390784
AM: 24532992
Shape: [47]
f: 36 w: [48, 48, 48] d: [1, 1, 1]
WM: 139968
CM: 378442368
AM: 28032768
Shape: [48]
f: 36 w: [96, 96, 96] d: [1, 1, 1]
WM: 0
CM: 0
AM: 0
Shape: [49]
f: 12 w: [96, 96, 96] d: [1, 1, 1]
WM: 1728
CM: 1019215872
AM: 0
Shape: [50]
f: 24 w: [96, 96, 96] d: [1, 1, 1]
WM: 0
CM: 0
AM: 0
Shape: [51]
f: 12 w: [98, 98, 98] d: [1, 1, 1]
WM: 31104
CM: 2293235712
AM: 84934656
Shape: [52]
f: 12 w: [100, 100, 100] d: [1, 1, 1]
WM: 15552
CM: 1219784832
AM: 90354432
Shape: [53]
f: 12 w: [100, 100, 100] d: [1, 1, 1]
WM: 0
CM: 0
AM: 0
Shape: [54]
f: 3 w: [100, 100, 100] d: [1, 1, 1]
WM: 144
CM: 48000000
AM: 0
Max. memory requirements: 3606341440 B
Weight memory: 42819552 B
Max. conv buffer: 2293235712 B
Shape: [0]
f: 1 w: [100, 100, 100] d: [1, 1, 1]
WM: 0
CM: 0
AM: 0
Shape: [1]
f: 12 w: [98, 98, 98] d: [1, 1, 1]
WM: 1296
CM: 108000000
AM: 96000000
Shape: [2]
f: 12 w: [96, 96, 96] d: [1, 1, 1]
WM: 15552
CM: 1219784832
AM: 90354432
Shape: [3]
f: 12 w: [48, 48, 48] d: [1, 1, 1]
WM: 0
CM: 0
AM: 0
Shape: [4]
f: 36 w: [46, 46, 46] d: [1, 1, 1]
WM: 46656
CM: 143327232
AM: 31850496
Shape: [5]
f: 36 w: [44, 44, 44] d: [1, 1, 1]
WM: 139968
CM: 378442368
AM: 28032768
Shape: [6]
f: 36 w: [22, 22, 22] d: [1, 1, 1]
WM: 0
CM: 0
AM: 0
Shape: [7]
f: 108 w: [20, 20, 20] d: [1, 1, 1]
WM: 419904
CM: 41399424
AM: 9199872
Shape: [8]
f: 108 w: [18, 18, 18] d: [1, 1, 1]
WM: 1259712
CM: 93312000
AM: 6912000
Shape: [9]
f: 108 w: [9, 9, 9] d: [1, 1, 1]
WM: 0
CM: 0
AM: 0
Shape: [10]
f: 324 w: [7, 7, 7] d: [1, 1, 1]
WM: 3779136
CM: 8503056
AM: 1889568
Shape: [11]
f: 324 w: [9, 9, 9] d: [1, 1, 1]
WM: 11337408
CM: 12002256
AM: 889056
Shape: [12]
f: 324 w: [18, 18, 18] d: [1, 1, 1]
WM: 0
CM: 0
AM: 0
Shape: [13]
f: 108 w: [18, 18, 18] d: [1, 1, 1]
WM: 139968
CM: 60466176
AM: 0
Shape: [14]
f: 216 w: [18, 18, 18] d: [1, 1, 1]
WM: 0
CM: 0
AM: 0
Shape: [15]
f: 108 w: [20, 20, 20] d: [1, 1, 1]
WM: 2519424
CM: 136048896
AM: 5038848
Shape: [16]
f: 108 w: [22, 22, 22] d: [1, 1, 1]
WM: 1259712
CM: 93312000
AM: 6912000
Shape: [17]
f: 108 w: [44, 44, 44] d: [1, 1, 1]
WM: 0
CM: 0
AM: 0
Shape: [18]
f: 36 w: [44, 44, 44] d: [1, 1, 1]
WM: 15552
CM: 294395904
AM: 0
Shape: [19]
f: 72 w: [44, 44, 44] d: [1, 1, 1]
WM: 0
CM: 0
AM: 0
Shape: [20]
f: 36 w: [46, 46, 46] d: [1, 1, 1]
WM: 279936
CM: 662390784
AM: 24532992
Shape: [21]
f: 36 w: [48, 48, 48] d: [1, 1, 1]
WM: 139968
CM: 378442368
AM: 28032768
Shape: [22]
f: 36 w: [96, 96, 96] d: [1, 1, 1]
WM: 0
CM: 0
AM: 0
Shape: [23]
f: 12 w: [96, 96, 96] d: [1, 1, 1]
WM: 1728
CM: 1019215872
AM: 0
Shape: [24]
f: 24 w: [96, 96, 96] d: [1, 1, 1]
WM: 0
CM: 0
AM: 0
Shape: [25]
f: 12 w: [98, 98, 98] d: [1, 1, 1]
WM: 31104
CM: 2293235712
AM: 84934656
Shape: [26]
f: 12 w: [100, 100, 100] d: [1, 1, 1]
WM: 15552
CM: 1219784832
AM: 90354432
Shape: [27]
f: 12 w: [98, 98, 98] d: [1, 1, 1]
WM: 15552
CM: 1296000000
AM: 96000000
Shape: [28]
f: 12 w: [96, 96, 96] d: [1, 1, 1]
WM: 15552
CM: 1219784832
AM: 90354432
Shape: [29]
f: 12 w: [48, 48, 48] d: [1, 1, 1]
WM: 0
CM: 0
AM: 0
Shape: [30]
f: 36 w: [46, 46, 46] d: [1, 1, 1]
WM: 46656
CM: 143327232
AM: 31850496
Shape: [31]
f: 36 w: [44, 44, 44] d: [1, 1, 1]
WM: 139968
CM: 378442368
AM: 28032768
Shape: [32]
f: 36 w: [22, 22, 22] d: [1, 1, 1]
WM: 0
CM: 0
AM: 0
Shape: [33]
f: 108 w: [20, 20, 20] d: [1, 1, 1]
WM: 419904
CM: 41399424
AM: 9199872
Shape: [34]
f: 108 w: [18, 18, 18] d: [1, 1, 1]
WM: 1259712
CM: 93312000
AM: 6912000
Shape: [35]
f: 108 w: [9, 9, 9] d: [1, 1, 1]
WM: 0
CM: 0
AM: 0
Shape: [36]
f: 324 w: [7, 7, 7] d: [1, 1, 1]
WM: 3779136
CM: 8503056
AM: 1889568
Shape: [37]
f: 324 w: [9, 9, 9] d: [1, 1, 1]
WM: 11337408
CM: 12002256
AM: 889056
Shape: [38]
f: 324 w: [18, 18, 18] d: [1, 1, 1]
WM: 0
CM: 0
AM: 0
Shape: [39]
f: 108 w: [18, 18, 18] d: [1, 1, 1]
WM: 139968
CM: 60466176
AM: 0
Shape: [40]
f: 216 w: [18, 18, 18] d: [1, 1, 1]
WM: 0
CM: 0
AM: 0
Shape: [41]
f: 108 w: [20, 20, 20] d: [1, 1, 1]
WM: 2519424
CM: 136048896
AM: 5038848
Shape: [42]
f: 108 w: [22, 22, 22] d: [1, 1, 1]
WM: 1259712
CM: 93312000
AM: 6912000
Shape: [43]
f: 108 w: [44, 44, 44] d: [1, 1, 1]
WM: 0
CM: 0
AM: 0
Shape: [44]
f: 36 w: [44, 44, 44] d: [1, 1, 1]
WM: 15552
CM: 294395904
AM: 0
Shape: [45]
f: 72 w: [44, 44, 44] d: [1, 1, 1]
WM: 0
CM: 0
AM: 0
Shape: [46]
f: 36 w: [46, 46, 46] d: [1, 1, 1]
WM: 279936
CM: 662390784
AM: 24532992
Shape: [47]
f: 36 w: [48, 48, 48] d: [1, 1, 1]
WM: 139968
CM: 378442368
AM: 28032768
Shape: [48]
f: 36 w: [96, 96, 96] d: [1, 1, 1]
WM: 0
CM: 0
AM: 0
Shape: [49]
f: 12 w: [96, 96, 96] d: [1, 1, 1]
WM: 1728
CM: 1019215872
AM: 0
Shape: [50]
f: 24 w: [96, 96, 96] d: [1, 1, 1]
WM: 0
CM: 0
AM: 0
Shape: [51]
f: 12 w: [98, 98, 98] d: [1, 1, 1]
WM: 31104
CM: 2293235712
AM: 84934656
Shape: [52]
f: 12 w: [100, 100, 100] d: [1, 1, 1]
WM: 15552
CM: 1219784832
AM: 90354432
Shape: [53]
f: 12 w: [100, 100, 100] d: [1, 1, 1]
WM: 0
CM: 0
AM: 0
Shape: [54]
f: 3 w: [100, 100, 100] d: [1, 1, 1]
WM: 144
CM: 48000000
AM: 0
Max. memory requirements: 4876627616 B
Weight memory: 42819552 B
Max. conv buffer: 2293235712 B
Shape: [0]
f: 1 w: [100, 100, 100] d: [1, 1, 1]
WM: 0
CM: 0
AM: 0
Shape: [1]
f: 12 w: [98, 98, 98] d: [1, 1, 1]
WM: 1296
CM: 108000000
AM: 96000000
Shape: [2]
f: 12 w: [96, 96, 96] d: [1, 1, 1]
WM: 15552
CM: 1219784832
AM: 90354432
Shape: [3]
f: 12 w: [48, 48, 48] d: [1, 1, 1]
WM: 0
CM: 0
AM: 0
Shape: [4]
f: 36 w: [46, 46, 46] d: [1, 1, 1]
WM: 46656
CM: 143327232
AM: 31850496
Shape: [5]
f: 36 w: [44, 44, 44] d: [1, 1, 1]
WM: 139968
CM: 378442368
AM: 28032768
Shape: [6]
f: 36 w: [22, 22, 22] d: [1, 1, 1]
WM: 0
CM: 0
AM: 0
Shape: [7]
f: 108 w: [20, 20, 20] d: [1, 1, 1]
WM: 419904
CM: 41399424
AM: 9199872
Shape: [8]
f: 108 w: [18, 18, 18] d: [1, 1, 1]
WM: 1259712
CM: 93312000
AM: 6912000
Shape: [9]
f: 108 w: [9, 9, 9] d: [1, 1, 1]
WM: 0
CM: 0
AM: 0
Shape: [10]
f: 324 w: [7, 7, 7] d: [1, 1, 1]
WM: 3779136
CM: 8503056
AM: 1889568
Shape: [11]
f: 324 w: [9, 9, 9] d: [1, 1, 1]
WM: 11337408
CM: 12002256
AM: 889056
Shape: [12]
f: 324 w: [18, 18, 18] d: [1, 1, 1]
WM: 0
CM: 0
AM: 0
Shape: [13]
f: 108 w: [18, 18, 18] d: [1, 1, 1]
WM: 139968
CM: 60466176
AM: 0
Shape: [14]
f: 216 w: [18, 18, 18] d: [1, 1, 1]
WM: 0
CM: 0
AM: 0
Shape: [15]
f: 108 w: [20, 20, 20] d: [1, 1, 1]
WM: 2519424
CM: 136048896
AM: 5038848
Shape: [16]
f: 108 w: [22, 22, 22] d: [1, 1, 1]
WM: 1259712
CM: 93312000
AM: 6912000
Shape: [17]
f: 108 w: [44, 44, 44] d: [1, 1, 1]
WM: 0
CM: 0
AM: 0
Shape: [18]
f: 36 w: [44, 44, 44] d: [1, 1, 1]
WM: 15552
CM: 294395904
AM: 0
Shape: [19]
f: 72 w: [44, 44, 44] d: [1, 1, 1]
WM: 0
CM: 0
AM: 0
Shape: [20]
f: 36 w: [46, 46, 46] d: [1, 1, 1]
WM: 279936
CM: 662390784
AM: 24532992
Shape: [21]
f: 36 w: [48, 48, 48] d: [1, 1, 1]
WM: 139968
CM: 378442368
AM: 28032768
Shape: [22]
f: 36 w: [96, 96, 96] d: [1, 1, 1]
WM: 0
CM: 0
AM: 0
Shape: [23]
f: 12 w: [96, 96, 96] d: [1, 1, 1]
WM: 1728
CM: 1019215872
AM: 0
Shape: [24]
f: 24 w: [96, 96, 96] d: [1, 1, 1]
WM: 0
CM: 0
AM: 0
Shape: [25]
f: 12 w: [98, 98, 98] d: [1, 1, 1]
WM: 31104
CM: 2293235712
AM: 84934656
Shape: [26]
f: 12 w: [100, 100, 100] d: [1, 1, 1]
WM: 15552
CM: 1219784832
AM: 90354432
Shape: [27]
f: 12 w: [98, 98, 98] d: [1, 1, 1]
WM: 15552
CM: 1296000000
AM: 96000000
Shape: [28]
f: 12 w: [96, 96, 96] d: [1, 1, 1]
WM: 15552
CM: 1219784832
AM: 90354432
Shape: [29]
f: 12 w: [48, 48, 48] d: [1, 1, 1]
WM: 0
CM: 0
AM: 0
Shape: [30]
f: 36 w: [46, 46, 46] d: [1, 1, 1]
WM: 46656
CM: 143327232
AM: 31850496
Shape: [31]
f: 36 w: [44, 44, 44] d: [1, 1, 1]
WM: 139968
CM: 378442368
AM: 28032768
Shape: [32]
f: 36 w: [22, 22, 22] d: [1, 1, 1]
WM: 0
CM: 0
AM: 0
Shape: [33]
f: 108 w: [20, 20, 20] d: [1, 1, 1]
WM: 419904
CM: 41399424
AM: 9199872
Shape: [34]
f: 108 w: [18, 18, 18] d: [1, 1, 1]
WM: 1259712
CM: 93312000
AM: 6912000
Shape: [35]
f: 108 w: [9, 9, 9] d: [1, 1, 1]
WM: 0
CM: 0
AM: 0
Shape: [36]
f: 324 w: [7, 7, 7] d: [1, 1, 1]
WM: 3779136
CM: 8503056
AM: 1889568
Shape: [37]
f: 324 w: [9, 9, 9] d: [1, 1, 1]
WM: 11337408
CM: 12002256
AM: 889056
Shape: [38]
f: 324 w: [18, 18, 18] d: [1, 1, 1]
WM: 0
CM: 0
AM: 0
Shape: [39]
f: 108 w: [18, 18, 18] d: [1, 1, 1]
WM: 139968
CM: 60466176
AM: 0
Shape: [40]
f: 216 w: [18, 18, 18] d: [1, 1, 1]
WM: 0
CM: 0
AM: 0
Shape: [41]
f: 108 w: [20, 20, 20] d: [1, 1, 1]
WM: 2519424
CM: 136048896
AM: 5038848
Shape: [42]
f: 108 w: [22, 22, 22] d: [1, 1, 1]
WM: 1259712
CM: 93312000
AM: 6912000
Shape: [43]
f: 108 w: [44, 44, 44] d: [1, 1, 1]
WM: 0
CM: 0
AM: 0
Shape: [44]
f: 36 w: [44, 44, 44] d: [1, 1, 1]
WM: 15552
CM: 294395904
AM: 0
Shape: [45]
f: 72 w: [44, 44, 44] d: [1, 1, 1]
WM: 0
CM: 0
AM: 0
Shape: [46]
f: 36 w: [46, 46, 46] d: [1, 1, 1]
WM: 279936
CM: 662390784
AM: 24532992
Shape: [47]
f: 36 w: [48, 48, 48] d: [1, 1, 1]
WM: 139968
CM: 378442368
AM: 28032768
Shape: [48]
f: 36 w: [96, 96, 96] d: [1, 1, 1]
WM: 0
CM: 0
AM: 0
Shape: [49]
f: 12 w: [96, 96, 96] d: [1, 1, 1]
WM: 1728
CM: 1019215872
AM: 0
Shape: [50]
f: 24 w: [96, 96, 96] d: [1, 1, 1]
WM: 0
CM: 0
AM: 0
Shape: [51]
f: 12 w: [98, 98, 98] d: [1, 1, 1]
WM: 31104
CM: 2293235712
AM: 84934656
Shape: [52]
f: 12 w: [100, 100, 100] d: [1, 1, 1]
WM: 15552
CM: 1219784832
AM: 90354432
Shape: [53]
f: 12 w: [100, 100, 100] d: [1, 1, 1]
WM: 0
CM: 0
AM: 0
Shape: [54]
f: 3 w: [100, 100, 100] d: [1, 1, 1]
WM: 144
CM: 48000000
AM: 0
Max. memory requirements: 3606341440 B
Weight memory: 42819552 B
Max. conv buffer: 2293235712 B
|
archive/wind/.ipynb_checkpoints/csv_to_nc-checkpoint.ipynb | ###Markdown
Import CSV buoy data, slice, convert to .nc
###Code
# import modules
import xarray as xr
import datetime as dt
import matplotlib.pyplot as plt
import numpy as np
import scipy.signal as sig
import pandas as pd
for i in range(2):
%matplotlib notebook
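# (running the magic twice is presumably a workaround so the interactive backend reliably takes effect)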
# read CSV data (DFO Neah Bay buoy 46206, lat 48.83 long 126)
ds = pd.read_csv('../../../Data/wind/c46206_csv.csv',usecols=['DATE','LATITUDE','LONGITUDE','WDIR','WSPD'])
# get data for 2013,2014,2017,2018
dtpd2013 = pd.to_datetime(ds['DATE'][175282:183781]) # str to pd dt (np can't handle the formatting)
dt2013 = np.array(dtpd2013,dtype=np.datetime64) # pd to np datetime64
wdir2013 = np.asarray(ds['WDIR'][175282:183781]) # wdir values for this time period
wspd2013 = np.asarray(ds['WSPD'][175282:183781]) # wspd values for this time period
dtpd2014 = pd.to_datetime(ds['DATE'][183781:190256])
dt2014 = np.array(dtpd2014,dtype=np.datetime64)
wdir2014 = np.asarray(ds['WDIR'][183781:190256])
wspd2014 = np.asarray(ds['WSPD'][183781:190256])
dtpd2017 = pd.to_datetime(ds['DATE'][206883:213293])
dt2017 = np.array(dtpd2017,dtype=np.datetime64)
wdir2017 = np.asarray(ds['WDIR'][206883:213293])
wspd2017 = np.asarray(ds['WSPD'][206883:213293])
dtpd2018 = pd.to_datetime(ds['DATE'][213293:217651])
dt2018 = np.array(dtpd2018,dtype=np.datetime64)
wdir2018 = np.asarray(ds['WDIR'][213293:217651])
wspd2018 = np.asarray(ds['WSPD'][213293:217651])
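# note: the per-year blocks above are identical apart from the row ranges; a dict of
# {year: (start, stop)} plus a loop would avoid repeating this slicing four times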
# save to .nc file
ds_out = xr.Dataset(
data_vars=dict(
wdir2013=(['dt2013'], wdir2013), # wind direction data
wspd2013=(['dt2013'], wspd2013), # wind speed data
wdir2014=(['dt2014'], wdir2014), # wind direction data
wspd2014=(['dt2014'], wspd2014), # wind speed data
wdir2017=(['dt2017'], wdir2017), # wind direction data
wspd2017=(['dt2017'], wspd2017), # wind speed data
wdir2018=(['dt2018'], wdir2018), # wind direction data
wspd2018=(['dt2018'], wspd2018), # wind speed data
),
coords=dict(
dt2013=dt2013,
dt2014=dt2014, # datetime values
dt2017=dt2017,
dt2018=dt2018,
),
attrs=dict(
description=f'Wind data from Neah Bay DFO buoy 46206 for 2013, 2014, 2017, and 2018.',
units=['degrees True, m/s, numpy.datetime64'],
lat=ds['LATITUDE'][0],
long=ds['LONGITUDE'][0],
),
)
ds_out.to_netcdf(f'../../../Data/wind/wind.nc')
###Output
_____no_output_____ |
MachineLearning_projects/tencentAdCtrPredict/.ipynb_checkpoints/3.feature_engineering_and_machine_learning-checkpoint.ipynb | ###Markdown
Feature engineering and machine learning modelling Custom utility function library
###Code
#coding=utf-8
import pandas as pd
import numpy as np
import scipy as sp
# read a csv file
def read_csv_file(f,logging=False):
print("============================读取数据========================",f)
print("======================我是萌萌哒分界线========================")
data = pd.read_csv(f)
if logging:
        print(data.head(5))
        print(f, "contains the following columns....")
        print(data.columns.values)
        print(data.describe())
        data.info()  # .info() prints its summary directly
return data
# first-level category code
def categories_process_first_class(cate):
cate = str(cate)
if len(cate)==1:
if int(cate)==0:
return 0
else:
return int(cate[0])
# second-level category code
def categories_process_second_class(cate):
cate = str(cate)
if len(cate)<3:
return 0
else:
return int(cate[1:])
# age processing: cut into buckets
def age_process(age):
age = int(age)
if age==0:
return 0
elif age<15:
return 1
elif age<25:
return 2
elif age<40:
return 3
elif age<60:
return 4
else:
return 5
# province processing
def process_province(hometown):
hometown = str(hometown)
province = int(hometown[0:2])
return province
# city processing
def process_city(hometown):
hometown = str(hometown)
if len(hometown)>1:
province = int(hometown[2:])
else:
province = 0
return province
# day part of the timestamp (first two digits)
def get_time_day(t):
t = str(t)
t=int(t[0:2])
return t
# split the day into 4 time bands (based on the hour digits)
def get_time_hour(t):
t = str(t)
t=int(t[2:4])
if t<6:
return 0
elif t<12:
return 1
elif t<18:
return 2
else:
return 3
# evaluation: compute the logloss
def logloss(act, pred):
epsilon = 1e-15
pred = sp.maximum(epsilon, pred)
pred = sp.minimum(1-epsilon, pred)
ll = sum(act*sp.log(pred) + sp.subtract(1,act)*sp.log(sp.subtract(1,pred)))
ll = ll * -1.0/len(act)
return ll
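# example (sketch): logloss(np.array([1, 0, 1]), np.array([0.9, 0.2, 0.7])) is roughly 0.228
# note: sp.maximum / sp.minimum / sp.log / sp.subtract are the old scipy aliases of the
# corresponding numpy functions; on recent scipy versions use the np.* equivalents instead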
###Output
_____no_output_____
###Markdown
Feature engineering + random forest modelling Import libraries
###Code
#coding=utf-8
from sklearn.preprocessing import Binarizer
from sklearn.preprocessing import MinMaxScaler
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Read train_data and ad Feature engineering
###Code
#['label' 'clickTime' 'conversionTime' 'creativeID' 'userID' 'positionID' 'connectionType' 'telecomsOperator']
train_data = read_csv_file('./data/train.csv',logging=True)
#['creativeID' 'adID' 'camgaignID' 'advertiserID' 'appID' 'appPlatform']
ad = read_csv_file('./data/ad.csv',logging=True)
#app
app_categories = read_csv_file('./data/app_categories.csv',logging=True)
app_categories["app_categories_first_class"] = app_categories['appCategory'].apply(categories_process_first_class)
app_categories["app_categories_second_class"] = app_categories['appCategory'].apply(categories_process_second_class)
app_categories.head()
user = read_csv_file('./data/user.csv',logging=False)
user.columns
user[user.age!=0].describe()
import matplotlib.pyplot as plt
user.age.value_counts()
#user
user = read_csv_file('./data/user.csv',logging=True)
user['age_process'] = user['age'].apply(age_process)
user["hometown_province"] = user['hometown'].apply(process_province)
user["hometown_city"] = user['hometown'].apply(process_city)
user["residence_province"] = user['residence'].apply(process_province)
user["residence_city"] = user['residence'].apply(process_city)
user.info()
user.head()
train_data.head()
train_data['clickTime_day'] = train_data['clickTime'].apply(get_time_day)
train_data['clickTime_hour']= train_data['clickTime'].apply(get_time_hour)
###Output
_____no_output_____
###Markdown
Merge the data
###Code
#train data
train_data['clickTime_day'] = train_data['clickTime'].apply(get_time_day)
train_data['clickTime_hour']= train_data['clickTime'].apply(get_time_hour)
# train_data['conversionTime_day'] = train_data['conversionTime'].apply(get_time_day)
# train_data['conversionTime_hour'] = train_data['conversionTime'].apply(get_time_hour)
#test_data
test_data = read_csv_file('./data/test.csv', True)
test_data['clickTime_day'] = test_data['clickTime'].apply(get_time_day)
test_data['clickTime_hour']= test_data['clickTime'].apply(get_time_hour)
# test_data['conversionTime_day'] = test_data['conversionTime'].apply(get_time_day)
# test_data['conversionTime_hour'] = test_data['conversionTime'].apply(get_time_hour)
train_user = pd.merge(train_data,user,on='userID')
train_user_ad = pd.merge(train_user,ad,on='creativeID')
train_user_ad_app = pd.merge(train_user_ad,app_categories,on='appID')
train_user_ad_app.head()
train_user_ad_app.columns
###Output
_____no_output_____
###Markdown
Extract the features and labels
###Code
# feature part
x_user_ad_app = train_user_ad_app.loc[:,['creativeID','userID','positionID',
'connectionType','telecomsOperator','clickTime_day','clickTime_hour','age', 'gender' ,'education',
'marriageStatus' ,'haveBaby' , 'residence' ,'age_process',
'hometown_province', 'hometown_city','residence_province', 'residence_city',
'adID', 'camgaignID', 'advertiserID', 'appID' ,'appPlatform' ,
'app_categories_first_class' ,'app_categories_second_class']]
x_user_ad_app = x_user_ad_app.values
x_user_ad_app = np.array(x_user_ad_app,dtype='int32')
# label part
y_user_ad_app =train_user_ad_app.loc[:,['label']].values
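# note: most sklearn estimators expect a 1-D label array, hence the
# y_user_ad_app.reshape(y_user_ad_app.shape[0],) used when fitting below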
###Output
_____no_output_____
###Markdown
Random forest modelling && feature importance ranking
###Code
# %matplotlib inline
# import matplotlib.pyplot as plt
# print('Plot feature importances...')
# ax = lgb.plot_importance(gbm, max_num_features=10)
# plt.show()
# use a random forest to compute feature importances
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, train_test_split
feat_labels = np.array(['creativeID','userID','positionID',
'connectionType','telecomsOperator','clickTime_day','clickTime_hour','age', 'gender' ,'education',
'marriageStatus' ,'haveBaby' , 'residence' ,'age_process',
'hometown_province', 'hometown_city','residence_province', 'residence_city',
'adID', 'camgaignID', 'advertiserID', 'appID' ,'appPlatform' ,
'app_categories_first_class' ,'app_categories_second_class'])
forest = RandomForestClassifier(n_estimators=100,
random_state=0,
n_jobs=-1)
forest.fit(x_user_ad_app, y_user_ad_app.reshape(y_user_ad_app.shape[0],))
importances = forest.feature_importances_
indices = np.argsort(importances)[::-1]
train_user_ad_app.shape
importances
['creativeID','userID','positionID',
'connectionType','telecomsOperator','clickTime_day','clickTime_hour','age', 'gender' ,'education',
'marriageStatus' ,'haveBaby' , 'residence' ,'age_process',
'hometown_province', 'hometown_city','residence_province', 'residence_city',
'adID', 'camgaignID', 'advertiserID', 'appID' ,'appPlatform' ,
'app_categories_first_class' ,'app_categories_second_class']
import matplotlib.pyplot as plt
%matplotlib inline
for f in range(x_user_ad_app.shape[1]):
print("%2d) %-*s %f" % (f + 1, 30,
feat_labels[indices[f]],
importances[indices[f]]))
plt.title('Feature Importances')
plt.bar(range(x_user_ad_app.shape[1]),
importances[indices],
color='lightblue',
align='center')
plt.xticks(range(x_user_ad_app.shape[1]),
feat_labels[indices], rotation=90)
plt.xlim([-1, x_user_ad_app.shape[1]])
plt.tight_layout()
#plt.savefig('./random_forest.png', dpi=300)
plt.show()
###Output
1) userID 0.166023
2) residence 0.099107
3) clickTime_day 0.077354
4) age 0.075498
5) positionID 0.065839
6) residence_province 0.063739
7) residence_city 0.057849
8) hometown_province 0.054218
9) education 0.048913
10) hometown_city 0.048328
11) clickTime_hour 0.039196
12) telecomsOperator 0.033300
13) marriageStatus 0.031278
14) creativeID 0.027913
15) adID 0.019010
16) haveBaby 0.018649
17) age_process 0.015707
18) camgaignID 0.015615
19) gender 0.012824
20) advertiserID 0.008360
21) app_categories_second_class 0.006968
22) appID 0.005930
23) connectionType 0.004271
24) app_categories_first_class 0.003228
25) appPlatform 0.000883
###Markdown
Random forest parameter tuning
###Code
from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import RandomForestClassifier
param_grid = {
#'n_estimators': [100],
'n_estimators': [10, 100, 500, 1000],
'max_features':[0.6, 0.7, 0.8, 0.9]
}
rf = RandomForestClassifier()
rfc = GridSearchCV(rf, param_grid, scoring = 'neg_log_loss', cv=3, n_jobs=2)
rfc.fit(x_user_ad_app, y_user_ad_app.reshape(y_user_ad_app.shape[0],))
print(rfc.best_score_)
print(rfc.best_params_)
###Output
_____no_output_____
###Markdown
Xgboost parameter tuning
###Code
import xgboost as xgb
import os
import numpy as np
from sklearn.model_selection import GridSearchCV
import xgboost as xgb
os.environ["OMP_NUM_THREADS"] = "8" # parallel training
rng = np.random.RandomState(4315)
import warnings
warnings.filterwarnings("ignore")
param_grid = {
'max_depth': [3, 4, 5, 7, 9],
'n_estimators': [10, 50, 100, 400, 800, 1000, 1200],
'learning_rate': [0.1, 0.2, 0.3],
'gamma':[0, 0.2],
'subsample': [0.8, 1],
'colsample_bylevel':[0.8, 1]
}
xgb_model = xgb.XGBClassifier()
rgs = GridSearchCV(xgb_model, param_grid, n_jobs=-1)
rgs.fit(x_user_ad_app, y_user_ad_app.reshape(y_user_ad_app.shape[0],))  # fit on the features and flattened labels built above
print(rgs.best_score_)
print(rgs.best_params_)
###Output
_____no_output_____
###Markdown
Positive/negative sample ratio
###Code
positive_num = train_user_ad_app[train_user_ad_app['label']==1].values.shape[0]
negative_num = train_user_ad_app[train_user_ad_app['label']==0].values.shape[0]
negative_num/float(positive_num)
###Output
_____no_output_____
###Markdown
**We can see that the positive and negative sample counts differ enormously; the data are severely unbalanced.** We therefore use B(l)agging, a bagging-based method adapted for unbalanced samples, for training and experiments.
###Code
from blagging import BlaggingClassifier
help(BlaggingClassifier)
# classifier for handling the unbalanced data
classifier = BlaggingClassifier(n_jobs=-1)
classifier.fit(x_user_ad_app, y_user_ad_app)
classifier.predict_proba(x_test_clean)
###Output
_____no_output_____
###Markdown
Prediction
###Code
test_data = pd.merge(test_data,user,on='userID')
test_user_ad = pd.merge(test_data,ad,on='creativeID')
test_user_ad_app = pd.merge(test_user_ad,app_categories,on='appID')
x_test_clean = test_user_ad_app.loc[:,['creativeID','userID','positionID',
'connectionType','telecomsOperator','clickTime_day','clickTime_hour','age', 'gender' ,'education',
'marriageStatus' ,'haveBaby' , 'residence' ,'age_process',
'hometown_province', 'hometown_city','residence_province', 'residence_city',
'adID', 'camgaignID', 'advertiserID', 'appID' ,'appPlatform' ,
'app_categories_first_class' ,'app_categories_second_class']].values
x_test_clean = np.array(x_test_clean,dtype='int32')
result_predict_prob = []
result_predict=[]
for i in range(scale):
result_indiv = clfs[i].predict(x_test_clean)
result_indiv_proba = clfs[i].predict_proba(x_test_clean)[:,1]
result_predict.append(result_indiv)
result_predict_prob.append(result_indiv_proba)
result_predict_prob = np.reshape(result_predict_prob,[-1,scale])
result_predict = np.reshape(result_predict,[-1,scale])
result_predict_prob = np.mean(result_predict_prob,axis=1)
result_predict = max_count(result_predict)
result_predict_prob = np.array(result_predict_prob).reshape([-1,1])
test_data['prob'] = result_predict_prob
test_data = test_data.loc[:,['instanceID','prob']]
test_data.to_csv('predict.csv',index=False)
print "prediction done!"
###Output
_____no_output_____ |
notebooks/research_notebooks/likelihood_analysis_pubchem.ipynb | ###Markdown
Descriptors Only
###Code
pl_desc = pd.read_excel('../database/plasticizer_data_v5_rdkit.xls')
pl_desc = pl_desc[pl_desc.columns[1:]]
org_data = pd.read_pickle('../data/pubchem/descriptors/org_chem_pubc.pkl')
org_desc = org_data[~org_data.isin([np.nan, np.inf, -np.inf]).any(1)]
org_desc = org_desc.reset_index(drop=True)
shared_cols = set(pl_desc.columns).intersection(set(org_desc.columns))
shared_cols = shared_cols - set(['SMILES', 'Ipc'])
pl_desc = pl_desc[shared_cols].to_numpy()
org_desc = org_desc[shared_cols].to_numpy()
org_smiles = org_data['SMILES'].to_numpy()
best_orgs = {}
for smile in org_smiles:
best_orgs[smile] = [0, 0]
pl_ics, test_pl_ics, org_ics, test_org_ics = find_top_mols(pl_desc, org_desc, org_smiles, best_orgs, scaling_factor=0.1, return_pca=True)
plt.scatter(pl_ics[:,0], pl_ics[:,1], label='Plasticizers', alpha=0.25)
plt.scatter(org_ics[:,0], org_ics[:,1], label='PubChem', alpha=0.25)
plt.scatter(test_pl_ics[:,0], test_pl_ics[:,1], label='Plasticizers Test', c='purple', alpha=0.25)
plt.legend()
plt.show()
pc1_all = np.concatenate([pl_ics[:,0], org_ics[:,0], test_pl_ics[:,0]], axis=0)
pc1_min = math.floor(pc1_all.min())
pc1_max = math.ceil(pc1_all.max())
pc2_all = np.concatenate([pl_ics[:,1], org_ics[:,1], test_pl_ics[:,1]], axis=0)
pc2_min = math.floor(pc2_all.min())
pc2_max = math.ceil(pc2_all.max())
kde, xs, ys = calc_2D_kde(pl_ics, [pc1_min, pc1_max], [pc2_min, pc2_max])
pl_test_kdes = []
pl_train_kdes = []
org_kdes = []
for pl_sample in test_pl_ics:
pl_test_kdes.append(get_2D_kde_value(pl_sample, kde, xs, ys))
for pl_sample in pl_ics:
pl_train_kdes.append(get_2D_kde_value(pl_sample, kde, xs, ys))
cmap = cm.get_cmap('viridis')
psm = plt.contourf(xs, ys, kde, cmap=cmap)
cbar = plt.colorbar()
plt.show()
normalizer = max(pl_test_kdes)
normalizer
best_org_desc = pd.read_pickle('org_ll_analysis/best_orgs_desc.pkl')
top_ten_desc = best_org_desc.iloc[:10,:]
theta_pl = top_ten_desc['Avg. Score'].to_numpy() / normalizer
smiles = top_ten_desc['SMILES'].to_numpy()
urls = []
for hit in smiles:
url = 'https://cactus.nci.nih.gov/chemical/structure/{}/image'.format(hit)
urls.append(url)
print(theta_pl[0])
Disp.Image(requests.get(urls[0]).content)
###Output
1.6485152258066265
###Markdown
Descriptors Only (LASSO)
###Code
pl_desc = pd.read_excel('../database/plasticizer_data_v5_rdkit.xls')
pl_desc = pl_desc[pl_desc.columns[1:]]
org_data = pd.read_pickle('../data/pubchem/descriptors/org_chem_pubc.pkl')
org_desc = org_data[~org_data.isin([np.nan, np.inf, -np.inf]).any(1)]
org_desc = org_desc.reset_index(drop=True)
shared_cols = set(pl_desc.columns).intersection(set(org_desc.columns))
shared_cols = shared_cols - set(['SMILES', 'Ipc'])
pl_desc = pl_desc[shared_cols]
org_desc = org_desc[shared_cols]
ones = np.ones((pl_desc.shape[0],1))
zeros = np.zeros((org_desc.shape[0],1))
pl_desc = np.hstack((pl_desc, ones))
org_desc = np.hstack((org_desc, zeros))
org_smiles = org_data['SMILES'].to_numpy()
best_orgs = {}
for smile in org_smiles:
best_orgs[smile] = [0, 0]
pl_ics, test_pl_ics, org_ics, test_org_ics = lasso_selection(pl_desc, org_desc, org_smiles, best_orgs, return_pca=True)
plt.scatter(pl_ics[:,0], pl_ics[:,1], label='Plasticizers', alpha=0.25)
plt.scatter(org_ics[:,0], org_ics[:,1], label='PubChem', alpha=0.25)
plt.scatter(test_pl_ics[:,0], test_pl_ics[:,1], label='Plasticizers Test', c='purple', alpha=0.25)
plt.legend()
plt.show()
pc1_all = np.concatenate([pl_ics[:,0], org_ics[:,0], test_pl_ics[:,0], test_org_ics[:,0]], axis=0)
pc1_min = math.floor(pc1_all.min())
pc1_max = math.ceil(pc1_all.max())
pc2_all = np.concatenate([pl_ics[:,1], org_ics[:,1], test_pl_ics[:,1], test_org_ics[:,1]], axis=0)
pc2_min = math.floor(pc2_all.min())
pc2_max = math.ceil(pc2_all.max())
kde, xs, ys = calc_2D_kde(pl_ics, [pc1_min, pc1_max], [pc2_min, pc2_max])
pl_test_kdes = []
pl_train_kdes = []
org_kdes = []
for pl_sample in test_pl_ics:
pl_test_kdes.append(get_2D_kde_value(pl_sample, kde, xs, ys))
for pl_sample in pl_ics:
pl_train_kdes.append(get_2D_kde_value(pl_sample, kde, xs, ys))
for org_sample in test_org_ics:
org_kdes.append(get_2D_kde_value(org_sample, kde, xs, ys))
normalizer = max(pl_train_kdes)
normalizer
best_orgs_desc_lasso = pd.read_pickle('org_ll_analysis/best_orgs_desc_lasso.pkl')
top_ten_desc_lasso = best_orgs_desc_lasso.iloc[:10,:]
theta_pl = top_ten_desc_lasso['Avg. Score'].to_numpy() / normalizer
smiles = top_ten_desc_lasso['SMILES'].to_numpy()
urls = []
for hit in smiles:
url = 'https://cactus.nci.nih.gov/chemical/structure/{}/image'.format(hit)
urls.append(url)
print(theta_pl[0])
Disp.Image(requests.get(urls[0]).content)
print(theta_pl[1])
Disp.Image(requests.get(urls[1]).content)
print(theta_pl[2])
Disp.Image(requests.get(urls[2]).content)
print(theta_pl[3])
Disp.Image(requests.get(urls[3]).content)
print(theta_pl[4])
Disp.Image(requests.get(urls[4]).content)
###Output
0.21306977523185147
###Markdown
Fingerprints Only
###Code
# Load Data
pl_data = pd.read_pickle('../data/pubchem/fingerprints/plasticizer_fingerprints.pkl')
pl_data = pl_data[(pl_data.T != 0).any()]
org_data = pd.read_pickle('../data/pubchem/fingerprints/organic_fingerprints.pkl')
org_data = org_data[(org_data.T != 0).any()]
org_cols = org_data.columns.to_list()
org_cols[0] = 'SMILES'
org_data.columns = org_cols
pl_smiles = pl_data['SMILES'].to_numpy()
pl_fps = pl_data[pl_data.columns[1:]].to_numpy()
org_data = org_data.sample(n=org_data.shape[0])
org_smiles = org_data['SMILES'].to_numpy()
org_fps = org_data[org_data.columns[1:]].to_numpy()
best_orgs = {}
for smile in org_smiles:
best_orgs[smile] = [0, 0]
pl_ics, test_pl_ics, org_ics, test_org_ics = find_top_mols(pl_fps, org_fps, org_smiles, best_orgs, scaling_factor=0.1, return_pca=True)
plt.scatter(pl_ics[:,0], pl_ics[:,1], label='Plasticizers', alpha=0.25)
plt.scatter(org_ics[:,0], org_ics[:,1], label='PubChem', alpha=0.25)
plt.scatter(test_pl_ics[:,0], test_pl_ics[:,1], label='Plasticizers Test', c='purple', alpha=0.25)
plt.legend()
plt.show()
pc1_all = np.concatenate([pl_ics[:,0], org_ics[:,0], test_pl_ics[:,0], test_org_ics[:,0]], axis=0)
pc1_min = math.floor(pc1_all.min())
pc1_max = math.ceil(pc1_all.max())
pc2_all = np.concatenate([pl_ics[:,1], org_ics[:,1], test_pl_ics[:,1], test_org_ics[:,1]], axis=0)
pc2_min = math.floor(pc2_all.min())
pc2_max = math.ceil(pc2_all.max())
kde, xs, ys = calc_2D_kde(pl_ics, [pc1_min, pc1_max], [pc2_min, pc2_max])
pl_test_kdes = []
pl_train_kdes = []
org_kdes = []
for pl_sample in test_pl_ics:
pl_test_kdes.append(get_2D_kde_value(pl_sample, kde, xs, ys))
for pl_sample in pl_ics:
pl_train_kdes.append(get_2D_kde_value(pl_sample, kde, xs, ys))
for org_sample in test_org_ics:
org_kdes.append(get_2D_kde_value(org_sample, kde, xs, ys))
normalizer = max(pl_train_kdes)
best_orgs_fps = pd.read_pickle('org_ll_analysis/best_orgs_fps.pkl')
top_ten_fps = best_orgs_fps.iloc[:10,:]
theta_pl = top_ten_fps['Avg. Score'].to_numpy() / normalizer
smiles = top_ten_fps['SMILES'].to_numpy()
urls = []
for hit in smiles:
url = 'https://cactus.nci.nih.gov/chemical/structure/{}/image'.format(hit)
urls.append(url)
print(theta_pl[0])
Disp.Image(requests.get(urls[0]).content)
print(theta_pl[1])
Disp.Image(requests.get(urls[1]).content)
print(theta_pl[2])
Disp.Image(requests.get(urls[2]).content)
print(theta_pl[3])
Disp.Image(requests.get(urls[3]).content)
print(theta_pl[4])
Disp.Image(requests.get(urls[4]).content)
print(theta_pl[5], smiles[5])
Disp.Image(requests.get(urls[5]).content)
print(theta_pl[6])
Disp.Image(requests.get(urls[6]).content)
print(theta_pl[7])
Disp.Image(requests.get(urls[7]).content)
pl_data = pd.read_excel('../database/plasticizer_data_v5_rdkit.xls')
pl_data = pl_data[pl_data.columns[1:]]
categories = pl_data['Chemical Category']
le = LabelEncoder()
le.fit(categories.astype(str))
labels = le.transform(categories.astype(str))
labels
pl_data['Chemical Category'].value_counts()
pl_data = pd.read_pickle('../database/plasticizer_data_v6_desc_fps.pkl')
pl_data = pl_data.drop(['Ipc'], axis=1)
# pl_data = pl_data[pl_data.columns[5:]].to_numpy()
org_data = pd.read_pickle('../data/pubchem/org_desc_fps.pkl')
org_data = org_data[~org_data.isin([np.nan, np.inf, -np.inf]).any(1)]
org_data = org_data.drop(['Ipc'], axis=1)
org_data = org_data[(75 < org_data['MolWt']) & (1500 > org_data['MolWt'])]
org_smiles = org_data['SMILES'].to_numpy()
# org_data = org_data[org_data.columns[1:]].to_numpy()
org_data['MolWt'].min(), org_data['MolWt'].max(), pl_data['MolWt'].min(), pl_data['MolWt'].max()
# Convert to arrays here (the conversions above are left commented out so the MolWt check
# can use the DataFrames); assuming, as in those commented lines, that the first 199
# numeric columns are descriptors and the remaining columns are fingerprint bits.
pl_arr = pl_data[pl_data.columns[5:]].to_numpy()
org_arr = org_data[org_data.columns[1:]].to_numpy()
pl_desc, pl_fps = pl_arr[:, :199], pl_arr[:, 199:]
org_desc, org_fps = org_arr[:, :199], org_arr[:, 199:]
scale_factor = 0.05
pl_fps = pl_fps*scale_factor
org_fps = org_fps*scale_factor
pl_all = np.concatenate([pl_desc, pl_fps], axis=1)
org_all = np.concatenate([org_desc, org_fps], axis=1)
org_idxs = np.random.choice(np.arange(len(org_all)), size=210, replace=False)
org_train = org_all[org_idxs,:]
train_data = np.concatenate([pl_all, org_train], axis=0)
scaler = MinMaxScaler()
train_data = scaler.fit_transform(train_data)
train_data.max()
pca = PCA(2)
train_ics = pca.fit_transform(train_data)
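# Quick check (sketch, not in the original): fraction of variance retained by the two PCs.
print(pca.explained_variance_ratio_, pca.explained_variance_ratio_.sum())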
plt.scatter(train_ics[:210,0], train_ics[:210,1], label='Plasticizer', alpha=0.25)
plt.scatter(train_ics[210:,0], train_ics[210:,1], label='Organic', alpha=0.25)
plt.legend()
plt.show()
###Output
_____no_output_____ |
AATCC/lab-report/w1/code/practice-leetcode-labs-w1.ipynb | ###Markdown
 LeetCode Link1. https://leetcode.com/2. https://leetcode-cn.com/ LeetCode 1. Two Sum (兩數之和) Given an array of integers nums and an integer target, return indices of the two numbers such that they add up to target.You may assume that each input would have exactly one solution, and you may not use the same element twice.You can return the answer in any order.Example 1:```Input: nums = [2,7,11,15], target = 9Output: [0,1]Explanation: Because nums[0] + nums[1] == 9, we return [0, 1].```Example 2:```Input: nums = [3,2,4], target = 6Output: [1,2]```Example 3:```Input: nums = [3,3], target = 6Output: [0,1]``` First approach
###Code
class Solution(object):
def twoSum(self, nums, target):
"""
:type nums: List[int]
:type target: int
:rtype: List[int]
"""
required = {}
for i in range(len(nums)):
if target - nums[i] in required:
return [required[target - nums[i]],i]
else:
required[nums[i]]=i
input_list = [ 2, 7, 11, 15]
target = 9
ob1 = Solution()
print(ob1.twoSum(input_list, target))
###Output
[0, 1]
###Markdown
 Result of the first approach: Success. Runtime: 139 ms, faster than 38.38% of Python3 online submissions for Two Sum. Memory Usage: 15.2 MB, less than 41.73% of Python3 online submissions for Two Sum. Second approach - the first example from class
###Code
class Solution(object):
def twoSum(self, nums, target):
"""
:type nums: List[int]
:type target: int
:rtype: List[int]
"""
for i in range(len(nums)):
tmp = nums[i]
remain = nums[i+1:]
if target - tmp in remain:
return[i, remain.index(target - tmp)+ i + 1]
input_list = [ 2, 7, 11, 15]
target = 9
ob1 = Solution()
print(ob1.twoSum(input_list, target))
###Output
[0, 1]
###Markdown
 Result of the second approach: Success. Runtime: 707 ms, faster than 35.03% of Python3 online submissions for Two Sum. Memory Usage: 14.9 MB, less than 73.00% of Python3 online submissions for Two Sum. Third approach - the second example from class
###Code
class Solution(object):
def twoSum(self, nums, target):
"""
:type nums: List[int]
:type target: int
:rtype: List[int]
"""
dict = {}
for i in range(len(nums)):
if target - nums[i] not in dict:
dict[nums[i]] = i
else:
return [dict[target - nums[i]], i]
input_list = [ 2, 7, 11, 15]
target = 9
ob1 = Solution()
print(ob1.twoSum(input_list, target))
###Output
[0, 1]
###Markdown
 Result of the third approach: Success. Runtime: 64 ms, faster than 86.00% of Python3 online submissions for Two Sum. Memory Usage: 15.2 MB, less than 57.63% of Python3 online submissions for Two Sum. Two Sum approach summary: The first class example uses a for loop to read every element and checks each one in turn; if the target is 9 and it finds the element 2 it looks for 7, and if it finds 7 it looks for 2. Its efficiency is not ideal. The second class example works on a similar principle to the first solution above, but uses a Python dictionary for direct lookup: a for loop walks the list and an if checks whether the complement is already in the dictionary. This is noticeably better than the first class example. LeetCode 69. Sqrt(x) (x 的平方根) Given a non-negative integer x, compute and return the square root of x.Since the return type is an integer, the decimal digits are truncated, and only the integer part of the result is returned.Note: You are not allowed to use any built-in exponent function or operator, such as pow(x, 0.5) or x ** 0.5.Example 1:```Input: x = 4Output: 2```Example 2:```Input: x = 8Output: 2Explanation: The square root of 8 is 2.82842..., and since the decimal part is truncated, 2 is returned.```
###Code
class Solution:
def mySqrt(self, x):
"""
:type x: int
:rtype: int
"""
if x < 2:
return x
left, right = 1, x // 2
while left <= right:
mid = left + (right - left) // 2
if mid > x / mid:
right = mid - 1
else:
left = mid + 1
return left - 1
x1 = 4
x2 = 9
ob1 = Solution()
print(ob1.mySqrt(x1))
print(ob1.mySqrt(x2))
###Output
2
3
###Markdown
 Result: Success. Runtime: 60 ms, faster than 47.79% of Python3 online submissions for Sqrt(x). Memory Usage: 13.9 MB, less than 81.69% of Python3 online submissions for Sqrt(x). Sqrt(x) approach summary: binary search, repeatedly splitting the search range into a left and a right half. LeetCode 70. Climbing Stairs (爬楼梯) You are climbing a staircase. It takes n steps to reach the top.Each time you can either climb 1 or 2 steps. In how many distinct ways can you climb to the top?Example 1:```Input: n = 2Output: 2Explanation: There are two ways to climb to the top.1. 1 step + 1 step2. 2 steps```Example 2:```Input: n = 3Output: 3Explanation: There are three ways to climb to the top.1. 1 step + 1 step + 1 step2. 1 step + 2 steps3. 2 steps + 1 step```
###Code
class Solution:
def climbStairs(self, n):
"""
:type n: int
:rtype: int
"""
prev, current = 0, 1
for i in range(n):
prev, current = current, prev + current
return current
x1 = 2
x2 = 3
ob1 = Solution()
print(ob1.climbStairs(x1))
print(ob1.climbStairs(x2))
###Output
2
3
|
_notebooks/2021-09-24-ai-and-ml-for-coders-ch2.ipynb | ###Markdown
 AI and ML for Coders Ch 2I'm very interested in earning the TensorFlow Developer Certificate. It's a real opportunity to move into the ML space more confidently and build my expertise in the field. However, I'm not sure of the best way to prepare for the exam; it may not be that tough, but I'm not sure. After noticing the [Tensorflow Developer Certificate Specialization](http://courser.org) on Coursera, I took a peek at the courses and noticed something familiar: the synopsis of the specialization is identical to the book AI and Machine Learning for Coders. This isn't completely surprising, because the instructor of the specialization is also the author of said book. So I'm going to work through each chapter to knock the dust off of my TF dev skills. That won't be all; I'll also want to build a few projects from scratch, but we'll get there soon. So here's Chapter 2. Introduction to Computer VisionThe ability to algorithmically recognize an image as a type of clothing is very difficult to capture with rule-based programming, so instead we use machine learning. Using the Fashion MNIST dataset we have images of 10 clothing types. Each image is 28x28 and grayscale, meaning each pixel value is between 0 and 255. We can't model a simple linear relationship between X (the image pixels) and Y (the clothing type). However, we can use the output layer as a representation of the 10 clothing types: each image is fed through the network, and each output node produces the probability that the image belongs to the clothing type that node represents.
###Code
!pip install tensorflow-cpu
import tensorflow as tf
import numpy as np
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense
from tensorflow import keras
data = tf.keras.datasets.fashion_mnist # dataset
(training_images, training_labels), (test_images, test_labels) = data.load_data() # load data into train and test sets
training_images = training_images / 255.0 # normalize the images so that they're all between 0 and 1
test_images = test_images / 255.0
model = keras.Sequential([
keras.layers.Flatten(input_shape=(28, 28)),
keras.layers.Dense(128, activation=tf.nn.relu), # hidden layer
keras.layers.Dense(10, activation=tf.nn.softmax) # output layer
])
model.compile(optimizer='adam', # adam is an evolution of sgd to better find that global optimum (uses momentum)
loss='sparse_categorical_crossentropy', # common loss function for softmax classification
metrics=['accuracy'])
model.fit(training_images, training_labels, epochs=5)
model.evaluate(test_images, test_labels)
# explore the prediction results
classifications = model.predict(test_images)
print(classifications[0]) # probabilities for each class
print(test_labels[0]) # actual correct class
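# Sketch (not in the original cell): the predicted class is the index of the largest
# probability, which can be compared with the true label printed above.
print(np.argmax(classifications[0]))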
# try 50 epochs to get overfitting
model.fit(training_images, training_labels, epochs=50)
model.evaluate(test_images, test_labels)
###Output
313/313 [==============================] - 0s 559us/step - loss: 0.5352 - accuracy: 0.8879
###Markdown
 After training for 50 epochs, the training accuracy increased, but the evaluation accuracy decreased. This is a sign of _overfitting_: the model is having a harder time generalizing to data it hasn't seen. By the way, always recompile the model when retraining. Let's use _callbacks_ to train the same model but stop training once a target accuracy has been reached.
###Code
class myCallback(keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs={}):
if logs.get('accuracy') > 0.95:
print("\nReached 95% accuracy so cancelling training!")
self.model.stop_training = True
callbacks = myCallback()
model = keras.models.Sequential([
keras.layers.Flatten(),
keras.layers.Dense(128, activation=tf.nn.relu),
keras.layers.Dense(10, activation=tf.nn.softmax)
])
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
model.fit(training_images, training_labels, epochs=50, callbacks=[callbacks])
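# Sketch (not in the original run): once the callback stops training early, the model can
# be re-evaluated on the held-out set exactly as before, e.g.:
# model.evaluate(test_images, test_labels)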
###Output
Epoch 1/50
1875/1875 [==============================] - 2s 756us/step - loss: 0.4989 - accuracy: 0.8232
Epoch 2/50
1875/1875 [==============================] - 1s 756us/step - loss: 0.3711 - accuracy: 0.8649
Epoch 3/50
1875/1875 [==============================] - 1s 756us/step - loss: 0.3344 - accuracy: 0.8776
Epoch 4/50
1875/1875 [==============================] - 1s 749us/step - loss: 0.3114 - accuracy: 0.8853
Epoch 5/50
1875/1875 [==============================] - 1s 756us/step - loss: 0.2932 - accuracy: 0.8913
Epoch 6/50
1875/1875 [==============================] - 1s 747us/step - loss: 0.2788 - accuracy: 0.8967
Epoch 7/50
1875/1875 [==============================] - 1s 750us/step - loss: 0.2646 - accuracy: 0.9012
Epoch 8/50
1875/1875 [==============================] - 1s 761us/step - loss: 0.2573 - accuracy: 0.9042
Epoch 9/50
1875/1875 [==============================] - 1s 752us/step - loss: 0.2458 - accuracy: 0.9086
Epoch 10/50
1875/1875 [==============================] - 1s 778us/step - loss: 0.2399 - accuracy: 0.9111
Epoch 11/50
1875/1875 [==============================] - 1s 764us/step - loss: 0.2291 - accuracy: 0.9140
Epoch 12/50
1875/1875 [==============================] - 1s 759us/step - loss: 0.2235 - accuracy: 0.9167
Epoch 13/50
1875/1875 [==============================] - 1s 768us/step - loss: 0.2171 - accuracy: 0.9190
Epoch 14/50
1875/1875 [==============================] - 1s 756us/step - loss: 0.2120 - accuracy: 0.9210
Epoch 15/50
1875/1875 [==============================] - 1s 795us/step - loss: 0.2069 - accuracy: 0.9226
Epoch 16/50
1875/1875 [==============================] - 1s 778us/step - loss: 0.1991 - accuracy: 0.9255
Epoch 17/50
1875/1875 [==============================] - 1s 779us/step - loss: 0.1936 - accuracy: 0.9273
Epoch 18/50
1875/1875 [==============================] - 1s 758us/step - loss: 0.1892 - accuracy: 0.9290
Epoch 19/50
1875/1875 [==============================] - 1s 771us/step - loss: 0.1844 - accuracy: 0.9310
Epoch 20/50
1875/1875 [==============================] - 1s 762us/step - loss: 0.1793 - accuracy: 0.9328
Epoch 21/50
1875/1875 [==============================] - 1s 762us/step - loss: 0.1762 - accuracy: 0.9338
Epoch 22/50
1875/1875 [==============================] - 1s 758us/step - loss: 0.1702 - accuracy: 0.9360
Epoch 23/50
1875/1875 [==============================] - 1s 771us/step - loss: 0.1670 - accuracy: 0.9373
Epoch 24/50
1875/1875 [==============================] - 2s 800us/step - loss: 0.1628 - accuracy: 0.9391
Epoch 25/50
1875/1875 [==============================] - 1s 761us/step - loss: 0.1600 - accuracy: 0.9400
Epoch 26/50
1875/1875 [==============================] - 1s 762us/step - loss: 0.1559 - accuracy: 0.9410
Epoch 27/50
1875/1875 [==============================] - 1s 757us/step - loss: 0.1526 - accuracy: 0.9418
Epoch 28/50
1875/1875 [==============================] - 1s 768us/step - loss: 0.1479 - accuracy: 0.9440
Epoch 29/50
1875/1875 [==============================] - 1s 760us/step - loss: 0.1463 - accuracy: 0.9447
Epoch 30/50
1875/1875 [==============================] - 1s 785us/step - loss: 0.1409 - accuracy: 0.9468
Epoch 31/50
1875/1875 [==============================] - 1s 768us/step - loss: 0.1395 - accuracy: 0.9477
Epoch 32/50
1875/1875 [==============================] - 1s 798us/step - loss: 0.1371 - accuracy: 0.9486
Epoch 33/50
1875/1875 [==============================] - 1s 754us/step - loss: 0.1353 - accuracy: 0.9484
Epoch 34/50
1875/1875 [==============================] - 1s 760us/step - loss: 0.1303 - accuracy: 0.9503
Reached 95% accuracy so cancelling training!
|
Problem Sets/Problem Set 4/.ipynb_checkpoints/SC_4_2-checkpoint.ipynb | ###Markdown
Self-ConvergenceSolution to Problem Set 4, Problem 2 ~ Arsh R. NadkarniTo run: **jupyter notebook SC_4_2.ipynb** Import Libraries
###Code
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import rcParams; rcParams["figure.dpi"] = 300
from matplotlib.ticker import (AutoMinorLocator)
#rcParams['text.usetex'] = True
plt.rc('font', family='serif')
plt.rc('xtick', labelsize='x-small')
plt.rc('ytick', labelsize='x-small')
###Output
_____no_output_____
###Markdown
 Functions to define the RK4 Step and the ODE Solver RK3 (Self-Derived)The third-order Runge-Kutta formula is:$$ \begin{array}{l} k_1 = h f(y_n,t_n)\\ k_2 = h f(y_n+\frac{k_1}{2}, t_n+\frac{h}{2})\\ k_3 = h f(y_n-k_1+2k_2, t_n+h)\\ y_{n+1} = y_n + \frac{1}{6}(k_1 + 4k_2 + k_3)\label{RK3}\end{array} $$ RK4The classic fourth-order Runge-Kutta formula is:$$ \begin{array}{l} k_1 = h f(y_n,t_n)\\ k_2 = h f(y_n+\frac{k_1}{2}, t_n+\frac{h}{2})\\ k_3 = h f(y_n+\frac{k_2}{2}, t_n+\frac{h}{2})\\ k_4 = h f(y_n+k_3, t_n+h)\\ y_{n+1} = y_n + \frac{k_1}{6}+ \frac{k_2}{3}+ \frac{k_3}{3} + \frac{k_4}{6} \label{RK4}\end{array} $$
###Code
def RK3_step(t, y, h, f):
    """
    Implements a single step of a third-order, explicit Runge-Kutta scheme
    """
    k1 = h * f(t, y)
    k2 = h * f(t + 0.5*h, y + 0.5*k1)
    k3 = h * f(t + h, y - k1 + 2*k2)
    return y + (k1 + 4*k2 + k3) / 6
def RK4_step(t, y, h, g, *P):
"""
Implements a single step of a fourth-order, explicit Runge-Kutta scheme
"""
thalf = t + 0.5*h
k1 = h * g(t, y, *P)
k2 = h * g(thalf, y + 0.5*k1, *P)
k3 = h * g(thalf, y + 0.5*k2, *P)
k4 = h * g(t + h, y + k3, *P)
return y + (k1 + 2*k2 + 2*k3 + k4)/6
def odeSolve(t0, y0, tmax, h, RHS, method, *P):
"""
ODE driver with constant step-size, allowing systems of ODE's
"""
# make array of times and find length of array
t = np.arange(t0,tmax+h,h)
ntimes, = t.shape
# find out if we are solving a scalar ODE or a system of ODEs, and allocate space accordingly
if type(y0) in [int, float]: # check if primitive type -- means only one eqn
neqn = 1
y = np.zeros( ntimes )
else: # otherwise assume a numpy array -- a system of more than one eqn
neqn, = y0.shape
y = np.zeros( (ntimes, neqn) )
# set first element of solution to initial conditions (possibly a vector)
y[0] = y0
# march on...
for i in range(0,ntimes-1):
y[i+1] = method(t[i], y[i], h, RHS, *P)
return t,y
###Output
_____no_output_____
###Markdown
* Solve the following initial value problem $$ \frac{dy}{dt} = -2ty, 0 \leq t \leq 3,\;\; y(0)=1 $$ using the RK3 and RK4 implementations developed in class. Function to describe RHS for the given ODE
###Code
def RHS(t, y):
"""
Implements the RHS (y'(x)) of the DE
"""
return -2*t*y
# initial conditions
h = 0.01
t0 = 0.0
y0 = 1.0
tmax = 3.0
# solve the ODE
t, y = odeSolve(t0, y0, tmax, h, RHS, RK4_step)
T, Y = odeSolve(t0, y0, tmax, h, RHS, RK3_step)
# plot
f,a = plt.subplots()
a.plot(t,y,'b', label='RK4')
a.plot(T,Y,'r', label='RK3')
a.set_xlabel('Time')
a.legend()
a.set_title("RK3 v/s RK4 for dy/dt = -2ty", fontweight='bold')
a.xaxis.set_minor_locator(AutoMinorLocator())
a.yaxis.set_minor_locator(AutoMinorLocator())
a.tick_params(which='minor', length=2.5, color='k')
plt.tight_layout()
plt.show()
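# Sketch (not part of the original solution): this IVP has the exact solution
# y(t) = exp(-t**2), so the global error of each scheme can be checked directly.
err_rk4 = np.abs(y - np.exp(-t**2))
err_rk3 = np.abs(Y - np.exp(-T**2))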
###Output
_____no_output_____
###Markdown
 * Now set $h_2$ to the step size of the solution corresponding to N = 800, and $h_3$ to the step size corresponding to N = 1600. Then consider the step sizes $h$ corresponding to N = 100, 200, 300, 400, 500, 600, and on one plot show the curves $\frac{y_{num}(h)-y_{num}(h_3)}{y_{num}(h_2)-y_{num}(h_3)}$ vs $h$ and $\frac{(h/h_3)^n-1}{2^n-1}$ vs $h$, where $n$ is the order of the scheme. Use this plot to argue that your code is self-convergent. Function to vary h
###Code
def h(t0,tmax,N):
return (tmax-t0)/N
# initial conditions
t0 = 0.0
y0 = 1.0
tmax = 3.0
# define N
N2 = 800
N3 = 1600
N = [100, 200, 300, 400, 500, 600]
#define h
h2 = h(t0,tmax,N2)
h3 = h(t0,tmax,N3)
H = np.zeros(len(N))
for i in range(len(N)):
H[i] = h(t0,tmax,N[i])
# solve the ODE
t2, y2 = odeSolve(t0, y0, tmax, h2, RHS, RK4_step)
t3, y3 = odeSolve(t0, y0, tmax, h3, RHS, RK4_step)
T1, Y1 = odeSolve(t0, y0, tmax, H[0], RHS, RK4_step)
T2, Y2 = odeSolve(t0, y0, tmax, H[1], RHS, RK4_step)
T3, Y3 = odeSolve(t0, y0, tmax, H[2], RHS, RK4_step)
T4, Y4 = odeSolve(t0, y0, tmax, H[3], RHS, RK4_step)
T5, Y5 = odeSolve(t0, y0, tmax, H[4], RHS, RK4_step)
T6, Y6 = odeSolve(t0, y0, tmax, H[5], RHS, RK4_step)
# define LHS and RHS from equation 8 on the problem set
LHS = [(Y1[-1]-y3[-1])/(y2[-1]-y3[-1]),
(Y2[-1]-y3[-1])/(y2[-1]-y3[-1]),
(Y3[-1]-y3[-1])/(y2[-1]-y3[-1]),
(Y4[-1]-y3[-1])/(y2[-1]-y3[-1]),
(Y5[-1]-y3[-1])/(y2[-1]-y3[-1]),
(Y6[-1]-y3[-1])/(y2[-1]-y3[-1])]
RHS = []
n = 4  # n is the convergence order of the scheme (4 for RK4), not the number of steps
for i in range(6):
    RHS.append(((H[i]/h3)**n - 1)/(2**n - 1))
#plot
f,a = plt.subplots()
a.plot(H,LHS,'r', label=r'$\frac{y_{num}(h)−y_{num}(h3)}{y_{num}(h2)−y_{num}(h3)}$')
a.set_xlabel('$h$')
a.set_title(r"$\frac{y_{num}(h)−y_{num}(h3)}{y_{num}(h2)−y_{num}(h3)}$ for dy/dt = -2ty using RK4", fontweight='bold')
a.xaxis.set_minor_locator(AutoMinorLocator())
a.yaxis.set_minor_locator(AutoMinorLocator())
a.tick_params(which='minor', length=2.5, color='k')
a.legend()
plt.tight_layout()
plt.show()
f,a = plt.subplots()
a.plot(H,RHS,'b', label=r'$\frac{(h/h3)^n−1}{2^n−1}$')
a.set_xlabel('$h$')
a.set_title(r"$\frac{(h/h3)^n−1}{2^n−1}$ for dy/dt = -2ty using RK4", fontweight='bold')
a.xaxis.set_minor_locator(AutoMinorLocator())
a.yaxis.set_minor_locator(AutoMinorLocator())
a.tick_params(which='minor', length=2.5, color='k')
a.legend()
plt.tight_layout()
plt.show()
# final plot
f,a = plt.subplots()
a.plot(H,LHS,'r', label=r'$\frac{y_{num}(h)−y_{num}(h3)}{y_{num}(h2)−y_{num}(h3)}$')
a.plot(H,RHS,'b--', label=r'$\frac{(h/h3)^n−1}{2^n−1}$')
a.set_xlabel('$h$')
a.set_title("Self-Convergence for dy/dt = -2ty using RK4", fontweight='bold')
a.xaxis.set_minor_locator(AutoMinorLocator())
a.yaxis.set_minor_locator(AutoMinorLocator())
a.tick_params(which='minor', length=2.5, color='k')
a.legend()
plt.tight_layout()
plt.show()
###Output
_____no_output_____ |
CNN/.ipynb_checkpoints/hp-covidx-checkpoint.ipynb | ###Markdown
CNN Hyperparameters COVIDx Dataset
###Code
from fastai.vision.all import *
from efficientnet_pytorch import EfficientNet
path = Path('/home/jupyter/covidx')
torch.cuda.empty_cache()
# fix result
def seed_everything(seed):
random.seed(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
torch.cuda.manual_seed(seed)
torch.backends.cudnn.deterministic = True
SEED = 42
seed_everything(SEED)
df = pd.read_csv(path/'covidx.csv')
df
get_x=lambda x:path/"images"/f"{x[1]}"
get_y=lambda x:x[2]
splitter=ColSplitter('is_valid')
metrics=[accuracy,
RocAuc(average='macro', multi_class='ovr'),
MatthewsCorrCoef(sample_weight=None),
Precision(average='macro'),
Recall(average='macro'),
F1Score(average='macro')]
item_tfms=Resize(480, method='squish', pad_mode='zeros', resamples=(2, 0))
batch_tfms=[*aug_transforms(mult=1.0, do_flip=False, flip_vert=False,
max_rotate=20.0, max_zoom=1.2, max_lighting=0.3, max_warp=0.2,
p_affine=0.75, p_lighting=0.75,
xtra_tfms=None, size=None, mode='bilinear', pad_mode='reflection',
align_corners=True, batch=False, min_scale=1.0),
Normalize.from_stats(*imagenet_stats)]
db = DataBlock(blocks=(ImageBlock(cls=PILImageBW), CategoryBlock),
get_x=get_x,
get_y=get_y,
splitter=splitter,
item_tfms = item_tfms,
batch_tfms=batch_tfms)
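# Sanity check (sketch, not in the original notebook): build one set of dataloaders from
# the DataBlock above and peek at a batch before the training runs below.
dls_check = db.dataloaders(df, bs=32)
dls_check.show_batch(max_n=4)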
###Output
_____no_output_____
###Markdown
VGG-16 Epoch 10
###Code
from torchvision.models import vgg16
arch = vgg16
epoch = 10
bs = 32
loss_func=CrossEntropyLossFlat()
dl = db.dataloaders(df, bs=bs)
learn = cnn_learner(dl, arch=arch, loss_func=loss_func, metrics=metrics)
learn.fine_tune(epoch)
###Output
_____no_output_____
###Markdown
20
###Code
from torchvision.models import vgg16
arch = vgg16
epoch = 20
bs = 32
loss_func=CrossEntropyLossFlat()
dl = db.dataloaders(df, bs=bs)
learn = cnn_learner(dl, arch=arch, loss_func=loss_func, metrics=metrics)
learn.fine_tune(epoch)
###Output
_____no_output_____
###Markdown
40
###Code
from torchvision.models import vgg16
arch = vgg16
epoch = 40
bs = 32
loss_func=CrossEntropyLossFlat()
dl = db.dataloaders(df, bs=bs)
learn = cnn_learner(dl, arch=arch, loss_func=loss_func, metrics=metrics)
learn.fine_tune(epoch)
###Output
_____no_output_____
###Markdown
Batch Size 8
###Code
from torchvision.models import vgg16
arch = vgg16
bs = 8
epoch = 30
loss_func=CrossEntropyLossFlat()
dl = db.dataloaders(df, bs=bs)
learn = cnn_learner(dl, arch=arch, loss_func=loss_func, metrics=metrics)
learn.fine_tune(epoch)
###Output
_____no_output_____
###Markdown
16
###Code
from torchvision.models import vgg16
arch = vgg16
bs = 16
epoch = 30
loss_func=CrossEntropyLossFlat()
dl = db.dataloaders(df, bs=bs)
learn = cnn_learner(dl, arch=arch, loss_func=loss_func, metrics=metrics)
learn.fine_tune(epoch)
learn.summary()
###Output
_____no_output_____
###Markdown
Loss Function LabelSmoothingCrossEntropyFlat()
###Code
from torchvision.models import vgg16
arch = vgg16
loss_func=LabelSmoothingCrossEntropyFlat(axis=-1, eps=0.06, reduction='mean', flatten=True, floatify=False, is_2d=True)
bs = 32
epoch = 30
dl = db.dataloaders(df, bs=bs)
learn = cnn_learner(dl, arch=arch, loss_func=loss_func, metrics=metrics)
learn.fine_tune(epoch)
###Output
_____no_output_____
###Markdown
VGG-19 Epoch 10
###Code
from torchvision.models import vgg19
arch = vgg19
epoch = 10
bs = 32
loss_func=CrossEntropyLossFlat()
dl = db.dataloaders(df, bs=bs)
learn = cnn_learner(dl, arch=arch, loss_func=loss_func, metrics=metrics)
learn.fine_tune(epoch)
###Output
_____no_output_____
###Markdown
20
###Code
from torchvision.models import vgg19
arch = vgg19
epoch = 20
bs = 32
loss_func=CrossEntropyLossFlat()
dl = db.dataloaders(df, bs=bs)
learn = cnn_learner(dl, arch=arch, loss_func=loss_func, metrics=metrics)
learn.fine_tune(epoch)
###Output
_____no_output_____
###Markdown
40
###Code
from torchvision.models import vgg19
arch = vgg19
epoch = 40
bs = 32
loss_func=CrossEntropyLossFlat()
dl = db.dataloaders(df, bs=bs)
learn = cnn_learner(dl, arch=arch, loss_func=loss_func, metrics=metrics)
learn.fine_tune(epoch)
###Output
_____no_output_____
###Markdown
Batch Size 8
###Code
from torchvision.models import vgg19
arch = vgg19
bs = 8
epoch = 30
loss_func=CrossEntropyLossFlat()
dl = db.dataloaders(df, bs=bs)
learn = cnn_learner(dl, arch=arch, loss_func=loss_func, metrics=metrics)
learn.fine_tune(epoch)
###Output
_____no_output_____
###Markdown
16
###Code
from torchvision.models import vgg19
arch = vgg19
bs = 16
epoch = 30
loss_func=CrossEntropyLossFlat()
dl = db.dataloaders(df, bs=bs)
learn = cnn_learner(dl, arch=arch, loss_func=loss_func, metrics=metrics)
learn.fine_tune(epoch)
###Output
_____no_output_____
###Markdown
Loss Function LabelSmoothingCrossEntropyFlat()
###Code
from torchvision.models import vgg19
arch = vgg19
loss_func=LabelSmoothingCrossEntropyFlat(axis=-1, eps=0.06, reduction='mean', flatten=True, floatify=False, is_2d=True)
bs = 32
epoch = 30
dl = db.dataloaders(df, bs=bs)
learn = cnn_learner(dl, arch=arch, loss_func=loss_func, metrics=metrics)
learn.fine_tune(epoch)
###Output
_____no_output_____
###Markdown
ResNet-18 Epoch 10
###Code
arch = resnet18
epoch = 10
bs = 32
loss_func=CrossEntropyLossFlat()
dl = db.dataloaders(df, bs=bs)
learn = cnn_learner(dl, arch=arch, loss_func=loss_func, metrics=metrics)
learn.fine_tune(epoch)
###Output
_____no_output_____
###Markdown
20
###Code
arch = resnet18
epoch = 20
bs = 32
loss_func=CrossEntropyLossFlat()
dl = db.dataloaders(df, bs=bs)
learn = cnn_learner(dl, arch=arch, loss_func=loss_func, metrics=metrics)
learn.fine_tune(epoch)
###Output
_____no_output_____
###Markdown
40
###Code
arch = resnet18
epoch = 40
bs = 32
loss_func=CrossEntropyLossFlat()
dl = db.dataloaders(df, bs=bs)
learn = cnn_learner(dl, arch=arch, loss_func=loss_func, metrics=metrics)
learn.fine_tune(epoch)
###Output
_____no_output_____
###Markdown
Batch Size 8
###Code
arch = resnet18
bs = 8
epoch = 30
loss_func=CrossEntropyLossFlat()
dl = db.dataloaders(df, bs=bs)
learn = cnn_learner(dl, arch=arch, loss_func=loss_func, metrics=metrics)
learn.fine_tune(epoch)
###Output
_____no_output_____
###Markdown
16
###Code
arch = resnet18
bs = 16
epoch = 30
loss_func=CrossEntropyLossFlat()
dl = db.dataloaders(df, bs=bs)
learn = cnn_learner(dl, arch=arch, loss_func=loss_func, metrics=metrics)
learn.fine_tune(epoch)
###Output
_____no_output_____
###Markdown
Loss Function LabelSmoothingCrossEntropyFlat()
###Code
arch = resnet18
loss_func=LabelSmoothingCrossEntropyFlat(axis=-1, eps=0.06, reduction='mean', flatten=True, floatify=False, is_2d=True)
bs = 32
epoch = 30
dl = db.dataloaders(df, bs=bs)
learn = cnn_learner(dl, arch=arch, loss_func=loss_func, metrics=metrics)
learn.fine_tune(epoch)
###Output
_____no_output_____
###Markdown
ResNet-34 Epoch 10
###Code
arch = resnet34
epoch = 10
bs = 32
loss_func=CrossEntropyLossFlat()
dl = db.dataloaders(df, bs=bs)
learn = cnn_learner(dl, arch=arch, loss_func=loss_func, metrics=metrics)
learn.fine_tune(epoch)
###Output
_____no_output_____
###Markdown
20
###Code
arch = resnet34
epoch = 20
bs = 32
loss_func=CrossEntropyLossFlat()
dl = db.dataloaders(df, bs=bs)
learn = cnn_learner(dl, arch=arch, loss_func=loss_func, metrics=metrics)
learn.fine_tune(epoch)
###Output
_____no_output_____
###Markdown
40
###Code
arch = resnet34
epoch = 40
bs = 32
loss_func=CrossEntropyLossFlat()
dl = db.dataloaders(df, bs=bs)
learn = cnn_learner(dl, arch=arch, loss_func=loss_func, metrics=metrics)
learn.fine_tune(epoch)
###Output
_____no_output_____
###Markdown
Batch Size 8
###Code
arch = resnet34
bs = 8
epoch = 30
loss_func=CrossEntropyLossFlat()
dl = db.dataloaders(df, bs=bs)
learn = cnn_learner(dl, arch=arch, loss_func=loss_func, metrics=metrics)
learn.fine_tune(epoch)
###Output
_____no_output_____
###Markdown
16
###Code
arch = resnet34
bs = 16
epoch = 30
loss_func=CrossEntropyLossFlat()
dl = db.dataloaders(df, bs=bs)
learn = cnn_learner(dl, arch=arch, loss_func=loss_func, metrics=metrics)
learn.fine_tune(epoch)
###Output
_____no_output_____
###Markdown
Loss Function LabelSmoothingCrossEntropyFlat()
###Code
arch = resnet34
loss_func=LabelSmoothingCrossEntropyFlat(axis=-1, eps=0.06, reduction='mean', flatten=True, floatify=False, is_2d=True)
bs = 32
epoch = 30
dl = db.dataloaders(df, bs=bs)
learn = cnn_learner(dl, arch=arch, loss_func=loss_func, metrics=metrics)
learn.fine_tune(epoch)
###Output
_____no_output_____
###Markdown
ResNet-50 Epoch 10
###Code
arch = resnet50
epoch = 10
bs = 32
loss_func=CrossEntropyLossFlat()
dl = db.dataloaders(df, bs=bs)
learn = cnn_learner(dl, arch=arch, loss_func=loss_func, metrics=metrics)
learn.fine_tune(epoch)
###Output
_____no_output_____
###Markdown
20
###Code
arch = resnet50
epoch = 20
bs = 32
loss_func=CrossEntropyLossFlat()
dl = db.dataloaders(df, bs=bs)
learn = cnn_learner(dl, arch=arch, loss_func=loss_func, metrics=metrics)
learn.fine_tune(epoch)
###Output
_____no_output_____
###Markdown
40
###Code
arch = resnet50
epoch = 40
bs = 32
loss_func=CrossEntropyLossFlat()
dl = db.dataloaders(df, bs=bs)
learn = cnn_learner(dl, arch=arch, loss_func=loss_func, metrics=metrics)
learn.fine_tune(epoch)
###Output
_____no_output_____
###Markdown
Batch Size 8
###Code
arch = resnet50
bs = 8
epoch = 30
loss_func=CrossEntropyLossFlat()
dl = db.dataloaders(df, bs=bs)
learn = cnn_learner(dl, arch=arch, loss_func=loss_func, metrics=metrics)
learn.fine_tune(epoch)
###Output
_____no_output_____
###Markdown
16
###Code
arch = resnet50
bs = 16
epoch = 30
loss_func=CrossEntropyLossFlat()
dl = db.dataloaders(df, bs=bs)
learn = cnn_learner(dl, arch=arch, loss_func=loss_func, metrics=metrics)
learn.fine_tune(epoch)
###Output
_____no_output_____
###Markdown
Loss Function LabelSmoothingCrossEntropyFlat()
###Code
arch = resnet50
loss_func=LabelSmoothingCrossEntropyFlat(axis=-1, eps=0.06, reduction='mean', flatten=True, floatify=False, is_2d=True)
bs = 32
epoch = 30
dl = db.dataloaders(df, bs=bs)
learn = cnn_learner(dl, arch=arch, loss_func=loss_func, metrics=metrics)
learn.fine_tune(epoch)
###Output
_____no_output_____
###Markdown
Efficientnet-B0 Epoch 10
###Code
from efficientnet_pytorch import EfficientNet
arch = EfficientNet.from_pretrained("efficientnet-b0")
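# Note (assumption, not in the original): from_pretrained defaults to a 1000-class
# ImageNet head; passing num_classes equal to the number of labels in df (dl.c once the
# dataloaders exist) would size the final layer to this dataset, e.g.
# arch = EfficientNet.from_pretrained("efficientnet-b0", num_classes=dl.c)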
epoch = 10
bs = 32
loss_func=CrossEntropyLossFlat()
dl = db.dataloaders(df, bs=bs)
learn = Learner(dl, model=arch, loss_func=loss_func, metrics=metrics)
learn.fine_tune(epoch)
###Output
_____no_output_____
###Markdown
20
###Code
from efficientnet_pytorch import EfficientNet
arch = EfficientNet.from_pretrained("efficientnet-b0")
epoch = 20
bs = 32
loss_func=CrossEntropyLossFlat()
dl = db.dataloaders(df, bs=bs)
learn = Learner(dl, model=arch, loss_func=loss_func, metrics=metrics)
learn.fine_tune(epoch)
###Output
_____no_output_____
###Markdown
40
###Code
from efficientnet_pytorch import EfficientNet
arch = EfficientNet.from_pretrained("efficientnet-b0")
epoch = 40
bs = 32
loss_func=CrossEntropyLossFlat()
dl = db.dataloaders(df, bs=bs)
learn = Learner(dl, model=arch, loss_func=loss_func, metrics=metrics)
learn.fine_tune(epoch)
###Output
_____no_output_____
###Markdown
Batch Size 8
###Code
from efficientnet_pytorch import EfficientNet
arch = EfficientNet.from_pretrained("efficientnet-b0")
bs = 8
epoch = 30
loss_func=CrossEntropyLossFlat()
dl = db.dataloaders(df, bs=bs)
learn = Learner(dl, model=arch, loss_func=loss_func, metrics=metrics)
learn.fine_tune(epoch)
###Output
_____no_output_____
###Markdown
16
###Code
from efficientnet_pytorch import EfficientNet
arch = EfficientNet.from_pretrained("efficientnet-b0")
bs = 16
epoch = 30
loss_func=CrossEntropyLossFlat()
dl = db.dataloaders(df, bs=bs)
learn = Learner(dl, model=arch, loss_func=loss_func, metrics=metrics)
learn.fine_tune(epoch)
###Output
_____no_output_____
###Markdown
Loss Function LabelSmoothingCrossEntropyFlat()
###Code
from efficientnet_pytorch import EfficientNet
arch = EfficientNet.from_pretrained("efficientnet-b0")
bs = 32
epoch = 30
loss_func=LabelSmoothingCrossEntropyFlat(axis=-1, eps=0.2, reduction='mean', flatten=True, floatify=False, is_2d=True)
dl = db.dataloaders(df, bs=bs)
learn = Learner(dl, model=arch, loss_func=loss_func, metrics=metrics)
learn.fine_tune(30)
###Output
_____no_output_____ |
Data Science/Pandas/Practice+Exercise+2+Movies.ipynb | ###Markdown
Practice Exercise 2 In this assignment, you will try to find some interesting insights into a few movies released between 1916 and 2016, using Python. You will have to download a movie dataset, write Python code to explore the data, gain insights into the movies, actors, directors, and collections, and submit the code. Some tips before starting the assignment1. Identify the task to be performed correctly, and only then proceed to write the required code. Don’t perform any incorrect analysis or look for information that isn’t required for the assignment.2. In some cases, the variable names have already been assigned, and you just need to write code against them. In other cases, the names to be given are mentioned in the instructions. We strongly advise you to use the mentioned names only.3. Always keep inspecting your data frame after you have performed a particular set of operations.4. There are some checkpoints given in the IPython notebook provided. They're just useful pieces of information you can use to check if the result you have obtained after performing a particular task is correct or not.5. Note that you will be asked to refer to documentation for solving some of the questions. That is done on purpose for you to learn new commands and also how to use the documentation.
###Code
# Import the numpy and pandas packages
import numpy as np
import pandas as pd
###Output
_____no_output_____
###Markdown
Task 1: Reading and Inspection**Subtask 1.1: Import and read**Import and read the movie database. Store it in a variable called `movies`.
###Code
# Write your code for importing the csv file here
movies = pd.read_csv('Movies.csv')
movies
###Output
_____no_output_____
###Markdown
**Subtask 1.2: Inspect the dataframe**Inspect the dataframe's columns, shapes, variable types etc.
###Code
# Write your code for inspection here
movies.shape
###Output
_____no_output_____
###Markdown
Question 1: How many rows and columns are present in the dataframe? - (3821, 26)- (3879, 28)- (3853, 28)- (3866, 26)
###Code
movies.shape
###Output
_____no_output_____
###Markdown
Question 2: How many columns have null values present in them? Try writing a code for this instead of counting them manually.- 3- 6- 9- 12
###Code
# movies.isna().sum(axis=0)
movies.isnull().any().sum()   # number of columns containing at least one null value
###Output
_____no_output_____
###Markdown
movies Task 2: Cleaning the Data**Subtask 2.1: Drop unecessary columns**For this assignment, you will mostly be analyzing the movies with respect to the ratings, gross collection, popularity of movies, etc. So many of the columns in this dataframe are not required. So it is advised to drop the following columns.- color- director_facebook_likes- actor_1_facebook_likes- actor_2_facebook_likes- actor_3_facebook_likes- actor_2_name- cast_total_facebook_likes- actor_3_name- duration- facenumber_in_poster- content_rating- country- movie_imdb_link- aspect_ratio- plot_keywords
###Code
# Check the 'drop' function in the Pandas library - dataframe.drop(list_of_unnecessary_columns, axis = )
# Write your code for dropping the columns here. It is advised to keep inspecting the dataframe after each set of operations
movies.drop(['color', 'director_facebook_likes', 'actor_1_facebook_likes', 'actor_2_facebook_likes',
'actor_3_facebook_likes', 'actor_2_name', 'cast_total_facebook_likes', 'actor_3_name', 'duration',
'facenumber_in_poster', 'content_rating', 'country', 'movie_imdb_link', 'aspect_ratio', 'plot_keywords'], axis=1, inplace=True)
movies
###Output
_____no_output_____
###Markdown
Question 3: What is the count of columns in the new dataframe? - 10- 13- 15- 17
###Code
movies.shape
###Output
_____no_output_____
###Markdown
**Subtask 2.2: Inspect Null values**As you have seen above, there are null values in multiple columns of the dataframe 'movies'. Find out the percentage of null values in each column of the dataframe 'movies'.
###Code
# Write your code here
movies.count().idxmin()
###Output
_____no_output_____
###Markdown
Question 4: Which column has the highest percentage of null values? - language- genres- num_critic_for_reviews- imdb_score **Subtask 2.3: Fill NaN values**You might notice that the `language` column has some NaN values. Here, on inspection, you will see that it is safe to replace all the missing values with `'English'`.
###Code
# Write your code for filling the NaN values in the 'language' column here
movies.loc[pd.isnull(movies['language']), ['language']] = 'English'
(movies['language'] == 'English').sum()
###Output
_____no_output_____
###Markdown
Question 5: What is the count of movies made in English language after replacing the NaN values with English? - 3670- 3674- 3668- 3672 Task 3: Data Analysis**Subtask 3.1: Change the unit of columns**Convert the unit of the `budget` and `gross` columns from `$` to `million $`. movies
###Code
# Write your code for unit conversion here
movies['grossInM'] = movies.gross // 1000000;
movies['budgetInM'] = movies.budget // 1000000;
movies
###Output
_____no_output_____
###Markdown
**Subtask 3.2: Find the movies with highest profit** 1. Create a new column called `profit` which contains the difference of the two columns: `gross` and `budget`. 2. Sort the dataframe using the `profit` column as reference. (Find which command can be used here to sort entries from the documentation) 3. Extract the top ten profiting movies in descending order and store them in a new dataframe - `top10`
###Code
# Write your code for creating the profit column here
movies['profit'] = movies['gross'] - movies['budget']
movies
# Write your code for sorting the dataframe here
movies.sort_values('profit', ascending=False, inplace = True)
movies
top10 = movies.head(10)
top10
###Output
_____no_output_____
###Markdown
**Checkpoint:** You might spot two movies directed by `James Cameron` in the list. Question 6: Which movie is ranked 5th from the top in the list obtained? - E.T. the Extra-Terrestrial- The Avengers- The Dark Knight- Titanic **Subtask 3.3: Find IMDb Top 250**Create a new dataframe `IMDb_Top_250` and store the top 250 movies with the highest IMDb Rating (corresponding to the column: `imdb_score`). Also make sure that for all of these movies, the `num_voted_users` is greater than 25,000. Also add a `Rank` column containing the values 1 to 250 indicating the ranks of the corresponding films.
###Code
# Write your code for extracting the top 250 movies as per the IMDb score here. Make sure that you store it in a new dataframe
# and name that dataframe as 'IMDb_Top_250'
IMDb_Top_250 = (movies[movies['num_voted_users'] > 25000]).sort_values('imdb_score', ascending=False).head(250)
IMDb_Top_250['rank'] = range(1, 251)
IMDb_Top_250
###Output
_____no_output_____
###Markdown
Question 7: Suppose movies are divided into 5 buckets based on the IMDb ratings: - 7.5 to 8- 8 to 8.5- 8.5 to 9- 9 to 9.5- 9.5 to 10 Which bucket holds the maximum number of movies from *IMDb_Top_250*?
###Code
import matplotlib.pyplot as plt
plt.hist(IMDb_Top_250['imdb_score'], bins = 5, range = (7.5,10), edgecolor = 'cyan')
plt.show()
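# Alternative check (sketch, not in the original): count the movies in each 0.5-wide
# rating bucket directly instead of reading them off the histogram.
print(pd.cut(IMDb_Top_250['imdb_score'], bins=[7.5, 8, 8.5, 9, 9.5, 10]).value_counts().sort_index())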
###Output
_____no_output_____
###Markdown
**Subtask 3.4: Find the critic-favorite and audience-favorite actors** 1. Create three new dataframes namely, `Meryl_Streep`, `Leo_Caprio`, and `Brad_Pitt` which contain the movies in which the actors: 'Meryl Streep', 'Leonardo DiCaprio', and 'Brad Pitt' are the lead actors. Use only the `actor_1_name` column for extraction. Also, make sure that you use the names 'Meryl Streep', 'Leonardo DiCaprio', and 'Brad Pitt' for the said extraction. 2. Append the rows of all these dataframes and store them in a new dataframe named `Combined`. 3. Group the combined dataframe using the `actor_1_name` column. 4. Find the mean of the `num_critic_for_reviews` and `num_user_for_review` and identify the actors which have the highest mean.
###Code
# Write your code for creating three new dataframes here
Meryl_Streep = movies[movies['actor_1_name'] == 'Meryl Streep']
Meryl_Streep
Leo_Caprio = movies[movies['actor_1_name'] == 'Leonardo DiCaprio']
Leo_Caprio
Brad_Pitt = movies[movies['actor_1_name'] == 'Brad Pitt']
Brad_Pitt
Meryl_Streep.shape
Leo_Caprio.shape
Brad_Pitt.shape
# Write your code for combining the three dataframes here
Combined = Meryl_Streep.append((Leo_Caprio, Brad_Pitt))
Combined
# Write your code for grouping the combined dataframe here
Combined[['num_user_for_reviews', 'actor_1_name']].groupby('actor_1_name').mean().sort_values('num_user_for_reviews', ascending=False)
# Write the code for finding the mean of critic reviews and audience reviews here
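# One way to fill in the placeholder above (sketch): group both review columns at once so
# the critic and audience means can be compared side by side.
print(Combined.groupby('actor_1_name')[['num_critic_for_reviews', 'num_user_for_reviews']].mean())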
###Output
_____no_output_____
###Markdown
Question 8: Which actor is highest rated among the three actors according to the user reviews? - Meryl Streep- Leonardo DiCaprio- Brad Pitt Question 9: Which actor is highest rated among the three actors according to the critics?- Meryl Streep- Leonardo DiCaprio- Brad Pitt
###Code
Combined[['num_critic_for_reviews', 'actor_1_name']].groupby('actor_1_name').mean().sort_values('num_critic_for_reviews', ascending=False)
help(np.arange)
o1 = np.arange(0, 40)
o1
o1[[3, 10]]  # fancy indexing: elements at positions 3 and 10
i1 = np.random.randint(10, 1000, 20)
i1
import math
[(x // 365) for x in i1]
###Output
_____no_output_____ |
trying_new.ipynb | ###Markdown
###Code
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import pandas.util.testing as tm
from sklearn.model_selection import train_test_split      # used by the split cells below
from sklearn.feature_selection import VarianceThreshold   # used by the constant-feature filters below
from google.colab import drive
drive.mount('/content/drive')
data = pd.read_csv('/content/drive/My Drive/train.csv')
data.shape
train_features, test_features, train_labels, test_labels=train_test_split(
data.drop(labels=['IsAlert'], axis=1),
data['IsAlert'],
test_size=0.2,
random_state=41)
constant_filter = VarianceThreshold(threshold=0)
constant_filter.fit(train_features)
len(train_features.columns[constant_filter.get_support()])
constant_columns = [column for column in train_features.columns
if column not in train_features.columns[constant_filter.get_support()]]
print(len(constant_columns))
for column in constant_columns:
print(column)
train_features = constant_filter.transform(train_features)
test_features = constant_filter.transform(test_features)
train_features.shape, test_features.shape
###Output
_____no_output_____
###Markdown
Removing Quasi-Constant features
###Code
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import pandas.util.testing as tm
data = pd.read_csv('/content/drive/My Drive/train.csv')
train_features, test_features, train_labels, test_labels = train_test_split(
data.drop(labels=['IsAlert'], axis=1),
data['IsAlert'],
test_size=0.2,
random_state=41)
constant_filter = VarianceThreshold(threshold=0)
constant_filter.fit(train_features)
len(train_features.columns[constant_filter.get_support()])
constant_columns = [column for column in train_features.columns
if column not in train_features.columns[constant_filter.get_support()]]
train_features.drop(labels=constant_columns, axis=1, inplace=True)
test_features.drop(labels=constant_columns, axis=1, inplace=True)
qconstant_filter = VarianceThreshold(threshold=0.01)
qconstant_filter.fit(train_features)
len(train_features.columns[qconstant_filter.get_support()])
qconstant_columns = [column for column in train_features.columns
if column not in train_features.columns[qconstant_filter.get_support()]]
print(len(qconstant_columns))
for column in qconstant_columns:
print(column)
train_features = qconstant_filter.transform(train_features)
test_features = qconstant_filter.transform(test_features)
train_features.shape, test_features.shape
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import pandas.util.testing as tm
data = pd.read_csv('/content/drive/My Drive/train.csv')
train_features, test_features, train_labels, test_labels = train_test_split(
data.drop(labels=['IsAlert'], axis=1),
data['IsAlert'],
test_size=0.2,
random_state=41)
train_features_T = train_features.T
train_features_T.shape
print(train_features_T.duplicated().sum())
unique_features = train_features_T.drop_duplicates(keep='first').T
unique_features.shape
duplicated_features = [dup_col for dup_col in train_features.columns if dup_col not in unique_features.columns]
duplicated_features
train_features.drop(labels=duplicated_features, axis=1, inplace=True)
test_features.drop(labels=duplicated_features, axis=1, inplace=True)
###Output
_____no_output_____
###Markdown
Removing Correlated Features
###Code
#num_colums = ['int16', 'int32', 'int64', 'float16', 'float32', 'float64']
#numerical_columns = list(data.select_dtypes(include=num_colums).columns)
#data = data[numerical_columns]
data.shape
train_features, test_features, train_labels, test_labels = train_test_split(
data.drop(labels=['TrialID', 'ObsNum', 'IsAlert'], axis=1),
data['IsAlert'],
test_size=0.2,
random_state=41)
correlated_features = set()
correlation_matrix = data.corr()
for i in range(len(correlation_matrix .columns)):
for j in range(i):
if abs(correlation_matrix.iloc[i, j]) > 0.8:
colname = correlation_matrix.columns[i]
correlated_features.add(colname)
len(correlated_features)
print(correlated_features)
train_features.drop(labels=correlated_features, axis=1, inplace=True)
test_features.drop(labels=correlated_features, axis=1, inplace=True)
train_features.head(5)
train_features.drop(labels=duplicated_features, axis=1, inplace=True)
test_features.drop(labels=duplicated_features, axis=1, inplace=True)
train_features.shape
###Output
_____no_output_____
###Markdown
trying new
###Code
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score
from sklearn.metrics import confusion_matrix
from sklearn.metrics import f1_score
import pandas.util.testing as tm
from google.colab import drive
drive.mount('/content/drive')
train = pd.read_csv('/content/drive/My Drive/train.csv')
test=pd.read_csv('/content/drive/My Drive/test.csv')
train.head(5)
train.describe()
train.isnull().sum()
train.info()
num_column=train.select_dtypes(include=['int64','float64']).columns
plt.figure(figsize=(8,22))
i=1
for c in num_column:
plt.subplot(11,3,i)
sns.distplot(train[c])
i+=1
plt.tight_layout()
plt.show()
train=train.drop(['P8','V7','V9'],axis=1)
train.columns
corr_features = set()
correlation=train.corr()
correlation
plt.figure(figsize=(12,12))
sns.heatmap(correlation,linewidths=0.2,cmap="YlGnBu")
for i in range(len(correlation.columns)):
for j in range(i):
if abs(correlation.iloc[i, j]) > 0.8:
colname = correlation.columns[i]
corr_features.add(colname)
train.drop(labels=corr_features, axis=1, inplace=True)
test.drop(labels=corr_features, axis=1, inplace=True)
corr_features
train.columns
from sklearn.linear_model import LogisticRegressionCV
from sklearn.model_selection import StratifiedKFold, GridSearchCV
from sklearn.naive_bayes import GaussianNB
# Assumption: fit on the cleaned training frame `train`, with `IsAlert` as the target
X_train, y_train = train.drop(labels=['IsAlert'], axis=1), train['IsAlert']
clf_CV = LogisticRegressionCV(cv=10, random_state=0, solver='liblinear')  # Model Setting
clf_CV.fit(X_train, y_train)  # Model Fitting
skf = StratifiedKFold(n_splits=10)
params = {}
nb = GaussianNB()
gs = GridSearchCV(nb, cv=skf, param_grid=params, return_train_score=True)
###Output
_____no_output_____ |
notebooks/decision_tree_classification.ipynb | ###Markdown
Decision Trees
###Code
import graphviz
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from sklearn.datasets import load_iris
from sklearn import metrics
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
from sklearn.tree import DecisionTreeClassifier, export_graphviz
###Output
_____no_output_____
###Markdown
Scikit-Learn Decision Trees
The main decision tree classifier in Scikit-Learn is `DecisionTreeClassifier()`. There are several parameters that you can set for your decision tree model in Scikit-Learn, too. Here are a few of the more interesting ones to play around with to try and get some better results:
* **max_depth**: The maximum depth of the tree, i.e. where we will stop splitting the nodes. This is similar to controlling the maximum number of layers in a deep neural network. Lower will make your model faster but not as accurate; higher can give you accuracy but risks overfitting and may be slow.
* **min_samples_split**: The minimum number of samples required to split a node. We discussed this aspect of decision trees above and how setting it to a higher value helps mitigate overfitting.
* **max_features**: The number of features to consider when looking for the best split. Higher means potentially better results, with the tradeoff of training taking longer.
* **min_impurity_split**: Threshold for early stopping in tree growth. A node will split only if its impurity is above the threshold. This can be used to trade off combating overfitting (high value, small tree) against high accuracy (low value, big tree).
* **presort**: Whether to presort the data to speed up the finding of best splits during fitting. If we sort our data on each feature beforehand, our training algorithm will have a much easier time finding good values to split on.
https://scikit-learn.org/stable/modules/tree.html#decision-trees
Example 1
Let's work with our toy example from when we first thought about classification. We'll use our four inputs for training a decision tree (weather outlook, temperature, humidity, and wind) to predict whether we would play outside or not.
We'll have to do a bit of preprocessing to get our values to be numeric.
###Code
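# A minimal sketch of passing the hyperparameters described above; the values are
# illustrative assumptions, not tuned for any dataset (note that min_impurity_split
# and presort have been removed in recent scikit-learn releases).
sketch_tree = DecisionTreeClassifier(max_depth=4,
                                     min_samples_split=10,
                                     max_features='sqrt',
                                     random_state=0)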
# Assigning features
outlook = ['Sunny', 'Sunny', 'Overcast', 'Rain', 'Rain', 'Rain',
'Overcast', 'Sunny', 'Sunny', 'Rain', 'Sunny', 'Overcast', 'Overcast', 'Rain']
temp = ['Hot', 'Hot', 'Hot', 'Mild', 'Cool', 'Cool', 'Cool', 'Mild',
'Cool', 'Mild', 'Mild', 'Mild', 'Hot', 'Mild']
humidity = ['High', 'High', 'High', 'High', 'Normal', 'Normal', 'Normal', 'High',
'Normal', 'Normal', 'Normal', 'High', 'Normal', 'High']
wind = ['Weak', 'Strong', 'Weak', 'Weak', 'Weak', 'Strong', 'Strong', 'Weak',
'Weak', 'Weak', 'Strong', 'Strong', 'Weak', 'Strong']
# Assigning target vector
play = ["Don't Play", "Don't Play", "Play", "Play", "Play", "Don't Play",
"Play", "Don't Play", "Play", "Play", "Play", "Play", "Play", "Don't Play"]
#creating labelEncoder
le = LabelEncoder()
# Converting string labels into numbers.
outlook_encoded=le.fit_transform(outlook)
print(f"Weather: {outlook_encoded}")
# Converting string labels into numbers
temp_encoded=le.fit_transform(temp)
print(f"Temp: {temp_encoded}")
# Converting string labels into numbers
humidity_encoded=le.fit_transform(humidity)
print(f"Humidity: {humidity_encoded}")
# Converting string labels into numbers
wind_encoded=le.fit_transform(wind)
print(f"Wind: {wind_encoded}")
# Convert target strings into numbers (0 = "Don't Play", 1 = "Play")
label=le.fit_transform(play)
print(f"Play: {label}")
# Combine the four encoded features (outlook, temp, humidity, wind) into a single feature matrix
features = np.vstack((outlook_encoded, temp_encoded, humidity_encoded, wind_encoded)).T
classification_tree = DecisionTreeClassifier()
# Train our decision tree (tree induction + pruning)
tree_model = classification_tree.fit(features, label)
# Create some dictionaries linking the string value with the encoded value
# This is done using a dictionary comprehension
outlook_dictionary = {key:value for key, value in zip(outlook, outlook_encoded)}
temperature_dictionary = {key:value for key, value in zip(temp, temp_encoded)}
humidity_dictionary = {key:value for key, value in zip(humidity, humidity_encoded)}
wind_dictionary = {key:value for key, value in zip(wind, wind_encoded)}
predict_outcomes = {key:value for key, value in zip(label, play)}
# Weather Possibilities: Sunny, Overcast, Rainy
# Temp Possibilities: Hot, Mild, Cool
# Humidity Possibilities: High, Normal
# Wind Possibilities: Weak, Strong
new_outlook = outlook_dictionary['Rain']
new_temp = temperature_dictionary['Hot']
new_humidity = humidity_dictionary['High']
new_wind = wind_dictionary['Weak']
ypred = tree_model.predict([[new_outlook, new_temp, new_humidity, new_wind]])
print(f'The model predicts: {predict_outcomes[ypred[0]]}')
yprob = tree_model.predict_proba([[new_outlook, new_temp, new_humidity, new_wind]])
print(f"Predicted Probability of Don't Play: {yprob[0, 0]*100:.2f}%")
print(f"Predicted Probability of Play: {yprob[0, 1]*100:.2f}%")
###Output
_____no_output_____
###Markdown
One of the benefits of the Decision Tree is that we can visualize the tree graphically. Here we'll use the graphviz module to make a nice looking tree.
###Code
dot_data = export_graphviz(tree_model, out_file=None,
feature_names=['Outlook', 'Temp', 'Humidity', 'Wind'],
                           class_names=["Don't Play", 'Play'],  # class 0 is "Don't Play", class 1 is "Play" (LabelEncoder sorts labels alphabetically)
filled=True, rounded=True,
special_characters=True)
graph = graphviz.Source(dot_data)
graph
###Output
_____no_output_____
###Markdown
Example 2
Now let's work with our Iris dataset and only train our model on a subset of the total data so we can validate the model on the held-back data.
###Code
# Load in our dataset
iris_data = load_iris()
xtrain, xtest, ytrain, ytest = train_test_split(iris_data.data, iris_data.target)
# Initialize our decision tree object
classification_tree = DecisionTreeClassifier()
# Train our decision tree (tree induction + pruning)
classification_tree = classification_tree.fit(xtrain, ytrain)
dot_data = export_graphviz(classification_tree, out_file=None,
feature_names=iris_data.feature_names,
class_names=iris_data.target_names,
filled=True, rounded=True,
special_characters=True)
graph = graphviz.Source(dot_data)
graph
#graph.render("iris", view=True)
ypred = classification_tree.predict(xtest)
metrics.accuracy_score(ytest, ypred)
sns.heatmap(metrics.confusion_matrix(ytest, ypred), annot=True, cmap=plt.cm.BuPu)
plt.show()
###Output
_____no_output_____ |
docs/_static/demos/mappers/MapReduceExample.ipynb | ###Markdown
Map Reduce Example
This demo shows how to use map and reduce to count the total number of atoms in the PDB.
Imports
###Code
from pyspark import SparkConf, SparkContext
from mmtfPyspark.io import mmtfReader
###Output
_____no_output_____
###Markdown
Configure Spark
###Code
conf = SparkConf().setMaster("local[*]") \
.setAppName("MapReduceExample")
sc = SparkContext(conf = conf)
###Output
_____no_output_____
###Markdown
Read in MMTF files
###Code
path = "../../resources/mmtf_full_sample/"
pdb = mmtfReader.read_sequence_file(path, sc)
###Output
_____no_output_____
###Markdown
Count number of atoms
1) Map each MMTF structure to its number of atoms.
2) Count the total number of atoms using reduce.
###Code
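# map: each (pdbId, structure) tuple -> its atom count; reduce: sum the counts over all structures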
numAtoms = pdb.map(lambda t: t[1].num_atoms).reduce(lambda a,b: a+b)
print(f"Total number of atoms in PDB: {numAtoms}")
###Output
Total number of atoms in PDB: 29059439
###Markdown
Terminate Spark
###Code
sc.stop()
###Output
_____no_output_____ |
shaker BFD(not! FFD) pro on rectpack with placement heuristics.ipynb | ###Markdown
Test it on the hardest case, ~5050 for shaker FFD pro without placement tricks
###Code
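# Note: this cell assumes the packer classes and helpers used below
# (random_state_generator, ReadWrite, ShakerPackerSortedBBF, GuillotineBafMinas)
# have already been imported earlier in the project environment.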
count = 10000
test_state = random_state_generator((10, 10),count // 2,1,3,1,3)
for box in random_state_generator((10, 10),count // 2,8,10,8,10).boxes_open:
test_state.append_open_box(box)
path = f"test_instances/extremes_{count}"
ReadWrite.write_state(path=path, state=test_state)
packer = ShakerPackerSortedBBF(pack_algo=GuillotineBafMinas, rotation = False)
for box in test_state.boxes_open:
packer.add_rect(box.w, box.h)
# Add the bins where the rectangles will be placed
packer.add_bin(*test_state.bin_size, float("inf"))
packer.pack()
len(packer)
###Output
_____no_output_____ |
code/Processing.ipynb | ###Markdown
Processing the scraped IMDB data
This notebook builds upon the previous `Scraper.ipynb` and processes the returned `CSV` file.
###Code
import pandas as pd
import imdb_scraper as scrape
import project_funcs
import json
import os
###Output
_____no_output_____
###Markdown
Finding the Movies
Because we are working with a user-generated list, we cannot use the IMDB API or any third-party alternatives to obtain the film IDs from this page. Instead, we perform some data munging on the page source and return a list of IDs.
###Code
def getMovies(url):
''' scrapes imdb user created list to get the film IDs
these IDs get used in later computation.
'''
response = scrape.getHTML(url)
data = json.loads(response.find('script', type='application/ld+json').text)
data = data['about']['itemListElement']
df = pd.DataFrame(data)
movies = []
for i in df['url']:
movies.append(i[7:-1]) # slice the string to get only the ID
return movies
url = 'https://www.imdb.com/list/ls076439519/'
movies = getMovies(url)
print(movies)
###Output
_____no_output_____
###Markdown
Building the Dataset
From the list of movies generated above, we can now get the required data for each one. The following function passes the list of movie IDs to the `getURL()` function, builds a Python list of dictionaries (one per data row), and then creates a data frame from that list. This method was chosen over appending to a pre-existing data frame row by row because of the large performance gains: it is roughly 30x more efficient in terms of how the runtime grows with the input size. For contrast, the code cell below first sketches the slower row-by-row alternative.
###Code
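# For contrast only -- a hypothetical sketch (assumed, not part of the source project) of the
# slower row-by-row approach discussed above; pd.concat copies the growing frame on every pass.
def scrape_row_by_row(movies):
    ''' illustrative only: builds the data frame one row at a time '''
    slow_df = pd.DataFrame(columns=['imdbID', 'title', 'rating', 'votes', 'rated'])
    for movie_id in movies:
        item = json.loads(scrape.getURL(movie_id))
        row = pd.DataFrame([{'imdbID': item['id'], 'title': item['title'],
                             'rating': item['rating'], 'votes': item['votes'],
                             'rated': item['rated']}])
        slow_df = pd.concat([slow_df, row], ignore_index=True)  # re-allocates each iteration
    return slow_df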
def scrapeToDataFrame(movies):
''' parses list of movie IDs to getURL() function
creates a list of dictionaries then creates a
data frame from this list.
'''
li = []
    for i in range(len(movies)):  # loop over every movie ID, including the last one
dict1 = {}
item = json.loads(scrape.getURL(movies[i]))
dict1.update(imdbID = item['id'],
title = item['title'],
rating = item['rating'],
votes = item['votes'],
rated = item['rated'])
li.append(dict1)
df = pd.DataFrame(li)
return df
# using movies list created above
df = scrapeToDataFrame(movies)
###Output
_____no_output_____
###Markdown
Export to File
Now that we have compiled the data frame, it is time to write it to `.csv` in our `../data/` directory. To do this, we utilise the `change_dir()` function defined in the `project_funcs.py` file, which contains various functions used throughout the project.
###Code
project_funcs.change_dir('data') # change directory to ../data/
path = os.getcwd()
df.to_csv(r'{}/df.csv'.format(path))  # write the data frame to ../data/df.csv
###Output
_____no_output_____ |
notebooks/ruta-training/Chapter 2 - Specific tasks/Exercise 1 - Phrase Chunking.ipynb | ###Markdown
Exercise 1: Phrase Chunking
The goal of this exercise is to create a minimal chunker, a component that annotates noun, verb, and prepositional phrases. Given a reduced set of part-of-speech annotations, annotations of the types `ChunkNP`, `ChunkVP` and `ChunkPP` should be created. For simplicity, the part-of-speech tags are mocked.
###Code
%%documentText
The little yellow dog barked at the cat.
My name is Peter and I live in Freiburg.
DECLARE ChunkNP, ChunkVP, ChunkPP;
// A selection of Types for part-of-speech (POS) tags
// For an explanation of the type names, see: https://cs.nyu.edu/~grishman/jet/guide/PennPOS.html
DECLARE DT, JJ, NN, V, IN, PRPS, PRP, CC, PUNCT;
// We are mocking part of speech tags. Normally, they are created by another component.
BLOCK(mockPOS) Document{}{
"The"{-> DT} "little"{-> JJ} "yellow"{-> JJ} "dog"{-> NN} "barked"{-> V}
"at"{-> IN} "the"{-> DT} "cat"{-> NN} "."{-> PUNCT};
"My"{-> PRPS} "name"{-> NN} "is"{-> V} "Peter"{-> NN}
"and"{-> CC} "I"{-> PRP} "live"{-> V} "in"{-> IN} "Freiburg"{-> NN} "."{-> PUNCT};
}
// Noun phrases
((DT | PRPS)? JJ* @NN){-> ChunkNP};
// actually, disjunctive ("|") and conjunctive ("&") rule elements should be avoided.
// the rule could also look like:
//(ANY?{PARTOF({DT,PRPS})} JJ* @NN){-> ChunkNP};
PRP{-> ChunkNP};
// Verb phrases
V{-> ChunkVP};
// Prepositional phrases
(IN ChunkNP){-> ChunkPP};
COLOR(ChunkNP, "lightgreen");
COLOR(ChunkVP, "pink");
COLOR(ChunkPP, "lightblue");
###Output
_____no_output_____ |