markdown (stringlengths 0–37k) | code (stringlengths 1–33.3k) | path (stringlengths 8–215) | repo_name (stringlengths 6–77) | license (stringclasses, 15 values) |
---|---|---|---|---|
We double-checked it by printing info for Thurmond, who was part of the Senate but appeared to have served 26 terms of 6 years each (26*6 years, which is impossible!)
Thurmond = df[df['lastname'] == 'Thurmond']
Thurmond
11) Who has served the most years?
Senators = 6-year terms, BUT the data we have records 2-year terms
Representatives = 2-year terms | terms_served_by_senators = senator.groupby('complete_name')['bioguide'].value_counts()
years= terms_served_by_senators * 2
total_years_served = years.sort_values(ascending=False)
pd.DataFrame(total_years_served)
terms_served_by_representative= representative.groupby("complete_name")['bioguide'].value_counts()
years= terms_served_by_representative * 2
total_years_served = years.sort_values(ascending=False)
pd.DataFrame(total_years_served)
| foundations_hw/08/Homework8_benzaquen_congress_data.ipynb | mercybenzaquen/foundations-homework | mit |
12) The most popular name in Congress is.... | df['firstname'].value_counts()
#this might count the same person many times, but it still gives us an idea of which names are more popular | foundations_hw/08/Homework8_benzaquen_congress_data.ipynb | mercybenzaquen/foundations-homework | mit |
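To count each person only once, one option (a minimal sketch, assuming the bioguide column uniquely identifies a member of Congress, as in the term counts above) is to drop duplicate members before counting first names:

```python
# Count each member of Congress once: drop duplicate bioguide IDs, then count first names
# (assumes 'bioguide' uniquely identifies a person, as in the term counts above)
unique_members = df.drop_duplicates(subset='bioguide')
unique_members['firstname'].value_counts().head(10)
```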
Make three charts with your dataset
1) Distribution of age | plt.style.use("ggplot")
df['age'].hist(bins=15, xlabelsize=12, ylabelsize=12, color=['y'])
df.head(20).sort_values(by='age', ascending=True).plot(kind='barh', x='complete_name', y='age', color="y")
df.plot.scatter(x='congress', y='age');
df.plot.hexbin(x='age', y='congress', gridsize=25, legend=True) | foundations_hw/08/Homework8_benzaquen_congress_data.ipynb | mercybenzaquen/foundations-homework | mit |
As always, let's do imports and initialize a logger and a new Bundle. | import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary() | development/tutorials/datasets_advanced.ipynb | phoebe-project/phoebe2-docs | gpl-3.0 |
Passband Options
Passband options follow the exact same rules as dataset columns.
Sending a single value to the argument will apply it to each component to which the time array is attached (either based on the list of components sent or the defaults from the dataset method).
Note that for light curves, in particular, this rule gets slightly bent: the dataset arrays for light curves are always attached at the system level, whereas the passband-dependent options exist for each star in the system. So that value will get passed to each star if the component is not explicitly provided. | b.add_dataset('lc',
times=[0,1],
dataset='lc01',
overwrite=True)
print(b.get_parameter(qualifier='times', dataset='lc01'))
print(b.filter(qualifier='ld_mode', dataset='lc01')) | development/tutorials/datasets_advanced.ipynb | phoebe-project/phoebe2-docs | gpl-3.0 |
As you might expect, if you want to pass different values to different components, simply provide them in a dictionary. | b.add_dataset('lc',
times=[0,1],
ld_mode='manual',
ld_func={'primary': 'logarithmic', 'secondary': 'quadratic'},
dataset='lc01',
overwrite=True)
print(b.filter(qualifier='ld_func', dataset='lc01')) | development/tutorials/datasets_advanced.ipynb | phoebe-project/phoebe2-docs | gpl-3.0 |
Note here that we didn't explicitly override the defaults for '_default', so they used the phoebe-wide defaults. If you wanted to set a value for the ld_coeffs of any star added in the future, you would have to provide a value for '_default' in the dictionary as well. | print(b.filter(qualifier='ld_func', dataset='lc01', check_default=False)) | development/tutorials/datasets_advanced.ipynb | phoebe-project/phoebe2-docs | gpl-3.0 |
This syntax may seem a bit bulky - but alternatively you can add the dataset without providing values and then change the values individually using dictionary access or set_value.
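For example (a minimal sketch, assuming the bundle b and the 'lc01' dataset from above; the exact twigs in your model may differ), the same per-component values could be set after adding the dataset:

```python
# Sketch: add the dataset without per-component values, then set them individually.
# Assumes the bundle `b` and the 'lc01' dataset from above.
b.add_dataset('lc', times=[0, 1], dataset='lc01', overwrite=True)

# dictionary-style (twig) access
b['ld_mode@primary@lc01'] = 'manual'
b['ld_mode@secondary@lc01'] = 'manual'

# or the equivalent set_value calls
b.set_value(qualifier='ld_func', component='primary', dataset='lc01', value='logarithmic')
b.set_value(qualifier='ld_func', component='secondary', dataset='lc01', value='quadratic')
```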
Adding a Dataset from a File
Manually from Arrays
For now, the only way to load data from a file is to do the parsing externally and pass the arrays on (as in the previous section).
Here we'll load times, fluxes, and errors of a light curve from an external file and then pass them on to a newly created dataset. Since this is a light curve, it will automatically know that you want the summed light from all components in the hierarchy. | times, fluxes, sigmas = np.loadtxt('test.lc.in', unpack=True)
b.add_dataset('lc',
times=times,
fluxes=fluxes,
sigmas=sigmas,
dataset='lc01',
overwrite=True) | development/tutorials/datasets_advanced.ipynb | phoebe-project/phoebe2-docs | gpl-3.0 |
Enabling and Disabling Datasets
See the Compute Tutorial
Dealing with Phases
Datasets will no longer accept phases. It is the user's responsibility to convert
phased data into times given an ephemeris. But it's still useful to be able to
convert times to phases (and vice versa) and be able to plot in phase.
Those conversions can be handled via b.get_ephemeris, b.to_phase, and b.to_time. | print(b.get_ephemeris())
print(b.to_phase(0.0))
print(b.to_time(-0.25)) | development/tutorials/datasets_advanced.ipynb | phoebe-project/phoebe2-docs | gpl-3.0 |
All of these by default use the period in the top-level of the current hierarchy,
but accept a component keyword argument if you'd like the ephemeris of an
inner-orbit or the rotational ephemeris of a star in the system.
We'll see how plotting works later, but if you manually wanted to plot the dataset
with phases, all you'd need to do is: | print(b.to_phase(b.get_value(qualifier='times'))) | development/tutorials/datasets_advanced.ipynb | phoebe-project/phoebe2-docs | gpl-3.0 |
or | print(b.to_phase('times@lc01')) | development/tutorials/datasets_advanced.ipynb | phoebe-project/phoebe2-docs | gpl-3.0 |
Although it isn't possible to attach data in phase-space, it is possible to tell PHOEBE at which phases to compute the model by setting compute_phases. Note that this overrides the value of times when the model is computed. | b.add_dataset('lc',
compute_phases=np.linspace(0,1,11),
dataset='lc01',
overwrite=True) | development/tutorials/datasets_advanced.ipynb | phoebe-project/phoebe2-docs | gpl-3.0 |
The usage of compute_phases (as well as compute_times) will be discussed in further detail in the compute tutorial and the advanced: compute times & phases tutorial.
Note also that although you can pass compute_phases directly to add_dataset, if you do not, it will be constrained by compute_times by default. In this case, you would need to flip the constraint before setting compute_phases. See the constraints tutorial and the flip_constraint API docs for more details on flipping constraints. | b.add_dataset('lc',
times=[0],
dataset='lc01',
overwrite=True)
print(b['compute_phases@lc01'])
b.flip_constraint('compute_phases', dataset='lc01', solve_for='compute_times')
b.set_value('compute_phases', dataset='lc01', value=np.linspace(0,1,101)) | development/tutorials/datasets_advanced.ipynb | phoebe-project/phoebe2-docs | gpl-3.0 |
Removing Datasets
Removing a dataset will remove matching parameters in either the dataset, model, or constraint contexts. This action is permanent and not undo-able via Undo/Redo. | print(b.datasets) | development/tutorials/datasets_advanced.ipynb | phoebe-project/phoebe2-docs | gpl-3.0 |
The simplest way to remove a dataset is by its dataset tag: | b.remove_dataset('lc01')
print(b.datasets) | development/tutorials/datasets_advanced.ipynb | phoebe-project/phoebe2-docs | gpl-3.0 |
But remove_dataset also takes any other tag(s) that could be sent to filter. | b.remove_dataset(kind='rv')
print(b.datasets) | development/tutorials/datasets_advanced.ipynb | phoebe-project/phoebe2-docs | gpl-3.0 |
TimeDistributed
[wrappers.TimeDistributed.0] wrap a Dense layer with units 4 (input: 3 x 6) | data_in_shape = (3, 6)
layer_0 = Input(shape=data_in_shape)
layer_1 = TimeDistributed(Dense(4))(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
weights = []
for i, w in enumerate(model.get_weights()):
np.random.seed(4000 + i)
weights.append(2 * np.random.random(w.shape) - 1)
model.set_weights(weights)
weight_names = ['W', 'b']
for w_i, w_name in enumerate(weight_names):
print('{} shape:'.format(w_name), weights[w_i].shape)
print('{}:'.format(w_name), format_decimal(weights[w_i].ravel().tolist()))
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['wrappers.TimeDistributed.0'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights],
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
} | notebooks/layers/wrappers/TimeDistributed.ipynb | qinwf-nuan/keras-js | mit |
[wrappers.TimeDistributed.1] wrap a Conv2D layer with 6 3x3 filters (input: 5x4x4x2) | data_in_shape = (5, 4, 4, 2)
layer_0 = Input(shape=data_in_shape)
layer_1 = TimeDistributed(Conv2D(6, (3,3), data_format='channels_last'))(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
weights = []
for i, w in enumerate(model.get_weights()):
np.random.seed(4010 + i)
weights.append(2 * np.random.random(w.shape) - 1)
model.set_weights(weights)
weight_names = ['W', 'b']
for w_i, w_name in enumerate(weight_names):
print('{} shape:'.format(w_name), weights[w_i].shape)
print('{}:'.format(w_name), format_decimal(weights[w_i].ravel().tolist()))
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['wrappers.TimeDistributed.1'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights],
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
} | notebooks/layers/wrappers/TimeDistributed.ipynb | qinwf-nuan/keras-js | mit |
export for Keras.js tests | print(json.dumps(DATA)) | notebooks/layers/wrappers/TimeDistributed.ipynb | qinwf-nuan/keras-js | mit |
Load and process review dataset
For this assignment, we will use the same subset of the Amazon product review dataset that we used in Module 3 assignment. The subset was chosen to contain similar numbers of positive and negative reviews, as the original dataset consisted of mostly positive reviews. | products = graphlab.SFrame('amazon_baby_subset.gl/') | machine_learning/3_classification/assigment/week7/module-10-online-learning-assignment-graphlab.ipynb | tuanavu/coursera-university-of-washington | mit |
Just like we did previously, we will work with a hand-curated list of important words extracted from the review data. We will also perform 2 simple data transformations:
Remove punctuation using Python's built-in string manipulation functionality.
Compute word counts (only for the important_words)
Refer to Module 3 assignment for more details. | import json
with open('important_words.json', 'r') as f:
important_words = json.load(f)
important_words = [str(s) for s in important_words]
# Remove punctuation
def remove_punctuation(text):
import string
return text.translate(None, string.punctuation)
products['review_clean'] = products['review'].apply(remove_punctuation)
# Split out the words into individual columns
for word in important_words:
products[word] = products['review_clean'].apply(lambda s : s.split().count(word)) | machine_learning/3_classification/assigment/week7/module-10-online-learning-assignment-graphlab.ipynb | tuanavu/coursera-university-of-washington | mit |
The SFrame products now contains one column for each of the 193 important_words. | products | machine_learning/3_classification/assigment/week7/module-10-online-learning-assignment-graphlab.ipynb | tuanavu/coursera-university-of-washington | mit |
Split data into training and validation sets
We will now split the data into a 90-10 split where 90% is in the training set and 10% is in the validation set. We use seed=1 so that everyone gets the same result. | train_data, validation_data = products.random_split(.9, seed=1)
print 'Training set : %d data points' % len(train_data)
print 'Validation set: %d data points' % len(validation_data) | machine_learning/3_classification/assigment/week7/module-10-online-learning-assignment-graphlab.ipynb | tuanavu/coursera-university-of-washington | mit |
Convert SFrame to NumPy array
Just like in the earlier assignments, we provide you with a function that extracts columns from an SFrame and converts them into a NumPy array. Two arrays are returned: one representing features and another representing class labels.
Note: The feature matrix includes an additional column 'intercept' filled with 1's to take account of the intercept term. | import numpy as np
def get_numpy_data(data_sframe, features, label):
data_sframe['intercept'] = 1
features = ['intercept'] + features
features_sframe = data_sframe[features]
feature_matrix = features_sframe.to_numpy()
label_sarray = data_sframe[label]
label_array = label_sarray.to_numpy()
return(feature_matrix, label_array) | machine_learning/3_classification/assigment/week7/module-10-online-learning-assignment-graphlab.ipynb | tuanavu/coursera-university-of-washington | mit |
Note that we convert both the training and validation sets into NumPy arrays.
Warning: This may take a few minutes. | feature_matrix_train, sentiment_train = get_numpy_data(train_data, important_words, 'sentiment')
feature_matrix_valid, sentiment_valid = get_numpy_data(validation_data, important_words, 'sentiment') | machine_learning/3_classification/assigment/week7/module-10-online-learning-assignment-graphlab.ipynb | tuanavu/coursera-university-of-washington | mit |
Are you running this notebook on an Amazon EC2 t2.micro instance? (If you are using your own machine, please skip this section)
It has been reported that t2.micro instances do not provide sufficient power to complete the conversion in an acceptable amount of time. In the interest of time, please refrain from running the get_numpy_data function. Instead, download the binary file containing the four NumPy arrays you'll need for the assignment. To load the arrays, run the following commands:
arrays = np.load('module-10-assignment-numpy-arrays.npz')
feature_matrix_train, sentiment_train = arrays['feature_matrix_train'], arrays['sentiment_train']
feature_matrix_valid, sentiment_valid = arrays['feature_matrix_valid'], arrays['sentiment_valid']
Quiz question: In Module 3 assignment, there were 194 features (an intercept + one feature for each of the 193 important words). In this assignment, we will use stochastic gradient ascent to train the classifier using logistic regression. How does changing the solver to stochastic gradient ascent affect the number of features?
Building on logistic regression
Let us now build on Module 3 assignment. Recall from lecture that the link function for logistic regression can be defined as:
$$
P(y_i = +1 | \mathbf{x}_i,\mathbf{w}) = \frac{1}{1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))},
$$
where the feature vector $h(\mathbf{x}_i)$ is given by the word counts of important_words in the review $\mathbf{x}_i$.
We will use the same code as in Module 3 assignment to make probability predictions, since this part is not affected by using stochastic gradient ascent as a solver. Only the way in which the coefficients are learned is affected by using stochastic gradient ascent as a solver. | '''
produces probabilistic estimate for P(y_i = +1 | x_i, w).
estimate ranges between 0 and 1.
'''
def predict_probability(feature_matrix, coefficients):
# Take dot product of feature_matrix and coefficients
score = np.dot(feature_matrix, coefficients)
# Compute P(y_i = +1 | x_i, w) using the link function
predictions = 1. / (1.+np.exp(-score))
return predictions | machine_learning/3_classification/assigment/week7/module-10-online-learning-assignment-graphlab.ipynb | tuanavu/coursera-university-of-washington | mit |
Derivative of log likelihood with respect to a single coefficient
Let us now work on making minor changes to how the derivative computation is performed for logistic regression.
Recall from the lectures and Module 3 assignment that for logistic regression, the derivative of log likelihood with respect to a single coefficient is as follows:
$$
\frac{\partial\ell}{\partial w_j} = \sum_{i=1}^N h_j(\mathbf{x}_i)\left(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w})\right)
$$
In Module 3 assignment, we wrote a function to compute the derivative of log likelihood with respect to a single coefficient $w_j$. The function accepts the following two parameters:
* errors vector containing $(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w}))$ for all $i$
* feature vector containing $h_j(\mathbf{x}_i)$ for all $i$
Complete the following code block: | def feature_derivative(errors, feature):
# Compute the dot product of errors and feature
## YOUR CODE HERE
derivative = np.dot(errors, feature)
return derivative | machine_learning/3_classification/assigment/week7/module-10-online-learning-assignment-graphlab.ipynb | tuanavu/coursera-university-of-washington | mit |
Note. We are not using regularization in this assignment, but, as discussed in the optional video, stochastic gradient can also be used for regularized logistic regression.
To verify the correctness of the gradient computation, we provide a function for computing average log likelihood (which we recall from the last assignment was a topic detailed in an advanced optional video, and used here for its numerical stability).
To track the performance of stochastic gradient ascent, we provide a function for computing average log likelihood.
$$\ell\ell_A(\mathbf{w}) = \color{red}{\frac{1}{N}} \sum_{i=1}^N \Big( (\mathbf{1}[y_i = +1] - 1)\mathbf{w}^T h(\mathbf{x}_i) - \ln\left(1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))\right) \Big) $$
Note that we made one tiny modification to the log likelihood function (called compute_log_likelihood) in our earlier assignments. We added a $\color{red}{1/N}$ term which averages the log likelihood across all data points. The $\color{red}{1/N}$ term makes it easier for us to compare stochastic gradient ascent with batch gradient ascent. We will use this function to generate plots that are similar to those you saw in the lecture. | def compute_avg_log_likelihood(feature_matrix, sentiment, coefficients):
indicator = (sentiment==+1)
scores = np.dot(feature_matrix, coefficients)
logexp = np.log(1. + np.exp(-scores))
# Simple check to prevent overflow
mask = np.isinf(logexp)
logexp[mask] = -scores[mask]
lp = np.sum((indicator-1)*scores - logexp)/len(feature_matrix)
return lp | machine_learning/3_classification/assigment/week7/module-10-online-learning-assignment-graphlab.ipynb | tuanavu/coursera-university-of-washington | mit |
Quiz Question: Recall from the lecture and the earlier assignment, the log likelihood (without the averaging term) is given by
$$\ell\ell(\mathbf{w}) = \sum_{i=1}^N \Big( (\mathbf{1}[y_i = +1] - 1)\mathbf{w}^T h(\mathbf{x}_i) - \ln\left(1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))\right) \Big) $$
How are the functions $\ell\ell(\mathbf{w})$ and $\ell\ell_A(\mathbf{w})$ related?
Modifying the derivative for stochastic gradient ascent
Recall from the lecture that the gradient for a single data point $\color{red}{\mathbf{x}_i}$ can be computed using the following formula:
$$
\frac{\partial\ell_{\color{red}{i}}(\mathbf{w})}{\partial w_j} = h_j(\color{red}{\mathbf{x}_i})\left(\mathbf{1}[y_\color{red}{i} = +1] - P(y_\color{red}{i} = +1 | \color{red}{\mathbf{x}_i}, \mathbf{w})\right)
$$
Computing the gradient for a single data point
Do we really need to re-write all our code to modify $\partial\ell(\mathbf{w})/\partial w_j$ to $\partial\ell_{\color{red}{i}}(\mathbf{w})/{\partial w_j}$?
Thankfully, no! Using NumPy, we access $\mathbf{x}_i$ in the training data using feature_matrix_train[i:i+1,:]
and $y_i$ in the training data using sentiment_train[i:i+1]. We can compute $\partial\ell_{\color{red}{i}}(\mathbf{w})/\partial w_j$ by re-using all the code written in feature_derivative and predict_probability.
We compute $\partial\ell_{\color{red}{i}}(\mathbf{w})/\partial w_j$ using the following steps:
* First, compute $P(y_i = +1 | \mathbf{x}_i, \mathbf{w})$ using the predict_probability function with feature_matrix_train[i:i+1,:] as the first parameter.
* Next, compute $\mathbf{1}[y_i = +1]$ using sentiment_train[i:i+1].
* Finally, call the feature_derivative function with feature_matrix_train[i:i+1, j] as one of the parameters.
Let us follow these steps for j = 1 and i = 10: | j = 1 # Feature number
i = 10 # Data point number
coefficients = np.zeros(194) # A point w at which we are computing the gradient.
predictions = predict_probability(feature_matrix_train[i:i+1,:], coefficients)
indicator = (sentiment_train[i:i+1]==+1)
errors = indicator - predictions
gradient_single_data_point = feature_derivative(errors, feature_matrix_train[i:i+1,j])
print "Gradient single data point: %s" % gradient_single_data_point
print " --> Should print 0.0" | machine_learning/3_classification/assigment/week7/module-10-online-learning-assignment-graphlab.ipynb | tuanavu/coursera-university-of-washington | mit |
Quiz Question: The code block above computed $\partial\ell_{\color{red}{i}}(\mathbf{w})/{\partial w_j}$ for j = 1 and i = 10. Is $\partial\ell_{\color{red}{i}}(\mathbf{w})/{\partial w_j}$ a scalar or a 194-dimensional vector?
Modifying the derivative for using a batch of data points
Stochastic gradient estimates the ascent direction using 1 data point, while gradient uses $N$ data points to decide how to update the parameters. In an optional video, we discussed the details of a simple change that allows us to use a mini-batch of $B \leq N$ data points to estimate the ascent direction. This simple approach is faster than regular gradient but less noisy than stochastic gradient that uses only 1 data point. Although we encourage you to watch the optional video on the topic to better understand why mini-batches help stochastic gradient, in this assignment, we will simply use this technique, since the approach is very simple and will improve your results.
Given a mini-batch (or a set of data points) $\mathbf{x}_{i}, \mathbf{x}_{i+1}, \ldots, \mathbf{x}_{i+B}$, the gradient function for this mini-batch of data points is given by:
$$
\color{red}{\sum_{s = i}^{i+B}} \frac{\partial\ell_{s}}{\partial w_j} = \color{red}{\sum_{s = i}^{i + B}} h_j(\mathbf{x}_s)\left(\mathbf{1}[y_s = +1] - P(y_s = +1 | \mathbf{x}_s, \mathbf{w})\right)
$$
Computing the gradient for a "mini-batch" of data points
Using NumPy, we access the points $\mathbf{x}_i, \mathbf{x}_{i+1}, \ldots, \mathbf{x}_{i+B}$ in the training data using feature_matrix_train[i:i+B,:]
and $y_i$ in the training data using sentiment_train[i:i+B].
We can compute $\color{red}{\sum_{s = i}^{i+B}} \partial\ell_{s}/\partial w_j$ easily as follows: | j = 1 # Feature number
i = 10 # Data point start
B = 10 # Mini-batch size
coefficients = np.zeros(194) # A point w at which we are computing the gradient.
predictions = predict_probability(feature_matrix_train[i:i+B,:], coefficients)
indicator = (sentiment_train[i:i+B]==+1)
errors = indicator - predictions
gradient_mini_batch = feature_derivative(errors, feature_matrix_train[i:i+B,j])
print "Gradient mini-batch data points: %s" % gradient_mini_batch
print " --> Should print 1.0" | machine_learning/3_classification/assigment/week7/module-10-online-learning-assignment-graphlab.ipynb | tuanavu/coursera-university-of-washington | mit |
Quiz Question: The code block above computed
$\color{red}{\sum_{s = i}^{i+B}}\partial\ell_{s}(\mathbf{w})/{\partial w_j}$
for j = 10, i = 10, and B = 10. Is this a scalar or a 194-dimensional vector?
Quiz Question: For what value of B is the term
$\color{red}{\sum_{s = 1}^{B}}\partial\ell_{s}(\mathbf{w})/\partial w_j$
the same as the full gradient
$\partial\ell(\mathbf{w})/{\partial w_j}$? | print len(sentiment_train) | machine_learning/3_classification/assigment/week7/module-10-online-learning-assignment-graphlab.ipynb | tuanavu/coursera-university-of-washington | mit |
Averaging the gradient across a batch
It is a common practice to normalize the gradient update rule by the batch size B:
$$
\frac{\partial\ell_{\color{red}{A}}(\mathbf{w})}{\partial w_j} \approx \color{red}{\frac{1}{B}} {\sum_{s = i}^{i + B}} h_j(\mathbf{x}_s)\left(\mathbf{1}[y_s = +1] - P(y_s = +1 | \mathbf{x}_s, \mathbf{w})\right)
$$
In other words, we update the coefficients using the average gradient over data points (instead of using a summation). By using the average gradient, we ensure that the magnitude of the gradient is approximately the same for all batch sizes. This way, we can more easily compare various batch sizes of stochastic gradient ascent (including a batch size of all the data points), and study the effect of batch size on the algorithm as well as the choice of step size.
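In code, the only change relative to the summed update is the 1/B factor. A minimal sketch of the averaged update for a single coefficient j over the batch [i:i+B], reusing the functions defined above (and assuming i, B, j, step_size, and coefficients are already defined), looks like:

```python
# Sketch: normalized (averaged) gradient update for one coefficient j over the batch [i:i+B]
predictions = predict_probability(feature_matrix_train[i:i+B, :], coefficients)
errors = (sentiment_train[i:i+B] == +1) - predictions
derivative = feature_derivative(errors, feature_matrix_train[i:i+B, j])
coefficients[j] += step_size * derivative / B   # divide by the batch size B
```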
Implementing stochastic gradient ascent
Now we are ready to implement our own logistic regression with stochastic gradient ascent. Complete the following function to fit a logistic regression model using gradient ascent: | from math import sqrt
def logistic_regression_SG(feature_matrix, sentiment, initial_coefficients, step_size, batch_size, max_iter):
log_likelihood_all = []
# make sure it's a numpy array
coefficients = np.array(initial_coefficients)
# set seed=1 to produce consistent results
np.random.seed(seed=1)
# Shuffle the data before starting
permutation = np.random.permutation(len(feature_matrix))
feature_matrix = feature_matrix[permutation,:]
sentiment = sentiment[permutation]
i = 0 # index of current batch
# Do a linear scan over data
for itr in xrange(max_iter):
# Predict P(y_i = +1|x_i,w) using your predict_probability() function
# Make sure to slice the i-th row of feature_matrix with [i:i+batch_size,:]
### YOUR CODE HERE
predictions = predict_probability(feature_matrix[i:i+batch_size,:], coefficients)
# Compute indicator value for (y_i = +1)
# Make sure to slice the i-th entry with [i:i+batch_size]
### YOUR CODE HERE
indicator = (sentiment[i:i+batch_size]==+1)
# Compute the errors as indicator - predictions
errors = indicator - predictions
for j in xrange(len(coefficients)): # loop over each coefficient
# Recall that feature_matrix[:,j] is the feature column associated with coefficients[j]
# Compute the derivative for coefficients[j] and save it to derivative.
# Make sure to slice the i-th row of feature_matrix with [i:i+batch_size,j]
### YOUR CODE HERE
derivative = feature_derivative(errors, feature_matrix[i:i+batch_size,j])
# compute the product of the step size, the derivative, and the **normalization constant** (1./batch_size)
### YOUR CODE HERE
coefficients[j] += (1./batch_size)*(step_size * derivative)
# Checking whether log likelihood is increasing
# Print the log likelihood over the *current batch*
lp = compute_avg_log_likelihood(feature_matrix[i:i+batch_size,:], sentiment[i:i+batch_size],
coefficients)
log_likelihood_all.append(lp)
if itr <= 15 or (itr <= 1000 and itr % 100 == 0) or (itr <= 10000 and itr % 1000 == 0) \
or itr % 10000 == 0 or itr == max_iter-1:
data_size = len(feature_matrix)
print 'Iteration %*d: Average log likelihood (of data points in batch [%0*d:%0*d]) = %.8f' % \
(int(np.ceil(np.log10(max_iter))), itr, \
int(np.ceil(np.log10(data_size))), i, \
int(np.ceil(np.log10(data_size))), i+batch_size, lp)
# if we made a complete pass over data, shuffle and restart
i += batch_size
if i+batch_size > len(feature_matrix):
permutation = np.random.permutation(len(feature_matrix))
feature_matrix = feature_matrix[permutation,:]
sentiment = sentiment[permutation]
i = 0
# We return the list of log likelihoods for plotting purposes.
return coefficients, log_likelihood_all | machine_learning/3_classification/assigment/week7/module-10-online-learning-assignment-graphlab.ipynb | tuanavu/coursera-university-of-washington | mit |
Note. In practice, the final set of coefficients is rarely used; it is better to use the average of the last K sets of coefficients instead, where K should be adjusted depending on how fast the log likelihood oscillates around the optimum.
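A minimal sketch of that idea (assuming the training loop above were modified to append a copy of coefficients to a list called coefficients_history at every iteration, which the code above does not do):

```python
# Sketch: average the last K coefficient vectors instead of using only the final one.
# Assumes `coefficients_history` was collected inside the training loop (not done above).
K = 100
coefficients_avg = np.mean(np.array(coefficients_history[-K:]), axis=0)
```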
Checkpoint
The following cell tests your stochastic gradient ascent function using a toy dataset consisting of two data points. If the test does not pass, make sure you are normalizing the gradient update rule correctly. | sample_feature_matrix = np.array([[1.,2.,-1.], [1.,0.,1.]])
sample_sentiment = np.array([+1, -1])
coefficients, log_likelihood = logistic_regression_SG(sample_feature_matrix, sample_sentiment, np.zeros(3),
step_size=1., batch_size=2, max_iter=2)
print '-------------------------------------------------------------------------------------'
print 'Coefficients learned :', coefficients
print 'Average log likelihood per-iteration :', log_likelihood
if np.allclose(coefficients, np.array([-0.09755757, 0.68242552, -0.7799831]), atol=1e-3)\
and np.allclose(log_likelihood, np.array([-0.33774513108142956, -0.2345530939410341])):
# pass if elements match within 1e-3
print '-------------------------------------------------------------------------------------'
print 'Test passed!'
else:
print '-------------------------------------------------------------------------------------'
print 'Test failed' | machine_learning/3_classification/assigment/week7/module-10-online-learning-assignment-graphlab.ipynb | tuanavu/coursera-university-of-washington | mit |
Compare convergence behavior of stochastic gradient ascent
For the remainder of the assignment, we will compare stochastic gradient ascent against batch gradient ascent. For this, we need a reference implementation of batch gradient ascent. But do we need to implement this from scratch?
Quiz Question: For what value of batch size B above does the stochastic gradient ascent function logistic_regression_SG act as a standard gradient ascent algorithm?
Running gradient ascent using the stochastic gradient ascent implementation
Instead of implementing batch gradient ascent separately, we save time by re-using the stochastic gradient ascent function we just wrote — to perform gradient ascent, it suffices to set batch_size to the number of data points in the training data. Yes, we did answer above the quiz question for you, but that is an important point to remember in the future :)
Small Caveat. The batch gradient ascent implementation here is slightly different than the one in the earlier assignments, as we now normalize the gradient update rule.
We now run stochastic gradient ascent over the feature_matrix_train for 10 iterations using:
* initial_coefficients = np.zeros(194)
* step_size = 5e-1
* batch_size = 1
* max_iter = 10 | coefficients, log_likelihood = logistic_regression_SG(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-1, batch_size=1, max_iter=10) | machine_learning/3_classification/assigment/week7/module-10-online-learning-assignment-graphlab.ipynb | tuanavu/coursera-university-of-washington | mit |
Quiz Question. When you set batch_size = 1, as each iteration passes, how does the average log likelihood in the batch change?
* Increases
* Decreases
* Fluctuates
Now run batch gradient ascent over the feature_matrix_train for 200 iterations using:
* initial_coefficients = np.zeros(194)
* step_size = 5e-1
* batch_size = len(feature_matrix_train)
* max_iter = 200 | # YOUR CODE HERE
coefficients_batch, log_likelihood_batch = logistic_regression_SG(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-1,
batch_size = len(feature_matrix_train),
max_iter=200) | machine_learning/3_classification/assigment/week7/module-10-online-learning-assignment-graphlab.ipynb | tuanavu/coursera-university-of-washington | mit |
Quiz Question. When you set batch_size = len(train_data), as each iteration passes, how does the average log likelihood in the batch change?
* Increases
* Decreases
* Fluctuates
Make "passes" over the dataset
To make a fair comparison between stochastic gradient ascent and batch gradient ascent, we measure the average log likelihood as a function of the number of passes (defined as follows):
$$
[\text{# of passes}] = \frac{[\text{# of data points touched so far}]}{[\text{size of dataset}]}
$$
Quiz Question: Suppose that we run stochastic gradient ascent with a batch size of 100. How many gradient updates are performed at the end of two passes over a dataset consisting of 50000 data points? | # Each pass touches every data point once, and each batch of 100 points yields one gradient update,
# so two passes over 50000 points give:
2*(50000/100) | machine_learning/3_classification/assigment/week7/module-10-online-learning-assignment-graphlab.ipynb | tuanavu/coursera-university-of-washington | mit |
Log likelihood plots for stochastic gradient ascent
With the terminology in mind, let us run stochastic gradient ascent for 10 passes. We will use
* step_size=1e-1
* batch_size=100
* initial_coefficients to all zeros. | step_size = 1e-1
batch_size = 100
num_passes = 10
num_iterations = num_passes * int(len(feature_matrix_train)/batch_size)
coefficients_sgd, log_likelihood_sgd = logistic_regression_SG(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=1e-1, batch_size=100, max_iter=num_iterations) | machine_learning/3_classification/assigment/week7/module-10-online-learning-assignment-graphlab.ipynb | tuanavu/coursera-university-of-washington | mit |
We provide you with a utility function to plot the average log likelihood as a function of the number of passes. | import matplotlib.pyplot as plt
%matplotlib inline
def make_plot(log_likelihood_all, len_data, batch_size, smoothing_window=1, label=''):
plt.rcParams.update({'figure.figsize': (9,5)})
log_likelihood_all_ma = np.convolve(np.array(log_likelihood_all), \
np.ones((smoothing_window,))/smoothing_window, mode='valid')
plt.plot(np.array(range(smoothing_window-1, len(log_likelihood_all)))*float(batch_size)/len_data,
log_likelihood_all_ma, linewidth=4.0, label=label)
plt.rcParams.update({'font.size': 16})
plt.tight_layout()
plt.xlabel('# of passes over data')
plt.ylabel('Average log likelihood per data point')
plt.legend(loc='lower right', prop={'size':14})
make_plot(log_likelihood_sgd, len_data=len(feature_matrix_train), batch_size=100,
label='stochastic gradient, step_size=1e-1') | machine_learning/3_classification/assigment/week7/module-10-online-learning-assignment-graphlab.ipynb | tuanavu/coursera-university-of-washington | mit |
Smoothing the stochastic gradient ascent curve
The plotted line oscillates so much that it is hard to see whether the log likelihood is improving. In our plot, we apply a simple smoothing operation using the parameter smoothing_window. The smoothing is simply a moving average of log likelihood over the last smoothing_window "iterations" of stochastic gradient ascent. | make_plot(log_likelihood_sgd, len_data=len(feature_matrix_train), batch_size=100,
smoothing_window=30, label='stochastic gradient, step_size=1e-1') | machine_learning/3_classification/assigment/week7/module-10-online-learning-assignment-graphlab.ipynb | tuanavu/coursera-university-of-washington | mit |
Checkpoint: The above plot should look smoother than the previous plot. Play around with smoothing_window. As you increase it, you should see a smoother plot.
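For example, increasing the window further (here to 100 iterations, reusing the same 10-pass run and the make_plot function from above) smooths the curve even more:

```python
# Larger moving-average window -> smoother curve (same 10-pass run as above)
make_plot(log_likelihood_sgd, len_data=len(feature_matrix_train), batch_size=100,
          smoothing_window=100, label='stochastic gradient, step_size=1e-1')
```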
Stochastic gradient ascent vs batch gradient ascent
To compare convergence rates for stochastic gradient ascent with batch gradient ascent, we call make_plot() multiple times in the same cell.
We are comparing:
* stochastic gradient ascent: step_size = 0.1, batch_size=100
* batch gradient ascent: step_size = 0.5, batch_size=len(feature_matrix_train)
Write code to run stochastic gradient ascent for 200 passes using:
* step_size=1e-1
* batch_size=100
* initial_coefficients to all zeros. | step_size = 1e-1
batch_size = 100
num_passes = 200
num_iterations = num_passes * int(len(feature_matrix_train)/batch_size)
## YOUR CODE HERE
coefficients_sgd, log_likelihood_sgd = logistic_regression_SG(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=step_size, batch_size=batch_size, max_iter=num_iterations) | machine_learning/3_classification/assigment/week7/module-10-online-learning-assignment-graphlab.ipynb | tuanavu/coursera-university-of-washington | mit |
We compare the convergence of stochastic gradient ascent and batch gradient ascent in the following cell. Note that we apply smoothing with smoothing_window=30. | make_plot(log_likelihood_sgd, len_data=len(feature_matrix_train), batch_size=100,
smoothing_window=30, label='stochastic, step_size=1e-1')
make_plot(log_likelihood_batch, len_data=len(feature_matrix_train), batch_size=len(feature_matrix_train),
smoothing_window=1, label='batch, step_size=5e-1') | machine_learning/3_classification/assigment/week7/module-10-online-learning-assignment-graphlab.ipynb | tuanavu/coursera-university-of-washington | mit |
Quiz Question: In the figure above, how many passes does batch gradient ascent need to achieve a similar log likelihood as stochastic gradient ascent?
It's always better
10 passes
20 passes
150 passes or more
Explore the effects of step sizes on stochastic gradient ascent
In previous sections, we chose step sizes for you. In practice, it helps to know how to choose good step sizes yourself.
To start, we explore a wide range of step sizes that are equally spaced in the log space. Run stochastic gradient ascent with step_size set to 1e-4, 1e-3, 1e-2, 1e-1, 1e0, 1e1, and 1e2. Use the following set of parameters:
* initial_coefficients=np.zeros(194)
* batch_size=100
* max_iter initialized so as to run 10 passes over the data. | batch_size = 100
num_passes = 10
num_iterations = num_passes * int(len(feature_matrix_train)/batch_size)
coefficients_sgd = {}
log_likelihood_sgd = {}
for step_size in np.logspace(-4, 2, num=7):
coefficients_sgd[step_size], log_likelihood_sgd[step_size] = logistic_regression_SG(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=step_size, batch_size=batch_size, max_iter=num_iterations) | machine_learning/3_classification/assigment/week7/module-10-online-learning-assignment-graphlab.ipynb | tuanavu/coursera-university-of-washington | mit |
Plotting the log likelihood as a function of passes for each step size
Now, we will plot the change in log likelihood using the make_plot for each of the following values of step_size:
step_size = 1e-4
step_size = 1e-3
step_size = 1e-2
step_size = 1e-1
step_size = 1e0
step_size = 1e1
step_size = 1e2
For consistency, we again apply smoothing_window=30. | for step_size in np.logspace(-4, 2, num=7):
make_plot(log_likelihood_sgd[step_size], len_data=len(train_data), batch_size=100,
smoothing_window=30, label='step_size=%.1e'%step_size) | machine_learning/3_classification/assigment/week7/module-10-online-learning-assignment-graphlab.ipynb | tuanavu/coursera-university-of-washington | mit |
Now, let us remove the step size step_size = 1e2 and plot the rest of the curves. | for step_size in np.logspace(-4, 2, num=7)[0:6]:
make_plot(log_likelihood_sgd[step_size], len_data=len(train_data), batch_size=100,
smoothing_window=30, label='step_size=%.1e'%step_size) | machine_learning/3_classification/assigment/week7/module-10-online-learning-assignment-graphlab.ipynb | tuanavu/coursera-university-of-washington | mit |
Hat potential
The following potential is often used in Physics and other fields to describe symmetry breaking and is often known as the "hat potential":
$$ V(x) = -a x^2 + b x^4 $$
Write a function hat(x,a,b) that returns the value of this function: | def hat(x,a,b):
v = -a*(x**2) + b*(x**4)
return v
assert hat(0.0, 1.0, 1.0)==0.0
assert hat(0.0, 1.0, 1.0)==0.0
assert hat(1.0, 10.0, 1.0)==-9.0 | assignments/assignment11/OptimizationEx01.ipynb | jegibbs/phys202-2015-work | mit |
Plot this function over the range $x\in\left[-3,3\right]$ with $b=1.0$ and $a=5.0$: | a = 5.0
b = 1.0
x = np.linspace(-3.0, 3.0)
plt.plot(x, hat(x,a,b))
plt.plot(-1.5811388304396232, hat(-1.5811388304396232,a,b), 'ro')
plt.plot(1.58113882, hat(1.58113882,a,b), 'ro')
plt.xlabel('X')
plt.ylabel('V(x)')
plt.title('Hat Potential')
plt.grid(True)
plt.box(False);
assert True # leave this to grade the plot | assignments/assignment11/OptimizationEx01.ipynb | jegibbs/phys202-2015-work | mit |
Write code that finds the two local minima of this function for $b=1.0$ and $a=5.0$.
Use scipy.optimize.minimize to find the minima. You will have to think carefully about how to get this function to find both minima.
Print the x values of the minima.
Plot the function as a blue line.
On the same axes, show the minima as red circles.
Customize your visualization to make it beautiful and effective. | opt.minimize(hat, -3, args=(a,b), method="Powell")
opt.minimize(hat, -3, args=(a,b))
assert True # leave this for grading the plot | assignments/assignment11/OptimizationEx01.ipynb | jegibbs/phys202-2015-work | mit |
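One way to complete the task above (a minimal sketch, assuming scipy.optimize is imported as opt as in the calls above, and using two starting points on either side of zero so the optimizer lands in each well):

```python
# Sketch: find both local minima by starting the optimizer on either side of zero
res_left = opt.minimize(hat, x0=-2.0, args=(a, b))
res_right = opt.minimize(hat, x0=2.0, args=(a, b))
minima_x = [res_left.x[0], res_right.x[0]]
print("Minima at x =", minima_x)

x = np.linspace(-3, 3, 200)
plt.plot(x, hat(x, a, b), 'b-')                              # function as a blue line
plt.plot(minima_x, [hat(m, a, b) for m in minima_x], 'ro')   # minima as red circles
plt.xlabel('x')
plt.ylabel('V(x)')
plt.title('Hat potential with local minima');
```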
We start by generating some toy data containing 6 instances which we will partition into folds. | data = list(range(6))
labels = [True] * 3 + [False] * 3 | notebooks/basic-cross-validation.ipynb | chrinide/optunity | bsd-3-clause |
Standard cross-validation <a id=standard></a>
Each function to be decorated with cross-validation functionality must accept the following arguments:
- x_train: training data
- x_test: test data
- y_train: training labels (required only when y is specified in the cross-validation decorator)
- y_test: test labels (required only when y is specified in the cross-validation decorator)
These arguments will be set implicitly by the cross-validation decorator to match the right folds. Any remaining arguments to the decorated function remain as free parameters that must be set later on.
Lets start with the basics and look at Optunity's cross-validation in action. We use an objective function that simply prints out the train and test data in every split to see what's going on. | def f(x_train, y_train, x_test, y_test):
print("")
print("train data:\t" + str(x_train) + "\t train labels:\t" + str(y_train))
print("test data:\t" + str(x_test) + "\t test labels:\t" + str(y_test))
return 0.0 | notebooks/basic-cross-validation.ipynb | chrinide/optunity | bsd-3-clause |
We start with 2 folds, which leads to equally sized train and test partitions. | f_2folds = optunity.cross_validated(x=data, y=labels, num_folds=2)(f)
print("using 2 folds")
f_2folds()
# f_2folds as defined above would typically be written using decorator syntax as follows
# we don't do that in these examples so we can reuse the toy objective function
@optunity.cross_validated(x=data, y=labels, num_folds=2)
def f_2folds(x_train, y_train, x_test, y_test):
print("")
print("train data:\t" + str(x_train) + "\t train labels:\t" + str(y_train))
print("test data:\t" + str(x_test) + "\t test labels:\t" + str(y_test))
return 0.0 | notebooks/basic-cross-validation.ipynb | chrinide/optunity | bsd-3-clause |
If we use three folds instead of 2, we get 3 iterations in which the training set is twice the size of the test set. | f_3folds = optunity.cross_validated(x=data, y=labels, num_folds=3)(f)
print("using 3 folds")
f_3folds() | notebooks/basic-cross-validation.ipynb | chrinide/optunity | bsd-3-clause |
If we do two iterations of 3-fold cross-validation (denoted by 2x3 fold), two sets of folds are generated and evaluated. | f_2x3folds = optunity.cross_validated(x=data, y=labels, num_folds=3, num_iter=2)(f)
print("using 2x3 folds")
f_2x3folds() | notebooks/basic-cross-validation.ipynb | chrinide/optunity | bsd-3-clause |
Using strata and clusters<a id=strata-clusters></a>
Strata are defined as sets of instances that should be spread out across folds as much as possible (e.g. stratify patients by age). Clusters are sets of instances that must be put in a single fold (e.g. cluster measurements of the same patient).
Optunity allows you to specify strata and/or clusters that must be accounted for while constructing cross-validation folds. Not all instances have to belong to a stratum or cluster.
Strata
We start by illustrating strata. Strata are specified as a list of lists of instances indices. Each list defines one stratum. We will reuse the toy data and objective function specified above. We will create 2 strata with 2 instances each. These instances will be spread across folds. We create two strata: ${0, 1}$ and ${2, 3}$. | strata = [[0, 1], [2, 3]]
f_stratified = optunity.cross_validated(x=data, y=labels, strata=strata, num_folds=3)(f)
f_stratified() | notebooks/basic-cross-validation.ipynb | chrinide/optunity | bsd-3-clause |
Clusters
Clusters work similarly, except that now instances within a cluster are guaranteed to be placed within a single fold. The way to specify clusters is identical to strata. We create two clusters: ${0, 1}$ and ${2, 3}$. These pairs will always occur in a single fold. | clusters = [[0, 1], [2, 3]]
f_clustered = optunity.cross_validated(x=data, y=labels, clusters=clusters, num_folds=3)(f)
f_clustered() | notebooks/basic-cross-validation.ipynb | chrinide/optunity | bsd-3-clause |
Strata and clusters
Strata and clusters can be used together. Lets say we have the following configuration:
1 stratum: ${0, 1, 2}$
2 clusters: ${0, 3}$, ${4, 5}$
In this particular example, instances 1 and 2 will inevitably end up in a single fold, even though they are part of one stratum. This happens because the total data set has size 6, and 4 instances are already in clusters. | strata = [[0, 1, 2]]
clusters = [[0, 3], [4, 5]]
f_strata_clustered = optunity.cross_validated(x=data, y=labels, clusters=clusters, strata=strata, num_folds=3)(f)
f_strata_clustered() | notebooks/basic-cross-validation.ipynb | chrinide/optunity | bsd-3-clause |
Aggregators <a id=aggregators></a>
Aggregators are used to combine the scores per fold into a single result. The default approach used in cross-validation is to take the mean of all scores. In some cases, we might be interested in worst-case or best-case performance, the spread, ...
Optunity allows passing a custom callable to be used as aggregator.
The default aggregation in Optunity is to compute the mean across folds. | @optunity.cross_validated(x=data, num_folds=3)
def f(x_train, x_test):
result = x_test[0]
print(result)
return result
f(1) | notebooks/basic-cross-validation.ipynb | chrinide/optunity | bsd-3-clause |
This can be replaced by any function, e.g. min or max. | @optunity.cross_validated(x=data, num_folds=3, aggregator=max)
def fmax(x_train, x_test):
result = x_test[0]
print(result)
return result
fmax(1)
@optunity.cross_validated(x=data, num_folds=3, aggregator=min)
def fmin(x_train, x_test):
result = x_test[0]
print(result)
return result
fmin(1) | notebooks/basic-cross-validation.ipynb | chrinide/optunity | bsd-3-clause |
Retaining intermediate results
Often, it may be useful to retain all intermediate results, not just the final aggregated data. This is made possible via the optunity.cross_validation.mean_and_list aggregator. This aggregator computes the mean for internal use in cross-validation, but also returns a list of lists containing the full evaluation results. | @optunity.cross_validated(x=data, num_folds=3,
aggregator=optunity.cross_validation.mean_and_list)
def f_full(x_train, x_test, coeff):
return x_test[0] * coeff
# evaluate f
mean_score, all_scores = f_full(1.0)
print(mean_score)
print(all_scores)
| notebooks/basic-cross-validation.ipynb | chrinide/optunity | bsd-3-clause |
Note that a cross-validation based on the mean_and_list aggregator essentially returns a tuple of results. If the result is iterable, all solvers in Optunity use the first element as the objective function value. You can let the cross-validation procedure return other useful statistics too, which you can access from the solver trace. | opt_coeff, info, _ = optunity.minimize(f_full, coeff=[0, 1], num_evals=10)
print(opt_coeff)
print("call log")
for args, val in zip(info.call_log['args']['coeff'], info.call_log['values']):
print(str(args) + '\t\t' + str(val)) | notebooks/basic-cross-validation.ipynb | chrinide/optunity | bsd-3-clause |
Cross-validation with scikit-learn <a id=cv-sklearn></a>
In this example we will show how to use cross-validation methods that are provided by scikit-learn in conjunction with Optunity. To do this we provide Optunity with the folds that scikit-learn produces in a specific format.
In supervised learning, datasets often have unbalanced labels. When performing cross-validation with unbalanced data, it is good practice to preserve the percentage of samples for each class across folds. To achieve this label balance we will use <a href="http://scikit-learn.org/stable/modules/generated/sklearn.cross_validation.StratifiedKFold.html">StratifiedKFold</a>. | data = list(range(20))
labels = [1 if i%4==0 else 0 for i in range(20)]
@optunity.cross_validated(x=data, y=labels, num_folds=5)
def unbalanced_folds(x_train, y_train, x_test, y_test):
print("")
print("train data:\t" + str(x_train) + "\ntrain labels:\t" + str(y_train)) + '\n'
print("test data:\t" + str(x_test) + "\ntest labels:\t" + str(y_test)) + '\n'
return 0.0
unbalanced_folds() | notebooks/basic-cross-validation.ipynb | chrinide/optunity | bsd-3-clause |
Notice above how the test label sets have a varying number of positive samples: some have none, some have one, and some have two. | from sklearn.cross_validation import StratifiedKFold
stratified_5folds = StratifiedKFold(labels, n_folds=5)
folds = [[list(test) for train, test in stratified_5folds]]
@optunity.cross_validated(x=data, y=labels, folds=folds, num_folds=5)
def balanced_folds(x_train, y_train, x_test, y_test):
print("")
print("train data:\t" + str(x_train) + "\ntrain labels:\t" + str(y_train)) + '\n'
print("test data:\t" + str(x_test) + "\ntest labels:\t" + str(y_test)) + '\n'
return 0.0
balanced_folds() | notebooks/basic-cross-validation.ipynb | chrinide/optunity | bsd-3-clause |
Now all of our train sets have four positive samples and our test sets have one positive sample.
To use predetermined folds, place a list of the test sample indices into a list, and then insert that list into another list. Why so many nested lists? Because you can perform multiple cross-validation runs by setting num_iter appropriately and then appending num_iter lists of test samples to the outermost list. Note that the test samples for a given fold are the indices that you provide, and the train samples for that fold are all of the indices from all other test sets joined together. If not done carefully, this may lead to duplicated samples in a train set, and also to samples that fall in both the train and test sets of a fold if a data point is in multiple folds' test sets. | data = list(range(6))
labels = [True] * 3 + [False] * 3
fold1 = [[0, 3], [1, 4], [2, 5]]
fold2 = [[0, 5], [1, 4], [0, 3]] # notice what happens when the indices are not unique
folds = [fold1, fold2]
@optunity.cross_validated(x=data, y=labels, folds=folds, num_folds=3, num_iter=2)
def multiple_iters(x_train, y_train, x_test, y_test):
print("")
print("train data:\t" + str(x_train) + "\t train labels:\t" + str(y_train))
print("test data:\t" + str(x_test) + "\t\t test labels:\t" + str(y_test))
return 0.0
multiple_iters() | notebooks/basic-cross-validation.ipynb | chrinide/optunity | bsd-3-clause |
The one-step lookahead agent is defined in the next code cell. | # The agent is always implemented as a Python function that accepts two arguments: obs and config
def agent(obs, config):
# Get list of valid moves
valid_moves = [c for c in range(config.columns) if obs.board[c] == 0]
# Convert the board to a 2D grid
grid = np.asarray(obs.board).reshape(config.rows, config.columns)
# Use the heuristic to assign a score to each possible board in the next turn
scores = dict(zip(valid_moves, [score_move(grid, col, obs.mark, config) for col in valid_moves]))
# Get a list of columns (moves) that maximize the heuristic
max_cols = [key for key in scores.keys() if scores[key] == max(scores.values())]
# Select at random from the maximizing columns
return random.choice(max_cols) | notebooks/game_ai/raw/tut2.ipynb | Kaggle/learntools | apache-2.0 |
In the code for the agent, we begin by getting a list of valid moves. This is the same line of code we used in the previous tutorial!
Next, we convert the game board to a 2D numpy array. For Connect Four, grid is an array with 6 rows and 7 columns.
Then, the score_move() function calculates the value of the heuristic for each valid move. It uses a couple of helper functions:
- drop_piece() returns the grid that results when the player drops its disc in the selected column.
- get_heuristic() calculates the value of the heuristic for the supplied board (grid), where mark is the mark of the agent. This function uses the count_windows() function, which counts the number of windows (of four adjacent locations in a row, column, or diagonal) that satisfy specific conditions from the heuristic. Specifically, count_windows(grid, num_discs, piece, config) yields the number of windows in the game board (grid) that contain num_discs pieces from the player (agent or opponent) with mark piece, and where the remaining locations in the window are empty. For instance,
- setting num_discs=4 and piece=obs.mark counts the number of times the agent got four discs in a row.
- setting num_discs=3 and piece=obs.mark%2+1 counts the number of windows where the opponent has three discs, and the remaining location is empty (the opponent wins by filling in the empty spot).
Finally, we get the list of columns that maximize the heuristic and select one (uniformly) at random.
(Note: For this course, we decided to provide relatively slower code that was easier to follow. After you've taken the time to understand the code above, can you see how to re-write it, to make it run much faster? As a hint, note that the count_windows() function is used several times to loop over the locations in the game board.)
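The helper functions themselves aren't shown in this excerpt. A rough sketch of what they might look like is below (an illustration only: the exact implementations and heuristic weights used in the course may differ, and config is assumed to expose rows, columns, and inarow as in the agent above).

```python
# Rough sketch of the helpers described above (illustrative, not the course's exact code)
def drop_piece(grid, col, piece, config):
    # Board that results from dropping `piece` into column `col`
    next_grid = grid.copy()
    for row in range(config.rows - 1, -1, -1):
        if next_grid[row][col] == 0:
            next_grid[row][col] = piece
            break
    return next_grid

def check_window(window, num_discs, piece, config):
    # True if the window holds `num_discs` of `piece` and the rest are empty
    return window.count(piece) == num_discs and window.count(0) == config.inarow - num_discs

def count_windows(grid, num_discs, piece, config):
    # Count horizontal, vertical, and diagonal windows satisfying the condition
    num_windows = 0
    for row in range(config.rows):                                # horizontal
        for col in range(config.columns - (config.inarow - 1)):
            if check_window(list(grid[row, col:col + config.inarow]), num_discs, piece, config):
                num_windows += 1
    for row in range(config.rows - (config.inarow - 1)):          # vertical
        for col in range(config.columns):
            if check_window(list(grid[row:row + config.inarow, col]), num_discs, piece, config):
                num_windows += 1
    for row in range(config.rows - (config.inarow - 1)):          # positive diagonal
        for col in range(config.columns - (config.inarow - 1)):
            window = list(grid[range(row, row + config.inarow), range(col, col + config.inarow)])
            if check_window(window, num_discs, piece, config):
                num_windows += 1
    for row in range(config.inarow - 1, config.rows):             # negative diagonal
        for col in range(config.columns - (config.inarow - 1)):
            window = list(grid[range(row, row - config.inarow, -1), range(col, col + config.inarow)])
            if check_window(window, num_discs, piece, config):
                num_windows += 1
    return num_windows

def get_heuristic(grid, mark, config):
    # Illustrative weights: reward our 3s and 4s, penalize the opponent's 3s
    num_threes = count_windows(grid, 3, mark, config)
    num_fours = count_windows(grid, 4, mark, config)
    num_threes_opp = count_windows(grid, 3, mark % 2 + 1, config)
    return 1e6 * num_fours + num_threes - 1e2 * num_threes_opp

def score_move(grid, col, mark, config):
    # Heuristic value of the board obtained by dropping our disc in `col`
    next_grid = drop_piece(grid, col, mark, config)
    return get_heuristic(next_grid, mark, config)
```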
In the next code cell, we see the outcome of one game round against a random agent. | from kaggle_environments import make, evaluate
# Create the game environment
env = make("connectx")
# The one-step lookahead agent plays one game round against a random agent
env.run([agent, "random"])
# Show the game
env.render(mode="ipython") | notebooks/game_ai/raw/tut2.ipynb | Kaggle/learntools | apache-2.0 |
We use the get_win_percentages() function from the previous tutorial to check how we can expect it to perform on average. | #$HIDE_INPUT$
def get_win_percentages(agent1, agent2, n_rounds=100):
# Use default Connect Four setup
config = {'rows': 6, 'columns': 7, 'inarow': 4}
# Agent 1 goes first (roughly) half the time
outcomes = evaluate("connectx", [agent1, agent2], config, [], n_rounds//2)
# Agent 2 goes first (roughly) half the time
outcomes += [[b,a] for [a,b] in evaluate("connectx", [agent2, agent1], config, [], n_rounds-n_rounds//2)]
print("Agent 1 Win Percentage:", np.round(outcomes.count([1,-1])/len(outcomes), 2))
print("Agent 2 Win Percentage:", np.round(outcomes.count([-1,1])/len(outcomes), 2))
print("Number of Invalid Plays by Agent 1:", outcomes.count([None, 0]))
print("Number of Invalid Plays by Agent 2:", outcomes.count([0, None]))
get_win_percentages(agent1=agent, agent2="random") | notebooks/game_ai/raw/tut2.ipynb | Kaggle/learntools | apache-2.0 |
Camera Calibration with OpenCV
Run the code in the cell below to extract object points and image points for camera calibration. | import numpy as np
import cv2
import glob
import matplotlib.pyplot as plt
%matplotlib qt
# prepare object points, like (0,0,0), (1,0,0), (2,0,0) ..., (7,5,0)
objp = np.zeros((6*8,3), np.float32)
objp[:,:2] = np.mgrid[0:8, 0:6].T.reshape(-1,2)
# Arrays to store object points and image points from all the images.
objpoints = [] # 3d points in real world space
imgpoints = [] # 2d points in image plane.
# Make a list of calibration images
images = glob.glob('../camera_cal/calibration*.jpg')
# Step through the list and search for chessboard corners
for idx, fname in enumerate(images):
img = cv2.imread(fname)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Find the chessboard corners
ret, corners = cv2.findChessboardCorners(gray, (8,6), None)
# If found, add object points, image points
if ret == True:
objpoints.append(objp)
imgpoints.append(corners)
# Draw and display the corners
cv2.drawChessboardCorners(img, (8,6), corners, ret)
#write_name = 'corners_found'+str(idx)+'.jpg'
#cv2.imwrite(write_name, img)
cv2.imshow('img', img)
cv2.waitKey(500)
cv2.destroyAllWindows() | CarND-Advanced-Lane-Lines/src/.ipynb_checkpoints/camera_calibration-checkpoint.ipynb | charliememory/AutonomousDriving | gpl-3.0 |
If the above cell ran successfully, you should now have the objpoints and imgpoints needed for camera calibration. Run the cell below to calibrate, calculate distortion coefficients, and test undistortion on an image! | import pickle
%matplotlib inline
# Test undistortion on an image
img = cv2.imread('calibration_wide/test_image.jpg')
img_size = (img.shape[1], img.shape[0])
# Do camera calibration given object points and image points
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, img_size,None,None)
dst = cv2.undistort(img, mtx, dist, None, mtx)
cv2.imwrite('calibration_wide/test_undist.jpg',dst)
# Save the camera calibration result for later use (we won't worry about rvecs / tvecs)
dist_pickle = {}
dist_pickle["mtx"] = mtx
dist_pickle["dist"] = dist
pickle.dump( dist_pickle, open( "calibration_wide/wide_dist_pickle.p", "wb" ) )
#dst = cv2.cvtColor(dst, cv2.COLOR_BGR2RGB)
# Visualize undistortion
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(20,10))
ax1.imshow(img)
ax1.set_title('Original Image', fontsize=30)
ax2.imshow(dst)
ax2.set_title('Undistorted Image', fontsize=30) | CarND-Advanced-Lane-Lines/src/.ipynb_checkpoints/camera_calibration-checkpoint.ipynb | charliememory/AutonomousDriving | gpl-3.0 |
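Because the calibration result is pickled above, a later session can skip the chessboard detection entirely and simply reload mtx and dist. The sketch below reuses the pickle path written above; the new image path is only a placeholder.

```python
import pickle
import cv2

# Reload the saved camera matrix and distortion coefficients
with open('calibration_wide/wide_dist_pickle.p', 'rb') as f:
    calib = pickle.load(f)
mtx, dist = calib['mtx'], calib['dist']

# Undistort any new image with the stored calibration (image path is illustrative)
new_img = cv2.imread('test_images/example.jpg')
undistorted = cv2.undistort(new_img, mtx, dist, None, mtx)
```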
Now we will form the request to invoke the Open Street Map API. Documentation on this API is found here:
http://wiki.openstreetmap.org/wiki/Nominatim
First, we'll generate an example address to geocode. Why not use Environment Hall? But feel free to use your own address! | #Get the address
address = '9 Circuit Drive, Durham, NC, 27708' | 06_WebGIS/Notebooks/GeocodingWithOSM.ipynb | johnpfay/environ859 | gpl-3.0 |
An API request consists of two components: the service endpoint and a set of parameters associated with the service.
When using the requests module to create and send our request, we supply the service endpoint as a string containing the server address (as a URL) and the service name (here, it's search). The parameters are supplied in the form of a Python dictionary. Here, the two parameters we'll pass are the format and address parameters. | #Form the request
osmURL = 'http://nominatim.openstreetmap.org/search'
params = {'format':'json','q':address} | 06_WebGIS/Notebooks/GeocodingWithOSM.ipynb | johnpfay/environ859 | gpl-3.0 |
Now, we can use requests to send our command off to the OSM server. The server's response is saved as the response variable. | #Send the request
response = requests.get(osmURL, params) | 06_WebGIS/Notebooks/GeocodingWithOSM.ipynb | johnpfay/environ859 | gpl-3.0 |
The response object below contains a lot of information. You are encouraged to explore this object further. Here we'll explore one property: the full URL that was created. Copy and paste the result into your favorite browser, and you'll see the result of our request in raw form. When you try this, try changing 'json' to 'html' in the URL... | response.url
#Opens the URL as an html response (vs JSON) in a web browser...
import webbrowser
webbrowser.open_new(response.url.replace('json','html')) | 06_WebGIS/Notebooks/GeocodingWithOSM.ipynb | johnpfay/environ859 | gpl-3.0 |
What we really want from the response, however, is the data returned by the service. The json() method of the response object converts the response to an object in JavaScript Object Notation, or JSON. Here, the parsed response is essentially a list of dictionaries that we can easily manipulate in Python. | #Read in the response as a JSON encoded object
jsonObj = response.json() | 06_WebGIS/Notebooks/GeocodingWithOSM.ipynb | johnpfay/environ859 | gpl-3.0 |
pprint or "pretty print" allows us to display JSON objects in a readable format. Let's make a pretty print of our JSON response. | from pprint import pprint
pprint(jsonObj) | 06_WebGIS/Notebooks/GeocodingWithOSM.ipynb | johnpfay/environ859 | gpl-3.0 |
Our response contains only one item in the JSON list. We'll extract it to a dictionary and print its keys. | dataDict = jsonObj[0]
print dataDict.keys() | 06_WebGIS/Notebooks/GeocodingWithOSM.ipynb | johnpfay/environ859 | gpl-3.0 |
Now we can easily grab the lat and lon objects from our response | lat = float(dataDict['lat'])
lng = float(dataDict['lon'])
print "The lat,lng
d = jsonObj[0]
d['lon'],d['lat'] | 06_WebGIS/Notebooks/GeocodingWithOSM.ipynb | johnpfay/environ859 | gpl-3.0 |
Now let's inform the user of the result of the whole process... | print "The address {0} is located at\n{1}° Lat, {2}° Lon".format(address,lat,lng) | 06_WebGIS/Notebooks/GeocodingWithOSM.ipynb | johnpfay/environ859 | gpl-3.0 |
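For convenience, the request, parse, and extract steps above can be collected into one helper. This is only a sketch: the function name geocode_address is ours, and it returns None when Nominatim finds no match.

```python
import requests

def geocode_address(address):
    """Return (lat, lng) as floats for an address via OSM Nominatim, or None if nothing is found."""
    response = requests.get('http://nominatim.openstreetmap.org/search',
                            {'format': 'json', 'q': address})
    results = response.json()
    if not results:
        return None
    return float(results[0]['lat']), float(results[0]['lon'])

# coords = geocode_address('9 Circuit Drive, Durham, NC, 27708')
```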
Make the notebook reproducible | np.random.seed(3123) | v0.13.0/examples/notebooks/generated/variance_components.ipynb | statsmodels/statsmodels.github.io | bsd-3-clause |
Nested analysis
In our discussion below, "Group 2" is nested within "Group 1". As a
concrete example, "Group 1" might be school districts, with "Group
2" being individual schools. The function below generates data from
such a population. In a nested analysis, the group 2 labels that
are nested within different group 1 labels are treated as
independent groups, even if they have the same label. For example,
two schools labeled "school 1" that are in two different school
districts are treated as independent schools, even though they have
the same label. | def generate_nested(
n_group1=200, n_group2=20, n_rep=10, group1_sd=2, group2_sd=3, unexplained_sd=4
):
# Group 1 indicators
group1 = np.kron(np.arange(n_group1), np.ones(n_group2 * n_rep))
# Group 1 effects
u = group1_sd * np.random.normal(size=n_group1)
effects1 = np.kron(u, np.ones(n_group2 * n_rep))
# Group 2 indicators
group2 = np.kron(np.ones(n_group1), np.kron(np.arange(n_group2), np.ones(n_rep)))
# Group 2 effects
u = group2_sd * np.random.normal(size=n_group1 * n_group2)
effects2 = np.kron(u, np.ones(n_rep))
e = unexplained_sd * np.random.normal(size=n_group1 * n_group2 * n_rep)
y = effects1 + effects2 + e
df = pd.DataFrame({"y": y, "group1": group1, "group2": group2})
return df | v0.13.0/examples/notebooks/generated/variance_components.ipynb | statsmodels/statsmodels.github.io | bsd-3-clause |
Generate a data set to analyze. | df = generate_nested() | v0.13.0/examples/notebooks/generated/variance_components.ipynb | statsmodels/statsmodels.github.io | bsd-3-clause |
Using all the default arguments for generate_nested, the population
values of "group 1 Var" and "group 2 Var" are 2^2=4 and 3^2=9,
respectively. The unexplained variance, listed as "scale" at the
top of the summary table, has population value 4^2=16. | model1 = sm.MixedLM.from_formula(
"y ~ 1",
re_formula="1",
vc_formula={"group2": "0 + C(group2)"},
groups="group1",
data=df,
)
result1 = model1.fit()
print(result1.summary()) | v0.13.0/examples/notebooks/generated/variance_components.ipynb | statsmodels/statsmodels.github.io | bsd-3-clause |
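Rather than reading the summary table by eye, the fitted values can be compared with the population values programmatically. This is a brief sketch using attributes of the statsmodels MixedLMResults object.

```python
# Estimated variance of the group 1 random intercepts (population value 4)
print(result1.cov_re)
# Estimated variance component for group 2 (population value 9)
print(result1.vcomp)
# Estimated unexplained (residual) variance (population value 16)
print(result1.scale)
```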
If we wish to avoid the formula interface, we can fit the same model
by building the design matrices manually. | def f(x):
n = x.shape[0]
g2 = x.group2
u = g2.unique()
u.sort()
uv = {v: k for k, v in enumerate(u)}
mat = np.zeros((n, len(u)))
for i in range(n):
mat[i, uv[g2.iloc[i]]] = 1
colnames = ["%d" % z for z in u]
return mat, colnames | v0.13.0/examples/notebooks/generated/variance_components.ipynb | statsmodels/statsmodels.github.io | bsd-3-clause |
Then we set up the variance components using the VCSpec class. | vcm = df.groupby("group1").apply(f).to_list()
mats = [x[0] for x in vcm]
colnames = [x[1] for x in vcm]
names = ["group2"]
vcs = VCSpec(names, [colnames], [mats]) | v0.13.0/examples/notebooks/generated/variance_components.ipynb | statsmodels/statsmodels.github.io | bsd-3-clause |
Finally we fit the model. It can be seen that the results of the
two fits are identical. | oo = np.ones(df.shape[0])
model2 = sm.MixedLM(df.y, oo, exog_re=oo, groups=df.group1, exog_vc=vcs)
result2 = model2.fit()
print(result2.summary()) | v0.13.0/examples/notebooks/generated/variance_components.ipynb | statsmodels/statsmodels.github.io | bsd-3-clause |
Crossed analysis
In a crossed analysis, the levels of one group can occur in any
combination with the levels of another group. The groups in
Statsmodels MixedLM are always nested, but it is possible to fit a
crossed model by having only one group, and specifying all random
effects as variance components. Many, but not all crossed models
can be fit in this way. The function below generates a crossed data
set with two levels of random structure. | def generate_crossed(
n_group1=100, n_group2=100, n_rep=4, group1_sd=2, group2_sd=3, unexplained_sd=4
):
# Group 1 indicators
group1 = np.kron(
np.arange(n_group1, dtype=int), np.ones(n_group2 * n_rep, dtype=int)
)
group1 = group1[np.random.permutation(len(group1))]
# Group 1 effects
u = group1_sd * np.random.normal(size=n_group1)
effects1 = u[group1]
# Group 2 indicators
group2 = np.kron(
np.arange(n_group2, dtype=int), np.ones(n_group2 * n_rep, dtype=int)
)
group2 = group2[np.random.permutation(len(group2))]
# Group 2 effects
u = group2_sd * np.random.normal(size=n_group2)
effects2 = u[group2]
e = unexplained_sd * np.random.normal(size=n_group1 * n_group2 * n_rep)
y = effects1 + effects2 + e
df = pd.DataFrame({"y": y, "group1": group1, "group2": group2})
return df | v0.13.0/examples/notebooks/generated/variance_components.ipynb | statsmodels/statsmodels.github.io | bsd-3-clause |
Generate a data set to analyze. | df = generate_crossed() | v0.13.0/examples/notebooks/generated/variance_components.ipynb | statsmodels/statsmodels.github.io | bsd-3-clause |
Next we fit the model; note that the groups vector is constant.
Using the default parameters for generate_crossed, the level 1
variance should be 2^2=4, the level 2 variance should be 3^2=9, and
the unexplained variance should be 4^2=16. | vc = {"g1": "0 + C(group1)", "g2": "0 + C(group2)"}
oo = np.ones(df.shape[0])
model3 = sm.MixedLM.from_formula("y ~ 1", groups=oo, vc_formula=vc, data=df)
result3 = model3.fit()
print(result3.summary()) | v0.13.0/examples/notebooks/generated/variance_components.ipynb | statsmodels/statsmodels.github.io | bsd-3-clause |
If we wish to avoid the formula interface, we can fit the same model
by building the design matrices manually. | def f(g):
n = len(g)
u = g.unique()
u.sort()
uv = {v: k for k, v in enumerate(u)}
mat = np.zeros((n, len(u)))
for i in range(n):
mat[i, uv[g[i]]] = 1
colnames = ["%d" % z for z in u]
return [mat], [colnames]
vcm = [f(df.group1), f(df.group2)]
mats = [x[0] for x in vcm]
colnames = [x[1] for x in vcm]
names = ["group1", "group2"]
vcs = VCSpec(names, colnames, mats) | v0.13.0/examples/notebooks/generated/variance_components.ipynb | statsmodels/statsmodels.github.io | bsd-3-clause |
Here we fit the model without using formulas; it is simple to check
that the results for models 3 and 4 are identical. | oo = np.ones(df.shape[0])
model4 = sm.MixedLM(df.y, oo[:, None], exog_re=None, groups=oo, exog_vc=vcs)
result4 = model4.fit()
print(result4.summary()) | v0.13.0/examples/notebooks/generated/variance_components.ipynb | statsmodels/statsmodels.github.io | bsd-3-clause |
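One quick way to carry out that check is to compare the estimated parameters of the two fits directly, for example:

```python
# The two parameterizations should produce numerically identical estimates
print(np.allclose(result3.fe_params, result4.fe_params))
print(np.allclose(result3.vcomp, result4.vcomp))
print(np.allclose(result3.scale, result4.scale))
```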
We can access individual rows and columns using .loc (with index labels) or .iloc (with indices)
```python
medians_df.loc[row labels, column labels]
medians_df.iloc[row indices, column indices]
``` | medians_df.loc[[0, 1, 2, 5], 'County']
medians_df.iloc[10:15, :4] | UGIC/2022/SpatiallyEnabledDataFrames/alpha.ipynb | agrc/Presentations | mit |
We can also get just a few columns from all rows | medians_df[['Median_age', 'Avg_MonthlyIncome']].head() | UGIC/2022/SpatiallyEnabledDataFrames/alpha.ipynb | agrc/Presentations | mit |
Extending pandas Spatially
The ArcGIS API for Python provides Spatially Enabled DataFrames, which include geometry information. | from arcgis.features import GeoAccessor, GeoSeriesAccessor
counties_fc_path = r'C:\Users\jdadams\AppData\Roaming\Esri\ArcGISPro\Favorites\opensgid.agrc.utah.gov.sde\opensgid.boundaries.county_boundaries'
counties_df = pd.DataFrame.spatial.from_featureclass(counties_fc_path)
counties_df.head() | UGIC/2022/SpatiallyEnabledDataFrames/alpha.ipynb | agrc/Presentations | mit |
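A feature class is not the only possible source. If the data starts as a plain table with coordinate columns, the same accessor can build a spatially enabled DataFrame from x/y values; the file and column names below are hypothetical, and the exact signature should be confirmed against the GeoAccessor documentation.

```python
# Hypothetical CSV with longitude/latitude columns
points_df = pd.read_csv('points.csv')
points_sdf = pd.DataFrame.spatial.from_xy(points_df, x_column='lon', y_column='lat', sr=4326)
```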
pandas lets you work on rows that meet a certain condition | counties_df.loc[counties_df['stateplane'] == 'Central', ['name', 'stateplane', 'fips_str']] | UGIC/2022/SpatiallyEnabledDataFrames/alpha.ipynb | agrc/Presentations | mit |
You can easily add new columns | counties_df['emperor'] = 'Jake'
counties_df.head() | UGIC/2022/SpatiallyEnabledDataFrames/alpha.ipynb | agrc/Presentations | mit |
pandas provides powerful built-in grouping and aggregation tools, along with Spatially Enabled DataFrames' geometry operations | counties_df.groupby('stateplane').count()
counties_df['acres'] = counties_df['SHAPE'].apply(lambda shape: shape.area / 4046.8564)
counties_df.groupby('stateplane')['acres'].sum() | UGIC/2022/SpatiallyEnabledDataFrames/alpha.ipynb | agrc/Presentations | mit |
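The 4046.8564 in the cell above is the number of square meters in one acre (the calculation assumes the layer's areal units are square meters, as the original code implies). A slightly more self-documenting version of the same idea:

```python
SQ_METERS_PER_ACRE = 4046.8564  # square meters in one acre

# Same acreage calculation as above, with the conversion factor named for readability
counties_df['acres'] = counties_df['SHAPE'].apply(lambda shape: shape.area / SQ_METERS_PER_ACRE)

# groupby() can also apply several aggregations at once
counties_df.groupby('stateplane')['acres'].agg(['count', 'sum', 'mean'])
```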
pandas Solutions to our Arcpy Problems
row[0] Solution: Field Names
```python
def update_unit_count(parcels_df):
"""Update unit counts in-place for single family, duplex, and tri/quad
Args:
parcels_df (pd.DataFrame): The evaluated parcel dataset with UNIT_COUNT, HOUSE_CNT, SUBTYPE, and NOTE columns
"""
# fix single family (non-pud)
zero_or_null_unit_counts = (parcels_df['UNIT_COUNT'] == 0) | (parcels_df['UNIT_COUNT'].isna())
parcels_df.loc[(zero_or_null_unit_counts) & (parcels_df['SUBTYPE'] == 'single_family'), 'UNIT_COUNT'] = 1
# fix duplex
parcels_df.loc[(parcels_df['SUBTYPE'] == 'duplex'), 'UNIT_COUNT'] = 2
# fix triplex-quadplex
parcels_df.loc[(parcels_df['UNIT_COUNT'] < parcels_df['HOUSE_CNT']) & (parcels_df['NOTE'] == 'triplex-quadplex'),
'UNIT_COUNT'] = parcels_df['HOUSE_CNT']
```
Let's make Erik the emperor of the small counties that use State Plane North | counties_df.loc[(counties_df['pop_lastcensus'] < 100000) & (counties_df['stateplane'] == 'North'), 'emperor'] = 'Erik'
counties_df[['name', 'pop_lastcensus', 'stateplane', 'emperor']].sort_values('name').head() | UGIC/2022/SpatiallyEnabledDataFrames/alpha.ipynb | agrc/Presentations | mit |
Nested Cursors Solution: Merged DataFrames
```python
def _get_current_attachment_info_by_oid(self, live_data_subset_df):
#: Join live attachment table to feature layer info
live_attachments_df = pd.DataFrame(self.feature_layer.attachments.search())
live_attachments_subset_df = live_attachments_df.reindex(columns=['PARENTOBJECTID', 'NAME', 'ID'])
merged_df = live_data_subset_df.merge(
live_attachments_subset_df, left_on='OBJECTID', right_on='PARENTOBJECTID', how='left'
)
return merged_df
```
Let's add census data to our counties | census_fc_path = r'C:\Users\jdadams\AppData\Roaming\Esri\ArcGISPro\Favorites\opensgid.agrc.utah.gov.sde\opensgid.demographic.census_counties_2020'
census_df = pd.DataFrame.spatial.from_featureclass(census_fc_path)
counties_with_census_df = counties_df.merge(census_df[['geoid20', 'aland20']], left_on='fips_str', right_on='geoid20')
counties_with_census_df.head() | UGIC/2022/SpatiallyEnabledDataFrames/alpha.ipynb | agrc/Presentations | mit |
Renaming/Reordering Fields Solution: df.rename() and df.reindex()
```python
final_parcels_df.rename(
    columns={
        'name': 'CITY',  #: from cities
        'NewSA': 'SUBCOUNTY',  #: From subcounties/regions
        'BUILT_YR': 'APX_BLT_YR',
        'BLDG_SQFT': 'TOT_BD_FT2',
        'TOTAL_MKT_VALUE': 'TOT_VALUE',
        'PARCEL_ACRES': 'ACRES',
    },
    inplace=True
)
```
```python
final_fields = [
'SHAPE', 'UNIT_ID', 'TYPE', 'SUBTYPE', 'IS_OUG', 'UNIT_COUNT', 'DUA', 'ACRES', 'TOT_BD_FT2', 'TOT_VALUE',
'APX_BLT_YR', 'BLT_DECADE', 'CITY', 'COUNTY', 'SUBCOUNTY', 'PARCEL_ID'
]
logging.info('Writing final data out to disk...')
output_df = final_parcels_df.reindex(columns=final_fields)
output_df.spatial.to_featureclass(output_fc, sanitize_columns=False)
```
"Emperor" is too bold; let's use "Benevolent Dictator for Life" instead. | renames = {
'name': 'County Name',
'pop_lastcensus': 'Last Census Population',
'emperor': 'Benevolent Dictator for Life',
'acres': 'Acres',
'aland20': 'Land Area',
}
counties_with_census_df.rename(columns=renames, inplace=True)
counties_with_census_df.head() | UGIC/2022/SpatiallyEnabledDataFrames/alpha.ipynb | agrc/Presentations | mit |
Now that we've got it all looking good, let's reorder the fields and get rid of the ones we don't want | field_order = [
'County Name',
'Benevolent Dictator for Life',
'Acres',
'Land Area',
'Last Census Population',
'SHAPE'
]
final_counties_df = counties_with_census_df.reindex(columns=field_order)
final_counties_df.head() | UGIC/2022/SpatiallyEnabledDataFrames/alpha.ipynb | agrc/Presentations | mit |
Intermediate Feature Classes: New DataFrame Variables
With everything we've done, we've not written a single feature class to either disk or in_memory
```python
counties_df
counties_with_census_df
final_counties_df
```
Finally, Write It All To Disk | final_counties_df.spatial.to_featureclass(r'C:\gis\Projects\HousingInventory\HousingInventory.gdb\counties_ugic') | UGIC/2022/SpatiallyEnabledDataFrames/alpha.ipynb | agrc/Presentations | mit |
Overview
Loading the extension enables three magic functions: %octave, %octave_push, and %octave_pull.
The first is for executing one or more lines of Octave, while the latter two allow moving variables between the Octave and Python workspaces.
Here you see an example of how to execute a single line of Octave, and how to transfer the generated value back to Python: | x = %octave [1 2; 3 4];
x
a = [1, 2, 3]
%octave_push a
%octave a = a * 2;
%octave_pull a
a | example/octavemagic_extension.ipynb | blink1073/oct2py | mit |