Dataset columns: markdown (string, 0 to 37k chars), code (string, 1 to 33.3k chars), path (string, 8 to 215 chars), repo_name (string, 6 to 77 chars), license (string, 15 classes).
When using the cell magic, %%octave (note the double %), multiple lines of Octave can be executed together. Unlike with the single-line magic, no value is returned, so we use the -i and -o flags to specify input and output variables. Also note the use of the semicolon to suppress the Octave output.
%%octave -i x -o U,S,V
[U, S, V] = svd(x);

print(U, S, V)
example/octavemagic_extension.ipynb
blink1073/oct2py
mit
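The same round trip can also be done from plain Python through oct2py's session interface (a sketch; it assumes Octave is on the PATH and that x is a NumPy array):

from oct2py import octave
import numpy as np

x = np.random.rand(4, 4)
octave.push('x', x)                 # send x into the Octave session
octave.eval('[U, S, V] = svd(x);')  # semicolon suppresses Octave's own output
U = octave.pull('U')
S = octave.pull('S')
V = octave.pull('V')
print(U, S, V)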
Plotting Plot output is automatically captured and displayed, and using the -f flag you may choose its format (currently, png and svg are supported).
%%octave -f svg

p = [12 -2.5 -8 -0.1 8];
x = 0:0.01:1;

polyout(p, 'x')
plot(x, polyval(p, x));
example/octavemagic_extension.ipynb
blink1073/oct2py
mit
The width or the height can be specified to constrain the image while maintaining the original aspect ratio.
%%octave -f png -w 600
% butterworth filter, order 2, cutoff pi/2 radians
b = [0.292893218813452 0.585786437626905 0.292893218813452];
a = [1 0 0.171572875253810];
freqz(b, a, 32);

%%octave -s 600,200 -f png
% Note: On Windows, this will not show the plots unless Ghostscript is installed.
subplot(121);
[x, y] = meshgrid(0:0.1:3);
r = sin(x - 0.5).^2 + cos(y - 0.5).^2;
surf(x, y, r);

subplot(122);
sombrero()
example/octavemagic_extension.ipynb
blink1073/oct2py
mit
Multiple figures can be drawn. Note that when using imshow the image will be created as a PNG with the raw image dimensions.
%%octave -f svg -h 300
sombrero

figure
imshow(rand(200,200))
example/octavemagic_extension.ipynb
blink1073/oct2py
mit
Explore the Data The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc. Each batch contains the labels and images that are one of the following: * airplane * automobile * bird * cat * deer * dog * frog * horse * ship * truck Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for an image and label pair in the batch. Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions.
%matplotlib inline
%config InlineBackend.figure_format = 'retina'

import helper
import numpy as np

# Explore the dataset
batch_id = 2
sample_id = 3
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
nanodegrees/deep_learning_foundations/unit_2/project_2/dlnd_image_classification.ipynb
broundy/udacity
unlicense
Implement Preprocess Functions Normalize In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
def normalize(x):
    """
    Normalize a list of sample image data in the range of 0 to 1
    : x: List of image data.  The image shape is (32, 32, 3)
    : return: Numpy array of normalize data
    """
    return x / np.max(x)


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_normalize(normalize)
nanodegrees/deep_learning_foundations/unit_2/project_2/dlnd_image_classification.ipynb
broundy/udacity
unlicense
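Dividing by the per-batch maximum makes the scaling depend on whichever pixel value happens to be largest in that batch. A common alternative (a sketch, assuming standard 8-bit CIFAR-10 pixel values) divides by a fixed constant so every batch is scaled identically:

def normalize_fixed(x):
    # Assumes 8-bit pixel data in [0, 255]; a constant divisor keeps the scaling identical across batches.
    return np.array(x) / 255.0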
One-hot encode Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, is a list of labels. Implement the function to return the list of labels as a one-hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function. Hint: Don't reinvent the wheel.
def one_hot_encode(x):
    """
    One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
    : x: List of sample Labels
    : return: Numpy array of one-hot encoded labels
    """
    return np.eye(10)[x]


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_one_hot_encode(one_hot_encode)
nanodegrees/deep_learning_foundations/unit_2/project_2/dlnd_image_classification.ipynb
broundy/udacity
unlicense
Randomize Data As you saw from exploring the data above, the order of the samples is randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset. Preprocess all the data and save it Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
nanodegrees/deep_learning_foundations/unit_2/project_2/dlnd_image_classification.ipynb
broundy/udacity
unlicense
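helper.preprocess_and_save_data is provided by the course repository and its internals are not shown here. A rough sketch of what such a routine typically does, under the assumption that it applies the two functions above, carves out 10% of each training batch for validation, and pickles the results (the helper's actual file names and split logic may differ):

import pickle

def _preprocess_and_save(features, labels, filename):
    # Hypothetical helper: apply the preprocessing functions defined above and pickle the pair.
    # Called once per training batch, once for the combined validation split, and once for the test set.
    features = normalize(features)
    labels = one_hot_encode(labels)
    pickle.dump((features, labels), open(filename, 'wb'))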
Check Point This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import pickle
import problem_unittests as tests
import helper

# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
nanodegrees/deep_learning_foundations/unit_2/project_2/dlnd_image_classification.ipynb
broundy/udacity
unlicense
Build the network For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project. Note: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the "Convolutional and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's abstraction of layers, so it's easy to pick up. However, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d. Let's begin! Input The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions:
* Implement neural_net_image_input: return a TF Placeholder, set the shape using image_shape with batch size set to None, and name the placeholder "x" using the TensorFlow name parameter.
* Implement neural_net_label_input: return a TF Placeholder, set the shape using n_classes with batch size set to None, and name the placeholder "y" using the TensorFlow name parameter.
* Implement neural_net_keep_prob_input: return a TF Placeholder for dropout keep probability and name it "keep_prob" using the TensorFlow name parameter.
These names will be used at the end of the project to load your saved model. Note: None for shapes in TensorFlow allows for a dynamic size.
import tensorflow as tf


def neural_net_image_input(image_shape):
    """
    Return a Tensor for a batch of image input
    : image_shape: Shape of the images
    : return: Tensor for image input.
    """
    return tf.placeholder(tf.float32, shape=[None, image_shape[0], image_shape[1], image_shape[2]], name='x')


def neural_net_label_input(n_classes):
    """
    Return a Tensor for a batch of label input
    : n_classes: Number of classes
    : return: Tensor for label input.
    """
    return tf.placeholder(tf.float32, shape=[None, n_classes], name='y')


def neural_net_keep_prob_input():
    """
    Return a Tensor for keep probability
    : return: Tensor for keep probability.
    """
    return tf.placeholder(tf.float32, name='keep_prob')


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
nanodegrees/deep_learning_foundations/unit_2/project_2/dlnd_image_classification.ipynb
broundy/udacity
unlicense
Convolution and Max Pooling Layer Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling: * Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor. * Apply a convolution to x_tensor using weight and conv_strides. * We recommend you use same padding, but you're welcome to use any padding. * Add bias * Add a nonlinear activation to the convolution. * Apply Max Pooling using pool_ksize and pool_strides. * We recommend you use same padding, but you're welcome to use any padding. Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers.
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
    """
    Apply convolution then max pooling to x_tensor
    :param x_tensor: TensorFlow Tensor
    :param conv_num_outputs: Number of outputs for the convolutional layer
    :param conv_ksize: kernel size 2-D Tuple for the convolutional layer
    :param conv_strides: Stride 2-D Tuple for convolution
    :param pool_ksize: kernel size 2-D Tuple for pool
    :param pool_strides: Stride 2-D Tuple for pool
    : return: A tensor that represents convolution and max pooling of x_tensor
    """
    F_W = tf.Variable(tf.truncated_normal(
        [conv_ksize[0], conv_ksize[1], x_tensor.shape.as_list()[3], conv_num_outputs], stddev=0.05))
    F_b = tf.Variable(tf.zeros(conv_num_outputs))

    output = tf.nn.conv2d(x_tensor, F_W, strides=[1, conv_strides[0], conv_strides[1], 1], padding='SAME')
    output = tf.nn.bias_add(output, F_b)
    output = tf.nn.relu(output)
    output = tf.nn.max_pool(output,
                            ksize=[1, pool_ksize[0], pool_ksize[1], 1],
                            strides=[1, pool_strides[0], pool_strides[1], 1],
                            padding='SAME')
    return output


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_con_pool(conv2d_maxpool)
nanodegrees/deep_learning_foundations/unit_2/project_2/dlnd_image_classification.ipynb
broundy/udacity
unlicense
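A quick shape check of the layer (a sketch with arbitrary filter counts and kernel sizes) shows how SAME padding together with the pooling strides halves the spatial dimensions:

x_check = tf.placeholder(tf.float32, shape=[None, 32, 32, 3])
conv = conv2d_maxpool(x_check, conv_num_outputs=16,
                      conv_ksize=(3, 3), conv_strides=(1, 1),
                      pool_ksize=(2, 2), pool_strides=(2, 2))
print(conv.shape)  # expected: (?, 16, 16, 16)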
Flatten Layer Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
def flatten(x_tensor):
    """
    Flatten x_tensor to (Batch Size, Flattened Image Size)
    : x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
    : return: A tensor of size (Batch Size, Flattened Image Size).
    """
    return tf.contrib.layers.flatten(x_tensor)


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_flatten(flatten)
nanodegrees/deep_learning_foundations/unit_2/project_2/dlnd_image_classification.ipynb
broundy/udacity
unlicense
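If you want the more challenging route mentioned above (no TF Layers), the same flattening can be written with tf.reshape alone (a sketch):

def flatten_tf(x_tensor):
    # Pure tf.nn-style alternative to tf.contrib.layers.flatten:
    # multiply out all non-batch dimensions and reshape.
    flat_size = 1
    for dim in x_tensor.shape.as_list()[1:]:
        flat_size *= dim
    return tf.reshape(x_tensor, [-1, flat_size])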
Fully-Connected Layer Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
def fully_conn(x_tensor, num_outputs):
    """
    Apply a fully connected layer to x_tensor using weight and bias
    : x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of output that the new tensor should be.
    : return: A 2-D tensor where the second dimension is num_outputs.
    """
    return tf.contrib.layers.fully_connected(inputs=x_tensor, num_outputs=num_outputs, activation_fn=tf.nn.relu)


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_fully_conn(fully_conn)
nanodegrees/deep_learning_foundations/unit_2/project_2/dlnd_image_classification.ipynb
broundy/udacity
unlicense
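The equivalent tf.nn-only version, with explicit weight and bias variables, would look roughly like this (a sketch; the initialization scheme is an arbitrary choice):

def fully_conn_tf(x_tensor, num_outputs):
    # Explicit weight/bias version of the same layer, using only tf.nn primitives.
    n_inputs = x_tensor.shape.as_list()[1]
    weights = tf.Variable(tf.truncated_normal([n_inputs, num_outputs], stddev=0.05))
    bias = tf.Variable(tf.zeros(num_outputs))
    return tf.nn.relu(tf.add(tf.matmul(x_tensor, weights), bias))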
Output Layer Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages. Note: Activation, softmax, or cross entropy should not be applied to this.
def output(x_tensor, num_outputs):
    """
    Apply a output layer to x_tensor using weight and bias
    : x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of output that the new tensor should be.
    : return: A 2-D tensor where the second dimension is num_outputs.
    """
    return tf.contrib.layers.fully_connected(inputs=x_tensor, num_outputs=num_outputs, activation_fn=None)


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_output(output)
nanodegrees/deep_learning_foundations/unit_2/project_2/dlnd_image_classification.ipynb
broundy/udacity
unlicense
Create Convolutional Model Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:
* Apply 1, 2, or 3 Convolution and Max Pool layers
* Apply a Flatten Layer
* Apply 1, 2, or 3 Fully Connected Layers
* Apply an Output Layer
* Return the output
* Apply TensorFlow's Dropout to one or more layers in the model using keep_prob.
def conv_net(x, keep_prob):
    """
    Create a convolutional neural network model
    : x: Placeholder tensor that holds image data.
    : keep_prob: Placeholder tensor that hold dropout keep probability.
    : return: Tensor that represents logits
    """
    # Function Definition from Above:
    #    conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
    c_layer = conv2d_maxpool(x, 32, (8, 8), (1, 1), (4, 4), (2, 2))
    c_layer = conv2d_maxpool(c_layer, 128, (4, 4), (1, 1), (4, 4), (2, 2))
    c_layer = conv2d_maxpool(c_layer, 512, (2, 2), (1, 1), (4, 4), (2, 2))
    c_layer = tf.nn.dropout(c_layer, keep_prob)

    # Function Definition from Above:
    #   flatten(x_tensor)
    flat = flatten(c_layer)

    # Function Definition from Above:
    #   fully_conn(x_tensor, num_outputs)
    # Chain the fully connected layers; the original fed `flat` into each call,
    # which silently discarded the earlier layers and their dropout.
    fc_layer = fully_conn(flat, 512)
    fc_layer = tf.nn.dropout(fc_layer, keep_prob)
    fc_layer = fully_conn(fc_layer, 128)
    fc_layer = tf.nn.dropout(fc_layer, keep_prob)
    fc_layer = fully_conn(fc_layer, 32)

    # Function Definition from Above:
    #   output(x_tensor, num_outputs)
    o_layer = output(fc_layer, 10)
    return o_layer


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""

##############################
## Build the Neural Network ##
##############################

# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()

# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()

# Model
logits = conv_net(x, keep_prob)

# Name logits Tensor, so that it can be loaded from disk after training
logits = tf.identity(logits, name='logits')

# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)

# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')

tests.test_conv_net(conv_net)
nanodegrees/deep_learning_foundations/unit_2/project_2/dlnd_image_classification.ipynb
broundy/udacity
unlicense
Train the Neural Network Single Optimization Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following: * x for image input * y for labels * keep_prob for keep probability for dropout This function will be called for each batch, so tf.global_variables_initializer() has already been called. Note: Nothing needs to be returned. This function is only optimizing the neural network.
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
    """
    Optimize the session on a batch of images and labels
    : session: Current TensorFlow session
    : optimizer: TensorFlow optimizer function
    : keep_probability: keep probability
    : feature_batch: Batch of Numpy image data
    : label_batch: Batch of Numpy label data
    """
    session.run(optimizer, feed_dict={
        x: feature_batch,
        y: label_batch,
        keep_prob: keep_probability})


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_train_nn(train_neural_network)
nanodegrees/deep_learning_foundations/unit_2/project_2/dlnd_image_classification.ipynb
broundy/udacity
unlicense
Show Stats Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
def print_stats(session, feature_batch, label_batch, cost, accuracy):
    """
    Print information about loss and validation accuracy
    : session: Current TensorFlow session
    : feature_batch: Batch of Numpy image data
    : label_batch: Batch of Numpy label data
    : cost: TensorFlow cost function
    : accuracy: TensorFlow accuracy function
    """
    loss = session.run(cost, feed_dict={
        x: feature_batch,
        y: label_batch,
        keep_prob: 1.0})
    valid_acc = session.run(accuracy, feed_dict={
        x: valid_features,
        y: valid_labels,
        keep_prob: 1.0})
    print('Loss: {:>10.4f} Validation Accuracy: {:.6f}'.format(loss, valid_acc))
nanodegrees/deep_learning_foundations/unit_2/project_2/dlnd_image_classification.ipynb
broundy/udacity
unlicense
Hyperparameters Tune the following parameters:
* Set epochs to the number of iterations until the network stops learning or starts overfitting.
* Set batch_size to the highest number that your machine has memory for. Most people use common sizes: 64, 128, 256, ...
* Set keep_probability to the probability of keeping a node using dropout.
# TODO: Tune Parameters
epochs = 15
batch_size = 512
keep_probability = .7
nanodegrees/deep_learning_foundations/unit_2/project_2/dlnd_image_classification.ipynb
broundy/udacity
unlicense
Train on a Single CIFAR-10 Batch Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
    # Initializing the variables
    sess.run(tf.global_variables_initializer())

    # Training cycle
    for epoch in range(epochs):
        batch_i = 1
        for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
            train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
        print('Epoch {:>2}, CIFAR-10 Batch {}:  '.format(epoch + 1, batch_i), end='')
        print_stats(sess, batch_features, batch_labels, cost, accuracy)
nanodegrees/deep_learning_foundations/unit_2/project_2/dlnd_image_classification.ipynb
broundy/udacity
unlicense
Fully Train the Model Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
save_model_path = './image_classification'

print('Training...')
with tf.Session() as sess:
    # Initializing the variables
    sess.run(tf.global_variables_initializer())

    # Training cycle
    for epoch in range(epochs):
        # Loop over all batches
        n_batches = 5
        for batch_i in range(1, n_batches + 1):
            for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
                train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
            print('Epoch {:>2}, CIFAR-10 Batch {}:  '.format(epoch + 1, batch_i), end='')
            print_stats(sess, batch_features, batch_labels, cost, accuracy)

    # Save Model
    saver = tf.train.Saver()
    save_path = saver.save(sess, save_model_path)
nanodegrees/deep_learning_foundations/unit_2/project_2/dlnd_image_classification.ipynb
broundy/udacity
unlicense
Checkpoint The model has been saved to disk. Test Model Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters.
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
%matplotlib inline
%config InlineBackend.figure_format = 'retina'

import tensorflow as tf
import pickle
import helper
import random

# Set batch size if not already set
try:
    if batch_size:
        pass
except NameError:
    batch_size = 64

save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3

def test_model():
    """
    Test the saved model against the test dataset
    """
    # Note: this loads the *training* pickle even though the docstring and the markdown above
    # describe testing; the test-set pickle produced by the helper would normally be loaded here.
    test_features, test_labels = pickle.load(open('preprocess_training.p', mode='rb'))
    loaded_graph = tf.Graph()

    with tf.Session(graph=loaded_graph) as sess:
        # Load model
        loader = tf.train.import_meta_graph(save_model_path + '.meta')
        loader.restore(sess, save_model_path)

        # Get Tensors from loaded model
        loaded_x = loaded_graph.get_tensor_by_name('x:0')
        loaded_y = loaded_graph.get_tensor_by_name('y:0')
        loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
        loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
        loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')

        # Get accuracy in batches for memory limitations
        test_batch_acc_total = 0
        test_batch_count = 0

        for train_feature_batch, train_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
            test_batch_acc_total += sess.run(
                loaded_acc,
                feed_dict={loaded_x: train_feature_batch, loaded_y: train_label_batch, loaded_keep_prob: 1.0})
            test_batch_count += 1

        print('Testing Accuracy: {}\n'.format(test_batch_acc_total / test_batch_count))

        # Print Random Samples
        random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
        random_test_predictions = sess.run(
            tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
            feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
        helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)


test_model()
nanodegrees/deep_learning_foundations/unit_2/project_2/dlnd_image_classification.ipynb
broundy/udacity
unlicense
Load and transform data We're going to load the Movielens 100k dataset and create triplets of (user, known positive item, randomly sampled negative item). The success metric is AUC: in this case, the probability that a randomly chosen known positive item from the test set is ranked higher for a given user than a randomly chosen negative item.
latent_dim = 100
num_epochs = 10

# Read data
train, test = data.get_movielens_data()
num_users, num_items = train.shape

# Prepare the test triplets
test_uid, test_pid, test_nid = data.get_triplets(test)

model = build_model(num_users, num_items, latent_dim)

# Print the model structure
print(model.summary())

# Sanity check, should be around 0.5
print('AUC before training %s' % metrics.full_auc(model, test))
triplet_keras.ipynb
maciejkula/triplet_recommendations_keras
apache-2.0
Run the model Run for a couple of epochs, checking the AUC after every epoch.
for epoch in range(num_epochs):

    print('Epoch %s' % epoch)

    # Sample triplets from the training data
    uid, pid, nid = data.get_triplets(train)

    X = {
        'user_input': uid,
        'positive_item_input': pid,
        'negative_item_input': nid
    }

    model.fit(X,
              np.ones(len(uid)),
              batch_size=64,
              nb_epoch=1,
              verbose=0,
              shuffle=True)

    print('AUC %s' % metrics.full_auc(model, test))
triplet_keras.ipynb
maciejkula/triplet_recommendations_keras
apache-2.0
But what’s actually being printed here? At first glance, it looks like we’re printing the strings “True” and “False”, but those strings don’t appear anywhere in our code. What is actually being printed is the special built-in values that Python uses to represent true and false – they are capitalized so that we know they’re these special values. We can show that these values are special by trying to print them. The following code runs without errors (note the absence of quotation marks):
print(True)
print(False)
Week_06/Week 06 - 02 - Conditionals.ipynb
biof-309-python/BIOF309-2016-Fall
mit
There’s a wide range of things that we can include in conditions, and it would be impossible to give an exhaustive list here. The basic building blocks are:
* equals (represented by ==)
* greater and less than (represented by > and <)
* greater and less than or equal to (represented by >= and <=)
* not equal (represented by !=)
* is a value in a list (represented by in)
* are two objects the same (represented by is).
Notice that the test for equality is two equals signs, not one. Forgetting the second equals sign will cause an error. Now that we know how to express tests as conditions, let’s see what we can do with them. if statements The simplest kind of conditional statement is an if statement. Hopefully the syntax is fairly simple to understand:
expression_level = 125
if expression_level > 100:
    print("gene is highly expressed")
Week_06/Week 06 - 02 - Conditionals.ipynb
biof-309-python/BIOF309-2016-Fall
mit
We write the word if, followed by a condition, and end the first line with a colon. There follows a block of indented lines of code (the body of the if statement), which will only be executed if the condition is true. This colon-plus-block pattern should be familiar to you from the sections on loops and functions. Most of the time, we want to use an if statement to test a property of some variable whose value we don’t know at the time when we are writing the program. The example above is obviously useless, as the value of the expression_level variable is not going to change! Here’s a slightly more interesting example: we’ll define a list of gene accession names and print out just the ones that start with “a”:
accs = ['ab56', 'bh84', 'hv76', 'ay93', 'ap97', 'bd72']
for accession in accs:
    if accession.startswith('a'):
        print(accession)
Week_06/Week 06 - 02 - Conditionals.ipynb
biof-309-python/BIOF309-2016-Fall
mit
If you take a close look at the code above, you’ll see something interesting – the lines of code inside the loop are indented (just as we’ve seen before), but the line of code inside the if statement is indented twice – once for the loop, and once for the if. This is the first time we’ve seen multiple levels of indentation, but it’s very common once we start working with larger programs – whenever we have one loop or if statement nested inside another, we’ll have this type of indentation. Python is quite happy to have as many levels of indentation as needed, but you’ll need to keep careful track of which lines of code belong at which level. If you find yourself writing a piece of code that requires more than three levels of indentation, it’s generally an indication that that piece of code should be turned into a function. else statements Closely related to the if statement is the else statement. The examples above use a yes/no type of decision-making: should we print the gene accession number or not? Often we need an either/or type of decision, where we have two possible actions to take. To do this, we can add on an else clause after the end of the body of an if statement:
expression_level = 125
if expression_level > 100:
    print("gene is highly expressed")
else:
    print("gene is lowly expressed")
Week_06/Week 06 - 02 - Conditionals.ipynb
biof-309-python/BIOF309-2016-Fall
mit
The else statement doesn’t have any condition of its own – rather, the else statement body is executed when the if statement to which it’s attached is not executed. Here’s an example which uses if and else to split up a list of accession names into two different files – accessions that start with “a” go into the first file, and all other accessions go into the second file:
file1 = open("a_accessions.txt", "w")
file2 = open("other_accessions.txt", "w")
accs = ['ab56', 'bh84', 'hv76', 'ay93', 'ap97', 'bd72']
for accession in accs:
    if accession.startswith('a'):
        file1.write(accession + "\n")
    else:
        file2.write(accession + "\n")
Week_06/Week 06 - 02 - Conditionals.ipynb
biof-309-python/BIOF309-2016-Fall
mit
Notice how there are multiple indentation levels as before, but that the if and else statements are at the same level. elif statements What if we have more than two possible branches? For example, say we want three files of accession names: ones that start with “a”, ones that start with “b”, and all others. We could have a second if statement nested inside the else clause of the first if statement:
file1 = open("a_accessions.txt", "w")
file2 = open("b_accessions.txt", "w")
file3 = open("other_accessions.txt", "w")
accs = ['ab56', 'bh84', 'hv76', 'ay93', 'ap97', 'bd72']
for accession in accs:
    if accession.startswith('a'):
        file1.write(accession + "\n")
    else:
        if accession.startswith('b'):
            file2.write(accession + "\n")
        else:
            file3.write(accession + "\n")
Week_06/Week 06 - 02 - Conditionals.ipynb
biof-309-python/BIOF309-2016-Fall
mit
This works, but is difficult to read – we can quickly see that we need an extra level of indentation for every additional choice we want to include. To get round this, Python has an elif statement, which merges together else and if and allows us to rewrite the above example in a much more elegant way:
file1 = open("a_accessions.txt", "w")
file2 = open("b_accessions.txt", "w")
file3 = open("other_accessions.txt", "w")
accs = ['ab56', 'bh84', 'hv76', 'ay93', 'ap97', 'bd72']
for accession in accs:
    if accession.startswith('a'):
        file1.write(accession + "\n")
    elif accession.startswith('b'):
        file2.write(accession + "\n")
    else:
        file3.write(accession + "\n")
Week_06/Week 06 - 02 - Conditionals.ipynb
biof-309-python/BIOF309-2016-Fall
mit
Notice how this version of the code only needs two levels of indentation. In fact, using elif we can have any number of branches and still only require a single extra level of indentation:
for accession in accs:
    if accession.startswith('a'):
        file1.write(accession + "\n")
    elif accession.startswith('b'):
        file2.write(accession + "\n")
    elif accession.startswith('c'):
        file3.write(accession + "\n")
    elif accession.startswith('d'):
        file4.write(accession + "\n")
    elif accession.startswith('e'):
        file5.write(accession + "\n")
    else:
        file6.write(accession + "\n")
Week_06/Week 06 - 02 - Conditionals.ipynb
biof-309-python/BIOF309-2016-Fall
mit
Another way of handling complex decision branches like this – especially useful when dealing with validation and errors – is using exceptions, which have their own chapter in Advanced Python for Biologists. while loops Here’s one final thing we can do with conditions: use them to determine when to exit a loop. In section 4 we learned about loops that iterate over a collection of items (like a list, a string or a file). Python has another type of loop called a while loop. Rather than running a set number of times, a while loop runs until some condition is met. For example, here’s a bit of code that increments a count variable by one each time round the loop, stopping when the count variable reaches ten:
count = 0
while count < 10:
    print(count)
    count = count + 1
Week_06/Week 06 - 02 - Conditionals.ipynb
biof-309-python/BIOF309-2016-Fall
mit
Because normal loops in Python are so powerful, while loops are used much less frequently than in other languages, so we won’t discuss them further. Building up complex conditions What if we wanted to express a condition that was made up of several parts? Imagine we want to go through our list of accessions and print out only the ones that start with “a” and end with “3”. We could use two nested if statements:
accs = ['ab56', 'bh84', 'hv76', 'ay93', 'ap97', 'bd72']
for accession in accs:
    if accession.startswith('a'):
        if accession.endswith('3'):
            print(accession)
Week_06/Week 06 - 02 - Conditionals.ipynb
biof-309-python/BIOF309-2016-Fall
mit
but this brings in an extra, unneeded level of indentation. A better way is to join up the two conditions with and to make a complex expression:
accs = ['ab56', 'bh84', 'hv76', 'ay93', 'ap97', 'bd72']
for accession in accs:
    if accession.startswith('a') and accession.endswith('3'):
        print(accession)
Week_06/Week 06 - 02 - Conditionals.ipynb
biof-309-python/BIOF309-2016-Fall
mit
This version is nicer in two ways: it doesn’t require the extra level of indentation, and the condition reads in a very natural way. We can also use or to join up two conditions, to produce a complex condition that will be true if either of the two simple conditions are true:
accs = ['ab56', 'bh84', 'hv76', 'ay93', 'ap97', 'bd72']
for accession in accs:
    if accession.startswith('a') or accession.startswith('b'):
        print(accession)
Week_06/Week 06 - 02 - Conditionals.ipynb
biof-309-python/BIOF309-2016-Fall
mit
We can even join up complex conditions to make more complex conditions – here’s an example which prints accessions if they start with either “a” or “b” and end with “4”:
accs = ['ab56', 'bh84', 'hv76', 'ay93', 'ap97', 'bd72']
for acc in accs:
    if (acc.startswith('a') or acc.startswith('b')) and acc.endswith('4'):
        print(acc)
Week_06/Week 06 - 02 - Conditionals.ipynb
biof-309-python/BIOF309-2016-Fall
mit
Notice how we have to include parentheses in the above example to avoid ambiguity. Finally, we can negate any type of condition by prefixing it with the word not. This example will print out accessions that start with “a” and don’t end with 6:
accs = ['ab56', 'bh84', 'hv76', 'ay93', 'ap97', 'bd72']
for acc in accs:
    if acc.startswith('a') and not acc.endswith('6'):
        print(acc)
Week_06/Week 06 - 02 - Conditionals.ipynb
biof-309-python/BIOF309-2016-Fall
mit
By using a combination of and, or and not (along with parentheses where necessary) we can build up arbitrarily complex conditions. This kind of use for conditions – identifying elements in a list – can often be done better using either the filter function, or a list comprehension. These three words are collectively known as boolean operators and crop up in a lot of places. For example, if you wanted to search for information on using Python in biology, but didn’t want to see pages that talked about biology of snakes, you might do a search for “biology python -snake“. This is actually a complex condition just like the ones above – Google automatically adds and between words, and uses the hyphen to mean not. So you’re asking for pages that mention python and biology but not snakes. Writing true/false functions Sometimes we want to write a function that can be used in a condition. This is very easy to do – we just make sure that our function always returns either True or False. Remember that True and False are built-in values in Python, so they can be passed around, stored in variables, and returned, just like numbers or strings. Here’s a function that determines whether or not a DNA sequence is AT-rich (we’ll say that a sequence is AT-rich if it has an AT content of more than 0.65):
def is_at_rich(dna):
    length = len(dna)
    a_count = dna.upper().count('A')
    t_count = dna.upper().count('T')
    at_content = (a_count + t_count) / length
    if at_content > 0.65:
        return True
    else:
        return False
Week_06/Week 06 - 02 - Conditionals.ipynb
biof-309-python/BIOF309-2016-Fall
mit
We’ll test this function on a few sequences to see if it works:
print(is_at_rich("ATTATCTACTA"))
print(is_at_rich("CGGCAGCGCT"))
Week_06/Week 06 - 02 - Conditionals.ipynb
biof-309-python/BIOF309-2016-Fall
mit
The output shows that the function returns True or False just like the other conditions we’ve been looking at: True False Therefore we can use our function in an if statement:
if is_at_rich(my_dna):
    # do something with the sequence
Week_06/Week 06 - 02 - Conditionals.ipynb
biof-309-python/BIOF309-2016-Fall
mit
Because the last four lines of our function are devoted to evaluating a condition and returning True or False, we can write a slightly more compact version. In this example we evaluate the condition, and then return the result right away:
def is_at_rich(dna):
    length = len(dna)
    a_count = dna.upper().count('A')
    t_count = dna.upper().count('T')
    at_content = (a_count + t_count) / length
    return at_content > 0.65
Week_06/Week 06 - 02 - Conditionals.ipynb
biof-309-python/BIOF309-2016-Fall
mit
Complete graph Laplacian In discrete mathematics a Graph is a set of vertices or nodes that are connected to each other by edges or lines. If those edges don't have directionality, the graph is said to be undirected. Graphs are used to model social and communications networks (Twitter, Facebook, Internet) as well as natural systems such as molecules. A Complete Graph, $K_n$ on $n$ nodes has an edge that connects each node to every other node. Here is $K_5$:
import networkx as nx
K_5 = nx.complete_graph(5)
nx.draw(K_5)
assignments/assignment03/NumpyEx04.ipynb
CalPolyPat/phys202-2015-work
mit
The Laplacian Matrix is a matrix that is extremely important in graph theory and numerical analysis. It is defined as $L=D-A$, where $D$ is the degree matrix and $A$ is the adjacency matrix. For the purpose of this problem you don't need to understand the details of these matrices, although their definitions are relatively simple. The degree matrix for $K_n$ is an $n \times n$ diagonal matrix with the value $n-1$ along the diagonal and zeros everywhere else. Write a function to compute the degree matrix for $K_n$ using NumPy.
import numpy as np

def complete_deg(n):
    return (n - 1) * np.identity(n, dtype=int)

D = complete_deg(5)
assert D.shape == (5, 5)
assert D.dtype == np.dtype(int)
assert np.all(D.diagonal() == 4 * np.ones(5))
assert np.all(D - np.diag(D.diagonal()) == np.zeros((5, 5), dtype=int))
assignments/assignment03/NumpyEx04.ipynb
CalPolyPat/phys202-2015-work
mit
The adjacency matrix for $K_n$ is an $n \times n$ matrix with zeros along the diagonal and ones everywhere else. Write a function to compute the adjacency matrix for $K_n$ using NumPy.
def complete_adj(n):
    return np.ones((n, n), dtype=int) - np.identity(n, dtype=int)

A = complete_adj(5)
assert A.shape == (5, 5)
assert A.dtype == np.dtype(int)
assert np.all(A + np.eye(5, dtype=int) == np.ones((5, 5), dtype=int))
assignments/assignment03/NumpyEx04.ipynb
CalPolyPat/phys202-2015-work
mit
Use NumPy to explore the eigenvalues or spectrum of the Laplacian L of $K_n$. What patterns do you notice as $n$ changes? Create a conjecture about the general Laplace spectrum of $K_n$.
import matplotlib.pyplot as plt

def L(n):
    return complete_deg(n) - complete_adj(n)

# np.append returns a new array, so the original code never filled `smalleig`;
# collect the smallest eigenvalue of each Laplacian in a list instead.
ns = np.arange(2, 100)
smalleig = []
for n in ns:
    lap = L(n)
    eig = np.linalg.eigvals(lap)
    smalleig.append(np.min(eig))

plt.plot(ns, smalleig)
assignments/assignment03/NumpyEx04.ipynb
CalPolyPat/phys202-2015-work
mit
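For the complete graph the Laplacian spectrum is known in closed form: a single eigenvalue 0 (for the constant eigenvector) and the eigenvalue $n$ with multiplicity $n-1$. A quick numerical check of that conjecture, using the functions defined above:

for n in range(2, 10):
    eig = np.sort(np.linalg.eigvalsh(L(n)))
    # one zero eigenvalue, and n - 1 eigenvalues equal to n
    assert np.isclose(eig[0], 0)
    assert np.allclose(eig[1:], n)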
We import the sample data...
# Load PSU sample data
psu_sample_cls = PSUSample()
psu_sample_cls.load_data()
psu_sample = psu_sample_cls.data

# Load SSU sample data
ssu_sample_cls = SSUSample()
ssu_sample_cls.load_data()
ssu_sample = ssu_sample_cls.data

full_sample = pd.merge(
    psu_sample[["cluster", "region", "psu_prob"]],
    ssu_sample[["cluster", "household", "ssu_prob"]],
    on="cluster")

full_sample["inclusion_prob"] = full_sample["psu_prob"] * full_sample["ssu_prob"]
full_sample["design_weight"] = 1 / full_sample["inclusion_prob"]

full_sample.head(15)
docs/source/tutorial/replicate_weights.ipynb
survey-methods/samplics
mit
Balanced Repeated Replication (BRR) <a name="section1"></a> The basic idea of BRR is to split the sample into independent random groups. The groups are then treated as independent replicates of the sample design. A special case is when the sample is split into two half-samples in each stratum. This design is suitable for many survey designs where only two PSUs are selected per stratum. In practice, one of the PSUs is assigned to the first random group and the other PSU is assigned to the second group. The sample weights are doubled for one group (say the first one) and the sample weights in the other group are set to zero. To ensure that the replicates are independent, we use Hadamard matrices to assign the random groups.
import scipy.linalg

scipy.linalg.hadamard(8)
docs/source/tutorial/replicate_weights.ipynb
survey-methods/samplics
mit
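ReplicateWeight performs this assignment internally; the following is only a bare-bones sketch of the idea, assuming 5 strata with two PSUs each, unit design weights, and the first 5 columns of the 8 x 8 Hadamard matrix (samplics' actual pairing and column choices may differ):

import numpy as np
from scipy.linalg import hadamard

n_strata, n_reps = 5, 8
H = hadamard(n_reps)[:, :n_strata]              # one column per stratum, one row per replicate

design_weight = np.ones(2 * n_strata)           # two PSUs per stratum, unit weights here
first_psu = np.tile([True, False], n_strata)    # flag the first PSU of each stratum
stratum = np.repeat(np.arange(n_strata), 2)

replicate_weights = np.empty((n_reps, design_weight.size))
for r in range(n_reps):
    # +1 in the Hadamard row -> keep the first half-sample (double it, zero the other)
    keep_first = H[r, stratum] == 1
    factor = np.where(keep_first == first_psu, 2.0, 0.0)
    replicate_weights[r] = design_weight * factor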
In our example, we have 10 PSUs. If we do not have explicit stratification then replicate() will group the clusters into 5 strata (2 per stratum). In this case, the smallest number of replicates possible using the Hadamard matrix is 8. The result below shows that replicate() created 5 strata by grouping clusters 7 and 10 in the first stratum, clusters 16 and 24 in the second stratum, and so on. We can achieve the same result by setting stratification=True and providing the stratum variable to replicate().
brr = ReplicateWeight(method="brr", stratification=False)
brr_wgt = brr.replicate(full_sample["design_weight"], full_sample["cluster"])

brr_wgt.drop_duplicates().head(10)
docs/source/tutorial/replicate_weights.ipynb
survey-methods/samplics
mit
An extension of BRR is Fay's method. In Fay's approach, instead of multiplying one half-sample by zero, we multiply its sample weights by a factor $\alpha$ and the other half-sample's weights by $2-\alpha$. We refer to $\alpha$ as the Fay coefficient. Note that when $\alpha=0$, Fay's method reduces to BRR.
fay = ReplicateWeight(method="brr", stratification=False, fay_coef=0.3)
fay_wgt = fay.replicate(
    full_sample["design_weight"],
    full_sample["cluster"],
    rep_prefix="fay_weight_",
    psu_varname="cluster",
    str_varname="stratum"
)

fay_wgt.drop_duplicates().head(10)
docs/source/tutorial/replicate_weights.ipynb
survey-methods/samplics
mit
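In the BRR sketch shown earlier, only the factor assignment changes under Fay's method; a hypothetical fay_coef of 0.3 would give:

alpha = 0.3
# selected half-sample gets 2 - alpha, the other half-sample gets alpha (instead of 2 and 0)
factor = np.where(keep_first == first_psu, 2.0 - alpha, alpha)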
Bootstrap <a name="section2"></a> For the bootstrap replicates, we need to provide the number of replicates. When the number of replicates is not provided, ReplicateWeight will default to 500. The bootstrap consists of selecting the same number of psus as in the sample but with replacement. The selection is independently repeated for each replicate.
bootstrap = ReplicateWeight(method="bootstrap", stratification=False, number_reps=50)
boot_wgt = bootstrap.replicate(full_sample["design_weight"], full_sample["cluster"])

boot_wgt.drop_duplicates().head(10)
docs/source/tutorial/replicate_weights.ipynb
survey-methods/samplics
mit
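As a rough illustration of the resampling step (not samplics' exact algorithm, which also applies a rescaling), each replicate draws PSUs with replacement and the factor for a unit is the number of times its PSU was drawn:

import numpy as np

def bootstrap_factors(psu_ids, n_reps, seed=0):
    rng = np.random.default_rng(seed)
    psu_ids = np.asarray(psu_ids)
    psus = np.unique(psu_ids)
    factors = np.empty((n_reps, psu_ids.size))
    for r in range(n_reps):
        draw = rng.choice(psus, size=psus.size, replace=True)   # resample PSUs with replacement
        counts = {p: np.sum(draw == p) for p in psus}           # times each PSU was drawn
        factors[r] = [counts[p] for p in psu_ids]               # multiply the design weight by this factor
    return factors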
Jackknife <a name="section3"></a> Below, we illustrate the API for creating replicate weights using the jackknife method.
jackknife = ReplicateWeight(method="jackknife", stratification=False)
jkn_wgt = jackknife.replicate(full_sample["design_weight"], full_sample["cluster"])

jkn_wgt.drop_duplicates().head(10)
docs/source/tutorial/replicate_weights.ipynb
survey-methods/samplics
mit
With stratification...
jackknife = ReplicateWeight(method="jackknife", stratification=True)
jkn_wgt = jackknife.replicate(full_sample["design_weight"], full_sample["cluster"], full_sample["region"])

jkn_wgt.drop_duplicates().head(10)
docs/source/tutorial/replicate_weights.ipynb
survey-methods/samplics
mit
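For reference, the textbook delete-one-PSU jackknife builds one replicate per PSU: units in the deleted PSU get weight zero and the remaining units are scaled up by n/(n-1). A sketch of that rule for the unstratified case (samplics' exact scaling convention may differ):

def jackknife_factors(psu_ids):
    psus = list(dict.fromkeys(psu_ids))  # unique PSUs, in order of appearance
    n = len(psus)
    factors = []
    for deleted in psus:
        # zero out the deleted PSU, scale the rest up by n / (n - 1)
        factors.append([0.0 if p == deleted else n / (n - 1) for p in psu_ids])
    return factors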
Important. For any of the three methods, we can request the replicate coefficients instead of the replicate weights by passing rep_coefs=True.
# jackknife = ReplicateWeight(method="jackknife", stratification=True)
jkn_wgt = jackknife.replicate(
    full_sample["design_weight"], full_sample["cluster"], full_sample["region"], rep_coefs=True
)

jkn_wgt.drop_duplicates().sort_values(by="_stratum").head(15)

# fay = ReplicateWeight(method="brr", stratification=False, fay_coef=0.3)
fay_wgt = fay.replicate(
    full_sample["design_weight"],
    full_sample["cluster"],
    rep_prefix="fay_weight_",
    psu_varname="cluster",
    str_varname="stratum",
    rep_coefs=True
)

fay_wgt.drop_duplicates().head(10)
docs/source/tutorial/replicate_weights.ipynb
survey-methods/samplics
mit
This next cell is for internal use with our cluster at the department, a local ipcluster will work: use the cell above.
# import os
# from scripts.hpc05 import HPC05Client
# os.environ['SSH_AUTH_SOCK'] = os.path.join(os.path.expanduser('~'), 'ssh-agent.socket')
# cluster = HPC05Client()
Phase-diagrams.ipynb
basnijholt/orbitalfield
bsd-2-clause
Make sure to add the correct path like: sys.path.append("/path/where/to/ipynb/runs")
%%px --local
import sys
import os

# CHANGE THE LINE BELOW INTO THE CORRECT FOLDER!
sys.path.append(os.path.join(os.path.expanduser('~'), 'orbitalfield'))

import kwant
import numpy as np
from fun import *

def gap_and_decay(lead, p, val, tol=1e-4):
    gap = find_gap(lead, p, val, tol)
    decay_length = find_decay_length(lead, p, val)
    return gap, decay_length

import holoviews as hv
import holoviews_rc
hv.notebook_extension()
Phase-diagrams.ipynb
basnijholt/orbitalfield
bsd-2-clause
Uncomment the lines for the wire that you want to use.
%%px --local
# angle = 0   # WIRE WITH SC ON TOP
angle = 45    # WIRE WITH SC ON SIDE

p = make_params(t_interface=7/8*constants.t, Delta=68.4, r1=50, r2=70,
                orbital=True, angle=angle, A_correction=True, alpha=100)  # r2=70

p.V = lambda x, y, z: 2 / 50 * z
lead = make_3d_wire_external_sc(a=constants.a, r1=p.r1, r2=p.r2, angle=p.angle)

# WIRE WITH CONSTANT GAP
# lead = make_3d_wire()
# p = make_params(V=lambda x, y, z: 0, orbital=True)
Phase-diagrams.ipynb
basnijholt/orbitalfield
bsd-2-clause
You can specify the angles that you want to calculate in thetas and phis. Also specify the range of magnetic field and chemical potential in Bs and mu_mesh.
# give an array of angles that you want to use
# thetas = np.array([0, np.tan(1/10), 0.5 * np.pi - np.tan(1/10), 0.5 * np.pi])
# phis = np.array([0, np.tan(1/10), 0.5 * np.pi - np.tan(1/10), 0.5 * np.pi])
thetas = np.array([0.5 * np.pi])
phis = np.array([0])

# the range of magnetic field and chemical potential
Bs = np.linspace(0, 2, 400)
mu_mesh = np.linspace(0, 35, 400)

# creates a 3D array with all values of magnetic field for all specified angles
pos = spherical_coords(Bs.reshape(-1, 1, 1), thetas.reshape(1, -1, 1), phis.reshape(1, 1, -1))
pos_vec = pos.reshape(-1, 3)

mus_output = lview.map_sync(lambda B: find_phase_bounds(lead, p, B, num_bands=40), pos_vec)
mus, vals, mask = create_mask(Bs, thetas, phis, mu_mesh, mus_output)
N = len(vals)
step = N // (len(phis) * len(thetas))
print(N, step)
Phase-diagrams.ipynb
basnijholt/orbitalfield
bsd-2-clause
Check whether the correct angles were used and see the phase boundaries
import holoviews_rc
from itertools import product
from math import pi

kwargs = {'kdims': [dimensions.B, dimensions.mu],
          'extents': bnds(Bs, mu_mesh),
          'label': 'Topological boundaries',
          'group': 'Lines'}

angles = list(product(enumerate(phis), enumerate(thetas)))

boundaries = {(theta / pi, phi / pi): hv.Path((Bs, mus[i, j, :, ::2]), **kwargs)
              for (i, phi), (j, theta) in angles}

BlochSpherePlot.bgcolor = 'white'

sphere = {(theta / pi, phi / pi): BlochSphere([[1, 0, 0], spherical_coords(1, theta, phi)], group='Sphere')
          for (i, phi), (j, theta) in angles}

hv.HoloMap(boundaries, **dimensions.angles) + hv.HoloMap(sphere, **dimensions.angles)
Phase-diagrams.ipynb
basnijholt/orbitalfield
bsd-2-clause
Calculate full phase diagram Make sure tempdata exists in the current folder. Set full_phase_diagram to False if you only want the band gap in the non-trivial region or True if you want it in the whole Bs, mu_mesh range.
full_phase_diagram = False
Phase-diagrams.ipynb
basnijholt/orbitalfield
bsd-2-clause
The next cell calculates the gaps and decay lengths. You can stop and rerun the code, it will skip over the files that already exist. Make sure the folder tempdata/ exists.
import os.path
import sys

fname_list = []
for i, n in enumerate(range(0, N, step)):
    fname = "tempdata/" + str(n) + "-" + str((i+1)*step) + ".dat"
    fname_list.append(fname)
    if not os.path.isfile(fname):  # check if file already exists
        lview.results.clear()
        cluster.results.clear()
        cluster.metadata.clear()
        print(fname)
        sys.stdout.flush()
        if full_phase_diagram:
            gaps_and_decays_output = lview.map_async(
                lambda val: gap_and_decay(lead, p, val[:-1] + (True,)), vals[n:(i+1) * step])
        else:
            gaps_and_decays_output = lview.map_async(
                lambda val: gap_and_decay(lead, p, val), vals[n:(i+1) * step])
        gaps_and_decays_output.wait_interactive()
        np.savetxt(fname, gaps_and_decays_output.result())
        print(n, (i+1) * step)

cluster.shutdown(hub=True)

gaps_and_decay_output = np.vstack([np.loadtxt(fname) for fname in fname_list])
gaps_output, decay_length_output = np.array(gaps_and_decay_output).T

gaps = np.array(gaps_output).reshape(mask.shape)
gaps[1:, 0] = gaps[0, 0]

decay_lengths = np.array(decay_length_output).reshape(mask.shape)
decay_lengths[1:, 0] = decay_lengths[0, 0]

if full_phase_diagram:
    gaps = gaps * (mask * 2 - 1)
    decay_lengths = decay_lengths * (mask * 2 - 1)
    gaps_output = gaps.reshape(-1)
    decay_length_output = decay_lengths.reshape(-1)
Phase-diagrams.ipynb
basnijholt/orbitalfield
bsd-2-clause
Save Run this function to save the data to hdf5 format, it will include all data and parameters that are used in the simulation.
fname = 'data/test.h5'
save_data(fname, Bs, thetas, phis, mu_mesh, mus_output, gaps_output, decay_length_output, p, constants)
Phase-diagrams.ipynb
basnijholt/orbitalfield
bsd-2-clause
Check how the phase diagram looks This will show all data.
%%output size=200
%%opts Image [colorbar=False] {+axiswise} (clims=(0, 0.1))
phase_diagram = create_holoviews(fname)

(phase_diagram.Phase_diagram.Band_gap.hist()
 + phase_diagram.Phase_diagram.Inverse_decay_length
 + phase_diagram.Sphere.I).cols(2)

%%opts Image [colorbar=True]
phase_diagram.Phase_diagram.Band_gap

phase_diagram.cdims
Phase-diagrams.ipynb
basnijholt/orbitalfield
bsd-2-clause
Using GTK
import gtk
import gobject
import threading
import datetime as dt

import matplotlib as mpl
import matplotlib.style
import numpy as np
import pandas as pd

from mr_box_peripheral_board.ui.gtk.streaming_plot import StreamingPlot


def _generate_data(stop_event, data_ready, data):
    delta_t = dt.timedelta(seconds=.1)
    samples_per_plot = 5

    while True:
        time_0 = dt.datetime.now()
        values_i = np.random.rand(samples_per_plot)
        absolute_times_i = pd.Series([time_0 + i * delta_t
                                      for i in xrange(len(values_i))])
        data_i = pd.Series(values_i, index=absolute_times_i)
        data.append(data_i)
        data_ready.set()
        if stop_event.wait(samples_per_plot * delta_t.total_seconds()):
            break


with mpl.style.context('seaborn',
                       {'image.cmap': 'gray', 'image.interpolation': 'none'}):
    win = gtk.Window()
    win.set_default_size(800, 600)
    view = StreamingPlot(data_func=_generate_data)
    win.add(view.widget)
    win.connect('check-resize', lambda *args: view.on_resize())
    win.set_position(gtk.WIN_POS_MOUSE)
    win.show_all()
    view.fig.tight_layout()
    win.connect('destroy', gtk.main_quit)
    gobject.idle_add(view.start)

    def auto_close(*args):
        if not view.stop_event.is_set():
            # User did not explicitly pause the measurement.  Automatically
            # close the measurement and continue.
            win.destroy()
    gobject.timeout_add(5000, auto_close)

    measurement_complete = threading.Event()
    view.widget.connect('destroy', lambda *args: measurement_complete.set())

    gtk.gdk.threads_init()
    gtk.gdk.threads_enter()
    gtk.main()
    gtk.gdk.threads_leave()

    print measurement_complete.wait()
mr_box_peripheral_board/notebooks/Streaming plot demo.ipynb
wheeler-microfluidics/mr-box-peripheral-board.py
mit
Example of how to compress bytes (e.g., JSON) to bzip2
from IPython.display import display
import bz2

data = pd.concat(view.data)
data_json = data.to_json()
data_json_bz2 = bz2.compress(data_json)
data_from_json = pd.read_json(bz2.decompress(data_json_bz2), typ='series')
len(data_json), len(data_json_bz2)
mr_box_peripheral_board/notebooks/Streaming plot demo.ipynb
wheeler-microfluidics/mr-box-peripheral-board.py
mit
Binary Paintshop Problem with Quantum Approximate Optimization Algorithm <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://quantumai.google/cirq/experiments/qaoa/binary_paintshop"><img src="https://quantumai.google/site-assets/images/buttons/quantumai_logo_1x.png" />View on QuantumAI</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/quantumlib/ReCirq/blob/master/docs/qaoa/binary_paintshop.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/colab_logo_1x.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/quantumlib/ReCirq/blob/master/docs/qaoa/binary_paintshop"><img src="https://quantumai.google/site-assets/images/buttons/github_logo_1x.png" />View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/ReCirq/docs/qaoa/binary_paintshop"><img src="https://quantumai.google/site-assets/images/buttons/download_icon_1x.png" />Download notebook</a> </td> </table>
from typing import Sequence, Tuple

import numpy as np

try:
    import cirq
except ImportError:
    print("installing cirq...")
    !pip install --quiet cirq
    print("installed cirq.")
    import cirq

import cirq_ionq as ionq
docs/qaoa/binary_paintshop.ipynb
quantumlib/ReCirq
apache-2.0
Binary Paintshop Problem Assume an automotive paint shop and a random, but fixed sequence of 2*n cars. Each car has an identical partner that only differs in the color it has to be painted.
CAR_PAIR_COUNT = 10
car_sequence = np.random.permutation([x for x in range(CAR_PAIR_COUNT)] * 2)
print(car_sequence)
docs/qaoa/binary_paintshop.ipynb
quantumlib/ReCirq
apache-2.0
The task is to paint the cars such that in the end for every pair of cars one is painted in red and the other in blue. The objective of the following minimization procedure is to minimize the number of color changes in the paintshop.
def color_changes(paint_bitstring: Sequence[int], car_sequence: Sequence[int]) -> int:
    """Count the number of times the color changes if the robots paint each car in
    car_sequence according to paint_bitstring, which notes the color for the first car in each pair.

    Args:
        paint_bitstring: A sequence that determines the color to paint the first car in pair i.
            For example, 0 for blue and nonzero for red.
        car_sequence: A sequence that determines which cars are paired together

    Returns:
        Count of the number of times the robots change the color
    """
    color_sequence = []
    painted_once = set()
    for car in car_sequence:
        if car in painted_once:
            # paint the other color for the second car in the pair
            color_sequence.append(not paint_bitstring[car])
        else:
            # paint the noted color for the first car in the pair
            color_sequence.append(paint_bitstring[car])
            painted_once.add(car)
    paint_change_counter = 0
    # count the number of times two adjacent cars differ in color
    for color0, color1 in zip(color_sequence, color_sequence[1:]):
        if color0 != color1:
            paint_change_counter += 1
    return paint_change_counter
docs/qaoa/binary_paintshop.ipynb
quantumlib/ReCirq
apache-2.0
If two consecutive cars in the sequence are painted in different colors the robots have to rinse the old color, clean the nozzles and flush in the new color. This color change procedure costs time, paint, water and ultimately costs money, which is why we want to minimize the number of color changes. However, a rearrangement of the car sequence is not at our disposal (because of restrictions that are posed by the remaining manufacturing processes), but we can decide once we reach the first car of each car pair which color to paint the pair first. When we have chosen the color for the first car the other car has to be painted in the other respective color. Obvious generalizations exist, for example more than two colors and groups of cars with more than 2 cars where it is permissible to exchange colors, however for demonstration purposes it suffices to consider the binary version of the paintshop problem presented here. It is NP-hard to solve the binary paintshop problem exactly as well as approximately with an arbitrary performance guarantee. A performance guarantee in this context would be a proof that an approximation algorithm never gives us a solution with a number of color changes that is more than some factor times the optimal number of color changes. This is the situation where substantial quantum speedup can be assumed (c.f. Quantum Computing in the NISQ era and beyond). The quantum algorithm presented here can deliver, on average, better solutions than all polynomial runtime heuristics specifically developed for the paintshop problem in constant time (constant query complexity) (c.f. Beating classical heuristics for the binary paint shop problem with the quantum approximate optimization algorithm). Spin Glass To be able to solve the binary paintshop problem with the Quantum Approximate Optimization Algorithm (QAOA) we need to translate the problem to a spin glass problem. Interestingly, that is possible with no spatial overhead, i.e. the spin glass has as many spins as the sequence has car pairs. The state of every spin represents the color we paint the respective first car in the sequence of every car pair. Every second car is painted with the respective other color. The interactions of the spin glass can be deduced proceeding through the fixed car sequence: If two cars are adjacent to each other and both of them are either the first or the second car in their respective car pairs we can add a ferromagnetic interaction to the spin glass in order to penalize the color change between these two cars. If two cars are next to each other and one of the cars is the first and the other the second in their respective car pairs we have to add an antiferromagnetic interaction to the spin glass in order to penalize the color change because in this case the color for the car that is the second car in its car pair is exactly the opposite. All color changes in the car sequence are equivalent which is why we have equal magnitude ferromagnetic and antiferromagnetic interactions and additionally we choose unit magnitude interactions.
def spin_glass(car_sequence: Sequence[int]) -> Sequence[Tuple[int, int, int]]:
    """Assign interactions between adjacent cars.

    Assign a ferromagnetic (-1) interaction if both cars of an adjacent pair are
    the first, or both the second, occurrence of their respective car pairs.
    Otherwise, assign an antiferromagnetic (+1) interaction. Yield a tuple with
    the two paired cars followed by the chosen interaction.
    """
    ferromagnetic = -1
    antiferromagnetic = 1
    appeared_already = set()
    for car0, car1 in zip(car_sequence, car_sequence[1:]):
        if car0 == car1:
            # Both cars of the same pair are adjacent: they always differ in color,
            # so no interaction is needed, but the occurrence must still be recorded.
            appeared_already.add(car0)
            continue
        if car0 in appeared_already:
            # car0 is the second car of its pair.
            if car1 in appeared_already:
                yield car0, car1, ferromagnetic
            else:
                yield car0, car1, antiferromagnetic
        else:
            # car0 is the first car of its pair.
            appeared_already.add(car0)
            if car1 in appeared_already:
                yield car0, car1, antiferromagnetic
            else:
                yield car0, car1, ferromagnetic
docs/qaoa/binary_paintshop.ipynb
quantumlib/ReCirq
apache-2.0
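To see the resulting couplings, the interactions spin_glass yields can be listed for the same toy sequence; this is an illustrative sketch, not part of the original notebook.

# Hypothetical toy sequence: car pairs 0, 1, 2 in the order 0 1 0 2 1 2.
for spin_a, spin_b, sign in spin_glass([0, 1, 0, 2, 1, 2]):
    kind = "ferromagnetic" if sign == -1 else "antiferromagnetic"
    print(spin_a, spin_b, sign, kind)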
Quantum Approximate Optimization Algorithm

We want to execute a one-block (p = 1) version of the QAOA circuit for the binary paintshop instance on a trapped-ion quantum computer of IonQ. This device is composed of 11 fully connected qubits with average single- and two-qubit fidelities of 99.5% and 97.5% respectively (Benchmarking an 11-qubit quantum computer).

Like most available quantum hardware, trapped-ion quantum computers only allow the application of gates from a restricted native gate set predetermined by the physics of the quantum processor. To execute an arbitrary gate, compilation of the desired gate into available gates is required. For trapped ions, a generic native gate set consists of a parametrized two-qubit rotation, the Molmer Sorensen gate,

$R_\mathrm{XX}(\alpha)=\mathrm{exp}[-\mathrm{i}\alpha \sigma_\mathrm{x}^{(i)}\sigma_\mathrm{x}^{(j)}/2]$

and a parametrized single-qubit rotation:

$R(\theta,\phi)=\begin{pmatrix} \cos{(\theta/2)} & -\mathrm{i}\mathrm{e}^{-\mathrm{i}\phi}\sin{(\theta/2)} \\ -\mathrm{i}\mathrm{e}^{\mathrm{i}\phi}\sin{(\theta/2)} & \cos{(\theta/2)} \end{pmatrix}$

QAOA circuits employ parametrized two-body $\sigma_\mathrm{z}$ rotations, $R_\mathrm{ZZ}(\gamma)=\mathrm{exp}[-\mathrm{i}\gamma \sigma_\mathrm{z}^{(i)}\sigma_\mathrm{z}^{(j)}]$. To circumvent a compilation overhead and optimally leverage the ion trap, we inject pairs of Hadamard gates $H H^{\dagger} = 1$ for every qubit in between the two-body $\sigma_\mathrm{z}$ rotations. This lets us formulate the phase separator entirely with Molmer Sorensen gates. To support this, the QAOA circuit starts with all qubits in the ground state $\left| 0\right\rangle$ instead of the superposition of all computational basis states $\left| + \right\rangle$.
def phase_separator( gamma: float, qubit_register: Sequence[cirq.Qid], car_sequence: Sequence[int] ) -> Sequence[cirq.Operation]: """Yield a sequence of Molmer Sorensen gates to implement a phase separator over the ferromagnetic/antiferromagnetic interactions between adjacent cars, as defined by spin_glass """ for car_pair0, car_pair1, interaction in spin_glass(car_sequence): yield cirq.ms(interaction * gamma).on( qubit_register[car_pair0], qubit_register[car_pair1] ) qubit_register = cirq.LineQubit.range(CAR_PAIR_COUNT) circuit = cirq.Circuit([phase_separator(0.1, qubit_register, car_sequence)])
docs/qaoa/binary_paintshop.ipynb
quantumlib/ReCirq
apache-2.0
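The substitution above relies on $HZH = X$: conjugating $\mathrm{exp}[-\mathrm{i}\gamma\,\sigma_\mathrm{z}\sigma_\mathrm{z}]$ by Hadamards on both qubits gives $\mathrm{exp}[-\mathrm{i}\gamma\,\sigma_\mathrm{x}\sigma_\mathrm{x}]$ (i.e. $R_\mathrm{XX}(2\gamma)$ in the convention used here). A small numerical check with numpy/scipy, as a sketch that is not part of the original notebook:

import numpy as np
from scipy.linalg import expm

gamma = 0.37  # arbitrary test angle
Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)

ZZ, XX, HH = np.kron(Z, Z), np.kron(X, X), np.kron(H, H)

# Conjugating the ZZ rotation by Hadamards on both qubits yields the XX rotation.
print(np.allclose(HH @ expm(-1j * gamma * ZZ) @ HH, expm(-1j * gamma * XX)))  # expected: True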
Because we replaced the two body $\sigma_z$ rotations with Molmer Sorensen gates we also have to adjust the mixer slightly to account for the injected Hadamard gates.
def mixer(beta: float, qubit_register: Sequence[cirq.Qid]) -> Iterator[cirq.Operation]: """Yield a QAOA mixer of RX gates, modified by adding RY gates first, to account for the additional Hadamard gates. """ yield cirq.ry(np.pi / 2).on_each(qubit_register) yield cirq.rx(beta - np.pi).on_each(qubit_register)
docs/qaoa/binary_paintshop.ipynb
quantumlib/ReCirq
apache-2.0
To find the right parameters for the QAOA circuit, we have to assess the quality of the solutions for a given set of parameters. To this end, we execute the QAOA circuit with fixed parameters 100 times and calculate the average number of color changes.
def average_color_changes(
    parameters: Tuple[float, float],
    qubit_register: Sequence[cirq.Qid],
    car_sequence: Sequence[int],
) -> float:
    """Calculate the average number of color changes over all measurements of the
    QAOA circuit, across `repetitions` runs, for the provided parameters beta and gamma.

    Args:
        parameters: Tuple of (`beta`, `gamma`), the two parameters for the QAOA circuit
        qubit_register: A sequence of qubits for the circuit to use.
        car_sequence: A sequence that determines which cars are paired together.

    Returns:
        A float average number of color changes over all measurements.
    """
    beta, gamma = parameters
    repetitions = 100
    circuit = cirq.Circuit()
    circuit.append(phase_separator(gamma, qubit_register, car_sequence))
    circuit.append(mixer(beta, qubit_register))
    circuit.append(cirq.measure(*qubit_register, key="z"))
    results = service.run(circuit, repetitions=repetitions)
    avg_cc = 0
    for paint_bitstring in results.measurements["z"]:
        avg_cc += color_changes(paint_bitstring, car_sequence) / repetitions
    return avg_cc
docs/qaoa/binary_paintshop.ipynb
quantumlib/ReCirq
apache-2.0
We optimize the average number of color changes by adjusting the parameters with scipy.optimize's minimize function. The results of these optimization runs depend strongly on the random starting values we choose for the parameters, which is why we restart the optimization procedure from 10 different random starting points and keep the best performing optimized parameters.
from scipy.optimize import minimize service = cirq.Simulator() beta, gamma = np.random.rand(2) average_cc = average_color_changes([beta, gamma], qubit_register, car_sequence) optimization_function = lambda x: average_color_changes(x, qubit_register, car_sequence) for _ in range(10): initial_guess = np.random.rand(2) optimization_result = minimize( optimization_function, initial_guess, method="SLSQP", options={"eps": 0.1} ) average_cc_temp = average_color_changes( optimization_result.x, qubit_register, car_sequence ) if average_cc > average_cc_temp: beta, gamma = optimization_result.x average_cc = average_cc_temp average_cc
docs/qaoa/binary_paintshop.ipynb
quantumlib/ReCirq
apache-2.0
Note here that the structure of the problem graphs of the binary paintshop problem allows for an alternative technique to come up with good parameters independent of the specifics of the respective instance of the problem: Training the quantum approximate optimization algorithm without access to a quantum processing unit

Once the parameters are optimized, we execute the optimized QAOA circuit 100 times and output the solution with the fewest color changes. Please replace &lt;your key&gt; with your IonQ API key and &lt;remote host&gt; with the API endpoint.
repetitions = 100 circuit = cirq.Circuit() circuit.append(phase_separator(gamma, qubit_register, car_sequence)) circuit.append(mixer(beta, qubit_register)) circuit.append(cirq.measure(*qubit_register, key="z")) service = ionq.Service( remote_host="<remote host>", api_key="<your key>", default_target="qpu" ) results = service.run(circuit, repetitions=repetitions) best_result = CAR_PAIR_COUNT for paint_bitstring in results.measurements["z"]: result = color_changes(paint_bitstring, car_sequence) if result < best_result: best_result = result best_paint_bitstring = paint_bitstring print(f"The minimal number of color changes found by level-1 QAOA is: {best_result}") print( f"The car pairs have to be painted according to {best_paint_bitstring}, with index i representing the paint of the first car of pair i." ) print(f" The other car in pair i is painted the second color.")
docs/qaoa/binary_paintshop.ipynb
quantumlib/ReCirq
apache-2.0
Bangla article classification with TF-Hub

<table class="tfo-notebook-buttons" align="left"> <td><a target="_blank" href="https://tensorflow.google.cn/hub/tutorials/bangla_article_classifier"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png">View on TensorFlow.org</a></td> <td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/hub/tutorials/bangla_article_classifier.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png">Run in Google Colab</a></td> <td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/hub/tutorials/bangla_article_classifier.ipynb"><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png">View source on GitHub</a></td> <td><a href="https://storage.googleapis.com/tensorflow_docs/hub/examples/colab/bangla_article_classifier.ipynb">Download notebook</a></td> </table>

Caution: In addition to installing Python packages with pip, this notebook uses sudo apt install to install a system package: unzip.

This Colab demonstrates how to use Tensorflow Hub for text classification in non-English/local languages. Here we choose Bangla as the local language and use pretrained word embeddings to solve a multiclass classification task in which we classify Bangla news articles into 5 categories. The pretrained embeddings for Bangla come from FastText, a library created by Facebook with pretrained word vectors for 157 languages.

We will use TF-Hub's pretrained embedding exporter to first convert the word embeddings into a text embedding module, and then use that module to train a classifier with tf.keras, Tensorflow's high-level, user-friendly API, to build a deep learning model. Even though we use fastText embeddings here, you can export any other embeddings pretrained on other tasks and quickly get results with Tensorflow Hub.

Setup
%%bash # https://github.com/pypa/setuptools/issues/1694#issuecomment-466010982 pip install gdown --no-use-pep517 %%bash sudo apt-get install -y unzip import os import tensorflow as tf import tensorflow_hub as hub import gdown import numpy as np from sklearn.metrics import classification_report import matplotlib.pyplot as plt import seaborn as sns
site/zh-cn/hub/tutorials/bangla_article_classifier.ipynb
tensorflow/docs-l10n
apache-2.0
Dataset

We will use BARD (Bangla Article Dataset), which contains around 376,226 articles collected from different Bangla news portals and labelled with 5 categories: economy, state, international, sports and entertainment. We download the file from Google Drive; this (bit.ly/BARD_DATASET) link refers to the GitHub repository.
gdown.download( url='https://drive.google.com/uc?id=1Ag0jd21oRwJhVFIBohmX_ogeojVtapLy', output='bard.zip', quiet=True ) %%bash unzip -qo bard.zip
site/zh-cn/hub/tutorials/bangla_article_classifier.ipynb
tensorflow/docs-l10n
apache-2.0
Export pretrained word vectors to a TF-Hub module

TF-Hub provides some handy scripts for converting word embeddings into TF-Hub text embedding modules, described here. To make the module work for Bangla or any other language, we only have to download the word embedding .txt or .vec file to the same directory as export_v2.py and run the script.

The exporter reads the embeddings and exports them to a Tensorflow SavedModel. A SavedModel contains a complete TensorFlow program, including weights and the computation graph. TF-Hub can load the SavedModel as a module, which we will use to build the text classification model. Since we are using tf.keras to build the model, we will use hub.KerasLayer, which provides a wrapper that lets a Hub module be used as a Keras layer.

First, we get the word embeddings from fastText and the embedding exporter from the TF-Hub repository.
%%bash
curl -O https://dl.fbaipublicfiles.com/fasttext/vectors-crawl/cc.bn.300.vec.gz
curl -O https://raw.githubusercontent.com/tensorflow/hub/master/examples/text_embeddings_v2/export_v2.py
gunzip -qf cc.bn.300.vec.gz --keep
site/zh-cn/hub/tutorials/bangla_article_classifier.ipynb
tensorflow/docs-l10n
apache-2.0
Then, we run the exporter script on the embedding file. Since the fastText embeddings have a header line and are quite large (about 3.3 GB for Bangla after conversion to a module), we ignore the first line and export only the first 100,000 tokens to the text embedding module.
%%bash python export_v2.py --embedding_file=cc.bn.300.vec --export_path=text_module --num_lines_to_ignore=1 --num_lines_to_use=100000 module_path = "text_module" embedding_layer = hub.KerasLayer(module_path, trainable=False)
site/zh-cn/hub/tutorials/bangla_article_classifier.ipynb
tensorflow/docs-l10n
apache-2.0
The text embedding module takes a batch of sentences in a 1-D tensor of strings as input and outputs embedding vectors of shape (batch_size, embedding_dim) corresponding to the sentences. It preprocesses the input by splitting on spaces. Word embeddings are combined into sentence embeddings with the sqrtn combiner (see here). For demonstration, we pass a list of Bangla words as input and get the corresponding embedding vectors.
embedding_layer(['বাস', 'বসবাস', 'ট্রেন', 'যাত্রী', 'ট্রাক'])
site/zh-cn/hub/tutorials/bangla_article_classifier.ipynb
tensorflow/docs-l10n
apache-2.0
Convert to TensorFlow Dataset

Since the dataset is really large, instead of loading it entirely into memory we will use TensorFlow Dataset features to produce samples in batches at run time. The dataset is also very imbalanced, so before building the pipeline we shuffle the dataset.
dir_names = ['economy', 'sports', 'entertainment', 'state', 'international'] file_paths = [] labels = [] for i, dir in enumerate(dir_names): file_names = ["/".join([dir, name]) for name in os.listdir(dir)] file_paths += file_names labels += [i] * len(os.listdir(dir)) np.random.seed(42) permutation = np.random.permutation(len(file_paths)) file_paths = np.array(file_paths)[permutation] labels = np.array(labels)[permutation]
site/zh-cn/hub/tutorials/bangla_article_classifier.ipynb
tensorflow/docs-l10n
apache-2.0
After shuffling, we can look at the distribution of labels in the training and validation examples.
train_frac = 0.8 train_size = int(len(file_paths) * train_frac) # plot training vs validation distribution plt.subplot(1, 2, 1) plt.hist(labels[0:train_size]) plt.title("Train labels") plt.subplot(1, 2, 2) plt.hist(labels[train_size:]) plt.title("Validation labels") plt.tight_layout()
site/zh-cn/hub/tutorials/bangla_article_classifier.ipynb
tensorflow/docs-l10n
apache-2.0
To create the dataset, we first write a load_file function that reads an article from its file path together with its label. We then build the dataset with tf.data.Dataset.from_tensor_slices from the shuffled file paths and the label array and map load_file over it. Each training example is a tuple containing an article of tf.string data type and its integer label. We split the data into training and validation sets with an 80-20 ratio by slicing the shuffled arrays, and batch and prefetch both datasets.
def load_file(path, label): return tf.io.read_file(path), label def make_datasets(train_size): batch_size = 256 train_files = file_paths[:train_size] train_labels = labels[:train_size] train_ds = tf.data.Dataset.from_tensor_slices((train_files, train_labels)) train_ds = train_ds.map(load_file).shuffle(5000) train_ds = train_ds.batch(batch_size).prefetch(tf.data.experimental.AUTOTUNE) test_files = file_paths[train_size:] test_labels = labels[train_size:] test_ds = tf.data.Dataset.from_tensor_slices((test_files, test_labels)) test_ds = test_ds.map(load_file) test_ds = test_ds.batch(batch_size).prefetch(tf.data.experimental.AUTOTUNE) return train_ds, test_ds train_data, validation_data = make_datasets(train_size)
site/zh-cn/hub/tutorials/bangla_article_classifier.ipynb
tensorflow/docs-l10n
apache-2.0
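As a quick sanity check, which is not part of the original tutorial, one batch can be pulled from the training pipeline to confirm its shapes and dtypes; this assumes the train_data built by make_datasets above.

# Inspect a single batch from the training pipeline.
for text_batch, label_batch in train_data.take(1):
    print(text_batch.shape, text_batch.dtype)    # expected: (256,) tf.string
    print(label_batch.shape, label_batch.dtype)  # expected: (256,) integer labels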
Model training and evaluation

Since we have already wrapped the module so that it can be used like any other Keras layer, we can create a small Sequential model, which is a linear stack of layers. We can add our text embedding module with model.add just like any other layer. We compile the model by specifying the loss and the optimizer and train it for 5 epochs. The tf.keras API can handle TensorFlow Datasets as input, so we can pass our dataset instances to the fit method for model training. Since we are using the tf.data pipeline, it takes care of producing samples, batching them and feeding them to the model.

Model
def create_model(): model = tf.keras.Sequential([ tf.keras.layers.Input(shape=[], dtype=tf.string), embedding_layer, tf.keras.layers.Dense(64, activation="relu"), tf.keras.layers.Dense(16, activation="relu"), tf.keras.layers.Dense(5), ]) model.compile(loss=tf.losses.SparseCategoricalCrossentropy(from_logits=True), optimizer="adam", metrics=['accuracy']) return model model = create_model() # Create earlystopping callback early_stopping_callback = tf.keras.callbacks.EarlyStopping(monitor='val_loss', min_delta=0, patience=3)
site/zh-cn/hub/tutorials/bangla_article_classifier.ipynb
tensorflow/docs-l10n
apache-2.0
Training
history = model.fit(train_data, validation_data=validation_data, epochs=5, callbacks=[early_stopping_callback])
site/zh-cn/hub/tutorials/bangla_article_classifier.ipynb
tensorflow/docs-l10n
apache-2.0
Evaluation

We can use the history object returned by the fit method, which contains the loss and accuracy values for each epoch, to visualize the accuracy and loss curves for the training and validation data.
# Plot training & validation accuracy values
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('Model accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(['Train', 'Test'], loc='upper left')
plt.show()

# Plot training & validation loss values
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('Model loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['Train', 'Test'], loc='upper left')
plt.show()
site/zh-cn/hub/tutorials/bangla_article_classifier.ipynb
tensorflow/docs-l10n
apache-2.0
Prediction

We can get the predictions for the validation data and check the confusion matrix to see the model's performance for each of the 5 classes. The predict method returns an N-dimensional array of probabilities for each class, which we convert to class labels with np.argmax.
y_pred = model.predict(validation_data) y_pred = np.argmax(y_pred, axis=1) samples = file_paths[0:3] for i, sample in enumerate(samples): f = open(sample) text = f.read() print(text[0:100]) print("True Class: ", sample.split("/")[0]) print("Predicted Class: ", dir_names[y_pred[i]]) f.close()
site/zh-cn/hub/tutorials/bangla_article_classifier.ipynb
tensorflow/docs-l10n
apache-2.0
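The confusion matrix mentioned above is not plotted in the cell itself; a possible sketch using the seaborn and matplotlib imports from the setup cell (an addition, not part of the original notebook) could look like this.

from sklearn.metrics import confusion_matrix

# y_pred comes from the prediction cell above; the true labels are the validation slice of `labels`.
cm = confusion_matrix(labels[train_size:], y_pred)
sns.heatmap(cm, annot=True, fmt="d", xticklabels=dir_names, yticklabels=dir_names)
plt.xlabel("Predicted")
plt.ylabel("True")
plt.show()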
Compare performance

Now we can take the correct labels for the validation data from labels and compare them with our predictions to get a classification_report.
y_true = np.array(labels[train_size:]) print(classification_report(y_true, y_pred, target_names=dir_names))
site/zh-cn/hub/tutorials/bangla_article_classifier.ipynb
tensorflow/docs-l10n
apache-2.0
<header class="w3-container w3-teal"> <img src="images/utfsm.png" alt="" align="left"/> <img src="images/inf.png" alt="" align="right"/> </header> <br/><br/><br/><br/><br/>

IWI131 Programación de Computadores

Sebastián Flores

http://progra.usm.cl/

https://www.github.com/usantamaria/iwi131

Dates

Activity 05: Wednesday, January 6, 2016 (8:00). Exam 3: Friday, January 8, 2016 (15:30). Make-up exam: Monday, January 18, 2016 (8:00).

Classes

Wed Dec 23: Text processing. Mon Dec 28: Writing and reading files. Wed Dec 30: Exam-style exercises. Mon Jan 4: Exam-style exercises. Wed Jan 6: Activity 5.

Tip: Download the course book, read, learn and practice.

What content will we learn?

Text processing.

Why will we learn this content?

Text processing is a crucial skill for solving a wide variety of problems.

Motivation

We want to know which words are the most common in a language. For that, we need to know how many times each word appears in a sentence. Write a function contar_palabras that, applied to a string, returns a dictionary with the words and the number of times each one appears in the sentence. Ignore spaces, punctuation and exclamation marks.

t = 'El sobre, en el aula, esta sobre el pupitre.' contar_palabras(t) {'el': 3, 'en': 1, 'esta': 1, 'aula': 1, 'sobre': 2, 'pupitre': 1}

How would you approach this difficult task?

Tips

Text processing relies on: Pattern recognition: you must recognize which patterns repeat and can be exploited to process the text. Use of specific functions: the string data type has a rich collection of methods that you must master to simplify text processing tasks.

Remember that every string is immutable, so applying these functions always produces a new string.

Text processing: Line break

The string \n is a single character that represents a line break.
print len("\n") a1 = 'casa\narbol\npatio' print a1 print len(a1) a2 = '''casa arbol patio''' print a2 print len(a2) print a1==a2 b = 'a\nb\nc' print b print len(b)
ipynb/23-ProcesamientoDeTexto/Texto.ipynb
usantamaria/iwi131
cc0-1.0
Text processing: Tab

The string \t is a single character that represents a tab.
print len("\t") a = 'casa\n\tarbol\n\tpatio' print a b = 'a\tb\tc' print b print len(b)
ipynb/23-ProcesamientoDeTexto/Texto.ipynb
usantamaria/iwi131
cc0-1.0
Text processing: Important

\n and \t appear frequently when we analyze files read from the hard disk.

Text processing: Replacing sections of a string

The function mi_string.replace(s1, s2) finds every occurrence of the substring s1 in mi_string and replaces it with s2. The function mi_string.replace(s1, s2, n) replaces only the first n occurrences of s1 in mi_string with s2. The function mi_string.replace(s1, s2) returns a new string; the original string is not modified.
palabra = 'cara'
palabra2 = palabra.replace('r', 's')
print palabra
print palabra2
print palabra2.replace('ca', 'pa')
print palabra2.replace('a', 'e', 1)
print palabra2.replace('c', '').replace('a', 'o') # Method chaining
print palabra
ipynb/23-ProcesamientoDeTexto/Texto.ipynb
usantamaria/iwi131
cc0-1.0
Text processing: Splitting a string

To split a string we have 2 options: * Split into characters, using list(mi_string), which produces a list with the characters of mi_string in order. * Split into words, using mi_string.split(s), which produces a list of "words" that were separated by the string s. The string s will not appear in any of the substrings of the list. By default, s is the space character " ".
oracion = 'taca taca' print list(oracion) print set(oracion) print oracion.split() print oracion.split("a") print oracion.split("t") print oracion.split("ac")
ipynb/23-ProcesamientoDeTexto/Texto.ipynb
usantamaria/iwi131
cc0-1.0
Text processing: Joining a list of strings

To join a list of strings, use the join method:

Python s.join(lista_de_strings)

It returns a single string in which the elements of the list have been "glued" together using the string s.
mi_lista = ['Ex', 'umbra', 'in', 'solem'] print ' '.join(mi_lista) print ''.join(mi_lista) print ' -> '.join(mi_lista) mi_conjunto = {'Ex', 'umbra', 'in', 'solem'} print mi_conjunto print ' '.join(mi_conjunto) print ''.join(mi_conjunto) print ' -> '.join(mi_conjunto)
ipynb/23-ProcesamientoDeTexto/Texto.ipynb
usantamaria/iwi131
cc0-1.0
Joining a list of strings

Note: join only works on a list of strings. If you want to join numbers, you must first convert them to strings.
lista_de_strings = ["1", "2", "3"] print ", ".join(lista_de_strings) lista_de_ints = [1, 2, 3] print ", ".join(lista_de_ints) lista_de_ints = range(10) lista_de_strings = [] for x in lista_de_ints: lista_de_strings.append(str(x)) print ", ".join(lista_de_strings)
ipynb/23-ProcesamientoDeTexto/Texto.ipynb
usantamaria/iwi131
cc0-1.0
Text processing: Joining a sequence of values (not strings), v2

It is also possible to use map, which produces a new list by applying the function passed as an argument to each element of the original list.
numeros = range(10)
print numeros

def f(x):
    return 2.*x + 1./(x+1)

print map(str, numeros)
print map(float, numeros)
print map(f, numeros)

print ', '.join(map(str, numeros))
# print "-".join("1,2,3,4".split(","))
ipynb/23-ProcesamientoDeTexto/Texto.ipynb
usantamaria/iwi131
cc0-1.0
Text processing: Interpolating values by position
s = 'Soy {0} y vivo en {1} {2}' print s.format('Perico', 'Valparaiso') print s.format('Erika', 'Berlin') print s.format('Wang Dawei', 'Beijing')
ipynb/23-ProcesamientoDeTexto/Texto.ipynb
usantamaria/iwi131
cc0-1.0
Text processing: Interpolating values by name
s = '{nombre} estudia en la {u}'

# Arguments can be passed in order
print s.format(nombre='Perico', u='UTFSM')
print s.format(nombre='Fulana', u='PUCV')

# It is also possible to change the order
print s.format(u='UPLA', nombre='Yayita')

# Or with magic (advanced knowledge)
d = {"nombre":"Mago Merlin", "u":"Camelot University"}
print s.format(**d)
ipynb/23-ProcesamientoDeTexto/Texto.ipynb
usantamaria/iwi131
cc0-1.0
Text processing: Upper and lower case

To change the capitalization of a string, the following methods are available: .upper(): ALL UPPERCASE. .lower(): all lowercase. .swapcase(): swaps the existing capitalization. .capitalize(): capitalizes only the first letter of the string.
palabra = '1. raMo de ProGra' print palabra.upper() print palabra.lower() print palabra.swapcase() print palabra.capitalize()
ipynb/23-ProcesamientoDeTexto/Texto.ipynb
usantamaria/iwi131
cc0-1.0
Text processing: Motivating example

We want to know which words are the most common in a language. For that, we need to know how many times each word appears in a sentence. Write a function contar_palabras that, applied to a string, returns a dictionary with the words and the number of times each one appears in the sentence. Ignore spaces, punctuation and exclamation marks.

t = 'El sobre, en el aula, esta sobre el pupitre.' contar_palabras(t) {'el': 3, 'en': 1, 'esta': 1, 'aula': 1, 'sobre': 2, 'pupitre': 1}

How would you approach this difficult task now?

Text processing: Tips

Break the problem into smaller tasks: * How do we remove the unwanted symbols? * How do we split the words? * How do we count the words?
def contar_palabras(s): return s t = 'El sobre, en el aula, esta sobre el pupitre.' contar_palabras(t)
ipynb/23-ProcesamientoDeTexto/Texto.ipynb
usantamaria/iwi131
cc0-1.0
Text processing: Motivation: Solution

INPUT: t = 'El sobre, en el aula, esta sobre el pupitre.' contar_palabras(t)

OUTPUT: {'el': 3, 'en': 1, 'esta': 1, 'aula': 1, 'sobre': 2, 'pupitre': 1}
def contar_palabras(s):
    s = s.lower()
    for signo in [",",".",";","!","?","'",'"']:
        s = s.replace(signo,"")
    palabras = s.split()
    contador = {}
    for palabra_sucia in palabras:
        palabra = palabra_sucia
        if palabra in contador:
            contador[palabra] += 1 # Increment the count
        else:
            contador[palabra] = 1 # Initialize the count
    return contador

t = 'El sobre, en el aula, !! Esta sobre el pupitre.'
contar_palabras(t)
ipynb/23-ProcesamientoDeTexto/Texto.ipynb
usantamaria/iwi131
cc0-1.0
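Once the counter is built, ranking the words by frequency answers the original motivating question. A minimal sketch, valid in both Python 2 and 3 and assuming the contar_palabras defined above:

conteo = contar_palabras('El sobre, en el aula, esta sobre el pupitre.')
# Sort the (word, count) pairs from most to least frequent.
ranking = sorted(conteo.items(), key=lambda par: par[1], reverse=True)
print(ranking[:3])  # e.g. [('el', 3), ('sobre', 2), ...]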
Text processing: Exercise 2

Write a program with the following behavior:

INPUT: Numero de alumnos: 3 Nombre alumno 1: Isaac Newton Ingrese las notas de Isaac: 98 94 77 Nombre alumno 2: Nikola Tesla Ingrese las notas de Nikola: 100 68 94 88 Nombre alumno 3: Albert Einstein Ingrese las notas de Albert: 83 85

OUTPUT: El promedio de Isaac es 89.67 El promedio de Nikola es 87.50 El promedio de Albert es 84.00

Text processing: Exercise 2: Analysis

Which tasks are needed?

Text processing: Exercise 2: Solution

The tasks to perform are: * Read the number of students. * For each student, read their name and grades. * Process the grades to obtain the average. * Store the name and grades. * Split the first name from the last name. * Print the results appropriately.
# Students exercise: solution
# Read and store the data
N = int(raw_input("Numero de alumnos: "))
notas_alumnos = []
for i in range(N):
    nombre = raw_input("Nombre alumno {0}:".format(i+1))
    nombre_pila = nombre.split(" ")[0]
    notas_str = raw_input("Ingrese las notas de {0}: ".format(nombre_pila))
    notas_int = []
    for nota in notas_str.split(" "):
        notas_int.append(int(nota))
    promedio = sum(notas_int)/float(len(notas_int))
    notas_alumnos.append( (nombre_pila, promedio) )

# Print the averages
for nombre, promedio in notas_alumnos:
    print "El promedio de {0} es {1:.2f}".format(nombre, promedio)
ipynb/23-ProcesamientoDeTexto/Texto.ipynb
usantamaria/iwi131
cc0-1.0
Text processing: DNA processing

A DNA strand is a sequence of nitrogenous bases called adenine, cytosine, thymine and guanine. In a program, a strand is represented as a string of characters 'a', 'c', 't' and 'g'. Every strand has a complementary strand, obtained by exchanging the adenines with the thymines and the cytosines with the guanines:

cadena = 'cagcccatgaggcagggtg' complemento = 'gtcgggtactccgtcccac'

DNA processing

1.1 DNA processing: Random sequence

Write the function cadena_al_azar(n) that generates a random DNA strand of length n.

Example of use: cadena_al_azar(10) may return 'acgtccgcct', 'tgttcgcatt', etc.

Hint: from random import choice choice('atcg') returns one of the letters of "atcg" at random.

DNA processing

1.1 Random sequence: Analysis

Which tasks are needed?
# Function definition
from random import choice

def cadena_al_azar(n):
    bases_n = ''
    for i in range(n):
        base = choice('atgc')
        bases_n += base
    return bases_n

# Usage examples
print cadena_al_azar(1)
print cadena_al_azar(1)
print cadena_al_azar(1)
print cadena_al_azar(1)
print cadena_al_azar(10)
print cadena_al_azar(10)
print cadena_al_azar(10)
print cadena_al_azar(10)
ipynb/23-ProcesamientoDeTexto/Texto.ipynb
usantamaria/iwi131
cc0-1.0
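The complementary strand described above is not required in part 1.1, but it can be built with the replace method covered in this class. A minimal sketch; the function name complementaria is my own choice and not part of the original exercise statement:

def complementaria(cadena):
    # Swap a<->t and c<->g; uppercase placeholders avoid overwriting earlier replacements.
    temporal = cadena.replace('a', 'T').replace('t', 'A').replace('c', 'G').replace('g', 'C')
    return temporal.lower()

print(complementaria('cagcccatgaggcagggtg'))  # expected: gtcgggtactccgtcccac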