Dataset columns: markdown, code, path, repo_name, license
Before you begin

GPU runtime: This tutorial does not require a GPU runtime.

Set up your Google Cloud project: The following steps are required, regardless of your notebook environment.
1. Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
2. Make sure that billing is enabled for your project.
3. Enable the following APIs: Vertex AI APIs, Compute Engine APIs, and Cloud Storage.
4. If you are running this notebook locally, you will need to install the Cloud SDK.
5. Enter your project ID in the cell below. Then run the cell to make sure the Cloud SDK uses the right project for all the commands in this notebook.

Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $.
PROJECT_ID = "[your-project-id]"  # @param {type:"string"}

if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
    # Get your GCP project id from gcloud
    shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null
    PROJECT_ID = shell_output[0]

print("Project ID:", PROJECT_ID)

! gcloud config set project $PROJECT_ID
notebooks/community/sdk/sdk_automl_image_classification_batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Region

You can also change the REGION variable, which is used for operations throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.
- Americas: us-central1
- Europe: europe-west4
- Asia Pacific: asia-east1

You may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services. Learn more about Vertex AI regions.
REGION = "us-central1" # @param {type: "string"}
notebooks/community/sdk/sdk_automl_image_classification_batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Timestamp If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.
from datetime import datetime

TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
notebooks/community/sdk/sdk_automl_image_classification_batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Authenticate your Google Cloud account

If you are using Google Cloud Notebooks, your environment is already authenticated. Skip this step.

If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via OAuth.

Otherwise, follow these steps:
1. In the Cloud Console, go to the Create service account key page.
2. Click Create service account.
3. In the Service account name field, enter a name, and click Create.
4. In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex" into the filter box, and select Vertex Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin.
5. Click Create. A JSON file that contains your key downloads to your local environment.
6. Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.

import os
import sys

# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
    if "google.colab" in sys.modules:
        from google.colab import auth as google_auth

        google_auth.authenticate_user()

    # If you are running this notebook locally, replace the string below with the
    # path to your service account key and run this cell to authenticate your GCP
    # account.
    elif not os.getenv("IS_TESTING"):
        %env GOOGLE_APPLICATION_CREDENTIALS ''
notebooks/community/sdk/sdk_automl_image_classification_batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Create a Cloud Storage bucket The following steps are required, regardless of your notebook environment. When you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions. Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"} if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]": BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
notebooks/community/sdk/sdk_automl_image_classification_batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
! gsutil mb -l $REGION $BUCKET_NAME
notebooks/community/sdk/sdk_automl_image_classification_batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Finally, validate access to your Cloud Storage bucket by examining its contents:
! gsutil ls -al $BUCKET_NAME
notebooks/community/sdk/sdk_automl_image_classification_batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Set up variables Next, set up some variables used throughout the tutorial. Import libraries and define constants
import google.cloud.aiplatform as aip
notebooks/community/sdk/sdk_automl_image_classification_batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Initialize Vertex SDK for Python Initialize the Vertex SDK for Python for your project and corresponding bucket.
aip.init(project=PROJECT_ID, staging_bucket=BUCKET_NAME)
notebooks/community/sdk/sdk_automl_image_classification_batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Tutorial Now you are ready to start creating your own AutoML image classification model. Location of Cloud Storage training data. Now set the variable IMPORT_FILE to the location of the CSV index file in Cloud Storage.
IMPORT_FILE = ( "gs://cloud-samples-data/vision/automl_classification/flowers/all_data_v2.csv" )
notebooks/community/sdk/sdk_automl_image_classification_batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Quick peek at your data This tutorial uses a version of the Flowers dataset that is stored in a public Cloud Storage bucket, using a CSV index file. Start by doing a quick peek at the data. You count the number of examples by counting the number of rows in the CSV index file (wc -l) and then peek at the first few rows.
if "IMPORT_FILES" in globals(): FILE = IMPORT_FILES[0] else: FILE = IMPORT_FILE count = ! gsutil cat $FILE | wc -l print("Number of Examples", int(count[0])) print("First 10 rows") ! gsutil cat $FILE | head
notebooks/community/sdk/sdk_automl_image_classification_batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Create the Dataset

Next, create the Dataset resource using the create method for the ImageDataset class, which takes the following parameters:
- display_name: The human-readable name for the Dataset resource.
- gcs_source: A list of one or more dataset index files to import the data items into the Dataset resource.
- import_schema_uri: The data labeling schema for the data items.

This operation may take several minutes.
dataset = aip.ImageDataset.create(
    display_name="Flowers" + "_" + TIMESTAMP,
    gcs_source=[IMPORT_FILE],
    import_schema_uri=aip.schema.dataset.ioformat.image.single_label_classification,
)

print(dataset.resource_name)
notebooks/community/sdk/sdk_automl_image_classification_batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Create and run training pipeline

To train an AutoML model, you perform two steps: 1) create a training pipeline, and 2) run the pipeline.

Create training pipeline

An AutoML training pipeline is created with the AutoMLImageTrainingJob class, with the following parameters:
- display_name: The human-readable name for the TrainingJob resource.
- prediction_type: The type of task to train the model for.
  - classification: An image classification model.
  - object_detection: An image object detection model.
- multi_label: If a classification task, whether single (False) or multi-labeled (True).
- model_type: The type of model for deployment.
  - CLOUD: Deployment on Google Cloud.
  - CLOUD_HIGH_ACCURACY_1: Optimized for accuracy over latency for deployment on Google Cloud.
  - CLOUD_LOW_LATENCY_1: Optimized for latency over accuracy for deployment on Google Cloud.
  - MOBILE_TF_VERSATILE_1: Deployment on an edge device.
  - MOBILE_TF_HIGH_ACCURACY_1: Optimized for accuracy over latency for deployment on an edge device.
  - MOBILE_TF_LOW_LATENCY_1: Optimized for latency over accuracy for deployment on an edge device.
- base_model: (optional) Transfer learning from an existing Model resource -- supported for image classification only.

The instantiated object is the DAG (directed acyclic graph) for the training job.
dag = aip.AutoMLImageTrainingJob(
    display_name="flowers_" + TIMESTAMP,
    prediction_type="classification",
    multi_label=False,
    model_type="CLOUD",
    base_model=None,
)

print(dag)
notebooks/community/sdk/sdk_automl_image_classification_batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Run the training pipeline

Next, you run the DAG to start the training job by invoking the method run, with the following parameters:
- dataset: The Dataset resource to train the model.
- model_display_name: The human-readable name for the trained model.
- training_fraction_split: The percentage of the dataset to use for training.
- test_fraction_split: The percentage of the dataset to use for test (holdout data).
- validation_fraction_split: The percentage of the dataset to use for validation.
- budget_milli_node_hours: (optional) Maximum training time specified in milli node hours (1,000 = one node hour).
- disable_early_stopping: If False, training may be completed before using the entire budget if the service believes it cannot further improve on the model objective measurements.

When it completes, the run method returns the Model resource. The execution of the training pipeline will take up to 20 minutes.
model = dag.run(
    dataset=dataset,
    model_display_name="flowers_" + TIMESTAMP,
    training_fraction_split=0.8,
    validation_fraction_split=0.1,
    test_fraction_split=0.1,
    budget_milli_node_hours=8000,
    disable_early_stopping=False,
)
notebooks/community/sdk/sdk_automl_image_classification_batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Review model evaluation scores

After your model has finished training, you can review the evaluation scores for it. First, you need to get a reference to the new model. As with datasets, you can either use the reference to the model variable you created when you trained the model, or you can list all of the models in your project.
# Get model resource ID
models = aip.Model.list(filter="display_name=flowers_" + TIMESTAMP)

# Get a reference to the Model Service client
client_options = {"api_endpoint": f"{REGION}-aiplatform.googleapis.com"}
model_service_client = aip.gapic.ModelServiceClient(client_options=client_options)

model_evaluations = model_service_client.list_model_evaluations(
    parent=models[0].resource_name
)
model_evaluation = list(model_evaluations)[0]
print(model_evaluation)
notebooks/community/sdk/sdk_automl_image_classification_batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Send a batch prediction request

Send a batch prediction request to your model.

Get test item(s)

Now send a batch prediction request to your Vertex model. You will use arbitrary examples out of the dataset as test items. Don't be concerned that the examples were likely used in training the model -- we just want to demonstrate how to make a prediction.
test_items = !gsutil cat $IMPORT_FILE | head -n2

if len(str(test_items[0]).split(",")) == 3:
    _, test_item_1, test_label_1 = str(test_items[0]).split(",")
    _, test_item_2, test_label_2 = str(test_items[1]).split(",")
else:
    test_item_1, test_label_1 = str(test_items[0]).split(",")
    test_item_2, test_label_2 = str(test_items[1]).split(",")

print(test_item_1, test_label_1)
print(test_item_2, test_label_2)
notebooks/community/sdk/sdk_automl_image_classification_batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Copy test item(s) For the batch prediction, copy the test items over to your Cloud Storage bucket.
file_1 = test_item_1.split("/")[-1]
file_2 = test_item_2.split("/")[-1]

! gsutil cp $test_item_1 $BUCKET_NAME/$file_1
! gsutil cp $test_item_2 $BUCKET_NAME/$file_2

test_item_1 = BUCKET_NAME + "/" + file_1
test_item_2 = BUCKET_NAME + "/" + file_2
notebooks/community/sdk/sdk_automl_image_classification_batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Make the batch input file

Now make a batch input file, which you will store in your Cloud Storage bucket. The batch input file can be either CSV or JSONL. You will use JSONL in this tutorial. For a JSONL file, you make one dictionary entry per line for each data item (instance). The dictionary contains the key/value pairs:
- content: The Cloud Storage path to the image.
- mime_type: The content type. In our example, it is a jpeg file.

For example: {'content': '[your-bucket]/file1.jpg', 'mime_type': 'jpeg'}
import json

import tensorflow as tf

gcs_input_uri = BUCKET_NAME + "/test.jsonl"
with tf.io.gfile.GFile(gcs_input_uri, "w") as f:
    data = {"content": test_item_1, "mime_type": "image/jpeg"}
    f.write(json.dumps(data) + "\n")
    data = {"content": test_item_2, "mime_type": "image/jpeg"}
    f.write(json.dumps(data) + "\n")

print(gcs_input_uri)
! gsutil cat $gcs_input_uri
notebooks/community/sdk/sdk_automl_image_classification_batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Make the batch prediction request

Now that your Model resource is trained, you can make a batch prediction by invoking the batch_predict() method, with the following parameters:
- job_display_name: The human-readable name for the batch prediction job.
- gcs_source: A list of one or more batch request input files.
- gcs_destination_prefix: The Cloud Storage location for storing the batch prediction results.
- sync: If set to True, the call will block while waiting for the asynchronous batch job to complete.
batch_predict_job = model.batch_predict(
    job_display_name="flowers_" + TIMESTAMP,
    gcs_source=gcs_input_uri,
    gcs_destination_prefix=BUCKET_NAME,
    sync=False,
)

print(batch_predict_job)
notebooks/community/sdk/sdk_automl_image_classification_batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Wait for completion of batch prediction job Next, wait for the batch job to complete. Alternatively, one can set the parameter sync to True in the batch_predict() method to block until the batch prediction job is completed.
batch_predict_job.wait()
notebooks/community/sdk/sdk_automl_image_classification_batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
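For reference, here is a minimal sketch (reusing the same model, gcs_input_uri, BUCKET_NAME, and TIMESTAMP variables from above) of the blocking form of the same call; with sync=True, batch_predict() does not return until the job finishes, so no separate wait() call is needed:

```python
# Blocking variant of the batch prediction request (sketch, not run above).
batch_predict_job = model.batch_predict(
    job_display_name="flowers_" + TIMESTAMP,
    gcs_source=gcs_input_uri,
    gcs_destination_prefix=BUCKET_NAME,
    sync=True,  # block until the batch prediction job completes
)
```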
Get the predictions

Next, get the results from the completed batch prediction job. The results are written to the Cloud Storage output bucket you specified in the batch prediction request. You call the method iter_outputs() to get a list of each Cloud Storage file generated with the results. Each file contains one or more prediction requests and their responses in a JSON format:
- content: The prediction request.
- prediction: The prediction response.
- ids: The internally assigned unique identifiers for each prediction request.
- displayNames: The class names for each class label.
- confidences: The predicted confidence, between 0 and 1, per class label.
import json

import tensorflow as tf

bp_iter_outputs = batch_predict_job.iter_outputs()

prediction_results = list()
for blob in bp_iter_outputs:
    if blob.name.split("/")[-1].startswith("prediction"):
        prediction_results.append(blob.name)

tags = list()
for prediction_result in prediction_results:
    gfile_name = f"gs://{bp_iter_outputs.bucket.name}/{prediction_result}"
    with tf.io.gfile.GFile(name=gfile_name, mode="r") as gfile:
        for line in gfile.readlines():
            line = json.loads(line)
            print(line)
            break
notebooks/community/sdk/sdk_automl_image_classification_batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
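As a small illustration of working with the fields described above, the hypothetical helper below (not part of the original notebook) assumes each parsed line exposes a prediction entry with parallel displayNames and confidences lists, and picks the most confident label:

```python
# Hypothetical helper: `result` is one parsed JSON line from the prediction files.
def top_label(result):
    prediction = result["prediction"]
    names = prediction["displayNames"]
    confidences = prediction["confidences"]
    best = max(range(len(names)), key=lambda i: confidences[i])
    return names[best], confidences[best]

# Example usage (assuming `line` was parsed with json.loads as in the cell above):
# label, confidence = top_label(line)
```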
Cleaning up

To clean up all Google Cloud resources used in this project, you can delete the Google Cloud project you used for the tutorial. Otherwise, you can delete the individual resources you created in this tutorial:
- Dataset
- Pipeline
- Model
- Endpoint
- AutoML Training Job
- Batch Job
- Custom Job
- Hyperparameter Tuning Job
- Cloud Storage Bucket
delete_all = True

if delete_all:
    # Delete the dataset using the Vertex dataset object
    try:
        if "dataset" in globals():
            dataset.delete()
    except Exception as e:
        print(e)

    # Delete the model using the Vertex model object
    try:
        if "model" in globals():
            model.delete()
    except Exception as e:
        print(e)

    # Delete the endpoint using the Vertex endpoint object
    try:
        if "endpoint" in globals():
            endpoint.delete()
    except Exception as e:
        print(e)

    # Delete the AutoML or Pipeline training job
    try:
        if "dag" in globals():
            dag.delete()
    except Exception as e:
        print(e)

    # Delete the custom training job
    try:
        if "job" in globals():
            job.delete()
    except Exception as e:
        print(e)

    # Delete the batch prediction job using the Vertex batch prediction object
    try:
        if "batch_predict_job" in globals():
            batch_predict_job.delete()
    except Exception as e:
        print(e)

    # Delete the hyperparameter tuning job using the Vertex hyperparameter tuning object
    try:
        if "hpt_job" in globals():
            hpt_job.delete()
    except Exception as e:
        print(e)

    if "BUCKET_NAME" in globals():
        ! gsutil rm -r $BUCKET_NAME
notebooks/community/sdk/sdk_automl_image_classification_batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Runtime Analysis, using the problem of finding the nth Fibonacci number as a computational object to think with
%pylab inline

# Import libraries
from __future__ import absolute_import, division, print_function

import math
from time import time

import matplotlib.pyplot as pyplt
CS/Part_1_Complexity_RunTimeAnalysis.ipynb
omoju/Fundamentals
gpl-3.0
Fibonacci

Excerpt from Algorithms by S. Dasgupta, C.H. Papadimitriou, and U.V. Vazirani

Fibonacci is most widely known for his famous sequence of numbers $0, 1, 1, 2, 3, 5, 8, 13, 21, 34, \ldots,$ each the sum of its two immediate predecessors. More formally, the Fibonacci numbers $F_n$ are generated by the simple rule

$$F_n = \begin{cases} F_{n-1} + F_{n-2}, & \mbox{if } n > 1 \\ 1, & \mbox{if } n = 1 \\ 0, & \mbox{if } n = 0 \end{cases}$$

No other sequence of numbers has been studied as extensively, or applied to more fields: biology, demography, art, architecture, music, to name just a few. And, together with the powers of 2, it is computer science's favorite sequence.

Tree Recursion

A very simple way to calculate the nth Fibonacci number is to use a recursive algorithm. Here is a recursive algorithm for computing the nth Fibonacci number:

```python
def fib(n):
    if n == 0 or n == 1:
        return n
    else:
        return fib(n-2) + fib(n-1)
```

This algorithm in particular is done using tree recursion.
from IPython.display import YouTubeVideo
YouTubeVideo('ls0GsJyLVLw')

def fib(n):
    if n == 0 or n == 1:
        return n
    else:
        return fib(n-2) + fib(n-1)

fib(5)
CS/Part_1_Complexity_RunTimeAnalysis.ipynb
omoju/Fundamentals
gpl-3.0
Whenever we have an algorithm, there are three questions we always ask about it: Is it correct? How much time does it take, as a function of n? And can we do better?

1. Correctness

For this question, the answer is yes, because it is almost a line-by-line implementation of the definition of the Fibonacci sequence.

2. Time complexity as a function of n

Let $T(n)$ be the number of computer steps needed to compute $fib(n)$; what can we say about this function? For starters, if $n$ is less than 2, the procedure halts almost immediately, after just a couple of steps. Therefore,

$$T(n) \le 2 \quad \mbox{for} \quad n \le 1.$$

For larger values of $n$, there are two recursive invocations of $fib$, taking time $T(n-1)$ and $T(n-2)$, respectively, plus three computer steps (checks on the value of $n$ and a final addition). Therefore,

$$T(n) = T(n-1) + T(n-2) + 3 \quad \mbox{for} \quad n > 1.$$

Comparing this to the recurrence relation for $F_n$, we immediately see that $T(n) \ge F_n$. This is very bad news: the running time of the algorithm grows as fast as the Fibonacci numbers! $T(n)$ is exponential in $n$, which implies that the algorithm is impractically slow except for very small values of $n$.

Let's be a little more concrete about just how bad exponential time is. To compute $F_{200}$, the $fib$ algorithm executes $T(200) \ge F_{200} \ge 2^{138}$ elementary computer steps. How long this actually takes depends, of course, on the computer used. At this time, the fastest computer in the world is the NEC Earth Simulator, which clocks 40 trillion steps per second. Even on this machine, $fib(200)$ would take at least $2^{92}$ seconds. This means that, if we start the computation today, it would still be going long after the sun turns into a red giant star.
# This function provides a way to track function calls
def count(f):
    def counted(n):
        counted.call_count += 1
        return f(n)
    counted.call_count = 0
    return counted

fib = count(fib)

t0 = time()
n = 5
fib(n)
print('This recursive implementation of fib(', n, ') took', round(time() - t0, 4), 'secs')
print('And {0} calls to the function'.format(fib.call_count))

t0 = time()
n = 30
fib(n)
print('This recursive implementation of fib(', n, ') took', round(time() - t0, 4), 'secs')
print('And {0} calls to the function'.format(fib.call_count))
CS/Part_1_Complexity_RunTimeAnalysis.ipynb
omoju/Fundamentals
gpl-3.0
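As an added note (not part of the original excerpt), the claim that $T(n)$ grows as fast as the Fibonacci numbers can be made concrete with Binet's closed form,

$$F_n = \frac{\varphi^n - \psi^n}{\sqrt{5}}, \qquad \varphi = \frac{1+\sqrt{5}}{2} \approx 1.618, \quad \psi = \frac{1-\sqrt{5}}{2},$$

so $F_n \approx \varphi^n/\sqrt{5} \approx 2^{0.694\,n}$. Since $T(n) \ge F_n$, the running time of the naive recursion is exponential in $n$.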
3. Can we do better? A polynomial algorithm for $fib$

Let's try to understand why $fib$ is so slow. fib.call_count shows the count of recursive invocations triggered by a single call to $fib(5)$, which is 15. If you sketched it out, you would notice that many computations are repeated! A more sensible scheme would store the intermediate results -- the values $F_0, F_1, \ldots, F_{n-1}$ -- as soon as they become known. Let's do exactly that through memoization. Note that you can also do this by writing a polynomial algorithm.

Memoization

Tree-recursive computational processes can often be made more efficient through memoization, a powerful technique for increasing the efficiency of recursive functions that repeat computation. A memoized function will store the return value for any arguments it has previously received. A second call to fib(30) would not re-compute the return value recursively, but instead return the existing one that has already been constructed. Memoization can be expressed naturally as a higher-order function, which can also be used as a decorator. The definition below creates a cache of previously computed results, indexed by the arguments from which they were computed. The use of a dictionary requires that the argument to the memoized function be immutable.
def memo(f):
    cache = {}
    def memoized(n):
        if n not in cache:
            cache[n] = f(n)  # Make a mapping between the key "n" and the return value of f(n)
        return cache[n]
    return memoized

fib = memo(fib)

t0 = time()
n = 400
fib(n)
print('This memoized implementation of fib(', n, ') took', round(time() - t0, 4), 'secs')

t0 = time()
n = 300
fib(n)
print('This memoized implementation of fib(', n, ') took', round(time() - t0, 4), 'secs')

# Here is the polynomial algorithm for fibonacci sequence
def fib2(n):
    if n == 0:
        return 0
    f = [0] * (n+1)  # create an array f[0 . . . n]
    f[0], f[1] = 0, 1
    for i in range(2, n+1):
        f[i] = f[i-1] + f[i-2]
    return f[n]

fib2 = count(fib2)

t0 = time()
n = 3000
fib2(n)
print('This polynomial implementation of fib2(', n, ') took', round(time() - t0, 4), 'secs')

fib2.call_count
CS/Part_1_Complexity_RunTimeAnalysis.ipynb
omoju/Fundamentals
gpl-3.0
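As an aside (not in the original notebook), Python's standard library provides the same memoization idea via functools.lru_cache, used as a decorator:

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # cache every previously computed value
def fib_cached(n):
    if n == 0 or n == 1:
        return n
    return fib_cached(n - 2) + fib_cached(n - 1)

fib_cached(300)  # fast: each value of n is computed only once
```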
How long does $fib2$ take? - The inner loop consists of a single computer step and is executed $n − 1$ times. - Therefore the number of computer steps used by $fib2$ is linear in $n$. From exponential we are down to polynomial, a huge breakthrough in running time. It is now perfectly reasonable to compute $F_{200}$ or even $F_{200,000}$
fib2(200)
CS/Part_1_Complexity_RunTimeAnalysis.ipynb
omoju/Fundamentals
gpl-3.0
Instead of reporting that an algorithm takes, say, $ 5n^3 + 4n + 3$ steps on an input of size $n$, it is much simpler to leave out lower-order terms such as $4n$ and $3$ (which become insignificant as $n$ grows), and even the detail of the coefficient $5$ in the leading term (computers will be five times faster in a few years anyway), and just say that the algorithm takes time $O(n^3)$ (pronounced “big oh of $n^3$”). It is time to define this notation precisely. In what follows, think of $f(n)$ and $g(n)$ as the running times of two algorithms on inputs of size $n$. Let $f(n)$ and $g(n)$ be functions from positive integers to positive reals. We say $f = O(g)$ (which means that “$f$ grows no faster than $g$”) if there is a constant $c > 0$ such that ${f(n) ≤ c · g(n)}$. Saying $f = O(g)$ is a very loose analog of “$f ≤ g$.” It differs from the usual notion of ≤ because of the constant c, so that for instance $10n = O(n)$. This constant also allows us to disregard what happens for small values of $n$. Example: For example, suppose we are choosing between two algorithms for a particular computational task. One takes $f_1(n) = n^2$ steps, while the other takes $f_2(n) = 2n + 20$ steps. Which is better? Well, this depends on the value of $n$. For $n ≤ 5$, $f_1(n)$ is smaller; thereafter, $f_2$ is the clear winner. In this case, $f_2$ scales much better as $n$ grows, and therefore it is superior.
t = arange(0, 15, 1)
f1 = t * t
f2 = 2*t + 20

pyplt.title('Exponential time vs Linear time')
plot(t, f1, t, f2)
pyplt.annotate('$n^2$', xy=(8, 1), xytext=(10, 108))
pyplt.annotate('$2n + 20$', xy=(5, 1), xytext=(10, 45))
pyplt.xlabel('n')
pyplt.ylabel('Run time')
pyplt.grid(True)
CS/Part_1_Complexity_RunTimeAnalysis.ipynb
omoju/Fundamentals
gpl-3.0
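A quick numeric check of the crossover described above (a throwaway snippet, not part of the original notebook):

```python
# Compare f1(n) = n**2 and f2(n) = 2n + 20 around the claimed crossover at n = 5.
for n in [1, 3, 5, 6, 10, 100]:
    f1, f2 = n**2, 2*n + 20
    print(n, f1, f2, "f1 smaller" if f1 < f2 else "f2 smaller or equal")
```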
Config Automatically discover the paths to various data folders and compose the project structure.
project = kg.Project.discover()
notebooks/unused/feature-oofp-nn-lstm-with-activations.ipynb
YuriyGuts/kaggle-quora-question-pairs
mit
Identifier for storing these features on disk and referring to them later.
feature_list_id = 'oofp_nn_lstm_with_activations'
notebooks/unused/feature-oofp-nn-lstm-with-activations.ipynb
YuriyGuts/kaggle-quora-question-pairs
mit
Make subsequent NN runs reproducible.
RANDOM_SEED = 42
np.random.seed(RANDOM_SEED)
notebooks/unused/feature-oofp-nn-lstm-with-activations.ipynb
YuriyGuts/kaggle-quora-question-pairs
mit
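Note that np.random.seed only fixes NumPy's generator. If full reproducibility is needed, the Python and TensorFlow/Keras generators may also have to be seeded; the sketch below assumes a TF 1.x-era backend (matching the Keras code later in this notebook):

```python
import random
import tensorflow as tf  # assumption: TF 1.x backend

random.seed(RANDOM_SEED)
tf.set_random_seed(RANDOM_SEED)  # in TF 2.x this would be tf.random.set_seed
```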
Read data Word embedding lookup matrix.
embedding_matrix = kg.io.load(project.aux_dir + 'fasttext_vocab_embedding_matrix.pickle')
notebooks/unused/feature-oofp-nn-lstm-with-activations.ipynb
YuriyGuts/kaggle-quora-question-pairs
mit
Padded sequences of word indices for every question.
X_train_q1 = kg.io.load(project.preprocessed_data_dir + 'sequences_q1_fasttext_train.pickle')
X_train_q2 = kg.io.load(project.preprocessed_data_dir + 'sequences_q2_fasttext_train.pickle')

X_test_q1 = kg.io.load(project.preprocessed_data_dir + 'sequences_q1_fasttext_test.pickle')
X_test_q2 = kg.io.load(project.preprocessed_data_dir + 'sequences_q2_fasttext_test.pickle')

y_train = kg.io.load(project.features_dir + 'y_train.pickle')
notebooks/unused/feature-oofp-nn-lstm-with-activations.ipynb
YuriyGuts/kaggle-quora-question-pairs
mit
Word embedding properties.
EMBEDDING_DIM = embedding_matrix.shape[-1]
VOCAB_LENGTH = embedding_matrix.shape[0]
MAX_SEQUENCE_LENGTH = X_train_q1.shape[-1]

print(EMBEDDING_DIM, VOCAB_LENGTH, MAX_SEQUENCE_LENGTH)
notebooks/unused/feature-oofp-nn-lstm-with-activations.ipynb
YuriyGuts/kaggle-quora-question-pairs
mit
Define models
def zero_loss(y_true, y_pred):
    return K.zeros((1,))


def create_model_question_branch():
    input_q = Input(shape=(MAX_SEQUENCE_LENGTH,), dtype='int32')

    embedding_q = Embedding(
        VOCAB_LENGTH,
        EMBEDDING_DIM,
        weights=[embedding_matrix],
        input_length=MAX_SEQUENCE_LENGTH,
        trainable=False,
    )(input_q)

    timedist_q = TimeDistributed(Dense(
        EMBEDDING_DIM,
        activation='relu',
    ))(embedding_q)

    lambda_q = Lambda(
        lambda x: K.max(x, axis=1),
        output_shape=(EMBEDDING_DIM, )
    )(timedist_q)

    output_q = lambda_q
    return input_q, output_q


def create_model(params):
    embedding_layer = Embedding(
        VOCAB_LENGTH,
        EMBEDDING_DIM,
        weights=[embedding_matrix],
        input_length=MAX_SEQUENCE_LENGTH,
        trainable=False,
    )
    lstm_layer = LSTM(
        params['num_lstm'],
        dropout=params['lstm_dropout_rate'],
        recurrent_dropout=params['lstm_dropout_rate'],
    )

    input_q1 = Input(shape=(MAX_SEQUENCE_LENGTH,), dtype='int32')
    embedded_sequences_1 = embedding_layer(input_q1)
    x1 = lstm_layer(embedded_sequences_1)

    input_q2 = Input(shape=(MAX_SEQUENCE_LENGTH,), dtype='int32')
    embedded_sequences_2 = embedding_layer(input_q2)
    y1 = lstm_layer(embedded_sequences_2)

    features = Concatenate(name='feature_output')([x1, y1])
    dropout_feat = Dropout(params['dense_dropout_rate'])(features)
    bn_feat = BatchNormalization()(dropout_feat)

    dense_1 = Dense(params['num_dense'], activation='relu')(bn_feat)
    dropout_1 = Dropout(params['dense_dropout_rate'])(dense_1)
    bn_1 = BatchNormalization()(dropout_1)

    output = Dense(1, activation='sigmoid', name='target_output')(bn_1)

    model = Model(
        inputs=[input_q1, input_q2],
        outputs=[output, features],
    )
    model.compile(
        loss={'target_output': 'binary_crossentropy', 'feature_output': zero_loss},
        loss_weights={'target_output': 1.0, 'feature_output': 0.0},
        optimizer='nadam',
        metrics=None,
    )
    return model


def predict(model, X_q1, X_q2):
    """
    Mirror the pairs, compute two separate predictions, and average them.
    """
    y1 = model.predict([X_q1, X_q2], batch_size=1024, verbose=1).reshape(-1)
    y2 = model.predict([X_q2, X_q1], batch_size=1024, verbose=1).reshape(-1)
    return (y1 + y2) / 2
notebooks/unused/feature-oofp-nn-lstm-with-activations.ipynb
YuriyGuts/kaggle-quora-question-pairs
mit
Partition the data
NUM_FOLDS = 5

kfold = StratifiedKFold(
    n_splits=NUM_FOLDS,
    shuffle=True,
    random_state=RANDOM_SEED
)
notebooks/unused/feature-oofp-nn-lstm-with-activations.ipynb
YuriyGuts/kaggle-quora-question-pairs
mit
Define hyperparameters
BATCH_SIZE = 2048
MAX_EPOCHS = 200
notebooks/unused/feature-oofp-nn-lstm-with-activations.ipynb
YuriyGuts/kaggle-quora-question-pairs
mit
Best values picked by Bayesian optimization.
model_params = {
    'dense_dropout_rate': 0.075,
    'lstm_dropout_rate': 0.332,
    'num_dense': 130,
    'num_lstm': 300,
}

feature_output_size = model_params['num_lstm'] * 2
notebooks/unused/feature-oofp-nn-lstm-with-activations.ipynb
YuriyGuts/kaggle-quora-question-pairs
mit
Create placeholders for out-of-fold predictions.
y_train_oofp = np.zeros_like(y_train, dtype='float32')
y_train_oofp_features = np.zeros((len(y_train), feature_output_size), dtype='float32')

y_test_oofp = np.zeros((len(X_test_q1), NUM_FOLDS), dtype='float32')
y_test_oofp_features = np.zeros((len(X_test_q1), feature_output_size), dtype='float32')
notebooks/unused/feature-oofp-nn-lstm-with-activations.ipynb
YuriyGuts/kaggle-quora-question-pairs
mit
The path where the best weights of the current model will be saved.
model_checkpoint_path = project.temp_dir + 'fold-checkpoint-' + feature_list_id + '.h5'
notebooks/unused/feature-oofp-nn-lstm-with-activations.ipynb
YuriyGuts/kaggle-quora-question-pairs
mit
Fit the folds and compute out-of-fold predictions
%%time

# Iterate through folds.
for fold_num, (ix_train, ix_val) in enumerate(kfold.split(X_train_q1, y_train)):

    # Augment the training set by mirroring the pairs.
    X_fold_train_q1 = np.vstack([X_train_q1[ix_train], X_train_q2[ix_train]])
    X_fold_train_q2 = np.vstack([X_train_q2[ix_train], X_train_q1[ix_train]])

    X_fold_val_q1 = np.vstack([X_train_q1[ix_val], X_train_q2[ix_val]])
    X_fold_val_q2 = np.vstack([X_train_q2[ix_val], X_train_q1[ix_val]])

    # Ground truth should also be "mirrored".
    y_fold_train = np.concatenate([y_train[ix_train], y_train[ix_train]])
    y_fold_val = np.concatenate([y_train[ix_val], y_train[ix_val]])

    print()
    print(f'Fitting fold {fold_num + 1} of {kfold.n_splits}')
    print()

    # Compile a new model.
    model = create_model(model_params)

    # Train.
    model.fit(
        # Create dummy ground truth values for the activation outputs.
        [X_fold_train_q1, X_fold_train_q2],
        [y_fold_train, np.zeros((len(y_fold_train), feature_output_size))],

        validation_data=(
            [X_fold_val_q1, X_fold_val_q2],
            [y_fold_val, np.zeros((len(y_fold_val), feature_output_size))],
        ),

        batch_size=BATCH_SIZE,
        epochs=MAX_EPOCHS,
        verbose=1,

        callbacks=[
            # Stop training when the validation loss stops improving.
            EarlyStopping(
                monitor='val_loss',
                min_delta=0.001,
                patience=3,
                verbose=1,
                mode='auto',
            ),
            # Save the weights of the best epoch.
            ModelCheckpoint(
                model_checkpoint_path,
                monitor='val_loss',
                save_best_only=True,
                verbose=2,
            ),
        ],
    )

    # Restore the best epoch.
    model.load_weights(model_checkpoint_path)

    # Compute out-of-fold predictions.
    y_train_oofp[ix_val] = predict(model, X_train_q1[ix_val], X_train_q2[ix_val])
    y_test_oofp[:, fold_num] = predict(model, X_test_q1, X_test_q2)

    # Clear GPU memory.
    K.clear_session()
    del X_fold_train_q1, X_fold_train_q2
    del X_fold_val_q1, X_fold_val_q2
    del model
    gc.collect()

cv_score = log_loss(y_train, y_train_oofp)
print('CV score:', cv_score)
notebooks/unused/feature-oofp-nn-lstm-with-activations.ipynb
YuriyGuts/kaggle-quora-question-pairs
mit
Save features
feature_names = [feature_list_id]

features_train = y_train_oofp.reshape((-1, 1))
features_test = np.mean(y_test_oofp, axis=1).reshape((-1, 1))

project.save_features(features_train, features_test, feature_names, feature_list_id)
notebooks/unused/feature-oofp-nn-lstm-with-activations.ipynb
YuriyGuts/kaggle-quora-question-pairs
mit
Querying SQL (advanced)

NOTE: THIS DOC IS CURRENTLY IN OUTLINE FORM

In this tutorial, we'll use a dataset of television ratings.
- copying data in, and getting a table from SQL
- filtering out rows, and aggregating data
- looking at shifts in ratings between seasons
- checking for abnormalities in the data

Setting up
import pandas as pd

from siuba.tests.helpers import copy_to_sql
from siuba import *
from siuba.dply.vector import lag, desc, row_number
from siuba.dply.string import str_c
from siuba.sql import LazyTbl

data_url = "https://raw.githubusercontent.com/rfordatascience/tidytuesday/master/data/2019/2019-01-08/IMDb_Economist_tv_ratings.csv"
tv_ratings = pd.read_csv(data_url, parse_dates = ["date"])

db_uri = "postgresql://{user}:{password}@localhost:5433/{db}".format(
    user = "postgres",
    password = "",
    db = "postgres"
)

# create tv_ratings table
tbl_ratings = copy_to_sql(tv_ratings, "tv_ratings", db_uri)

# can also access an existing table
tbl_ratings = LazyTbl(db_uri, "tv_ratings")

tbl_ratings
docs/draft-old-pages/intro_sql_interm.ipynb
machow/siuba
mit
Inspecting a single show
buffy = (tbl_ratings
    >> filter(_.title == "Buffy the Vampire Slayer")
    >> collect()
)

buffy

buffy >> summarize(avg_rating = _.av_rating.mean())
docs/draft-old-pages/intro_sql_interm.ipynb
machow/siuba
mit
Average rating per show, along with dates
avg_ratings = (tbl_ratings
    >> group_by(_.title)
    >> summarize(
        avg_rating = _.av_rating.mean(),
        date_range = str_c(_.date.dt.year.max(), " - ", _.date.dt.year.min())
    )
)

avg_ratings
docs/draft-old-pages/intro_sql_interm.ipynb
machow/siuba
mit
Biggest changes in ratings between two seasons
top_4_shifts = (tbl_ratings
    >> group_by(_.title)
    >> arrange(_.seasonNumber)
    >> mutate(rating_shift = _.av_rating - lag(_.av_rating))
    >> summarize(
        max_shift = _.rating_shift.max()
    )
    >> arrange(-_.max_shift)
    >> head(4)
)

top_4_shifts

big_shift_series = (top_4_shifts
    >> select(_.title)
    >> inner_join(_, tbl_ratings, "title")
    >> collect()
)

from plotnine import *

(big_shift_series
    >> ggplot(aes("seasonNumber", "av_rating"))
    + geom_point()
    + geom_line()
    + facet_wrap("~ title")
    + labs(
        title = "Seasons with Biggest Shifts in Ratings",
        y = "Average rating",
        x = "Season"
    )
)
docs/draft-old-pages/intro_sql_interm.ipynb
machow/siuba
mit
Do we have full data for each season?
mismatches = (tbl_ratings
    >> arrange(_.title, _.seasonNumber)
    >> group_by(_.title)
    >> mutate(
        row = row_number(_),
        mismatch = _.row != _.seasonNumber
    )
    >> filter(_.mismatch.any())
    >> ungroup()
)

mismatches

mismatches >> distinct(_.title) >> count() >> collect()
docs/draft-old-pages/intro_sql_interm.ipynb
machow/siuba
mit
Damped, driven nonlinear pendulum The equations of motion for a simple pendulum of mass $m$, length $l$ are: $$ \frac{d^2\theta}{dt^2} = \frac{-g}{\ell}\sin\theta $$ When a damping and periodic driving force are added the resulting system has much richer and interesting dynamics: $$ \frac{d^2\theta}{dt^2} = \frac{-g}{\ell}\sin\theta - a \omega - b \sin(\omega_0 t) $$ In this equation: $a$ governs the strength of the damping. $b$ governs the strength of the driving force. $\omega_0$ is the angular frequency of the driving force. When $a=0$ and $b=0$, the energy/mass is conserved: $$E/m =g\ell(1-\cos(\theta)) + \frac{1}{2}\ell^2\omega^2$$ Basic setup Here are the basic parameters we are going to use for this exercise:
g = 9.81    # m/s^2
l = 0.5     # length of pendulum, in meters
tmax = 50.  # seconds
t = np.linspace(0, tmax, int(100*tmax))
assignments/assignment10/ODEsEx03.ipynb
jpilgram/phys202-2015-work
mit
Write a function derivs for usage with scipy.integrate.odeint that computes the derivatives for the damped, driven harmonic oscillator. The solution vector at each time will be $\vec{y}(t) = (\theta(t),\omega(t))$.
#I worked with James A and Hunter T.
def derivs(y, t, a, b, omega0):
    """Compute the derivatives of the damped, driven pendulum.

    Parameters
    ----------
    y : ndarray
        The solution vector at the current time t[i]: [theta[i], omega[i]].
    t : float
        The current time t[i].
    a, b, omega0: float
        The parameters in the differential equation.

    Returns
    -------
    dy : ndarray
        The vector of derivatives at t[i]: [dtheta[i], domega[i]].
    """
    theta = y[0]
    omega = y[1]
    dtheta = omega
    dw = -(g/l)*np.sin(theta) - a*omega - b*np.sin(omega0*t)
    return [dtheta, dw]

assert np.allclose(derivs(np.array([np.pi, 1.0]), 0, 1.0, 1.0, 1.0), [1., -1.])

def energy(y):
    """Compute the energy for the state array y.

    The state array y can have two forms:

    1. It could be an ndim=1 array of np.array([theta, omega]) at a single time.
    2. It could be an ndim=2 array where each row is the [theta, omega] at a single time.

    Parameters
    ----------
    y : ndarray, list, tuple
        A solution vector

    Returns
    -------
    E/m : float (ndim=1) or ndarray (ndim=2)
        The energy per mass.
    """
    if y.ndim == 1:
        theta = y[0]
        omega = y[1]
    if y.ndim == 2:
        theta = y[:, 0]
        omega = y[:, 1]
    E = g*l*(1 - np.cos(theta)) + 0.5*l**2*omega**2
    return E

assert np.allclose(energy(np.array([np.pi, 0])), g)
assert np.allclose(energy(np.ones((10, 2))), np.ones(10)*energy(np.array([1, 1])))
assignments/assignment10/ODEsEx03.ipynb
jpilgram/phys202-2015-work
mit
Simple pendulum

Use the above functions to integrate the simple pendulum for the case where it starts at rest pointing vertically upwards. In this case, it should remain at rest with constant energy.
- Integrate the equations of motion.
- Plot $E/m$ versus time.
- Plot $\theta(t)$ and $\omega(t)$ versus time.
- Tune the atol and rtol arguments of odeint until $E/m$, $\theta(t)$ and $\omega(t)$ are constant.

Anytime you have a differential equation with a conserved quantity, it is critical to make sure the numerical solutions conserve that quantity as well. This also gives you an opportunity to find other bugs in your code. The default error tolerances (atol and rtol) used by odeint are not sufficiently small for this problem. Start by trying atol=1e-3, rtol=1e-2 and then decrease each by an order of magnitude until your solutions are stable.
y0 = [np.pi, 0]
solution = odeint(derivs, y0, t, args=(0, 0, 0), atol=1e-5, rtol=1e-4)

plt.plot(t, energy(solution), label="$Energy/mass$")
plt.title('Simple Pendulum Energy')
plt.xlabel('time')
plt.ylabel('$Energy/Mass$')
plt.ylim(9.2, 10.2);

theta = solution[:, 0]
omega = solution[:, 1]
plt.plot(t, theta, label="$\Theta (t)$")
plt.plot(t, omega, label="$\omega (t)$")
plt.ylim(-0.5, 5)
plt.legend()
plt.title('Simple Pendulum $\Theta (t)$ and $\omega (t)$')
plt.xlabel('Time');

assert True  # leave this to grade the two plots and their tuning of atol, rtol.
assignments/assignment10/ODEsEx03.ipynb
jpilgram/phys202-2015-work
mit
Damped pendulum

Write a plot_pendulum function that integrates the damped, driven pendulum differential equation for a particular set of parameters $[a, b, \omega_0]$.
- Use the initial conditions $\theta(0) = -\pi + 0.1$ and $\omega = 0$.
- Decrease your atol and rtol even further and make sure your solutions have converged.
- Make a parametric plot of $[\theta(t), \omega(t)]$ versus time.
- Use the plot limits $\theta \in [-2\pi, 2\pi]$ and $\omega \in [-10, 10]$.
- Label your axes and customize your plot to make it beautiful and effective.
def plot_pendulum(a=0.0, b=0.0, omega0=0.0):
    """Integrate the damped, driven pendulum and make a phase plot of the solution."""
    y0 = [-np.pi + 0.1, 0]
    solution = odeint(derivs, y0, t, args=(a, b, omega0), atol=1e-5, rtol=1e-4)
    theta = solution[:, 0]
    omega = solution[:, 1]
    plt.plot(theta, omega, color="k")
    plt.title('Damped and Driven Pendulum Motion')
    plt.xlabel('$\Theta (t)$')
    plt.ylabel('$\omega (t)$')
    plt.xlim(-2*np.pi, 2*np.pi)
    plt.ylim(-10, 10);
assignments/assignment10/ODEsEx03.ipynb
jpilgram/phys202-2015-work
mit
Here is an example of the output of your plot_pendulum function that should show a decaying spiral.
plot_pendulum(0.5, 0.0, 0.0)
assignments/assignment10/ODEsEx03.ipynb
jpilgram/phys202-2015-work
mit
Use interact to explore the plot_pendulum function with: a: a float slider over the interval $[0.0,1.0]$ with steps of $0.1$. b: a float slider over the interval $[0.0,10.0]$ with steps of $0.1$. omega0: a float slider over the interval $[0.0,10.0]$ with steps of $0.1$.
interact(plot_pendulum, a=(0.0, 1.0, 0.1), b=(0.0, 10.0, 0.1), omega0=(0.0, 10.0, 0.1));
assignments/assignment10/ODEsEx03.ipynb
jpilgram/phys202-2015-work
mit
Head model and forward computation

The aim of this tutorial is to serve as a getting-started guide for forward computation. For more extensive details and a presentation of the general concepts for forward modeling, see ch_forward.
import os.path as op

import mne
from mne.datasets import sample

data_path = sample.data_path()

# the raw file containing the channel location + types
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
# The paths to Freesurfer reconstructions
subjects_dir = data_path + '/subjects'
subject = 'sample'
0.18/_downloads/7df5cd97aa959dd7e2627aba5e552081/plot_forward.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Computing the forward operator

To compute a forward operator we need:
- a -trans.fif file that contains the coregistration info,
- a source space,
- the :term:`BEM` surfaces.

Compute and visualize BEM surfaces

The :term:`BEM` surfaces are the triangulations of the interfaces between different tissues needed for forward computation. These surfaces are for example the inner skull surface, the outer skull surface and the outer skin surface, a.k.a. scalp surface. Computing the BEM surfaces requires FreeSurfer and makes use of either of the two following command line tools: gen_mne_watershed_bem or gen_mne_flash_bem; or, in a Python script, one of the functions :func:`mne.bem.make_watershed_bem` or :func:`mne.bem.make_flash_bem`. Here we'll assume they are already computed. It takes a few minutes per subject. For EEG we use 3 layers (inner skull, outer skull, and skin) while for MEG 1 layer (inner skull) is enough. Let's look at these surfaces. The function :func:`mne.viz.plot_bem` assumes that the bem folder of your subject's FreeSurfer reconstruction contains the necessary files.
mne.viz.plot_bem(subject=subject, subjects_dir=subjects_dir, brain_surfaces='white', orientation='coronal')
0.18/_downloads/7df5cd97aa959dd7e2627aba5e552081/plot_forward.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
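If the BEM surfaces had not been computed yet, one way to create them from a FreeSurfer reconstruction is sketched below (an assumption about your setup: it requires a full FreeSurfer installation, and it is left commented out since this tutorial assumes the surfaces already exist):

```python
# Not executed here: create the watershed BEM surfaces for `subject`.
# mne.bem.make_watershed_bem(subject=subject, subjects_dir=subjects_dir,
#                            overwrite=True)
```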
Visualizing the coregistration

The coregistration is the operation that allows one to position the head and the sensors in a common coordinate system. In the MNE software the transformation to align the head and the sensors is stored in a so-called trans file. It is a FIF file that ends with -trans.fif. It can be obtained with :func:`mne.gui.coregistration` (or its convenient command line equivalent gen_mne_coreg), or mrilab if you're using a Neuromag system. For the Python version see :func:`mne.gui.coregistration`. Here we assume the coregistration is done, so we just visually check the alignment with the following code.
# The transformation file obtained by coregistration
trans = data_path + '/MEG/sample/sample_audvis_raw-trans.fif'

info = mne.io.read_info(raw_fname)
# Here we look at the dense head, which isn't used for BEM computations but
# is useful for coregistration.
mne.viz.plot_alignment(info, trans, subject=subject, dig=True,
                       meg=['helmet', 'sensors'], subjects_dir=subjects_dir,
                       surfaces='head-dense')
0.18/_downloads/7df5cd97aa959dd7e2627aba5e552081/plot_forward.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Compute Source Space

The source space defines the position and orientation of the candidate source locations. There are two types of source spaces:
- surface-based source space, when the candidates are confined to a surface;
- volumetric or discrete source space, when the candidates are discrete, arbitrarily located source points bounded by the surface.

Surface-based source spaces are computed using :func:`mne.setup_source_space`, while volumetric source spaces are computed using :func:`mne.setup_volume_source_space`. We will now compute a surface-based source space with an OCT-6 resolution. See setting_up_source_space for details on source space definition and the spacing parameter.
src = mne.setup_source_space(subject, spacing='oct6',
                             subjects_dir=subjects_dir, add_dist=False)
print(src)
0.18/_downloads/7df5cd97aa959dd7e2627aba5e552081/plot_forward.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
The surface based source space src contains two parts, one for the left hemisphere (4098 locations) and one for the right hemisphere (4098 locations). Sources can be visualized on top of the BEM surfaces in purple.
mne.viz.plot_bem(subject=subject, subjects_dir=subjects_dir, brain_surfaces='white', src=src, orientation='coronal')
0.18/_downloads/7df5cd97aa959dd7e2627aba5e552081/plot_forward.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
To compute a volume based source space defined with a grid of candidate dipoles inside a sphere of radius 90mm centered at (0.0, 0.0, 40.0) you can use the following code. Obviously here, the sphere is not perfect. It is not restricted to the brain and it can miss some parts of the cortex.
sphere = (0.0, 0.0, 40.0, 90.0)
vol_src = mne.setup_volume_source_space(subject, subjects_dir=subjects_dir,
                                        sphere=sphere)
print(vol_src)

mne.viz.plot_bem(subject=subject, subjects_dir=subjects_dir,
                 brain_surfaces='white', src=vol_src, orientation='coronal')
0.18/_downloads/7df5cd97aa959dd7e2627aba5e552081/plot_forward.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
To compute a volume based source space defined with a grid of candidate dipoles inside the brain (requires the :term:BEM surfaces) you can use the following.
surface = op.join(subjects_dir, subject, 'bem', 'inner_skull.surf')
vol_src = mne.setup_volume_source_space(subject, subjects_dir=subjects_dir,
                                        surface=surface)
print(vol_src)

mne.viz.plot_bem(subject=subject, subjects_dir=subjects_dir,
                 brain_surfaces='white', src=vol_src, orientation='coronal')
0.18/_downloads/7df5cd97aa959dd7e2627aba5e552081/plot_forward.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
With the surface-based source space only sources that lie in the plotted MRI slices are shown. Let's write a few lines of mayavi to see all sources in 3D.
import numpy as np  # noqa
from mayavi import mlab  # noqa
from surfer import Brain  # noqa

brain = Brain('sample', 'lh', 'inflated', subjects_dir=subjects_dir)
surf = brain.geo['lh']

vertidx = np.where(src[0]['inuse'])[0]

mlab.points3d(surf.x[vertidx], surf.y[vertidx],
              surf.z[vertidx], color=(1, 1, 0), scale_factor=1.5)
0.18/_downloads/7df5cd97aa959dd7e2627aba5e552081/plot_forward.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Compute forward solution

We can now compute the forward solution. To reduce computation we'll just compute a single layer BEM (just inner skull) that can then be used for MEG (not EEG). We specify if we want a one-layer or a three-layer BEM using the conductivity parameter. The BEM solution requires a BEM model which describes the geometry of the head and the conductivities of the different tissues.
conductivity = (0.3,)  # for single layer
# conductivity = (0.3, 0.006, 0.3)  # for three layers
model = mne.make_bem_model(subject='sample', ico=4,
                           conductivity=conductivity,
                           subjects_dir=subjects_dir)
bem = mne.make_bem_solution(model)
0.18/_downloads/7df5cd97aa959dd7e2627aba5e552081/plot_forward.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Note that the :term:BEM does not involve any use of the trans file. The BEM only depends on the head geometry and conductivities. It is therefore independent from the MEG data and the head position. Let's now compute the forward operator, commonly referred to as the gain or leadfield matrix. See :func:mne.make_forward_solution for details on parameters meaning.
fwd = mne.make_forward_solution(raw_fname, trans=trans, src=src, bem=bem,
                                meg=True, eeg=False, mindist=5.0, n_jobs=2)
print(fwd)
0.18/_downloads/7df5cd97aa959dd7e2627aba5e552081/plot_forward.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
We can explore the content of fwd to access the numpy array that contains the gain matrix.
leadfield = fwd['sol']['data']
print("Leadfield size : %d sensors x %d dipoles" % leadfield.shape)
0.18/_downloads/7df5cd97aa959dd7e2627aba5e552081/plot_forward.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
To extract the numpy array containing the forward operator corresponding to the source space fwd['src'] with cortical orientation constraint we can use the following:
fwd_fixed = mne.convert_forward_solution(fwd, surf_ori=True, force_fixed=True,
                                         use_cps=True)
leadfield = fwd_fixed['sol']['data']
print("Leadfield size : %d sensors x %d dipoles" % leadfield.shape)
0.18/_downloads/7df5cd97aa959dd7e2627aba5e552081/plot_forward.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
This is equivalent to the following code that explicitly applies the forward operator to a source estimate composed of the identity operator:
n_dipoles = leadfield.shape[1]
vertices = [src_hemi['vertno'] for src_hemi in fwd_fixed['src']]
stc = mne.SourceEstimate(1e-9 * np.eye(n_dipoles), vertices, tmin=0., tstep=1)
leadfield = mne.apply_forward(fwd_fixed, stc, info).data / 1e-9
0.18/_downloads/7df5cd97aa959dd7e2627aba5e552081/plot_forward.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
define the path of the model you want to load, and also the path of the dataset
# you may need to change these to link to where your data and checkpoints are actually stored!
# in the default config, model_dir is likely to be /tmp/sketch_rnn/models
data_dir = './kanji'
model_dir = './log'

[train_set, valid_set, test_set, hps_model, eval_hps_model, sample_hps_model] = load_env(data_dir, model_dir)

[hps_model, eval_hps_model, sample_hps_model] = load_model(model_dir)

# construct the sketch-rnn model here:
reset_graph()
model = Model(hps_model)
eval_model = Model(eval_hps_model, reuse=True)
sample_model = Model(sample_hps_model, reuse=True)

sess = tf.InteractiveSession()
sess.run(tf.global_variables_initializer())

def decode(z_input=None, draw_mode=True, temperature=0.1, factor=0.2):
    z = None
    if z_input is not None:
        z = [z_input]
    sample_strokes, m = sample(sess, sample_model, seq_len=eval_model.hps.max_seq_len,
                               temperature=temperature, z=z)
    strokes = to_normal_strokes(sample_strokes)
    if draw_mode:
        draw_strokes(strokes, factor)
    return strokes

# loads the weights from checkpoint into our model
load_checkpoint(sess, model_dir)

# randomly unconditionally generate 10 examples
N = 10
reconstructions = []
for i in range(N):
    reconstructions.append([decode(temperature=0.5, draw_mode=False), [0, i]])
jupyter-notebooks/Sketch_RNN_TF_To_JS_Tutorial.ipynb
magenta/magenta-demos
apache-2.0
Let's see if our model kind of works by sampling from it:
stroke_grid = make_grid_svg(reconstructions)
draw_strokes(stroke_grid)

def get_model_params():
    # get trainable params.
    model_names = []
    model_params = []
    model_shapes = []
    with sess.as_default():
        t_vars = tf.trainable_variables()
        for var in t_vars:
            param_name = var.name
            p = sess.run(var)
            model_names.append(param_name)
            params = p
            model_params.append(params)
            model_shapes.append(p.shape)
    return model_params, model_shapes, model_names

def quantize_params(params, max_weight=10.0, factor=32767):
    result = []
    max_weight = np.abs(max_weight)
    for p in params:
        r = np.array(p)
        r /= max_weight
        r[r > 1.0] = 1.0
        r[r < -1.0] = -1.0
        result.append(np.round(r*factor).flatten().astype(np.int).tolist())
    return result

model_params, model_shapes, model_names = get_model_params()

model_names

# scale factor converts "model-coordinates" to "pixel coordinates" for your JS canvas demo later on.
# the larger it is, the larger your drawings (in pixel space) will be.
# I recommend setting this to 100.0 and iterating the value in the json file later on when you build the JS part.
scale_factor = 200.0

metainfo = {"mode": 2, "version": 6, "max_seq_len": train_set.max_seq_length,
            "name": "custom", "scale_factor": scale_factor}

model_params_quantized = quantize_params(model_params)

model_blob = [metainfo, model_shapes, model_params_quantized]

with open("custom.gen.full.json", 'w') as outfile:
    json.dump(model_blob, outfile, separators=(',', ':'))
jupyter-notebooks/Sketch_RNN_TF_To_JS_Tutorial.ipynb
magenta/magenta-demos
apache-2.0
The neural network accepts an input vector of length 2. It has 2 output nodes. One node is used to control whether or not to recursively run itself, the other is the real data output. We simply threshold > 0.5 to trigger a recursive call to itself.
### example output with random initial weights
print( nn(X[0], theta) )
print( nn(X[1], theta) )
print( nn(X[2], theta) )
print( nn(X[3], theta) )
VariableOutput.ipynb
outlace/Machine-Learning-Experiments
mit
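Since nn, X, and theta are defined elsewhere in the notebook, here is a purely illustrative sketch of the idea described above; the weight layout (a single 2x2 layer plus a 2-element bias, 6 parameters total) is a guess for demonstration only and is not the notebook's actual architecture:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def nn_sketch(x, theta, max_depth=10):
    # Hypothetical layout: theta[:4] is a 2x2 weight matrix, theta[4:6] is a bias.
    W = np.asarray(theta[:4], dtype=float).reshape(2, 2)
    b = np.asarray(theta[4:6], dtype=float)
    out = sigmoid(np.asarray(x, dtype=float) @ W + b)
    data_out, recurse_signal = out[0], out[1]
    results = [data_out]
    # Output node 1 decides whether to run the network again on its own output.
    if recurse_signal > 0.5 and max_depth > 0:
        results += nn_sketch([data_out, recurse_signal], theta, max_depth - 1)
    return results

# Example with random weights: output length varies with the recursion signal.
print(nn_sketch([0.2, 0.9], np.random.randn(6)))
```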
Cost Function Arbitrarily assign a high cost to mismatches in the length of the output, then also assess MSE
def costFunction(X, Y, theta):
    cost = 0
    for i in range(len(X)):
        y = Y[i]
        m = float(len(X[i]))
        hThetaX = nn(X[i], theta)
        if len(y) != len(hThetaX):
            cost += 3
        else:
            cost += (1/m) * np.sum(np.abs(y - hThetaX)**2)
    return cost
VariableOutput.ipynb
outlace/Machine-Learning-Experiments
mit
Genetic Algorithm to Solve Weights:
import random as rn, numpy as np

# [Initial population size, mutation rate (=1%), num generations (30), solution length (13),
#  winners/per gen]
initPop, mutRate, numGen, solLen, numWin = 100, 0.01, 500, 17, 20

# initialize current population to random values within range
curPop = np.random.choice(np.arange(-15, 15, step=0.01), size=(initPop, solLen), replace=False)
nextPop = np.zeros((curPop.shape[0], curPop.shape[1]))
fitVec = np.zeros((initPop, 2))  # 1st col is indices, 2nd col is cost

for i in range(numGen):  # iterate through num generations
    # Create vector of all errors from cost function for each solution
    fitVec = np.array([np.array([x, np.sum(costFunction(X, y, curPop[x].T))]) for x in range(initPop)])
    # plt.pyplot.scatter(i, np.sum(fitVec[:,1]))
    winners = np.zeros((numWin, solLen))
    for n in range(len(winners)):
        # int() casts added for Python 3 compatibility (sizes must be integers)
        selected = np.random.choice(range(len(fitVec)), int(numWin/2), replace=False)
        wnr = np.argmin(fitVec[selected, 1])
        winners[n] = curPop[int(fitVec[selected[wnr]][0])]
    nextPop[:len(winners)] = winners  # populate new gen with winners
    duplicWin = np.zeros(((initPop - len(winners)), winners.shape[1]))
    for x in range(winners.shape[1]):  # for each col in winners
        # Duplicate winners to fill the rest of the population, then shuffle columns
        numDups = int((initPop - len(winners))/len(winners))  # num times to duplicate to fill rest of nextPop
        duplicWin[:, x] = np.repeat(winners[:, x], numDups, axis=0)  # duplicate each col
        duplicWin[:, x] = np.random.permutation(duplicWin[:, x])  # shuffle each col ("crossover")
    # Populate the rest of the generation with offspring of mating pairs
    nextPop[len(winners):] = np.matrix(duplicWin)
    # Create a mutation matrix, mostly 1s, but some elements are random numbers from a normal distribution
    mutMatrix = [np.float(np.random.normal(0, 2, 1)) if rn.random() < mutRate else 1 for x in range(nextPop.size)]
    # randomly mutate part of the population by multiplying nextPop by our mutation matrix
    nextPop = np.multiply(nextPop, np.matrix(mutMatrix).reshape(nextPop.shape))
    curPop = nextPop

best_soln = curPop[np.argmin(fitVec[:, 1])]
print("Best Sol'n:\n%s\nCost:%s" % (best_soln, np.sum(costFunction(X, y, best_soln.T))))

# Demonstrate variable output after training
print( np.round(nn(X[0], best_soln.reshape(17, 1)), 2) )
print( np.round(nn(X[1], best_soln.reshape(17, 1)), 2) )
print( np.round(nn(X[2], best_soln.reshape(17, 1)), 2) )
print( np.round(nn(X[3], best_soln.reshape(17, 1)), 2) )
VariableOutput.ipynb
outlace/Machine-Learning-Experiments
mit
Backends Quick examples pandas (fast grouped)
# pandas fast grouped implementation ---- from siuba.data import cars from siuba import _ from siuba.experimental.pd_groups import fast_mutate, fast_filter, fast_summarize fast_mutate( cars.groupby('cyl'), avg_mpg = _.mpg.mean(), # aggregation hp_per_mpg = _.hp / _.mpg, # elementwise demeaned = _.hp - _.hp.mean(), # elementwise + agg )
docs/backends.ipynb
machow/siuba
mit
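The import above also brings in fast_filter and fast_summarize. As a brief, hedged illustration (not from the original docs), fast_summarize collapses each group to a single row using the named aggregations:

from siuba.data import cars
from siuba import _
from siuba.experimental.pd_groups import fast_summarize

# one row per cylinder count, with the aggregations named explicitly
fast_summarize(
    cars.groupby('cyl'),
    avg_mpg = _.mpg.mean(),
    avg_hp  = _.hp.mean(),
)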
SQL
from siuba import _, mutate, group_by, summarize, show_query from siuba.sql import LazyTbl from sqlalchemy import create_engine # create sqlite db, add pandas DataFrame to it engine = create_engine("sqlite:///:memory:") cars.to_sql("cars", engine, if_exists="replace") # define query q = (LazyTbl(engine, "cars") >> group_by(_.cyl) >> summarize(avg_mpg=_.mpg.mean()) ) q res = show_query(q)
docs/backends.ipynb
machow/siuba
mit
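One way to sanity-check the generated query (a suggestion, not siuba-specific) is to run an equivalent hand-written SQL statement through pandas against the same engine created above:

import pandas as pd

# hand-written equivalent of the siuba pipeline above
check = pd.read_sql("SELECT cyl, AVG(mpg) AS avg_mpg FROM cars GROUP BY cyl", engine)
print(check)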
Supported methods The table below shows the pandas methods supported by different backends. Note that the regular, ungrouped backend supports all methods, and the fast grouped implementation supports most methods a person could use without having to call the (slow) DataFrame.apply method. 🚧This table is displayed a bit funky, but will be cleaned up! pandas (ungrouped) In general, ungrouped pandas DataFrames do not require any translation. On this kind of data, verbs like mutate are just alternative implementations of methods like DataFrame.assign.
import pandas as pd

from siuba import _, mutate

df = pd.DataFrame({
    'g': ['a', 'a', 'b'],
    'x': [1, 2, 3],
})

# plain pandas
df.assign(y = lambda _: _.x + 1)

# equivalent siuba verb
mutate(df, y = _.x + 1)
docs/backends.ipynb
machow/siuba
mit
Siuba verbs also work on grouped DataFrames, but are not always fast; they serve as the potentially slow reference implementation.
mutate( df.groupby('g'), y = _.x + 1, z = _.x - _.x.mean() )
docs/backends.ipynb
machow/siuba
mit
Overview of MEG/EEG analysis with MNE-Python This tutorial covers the basic EEG/MEG pipeline for event-related analysis: loading data, epoching, averaging, plotting, and estimating cortical activity from sensor data. It introduces the core MNE-Python data structures :class:~mne.io.Raw, :class:~mne.Epochs, :class:~mne.Evoked, and :class:~mne.SourceEstimate, and covers a lot of ground fairly quickly (at the expense of depth). Subsequent tutorials address each of these topics in greater detail. We begin by importing the necessary Python modules:
import os import numpy as np import mne
0.18/_downloads/c92aa91c680730c756234cdbc466c558/plot_introduction.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Loading data ^^^^^^^^^^^^ MNE-Python data structures are based around the FIF file format from Neuromag, but there are reader functions for a wide variety of other data formats &lt;data-formats&gt;. MNE-Python also has interfaces to a variety of :doc:publicly available datasets &lt;../../manual/datasets_index&gt;, which MNE-Python can download and manage for you. We'll start this tutorial by loading one of the example datasets (called "sample-dataset"), which contains EEG and MEG data from one subject performing an audiovisual experiment, along with structural MRI scans for that subject. The :func:mne.datasets.sample.data_path function will automatically download the dataset if it isn't found in one of the expected locations, then return the directory path to the dataset (see the documentation of :func:~mne.datasets.sample.data_path for a list of places it checks before downloading). Note also that for this tutorial to run smoothly on our servers, we're using a filtered and downsampled version of the data (:file:sample_audvis_filt-0-40_raw.fif), but an unfiltered version (:file:sample_audvis_raw.fif) is also included in the sample dataset and could be substituted here when running the tutorial locally.
sample_data_folder = mne.datasets.sample.data_path() sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample', 'sample_audvis_filt-0-40_raw.fif') raw = mne.io.read_raw_fif(sample_data_raw_file)
0.18/_downloads/c92aa91c680730c756234cdbc466c558/plot_introduction.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
By default, :func:~mne.io.read_raw_fif displays some information about the file it's loading; for example, here it tells us that there are four "projection items" in the file along with the recorded data; those are :term:SSP projectors &lt;projector&gt; calculated to remove environmental noise from the MEG signals, plus a projector to mean-reference the EEG channels; these are discussed in a later tutorial. In addition to the information displayed during loading, you can get a glimpse of the basic details of a :class:~mne.io.Raw object by printing it; even more is available by printing its info attribute (a :class:dictionary-like object &lt;mne.Info&gt; that is preserved across :class:~mne.io.Raw, :class:~mne.Epochs, and :class:~mne.Evoked objects). The info data structure keeps track of channel locations, applied filters, projectors, etc. Notice especially the chs entry, showing that MNE-Python detects different sensor types and handles each appropriately. .. TODO edit prev. paragraph when projectors tutorial is added: ...those are discussed in the tutorial projectors-tutorial. (or whatever link)
print(raw) print(raw.info)
0.18/_downloads/c92aa91c680730c756234cdbc466c558/plot_introduction.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
:class:~mne.io.Raw objects also have several built-in plotting methods; here we show the power spectral density (PSD) for each sensor type with :meth:~mne.io.Raw.plot_psd, as well as a plot of the raw sensor traces with :meth:~mne.io.Raw.plot. In the PSD plot, we'll only plot frequencies below 50 Hz (since our data are low-pass filtered at 40 Hz). In interactive Python sessions, :meth:~mne.io.Raw.plot is interactive and allows scrolling, scaling, bad channel marking, annotation, projector toggling, etc.
raw.plot_psd(fmax=50) raw.plot(duration=5, n_channels=30)
0.18/_downloads/c92aa91c680730c756234cdbc466c558/plot_introduction.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Preprocessing ^^^^^^^^^^^^^ MNE-Python supports a variety of preprocessing approaches and techniques (Maxwell filtering, signal-space projection, independent components analysis, filtering, downsampling, etc.); see the full list of capabilities in the :mod:mne.preprocessing and :mod:mne.filter submodules. Here we'll clean up our data by performing independent components analysis (:class:~mne.preprocessing.ICA); for brevity we'll skip the steps that helped us determine which components best capture the artifacts (see :doc:../preprocessing/plot_artifacts_correction_ica for a detailed walk-through of that process).
# set up and fit the ICA ica = mne.preprocessing.ICA(n_components=20, random_state=97, max_iter=800) ica.fit(raw) ica.exclude = [1, 2] # details on how we picked these are omitted here ica.plot_properties(raw, picks=ica.exclude)
0.18/_downloads/c92aa91c680730c756234cdbc466c558/plot_introduction.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Once we're confident about which component(s) we want to remove, we pass them as the exclude parameter and then apply the ICA to the raw signal. The :meth:~mne.preprocessing.ICA.apply method requires the raw data to be loaded into memory (by default it's only read from disk as-needed), so we'll use :meth:~mne.io.Raw.load_data first. We'll also make a copy of the :class:~mne.io.Raw object so we can compare the signal before and after artifact removal side-by-side:
orig_raw = raw.copy() raw.load_data() ica.apply(raw) # show some frontal channels to clearly illustrate the artifact removal chs = ['MEG 0111', 'MEG 0121', 'MEG 0131', 'MEG 0211', 'MEG 0221', 'MEG 0231', 'MEG 0311', 'MEG 0321', 'MEG 0331', 'MEG 1511', 'MEG 1521', 'MEG 1531', 'EEG 001', 'EEG 002', 'EEG 003', 'EEG 004', 'EEG 005', 'EEG 006', 'EEG 007', 'EEG 008'] chan_idxs = [raw.ch_names.index(ch) for ch in chs] orig_raw.plot(order=chan_idxs, start=12, duration=4) raw.plot(order=chan_idxs, start=12, duration=4)
0.18/_downloads/c92aa91c680730c756234cdbc466c558/plot_introduction.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Detecting experimental events ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ The sample dataset includes several :term:"STIM" channels &lt;stim channel&gt; that recorded electrical signals sent from the stimulus delivery computer (as brief DC shifts / squarewave pulses). These pulses (often called "triggers") are used in this dataset to mark experimental events: stimulus onset, stimulus type, and participant response (button press). The individual STIM channels are combined onto a single channel, in such a way that voltage levels on that channel can be unambiguously decoded as a particular event type. On older Neuromag systems (such as that used to record the sample data) this summation channel was called STI 014, so we can pass that channel name to the :func:mne.find_events function to recover the timing and identity of the stimulus events.
events = mne.find_events(raw, stim_channel='STI 014') print(events[:5]) # show the first 5
0.18/_downloads/c92aa91c680730c756234cdbc466c558/plot_introduction.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
The resulting events array is an ordinary 3-column :class:NumPy array &lt;numpy.ndarray&gt;, with sample number in the first column and integer event ID in the last column; the middle column is usually ignored. Rather than keeping track of integer event IDs, we can provide an event dictionary that maps the integer IDs to experimental conditions or events. In this dataset, the mapping looks like this: +----------+----------------------------------------------------------+ | Event ID | Condition | +==========+==========================================================+ | 1 | auditory stimulus (tone) to the left ear | +----------+----------------------------------------------------------+ | 2 | auditory stimulus (tone) to the right ear | +----------+----------------------------------------------------------+ | 3 | visual stimulus (checkerboard) to the left visual field | +----------+----------------------------------------------------------+ | 4 | visual stimulus (checkerboard) to the right visual field | +----------+----------------------------------------------------------+ | 5 | smiley face (catch trial) | +----------+----------------------------------------------------------+ | 32 | subject button press | +----------+----------------------------------------------------------+
event_dict = {'auditory/left': 1, 'auditory/right': 2, 'visual/left': 3, 'visual/right': 4, 'smiley': 5, 'buttonpress': 32}
0.18/_downloads/c92aa91c680730c756234cdbc466c558/plot_introduction.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Event dictionaries like this one are used when extracting epochs from continuous data; the / character in the dictionary keys allows pooling across conditions by requesting partial condition descriptors (i.e., requesting 'auditory' will select all epochs with Event IDs 1 and 2; requesting 'left' will select all epochs with Event IDs 1 and 3). An example of this is shown in the next section. There is also a convenient :func:~mne.viz.plot_events function for visualizing the distribution of events across the duration of the recording (to make sure event detection worked as expected). Here we'll also make use of the :class:~mne.Info attribute to get the sampling frequency of the recording (so our x-axis will be in seconds instead of in samples).
fig = mne.viz.plot_events(events, event_id=event_dict, sfreq=raw.info['sfreq']) fig.subplots_adjust(right=0.7) # make room for the legend
0.18/_downloads/c92aa91c680730c756234cdbc466c558/plot_introduction.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
For paradigms that are not event-related (e.g., analysis of resting-state data), you can extract regularly spaced (possibly overlapping) spans of data by creating events using :func:mne.make_fixed_length_events and then proceeding with epoching as described in the next section. Epoching continuous data ^^^^^^^^^^^^^^^^^^^^^^^^ The :class:~mne.io.Raw object and the events array are the bare minimum needed to create an :class:~mne.Epochs object, which we create with the :class:mne.Epochs class constructor. Here we'll also specify some data quality constraints: we'll reject any epoch where peak-to-peak signal amplitude is beyond reasonable limits for that channel type. This is done with a rejection dictionary; you may include or omit thresholds for any of the channel types present in your data. The values given here are reasonable for this particular dataset, but may need to be adapted for different hardware or recording conditions. For a more automated approach, consider using the autoreject package_.
reject_criteria = dict(mag=4000e-15, # 4000 fT grad=4000e-13, # 4000 fT/cm eeg=150e-6, # 150 μV eog=250e-6) # 250 μV
0.18/_downloads/c92aa91c680730c756234cdbc466c558/plot_introduction.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
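As an aside, here is a minimal sketch of the fixed-length-events route mentioned above for non-event-related recordings. The two-second duration is arbitrary and the snippet assumes the same raw object; it is not part of the original tutorial.

# regularly spaced pseudo-events spanning the whole recording
fixed_events = mne.make_fixed_length_events(raw, id=1, duration=2.)
fixed_epochs = mne.Epochs(raw, fixed_events, tmin=0., tmax=2.,
                          baseline=None, preload=True)
print(fixed_epochs)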
We'll also pass the event dictionary as the event_id parameter (so we can work with easy-to-pool event labels instead of the integer event IDs), and specify tmin and tmax (the time relative to each event at which to start and end each epoch). As mentioned above, by default :class:~mne.io.Raw and :class:~mne.Epochs data aren't loaded into memory (they're accessed from disk only when needed), but here we'll force loading into memory using the preload=True parameter so that we can see the results of the rejection criteria being applied:
epochs = mne.Epochs(raw, events, event_id=event_dict, tmin=-0.2, tmax=0.5, reject=reject_criteria, preload=True)
0.18/_downloads/c92aa91c680730c756234cdbc466c558/plot_introduction.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
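To inspect which epochs the rejection criteria actually removed, one option (a suggestion, not part of the original tutorial) is the epochs drop log:

# per-epoch rejection reasons; an empty entry means the epoch was kept
print(epochs.drop_log[:5])
epochs.plot_drop_log()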
Next we'll pool across left/right stimulus presentations so we can compare auditory versus visual responses. To avoid biasing our signals to the left or right, we'll use :meth:~mne.Epochs.equalize_event_counts first to randomly sample epochs from each condition to match the number of epochs present in the condition with the fewest good epochs.
conds_we_care_about = ['auditory/left', 'auditory/right', 'visual/left', 'visual/right'] epochs.equalize_event_counts(conds_we_care_about) # this operates in-place aud_epochs = epochs['auditory'] vis_epochs = epochs['visual'] del raw, epochs # free up memory
0.18/_downloads/c92aa91c680730c756234cdbc466c558/plot_introduction.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Like :class:~mne.io.Raw objects, :class:~mne.Epochs objects also have a number of built-in plotting methods. One is :meth:~mne.Epochs.plot_image, which shows each epoch as one row of an image map, with color representing signal magnitude; the average evoked response and the sensor location are shown below the image:
aud_epochs.plot_image(picks=['MEG 1332', 'EEG 021'])
0.18/_downloads/c92aa91c680730c756234cdbc466c558/plot_introduction.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
<div class="alert alert-info"><h4>Note</h4><p>Both :class:`~mne.io.Raw` and :class:`~mne.Epochs` objects have :meth:`~mne.Epochs.get_data` methods that return the underlying data as a :class:`NumPy array <numpy.ndarray>`. Both methods have a ``picks`` parameter for subselecting which channel(s) to return; ``raw.get_data()`` has additional parameters for restricting the time domain. The resulting matrices have dimension ``(n_channels, n_times)`` for :class:`~mne.io.Raw` and ``(n_epochs, n_channels, n_times)`` for :class:`~mne.Epochs`.</p></div> Time-frequency analysis ^^^^^^^^^^^^^^^^^^^^^^^ The :mod:mne.time_frequency submodule provides implementations of several algorithms to compute time-frequency representations, power spectral density, and cross-spectral density. Here, for example, we'll compute for the auditory epochs the induced power at different frequencies and times, using Morlet wavelets. On this dataset the result is not especially informative (it just shows the evoked "auditory N100" response); see here &lt;inter-trial-coherence&gt; for a more extended example on a dataset with richer frequency content.
frequencies = np.arange(7, 30, 3) power = mne.time_frequency.tfr_morlet(aud_epochs, n_cycles=2, return_itc=False, freqs=frequencies, decim=3) power.plot(['MEG 1332'])
0.18/_downloads/c92aa91c680730c756234cdbc466c558/plot_introduction.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Estimating evoked responses ^^^^^^^^^^^^^^^^^^^^^^^^^^^ Now that we have our conditions in aud_epochs and vis_epochs, we can get an estimate of evoked responses to auditory versus visual stimuli by averaging together the epochs in each condition. This is as simple as calling the :meth:~mne.Epochs.average method on the :class:~mne.Epochs object, and then using a function from the :mod:mne.viz module to compare the global field power for each sensor type of the two :class:~mne.Evoked objects:
aud_evoked = aud_epochs.average() vis_evoked = vis_epochs.average() mne.viz.plot_compare_evokeds(dict(auditory=aud_evoked, visual=vis_evoked), show_legend='upper left', show_sensors='upper right')
0.18/_downloads/c92aa91c680730c756234cdbc466c558/plot_introduction.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
We can also get a more detailed view of each :class:~mne.Evoked object using other plotting methods such as :meth:~mne.Evoked.plot_joint or :meth:~mne.Evoked.plot_topomap. Here we'll examine just the EEG channels, and see the classic auditory evoked N100-P200 pattern over dorso-frontal electrodes, then plot scalp topographies at some additional arbitrary times:
# sphinx_gallery_thumbnail_number = 13 aud_evoked.plot_joint(picks='eeg') aud_evoked.plot_topomap(times=[0., 0.08, 0.1, 0.12, 0.2], ch_type='eeg')
0.18/_downloads/c92aa91c680730c756234cdbc466c558/plot_introduction.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Evoked objects can also be combined to show contrasts between conditions, using the :func:mne.combine_evoked function. A simple difference can be generated by negating one of the :class:~mne.Evoked objects passed into the function. We'll then plot the difference wave at each sensor using :meth:~mne.Evoked.plot_topo:
evoked_diff = mne.combine_evoked([aud_evoked, -vis_evoked], weights='equal') evoked_diff.pick_types('mag').plot_topo(color='r', legend=False)
0.18/_downloads/c92aa91c680730c756234cdbc466c558/plot_introduction.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Inverse modeling ^^^^^^^^^^^^^^^^ Finally, we can estimate the origins of the evoked activity by projecting the sensor data into this subject's :term:source space (a set of points either on the cortical surface or within the cortical volume of that subject, as estimated by structural MRI scans). MNE-Python supports lots of ways of doing this (dynamic statistical parametric mapping, dipole fitting, beamformers, etc.); here we'll use minimum-norm estimation (MNE) to generate a continuous map of activation constrained to the cortical surface. MNE uses a linear :term:inverse operator to project EEG+MEG sensor measurements into the source space. The inverse operator is computed from the :term:forward solution for this subject and an estimate of the covariance of sensor measurements &lt;tut_compute_covariance&gt;. For this tutorial we'll skip those computational steps and load a pre-computed inverse operator from disk (it's included with the sample data &lt;sample-dataset&gt;). Because this "inverse problem" is underdetermined (there is no unique solution), here we further constrain the solution by providing a regularization parameter specifying the relative smoothness of the current estimates in terms of a signal-to-noise ratio (where "noise" here is akin to baseline activity level across all of cortex).
# load inverse operator inverse_operator_file = os.path.join(sample_data_folder, 'MEG', 'sample', 'sample_audvis-meg-oct-6-meg-inv.fif') inv_operator = mne.minimum_norm.read_inverse_operator(inverse_operator_file) # set signal-to-noise ratio (SNR) to compute regularization parameter (λ²) snr = 3. lambda2 = 1. / snr ** 2 # generate the source time course (STC) stc = mne.minimum_norm.apply_inverse(vis_evoked, inv_operator, lambda2=lambda2, method='MNE') # or dSPM, sLORETA, eLORETA
0.18/_downloads/c92aa91c680730c756234cdbc466c558/plot_introduction.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Finally, in order to plot the source estimate on the subject's cortical surface we'll also need the path to the sample subject's structural MRI files (the subjects_dir):
# path to subjects' MRI files subjects_dir = os.path.join(sample_data_folder, 'subjects') # plot stc.plot(initial_time=0.1, hemi='split', views=['lat', 'med'], subjects_dir=subjects_dir)
0.18/_downloads/c92aa91c680730c756234cdbc466c558/plot_introduction.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Numpy arrays In standard Python, data is stored as lists, and multidimensional data as lists of lists. In numpy, however, we can now work with arrays. To get these arrays, we can use np.asarray to convert a list into an array. Below we take a quick look at how a list behaves differently from an array.
import numpy as np  # numpy is typically imported once near the top of the notebook

# We first create an array `x` of the integers 1 through 10 (the stop value is exclusive)
start = 1
stop = 11
step = 1
x = np.arange(start, stop, step)
print(x)
notebooks/Week_05/05_Numpy_Matplotlib.ipynb
VandyAstroML/Vanderbilt_Computational_Bootcamp
mit
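As promised above, here is a quick comparison of how a plain list and an array behave under the same operation (a small illustrative example, not from the original notebook):

import numpy as np

lst = [1, 2, 3]
arr = np.asarray(lst)

print(lst * 2)   # list repetition: [1, 2, 3, 1, 2, 3]
print(arr * 2)   # elementwise multiplication: [2 4 6]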
We can also manipulate the array. For example, we can multiply by two:
x * 2
notebooks/Week_05/05_Numpy_Matplotlib.ipynb
VandyAstroML/Vanderbilt_Computational_Bootcamp
mit
Take the square of all the values in the array:
x ** 2
notebooks/Week_05/05_Numpy_Matplotlib.ipynb
VandyAstroML/Vanderbilt_Computational_Bootcamp
mit
Or even do some math on it:
(x**2) + (5*x) + (x / 3)
notebooks/Week_05/05_Numpy_Matplotlib.ipynb
VandyAstroML/Vanderbilt_Computational_Bootcamp
mit
If we want to set up an array in numpy, we can use range to make a list and then convert it to an array, but we can also just create an array directly in numpy. np.arange builds the array from a start, stop, and step (like range, but the step may also be a float), while np.linspace builds an array of evenly spaced floats by specifying how many points you want.
print(np.arange(10)) print(np.linspace(1,10,10))
notebooks/Week_05/05_Numpy_Matplotlib.ipynb
VandyAstroML/Vanderbilt_Computational_Bootcamp
mit
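To make the step distinction concrete, here is a small illustrative example (not from the original notebook) contrasting a float step in np.arange with a point count in np.linspace:

import numpy as np

print(np.arange(0, 1, 0.25))    # fixed step of 0.25, stop excluded: [0.   0.25 0.5  0.75]
print(np.linspace(0, 1, 5))     # 5 evenly spaced points, stop included: [0.   0.25 0.5  0.75 1.  ]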
Last week we had to use a function or a loop to carry out math on a list. However, with numpy we can do this much more simply: make sure we're working with an array and carry out the mathematical operations directly on that array.
x = np.arange(10)
print(x)
print(x**2)
notebooks/Week_05/05_Numpy_Matplotlib.ipynb
VandyAstroML/Vanderbilt_Computational_Bootcamp
mit
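For contrast, here is a small side-by-side (an illustrative addition) of the loop-based approach from last week versus the vectorized array approach; both produce the same squares:

import numpy as np

values = list(range(10))

# loop / list-comprehension version
squares_loop = [v ** 2 for v in values]

# vectorized version
squares_np = np.asarray(values) ** 2

print(squares_loop)
print(squares_np)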