The main abstract base class is the following one:

```python
class SerialPort:
    __metaclass__ = abc.ABCMeta

    @abc.abstractmethod
    def isOpen(self):
        pass

    @abc.abstractmethod
    def readline(self):
        pass

    @abc.abstractmethod
    def close(self):
        pass

    @abc.abstractmethod
    def get_port(self):
        return ""

    @abc.abstractmethod
    def get_baudrate(self):
        return 0
```

As an example, we can see a dummy implementation:

```python
class DummySerialPort(SerialPort):
    def __init__(self, port=None, baud=None):
        pass

    def isOpen(self):
        return True

    def close(self):
        pass

    def get_port(self):
        return ""

    def get_baudrate(self):
        return 0

    def readline(self):
        time_delay = int(3*random.random())+1
        time.sleep(time_delay)
        return self.gen_random_line()

    def gen_random_line(self):
        return "Hee"
```

<h2>Building Serial Ports</h2> <span> In order to build an instance of a SerialPort class, we have 2 options: <ul> <li>Call the constructor directly</li> <li>Use a Builder</li> </ul> </span> <h3>Calling the constructor</h3>
import hit.serial.serial_port

port = ""
baud = 0

dummySerialPort = hit.serial.serial_port.DummySerialPort(port, baud)
notebooks/Serial Ports.ipynb
Centre-Alt-Rendiment-Esportiu/att
gpl-3.0
<span> The DummySerialPort is very simple. It just says "Hee" (after a few seconds) when its method "readline()" is called.<br> Port and Baud are useless here. </span>
print dummySerialPort.readline()
notebooks/Serial Ports.ipynb
Centre-Alt-Rendiment-Esportiu/att
gpl-3.0
<span> Let's create a more interesting SerialPort instance. </span>
import hit.serial.serial_port

port = ""
baud = 0

emulatedSerialPort = hit.serial.serial_port.ATTEmulatedSerialPort(port, baud)
notebooks/Serial Ports.ipynb
Centre-Alt-Rendiment-Esportiu/att
gpl-3.0
<span> The ATTEmulatedSerialPort will emulate a real ATT serial port reading.<br> Port and Baud are useless here. </span>
print emulatedSerialPort.readline()
notebooks/Serial Ports.ipynb
Centre-Alt-Rendiment-Esportiu/att
gpl-3.0
<h3>Using a Builder</h3> <span> Let's use a builder now. </span> <span> We can choose the builder we want and build as many SerialPorts as we want. </span>
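The builder classes themselves are not shown in this notebook; the pattern simply hides which concrete SerialPort class gets constructed. Here is a minimal, hypothetical sketch (the real ATT builders may differ), reusing the DummySerialPort class shown above:

```python
# Hypothetical sketch of a serial-port builder (not the actual ATT implementation).
class DummySerialPortBuilder(object):
    def build_serial_port(self, port, baud):
        # The builder decides which concrete SerialPort to create
        return DummySerialPort(port, baud)

# Usage mirrors the ATTEmulatedSerialPortBuilder cell below:
builder = DummySerialPortBuilder()
dummy1 = builder.build_serial_port("", 0)
dummy2 = builder.build_serial_port("", 0)
```

The actual ATTEmulatedSerialPortBuilder is used in the next cell.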
import hit.serial.serial_port_builder

builder = hit.serial.serial_port_builder.ATTEmulatedSerialPortBuilder()

port = ""
baud = 0

emulatedSerialPort1 = builder.build_serial_port(port, baud)
emulatedSerialPort2 = builder.build_serial_port(port, baud)
emulatedSerialPort3 = builder.build_serial_port(port, baud)
emulatedSerialPort4 = builder.build_serial_port(port, baud)
emulatedSerialPort5 = builder.build_serial_port(port, baud)
emulatedSerialPort6 = builder.build_serial_port(port, baud)
emulatedSerialPort7 = builder.build_serial_port(port, baud)
notebooks/Serial Ports.ipynb
Centre-Alt-Rendiment-Esportiu/att
gpl-3.0
<span> And call "readline()" </span>
print emulatedSerialPort5.readline()
notebooks/Serial Ports.ipynb
Centre-Alt-Rendiment-Esportiu/att
gpl-3.0
<span> There is a special serial port abstraction that is fed from a file.<br> This is useful when we want to "mock" the serial port and feed it previously stored readings. </span> <span> This is interesting, for example, in order to reproduce or visualize the repetition of an interesting set of hits in a game. Because the serial line is real-time, there are situations where the ATT framework needs to be provided with a set of known hits that were previously stored. </span> <span> We can use the data used in "Train points importer". </span>
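The file-backed port class itself is not shown here; conceptually it just replays stored lines. A minimal, hypothetical sketch of such a class (the actual ATT implementation may differ), built on the SerialPort base class defined above:

```python
class FileSerialPort(SerialPort):
    """Hypothetical file-backed port: readline() replays previously stored readings."""
    def __init__(self, port, baud=0):
        # 'port' is reused as the path of the file with the stored readings
        self._port = port
        self._baud = baud
        self._file = open(port, "r")

    def isOpen(self):
        return not self._file.closed

    def close(self):
        self._file.close()

    def get_port(self):
        return self._port

    def get_baudrate(self):
        return self._baud

    def readline(self):
        return self._file.readline()
```

In the notebook itself, such a port is obtained through the ATTHitsFromFilePortBuilder, as shown in the next cell.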
!head -10 train_points_import_data/arduino_raw_data.txt

import hit.serial.serial_port_builder

builder = hit.serial.serial_port_builder.ATTHitsFromFilePortBuilder()

port = "train_points_import_data/arduino_raw_data.txt"
baud = 0

fileSerialPort = builder.build_serial_port(port, baud)
notebooks/Serial Ports.ipynb
Centre-Alt-Rendiment-Esportiu/att
gpl-3.0
<span> And now we will read some lines: </span>
for i in range(20):
    print fileSerialPort.readline()
notebooks/Serial Ports.ipynb
Centre-Alt-Rendiment-Esportiu/att
gpl-3.0
Use Keras Model Zoo
# https://www.tensorflow.org/api_docs/python/tf/keras/applications/
from tensorflow.keras.preprocessing import image as keras_preprocessing_image
notebooks/2-CNN/5-TransferLearning/5-ImageClassifier-keras.ipynb
mdda/fossasia-2016_deep-learning
mit
Architecture Choices

NASNet cell structure

Ensure we have the model loaded
#from tensorflow.python.keras.applications.nasnet import NASNetLarge, preprocess_input
#model = NASNetLarge(weights='imagenet', include_top=False)  # 343,608,736

from tensorflow.keras.applications.nasnet import NASNetMobile, preprocess_input, decode_predictions

model_imagenet = NASNetMobile(weights='imagenet', include_top=True)  # 24,226,656 bytes

print("Model Loaded")
notebooks/2-CNN/5-TransferLearning/5-ImageClassifier-keras.ipynb
mdda/fossasia-2016_deep-learning
mit
Build the model and select layers we need - the features are taken from the final network layer, before the softmax nonlinearity.
def image_to_input(model, img_path):
    target_size = model.input_shape[1:]
    img = keras_preprocessing_image.load_img(img_path, target_size=target_size)
    x = keras_preprocessing_image.img_to_array(img)
    x = np.expand_dims(x, axis=0)
    x = preprocess_input(x)
    return x

def get_single_prediction(img_path, top=5):
    x = image_to_input(model_imagenet, img_path)
    preds = model_imagenet.predict(x)
    predictions = decode_predictions(preds, top=top)
    return predictions[0]

img_path = './images/cat-with-tongue_224x224.jpg'
im = plt.imread(img_path)
plt.imshow(im)
plt.show()

for t in get_single_prediction(img_path):
    print("%6.2f %s" % (t[2], t[1],))

image_dir = './images/'
image_files = [os.path.join(image_dir, f) for f in os.listdir(image_dir)
               if (f.lower().endswith('png') or f.lower().endswith('jpg')) and f != 'logo.png']

t0 = time.time()
for i, f in enumerate(image_files):
    im = plt.imread(f)
    if not (im.shape[0] == 224 and im.shape[1] == 224):
        continue
    plt.figure()
    plt.imshow(im.astype('uint8'))
    top5 = get_single_prediction(f)
    for n, (id, label, prob) in enumerate(top5):
        plt.text(350, 50 + n * 25, '{}. {}'.format(n+1, label), fontsize=14)
    plt.axis('off')
print("DONE : %6.2f seconds each" % (float(time.time() - t0)/len(image_files),))

#model_imagenet=None

model_imagenet.summary()
notebooks/2-CNN/5-TransferLearning/5-ImageClassifier-keras.ipynb
mdda/fossasia-2016_deep-learning
mit
Transfer Learning Now, we'll work with the layer 'just before' the final (ImageNet) classification layer.
#model_logits = NASNetMobile(weights='imagenet', include_top=False, pooling=None)  # 19,993,200 bytes
#logits_layer = model_imagenet.get_layer('global_average_pooling2d_1')

logits_layer = model_imagenet.get_layer('predictions')
model_logits = keras.Model(inputs=model_imagenet.input, outputs=logits_layer.output)

print("Model Loaded")
notebooks/2-CNN/5-TransferLearning/5-ImageClassifier-keras.ipynb
mdda/fossasia-2016_deep-learning
mit
Use the Network to create 'features' for the training images

Now go through the input images and feature-ize them at the 'logit level' according to the pretrained network.

<!-- [Logits vs the softmax probabilities](images/presentation/softmax-layer-generic_676x327.png) !-->

NB: The pretraining was done on ImageNet - there wasn't anything specific to the recognition task we're doing here.

Display the network layout graph on TensorBoard

This isn't very informative, since the CNN graph is pretty complex...
#writer = tf.summary.FileWriter(logdir='../tensorflow.logdir/', graph=tf.get_default_graph()) #writer.flush()
notebooks/2-CNN/5-TransferLearning/5-ImageClassifier-keras.ipynb
mdda/fossasia-2016_deep-learning
mit
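As a quick illustration of the "feature-ize at the logit level" step described above, extracting the feature vector for a single image is just one forward pass through model_logits. A short sketch using the image_to_input helper defined earlier (the example path is the cat image already used above):

```python
# Sketch: logit-level feature vector for one image, using helpers defined above
example_path = './images/cat-with-tongue_224x224.jpg'
x = image_to_input(model_logits, example_path)
feature_vector = model_logits.predict(x)[0]  # 1-D feature vector for this image
print(feature_vector.shape)
```

The training loop further below does exactly this for every image in the class folders.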
Handy cropping function
def crop_middle_square_area(np_image):
    h, w, _ = np_image.shape
    h = int(h/2)
    w = int(w/2)
    if h > w:
        return np_image[h-w:h+w, :]
    return np_image[:, w-h:w+h]

im_sq = crop_middle_square_area(im)
im_sq.shape

def get_logits_from_non_top(np_logits):
    # ~ average pooling
    #return np_logits[0].sum(axis=0).sum(axis=0)

    # ~ max-pooling
    return np_logits[0].max(axis=0).max(axis=0)
notebooks/2-CNN/5-TransferLearning/5-ImageClassifier-keras.ipynb
mdda/fossasia-2016_deep-learning
mit
Use folder names to imply classes for Training Set
classes = sorted([d for d in os.listdir(CLASS_DIR) if os.path.isdir(os.path.join(CLASS_DIR, d))])
classes  # Sorted for consistency

train = dict(filepath=[], features=[], target=[])

t0 = time.time()
for class_i, directory in enumerate(classes):
    for filename in os.listdir(os.path.join(CLASS_DIR, directory)):
        filepath = os.path.join(CLASS_DIR, directory, filename)
        if os.path.isdir(filepath):
            continue

        im = plt.imread(filepath)
        im_sq = crop_middle_square_area(im)

        x = image_to_input(model_logits, filepath)
        #np_logits = model_logits.predict(x)  # Shape = 1x7x7x1056 if pooling=None
        #print(np_logits.shape)
        #np_logits_pooled = get_logits_from_non_top( np_logits )

        np_logits_pooled = model_logits.predict(x)[0]  # Shape = 1x1056 if pooling=avg

        train['filepath'].append(filepath)
        train['features'].append(np_logits_pooled)
        train['target'].append(class_i)

        plt.figure()
        plt.imshow(im_sq.astype('uint8'))
        plt.axis('off')

        plt.text(2*320, 50, '{}'.format(filename), fontsize=14)
        plt.text(2*320, 80, 'Train as class "{}"'.format(directory), fontsize=12)

print("DONE : %6.2f seconds each" % (float(time.time() - t0)/len(train),))
notebooks/2-CNN/5-TransferLearning/5-ImageClassifier-keras.ipynb
mdda/fossasia-2016_deep-learning
mit
Build an SVM model over the features
from sklearn import svm

classifier = svm.LinearSVC()
classifier.fit(train['features'], train['target'])  # learn from the data
notebooks/2-CNN/5-TransferLearning/5-ImageClassifier-keras.ipynb
mdda/fossasia-2016_deep-learning
mit
Use the SVM model to classify the test set
test_image_files = [f for f in os.listdir(CLASS_DIR) if not os.path.isdir(os.path.join(CLASS_DIR, f))]

t0 = time.time()
for filename in sorted(test_image_files):
    filepath = os.path.join(CLASS_DIR, filename)
    im = plt.imread(filepath)
    im_sq = crop_middle_square_area(im)

    # This is two ops : one merely loads the image from numpy,
    #   the other runs the network to get the class probabilities
    x = image_to_input(model_logits, filepath)
    #np_logits = model_logits.predict(x)  # Shape = 1x7x7x1056
    #np_logits_pooled = get_logits_from_non_top( np_logits )

    np_logits_pooled = model_logits.predict(x)[0]  # Shape = 1x1056

    prediction_i = classifier.predict([np_logits_pooled])
    decision = classifier.decision_function([np_logits_pooled])

    plt.figure()
    plt.imshow(im_sq.astype('uint8'))
    plt.axis('off')

    prediction = classes[prediction_i[0]]

    plt.text(2*320, 50, '{} : Distance from boundary = {:5.2f}'.format(prediction, decision[0]), fontsize=20)
    plt.text(2*320, 75, '{}'.format(filename), fontsize=14)

print("DONE : %6.2f seconds each" % (float(time.time() - t0)/len(test_image_files),))
notebooks/2-CNN/5-TransferLearning/5-ImageClassifier-keras.ipynb
mdda/fossasia-2016_deep-learning
mit
Algorithm for processing Chunks

1. Make a partition given the extent
2. Produce a tuple (minx, maxx, miny, maxy) for each element of the partition
3. Calculate the semivariogram for each chunk and save it in a dataframe
4. Plot everything
5. Do the same with a Matern kernel
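The helper getExtentFromPoint is used in the next cell but is not defined in this notebook (it is presumably imported with the other tools). Judging from how it is called, a hypothetical sketch of what it does:

```python
def getExtentFromPoint(x, y, step_sizex, step_sizey):
    """Hypothetical sketch: the chunk extent whose lower-left corner is (x, y)."""
    return (x, x + step_sizex, y, y + step_sizey)
```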
minx, maxx, miny, maxy = getExtent(new_data)

maxy

## If a fixed number of chunks is preferred
N = 100
xp, dx = np.linspace(minx, maxx, N, retstep=True)
yp, dy = np.linspace(miny, maxy, N, retstep=True)

### Distance interval
print(dx)
print(dy)

## Let's build the partition
## If a fixed chunk size is preferred
ds = 300000  # step size (meters)
xp = np.arange(minx, maxx, step=ds)
yp = np.arange(miny, maxy, step=ds)
dx = ds
dy = ds
N = len(xp)

xx, yy = np.meshgrid(xp, yp)

Nx = xp.size
Ny = yp.size

#coordinates_list = [ (xx[i][j],yy[i][j]) for i in range(N) for j in range(N)]
coordinates_list = [(xx[i][j], yy[i][j]) for i in range(Ny) for j in range(Nx)]

from functools import partial
tuples = map(lambda (x, y): partial(getExtentFromPoint, x, y, step_sizex=dx, step_sizey=dy)(), coordinates_list)

chunks = map(lambda (mx, Mx, my, My): subselectDataFrameByCoordinates(new_data, 'newLon', 'newLat', mx, Mx, my, My), tuples)

## Here we can filter based on a threshold
threshold = 20
chunks_non_empty = filter(lambda df: df.shape[0] > threshold, chunks)

len(chunks_non_empty)

lengths = pd.Series(map(lambda ch: ch.shape[0], chunks_non_empty))

lengths.plot.hist()
notebooks/.ipynb_checkpoints/model_by_chunks-checkpoint.ipynb
molgor/spystats
bsd-2-clause
For efficiency purposes, we restrict ourselves to 10 variograms.
smaller_list = chunks_non_empty[:10]
variograms = map(lambda chunk: tools.Variogram(chunk, 'residuals1', using_distance_threshold=200000), smaller_list)

vars = map(lambda v: v.calculateEmpirical(), variograms)
vars = map(lambda v: v.calculateEnvelope(num_iterations=50), variograms)
notebooks/.ipynb_checkpoints/model_by_chunks-checkpoint.ipynb
molgor/spystats
bsd-2-clause
Take an average of the empirical variograms, together with the envelope. We will use the group-by directive on the field lags.
envslow = pd.concat(map(lambda df: df[['envlow']], vars), axis=1)
envhigh = pd.concat(map(lambda df: df[['envhigh']], vars), axis=1)
variogram = pd.concat(map(lambda df: df[['variogram']], vars), axis=1)

lags = vars[0][['lags']]

meanlow = list(envslow.apply(lambda row: np.mean(row), axis=1))
meanhigh = list(envhigh.apply(np.mean, axis=1))
meanvariogram = list(variogram.apply(np.mean, axis=1))

results = pd.DataFrame({'meanvariogram': meanvariogram, 'meanlow': meanlow, 'meanhigh': meanhigh})

result_envelope = pd.concat([lags, results], axis=1)

meanvg = tools.Variogram(section, 'residuals1')

meanvg.plot()

meanvg.envelope.columns

result_envelope.columns

result_envelope.columns = ['lags', 'envhigh', 'envlow', 'variogram']

meanvg.envelope = result_envelope

meanvg.plot(refresh=False)
notebooks/.ipynb_checkpoints/model_by_chunks-checkpoint.ipynb
molgor/spystats
bsd-2-clause
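The "group by on the field lags" idea mentioned above could also be written directly with pandas, by stacking the per-chunk variogram DataFrames and grouping on lags. A sketch, assuming each DataFrame in vars keeps its lags, variogram, envlow and envhigh columns (as the cell above suggests):

```python
import pandas as pd

# Stack all per-chunk variogram DataFrames and average each column per lag value
stacked = pd.concat(vars, axis=0)
averaged = stacked.groupby('lags')[['variogram', 'envlow', 'envhigh']].mean().reset_index()
```

This yields one averaged row per lag, which can then be assigned to meanvg.envelope as above.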
Sommer 2015

Data model

Task

Write a query that returns the data of all customers, the number of their orders, the number of trips, and the total distance in kilometres. The output should be sorted by customer postal code in descending order.

Solution
%%sql
select count(*) as AnzahlFahrten from fahrten
jup_notebooks/datenbanken/Sommer_2015.ipynb
steinam/teacher
mit
Why does a join not work?
%%sql
select k.kd_id, k.`kd_firma`, k.`kd_plz`,
       count(distinct a.Au_ID) as AnzAuftrag,
       count(distinct f.f_id) as AnzFahrt,
       sum(distinct ts.ts_strecke) as SumStrecke
from kunde k
  left join auftrag a on k.`kd_id` = a.`au_kd_id`
  left join fahrten f on a.`au_id` = f.`f_au_id`
  left join teilstrecke ts on ts.`ts_f_id` = f.`f_id`
group by k.kd_id
order by k.`kd_plz`
jup_notebooks/datenbanken/Sommer_2015.ipynb
steinam/teacher
mit
The join approach does not work in this form because, at the latest with the 2nd join, the company Trappo is linked to 2 records from the 1st join. As a result, the number of trips is doubled as well. The same thing happens again with the 3rd join.

The following query shows, without the aggregate functions, the intermediate result:

```mysql
select k.kd_id, k.`kd_firma`, k.`kd_plz`, a.`au_id`
from kunde k
  left join auftrag a on k.`kd_id` = a.`au_kd_id`
  left join fahrten f on a.`au_id` = f.`f_au_id`
  left join teilstrecke ts on ts.`ts_f_id` = f.`f_id`
order by k.`kd_plz`
```
%sql select k.kd_id, k.`kd_firma`, k.`kd_plz`, a.`au_id` from kunde k left join auftrag a on k.`kd_id` = a.`au_kd_id` left join fahrten f on a.`au_id` = f.`f_au_id` left join teilstrecke ts on ts.`ts_f_id` = f.`f_id` order by k.`kd_plz`
jup_notebooks/datenbanken/Sommer_2015.ipynb
steinam/teacher
mit
Winter 2015

Data model

Note: the Rechnung table additionally contains a field Rechnung.Kd_ID.

Task

Write an SQL query that lists all customers that have a payment term (Zahlungsbedingung) with a cash-discount rate greater than 3%, together with the number of their invoices from the year 2015.

Solution
%sql mysql://steinam:steinam@localhost/winter_2015
jup_notebooks/datenbanken/Sommer_2015.ipynb
steinam/teacher
mit
```mysql
select count(rechnung.`Rg_ID`), kunde.`Kd_Name`
from rechnung
  inner join kunde on rechnung.`Rg_KD_ID` = kunde.`Kd_ID`
  inner join zahlungsbedingung on kunde.`Kd_Zb_ID` = zahlungsbedingung.`Zb_ID`
where zahlungsbedingung.`Zb_SkontoProzent` > 3.0
  and year(rechnung.`Rg_Datum`) = 2015
group by Kunde.`Kd_Name`
```
%%sql
select count(rechnung.`Rg_ID`), kunde.`Kd_Name`
from rechnung
  inner join kunde on `rechnung`.`Rg_KD_ID` = kunde.`Kd_ID`
  inner join `zahlungsbedingung` on kunde.`Kd_Zb_ID` = `zahlungsbedingung`.`Zb_ID`
where `zahlungsbedingung`.`Zb_SkontoProzent` > 3.0
  and year(`rechnung`.`Rg_Datum`) = 2015
group by Kunde.`Kd_Name`
jup_notebooks/datenbanken/Sommer_2015.ipynb
steinam/teacher
mit
It also works with a subselect:

```mysql
select kd.`Kd_Name`,
       (select COUNT(*) from Rechnung as R
        where R.`Rg_KD_ID` = KD.`Kd_ID` and year(R.`Rg_Datum`) = 2015)
from Kunde kd
  inner join `zahlungsbedingung` on kd.`Kd_Zb_ID` = `zahlungsbedingung`.`Zb_ID`
    and zahlungsbedingung.Zb_SkontoProzent > 3.0
```
%%sql
select kd.`Kd_Name`,
       (select COUNT(*) from Rechnung as R
        where R.`Rg_KD_ID` = KD.`Kd_ID` and year(R.`Rg_Datum`) = 2015) as Anzahl
from Kunde kd
  inner join `zahlungsbedingung` on kd.`Kd_Zb_ID` = `zahlungsbedingung`.`Zb_ID`
    and `zahlungsbedingung`.`Zb_SkontoProzent` > 3.0
jup_notebooks/datenbanken/Sommer_2015.ipynb
steinam/teacher
mit
Versicherung (insurance)

For every employee of the "Vertrieb" (sales) department, show the first contract (with some details) that he or she concluded. The employee should be displayed with ID and last name/first name.

Data model: Versicherung
%sql -- your code goes here
jup_notebooks/datenbanken/Sommer_2015.ipynb
steinam/teacher
mit
Solution
%sql mysql://steinam:steinam@localhost/versicherung_complete

%%sql
select min(`vv`.`Abschlussdatum`) as 'Erster Abschluss', `vv`.`Mitarbeiter_ID`
from `versicherungsvertrag` vv
  inner join mitarbeiter m on vv.`Mitarbeiter_ID` = m.`ID`
where vv.`Mitarbeiter_ID` in (
    select m.`ID`
    from mitarbeiter m
      inner join Abteilung a on m.`Abteilung_ID` = a.`ID`)
group by vv.`Mitarbeiter_ID`

result = _
result
jup_notebooks/datenbanken/Sommer_2015.ipynb
steinam/teacher
mit
The data source is http://www.quandl.com. We use Blaze to store the data.
with open('../.quandl_api_key.txt', 'r') as f:
    api_key = f.read()

db = Quandl.get("EOD/DB", authtoken=api_key)
bz.odo(db['Rate'].reset_index(), '../data/db.bcolz')

fx = Quandl.get("CURRFX/EURUSD", authtoken=api_key)
bz.odo(fx['Rate'].reset_index(), '../data/eurusd.bcolz')
notebooks/DataPreparation.ipynb
mvaz/osqf2015
mit
We can also migrate it to a SQLite database.
bz.odo('../data/db.bcolz', 'sqlite:///osqf.db::db')

%load_ext sql

%%sql sqlite:///osqf.db
select * from db
notebooks/DataPreparation.ipynb
mvaz/osqf2015
mit
We can perform queries.
d = bz.Data('../data/db.bcolz')
d.Close.max()
notebooks/DataPreparation.ipynb
mvaz/osqf2015
mit
AutoML SDK: AutoML video classification model

Installation

Install the latest (preview) version of the AutoML SDK.
! pip3 install -U google-cloud-automl --user
notebooks/community/migration/UJ14 legacy AutoML Vision Video Classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Install the Google cloud-storage library as well.
! pip3 install google-cloud-storage
notebooks/community/migration/UJ14 legacy AutoML Vision Video Classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Restart the Kernel Once you've installed the AutoML SDK and Google cloud-storage, you need to restart the notebook kernel so it can find the packages.
import os if not os.getenv("AUTORUN"): # Automatically restart kernel after installs import IPython app = IPython.Application.instance() app.kernel.do_shutdown(True)
notebooks/community/migration/UJ14 legacy AutoML Vision Video Classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Before you begin

GPU run-time

Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU.

Set up your GCP project

The following steps are required, regardless of your notebook environment.

Select or create a GCP project. When you first create an account, you get a $300 free credit towards your compute/storage costs.

Make sure that billing is enabled for your project.

Enable the AutoML APIs and Compute Engine APIs.

The Google Cloud SDK is already installed in AutoML Notebooks.

Enter your project ID in the cell below. Then run the cell to make sure the Cloud SDK uses the right project for all the commands in this notebook.

Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.
PROJECT_ID = "[your-project-id]" #@param {type:"string"}

if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
    # Get your GCP project id from gcloud
    shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
    PROJECT_ID = shell_output[0]

print("Project ID:", PROJECT_ID)

! gcloud config set project $PROJECT_ID
notebooks/community/migration/UJ14 legacy AutoML Vision Video Classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Region

You can also change the REGION variable, which is used for operations throughout the rest of this notebook. Below are the regions supported for AutoML. We recommend that you choose the region closest to you when possible.

Americas: us-central1
Europe: europe-west4
Asia Pacific: asia-east1

You cannot use a Multi-Regional Storage bucket for training with AutoML. Not all regions provide support for all AutoML services. For the latest support per region, see Region support for AutoML services.
REGION = 'us-central1' #@param {type: "string"}
notebooks/community/migration/UJ14 legacy AutoML Vision Video Classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Timestamp

If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on created resources, you create a timestamp for each session and append it to the name of the resources you create in this tutorial.
from datetime import datetime

TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
notebooks/community/migration/UJ14 legacy AutoML Vision Video Classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Authenticate your GCP account If you are using AutoML Notebooks, your environment is already authenticated. Skip this step. Note: If you are on an AutoML notebook and run the cell, the cell knows to skip executing the authentication steps.
import os
import sys

# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your Google Cloud account. This provides access
# to your Cloud Storage bucket and lets you submit training jobs and prediction
# requests.

# If on AutoML, then don't execute this code
if not os.path.exists('/opt/deeplearning/metadata/env_version'):
    if 'google.colab' in sys.modules:
        from google.colab import auth as google_auth
        google_auth.authenticate_user()

    # If you are running this tutorial in a notebook locally, replace the string
    # below with the path to your service account key and run this cell to
    # authenticate your Google Cloud account.
    else:
        %env GOOGLE_APPLICATION_CREDENTIALS your_path_to_credentials.json

        # Log in to your account on Google Cloud
        ! gcloud auth login
notebooks/community/migration/UJ14 legacy AutoML Vision Video Classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Create a Cloud Storage bucket The following steps are required, regardless of your notebook environment. This tutorial is designed to use training data that is in a public Cloud Storage bucket and a local Cloud Storage bucket for your batch predictions. You may alternatively use your own training data that you have stored in a local Cloud Storage bucket. Set the name of your Cloud Storage bucket below. It must be unique across all Cloud Storage buckets.
BUCKET_NAME = "[your-bucket-name]" #@param {type:"string"}

if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "[your-bucket-name]":
    BUCKET_NAME = PROJECT_ID + "aip-" + TIMESTAMP
notebooks/community/migration/UJ14 legacy AutoML Vision Video Classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
! gsutil mb -l $REGION gs://$BUCKET_NAME
notebooks/community/migration/UJ14 legacy AutoML Vision Video Classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Finally, validate access to your Cloud Storage bucket by examining its contents:
! gsutil ls -al gs://$BUCKET_NAME
notebooks/community/migration/UJ14 legacy AutoML Vision Video Classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Set up variables Next, set up some variables used throughout the tutorial. Import libraries and define constants Import AutoML SDK Import the AutoML SDK into our Python environment.
import json
import os
import sys
import time

from google.cloud import automl_v1beta1 as automl
from google.protobuf.json_format import MessageToJson
from google.protobuf.json_format import ParseDict
notebooks/community/migration/UJ14 legacy AutoML Vision Video Classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
AutoML constants

Set up the following constants for AutoML:

PARENT: The AutoML location root path for dataset, model and endpoint resources.
# AutoML location root path for your dataset, model and endpoint resources
PARENT = "projects/" + PROJECT_ID + "/locations/" + REGION
notebooks/community/migration/UJ14 legacy AutoML Vision Video Classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Clients

The AutoML SDK works as a client/server model. On your side (the Python script) you create a client that sends requests to and receives responses from the server (AutoML).

You will use several clients in this tutorial, so set them all up upfront.
def automl_client():
    return automl.AutoMlClient()

def prediction_client():
    return automl.PredictionServiceClient()

def operations_client():
    return automl.AutoMlClient()._transport.operations_client

clients = {}
clients["automl"] = automl_client()
clients["prediction"] = prediction_client()
clients["operations"] = operations_client()

for client in clients.items():
    print(client)

IMPORT_FILE = 'gs://automl-video-demo-data/hmdb_split1.csv'

! gsutil cat $IMPORT_FILE | head -n 10
notebooks/community/migration/UJ14 legacy AutoML Vision Video Classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Example output: TRAIN,gs://automl-video-demo-data/hmdb_split1_5classes_train_inf.csv TEST,gs://automl-video-demo-data/hmdb_split1_5classes_test_inf.csv Create a dataset projects.locations.datasets.create Request
dataset = {
    "display_name": "hmdb_" + TIMESTAMP,
    "video_classification_dataset_metadata": {}
}

print(MessageToJson(
    automl.CreateDatasetRequest(
        parent=PARENT,
        dataset=dataset
    ).__dict__["_pb"])
)
notebooks/community/migration/UJ14 legacy AutoML Vision Video Classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Example output: { "parent": "projects/migration-ucaip-training/locations/us-central1", "dataset": { "displayName": "hmdb_20210228225744", "videoClassificationDatasetMetadata": {} } } Call
request = clients["automl"].create_dataset( parent=PARENT, dataset=dataset )
notebooks/community/migration/UJ14 legacy AutoML Vision Video Classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Response
result = request

print(MessageToJson(result.__dict__["_pb"]))
notebooks/community/migration/UJ14 legacy AutoML Vision Video Classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Example output: { "name": "projects/116273516712/locations/us-central1/datasets/VCN6574174086275006464", "displayName": "hmdb_20210228225744", "createTime": "2021-02-28T23:06:43.197904Z", "etag": "AB3BwFrtf0Yl4fgnXW4leoEEANTAGQdOngyIqdQSJBT9pKEChgeXom-0OyH7dKtfvA4=", "videoClassificationDatasetMetadata": {} }
# The full unique ID for the dataset
dataset_id = result.name
# The short numeric ID for the dataset
dataset_short_id = dataset_id.split('/')[-1]

print(dataset_id)
notebooks/community/migration/UJ14 legacy AutoML Vision Video Classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
projects.locations.datasets.importData Request
input_config = {
    "gcs_source": {
        "input_uris": [IMPORT_FILE]
    }
}

print(MessageToJson(
    automl.ImportDataRequest(
        name=dataset_short_id,
        input_config=input_config
    ).__dict__["_pb"])
)
notebooks/community/migration/UJ14 legacy AutoML Vision Video Classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Example output: { "name": "VCN6574174086275006464", "inputConfig": { "gcsSource": { "inputUris": [ "gs://automl-video-demo-data/hmdb_split1.csv" ] } } } Call
request = clients["automl"].import_data( name=dataset_id, input_config=input_config )
notebooks/community/migration/UJ14 legacy AutoML Vision Video Classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Response
result = request.result()

print(MessageToJson(result))
notebooks/community/migration/UJ14 legacy AutoML Vision Video Classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Example output: {} Train a model projects.locations.models.create Request
model = {
    "display_name": "hmdb_" + TIMESTAMP,
    "dataset_id": dataset_short_id,
    "video_classification_model_metadata": {}
}

print(MessageToJson(
    automl.CreateModelRequest(
        parent=PARENT,
        model=model
    ).__dict__["_pb"])
)
notebooks/community/migration/UJ14 legacy AutoML Vision Video Classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Example output: { "parent": "projects/migration-ucaip-training/locations/us-central1", "model": { "displayName": "hmdb_20210228225744", "datasetId": "VCN6574174086275006464", "videoClassificationModelMetadata": {} } } Call
request = clients["automl"].create_model( parent=PARENT, model=model )
notebooks/community/migration/UJ14 legacy AutoML Vision Video Classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Response
result = request.result()

print(MessageToJson(result.__dict__["_pb"]))
notebooks/community/migration/UJ14 legacy AutoML Vision Video Classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Example output: { "name": "projects/116273516712/locations/us-central1/models/VCN6188818900239515648" }
# The full unique ID for the training pipeline
model_id = result.name
# The short numeric ID for the training pipeline
model_short_id = model_id.split('/')[-1]

print(model_short_id)
notebooks/community/migration/UJ14 legacy AutoML Vision Video Classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Evaluate the model projects.locations.models.modelEvaluations.list Call
request = clients["automl"].list_model_evaluations( parent=model_id )
notebooks/community/migration/UJ14 legacy AutoML Vision Video Classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Response
import json

model_evaluations = [
    json.loads(MessageToJson(me.__dict__["_pb"])) for me in request
]
# The evaluation slice
evaluation_slice = request.model_evaluation[0].name

print(json.dumps(model_evaluations, indent=2))
notebooks/community/migration/UJ14 legacy AutoML Vision Video Classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Example output ``` [ { "name": "projects/116273516712/locations/us-central1/models/VCN6188818900239515648/modelEvaluations/1998146574672720266", "createTime": "2021-03-01T01:02:02.452298Z", "evaluatedExampleCount": 150, "classificationEvaluationMetrics": { "auPrc": 1.0, "confidenceMetricsEntry": [ { "confidenceThreshold": 0.016075565, "recall": 1.0, "precision": 0.2, "f1Score": 0.33333334 }, { "confidenceThreshold": 0.017114623, "recall": 1.0, "precision": 0.202977, "f1Score": 0.3374578 }, # REMOVED FOR BREVITY { "confidenceThreshold": 0.9299338, "recall": 0.033333335, "precision": 1.0, "f1Score": 0.06451613 } ] }, "displayName": "golf" } ] ``` projects.locations.models.modelEvaluations.get Call
request = clients["automl"].get_model_evaluation( name=evaluation_slice )
notebooks/community/migration/UJ14 legacy AutoML Vision Video Classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Response
print(MessageToJson(request.__dict__["_pb"]))
notebooks/community/migration/UJ14 legacy AutoML Vision Video Classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Example output: ``` { "name": "projects/116273516712/locations/us-central1/models/VCN6188818900239515648/modelEvaluations/1998146574672720266", "createTime": "2021-03-01T01:02:02.452298Z", "evaluatedExampleCount": 150, "classificationEvaluationMetrics": { "auPrc": 1.0, "confidenceMetricsEntry": [ { "confidenceThreshold": 0.016075565, "recall": 1.0, "precision": 0.2, "f1Score": 0.33333334 }, { "confidenceThreshold": 0.017114623, "recall": 1.0, "precision": 0.202977, "f1Score": 0.3374578 }, # REMOVED FOR BREVITY { "confidenceThreshold": 0.9299338, "recall": 0.006666667, "precision": 1.0, "f1Score": 0.013245033 } ], "confusionMatrix": { "annotationSpecId": [ "175274248095399936", "2048771693081526272", "4354614702295220224", "6660457711508914176", "8966300720722608128" ], "row": [ { "exampleCount": [ 30, 0, 0, 0, 0 ] }, { "exampleCount": [ 0, 30, 0, 0, 0 ] }, { "exampleCount": [ 0, 0, 30, 0, 0 ] }, { "exampleCount": [ 0, 0, 0, 30, 0 ] }, { "exampleCount": [ 0, 0, 0, 0, 30 ] } ], "displayName": [ "ride_horse", "golf", "cartwheel", "pullup", "kick_ball" ] } } } ``` Make batch predictions Make the batch input file To request a batch of predictions from AutoML Video, create a CSV file that lists the Cloud Storage paths to the videos that you want to annotate. You can also specify a start and end time to tell AutoML Video to only annotate a segment (segment-level) of the video. The start time must be zero or greater and must be before the end time. The end time must be greater than the start time and less than or equal to the duration of the video. You can also use inf to indicate the end of a video. example: gs://my-videos-vcm/short_video_1.avi,0.0,5.566667 gs://my-videos-vcm/car_chase.avi,0.0,3.933333
TRAIN_FILES = "gs://automl-video-demo-data/hmdb_split1_5classes_train_inf.csv"

test_items = ! gsutil cat $TRAIN_FILES | head -n2

cols = str(test_items[0]).split(',')
test_item_1, test_label_1, test_start_1, test_end_1 = str(cols[0]), str(cols[1]), str(cols[2]), str(cols[3])
print(test_item_1, test_label_1)

cols = str(test_items[1]).split(',')
test_item_2, test_label_2, test_start_2, test_end_2 = str(cols[0]), str(cols[1]), str(cols[2]), str(cols[3])
print(test_item_2, test_label_2)
notebooks/community/migration/UJ14 legacy AutoML Vision Video Classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Example output: gs://automl-video-demo-data/hmdb51/_Rad_Schlag_die_Bank__cartwheel_f_cm_np1_le_med_0.avi cartwheel gs://automl-video-demo-data/hmdb51/Acrobacias_de_un_fenomeno_cartwheel_f_cm_np1_ba_bad_8.avi cartwheel
import tensorflow as tf
import json

gcs_input_uri = "gs://" + BUCKET_NAME + '/test.csv'

with tf.io.gfile.GFile(gcs_input_uri, 'w') as f:
    data = f"{test_item_1}, {test_start_1}, {test_end_1}"
    f.write(data + '\n')
    data = f"{test_item_2}, {test_start_2}, {test_end_2}"
    f.write(data + '\n')

print(gcs_input_uri)
! gsutil cat $gcs_input_uri
notebooks/community/migration/UJ14 legacy AutoML Vision Video Classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Example output: gs://migration-ucaip-trainingaip-20210228225744/test.csv gs://automl-video-demo-data/hmdb51/_Rad_Schlag_die_Bank__cartwheel_f_cm_np1_le_med_0.avi, 0.0, inf gs://automl-video-demo-data/hmdb51/Acrobacias_de_un_fenomeno_cartwheel_f_cm_np1_ba_bad_8.avi, 0.0, inf projects.locations.models.batchPredict Request
input_config = {
    "gcs_source": {
        "input_uris": [gcs_input_uri]
    }
}

output_config = {
    "gcs_destination": {
        "output_uri_prefix": "gs://" + f"{BUCKET_NAME}/batch_output/"
    }
}

batch_prediction = automl.BatchPredictRequest(
    name=model_id,
    input_config=input_config,
    output_config=output_config
)

print(MessageToJson(
    batch_prediction.__dict__["_pb"])
)
notebooks/community/migration/UJ14 legacy AutoML Vision Video Classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Example output: { "name": "projects/116273516712/locations/us-central1/models/VCN6188818900239515648", "inputConfig": { "gcsSource": { "inputUris": [ "gs://migration-ucaip-trainingaip-20210228225744/test.csv" ] } }, "outputConfig": { "gcsDestination": { "outputUriPrefix": "gs://migration-ucaip-trainingaip-20210228225744/batch_output/" } } } Call
request = clients["prediction"].batch_predict( request=batch_prediction )
notebooks/community/migration/UJ14 legacy AutoML Vision Video Classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Example output: {} Cleaning up To clean up all GCP resources used in this project, you can delete the GCP project you used for the tutorial. Otherwise, you can delete the individual resources you created in this tutorial.
delete_dataset = True
delete_model = True
delete_bucket = True

# Delete the dataset using the AutoML fully qualified identifier for the dataset
try:
    if delete_dataset:
        clients['automl'].delete_dataset(name=dataset_id)
except Exception as e:
    print(e)

# Delete the model using the AutoML fully qualified identifier for the model
try:
    if delete_model:
        clients['automl'].delete_model(name=model_id)
except Exception as e:
    print(e)

if delete_bucket and 'BUCKET_NAME' in globals():
    ! gsutil rm -r gs://$BUCKET_NAME
notebooks/community/migration/UJ14 legacy AutoML Vision Video Classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Class

A class is a blueprint defining the characteristics and behaviors of an object.

```python
class MyClass:
    ...
    ...
```

For a simple class, one shall define an __init__() method to handle its variables when an instance is created. Let's try the following example:
class Person:
    def __init__(self, age, salary):
        self.age = age
        self.salary = salary

    def out(self):
        print(self.age)
        print(self.salary)
notebooks/05-Python-Functions-Class.ipynb
ryan-leung/PHYS4650_Python_Tutorial
bsd-3-clause
This is a basic class definition, the age and salary are needed when creating this object. The new class can be invoked like this:
a = Person(30, 10000)
a.out()
notebooks/05-Python-Functions-Class.ipynb
ryan-leung/PHYS4650_Python_Tutorial
bsd-3-clause
The __init__ method initializes the variables stored in the class. When they are used inside the class, we should add self. in front of the variable name. The out(self) method is an arbitrary function that can be used by calling YourClass.yourfunction(). Inputs to such functions can be added after the self argument.

Python Conditionals And Loops

The for statement

The for statement reads for xxx in yyyy:. yyyy shall be an iterable, i.e. a tuple or list or something else that can be iterated over. After this line, the user should add an indentation at the start of the next line, either by spaces or a tab.

Conditionals

A conditional statement is a programming concept that describes whether a region of code runs based on whether a condition is true or false. The keywords involved in conditional statements are if, and optionally elif and else.
# make a list
students = ['boy', 'boy', 'girl', 'boy', 'girl', 'girl',
            'boy', 'boy', 'girl', 'girl', 'boy', 'boy']

boys = 0; girls = 0

for s in students:
    if s == 'boy':
        boys = boys + 1
    else:
        girls += 1

print("boys:", boys)
print("girls:", girls)
notebooks/05-Python-Functions-Class.ipynb
ryan-leung/PHYS4650_Python_Tutorial
bsd-3-clause
The while statement

The while statement reads while CONDITIONAL:. CONDITIONAL is a conditional expression, like i < 100, or a boolean variable. After this line, the user should add an indentation at the start of the next line, either by spaces or a tab.
def int_sum(n):
    s = 0; i = 1
    while i < n:
        s += i*i
        i += 1
    return s

int_sum(1000)
notebooks/05-Python-Functions-Class.ipynb
ryan-leung/PHYS4650_Python_Tutorial
bsd-3-clause
Performance
%timeit int_sum(100000)
notebooks/05-Python-Functions-Class.ipynb
ryan-leung/PHYS4650_Python_Tutorial
bsd-3-clause
<img src="images/numba-blue-horizontal-rgb.svg" alt="numba" style="width: 600px;"/> <img src="images/numba_features.png" alt="numba" style="width: 600px;"/>

Numba translates Python functions to optimized machine code at runtime using the LLVM compiler library. Decorated functions are compiled to machine code the first time they are called. To install numba:

```
pip install numba
```
import numba

@numba.njit
def int_sum_nb(n):
    s = 0; i = 1
    while i < n:
        s += i*i
        i += 1
    return s

int_sum_nb(1000)

%timeit int_sum_nb(100000)
notebooks/05-Python-Functions-Class.ipynb
ryan-leung/PHYS4650_Python_Tutorial
bsd-3-clause
Examples
import random

def monte_carlo_pi(n):
    acc = 0
    for i in range(n):
        x = random.random()
        y = random.random()
        if (x**2 + y**2) < 1.0:
            acc += 1
    return 4.0 * acc / n

monte_carlo_pi(1000000)

%timeit monte_carlo_pi(1000000)

@numba.njit
def monte_carlo_pi_nb(n):
    acc = 0
    for i in range(n):
        x = random.random()
        y = random.random()
        if (x**2 + y**2) < 1.0:
            acc += 1
    return 4.0 * acc / n

monte_carlo_pi_nb(1000000)

%timeit monte_carlo_pi_nb(1000000)

@numba.njit
def monte_carlo_pi_nbmt(n):
    acc = 0
    for i in numba.prange(n):
        x = random.random()
        y = random.random()
        if (x**2 + y**2) < 1.0:
            acc += 1
    return 4.0 * acc / n

monte_carlo_pi_nbmt(1000000)

%timeit monte_carlo_pi_nbmt(1000000)
notebooks/05-Python-Functions-Class.ipynb
ryan-leung/PHYS4650_Python_Tutorial
bsd-3-clause
Data Clean up

In the last section of looking around, I saw that a lot of rows do not have any values or have garbage values (see the first row of the table above). This can cause errors when computing anything using the values in these rows, hence a clean up is required.

If the columns last_activity and first_login are empty, then drop the corresponding row!
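Before the row-by-row clean-up further below, note that the basic "drop rows where last_activity or first_login is missing" step could also be done in one vectorized call. A sketch, assuming the same column names (the hypothetical user_data_no_missing name is only for illustration; the loop below additionally handles swapped timestamps and missing emails):

```python
# Vectorized alternative to the row loop below (sketch only):
# drop rows where either timestamp is missing
user_data_no_missing = user_data_to_clean.dropna(subset=['last_activity', 'first_login'])
```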
# Lets check the health of the data set user_data_to_clean.info()
Janacare_Habits_dataset_upto-7May2016.ipynb
gprakhar/janCC
bsd-3-clause
As is visible from the last column's (age_on_platform) data type, Pandas is not recognising it as a date type format. This will make things difficult, so I delete this particular column and add a new one, since the data in age_on_platform can be recreated by doing age_on_platform = last_activity - first_login.

But on eyeballing I noticed that some cells of the column first_login have a greater value than the corresponding cell of last_activity. These cells need to be swapped, since it is not possible to have first_login > last_activity.

Finally, the columns first_login and last_activity have missing values, as evident from the table above. Since this is time data that, in my opinion, should not be imputed, we will drop/delete the affected rows.
# Run a loop through the data frame and check each row for this anomaly; if found, drop the row.
# This is being done ONLY for selected columns.
import datetime

swapped_count = 0
first_login_count = 0
last_activity_count = 0
email_count = 0
userid_count = 0

for index, row in user_data_to_clean.iterrows():
    if row.last_activity == pd.NaT or row.last_activity != row.last_activity:
        last_activity_count = last_activity_count + 1
        #print row.last_activity
        user_data_to_clean.drop(index, inplace=True)
    elif row.first_login > row.last_activity:
        user_data_to_clean.drop(index, inplace=True)
        swapped_count = swapped_count + 1
    elif row.first_login != row.first_login or row.first_login == pd.NaT:
        user_data_to_clean.drop(index, inplace=True)
        first_login_count = first_login_count + 1
    elif row.email != row.email:  #or row.email == '' or row.email == ' ':
        user_data_to_clean.drop(index, inplace=True)
        email_count = email_count + 1
    elif row.user_id != row.user_id:
        user_data_to_clean.drop(index, inplace=True)
        userid_count = userid_count + 1

print "last_activity_count=%d\tswapped_count=%d\tfirst_login_count=%d\temail_count=%d\tuserid_count=%d" \
    % (last_activity_count, swapped_count, first_login_count, email_count, userid_count)

user_data_to_clean.shape

# Create new column 'age_on_platform' which has the corresponding value in date type format
user_data_to_clean["age_on_platform"] = user_data_to_clean["last_activity"] - user_data_to_clean["first_login"]

user_data_to_clean.info()
Janacare_Habits_dataset_upto-7May2016.ipynb
gprakhar/janCC
bsd-3-clause
Validate that each email id is correctly formatted and that the email id really exists.
from validate_email import validate_email

email_count_invalid = 0
for index, row in user_data_to_clean.iterrows():
    if not validate_email(row.email):  # , verify=True) for checking if email id actually exists
        user_data_to_clean.drop(index, inplace=True)
        email_count_invalid = email_count_invalid + 1

print "Number of email-id invalid: %d" % (email_count_invalid)

# Check the result of last operation
user_data_to_clean.info()
Janacare_Habits_dataset_upto-7May2016.ipynb
gprakhar/janCC
bsd-3-clause
Remove duplicates
user_data_to_deDuplicate = user_data_to_clean.copy()

user_data_deDuplicateD = user_data_to_deDuplicate.loc[~user_data_to_deDuplicate.email.str.strip().duplicated()]
len(user_data_deDuplicateD)

user_data_deDuplicateD.info()

# Now its time to convert the timedelta64 data type column named age_on_platform to seconds
def convert_timedelta64_to_sec(td64):
    ts = (td64 / np.timedelta64(1, 's'))
    return ts

user_data_deDuplicateD_timedelta64_converted = user_data_deDuplicateD.copy()
temp_copy = user_data_deDuplicateD.copy()

user_data_deDuplicateD_timedelta64_converted.drop("age_on_platform", 1)
user_data_deDuplicateD_timedelta64_converted['age_on_platform'] = temp_copy['age_on_platform'].apply(convert_timedelta64_to_sec)

user_data_deDuplicateD_timedelta64_converted.info()
Janacare_Habits_dataset_upto-7May2016.ipynb
gprakhar/janCC
bsd-3-clause
Clustering using Mean shift

```python
from sklearn.cluster import MeanShift, estimate_bandwidth

x = [1,1,5,6,1,5,10,22,23,23,50,51,51,52,100,112,130,500,512,600,12000,12230]
x = pd.Series(user_data_deDuplicateD_timedelta64_converted['age_on_platform'])

X = np.array(zip(x, np.zeros(len(x))), dtype=np.int)

'''--
bandwidth = estimate_bandwidth(X, quantile=0.2)
ms = MeanShift(bandwidth=bandwidth, bin_seeding=True)
ms.fit(X)
labels = ms.labels_
cluster_centers = ms.cluster_centers_

labels_unique = np.unique(labels)
n_clusters_ = len(labels_unique)

for k in range(n_clusters_):
    my_members = labels == k
    print "cluster {0} : length = {1}".format(k, len(X[my_members, 0]))
    #print "cluster {0}: {1}".format(k, X[my_members, 0])
    cluster_sorted = sorted(X[my_members, 0])
    print "cluster {0} : Max = {2} days & Min {1} days".format(k, cluster_sorted[0]*1.15741e-5, cluster_sorted[-1]*1.15741e-5)
'''

# The following bandwidth can be automatically detected using
bandwidth = estimate_bandwidth(X, quantile=0.7)

ms = MeanShift(bandwidth=bandwidth, bin_seeding=True)
ms.fit(X)
labels = ms.labels_
cluster_centers = ms.cluster_centers_

labels_unique = np.unique(labels)
n_clusters_ = len(labels_unique)

print("number of estimated clusters : %d" % n_clusters_)

for k in range(n_clusters_):
    my_members = labels == k
    print "cluster {0} : length = {1}".format(k, len(X[my_members, 0]))
    cluster_sorted = sorted(X[my_members, 0])
    print "cluster {0} : Min = {1} days & Max {2} days".format(k, cluster_sorted[0]*1.15741e-5, cluster_sorted[-1]*1.15741e-5)
```

Plot result

```python
import matplotlib.pyplot as plt
from itertools import cycle
%matplotlib inline

plt.figure(1)
plt.clf()

colors = cycle('bgrcmykbgrcmykbgrcmykbgrcmyk')
for k, col in zip(range(n_clusters_), colors):
    my_members = labels == k
    cluster_center = cluster_centers[k]
    plt.plot(X[my_members, 0], X[my_members, 1], col + '.')
    plt.plot(cluster_center[0], cluster_center[1], 'o', markerfacecolor=col,
             markeredgecolor='k', markersize=14)
plt.title('Estimated number of clusters: %d' % n_clusters_)
plt.show()
```
# Clustering using Kmeans, not working
'''
y = [1,1,5,6,1,5,10,22,23,23,50,51,51,52,100,112,130,500,512,600,12000,12230]
y_float = map(float, y)

x = range(len(y))
x_float = map(float, x)

m = np.matrix([x_float, y_float]).transpose()

from scipy.cluster.vq import kmeans
kclust = kmeans(m, 5)

kclust[0][:, 0]

assigned_clusters = [abs(cluster_indices - e).argmin() for e in x]
'''
Janacare_Habits_dataset_upto-7May2016.ipynb
gprakhar/janCC
bsd-3-clause
Binning based on age_on_platform

day 1; day 2; week 1; week 2; week 3; week 4; week 6; week 8; week 12; 3 months; 6 months; 1 year
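The long if/elif chain in the next cell could also be expressed with pd.cut over the bin edges (in hours). This is only a compact sketch of the same binning, assuming age_on_platform has already been converted to hours as in the cell below (the binned name is hypothetical):

```python
import numpy as np
import pandas as pd

# Bin edges in hours: 1 day, 2 days, 1/2/3/4 weeks, 6/8/12 weeks, 6 months, 1 year, beyond
edges = [0, 25, 49, 169, 337, 505, 673, 1009, 1345, 2017, 4381, 8761, np.inf]
binned = pd.cut(user_data_binned_post30thApril['age_on_platform'],
                bins=edges, labels=range(1, 13), right=False)
```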
user_data_binned = user_data_deDuplicateD_timedelta64_converted.copy()

# function to convert age_on_platform in seconds to hours
convert_sec_to_hr = lambda x: x/3600
user_data_binned["age_on_platform"] = user_data_binned['age_on_platform'].map(convert_sec_to_hr).copy()

# filter rows based on first_login value after 30th April
user_data_binned_post30thApril = user_data_binned[user_data_binned.first_login < datetime.datetime(2016, 4, 30)]

for index, row in user_data_binned_post30thApril.iterrows():
    if row["age_on_platform"] < 25:
        user_data_binned_post30thApril.set_value(index, 'bin', 1)
    elif row["age_on_platform"] >= 25 and row["age_on_platform"] < 49:
        user_data_binned_post30thApril.set_value(index, 'bin', 2)
    elif row["age_on_platform"] >= 49 and row["age_on_platform"] < 169:     # 168 hrs = 1 week
        user_data_binned_post30thApril.set_value(index, 'bin', 3)
    elif row["age_on_platform"] >= 169 and row["age_on_platform"] < 337:    # 336 hrs = 2 weeks
        user_data_binned_post30thApril.set_value(index, 'bin', 4)
    elif row["age_on_platform"] >= 337 and row["age_on_platform"] < 505:    # 504 hrs = 3 weeks
        user_data_binned_post30thApril.set_value(index, 'bin', 5)
    elif row["age_on_platform"] >= 505 and row["age_on_platform"] < 673:    # 672 hrs = 4 weeks
        user_data_binned_post30thApril.set_value(index, 'bin', 6)
    elif row["age_on_platform"] >= 673 and row["age_on_platform"] < 1009:   # 1008 hrs = 6 weeks
        user_data_binned_post30thApril.set_value(index, 'bin', 7)
    elif row["age_on_platform"] >= 1009 and row["age_on_platform"] < 1345:  # 1344 hrs = 8 weeks
        user_data_binned_post30thApril.set_value(index, 'bin', 8)
    elif row["age_on_platform"] >= 1345 and row["age_on_platform"] < 2017:  # 2016 hrs = 12 weeks
        user_data_binned_post30thApril.set_value(index, 'bin', 9)
    elif row["age_on_platform"] >= 2017 and row["age_on_platform"] < 4381:  # 4380 hrs = 6 months
        user_data_binned_post30thApril.set_value(index, 'bin', 10)
    elif row["age_on_platform"] >= 4381 and row["age_on_platform"] < 8761:  # 8760 hrs = 12 months
        user_data_binned_post30thApril.set_value(index, 'bin', 11)
    elif row["age_on_platform"] > 8761:  # Rest, i.e. beyond 1 year
        user_data_binned_post30thApril.set_value(index, 'bin', 12)
    else:
        user_data_binned_post30thApril.set_value(index, 'bin', 0)

user_data_binned_post30thApril.info()

print "Number of users with age_on_platform equal to 1 day or less, aka 0th day = %d" %\
    len(user_data_binned_post30thApril[user_data_binned_post30thApril.bin == 1])
user_data_binned_post30thApril[user_data_binned_post30thApril.bin == 1].to_csv\
    ("/home/eyebell/local_bin/janacare/janCC/datasets/user_retention_email-campaign/user_data_binned_post30thApril_0day.csv", index=False)

print "Number of users with age_on_platform between 1st and 2nd days = %d" %\
    len(user_data_binned_post30thApril[user_data_binned_post30thApril.bin == 2])
user_data_binned_post30thApril[user_data_binned_post30thApril.bin == 2].to_csv\
    ("/home/eyebell/local_bin/janacare/janCC/datasets/user_retention_email-campaign/user_data_binned_post30thApril_1st-day.csv", index=False)

print "Number of users with age_on_platform greater than or equal to 2 complete days and less than 1 week = %d" %\
    len(user_data_binned_post30thApril[user_data_binned_post30thApril.bin == 3])
user_data_binned_post30thApril[user_data_binned_post30thApril.bin == 3].to_csv\
    ("/home/eyebell/local_bin/janacare/janCC/datasets/user_retention_email-campaign/user_data_binned_post30thApril_1st-week.csv", index=False)

print "Number of users with age_on_platform between 2nd week = %d" %\
    len(user_data_binned_post30thApril[user_data_binned_post30thApril.bin == 4])
user_data_binned_post30thApril[user_data_binned_post30thApril.bin == 4].to_csv\
    ("/home/eyebell/local_bin/janacare/janCC/datasets/user_retention_email-campaign/user_data_binned_post30thApril_2nd-week.csv", index=False)

print "Number of users with age_on_platform between 3rd weeks = %d" %\
    len(user_data_binned_post30thApril[user_data_binned_post30thApril.bin == 5])
user_data_binned_post30thApril[user_data_binned_post30thApril.bin == 5].to_csv\
    ("/home/eyebell/local_bin/janacare/janCC/datasets/user_retention_email-campaign/user_data_binned_post30thApril_3rd-week.csv", index=False)

print "Number of users with age_on_platform between 4th weeks = %d" %\
    len(user_data_binned_post30thApril[user_data_binned_post30thApril.bin == 6])
user_data_binned_post30thApril[user_data_binned_post30thApril.bin == 6].to_csv\
    ("/home/eyebell/local_bin/janacare/janCC/datasets/user_retention_email-campaign/user_data_binned_post30thApril_4th-week.csv", index=False)

print "Number of users with age_on_platform greater than or equal to 4 weeks and less than 6 weeks = %d" %\
    len(user_data_binned_post30thApril[user_data_binned_post30thApril.bin == 7])
user_data_binned_post30thApril[user_data_binned_post30thApril.bin == 7].to_csv\
    ("/home/eyebell/local_bin/janacare/janCC/datasets/user_retention_email-campaign/user_data_binned_post30thApril_4th-to-6th-week.csv", index=False)

print "Number of users with age_on_platform greater than or equal to 6 weeks and less than 8 weeks = %d" %\
    len(user_data_binned_post30thApril[user_data_binned_post30thApril.bin == 8])
user_data_binned_post30thApril[user_data_binned_post30thApril.bin == 8].to_csv\
    ("/home/eyebell/local_bin/janacare/janCC/datasets/user_retention_email-campaign/user_data_binned_post30thApril_6th-to-8th-week.csv", index=False)

print "Number of users with age_on_platform greater than or equal to 8 weeks and less than 12 weeks = %d" %\
    len(user_data_binned_post30thApril[user_data_binned_post30thApril.bin == 9])
user_data_binned_post30thApril[user_data_binned_post30thApril.bin == 9].to_csv\
    ("/home/eyebell/local_bin/janacare/janCC/datasets/user_retention_email-campaign/user_data_binned_post30thApril_8th-to-12th-week.csv", index=False)

print "Number of users with age_on_platform greater than or equal to 12 weeks and less than 6 months = %d" %\
    len(user_data_binned_post30thApril[user_data_binned_post30thApril.bin == 10])
user_data_binned_post30thApril[user_data_binned_post30thApril.bin == 10].to_csv\
    ("/home/eyebell/local_bin/janacare/janCC/datasets/user_retention_email-campaign/user_data_binned_post30thApril_12thweek-to-6thmonth.csv", index=False)

print "Number of users with age_on_platform greater than or equal to 6 months and less than 1 year = %d" %\
    len(user_data_binned_post30thApril[user_data_binned_post30thApril.bin == 11])
user_data_binned_post30thApril[user_data_binned_post30thApril.bin == 11].to_csv\
    ("/home/eyebell/local_bin/janacare/janCC/datasets/user_retention_email-campaign/user_data_binned_post30thApril_6thmonth-to-1year.csv", index=False)

print "Number of users with age_on_platform greater than 1 year = %d" %\
    len(user_data_binned_post30thApril[user_data_binned_post30thApril.bin == 12])
user_data_binned_post30thApril[user_data_binned_post30thApril.bin == 12].to_csv\
    ("/home/eyebell/local_bin/janacare/janCC/datasets/user_retention_email-campaign/user_data_binned_post30thApril_beyond-1year.csv", index=False)

print "Number of users with age_on_platform is weird = %d" %\
    len(user_data_binned_post30thApril[user_data_binned_post30thApril.bin == 0])

# Save dataframe with binned values as CSV
#user_data_binned_post30thApril.to_csv('user_data_binned_post30thApril.csv')
Janacare_Habits_dataset_upto-7May2016.ipynb
gprakhar/janCC
bsd-3-clause
TensorFlow Addons Optimizers: LazyAdam <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/addons/tutorials/optimizers_lazyadam"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/addons/blob/master/docs/tutorials/optimizers_lazyadam.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/addons/blob/master/docs/tutorials/optimizers_lazyadam.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/addons/docs/tutorials/optimizers_lazyadam.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table> Overview This notebook will demonstrate how to use the lazy adam optimizer from the Addons package. LazyAdam LazyAdam is a variant of the Adam optimizer that handles sparse updates more efficiently. The original Adam algorithm maintains two moving-average accumulators for each trainable variable; the accumulators are updated at every step. This class provides lazier handling of gradient updates for sparse variables. It only updates moving-average accumulators for sparse variable indices that appear in the current batch, rather than updating the accumulators for all indices. Compared with the original Adam optimizer, it can provide large improvements in model training throughput for some applications. However, it provides slightly different semantics than the original Adam algorithm, and may lead to different empirical results. Setup
!pip install -U tensorflow-addons import tensorflow as tf import tensorflow_addons as tfa # Hyperparameters batch_size=64 epochs=10
site/en-snapshot/addons/tutorials/optimizers_lazyadam.ipynb
tensorflow/docs-l10n
apache-2.0
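The dense MNIST model used below does not actually produce sparse gradients, so it mainly demonstrates the drop-in API. Where LazyAdam's lazy accumulator updates matter is with layers such as tf.keras.layers.Embedding. The snippet below is a small illustrative sketch with made-up toy data, not part of the original tutorial.
import numpy as np
import tensorflow as tf
import tensorflow_addons as tfa

# Hypothetical model with an Embedding layer: only the embedding rows that appear
# in a batch have their Adam accumulators updated by LazyAdam.
vocab_size, seq_len = 10000, 20
sparse_model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, 16, input_length=seq_len),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
sparse_model.compile(optimizer=tfa.optimizers.LazyAdam(1e-3),
                     loss='binary_crossentropy',
                     metrics=['accuracy'])

# Random toy data, just to exercise the sparse updates.
x_toy = np.random.randint(0, vocab_size, size=(256, seq_len))
y_toy = np.random.randint(0, 2, size=(256, 1))
sparse_model.fit(x_toy, y_toy, epochs=1, batch_size=32, verbose=0)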
Build the Model
model = tf.keras.Sequential([ tf.keras.layers.Dense(64, input_shape=(784,), activation='relu', name='dense_1'), tf.keras.layers.Dense(64, activation='relu', name='dense_2'), tf.keras.layers.Dense(10, activation='softmax', name='predictions'), ])
site/en-snapshot/addons/tutorials/optimizers_lazyadam.ipynb
tensorflow/docs-l10n
apache-2.0
Prepare the Data
# Load MNIST dataset as NumPy arrays dataset = {} num_validation = 10000 (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data() # Preprocess the data x_train = x_train.reshape(-1, 784).astype('float32') / 255 x_test = x_test.reshape(-1, 784).astype('float32') / 255
site/en-snapshot/addons/tutorials/optimizers_lazyadam.ipynb
tensorflow/docs-l10n
apache-2.0
Train and Evaluate Simply replace the typical Keras optimizer with the new tfa optimizer.
# Compile the model model.compile( optimizer=tfa.optimizers.LazyAdam(0.001), # Utilize TFA optimizer loss=tf.keras.losses.SparseCategoricalCrossentropy(), metrics=['accuracy']) # Train the network history = model.fit( x_train, y_train, batch_size=batch_size, epochs=epochs) # Evaluate the network print('Evaluate on test data:') results = model.evaluate(x_test, y_test, batch_size=128, verbose = 2) print('Test loss = {0}, Test acc: {1}'.format(results[0], results[1]))
site/en-snapshot/addons/tutorials/optimizers_lazyadam.ipynb
tensorflow/docs-l10n
apache-2.0
Preprocessing
import os from skimage import io from skimage.color import rgb2gray from skimage import transform from math import ceil IMGSIZE = (100, 100) def load_images(folder, scalefactor=(2, 2), labeldict=None): images = [] labels = [] files = os.listdir(folder) for file in (fname for fname in files if fname.endswith('.png')): img = io.imread(folder + file).astype(float) img = rgb2gray(img) # Crop since some of the real world pictures are other shape img = img[:IMGSIZE[0], :IMGSIZE[1]] # Possibly downscale to speed up processing img = transform.downscale_local_mean(img, scalefactor) # normalize image range img -= np.min(img) img /= np.max(img) images.append(img) if labeldict is not None: # lookup label for real world data in dict generated from labels.txt key, _ = os.path.splitext(file) labels.append(labeldict[key]) else: # infere label from filename if file.find("einstein") > -1 or file.find("curie") > -1: labels.append(1) else: labels.append(0) return np.asarray(images)[:, None], np.asarray(labels) x_train, y_train = load_images('data/aps/train/') # Artifically pad Einstein's and Curie't to have balanced training set # ok, since we use data augmentation later anyway sel = y_train == 1 repeats = len(sel) // sum(sel) - 1 x_train = np.concatenate((x_train[~sel], np.repeat(x_train[sel], repeats, axis=0)), axis=0) y_train = np.concatenate((y_train[~sel], np.repeat(y_train[sel], repeats, axis=0)), axis=0) x_test, y_test = load_images('data/aps/test/') rw_labels = {str(key): 0 if label == 0 else 1 for key, label in np.loadtxt('data/aps/real_world/labels.txt', dtype=int)} x_rw, y_rw = load_images('data/aps/real_world/', labeldict=rw_labels) from mpl_toolkits.axes_grid import ImageGrid from math import ceil def imsshow(images, grid=(5, -1)): assert any(g > 0 for g in grid) grid_x = grid[0] if grid[0] > 0 else ceil(len(images) / grid[1]) grid_y = grid[1] if grid[1] > 0 else ceil(len(images) / grid[0]) axes = ImageGrid(pl.gcf(), "111", (grid_y, grid_x), share_all=True) for ax, img in zip(axes, images): ax.get_xaxis().set_ticks([]) ax.get_yaxis().set_ticks([]) ax.imshow(img[0], cmap='gray') pl.figure(0, figsize=(16, 10)) imsshow(x_train, grid=(5, 1)) pl.show() pl.figure(0, figsize=(16, 10)) imsshow(x_train[::-4], grid=(5, 1)) pl.show() from keras.preprocessing.image import ImageDataGenerator imggen = ImageDataGenerator(rotation_range=20, width_shift_range=0.15, height_shift_range=0.15, shear_range=0.4, fill_mode='constant', cval=1., zoom_range=0.3, channel_shift_range=0.1) imggen.fit(x_train) for batch in it.islice(imggen.flow(x_train, batch_size=5), 2): pl.figure(0, figsize=(16, 5)) imsshow(batch, grid=(5, 1)) pl.show()
Archiv_Session_Spring_2017/Exercises/05_APS Captcha.ipynb
peterwittek/qml-rg
gpl-3.0
Training LeNet First, we will train a simple CNN with a single hidden fully connected layer as a classifier.
from keras.layers import Conv2D, Dense, Flatten, MaxPooling2D from keras.models import Sequential from keras.backend import image_data_format def generate(figsize, nr_classes, cunits=[20, 50], fcunits=[500]): model = Sequential() cunits = list(cunits) input_shape = figsize + (1,) if image_data_format == 'channels_last' \ else (1,) + figsize model.add(Conv2D(cunits[0], (5, 5), padding='same', activation='relu', input_shape=input_shape)) model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2))) # Convolutional layers for nr_units in cunits[1:]: model.add(Conv2D(nr_units, (5, 5), padding='same', activation='relu')) model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2))) # Fully connected layers model.add(Flatten()) for nr_units in fcunits: model.add(Dense(nr_units, activation='relu')) # Output layer activation = 'softmax' if nr_classes > 1 else 'sigmoid' model.add(Dense(nr_classes, activation=activation)) return model from keras.optimizers import Adam from keras.models import load_model try: model = load_model('aps_lenet.h5') print("Model succesfully loaded...") except OSError: print("Saved model not found, traing...") model = generate(figsize=x_train.shape[-2:], nr_classes=1, cunits=[24, 48], fcunits=[100]) optimizer = Adam() model.compile(loss='binary_crossentropy', optimizer=optimizer, metrics=['accuracy']) model.fit_generator(imggen.flow(x_train, y_train, batch_size=len(x_train)), validation_data=imggen.flow(x_test, y_test), steps_per_epoch=100, epochs=5, verbose=1, validation_steps=256) model.save('aps_lenet.h5') from sklearn.metrics import confusion_matrix def plot_cm(cm, classes, normalize=False, title='Confusion matrix', cmap=pl.cm.viridis): """ This function prints and plots the confusion matrix. Normalization can be applied by setting `normalize=True`. """ if normalize: cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis] pl.imshow(cm, interpolation='nearest', cmap=cmap) pl.title(title) pl.colorbar() tick_marks = np.arange(len(classes)) pl.xticks(tick_marks, classes, rotation=45) pl.yticks(tick_marks, classes) thresh = cm.max() / 2. for i, j in it.product(range(cm.shape[0]), range(cm.shape[1])): pl.text(j, i, cm[i, j], horizontalalignment="center", color="white" if cm[i, j] > thresh else "black") pl.tight_layout() pl.ylabel('True label') pl.xlabel('Predicted label') y_pred_rw = model.predict_classes(x_rw, verbose=0).ravel() plot_cm(confusion_matrix(y_rw, y_pred_rw), normalize=True, classes=["Not Einstein", "Einstein"])
Archiv_Session_Spring_2017/Exercises/05_APS Captcha.ipynb
peterwittek/qml-rg
gpl-3.0
Training Random Forests We first materialize a fixed-size training set, since sklearn doesn't support streaming training sets.
# Same size training set as LeNet TRAININGSET_SIZE = len(x_train) * 5 * 100 batch_size = len(x_train) nr_batches = TRAININGSET_SIZE // batch_size + 1 imgit = imggen.flow(x_train, y=y_train, batch_size=batch_size) x_train_sampled = np.empty((TRAININGSET_SIZE, 1,) + x_train.shape[-2:]) y_train_sampled = np.empty(TRAININGSET_SIZE) for batch, (x_batch, y_batch) in enumerate(it.islice(imgit, nr_batches)): buflen = len(x_train_sampled[batch * batch_size:(batch + 1) * batch_size]) x_train_sampled[batch * batch_size:(batch + 1) * batch_size] = x_batch[:buflen] y_train_sampled[batch * batch_size:(batch + 1) * batch_size] = y_batch[:buflen] from sklearn.ensemble import RandomForestClassifier rfe = RandomForestClassifier(n_estimators=64, criterion='entropy', n_jobs=-1, verbose=True) rfe = rfe.fit(x_train_sampled.reshape((TRAININGSET_SIZE, -1)), y_train_sampled) y_pred_rw = rfe.predict(x_rw.reshape((len(x_rw), -1))) plot_cm(confusion_matrix(y_rw, y_pred_rw), normalize=True, classes=["Not Einstein", "Einstein"]) pl.show() print("Rightly classified Einsteins:") imsshow(x_rw[((y_rw - y_pred_rw) == 0) * (y_rw == 1)]) pl.show() print("Wrongly classified images:") imsshow(x_rw[(y_rw - y_pred_rw) != 0]) pl.show()
Archiv_Session_Spring_2017/Exercises/05_APS Captcha.ipynb
peterwittek/qml-rg
gpl-3.0
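One possible alternative to materialising the whole augmented training set is to grow the forest incrementally with scikit-learn's warm_start flag, adding a few trees per augmented batch. This is only a sketch of that idea (each group of trees then sees just one augmented batch), not what the notebook does; it reuses imggen, x_train and y_train from the cells above.
import itertools as it
from sklearn.ensemble import RandomForestClassifier

# Hypothetical incremental variant: every augmented batch contributes two new trees.
rfe_inc = RandomForestClassifier(n_estimators=0, criterion='entropy',
                                 warm_start=True, n_jobs=-1)
for x_batch, y_batch in it.islice(imggen.flow(x_train, y_train, batch_size=len(x_train)), 100):
    rfe_inc.n_estimators += 2
    rfe_inc.fit(x_batch.reshape(len(x_batch), -1), y_batch)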
So training on raw pixel values might not be a good idea. Let's build a feature extractor based on the trained LeNet (or any other pretrained image classifier).
model = load_model('aps_lenet.h5') enc_layers = it.takewhile(lambda l: not isinstance(l, keras.layers.Flatten), model.layers) encoder_model = keras.models.Sequential(enc_layers) encoder_model.add(keras.layers.Flatten()) x_train_sampled_enc = encoder_model.predict(x_train_sampled, verbose=True) rfe = RandomForestClassifier(n_estimators=64, criterion='entropy', n_jobs=-1, verbose=True) rfe = rfe.fit(x_train_sampled_enc, y_train_sampled) y_pred_rw = rfe.predict(encoder_model.predict(x_rw, verbose=False)) plot_cm(confusion_matrix(y_rw, y_pred_rw), normalize=True, classes=["Not Einstein", "Einstein"]) pl.show()
Archiv_Session_Spring_2017/Exercises/05_APS Captcha.ipynb
peterwittek/qml-rg
gpl-3.0
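The parenthetical "(or any other pretrained image classifier)" could, for instance, be realised with a Keras Applications network instead of the LeNet trained above. The snippet below is a rough sketch of that idea and was not run here: it assumes ImageNet weights can be downloaded, tiles the single grey channel to three channels, skips VGG's usual input preprocessing, and reuses x_train_sampled, y_train_sampled and x_rw from the cells above.
import numpy as np
from keras.applications import VGG16
from sklearn.ensemble import RandomForestClassifier

# Hypothetical ImageNet-pretrained feature extractor (global-average-pooled conv features).
base = VGG16(weights='imagenet', include_top=False, pooling='avg', input_shape=(50, 50, 3))

def to_rgb(batch):
    # (N, 1, H, W) channels-first greyscale -> (N, H, W, 3) channels-last pseudo-RGB
    return np.repeat(np.transpose(batch, (0, 2, 3, 1)), 3, axis=-1)

feat_train = base.predict(to_rgb(x_train_sampled), verbose=0)
feat_rw = base.predict(to_rgb(x_rw), verbose=0)

rfe_pre = RandomForestClassifier(n_estimators=64, n_jobs=-1).fit(feat_train, y_train_sampled)
y_pred_rw_pre = rfe_pre.predict(feat_rw)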
This is a particularly small example of a corpus for illustration purposes. Another example could be a list of all the plays written by Shakespeare, a list of all Wikipedia articles, or all tweets by a particular person of interest. After collecting our corpus, there are typically a number of preprocessing steps we want to undertake. We'll keep it simple and just remove some commonly used English words (such as 'the') and words that occur only once in the corpus. In the process of doing so, we'll tokenize our data. Tokenization breaks up the documents into words (in this case using space as a delimiter).
# Create a set of frequent words stoplist = set('for a of the and to in'.split(' ')) # Lowercase each document, split it by white space and filter out stopwords texts = [[word for word in document.lower().split() if word not in stoplist] for document in raw_corpus] # Count word frequencies from collections import defaultdict frequency = defaultdict(int) for text in texts: for token in text: frequency[token] += 1 # Only keep words that appear more than once processed_corpus = [[token for token in text if frequency[token] > 1] for text in texts] processed_corpus
gensim Quick Start.ipynb
robotcator/gensim
lgpl-2.1
Before proceeding, we want to associate each word in the corpus with a unique integer ID. We can do this using the gensim.corpora.Dictionary class. This dictionary defines the vocabulary of all words that our processing knows about.
from gensim import corpora dictionary = corpora.Dictionary(processed_corpus) print(dictionary)
gensim Quick Start.ipynb
robotcator/gensim
lgpl-2.1
Because our corpus is small, there are only 12 different tokens in this Dictionary. For larger corpora, dictionaries that contain hundreds of thousands of tokens are quite common. Vector To infer the latent structure in our corpus, we need a way to represent documents that we can manipulate mathematically. One approach is to represent each document as a vector. There are various approaches for creating a vector representation of a document, but a simple example is the bag-of-words model. Under the bag-of-words model, each document is represented by a vector containing the frequency counts of each word in the dictionary. For example, given a dictionary containing the words ['coffee', 'milk', 'sugar', 'spoon'], a document consisting of the string "coffee milk coffee" could be represented by the vector [2, 1, 0, 0], where the entries of the vector are (in order) the occurrences of "coffee", "milk", "sugar" and "spoon" in the document. The length of the vector is the number of entries in the dictionary. One of the main properties of the bag-of-words model is that it completely ignores the order of the tokens in the document that is encoded, which is where the name bag-of-words comes from. Our processed corpus has 12 unique words in it, which means that each document will be represented by a 12-dimensional vector under the bag-of-words model. We can use the dictionary to turn tokenized documents into these 12-dimensional vectors. We can see what these IDs correspond to:
print(dictionary.token2id)
gensim Quick Start.ipynb
robotcator/gensim
lgpl-2.1
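To make the [2, 1, 0, 0] example above concrete, here is a small sketch (not part of the original tutorial) that builds the toy dictionary and densifies the sparse doc2bow output; the exact integer ids gensim assigns may order the columns differently.
from gensim import corpora, matutils

# Toy dictionary over the four words from the example above.
toy_dictionary = corpora.Dictionary([['coffee', 'milk', 'sugar', 'spoon']])
toy_bow = toy_dictionary.doc2bow("coffee milk coffee".split())
print(toy_bow)  # sparse (token_id, count) pairs
# Densify: a 4-dimensional vector such as [2, 1, 0, 0], up to the id ordering.
print(matutils.corpus2dense([toy_bow], num_terms=len(toy_dictionary)).T[0])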
For example, suppose we wanted to vectorize the phrase "Human computer interaction" (note that this phrase was not in our original corpus). We can create the bag-of-words representation for a document using the doc2bow method of the dictionary, which returns a sparse representation of the word counts:
new_doc = "Human computer interaction" new_vec = dictionary.doc2bow(new_doc.lower().split()) new_vec
gensim Quick Start.ipynb
robotcator/gensim
lgpl-2.1
The first entry in each tuple corresponds to the ID of the token in the dictionary, and the second corresponds to the count of this token. Note that "interaction" did not occur in the original corpus and so it was not included in the vectorization. Also note that this vector only contains entries for words that actually appeared in the document. Because any given document will only contain a few words out of the many words in the dictionary, words that do not appear in the vectorization are represented as implicitly zero as a space-saving measure. We can convert our entire original corpus to a list of vectors:
bow_corpus = [dictionary.doc2bow(text) for text in processed_corpus] bow_corpus
gensim Quick Start.ipynb
robotcator/gensim
lgpl-2.1
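To see the implicit zeros mentioned above made explicit, the sparse corpus can be densified with gensim's matutils helper; a short sketch, not in the original notebook:
import numpy as np
from gensim import matutils

# One row per document, one column per dictionary token; missing words become explicit zeros.
dense_corpus = matutils.corpus2dense(bow_corpus, num_terms=len(dictionary)).T
print(np.asarray(dense_corpus, dtype=int))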
Note that while this list lives entirely in memory, in most applications you will want a more scalable solution. Luckily, gensim allows you to use any iterator that returns a single document vector at a time. See the documentation for more details. Model Now that we have vectorized our corpus we can begin to transform it using models. We use model as an abstract term referring to a transformation from one document representation to another. In gensim, documents are represented as vectors so a model can be thought of as a transformation between two vector spaces. The details of this transformation are learned from the training corpus. One simple example of a model is tf-idf. The tf-idf model transforms vectors from the bag-of-words representation to a vector space, where the frequency counts are weighted according to the relative rarity of each word in the corpus. Here's a simple example. Let's initialize the tf-idf model, training it on our corpus and transforming the string "system minors":
from gensim import models # train the model tfidf = models.TfidfModel(bow_corpus) # transform the "system minors" string tfidf[dictionary.doc2bow("system minors".lower().split())]
gensim Quick Start.ipynb
robotcator/gensim
lgpl-2.1
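The transformation returned by tfidf[...] also works on the whole corpus at once, which is handy for inspecting how the weighting reshapes each document (with gensim's default settings the resulting vectors are also length-normalised). A brief sketch:
# Apply the trained tf-idf model to every bag-of-words vector in the corpus.
corpus_tfidf = tfidf[bow_corpus]
for doc in corpus_tfidf:
    print(doc)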
💣 Note: The version available on PyPI might not be the latest version of DPPy. Please consider forking or cloning DPPy using
# !rm -r DPPy # !git clone https://github.com/guilgautier/DPPy.git # !pip install scipy --upgrade # Then # !pip install DPPy/. # OR # !pip install DPPy/.['zonotope','trees','docs'] to perform a full installation.
notebooks/fast_sampling_of_beta_ensembles.ipynb
guilgautier/DPPy
mit
💣 If you have chosen to clone the repo and now wish to interact with the source code while running this notebook, you can uncomment the following cell.
%load_ext autoreload %autoreload 2 import os import sys sys.path.insert(0, os.path.abspath('..')) import numpy as np import matplotlib.pyplot as plt %config InlineBackend.figure_format = 'retina' from dppy.beta_ensemble_polynomial_potential import BetaEnsemblePolynomialPotential
notebooks/fast_sampling_of_beta_ensembles.ipynb
guilgautier/DPPy
mit
💻 You can play with the various parameters, e.g., $N, \beta, V$, nb_gibbs_passes 💻 $V(x) = g_{2m} x^{2m}$ We first consider even monomial potentials whose equilibrium distribution can be derived from [Dei00, Proposition 6.156]. $V(x) = \frac{1}{2} x^2$ (Hermite ensemble) This is the potential associated with the Hermite ensemble. In this case, the Jacobi parameters are all independent and sampling is exact [DuEd02, II C]. In our setting, this corresponds to a single pass of the Gibbs sampler over each variable.
beta, V = 2, np.poly1d([0.5, 0, 0]) be = BetaEnsemblePolynomialPotential(beta, V) sampl_x2 = be.sample_mcmc(N=1000, nb_gibbs_passes=1, sample_exact_cond=True) be.hist(sampl_x2)
notebooks/fast_sampling_of_beta_ensembles.ipynb
guilgautier/DPPy
mit
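For reference, and assuming the normalization used here matches the standard one, the equilibrium density for $V(x) = \frac{1}{2} x^2$ is the semicircle law $\rho(x) = \frac{1}{2\pi}\sqrt{4 - x^2}$ supported on $[-2, 2]$; the histogram above should approach it as $N$ grows.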
$V(x) = \frac{1}{4} x^4$ To depart from the classical quadratic potential, we consider the quartic potential, which has been sampled by [LiMe13], [OlNaTr15] and [ChFe19, Section 3.1].
beta, V = 2, np.poly1d([1/4, 0, 0, 0, 0]) be = BetaEnsemblePolynomialPotential(beta, V) sampl_x4 = be.sample_mcmc(N=200, nb_gibbs_passes=10, sample_exact_cond=True) # sample_exact_cond=False, # nb_mala_steps=100) be.hist(sampl_x4)
notebooks/fast_sampling_of_beta_ensembles.ipynb
guilgautier/DPPy
mit
$V(x) = \frac{1}{6} x^6$ To the best of our knowledge, this is the first time the sextic ensemble has been (approximately) sampled. In this case, the conditionals associated with the $a_n$ parameters are not $\log$-concave and we do not support exact sampling, but instead perform a few steps (100 by default) of MALA. For this reason, we set sample_exact_cond=False.
beta, V = 2, np.poly1d([1/6, 0, 0, 0, 0, 0, 0]) be = BetaEnsemblePolynomialPotential(beta, V) sampl_x6 = be.sample_mcmc(N=200, nb_gibbs_passes=10, sample_exact_cond=False, nb_mala_steps=100) be.hist(sampl_x6)
notebooks/fast_sampling_of_beta_ensembles.ipynb
guilgautier/DPPy
mit
$V(x) = g_2 x^2 + g_4 x^4$ We consider quartic potentials where $g_2$ varies, to reveal equilibrium distributions whose supports are connected, about to become disconnected, and fully disconnected. We refer to [DuKu06, p.2-3], [Molinari, Example 3.3] and [LiMe13, Section 2] for the exact shape of the corresponding equilibrium densities. $V(x)= \frac{1}{4} x^4 + \frac{1}{2} x^2$ This case reveals an equilibrium density with a connected support.
beta, V = 2, np.poly1d([1/4, 0, 1/2, 0, 0]) be = BetaEnsemblePolynomialPotential(beta, V) sampl_x4_x2 = be.sample_mcmc(N=1000, nb_gibbs_passes=10, sample_exact_cond=True) # sample_exact_cond=False, # nb_mala_steps=100) be.hist(sampl_x4_x2)
notebooks/fast_sampling_of_beta_ensembles.ipynb
guilgautier/DPPy
mit
$V(x)= \frac{1}{4} x^4 - x^2$ (onset of two-cut solution) This case reveals an equilibrium density with a support which is about to become disconnected. The conditionals associated with the $a_n$ parameters are not $\log$-concave and we do not support exact sampling, but instead perform a few steps (100 by default) of MALA. For this reason, we set sample_exact_cond=False.
beta, V = 2, np.poly1d([1/4, 0, -1, 0, 0]) be = BetaEnsemblePolynomialPotential(beta, V) sampl_x4_x2_onset_2cut = be.sample_mcmc(N=1000, nb_gibbs_passes=10, sample_exact_cond=False, nb_mala_steps=100) be.hist(sampl_x4_x2_onset_2cut)
notebooks/fast_sampling_of_beta_ensembles.ipynb
guilgautier/DPPy
mit
$V(x)= \frac{1}{4} x^4 - \frac{5}{4} x^2$ (Two-cut eigenvalue distribution) This case reveals an equilibrium density whose support has two connected components. The conditionals associated with the $a_n$ parameters are not $\log$-concave and we do not support exact sampling, but instead perform a few steps (100 by default) of MALA. For this reason, we set sample_exact_cond=False.
beta, V = 2, np.poly1d([1/4, 0, -1.25, 0, 0]) be = BetaEnsemblePolynomialPotential(beta, V) sampl_x4_x2_2cut = be.sample_mcmc(N=200, nb_gibbs_passes=10, sample_exact_cond=False, nb_mala_steps=100) be.hist(sampl_x4_x2_2cut)
notebooks/fast_sampling_of_beta_ensembles.ipynb
guilgautier/DPPy
mit
$V(x) = \frac{1}{20} x^4 - \frac{4}{15}x^3 + \frac{1}{5}x^2 + \frac{8}{5}x$ This case reveals a singular behavior at the right edge of the support of the equilibrium density. The conditionals associated with the $a_n$ parameters are not $\log$-concave and we do not support exact sampling, but instead perform a few steps (100 by default) of MALA. For this reason, we set sample_exact_cond=False. We refer to [ClItsKr10, Example 1.2] and [OlNaTr14, Section 3.2] for the expression of the corresponding equilibrium density.
beta, V = 2, np.poly1d([1/20, -4/15, 1/5, 8/5, 0]) be = BetaEnsemblePolynomialPotential(beta, V) sampl_x4_x3_x2_x = be.sample_mcmc(N=200, nb_gibbs_passes=10, sample_exact_cond=False, nb_mala_steps=100) be.hist(sampl_x4_x3_x2_x)
notebooks/fast_sampling_of_beta_ensembles.ipynb
guilgautier/DPPy
mit