path | concatenated_notebook
---|---|
Week 2/Week_2_Lab_1.ipynb | ###Markdown
Copyright 2020 Rishit Dagli
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
TensorFlow BasicsLast week you learnt about AI in a nutshell and explored the different types of learning, namely supervised and unsupervised. You also saw what `x` and `y` are in a machine learning problem. This week you were introduced to TensorFlow and saw the two versions of TF. Now it is time to put everything you have learnt into practice and get your hands dirty with code. You also saw the idea of neural networks, and we will implement that here in this notebook. You will also see how you can handle data.**To know more about this, I had earlier written a blog; you can read it here - https://medium.com/@rishit.dagli/get-started-with-tensorflow-and-deep-learning-part-1-72c7d67f99fc** ImportsIn this tutorial we will mostly work with TensorFlow 2.x, which is the latest version.
###Code
try:
%tensorflow_version 2.x
# Note: %tensorflow_version only exists in Colab
except Exception:
pass
import tensorflow as tf
# This line shows you what version of TensorFlow you are using
tf.__version__
import numpy as np
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Off to Neural NetsTo show how that works, let’s take a look at a set of numbers and see if you can determine the pattern between them. Okay, here are the numbers.```x = -1, 0, 1, 2, 3, 4y = -3, -1, 1, 3, 5, 7``` Most of you will have figured out the relation here: it is $y=2x+1$. You probably tried that out with a couple of other values and saw that it fits. Congratulations, you’ve just done the basics of machine learning in your head. Okay, here’s our first line of code. This is written using Python and TensorFlow. A neural network is basically a set of functions which can learn patterns.```model = keras.Sequential([keras.layers.Dense(units = 1, input_shape = [1])])```The simplest possible neural network is one that has only one neuron in it, and that’s what this line of code does. In TensorFlow we use the word `Dense` to define a layer of connected neurons. There is only one `Dense` here, which means there is only one layer, and it has a single unit, so there is only one neuron. Successive layers in Keras are defined in a sequence, hence the word `Sequential`. You define the shape of the input to the neural network in the first, and in this case only, layer, and you can see that our input shape is super simple: it's just one value. Loss and optimizer functionUnderstand it like this: the neural network has no idea of the relation between $X$ and $Y$, so it makes a random guess, say $y=x+3$. It then uses the data that it knows about, that is, the set of $X$s and $Y$s we have already seen, to measure how good or how bad its guess was. The loss function measures this and then gives the result to the optimizer, which figures out the next guess. So the optimizer uses the information from the loss function to decide how good or how bad the guess was. The logic is that each guess should be better than the one before. As the guesses get better and better and the accuracy approaches 100 percent, we use the term convergence.This line does it for us:```model.compile(optimizer = 'sgd', loss = 'mean_squared_error')```Here we have used mean squared error as the loss function and SGD, or stochastic gradient descent, as the optimizer. Coding it Data
###Code
xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype=float)
ys = np.array([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0], dtype=float)
###Output
_____no_output_____
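###Markdown
To make the idea of the loss concrete, here is a small optional check (not part of the original lab): it computes the mean squared error of the initial guess $y=x+3$ on the data above by hand, using NumPy.
###Code
# Mean squared error of the guessed rule y = x + 3 on the training data.
# This is the same quantity the 'mean_squared_error' loss reports, computed manually.
guess = xs + 3.0              # predictions from the guess y = x + 3
errors = guess - ys           # per-point error
mse = np.mean(errors ** 2)    # average squared error (about 9.17 for this data)
print(mse)
###Output
_____no_output_____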
###Markdown
Plot the data (You should get a straight line)
###Code
import matplotlib.pyplot as plt
plt.style.use("ggplot")
plt.plot(xs, ys)
plt.xlabel("Xs")
plt.ylabel("Ys")
plt.show()
###Output
_____no_output_____
###Markdown
Making our neural network
###Code
model = tf.keras.Sequential([
tf.keras.layers.Dense(units=1, input_shape=[1])
])
###Output
_____no_output_____
###Markdown
Compiling it
###Code
model.compile(optimizer='sgd', loss='mean_squared_error')
###Output
_____no_output_____
###Markdown
Fitting the model
###Code
model.fit(xs, ys, epochs=500)
###Output
Epoch 1/500
1/1 [==============================] - 0s 1ms/step - loss: 53.1886
Epoch 2/500
1/1 [==============================] - 0s 920us/step - loss: 42.2523
Epoch 3/500
1/1 [==============================] - 0s 2ms/step - loss: 33.6398
Epoch 4/500
1/1 [==============================] - 0s 1ms/step - loss: 26.8558
Epoch 5/500
1/1 [==============================] - 0s 2ms/step - loss: 21.5104
Epoch 6/500
1/1 [==============================] - 0s 1ms/step - loss: 17.2971
Epoch 7/500
1/1 [==============================] - 0s 1ms/step - loss: 13.9746
Epoch 8/500
1/1 [==============================] - 0s 1ms/step - loss: 11.3531
Epoch 9/500
1/1 [==============================] - 0s 2ms/step - loss: 9.2832
Epoch 10/500
1/1 [==============================] - 0s 2ms/step - loss: 7.6476
Epoch 11/500
1/1 [==============================] - 0s 2ms/step - loss: 6.3536
Epoch 12/500
1/1 [==============================] - 0s 1ms/step - loss: 5.3287
Epoch 13/500
1/1 [==============================] - 0s 1ms/step - loss: 4.5156
Epoch 14/500
1/1 [==============================] - 0s 977us/step - loss: 3.8692
Epoch 15/500
1/1 [==============================] - 0s 1ms/step - loss: 3.3542
Epoch 16/500
1/1 [==============================] - 0s 1ms/step - loss: 2.9426
Epoch 17/500
1/1 [==============================] - 0s 1ms/step - loss: 2.6126
Epoch 18/500
1/1 [==============================] - 0s 1ms/step - loss: 2.3468
Epoch 19/500
1/1 [==============================] - 0s 2ms/step - loss: 2.1317
Epoch 20/500
1/1 [==============================] - 0s 2ms/step - loss: 1.9566
Epoch 21/500
1/1 [==============================] - 0s 2ms/step - loss: 1.8131
Epoch 22/500
1/1 [==============================] - 0s 1ms/step - loss: 1.6946
Epoch 23/500
1/1 [==============================] - 0s 1ms/step - loss: 1.5959
Epoch 24/500
1/1 [==============================] - 0s 1ms/step - loss: 1.5128
Epoch 25/500
1/1 [==============================] - 0s 1ms/step - loss: 1.4421
Epoch 26/500
1/1 [==============================] - 0s 1ms/step - loss: 1.3814
Epoch 27/500
1/1 [==============================] - 0s 1ms/step - loss: 1.3285
Epoch 28/500
1/1 [==============================] - 0s 1ms/step - loss: 1.2820
Epoch 29/500
1/1 [==============================] - 0s 1ms/step - loss: 1.2405
Epoch 30/500
1/1 [==============================] - 0s 2ms/step - loss: 1.2031
Epoch 31/500
1/1 [==============================] - 0s 1ms/step - loss: 1.1690
Epoch 32/500
1/1 [==============================] - 0s 1ms/step - loss: 1.1376
Epoch 33/500
1/1 [==============================] - 0s 1ms/step - loss: 1.1084
Epoch 34/500
1/1 [==============================] - 0s 1ms/step - loss: 1.0810
Epoch 35/500
1/1 [==============================] - 0s 1ms/step - loss: 1.0552
Epoch 36/500
1/1 [==============================] - 0s 2ms/step - loss: 1.0307
Epoch 37/500
1/1 [==============================] - 0s 2ms/step - loss: 1.0073
Epoch 38/500
1/1 [==============================] - 0s 2ms/step - loss: 0.9849
Epoch 39/500
1/1 [==============================] - 0s 2ms/step - loss: 0.9633
Epoch 40/500
1/1 [==============================] - 0s 1ms/step - loss: 0.9424
Epoch 41/500
1/1 [==============================] - 0s 2ms/step - loss: 0.9222
Epoch 42/500
1/1 [==============================] - 0s 2ms/step - loss: 0.9026
Epoch 43/500
1/1 [==============================] - 0s 2ms/step - loss: 0.8835
Epoch 44/500
1/1 [==============================] - 0s 2ms/step - loss: 0.8650
Epoch 45/500
1/1 [==============================] - 0s 2ms/step - loss: 0.8469
Epoch 46/500
1/1 [==============================] - 0s 2ms/step - loss: 0.8292
Epoch 47/500
1/1 [==============================] - 0s 2ms/step - loss: 0.8120
Epoch 48/500
1/1 [==============================] - 0s 2ms/step - loss: 0.7951
Epoch 49/500
1/1 [==============================] - 0s 1ms/step - loss: 0.7787
Epoch 50/500
1/1 [==============================] - 0s 2ms/step - loss: 0.7626
Epoch 51/500
1/1 [==============================] - 0s 2ms/step - loss: 0.7469
Epoch 52/500
1/1 [==============================] - 0s 2ms/step - loss: 0.7315
Epoch 53/500
1/1 [==============================] - 0s 2ms/step - loss: 0.7164
Epoch 54/500
1/1 [==============================] - 0s 1ms/step - loss: 0.7016
Epoch 55/500
1/1 [==============================] - 0s 2ms/step - loss: 0.6872
Epoch 56/500
1/1 [==============================] - 0s 2ms/step - loss: 0.6730
Epoch 57/500
1/1 [==============================] - 0s 2ms/step - loss: 0.6592
Epoch 58/500
1/1 [==============================] - 0s 2ms/step - loss: 0.6456
Epoch 59/500
1/1 [==============================] - 0s 2ms/step - loss: 0.6324
Epoch 60/500
1/1 [==============================] - 0s 2ms/step - loss: 0.6194
Epoch 61/500
1/1 [==============================] - 0s 2ms/step - loss: 0.6066
Epoch 62/500
1/1 [==============================] - 0s 2ms/step - loss: 0.5942
Epoch 63/500
1/1 [==============================] - 0s 2ms/step - loss: 0.5820
Epoch 64/500
1/1 [==============================] - 0s 2ms/step - loss: 0.5700
Epoch 65/500
1/1 [==============================] - 0s 2ms/step - loss: 0.5583
Epoch 66/500
1/1 [==============================] - 0s 2ms/step - loss: 0.5468
Epoch 67/500
1/1 [==============================] - 0s 2ms/step - loss: 0.5356
Epoch 68/500
1/1 [==============================] - 0s 2ms/step - loss: 0.5246
Epoch 69/500
1/1 [==============================] - 0s 1ms/step - loss: 0.5138
Epoch 70/500
1/1 [==============================] - 0s 2ms/step - loss: 0.5033
Epoch 71/500
1/1 [==============================] - 0s 1ms/step - loss: 0.4929
Epoch 72/500
1/1 [==============================] - 0s 1ms/step - loss: 0.4828
Epoch 73/500
1/1 [==============================] - 0s 2ms/step - loss: 0.4729
Epoch 74/500
1/1 [==============================] - 0s 2ms/step - loss: 0.4632
Epoch 75/500
1/1 [==============================] - 0s 2ms/step - loss: 0.4537
Epoch 76/500
1/1 [==============================] - 0s 2ms/step - loss: 0.4443
Epoch 77/500
1/1 [==============================] - 0s 2ms/step - loss: 0.4352
Epoch 78/500
1/1 [==============================] - 0s 1ms/step - loss: 0.4263
Epoch 79/500
1/1 [==============================] - 0s 1ms/step - loss: 0.4175
Epoch 80/500
1/1 [==============================] - 0s 1ms/step - loss: 0.4089
Epoch 81/500
1/1 [==============================] - 0s 1ms/step - loss: 0.4005
Epoch 82/500
1/1 [==============================] - 0s 1ms/step - loss: 0.3923
Epoch 83/500
1/1 [==============================] - 0s 2ms/step - loss: 0.3843
Epoch 84/500
1/1 [==============================] - 0s 1ms/step - loss: 0.3764
Epoch 85/500
1/1 [==============================] - 0s 1ms/step - loss: 0.3686
Epoch 86/500
1/1 [==============================] - 0s 2ms/step - loss: 0.3611
Epoch 87/500
1/1 [==============================] - 0s 2ms/step - loss: 0.3536
Epoch 88/500
1/1 [==============================] - 0s 2ms/step - loss: 0.3464
Epoch 89/500
1/1 [==============================] - 0s 1ms/step - loss: 0.3393
Epoch 90/500
1/1 [==============================] - 0s 1ms/step - loss: 0.3323
Epoch 91/500
1/1 [==============================] - 0s 1ms/step - loss: 0.3255
Epoch 92/500
1/1 [==============================] - 0s 2ms/step - loss: 0.3188
Epoch 93/500
1/1 [==============================] - 0s 2ms/step - loss: 0.3122
Epoch 94/500
1/1 [==============================] - 0s 2ms/step - loss: 0.3058
Epoch 95/500
1/1 [==============================] - 0s 1ms/step - loss: 0.2995
Epoch 96/500
1/1 [==============================] - 0s 1ms/step - loss: 0.2934
Epoch 97/500
1/1 [==============================] - 0s 1ms/step - loss: 0.2874
Epoch 98/500
1/1 [==============================] - 0s 1ms/step - loss: 0.2815
Epoch 99/500
1/1 [==============================] - 0s 2ms/step - loss: 0.2757
Epoch 100/500
1/1 [==============================] - 0s 1ms/step - loss: 0.2700
Epoch 101/500
1/1 [==============================] - 0s 1ms/step - loss: 0.2645
Epoch 102/500
1/1 [==============================] - 0s 2ms/step - loss: 0.2590
Epoch 103/500
1/1 [==============================] - 0s 910us/step - loss: 0.2537
Epoch 104/500
1/1 [==============================] - 0s 2ms/step - loss: 0.2485
Epoch 105/500
1/1 [==============================] - 0s 2ms/step - loss: 0.2434
Epoch 106/500
1/1 [==============================] - 0s 2ms/step - loss: 0.2384
Epoch 107/500
1/1 [==============================] - 0s 1ms/step - loss: 0.2335
Epoch 108/500
1/1 [==============================] - 0s 1ms/step - loss: 0.2287
Epoch 109/500
1/1 [==============================] - 0s 1ms/step - loss: 0.2240
Epoch 110/500
1/1 [==============================] - 0s 1ms/step - loss: 0.2194
Epoch 111/500
1/1 [==============================] - 0s 1ms/step - loss: 0.2149
Epoch 112/500
1/1 [==============================] - 0s 1ms/step - loss: 0.2105
Epoch 113/500
1/1 [==============================] - 0s 1ms/step - loss: 0.2062
Epoch 114/500
1/1 [==============================] - 0s 2ms/step - loss: 0.2019
Epoch 115/500
1/1 [==============================] - 0s 1ms/step - loss: 0.1978
Epoch 116/500
1/1 [==============================] - 0s 1ms/step - loss: 0.1937
Epoch 117/500
1/1 [==============================] - 0s 1ms/step - loss: 0.1897
Epoch 118/500
1/1 [==============================] - 0s 1ms/step - loss: 0.1858
Epoch 119/500
1/1 [==============================] - 0s 1ms/step - loss: 0.1820
Epoch 120/500
1/1 [==============================] - 0s 1ms/step - loss: 0.1783
Epoch 121/500
1/1 [==============================] - 0s 1ms/step - loss: 0.1746
Epoch 122/500
1/1 [==============================] - 0s 1ms/step - loss: 0.1710
Epoch 123/500
1/1 [==============================] - 0s 1ms/step - loss: 0.1675
Epoch 124/500
1/1 [==============================] - 0s 1ms/step - loss: 0.1641
Epoch 125/500
1/1 [==============================] - 0s 1ms/step - loss: 0.1607
Epoch 126/500
1/1 [==============================] - 0s 2ms/step - loss: 0.1574
Epoch 127/500
1/1 [==============================] - 0s 5ms/step - loss: 0.1542
Epoch 128/500
1/1 [==============================] - 0s 2ms/step - loss: 0.1510
Epoch 129/500
1/1 [==============================] - 0s 2ms/step - loss: 0.1479
Epoch 130/500
1/1 [==============================] - 0s 2ms/step - loss: 0.1449
Epoch 131/500
1/1 [==============================] - 0s 2ms/step - loss: 0.1419
Epoch 132/500
1/1 [==============================] - 0s 2ms/step - loss: 0.1390
Epoch 133/500
1/1 [==============================] - 0s 2ms/step - loss: 0.1361
Epoch 134/500
1/1 [==============================] - 0s 2ms/step - loss: 0.1333
Epoch 135/500
1/1 [==============================] - 0s 1ms/step - loss: 0.1306
Epoch 136/500
1/1 [==============================] - 0s 2ms/step - loss: 0.1279
Epoch 137/500
1/1 [==============================] - 0s 1ms/step - loss: 0.1253
Epoch 138/500
1/1 [==============================] - 0s 2ms/step - loss: 0.1227
Epoch 139/500
1/1 [==============================] - 0s 2ms/step - loss: 0.1202
Epoch 140/500
1/1 [==============================] - 0s 2ms/step - loss: 0.1177
Epoch 141/500
1/1 [==============================] - 0s 2ms/step - loss: 0.1153
Epoch 142/500
1/1 [==============================] - 0s 2ms/step - loss: 0.1129
Epoch 143/500
1/1 [==============================] - 0s 2ms/step - loss: 0.1106
Epoch 144/500
1/1 [==============================] - 0s 2ms/step - loss: 0.1083
Epoch 145/500
1/1 [==============================] - 0s 2ms/step - loss: 0.1061
Epoch 146/500
1/1 [==============================] - 0s 2ms/step - loss: 0.1039
Epoch 147/500
1/1 [==============================] - 0s 2ms/step - loss: 0.1018
Epoch 148/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0997
Epoch 149/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0977
Epoch 150/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0957
Epoch 151/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0937
Epoch 152/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0918
Epoch 153/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0899
Epoch 154/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0880
Epoch 155/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0862
Epoch 156/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0845
Epoch 157/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0827
Epoch 158/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0810
Epoch 159/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0794
Epoch 160/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0777
Epoch 161/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0761
Epoch 162/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0746
Epoch 163/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0730
Epoch 164/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0715
Epoch 165/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0701
Epoch 166/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0686
Epoch 167/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0672
Epoch 168/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0658
Epoch 169/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0645
Epoch 170/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0632
Epoch 171/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0619
Epoch 172/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0606
Epoch 173/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0593
Epoch 174/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0581
Epoch 175/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0569
Epoch 176/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0558
Epoch 177/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0546
Epoch 178/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0535
Epoch 179/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0524
Epoch 180/500
1/1 [==============================] - 0s 6ms/step - loss: 0.0513
Epoch 181/500
1/1 [==============================] - 0s 939us/step - loss: 0.0503
Epoch 182/500
1/1 [==============================] - 0s 3ms/step - loss: 0.0492
Epoch 183/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0482
Epoch 184/500
1/1 [==============================] - 0s 3ms/step - loss: 0.0472
Epoch 185/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0463
Epoch 186/500
1/1 [==============================] - 0s 4ms/step - loss: 0.0453
Epoch 187/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0444
Epoch 188/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0435
Epoch 189/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0426
Epoch 190/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0417
Epoch 191/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0408
Epoch 192/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0400
Epoch 193/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0392
Epoch 194/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0384
Epoch 195/500
1/1 [==============================] - 0s 979us/step - loss: 0.0376
Epoch 196/500
1/1 [==============================] - 0s 975us/step - loss: 0.0368
Epoch 197/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0361
Epoch 198/500
1/1 [==============================] - 0s 1000us/step - loss: 0.0353
Epoch 199/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0346
Epoch 200/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0339
Epoch 201/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0332
Epoch 202/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0325
Epoch 203/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0318
Epoch 204/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0312
Epoch 205/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0305
Epoch 206/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0299
Epoch 207/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0293
Epoch 208/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0287
Epoch 209/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0281
Epoch 210/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0275
Epoch 211/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0270
Epoch 212/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0264
Epoch 213/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0259
Epoch 214/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0253
Epoch 215/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0248
Epoch 216/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0243
Epoch 217/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0238
Epoch 218/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0233
Epoch 219/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0228
Epoch 220/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0224
Epoch 221/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0219
Epoch 222/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0215
Epoch 223/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0210
Epoch 224/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0206
Epoch 225/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0202
Epoch 226/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0198
Epoch 227/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0193
Epoch 228/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0190
Epoch 229/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0186
Epoch 230/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0182
Epoch 231/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0178
Epoch 232/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0174
Epoch 233/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0171
Epoch 234/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0167
Epoch 235/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0164
Epoch 236/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0161
Epoch 237/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0157
Epoch 238/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0154
Epoch 239/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0151
Epoch 240/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0148
Epoch 241/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0145
Epoch 242/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0142
Epoch 243/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0139
Epoch 244/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0136
Epoch 245/500
1/1 [==============================] - 0s 7ms/step - loss: 0.0133
Epoch 246/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0130
Epoch 247/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0128
Epoch 248/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0125
Epoch 249/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0123
Epoch 250/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0120
Epoch 251/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0118
Epoch 252/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0115
Epoch 253/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0113
Epoch 254/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0110
Epoch 255/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0108
Epoch 256/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0106
Epoch 257/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0104
Epoch 258/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0102
Epoch 259/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0100
Epoch 260/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0098
Epoch 261/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0096
Epoch 262/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0094
Epoch 263/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0092
Epoch 264/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0090
Epoch 265/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0088
Epoch 266/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0086
Epoch 267/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0084
Epoch 268/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0083
Epoch 269/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0081
Epoch 270/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0079
Epoch 271/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0078
Epoch 272/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0076
Epoch 273/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0074
Epoch 274/500
1/1 [==============================] - 0s 5ms/step - loss: 0.0073
Epoch 275/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0071
Epoch 276/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0070
Epoch 277/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0069
Epoch 278/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0067
Epoch 279/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0066
Epoch 280/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0064
Epoch 281/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0063
Epoch 282/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0062
Epoch 283/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0061
Epoch 284/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0059
Epoch 285/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0058
Epoch 286/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0057
Epoch 287/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0056
Epoch 288/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0055
Epoch 289/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0053
Epoch 290/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0052
Epoch 291/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0051
Epoch 292/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0050
Epoch 293/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0049
Epoch 294/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0048
Epoch 295/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0047
Epoch 296/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0046
Epoch 297/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0045
Epoch 298/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0044
Epoch 299/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0043
Epoch 300/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0043
Epoch 301/500
1/1 [==============================] - 0s 997us/step - loss: 0.0042
Epoch 302/500
1/1 [==============================] - 0s 982us/step - loss: 0.0041
Epoch 303/500
1/1 [==============================] - 0s 930us/step - loss: 0.0040
Epoch 304/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0039
Epoch 305/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0038
Epoch 306/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0038
Epoch 307/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0037
Epoch 308/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0036
Epoch 309/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0035
Epoch 310/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0035
Epoch 311/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0034
Epoch 312/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0033
Epoch 313/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0032
Epoch 314/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0032
Epoch 315/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0031
Epoch 316/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0031
Epoch 317/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0030
Epoch 318/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0029
Epoch 319/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0029
Epoch 320/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0028
Epoch 321/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0028
Epoch 322/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0027
Epoch 323/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0026
Epoch 324/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0026
Epoch 325/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0025
Epoch 326/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0025
Epoch 327/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0024
Epoch 328/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0024
Epoch 329/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0023
Epoch 330/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0023
Epoch 331/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0022
Epoch 332/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0022
Epoch 333/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0021
Epoch 334/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0021
Epoch 335/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0021
Epoch 336/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0020
Epoch 337/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0020
Epoch 338/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0019
Epoch 339/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0019
Epoch 340/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0019
Epoch 341/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0018
Epoch 342/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0018
Epoch 343/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0017
Epoch 344/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0017
Epoch 345/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0017
Epoch 346/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0016
Epoch 347/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0016
Epoch 348/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0016
Epoch 349/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0015
Epoch 350/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0015
Epoch 351/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0015
Epoch 352/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0014
Epoch 353/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0014
Epoch 354/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0014
Epoch 355/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0014
Epoch 356/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0013
Epoch 357/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0013
Epoch 358/500
1/1 [==============================] - 0s 2ms/step - loss: 0.0013
Epoch 359/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0012
Epoch 360/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0012
Epoch 361/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0012
Epoch 362/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0012
Epoch 363/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0012
Epoch 364/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0011
Epoch 365/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0011
Epoch 366/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0011
Epoch 367/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0011
Epoch 368/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0010
Epoch 369/500
1/1 [==============================] - 0s 1ms/step - loss: 0.0010
Epoch 370/500
1/1 [==============================] - 0s 2ms/step - loss: 9.9478e-04
Epoch 371/500
1/1 [==============================] - 0s 1ms/step - loss: 9.7435e-04
Epoch 372/500
1/1 [==============================] - 0s 1ms/step - loss: 9.5434e-04
Epoch 373/500
1/1 [==============================] - 0s 1ms/step - loss: 9.3473e-04
Epoch 374/500
1/1 [==============================] - 0s 1ms/step - loss: 9.1553e-04
Epoch 375/500
1/1 [==============================] - 0s 1ms/step - loss: 8.9673e-04
Epoch 376/500
1/1 [==============================] - 0s 1ms/step - loss: 8.7831e-04
Epoch 377/500
1/1 [==============================] - 0s 1ms/step - loss: 8.6027e-04
Epoch 378/500
1/1 [==============================] - 0s 1ms/step - loss: 8.4260e-04
Epoch 379/500
1/1 [==============================] - 0s 2ms/step - loss: 8.2529e-04
Epoch 380/500
1/1 [==============================] - 0s 2ms/step - loss: 8.0834e-04
Epoch 381/500
1/1 [==============================] - 0s 2ms/step - loss: 7.9173e-04
Epoch 382/500
1/1 [==============================] - 0s 1ms/step - loss: 7.7547e-04
Epoch 383/500
1/1 [==============================] - 0s 2ms/step - loss: 7.5954e-04
Epoch 384/500
1/1 [==============================] - 0s 2ms/step - loss: 7.4394e-04
Epoch 385/500
1/1 [==============================] - 0s 2ms/step - loss: 7.2866e-04
Epoch 386/500
1/1 [==============================] - 0s 2ms/step - loss: 7.1369e-04
Epoch 387/500
1/1 [==============================] - 0s 2ms/step - loss: 6.9903e-04
Epoch 388/500
1/1 [==============================] - 0s 2ms/step - loss: 6.8467e-04
Epoch 389/500
1/1 [==============================] - 0s 1ms/step - loss: 6.7061e-04
Epoch 390/500
1/1 [==============================] - 0s 1ms/step - loss: 6.5684e-04
Epoch 391/500
1/1 [==============================] - 0s 2ms/step - loss: 6.4335e-04
Epoch 392/500
1/1 [==============================] - 0s 2ms/step - loss: 6.3013e-04
Epoch 393/500
1/1 [==============================] - 0s 2ms/step - loss: 6.1719e-04
Epoch 394/500
1/1 [==============================] - 0s 1ms/step - loss: 6.0451e-04
Epoch 395/500
1/1 [==============================] - 0s 2ms/step - loss: 5.9210e-04
Epoch 396/500
1/1 [==============================] - 0s 2ms/step - loss: 5.7993e-04
Epoch 397/500
1/1 [==============================] - 0s 2ms/step - loss: 5.6802e-04
Epoch 398/500
1/1 [==============================] - 0s 2ms/step - loss: 5.5635e-04
Epoch 399/500
1/1 [==============================] - 0s 1ms/step - loss: 5.4493e-04
Epoch 400/500
1/1 [==============================] - 0s 1ms/step - loss: 5.3373e-04
Epoch 401/500
1/1 [==============================] - 0s 4ms/step - loss: 5.2277e-04
Epoch 402/500
1/1 [==============================] - 0s 2ms/step - loss: 5.1203e-04
Epoch 403/500
1/1 [==============================] - 0s 1ms/step - loss: 5.0151e-04
Epoch 404/500
1/1 [==============================] - 0s 2ms/step - loss: 4.9121e-04
Epoch 405/500
1/1 [==============================] - 0s 2ms/step - loss: 4.8112e-04
Epoch 406/500
1/1 [==============================] - 0s 2ms/step - loss: 4.7124e-04
Epoch 407/500
1/1 [==============================] - 0s 2ms/step - loss: 4.6156e-04
Epoch 408/500
1/1 [==============================] - 0s 2ms/step - loss: 4.5208e-04
Epoch 409/500
1/1 [==============================] - 0s 2ms/step - loss: 4.4279e-04
Epoch 410/500
1/1 [==============================] - 0s 1ms/step - loss: 4.3370e-04
Epoch 411/500
1/1 [==============================] - 0s 2ms/step - loss: 4.2479e-04
Epoch 412/500
1/1 [==============================] - 0s 2ms/step - loss: 4.1606e-04
Epoch 413/500
1/1 [==============================] - 0s 2ms/step - loss: 4.0752e-04
Epoch 414/500
1/1 [==============================] - 0s 1ms/step - loss: 3.9914e-04
Epoch 415/500
1/1 [==============================] - 0s 1ms/step - loss: 3.9095e-04
Epoch 416/500
1/1 [==============================] - 0s 2ms/step - loss: 3.8292e-04
Epoch 417/500
1/1 [==============================] - 0s 1ms/step - loss: 3.7505e-04
Epoch 418/500
1/1 [==============================] - 0s 1ms/step - loss: 3.6735e-04
Epoch 419/500
1/1 [==============================] - 0s 2ms/step - loss: 3.5980e-04
Epoch 420/500
1/1 [==============================] - 0s 1ms/step - loss: 3.5241e-04
Epoch 421/500
1/1 [==============================] - 0s 2ms/step - loss: 3.4517e-04
Epoch 422/500
1/1 [==============================] - 0s 2ms/step - loss: 3.3808e-04
Epoch 423/500
1/1 [==============================] - 0s 2ms/step - loss: 3.3114e-04
Epoch 424/500
1/1 [==============================] - 0s 1ms/step - loss: 3.2434e-04
Epoch 425/500
1/1 [==============================] - 0s 2ms/step - loss: 3.1768e-04
Epoch 426/500
1/1 [==============================] - 0s 2ms/step - loss: 3.1115e-04
Epoch 427/500
1/1 [==============================] - 0s 2ms/step - loss: 3.0476e-04
Epoch 428/500
1/1 [==============================] - 0s 2ms/step - loss: 2.9850e-04
Epoch 429/500
1/1 [==============================] - 0s 1ms/step - loss: 2.9237e-04
Epoch 430/500
1/1 [==============================] - 0s 3ms/step - loss: 2.8636e-04
Epoch 431/500
1/1 [==============================] - 0s 2ms/step - loss: 2.8048e-04
Epoch 432/500
1/1 [==============================] - 0s 1ms/step - loss: 2.7472e-04
Epoch 433/500
1/1 [==============================] - 0s 2ms/step - loss: 2.6908e-04
Epoch 434/500
1/1 [==============================] - 0s 2ms/step - loss: 2.6355e-04
Epoch 435/500
1/1 [==============================] - 0s 1ms/step - loss: 2.5813e-04
Epoch 436/500
1/1 [==============================] - 0s 1ms/step - loss: 2.5283e-04
Epoch 437/500
1/1 [==============================] - 0s 1ms/step - loss: 2.4764e-04
Epoch 438/500
1/1 [==============================] - 0s 1ms/step - loss: 2.4255e-04
Epoch 439/500
1/1 [==============================] - 0s 2ms/step - loss: 2.3757e-04
Epoch 440/500
1/1 [==============================] - 0s 2ms/step - loss: 2.3269e-04
Epoch 441/500
1/1 [==============================] - 0s 2ms/step - loss: 2.2791e-04
Epoch 442/500
1/1 [==============================] - 0s 2ms/step - loss: 2.2323e-04
Epoch 443/500
1/1 [==============================] - 0s 1ms/step - loss: 2.1864e-04
Epoch 444/500
1/1 [==============================] - 0s 1ms/step - loss: 2.1415e-04
Epoch 445/500
1/1 [==============================] - 0s 2ms/step - loss: 2.0975e-04
Epoch 446/500
1/1 [==============================] - 0s 2ms/step - loss: 2.0544e-04
Epoch 447/500
1/1 [==============================] - 0s 2ms/step - loss: 2.0122e-04
Epoch 448/500
1/1 [==============================] - 0s 1ms/step - loss: 1.9709e-04
Epoch 449/500
1/1 [==============================] - 0s 2ms/step - loss: 1.9304e-04
Epoch 450/500
1/1 [==============================] - 0s 2ms/step - loss: 1.8908e-04
Epoch 451/500
1/1 [==============================] - 0s 2ms/step - loss: 1.8519e-04
Epoch 452/500
1/1 [==============================] - 0s 1ms/step - loss: 1.8139e-04
Epoch 453/500
1/1 [==============================] - 0s 1ms/step - loss: 1.7766e-04
Epoch 454/500
1/1 [==============================] - 0s 1ms/step - loss: 1.7401e-04
Epoch 455/500
1/1 [==============================] - 0s 1ms/step - loss: 1.7044e-04
Epoch 456/500
1/1 [==============================] - 0s 1ms/step - loss: 1.6694e-04
Epoch 457/500
1/1 [==============================] - 0s 1ms/step - loss: 1.6351e-04
Epoch 458/500
1/1 [==============================] - 0s 1ms/step - loss: 1.6015e-04
Epoch 459/500
1/1 [==============================] - 0s 2ms/step - loss: 1.5686e-04
Epoch 460/500
1/1 [==============================] - 0s 1ms/step - loss: 1.5364e-04
Epoch 461/500
1/1 [==============================] - 0s 1ms/step - loss: 1.5049e-04
Epoch 462/500
1/1 [==============================] - 0s 2ms/step - loss: 1.4739e-04
Epoch 463/500
1/1 [==============================] - 0s 1ms/step - loss: 1.4437e-04
Epoch 464/500
1/1 [==============================] - 0s 1ms/step - loss: 1.4140e-04
Epoch 465/500
1/1 [==============================] - 0s 1ms/step - loss: 1.3850e-04
Epoch 466/500
1/1 [==============================] - 0s 1ms/step - loss: 1.3565e-04
Epoch 467/500
1/1 [==============================] - 0s 1ms/step - loss: 1.3286e-04
Epoch 468/500
1/1 [==============================] - 0s 2ms/step - loss: 1.3014e-04
Epoch 469/500
1/1 [==============================] - 0s 1ms/step - loss: 1.2746e-04
Epoch 470/500
1/1 [==============================] - 0s 2ms/step - loss: 1.2485e-04
Epoch 471/500
1/1 [==============================] - 0s 1ms/step - loss: 1.2228e-04
Epoch 472/500
1/1 [==============================] - 0s 2ms/step - loss: 1.1977e-04
Epoch 473/500
1/1 [==============================] - 0s 2ms/step - loss: 1.1731e-04
Epoch 474/500
1/1 [==============================] - 0s 2ms/step - loss: 1.1490e-04
Epoch 475/500
1/1 [==============================] - 0s 2ms/step - loss: 1.1254e-04
Epoch 476/500
1/1 [==============================] - 0s 1ms/step - loss: 1.1023e-04
Epoch 477/500
1/1 [==============================] - 0s 2ms/step - loss: 1.0796e-04
Epoch 478/500
1/1 [==============================] - 0s 1ms/step - loss: 1.0575e-04
Epoch 479/500
1/1 [==============================] - 0s 1ms/step - loss: 1.0357e-04
Epoch 480/500
1/1 [==============================] - 0s 1ms/step - loss: 1.0144e-04
Epoch 481/500
1/1 [==============================] - 0s 1ms/step - loss: 9.9361e-05
Epoch 482/500
1/1 [==============================] - 0s 1ms/step - loss: 9.7320e-05
Epoch 483/500
1/1 [==============================] - 0s 1ms/step - loss: 9.5322e-05
Epoch 484/500
1/1 [==============================] - 0s 1ms/step - loss: 9.3364e-05
Epoch 485/500
1/1 [==============================] - 0s 1ms/step - loss: 9.1446e-05
Epoch 486/500
1/1 [==============================] - 0s 1ms/step - loss: 8.9568e-05
Epoch 487/500
1/1 [==============================] - 0s 1ms/step - loss: 8.7728e-05
Epoch 488/500
1/1 [==============================] - 0s 1ms/step - loss: 8.5926e-05
Epoch 489/500
1/1 [==============================] - 0s 1ms/step - loss: 8.4160e-05
Epoch 490/500
1/1 [==============================] - 0s 2ms/step - loss: 8.2433e-05
Epoch 491/500
1/1 [==============================] - 0s 2ms/step - loss: 8.0740e-05
Epoch 492/500
1/1 [==============================] - 0s 2ms/step - loss: 7.9081e-05
Epoch 493/500
1/1 [==============================] - 0s 2ms/step - loss: 7.7457e-05
Epoch 494/500
1/1 [==============================] - 0s 2ms/step - loss: 7.5866e-05
Epoch 495/500
1/1 [==============================] - 0s 1ms/step - loss: 7.4307e-05
Epoch 496/500
1/1 [==============================] - 0s 1ms/step - loss: 7.2781e-05
Epoch 497/500
1/1 [==============================] - 0s 1ms/step - loss: 7.1286e-05
Epoch 498/500
1/1 [==============================] - 0s 2ms/step - loss: 6.9823e-05
Epoch 499/500
1/1 [==============================] - 0s 2ms/step - loss: 6.8388e-05
Epoch 500/500
1/1 [==============================] - 0s 2ms/step - loss: 6.6984e-05
###Markdown
PredictionsOkay, now you have a model that has been trained to learn the relationship between X and Y. You can use the `model.predict` method to have it figure out the Y for a previously unknown X. So, for example, if X = 10, what do you think Y will be? Take a guess before you run this code:
###Code
print(model.predict([10.0]))
###Output
[[18.975853]]
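###Markdown
As an optional check (not in the original lab), you can also inspect the weight and bias that the single `Dense` layer has learned; they should be close to the 2 and 1 of $y=2x+1$, though not exactly equal because the model was trained on very little data.
###Code
# The Dense layer holds one weight (slope) and one bias (intercept).
# After training they should be approximately 2 and 1 respectively.
weight, bias = model.layers[0].get_weights()
print(weight, bias)
###Output
_____no_output_____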
|
notebooks/chemview-test.ipynb | ###Markdown
US EPA ChemView web servicesThe [documentation](http://java.epa.gov/chemview/resources/ChemView_WebServices.pdf) lists several ways of accessing data in ChemView.
###Code
URIBASE = 'http://java.epa.gov/chemview/'
###Output
_____no_output_____
###Markdown
Getting 'chemicals' data from ChemViewAs a start... this downloads data for *all* chemicals. Let's see what we get.
###Code
# Imports needed by the requests and DataFrame calls below
import json

import numpy as np
import pandas as pd
import requests
from pandas import DataFrame

uri = URIBASE + 'chemicals'
r = requests.get(uri, headers = {'Accept': 'application/json, */*'})
j = json.loads(r.text)
print(len(j))
df = DataFrame(j)
df.tail()
# Save this dataset so that I don't have to re-request it later.
df.to_pickle('../data/chemicals.pickle')
df = pd.read_pickle('../data/chemicals.pickle')
###Output
_____no_output_____
###Markdown
Data wrangling
###Code
# want to interpret 'None' as NaN
def scrub_None(x):
s = str(x).strip()
if s == 'None' or s == '':
return np.nan
else:
return s
for c in list(df.columns)[:-1]:
df[c] = df[c].apply(scrub_None)
df.tail()
###Output
_____no_output_____
###Markdown
How many unique CASRNs, PMN numbers?
###Code
# CASRNS
len(df.casNo.value_counts())
# PMN numbers
len(df.pmnNo.value_counts())
###Output
_____no_output_____
###Markdown
What's in 'synonyms'?
###Code
DataFrame(df.loc[4,'synonyms'])
###Output
_____no_output_____
###Markdown
How many 'synonyms' for each entry?
###Code
df.synonyms.apply(len).describe()
###Output
_____no_output_____
###Markdown
Do the data objects in `synonyms` all have the same attributes?
###Code
def getfields(x):
k = set()
for d in x:
j = set(d.keys())
k = k | j
return ','.join(sorted(k))
df.synonyms.apply(getfields).head()
len(df.synonyms.apply(getfields).value_counts())
###Output
_____no_output_____
###Markdown
All of the `synonyms` fields contain a variable number of objects with a uniform set of fields. Tell me more about those items with PMN numbers...
###Code
pmns = df.loc[df.pmnNo.notnull()]
pmns.head()
###Output
_____no_output_____
###Markdown
Are there any that have CASRN too? ... No.
###Code
len(pmns.casNo.dropna())
###Output
_____no_output_____ |
Assignment Day 8-LetsUpgrad_PythonEssentials.ipynb | ###Markdown
QUESTION1
###Code
def getInput(checkoddeve_arg_fun):
def wrap_function():
a = int(input("Enter Number to check odd even - "))
checkoddeve_arg_fun(a)
return wrap_function
@getInput
def checkoddeve(num):
if num%2==0:
print('Even number')
else:
print('Odd number')
checkoddeve()
###Output
Enter Number to check odd even - 1
Odd number
###Markdown
QUESTION 2
###Code
try:
fh = open("testfile", "r")
fh.write("This is my test file for exception handling!!")
except Exception as e:
print(e)
finally:
print ("Going to close the file")
fh.close()
try:
fh = open("testfile", "r")
try:
fh.write("This is my test file for exception handling!!")
finally:
print ("Going to close the file")
fh.close()
except IOError:
print ("Error: can\'t find file or read data")
###Output
Error: can't find file or read data
|
06.Math/18.3 DBSCAN.ipynb | ###Markdown
DBSCAN- density-based clustering- the number of clusters does not need to be specified- params - epsilon: the distance used to define a neighbor - minimum points (MinPts): the number of neighbors required to define a dense region- point types - core: a point with at least MinPts neighbors within distance $\epsilon$ - border: connected to a high-density region, but has fewer than MinPts neighbors itself - outlier: not connected to any cluster- `DBSCAN` attributes: - `labels_`: cluster number for each point - `core_sample_indices_`: indices of the core samples
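Below is a minimal sketch (on a tiny made-up point set, not part of the original notebook) of how `eps`, `min_samples`, `labels_` and `core_sample_indices_` fit together.
###Code
# eps: neighborhood radius; min_samples: neighbors needed for a core point.
# labels_ assigns a cluster number to each point (-1 marks outliers/noise);
# core_sample_indices_ lists the indices of the core points.
import numpy as np
from sklearn.cluster import DBSCAN

toy = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],   # a dense clump
                [5.0, 5.0], [5.1, 5.0], [5.0, 5.1],   # another dense clump
                [10.0, 0.0]])                          # isolated point -> outlier
toy_dbscan = DBSCAN(eps=0.3, min_samples=3).fit(toy)
print(toy_dbscan.labels_)                # [0 0 0 1 1 1 -1]
print(toy_dbscan.core_sample_indices_)   # [0 1 2 3 4 5]
###Output
_____no_output_____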
###Code
import numpy as np
import matplotlib.pyplot as plt

from sklearn.datasets import make_circles, make_moons
from sklearn.cluster import DBSCAN
n_samples = 1000
np.random.seed(2)
X1, y1 = make_circles(n_samples=n_samples, factor=.5, noise=.09)
X2, y2 = make_moons(n_samples=n_samples, noise=.1)
def plot_DBSCAN(title, X, eps, xlim, ylim):
mod = DBSCAN(eps=eps)
y_pred = mod.fit_predict(X)
idx_outlier = np.logical_not((mod.labels_ == 0) | (mod.labels_ == 1))
plt.scatter(X[idx_outlier, 0], X[idx_outlier, 1], marker='x', lw=1, s=20)
plt.scatter(X[mod.labels_ == 0, 0], X[mod.labels_ == 0, 1], marker='o', facecolor='g', s=5)
plt.scatter(X[mod.labels_ == 1, 0], X[mod.labels_ == 1, 1], marker='s', facecolor='y', s=5)
# core point
X_core = X[mod.core_sample_indices_, :]
idx_core_0 = np.array(list(set(np.where(mod.labels_ == 0)[0]).\
intersection(mod.core_sample_indices_)))
idx_core_1 = np.array(list(set(np.where(mod.labels_ == 1)[0]).\
intersection(mod.core_sample_indices_)))
plt.scatter(X[idx_core_0, 0], X[idx_core_0, 1], marker='o', facecolor='g', s=80, alpha=0.3)
plt.scatter(X[idx_core_1, 0], X[idx_core_1, 1], marker='s', facecolor='y', s=80, alpha=0.3)
plt.grid(False)
plt.xlim(*xlim)
plt.ylim(*ylim)
plt.xticks(())
plt.yticks(())
plt.title(title)
return y_pred
plt.figure(figsize=(10, 5))
plt.subplot(121)
y_pred1 = plot_DBSCAN("concentric circles", X1, 0.1, (-1.2, 1.2), (-1.2, 1.2))
plt.subplot(122)
y_pred2 = plot_DBSCAN("concentric circles", X2, 0.1, (-1.5, 2.5), (-0.8, 1.2))
plt.tight_layout()
plt.show()
from sklearn.metrics.cluster import adjusted_mutual_info_score, adjusted_rand_score
print("Circle ARI:", adjusted_rand_score(y1, y_pred1))
print("Circle AMI:", adjusted_mutual_info_score(y1, y_pred1))
print("Moon ARI:", adjusted_rand_score(y2, y_pred2))
print("Moon AMI:", adjusted_mutual_info_score(y2, y_pred2))
###Output
Circle ARI: 0.9414262371038592
Circle AMI: 0.8361564005781013
Moon ARI: 0.9544844153926417
Moon AMI: 0.8606657095694518
|
notebooks/simulation/Runge-Kutta.ipynb | ###Markdown
The Runge-Kutta method
###Code
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
sns.set('notebook', 'whitegrid', 'dark', font_scale=2, rc={"lines.linewidth": 2, 'grid.linestyle': '-'})
###Output
_____no_output_____
###Markdown
Euler's methodFor the initial value problem $x'=f(t,x)$, an approximate solution is computed with the recurrence$$ x_{n+1} = x_n + h f(t_n, x_n)$$
###Code
def Euler(t, x, f, h):
return x + h * f(t, x)
###Output
_____no_output_____
###Markdown
Defining the right-hand side $f$ of $x'=f(t,x)$
###Code
def func(t,x):
return x
a, b = 0.0, 1.0
N = 10
h = (b-a)/N
t = a
x = 1.0
X = np.zeros(N+1)
X[0] = x
for n in range(N):
x = Euler(t, x, func, h)
X[n+1] = x
t = a + (n + 1) * h
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(np.linspace(a,b,256), np.exp(np.linspace(a,b,256)), '--k')
ax.plot(np.linspace(a,b,N+1), X, 'o-k')
ax.set_xlabel("$t$")
ax.set_ylabel("$x$")
# plt.savefig("euler1.pdf", bbox_inches="tight")
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(np.linspace(a,b,N+1), np.abs(X-np.exp(np.linspace(a,b,N+1))), 'ok')
ax.set_xlabel("$t$")
ax.set_ylabel("error")
# plt.savefig("euler1_error.pdf", bbox_inches="tight")
a, b = 0.0, 1.0
N = 100
h = (b-a)/N
t = a
x = 1.0
X = np.zeros(N+1)
X[0] = x
for n in range(N):
x = Euler(t, x, func, h)
X[n+1] = x
t = a + (n + 1) * h
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(np.linspace(a,b,256), np.exp(np.linspace(a,b,256)), '--k')
ax.plot(np.linspace(a,b,N+1), X, '-k')
ax.set_xlabel("$t$")
ax.set_ylabel("$x$")
# plt.savefig("euler2.pdf", bbox_inches="tight")
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(np.linspace(a,b,N+1), np.abs(X-np.exp(np.linspace(a,b,N+1))), '.-k')
ax.set_xlabel("$t$")
ax.set_ylabel("error")
# plt.savefig("euler2_error.pdf", bbox_inches="tight")
###Output
_____no_output_____
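###Markdown
Since the same time-stepping loop is repeated for each method in this notebook, here is an optional helper (not in the original notebook) that factors it out; it takes the one-step method as an argument.
###Code
# Generic driver: integrate x' = f(t, x), x(a) = x0, from a to b in N steps
# using the supplied one-step method (Euler, Heun, RK4, ...).
def integrate(method, f, a, b, N, x0):
    h = (b - a) / N
    X = np.zeros(N + 1)
    X[0] = x0
    t, x = a, x0
    for n in range(N):
        x = method(t, x, f, h)
        X[n + 1] = x
        t = a + (n + 1) * h
    return X

# Example: reproduces the Euler computation above.
X_euler = integrate(Euler, func, 0.0, 1.0, 10, 1.0)
###Output
_____no_output_____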
###Markdown
Heun's method\begin{align} k_1 &= f(t_n, x_n),\\ k_2 &= f(t_n + h, x_n + h k_1),\\ x_{n+1} &= x_n + \frac{h}{2} (k_1 + k_2)\end{align}
###Code
def Heun(t, x, f, h):
k1 = f(t, x)
k2 = f(t + h, x + h * k1)
return x + 0.5 * h * (k1 + k2)
a, b = 0.0, 1.0
N = 10
h = (b-a)/N
t = a
x = 1.0
X2 = np.zeros(N+1)
X2[0] = x
for n in range(N):
x = Heun(t, x, func, h)
X2[n+1] = x
t = a + (n + 1) * h
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(np.linspace(a,b,256), np.exp(np.linspace(a,b,256)), '--k')
ax.plot(np.linspace(a,b,N+1), X2, 'o-k')
ax.set_xlabel("$t$")
ax.set_ylabel("$x$")
# plt.savefig("heun1.pdf", bbox_inches="tight")
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(np.linspace(a,b,N+1), np.abs(X2-np.exp(np.linspace(a,b,N+1))), 'ok')
ax.set_xlabel("$t$")
ax.set_ylabel("error")
# plt.savefig("heun1_error.pdf", bbox_inches="tight")
###Output
_____no_output_____
###Markdown
The classical Runge-Kutta method$$\begin{aligned}k_1 &= f(t_n, x_n),\\k_2 &= f(t_n + \tfrac{h}{2}, x_n + \tfrac{h}{2}k_1),\\k_3 &= f(t_n + \tfrac{h}{2}, x_n + \tfrac{h}{2}k_2),\\k_4 &= f(t_n + h, x_n + h k_3),\\x_{n+1} &= x_n + \frac{h}{6}(k_1 + 2 k_2 + 2 k_3 + k_4)\end{aligned}$$
###Code
def RK4(t, x, f, h):
k1 = f(t, x)
k2 = f(t+0.5*h, x+0.5*h*k1)
k3 = f(t+0.5*h, x+0.5*h*k2)
k4 = f(t+h, x+h*k3)
return x + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
a, b = 0.0, 1.0
N = 10
h = (b-a)/N
t = a
x = 1.0
X4 = np.zeros(N+1)
X4[0] = x
for n in range(N):
x = RK4(t, x, func, h)
X4[n+1] = x
t = a + (n + 1) * h
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(np.linspace(a,b,256), np.exp(np.linspace(a,b,256)), '--k')
ax.plot(np.linspace(a,b,N+1), X4, 'o-k')
ax.set_xlabel("$t$")
ax.set_ylabel("$x$")
# plt.savefig("rk1.pdf", bbox_inches="tight")
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(np.linspace(a,b,N+1), np.abs(X4-np.exp(np.linspace(a,b,N+1))), 'ok')
ax.set_xlabel("$t$")
ax.set_ylabel("error")
# plt.savefig("rk1_error.pdf", bbox_inches="tight")
###Output
_____no_output_____ |
CXR_Detection_Adway.ipynb | ###Markdown
Detecting Abnormalities on Chest Radiographs using Deep Learning.Dataset: NIH ChestXray dataset [found here](https://nihcc.app.box.com/v/ChestXray-NIHCC)Import dependencies
###Code
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import tensorflow as tf
from keras.preprocessing.image import ImageDataGenerator
from keras.applications.densenet import DenseNet121
from keras.layers import Dense, GlobalAveragePooling2D
from keras.models import Model
from keras.optimizers import Adam
from keras import backend as K
from keras.models import load_model
tf.compat.v1.disable_eager_execution()
print("Version: ", tf.__version__)
###Output
Version: 2.5.0
###Markdown
Some utility functions (you can skip this section)
###Code
import random
import cv2
import matplotlib.pyplot as plt
import numpy as np
from keras import backend as K
from keras.preprocessing import image
from sklearn.metrics import roc_auc_score, roc_curve
from tensorflow.compat.v1.logging import INFO, set_verbosity
random.seed(a=None, version=2)
set_verbosity(INFO)
def get_mean_std_per_batch(image_path, df, H=320, W=320):
sample_data = []
for idx, img in enumerate(df.sample(100)["Image"].values):
# path = image_dir + img
sample_data.append(
np.array(image.load_img(image_path, target_size=(H, W))))
mean = np.mean(sample_data[0])
std = np.std(sample_data[0])
return mean, std
def load_image(img, image_dir, df, preprocess=True, H=320, W=320):
"""Load and preprocess image."""
img_path = image_dir + img
mean, std = get_mean_std_per_batch(img_path, df, H=H, W=W)
x = image.load_img(img_path, target_size=(H, W))
if preprocess:
x -= mean
x /= std
x = np.expand_dims(x, axis=0)
return x
def grad_cam(input_model, image, cls, layer_name, H=320, W=320):
"""GradCAM method for visualizing input saliency."""
y_c = input_model.output[0, cls]
conv_output = input_model.get_layer(layer_name).output
grads = K.gradients(y_c, conv_output)[0]
gradient_function = K.function([input_model.input], [conv_output, grads])
output, grads_val = gradient_function([image])
output, grads_val = output[0, :], grads_val[0, :, :, :]
weights = np.mean(grads_val, axis=(0, 1))
cam = np.dot(output, weights)
# Process CAM
cam = cv2.resize(cam, (W, H), cv2.INTER_LINEAR)
cam = np.maximum(cam, 0)
cam = cam / cam.max()
return cam
def compute_gradcam(model, img, image_dir, df, labels, selected_labels,
layer_name='bn'):
preprocessed_input = load_image(img, image_dir, df)
predictions = model.predict(preprocessed_input)
print("Loading original image")
plt.figure(figsize=(15, 10))
plt.subplot(151)
plt.title("Original")
plt.axis('off')
plt.imshow(load_image(img, image_dir, df, preprocess=False), cmap='gray')
j = 1
for i in range(len(labels)):
if labels[i] in selected_labels:
print(f"Generating gradcam for class {labels[i]}")
gradcam = grad_cam(model, preprocessed_input, i, layer_name)
plt.subplot(151 + j)
plt.title(f"{labels[i]}: p={predictions[0][i]:.3f}")
plt.axis('off')
plt.imshow(load_image(img, image_dir, df, preprocess=False),
cmap='gray')
plt.imshow(gradcam, cmap='jet', alpha=min(0.5, predictions[0][i]))
j += 1
def get_roc_curve(labels, predicted_vals, generator):
auc_roc_vals = []
for i in range(len(labels)):
try:
gt = generator.labels[:, i]
pred = predicted_vals[:, i]
auc_roc = roc_auc_score(gt, pred)
auc_roc_vals.append(auc_roc)
fpr_rf, tpr_rf, _ = roc_curve(gt, pred)
plt.figure(1, figsize=(10, 10))
plt.plot([0, 1], [0, 1], 'k--')
plt.plot(fpr_rf, tpr_rf,
label=labels[i] + " (" + str(round(auc_roc, 3)) + ")")
plt.xlabel('False positive rate')
plt.ylabel('True positive rate')
plt.title('ROC curve')
plt.legend(loc='best')
except:
print(
f"Error in generating ROC curve for {labels[i]}. "
f"Dataset lacks enough examples."
)
plt.show()
return auc_roc_vals
train_df = pd.read_csv("/content/drive/MyDrive/nih/train-small.csv")
valid_df = pd.read_csv("/content/drive/MyDrive/nih/valid-small.csv")
test_df = pd.read_csv("/content/drive/MyDrive/nih/test.csv")
labels = ['Cardiomegaly',
'Emphysema',
'Effusion',
'Hernia',
'Infiltration',
'Mass',
'Nodule',
'Atelectasis',
'Pneumothorax',
'Pleural_Thickening',
'Pneumonia',
'Fibrosis',
'Edema',
'Consolidation']
def check_for_leakage(df1, df2, patient_col):
"""
    Return True if any patients are in both df1 and df2.
Args:
df1 (dataframe): dataframe describing first dataset
df2 (dataframe): dataframe describing second dataset
patient_col (str): string name of column with patient IDs
Returns:
leakage (bool): True if there is leakage, otherwise False
"""
df1_patients_unique = set(df1[patient_col].values)
df2_patients_unique = set(df2[patient_col].values)
patients_in_both_groups = df1_patients_unique.intersection(df2_patients_unique)
# leakage contains true if there is patient overlap, otherwise false.
leakage = len(patients_in_both_groups) > 0 # boolean (true if there is at least 1 patient in both groups)
return leakage
print("leakage between train and test: {}".format(check_for_leakage(train_df, test_df, 'PatientId')))
print("leakage between valid and test: {}".format(check_for_leakage(valid_df, test_df, 'PatientId')))
###Output
leakage between train and test: False
leakage between valid and test: False
###Markdown
Define the functions that create generators for loading the training, validation and testing datasets
###Code
def get_train_generator(df, image_dir, x_col, y_cols, shuffle=True, batch_size=8, seed=1, target_w = 320, target_h = 320):
"""
Return generator for training set, normalizing using batch
statistics.
Args:
train_df (dataframe): dataframe specifying training data.
image_dir (str): directory where image files are held.
x_col (str): name of column in df that holds filenames.
y_cols (list): list of strings that hold y labels for images.
sample_size (int): size of sample to use for normalization statistics.
batch_size (int): images per batch to be fed into model during training.
seed (int): random seed.
target_w (int): final width of input images.
target_h (int): final height of input images.
Returns:
train_generator (DataFrameIterator): iterator over training set
"""
print("getting train generator...")
# normalize images
image_generator = ImageDataGenerator(
samplewise_center=True,
samplewise_std_normalization= True)
# flow from directory with specified batch size
# and target image size
generator = image_generator.flow_from_dataframe(
dataframe=df,
directory=image_dir,
x_col=x_col,
y_col=y_cols,
class_mode="raw",
batch_size=batch_size,
shuffle=shuffle,
seed=seed,
target_size=(target_w,target_h))
return generator
def get_test_and_valid_generator(valid_df, test_df, train_df, image_dir, x_col, y_cols, sample_size=100, batch_size=8, seed=1, target_w = 320, target_h = 320):
"""
    Return generators for the validation set and test set using
normalization statistics from training set.
Args:
valid_df (dataframe): dataframe specifying validation data.
test_df (dataframe): dataframe specifying test data.
train_df (dataframe): dataframe specifying training data.
image_dir (str): directory where image files are held.
x_col (str): name of column in df that holds filenames.
y_cols (list): list of strings that hold y labels for images.
sample_size (int): size of sample to use for normalization statistics.
batch_size (int): images per batch to be fed into model during training.
seed (int): random seed.
target_w (int): final width of input images.
target_h (int): final height of input images.
Returns:
test_generator (DataFrameIterator) and valid_generator: iterators over test set and validation set respectively
"""
print("getting train and valid generators...")
# get generator to sample dataset
raw_train_generator = ImageDataGenerator().flow_from_dataframe(
dataframe=train_df,
directory=IMAGE_DIR,
x_col="Image",
y_col=labels,
class_mode="raw",
batch_size=sample_size,
shuffle=True,
target_size=(target_w, target_h))
# get data sample
batch = raw_train_generator.next()
data_sample = batch[0]
# use sample to fit mean and std for test set generator
image_generator = ImageDataGenerator(
featurewise_center=True,
featurewise_std_normalization= True)
# fit generator to sample from training data
image_generator.fit(data_sample)
# get test generator
valid_generator = image_generator.flow_from_dataframe(
dataframe= valid_df,
directory= image_dir,
x_col= x_col,
y_col = y_cols,
class_mode="raw",
batch_size=batch_size,
shuffle=False,
seed=seed,
target_size=(target_w,target_h))
test_generator = image_generator.flow_from_dataframe(
dataframe=test_df,
directory=image_dir,
x_col=x_col,
y_col=y_cols,
class_mode="raw",
batch_size=batch_size,
shuffle=False,
seed=seed,
target_size=(target_w,target_h))
return valid_generator, test_generator
IMAGE_DIR = "/content/drive/MyDrive/nih/images-small/"
train_generator = get_train_generator(train_df, IMAGE_DIR, "Image", labels)
valid_generator , test_generator = get_test_and_valid_generator(valid_df, test_df, train_df, IMAGE_DIR, "Image", labels)
###Output
getting train generator...
Found 1000 validated image filenames.
getting train and valid generators...
Found 1000 validated image filenames.
Found 1 validated image filenames.
Found 420 validated image filenames.
###Markdown
Sanity Check: Get single output from generator and plot
###Code
x, y = train_generator.__getitem__(0)
plt.imshow(x[0]);
###Output
Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).
###Markdown
Pre-processing
- Address class imbalance by computing the class frequencies and using them to weight the loss
###Code
plt.xticks(rotation=90)
plt.bar(x=labels, height=np.mean(train_generator.labels, axis=0))
plt.title("Frequency of Each Class")
plt.show()
def compute_class_freqs(labels):
"""
Compute positive and negative frequences for each class.
Args:
labels (np.array): matrix of labels, size (num_examples, num_classes)
Returns:
positive_frequencies (np.array): array of positive frequences for each
class, size (num_classes)
negative_frequencies (np.array): array of negative frequences for each
class, size (num_classes)
"""
# total number of patients (rows)
N = labels.shape[0]
positive_frequencies = np.sum(labels, axis=0) / N
negative_frequencies = 1 - positive_frequencies
return positive_frequencies, negative_frequencies
freq_pos, freq_neg = compute_class_freqs(train_generator.labels)
freq_pos
data = pd.DataFrame({"Class": labels, "Label": "Positive", "Value": freq_pos})
data = data.append([{"Class": labels[l], "Label": "Negative", "Value": v} for l,v in enumerate(freq_neg)], ignore_index=True)
plt.xticks(rotation=90)
f = sns.barplot(x="Class", y="Value", hue="Label" ,data=data)
pos_weights = freq_neg
neg_weights = freq_pos
pos_contribution = freq_pos * pos_weights
neg_contribution = freq_neg * neg_weights
data = pd.DataFrame({"Class": labels, "Label": "Positive", "Value": pos_contribution})
data = data.append([{"Class": labels[l], "Label": "Negative", "Value": v}
for l,v in enumerate(neg_contribution)], ignore_index=True)
plt.xticks(rotation=90)
sns.barplot(x="Class", y="Value", hue="Label" ,data=data);
def get_weighted_loss(pos_weights, neg_weights, epsilon=1e-7):
"""
Return weighted loss function given negative weights and positive weights.
Args:
pos_weights (np.array): array of positive weights for each class, size (num_classes)
neg_weights (np.array): array of negative weights for each class, size (num_classes)
Returns:
weighted_loss (function): weighted loss function
"""
def weighted_loss(y_true, y_pred):
"""
Return weighted loss value.
Args:
y_true (Tensor): Tensor of true labels, size is (num_examples, num_classes)
y_pred (Tensor): Tensor of predicted labels, size is (num_examples, num_classes)
Returns:
loss (Tensor): overall scalar loss summed across all classes
"""
# initialize loss to zero
loss = 0.0
for i in range(len(pos_weights)):
# for each class, add average weighted loss for that class
loss += K.mean(-(pos_weights[i] *y_true[:,i] * K.log(y_pred[:,i] + epsilon)
+ neg_weights[i]* (1 - y_true[:,i]) * K.log( 1 - y_pred[:,i] + epsilon)))
return loss
return weighted_loss
###Output
_____no_output_____
###Markdown
Load pretrained model
###Code
# create the base pre-trained model
base_model = DenseNet121(weights= '/content/drive/MyDrive/nih/densenet.hdf5', include_top=False)
x = base_model.output
# add a global spatial average pooling layer
x = GlobalAveragePooling2D()(x)
# and a logistic layer
predictions = Dense(len(labels), activation="sigmoid")(x)
model = Model(inputs= base_model.input, outputs=predictions)
model.compile(optimizer= 'adam', loss=get_weighted_loss(pos_weights, neg_weights))
# Skip training for now
# history = model.fit_generator(train_generator,
# validation_data=valid_generator,
# steps_per_epoch=100,
# validation_steps=25,
# epochs = 30)
# plt.plot(history.history['loss'])
# plt.ylabel("loss")
# plt.xlabel("epoch")
# plt.title("Training Loss Curve")
# plt.show()
model.load_weights("/content/drive/MyDrive/nih/pretrained_model.h5")
predicted_vals = model.predict_generator(test_generator, steps = len(test_generator))
auc_rocs = get_roc_curve(labels, predicted_vals, test_generator)
df = pd.read_csv("/content/drive/MyDrive/nih/train-small.csv")
IMAGE_DIR = "/content/drive/MyDrive/nih/images-small/"
# only show the labels with the top 4 AUC
labels_to_show = np.take(labels, np.argsort(auc_rocs)[::-1])[:4]
###Output
_____no_output_____
###Markdown
Compute Grad-CAMs
###Code
compute_gradcam(model, '00008270_015.png', IMAGE_DIR, df, labels, labels_to_show)
compute_gradcam(model, '00011355_002.png', IMAGE_DIR, df, labels, labels_to_show)
compute_gradcam(model, '00005410_000.png', IMAGE_DIR, df, labels, labels_to_show)
###Output
_____no_output_____ |
Modelos Lineales/ANOVA k factores.ipynb | ###Markdown
Two-Way ANOVA without replication
###Code
df = pd.DataFrame({'Lab': np.repeat(['lab1', 'lab2', 'lab3','lab4'], 4),
'Sample': np.tile(np.repeat(['type1', 'type2', 'type3','type4'], 1), 4),
'Recovery': [99.5, 83.0, 96.5, 96.8,
105.0, 105.5, 104.0, 108.0,
95.4, 81.9, 87.4, 86.3,
93.7, 80.8, 84.5, 70.3]})
print(df[:])
#perform two-way ANOVA without replication
model = ols('Recovery ~ C(Lab) + C(Sample)', data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
bloques = [1,2,3,4,5]
tratamientos = ["T. Control","T1","T2"]
datos = [3.7,4.5,5.7,4.6,6.8,7,4.6,5.2,6.1,4.5,5.2,5.7,4.8,6.9,7.6]
# Build the design as 5 blocks x 3 treatments (15 rows), matching the 15 measurements in `datos`
df = pd.DataFrame({"Bloques": np.repeat(bloques, len(tratamientos)),
                   "Tratamientos": np.tile(tratamientos, len(bloques))})#,
                  # "Datos": datos})
df
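# A minimal sketch (an addition, kept commented out; it assumes `datos` is ordered block by block,
# i.e. the three treatment values for block 1 come first): attach the measurements and fit the
# randomized block ANOVA, mirroring the two-way ANOVA without replication above.
# df["Datos"] = datos
# model_rb = ols('Datos ~ C(Tratamientos) + C(Bloques)', data=df).fit()
# print(sm.stats.anova_lm(model_rb, typ=2))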
###Output
_____no_output_____ |
Model backlog/Training/Classification/Google Colab/9-resnet50_224x224_untrained.ipynb | ###Markdown
Dependencies
###Code
!unzip -q '/content/drive/My Drive/Colab Notebooks/[Kaggle] Understanding Clouds from Satellite Images/Data/train_images384x480.zip'
#@title
# Dependencies
import os
import cv2
import math
import random
import shutil
import warnings
import numpy as np
import pandas as pd
import seaborn as sns
from skimage import exposure
import multiprocessing as mp
import albumentations as albu
import matplotlib.pyplot as plt
from tensorflow import set_random_seed
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report, accuracy_score
from keras import backend as K
from keras.utils import Sequence
from keras.layers import Input, average
from keras import optimizers, applications
from keras.models import Model, load_model
from keras.losses import binary_crossentropy
from keras.preprocessing.image import ImageDataGenerator
from keras.layers import Dense, GlobalAveragePooling2D, Input
from keras.callbacks import EarlyStopping, ReduceLROnPlateau, ModelCheckpoint
# Required repositories
os.system('pip install segmentation-models')
os.system('pip install keras-rectified-adam')
os.system('pip install tta-wrapper')
from keras_radam import RAdam
import segmentation_models as sm
from tta_wrapper import tta_segmentation
# Misc
def seed_everything(seed=0):
random.seed(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
np.random.seed(seed)
set_random_seed(seed)
# Segmentation related
def rle_decode(mask_rle, shape=(1400, 2100)):
s = mask_rle.split()
starts, lengths = [np.asarray(x, dtype=int) for x in (s[0:][::2], s[1:][::2])]
starts -= 1
ends = starts + lengths
img = np.zeros(shape[0]*shape[1], dtype=np.uint8)
for lo, hi in zip(starts, ends):
img[lo:hi] = 1
return img.reshape(shape, order='F') # Needed to align to RLE direction
def rle_to_mask(rle_string, height, width):
rows, cols = height, width
if rle_string == -1:
return np.zeros((height, width))
else:
rle_numbers = [int(num_string) for num_string in rle_string.split(' ')]
rle_pairs = np.array(rle_numbers).reshape(-1,2)
img = np.zeros(rows*cols, dtype=np.uint8)
for index, length in rle_pairs:
index -= 1
img[index:index+length] = 255
img = img.reshape(cols,rows)
img = img.T
return img
def get_mask_area(df, index, column_name, shape=(1400, 2100)):
rle = df.loc[index][column_name]
try:
math.isnan(rle)
np_mask = np.zeros((shape[0], shape[1], 3))
except:
np_mask = rle_to_mask(rle, shape[0], shape[1])
np_mask = np.clip(np_mask, 0, 1)
return int(np.sum(np_mask))
def np_resize(img, input_shape):
"""
Reshape a numpy array, which is input_shape=(height, width),
as opposed to input_shape=(width, height) for cv2
"""
height, width = input_shape
return cv2.resize(img, (width, height))
def mask2rle(img):
'''
img: numpy array, 1 - mask, 0 - background
Returns run length as string formated
'''
pixels= img.T.flatten()
pixels = np.concatenate([[0], pixels, [0]])
runs = np.where(pixels[1:] != pixels[:-1])[0] + 1
runs[1::2] -= runs[::2]
return ' '.join(str(x) for x in runs)
def build_rles(masks, reshape=None):
width, height, depth = masks.shape
rles = []
for i in range(depth):
mask = masks[:, :, i]
if reshape:
mask = mask.astype(np.float32)
mask = np_resize(mask, reshape).astype(np.int64)
rle = mask2rle(mask)
rles.append(rle)
return rles
def build_masks(rles, input_shape, reshape=None):
depth = len(rles)
if reshape is None:
masks = np.zeros((*input_shape, depth))
else:
masks = np.zeros((*reshape, depth))
for i, rle in enumerate(rles):
if type(rle) is str:
if reshape is None:
masks[:, :, i] = rle2mask(rle, input_shape)
else:
mask = rle2mask(rle, input_shape)
reshaped_mask = np_resize(mask, reshape)
masks[:, :, i] = reshaped_mask
return masks
def rle2mask(rle, input_shape):
width, height = input_shape[:2]
mask = np.zeros( width*height ).astype(np.uint8)
array = np.asarray([int(x) for x in rle.split()])
starts = array[0::2]
lengths = array[1::2]
current_position = 0
for index, start in enumerate(starts):
mask[int(start):int(start+lengths[index])] = 1
current_position += lengths[index]
return mask.reshape(height, width).T
def dice_coefficient(y_true, y_pred):
y_true = np.asarray(y_true).astype(np.bool)
y_pred = np.asarray(y_pred).astype(np.bool)
intersection = np.logical_and(y_true, y_pred)
return (2. * intersection.sum()) / (y_true.sum() + y_pred.sum())
def dice_coef(y_true, y_pred, smooth=1):
y_true_f = K.flatten(y_true)
y_pred_f = K.flatten(y_pred)
intersection = K.sum(y_true_f * y_pred_f)
return (2. * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)
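# Quick illustrative check of the Dice formula (kept commented out): two masks that each contain
# 2 positive pixels and overlap in 1 give Dice = 2*1/(2+2) = 0.5
# dice_coefficient(np.array([[1, 1], [0, 0]]), np.array([[1, 0], [1, 0]]))  # -> 0.5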
# Data pre-process
def preprocess_image(image_id, base_path, save_path, HEIGHT, WIDTH):
image = cv2.imread(base_path + image_id)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
image = cv2.resize(image, (WIDTH, HEIGHT))
cv2.imwrite(save_path + image_id, image)
def pre_process_set(df, preprocess_fn):
n_cpu = mp.cpu_count()
df_n_cnt = df.shape[0]//n_cpu
pool = mp.Pool(n_cpu)
dfs = [df.iloc[df_n_cnt*i:df_n_cnt*(i+1)] for i in range(n_cpu)]
dfs[-1] = df.iloc[df_n_cnt*(n_cpu-1):]
res = pool.map(preprocess_fn, [x_df for x_df in dfs])
pool.close()
# def preprocess_data(df, HEIGHT=HEIGHT, WIDTH=WIDTH):
# df = df.reset_index()
# for i in range(df.shape[0]):
# item = df.iloc[i]
# image_id = item['image']
# item_set = item['set']
# if item_set == 'train':
# preprocess_image(image_id, train_base_path, train_images_dest_path, HEIGHT, WIDTH)
# if item_set == 'validation':
# preprocess_image(image_id, train_base_path, validation_images_dest_path, HEIGHT, WIDTH)
# if item_set == 'test':
# preprocess_image(image_id, test_base_path, test_images_dest_path, HEIGHT, WIDTH)
# Model evaluation
def get_metrics_classification(df, preds, label_columns, threshold=0.5, show_report=True):
accuracy = []
precision = []
recall = []
f_score = []
for index, label in enumerate(label_columns):
print('Metrics for: %s' % label)
if show_report:
print(classification_report(df[label], (preds[:,index] > threshold).astype(int), output_dict=False))
metrics = classification_report(df[label], (preds[:,index] > threshold).astype(int), output_dict=True)
accuracy.append(metrics['accuracy'])
precision.append(metrics['1']['precision'])
recall.append(metrics['1']['recall'])
f_score.append(metrics['1']['f1-score'])
print('Averaged accuracy: %.2f' % np.mean(accuracy))
print('Averaged precision: %.2f' % np.mean(precision))
print('Averaged recall: %.2f' % np.mean(recall))
print('Averaged f_score: %.2f' % np.mean(f_score))
def plot_metrics(history, metric_list=['loss', 'dice_coef'], figsize=(22, 14)):
fig, axes = plt.subplots(len(metric_list), 1, sharex='col', figsize=(22, len(metric_list)*4))
axes = axes.flatten()
for index, metric in enumerate(metric_list):
axes[index].plot(history[metric], label='Train %s' % metric)
axes[index].plot(history['val_%s' % metric], label='Validation %s' % metric)
axes[index].legend(loc='best')
axes[index].set_title(metric)
plt.xlabel('Epochs')
sns.despine()
plt.show()
# Model post process
def post_process(probability, threshold=0.5, min_size=10000):
mask = cv2.threshold(probability, threshold, 1, cv2.THRESH_BINARY)[1]
num_component, component = cv2.connectedComponents(mask.astype(np.uint8))
predictions = np.zeros(probability.shape, np.float32)
for c in range(1, num_component):
p = (component == c)
if p.sum() > min_size:
predictions[p] = 1
return predictions
# Prediction evaluation
def get_metrics(model, target_df, df, df_images_dest_path, label_columns, tresholds, min_mask_sizes, N_CLASSES=4, seed=0, preprocessing=None, adjust_fn=None, adjust_param=None, set_name='Complete set', column_names=['Class', 'Dice', 'Dice Post']):
metrics = []
for class_name in label_columns:
metrics.append([class_name, 0, 0])
metrics_df = pd.DataFrame(metrics, columns=column_names)
for i in range(0, df.shape[0], 300):
batch_idx = list(range(i, min(df.shape[0], i + 300)))
batch_set = df[batch_idx[0]: batch_idx[-1]+1]
ratio = len(batch_set) / len(df)
generator = DataGenerator(
directory=df_images_dest_path,
dataframe=batch_set,
target_df=target_df,
batch_size=len(batch_set),
target_size=model.input_shape[1:3],
n_channels=model.input_shape[3],
n_classes=N_CLASSES,
preprocessing=preprocessing,
adjust_fn=adjust_fn,
adjust_param=adjust_param,
seed=seed,
mode='fit',
shuffle=False)
x, y = generator.__getitem__(0)
preds = model.predict(x)
for class_index in range(N_CLASSES):
class_score = []
class_score_post = []
mask_class = y[..., class_index]
pred_class = preds[..., class_index]
for index in range(len(batch_idx)):
sample_mask = mask_class[index, ]
sample_pred = pred_class[index, ]
sample_pred_post = post_process(sample_pred, threshold=tresholds[class_index], min_size=min_mask_sizes[class_index])
if (sample_mask.sum() == 0) & (sample_pred.sum() == 0):
dice_score = 1.
else:
dice_score = dice_coefficient(sample_pred, sample_mask)
if (sample_mask.sum() == 0) & (sample_pred_post.sum() == 0):
dice_score_post = 1.
else:
dice_score_post = dice_coefficient(sample_pred_post, sample_mask)
class_score.append(dice_score)
class_score_post.append(dice_score_post)
metrics_df.loc[metrics_df[column_names[0]] == label_columns[class_index], column_names[1]] += np.mean(class_score) * ratio
metrics_df.loc[metrics_df[column_names[0]] == label_columns[class_index], column_names[2]] += np.mean(class_score_post) * ratio
metrics_df = metrics_df.append({column_names[0]:set_name, column_names[1]:np.mean(metrics_df[column_names[1]].values), column_names[2]:np.mean(metrics_df[column_names[2]].values)}, ignore_index=True).set_index(column_names[0])
return metrics_df
def get_metrics_ensemble(model_list, target_df, df, df_images_dest_path, label_columns, tresholds, min_mask_sizes, N_CLASSES=4, seed=0, preprocessing=None, adjust_fn=None, adjust_param=None, set_name='Complete set', column_names=['Class', 'Dice', 'Dice Post']):
metrics = []
for class_name in label_columns:
metrics.append([class_name, 0, 0])
metrics_df = pd.DataFrame(metrics, columns=column_names)
for i in range(0, df.shape[0], 300):
batch_idx = list(range(i, min(df.shape[0], i + 300)))
batch_set = df[batch_idx[0]: batch_idx[-1]+1]
ratio = len(batch_set) / len(df)
target_size = model_list[0].input_shape[1:3]
n_channels = model_list[0].input_shape[3]
generator = DataGenerator(
directory=df_images_dest_path,
dataframe=batch_set,
target_df=target_df,
batch_size=len(batch_set),
target_size=target_size,
n_channels=n_channels,
n_classes=N_CLASSES,
preprocessing=preprocessing,
adjust_fn=adjust_fn,
adjust_param=adjust_param,
seed=seed,
mode='fit',
shuffle=False)
x, y = generator.__getitem__(0)
preds = np.zeros((len(batch_set), *target_size, N_CLASSES))
for model in model_list:
preds += model.predict(x)
preds /= len(model_list)
for class_index in range(N_CLASSES):
class_score = []
class_score_post = []
mask_class = y[..., class_index]
pred_class = preds[..., class_index]
for index in range(len(batch_idx)):
sample_mask = mask_class[index, ]
sample_pred = pred_class[index, ]
sample_pred_post = post_process(sample_pred, threshold=tresholds[class_index], min_size=min_mask_sizes[class_index])
if (sample_mask.sum() == 0) & (sample_pred.sum() == 0):
dice_score = 1.
else:
dice_score = dice_coefficient(sample_pred, sample_mask)
if (sample_mask.sum() == 0) & (sample_pred_post.sum() == 0):
dice_score_post = 1.
else:
dice_score_post = dice_coefficient(sample_pred_post, sample_mask)
class_score.append(dice_score)
class_score_post.append(dice_score_post)
metrics_df.loc[metrics_df[column_names[0]] == label_columns[class_index], column_names[1]] += np.mean(class_score) * ratio
metrics_df.loc[metrics_df[column_names[0]] == label_columns[class_index], column_names[2]] += np.mean(class_score_post) * ratio
metrics_df = metrics_df.append({column_names[0]:set_name, column_names[1]:np.mean(metrics_df[column_names[1]].values), column_names[2]:np.mean(metrics_df[column_names[2]].values)}, ignore_index=True).set_index(column_names[0])
return metrics_df
def inspect_predictions(df, image_ids, images_dest_path, pred_col=None, label_col='EncodedPixels', title_col='Image_Label', img_shape=(525, 350), figsize=(22, 6)):
if pred_col:
for sample in image_ids:
sample_df = df[df['image'] == sample]
fig, axes = plt.subplots(2, 5, figsize=figsize)
img = cv2.imread(images_dest_path + sample_df['image'].values[0])
img = cv2.resize(img, img_shape)
axes[0][0].imshow(img)
axes[1][0].imshow(img)
axes[0][0].set_title('Label', fontsize=16)
axes[1][0].set_title('Predicted', fontsize=16)
axes[0][0].axis('off')
axes[1][0].axis('off')
for i in range(4):
mask = sample_df[label_col].values[i]
try:
math.isnan(mask)
mask = np.zeros((img_shape[1], img_shape[0]))
except:
mask = rle_decode(mask)
axes[0][i+1].imshow(mask)
axes[1][i+1].imshow(rle2mask(sample_df[pred_col].values[i], img.shape))
axes[0][i+1].set_title(sample_df[title_col].values[i], fontsize=18)
axes[1][i+1].set_title(sample_df[title_col].values[i], fontsize=18)
axes[0][i+1].axis('off')
axes[1][i+1].axis('off')
else:
for sample in image_ids:
sample_df = df[df['image'] == sample]
fig, axes = plt.subplots(1, 5, figsize=figsize)
img = cv2.imread(images_dest_path + sample_df['image'].values[0])
img = cv2.resize(img, img_shape)
axes[0].imshow(img)
axes[0].set_title('Original', fontsize=16)
axes[0].axis('off')
for i in range(4):
mask = sample_df[label_col].values[i]
try:
math.isnan(mask)
mask = np.zeros((img_shape[1], img_shape[0]))
except:
mask = rle_decode(mask, shape=(img_shape[1], img_shape[0]))
axes[i+1].imshow(mask)
axes[i+1].set_title(sample_df[title_col].values[i], fontsize=18)
axes[i+1].axis('off')
# Model tuning
def classification_tunning(y_true, y_pred, label_columns, threshold_grid=np.arange(0, 1, .01), column_names=['Class', 'Threshold', 'Score'], print_score=True):
metrics = []
for label in label_columns:
for threshold in threshold_grid:
metrics.append([label, threshold, 0])
metrics_df = pd.DataFrame(metrics, columns=column_names)
for index, label in enumerate(label_columns):
for thr in threshold_grid:
metrics_df.loc[(metrics_df[column_names[0]] == label) & (metrics_df[column_names[1]] == thr) , column_names[2]] = accuracy_score(y_true[:,index], (y_pred[:,index] > thr).astype(int))
best_tresholds = []
best_scores = []
for index, label in enumerate(label_columns):
metrics_df_lbl = metrics_df[metrics_df[column_names[0]] == label_columns[index]]
optimal_values_lbl = metrics_df_lbl.loc[metrics_df_lbl[column_names[2]].idxmax()].values
best_tresholds.append(optimal_values_lbl[1])
best_scores.append(optimal_values_lbl[2])
if print_score:
for index, label in enumerate(label_columns):
print('%s treshold=%.2f Score=%.3f' % (label, best_tresholds[index], best_scores[index]))
return best_tresholds
def segmentation_tunning(model, target_df, df, df_images_dest_path, label_columns, mask_grid, threshold_grid=np.arange(0, 1, .01), N_CLASSES=4, preprocessing=None, adjust_fn=None, adjust_param=None, seed=0, column_names=['Class', 'Threshold', 'Mask size', 'Dice'], print_score=True):
metrics = []
for label in label_columns:
for threshold in threshold_grid:
for mask_size in mask_grid:
metrics.append([label, threshold, mask_size, 0])
metrics_df = pd.DataFrame(metrics, columns=column_names)
for i in range(0, df.shape[0], 300):
batch_idx = list(range(i, min(df.shape[0], i + 300)))
batch_set = df[batch_idx[0]: batch_idx[-1]+1]
ratio = len(batch_set) / len(df)
generator = DataGenerator(
directory=df_images_dest_path,
dataframe=batch_set,
target_df=target_df,
batch_size=len(batch_set),
target_size=model.input_shape[1:3],
n_channels=model.input_shape[3],
n_classes=N_CLASSES,
preprocessing=preprocessing,
adjust_fn=adjust_fn,
adjust_param=adjust_param,
seed=seed,
mode='fit',
shuffle=False)
x, y = generator.__getitem__(0)
preds = model.predict(x)
for class_index, label in enumerate(label_columns):
class_score = []
label_class = y[..., class_index]
pred_class = preds[..., class_index]
for threshold in threshold_grid:
for mask_size in mask_grid:
mask_score = []
for index in range(len(batch_idx)):
label_mask = label_class[index, ]
pred_mask = pred_class[index, ]
pred_mask = post_process(pred_mask, threshold=threshold, min_size=mask_size)
dice_score = dice_coefficient(pred_mask, label_mask)
if (pred_mask.sum() == 0) & (label_mask.sum() == 0):
dice_score = 1.
mask_score.append(dice_score)
metrics_df.loc[(metrics_df[column_names[0]] == label) & (metrics_df[column_names[1]] == threshold) &
(metrics_df[column_names[2]] == mask_size), column_names[3]] += np.mean(mask_score) * ratio
best_tresholds = []
best_masks = []
best_dices = []
for index, label in enumerate(label_columns):
metrics_df_lbl = metrics_df[metrics_df[column_names[0]] == label_columns[index]]
optimal_values_lbl = metrics_df_lbl.loc[metrics_df_lbl[column_names[3]].idxmax()].values
best_tresholds.append(optimal_values_lbl[1])
best_masks.append(optimal_values_lbl[2])
best_dices.append(optimal_values_lbl[3])
if print_score:
for index, name in enumerate(label_columns):
print('%s treshold=%.2f mask size=%d Dice=%.3f' % (name, best_tresholds[index], best_masks[index], best_dices[index]))
return best_tresholds, best_masks
# Model utils
def ensemble_models(input_shape, model_list, rename_model=False):
if rename_model:
for index, model in enumerate(model_list):
model.name = 'ensemble_' + str(index) + '_' + model.name
for layer in model.layers:
layer.name = 'ensemble_' + str(index) + '_' + layer.name
inputs = Input(shape=input_shape)
outputs = average([model(inputs) for model in model_list])
return Model(inputs=inputs, outputs=outputs)
# Data generator
class DataGenerator(Sequence):
def __init__(self, dataframe, directory, batch_size, n_channels, target_size, n_classes,
mode='fit', target_df=None, shuffle=True, preprocessing=None, augmentation=None, adjust_fn=None, adjust_param=None, seed=0):
self.batch_size = batch_size
self.dataframe = dataframe
self.mode = mode
self.directory = directory
self.target_df = target_df
self.target_size = target_size
self.n_channels = n_channels
self.n_classes = n_classes
self.shuffle = shuffle
self.preprocessing = preprocessing
self.augmentation = augmentation
self.adjust_fn = adjust_fn
self.adjust_param = adjust_param
self.seed = seed
self.mask_shape = (1400, 2100)
self.list_IDs = self.dataframe.index
if self.seed is not None:
np.random.seed(self.seed)
self.on_epoch_end()
def __len__(self):
return len(self.list_IDs) // self.batch_size
def __getitem__(self, index):
indexes = self.indexes[index*self.batch_size:(index+1)*self.batch_size]
list_IDs_batch = [self.list_IDs[k] for k in indexes]
X = self.__generate_X(list_IDs_batch)
if self.mode == 'fit':
Y = self.__generate_Y(list_IDs_batch)
if self.augmentation:
X, Y = self.__augment_batch(X, Y)
return X, Y
elif self.mode == 'predict':
return X
def on_epoch_end(self):
self.indexes = np.arange(len(self.list_IDs))
if self.shuffle == True:
np.random.shuffle(self.indexes)
def __generate_X(self, list_IDs_batch):
X = np.empty((self.batch_size, *self.target_size, self.n_channels))
for i, ID in enumerate(list_IDs_batch):
img_name = self.dataframe['image'].loc[ID]
img_path = self.directory + img_name
img = cv2.imread(img_path)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
if (not self.adjust_fn is None) & (not self.adjust_param is None):
img = self.adjust_fn(img, self.adjust_param)
if self.preprocessing:
img = self.preprocessing(img)
X[i,] = img
return X
def __generate_Y(self, list_IDs_batch):
Y = np.empty((self.batch_size, *self.target_size, self.n_classes), dtype=int)
for i, ID in enumerate(list_IDs_batch):
img_name = self.dataframe['image'].loc[ID]
image_df = self.target_df[self.target_df['image'] == img_name]
rles = image_df['EncodedPixels'].values
masks = build_masks(rles, input_shape=self.mask_shape, reshape=self.target_size)
Y[i, ] = masks
return Y
def __augment_batch(self, X_batch, Y_batch):
for i in range(X_batch.shape[0]):
X_batch[i, ], Y_batch[i, ] = self.__random_transform(X_batch[i, ], Y_batch[i, ])
return X_batch, Y_batch
def __random_transform(self, X, Y):
composed = self.augmentation(image=X, mask=Y)
X_aug = composed['image']
Y_aug = composed['mask']
return X_aug, Y_aug
seed = 0
seed_everything(seed)
warnings.filterwarnings("ignore")
base_path = '/content/drive/My Drive/Colab Notebooks/[Kaggle] Understanding Clouds from Satellite Images/'
data_path = base_path + 'Data/'
model_base_path = base_path + 'Models/files/classification/'
train_path = data_path + 'train.csv'
hold_out_set_path = data_path + 'hold-out.csv'
train_images_dest_path = 'train_images384x480/'
###Output
_____no_output_____
###Markdown
Load data
###Code
train = pd.read_csv(train_path)
hold_out_set = pd.read_csv(hold_out_set_path)
X_train = hold_out_set[hold_out_set['set'] == 'train']
X_val = hold_out_set[hold_out_set['set'] == 'validation']
print('Compete set samples:', len(train))
print('Train samples: ', len(X_train))
print('Validation samples: ', len(X_val))
# Preprocecss data
train['image'] = train['Image_Label'].apply(lambda x: x.split('_')[0])
display(X_train.head())
###Output
Compete set samples: 22184
Train samples: 4420
Validation samples: 1105
###Markdown
Model parameters
###Code
BATCH_SIZE = 64
EPOCHS = 30
LEARNING_RATE = 3e-4
HEIGHT = 224
WIDTH = 224
CHANNELS = 3
N_CLASSES = 4
ES_PATIENCE = 5
RLROP_PATIENCE = 2
DECAY_DROP = 0.2
model_name = '9-resnet50_%sx%s' % (HEIGHT, WIDTH)
model_path = model_base_path + '%s.h5' % (model_name)
###Output
_____no_output_____
###Markdown
Data generator
###Code
label_columns=['Fish', 'Flower', 'Gravel', 'Sugar']
datagen=ImageDataGenerator(rescale=1./255.,
vertical_flip=True,
horizontal_flip=True,
zoom_range=[1, 1.2],
fill_mode='constant',
cval=0.)
test_datagen=ImageDataGenerator(rescale=1./255.)
train_generator=datagen.flow_from_dataframe(
dataframe=X_train,
directory=train_images_dest_path,
x_col="image",
y_col=label_columns,
target_size=(HEIGHT, WIDTH),
batch_size=BATCH_SIZE,
class_mode="other",
shuffle=True,
seed=seed)
valid_generator=test_datagen.flow_from_dataframe(
dataframe=X_val,
directory=train_images_dest_path,
x_col="image",
y_col=label_columns,
target_size=(HEIGHT, WIDTH),
batch_size=BATCH_SIZE,
class_mode="other",
shuffle=True,
seed=seed)
###Output
Found 4420 validated image filenames.
Found 1105 validated image filenames.
###Markdown
Model
###Code
def create_model(input_shape, N_CLASSES):
input_tensor = Input(shape=input_shape)
base_model = applications.ResNet50(weights=None,
include_top=False,
input_tensor=input_tensor,
pooling='avg')
x = base_model.output
final_output = Dense(N_CLASSES, activation='sigmoid')(x)
model = Model(input_tensor, final_output)
return model
model = create_model((HEIGHT, WIDTH, CHANNELS), N_CLASSES)
checkpoint = ModelCheckpoint(model_path, monitor='val_loss', mode='min', save_best_only=True)
es = EarlyStopping(monitor='val_loss', mode='min', patience=ES_PATIENCE, restore_best_weights=True, verbose=1)
rlrop = ReduceLROnPlateau(monitor='val_loss', mode='min', patience=RLROP_PATIENCE, factor=DECAY_DROP, verbose=1)
metric_list = ['accuracy']
callback_list = [checkpoint, es, rlrop]
###Output
_____no_output_____
###Markdown
Warmup top layers / fine-tune all layers: here the ResNet50 backbone starts from random weights (weights=None), so all layers are trained directly in a single phase.
###Code
optimizer = RAdam(learning_rate=LEARNING_RATE, warmup_proportion=0.1)
model.compile(optimizer=optimizer, loss='binary_crossentropy', metrics=metric_list)
# model.summary()
STEP_SIZE_TRAIN = len(X_train)//BATCH_SIZE
STEP_SIZE_VALID = len(X_val)//BATCH_SIZE
history = model.fit_generator(generator=train_generator,
steps_per_epoch=STEP_SIZE_TRAIN,
validation_data=valid_generator,
validation_steps=STEP_SIZE_VALID,
callbacks=callback_list,
epochs=EPOCHS,
verbose=1).history
###Output
Epoch 1/30
69/69 [==============================] - 139s 2s/step - loss: 0.6756 - acc: 0.5753 - val_loss: 0.8147 - val_acc: 0.5524
Epoch 2/30
69/69 [==============================] - 112s 2s/step - loss: 0.6540 - acc: 0.6061 - val_loss: 1.1974 - val_acc: 0.4546
Epoch 3/30
69/69 [==============================] - 113s 2s/step - loss: 0.6478 - acc: 0.6110 - val_loss: 0.8534 - val_acc: 0.5298
Epoch 00003: ReduceLROnPlateau reducing learning rate to 6.000000284984708e-05.
Epoch 4/30
69/69 [==============================] - 112s 2s/step - loss: 0.6250 - acc: 0.6390 - val_loss: 0.7197 - val_acc: 0.5845
Epoch 5/30
69/69 [==============================] - 112s 2s/step - loss: 0.6233 - acc: 0.6418 - val_loss: 0.7219 - val_acc: 0.5817
Epoch 6/30
69/69 [==============================] - 113s 2s/step - loss: 0.6170 - acc: 0.6498 - val_loss: 0.9973 - val_acc: 0.5034
Epoch 00006: ReduceLROnPlateau reducing learning rate to 1.2000000424450263e-05.
Epoch 7/30
69/69 [==============================] - 114s 2s/step - loss: 0.6107 - acc: 0.6565 - val_loss: 0.6709 - val_acc: 0.5953
Epoch 8/30
69/69 [==============================] - 112s 2s/step - loss: 0.6035 - acc: 0.6645 - val_loss: 0.6682 - val_acc: 0.6073
Epoch 9/30
69/69 [==============================] - 113s 2s/step - loss: 0.6083 - acc: 0.6599 - val_loss: 0.6909 - val_acc: 0.5886
Epoch 10/30
69/69 [==============================] - 113s 2s/step - loss: 0.6064 - acc: 0.6636 - val_loss: 0.6693 - val_acc: 0.5997
Epoch 00010: ReduceLROnPlateau reducing learning rate to 2.4000000848900527e-06.
Epoch 11/30
69/69 [==============================] - 113s 2s/step - loss: 0.6084 - acc: 0.6596 - val_loss: 0.6767 - val_acc: 0.6028
Epoch 12/30
69/69 [==============================] - 114s 2s/step - loss: 0.6049 - acc: 0.6593 - val_loss: 0.6773 - val_acc: 0.6006
Epoch 00012: ReduceLROnPlateau reducing learning rate to 4.800000169780105e-07.
Epoch 13/30
69/69 [==============================] - 113s 2s/step - loss: 0.6101 - acc: 0.6572 - val_loss: 0.6755 - val_acc: 0.6102
Restoring model weights from the end of the best epoch
Epoch 00013: early stopping
###Markdown
Model loss graph
###Code
#@title
plot_metrics(history, metric_list=['loss', 'acc'])
###Output
_____no_output_____ |
workshop2021.ipynb | ###Markdown
Machine Translation for Translators Workshop
Localization summer school '21
Note before beginning:
- This coding template is based on Masakhane's starter notebook (https://github.com/masakhane-io/masakhane-mt)
- The idea is that you should be able to make minimal changes to it and still get SOME result for your own translation corpus.
- The TL;DR: go to the **"TODO"** comments, which tell you what to update to get up and running.
- If you actually want to have a clue what you're doing, read the text and peek at the links.
- With 100 epochs, it should take around 7 hours to run in Google Colab.
Retrieve your data & make a parallel corpus
In this workshop we will use an open corpus from the OPUS repository to train a translation model. We will first download the data, create training, development and testing sets from it, and then use JoeyNMT to train a baseline model. In the next cell, set the languages you want to work with and specify which corpus you want to train on. To select a corpus, go to https://opus.nlpl.eu/, enter your language pair and pick one that seems appropriate in size and domain.
###Code
# TODO: Set your source and target languages. Keep in mind, these traditionally use standard language codes (e.g. ISO codes such as en and tr).
# These will also become the suffixes of all vocab and corpus files used throughout
import os
source_language = "en"
target_language = "tr"
seed = 42 # Random seed for shuffling.
tag = "baseline" # Give a unique name to your folder - this is to ensure you don't rewrite any models you've already submitted
os.environ["src"] = source_language # Sets them in bash as well, since we often use bash scripts
os.environ["tgt"] = target_language
os.environ["tag"] = tag
# This will save it to a folder in our gdrive instead!
from google.colab import drive
drive.mount('/content/drive')
!mkdir -p "/content/drive/My Drive/mt-workshop/$src-$tgt-$tag"
os.environ["gdrive_path"] = "/content/drive/My Drive/mt-workshop/%s-%s-%s" % (source_language, target_language, tag)
!echo $gdrive_path
# Install opus-tools (Warning! This is not really python)
! pip install opustools-pkg
# TODO: Indicate here the ID of the corpus you want to use from OPUS
opus_corpus = "TED2020"
os.environ["corpus"] = opus_corpus
# Downloading our corpus
! opus_read -d $corpus -s $src -t $tgt -wm moses -w $corpus.$src $corpus.$tgt -q
# Extract the corpus file
! gunzip ${corpus}_latest_xml_$src-$tgt.xml.gz
# Read the corpus into python lists
source_file = opus_corpus + '.' + source_language
target_file = opus_corpus + '.' + target_language
src_all = [sentence.strip() for sentence in open(source_file).readlines()]
tgt_all = [sentence.strip() for sentence in open(target_file).readlines()]
# Let's take a peek at the files
print("Source size:", len(src_all))
print("Target size:", len(tgt_all))
print("--------")
peek_size = 5
for i in range(peek_size):
print("Sent #", i)
print("SRC:", src_all[i])
print("TGT:", tgt_all[i])
print("---------")
###Output
Source size: 374378
Target size: 374378
--------
Sent # 0
SRC: Thank you so much , Chris .
TGT: Çok teşekkür ederim Chris .
---------
Sent # 1
SRC: And it 's truly a great honor to have the opportunity to come to this stage twice ; I 'm extremely grateful .
TGT: Bu sahnede ikinci kez yer alma fırsatına sahip olmak gerçekten büyük bir onur . Çok minnettarım .
---------
Sent # 2
SRC: I have been blown away by this conference , and I want to thank all of you for the many nice comments about what I had to say the other night .
TGT: Bu konferansta çok mutlu oldum , ve anlattıklarımla ilgili güzel yorumlarınız için sizlere çok teşekkür ederim .
---------
Sent # 3
SRC: And I say that sincerely , partly because ( Mock sob ) I need that .
TGT: Bunu içtenlikle söylüyorum , çünkü ... ( Ağlama taklidi ) Buna ihtiyacım var .
---------
Sent # 4
SRC: ( Laughter ) Put yourselves in my position .
TGT: ( Kahkahalar ) Kendinizi benim yerime koyun !
---------
###Markdown
Making training, development and testing sets
We need to split our corpus into training, development and testing sets. The training set contains the sentences we teach our model on. The development set is used to see how the model is progressing during training. Finally, the testing set is used to evaluate the trained model. You can optionally load your own testing set instead.
###Code
# TODO: Determine ratios of each set
all_size = len(src_all)
dev_size = 1000
test_size = 1000
train_size = all_size - test_size - dev_size
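# Optional sketch (not applied here): shuffle the sentence pairs with the fixed seed before
# slicing below, if you would rather sample dev/test at random than take the last sentences.
# import random
# pairs = list(zip(src_all, tgt_all))
# random.Random(seed).shuffle(pairs)
# src_all, tgt_all = map(list, zip(*pairs))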
src_train = src_all[0:train_size]
tgt_train = tgt_all[0:train_size]
src_dev = src_all[train_size:train_size+dev_size]
tgt_dev = tgt_all[train_size:train_size+dev_size]
src_test = src_all[train_size+dev_size:all_size]
tgt_test = tgt_all[train_size+dev_size:all_size]
print("Set sizes")
print("All:", len(src_all))
print("Train:", len(src_train))
print("Dev:", len(src_dev))
print("Test:", len(src_test))
###Output
Set sizes
All: 374378
Train: 372378
Dev: 1000
Test: 1000
###Markdown
Preprocessing the Data into Subword BPE Tokens
- One of the most powerful improvements for neural machine translation is BPE tokenization [(Sennrich, 2015)](https://arxiv.org/abs/1508.07909).
- BPE tokenization caps the vocabulary at a chosen size by splitting words into smaller subword units.
- This is especially useful for agglutinative languages (like Turkish), where the word-level vocabulary is effectively endless.
- Below are the scripts for BPE-tokenizing our data. We use the bpemb library, which provides pre-trained BPE models, to convert our data into subwords.
###Code
! pip install bpemb
from bpemb import BPEmb
BPE_VOCAB_SIZE = 5000
bpemb_src = BPEmb(lang=source_language, vs=BPE_VOCAB_SIZE, segmentation_only=True, preprocess=False)
bpemb_tgt = BPEmb(lang=target_language, vs=BPE_VOCAB_SIZE, segmentation_only=True, preprocess=False)
# Testing BPE encoding
encoded_tokens = bpemb_src.encode("This is a test sentence to demonstrate how BPE encoding works for our source language.")
print(encoded_tokens)
encoded_string = " ".join(encoded_tokens)
print(encoded_string)
decoded_string = bpemb_src.decode(encoded_tokens)
print(decoded_string)
# Shortcut functions to encode and decode
def encode_bpe(string, lang, to_lower=True):
if to_lower:
string = string.lower()
if lang == source_language:
return " ".join(bpemb_src.encode(string))
elif lang == target_language:
return " ".join(bpemb_tgt.encode(string))
else:
return ""
def decode_bpe(string, lang):
tokens = string.strip().split()
if lang == source_language:
return bpemb_src.decode(tokens)
elif lang == target_language:
return bpemb_tgt.decode(tokens)
else:
return ""
# Let's encode all our sets with BPE
src_train_bpe = [encode_bpe(sentence, source_language) for sentence in src_train]
tgt_train_bpe = [encode_bpe(sentence, target_language) for sentence in tgt_train]
src_dev_bpe = [encode_bpe(sentence, source_language) for sentence in src_dev]
tgt_dev_bpe = [encode_bpe(sentence, target_language) for sentence in tgt_dev]
src_test_bpe = [encode_bpe(sentence, source_language) for sentence in src_test]
tgt_test_bpe = [encode_bpe(sentence, target_language) for sentence in tgt_test]
# Now let's write all our sets into separate files
with open("train."+source_language, "w") as src_file, open("train."+target_language, "w") as tgt_file:
for s, t in zip(src_train, tgt_train):
src_file.write(s+"\n")
tgt_file.write(t+"\n")
with open("dev."+source_language, "w") as src_file, open("dev."+target_language, "w") as tgt_file:
for s, t in zip(src_dev, tgt_dev):
src_file.write(s+"\n")
tgt_file.write(t+"\n")
with open("test."+source_language, "w") as src_file, open("test."+target_language, "w") as tgt_file:
for s, t in zip(src_test, tgt_test):
src_file.write(s+"\n")
tgt_file.write(t+"\n")
with open("train.bpe."+source_language, "w") as src_file, open("train.bpe."+target_language, "w") as tgt_file:
for s, t in zip(src_train_bpe, tgt_train_bpe):
src_file.write(s+"\n")
tgt_file.write(t+"\n")
with open("dev.bpe."+source_language, "w") as src_file, open("dev.bpe."+target_language, "w") as tgt_file:
for s, t in zip(src_dev_bpe, tgt_dev_bpe):
src_file.write(s+"\n")
tgt_file.write(t+"\n")
with open("test.bpe."+source_language, "w") as src_file, open("test.bpe."+target_language, "w") as tgt_file:
for s, t in zip(src_test_bpe, tgt_test_bpe):
src_file.write(s+"\n")
tgt_file.write(t+"\n")
# Doublecheck the files. There should be no extra quotation marks or weird characters.
! head -n5 train.*
! head -n5 dev.*
! head -n5 test.*
# If creating data for the first time, move all prepared data to the mounted location in google drive
! mkdir "$gdrive_path"/data
! cp train.* "$gdrive_path"/data
! cp test.* "$gdrive_path"/data
! cp dev.* "$gdrive_path"/data
! ls "$gdrive_path"/data #See the contents of the drive directory
# OR... If continuing from previous run, load files from drive
! cp "$gdrive_path"/data/dev.* .
! cp "$gdrive_path"/data/train.* .
! cp "$gdrive_path"/data/test.* .
###Output
_____no_output_____
###Markdown
---
Installation of JoeyNMT
JoeyNMT is a simple, minimalist NMT package which is useful for learning and teaching. Check out the documentation for JoeyNMT [here](https://joeynmt.readthedocs.io).
###Code
# Install JoeyNMT
! git clone https://github.com/joeynmt/joeynmt.git
! cd joeynmt; pip3 install .
# Install Pytorch with GPU support v1.7.1.
! pip install torch==1.8.0+cu101 -f https://download.pytorch.org/whl/torch_stable.html
#Move everything important under joeynmt directory
os.environ["data_path"] = os.path.join("joeynmt", "data", source_language + target_language)
# Create directory, move everything we care about to the correct location
! mkdir -p $data_path
! cp train.* $data_path
! cp test.* $data_path
! cp dev.* $data_path
! ls $data_path
# Create that vocab using build_vocab
! sudo chmod 777 joeynmt/scripts/build_vocab.py
! joeynmt/scripts/build_vocab.py joeynmt/data/$src$tgt/train.bpe.$src joeynmt/data/$src$tgt/train.bpe.$tgt --output_path joeynmt/data/$src$tgt/vocab.txt
# Some output
! echo "Combined BPE Vocab"
! head -n 10 joeynmt/data/$src$tgt/vocab.txt # Herman
# Backup vocab to drive
! cp joeynmt/data/$src$tgt/vocab.txt "$gdrive_path"/data
###Output
dev.bpe.en dev.en test.bpe.en test.en train.bpe.en train.en
dev.bpe.tr dev.tr test.bpe.tr test.tr train.bpe.tr train.tr
Combined BPE Vocab
774
531
883
6397
794
431
381
1414
761
548
###Markdown
Creating the JoeyNMT Config
JoeyNMT requires a yaml config. We provide a template below and have set a number of defaults that you may play with!
- We used the Transformer architecture
- We set the dropout reasonably high: 0.3 (recommended in [(Sennrich, 2019)](https://www.aclweb.org/anthology/P19-1021))
Things worth playing with:
- The batch size (also recommended to change for low-resourced languages)
- The number of epochs (we've set it at 30 just so it runs in about an hour, for testing purposes)
- The decoder options (beam_size, alpha)
- The evaluation metric (BLEU versus chrF)
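If you want to compute these metrics yourself on any list of translations (for example to compare BLEU with chrF on the same output), the sacrebleu package, which JoeyNMT also uses for its BLEU scores, works on plain-text hypotheses and references. A minimal sketch with made-up sentences (run `pip install sacrebleu` first if it is not already installed):
```
import sacrebleu
hyps = ["bu bir deneme cümlesidir ."]   # system translations (plain text, one per sentence)
refs = ["bu bir deneme cümlesi ."]      # reference translations in the same order
print("BLEU:", sacrebleu.corpus_bleu(hyps, [refs]).score)
print("chrF:", sacrebleu.corpus_chrf(hyps, [refs]).score)
```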
###Code
# This creates the config file for our JoeyNMT system. It might seem overwhelming so we've provided a couple of useful parameters you'll need to update
# (You can of course play with all the parameters if you'd like!)
name = '%s%s' % (source_language, target_language)
gdrive_path = os.environ["gdrive_path"]
# Create the config
config = """
name: "{name}_transformer"
data:
src: "{source_language}"
trg: "{target_language}"
train: "data/{name}/train.bpe"
dev: "data/{name}/dev.bpe"
test: "data/{name}/test.bpe"
level: "bpe"
lowercase: False
max_sent_length: 100
src_vocab: "data/{name}/vocab.txt"
trg_vocab: "data/{name}/vocab.txt"
testing:
beam_size: 5
alpha: 1.0
training:
#load_model: "models/entr_transformer/1000.ckpt"
load_model: "{gdrive_path}/models/{name}_transformer/1000.ckpt" # if uncommented, load a pre-trained model from this checkpoint
random_seed: 42
optimizer: "adam"
normalization: "tokens"
adam_betas: [0.9, 0.999]
scheduling: "plateau" # TODO: try switching from plateau to Noam scheduling
patience: 5 # For plateau: decrease learning rate by decrease_factor if validation score has not improved for this many validation rounds.
learning_rate_factor: 0.5 # factor for Noam scheduler (used with Transformer)
learning_rate_warmup: 1000 # warmup steps for Noam scheduler (used with Transformer)
decrease_factor: 0.7
loss: "crossentropy"
learning_rate: 0.0003
learning_rate_min: 0.00000001
weight_decay: 0.0
label_smoothing: 0.1
batch_size: 4096
batch_type: "token"
eval_batch_size: 3600
eval_batch_type: "token"
batch_multiplier: 1
early_stopping_metric: "ppl"
epochs: 30 # TODO: Decrease for when playing around and checking of working. Around 30 is sufficient to check if its working at all
validation_freq: 1000 # TODO: Set to at least once per epoch.
logging_freq: 100
eval_metric: "bleu"
model_dir: "models/{name}_transformer"
overwrite: True # TODO: Set to True if you want to overwrite possibly existing models.
shuffle: True
use_cuda: True
max_output_length: 100
print_valid_sents: [0, 1, 2, 3]
keep_last_ckpts: 3
model:
initializer: "xavier"
bias_initializer: "zeros"
init_gain: 1.0
embed_initializer: "xavier"
embed_init_gain: 1.0
tied_embeddings: True
tied_softmax: True
encoder:
type: "transformer"
num_layers: 6
num_heads: 4 # TODO: Increase to 8 for larger data.
embeddings:
embedding_dim: 256 # TODO: Increase to 512 for larger data.
scale: True
dropout: 0.2
# typically ff_size = 4 x hidden_size
hidden_size: 256 # TODO: Increase to 512 for larger data.
ff_size: 1024 # TODO: Increase to 2048 for larger data.
dropout: 0.3
decoder:
type: "transformer"
num_layers: 6
num_heads: 4 # TODO: Increase to 8 for larger data.
embeddings:
embedding_dim: 256 # TODO: Increase to 512 for larger data.
scale: True
dropout: 0.2
# typically ff_size = 4 x hidden_size
hidden_size: 256 # TODO: Increase to 512 for larger data.
ff_size: 1024 # TODO: Increase to 2048 for larger data.
dropout: 0.3
""".format(name=name, gdrive_path=os.environ["gdrive_path"], source_language=source_language, target_language=target_language)
with open("joeynmt/configs/transformer_{name}.yaml".format(name=name),'w') as f:
f.write(config)
###Output
_____no_output_____
###Markdown
Train the Model
This single line of joeynmt runs the training using the config we made above.
###Code
# Train the model
# You can press Ctrl-C to stop. And then run the next cell to save your checkpoints!
!cd joeynmt; python3 -m joeynmt train configs/transformer_$src$tgt.yaml
# Copy the created models from the notebook storage to google drive for persistant storage
! mkdir -p "$gdrive_path/models/${src}${tgt}_transformer/"
! cp -r joeynmt/models/${src}${tgt}_transformer/* "$gdrive_path/models/${src}${tgt}_transformer/"
# OR... If continuing from previous work, load models from google drive to notebook storage
! mkdir -p joeynmt/models/${src}${tgt}_transformer
! cp -r "$gdrive_path/models/${src}${tgt}_transformer" joeynmt/models
# Output our validation accuracy
! cat "$gdrive_path/models/${src}${tgt}_transformer/validations.txt"
# Test our model
! cd joeynmt; python3 -m joeynmt test "$gdrive_path/models/${src}${tgt}_transformer/config.yaml"
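# Optional sketch (not run here): JoeyNMT also has a `translate` mode that reads source sentences
# from stdin. Remember this model expects lowercased, BPE-encoded input (see encode_bpe above) and
# produces BPE tokens that need decode_bpe to become plain text. `my_input.bpe.en` is a hypothetical
# file of BPE-encoded source sentences.
# ! cd joeynmt; python3 -m joeynmt translate "$gdrive_path/models/${src}${tgt}_transformer/config.yaml" < ../my_input.bpe.en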
###Output
2021-07-28 16:28:56,802 - INFO - root - Hello! This is Joey-NMT (version 1.3).
2021-07-28 16:28:56,802 - INFO - joeynmt.data - Building vocabulary...
2021-07-28 16:28:57,990 - INFO - joeynmt.data - Loading dev data...
2021-07-28 16:28:58,003 - INFO - joeynmt.data - Loading test data...
2021-07-28 16:28:58,014 - INFO - joeynmt.data - Data loaded.
2021-07-28 16:28:58,045 - INFO - joeynmt.prediction - Process device: cuda, n_gpu: 1, batch_size per device: 18000 (with beam_size)
2021-07-28 16:29:01,412 - INFO - joeynmt.model - Building an encoder-decoder model...
2021-07-28 16:29:01,642 - INFO - joeynmt.model - Enc-dec model built.
2021-07-28 16:29:01,712 - INFO - joeynmt.prediction - Decoding on dev set (data/entr/dev.bpe.tr)...
2021-07-28 16:30:08,794 - INFO - joeynmt.prediction - dev bleu[13a]: 0.97 [Beam search decoding with beam size = 5 and alpha = 1.0]
2021-07-28 16:30:08,795 - INFO - joeynmt.prediction - Decoding on test set (data/entr/test.bpe.tr)...
2021-07-28 16:31:30,255 - INFO - joeynmt.prediction - test bleu[13a]: 0.75 [Beam search decoding with beam size = 5 and alpha = 1.0]
###Markdown
Fine-tuning to a domain
One important technique in neural machine translation is in-domain adaptation, or fine-tuning. It introduces the model to a specific domain we want to translate in. One simple way of doing this is to take a pre-trained model and continue training it on an in-domain training set. In this example we're going to fine-tune our model to the news domain.
###Code
fine_corpus = "WMT-News"
os.environ["fine"] = fine_corpus
# Downloading our corpus
! opus_read -d $fine -s $src -t $tgt -wm moses -w $fine.$src $fine.$tgt -q
# Extract the corpus file
! gunzip ${fine}_latest_xml_$src-$tgt.xml.gz
# Read the corpus into python lists
source_file = fine_corpus + '.' + source_language
target_file = fine_corpus + '.' + target_language
fine_src_all_bpe = [encode_bpe(sentence.strip(), source_language) for sentence in open(source_file).readlines()]
fine_tgt_all_bpe = [encode_bpe(sentence.strip(), target_language) for sentence in open(target_file).readlines()]
# Let's take a peek at the files
print("Source size:", len(fine_src_all_bpe))
print("Target size:", len(fine_tgt_all_bpe))
print("--------")
peek_size = 5
for i in range(peek_size):
print("Sent #", i)
    print("SRC:", decode_bpe(fine_src_all_bpe[i], source_language))
    print("TGT:", decode_bpe(fine_tgt_all_bpe[i], target_language))
print("---------")
# Allocate train, dev, test portions
all_size = len(fine_src_all_bpe)
dev_size = 500
test_size = 500
train_size = all_size - test_size - dev_size
fine_src_train_bpe = fine_src_all_bpe[0:train_size]
fine_tgt_train_bpe = fine_tgt_all_bpe[0:train_size]
fine_src_dev_bpe = fine_src_all_bpe[train_size:train_size+dev_size]
fine_tgt_dev_bpe = fine_tgt_all_bpe[train_size:train_size+dev_size]
fine_src_test_bpe = fine_src_all_bpe[train_size+dev_size:all_size]
fine_tgt_test_bpe = fine_tgt_all_bpe[train_size+dev_size:all_size]
print("Set sizes")
print("All:", len(fine_src_all_bpe))
print("Train:", len(fine_src_train_bpe))
print("Dev:", len(fine_src_dev_bpe))
print("Test:", len(fine_src_test_bpe))
# Store sentences as files
with open("finetrain.bpe."+source_language, "w") as src_file, open("finetrain.bpe."+target_language, "w") as tgt_file:
for s, t in zip(fine_src_train_bpe, fine_tgt_train_bpe):
src_file.write(s+"\n")
tgt_file.write(t+"\n")
with open("finedev.bpe."+source_language, "w") as src_file, open("finedev.bpe."+target_language, "w") as tgt_file:
for s, t in zip(fine_src_dev_bpe, fine_tgt_dev_bpe):
src_file.write(s+"\n")
tgt_file.write(t+"\n")
with open("finetest.bpe."+source_language, "w") as src_file, open("finetest.bpe."+target_language, "w") as tgt_file:
for s, t in zip(fine_src_test_bpe, fine_tgt_test_bpe):
src_file.write(s+"\n")
tgt_file.write(t+"\n")
# If creating data for the first time, move all prepared data to the mounted location in google drive
! mkdir -p "$gdrive_path"/data
! cp finetrain.* "$gdrive_path"/data
! cp finetest.* "$gdrive_path"/data
! cp finedev.* "$gdrive_path"/data
! ls "$gdrive_path"/data #See the contents of the drive directory
# OR... If continuing from previous run, load finetuning data from drive
! cp "$gdrive_path"/data/finedev.* .
! cp "$gdrive_path"/data/finetrain.* .
! cp "$gdrive_path"/data/finetest.* .
! cp "$gdrive_path"/data/finetest.* .
# Move everything important under the joeynmt directory
os.environ["data_path"] = os.path.join("joeynmt", "data", source_language + target_language)
# Move fine-tuning data to data directory
! mkdir -p $data_path
! cp finetrain.* $data_path
! cp finetest.* $data_path
! cp finedev.* $data_path
! ls $data_path
# Also, load models from google drive to notebook storage
! mkdir -p joeynmt/models/${src}${tgt}_transformer
! cp -r "$gdrive_path/models/${src}${tgt}_transformer" joeynmt/models
# Let's create a config file for finetuning training
# Changes from previous config are dataset names, model name, batch size and learning rate
name = '%s%s' % (source_language, target_language)
gdrive_path = os.environ["gdrive_path"]
# Create the config
config = """
name: "{name}_transformer"
data:
src: "{source_language}"
trg: "{target_language}"
train: "data/{name}/finetrain.bpe"
dev: "data/{name}/finedev.bpe"
test: "data/{name}/finetest.bpe"
level: "bpe"
lowercase: False
max_sent_length: 100
src_vocab: "data/{name}/vocab.txt"
trg_vocab: "data/{name}/vocab.txt"
testing:
beam_size: 5
alpha: 1.0
training:
load_model: "{gdrive_path}/models/{name}_transformer/best.ckpt" # Load base model from its best scoring checkpoint (from gdrive)
#load_model: "models/{name}_transformer/best.ckpt" # Load base model from its best scoring checkpoint (from local)
random_seed: 42
optimizer: "adam"
normalization: "tokens"
adam_betas: [0.9, 0.999]
scheduling: "plateau" # TODO: try switching from plateau to Noam scheduling
patience: 5 # For plateau: decrease learning rate by decrease_factor if validation score has not improved for this many validation rounds.
learning_rate_factor: 0.5 # factor for Noam scheduler (used with Transformer)
learning_rate_warmup: 1000 # warmup steps for Noam scheduler (used with Transformer)
decrease_factor: 0.7
loss: "crossentropy"
learning_rate: 0.0001
learning_rate_min: 0.00000001
weight_decay: 0.0
label_smoothing: 0.1
batch_size: 1028
batch_type: "token"
eval_batch_size: 3600
eval_batch_type: "token"
batch_multiplier: 1
early_stopping_metric: "ppl"
epochs: 30 # TODO: Decrease for when playing around and checking of working. Around 30 is sufficient to check if its working at all
validation_freq: 1000 # TODO: Set to at least once per epoch.
logging_freq: 100
eval_metric: "bleu"
    model_dir: "models/{name}_transformer_finetune"   # save the fine-tuned model to its own directory (matches the copy commands below)
overwrite: True # TODO: Set to True if you want to overwrite possibly existing models.
shuffle: True
use_cuda: True
max_output_length: 100
print_valid_sents: [0, 1, 2, 3]
keep_last_ckpts: 3
model:
initializer: "xavier"
bias_initializer: "zeros"
init_gain: 1.0
embed_initializer: "xavier"
embed_init_gain: 1.0
tied_embeddings: True
tied_softmax: True
encoder:
type: "transformer"
num_layers: 6
num_heads: 4 # TODO: Increase to 8 for larger data.
embeddings:
embedding_dim: 256 # TODO: Increase to 512 for larger data.
scale: True
dropout: 0.2
# typically ff_size = 4 x hidden_size
hidden_size: 256 # TODO: Increase to 512 for larger data.
ff_size: 1024 # TODO: Increase to 2048 for larger data.
dropout: 0.3
decoder:
type: "transformer"
num_layers: 6
num_heads: 4 # TODO: Increase to 8 for larger data.
embeddings:
embedding_dim: 256 # TODO: Increase to 512 for larger data.
scale: True
dropout: 0.2
# typically ff_size = 4 x hidden_size
hidden_size: 256 # TODO: Increase to 512 for larger data.
ff_size: 1024 # TODO: Increase to 2048 for larger data.
dropout: 0.3
""".format(name=name, gdrive_path=os.environ["gdrive_path"], source_language=source_language, target_language=target_language)
with open("joeynmt/configs/transformer_{name}_finetune.yaml".format(name=name),'w') as f:
f.write(config)
# Test our model on our domain before fine-tuning
! cd joeynmt; python3 -m joeynmt test "configs/transformer_${src}${tgt}_finetune.yaml"
# Train to our domain
# You can press Ctrl-C to stop. And then run the next cell to save your checkpoints!
! cd joeynmt; python3 -m joeynmt train "configs/transformer_${src}${tgt}_finetune.yaml"
# Copy the created models from the notebook storage to google drive for persistent storage
! mkdir -p "$gdrive_path/models/${src}${tgt}_transformer_finetune/"
! cp -r joeynmt/models/${src}${tgt}_transformer_finetune/* "$gdrive_path/models/${src}${tgt}_transformer_finetune/"
# Test again to see how our model improved
#! cd joeynmt; python3 -m joeynmt test "configs/transformer_${src}${tgt}_finetune.yaml"
! cd joeynmt; python3 -m joeynmt test "$gdrive_path/models/${src}${tgt}_transformer_finetune/config.yaml"
###Output
_____no_output_____ |
EdX/IBM DL0110EN - Deep Learning with Python and PyTorch/2.1Prediction1Dregression_v2.ipynb | ###Markdown
Linear Regression 1D: Prediction In this lab, we will review how to make a prediction in several different ways by using PyTorch. Table of Contents: Prediction; Class Linear; Build Custom Modules. Estimated Time Needed: 15 min. Preparation: The following are the libraries we are going to use for this lab.
###Code
# These are the libraries will be used for this lab.
import torch
###Output
_____no_output_____
###Markdown
Prediction Let us create the following expression with $b=-1$ and $w=2$: $\hat{y}=-1+2x$ First, define the parameters:
###Code
# Define w = 2 and b = -1 for y = wx + b
w = torch.tensor(2.0, requires_grad = True)
b = torch.tensor(-1.0, requires_grad = True)
###Output
_____no_output_____
###Markdown
Then, define the function forward(x), which makes the prediction:
###Code
# Function forward(x) for prediction
def forward(x):
yhat = w * x + b
return yhat
###Output
_____no_output_____
###Markdown
Let's make the following prediction at $x = 1$: $\hat{y}=-1+2x=-1+2(1)=1$
###Code
# Predict y = 2x - 1 at x = 1
x = torch.tensor([[1.0]])
yhat = forward(x)
print("The prediction: ", yhat)
###Output
The prediction: tensor([[1.]], grad_fn=<ThAddBackward>)
###Markdown
Now, let us try to make the prediction for multiple inputs: Let us construct the x tensor first. Check the shape of x.
###Code
# Create x Tensor and check the shape of x tensor
x = torch.tensor([[1.0], [2.0]])
print("The shape of x: ", x.shape)
###Output
The shape of x: torch.Size([2, 1])
###Markdown
Now make the prediction:
###Code
# Make the prediction of y = 2x - 1 at x = [1, 2]
yhat = forward(x)
print("The prediction: ", yhat)
###Output
The prediction: tensor([[1.],
[3.]], grad_fn=<ThAddBackward>)
###Markdown
The result matches the expression $\hat{y}=-1+2x$ applied to each element of x. Practice Make a prediction of the following x tensor using the w and b from above.
###Code
# Practice: Make a prediction of y = 2x - 1 at x = [[1.0], [2.0], [3.0]]
x = torch.tensor([[1.0], [2.0], [3.0]])
yhat = forward(x)
print("The prediction: ", yhat)
###Output
The prediction: tensor([[1.],
[3.],
[5.]], grad_fn=<ThAddBackward>)
###Markdown
Class Linear The linear class can be used to make a prediction. We can also use the linear class to build more complex models. Let's import the module:
###Code
# Import Class Linear
from torch.nn import Linear
###Output
_____no_output_____
###Markdown
Set the random seed because the parameters are randomly initialized:
###Code
# Set random seed
torch.manual_seed(1)
###Output
_____no_output_____
###Markdown
Let us create the linear object by using the constructor. The parameters are randomly created. Let us print out to see what w and b are.
###Code
# Create Linear Regression Model, and print out the parameters
lr = Linear(in_features=1, out_features=1, bias=True)
print("Parameters w and b: ", list(lr.parameters()))
###Output
Parameters w and b: [Parameter containing:
tensor([[0.5153]], requires_grad=True), Parameter containing:
tensor([-0.4414], requires_grad=True)]
###Markdown
This is equivalent to the following expression with $b=-0.44$ and $w=0.5153$: $\hat{y}=-0.44+0.5153x$ Now let us make a single prediction at x = [[1.0]].
###Code
# Make the prediction at x = [[1.0]]
x = torch.tensor([[1.0]])
yhat = lr(x)
print("The prediction: ", yhat)
###Output
The prediction: tensor([[0.0739]], grad_fn=<ThAddmmBackward>)
###Markdown
Similarly, you can make multiple predictions: Use model lr(x) to predict the result.
###Code
# Create the prediction using linear model
x = torch.tensor([[1.0], [2.0]])
yhat = lr(x)
print("The prediction: ", yhat)
###Output
The prediction: tensor([[0.0739],
[0.5891]], grad_fn=<ThAddmmBackward>)
###Markdown
Practice Make a prediction of the following x tensor using the linear regression model lr.
###Code
# Practice: Use the linear regression model object lr to make the prediction.
x = torch.tensor([[1.0],[2.0],[3.0]])
yhat = lr(x)
print("The prediction: ", yhat)
###Output
The prediction: tensor([[0.0739],
[0.5891],
[1.1044]], grad_fn=<ThAddmmBackward>)
###Markdown
Build Custom Modules Now, let's build a custom module. We can make more complex models by using this method later on. First, import the following library.
###Code
# Library for this section
from torch import nn
###Output
_____no_output_____
###Markdown
Now, let us define the class:
###Code
# Customize Linear Regression Class
class LR(nn.Module):
# Constructor
def __init__(self, input_size, output_size):
# Inherit from parent
super(LR, self).__init__()
self.linear = nn.Linear(input_size, output_size)
# Prediction function
def forward(self, x):
out = self.linear(x)
return out
###Output
_____no_output_____
###Markdown
Create an object by using the constructor. Print out the parameters we get and the model.
###Code
# Create the linear regression model. Print out the parameters.
lr = LR(1, 1)
print("The parameters: ", list(lr.parameters()))
print("Linear model: ", lr.linear)
###Output
The parameters: [Parameter containing:
tensor([[-0.1939]], requires_grad=True), Parameter containing:
tensor([0.4694], requires_grad=True)]
Linear model: Linear(in_features=1, out_features=1, bias=True)
###Markdown
Let us try to make a prediction of a single input sample.
###Code
# Try our customize linear regression model with single input
x = torch.tensor([[1.0]])
yhat = lr(x)
print("The prediction: ", yhat)
###Output
The prediction: tensor([[0.2755]], grad_fn=<ThAddmmBackward>)
###Markdown
Now, let us try another example with multiple samples.
###Code
# Try our customize linear regression model with multiple input
x = torch.tensor([[1.0], [2.0]])
yhat = lr(x)
print("The prediction: ", yhat)
###Output
The prediction: tensor([[0.2755],
[0.0816]], grad_fn=<ThAddmmBackward>)
###Markdown
Practice Create an object lr1 from the class we created before and make a prediction by using the following tensor:
###Code
# Practice: Use the LR class to create a model and make a prediction of the following tensor.
x = torch.tensor([[1.0], [2.0], [3.0]])
lr1 = LR(1, 1)
yhat = lr1(x)
yhat
###Output
_____no_output_____ |
Chapter 2 - End-to-End Machine Learning Project/02_end_to_end_machine_learning_project.ipynb | ###Markdown
**Chapter 2 – End-to-end Machine Learning project***Welcome to Machine Learning Housing Corp.! Your task is to predict median house values in Californian districts, given a number of features from these districts.**This notebook contains all the sample code and solutions to the exercises in chapter 2.* Setup First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20.
###Code
# Python ≥3.5 is required
import sys
assert sys.version_info >= (3, 5)
# Scikit-Learn ≥0.20 is required
import sklearn
assert sklearn.__version__ >= "0.20"
# Common imports
import numpy as np
import os
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "end_to_end_project"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
# Ignore useless warnings (see SciPy issue #5998)
import warnings
warnings.filterwarnings(action="ignore", message="^internal gelsd")
###Output
_____no_output_____
###Markdown
Get the data
###Code
import os
import tarfile
import urllib.request
DOWNLOAD_ROOT = "https://raw.githubusercontent.com/ageron/handson-ml2/master/"
HOUSING_PATH = os.path.join("datasets", "housing")
HOUSING_URL = DOWNLOAD_ROOT + "datasets/housing/housing.tgz"
def fetch_housing_data(housing_url=HOUSING_URL, housing_path=HOUSING_PATH):
if not os.path.isdir(housing_path):
os.makedirs(housing_path)
tgz_path = os.path.join(housing_path, "housing.tgz")
urllib.request.urlretrieve(housing_url, tgz_path)
housing_tgz = tarfile.open(tgz_path)
housing_tgz.extractall(path=housing_path)
housing_tgz.close()
fetch_housing_data()
import pandas as pd
def load_housing_data(housing_path=HOUSING_PATH):
csv_path = os.path.join(housing_path, "housing.csv")
return pd.read_csv(csv_path)
housing = load_housing_data()
housing.head()
housing.info()
housing["ocean_proximity"].value_counts()
housing.describe()
%matplotlib inline
import matplotlib.pyplot as plt
housing.hist(bins=50, figsize=(20,15))
save_fig("attribute_histogram_plots")
plt.show()
# to make this notebook's output identical at every run
np.random.seed(42)
import numpy as np
# For illustration only. Sklearn has train_test_split()
def split_train_test(data, test_ratio):
shuffled_indices = np.random.permutation(len(data))
test_set_size = int(len(data) * test_ratio)
test_indices = shuffled_indices[:test_set_size]
train_indices = shuffled_indices[test_set_size:]
return data.iloc[train_indices], data.iloc[test_indices]
train_set, test_set = split_train_test(housing, 0.2)
len(train_set)
len(test_set)
from zlib import crc32
def test_set_check(identifier, test_ratio):
return crc32(np.int64(identifier)) & 0xffffffff < test_ratio * 2**32
def split_train_test_by_id(data, test_ratio, id_column):
ids = data[id_column]
in_test_set = ids.apply(lambda id_: test_set_check(id_, test_ratio))
return data.loc[~in_test_set], data.loc[in_test_set]
###Output
_____no_output_____
###Markdown
The implementation of `test_set_check()` above works fine in both Python 2 and Python 3. In earlier releases, the following implementation was proposed, which supported any hash function, but was much slower and did not support Python 2:
###Code
import hashlib
def test_set_check(identifier, test_ratio, hash=hashlib.md5):
return hash(np.int64(identifier)).digest()[-1] < 256 * test_ratio
###Output
_____no_output_____
###Markdown
If you want an implementation that supports any hash function and is compatible with both Python 2 and Python 3, here is one:
###Code
def test_set_check(identifier, test_ratio, hash=hashlib.md5):
return bytearray(hash(np.int64(identifier)).digest())[-1] < 256 * test_ratio
housing_with_id = housing.reset_index() # adds an `index` column
train_set, test_set = split_train_test_by_id(housing_with_id, 0.2, "index")
housing_with_id["id"] = housing["longitude"] * 1000 + housing["latitude"]
train_set, test_set = split_train_test_by_id(housing_with_id, 0.2, "id")
test_set.head()
from sklearn.model_selection import train_test_split
train_set, test_set = train_test_split(housing, test_size=0.2, random_state=42)
test_set.head()
housing["median_income"].hist()
housing["income_cat"] = pd.cut(housing["median_income"],
bins=[0., 1.5, 3.0, 4.5, 6., np.inf],
labels=[1, 2, 3, 4, 5])
housing["income_cat"].value_counts()
housing["income_cat"].hist()
from sklearn.model_selection import StratifiedShuffleSplit
split = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
for train_index, test_index in split.split(housing, housing["income_cat"]):
strat_train_set = housing.loc[train_index]
strat_test_set = housing.loc[test_index]
strat_test_set["income_cat"].value_counts() / len(strat_test_set)
housing["income_cat"].value_counts() / len(housing)
def income_cat_proportions(data):
return data["income_cat"].value_counts() / len(data)
train_set, test_set = train_test_split(housing, test_size=0.2, random_state=42)
compare_props = pd.DataFrame({
"Overall": income_cat_proportions(housing),
"Stratified": income_cat_proportions(strat_test_set),
"Random": income_cat_proportions(test_set),
}).sort_index()
compare_props["Rand. %error"] = 100 * compare_props["Random"] / compare_props["Overall"] - 100
compare_props["Strat. %error"] = 100 * compare_props["Stratified"] / compare_props["Overall"] - 100
compare_props
for set_ in (strat_train_set, strat_test_set):
set_.drop("income_cat", axis=1, inplace=True)
###Output
_____no_output_____
###Markdown
Discover and visualize the data to gain insights
###Code
housing = strat_train_set.copy()
housing.plot(kind="scatter", x="longitude", y="latitude")
save_fig("bad_visualization_plot")
housing.plot(kind="scatter", x="longitude", y="latitude", alpha=0.1)
save_fig("better_visualization_plot")
###Output
_____no_output_____
###Markdown
The argument `sharex=False` fixes a display bug (the x-axis values and legend were not displayed). This is a temporary fix (see: https://github.com/pandas-dev/pandas/issues/10611 ). Thanks to Wilmer Arellano for pointing it out.
###Code
housing.plot(kind="scatter", x="longitude", y="latitude", alpha=0.4,
s=housing["population"]/100, label="population", figsize=(10,7),
c="median_house_value", cmap=plt.get_cmap("jet"), colorbar=True,
sharex=False)
plt.legend()
save_fig("housing_prices_scatterplot")
# Download the California image
images_path = os.path.join(PROJECT_ROOT_DIR, "images", "end_to_end_project")
os.makedirs(images_path, exist_ok=True)
DOWNLOAD_ROOT = "https://raw.githubusercontent.com/ageron/handson-ml2/master/"
filename = "california.png"
print("Downloading", filename)
url = DOWNLOAD_ROOT + "images/end_to_end_project/" + filename
urllib.request.urlretrieve(url, os.path.join(images_path, filename))
import matplotlib.image as mpimg
california_img=mpimg.imread(os.path.join(images_path, filename))
ax = housing.plot(kind="scatter", x="longitude", y="latitude", figsize=(10,7),
s=housing['population']/100, label="Population",
c="median_house_value", cmap=plt.get_cmap("jet"),
colorbar=False, alpha=0.4,
)
plt.imshow(california_img, extent=[-124.55, -113.80, 32.45, 42.05], alpha=0.5,
cmap=plt.get_cmap("jet"))
plt.ylabel("Latitude", fontsize=14)
plt.xlabel("Longitude", fontsize=14)
prices = housing["median_house_value"]
tick_values = np.linspace(prices.min(), prices.max(), 11)
cbar = plt.colorbar()
cbar.ax.set_yticklabels(["$%dk"%(round(v/1000)) for v in tick_values], fontsize=14)
cbar.set_label('Median House Value', fontsize=16)
plt.legend(fontsize=16)
save_fig("california_housing_prices_plot")
plt.show()
corr_matrix = housing.corr()
corr_matrix["median_house_value"].sort_values(ascending=False)
# from pandas.tools.plotting import scatter_matrix # For older versions of Pandas
from pandas.plotting import scatter_matrix
attributes = ["median_house_value", "median_income", "total_rooms",
"housing_median_age"]
scatter_matrix(housing[attributes], figsize=(12, 8))
save_fig("scatter_matrix_plot")
housing.plot(kind="scatter", x="median_income", y="median_house_value",
alpha=0.1)
plt.axis([0, 16, 0, 550000])
save_fig("income_vs_house_value_scatterplot")
housing["rooms_per_household"] = housing["total_rooms"]/housing["households"]
housing["bedrooms_per_room"] = housing["total_bedrooms"]/housing["total_rooms"]
housing["population_per_household"]=housing["population"]/housing["households"]
corr_matrix = housing.corr()
corr_matrix["median_house_value"].sort_values(ascending=False)
housing.plot(kind="scatter", x="rooms_per_household", y="median_house_value",
alpha=0.2)
plt.axis([0, 5, 0, 520000])
plt.show()
housing.describe()
###Output
_____no_output_____
###Markdown
Prepare the data for Machine Learning algorithms
###Code
housing = strat_train_set.drop("median_house_value", axis=1) # drop labels for training set
housing_labels = strat_train_set["median_house_value"].copy()
sample_incomplete_rows = housing[housing.isnull().any(axis=1)].head()
sample_incomplete_rows
sample_incomplete_rows.dropna(subset=["total_bedrooms"]) # option 1
sample_incomplete_rows.drop("total_bedrooms", axis=1) # option 2
median = housing["total_bedrooms"].median()
sample_incomplete_rows["total_bedrooms"].fillna(median, inplace=True) # option 3
sample_incomplete_rows
from sklearn.impute import SimpleImputer
imputer = SimpleImputer(strategy="median")
###Output
_____no_output_____
###Markdown
Remove the text attribute because median can only be calculated on numerical attributes:
###Code
housing_num = housing.drop("ocean_proximity", axis=1)
# alternatively: housing_num = housing.select_dtypes(include=[np.number])
imputer.fit(housing_num)
imputer.statistics_
###Output
_____no_output_____
###Markdown
Check that this is the same as manually computing the median of each attribute:
###Code
housing_num.median().values
###Output
_____no_output_____
###Markdown
Transform the training set:
###Code
X = imputer.transform(housing_num)
housing_tr = pd.DataFrame(X, columns=housing_num.columns,
index=housing.index)
housing_tr.loc[sample_incomplete_rows.index.values]
imputer.strategy
housing_tr = pd.DataFrame(X, columns=housing_num.columns,
index=housing_num.index)
housing_tr.head()
###Output
_____no_output_____
###Markdown
Now let's preprocess the categorical input feature, `ocean_proximity`:
###Code
housing_cat = housing[["ocean_proximity"]]
housing_cat.head(10)
from sklearn.preprocessing import OrdinalEncoder
ordinal_encoder = OrdinalEncoder()
housing_cat_encoded = ordinal_encoder.fit_transform(housing_cat)
housing_cat_encoded[:10]
ordinal_encoder.categories_
from sklearn.preprocessing import OneHotEncoder
cat_encoder = OneHotEncoder()
housing_cat_1hot = cat_encoder.fit_transform(housing_cat)
housing_cat_1hot
###Output
_____no_output_____
###Markdown
By default, the `OneHotEncoder` class returns a sparse array, but we can convert it to a dense array if needed by calling the `toarray()` method:
###Code
housing_cat_1hot.toarray()
###Output
_____no_output_____
###Markdown
Alternatively, you can set `sparse=False` when creating the `OneHotEncoder`:
###Code
cat_encoder = OneHotEncoder(sparse=False)
housing_cat_1hot = cat_encoder.fit_transform(housing_cat)
housing_cat_1hot
cat_encoder.categories_
###Output
_____no_output_____
###Markdown
Let's create a custom transformer to add extra attributes:
###Code
from sklearn.base import BaseEstimator, TransformerMixin
# column index
rooms_ix, bedrooms_ix, population_ix, households_ix = 3, 4, 5, 6
class CombinedAttributesAdder(BaseEstimator, TransformerMixin):
def __init__(self, add_bedrooms_per_room = True): # no *args or **kargs
self.add_bedrooms_per_room = add_bedrooms_per_room
def fit(self, X, y=None):
return self # nothing else to do
def transform(self, X):
rooms_per_household = X[:, rooms_ix] / X[:, households_ix]
population_per_household = X[:, population_ix] / X[:, households_ix]
if self.add_bedrooms_per_room:
bedrooms_per_room = X[:, bedrooms_ix] / X[:, rooms_ix]
return np.c_[X, rooms_per_household, population_per_household,
bedrooms_per_room]
else:
return np.c_[X, rooms_per_household, population_per_household]
attr_adder = CombinedAttributesAdder(add_bedrooms_per_room=False)
housing_extra_attribs = attr_adder.transform(housing.values)
housing_extra_attribs = pd.DataFrame(
housing_extra_attribs,
columns=list(housing.columns)+["rooms_per_household", "population_per_household"],
index=housing.index)
housing_extra_attribs.head()
###Output
_____no_output_____
###Markdown
Now let's build a pipeline for preprocessing the numerical attributes:
###Code
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
num_pipeline = Pipeline([
('imputer', SimpleImputer(strategy="median")),
('attribs_adder', CombinedAttributesAdder()),
('std_scaler', StandardScaler()),
])
housing_num_tr = num_pipeline.fit_transform(housing_num)
housing_num_tr
from sklearn.compose import ColumnTransformer
num_attribs = list(housing_num)
cat_attribs = ["ocean_proximity"]
full_pipeline = ColumnTransformer([
("num", num_pipeline, num_attribs),
("cat", OneHotEncoder(), cat_attribs),
])
housing_prepared = full_pipeline.fit_transform(housing)
housing_prepared
housing_prepared.shape
###Output
_____no_output_____
###Markdown
For reference, here is the old solution based on a `DataFrameSelector` transformer (to just select a subset of the Pandas `DataFrame` columns), and a `FeatureUnion`:
###Code
from sklearn.base import BaseEstimator, TransformerMixin
# Create a class to select numerical or categorical columns
class OldDataFrameSelector(BaseEstimator, TransformerMixin):
def __init__(self, attribute_names):
self.attribute_names = attribute_names
def fit(self, X, y=None):
return self
def transform(self, X):
return X[self.attribute_names].values
###Output
_____no_output_____
###Markdown
Now let's join all these components into a big pipeline that will preprocess both the numerical and the categorical features:
###Code
num_attribs = list(housing_num)
cat_attribs = ["ocean_proximity"]
old_num_pipeline = Pipeline([
('selector', OldDataFrameSelector(num_attribs)),
('imputer', SimpleImputer(strategy="median")),
('attribs_adder', CombinedAttributesAdder()),
('std_scaler', StandardScaler()),
])
old_cat_pipeline = Pipeline([
('selector', OldDataFrameSelector(cat_attribs)),
('cat_encoder', OneHotEncoder(sparse=False)),
])
from sklearn.pipeline import FeatureUnion
old_full_pipeline = FeatureUnion(transformer_list=[
("num_pipeline", old_num_pipeline),
("cat_pipeline", old_cat_pipeline),
])
old_housing_prepared = old_full_pipeline.fit_transform(housing)
old_housing_prepared
###Output
_____no_output_____
###Markdown
The result is the same as with the `ColumnTransformer`:
###Code
np.allclose(housing_prepared, old_housing_prepared)
###Output
_____no_output_____
###Markdown
Select and train a model
###Code
from sklearn.linear_model import LinearRegression
lin_reg = LinearRegression()
lin_reg.fit(housing_prepared, housing_labels)
# let's try the full preprocessing pipeline on a few training instances
some_data = housing.iloc[:5]
some_labels = housing_labels.iloc[:5]
some_data_prepared = full_pipeline.transform(some_data)
print("Predictions:", lin_reg.predict(some_data_prepared))
###Output
_____no_output_____
###Markdown
Compare against the actual values:
###Code
print("Labels:", list(some_labels))
some_data_prepared
from sklearn.metrics import mean_squared_error
housing_predictions = lin_reg.predict(housing_prepared)
lin_mse = mean_squared_error(housing_labels, housing_predictions)
lin_rmse = np.sqrt(lin_mse)
lin_rmse
from sklearn.metrics import mean_absolute_error
lin_mae = mean_absolute_error(housing_labels, housing_predictions)
lin_mae
from sklearn.tree import DecisionTreeRegressor
tree_reg = DecisionTreeRegressor(random_state=42)
tree_reg.fit(housing_prepared, housing_labels)
housing_predictions = tree_reg.predict(housing_prepared)
tree_mse = mean_squared_error(housing_labels, housing_predictions)
tree_rmse = np.sqrt(tree_mse)
tree_rmse
###Output
_____no_output_____
###Markdown
Fine-tune your model
###Code
from sklearn.model_selection import cross_val_score
scores = cross_val_score(tree_reg, housing_prepared, housing_labels,
scoring="neg_mean_squared_error", cv=10)
tree_rmse_scores = np.sqrt(-scores)
def display_scores(scores):
print("Scores:", scores)
print("Mean:", scores.mean())
print("Standard deviation:", scores.std())
display_scores(tree_rmse_scores)
lin_scores = cross_val_score(lin_reg, housing_prepared, housing_labels,
scoring="neg_mean_squared_error", cv=10)
lin_rmse_scores = np.sqrt(-lin_scores)
display_scores(lin_rmse_scores)
###Output
_____no_output_____
###Markdown
**Note**: we specify `n_estimators=100` to be future-proof since the default value is going to change to 100 in Scikit-Learn 0.22 (for simplicity, this is not shown in the book).
###Code
from sklearn.ensemble import RandomForestRegressor
forest_reg = RandomForestRegressor(n_estimators=100, random_state=42)
forest_reg.fit(housing_prepared, housing_labels)
housing_predictions = forest_reg.predict(housing_prepared)
forest_mse = mean_squared_error(housing_labels, housing_predictions)
forest_rmse = np.sqrt(forest_mse)
forest_rmse
from sklearn.model_selection import cross_val_score
forest_scores = cross_val_score(forest_reg, housing_prepared, housing_labels,
scoring="neg_mean_squared_error", cv=10)
forest_rmse_scores = np.sqrt(-forest_scores)
display_scores(forest_rmse_scores)
scores = cross_val_score(lin_reg, housing_prepared, housing_labels, scoring="neg_mean_squared_error", cv=10)
pd.Series(np.sqrt(-scores)).describe()
from sklearn.svm import SVR
svm_reg = SVR(kernel="linear")
svm_reg.fit(housing_prepared, housing_labels)
housing_predictions = svm_reg.predict(housing_prepared)
svm_mse = mean_squared_error(housing_labels, housing_predictions)
svm_rmse = np.sqrt(svm_mse)
svm_rmse
from sklearn.model_selection import GridSearchCV
param_grid = [
# try 12 (3×4) combinations of hyperparameters
{'n_estimators': [3, 10, 30], 'max_features': [2, 4, 6, 8]},
# then try 6 (2×3) combinations with bootstrap set as False
{'bootstrap': [False], 'n_estimators': [3, 10], 'max_features': [2, 3, 4]},
]
forest_reg = RandomForestRegressor(random_state=42)
# train across 5 folds, that's a total of (12+6)*5=90 rounds of training
grid_search = GridSearchCV(forest_reg, param_grid, cv=5,
scoring='neg_mean_squared_error',
return_train_score=True)
grid_search.fit(housing_prepared, housing_labels)
###Output
_____no_output_____
###Markdown
The best hyperparameter combination found:
###Code
grid_search.best_params_
grid_search.best_estimator_
###Output
_____no_output_____
###Markdown
Let's look at the score of each hyperparameter combination tested during the grid search:
###Code
cvres = grid_search.cv_results_
for mean_score, params in zip(cvres["mean_test_score"], cvres["params"]):
print(np.sqrt(-mean_score), params)
pd.DataFrame(grid_search.cv_results_)
from sklearn.model_selection import RandomizedSearchCV
from scipy.stats import randint
param_distribs = {
'n_estimators': randint(low=1, high=200),
'max_features': randint(low=1, high=8),
}
forest_reg = RandomForestRegressor(random_state=42)
rnd_search = RandomizedSearchCV(forest_reg, param_distributions=param_distribs,
n_iter=10, cv=5, scoring='neg_mean_squared_error', random_state=42)
rnd_search.fit(housing_prepared, housing_labels)
cvres = rnd_search.cv_results_
for mean_score, params in zip(cvres["mean_test_score"], cvres["params"]):
print(np.sqrt(-mean_score), params)
feature_importances = grid_search.best_estimator_.feature_importances_
feature_importances
extra_attribs = ["rooms_per_hhold", "pop_per_hhold", "bedrooms_per_room"]
#cat_encoder = cat_pipeline.named_steps["cat_encoder"] # old solution
cat_encoder = full_pipeline.named_transformers_["cat"]
cat_one_hot_attribs = list(cat_encoder.categories_[0])
attributes = num_attribs + extra_attribs + cat_one_hot_attribs
sorted(zip(feature_importances, attributes), reverse=True)
final_model = grid_search.best_estimator_
X_test = strat_test_set.drop("median_house_value", axis=1)
y_test = strat_test_set["median_house_value"].copy()
X_test_prepared = full_pipeline.transform(X_test)
final_predictions = final_model.predict(X_test_prepared)
final_mse = mean_squared_error(y_test, final_predictions)
final_rmse = np.sqrt(final_mse)
final_rmse
###Output
_____no_output_____
###Markdown
We can compute a 95% confidence interval for the test RMSE:
###Code
from scipy import stats
confidence = 0.95
squared_errors = (final_predictions - y_test) ** 2
np.sqrt(stats.t.interval(confidence, len(squared_errors) - 1,
loc=squared_errors.mean(),
scale=stats.sem(squared_errors)))
###Output
_____no_output_____
###Markdown
We could compute the interval manually like this:
###Code
m = len(squared_errors)
mean = squared_errors.mean()
tscore = stats.t.ppf((1 + confidence) / 2, df=m - 1)
tmargin = tscore * squared_errors.std(ddof=1) / np.sqrt(m)
np.sqrt(mean - tmargin), np.sqrt(mean + tmargin)
###Output
_____no_output_____
###Markdown
Alternatively, we could use z-scores rather than t-scores:
###Code
zscore = stats.norm.ppf((1 + confidence) / 2)
zmargin = zscore * squared_errors.std(ddof=1) / np.sqrt(m)
np.sqrt(mean - zmargin), np.sqrt(mean + zmargin)
###Output
_____no_output_____
###Markdown
Extra material A full pipeline with both preparation and prediction
###Code
full_pipeline_with_predictor = Pipeline([
("preparation", full_pipeline),
("linear", LinearRegression())
])
full_pipeline_with_predictor.fit(housing, housing_labels)
full_pipeline_with_predictor.predict(some_data)
###Output
_____no_output_____
###Markdown
Model persistence using joblib
###Code
my_model = full_pipeline_with_predictor
import joblib
joblib.dump(my_model, "my_model.pkl") # DIFF
#...
my_model_loaded = joblib.load("my_model.pkl") # DIFF
###Output
_____no_output_____
###Markdown
Example SciPy distributions for `RandomizedSearchCV`
###Code
from scipy.stats import geom, expon
geom_distrib=geom(0.5).rvs(10000, random_state=42)
expon_distrib=expon(scale=1).rvs(10000, random_state=42)
plt.hist(geom_distrib, bins=50)
plt.show()
plt.hist(expon_distrib, bins=50)
plt.show()
###Output
_____no_output_____
###Markdown
Exercise solutions 1. Question: Try a Support Vector Machine regressor (`sklearn.svm.SVR`), with various hyperparameters such as `kernel="linear"` (with various values for the `C` hyperparameter) or `kernel="rbf"` (with various values for the `C` and `gamma` hyperparameters). Don't worry about what these hyperparameters mean for now. How does the best `SVR` predictor perform?
###Code
from sklearn.model_selection import GridSearchCV
param_grid = [
{'kernel': ['linear'], 'C': [10., 30., 100., 300., 1000., 3000., 10000., 30000.0]},
{'kernel': ['rbf'], 'C': [1.0, 3.0, 10., 30., 100., 300., 1000.0],
'gamma': [0.01, 0.03, 0.1, 0.3, 1.0, 3.0]},
]
svm_reg = SVR()
grid_search = GridSearchCV(svm_reg, param_grid, cv=5, scoring='neg_mean_squared_error', verbose=2)
grid_search.fit(housing_prepared, housing_labels)
###Output
_____no_output_____
###Markdown
The best model achieves the following score (evaluated using 5-fold cross validation):
###Code
negative_mse = grid_search.best_score_
rmse = np.sqrt(-negative_mse)
rmse
###Output
_____no_output_____
###Markdown
That's much worse than the `RandomForestRegressor`. Let's check the best hyperparameters found:
###Code
grid_search.best_params_
###Output
_____no_output_____
###Markdown
The linear kernel seems better than the RBF kernel. Notice that the value of `C` is the maximum tested value. When this happens you definitely want to launch the grid search again with higher values for `C` (removing the smallest values), because it is likely that higher values of `C` will be better. 2. Question: Try replacing `GridSearchCV` with `RandomizedSearchCV`.
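Before moving on to that question, here is what the re-launched grid search suggested above might look like; the exact `C` values are an arbitrary illustration, not from the book:
```python
# Illustrative only: drop the smallest C values and extend the range upward
param_grid_higher_c = [
    {'kernel': ['linear'], 'C': [10000., 30000., 100000., 300000., 1000000.]},
]
svm_reg = SVR()
grid_search_higher_c = GridSearchCV(svm_reg, param_grid_higher_c, cv=5,
                                    scoring='neg_mean_squared_error', verbose=2)
# grid_search_higher_c.fit(housing_prepared, housing_labels)  # slow: uncomment to run
```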
###Code
from sklearn.model_selection import RandomizedSearchCV
from scipy.stats import expon, reciprocal
# see https://docs.scipy.org/doc/scipy/reference/stats.html
# for `expon()` and `reciprocal()` documentation and more probability distribution functions.
# Note: gamma is ignored when kernel is "linear"
param_distribs = {
'kernel': ['linear', 'rbf'],
'C': reciprocal(20, 200000),
'gamma': expon(scale=1.0),
}
svm_reg = SVR()
rnd_search = RandomizedSearchCV(svm_reg, param_distributions=param_distribs,
n_iter=50, cv=5, scoring='neg_mean_squared_error',
verbose=2, random_state=42)
rnd_search.fit(housing_prepared, housing_labels)
###Output
_____no_output_____
###Markdown
The best model achieves the following score (evaluated using 5-fold cross validation):
###Code
negative_mse = rnd_search.best_score_
rmse = np.sqrt(-negative_mse)
rmse
###Output
_____no_output_____
###Markdown
Now this is much closer to the performance of the `RandomForestRegressor` (but not quite there yet). Let's check the best hyperparameters found:
###Code
rnd_search.best_params_
###Output
_____no_output_____
###Markdown
This time the search found a good set of hyperparameters for the RBF kernel. Randomized search tends to find better hyperparameters than grid search in the same amount of time. Let's look at the exponential distribution we used, with `scale=1.0`. Note that some samples are much larger or smaller than 1.0, but when you look at the log of the distribution, you can see that most values are actually concentrated roughly in the range of exp(-2) to exp(+2), which is about 0.1 to 7.4.
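A quick sanity check of that range:
```python
import numpy as np
np.exp(-2), np.exp(2)  # approximately (0.135, 7.389)
```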
###Code
expon_distrib = expon(scale=1.)
samples = expon_distrib.rvs(10000, random_state=42)
plt.figure(figsize=(10, 4))
plt.subplot(121)
plt.title("Exponential distribution (scale=1.0)")
plt.hist(samples, bins=50)
plt.subplot(122)
plt.title("Log of this distribution")
plt.hist(np.log(samples), bins=50)
plt.show()
###Output
_____no_output_____
###Markdown
The distribution we used for `C` looks quite different: the scale of the samples is picked from a uniform distribution within a given range, which is why the right graph, which represents the log of the samples, looks roughly constant. This distribution is useful when you don't have a clue of what the target scale is:
###Code
reciprocal_distrib = reciprocal(20, 200000)
samples = reciprocal_distrib.rvs(10000, random_state=42)
plt.figure(figsize=(10, 4))
plt.subplot(121)
plt.title("Reciprocal distribution (scale=1.0)")
plt.hist(samples, bins=50)
plt.subplot(122)
plt.title("Log of this distribution")
plt.hist(np.log(samples), bins=50)
plt.show()
###Output
_____no_output_____
###Markdown
The reciprocal distribution is useful when you have no idea what the scale of the hyperparameter should be (indeed, as you can see on the figure on the right, all scales are equally likely, within the given range), whereas the exponential distribution is best when you know (more or less) what the scale of the hyperparameter should be. 3. Question: Try adding a transformer in the preparation pipeline to select only the most important attributes.
###Code
from sklearn.base import BaseEstimator, TransformerMixin
def indices_of_top_k(arr, k):
return np.sort(np.argpartition(np.array(arr), -k)[-k:])
class TopFeatureSelector(BaseEstimator, TransformerMixin):
def __init__(self, feature_importances, k):
self.feature_importances = feature_importances
self.k = k
def fit(self, X, y=None):
self.feature_indices_ = indices_of_top_k(self.feature_importances, self.k)
return self
def transform(self, X):
return X[:, self.feature_indices_]
###Output
_____no_output_____
###Markdown
Note: this feature selector assumes that you have already computed the feature importances somehow (for example using a `RandomForestRegressor`). You may be tempted to compute them directly in the `TopFeatureSelector`'s `fit()` method, however this would likely slow down grid/randomized search since the feature importances would have to be computed for every hyperparameter combination (unless you implement some sort of cache). Let's define the number of top features we want to keep:
###Code
k = 5
###Output
_____no_output_____
###Markdown
Now let's look for the indices of the top k features:
###Code
top_k_feature_indices = indices_of_top_k(feature_importances, k)
top_k_feature_indices
np.array(attributes)[top_k_feature_indices]
###Output
_____no_output_____
###Markdown
Let's double check that these are indeed the top k features:
###Code
sorted(zip(feature_importances, attributes), reverse=True)[:k]
###Output
_____no_output_____
###Markdown
Looking good... Now let's create a new pipeline that runs the previously defined preparation pipeline, and adds top k feature selection:
###Code
preparation_and_feature_selection_pipeline = Pipeline([
('preparation', full_pipeline),
('feature_selection', TopFeatureSelector(feature_importances, k))
])
housing_prepared_top_k_features = preparation_and_feature_selection_pipeline.fit_transform(housing)
###Output
_____no_output_____
###Markdown
Let's look at the features of the first 3 instances:
###Code
housing_prepared_top_k_features[0:3]
###Output
_____no_output_____
###Markdown
Now let's double check that these are indeed the top k features:
###Code
housing_prepared[0:3, top_k_feature_indices]
###Output
_____no_output_____
###Markdown
Works great! :) 4. Question: Try creating a single pipeline that does the full data preparation plus the final prediction.
###Code
prepare_select_and_predict_pipeline = Pipeline([
('preparation', full_pipeline),
('feature_selection', TopFeatureSelector(feature_importances, k)),
('svm_reg', SVR(**rnd_search.best_params_))
])
prepare_select_and_predict_pipeline.fit(housing, housing_labels)
###Output
_____no_output_____
###Markdown
Let's try the full pipeline on a few instances:
###Code
some_data = housing.iloc[:4]
some_labels = housing_labels.iloc[:4]
print("Predictions:\t", prepare_select_and_predict_pipeline.predict(some_data))
print("Labels:\t\t", list(some_labels))
###Output
_____no_output_____
###Markdown
Well, the full pipeline seems to work fine. Of course, the predictions are not fantastic: they would be better if we used the best `RandomForestRegressor` that we found earlier, rather than the best `SVR`. 5. Question: Automatically explore some preparation options using `GridSearchCV`.
###Code
param_grid = [{
'preparation__num__imputer__strategy': ['mean', 'median', 'most_frequent'],
'feature_selection__k': list(range(1, len(feature_importances) + 1))
}]
grid_search_prep = GridSearchCV(prepare_select_and_predict_pipeline, param_grid, cv=5,
scoring='neg_mean_squared_error', verbose=2)
grid_search_prep.fit(housing, housing_labels)
grid_search_prep.best_params_
###Output
_____no_output_____ |
Module 0. Word Embeddings/Word Representations.ipynb | ###Markdown
Written by [Nihir](mailto:[email protected]) (Twitter/IG: @nvedd)For [AI Core](theaicore.com) An introduction to NLP, Pre-processing, and Word Representations Workshop contents:- [A very brief overview of pre-neural NLP](Introduction-to-NLP)- [How do we get computers to represent our data?](How-do-we-get-computers-to-represent-our-natural-language-data?)- [Distributed Representations](Distributed-Representations)- [Pre-processing and tokenization: Cleaning your corpus](Pre-processing-and-tokenization:-Cleaning-your-corpus)- [Word2Vec](Word2Vec) - [Skipgram](Skipgram)- [SpaCy](SpaCy) Introduction to NLP_Natural language processing_ (NLP) is an interdisciplinary field concerned with the interactions between computers and human (natural) languages, in particular how to program computers to process and analyze large amounts of natural language data. There are a wide variety of tasks that NLP covers, such as Translation, Natural Language Generation, Entity Detection, Sentiment Classification, and so forth.Research into NLP started in the 50s. Even though today's systems are (obviously) vastly better than what they were previously, NLP is still considered "unsolved" and modern research is evolving at a rapid pace. One facet as to why it is challenging is due to the ambiguity of language. For example, the word "mean" has multiple definitions: "signify", "unkind", and "average". We also find ourselves using idioms a lot in language - phrases for which the meaning doesn't directly represent sequence of words (e.g. over the moon). Specifically looking at translation, word order differences also prove to be a problem:- DE: Gestern bin ich in London gewesen- Word-By-Word EN: ‘Yesterday have I to London been’- Ground Truth EN: Yesterday I have been to London The history of NLP can essentially be broken down into three approaches: Rule-based, Statistical, and Neural:- Rule-based (1950 - 1999) - Hand-crafted rules to model linguistic intuitions - Corpus-based - Example-based (EBMT) - MT Translation by analogy: if this segment has been translated before, copy its translation - Statistical (2000-2015) - Statistical models used to push the “translation by analogy” paradigm to its extreme - Language-independent - Low cost - Neural - a.k.a. Deep learning (2014-) - Learning __representations (features) & models__ from data Neural approaches to NLP in particular allow us to solve the following kinds of problems: How do we get computers to represent our natural language data?Let's assume we're working with a many-to-one classification problem, for example classifying user written movie reviews between 1-5. How can we feed raw text into a model:One possible solution is to one-hot our **corpus** based on our **vocabulary**:Let's build this:
###Code
corpus=[["this movie had a brilliant story line with great action"],
["some parts were not so great but overall pretty ok"],
["my dog went to sleep watching this piece of trash"]] # we'll cover how to deal with emoji later
# Let's tokenize our corpus.
# Tokeniziation involves splitting the corpus from the sentence level to the word level
def get_tokenized_corpus(corpus):
return [sentence[0].split(" ") for sentence in corpus]
tokenized_corpus = get_tokenized_corpus(corpus)
print(tokenized_corpus)
###Output
_____no_output_____
###Markdown
Below is some code to load in a __vocabulary__. A vocabulary is simply a __list of unique words__ which we can perform lookups against.
###Code
vocab_file = open("google-10000-english.txt", "r")
vocabulary = [word.strip() for word in vocab_file.readlines()]
print("First five entries of vocabulary:", vocabulary[0:5])
###Output
_____no_output_____
###Markdown

###Code
# Let's one-hot the tokenized corpus
import numpy as np
def get_onehot_corpus(corpus, tokenized_corpus, vocabulary):
CORPUS_SIZE = len(corpus)
MAX_LEN_SEQUENCE = max(len(x) for x in tokenized_corpus)
VOCAB_LEN = len(vocabulary)
    # Create a 3-d array of zeros with shape (corpus size, max sentence length, vocabulary size)
    onehot_corpus = np.zeros((CORPUS_SIZE, MAX_LEN_SEQUENCE, VOCAB_LEN))
    for corpus_idx, tokenized_sentence in enumerate(tokenized_corpus):
        for sequence_idx, token in enumerate(tokenized_sentence):
            # find the token index (note: .index raises a ValueError if the token is not in the vocabulary)
            token_idx = vocabulary.index(token)
            # change the relevant value in the numpy array to one
            onehot_corpus[corpus_idx, sequence_idx, token_idx] = 1
    return onehot_corpus
onehot_corpus = get_onehot_corpus(corpus, tokenized_corpus, vocabulary)
print(onehot_corpus.shape)
print(onehot_corpus)
new_reviews = [["good movie"], ["this movie had a significant amount of flaws"]]
corpus.extend(new_reviews)
tokenized_corpus = get_tokenized_corpus(corpus)
onehot_corpus = get_onehot_corpus(corpus, tokenized_corpus, vocabulary)
###Output
_____no_output_____
###Markdown
Oh... How do you think we can get around this? Discuss with someone around you the issues of one-hot encoding. Think, for example, about the pairs dog/hound and dog/potato: one-hot vectors give us no way to say that the first pair is semantically closer than the second. Distributed Representations__"You shall know a word by the company it keeps" - Firth (1957).__What does this mean? Before focusing on words themselves, let's look at concepts with 2-dimensional representations. Animal cuteness and size:

| Animal | Cuteness | Size |
|---|---|---|
| Lion | 80 | 50 |
| Elephant | 75 | 95 |
| Hyena | 10 | 30 |
| Mouse | 60 | 8 |
| Pig | 30 | 30 |
| Horse | 50 | 65 |
| Dolphin | 90 | 45 |
| Wasp | 2 | 1 |
| Giraffe | 60 | 80 |
| Dog | 95 | 20 |
| Alligator | 8 | 40 |
| Mole | 30 | 12 |
| Black Widow | 100 | 30 |
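(A quick numeric aside on the one-hot point above, before we plot the animal data: every pair of distinct one-hot vectors has zero dot product and the same distance, so "dog" is no closer to "hound" than to "potato". The indices below are made up purely for illustration.)
```python
import numpy as np

toy_vocab = ["dog", "hound", "potato"]
one_hot = np.eye(len(toy_vocab))                 # each row is a one-hot vector
dog, hound, potato = one_hot

print(np.dot(dog, hound), np.dot(dog, potato))   # both 0.0: no notion of similarity
print(np.linalg.norm(dog - hound), np.linalg.norm(dog - potato))  # both sqrt(2)
```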
###Code
import plotly.graph_objects as go
animal_labels = ["Lion", "Elephant", "Hyena", "Mouse", "Pig", "Horse", "Dolphin", "Wasp", "Giraffe", "Dog", "Alligator", "Mole", "Scarlett Johansson"]
animal_cuteness = [80, 75, 10, 60, 30, 50, 90, 1, 60, 95, 8, 30, 100]
animal_size = [50, 95, 30, 8, 30, 65, 45, 1, 80, 20, 40, 12, 30]
fig = go.Figure(data=[go.Scatter(
x=animal_cuteness, y=animal_size,
text=animal_labels,
mode='markers+text',
marker_size=70)
])
fig.update_layout(
title="Animal Cuteness vs Animal Size",
xaxis_title="Animal Cuteness",
yaxis_title="Animal Size",
)
fig.show()
###Output
_____no_output_____
###Markdown
The plot shows us that the closest animal to the Alligator is the Hyena. Let's calculate the (Euclidean) distance between the two!
###Code
import math
def distance_2d(x1, y1, x2, y2):
return math.sqrt((x1-x2)**2 + (y1-y2)**2)
# Helper function to return us the animal information we need
def get_animal_info(animal_name):
if animal_name in animal_labels:
animal_idx = animal_labels.index(animal_name)
animal_cuteness_ = animal_cuteness[animal_idx]
animal_size_ = animal_size[animal_idx]
return animal_cuteness_, animal_size_
else:
return False
alligator_cuteness, alligator_size = get_animal_info("Alligator")
hyena_cuteness, hyena_size = get_animal_info("Hyena")
elephant_cuteness, elephant_size = get_animal_info("Elephant")
print("DISTANCE BETWEEN ALLIGATOR AND HYENA", distance_2d(alligator_cuteness, alligator_size, hyena_cuteness, hyena_size))
print("DISTANCE BETWEEN ALLIGATOR AND ELEPHANT", distance_2d(alligator_cuteness, alligator_size, elephant_cuteness, elephant_size))
###Output
_____no_output_____
###Markdown
This visual representation allows us to reason about many things. We can ask, for example, what's halfway between a Mosquito and a Horse (a Pig). We can also ask about differences. For example, the difference between a Mole and Mouse is 30 units of cuteness and a couple units in size.The concept of difference allows us to reason about analogies. This means that we can say that animal_1 is to animal_2 the way that animal_3 is to animal_4. For example, we can say `Pig is to Horse as Mouse is to ???`
###Code
horse_cuteness, horse_size = get_animal_info("Horse")
pig_cuteness, pig_size = get_animal_info("Pig")
fig.add_trace(go.Scatter(x=[pig_cuteness, horse_cuteness], y=[pig_size, horse_size]))
fig.update_layout(showlegend=False)
fig.show()
# let's get the x and y difference between Pig and Horse
pig_horse_diff_cuteness = abs(pig_cuteness - horse_cuteness)
# get the difference in size
pig_horse_diff_size = abs(pig_size - horse_size)
# now let's apply the analogy to Mouse:
# Pig is to Horse as Mouse is to ???
mouse_cuteness, mouse_size = get_animal_info("Mouse")
# plot the new analogy: start at Mouse and apply the Pig -> Horse difference
fig.add_trace(go.Scatter(x=[mouse_cuteness, mouse_cuteness + pig_horse_diff_cuteness],
                         y=[mouse_size, mouse_size + pig_horse_diff_size]))
fig.update_layout(showlegend=False)
fig.show()
# Lion!
###Output
_____no_output_____
###Markdown
These concepts also work in higher dimensions. Before focusing on words themselves, let's briefly add another dimension to our dataset and look at one or two more concepts:

| Animal | Cuteness | Size | Ferocity |
|---|---|---|---|
| Lion | 80 | 50 | 85 |
| Elephant | 75 | 95 | 20 |
| Hyena | 10 | 30 | 90 |
| Mouse | 60 | 8 | 1 |
| Pig | 30 | 30 | 10 |
| Horse | 50 | 65 | 30 |
| Dolphin | 90 | 45 | 20 |
| Wasp | 2 | 1 | 100 |
| Giraffe | 60 | 80 | 65 |
| Dog | 95 | 20 | 15 |
| Alligator | 8 | 40 | 90 |
| Mole | 30 | 12 | 15 |
| Black Widow | 100 | 30 | 69 |
###Code
# just to remind us ;)
animal_labels = animal_labels
animal_cuteness = animal_cuteness
animal_size = animal_size
animal_ferocity = [85, 20, 90, 1, 10, 30, 20, 100, 65, 15, 90, 15, 69]
# nothing particularly important... just used for visualisation purposes
import statistics
animal_mean_stats = [statistics.mean(k) for k in zip(animal_cuteness, animal_size, animal_ferocity)]
fig = go.Figure(data=[go.Scatter3d(
x=animal_cuteness, y=animal_size, z=animal_ferocity,
text=animal_labels,
mode='markers+text',
marker=dict(
size=12,
color=animal_mean_stats, # set color to an array/list of desired values
colorscale='Viridis', # choose a colorscale
opacity=0.8
))
])
fig.update_layout(title="Animal Cuteness vs Animal Size vs Animal Ferocity",
scene = dict(
xaxis_title='Animal Cuteness',
yaxis_title='Animal Size',
zaxis_title='Animal Ferocity')
)
fig.show()
# Let's redefine this function to return ferocity as well
def get_animal_info(animal_name):
if animal_name in animal_labels:
animal_idx = animal_labels.index(animal_name)
animal_cuteness_ = animal_cuteness[animal_idx]
animal_size_ = animal_size[animal_idx]
animal_ferocity_ = animal_ferocity[animal_idx]
return animal_cuteness_, animal_size_, animal_ferocity_
else:
return False
###Output
_____no_output_____
###Markdown
We'll demonstrate how to calculate the distance between vectors in 3-d space, show the closest `n` points to a given point (animal), and also show how analogies work in 3 dimensions. The point of this exercise is to give you an intuition for how the same ideas carry over analogously to higher-dimensional space.
###Code
# Euclidean distance
def distance(vector1, vector2):
return math.sqrt(sum((i - j)**2 for i, j in zip(vector1, vector2)))
hyena_coords = get_animal_info("Hyena")
alligator_coords = get_animal_info("Alligator")
print("Distance between Hyena and Alligator:", distance(hyena_coords, alligator_coords))
dog_coords = get_animal_info("Dog")
print("Distance between Hyena and Dog:", distance(hyena_coords, dog_coords))
# Closest
animal_info = list(zip(animal_labels, animal_cuteness, animal_size, animal_ferocity))  # wrap in list() so it can be iterated over more than once
def closest_to(animal_name, n=3):
primary_animal_stats = get_animal_info(animal_name)
distances_from_animal = []
for label, cuteness, size, ferocity in animal_info:
if label==animal_name:
continue
secondary_animal_stat = (cuteness, size, ferocity)
distances_from_animal.append((label, distance(primary_animal_stats, secondary_animal_stat)))
sorted_distances_from_animal = sorted(distances_from_animal, key=lambda x: x[1])
return sorted_distances_from_animal[:n]
closest_to("Hyena")
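# As promised above, analogies also work in 3-d: "a is to b as c is to ???".
# A sketch: add the (b - a) difference vector to c and return the nearest animal,
# excluding the three inputs (the stats are made up, so treat the answer as illustrative).
def analogy(animal_a, animal_b, animal_c):
    a = np.array(get_animal_info(animal_a))
    b = np.array(get_animal_info(animal_b))
    c = np.array(get_animal_info(animal_c))
    target = c + (b - a)
    candidates = [(label, distance(target, get_animal_info(label)))
                  for label in animal_labels
                  if label not in (animal_a, animal_b, animal_c)]
    return min(candidates, key=lambda x: x[1])[0]

analogy("Pig", "Horse", "Mouse")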
###Output
_____no_output_____
###Markdown
What if we could do the same with words? Before introducing the methodology of obtaining these vectors, it's relevant to discuss why word vectors are effective. One of the main reasons adoption is so widespread is because they can be **pretrained**. What this means is that we can use word vectors which have been trained on any corpus (e.g. Wikipedia, Twitter, medical journals, ancient religious texts etc.) in a potentially domain specific downstream task. For example, word vectors obtained by training on ancient religious texts might be used to classify ancient religious texts into a religion; or Twitter data can be used to generate conversation-like agents. That is, each word vector is just a vector to represent that word, and the algorithm we're using to solve the task at hand will learn the embeddings for all the words based on their context in the training corpus (e.g. the "meaning" of the word _lit_ would be different if comparing the vector based on a religious text and the vector based on Twitter).Before the methodology, let's analyse some of these pre-trained embeddings. We'll use King, Queen, Royal, Man, Woman, Water, and Earth as an example.
###Code
glove_file = open("glove_50d_TRUNCATED.txt", "r", encoding="utf8")
glove_vectors_list = [word_and_vector.strip() for word_and_vector in glove_file.readlines()]
glove_vectors = {obj.split()[0]: np.asarray(obj.split()[1:], dtype=np.float) for obj in glove_vectors_list}
print(glove_vectors["the"])
king_vector = glove_vectors["king"]
queen_vector = glove_vectors["queen"]
man_vector = glove_vectors["man"]
woman_vector = glove_vectors["woman"]
water_vector = glove_vectors["water"]
earth_vector = glove_vectors["earth"]
from plotly.subplots import make_subplots
# Let's visualise these vectors and colour them based on their elemental values
# Red is lower, white=0, blue is higher
# vectors is a list of tuples: [(vector_name, vector)]
def plot_vectors(vectors):
fig = make_subplots(rows=len(vectors), cols=1)
for i, vector_tuple in enumerate(vectors):
vector_name = vector_tuple[0]
vector = vector_tuple[1]
normalized_vector = (vector-np.min(vector))/(np.ptp(vector))
x = ["dimension_"+str(_) for _ in range(50)]
y = [1] * 50
fig.add_trace(
go.Bar(x=x, y=y, marker=dict(
color=normalized_vector, # set color to an array/list of desired values
colorscale='rdbu', # choose a colorscale
opacity=0.8,
)),
row=i+1, col=1
)
fig.update_yaxes(title_text=vector_name, row=i+1, col=1)
fig.update_layout(height=175*len(vectors), xaxis_showgrid=False, yaxis_showgrid=False, showlegend=False)
fig.update_yaxes(showticklabels=False)
fig.update_xaxes(showticklabels=False)
fig.show()
plot_vectors([("King", king_vector), ("Queen", queen_vector), ("Man", man_vector), ("Woman", woman_vector), ("Water", water_vector), ("Earth", earth_vector)])
###Output
_____no_output_____
###Markdown
Similar to what was done with the animals previously, we can apply analogies to word vectors. In high-dimensional space it is preferable to use **cosine** distance. Before applying/finding any analogies, we need a function which finds the closest vector to another vector:
###Code
# Use the cosine distance instead of the euclidean distance we coded up earlier on
from scipy.spatial.distance import cosine
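# For intuition: scipy's cosine(u, v) returns the cosine *distance*,
# 1 - (u . v) / (||u|| * ||v||). A manual equivalent (not used below) would be:
def manual_cosine_distance(u, v):
    return 1 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))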
def find_closest(to_find_closest, exclude_list):
closest_distance = float("inf")
closest_token = None
for token, vector in glove_vectors.items():
distance = cosine(to_find_closest, vector)
if closest_distance > distance and token not in exclude_list:
closest_distance = distance
closest_token = token
return closest_token
###Output
_____no_output_____
###Markdown
Find out the answer to "king - man + woman"
###Code
to_find_closest = king_vector - man_vector + woman_vector
find_closest(to_find_closest, ["king", "man", "woman"])
###Output
_____no_output_____
###Markdown
What about.. London is to England as Paris is to X?
###Code
london_vector = glove_vectors["london"]
england_vector = glove_vectors["england"]
paris_vector = glove_vectors["paris"]
find_closest(england_vector - london_vector + paris_vector, ["paris", "england", "london"])
###Output
_____no_output_____
###Markdown
Alongside analogies, we are also able to semantically 'reason':
###Code
print(find_closest(glove_vectors["france"] - glove_vectors["capital"], ["france", "capital"]))
###Output
_____no_output_____
###Markdown
The takeaway: Distributed vectors group similar words/objects together Pre-processing and tokenization: Cleaning your corpus Now that we know how we want to represent words, let's do it! However, before we do so we need to clean our dataset. We're going to create our vocabulary from scratch using our corpus.
###Code
# our corpus is now more in a format you would find a corpus in production
corpus = [corpus[idx][0] for idx, sentence in enumerate(corpus)]
more_movie_reviews = [
"this was a BRILLIANT 👍 movie",
"💩 film",
"this was a terrible movie",
"this was a treribel movie",
"this was a good 👍 movie",
"A moving story about U.S. wildlife.",
"Wow. I had not expected wildlife in the US to be so diverse",
"Wow - what a MOVIE 👍",
"Us here at The Movie Reviewers found this movie exceedingly average",
"The Polish people in this film... stunning 👍",
"A bit of a polish to this movie would have made it great",
"This film didn't exite me as much as much as the prequel.",
"this movie doesn't live up to the hype of its trailer",
"It's rare that a film is this good??",
"This movie was 👍 💩"
]
corpus.extend(more_movie_reviews)
corpus
###Output
_____no_output_____
###Markdown
Like we did at the beginning of this notebook, we need to tokenize this corpus. Code up a function to do that.
###Code
def tokenize(corpus):
##split the reviews on whitespace (a minimal completion of the exercise placeholder)
return [review.split() for review in corpus]
tokenized_corpus = tokenize(corpus)
print(tokenized_corpus)
###Output
_____no_output_____
###Markdown
What issues can you see? Let's count all the distinct tokens now:
###Code
distinct_tokens_count = {}
for t_review in tokenized_corpus:
for token in t_review:
if token not in distinct_tokens_count.keys():
distinct_tokens_count[token] = 1
else:
distinct_tokens_count[token] += 1
for token, count in distinct_tokens_count.items():
print("{:<14s} {:<10d}".format(token, count))
###Output
_____no_output_____
###Markdown
Ok.. that's a lot of information, let's just look at the key points:
###Code
for token, count in distinct_tokens_count.items():
if "movie" in token.lower():
print(token, count)
for token, count in distinct_tokens_count.items():
if "film" in token.lower():
print(token, count)
for token, count in distinct_tokens_count.items():
if "good" in token.lower():
print(token, count)
for token, count in distinct_tokens_count.items():
if "polish" in token.lower():
print(token, count)
for token, count in distinct_tokens_count.items():
if "wildlife" in token.lower():
print(token, count)
for token, count in distinct_tokens_count.items():
if "us" in token.lower() or "u.s." in token.lower():
print(token, count)
###Output
_____no_output_____
###Markdown
Ok so let's quickly run through the issues here. Firstly, we see `movie`, `Movie` and `MOVIE`. How can we resolve this issue during our tokenization step? We could lowercase everything, but what issues can you see that causing? Semantic differences of Polish vs polish etc. What about the two different instances of `film` (`film...`)? Ok, we can remove punctuation, but wouldn't this ruin the `U.S.` acronym? Another solution would be to count `...` as a token, or remove it, but keep `.`s as part of the token if it is instantly followed by a letter (i.e. no space). This part of the pre-processing pipeline is called normalization. Let's do some basic normalization for now, and later on we'll use a library to do this for us (regex is a beast in itself). Our normalization step will be to lowercase everything and remove all punctuation.
###Code
import re # regex
# The backslash is an escape character to let the interpreter know we mean to use it as a string literal (- or ')
re_punctuation_string = '[\s,/.?\-\']' # split on spaces (\s), commas (,), slash (/), fullstop (.), question marks (?), hyphens (-), and apostrophe (').
tokenized_corpus = []
for review in corpus:
tokenized_review = re.split(re_punctuation_string, review) # in python's regex, [...] is an alternative to writing .|.|.
tokenized_review = list(filter(None, tokenized_review)) # remove empty strings from list
tokenized_corpus.append([token.lower() for token in tokenized_review]) # Lowercasing everything
print(tokenized_corpus)
distinct_tokens_count = {}
for t_review in tokenized_corpus:
for token in t_review:
if token not in distinct_tokens_count.keys():
distinct_tokens_count[token] = 1
else:
distinct_tokens_count[token] += 1
for token, count in distinct_tokens_count.items():
if "movie" in token.lower():
print(token, count)
for token, count in distinct_tokens_count.items():
if "film" in token.lower():
print(token, count)
for token, count in distinct_tokens_count.items():
if "good" in token.lower():
print(token, count)
for token, count in distinct_tokens_count.items():
if "polish" in token.lower():
print(token, count)
for token, count in distinct_tokens_count.items():
if "wildlife" in token.lower():
print(token, count)
for token, count in distinct_tokens_count.items():
if "us" in token.lower() or "u.s." in token.lower():
print(token, count)
###Output
_____no_output_____
###Markdown
There are still some problems, e.g.: `['a', 'moving', 'story', 'about', 'u', 's', 'wildlife']` and the letters that were split on after apostrophes, e.g.: `s` and `t`. We'll leave this as is for now, and sort this out later. What do you think we need to do with the emoji?
###Code
print("U+", ord("😂"))
###Output
_____no_output_____
###Markdown
[Word2Vec](https://arxiv.org/abs/1301.3781) Representing words as vectors is often attributed to the quote at the beginning of this section - __you shall know a word by the company it keeps__. Above, we demonstrated how this quote is manifested by the idea of distributed representations. This section discusses and implements one way of obtaining representations. Word2Vec refers to a family of neural network based algorithms which obtain these word vectors/embeddings. One part of this family consists of two different model architectures: Continuous bag-of-words and Skip-gram. The other part of this family consists of two different approaches for dealing with a vocabulary in the order of **millions**: Hierarchical Softmax and Negative Sampling. N.B. Word2Vec is not the only algorithm used to obtain word vectors. [GloVe](https://nlp.stanford.edu/projects/glove/), [FastText](https://fasttext.cc/), and [pre-trained language models](https://arxiv.org/abs/1810.04805) are alternatives. We will be looking at pre-trained language models later on this course. The reason **Word2Vec** is focused on is because it provides a simple intuition behind how we can use neural networks to reduce dimensionality and then use the lower-dimensional vectors in downstream tasks. Using parameters from one model in another is also part of a paradigm known as _transfer learning_. Recall the problem we are trying to solve - representing our sequences, movie reviews, in a way that we can feed to a (classification) model. Earlier on, we loaded in pre-trained embeddings. Each vector we obtained for a given word was the representation of this word. Let's discuss how to get this representation. Looking at the quote again, we have this notion of "company". What do you think this means? It leads to the term we call a context window: the words which are in the **negative** $c$ range to the **positive** $c$ range away from the focus word $w_t$. There are two variants, Continuous Bag of Words (CBOW) and Skip-gram. We'll focus on skip-gram because it has been shown to outperform CBOW on larger corpora. After we implement skip-gram below, see if you're able to implement CBOW yourself. Skipgram With the knowledge that the input and output are one-hot, let's derive the objective. Instead of considering the problem where we have four gold labels, let's look at the case where we have one (figures omitted). Now we know what our neural network is going to look like - we just need to figure out how to feed our data into the model. To convert our input to one-hot we'll need to build a vocabulary dictionary which maps words to an integer. We're going to use our corpus to build our vocabulary. The reason we didn't do that before was because we loaded in an external vocabulary file.
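For reference, here is the standard skip-gram softmax for a single focus/context pair (a textbook formulation, stated here because the original figures are omitted):

$$P(w_{context} \mid w_{focus}) = \frac{\exp(u_{context}^{\top} v_{focus})}{\sum_{w=1}^{|V|} \exp(u_{w}^{\top} v_{focus})}$$

where $v_{focus}$ is the projection-layer vector selected by the one-hot input and $u_w$ are the output-layer vectors; training maximises this probability (equivalently, minimises cross-entropy) over all extracted focus/context pairs.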
###Code
##extract a vocabulary (i.e. a list/set of unique tokens)
vocabulary = sorted({token for review in tokenized_corpus for token in review})  # completing the placeholder; sorted for a stable word order
print("LENGTH OF VOCAB:", len(vocabulary), "\nVOCAB:", vocabulary)
# Let's map these to the aforementioned dictionaries
# word2idx = {}
# n_words = 0
# for token in vocabulary:
# if token not in word2idx:
# word2idx[token] = n_words
# n_words += 1
word2idx = dict([(word, index) for index, word in enumerate(vocabulary)])
assert len(word2idx) == len(vocabulary)
print(word2idx)
###Output
_____no_output_____
###Markdown
**At this point, we should extract our contexts and focus words:**- Let's say we're considering the sentence: `['this', 'movie', 'had', 'a', 'brilliant', 'story', 'line', 'with', 'great', 'action']`- For every word in the sentence, we want to get the words which are `window_size` around it.- So if `window_size==2`, for the word `this`, we obtain: `[['this', 'movie'], ['this', 'had']]`- For the word `movie`, we obtain: `[['movie', 'this'], ['movie', 'had'], ['movie', 'a']]`- For the word `had`, we obtain: `[['had', 'this'], ['had', 'movie'], ['had', 'a'], ['had', 'brilliant']]`
###Code
def get_focus_context_pairs(tokenized_corpus, window_size=2):
focus_context_pairs = []
for sentence in tokenized_corpus:
for token_idx, token in enumerate(sentence):
for w in range(-window_size, window_size+1):
context_word_pos = token_idx + w
if w == 0 or context_word_pos >= len(sentence) or context_word_pos < 0:
continue
try:
focus_context_pairs.append([token, sentence[context_word_pos]])
except:
continue
return focus_context_pairs
focus_context_pairs = get_focus_context_pairs(tokenized_corpus)
print(focus_context_pairs)
# Let's map these to our indicies in preparation to one-hot
def get_focus_context_idx(focus_context_pairs):
idx_pairs = []
for pair in focus_context_pairs:
idx_pairs.append([word2idx[pair[0]], word2idx[pair[1]]])
return idx_pairs
idx_pairs = get_focus_context_idx(focus_context_pairs)
print(idx_pairs)
def get_one_hot(indices, vocab_size=len(vocabulary)):
oh_matrix = np.zeros((len(indices), vocab_size))
for i, idx in enumerate(indices):
oh_matrix[i, idx] = 1
return torch.Tensor(oh_matrix)
###Output
_____no_output_____
###Markdown
Time to build our neural network!
###Code
import torch
from torch import nn
import torch.nn.functional as F
from torch.utils.tensorboard import SummaryWriter
from tqdm import tqdm
import random
writer = SummaryWriter('runs/word2vec')
class Word2Vec(nn.Module):
def __init__(self, input_size, output_size, hidden_dim_size):
super().__init__()
embedding_dim = hidden_dim_size
# Why do you think we don't have an activation function here?
##initialize the hidden layer and output layer
# (completing the exercise placeholder: two linear maps matching the `projection`/`output`
# names used in forward(); bias-free layers are the usual word2vec choice)
self.projection = nn.Linear(input_size, embedding_dim, bias=False)
self.output = nn.Linear(embedding_dim, output_size, bias=False)
def forward(self, input_token):
x = self.projection(input_token)
output = self.output(x)
return output
# Tensorboard doesn't handle encoding emojis.
# So while we can train our model on emojis (as we've just done)
# We gotta convert their unicode string to something displayable on Tensorboard
word2idx[":pile_of_poo:"] = word2idx.pop("\U0001f4a9")
word2idx[":thumbs_up:"] = word2idx.pop("\U0001f44d")
word2idx = {k: v for k, v in sorted(word2idx.items(), key=lambda item: item[1])} # sort dictionary
def train(word2vec_model, idx_pairs, state_dict_filename, early_stop=False, num_epochs=10, lr=1e-3):
word2vec_model.train()
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(word2vec_model.parameters(), lr=lr)
for epoch in tqdm(range(num_epochs)):
random.shuffle(idx_pairs)
for focus, context in idx_pairs:
oh_inputs = get_one_hot([focus], len(vocabulary))
target = torch.LongTensor([context])
pred_outputs = word2vec_model(oh_inputs)
loss = criterion(pred_outputs, target)
loss.backward()
optimizer.step()
word2vec_model.zero_grad()
### These lines stop training early
if early_stop: break
if early_stop: break
###
torch.save(word2vec_model.state_dict(), state_dict_filename)
writer.add_embedding(word2vec_model.projection.weight.T,
metadata=list(word2idx.keys()), global_step=epoch)  # add_embedding expects a list of labels
word2vec = Word2Vec(len(vocabulary), len(vocabulary), 10)
train(word2vec, idx_pairs, "word2vec.pt")
###Output
_____no_output_____
###Markdown
SpaCy We've covered how to write some basic pre-processing rules from scratch, and we've seen some issues that rule based pre-processing can cause. Sometimes it'll simply take too long to code up rules for all edge cases. In downstream tasks, we might also want to augment the data with some more information, for example, their Parts of Speech tag (PoS) - an identifier per word describing the type of word it is (e.g. verb, adjective).SpaCy is an easy to use NLP library which gives us access to neural-models for various linguistic features. In this section we're going to use the Tokenizer to preprocess a larger corpus of text and train a word2vec model on this corpus. Then we're going to use Tensorboard to visualise the embeddings we've just trained.
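As a tiny illustration of the extra linguistic features mentioned above (a sketch of ours; the cells below only use the tokenizer), spaCy exposes a part-of-speech tag on every token:

```python
# assumes `nlp` is the spaCy pipeline loaded in the next cell
doc = nlp("This movie was surprisingly good")
print([(token.text, token.pos_) for token in doc])
```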
###Code
import spacy
import pickle
import os
from tqdm import tqdm
nlp = spacy.load("en")
GUTENBERG_DIR = "gutenberg/"
gutenberg_books = []
for i, book_name in enumerate(os.listdir(GUTENBERG_DIR)):
book_file = open(os.path.join(
GUTENBERG_DIR, book_name), encoding="latin-1")
book = book_file.read()
gutenberg_books.append(book)
book_file.close()
if i == 3:
break
gutenberg_book_lines = []
for book in tqdm(gutenberg_books):
book_lines = book.split("\n")
book_lines = list(filter(lambda x: x != "", book_lines))
gutenberg_book_lines.append(book_lines)
print(gutenberg_book_lines[1][10])
print(gutenberg_book_lines[2][34])
if os.path.exists("tokenized_corpus_gutenberg.pkl"):
tokenized_corpus = pickle.load(open("tokenized_corpus_gutenberg.pkl", "rb"))
else:
tokenized_corpus = []
for book_line in tqdm(gutenberg_book_lines):
for line in tqdm(book_line):
doc = nlp(line)
tokenized_corpus.append([token.text.lower()
for token in doc if not token.is_punct])
print(tokenized_corpus[0:5])
pickle.dump(tokenized_corpus, open("tokenized_corpus_gutenberg.pkl", "wb"))
print(tokenized_corpus[0:5])
len(tokenized_corpus)
###Output
_____no_output_____
###Markdown
One thing which wasn't mentioned before is the discrepancy between the words which may be present at test/inference time and those seen during training. These are known as __out of vocabulary__ tokens. A simple strategy to deal with this is to replace every word in our training set which occurs less often than a certain threshold with an `<OOV>` token. At test time, if a given word isn't in the vocabulary that the model was trained on, we simply replace it with the `<OOV>` token.
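To make the test-time half of that strategy concrete, here is a minimal sketch (the `to_index` helper is illustrative, not defined elsewhere in this notebook, and assumes `word2idx` already contains an `<OOV>` entry):

```python
def to_index(token, word2idx, oov_token="<OOV>"):
    # fall back to the <OOV> index for anything unseen during training
    return word2idx.get(token, word2idx[oov_token])
```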
###Code
def get_vocabulary(tokenized_corpus, cutoff_frequency=5):
vocab_freq_dict = dict()
for sentence in tokenized_corpus:
for token in sentence:
if token not in vocab_freq_dict.keys():
vocab_freq_dict[token] = 0
vocab_freq_dict[token] += 1
vocabulary = set()
for sentence in tokenized_corpus:
for token in sentence:
if vocab_freq_dict[token] > cutoff_frequency:
vocabulary.add(token)
## (the loop above keeps each token that appears more than cutoff_frequency times;
## everything else will later be mapped to the <OOV> token)
return vocabulary
vocabulary = get_vocabulary(tokenized_corpus)
print("LENGTH OF VOCAB:", len(vocabulary), "\nVOCAB:", vocabulary)
OOV_token = "<OOV>"
vocabulary.add(OOV_token)
word2idx = {}
n_words = 0
tokenized_corpus_with_OOV = []
for sentence in tokenized_corpus:
tokenized_sentence_with_OOV = []
for token in sentence:
if token in vocabulary:
tokenized_sentence_with_OOV.append(token)
else:
tokenized_sentence_with_OOV.append(OOV_token)
tokenized_corpus_with_OOV.append(tokenized_sentence_with_OOV)
for token in vocabulary:
if token not in word2idx:
word2idx[token] = n_words
n_words += 1
assert len(word2idx) == len(vocabulary)
focus_context_pairs = get_focus_context_pairs(tokenized_corpus_with_OOV)
print(focus_context_pairs[0:20])
idx_pairs = get_focus_context_idx(focus_context_pairs)
print(idx_pairs[0:20])
writer = SummaryWriter('runs/word2vec_gutenberg')
w2v_gutenberg = Word2Vec(len(vocabulary), len(vocabulary), 10)
train(w2v_gutenberg, idx_pairs, "word2vec_gutenberg.pt", early_stop=True)
###Output
_____no_output_____
###Markdown
Let's visualise these embeddings using Tensorboard's projector! How do we use these word embeddings? Our word embeddings __are__ the `projection` weight matrix that we trained earlier on. To use this in downstream tasks, we can save our weight matrix and initialise the embeddings of our downstream network with the weights we've obtained. This is something we'll do in the next session, but extracting the weight matrix is as simple as:
###Code
weights_matrix = w2v_gutenberg.projection.weight.T
print(weights_matrix.shape)
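# A possible way to reuse these weights downstream (an assumption of ours, not shown in the
# original notebook) is PyTorch's nn.Embedding.from_pretrained; freeze=False lets them keep training
embedding_layer = nn.Embedding.from_pretrained(weights_matrix.detach().clone(), freeze=False)
print(embedding_layer)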
###Output
_____no_output_____ |
exam/exam_statistics_and_machine_learning.ipynb | ###Markdown
Exam of Statistics & Machine Learning The goal of this exam is to perform an analysis on data related to heart disease; in particular, we want to explore the relationship between a `target` variable - whether a patient has a heart disease or not - and several other variables such as cholesterol level, age, ... The data is present in the file `'heartData_simplified.csv'`, which is a cleaned and simplified version of the [UCI heart disease data set](https://archive.ics.uci.edu/ml/datasets/heart+Disease). We ask that you explore the data-set and answer the questions in a commented jupyter notebook. You should send your code to **[email protected]** by the **17th of December**. I will reply to your e-mail when I receive it. If I have not replied after 24h, it means that your e-mail went into my spam folder, so do not hesitate to send me another one. Comment your code to explain to us your thought process and detail your conclusions following the analysis. Description of the columns * age : Patient age in years * sex : Patient sex * chol : Cholesterol level in mg/dl. * thalach : Maximum heart rate during the stress test * oldpeak : Decrease of the ST segment during exercise compared to the same segment at rest. * ca : Number of main blood vessels coloured by the radioactive dye. The number varies between 0 and 3. * thal : Results of the blood flow observed via the radioactive dye. * defect -> fixed defect (no blood flow in some part of the heart) * normal -> normal blood flow * reversible -> reversible defect (a blood flow is observed but it is not normal) * target : Whether the patient has a heart disease or not **code to read the data**
###Code
import pandas as pd
df = pd.read_table('heartData_simplified.csv',sep=',')
df.head()
print(len(df))
###Output
296
###Markdown
You can see that we have to handle categorical variables: `sex`, `thal`, and our response variable: `target`. If taken as-is, they may create problems in models. So we will first transform them into sets of 0/1 columns:
###Code
# in this first line, I make sure that "normal" becomes the default value for the thal columns
df['thal'] = df['thal'].astype(pd.CategoricalDtype(categories=["normal", "reversible", "defect"],ordered=True))
# get dummies will transform these categorical columns to sets of 0/1 columns
df = pd.get_dummies( df , drop_first=True )
df.head()
coVariables = df.drop('target_yes' , axis=1)
response = df.target_yes
df.columns
###Output
_____no_output_____ |
ds7333_case_study_6/case6_AL.ipynb | ###Markdown
EDA
###Code
df.head()
df['# label'].value_counts()
df['# label'].value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
Scaling and Skew We need to scale the feature 'mass'; other features are already scaled or pretty close. Several features are highly skewed, with min or max values > 4 standard deviations from the mean
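As a complementary check (a small sketch of ours, not part of the original analysis), pandas can also report per-feature skewness directly:

```python
# assumes `df` is the DataFrame loaded above, with the physics features in columns 1-28
feature_skewness = df.iloc[:, 1:29].skew().sort_values(ascending=False)
print(feature_skewness.head(10))   # most right-skewed features
print(feature_skewness.tail(5))    # most left-skewed features
```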
###Code
feature_summary = df.iloc[:, 1:29].describe().T
feature_summary
###Output
_____no_output_____
###Markdown
Highly skewed features
###Code
right_skew = feature_summary.loc[feature_summary['max'] > feature_summary['mean'] + feature_summary['std']*4]
left_skew = feature_summary.loc[feature_summary['min'] < feature_summary['mean'] - feature_summary['std']*4]
skew = pd.concat([right_skew.T, left_skew.T], axis=0, join='outer')
skew.head(8)
###Output
_____no_output_____
###Markdown
No major outliers, just skewed data
###Code
fig, axes = plt.subplots(2, 6, figsize=(14, 5))
fig.suptitle('Skewed Features - Boxplots')
for i,j in zip(skew.columns, range(11)):
sns.boxplot(ax = axes[int(j/6), j%6], x = df[i])
fig.tight_layout()
###Output
_____no_output_____
###Markdown
Modeling Prep
###Code
class BaseImputer:
#@abstractmethod
def fit(self, X, y=None):
pass
#@abstractmethod
def transform(self, X):
pass
class BaseModel:
#@abstractmethod
def fit(self, X, y, sample_weight=None):
pass
#@abstractmethod
def predict(self, X):
pass
class Modeling:
_X_train_fitted = None
_X_test_fitted = None
_y_train = None
_y_test = None
_y_preds = None
def __init__(self, data: pd.DataFrame,
target_name: str,
shuffle_splitter: BaseShuffleSplit,
imputer: BaseImputer,
model: BaseModel,
scaler = None):
self._data = data
self._target_name = target_name
self._shuffle_splitter = shuffle_splitter
self._imputer = imputer
self._model = model
self._X, self._y = self._split_data()
self._scaler = scaler
@property
def X(self):
return self._X
@property
def y(self):
return self._y
@property
def model(self):
return self._model
@model.setter
def model(self, model):
self._model = model
@property
def X_train(self):
return self._X_train_fitted
@property
def X_test(self):
return self._X_test_fitted
@property
def y_train(self):
return self._y_train
@property
def y_test(self):
return self._y_test
@property
def y_preds(self):
return self._y_preds
def _split_data(self):
X = self._data.copy()
return X.drop([self._target_name], axis=1) , X[self._target_name]
def _shuffle_split(self):
X = self.X
y = self.y
for train_index, test_index in self._shuffle_splitter.split(X,y):
X_train, X_test = X.iloc[train_index], X.iloc[test_index]
y_train, y_test = y[train_index], y[test_index]
return X_train, X_test, y_train, y_test
def _fit_imputer(self, train):
if self._imputer is not None:
self._imputer.fit(train)
def _fit_scaler(self, train):
if self._scaler is not None:
self._scaler.fit(train)
def _impute_data(self, X: pd.DataFrame):
if self._imputer is not None:
return pd.DataFrame(self._imputer.transform(X), columns = self.X.columns, index = X.index)
return X
def _scale_data(self, X: pd.DataFrame):
if self._scaler is not None:
X = pd.DataFrame(self._scaler.transform(X), columns = self._X.columns)
return X
def prepare(self):
X_train, X_test, y_train, y_test = self._shuffle_split()
self._fit_imputer(X_train)
X_train = self._impute_data(X_train)
X_test = self._impute_data(X_test)
self._fit_scaler(X_train)
self._X_train_fitted = self._scale_data(X_train)
self._X_test_fitted = self._scale_data(X_test)
self._y_train = y_train
self._y_test = y_test
def prepare_and_train(self):
self.prepare()
return self.train()
def train(self): #, epoch=None, batch=None
self._model.fit(self.X_train, self.y_train) #, batch_size=batch, epochs=epoch
self._y_preds = self._model.predict(self.X_train)
return self.metrics(self.y_train, self.y_preds)
def test(self):
return self.metrics(self.y_test, self._model.predict(self.X_test))
def metrics(self, y_true = None, y_pred = None):
pass
class ClassificationModeling(Modeling):
def __init__(self,
data: pd.DataFrame,
target_name: str,
shuffle_splitter: BaseShuffleSplit,
imputer: BaseImputer,
model: BaseModel,
scaler = None,
beta: int = 1,
classification: str = 'binary'):
super().__init__(data, target_name, shuffle_splitter, imputer, model, scaler)
self.beta = beta
self.classification = classification
def metrics(self, y_true = None, y_pred = None):
if y_true is None and y_pred is None:
y_true = self.y_train
y_pred = self.y_preds
return ({'matrix': confusion_matrix(y_true, y_pred),
'accuracy': accuracy_score(y_true, y_pred),
'precision': precision_score(y_true, y_pred, average=self.classification),
'recall': recall_score(y_true, y_pred, average=self.classification),
'f1': f1_score(y_true, y_pred),
'f{}'.format(self.beta) : fbeta_score(y_true, y_pred, average=self.classification, beta=self.beta) } )
class NNClassificationModeling(ClassificationModeling):
def __init__(self,
data: pd.DataFrame,
target_name: str,
shuffle_splitter: BaseShuffleSplit,
imputer: BaseImputer,
model: BaseModel,
scaler = None,
beta: int = 1,
classification: str = 'binary', tb_callback = TensorBoard(log_dir="logs/", histogram_freq=1)):
super().__init__(data, target_name, shuffle_splitter, imputer, model, scaler, beta, classification)
self.tb_callback=tb_callback
def train(self, epoch, batch):
logDir = "logs/{epoch}-{batchsize}-{time}".format(epoch=epoch, batchsize=batch, time=time.time())
self.tb_callback.log_dir = logDir
self._model.fit(self.X_train, self.y_train, batch_size=batch, epochs=epoch, validation_data=(self.X_test, self.y_test), callbacks=[self.tb_callback])
self._y_preds = self._model.predict(self.X_train)
return self.metrics(self.y_train, self.y_preds)
def metrics(self, y_true = None, y_pred = None):
if y_true is None and y_pred is None:
y_true = self.y_train
y_pred = self.y_preds
y_pred = pd.Series(y_pred.reshape((y_pred.shape[1], y_pred.shape[0]))[0], index=y_true.index)
y_pred = pd.Series( (y_pred>0.5).astype(int), index=y_true.index)
return super().metrics(y_true,y_pred)
###Output
_____no_output_____
###Markdown
Baseline Model - Logistic Regression
###Code
baseline = ClassificationModeling(df,'# label',
StratifiedShuffleSplit(n_splits=1, test_size=0.3, random_state=12343),
None,
LogisticRegression(penalty='l1', solver='saga', random_state=12343),
StandardScaler(), beta=2)
baseline.prepare()
baseline_results = pd.DataFrame()
for i in [0.0001, 0.0005, 0.001, .005, 1, 100]:
baseline.model.C = i
baseline_results = baseline_results.append({"C": i,
"Train Accuracy": round( baseline.train()['accuracy'], 4),
"Test Accuracy": round( baseline.test()['accuracy'], 4)},
ignore_index=True)
sns.lineplot(data=baseline_results, x='C', y='Train Accuracy', color='blue')
sns.lineplot(data=baseline_results, x='C', y='Test Accuracy', color='red')
plt.title('Tuning C for Logistic Regression with L1')
plt.legend(['Train', 'Test'])
plt.xscale('log')
plt.axvline(0.001, color='black', ls='--')
plt.show()
baseline.model.C = 0.001
baseline.train() #epoch=None, batch=None
baseline.test()
###Output
_____no_output_____
###Markdown
Feature Importance with L1 Regularization
###Code
feat_coef = []
feat = zip(baseline.X_train.columns, baseline.model.coef_[0])
[feat_coef.append([i,j]) for i,j in feat]
feat_coef = pd.DataFrame(feat_coef, columns = ['feature','coef'])
top_feat_baseline = feat_coef.loc[abs(feat_coef['coef'])>0].sort_values(by='coef')
feat_plot = sns.barplot(data=top_feat_baseline, x='feature', y='coef', palette = "ch:s=.25,rot=-.25")
plt.xticks(rotation=90)
plt.title('LR Feature Importance with L1')
plt.show()
###Output
_____no_output_____
###Markdown
Eliminated Features from L1 Regularization
###Code
list( feat_coef.loc[feat_coef['coef']==0, 'feature'] )
###Output
_____no_output_____
###Markdown
See if ElasticNet helps with overfitting features 'mass' and 'f6'
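For reference, and hedged since only `l1_ratio=0.5` is used below, scikit-learn's elastic-net penalty for logistic regression blends the two regularizers roughly as

$$\rho \lVert w \rVert_1 + \frac{1-\rho}{2} \lVert w \rVert_2^2,$$

where $\rho$ is `l1_ratio` and the penalty is traded off against the data-fit term through `C`; mixing in some L2 softens the aggressive coefficient elimination of pure L1.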
###Code
baseline.model = LogisticRegression(penalty='elasticnet', l1_ratio=0.5, C=0.001, solver='saga', random_state=12343)
print( 'ElasticNet Training Accuracy =', round( baseline.prepare_and_train()['accuracy'], 4) )
print( 'ElasticNet Test Accuracy =', round( baseline.test()['accuracy'], 4) )
###Output
ElasticNet Training Accuracy = 0.8367
ElasticNet Test Accuracy = 0.8368
###Markdown
Features 'mass' and 'f6' are most important even with some L2 regularization
###Code
feat_coef = []
feat = zip(baseline.X_train.columns, baseline.model.coef_[0])
[feat_coef.append([i,j]) for i,j in feat]
feat_coef = pd.DataFrame(feat_coef, columns = ['feature','coef'])
top_feat_baseline = feat_coef.loc[abs(feat_coef['coef'])>0].sort_values(by='coef')
feat_plot = sns.barplot(data=top_feat_baseline, x='feature', y='coef', palette = "ch:s=.25,rot=-.25")
plt.xticks(rotation=90)
plt.title('LR Feature Importance with ElasticNet')
plt.show()
list( feat_coef.loc[feat_coef['coef']==0, 'feature'] )
###Output
_____no_output_____
###Markdown
Neural Network Modeling
###Code
NN = NNClassificationModeling(df,'# label',
StratifiedShuffleSplit(n_splits=1, test_size=0.3, random_state=12343),
None,
None,
StandardScaler(), beta=2)
NN.prepare()
NN.model = tf.keras.Sequential() # model object
NN.model.add( tf.keras.layers.Input( shape=(NN.X_train.shape[1],) ) )
# specify data shape for first input layer
# columns (features) only, # rows specified by batch size later in fit()
NN.model.add( tf.keras.layers.Dense(200, activation = 'relu') )
# add these layers sequentially with decreasing # neurons
NN.model.add( tf.keras.layers.Dense(50, activation = 'relu') )
NN.model.add( tf.keras.layers.Dense(1, activation = 'sigmoid') )
# Final layer, Regression Output
# For Classification, use activation = 'sigmoid' or 'softmax' for Final layer
NN.model.compile(optimizer='adam', loss='BinaryCrossentropy', metrics=['accuracy'])
# Have to compile model after specifying layers
NN.train(batch = 100000, epoch=40)
NN.test()
%load_ext tensorboard
%tensorboard --logdir logs
###Output
_____no_output_____ |
todo.ipynb | ###Markdown
VisualDTA Big data analysis
###Code
ds = pd.read_csv('/Users/john/data/twitter/tweets_merged.csv',
dtype={
'in_reply_to_status_id': object,
},
parse_dates=['timestamp']
)
ds.shape
ds.dtypes
nonconv_tweets = ds[(ds.in_reply_to_status_id.isnull()) & (ds.num_child_replies==1)]
nonconv_tweets.shape
root_tweets = ds[(ds.in_reply_to_status_id.isnull()) & (ds.num_child_replies>1)]
root_tweets.shape
nonconv_tweets[['id']].to_csv('data/nonconvids.csv', index=False,header=False)
root_tweets[['id']].to_csv('data/convids.csv', index=False,header=False)
ds['is_conversation'] = -1
ds.loc[(ds.in_reply_to_status_id.isnull()) & (ds.num_child_replies==1), 'is_conversation'] = 0
ds.loc[(ds.in_reply_to_status_id.isnull()) & (ds.num_child_replies>1), 'is_conversation'] = 1
ds.groupby('is_conversation').size()
dsmin = ds[['id', 'is_conversation']]
dsmin = dsmin[dsmin.is_conversation>=0]
dsmin.shape
###Output
_____no_output_____
###Markdown
once we generate the ids and extract the info from Cassandra, let's analyze
###Code
urls = pd.read_csv('data/tweets_urls.csv', header=None, names=['id', 'url'])
urls.shape
urls = urls.merge(dsmin, on='id', copy=False)
print(urls.shape)
urls.head()
from urllib.parse import urlparse
o = urlparse('http://www.cwi.nl:80/%7Eguido/Python.html')
o
import requests
r = requests.head('http://bit.ly/2lY7Wm3', allow_redirects=True, verify=False)
r.url
def is_short_url(url):
shortdomains = ['youtu.be','tinyurl.com','lnkd.in']
for sd in shortdomains:
if sd in url:
return True
return False
is_short_url('youtu.be/videos=?xl')
def get_url_domain(url):
domain=urlparse(url).netloc
if len(domain) < 7 or is_short_url(domain):
r=requests.head(url, allow_redirects=True, verify=False)
domain=urlparse(r.url).netloc
return domain
urls['domain'] = urls.url.apply(lambda u: get_url_domain(u))
urls.head()
domains=urls.groupby('domain').size()
domains = domains.reset_index(name='count')
domains.sort_values('count', ascending=False, inplace=True)
#domains[domains.domain.str.len()<7]
domains.head()
urls.dtypes
urlconvrate=urls.groupby('domain').agg({'url':'count',
'is_conversation':'sum'})
urlconvrate=urlconvrate.reset_index()
urlconvrate['rate'] = urlconvrate['is_conversation'] / urlconvrate['url']
urlconvrate.sort_values('url', ascending=False, inplace=True)
urlconvrate.head()
y=urlconvrate.rate.head(10)
x=range(len(y))
plt.plot(x, y, marker='x')
hashtags = pd.read_csv('data/tweets_hashtags.csv', header=None, names=['id', 'hashtag'])
hashtags.shape
hashtags = hashtags.merge(dsmin, on='id', copy=False)
print(hashtags.shape)
hashtags.head()
htconvrate=hashtags.groupby('hashtag').agg({'id':'count',
'is_conversation':'sum'})
htconvrate=htconvrate.reset_index()
htconvrate['rate'] = htconvrate['is_conversation'] / htconvrate['id']
htconvrate.sort_values('id', ascending=False, inplace=True)
htconvrate.head()
y=htconvrate.rate.head(10)
x=range(len(y))
plt.scatter(x, y, marker='o')
###Output
_____no_output_____
###Markdown
conversation rate vs followers
###Code
root_tweets = ds[(ds.in_reply_to_status_id.isnull()) & (ds.num_child_replies>1)]
root_tweets.shape
crf = root_tweets.groupby('screen_name').agg(
{
'followers_count': 'max',
'statuses_count' : 'max',
'id': 'count',
'is_conversation' : 'sum',
})
crf.reset_index(inplace=True)
crf.rename(columns={'id':'tweets'}, inplace=True)
crf.sort_values('is_conversation', ascending=False, inplace=True)
crf.head()
x=crf.followers_count.values
y=crf.is_conversation.values
plt.scatter(x, y, marker='o')
x=crf.statuses_count.values
y=crf.is_conversation.values
plt.scatter(x, y, marker='o')
###Output
_____no_output_____ |
Notebooks/02-Review-Acceptance-Analysis.ipynb | ###Markdown
Setup environment
###Code
from google.colab import drive
drive.mount('/content/drive')
%cd drive/MyDrive/'Colab Notebooks'/
from utility import getComments, getReviewFinal
%cd Thesis/PeerRead/code/accept_classify/
###Output
/content/drive/My Drive/Colab Notebooks/Thesis/PeerRead/code/accept_classify
###Markdown
Import library and Load Data
###Code
path = '../../my_data/Figures/02-Score-Acceptance-'
import json, pickle, os
import pandas as pd
import nltk
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from tqdm.notebook import tqdm
from sklearn.metrics import accuracy_score, roc_curve, auc, confusion_matrix, plot_roc_curve
sns.set_context("talk", font_scale=2)
r_test_path = "../../data/iclr_2017/test/reviews/"
test_files = list(map(lambda f: r_test_path+f, os.listdir(r_test_path)))
test_target = []
for name in tqdm(test_files):
if getComments(name):
test_target.append(getReviewFinal(name))
###Output
_____no_output_____
###Markdown
BERT
###Code
with open('../../my_data/02-Review-Acceptance/bert_outcome-16', "rb") as f:
outcome_bert=pickle.load(f)
df_bert=pd.DataFrame(outcome_bert)
df_bert['avg'] = df_bert.apply(lambda row: (row['roc_auc']+row['accuracy'])/2, axis=1)
df_bert.sort_values('avg',ascending=False).head(5)
PROBS_b = np.array([out['probs'] for out in outcome_bert])
ACC_b = np.array([out['accuracy'] for out in outcome_bert])
FPR_b = np.array([out['fpr'] for out in outcome_bert])
TPR_b = np.array([out['tpr'] for out in outcome_bert])
AUC_b = np.array([out['roc_auc'] for out in outcome_bert])
cf_matrix_b = confusion_matrix(test_target, PROBS_b[4].argmax(axis=1))
print(cf_matrix_b)
from matplotlib import colors
sns.set_context("notebook", font_scale=2.5)
fig,ax = plt.subplots(figsize = (8,8))
cmap = colors.ListedColormap(['lightcoral','lightseagreen','lightcoral','lightseagreen'])
bounds=[0, 5.5, 6.5, 7.5, 12]
norm = colors.BoundaryNorm(bounds, cmap.N)
heatmap = sns.heatmap(cf_matrix_b, annot=True,
cmap=cmap, norm=norm, linewidths=.5, cbar=False)
plt.xlabel("predicted label")
plt.ylabel("true label")
plt.xticks([0.5, 1.5], ['reject', 'accept'])
plt.yticks([0.5, 1.5], ['reject', 'accept'])
plt.savefig(path+'bert_confusion-matrix', dpi=400, bbox_inches = 'tight', pad_inches = 0 )
sns.set_context("notebook", font_scale=2)
fig,ax=plt.subplots(figsize = (10,6))
ax.plot(FPR_b[4], TPR_b[4],label='ROC', color='darkslateblue', lw=4)
ax.plot([0, 1], [0, 1],label='random chance', color='crimson', lw=2, linestyle='dashed')
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.legend()
plt.savefig(path+'bert_ROC', dpi=400, bbox_inches = 'tight', pad_inches = 0 )
from sklearn.metrics import classification_report
target_names = ['Reject', 'Accept']
print(classification_report(test_target, PROBS_b[4].argmax(axis=1), target_names=target_names))
df_bert.describe()
sns.set_context("notebook", font_scale=2)
fig, ax = plt.subplots(figsize=(10,6))
sns.histplot(x=ACC_b, alpha=0.5, color= 'darkslateblue', ax=ax, binwidth=0.01)
sns.despine()
plt.ylabel('count')
plt.xlabel('accuracy (%)')
ax.set_yticks([1,2,3])
ax.set_xticks([0.52, 0.54, 0.56, 0.58,0.60, 0.62, 0.64, 0.66, 0.68])
ax.set_xticklabels(['52','54','56','58','60', '62', '64', '66', '68'])
plt.savefig(path+'bert_ACC', dpi=400, bbox_inches = 'tight', pad_inches = 0 )
sns.displot(x=ACC_b, alpha=0.5, color= 'darkslateblue')
sns.rugplot(x=ACC_b, height=.1, color='black')
plt.ylabel('Count')
plt.xlabel('Accuracy')
plt.ylim(0,4)
plt.savefig(path+'bert_ACC', dpi=400, bbox_inches = 'tight', pad_inches = 0 )
sns.displot(x=AUC_b, alpha=0.5, color='darkslateblue')
sns.rugplot(x=AUC_b, height=.1, color='black')
plt.ylabel('Count')
plt.xlabel('ROC AUC')
plt.savefig(path+'bert_AUC', dpi=400, bbox_inches = 'tight', pad_inches = 0 )
sns.displot(data=df_bert, x='avg', alpha=0.5, color='darkslateblue')
sns.rugplot(data=df_bert, x='avg', height=.1, color='black')
plt.ylabel('Count')
plt.xlabel('Average')
plt.ylim(0,4)
plt.savefig(path+'bert_Avg', dpi=400, bbox_inches = 'tight', pad_inches = 0 )
data=PROBS_b[4]
outcome=np.argmax(data,axis=1)
probs = [data[i,outcome[i]] for i in range(len(data))]
outcome_str = list(map(lambda x: 'accept' if x==1 else 'reject', outcome))
df_ar = pd.DataFrame(np.vstack((probs,outcome_str)).T,columns=['probs','outcome'])
df_ar["probs"] = pd.to_numeric(df_ar["probs"])
sns.set_context("notebook", font_scale=2)
fig, ax = plt.subplots(figsize=(12,6))
sns.histplot(data=df_ar, x='probs', hue='outcome', multiple='stack', palette='bwr_r', bins=10, ax=ax)
sns.despine()
plt.ylabel('count')
plt.xlabel('probability')
plt.legend(['accept', 'reject'])
plt.xlim(0.48, 1.01)
plt.ylim(0,10)
plt.savefig(path+'bert_prob', dpi=400, bbox_inches = 'tight', pad_inches = 0 )
###Output
_____no_output_____
###Markdown
SciBert
###Code
with open('../../my_data/02-Review-Acceptance/scibert_outcome-16', "rb") as f:
outcome_scibert=pickle.load(f)
df_scibert=pd.DataFrame(outcome_scibert)
df_scibert['avg'] = df_scibert.apply(lambda row: (row['roc_auc']+row['accuracy'])/2, axis=1)
df_scibert.sort_values('avg',ascending=False).head(5)
PROBS_s = np.array([out['probs'] for out in outcome_scibert])
ACC_s = np.array([out['accuracy'] for out in outcome_scibert])
FPR_s = np.array([out['fpr'] for out in outcome_scibert])
TPR_s = np.array([out['tpr'] for out in outcome_scibert])
AUC_s = np.array([out['roc_auc'] for out in outcome_scibert])
cf_matrix = confusion_matrix(test_target, PROBS_s[8].argmax(axis=1))
print(cf_matrix)
from matplotlib import colors
sns.set_context("notebook", font_scale=2.5)
fig,ax = plt.subplots(figsize = (8,8))
cmap = colors.ListedColormap(['lightcoral','lightseagreen','lightcoral','lightseagreen'])
bounds=[0, 5, 7, 10, 12]
norm = colors.BoundaryNorm(bounds, cmap.N)
heatmap = sns.heatmap(cf_matrix, annot=True,
cmap=cmap, norm=norm, linewidths=.5, cbar=False)
plt.xlabel("predicted label")
plt.ylabel("true label")
plt.xticks([0.5, 1.5], ['reject', 'accept'])
plt.yticks([0.5, 1.5], ['reject', 'accept'])
plt.savefig(path+'scibert_confusion-matrix', dpi=400, bbox_inches = 'tight', pad_inches = 0 )
sns.set_context("notebook", font_scale=2)
fig,ax=plt.subplots(figsize = (10,6))
ax.plot(FPR_s[8], TPR_s[8],label='ROC', color='darkslateblue', lw=4)
ax.plot([0, 1], [0, 1],label='random chance', color='crimson', lw=2, linestyle='dashed')
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.legend()
plt.savefig(path+'scibert_ROC', dpi=400, bbox_inches = 'tight', pad_inches = 0 )
from sklearn.metrics import classification_report
target_names = ['Reject', 'Accept']
print(classification_report(test_target, PROBS_s[8].argmax(axis=1), target_names=target_names))
df_scibert.describe()
sns.set_context("notebook", font_scale=2)
fig, ax = plt.subplots(figsize=(10,6))
sns.histplot(x=ACC_s, alpha=0.5, color= 'darkslateblue', ax=ax, binwidth=0.01)
sns.despine()
plt.ylabel('count')
plt.xlabel('accuracy (%)')
ax.set_yticks([1,2,3])
ax.set_xticks([0.45, 0.50, 0.55, 0.60, 0.65, 0.70])
ax.set_xticklabels(['45','50','55','60', '65', '70'])
plt.savefig(path+'scibert_ACC', dpi=400, bbox_inches = 'tight', pad_inches = 0 )
sns.displot(x=ACC_s, alpha=0.5, color= 'darkslateblue')
sns.rugplot(x=ACC_s, height=.1, color='black')
plt.ylabel('Count')
plt.xlabel('Accuracy')
plt.ylim(0,4)
plt.savefig(path+'scibert_ACC', dpi=400, bbox_inches = 'tight', pad_inches = 0)
sns.displot(x=AUC_s, alpha=0.5, color='darkslateblue')
sns.rugplot(x=AUC_s, height=.1, color='black')
plt.ylabel('Count')
plt.xlabel('ROC AUC')
plt.savefig(path+'scibert_AUC', dpi=400, bbox_inches = 'tight', pad_inches = 0 )
sns.displot(data=df_scibert, x='avg', alpha=0.5, color='darkslateblue')
sns.rugplot(data=df_scibert, x='avg', height=.1, color='black')
plt.ylabel('Count')
plt.xlabel('Average')
plt.ylim(0,4)
plt.savefig(path+'scibert_Avg', dpi=400, bbox_inches = 'tight', pad_inches = 0 )
data=PROBS_s[8]
outcome=np.argmax(data,axis=1)
probs = [data[i,outcome[i]] for i in range(len(data))]
outcome_str = list(map(lambda x: 'accept' if x==1 else 'reject', outcome))
df_ar = pd.DataFrame(np.vstack((probs,outcome_str)).T,columns=['probs','outcome'])
df_ar["probs"] = pd.to_numeric(df_ar["probs"])
sns.set_context("notebook", font_scale=2)
fig, ax = plt.subplots(figsize=(12,6))
sns.histplot(data=df_ar, x='probs', hue='outcome', multiple='stack', palette='bwr_r', bins=10, ax=ax)
sns.despine()
plt.ylabel('count')
plt.xlabel('probability')
plt.legend(['accept', 'reject'])
plt.xlim(0.48, 1.01)
plt.ylim(0,16)
ax.set_yticks([0,2,4,6,8,10,12,14,16])
plt.savefig(path+'scibert_prob', dpi=400, bbox_inches = 'tight', pad_inches = 0 )
###Output
_____no_output_____ |
python/notebooks/04 Level Curves (Figs. 11-12).ipynb | ###Markdown
Estimate level curves using GPs and ax Simulation of a Single Configuration
###Code
def simulate(N=1e7,
alpha=0.570,
beta=0.011,
gamma=0.456,
delta=0.011,
compensation=0.,
lockdown_effectiveness=0.175,
duty_cycle=1,
period=7,
n_steps=365, **kwargs):
steps_high = duty_cycle
steps_low = period-duty_cycle
suppression_start = 20
switching_start = 50
# ODE
step_size = 0.001
# initial condition
I = 500/6
D = 20
A = 1
R = 2
T = H = E = 0
S = N - I - D - A - R - T - H - E
s0 = np.array([S, I, D, A, R, T, H, E])
# default latent parameters
epsilon = 0.171
theta = 0.371
zeta = 0.125
eta = 0.125
mu = 0.012
nu = 0.027
tau = 0.003
h = 0.034
rho = 0.034
kappa = 0.017
xi = 0.017
sigma = 0.017
model = clds.agents.BatchSIDARTHE(s0=s0,
alpha='a', # variable
beta='b', # variable
gamma='g', # variable
delta='d', #variable
epsilon=epsilon,
theta=theta,
zeta=zeta,
eta=eta,
mu=mu,
nu=nu,
tau=tau,
h=h,
rho=rho,
kappa=kappa,
xi=xi,
sigma=sigma,
N=N,
step_size=step_size)
fpsp = clds.agents.BatchFPSP(beta_high=1,
beta_low=lockdown_effectiveness,
steps_high=steps_high,
steps_low=steps_low,
suppression_start=suppression_start,
switching_start=switching_start)
env = clds.Composite()
env.add(model,
pre=lambda x: {'a': x['fpsp']*alpha,
'b': x['fpsp']*beta,
'g': x['fpsp']*gamma,
'd': x['fpsp']*delta},
out='model')
env.add(fpsp, out='fpsp')
#print(model.R0({'a': alpha, 'b': beta, 'g': gamma, 'd': delta}))
o = [env.reset()]
for i in range(n_steps):
if i > suppression_start:
fpsp.beta_high = (1+compensation)
o.append(env.step()[0])
s = np.array([x['model'] for x in o]).reshape(-1, 8)/N*100
p = np.array([x['fpsp'] for x in o]).reshape(-1, 1)
return s, p
###Output
_____no_output_____
###Markdown
Evaluation Wrapper Function
###Code
# evaluate configuration
def eval_fn(params):
config = dict(params)
# R0 -> alpha, beta, gamma, delta
R0_base = 2.38461532
correction = config['R0']/R0_base
config['alpha'] = 0.570 * correction
config['beta'] = 0.011 * correction
config['gamma'] = 0.456 * correction
config['delta'] = 0.011 * correction
# quarantine effectiveness -> leakage
config['lockdown_effectiveness'] = config['q']
config['compensation'] = config['d']
config['period'] *= 7
config['duty_cycle'] = int(config['X']/7*config['period'])
s, p = simulate(**config)
#outcome measure: sum of daily I+Q from step 50 onwards
total_infected_daily = np.sum(s[:,1:6], axis=1)
peak_daily = total_infected_daily[50:].max()
return {'peak_daily': (peak_daily, 0.0)} # dict of tuples (mean, standard error)
###Output
_____no_output_____
###Markdown
Experiments Initial set of 100 simulations
###Code
import ax
from ax.modelbridge.registry import Models
from ax.plot.slice import plot_slice
from ax.plot.contour import plot_contour
from ax.utils.notebook.plotting import render
# Developer API
batch_size = 1 # number of parallel GPEI trials
n_sobol=10
n_gpei=1
# parameter search space
params_full = [
# ('R0', mu=2.675739, sd=0.5719293, lower=0)
{"name": "R0", "type": "range", "bounds": [1.0, 4.4]}, #"type": "fixed", "value": 2.78358},
{"name": "q", "type": "range", "bounds": [1e-6, 4.0]}, #"type": "fixed", "value": 0.175},
{"name": "d", "type": "range", "bounds": [0.0, 1.0]}, #"type": "fixed", "value": 1.0},
{"name": "X", "parameter_type": ax.ParameterType.INT, "type": "range", "bounds": [1, 7]},
{"name": "period", "parameter_type": ax.ParameterType.INT, "type": "fixed", "value": 1}
]
params = {}
params['days_vs_R0']=[
{"name": "R0", "type": "range", "bounds": [1.0, 4.4]},
{"name": "q", "type": "fixed", "value": 0.175},
{"name": "d", "type": "fixed", "value": 0.},
{"name": "X", "parameter_type": ax.ParameterType.INT, "type": "range", "bounds": [1, 7]}, # , "log_scale": True},
{"name": "period", "parameter_type": ax.ParameterType.INT, "type": "fixed", "value": 1}
]
params['days_vs_q']=[
{"name": "R0", "type": "fixed", "value": 2.38461532},
{"name": "q", "type": "range", "bounds": [1e-6, 0.5]},
{"name": "d", "type": "fixed", "value": 0.},
{"name": "X", "parameter_type": ax.ParameterType.INT, "type": "range", "bounds": [1, 7]}, # , "log_scale": True},
{"name": "period", "parameter_type": ax.ParameterType.INT, "type": "fixed", "value": 1}
]
params['days_vs_d']=[
{"name": "R0", "type": "fixed", "value": 2.38461532},
{"name": "q", "type": "fixed", "value": 0.175},
{"name": "d", "type": "range", "bounds": [0.0, 1.0]}, #"type": "fixed", "value": 1.0},
{"name": "X", "parameter_type": ax.ParameterType.INT, "type": "range", "bounds": [1, 7]}, # , "log_scale": True},
{"name": "period", "parameter_type": ax.ParameterType.INT, "type": "fixed", "value": 1}
]
params['period_vs_R0']=[
{"name": "R0", "type": "range", "bounds": [1.0, 4.4]},
{"name": "q", "type": "fixed", "value": 0.175},
{"name": "d", "type": "fixed", "value": 0.},
{"name": "X", "type": "fixed", "value": 2},
{"name": "period", "parameter_type": ax.ParameterType.INT, "type": "range", "bounds": [1, 7]}
]
params['period_vs_q']=[
{"name": "R0", "type": "fixed", "value": 2.38461532},
{"name": "q", "type": "range", "bounds": [1e-6, 0.5]},
{"name": "d", "type": "fixed", "value": 0.},
{"name": "X", "type": "fixed", "value": 2},
{"name": "period", "parameter_type": ax.ParameterType.INT, "type": "range", "bounds": [1, 7]}
]
params['period_vs_d']=[
{"name": "R0", "type": "fixed", "value": 2.38461532},
{"name": "q", "type": "fixed", "value": 0.175},
{"name": "d", "type": "range", "bounds": [0.0, 1.0]}, #"type": "fixed", "value": 1.0},
{"name": "X", "type": "fixed", "value": 2},
{"name": "period", "parameter_type": ax.ParameterType.INT, "type": "range", "bounds": [1, 7]}
]
def do_experiment(experiment):
parameters = params[experiment]
outfile = f'results/{experiment}_SIDARTHE'
print("Experiment", experiment)
# NOTE: if search space is needed (for developer API)
from ax.service.utils.instantiation import parameter_from_json
param_list = [parameter_from_json(p) for p in parameters]
search_space = ax.SearchSpace(parameters=param_list, parameter_constraints=None)
exp = ax.SimpleExperiment(
name="test_experiment",
search_space=search_space,
evaluation_function=eval_fn,
objective_name="peak_daily",
minimize=True
)
# repeatedly evaluate SOBOL and save to disk
sobol = Models.SOBOL(exp.search_space)
for i in range(10):
print(f"Running trials {i*n_sobol} to {(i+1)*n_sobol}..")
exp.new_batch_trial(generator_run=sobol.gen(n_sobol))
exp.eval()
ax.save(exp, outfile+f'_{i}.json')
for i in range(n_gpei):
gp = Models.GPEI(experiment=exp, data=exp.eval())
print(f"GPEI trial {i+1}")
exp.new_trial(generator_run=gp.gen(batch_size))
experiments = ['days_vs_R0', 'days_vs_d', 'days_vs_q', 'period_vs_R0', 'period_vs_d', 'period_vs_q']
for e in experiments:
do_experiment(e)
###Output
_____no_output_____
###Markdown
Resume and complete remaining 900 simulations
###Code
# Developer API
batch_size = 1 # number of parallel GPEI trials
n_continue=10
n_sobol=10
n_gpei=1
def do_experiment(experiment):
parameters = params[experiment]
outfile = f'results/{experiment}_SIDARTHE'
print("Experiment", experiment)
param_list = [parameter_from_json(p) for p in parameters]
search_space = ax.SearchSpace(parameters=param_list, parameter_constraints=None)
infile = outfile+f'_{n_continue-1}.json'
exp = ax.load(infile)
exp.evaluation_function = eval_fn
# repeatedly evaluate SOBOL and save to disk
sobol = Models.SOBOL(exp.search_space)
for i in range(90):
print(f"Running trials {(i+n_continue)*n_sobol} to {(i+1+n_continue)*n_sobol}..")
exp.new_batch_trial(generator_run=sobol.gen(n_sobol))
exp.eval()
ax.save(exp, outfile+f'_{i+n_continue}.json')
for i in range(n_gpei):
gp = Models.GPEI(experiment=exp, data=exp.eval())
print(f"GPEI trial {i+1}")
exp.new_trial(generator_run=gp.gen(batch_size))
for e in experiments:
do_experiment(e)
###Output
_____no_output_____
###Markdown
Visualisation
###Code
def save_fig(ax_config, filename):
fig = go.Figure(data=ax_config.data)
fig.update_layout(
font=dict(
size=18
),
xaxis=dict(tickfont_size=18),
yaxis=dict(tickfont_size=18)
)
fig.write_image(filename+'.png')
fig.write_image(filename+'.eps')
def plot_level_curve(experiment, n_trials, x, y, metric='peak_daily'):
idx = int(n_trials-1)
exp = ax.load(f'results/{experiment}_SIDARTHE_{idx}.json')
model = Models.GPEI(experiment=exp, data=exp.eval())
ax_config = plot_contour(
model,
param_x=x,
param_y=y,
metric_name=metric
)
save_fig(ax_config, f'results/{experiment}_SIDARTHE_{idx}')
render(ax_config)
plot_level_curve('days_vs_R0', 100, x='X', y='R0')
plot_level_curve('days_vs_d', 100, x='X', y='d')
plot_level_curve('days_vs_q', 100, x='X', y='q')
plot_level_curve('period_vs_R0', 100, x='period', y='R0')
plot_level_curve('period_vs_d', 100, x='period', y='d')
plot_level_curve('period_vs_q', 100, x='period', y='q')
###Output
_____no_output_____ |
demoEDBT/edbtDemo.ipynb | ###Markdown
RulER: Scaling Up Record-level Matching Rules Preparing data for RulER
###Code
import RulER.Commons.implicits
import RulER.Commons.implicits._
import RulER.DataStructure.Rule
import RulER.DataStructure.ThresholdTypes.ED
import RulER.DataStructure.ThresholdTypes.JS
import RulER.Commons.CommonFunctions.loadProfilesAsDF
import RulER.SimJoins.EDJoin.EDJoin
import RulER.SimJoins.PPJoin.PPJoin
import java.util.Calendar
//Load the dataset
val imdb = loadProfilesAsDF("imdb.csv")
%%dataframe --limit 1
imdb
###Output
_____no_output_____
###Markdown
Defining a rule First, we define a complex rule to find the matches
###Code
//Predicates
val r1 = Rule("title", JS, 0.8)
val r2 = Rule("title", ED, 3)
val r3 = Rule("director", JS, 0.7)
val r4 = Rule("cast", JS, 0.7)
val r5 = Rule("country", ED, 2)
val r6 = Rule("plot", JS, 0.8)
//Rule
val rule = (r1 and r3) or (r2 and r4) or (r5 and r6)
###Output
_____no_output_____
###Markdown
Running the rule by using existing algorithms By using the existing algorithms (e.g. PPJoin, EDJoin) it is possible to execute the rule as a combination of intersections and unions
###Code
val tStart = Calendar.getInstance().getTimeInMillis
//Obtaining the matches with PPJoin/EDJoin
val and1 = PPJoin(imdb, r1).intersect(PPJoin(imdb, r3))
val and2 = EDJoin(imdb, r2).intersect(PPJoin(imdb, r4))
val and3 = EDJoin(imdb, r5).intersect(PPJoin(imdb, r6))
//Final results
val res = and1.union(and2).union(and3).distinct()
val tmp = imdb.join(res, imdb("_rowId") === res("id1"))
val results = imdb.join(tmp, tmp("id2") === imdb("_rowId"))
results.cache()
results.count()
val tEnd = Calendar.getInstance().getTimeInMillis
println("Execution time (s) "+(tEnd-tStart)/1000.0)
%%dataframe --limit 1
results
###Output
_____no_output_____
###Markdown
Running the rule by using RulER
###Code
val tStart = Calendar.getInstance().getTimeInMillis
val results = imdb.joinWithRules(imdb, rule)
results.count()
val tEnd = Calendar.getInstance().getTimeInMillis
println("Execution time (s) "+(tEnd-tStart)/1000.0)
%%dataframe --limit 1
results
###Output
_____no_output_____
###Markdown
Join multiple datasets example
###Code
val roger_ebert = loadProfilesAsDF("roger_ebert.csv")
val rotten_tomatoes = loadProfilesAsDF("rotten_tomatoes.csv")
%%dataframe --limit 1
roger_ebert
%%dataframe --limit 1
rotten_tomatoes
###Output
_____no_output_____
###Markdown
Defining the rule
###Code
val r1 = Rule("movie_name", JS, 0.8, "Name")
val r2 = Rule("actors", JS, 0.5, "Actors")
val r3 = Rule("directors", ED, 2, "Director")
val rule = (r1 and r2) or (r1 and r3)
###Output
_____no_output_____
###Markdown
Join the datasets by using the rule
###Code
val matches = roger_ebert.joinWithRules(rotten_tomatoes, rule)
%%dataframe --limit 1
matches
###Output
_____no_output_____ |
Tarea_1/problema2.ipynb | ###Markdown
_**HOMEWORK I - TIME SERIES**_ - Bastian Araya- Franco Gonzalez- Daniela Díaz
###Code
import os
import pandas as pd
import numpy as np
import matplotlib as mp
import datetime
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
import scipy.optimize as so
###Output
_____no_output_____
###Markdown
Problem 2 Analyze, compare and comment on the USA *accidental deaths* series and the Chilean *sinestrialidad* (road accident) series - Importing the datasets
###Code
sinestrialidad= pd.read_csv('evolucionChile.csv',decimal=',',thousands='.')
sinestrialidad.head()
accidental_deaths=pd.read_csv('deaths.txt',delim_whitespace=True)
accidental_deaths.head()
# we need to clean the data so we can store the dates as datetime values
df=pd.DataFrame({'year':accidental_deaths['Year'].tolist(),
'month':accidental_deaths['Mont'].tolist(),
'day': [1]*len(accidental_deaths['Mont'].tolist())}) # 1 is added as a reference day; it has no effect on the analysis
accidental_deaths['date']=pd.to_datetime(df)
accidental_deaths=accidental_deaths.drop(['Mont','Year'],axis=1)
accidental_deaths.head()
###Output
_____no_output_____
###Markdown
Creating the plots
###Code
# In this block every column is plotted against the year (the year-vs-year plot is included as well, only so that the grid of plots prints in a rectangular layout, e.g. nrows=5, ncols=4)
X=sinestrialidad['Año'].tolist()
rows=10
cols=2
fig, ax = plt.subplots(rows,cols,figsize=(20,100))
count=0
for i in range(0,rows):
for j in range(0,cols):
ax[i,j].plot(X,sinestrialidad[sinestrialidad.columns.tolist()[count]],color='g',marker='o')
ax[i,j].set_title(sinestrialidad.columns.tolist()[count]+' vs Años')
ax[i,j].set_xlabel('Año')
ax[i,j].set_ylabel(sinestrialidad.columns.tolist()[count])
count+=1
plt.show()
# note that in some cases it makes more sense to combine several curves in one figure, as with the injured (serious, less serious, minor), the indicators per 10,000 vehicles (accident rate, mortality, morbidity) and the indicators per 100,000 inhabitants (accident rate, mortality, morbidity).
# Injured (serious, less serious, minor)
plt.plot(X,sinestrialidad['Lesionados_graves '].tolist(),color='r')
plt.plot(X,sinestrialidad['Lesionados_menos_graves '].tolist(),color='b')
plt.plot(X,sinestrialidad['Lesionados_leves '].tolist(),color='g')
plt.title('Lesionaos (Graves, menos graves, leves) vs Año')
red_patch = mpatches.Patch(color='red', label='Lesionados graves')
blue_patch = mpatches.Patch(color='blue', label='Lesionados menos graves')
green_patch = mpatches.Patch(color='green', label='Lesionados leves')
plt.legend(handles=[red_patch,blue_patch,green_patch])
plt.xlabel('Años')
plt.ylabel('Lesionados')
plt.show()
# Indicators per 10,000 vehicles
plt.plot(X,sinestrialidad['Indicadores_cada_10000_vehículos_Morbilidad'].tolist(),color='r')
plt.plot(X,sinestrialidad['Indicadores_cada_10000_vehículos_Mortalidad '].tolist(),color='b')
plt.plot(X,sinestrialidad['Indicadores_cada_10000_vehículos_Siniestralidad '].tolist(),color='g')
plt.title('Indicadores cada 10000 vehiculos (Morbilidad, Mortalidad, Siniestralidad) vs Año')
red_patch = mpatches.Patch(color='red', label='Morbilidad')
blue_patch = mpatches.Patch(color='blue', label='Mortalidad')
green_patch = mpatches.Patch(color='green', label='Siniestralidad')
plt.legend(handles=[red_patch,blue_patch,green_patch])
plt.xlabel('Años')
plt.ylabel('Indicadores')
plt.show()
# Indicators per 100,000 inhabitants
plt.plot(X,sinestrialidad['Indicadores_cada_100000_habitantes_Morbilidad'].tolist(),color='r')
plt.plot(X,sinestrialidad['Indicadores_cada_100000_habitantes_Mortalidad'].tolist(),color='b')
plt.plot(X,sinestrialidad['Indicadores_cada_100000_habitantes_Siniestralidad '].tolist(),color='g')
plt.title('Indicadores cada 100000 habitantes (Morbilidad, Mortalidad, Siniestralidad) vs Año')
red_patch = mpatches.Patch(color='red', label='Morbilidad')
blue_patch = mpatches.Patch(color='blue', label='Mortalidad')
green_patch = mpatches.Patch(color='green', label='Siniestralidad')
plt.legend(handles=[red_patch,blue_patch,green_patch])
plt.xlabel('Años')
plt.ylabel('Indicadores')
plt.show()
# Now plot the accidental deaths series
plt.plot(accidental_deaths['date'],accidental_deaths['accidental_deaths'].tolist())
plt.title('Muertes USA vs Fecha (Año,mes)')
plt.xlabel('Fecha')
plt.ylabel('Muertes accidentales ')
plt.show()
###Output
/home/daniela/miniconda3/envs/mat281/lib/python3.7/site-packages/pandas/plotting/_matplotlib/converter.py:103: FutureWarning: Using an implicitly registered datetime converter for a matplotlib plotting method. The converter was registered by pandas on import. Future versions of pandas will require you to explicitly register matplotlib converters.
To register the converters:
>>> from pandas.plotting import register_matplotlib_converters
>>> register_matplotlib_converters()
warnings.warn(msg, FutureWarning)
###Markdown
**Preliminary** First we do a preliminary analysis, judging by eye what we can see regarding trend and seasonality. Trend **Road crashes, Chile**- Crashes vs year: upward trend- Fatalities vs year: no clear trend- Serious injuries vs year: flat trend- Less serious injuries vs year: flat trend- Minor injuries vs year: upward trend- Total injured vs year: upward trend- Total victims vs year: upward trend- Motorization rate vs year: downward trend- Vehicles per 100 inhabitants vs year: upward trend- Indicators per 10,000 vehicles: crash rate vs year: downward trend- Indicators per 10,000 vehicles: mortality vs year: flat trend- Indicators per 10,000 vehicles: morbidity vs year: downward trend- Indicators per 100,000 inhabitants: crash rate vs year: upward trend- Indicators per 100,000 inhabitants: mortality vs year: flat trend- Indicators per 100,000 inhabitants: morbidity vs year: upward trend- Fatalities per 100 crashes vs year: downward trend- Crashes per fatality vs year: upward trend **Accidental deaths, USA**- Deaths vs month/year: constant trend Seasonality **Road crashes, Chile**: no seasonality is apparent in any of the series listed above **Accidental deaths, USA**- Deaths vs month/year: seasonality is apparent; there is a clear increase in deaths around the middle of each year, while the months at the end/beginning of the year show a drop in deaths **We will therefore apply Method S1, "small trend", for the classical decomposition model; we can do this because the dataset is small and the plot of the series suggests an approximately constant trend.** Classical method We assume $X_t = T_t + S_t + N_t$ where- $X_t$ is the time series- $T_t$ is the trend component- $S_t$ is the seasonal component with period $d$- $N_t$ is the residual (random noise) component and these satisfy- $E(N_t) = 0$- $S_{t+d} = S_t$- $\sum_{j=1}^d S_j = 0$ **S1** ("small trend")
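As a point of reference, the sketch below is an illustration with made-up numbers (none of these names come from the datasets above) of what an additive series with these three components looks like; the following cells estimate $T_t$, $S_t$ and $N_t$ from the real data.
```
import numpy as np

t = np.arange(72)                                # six years of monthly observations
trend = 0.5 * t                                  # slowly increasing trend component T_t
seasonal = 10 * np.sin(2 * np.pi * t / 12)       # period-12 seasonal component S_t
noise_term = np.random.default_rng(0).normal(0, 2, size=t.size)  # zero-mean noise N_t
x = trend + seasonal + noise_term                # the observed series X_t
```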
###Code
def tendencia(X,Y):
Tt=[]
meses=list(np.arange(1,13))
años=[]
for i in range(0,len(X)):
años.append(X[i].year)
años=list(set(años))
index_años=list(np.arange(1,len(list(set(años)))+1))
#print(meses,'\n',años,'\n',index_años,'\n')
for a in años:
sum=0
for i in range(0,len(X)):
if X[i].year == a:
sum+=Y[i]
Tt.append(sum/12)
#print(Tt)
return Tt
tend=tendencia(accidental_deaths['date'],accidental_deaths['accidental_deaths'].tolist())
print(tend)
plt.plot(accidental_deaths['date'],accidental_deaths['accidental_deaths'].tolist())
plt.title('Muertes USA vs Fecha (Año,mes)')
plt.xlabel('Fecha')
plt.ylabel('Muertes accidentales ')
plt.plot(accidental_deaths['date'],accidental_deaths['accidental_deaths'].tolist())
plt.show()
def estacionalidad(X,Y):
St=[]
meses=list(np.arange(1,13))
años=[]
for i in range(0,len(X)):
años.append(X[i].year)
años=list(set(años))
for m in meses:
suma=0
for i in range(0,len(X)):
if X[i].month== m:
#años=[3]
suma+=Y[i]-tendencia(X,Y)[X[i].year-años[0]]
St.append(suma/len(años))
#print(St)
return St
estacional=estacionalidad(accidental_deaths['date'],accidental_deaths['accidental_deaths'].tolist())
print(estacional)
def ruido(X,Y):
Nt=[]
meses=list(np.arange(1,13))
años=[]
for i in range(0,len(X)):
años.append(X[i].year)
años=list(set(años))
for i in range(0,len(X)):
Nt.append(Y[i]-tendencia(X,Y)[X[i].year-años[0]]-estacionalidad(X,Y)[X[i].month-1])
return Nt
noise=ruido(accidental_deaths['date'],accidental_deaths['accidental_deaths'].tolist())
print(noise)
# Deaths vs date
################
plt.plot(accidental_deaths['date'],accidental_deaths['accidental_deaths'].tolist())
plt.title('Muertes USA vs Fecha (Año,mes)')
plt.xlabel('Fecha')
plt.ylabel('Muertes accidentales ')
plt.show()
# Trend
#############
# extract the years
años=[]
for i in accidental_deaths['date']:
años.append(i.year)
plt.plot(list(set(años)),tend,color='green')
plt.title('Tendencia Muertes USA')
plt.xlabel('Año')
plt.ylabel('Muertes accidentales ')
plt.show()
# Seasonality
#############
plt.plot(accidental_deaths['date'],estacional*6,color='red')
plt.title('Estacionalidad Muertes USA')
plt.xlabel('Año')
plt.ylabel('Muertes accidentales ')
plt.show()
# Noise
##################
plt.plot(accidental_deaths['date'],noise,color='m')
plt.title('Ruido Muertes USA')
plt.xlabel('Año')
plt.ylabel('Muertes accidentales ')
plt.show()
###Output
_____no_output_____
###Markdown
**On the other hand, the Chilean road-crash (sinestrialidad) dataset has no seasonal component, since it records the data by year.** Method without a seasonal component We assume $X_t = T_t + N_t$ where- $X_t$ is the time series- $T_t$ is the trend component- $N_t$ is the residual (random noise) component **We will then apply Method S2, "moving averages", for the model without a seasonal component.**
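As a quick cross-check (an illustrative sketch, not part of the original analysis), the same symmetric (2q+1)-point smoother can be written with `np.convolve`; up to floating-point differences it should reproduce the values computed in the next cell.
```
import numpy as np

def moving_average_check(y, q):
    kernel = np.ones(2 * q + 1) / (2 * q + 1)    # uniform weights over the symmetric window
    return np.convolve(y, kernel, mode='valid')  # drops the q points at each end
```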
###Code
def promedios_moviles(X,Y,q):
#utilizando el filto simetrico de 2q+1 puntos
W_t=[]
for i in range(q+1,len(Y)-q+1):
suma=0
for j in range(-q,q+1):
suma+=Y[i+j-1]
W_t.append(suma/(2*q+1))
return W_t
q=3
Y_tendencia=promedios_moviles(sinestrialidad['Año'],sinestrialidad['Fallecidos'],q)
print(Y_tendencia)
plt.plot(X,sinestrialidad['Fallecidos'],color='blue')
plt.title('Muertes vs Año y Tendencia promedios moviles')
plt.xlabel('Año')
plt.ylabel('Fallecidos ')
plt.plot(list(np.arange(q+1,len(X)-q+1)+1970),Y_tendencia,color='red')
blue_patch = mpatches.Patch(color='blue', label='Fallecidos')
red_patch = mpatches.Patch(color='red', label='Promedios moviles')
plt.legend(handles=[red_patch,blue_patch])
plt.show()
###Output
_____no_output_____ |
data_connectors/nzradar_connector/NZ Radar connector.ipynb | ###Markdown
NZ radar raw data connector Author: Sebastien Delaux. This notebook describes how to work with the raw radar files produced by MetService's rain radar network. It shows how to read them and how to interact with the key quantities in order to plot the data. The last part of this notebook shows how multiple radar files (files corresponding to different locations) can be blended to create a composite 3D or 2D gridded version of the data. MetService is in the process of deciding under which license the data will be made available. For now only a few sample files have been added to this repository for illustration purposes. Please contact the author to arrange access to a larger sample of files. Python libraries We install pyart, which is a Python library specialised in dealing with raw radar data. We also load matplotlib, which will be used to generate plots.
###Code
import sys
!{sys.executable} -m pip install arm-pyart matplotlib
import pyart
from matplotlib import pyplot as plt
import numpy as np
###Output
## You are using the Python ARM Radar Toolkit (Py-ART), an open source
## library for working with weather radar data. Py-ART is partly
## supported by the U.S. Department of Energy as part of the Atmospheric
## Radiation Measurement (ARM) Climate Research Facility, an Office of
## Science user facility.
##
## If you use this software to prepare a publication, please cite:
##
## JJ Helmus and SM Collis, JORS 2016, doi: 10.5334/jors.119
###Markdown
Reading the raw radar data The New Zealand MetService's radar data are generated by a network of 8 Doppler radars spread around New Zealand. Each radar does a scan of its surroundings every 7 to 8 minutes. The radars operate independently and produce one file for every scan. Let's read one of the example files and look at the structure of the data. First we use pyart to load the data in the shape of a radar object.
###Code
file = './sample_data/BOP170101000556.RAWSURV'
radar = pyart.io.read(file)
###Output
_____no_output_____
###Markdown
The documentation for the radar object just loaded can be found here: http://arm-doe.github.io/pyart-docs-travis/API/generated/pyart.core.Radar.htmlpyart.core.Radar The radar files contain polar data from two sweeps (two revolutions of the radar beam, each at a different vertical angle). Each sweep contains data over a number of rays (angles) and gates (segments in the radial direction). Information about the content of the file can be found using the info function of the radar object.
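For orientation, the short sketch below (illustrative only; its output is not shown in this notebook) uses the sweep index arrays stored on the radar object to show how the rays are grouped into the two sweeps.
```
# Sketch: list each sweep with its target elevation angle and number of rays
for n in range(radar.nsweeps):
    istart = radar.sweep_start_ray_index['data'][n]
    iend = radar.sweep_end_ray_index['data'][n]
    print('sweep', n,
          '| target angle:', radar.fixed_angle['data'][n],
          '| rays:', iend - istart + 1,
          '| gates per ray:', radar.ngates)
```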
###Code
radar.info()
###Output
altitude:
data: <ndarray of type: float64 and shape: (1,)>
long_name: Altitude
standard_name: Altitude
units: meters
positive: up
altitude_agl: None
antenna_transition: None
azimuth:
data: <ndarray of type: float32 and shape: (840,)>
units: degrees
standard_name: beam_azimuth_angle
long_name: azimuth_angle_from_true_north
axis: radial_azimuth_coordinate
comment: Azimuth of antenna relative to true north
elevation:
data: <ndarray of type: float32 and shape: (840,)>
units: degrees
standard_name: beam_elevation_angle
long_name: elevation_angle_from_horizontal_plane
axis: radial_elevation_coordinate
comment: Elevation of antenna relative to the horizontal plane
fields:
total_power:
data: <ndarray of type: float32 and shape: (840, 1000)>
units: dBZ
standard_name: equivalent_reflectivity_factor
long_name: Total power
coordinates: elevation azimuth range
_FillValue: -9999.0
reflectivity:
data: <ndarray of type: float32 and shape: (840, 1000)>
units: dBZ
standard_name: equivalent_reflectivity_factor
long_name: Reflectivity
coordinates: elevation azimuth range
_FillValue: -9999.0
fixed_angle:
data: <ndarray of type: float32 and shape: (2,)>
long_name: Target angle for sweep
units: degrees
standard_name: target_fixed_angle
instrument_parameters:
unambiguous_range:
data: <ndarray of type: float32 and shape: (840,)>
units: meters
comments: Unambiguous range
meta_group: instrument_parameters
long_name: Unambiguous range
prt_mode:
data: <ndarray of type: |S5 and shape: (2,)>
comments: Pulsing mode Options are: "fixed", "staggered", "dual". Assumed "fixed" if missing.
meta_group: instrument_parameters
long_name: Pulsing mode
units: unitless
prt:
data: <ndarray of type: float32 and shape: (840,)>
units: seconds
comments: Pulse repetition time. For staggered prt, also see prt_ratio.
meta_group: instrument_parameters
long_name: Pulse repetition time
prt_ratio:
data: <ndarray of type: float32 and shape: (840,)>
units: unitless
meta_group: instrument_parameters
long_name: Pulse repetition frequency ratio
nyquist_velocity:
data: <ndarray of type: float32 and shape: (840,)>
units: meters_per_second
comments: Unambiguous velocity
meta_group: instrument_parameters
long_name: Nyquist velocity
radar_beam_width_h:
data: <ndarray of type: float32 and shape: (1,)>
units: degrees
meta_group: radar_parameters
long_name: Antenna beam width H polarization
radar_beam_width_v:
data: <ndarray of type: float32 and shape: (1,)>
units: degrees
meta_group: radar_parameters
long_name: Antenna beam width V polarization
pulse_width:
data: <ndarray of type: float32 and shape: (840,)>
units: seconds
comments: Pulse width
meta_group: instrument_parameters
long_name: Pulse width
latitude:
data: <ndarray of type: float64 and shape: (1,)>
long_name: Latitude
standard_name: Latitude
units: degrees_north
longitude:
data: <ndarray of type: float64 and shape: (1,)>
long_name: Longitude
standard_name: Longitude
units: degrees_east
nsweeps: 2
ngates: 1000
nrays: 840
radar_calibration: None
range:
data: <ndarray of type: float32 and shape: (1000,)>
units: meters
standard_name: projection_range_coordinate
long_name: range_to_measurement_volume
axis: radial_range_coordinate
spacing_is_constant: true
comment: Coordinate variable for range. Range to center of each bin.
meters_to_center_of_first_gate: [0.]
meters_between_gates: [300.]
scan_rate: None
scan_type: ppi
sweep_end_ray_index:
data: <ndarray of type: int32 and shape: (2,)>
long_name: Index of last ray in sweep, 0-based
units: count
sweep_mode:
data: <ndarray of type: |S20 and shape: (2,)>
units: unitless
standard_name: sweep_mode
long_name: Sweep mode
comment: Options are: "sector", "coplane", "rhi", "vertical_pointing", "idle", "azimuth_surveillance", "elevation_surveillance", "sunscan", "pointing", "manual_ppi", "manual_rhi"
sweep_number:
data: <ndarray of type: int32 and shape: (2,)>
units: count
standard_name: sweep_number
long_name: Sweep number
sweep_start_ray_index:
data: <ndarray of type: int32 and shape: (2,)>
long_name: Index of first ray in sweep, 0-based
units: count
target_scan_rate: None
time:
data: <ndarray of type: float64 and shape: (840,)>
units: seconds since 2017-01-01T00:05:56Z
standard_name: time
long_name: time_in_seconds_since_volume_start
calendar: gregorian
comment: Coordinate variable for time. Time at the center of each ray, in fractional seconds since the global variable time_coverage_start
metadata:
Conventions: CF/Radial instrument_parameters
version: 1.3
title:
institution:
references:
source:
history:
comment:
instrument_name: b'radbp'
original_container: sigmet
sigmet_task_name: b'SURVEILLANCE'
sigmet_extended_header: false
time_ordered: none
rays_missing: 0
###Markdown
Plotting the raw dataThe raw data from the two sweeps are stored in a single two dimensional array. They can be plotted in the form of two polar plots.
###Code
%matplotlib inline
# Create nsweeps polar plots
fig, axs = plt.subplots(1, radar.nsweeps, subplot_kw=dict(polar=True))
# For each plot find start and end indices
for ax, istart, iend in zip(axs.flat, radar.sweep_start_ray_index['data'], radar.sweep_end_ray_index['data']):
# Convert azimuth to geometric angle
geometric_angle = -(radar.azimuth['data'][istart:iend+1]/180.*np.pi+0.5*np.pi)
# Create grid from indices
R, T = np.meshgrid(radar.range['data']/1000., geometric_angle)
# Plot data on grid
im = ax.pcolor(T,R,radar.fields['reflectivity']['data'][istart:iend+1,:], vmin=-30, vmax=40)
fig.subplots_adjust(right=0.8)
cbar_ax = fig.add_axes([0.85, 0.15, 0.05, 0.7])
cbar = fig.colorbar(im, cax=cbar_ax)
cbar.set_label("Reflectivity (dBZ)", rotation=270)
###Output
_____no_output_____
###Markdown
Those two plots allow us to get a good idea of the horizontal spread of the reflectivity field. However, the data recorded by the radar are three-dimensional and slightly spread out over time. Indeed, as shown below, it takes about 1 minute for the two sweeps to be recorded.
###Code
%matplotlib inline
plt.plot(radar.time['data'])
plt.ylabel("Time since start of scan (s)")
plt.xlabel("ID for ray")
###Output
_____no_output_____
###Markdown
Gridded composite radar fields Though the radars are not fully in sync, the frequency at which they perform their scans is about 7 minutes, and in many cases it is OK to assume that the nearest measurements in time from all New Zealand radars can be collated together to create a snapshot representation of the reflectivity field over all of New Zealand at a given time. In that case we have to depart from a polar view of the reflectivity field and blend the data from the different radars onto a rectilinear grid. The present section gives an example of how two radar files, one coming from the Auckland radar (AKL...) and one coming from the Bay of Plenty radar (BOP...), can be blended together onto a single grid. In the process some minor filtering is applied to the data. An azimuthal equidistant projection with longitude/latitude origin (175.5, -37.0) is used as the output coordinate system, as well as 41 levels in the vertical. More information on the regridding function can be found here: https://arm-doe.github.io/pyart/API/generated/pyart.map.grid_from_radars.htmlpyart.map.grid_from_radars
###Code
# Path to the 2 files to collate
radar_files = ['./sample_data/AKL170101000558.RAWSURV', './sample_data/BOP170101000556.RAWSURV']
# Loading the data
radars = [pyart.io.read(file) for file in radar_files]
# Set filters for the radar object
filters = []
for radar in radars:
gatefilter = pyart.correct.despeckle.despeckle_field(radar, 'reflectivity',
threshold=-100, size=20)
gatefilter.exclude_transition()
gatefilter.exclude_above('reflectivity', 80)
filters.append(gatefilter)
# Do regridding
grid = pyart.map.grid_from_radars(radars, gatefilters=filters,
grid_shape= (41, 2250, 2000),
grid_limits= ([0.0, 20000.0], [-460000.0, 460000.0], [-400000.0, 400000.0]),
fields=['reflectivity'],
max_refl= 80.,
copy_field_data= True,
grid_origin= (-37.0, 175.5),
roi_func= 'dist_beam',
min_radius= 500.0,
h_factor= 1.0,
nb= 1.0,
bsp= 1.0
)
###Output
/home/sebastien/.local/lib/python3.6/site-packages/pyart/map/gates_to_grid.py:162: DeprecationWarning: Barnes weighting function is deprecated. Please use Barnes 2 to be consistent with Pauley and Wu 1990.
" Pauley and Wu 1990.", DeprecationWarning)
###Markdown
Now we can plot five cross-sections of the gridded data at 5 fixes vertical levels (every 10 vertical levels of the grid). And get an idea of the 3D structure of the data.
###Code
%matplotlib inline
fig, axs = plt.subplots(1, 5)
for ax, data in zip(axs.flat, grid.fields['reflectivity']['data'][::10,:,:]):
ax.pcolor(data, vmin=0, vmax=35)
###Output
_____no_output_____
###Markdown
We can also plot cross-sections along the north-south and east-west directions to get an idea of the vertical structure of the data
###Code
%matplotlib inline
fig, axs = plt.subplots(1, 2)
ax = axs.flat[0]
ax.pcolor(grid.x['data']/1000., grid.z['data'][:30]/1000.,
grid.fields['reflectivity']['data'][:30,1125,:])
ax.set(xlabel="Distance from radar", ylabel="Height agl (km)")
ax = axs.flat[1]
plot = ax.pcolor(grid.y['data']/1000., grid.z['data'][:30]/1000.,
grid.fields['reflectivity']['data'][:30,:,1000])
ax.set(xlabel="Distance from radar")
###Output
_____no_output_____
###Markdown
3D data can be quite heavy to work with, and sometimes one may only want to consider the maximum reflectivity over the atmospheric column.
###Code
%matplotlib inline
plt.pcolor(grid.x['data'], grid.y['data'],
np.amax(grid.fields['reflectivity']['data'][:,:,:], axis=0), vmin=0, vmax=35)
plt.xlabel('Easting')
plt.ylabel('Northing')
cbar = plt.colorbar()
cbar.set_label("Radius of influence", rotation=270)
###Output
_____no_output_____
###Markdown
One can always keep some information on the vertical structure by storing the height at which the maximum in reflectivity was found
###Code
max_reflectivity_height = grid.z['data'][np.argmax(grid.fields['reflectivity']['data'][:,:,:], axis=0)]
%matplotlib inline
plt.pcolor(grid.x['data'], grid.y['data'], max_reflectivity_height)
plt.xlabel('Easting')
plt.ylabel('Northing')
cbar = plt.colorbar()
cbar.set_label("Height of maximum reflectivity (m)", rotation=270)
###Output
_____no_output_____
###Markdown
An additional field that is produced during the regridding process is the radius of influence. This quantifies the distance between each data point in the grid and the instrument that recorded it. It is an important quantity, as the resolution and hence quality of the data drop with distance from the instrument. On the plot below one can clearly see that two sources were used to create the grid, where they are located, and how the radius of influence increases with distance from the radars.
###Code
%matplotlib inline
plt.pcolor(grid.x['data'], grid.y['data'], grid.fields['ROI']['data'][0,:,:])
plt.xlabel('Easting')
plt.ylabel('Northing')
cbar = plt.colorbar()
cbar.set_label("Radius of influence", rotation=270)
###Output
_____no_output_____ |
name_gen_birnn.ipynb | ###Markdown
Train
###Code
_, ch2int = get_vocab()
line = names[0][0]
toks = line.strip()
batch_s = ([0]+[ch2int[ch] for ch in toks[-2:]]+[VOCAB_SIZE-1])
emb_batch_inputs = [tf.nn.embedding_lookup(embedding, x)
for x in batch_s]
for layer_i in range(_NUM_LAYERS):  # build a stack of bidirectional LSTM layers (range, for Python 3)
cell_fw = tf.contrib.rnn.LSTMCell(
NUM_UNITS,
initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=123),
state_is_tuple=False)
cell_bw = tf.contrib.rnn.LSTMCell(
NUM_UNITS,
initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=113),
state_is_tuple=False)
(emb_batch_inputs, fw_state, bw_state) = tf.contrib.rnn.static_bidirectional_rnn(
cell_fw, cell_bw, emb_batch_inputs, dtype=tf.float32,
sequence_length=input_lengths)
outputs = emb_batch_inputs
names[0]
tf.nn.embedding_lookup(embedding, 1417)
embedding
###Output
_____no_output_____ |
GettingStartedwithSignificantTerms.ipynb | ###Markdown
Adapted by [Jen Ferguson](https://library.northeastern.edu/about/library-staff-directory/jen-ferguson) from notebooks created by [Nathan Kelber](http://nkelber.com) and Ted Lawless for [JSTOR Labs](https://labs.jstor.org/) under [Creative Commons CC BY License](https://creativecommons.org/licenses/by/4.0/); modified by [Sarah Connell](https://cssh.northeastern.edu/person/sarah-connell/) in summer 2021. See [here](https://ithaka.github.io/tdm-notebooks/book/all-notebooks.html) for the originals and additional text analysis notebooks. Finding significant words within a curated datasetThis notebook demonstrates how to find the significant words in your dataset using a model called TF-IDF, which stands for term frequency-inverse document frequency. TF-IDF encompasses both 'aboutness' and 'uniqueness' by looking at how often a term appears in a document (='aboutness') and how unusual it is for that same term to appear in other documents in your dataset (='uniqueness').Essentially, TF-IDF attempts to determine which words are distinctive in individual documents, relative to a larger corpus. The idea is that words that are common within a particular document and **uncommon** within the other documents you're comparing it to can tell you something significant about the language in a document. The concept of inverse document frequency was introduced by [Karen Spärck Jones](https://en.wikipedia.org/wiki/Karen_Sp%C3%A4rck_Jones) in 1972. *Fun fact: TF-IDF was used in early search engines to do relevance ranking, until clever folks figured out how to break that by 'keyword stuffing' their websites.* As you work through this notebook, you'll take the following steps:* Import a dataset* Write a helper function to help clean up a single token* Clean each document of your dataset, one token at a time* Compute the most significant words in your corpus using TF-IDF and a library called `gensim` To perform this analysis, we'll need to treat the words in our dataset as *tokens*. **What's a token?** It's a string of text. For our purposes, think of a token as a single word. Calculating tokens and counting words can get very complicated (for example, is "you're" one word or two? what about "part-time"?), but we'll keep things simple for this lesson and treat "tokens" as strings of text separated by white space. First we'll import `gensim`, and the `constellate` library. The `constellate` library contains functions for connecting to the JSTOR server that contains our corpus dataset. As a reminder, *functions* are pieces of code that perform specific tasks, and we can get additional functions by importing libraries like `gensim` and `constellate`. We'll also import a library that will suppress unnecessary warnings.
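As a concrete (and deliberately tiny) illustration of the idea, the sketch below computes a basic TF-IDF score by hand for a made-up three-document corpus; gensim's default weighting and normalisation, used later in this notebook, differ in the details.
```
import math

toy_docs = [["otter", "river", "otter"], ["river", "boat"], ["boat", "sail"]]

def toy_tf_idf(term, doc, docs):
    tf = doc.count(term) / len(doc)          # how common the term is in this document
    df = sum(term in d for d in docs)        # how many documents contain the term
    return tf * math.log(len(docs) / df)     # rarer across the corpus = higher weight

print(toy_tf_idf("otter", toy_docs[0], toy_docs))  # distinctive for the first document
print(toy_tf_idf("river", toy_docs[0], toy_docs))  # appears elsewhere too, so it scores lower
```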
###Code
import warnings
warnings.filterwarnings('ignore')
import gensim
import constellate
###Output
_____no_output_____
###Markdown
Next we'll pull in our datasets. **Did you build your own dataset?** In the next code cell, you'll supply the dataset ID provided when you created your dataset. Now's a good time to make sure you have it handy.**Didn't create a dataset?** Here are a few to choose from, with dataset IDs in red :* 'help desk' from JSTOR: 47975cec-ccb5-f1b1-b304-1756876d093a* All documents published in *Social Science Computer Review*: 3e418cfd-3274-ed04-dddd-2f0ffb1c33c8* All documents published in *Computers and the Humanities*: 243faca4-17dc-4c61-506e-f666c9e3c488 Pasting your unique **dataset ID** in place of the red text below will import your dataset from the JSTOR server. (No output will show.) To frame this in terms of our introduction to Python, running the line of code below will create a new variable called `dataset_id` whose value is the ID for the dataset you've chosen.
###Code
# Initializes the "dataset_id" variable
dataset_id = '243faca4-17dc-4c61-506e-f666c9e3c488'
###Output
_____no_output_____
###Markdown
Now, we want to create one more variable for storing information about our dataset, using a function from `constellate` called `get_description()`:
###Code
# Initializes the "dataset_info" variable
dataset_info = constellate.get_description(dataset_id)
###Output
_____no_output_____
###Markdown
Let's double-check to make sure we have the correct dataset. We can confirm the original query by viewing the `search_description`.
###Code
dataset_info["search_description"]
###Output
_____no_output_____
###Markdown
Using our new variable, we can also find the total number of documents in the dataset:
###Code
dataset_info["num_documents"]
###Output
_____no_output_____
###Markdown
We've verified that we have the correct dataset, and we've checked how many documents are in there. So, now let's pull the dataset into our working environment. This code might take a minute or two to run; recall that while it's still running, you'll see an asterisk in the brackets at the left of the code cell.
###Code
dataset_file = constellate.get_dataset(dataset_id)
###Output
_____no_output_____
###Markdown
Next, let's create a helper function that can standardize and clean up the tokens in our dataset. The function will:* Change all tokens (aka words) to lower case. This will make 'otter' and 'Otter' be counted as the same token.* Discard tokens with non-alphabetical characters (This will remove, for example, any tokens that contain numbers)* Discard any tokens less than 4 characters in length*Question to ponder:* Why do you think we want to discard tokens that are less than 4 characters long?
###Code
def process_token(token): # Defines a function `process_token` that takes the argument `token`
token = token.lower() # Changes all strings to lower case
if len(token) < 4: # Discards any tokens that are less than 4 characters long
return
if not(token.isalpha()): # Discards any tokens with non-alphabetic characters
return
return token # Returns the `token` variable which has been set equal to the `corrected` variable
###Output
_____no_output_____
###Markdown
Now let's cycle through each document in the corpus with our helper function. This may take a while to run. After we run the code below, we will have a list called `documents` which will contain the cleaned tokens for every text in our corpus. The `print()` function at the end isn't necessary to create this list of documents and their tokens, but it gives us some output to confirm when the code has been run.
###Code
# This is a bit of setup that establishes how the documents will be read in
reader = constellate.dataset_reader(dataset_file)
# Creates a new variable called `documents` that is a list that that will contain all of our documents.
documents = []
# This is the code that will actually read through your documents and apply the cleaning routines
for n, document in enumerate(reader):
this_doc = []
_id = document["id"]
for token, count in document["unigramCount"].items():
clean_token = process_token(token)
if clean_token is None:
continue
this_doc += [clean_token] * count
documents.append((_id, this_doc))
print("Document tokens collected and processed")
###Output
_____no_output_____
###Markdown
To get a sense of what we've just created, you can run the line of code below. It will show the tokens that have been collected for the first text in `documents` (Python starts counting at zero). If you want to see the tokens from a different document, you can change the number in the brackets.
###Code
documents[0]
###Output
_____no_output_____
###Markdown
The next few lines will set us up for TF-IDF analysis. First, let's create a *gensim dictionary*; this is a masterlist of all the tokens across all the documents in our corpus, in which each token gets a unique identifier. This doesn't have information on word frequencies yet; it's essentially a catalog listing the unique tokens in our corpus and giving each its own identifier.
###Code
dictionary = gensim.corpora.Dictionary([d[1] for d in documents])
###Output
_____no_output_____
###Markdown
The next step is to connect the word frequency data found within `documents` to the unique identifiers from our gensim dictionary. To do this, we'll create a *bag of words* corpus: this is a list that contains information on both word frequency and unique identifiers for the tokens in our corpus.
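To make these two structures concrete, here is a tiny illustrative example (the exact integer ids shown in the comments are not guaranteed; gensim assigns its own):
```
from gensim.corpora import Dictionary

toy_dictionary = Dictionary([["otter", "river", "otter"], ["river", "boat"]])
print(toy_dictionary.token2id)                              # e.g. {'otter': 0, 'river': 1, 'boat': 2}
print(toy_dictionary.doc2bow(["otter", "river", "otter"]))  # e.g. [(0, 2), (1, 1)] -> (token id, count)
```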
###Code
bow_corpus = [dictionary.doc2bow(doc) for doc in [d[1] for d in documents]]
###Output
_____no_output_____
###Markdown
The next step is to create the gensim TF-IDF model, which will control how TF-IDF is calculated:
###Code
model = gensim.models.TfidfModel(bow_corpus)
###Output
_____no_output_____
###Markdown
Finally, we'll apply our model to the `bow_corpus` to create our results in `corpus_tfidf`, which is a list of each document in our dataset, along with the TF-IDF scores for the tokens in the document.
###Code
corpus_tfidf = model[bow_corpus]
###Output
_____no_output_____
###Markdown
Now that we have those pieces in place, we can run the following code cells to find the most distinctive terms, by TF-IDF, in our dataset. These are terms that appear frequently in a particular text, but rarely or never appear in other texts. First, we'll need to set up a dictionary with our documents and then we'll need to sort the items in our new dictionary. We won't get any output from this code; it will just set things up. This will also take some time to run.
###Code
td = {
dictionary.get(_id): value for doc in corpus_tfidf
for _id, value in doc
}
sorted_td = sorted(td.items(), key=lambda kv: kv[1], reverse=True)
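# Note: `td` is a plain dictionary, so a term that appears in several documents keeps only the
# score from the last document processed above. If a corpus-wide "best score per term" view is
# wanted, one alternative (not what this notebook does) is to compare each new value against
# td.get(term, 0) and keep the larger of the two before sorting.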
###Output
_____no_output_____
###Markdown
Let's see what we got! The code below will print out the top 20 most distinctive terms in the entire corpus, with their TF-IDF scores. If you want to see more terms, you can change the number in the square brackets.
###Code
for term, weight in sorted_td[:20]:
print(term, weight)
###Output
_____no_output_____
###Markdown
It can also be useful to look at individual documents. This code will print the most significant word, by TF-IDF, for the first 20 documents in the corpus. Do you see any interesting or surprising results?
###Code
for n, doc in enumerate(corpus_tfidf):
if len(doc) < 1:
continue
word_id, score = max(doc, key=lambda x: x[1])
print(documents[n][0], dictionary.get(word_id), score)
if n >= 20:
break
###Output
_____no_output_____ |
phishing_dataset_logistic_regression.ipynb | ###Markdown
###Code
import matplotlib.pyplot as plt
plt.style.use('ggplot')
%matplotlib inline
import numpy as np
import pandas as pd
import seaborn as sns
data = pd.read_csv("/Phishing.csv")
data.head()
np.random.seed(7)
###Output
_____no_output_____
###Markdown
A phishing attack tries to mimic a legitimate website or activity in order to trick users.
###Code
data.sample(10).T
data.describe().T # -1 indicates a website that is not a phishing site
data.info()
data.shape
###Output
_____no_output_____
###Markdown
It is not good practice to build machine learning models where the labels are encoded as negative values, since this can affect how some models and metrics behave. We will remap -1 to 0 so that every label is either 0 or 1.
###Code
data.rename(columns={'Result': 'Class'}, inplace=True)
data.head().T
data['Class']= data['Class'].map({-1:0, 1:1})
data.tail().T
data['Class'].unique()
data.info()
data.isna().sum()
data.isnull().sum()
###Output
_____no_output_____
###Markdown
Split the data into 80/20
###Code
from sklearn.model_selection import train_test_split
x= data.iloc[:,0:30].values.astype(int)
y= data.iloc[:,30].values.astype(int)
x
y
x_train,x_test,y_train,y_test=train_test_split(x,y,test_size=0.2, random_state=np.random.seed(7))
###Output
_____no_output_____
###Markdown
Instanstiate the logisitic regression model and fit it to the training data
###Code
pip install wandb
from sklearn.metrics import accuracy_score,precision_recall_fscore_support,classification_report
from sklearn.linear_model import LogisticRegression
import wandb
import time
###Output
_____no_output_____
###Markdown
We define a utility function that trains a machine learning model and also measures its training time and logs its performance.
###Code
def train_eval_pipeline(model,train_data,test_data,name):
#Initialize Weights and Biases
wandb.init(project="Logistic Example Using Phishing Dataset", name=name)
#segregate the datasets
(x_train,y_train)=train_data
(x_test,y_test)=test_data
# train and log all the necessary metrics
start=time.time()
model.fit(x_train,y_train)
end=time.time()-start
prediction=model.predict(x_test)
wandb.log({"accuracy": accuracy_score(y_test,prediction)*100.0,
"precision": precision_recall_fscore_support(y_test,prediction,average='macro')[0],
'recall': precision_recall_fscore_support(y_test,prediction,average='macro')[1],
'Training_time':end})
print("Accuracy score of the Logistic Regression Classifier with default hyperparameter values {0:.2f}%"\
.format(accuracy_score(y_test,prediction)*100.0))
print('\n')
print("---Classification report of the Logistic Regression classifier with default hyperparameter values----")
print('\n')
    print(classification_report(y_test,prediction,target_names=['Normal Websites','Phishing Websites']))  # class 0 = normal, class 1 = phishing
logreg = LogisticRegression()
logreg
train_eval_pipeline(logreg,(x_train, y_train),(x_test,y_test),'Logisitc_regression')
###Output
_____no_output_____ |
part02-machine-learning/sklearn/skl_random_forest.ipynb | ###Markdown
Using a random forest Predict tomorrow's maximum temperature for our city from one year of past weather data
###Code
import pandas as pd
features = pd.read_csv('../part03-neural-network/csv-data/temps.csv')
features.head(5)
###Output
_____no_output_____
###Markdown
Data description * year: all data points are from 2016 * month: number of the month in the year * day: number of the day in the year * week: day of the week * temp_2: maximum temperature 2 days earlier * temp_1: maximum temperature 1 day earlier * average: historical average maximum temperature * actual: the true maximum temperature (the target)
###Code
print('The shape of our features is:', features.shape)
features.describe()
###Output
_____no_output_____
###Markdown
Data preprocessing 1. Looking at the shape of the data, we notice there are only 348 rows, although 2016 actually had 366 days; from the NOAA data we can see a few missing days. 2. The week column is a string, so we use one-hot encoding to convert it into numeric features.
###Code
features = pd.get_dummies(features)
print('The shape of our features is:', features.shape) # the data shape is now 349 x 18
features.iloc[:,5:].head(5)
###Output
_____no_output_____
###Markdown
Convert the data to arrays
###Code
import numpy as np
labels = np.array(features['actual']) # Labels are the values we want to predict
features= features.drop('actual', axis = 1) # Remove the labels from the features
feature_list = list(features.columns) # Saving feature names
features = np.array(features) # Convert to numpy array
###Output
_____no_output_____
###Markdown
Split the data into training and test sets
###Code
from sklearn.model_selection import train_test_split
train_features, test_features, train_labels, test_labels = train_test_split(features, labels, test_size = 0.25, random_state = 42)
print('Training Features Shape:', train_features.shape)
print('Training Labels Shape:', train_labels.shape)
print('Testing Features Shape:', test_features.shape)
print('Testing Labels Shape:', test_labels.shape)
###Output
_____no_output_____
###Markdown
Before training and evaluating, we need to establish a baseline as a sensible benchmark
###Code
# The baseline predictions are the historical averages
baseline_preds = test_features[:, feature_list.index('average')]
# Baseline errors, and display average baseline error
baseline_errors = abs(baseline_preds - test_labels)
print('Average baseline error: ', round(np.mean(baseline_errors), 2))
###Output
_____no_output_____
###Markdown
Create and train the model with scikit-learn 1. Import the random forest regression model from scikit-learn 2. Instantiate the model and fit it (scikit-learn's term for training) on the training data 3. Set the random state again to get reproducible results
###Code
from sklearn.ensemble import RandomForestRegressor
# Instantiate model with 1000 decision trees
rf = RandomForestRegressor(n_estimators = 1000, random_state = 42)
rf.fit(train_features, train_labels);
###Output
_____no_output_____
###Markdown
Make predictions on the test set
###Code
predictions = rf.predict(test_features)
# Calculate the absolute errors
errors = abs(predictions - test_labels)
# Print out the mean absolute error (mae)
print('Mean Absolute Error:', round(np.mean(errors), 2), 'degrees.')
# The mean absolute error is 3.83 degrees, about 1 degree better than the baseline. That may not seem like much, but it is almost 25% better than the baseline, and in real life such an improvement could be worth millions of dollars to a company.
###Output
_____no_output_____
###Markdown
Determining performance metrics To put our predictions in perspective, we can calculate an accuracy as 100% minus the mean absolute percentage error.
###Code
# Calculate mean absolute percentage error (MAPE)
mape = 100 * (errors / test_labels)
# Calculate and display accuracy
accuracy = 100 - np.mean(mape)
print('Accuracy:', round(accuracy, 2), '%.') # Accuracy: 93.99 %.
###Output
_____no_output_____
###Markdown
Interpreting the model and reporting results How does this model arrive at its values? There are two ways to look inside a random forest: first, we can inspect an individual tree in the forest; second, we can look at the feature importances of the explanatory variables. Visualising a single decision tree One of the nicest parts of scikit-learn's random forest implementation is that we can actually inspect any tree in the forest. We will pick one tree and save the whole tree as an image.
###Code
# Import tools needed for visualization
from sklearn.tree import export_graphviz
from IPython.display import Image
import pydotplus
# Pull out one tree from the forest
tree = rf.estimators_[5]
# Generate DOT source for this tree and render it inline
dot_data = export_graphviz(tree, out_file=None,
                           feature_names=feature_list,
                           filled=True, rounded=True,
                           special_characters=True)
graph = pydotplus.graph_from_dot_data(dot_data)
Image(graph.create_png())
# Alternatively, write the tree to a png file with pydot
import pydot
# Export the image to a dot file
export_graphviz(tree, out_file = 'tree.dot', feature_names = feature_list, rounded = True, precision = 1)
# Use dot file to create a graph
(graph, ) = pydot.graph_from_dot_file('tree.dot')
# Write graph to a png file
graph.write_png('tree.png')
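# The markdown above also mentions feature importances; the trained forest exposes them
# directly. This is an illustrative addition (the `importances` name is introduced here).
importances = rf.feature_importances_
for name, score in sorted(zip(feature_list, importances), key=lambda pair: pair[1], reverse=True):
    print('{:25} importance: {:.3f}'.format(name, score))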
###Output
_____no_output_____ |
day53/Google Trends and Data Visualisation (start).ipynb | ###Markdown
Introduction Google Trends gives us an estimate of search volume. Let's explore if search popularity relates to other kinds of data. Perhaps there are patterns in Google's search volume and the price of Bitcoin or a hot stock like Tesla. Perhaps search volume for the term "Unemployment Benefits" can tell us something about the actual unemployment rate? Data Sources: Unemployment Rate from FRED Google Trends Yahoo Finance for Tesla Stock Price Yahoo Finance for Bitcoin Stock Price Import Statements
###Code
import pandas as pd
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Read the DataDownload and add the .csv files to the same folder as your notebook.
###Code
df_tesla = pd.read_csv('TESLA Search Trend vs Price.csv')
df_btc_search = pd.read_csv('Bitcoin Search Trend.csv')
df_btc_price = pd.read_csv('Daily Bitcoin Price.csv')
df_unemployment = pd.read_csv('UE Benefits Search vs UE Rate 2004-19.csv')
###Output
_____no_output_____
###Markdown
Data Exploration Tesla **Challenge**: What are the shapes of the dataframes? How many rows and columns? What are the column names? Complete the f-string to show the largest/smallest number in the search data column Try the .describe() function to see some useful descriptive statisticsWhat is the periodicity of the time series data (daily, weekly, monthly)? What does a value of 100 in the Google Trend search popularity actually mean?
###Code
df_tesla.head()
print(f'Largest value for Tesla in Web Search: {df_tesla["TSLA_WEB_SEARCH"].max()}')
print(f'Smallest value for Tesla in Web Search: {df_tesla["TSLA_WEB_SEARCH"].min()}')
df_tesla.describe()
###Output
_____no_output_____
###Markdown
Unemployment Data
###Code
df_unemployment.head()
print('Smallest value for "Unemployment Benefits" '
      f'in Web Search: {df_unemployment["UE_BENEFITS_WEB_SEARCH"].min()}')
###Output
Largest value for "Unemployemnt Benefits" in Web Search: 14
###Markdown
Bitcoin
###Code
df_btc_price.head()
df_btc_search.head()
print(f'largest BTC News Search: {df_btc_search["BTC_NEWS_SEARCH"].max()}')
###Output
largest BTC News Search: 100
###Markdown
Data Cleaning Check for Missing Values **Challenge**: Are there any missing values in any of the dataframes? If so, which row/rows have missing values? How many missing values are there?
###Code
print(f'Missing values for Tesla?: ')
print(f'Missing values for U/E?: ')
print(f'Missing values for BTC Search?: ')
print(f'Missing values for BTC price?: ')
print(f'Number of missing values: ')
###Output
Number of missing values:
###Markdown
**Challenge**: Remove any missing values that you found. Convert Strings to DateTime Objects **Challenge**: Check the data type of the entries in the DataFrame MONTH or DATE columns. Convert any strings in to Datetime objects. Do this for all 4 DataFrames. Double check if your type conversion was successful. Converting from Daily to Monthly Data[Pandas .resample() documentation](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.resample.html) Data Visualisation Notebook Formatting & Style Helpers
###Code
# Create locators for ticks on the time axis
# Register date converters to avoid warning messages
###Output
_____no_output_____ |
Regressions/AirBnB Price prediction/.ipynb_checkpoints/LastName_FirstName_Apprentice_Challenge_2021-checkpoint.ipynb | ###Markdown
Apprentice ChallengeThis challenge is diagnostic of your current python pandas, matplotlib/seaborn, and numpy skills. These diagnostics will help inform your selection into the Machine Learning Guild's Apprentice program. Please ensure you are using Python 3 as the notebook won't work in 2.7 Challenge Background: AirBnB Price PredictionAirBnB is a popular technology platform that serves as an online marketplace for lodging. Using AirBnB, homeowners (called "hosts") can rent out their properties to travelers. Some hosts rent out their properties in full (e.g. an entire house or apartment), whereas some rent out individual rooms separately. Units are rented out for various durations, anywhere from one night up to a month or more, with some hosts specifying a minimum number of nights required for a rental.Over time, this platform has proven to be a powerful competitor to the traditional hotel and bed & breakfast industries, often competing on price, convenience, comfort, and/or the unique nature of its listed properties. The company is constantly onboarding new rental hosts in NYC, and many of these hosts don’t have any idea how much customers would be willing to pay for their rental units. AirBnB has hired you, an analytics consultant, to use their historical NYC rental data and build a predictive model that their new hosts in the city can use to get a sense of what to charge.In this data analysis programming challenge, you’ll have to clean the data, engineer some new modeling features, and finally, build and test the predictive model. InstructionsYou need to know your way around `pandas` DataFrames and basic Python programming. You have **2 hours** to complete the challenge. We strongly discourage searching the internet for challenge answers.Your first task:* Read the first paragraph above to familiarize yourself with the topic.* Feel free to poke around with the iPython notebook.* When you are ready, proceed to the next task.* Complete each of the tasks listed below in the notebook.* You need to provide your code for challenge in the cells which say "-- YOUR CODE FOR TASK NUMBER --"**NOTE: After each Jupyter cell in which you will enter your code, there is an additional cell that will check your outputs. If your outputs are incorrect, this will be printed out for your awareness, and the correct outputs will be loaded in so that you can continue with the assessment. That being said, if you feel you are able to correct your code so that it generates the correct outputs, you should do so in order to get as many points as possible.****Please reach out to [Lauren Moy](mailto:[email protected]) with any questions.**
###Code
# Import packages
import pandas as pd
import numpy as np
import data_load_files
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn import metrics
from sklearn.metrics import mean_squared_error,r2_score, mean_absolute_error
from sklearn import preprocessing
import warnings
warnings.filterwarnings('ignore')
###Output
_____no_output_____
###Markdown
Task 1**Instructions**AirBnB just sent you the NYC rentals data as a text file (`AB_NYC_2019_pt1.csv`). First, we'll need to read that text file in as a pandas DataFrame called `df`. As it turns out, AirBnB also received an additional update (`AB_NYC_2019_pt2.csv`) overnight to add to the main dataset, so you'll have to read that in and append it to the main DataFrame as well.Next, to better understand the data, print the first 10 rows of the DataFrame, then print the data types of the DataFrame's columns**Expected Output*** (1pt) Read in the main data file as `df`, a pandas DataFrame and load in the second data file as `df2`* (1pt) Append the new data file to `df`* (1pt) Print the first 10 rows of the df and the datatypes of the df
###Code
# Task 1
# -- YOUR CODE FOR TASK 1 --
#Import primary AirBnB data file as a pandas DataFrame
df = pd.read_csv("AB_NYC_2019_pt1.csv")
#Import the additional AirBnB data file as a pandas DataFrame and append it to the primary data DataFrame
df2 = pd.read_csv("AB_NYC_2019_pt2.csv")
#Append df2 to df
df = df.append(df2)
#Print the first 10 rows of the df, and print the data types of the df's columns
# Your code here
print(df.head(10))
print(df.dtypes)
## RUN THIS CELL AS-IS TO CHECK IF YOUR OUTPUTS ARE CORRECT. IF THEY ARE NOT,
## THE APPROPRIATE OBJECTS WILL BE LOADED IN TO ENSURE THAT YOU CAN CONTINUE
## WITH THE ASSESSMENT.
task_1_check = data_load_files.TASK_1_OUTPUT
task_1_shape = task_1_check.shape
task_1_columns = task_1_check.columns
if df.shape == task_1_shape and list(df.columns) == list(task_1_columns):
print('df is correct')
else:
df = task_1_check
print("'`df' is incorrect. You can correct for points, but you will still be able to move on to the next task if not.")
###Output
df is correct
###Markdown
Task 2, Part 1**Instructions**AirBnB is aware that some of its listings are missing values. Let's see if we can determine how much of the dataset is affected. Start by printing out the number of rows in the df that contain any null (NaN) values.Once you've done that, drop those rows from the df before any further analysis is conducted.One of your fellow analytics consultants who was also exploring this data has been having trouble with their analysis. It seems to be due to a data type mismatch. In particular, they need the `last_review` column to be of type Datetime. Convert that column to Datetime for your teammate.**Expected Output**- (1pt) Correct number of rows that conain any null (NaN) values stored in a variable `num_nan`- (1pt) Updated DataFrame `df` where all rows that contain any NaNs have been dropped- (1pt) Updated DataFrame `df` where the dtype of column `last_review` is `datetime`
###Code
# Task 2 (Part 1)
# Import packages
import datetime
# -- YOUR CODE FOR TASK 2 (PART 1) --
#Print out the number of rows in the df that contain any null (NaN) values
num_nan = len(df[df.isna().any(axis=1)])
print(num_nan)
#Drop all rows with any NaNs from the DataFrame
# Your code here
df = df.dropna()
#Convert the ‘last_review’ column to DateTime
# Your code here
df['last_review'] = df['last_review'].astype('datetime64[ns]')
print (df.dtypes)
## RUN THIS CELL AS-IS TO CHECK IF YOUR OUTPUTS ARE CORRECT. IF THEY ARE NOT,
## THE APPROPRIATE OBJECTS WILL BE LOADED IN TO ENSURE THAT YOU CAN CONTINUE
## WITH THE ASSESSMENT.
## Checks
task_2_part_1_check = data_load_files.TASK_2_PART_1_OUTPUT
shape_check = (df.shape == task_2_part_1_check.shape)
columns_check = (list(df.columns) == list(task_2_part_1_check.columns))
type_check = (type(df['last_review']) == type(task_2_part_1_check['last_review']))
if shape_check and columns_check and type_check:
print('df is correct')
else:
df = task_2_part_1_check
print("'`df' is incorrect. You can correct for points, but you will still be able to move on to the next task if not.")
###Output
df is correct
###Markdown
Task 2, Part 2**Instructions**Airbnb team wants to further explore the expansion of their listings in the Neighbourhood Group of Brooklyn. Create a DataFrame `df_brooklyn` containing only these listings, and then, using that DataFrame, create a new DataFrame `df_brooklyn_prices_room_type` showing the mean price per room type, which will help them determine the room types that generate the highest revenue.AirBnB also wants to understand which neighbourhoods in the Brooklyn neighbourhood group are most common among its listings. Create a pandas Series `top_10_brooklyn_series` that contains the top 10 most common neighborhoods (as the index) in Brooklyn and the number of listings each represents.**Expected Output**- (1pt) A DataFrame `df_brooklyn` that only contains listings in Brooklyn. Don't forget to reset the index if needed- (1pt) Create a dataframe `df_brooklyn_prices_room_type` showing the average (mean) prices of the listings for each room type in Brooklyn. This new DataFrame should contain only three columns: `neighbourhood_group`,`room_type`, and `price`. Don't forget to reset the index, if needed.- (1pt) A Series `top_10_brooklyn_series` that contains the number of listings for only the top 10 most common neighborhoods in Brooklyn
###Code
# Run this cell
pd.set_option('mode.chained_assignment', None)
df.head(10)
# Task 2 (Part 2)
# -- YOUR CODE FOR TASK 2 (PART 2) --
#Create a pandas DataFrame containing only listings in the Brooklyn neighborhood group. Don't forget to reset the index!
df_brooklyn = df[df['neighbourhood_group'] == "Brooklyn"]
df_brooklyn = df_brooklyn.reset_index(drop=True)
#Printing Results
df_brooklyn
#Create a dataframe showing the average (mean) prices of the listings in Brooklyn
df_brooklyn_prices_room_type = df_brooklyn.groupby(['neighbourhood_group', 'room_type']).agg({'price': 'mean'}).reset_index()
df_brooklyn_prices_room_type.columns = ['neighbourhood_group','room_type','price']
#Printing Results
df_brooklyn_prices_room_type
#Create a pandas Series showing the number of listings for each of the top 10 most common neighbourhoods
top_10_brooklyn_series = df_brooklyn.neighbourhood.value_counts().head(10)
# #Printing Results
top_10_brooklyn_series
## RUN THIS CELL AS-IS TO CHECK IF YOUR OUTPUTS ARE CORRECT. IF THEY ARE NOT,
## THE APPROPRIATE OBJECTS WILL BE LOADED IN TO ENSURE THAT YOU CAN CONTINUE
## WITH THE ASSESSMENT.
## Checks
task_2_part_2_top_10_check = data_load_files.TASK_2_PART_2_T10_OUTPUT
task_2_part_2_brooklyn_check = data_load_files.TASK_2_PART_2_BKN_OUTPUT
task_2_part_2_rm_prices_check = data_load_files.TASK_2_PART_2_RM_PRICES_OUTPUT
price_shape_check = (df_brooklyn_prices_room_type.shape == task_2_part_2_rm_prices_check.shape)
price_columns_check = (list(df_brooklyn_prices_room_type.columns) == list(task_2_part_2_rm_prices_check.columns))
price_avg_check = (df_brooklyn_prices_room_type.price.mean() == task_2_part_2_rm_prices_check.price.mean())
brooklyn_shape_check = (df_brooklyn.shape == task_2_part_2_brooklyn_check.shape)
brooklyn_top_10_avg_check = (top_10_brooklyn_series.mean() == task_2_part_2_top_10_check.mean())
if price_shape_check and price_columns_check and price_avg_check and brooklyn_shape_check and brooklyn_top_10_avg_check:
print('dfs are correct')
else:
df_brooklyn_prices_room_type = task_2_part_2_rm_prices_check
df_brooklyn = task_2_part_2_brooklyn_check
df_brooklyn['last_review'] = pd.to_datetime(df_brooklyn['last_review'])
top_10_brooklyn_series = task_2_part_2_top_10_check
print("df's are incorrect. You can correct for points, but you will still be able to move on to the next task if not.")
###Output
dfs are correct
###Markdown
Task 3, Part 1**Instructions**We want to be able to model using the ‘neighbourhood’ column as a feature, but to do so we’ll have to transform it into a series of binary features (one per neighbourhood), and right now there are way too many unique values. To solve this problem, we will re-label all neighbourhoods not in the top 10 neighbourhoods as “Other”. First, you'll create a list of the top 10 most common neighbourhoods in Brooklyn, leveraging the `top_10_brooklyn_series` Series that you created earlier. Then you will replace all neighbourhood values NOT in that list with the value 'Other'.AirBnB believes that long lags between reviews can be an indicator that the rental unit is less desirable (not being booked often). To enable us to test this later, create a new column representing the number of days it has been since the last review was posted. AirBnB believes that ‘Entire home/apt’ rentals in Brooklyn can command a premium; hence, they would like you to separately identify such listings using a new binary column.**Expected Output**- (1pt) A list of neighborhoods `top_10_brooklyn_list` that contains the top 10 neighborhoods in brooklyn by largest count of Air BnBs- (1pt) A column `neighbourhood` that displays the neighbourhood name if it is in the `top_10_brooklyn_list`, otherwise displays "Other"- (1pt) Calculate the `days_since_review` and add as a column in `df_brooklyn`- (1pt) Create a binary column `brooklyn_whole` that create a binary indicator based on 'room_type'=='Entire home/apt'
###Code
#Task 3
# -- YOUR CODE FOR TASK 3 --
#Create a list of the top 10 most common neighbourhoods, using the 'top_10_brooklyn_series'
#that you created earlier
top_10_brooklyn_list = top_10_brooklyn_series.index
#Replace all 'neighbourhood' column values NOT in the top 10 with 'Other'
df_brooklyn.loc[~df_brooklyn["neighbourhood"].isin(top_10_brooklyn_list), "neighbourhood"] = 'Other'
# Printing Results
df_brooklyn['neighbourhood'].value_counts()
df_brooklyn
from datetime import date
#Calculate the days_since_review and add as a column in df_brooklyn
current_date = pd.to_datetime(date.today())
df_brooklyn['days_since_review']= (current_date - df_brooklyn['last_review']).dt.days
# Print Results
df_brooklyn
# Create a binary column brooklyn_whole that create a binary indicator based on 'room_type'=='Entire home/apt'
df_dummy = pd.get_dummies(df_brooklyn['room_type'])
df_brooklyn['brooklyn_whole']=df_dummy["Entire home/apt"]
# Printing Results
df_brooklyn
## RUN THIS CELL AS-IS TO CHECK IF YOUR OUTPUTS ARE CORRECT. IF THEY ARE NOT,
## THE APPROPRIATE OBJECTS WILL BE LOADED IN TO ENSURE THAT YOU CAN CONTINUE
## WITH THE ASSESSMENT.
task_3_part_1_check = data_load_files.TASK_3_PART_1_OUTPUT
brooklyn_shape_check = (df_brooklyn.shape == task_3_part_1_check.shape)
brooklyn_columns_check = (list(df_brooklyn.columns) == list(task_3_part_1_check.columns))
if brooklyn_shape_check and brooklyn_columns_check:
print('df is correct')
else:
df_brooklyn = task_3_part_1_check
print("df is incorrect. You can correct for points, but you will still be able to move on to the next task if not.")
###Output
df is incorrect. You can correct for points, but you will still be able to move on to the next task if not.
###Markdown
Task 3, Part 2You want to take a closer look at price in the dataset. You decide to categorize rental properties by their affordability. Categorize each listing into one of three price categories by binning the `price` column and creating a new `price_category` column using the cut method of pandas (https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.cut.html).
###Code
price_bins = [0, 100, 200, np.inf]
price_cat = ['low', 'medium', 'high']
#Categorize each listing into one of three price categories by binning the price column
df_brooklyn['price_category'] = pd.cut(df_brooklyn['price'], bins=price_bins, labels=price_cat)
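# Optional sanity check (not required by the task): how many listings fall into each bin.
df_brooklyn['price_category'].value_counts()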
# Printing Results
df_brooklyn
## RUN THIS CELL AS-IS TO CHECK IF YOUR OUTPUTS ARE CORRECT. IF THEY ARE NOT,
## THE APPROPRIATE OBJECTS WILL BE LOADED IN TO ENSURE THAT YOU CAN CONTINUE
## WITH THE ASSESSMENT.
task_3_part_2_check = data_load_files.TASK_3_PART_2_OUTPUT
brooklyn_shape_check = (df_brooklyn.shape == task_3_part_2_check.shape)
brooklyn_columns_check = (list(df_brooklyn.columns) == list(task_3_part_2_check.columns))
if brooklyn_shape_check and brooklyn_columns_check:
print('df is correct')
else:
df_brooklyn = task_3_part_2_check
print("df is incorrect. You can correct for points, but you will still be able to move on to the next task if not.")
###Output
df is correct
###Markdown
Task 3, Part 3**Instructions*** Create a bar chart of the `price_category` column from Part 2, comparing the number of rentals in each category.**Expected Output*** bar chart with listing count as bars* grouped by the 3 price categories
###Code
#Create a barchart of your dataset
# Your code here
import seaborn as sns
import matplotlib.pyplot as plt
# Set the width and height of the figure
plt.figure(figsize=(10,6))
# Bar chart showing listing counts for the three price categories
sns.countplot(x='price_category', data=df_brooklyn)
###Output
_____no_output_____
###Markdown
Task 4, Part 1**Instructions**Airbnb's business team would like to understand the revenue the hosts make in Brooklyn. As you do not have the Airbnb booking details, you can estimate the number of bookings for each property based on the number of reviews they received. You can then extrapolate each property's revenue with this formula:Number of Reviews x Price of Listing x Minimum Length of StayThis will serve as a conservative estimate of the revenue, since it is likely that properties will have more bookings than reviews. In addition, guests are also likely to stay longer than the minimum number of nights required.**Expected Output**- (1pt) Write a function called generate_estimate_host_revenue to calculate the host revenue using the above formula and return the updated dataframe `df_brooklyn` with a new column `estimated_host_revenue` calculated using the function you created- (1pt) Descriptive statistics of the `estimated_host_revenue`
###Code
# Write a function to calculate the estimated host revenue, update the dataframe with a new column `estimated_host_revenue` calculated using the above formula
# and return the updated dataframe
#Your code here
def generate_estimate_host_revenue(num_of_reviews, price_of_listing, min_len_stay):
    return num_of_reviews * price_of_listing * min_len_stay
df_brooklyn
# Apply your function on `df_brooklyn`
df_brooklyn['estimated_host_revenue'] = df_brooklyn.apply(lambda row : generate_estimate_host_revenue(row['number_of_reviews'], row['price'], row['minimum_nights']), axis = 1)
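# An equivalent vectorized form (same numbers, just faster than .apply on large frames):
# df_brooklyn['estimated_host_revenue'] = (df_brooklyn['number_of_reviews']
#                                          * df_brooklyn['price']
#                                          * df_brooklyn['minimum_nights'])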
# Printing Results
df_brooklyn
#Use the describe() function on the `estimated_host_revenue` column to generate descriptive statistics, which include
# a summary of the central tendency, dispersion and shape of the numerical column.
#Your code here
df_brooklyn['estimated_host_revenue'].describe()
## RUN THIS CELL AS-IS TO CHECK IF YOUR OUTPUTS ARE CORRECT. IF THEY ARE NOT,
## THE APPROPRIATE OBJECTS WILL BE LOADED IN TO ENSURE THAT YOU CAN CONTINUE
## WITH THE ASSESSMENT.
task_4_part_1_check = data_load_files.TASK_4_PART_1_OUTPUT
brooklyn_shape_check = (df_brooklyn.shape == task_4_part_1_check.shape)
brooklyn_columns_check = (list(df_brooklyn.columns) == list(task_4_part_1_check.columns))
brooklyn_est_host_rev_mean_check = (df_brooklyn['estimated_host_revenue'].mean() == task_4_part_1_check['estimated_host_revenue'].mean())
brooklyn_est_host_rev_max_check = (df_brooklyn['estimated_host_revenue'].max() == task_4_part_1_check['estimated_host_revenue'].max())
if brooklyn_shape_check and brooklyn_columns_check and brooklyn_est_host_rev_mean_check and brooklyn_est_host_rev_max_check:
print('df is correct')
else:
df_brooklyn = task_4_part_1_check
print("df is incorrect. You can correct for points, but you will still be able to move on to the next task if not.")
###Output
df is correct
###Markdown
Task 4, Part 2**Instructions**The AirBnB team wants more information on average prices of different types of listings. To help them with this, use a pivot table to look at the average (mean) prices for the various room types within each 'neighbourhood'. https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.pivot_table.html#pandas.DataFrame.pivot_table**Expected Output** - (1pt) Correct Pivot Table with `room_type` as column and `neighbourhood` as index - (1pt) Fill cells with 0 if the value is null
###Code
# YOUR CODE FOR TASK 4, PART 2
Pivot = pd.pivot_table(df_brooklyn, values='price', index=['neighbourhood'],
columns=['room_type'], aggfunc=np.mean, fill_value=0)
# Printing Results
Pivot
###Output
_____no_output_____
###Markdown
TASK 5, Part 1**Instructions**The Airbnb analysts want to know the factors influencing the price. Before proceeding with correlation analysis, you need to perform some feature engineering tasks such as converting the categorical columns and dropping descriptive columns.1. Encode the categorical variables `room_type` and `neighbourhood` using One-Hot Encoding.Many machine learning algorithms cannot work with categorical data directly. The categories must be converted into numbers. One hot encoding creates new (binary) columns, indicating the presence of each possible value from the original data. Use the pandas get_dummies function to create the One-Hot Encoding. Function syntax : new_dataframe = pd.get_dummies(dataframe name, columns = [list of categorical columns])2. Drop the descriptive columns `name`, `host_id`, `host_name`, `neighbourhood_group`, `latitude` and `longitude` from the dataframe. Expected Output - (2pt) Dataframe `df_brooklyn_rt` contains 19 columns now. - (1pt) Drop descriptive columns `name`, `host_id`, `host_name`, `neighbourhood_group`, `latitude` and `longitude` from the dataframe
###Code
# YOUR CODE FOR TASK 5, PART 1
# encode the columns room_type and neighbourhood
df_brooklyn_rt = pd.get_dummies(df_brooklyn,columns = ['room_type','neighbourhood'])
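# Side note (beyond the task): pd.get_dummies(..., drop_first=True) is often used to avoid the
# dummy-variable trap, but the checker expects all 19 columns, so it is not applied here.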
# Printing Results
df_brooklyn_rt
#drop the descriptive columns from the dataframe
df_brooklyn_rt = df_brooklyn_rt.drop(['name', 'host_id', 'host_name', 'neighbourhood_group', 'latitude','longitude'],axis=1)
# Printing Results
df_brooklyn_rt
df_brooklyn_rt.info()
## RUN THIS CELL AS-IS TO CHECK IF YOUR OUTPUTS ARE CORRECT. IF THEY ARE NOT,
## THE APPROPRIATE OBJECTS WILL BE LOADED IN TO ENSURE THAT YOU CAN CONTINUE
## WITH THE ASSESSMENT.
task_5_part_1_check = data_load_files.TASK_5_PART_1_OUTPUT
brooklyn_rt_shape_check = (df_brooklyn_rt.shape == task_5_part_1_check.shape)
brooklyn_rt_columns_check = (list(df_brooklyn_rt.columns) == list(task_5_part_1_check.columns))
if brooklyn_rt_shape_check and brooklyn_rt_columns_check:
print('df is correct')
else:
df_brooklyn_rt = task_5_part_1_check
print("df is incorrect. You can correct for points, but you will still be able to move on to the next task if not.")
###Output
df is correct
###Markdown
Task 5, Part 2**Instructions**We will now study the correlation of the features in the dataset with `price`. Use the pandas dataframe.corr() method to find the pairwise correlation of all columns in the dataframe. Use the pandas corr() function to create the correlation dataframe.Function syntax : new_dataframe = Dataframe.corr()Visualize the correlation dataframe using a seaborn heatmap. A heatmap is used to plot rectangular data as a color-encoded matrix.https://seaborn.pydata.org/generated/seaborn.heatmap.htmlExpected Output - (2pt) Visualize the correlation matrix using a heatmap - (1pt) Correct labels for x and y axis
###Code
# YOUR CODE FOR TASK 5, PART 2
# create a correlation matrix
corr = df_brooklyn_rt.corr()
# plot the heatmap
sns.heatmap(corr)
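# Optional follow-up (not required by the task): rank the features by their correlation with price,
# which makes the "positively related to price" question easier to answer than the heatmap alone.
corr['price'].drop('price').sort_values(ascending=False)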
###Output
_____no_output_____
###Markdown
**Think about it**: Multicollinearity occurs when your data includes multiple attributes that are correlated not just to your target variable, but also to each other. Based on the correlation matrix, **think about** the following:1. Which columns would you drop to prevent multicollinearity? Sample Answer: brooklyn_whole or number_of_reviews2. Which columns do you find are positively related to the price?Sample Answer: reviews_per_month
###Code
## RUN THIS CELL AS-IS TO CHECK IF YOUR OUTPUTS ARE CORRECT. IF THEY ARE NOT,
## THE APPROPRIATE OBJECTS WILL BE LOADED IN TO ENSURE THAT YOU CAN CONTINUE
## WITH THE ASSESSMENT.
task_5_part_2_check = data_load_files.TASK_5_PART_2_OUTPUT
corr_shape_check = (corr.shape == task_5_part_2_check.shape)
if corr_shape_check:
print('df is correct')
else:
print("df is incorrect. You can correct for points, but you will still be able to move on to the next task if not.")
###Output
df is correct
###Markdown
Task 6Property Hosts are expected to set their own prices for their listings. Although Airbnb provides some general guidance, there are currently no services which help hosts price their properties using a range of data points.Airbnb pricing is important to get right, particularly in big cities like New York where there is a lot of competition and even small differences in prices can make a big difference. It is also a difficult thing to do correctly: price too high and no one will book; price too low and you'll be missing out on a lot of potential income.Now, let's try to make a price prediction model using a basic machine learning model from scikit-learn. It is a linear regression model that we will use to predict the prices. Import the correct Library Task 6, Part 1**Instructions****Preparing the data for training the model**Based on the correlation plot observations, we have now identified the features that influence the price of an accommodation. We will prepare the data to train the price prediction model. We will create two dataframes 'X' (contains all features influencing the price) and 'Y' (contains the feature price) from `df_brooklyn_rt`. 1. To create Y, select the `price` column from `df_brooklyn_rt`2. To create X, drop the columns `price`, `last_review`, `brooklyn_whole`, `price_category` from `df_brooklyn_rt`. We are dropping `brooklyn_whole` as it was causing multicollinearity with `room_type`. **Splitting the data into training and testing sets**Next, we split the X and Y datasets into training and testing sets. We train the model with 80% of the samples and test with the remaining 20%. We do this to assess the model's performance on unseen data. To split the data we use the train_test_split function provided by the scikit-learn library. We finally print the sizes of our training and test sets to verify that the splitting has occurred properly.**Note**: Please don't change the value for `random_state` in your code, it should be set to 5. **Expected Output**- (1pt) Create dataframe Y from `df_brooklyn_rt`, select only column `price`- (1pt) Create dataframe X from `df_brooklyn_rt`, do not include columns `price`, `last_review`, `brooklyn_whole` and `price_category`- (1pt) Split the dataframes X and Y into train and test datasets using the train_test_split function
###Code
#Your code here
X = df_brooklyn_rt.drop(['price','last_review','brooklyn_whole','price_category'],axis=1)
Y = df_brooklyn_rt['price']
print(X)
#Your code here
#Please don't change the test_size value it should remain 0.2
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size = 0.2, random_state=5)
print(X_train.shape)
print(X_test.shape)
print(Y_train.shape)
print(Y_test.shape)
## RUN THIS CELL AS-IS TO CHECK IF YOUR OUTPUTS ARE CORRECT. IF THEY ARE NOT,
## THE APPROPRIATE OBJECTS WILL BE LOADED IN TO ENSURE THAT YOU CAN CONTINUE
## WITH THE ASSESSMENT.
task_6_part_1_output_1_check = data_load_files.TASK_6_PART_1_OUTPUT_1
task_6_part_1_output_2_check = data_load_files.TASK_6_PART_1_OUTPUT_2
task_6_part_1_output_3_check = data_load_files.TASK_6_PART_1_OUTPUT_3
task_6_part_1_output_4_check = data_load_files.TASK_6_PART_1_OUTPUT_4
#X_train
xtrain_shape_check = (X_train.shape == task_6_part_1_output_1_check.shape)
xtrain_columns_check = (list(X_train.columns) == list(task_6_part_1_output_1_check.columns))
#X_test
xtest_shape_check = (X_test.shape == task_6_part_1_output_2_check.shape)
xtest_columns_check = (list(X_test.columns) == list(task_6_part_1_output_2_check.columns))
if xtrain_shape_check and xtrain_columns_check and \
xtest_shape_check and xtest_columns_check:
print('dfs are correct')
else:
X_train = task_6_part_1_output_1_check
X_test = task_6_part_1_output_2_check
Y_train = task_6_part_1_output_3_check
Y_test = task_6_part_1_output_4_check
print("dfs are incorrect. You can correct for points, but you will still be able to move on to the next task if not.")
###Output
dfs are correct
###Markdown
Task 6, Part 2**Instructions**Training the modelWe use scikit-learn's LinearRegression to train our model. Using the fit() method, we will pass the training datasets X_train and Y_train as arguments to the linear regression model. Testing the modelThe model has learnt about the dataset. We will now use the trained model on the test dataset, X_test. Using the predict() method, we will pass the test dataset X_test as an argument to the model.Expected Output- (1pt) Pass the training datasets X_train and Y_train to the fit method as arguments- (1pt) Pass the test dataset X_test to the predict method as an argument
###Code
#Run this cell
lin_model = LinearRegression()
#Training the model
#Your code here
lin_model.fit(X_train, Y_train)
#Testing the model
y_test_predict = lin_model.predict(X_test)
y_test_predict
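# Optional peek (not part of the required output): per-feature coefficients of the fitted model,
# which hint at how each feature pushes the predicted price up or down.
pd.Series(lin_model.coef_, index=X_train.columns).sort_values(ascending=False)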
#Run this cell
#Model Evaluation
rmse = (np.sqrt(mean_squared_error(Y_test, y_test_predict)))
r2 = r2_score(Y_test, y_test_predict)
print("The model performance for testing set")
print("--------------------------------------")
print('RMSE is {}'.format(round(rmse,3)))
print('R2 score is {}'.format(round(r2,3)))
## RUN THIS CELL AS-IS TO CHECK IF YOUR OUTPUTS ARE CORRECT. IF THEY ARE NOT,
## THE APPROPRIATE OBJECTS WILL BE LOADED IN TO ENSURE THAT YOU CAN CONTINUE
## WITH THE ASSESSMENT.
task_6_part_2_check = data_load_files.TASK_6_PART_2_OUTPUT
round(rmse,3)
rmse_check = (round(rmse,3) == task_6_part_2_check['RMSE'][0])
r2_check = (round(r2,3) == task_6_part_2_check['R2'][0])
if rmse_check and r2_check:
print('Model evaluation is correct')
else:
print('Model evaluation is incorrect')
###Output
Model evaluation is correct
###Markdown
Task 6, Part 3**Instructions**Now we will compare the actual output values for X_test with the predicted values using a bar chart.- 1(pt) Create a new dataframe lr_pred_df using Y_test and y_test_predict. Hint: You'll need to flatten your arrays- 1(pt) Use the first 20 records from the dataframe lr_pred_df and plot a bar graph comparing the actual and predicted values; set the Y-axis label to 'Price' and the plot title to 'Actual vs Predicted Price'
###Code
#Actual Vs Predicted for Linear Regression
lr_pred_df = pd.DataFrame(np.concatenate((y_test_predict.reshape(len(y_test_predict),1), Y_test.values.reshape(len(Y_test),1)),1))
lr_pred_df.columns = ['y_test_predict','actual_values']
lr_pred_df
#Your code here
lr_pred_df = lr_pred_df.head(20)
print(lr_pred_df.shape)
lr_pred_df.plot(x='actual_values', y='y_test_predict', kind='bar', title="Actual vs Predicted Price", ylabel='Price', xlabel="Actual price")
# Printing Results
plt.show()
## RUN THIS CELL AS-IS TO CHECK IF YOUR OUTPUTS ARE CORRECT. IF THEY ARE NOT,
## THE APPROPRIATE OBJECTS WILL BE LOADED IN TO ENSURE THAT YOU CAN CONTINUE
## WITH THE ASSESSMENT.
task_6_part_3_check = data_load_files.TASK_6_PART_3_OUTPUT
lr_pred_df_shape_check = (lr_pred_df.shape == task_6_part_3_check.shape)
lr_pred_df_columns_check = (list(lr_pred_df.columns) == list(task_6_part_3_check.columns))
if lr_pred_df_shape_check and lr_pred_df_columns_check:
print('df is correct')
else:
lr_pred_df = task_6_part_3_check
print("df is incorrect. You can correct for points, but you will still be able to move on to the next task if not.")
###Output
df is correct
|
poisson-distribution-process/poisson_distribution_and_poisson_process_tutorial.ipynb | ###Markdown
Diving Into the Poisson Distribution and Poisson Process
* Tutorial: https://news.towardsai.net/pd
* Github: https://github.com/towardsai/tutorials/tree/master/poisson-distribution-process
A TV assembly unit performs a defect analysis task to understand the number of defects that could occur in a given defective TV. Using past quality and audit data, it is noted that 12 defects are recorded on average for a defective TV. Calculate the following:
* The probability that a defective TV has exactly 5 defects.
* The probability that a defective TV has fewer than 5 defects.
###Code
import numpy as np
import scipy.stats as stats
import matplotlib.pyplot as plt
n = np.arange(0,30)
n
rate = 12
poisson = stats.poisson.pmf(n,rate)
poisson
poisson[5]
poisson[0] + poisson[1] + poisson[2] + poisson[3] + poisson[4]
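# Cross-check with the built-in CDF: for a discrete Poisson variable, P(X < 5) = P(X <= 4).
stats.poisson.cdf(4, rate)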
plt.plot(n,poisson, 'o-')
plt.show()
###Output
_____no_output_____ |
notebooks/gap_cartilage.ipynb | ###Markdown
Automated gap cartilage generation This notebook explains a method to automatically generate the hj geometry based on the bone geometries imports
###Code
import numpy as np
import meshplot as mp
import pathlib
import sys
sys.path.append('../')
import cargen
import os
"""
DIRECTORIES:
"""
main_dir = pathlib.Path('..')
# input and output paths
i_dir = main_dir / 'models'
o_dir = main_dir / 'output'
# Remove all files inside output directory if it exists, otherwise create it
if o_dir.is_dir():
for file in o_dir.iterdir():
if file.is_file():
file.unlink()
else:
o_dir.mkdir(exist_ok=False)
"""
VALUES:
i_dim, o_dim = input and output dimension ("mm" = millimeters, "m" = meters)
i_format = the format of the input surface mesh ( ".obj" , ".stl")
o_format = format you want the files to be saved as ( ".obj" , ".stl")
+ scroll down to calibrate the cartilage generation parameters
"""
# dimensions
i_dim = "mm"
o_dim = "mm"
i_format = ".stl"
o_format = ".obj"
"""
NAMES & PATHS:
"""
# bones
sacrum_name = 'sacrum'
lpelvis_name = 'lpelvis'
# cartilages
lsi_name = 'lsi'
#bones
clean_sacrum_name = 'clean_' + sacrum_name + '_'+ o_dim
clean_lpelvis_name = 'clean_' + lpelvis_name + '_'+ o_dim
#cartilages
lsi_cart_name = lsi_name +'_cart_'+ o_dim
lsi_ring_name = lsi_name +'_ring_'+ o_dim
# input paths
sacrum_path = str((i_dir/ sacrum_name).with_suffix(i_format))
lpelvis_path = str((i_dir/ lpelvis_name).with_suffix(i_format))
# output paths
#bones
clean_sacrum_path = str((o_dir/ clean_sacrum_name).with_suffix(o_format))
clean_lpelvis_path = str((o_dir/ clean_lpelvis_name).with_suffix(o_format))
#cartilage
lsi_cart_path = str((o_dir/ lsi_cart_name).with_suffix(o_format))
lsi_ring_path = str((o_dir/ lsi_ring_name).with_suffix(o_format))
###Output
_____no_output_____
###Markdown
implementation read and clean up input
###Code
s1_vertices, s1_faces = cargen.read_and_clean ( sacrum_path, i_dim )
s2_vertices, s2_faces = cargen.read_and_clean ( lpelvis_path, i_dim )
frame = mp.plot( s1_vertices, s1_faces, c = cargen.bone, shading = cargen.sh_false )
frame.add_mesh ( s2_vertices, s2_faces, c = cargen.bone, shading = cargen.sh_false )
###Output
_____no_output_____
###Markdown
left sacro-iliac joint
###Code
# set the parameters
param = cargen.Var()
# change the ones you like
param.gap_distance = 4.6
param.trimming_iteration = 1
param.smoothing_iteration_base = 5
param.thickness_factor = 1
param.smoothing_iteration_extruded_base = 2
# make it
lsi_vertices, lsi_faces, ring_vertices, ring_faces = cargen.get_gap_cartilage(s1_vertices, s1_faces,
s2_vertices, s2_faces,
param)
# reset the parameters to default values
param.reset()
###Output
_____no_output_____
###Markdown
export results cartilage
###Code
cargen.save_surface ( lsi_vertices, lsi_faces, o_dim, lsi_cart_path )
cargen.save_surface ( ring_vertices, ring_faces, o_dim, lsi_ring_path )
###Output
_____no_output_____
###Markdown
cleaned bones
###Code
cargen.save_surface ( s1_vertices, s1_faces, o_dim, clean_sacrum_path )
cargen.save_surface ( s2_vertices, s2_faces, o_dim, clean_lpelvis_path)
###Output
_____no_output_____
###Markdown
voila!
###Code
frame = mp.plot( s1_vertices, s1_faces, c = cargen.bone, shading = cargen.sh_false )
frame.add_mesh ( s2_vertices, s2_faces, c = cargen.bone, shading = cargen.sh_false )
frame.add_mesh ( ring_vertices, ring_faces, c = cargen.pastel_orange, shading = cargen.sh_true )
###Output
_____no_output_____
###Markdown
check self intersection in tetgen
###Code
lsi_cart_test = str((o_dir/ lsi_cart_name).with_suffix('.stl'))
cargen.save_surface ( lsi_vertices, lsi_faces, o_dim, lsi_cart_test )
os.system ( 'tetgen -d '+ lsi_cart_test )
###Output
_____no_output_____ |
fuzzy_join.ipynb | ###Markdown
Reading the dataFirst we are going to read the data.
###Code
# Imports assumed by this notebook (pandas for dataframes, json for parsing the
# city/country fields, jellyfish for the fuzzy string matching used later)
import pandas as pd
import json
import jellyfish

authors = pd.read_csv('~/data/authors_1019.csv', encoding = "ISO-8859-1")
investigators = pd.read_csv('~/data/investigators_837.csv', encoding = "ISO-8859-1")
###Output
_____no_output_____
###Markdown
First Look In this section I will just take a first look at the data and check if anything is weird or not normal.
###Code
authors.head()
investigators.head()
###Output
_____no_output_____
###Markdown
The cities and countries columns seem to be in JSON format. Just in case we use them later, I would like to do some processing on them. I also note the list format of Topic and co-authors, but this can stay as is.
###Code
authors['cities'] = authors['cities'].apply(lambda x : json.loads(x))
authors['countries'] = authors['countries'].apply(lambda x : json.loads(x))
authors = pd.concat([authors, authors['cities'].apply(pd.Series)[0].apply(pd.Series)], axis = 1).drop('cities', axis = 1).rename(columns={"identifier": "city_id", "name": "city_name"})
authors = pd.concat([authors, authors['countries'].apply(pd.Series)[0].apply(pd.Series)], axis = 1).drop('countries', axis = 1).rename(columns={"identifier": "country_id", "name": "country_name"})
investigators['cities'] = investigators['cities'].apply(lambda x : json.loads(x))
investigators['countries'] = investigators['countries'].apply(lambda x : json.loads(x))
investigators = pd.concat([investigators, investigators['cities'].apply(pd.Series)[0].apply(pd.Series)], axis = 1).drop('cities', axis = 1).rename(columns={"identifier": "city_id", "name": "city_name"})
investigators = pd.concat([investigators, investigators['countries'].apply(pd.Series)[0].apply(pd.Series)], axis = 1).drop('countries', axis = 1).rename(columns={"identifier": "country_id", "name": "country_name"})
###Output
_____no_output_____
###Markdown
Let's take a look at the data after we have processed it, just to keep it in mind.
###Code
authors
investigators
###Output
_____no_output_____
###Markdown
Divide and conquerI have a philosophy when it comes to problems in data science: KISS (Keep It Simple Stupid). With a Cartesian product of the subset of the data that we have ground truth for, it would be possible to engineer some features from the strings and build a predictive model. However, I think there are better ways to get the ball rolling. We will try a divide and conquer technique. We will try to develop unique IDs using information in the data.1. Join data that has ORCIDs2. Remove data with ORCIDs from the data3. Find unique last names and join those from the two datasets4. Concatenate orcid join and unique last name join5. Filter out already joined data from the data6. Find unique combination of the first letter from first name and last name7. Join on this combination and concatenate with previous data.8. Filter out already joined data from the data9. Combine first, middle, and last name into 1 column for remaining data10. Do fuzzy join11. Combine with previously joined data
###Code
investigators_with_orcid = investigators[investigators['orcid'].notnull()].rename(columns={'id':'id_investigators'})[['id_investigators', 'orcid']]
authors_with_orcid = authors[authors['orcid'].notnull()].rename(columns={'id':'id_authors'})[['id_authors', 'orcid']]
joined_on_orcid = pd.merge(investigators_with_orcid, authors_with_orcid, on='orcid')[['id_authors', 'id_investigators']]
# finding authors that do not have orcid
authors_without_orcid = authors[authors['orcid'].isnull()]
investigators_without_orcid = investigators[investigators['orcid'].isnull()]
# finding authors with unique last names
authors_unique_last_name = authors_without_orcid.drop_duplicates(subset=['lastname'])
investigators_unique_last_name = investigators_without_orcid.drop_duplicates(subset=['lastname'])
#unique last names count for authors
(authors_unique_last_name.lastname.value_counts().apply(pd.Series)[0] == 1).value_counts()
#unique last names count for investigators
(investigators_unique_last_name.lastname.value_counts().apply(pd.Series)[0] == 1).value_counts()
joined_on_unique_names = pd.merge(authors_unique_last_name, investigators_unique_last_name, on='lastname',
suffixes=('_authors', '_investigators'))[['id_authors', 'id_investigators']]
#198 more joined! However, this means there is not a perfect overlap between the datasets
joined_on_unique_names.nunique()
concat_orcid_unique_names = pd.concat([joined_on_unique_names, joined_on_orcid])
# Removing authors that have already been successfully joined
authors_left = authors[~authors['id'].isin(concat_orcid_unique_names['id_authors'])]
investigators_left = investigators[~investigators['id'].isin(concat_orcid_unique_names['id_investigators'])]
# Creating first letter last name combo
authors_left['first_letter_last_name'] = (authors_left.first_name.str[0] + '_' + authors_left.lastname).astype(str)
investigators_left['first_letter_last_name'] = (investigators_left.first_name.str[0] + '_' + investigators_left.lastname).astype(str)
authors_left_unique_combo = authors_left.drop_duplicates(subset=['first_letter_last_name'])
investigators_left_unique_combo = investigators_left.drop_duplicates(subset=['first_letter_last_name'])
joined_on_unique_names_combo = pd.merge(authors_left_unique_combo, investigators_left_unique_combo, on='first_letter_last_name',
suffixes=('_authors', '_investigators'))[['id_authors', 'id_investigators']]
joined_on_unique_names_combo.shape
concat_all_before_fuzzy = pd.concat([concat_orcid_unique_names,joined_on_unique_names_combo])
# Removing authors that have already been successfully joined
authors_for_fuzzy_join = authors[~authors['id'].isin(concat_all_before_fuzzy['id_authors'])]
investigators_for_fuzzy_join = investigators[~investigators['id'].isin(concat_all_before_fuzzy['id_investigators'])]
authors_for_fuzzy_join.shape
# Replace missing middle names with a neutral character
authors_for_fuzzy_join['middle_name'] = authors_for_fuzzy_join.middle_name.fillna('_')
investigators_for_fuzzy_join['middle_name'] = investigators_for_fuzzy_join.middle_name.fillna('_')
# Creating fuzzy join ID
authors_for_fuzzy_join['fuzzy_id'] = (authors_for_fuzzy_join.first_name + '_' + authors_for_fuzzy_join.middle_name + "_" + authors_for_fuzzy_join.lastname).astype(str)
investigators_for_fuzzy_join['fuzzy_id'] = (investigators_for_fuzzy_join.first_name + '_' + investigators_for_fuzzy_join.middle_name + "_" + investigators_for_fuzzy_join.lastname).astype(str)
# Renaming and selecting columns
authors_for_fuzzy_join = authors_for_fuzzy_join[['fuzzy_id', 'id']].rename(columns={'id':'id_authors'})
investigators_for_fuzzy_join = investigators_for_fuzzy_join[['fuzzy_id', 'id']].rename(columns={'id':'id_investigators'})
# get closest match algorithm based on jaro_winkler
def get_closest_match(x, list_strings):
best_match = None
highest_jw = 0
for current_string in list_strings:
current_score = jellyfish.jaro_winkler(x, current_string)
if(current_score > highest_jw):
highest_jw = current_score
best_match = current_string
return best_match
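# Tiny illustration with made-up strings (not from the dataset):
# get_closest_match('jon_a_smith', ['john_a_smith', 'jane_b_doe']) returns 'john_a_smith'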
###Output
_____no_output_____
###Markdown
Evaluation
###Code
investigators_with_orcid = investigators[investigators['orcid'].notnull()].rename(columns={'id':'id_investigators'})
authors_with_orcid = authors[authors['orcid'].notnull()].rename(columns={'id':'id_authors'})
authors_with_orcid['fuzzy_id'] = (authors_with_orcid.first_name + '_' + authors_with_orcid.middle_name + "_" + authors_with_orcid.lastname).astype(str)
investigators_with_orcid['fuzzy_id'] = (investigators_with_orcid.first_name + '_' + investigators_with_orcid.middle_name + "_" + investigators_with_orcid.lastname).astype(str)
authors_with_orcid.fuzzy_id = authors_with_orcid.fuzzy_id.map(lambda x: get_closest_match(x, investigators_with_orcid.fuzzy_id))
evaluate_fuzz = pd.merge(authors_with_orcid, investigators_with_orcid, how='left', on='fuzzy_id', suffixes=('_authors', '_investigators'))
(evaluate_fuzz['orcid_authors'] == evaluate_fuzz['orcid_investigators']).sum()
authors_with_orcid.shape
#correct matches using fuzzy join
133/206
investigators_for_fuzzy_join.fuzzy_id = investigators_for_fuzzy_join.fuzzy_id.map(lambda x: get_closest_match(x, authors_for_fuzzy_join.fuzzy_id))
fuzzy = pd.merge(authors_for_fuzzy_join, investigators_for_fuzzy_join, on='fuzzy_id')
final_fuzz = fuzzy[['id_authors', 'id_investigators']]
concat_all = pd.concat([final_fuzz, concat_all_before_fuzzy])
concat_all.shape
###Output
_____no_output_____ |
validation/matrix_eval.ipynb | ###Markdown
Evaluate simulated encounters versus Mercy health data using a co-occurrence conditional probability matrix
Disease and finding states appearing in encounters are tallied pair-wise to compute the probability of one state given another. This appears as a square matrix over all combinations of diseases and findings. A comparison of this matrix for the states common between simulated and real data measures the accuracy of the simulation's co-morbidity model.
JMA July 2021
###Code
import os, re, sys
import math
from pathlib import Path
import numpy as np
import pandas as pd
MATRIX_DIR = '.'
simulated_m = pd.read_csv('simulated_confidence_matrix_smoothed.csv')
real_m = pd.read_csv('real_confidence_matrix_smoothed.csv')
real_m.head()
simulated_m.head()
# Columns in one matrix and not in the other
real_cols = set(real_m.columns)
sim_cols = set(simulated_m.columns)
# Use this list as the common row and column index
common_cols = sorted(list(real_cols.intersection(sim_cols)))
common_len = len(common_cols)
common_len, len(real_cols.difference(sim_cols)), len(sim_cols.difference(real_cols))
# Need to subset and sort each matrix row and column.
def subset_matrix_by_index_labels(new_labels, square_df):
sh = square_df.shape
if sh[0] != sh[1]:
print('Matrix is not square')
return None
    # Apply the column index to the rows
new_index = square_df.columns
old_index = square_df.index
square_df = square_df.rename(index=dict(zip(old_index, new_index)))
# subset the rows
square_df = square_df.loc[new_labels,:]
square_df = square_df.sort_index(axis=0)
# then the columns
square_df = square_df[new_labels]
square_df = square_df.sort_index(axis=1)
return square_df
sim_common_m = subset_matrix_by_index_labels(common_cols, simulated_m)
np.fill_diagonal(sim_common_m.values, np.NaN)
print(sim_common_m.shape)
# Create a real matrix with only columns from the simulated matrix
real_common_m = subset_matrix_by_index_labels(common_cols, real_m)
np.fill_diagonal(real_common_m.values, np.NaN)
print(real_common_m.shape)
diff_common_m = real_common_m.values - sim_common_m.values
chisq = (diff_common_m * diff_common_m)/ real_common_m
np.nanmean(chisq.values)
def kl_div( mat_P, mat_Q):
return np.nansum(np.multiply(mat_P, (np.log(mat_P) - np.log(mat_Q))))
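# Note: KL divergence is asymmetric, so kl_div(P, Q) and kl_div(Q, P) generally differ;
# a symmetric option (not used below) would be 0.5 * (kl_div(P, Q) + kl_div(Q, P)).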
kl_div(real_common_m, sim_common_m)
import matplotlib.pyplot as plt
plt.hist(np.nansum(diff_common_m, axis=0))
simulated_m.columns
###Output
_____no_output_____ |
notebooks/PCA.draft.ipynb | ###Markdown
PCA The way the method works is well described here: https://habr.com/ru/post/304214/ The number of "groups" in the data on each PCA run equals the n_components parameter. The eigenvalue of the covariance matrix corresponding to each component can be treated as a measure of how important that group is in our data: the larger the value, the more of the data's spread we lose if we remove that group. Preserving the variance of the data is the main constraint of PCA. Each eigenvalue has a corresponding eigenvector, and each element of that eigenvector indicates how correlated membership of an object in that group is with the value of a feature. The order of the features is unchanged. Just run this notebook with "Restart and Run All"
###Code
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
from ipywidgets import interact
import seaborn as sns
%matplotlib inline
def plot_pca(pca, eigenvalues):
tot = sum(eigenvalues)
var_exp = [(i / tot)*100 for i in sorted(eigenvalues, reverse=True)]
cum_var_exp = np.cumsum(var_exp)
with plt.style.context('seaborn-whitegrid'):
plt.figure(figsize=(6, 4))
plt.bar(range(pca.n_components_), var_exp, alpha=0.5, align='center',
label='individual explained variance')
plt.step(range(pca.n_components_), cum_var_exp, where='mid',
label='cumulative explained variance')
plt.ylabel('Explained variance ratio')
plt.xlabel('Principal components')
plt.legend(loc='best')
plt.tight_layout()
plt.show()
def eigen_to_dict(eigenvalue, eigenvector):
eigendict = dict(zip(data.columns, eigenvector))
eigendict['Eigenvalue'] = eigenvalue
return eigendict
###Output
_____no_output_____
###Markdown
The number of principal components can be changed with the slider; in the cell below you can set the value to "all", "except_pic", or "pic" to choose which features are included in the PCA.
###Code
feature_set = 'all'  # options: 'all', 'except_pic', 'pic'
data = pd.read_csv('../data/all_features_unification.csv', index_col='vk_id')
columns = {
'all' : None,
'except_pic' : data.columns[:5].to_list() + ['positive_text'],
'pic' : ['is_nsfw', 'has_smile', 'number_of_faces', 'has_face']
}
cols = columns[feature_set]
if cols:
data = data[cols].copy(deep=True)
@interact
def show_pca(n=(2, data.shape[-1], 1)):
scaler = StandardScaler()
scaled_data = scaler.fit_transform(data)
pca = PCA(n_components=n)
transformed = pca.fit_transform(scaled_data)
n_samples = scaled_data.shape[0]
# We center the data and compute the sample covariance matrix.
scaled_data -= np.mean(scaled_data, axis=0)
cov_matrix = np.dot(scaled_data.T, scaled_data) / n_samples
eigenvalues = pca.explained_variance_
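    # Note: pca.explained_variance_ratio_ reports each component's share of the *total* variance,
    # while plot_pca above normalises by the variance of the selected components only, so the two
    # can differ when n is smaller than the number of features.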
plot_pca(pca, eigenvalues)
df = pd.DataFrame.from_records([eigen_to_dict(eigenvalue, eigenvector)
for eigenvalue, eigenvector in zip(eigenvalues, pca.components_)], index=range(1, n+1))
return df
###Output
_____no_output_____ |
code/phase0/working/TF-WalkThrough/Z.Train_SM_DDP_MNIST.ipynb | ###Markdown
MNIST SageMaker basic training and DDP training 1. Basic setup
###Code
import sagemaker
sagemaker_session = sagemaker.Session()
role = sagemaker.get_execution_role()
! df -h
# ! docker container prune -f
# ! docker image prune -f --all
# ! rm -rf /tmp/tmp*
# ! df -h
###Output
Filesystem Size Used Avail Use% Mounted on
devtmpfs 241G 80K 241G 1% /dev
tmpfs 241G 0 241G 0% /dev/shm
/dev/xvda1 109G 95G 14G 88% /
/dev/xvdf 984G 24G 911G 3% /home/ec2-user/SageMaker
###Markdown
2. Basic training
###Code
from sagemaker.tensorflow import TensorFlow
instance_type = "local_gpu"
estimator = TensorFlow(
entry_point="mnist_train.py",
source_dir="src",
role=role,
py_version="py37",
framework_version="2.4.1",
# For training with multinode distributed training, set this count. Example: 2
instance_count=1,
# For training with p3dn instance use - ml.p3dn.24xlarge, with p4dn instance use - ml.p4d.24xlarge
instance_type=instance_type,
)
estimator.fit(wait=False)
###Output
Creating 93w3xotbzp-algo-1-yejsf ...
Creating 93w3xotbzp-algo-1-yejsf ... done
Attaching to 93w3xotbzp-algo-1-yejsf
[36m93w3xotbzp-algo-1-yejsf |[0m 2021-10-04 05:10:47.706396: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:460] Initializing the SageMaker Profiler.
[36m93w3xotbzp-algo-1-yejsf |[0m 2021-10-04 05:10:47.706591: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:105] SageMaker Profiler is not enabled. The timeline writer thread will not be started, future recorded events will be dropped.
[36m93w3xotbzp-algo-1-yejsf |[0m 2021-10-04 05:10:47.711064: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
[36m93w3xotbzp-algo-1-yejsf |[0m 2021-10-04 05:10:47.749773: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:460] Initializing the SageMaker Profiler.
[36m93w3xotbzp-algo-1-yejsf |[0m 2021-10-04 05:10:49,595 sagemaker-training-toolkit INFO Imported framework sagemaker_tensorflow_container.training
[36m93w3xotbzp-algo-1-yejsf |[0m 2021-10-04 05:10:50,188 sagemaker-training-toolkit INFO Invoking user script
[36m93w3xotbzp-algo-1-yejsf |[0m
[36m93w3xotbzp-algo-1-yejsf |[0m Training Env:
[36m93w3xotbzp-algo-1-yejsf |[0m
[36m93w3xotbzp-algo-1-yejsf |[0m {
[36m93w3xotbzp-algo-1-yejsf |[0m "additional_framework_parameters": {},
[36m93w3xotbzp-algo-1-yejsf |[0m "channel_input_dirs": {},
[36m93w3xotbzp-algo-1-yejsf |[0m "current_host": "algo-1-yejsf",
[36m93w3xotbzp-algo-1-yejsf |[0m "framework_module": "sagemaker_tensorflow_container.training:main",
[36m93w3xotbzp-algo-1-yejsf |[0m "hosts": [
[36m93w3xotbzp-algo-1-yejsf |[0m "algo-1-yejsf"
[36m93w3xotbzp-algo-1-yejsf |[0m ],
[36m93w3xotbzp-algo-1-yejsf |[0m "hyperparameters": {
[36m93w3xotbzp-algo-1-yejsf |[0m "model_dir": "s3://sagemaker-us-east-1-057716757052/tensorflow-training-2021-10-04-05-07-24-050/model"
[36m93w3xotbzp-algo-1-yejsf |[0m },
[36m93w3xotbzp-algo-1-yejsf |[0m "input_config_dir": "/opt/ml/input/config",
[36m93w3xotbzp-algo-1-yejsf |[0m "input_data_config": {},
[36m93w3xotbzp-algo-1-yejsf |[0m "input_dir": "/opt/ml/input",
[36m93w3xotbzp-algo-1-yejsf |[0m "is_master": true,
[36m93w3xotbzp-algo-1-yejsf |[0m "job_name": "tensorflow-training-2021-10-04-05-07-24-050",
[36m93w3xotbzp-algo-1-yejsf |[0m "log_level": 20,
[36m93w3xotbzp-algo-1-yejsf |[0m "master_hostname": "algo-1-yejsf",
[36m93w3xotbzp-algo-1-yejsf |[0m "model_dir": "/opt/ml/model",
[36m93w3xotbzp-algo-1-yejsf |[0m "module_dir": "s3://sagemaker-us-east-1-057716757052/tensorflow-training-2021-10-04-05-07-24-050/source/sourcedir.tar.gz",
[36m93w3xotbzp-algo-1-yejsf |[0m "module_name": "mnist_train",
[36m93w3xotbzp-algo-1-yejsf |[0m "network_interface_name": "eth0",
[36m93w3xotbzp-algo-1-yejsf |[0m "num_cpus": 64,
[36m93w3xotbzp-algo-1-yejsf |[0m "num_gpus": 8,
[36m93w3xotbzp-algo-1-yejsf |[0m "output_data_dir": "/opt/ml/output/data",
[36m93w3xotbzp-algo-1-yejsf |[0m "output_dir": "/opt/ml/output",
[36m93w3xotbzp-algo-1-yejsf |[0m "output_intermediate_dir": "/opt/ml/output/intermediate",
[36m93w3xotbzp-algo-1-yejsf |[0m "resource_config": {
[36m93w3xotbzp-algo-1-yejsf |[0m "current_host": "algo-1-yejsf",
[36m93w3xotbzp-algo-1-yejsf |[0m "hosts": [
[36m93w3xotbzp-algo-1-yejsf |[0m "algo-1-yejsf"
[36m93w3xotbzp-algo-1-yejsf |[0m ]
[36m93w3xotbzp-algo-1-yejsf |[0m },
[36m93w3xotbzp-algo-1-yejsf |[0m "user_entry_point": "mnist_train.py"
[36m93w3xotbzp-algo-1-yejsf |[0m }
[36m93w3xotbzp-algo-1-yejsf |[0m
[36m93w3xotbzp-algo-1-yejsf |[0m Environment variables:
[36m93w3xotbzp-algo-1-yejsf |[0m
[36m93w3xotbzp-algo-1-yejsf |[0m SM_HOSTS=["algo-1-yejsf"]
[36m93w3xotbzp-algo-1-yejsf |[0m SM_NETWORK_INTERFACE_NAME=eth0
[36m93w3xotbzp-algo-1-yejsf |[0m SM_HPS={"model_dir":"s3://sagemaker-us-east-1-057716757052/tensorflow-training-2021-10-04-05-07-24-050/model"}
[36m93w3xotbzp-algo-1-yejsf |[0m SM_USER_ENTRY_POINT=mnist_train.py
[36m93w3xotbzp-algo-1-yejsf |[0m SM_FRAMEWORK_PARAMS={}
[36m93w3xotbzp-algo-1-yejsf |[0m SM_RESOURCE_CONFIG={"current_host":"algo-1-yejsf","hosts":["algo-1-yejsf"]}
[36m93w3xotbzp-algo-1-yejsf |[0m SM_INPUT_DATA_CONFIG={}
[36m93w3xotbzp-algo-1-yejsf |[0m SM_OUTPUT_DATA_DIR=/opt/ml/output/data
[36m93w3xotbzp-algo-1-yejsf |[0m SM_CHANNELS=[]
[36m93w3xotbzp-algo-1-yejsf |[0m SM_CURRENT_HOST=algo-1-yejsf
[36m93w3xotbzp-algo-1-yejsf |[0m SM_MODULE_NAME=mnist_train
[36m93w3xotbzp-algo-1-yejsf |[0m SM_LOG_LEVEL=20
[36m93w3xotbzp-algo-1-yejsf |[0m SM_FRAMEWORK_MODULE=sagemaker_tensorflow_container.training:main
[36m93w3xotbzp-algo-1-yejsf |[0m SM_INPUT_DIR=/opt/ml/input
[36m93w3xotbzp-algo-1-yejsf |[0m SM_INPUT_CONFIG_DIR=/opt/ml/input/config
[36m93w3xotbzp-algo-1-yejsf |[0m SM_OUTPUT_DIR=/opt/ml/output
[36m93w3xotbzp-algo-1-yejsf |[0m SM_NUM_CPUS=64
[36m93w3xotbzp-algo-1-yejsf |[0m SM_NUM_GPUS=8
[36m93w3xotbzp-algo-1-yejsf |[0m SM_MODEL_DIR=/opt/ml/model
[36m93w3xotbzp-algo-1-yejsf |[0m SM_MODULE_DIR=s3://sagemaker-us-east-1-057716757052/tensorflow-training-2021-10-04-05-07-24-050/source/sourcedir.tar.gz
[36m93w3xotbzp-algo-1-yejsf |[0m SM_TRAINING_ENV={"additional_framework_parameters":{},"channel_input_dirs":{},"current_host":"algo-1-yejsf","framework_module":"sagemaker_tensorflow_container.training:main","hosts":["algo-1-yejsf"],"hyperparameters":{"model_dir":"s3://sagemaker-us-east-1-057716757052/tensorflow-training-2021-10-04-05-07-24-050/model"},"input_config_dir":"/opt/ml/input/config","input_data_config":{},"input_dir":"/opt/ml/input","is_master":true,"job_name":"tensorflow-training-2021-10-04-05-07-24-050","log_level":20,"master_hostname":"algo-1-yejsf","model_dir":"/opt/ml/model","module_dir":"s3://sagemaker-us-east-1-057716757052/tensorflow-training-2021-10-04-05-07-24-050/source/sourcedir.tar.gz","module_name":"mnist_train","network_interface_name":"eth0","num_cpus":64,"num_gpus":8,"output_data_dir":"/opt/ml/output/data","output_dir":"/opt/ml/output","output_intermediate_dir":"/opt/ml/output/intermediate","resource_config":{"current_host":"algo-1-yejsf","hosts":["algo-1-yejsf"]},"user_entry_point":"mnist_train.py"}
[36m93w3xotbzp-algo-1-yejsf |[0m SM_USER_ARGS=["--model_dir","s3://sagemaker-us-east-1-057716757052/tensorflow-training-2021-10-04-05-07-24-050/model"]
[36m93w3xotbzp-algo-1-yejsf |[0m SM_OUTPUT_INTERMEDIATE_DIR=/opt/ml/output/intermediate
[36m93w3xotbzp-algo-1-yejsf |[0m SM_HP_MODEL_DIR=s3://sagemaker-us-east-1-057716757052/tensorflow-training-2021-10-04-05-07-24-050/model
[36m93w3xotbzp-algo-1-yejsf |[0m PYTHONPATH=/opt/ml/code:/usr/local/bin:/usr/local/lib/python37.zip:/usr/local/lib/python3.7:/usr/local/lib/python3.7/lib-dynload:/usr/local/lib/python3.7/site-packages
[36m93w3xotbzp-algo-1-yejsf |[0m
[36m93w3xotbzp-algo-1-yejsf |[0m Invoking script with the following command:
[36m93w3xotbzp-algo-1-yejsf |[0m
[36m93w3xotbzp-algo-1-yejsf |[0m /usr/local/bin/python3.7 mnist_train.py --model_dir s3://sagemaker-us-east-1-057716757052/tensorflow-training-2021-10-04-05-07-24-050/model
[36m93w3xotbzp-algo-1-yejsf |[0m
[36m93w3xotbzp-algo-1-yejsf |[0m
[36m93w3xotbzp-algo-1-yejsf |[0m TensorFlow version: 2.4.1
[36m93w3xotbzp-algo-1-yejsf |[0m Eager execution: True
[36m93w3xotbzp-algo-1-yejsf |[0m Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11493376/11490434 [==============================] - 0s 0us/step
[36m93w3xotbzp-algo-1-yejsf |[0m type: <class 'numpy.ndarray'>
[36m93w3xotbzp-algo-1-yejsf |[0m shape: (60000, 28, 28)
[36m93w3xotbzp-algo-1-yejsf |[0m mnist_labels: (60000,)
[36m93w3xotbzp-algo-1-yejsf |[0m [2021-10-04 05:11:01.112 0696a6502669:149 INFO utils.py:27] RULE_JOB_STOP_SIGNAL_FILENAME: None
[36m93w3xotbzp-algo-1-yejsf |[0m [2021-10-04 05:11:01.145 0696a6502669:149 INFO profiler_config_parser.py:102] Unable to find config at /opt/ml/input/config/profilerconfig.json. Profiler is disabled.
[36m93w3xotbzp-algo-1-yejsf |[0m Step #0 Loss: 2.316157
[36m93w3xotbzp-algo-1-yejsf |[0m Step #1000 Loss: 0.056291
[36m93w3xotbzp-algo-1-yejsf |[0m Step #2000 Loss: 0.052377
[36m93w3xotbzp-algo-1-yejsf |[0m Step #3000 Loss: 0.012640
[36m93w3xotbzp-algo-1-yejsf |[0m Step #4000 Loss: 0.024718
[36m93w3xotbzp-algo-1-yejsf |[0m Step #5000 Loss: 0.007122
[36m93w3xotbzp-algo-1-yejsf |[0m Step #6000 Loss: 0.002134
[36m93w3xotbzp-algo-1-yejsf |[0m Step #7000 Loss: 0.012094
[36m93w3xotbzp-algo-1-yejsf |[0m Step #8000 Loss: 0.007914
[36m93w3xotbzp-algo-1-yejsf |[0m Step #9000 Loss: 0.010098
[36m93w3xotbzp-algo-1-yejsf |[0m 2021-10-04 05:10:50.342655: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:460] Initializing the SageMaker Profiler.
[36m93w3xotbzp-algo-1-yejsf |[0m 2021-10-04 05:10:50.342814: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:105] SageMaker Profiler is not enabled. The timeline writer thread will not be started, future recorded events will be dropped.
[36m93w3xotbzp-algo-1-yejsf |[0m 2021-10-04 05:10:50.384894: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:460] Initializing the SageMaker Profiler.
[36m93w3xotbzp-algo-1-yejsf |[0m 2021-10-04 05:11:52.718112: W tensorflow/python/util/util.cc:348] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.
[36m93w3xotbzp-algo-1-yejsf |[0m INFO:tensorflow:Assets written to: /opt/ml/model/1/assets
[36m93w3xotbzp-algo-1-yejsf |[0m INFO:tensorflow:Assets written to: /opt/ml/model/1/assets
[36m93w3xotbzp-algo-1-yejsf |[0m
[36m93w3xotbzp-algo-1-yejsf |[0m 2021-10-04 05:11:55,251 sagemaker-training-toolkit INFO Reporting training SUCCESS
[36m93w3xotbzp-algo-1-yejsf exited with code 0
[0mAborting on container exit...
===== Job Complete =====
###Markdown
3. DDP Local Mode
###Code
from sagemaker.tensorflow import TensorFlow
estimator = TensorFlow(
base_job_name="tensorflow2-smdataparallel-mnist",
source_dir="src",
entry_point="mnist_train_DDP.py",
role=role,
py_version="py37",
framework_version="2.4.1",
instance_count=1,
# For training with p3dn instance use - ml.p3dn.24xlarge, with p4dn instance use - ml.p4d.24xlarge
instance_type= instance_type,
# sagemaker_session=sagemaker_session,
# Training using SMDataParallel Distributed Training Framework
distribution={"smdistributed": {"dataparallel": {"enabled": True}}},
)
estimator.fit()
###Output
Creating 7rrh9djrip-algo-1-gy43h ...
Creating 7rrh9djrip-algo-1-gy43h ... done
Attaching to 7rrh9djrip-algo-1-gy43h
[36m7rrh9djrip-algo-1-gy43h |[0m 2021-10-04 05:12:12.495853: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:460] Initializing the SageMaker Profiler.
[36m7rrh9djrip-algo-1-gy43h |[0m 2021-10-04 05:12:12.496041: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:105] SageMaker Profiler is not enabled. The timeline writer thread will not be started, future recorded events will be dropped.
[36m7rrh9djrip-algo-1-gy43h |[0m 2021-10-04 05:12:12.500470: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
[36m7rrh9djrip-algo-1-gy43h |[0m 2021-10-04 05:12:12.538159: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:460] Initializing the SageMaker Profiler.
[36m7rrh9djrip-algo-1-gy43h |[0m 2021-10-04 05:12:14,282 sagemaker-training-toolkit INFO Imported framework sagemaker_tensorflow_container.training
[36m7rrh9djrip-algo-1-gy43h |[0m 2021-10-04 05:12:14,999 sagemaker-training-toolkit INFO Starting MPI run as worker node.
[36m7rrh9djrip-algo-1-gy43h |[0m 2021-10-04 05:12:14,999 sagemaker-training-toolkit INFO Creating SSH daemon.
[36m7rrh9djrip-algo-1-gy43h |[0m 2021-10-04 05:12:15,006 sagemaker-training-toolkit INFO Waiting for MPI workers to establish their SSH connections
[36m7rrh9djrip-algo-1-gy43h |[0m 2021-10-04 05:12:15,007 sagemaker-training-toolkit INFO Network interface name: eth0
[36m7rrh9djrip-algo-1-gy43h |[0m 2021-10-04 05:12:15,007 sagemaker-training-toolkit INFO Host: ['algo-1-gy43h']
[36m7rrh9djrip-algo-1-gy43h |[0m 2021-10-04 05:12:15,008 sagemaker-training-toolkit INFO instance type: local_gpu
[36m7rrh9djrip-algo-1-gy43h |[0m 2021-10-04 05:12:15,094 sagemaker-training-toolkit INFO Invoking user script
[36m7rrh9djrip-algo-1-gy43h |[0m
[36m7rrh9djrip-algo-1-gy43h |[0m Training Env:
[36m7rrh9djrip-algo-1-gy43h |[0m
[36m7rrh9djrip-algo-1-gy43h |[0m {
[36m7rrh9djrip-algo-1-gy43h |[0m "additional_framework_parameters": {
[36m7rrh9djrip-algo-1-gy43h |[0m "sagemaker_distributed_dataparallel_enabled": true,
[36m7rrh9djrip-algo-1-gy43h |[0m "sagemaker_instance_type": "local_gpu",
[36m7rrh9djrip-algo-1-gy43h |[0m "sagemaker_distributed_dataparallel_custom_mpi_options": ""
[36m7rrh9djrip-algo-1-gy43h |[0m },
[36m7rrh9djrip-algo-1-gy43h |[0m "channel_input_dirs": {},
[36m7rrh9djrip-algo-1-gy43h |[0m "current_host": "algo-1-gy43h",
[36m7rrh9djrip-algo-1-gy43h |[0m "framework_module": "sagemaker_tensorflow_container.training:main",
[36m7rrh9djrip-algo-1-gy43h |[0m "hosts": [
[36m7rrh9djrip-algo-1-gy43h |[0m "algo-1-gy43h"
[36m7rrh9djrip-algo-1-gy43h |[0m ],
[36m7rrh9djrip-algo-1-gy43h |[0m "hyperparameters": {
[36m7rrh9djrip-algo-1-gy43h |[0m "model_dir": "s3://sagemaker-us-east-1-057716757052/tensorflow2-smdataparallel-mnist-2021-10-04-05-12-09-281/model"
[36m7rrh9djrip-algo-1-gy43h |[0m },
[36m7rrh9djrip-algo-1-gy43h |[0m "input_config_dir": "/opt/ml/input/config",
[36m7rrh9djrip-algo-1-gy43h |[0m "input_data_config": {},
[36m7rrh9djrip-algo-1-gy43h |[0m "input_dir": "/opt/ml/input",
[36m7rrh9djrip-algo-1-gy43h |[0m "is_master": true,
[36m7rrh9djrip-algo-1-gy43h |[0m "job_name": "tensorflow2-smdataparallel-mnist-2021-10-04-05-12-09-281",
[36m7rrh9djrip-algo-1-gy43h |[0m "log_level": 20,
[36m7rrh9djrip-algo-1-gy43h |[0m "master_hostname": "algo-1-gy43h",
[36m7rrh9djrip-algo-1-gy43h |[0m "model_dir": "/opt/ml/model",
[36m7rrh9djrip-algo-1-gy43h |[0m "module_dir": "s3://sagemaker-us-east-1-057716757052/tensorflow2-smdataparallel-mnist-2021-10-04-05-12-09-281/source/sourcedir.tar.gz",
[36m7rrh9djrip-algo-1-gy43h |[0m "module_name": "mnist_train_DDP",
[36m7rrh9djrip-algo-1-gy43h |[0m "network_interface_name": "eth0",
[36m7rrh9djrip-algo-1-gy43h |[0m "num_cpus": 64,
[36m7rrh9djrip-algo-1-gy43h |[0m "num_gpus": 8,
[36m7rrh9djrip-algo-1-gy43h |[0m "output_data_dir": "/opt/ml/output/data",
[36m7rrh9djrip-algo-1-gy43h |[0m "output_dir": "/opt/ml/output",
[36m7rrh9djrip-algo-1-gy43h |[0m "output_intermediate_dir": "/opt/ml/output/intermediate",
[36m7rrh9djrip-algo-1-gy43h |[0m "resource_config": {
[36m7rrh9djrip-algo-1-gy43h |[0m "current_host": "algo-1-gy43h",
[36m7rrh9djrip-algo-1-gy43h |[0m "hosts": [
[36m7rrh9djrip-algo-1-gy43h |[0m "algo-1-gy43h"
[36m7rrh9djrip-algo-1-gy43h |[0m ]
[36m7rrh9djrip-algo-1-gy43h |[0m },
[36m7rrh9djrip-algo-1-gy43h |[0m "user_entry_point": "mnist_train_DDP.py"
[36m7rrh9djrip-algo-1-gy43h |[0m }
[36m7rrh9djrip-algo-1-gy43h |[0m
[36m7rrh9djrip-algo-1-gy43h |[0m Environment variables:
[36m7rrh9djrip-algo-1-gy43h |[0m
[36m7rrh9djrip-algo-1-gy43h |[0m SM_HOSTS=["algo-1-gy43h"]
[36m7rrh9djrip-algo-1-gy43h |[0m SM_NETWORK_INTERFACE_NAME=eth0
[36m7rrh9djrip-algo-1-gy43h |[0m SM_HPS={"model_dir":"s3://sagemaker-us-east-1-057716757052/tensorflow2-smdataparallel-mnist-2021-10-04-05-12-09-281/model"}
[36m7rrh9djrip-algo-1-gy43h |[0m SM_USER_ENTRY_POINT=mnist_train_DDP.py
[36m7rrh9djrip-algo-1-gy43h |[0m SM_FRAMEWORK_PARAMS={"sagemaker_distributed_dataparallel_custom_mpi_options":"","sagemaker_distributed_dataparallel_enabled":true,"sagemaker_instance_type":"local_gpu"}
[36m7rrh9djrip-algo-1-gy43h |[0m SM_RESOURCE_CONFIG={"current_host":"algo-1-gy43h","hosts":["algo-1-gy43h"]}
[36m7rrh9djrip-algo-1-gy43h |[0m SM_INPUT_DATA_CONFIG={}
[36m7rrh9djrip-algo-1-gy43h |[0m SM_OUTPUT_DATA_DIR=/opt/ml/output/data
[36m7rrh9djrip-algo-1-gy43h |[0m SM_CHANNELS=[]
[36m7rrh9djrip-algo-1-gy43h |[0m SM_CURRENT_HOST=algo-1-gy43h
[36m7rrh9djrip-algo-1-gy43h |[0m SM_MODULE_NAME=mnist_train_DDP
[36m7rrh9djrip-algo-1-gy43h |[0m SM_LOG_LEVEL=20
[36m7rrh9djrip-algo-1-gy43h |[0m SM_FRAMEWORK_MODULE=sagemaker_tensorflow_container.training:main
[36m7rrh9djrip-algo-1-gy43h |[0m SM_INPUT_DIR=/opt/ml/input
[36m7rrh9djrip-algo-1-gy43h |[0m SM_INPUT_CONFIG_DIR=/opt/ml/input/config
[36m7rrh9djrip-algo-1-gy43h |[0m SM_OUTPUT_DIR=/opt/ml/output
[36m7rrh9djrip-algo-1-gy43h |[0m SM_NUM_CPUS=64
[36m7rrh9djrip-algo-1-gy43h |[0m SM_NUM_GPUS=8
[36m7rrh9djrip-algo-1-gy43h |[0m SM_MODEL_DIR=/opt/ml/model
[36m7rrh9djrip-algo-1-gy43h |[0m SM_MODULE_DIR=s3://sagemaker-us-east-1-057716757052/tensorflow2-smdataparallel-mnist-2021-10-04-05-12-09-281/source/sourcedir.tar.gz
[36m7rrh9djrip-algo-1-gy43h |[0m SM_TRAINING_ENV={"additional_framework_parameters":{"sagemaker_distributed_dataparallel_custom_mpi_options":"","sagemaker_distributed_dataparallel_enabled":true,"sagemaker_instance_type":"local_gpu"},"channel_input_dirs":{},"current_host":"algo-1-gy43h","framework_module":"sagemaker_tensorflow_container.training:main","hosts":["algo-1-gy43h"],"hyperparameters":{"model_dir":"s3://sagemaker-us-east-1-057716757052/tensorflow2-smdataparallel-mnist-2021-10-04-05-12-09-281/model"},"input_config_dir":"/opt/ml/input/config","input_data_config":{},"input_dir":"/opt/ml/input","is_master":true,"job_name":"tensorflow2-smdataparallel-mnist-2021-10-04-05-12-09-281","log_level":20,"master_hostname":"algo-1-gy43h","model_dir":"/opt/ml/model","module_dir":"s3://sagemaker-us-east-1-057716757052/tensorflow2-smdataparallel-mnist-2021-10-04-05-12-09-281/source/sourcedir.tar.gz","module_name":"mnist_train_DDP","network_interface_name":"eth0","num_cpus":64,"num_gpus":8,"output_data_dir":"/opt/ml/output/data","output_dir":"/opt/ml/output","output_intermediate_dir":"/opt/ml/output/intermediate","resource_config":{"current_host":"algo-1-gy43h","hosts":["algo-1-gy43h"]},"user_entry_point":"mnist_train_DDP.py"}
[36m7rrh9djrip-algo-1-gy43h |[0m SM_USER_ARGS=["--model_dir","s3://sagemaker-us-east-1-057716757052/tensorflow2-smdataparallel-mnist-2021-10-04-05-12-09-281/model"]
[36m7rrh9djrip-algo-1-gy43h |[0m SM_OUTPUT_INTERMEDIATE_DIR=/opt/ml/output/intermediate
[36m7rrh9djrip-algo-1-gy43h |[0m SM_HP_MODEL_DIR=s3://sagemaker-us-east-1-057716757052/tensorflow2-smdataparallel-mnist-2021-10-04-05-12-09-281/model
[36m7rrh9djrip-algo-1-gy43h |[0m PYTHONPATH=/opt/ml/code:/usr/local/bin:/usr/local/lib/python37.zip:/usr/local/lib/python3.7:/usr/local/lib/python3.7/lib-dynload:/usr/local/lib/python3.7/site-packages
[36m7rrh9djrip-algo-1-gy43h |[0m
[36m7rrh9djrip-algo-1-gy43h |[0m Invoking script with the following command:
[36m7rrh9djrip-algo-1-gy43h |[0m
[36m7rrh9djrip-algo-1-gy43h |[0m mpirun --host algo-1-gy43h -np 8 --allow-run-as-root --tag-output --oversubscribe -mca btl_tcp_if_include eth0 -mca oob_tcp_if_include eth0 -mca plm_rsh_no_tree_spawn 1 -mca pml ob1 -mca btl ^openib -mca orte_abort_on_non_zero_status 1 -mca btl_vader_single_copy_mechanism none -mca plm_rsh_num_concurrent 1 -x NCCL_SOCKET_IFNAME=eth0 -x NCCL_DEBUG=INFO -x LD_LIBRARY_PATH -x PATH -x SMDATAPARALLEL_USE_SINGLENODE=1 -x FI_PROVIDER=efa -x RDMAV_FORK_SAFE=1 -x LD_PRELOAD=/usr/local/lib/python3.7/site-packages/gethostname.cpython-37m-x86_64-linux-gnu.so smddprun /usr/local/bin/python3.7 -m mpi4py mnist_train_DDP.py --model_dir s3://sagemaker-us-east-1-057716757052/tensorflow2-smdataparallel-mnist-2021-10-04-05-12-09-281/model
[36m7rrh9djrip-algo-1-gy43h |[0m
[36m7rrh9djrip-algo-1-gy43h |[0m
[36m7rrh9djrip-algo-1-gy43h |[0m [1,6]<stdout>:TensorFlow version: 2.4.1
[36m7rrh9djrip-algo-1-gy43h |[0m [1,6]<stdout>:Eager execution: True
[36m7rrh9djrip-algo-1-gy43h |[0m [1,4]<stdout>:TensorFlow version: 2.4.1
[36m7rrh9djrip-algo-1-gy43h |[0m [1,4]<stdout>:Eager execution: True
[36m7rrh9djrip-algo-1-gy43h |[0m [1,7]<stdout>:TensorFlow version: 2.4.1
[36m7rrh9djrip-algo-1-gy43h |[0m [1,7]<stdout>:Eager execution: True
[36m7rrh9djrip-algo-1-gy43h |[0m [1,3]<stdout>:TensorFlow version: 2.4.1
[36m7rrh9djrip-algo-1-gy43h |[0m [1,3]<stdout>:Eager execution: True
[36m7rrh9djrip-algo-1-gy43h |[0m [1,5]<stdout>:TensorFlow version: 2.4.1
[36m7rrh9djrip-algo-1-gy43h |[0m [1,5]<stdout>:Eager execution: True[1,5]<stdout>:
[36m7rrh9djrip-algo-1-gy43h |[0m [1,1]<stdout>:TensorFlow version: 2.4.1
[36m7rrh9djrip-algo-1-gy43h |[0m [1,1]<stdout>:Eager execution: True
[36m7rrh9djrip-algo-1-gy43h |[0m [1,2]<stdout>:TensorFlow version: 2.4.1
[36m7rrh9djrip-algo-1-gy43h |[0m [1,2]<stdout>:Eager execution: True
[36m7rrh9djrip-algo-1-gy43h |[0m [1,0]<stdout>:TensorFlow version: 2.4.1
[36m7rrh9djrip-algo-1-gy43h |[0m [1,0]<stdout>:Eager execution: True
[36m7rrh9djrip-algo-1-gy43h |[0m [1,0]<stdout>:algo-1-gy43h:618:618 [0] NCCL INFO Bootstrap : Using [0]eth0:172.18.0.2<0>
[36m7rrh9djrip-algo-1-gy43h |[0m [1,0]<stdout>:
[36m7rrh9djrip-algo-1-gy43h |[0m [1,0]<stdout>:algo-1-gy43h:618:618 [0] find_ofi_provider:542 NCCL WARN NET/OFI Couldn't find any optimal provider
[36m7rrh9djrip-algo-1-gy43h |[0m [1,0]<stdout>:algo-1-gy43h:618:618 [0] NCCL INFO NET/IB : No device found.
[36m7rrh9djrip-algo-1-gy43h |[0m [1,0]<stdout>:algo-1-gy43h:618:618 [0] NCCL INFO NET/Socket : Using [0]eth0:172.18.0.2<0>
[36m7rrh9djrip-algo-1-gy43h |[0m [1,0]<stdout>:algo-1-gy43h:618:618 [0] NCCL INFO Using network Socket
[36m7rrh9djrip-algo-1-gy43h |[0m [1,0]<stdout>:NCCL version 2.7.8+cuda11.0
... [ranks 1-7 print the same NCCL Bootstrap, "NET/OFI Couldn't find any optimal provider" warning, "NET/IB : No device found", and "NET/Socket : Using [0]eth0" messages as rank 0 above] ...
[36m7rrh9djrip-algo-1-gy43h |[0m [1,6]<stdout>:algo-1-gy43h:171:171 [6] NCCL INFO threadThresholds 8/8/64 | 64/8/64 | 8/8/64
[36m7rrh9djrip-algo-1-gy43h |[0m [1,6]<stdout>:algo-1-gy43h:171:171 [6] NCCL INFO Trees [0] 7/-1/-1->6->5|5->6->7/-1/-1 [1] 7/-1/-1->6->5|5->6->7/-1/-1 [2] 5/-1/-1->6->7|7->6->5/-1/-1 [3] 5/-1/-1->6->7|7->6->5/-1/-1 [4] 2/-1/-1->6->4|4->6->2/-1/-1 [5] 4/-1/-1->6->2|2->6->4/-1/-1 [6] 7/-1/-1->6->5|5->6->7/-1/-1 [7] 7/-1/-1->6->5|5->6->7/-1/-1 [8] 5/-1/-1->6->7|7->6->5/-1/-1 [9] 5/-1/-1->6->7|7->6->5/-1/-1 [10] 2/-1/-1->6->4|4->6->2/-1/-1 [11] 4/-1/-1->6->2|2->6->4/-1/-1
[36m7rrh9djrip-algo-1-gy43h |[0m [1,5]<stdout>:algo-1-gy43h:170:170 [5] NCCL INFO threadThresholds 8/8/64 | 64/8/64 | 8/8/64
[36m7rrh9djrip-algo-1-gy43h |[0m [1,5]<stdout>:algo-1-gy43h:170:170 [5] NCCL INFO Trees [0] 6/-1/-1->5->1|1->5->6/-1/-1 [1] 6/-1/-1->5->1|1->5->6/-1/-1 [2] 1/-1/-1->5->6|6->5->1/-1/-1 [3] 1/-1/-1->5->6|6->5->1/-1/-1 [4] 4/-1/-1->5->7|7->5->4/-1/-1 [5] 7/-1/-1->5->4|4->5->7/-1/-1 [6] 6/-1/-1->5->1|1->5->6/-1/-1 [7] 6/-1/-1->5->1|1->5->6/-1/-1 [8] 1/-1/-1->5->6|6->5->1/-1/-1 [9] 1/-1/-1->5->6|6->5->1/-1/-1 [10] 4/-1/-1->5->7|7->5->4/-1/-1 [11] 7/-1/-1->5->4|4->5->7/-1/-1
[36m7rrh9djrip-algo-1-gy43h |[0m [1,7]<stdout>:algo-1-gy43h:172:172 [7] NCCL INFO threadThresholds 8/8/64 | 64/8/64 | 8/8/64
[36m7rrh9djrip-algo-1-gy43h |[0m [1,0]<stdout>:algo-1-gy43h:618:618 [0] NCCL INFO Channel 00/12 : 0 3 2 1 5 6 7 4
[36m7rrh9djrip-algo-1-gy43h |[0m [1,0]<stdout>:algo-1-gy43h:618:618 [0] NCCL INFO Channel 01/12 : 0 3 2 1 5 6 7 4
[36m7rrh9djrip-algo-1-gy43h |[0m [1,0]<stdout>:algo-1-gy43h:618:618 [0] NCCL INFO Channel 02/12 : 0 4 7 6 5 1 2 3
[36m7rrh9djrip-algo-1-gy43h |[0m [1,7]<stdout>:algo-1-gy43h:172:172 [7] NCCL INFO Trees [0] 4/-1/-1->7->6|6->7->4/-1/-1 [1] 4/-1/-1->7->6|6->7->4/-1/-1 [2] 6/-1/-1->7->4|4->7->6/-1/-1 [3] 6/-1/-1->7->4|4->7->6/-1/-1 [4] 5/-1/-1->7->3|3->7->5/-1/-1 [5] 3/-1/-1->7->5|5->7->3/-1/-1 [6] 4/-1/-1->7->6|6->7->4/-1/-1 [7] 4/-1/-1->7->6|6->7->4/-1/-1 [8] 6/-1/-1->7->4|4->7->6/-1/-1 [9] 6/-1/-1->7->4|4->7->6/-1/-1 [10] 5/-1/-1->7->3|3->7->5/-1/-1 [11] 3/-1/-1->7->5|5->7->3/-1/-1
[36m7rrh9djrip-algo-1-gy43h |[0m [1,0]<stdout>:algo-1-gy43h:618:618 [0] NCCL INFO Channel 03/12 : 0 4 7 6 5 1 2 3
[36m7rrh9djrip-algo-1-gy43h |[0m [1,0]<stdout>:algo-1-gy43h:618:618 [0] NCCL INFO Channel 04/12 : 0 1 3 7 5 4 6 2
[36m7rrh9djrip-algo-1-gy43h |[0m [1,1]<stdout>:algo-1-gy43h:163:163 [1] NCCL INFO threadThresholds 8/8/64 | 64/8/64 | 8/8/64
[36m7rrh9djrip-algo-1-gy43h |[0m [1,0]<stdout>:algo-1-gy43h:618:618 [0] NCCL INFO Channel 05/12 : 0 2 6 4 5 7 3 1
[36m7rrh9djrip-algo-1-gy43h |[0m [1,0]<stdout>:algo-1-gy43h:618:618 [0] NCCL INFO Channel 06/12 : 0 3 2 1 5 6 7 4
[36m7rrh9djrip-algo-1-gy43h |[0m [1,0]<stdout>:algo-1-gy43h:618:618 [0] NCCL INFO Channel 07/12 : 0 3 2 1 5 6 7 4
[36m7rrh9djrip-algo-1-gy43h |[0m [1,3]<stdout>:algo-1-gy43h:167:167 [3] NCCL INFO threadThresholds 8/8/64 | 64/8/64 | 8/8/64
[36m7rrh9djrip-algo-1-gy43h |[0m [1,2]<stdout>:algo-1-gy43h:165:165 [2] NCCL INFO threadThresholds 8/8/64 | 64/8/64 | 8/8/64
[36m7rrh9djrip-algo-1-gy43h |[0m [1,2]<stdout>:algo-1-gy43h:165:165 [2] NCCL INFO Trees [0] 1/-1/-1->2->3|3->2->1/-1/-1 [1] 1/-1/-1->2->3|3->2->1/-1/-1 [2] 3/-1/-1->2->1|1->2->3/-1/-1 [3] 3/-1/-1->2->1|1->2->3/-1/-1 [4] -1/-1/-1->2->6|6->2->-1/-1/-1 [5] 6/-1/-1->2->0|0->2->6/-1/-1 [6] 1/-1/-1->2->3|3->2->1/-1/-1 [7] 1/-1/-1->2->3|3->2->1/-1/-1 [8] 3/-1/-1->2->1|1->2->3/-1/-1 [9] 3/-1/-1->2->1|1->2->3/-1/-1 [10] -1/-1/-1->2->6|6->2->-1/-1/-1 [11] 6/-1/-1->2->0|0->2->6/-1/-1
[36m7rrh9djrip-algo-1-gy43h |[0m [1,1]<stdout>:algo-1-gy43h:163:163 [1] NCCL INFO Trees [0] 5/-1/-1->1->2|2->1->5/-1/-1 [1] 5/-1/-1->1->2|2->1->5/-1/-1 [2] 2/-1/-1->1->5|5->1->2/-1/-1 [3] 2/-1/-1->1->5|5->1->2/-1/-1 [4] 3/-1/-1->1->0|0->1->3/-1/-1 [5] -1/-1/-1->1->3|3->1->-1/-1/-1 [6] 5/-1/-1->1->2|2->1->5/-1/-1 [7] 5/-1/-1->1->2|2->1->5/-1/-1 [8] 2/-1/-1->1->5|5->1->2/-1/-1 [9] 2/-1/-1->1->5|5->1->2/-1/-1 [10] 3/-1/-1->1->0|0->1->3/-1/-1 [11] -1/-1/-1->1->3|3->1->-1/-1/-1
[36m7rrh9djrip-algo-1-gy43h |[0m [1,0]<stdout>:algo-1-gy43h:618:618 [0] NCCL INFO Channel 08/12 : 0 4 7 6 5 1 2 3
[36m7rrh9djrip-algo-1-gy43h |[0m [1,3]<stdout>:algo-1-gy43h:167:167 [3] NCCL INFO Trees [0] 2/-1/-1->3->0|0->3->2/-1/-1 [1] 2/-1/-1->3->0|0->3->2/-1/-1 [2] -1/-1/-1->3->2|2->3->-1/-1/-1 [3] -1/-1/-1->3->2|2->3->-1/-1/-1 [4] 7/-1/-1->3->1|1->3->7/-1/-1 [5] 1/-1/-1->3->7|7->3->1/-1/-1 [6] 2/-1/-1->3->0|0->3->2/-1/-1 [7] 2/-1/-1->3->0|0->3->2/-1/-1 [8] -1/-1/-1->3->2|2->3->-1/-1/-1 [9] -1/-1/-1->3->2|2->3->-1/-1/-1 [10] 7/-1/-1->3->1|1->3->7/-1/-1 [11] 1/-1/-1->3->7|7->3->1/-1/-1
[36m7rrh9djrip-algo-1-gy43h |[0m [1,0]<stdout>:algo-1-gy43h:618:618 [0] NCCL INFO Channel 09/12 : 0 4 7 6 5 1 2 3
[36m7rrh9djrip-algo-1-gy43h |[0m [1,0]<stdout>:algo-1-gy43h:618:618 [0] NCCL INFO Channel 10/12 : 0 1 3 7 5 4 6 2
[36m7rrh9djrip-algo-1-gy43h |[0m [1,0]<stdout>:algo-1-gy43h:618:618 [0] NCCL INFO Channel 11/12 : 0 2 6 4 5 7 3 1
[36m7rrh9djrip-algo-1-gy43h |[0m [1,4]<stdout>:algo-1-gy43h:169:169 [4] NCCL INFO threadThresholds 8/8/64 | 64/8/64 | 8/8/64
[36m7rrh9djrip-algo-1-gy43h |[0m [1,4]<stdout>:algo-1-gy43h:169:169 [4] NCCL INFO Trees [0] -1/-1/-1->4->7|7->4->-1/-1/-1 [1] -1/-1/-1->4->7|7->4->-1/-1/-1 [2] 7/-1/-1->4->0|0->4->7/-1/-1 [3] 7/-1/-1->4->0|0->4->7/-1/-1 [4] 6/-1/-1->4->5|5->4->6/-1/-1 [5] 5/-1/-1->4->6|6->4->5/-1/-1 [6] -1/-1/-1->4->7|7->4->-1/-1/-1 [7] -1/-1/-1->4->7|7->4->-1/-1/-1 [8] 7/-1/-1->4->0|0->4->7/-1/-1 [9] 7/-1/-1->4->0|0->4->7/-1/-1 [10] 6/-1/-1->4->5|5->4->6/-1/-1 [11] 5/-1/-1->4->6|6->4->5/-1/-1
[36m7rrh9djrip-algo-1-gy43h |[0m [1,0]<stdout>:algo-1-gy43h:618:618 [0] NCCL INFO threadThresholds 8/8/64 | 64/8/64 | 8/8/64
[36m7rrh9djrip-algo-1-gy43h |[0m [1,0]<stdout>:algo-1-gy43h:618:618 [0] NCCL INFO Trees [0] 3/-1/-1->0->-1|-1->0->3/-1/-1 [1] 3/-1/-1->0->-1|-1->0->3/-1/-1 [2] 4/-1/-1->0->-1|-1->0->4/-1/-1 [3] 4/-1/-1->0->-1|-1->0->4/-1/-1 [4] 1/-1/-1->0->-1|-1->0->1/-1/-1 [5] 2/-1/-1->0->-1|-1->0->2/-1/-1 [6] 3/-1/-1->0->-1|-1->0->3/-1/-1 [7] 3/-1/-1->0->-1|-1->0->3/-1/-1 [8] 4/-1/-1->0->-1|-1->0->4/-1/-1 [9] 4/-1/-1->0->-1|-1->0->4/-1/-1 [10] 1/-1/-1->0->-1|-1->0->1/-1/-1 [11] 2/-1/-1->0->-1|-1->0->2/-1/-1
... [each of the 8 ranks then opens its ring/tree peer connections, logging "Channel 00" through "Channel 11" messages of the form "X -> Y via P2P/IPC"] ...
[36m7rrh9djrip-algo-1-gy43h |[0m [1,1]<stdout>:algo-1-gy43h:163:163 [1] NCCL INFO 12 coll channels, 16 p2p channels, 2 p2p channels per peer
[36m7rrh9djrip-algo-1-gy43h |[0m [1,1]<stdout>:algo-1-gy43h:163:163 [1] NCCL INFO comm 0x5619563aa8a0 rank 1 nranks 8 cudaDev 1 busId 180 - Init COMPLETE
[36m7rrh9djrip-algo-1-gy43h |[0m [1,0]<stdout>:algo-1-gy43h:618:618 [0] NCCL INFO 12 coll channels, 16 p2p channels, 2 p2p channels per peer
[36m7rrh9djrip-algo-1-gy43h |[0m [1,0]<stdout>:algo-1-gy43h:618:618 [0] NCCL INFO comm 0x55696631a430 rank 0 nranks 8 cudaDev 0 busId 170 - Init COMPLETE
[36m7rrh9djrip-algo-1-gy43h |[0m [1,2]<stdout>:algo-1-gy43h:165:165 [2] NCCL INFO 12 coll channels, 16 p2p channels, 2 p2p channels per peer
[36m7rrh9djrip-algo-1-gy43h |[0m [1,2]<stdout>:algo-1-gy43h:165:165 [2] NCCL INFO comm 0x5570d1bbb370 rank 2 nranks 8 cudaDev 2 busId 190 - Init COMPLETE
[36m7rrh9djrip-algo-1-gy43h |[0m [1,4]<stdout>:algo-1-gy43h:169:169 [4] NCCL INFO 12 coll channels, 16 p2p channels, 2 p2p channels per peer
[36m7rrh9djrip-algo-1-gy43h |[0m [1,3]<stdout>:algo-1-gy43h:167:167 [3] NCCL INFO 12 coll channels, 16 p2p channels, 2 p2p channels per peer
[36m7rrh9djrip-algo-1-gy43h |[0m [1,4]<stdout>:algo-1-gy43h:169:169 [4] NCCL INFO comm 0x55dc48be4790 rank 4 nranks 8 cudaDev 4 busId 1b0 - Init COMPLETE
[36m7rrh9djrip-algo-1-gy43h |[0m [1,5]<stdout>:algo-1-gy43h:170:170 [5] NCCL INFO 12 coll channels, 16 p2p channels, 2 p2p channels per peer
[36m7rrh9djrip-algo-1-gy43h |[0m [1,3]<stdout>:algo-1-gy43h:167:167 [3] NCCL INFO comm 0x557e47a21f00 rank 3 nranks 8 cudaDev 3 busId 1a0 - Init COMPLETE
[36m7rrh9djrip-algo-1-gy43h |[0m [1,6]<stdout>:algo-1-gy43h:171:171 [6] NCCL INFO 12 coll channels, 16 p2p channels, 2 p2p channels per peer
[36m7rrh9djrip-algo-1-gy43h |[0m [1,5]<stdout>:algo-1-gy43h:170:170 [5] NCCL INFO comm 0x56067cff61a0 rank 5 nranks 8 cudaDev 5 busId 1c0 - Init COMPLETE
[36m7rrh9djrip-algo-1-gy43h |[0m [1,6]<stdout>:algo-1-gy43h:171:171 [6] NCCL INFO comm 0x564fccc81e90 rank 6 nranks 8 cudaDev 6 busId 1d0 - Init COMPLETE
[36m7rrh9djrip-algo-1-gy43h |[0m [1,7]<stdout>:algo-1-gy43h:172:172 [7] NCCL INFO 12 coll channels, 16 p2p channels, 2 p2p channels per peer
[36m7rrh9djrip-algo-1-gy43h |[0m [1,7]<stdout>:algo-1-gy43h:172:172 [7] NCCL INFO comm 0x55a878a01cc0 rank 7 nranks 8 cudaDev 7 busId 1e0 - Init COMPLETE
... [a second NCCL communicator is then initialized: the channel, tree, and "via P2P/IPC" connection messages above are repeated verbatim for all 8 ranks] ...
[36m7rrh9djrip-algo-1-gy43h |[0m [1,4]<stdout>:algo-1-gy43h:169:169 [4] NCCL INFO Channel 10 : 4[1b0] -> 6[1d0] via P2P/IPC
[36m7rrh9djrip-algo-1-gy43h |[0m [1,1]<stdout>:algo-1-gy43h:163:163 [1] NCCL INFO Channel 10 : 1[180] -> 3[1a0] via P2P/IPC
[36m7rrh9djrip-algo-1-gy43h |[0m [1,7]<stdout>:algo-1-gy43h:172:172 [7] NCCL INFO Channel 10 : 7[1e0] -> 5[1c0] via P2P/IPC
[36m7rrh9djrip-algo-1-gy43h |[0m [1,2]<stdout>:algo-1-gy43h:165:165 [2] NCCL INFO Channel 10 : 2[190] -> 0[170] via P2P/IPC
[36m7rrh9djrip-algo-1-gy43h |[0m [1,5]<stdout>:algo-1-gy43h:170:170 [5] NCCL INFO Channel 10 : 5[1c0] -> 4[1b0] via P2P/IPC
[36m7rrh9djrip-algo-1-gy43h |[0m [1,6]<stdout>:algo-1-gy43h:171:171 [6] NCCL INFO Channel 10 : 6[1d0] -> 2[190] via P2P/IPC
[36m7rrh9djrip-algo-1-gy43h |[0m [1,2]<stdout>:algo-1-gy43h:165:165 [2] NCCL INFO Channel 10 : 2[190] -> 6[1d0] via P2P/IPC
[36m7rrh9djrip-algo-1-gy43h |[0m [1,3]<stdout>:algo-1-gy43h:167:167 [3] NCCL INFO Channel 10 : 3[1a0] -> 1[180] via P2P/IPC
[36m7rrh9djrip-algo-1-gy43h |[0m [1,4]<stdout>:algo-1-gy43h:169:169 [4] NCCL INFO Channel 10 : 4[1b0] -> 5[1c0] via P2P/IPC
[36m7rrh9djrip-algo-1-gy43h |[0m [1,1]<stdout>:algo-1-gy43h:163:163 [1] NCCL INFO Channel 10 : 1[180] -> 0[170] via P2P/IPC
[36m7rrh9djrip-algo-1-gy43h |[0m [1,7]<stdout>:algo-1-gy43h:172:172 [7] NCCL INFO Channel 10 : 7[1e0] -> 3[1a0] via P2P/IPC
[36m7rrh9djrip-algo-1-gy43h |[0m [1,5]<stdout>:algo-1-gy43h:170:170 [5] NCCL INFO Channel 10 : 5[1c0] -> 7[1e0] via P2P/IPC
[36m7rrh9djrip-algo-1-gy43h |[0m [1,6]<stdout>:algo-1-gy43h:171:171 [6] NCCL INFO Channel 10 : 6[1d0] -> 4[1b0] via P2P/IPC
[36m7rrh9djrip-algo-1-gy43h |[0m [1,2]<stdout>:algo-1-gy43h:165:165 [2] NCCL INFO Channel 11 : 2[190] -> 6[1d0] via P2P/IPC
[36m7rrh9djrip-algo-1-gy43h |[0m [1,0]<stdout>:algo-1-gy43h:618:618 [0] NCCL INFO Channel 11 : 0[170] -> 2[190] via P2P/IPC
[36m7rrh9djrip-algo-1-gy43h |[0m [1,3]<stdout>:algo-1-gy43h:167:167 [3] NCCL INFO Channel 11 : 3[1a0] -> 1[180] via P2P/IPC
[36m7rrh9djrip-algo-1-gy43h |[0m [1,1]<stdout>:algo-1-gy43h:163:163 [1] NCCL INFO Channel 11 : 1[180] -> 0[170] via P2P/IPC
[36m7rrh9djrip-algo-1-gy43h |[0m [1,4]<stdout>:algo-1-gy43h:169:169 [4] NCCL INFO Channel 11 : 4[1b0] -> 5[1c0] via P2P/IPC
[36m7rrh9djrip-algo-1-gy43h |[0m [1,7]<stdout>:algo-1-gy43h:172:172 [7] NCCL INFO Channel 11 : 7[1e0] -> 3[1a0] via P2P/IPC
[36m7rrh9djrip-algo-1-gy43h |[0m [1,5]<stdout>:algo-1-gy43h:170:170 [5] NCCL INFO Channel 11 : 5[1c0] -> 7[1e0] via P2P/IPC
[36m7rrh9djrip-algo-1-gy43h |[0m [1,6]<stdout>:algo-1-gy43h:171:171 [6] NCCL INFO Channel 11 : 6[1d0] -> 4[1b0] via P2P/IPC
[36m7rrh9djrip-algo-1-gy43h |[0m [1,1]<stdout>:algo-1-gy43h:163:163 [1] NCCL INFO Channel 11 : 1[180] -> 3[1a0] via P2P/IPC
[36m7rrh9djrip-algo-1-gy43h |[0m [1,3]<stdout>:algo-1-gy43h:167:167 [3] NCCL INFO Channel 11 : 3[1a0] -> 7[1e0] via P2P/IPC
[36m7rrh9djrip-algo-1-gy43h |[0m [1,2]<stdout>:algo-1-gy43h:165:165 [2] NCCL INFO Channel 11 : 2[190] -> 0[170] via P2P/IPC
[36m7rrh9djrip-algo-1-gy43h |[0m [1,4]<stdout>:algo-1-gy43h:169:169 [4] NCCL INFO Channel 11 : 4[1b0] -> 6[1d0] via P2P/IPC
[36m7rrh9djrip-algo-1-gy43h |[0m [1,7]<stdout>:algo-1-gy43h:172:172 [7] NCCL INFO Channel 11 : 7[1e0] -> 5[1c0] via P2P/IPC
[36m7rrh9djrip-algo-1-gy43h |[0m [1,5]<stdout>:algo-1-gy43h:170:170 [5] NCCL INFO Channel 11 : 5[1c0] -> 4[1b0] via P2P/IPC
[36m7rrh9djrip-algo-1-gy43h |[0m [1,1]<stdout>:algo-1-gy43h:163:163 [1] NCCL INFO 12 coll channels, 16 p2p channels, 2 p2p channels per peer
[36m7rrh9djrip-algo-1-gy43h |[0m [1,1]<stdout>:algo-1-gy43h:163:163 [1] NCCL INFO comm 0x561959080980 rank 1 nranks 8 cudaDev 1 busId 180 - Init COMPLETE
[36m7rrh9djrip-algo-1-gy43h |[0m [1,6]<stdout>:algo-1-gy43h:171:171 [6] NCCL INFO Channel 11 : 6[1d0] -> 2[190] via P2P/IPC
[36m7rrh9djrip-algo-1-gy43h |[0m [1,0]<stdout>:algo-1-gy43h:618:618 [0] NCCL INFO 12 coll channels, 16 p2p channels, 2 p2p channels per peer
[36m7rrh9djrip-algo-1-gy43h |[0m [1,0]<stdout>:algo-1-gy43h:618:618 [0] NCCL INFO comm 0x556968ff0510 rank 0 nranks 8 cudaDev 0 busId 170 - Init COMPLETE
[36m7rrh9djrip-algo-1-gy43h |[0m [1,3]<stdout>:algo-1-gy43h:167:167 [3] NCCL INFO 12 coll channels, 16 p2p channels, 2 p2p channels per peer
[36m7rrh9djrip-algo-1-gy43h |[0m [1,3]<stdout>:algo-1-gy43h:167:167 [3] NCCL INFO comm 0x557e4a6f7fe0 rank 3 nranks 8 cudaDev 3 busId 1a0 - Init COMPLETE
[36m7rrh9djrip-algo-1-gy43h |[0m [1,4]<stdout>:algo-1-gy43h:169:169 [4] NCCL INFO 12 coll channels, 16 p2p channels, 2 p2p channels per peer
[36m7rrh9djrip-algo-1-gy43h |[0m [1,4]<stdout>:algo-1-gy43h:169:169 [4] NCCL INFO comm 0x55dc4b8ba870 rank 4 nranks 8 cudaDev 4 busId 1b0 - Init COMPLETE
[36m7rrh9djrip-algo-1-gy43h |[0m [1,2]<stdout>:algo-1-gy43h:165:165 [2] NCCL INFO 12 coll channels, 16 p2p channels, 2 p2p channels per peer
[36m7rrh9djrip-algo-1-gy43h |[0m [1,7]<stdout>:algo-1-gy43h:172:172 [7] NCCL INFO 12 coll channels, 16 p2p channels, 2 p2p channels per peer
[36m7rrh9djrip-algo-1-gy43h |[0m [1,2]<stdout>:algo-1-gy43h:165:165 [2] NCCL INFO comm 0x5570d48912a0 rank 2 nranks 8 cudaDev 2 busId 190 - Init COMPLETE
[36m7rrh9djrip-algo-1-gy43h |[0m [1,7]<stdout>:algo-1-gy43h:172:172 [7] NCCL INFO comm 0x55a87b6d7bf0 rank 7 nranks 8 cudaDev 7 busId 1e0 - Init COMPLETE
[36m7rrh9djrip-algo-1-gy43h |[0m [1,5]<stdout>:algo-1-gy43h:170:170 [5] NCCL INFO 12 coll channels, 16 p2p channels, 2 p2p channels per peer
[36m7rrh9djrip-algo-1-gy43h |[0m [1,5]<stdout>:algo-1-gy43h:170:170 [5] NCCL INFO comm 0x56067fccc280 rank 5 nranks 8 cudaDev 5 busId 1c0 - Init COMPLETE
[36m7rrh9djrip-algo-1-gy43h |[0m [1,6]<stdout>:algo-1-gy43h:171:171 [6] NCCL INFO 12 coll channels, 16 p2p channels, 2 p2p channels per peer
[36m7rrh9djrip-algo-1-gy43h |[0m [1,6]<stdout>:algo-1-gy43h:171:171 [6] NCCL INFO comm 0x564fcf957f70 rank 6 nranks 8 cudaDev 6 busId 1d0 - Init COMPLETE
[36m7rrh9djrip-algo-1-gy43h |[0m [1,0]<stdout>:Running smdistributed.dataparallel v1.2.0
[36m7rrh9djrip-algo-1-gy43h |[0m [1,0]<stdout>:dist.rank(): 0
[36m7rrh9djrip-algo-1-gy43h |[0m [1,0]<stdout>:mnist files: mnist-0.npz
[36m7rrh9djrip-algo-1-gy43h |[0m [1,1]<stdout>:dist.rank(): 1
[36m7rrh9djrip-algo-1-gy43h |[0m [1,1]<stdout>:mnist files: mnist-1.npz
[36m7rrh9djrip-algo-1-gy43h |[0m [1,0]<stdout>:Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
[36m7rrh9djrip-algo-1-gy43h |[0m [1,1]<stdout>:Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
[36m7rrh9djrip-algo-1-gy43h |[0m [1,3]<stdout>:dist.rank(): 3
[36m7rrh9djrip-algo-1-gy43h |[0m [1,3]<stdout>:mnist files: mnist-3.npz
[36m7rrh9djrip-algo-1-gy43h |[0m [1,3]<stdout>:Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
[36m7rrh9djrip-algo-1-gy43h |[0m [1,7]<stdout>:dist.rank(): 7
[36m7rrh9djrip-algo-1-gy43h |[0m [1,7]<stdout>:mnist files: mnist-7.npz
[36m7rrh9djrip-algo-1-gy43h |[0m [1,7]<stdout>:Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
[36m7rrh9djrip-algo-1-gy43h |[0m [1,2]<stdout>:dist.rank(): 2
[36m7rrh9djrip-algo-1-gy43h |[0m [1,2]<stdout>:mnist files: mnist-2.npz
[36m7rrh9djrip-algo-1-gy43h |[0m [1,5]<stdout>:dist.rank(): 5
[36m7rrh9djrip-algo-1-gy43h |[0m [1,2]<stdout>:Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
[36m7rrh9djrip-algo-1-gy43h |[0m [1,5]<stdout>:mnist files: mnist-5.npz
[36m7rrh9djrip-algo-1-gy43h |[0m [1,4]<stdout>:dist.rank(): 4
[36m7rrh9djrip-algo-1-gy43h |[0m [1,4]<stdout>:mnist files: mnist-4.npz
[36m7rrh9djrip-algo-1-gy43h |[0m [1,5]<stdout>:Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
[36m7rrh9djrip-algo-1-gy43h |[0m [1,6]<stdout>:dist.rank(): 6
[36m7rrh9djrip-algo-1-gy43h |[0m [1,6]<stdout>:mnist files: mnist-6.npz
[36m7rrh9djrip-algo-1-gy43h |[0m [1,4]<stdout>:Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
[36m7rrh9djrip-algo-1-gy43h |[0m [1,6]<stdout>:Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11493376/11490434 [==============================] - 0s 0us/step
11493376/11490434 [==============================] - 0s 0us/step
11493376/11490434 [==============================] - 0s 0us/step
11493376/11490434 [==============================] - 0s 0us/step
11493376/11490434 [==============================] - 0s 0us/step
11493376/11490434 [==============================] - 0s 0us/step
11493376/11490434 [==============================] - 0s 0us/step
11493376/11490434 [==============================] - 0s 0us/step
[36m7rrh9djrip-algo-1-gy43h |[0m [1,7]<stdout>:type: <class 'numpy.ndarray'>
[36m7rrh9djrip-algo-1-gy43h |[0m [1,7]<stdout>:shape: (60000, 28, 28)
[36m7rrh9djrip-algo-1-gy43h |[0m [1,7]<stdout>:mnist_labels: (60000,)
[36m7rrh9djrip-algo-1-gy43h |[0m [1,3]<stdout>:type: <class 'numpy.ndarray'>
[36m7rrh9djrip-algo-1-gy43h |[0m [1,3]<stdout>:shape: (60000, 28, 28)
[36m7rrh9djrip-algo-1-gy43h |[0m [1,3]<stdout>:mnist_labels: (60000,)
[36m7rrh9djrip-algo-1-gy43h |[0m [1,1]<stdout>:type: <class 'numpy.ndarray'>
[36m7rrh9djrip-algo-1-gy43h |[0m [1,1]<stdout>:shape: (60000, 28, 28)
[36m7rrh9djrip-algo-1-gy43h |[0m [1,1]<stdout>:mnist_labels: (60000,)
[36m7rrh9djrip-algo-1-gy43h |[0m [1,5]<stdout>:type: <class 'numpy.ndarray'>
[36m7rrh9djrip-algo-1-gy43h |[0m [1,5]<stdout>:shape: (60000, 28, 28)
[36m7rrh9djrip-algo-1-gy43h |[0m [1,5]<stdout>:mnist_labels: (60000,)
[36m7rrh9djrip-algo-1-gy43h |[0m [1,0]<stdout>:type: <class 'numpy.ndarray'>
[36m7rrh9djrip-algo-1-gy43h |[0m [1,0]<stdout>:shape: (60000, 28, 28)
[36m7rrh9djrip-algo-1-gy43h |[0m [1,0]<stdout>:mnist_labels: (60000,)
[36m7rrh9djrip-algo-1-gy43h |[0m [1,2]<stdout>:type: <class 'numpy.ndarray'>
[36m7rrh9djrip-algo-1-gy43h |[0m [1,2]<stdout>:shape: (60000, 28, 28)
[36m7rrh9djrip-algo-1-gy43h |[0m [1,2]<stdout>:mnist_labels: (60000,)
[36m7rrh9djrip-algo-1-gy43h |[0m [1,6]<stdout>:type: <class 'numpy.ndarray'>
[36m7rrh9djrip-algo-1-gy43h |[0m [1,6]<stdout>:shape: (60000, 28, 28)
[36m7rrh9djrip-algo-1-gy43h |[0m [1,6]<stdout>:mnist_labels: (60000,)
[36m7rrh9djrip-algo-1-gy43h |[0m [1,4]<stdout>:type: <class 'numpy.ndarray'>
[36m7rrh9djrip-algo-1-gy43h |[0m [1,4]<stdout>:shape: (60000, 28, 28)
[36m7rrh9djrip-algo-1-gy43h |[0m [1,4]<stdout>:mnist_labels: (60000,)
[36m7rrh9djrip-algo-1-gy43h |[0m [1,7]<stdout>:[2021-10-04 05:12:26.908 algo-1-gy43h:172 INFO utils.py:27] RULE_JOB_STOP_SIGNAL_FILENAME: None
[36m7rrh9djrip-algo-1-gy43h |[0m [1,7]<stdout>:[2021-10-04 05:12:26.942 algo-1-gy43h:172 INFO profiler_config_parser.py:102] Unable to find config at /opt/ml/input/config/profilerconfig.json. Profiler is disabled.
[36m7rrh9djrip-algo-1-gy43h |[0m [1,1]<stdout>:[2021-10-04 05:12:26.964 algo-1-gy43h:163 INFO utils.py:27] RULE_JOB_STOP_SIGNAL_FILENAME: None
[36m7rrh9djrip-algo-1-gy43h |[0m [1,1]<stdout>:[2021-10-04 05:12:26.997 algo-1-gy43h:163 INFO profiler_config_parser.py:102] Unable to find config at /opt/ml/input/config/profilerconfig.json. Profiler is disabled.
[36m7rrh9djrip-algo-1-gy43h |[0m [1,3]<stdout>:[2021-10-04 05:12:27.001 algo-1-gy43h:167 INFO utils.py:27] RULE_JOB_STOP_SIGNAL_FILENAME: None
[36m7rrh9djrip-algo-1-gy43h |[0m [1,5]<stdout>:[2021-10-04 05:12:27.018 algo-1-gy43h:170 INFO utils.py:27] RULE_JOB_STOP_SIGNAL_FILENAME: None
[36m7rrh9djrip-algo-1-gy43h |[0m [1,3]<stdout>:[2021-10-04 05:12:27.036 algo-1-gy43h:167 INFO profiler_config_parser.py:102] Unable to find config at /opt/ml/input/config/profilerconfig.json. Profiler is disabled.
[36m7rrh9djrip-algo-1-gy43h |[0m [1,5]<stdout>:[2021-10-04 05:12:27.052 algo-1-gy43h:170 INFO profiler_config_parser.py:102] Unable to find config at /opt/ml/input/config/profilerconfig.json. Profiler is disabled.
[36m7rrh9djrip-algo-1-gy43h |[0m [1,0]<stdout>:[2021-10-04 05:12:27.059 algo-1-gy43h:618 INFO utils.py:27] RULE_JOB_STOP_SIGNAL_FILENAME: None
[36m7rrh9djrip-algo-1-gy43h |[0m [1,6]<stdout>:[2021-10-04 05:12:27.070 algo-1-gy43h:171 INFO utils.py:27] RULE_JOB_STOP_SIGNAL_FILENAME: None
[36m7rrh9djrip-algo-1-gy43h |[0m [1,4]<stdout>:[2021-10-04 05:12:27.074 algo-1-gy43h:169 INFO utils.py:27] RULE_JOB_STOP_SIGNAL_FILENAME: None
[36m7rrh9djrip-algo-1-gy43h |[0m [1,0]<stdout>:[2021-10-04 05:12:27.093 algo-1-gy43h:618 INFO profiler_config_parser.py:102] Unable to find config at /opt/ml/input/config/profilerconfig.json. Profiler is disabled.
[36m7rrh9djrip-algo-1-gy43h |[0m [1,6]<stdout>:[2021-10-04 05:12:27.105 algo-1-gy43h:171 INFO profiler_config_parser.py:102] Unable to find config at /opt/ml/input/config/profilerconfig.json. Profiler is disabled.
[36m7rrh9djrip-algo-1-gy43h |[0m [1,2]<stdout>:[2021-10-04 05:12:27.108 algo-1-gy43h:165 INFO utils.py:27] RULE_JOB_STOP_SIGNAL_FILENAME: None
[36m7rrh9djrip-algo-1-gy43h |[0m [1,4]<stdout>:[2021-10-04 05:12:27.108 algo-1-gy43h:169 INFO profiler_config_parser.py:102] Unable to find config at /opt/ml/input/config/profilerconfig.json. Profiler is disabled.
[36m7rrh9djrip-algo-1-gy43h |[0m [1,2]<stdout>:[2021-10-04 05:12:27.143 algo-1-gy43h:165 INFO profiler_config_parser.py:102] Unable to find config at /opt/ml/input/config/profilerconfig.json. Profiler is disabled.
[36m7rrh9djrip-algo-1-gy43h |[0m [1,0]<stdout>:algo-1-gy43h:618:1319 [0] NCCL INFO Launch mode Parallel
[36m7rrh9djrip-algo-1-gy43h |[0m [1,0]<stdout>:Step #0 Loss: 2.316157
[36m7rrh9djrip-algo-1-gy43h |[0m [1,0]<stdout>:Step #1000 Loss: 0.069425
[36m7rrh9djrip-algo-1-gy43h |[0m [1,0]<stdout>:Step #2000 Loss: 0.051255
[36m7rrh9djrip-algo-1-gy43h |[0m [1,0]<stdout>:Step #3000 Loss: 0.003335
[36m7rrh9djrip-algo-1-gy43h |[0m [1,0]<stdout>:Step #4000 Loss: 0.016152
[36m7rrh9djrip-algo-1-gy43h |[0m [1,0]<stdout>:Step #5000 Loss: 0.003432
[36m7rrh9djrip-algo-1-gy43h |[0m [1,0]<stdout>:Step #6000 Loss: 0.006560
[36m7rrh9djrip-algo-1-gy43h |[0m [1,0]<stdout>:Step #7000 Loss: 0.020202
[36m7rrh9djrip-algo-1-gy43h |[0m [1,0]<stdout>:Step #8000 Loss: 0.004256
[36m7rrh9djrip-algo-1-gy43h |[0m [1,0]<stdout>:Step #9000 Loss: 0.016712
[36m7rrh9djrip-algo-1-gy43h |[0m [1,1]<stderr>:2021-10-04 05:12:15.346450: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:460] Initializing the SageMaker Profiler.
[36m7rrh9djrip-algo-1-gy43h |[0m [1,2]<stderr>:2021-10-04 05:12:15.346452: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:460] Initializing the SageMaker Profiler.
[36m7rrh9djrip-algo-1-gy43h |[0m [1,2]<stderr>:2021-10-04 05:12:15.346628: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:105] SageMaker Profiler is not enabled. The timeline writer thread will not be started, future recorded events will be dropped.
[36m7rrh9djrip-algo-1-gy43h |[0m [1,3]<stderr>:2021-10-04 05:12:15.346461: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:460] Initializing the SageMaker Profiler.
[36m7rrh9djrip-algo-1-gy43h |[0m [1,3]<stderr>:2021-10-04 05:12:15.346627: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:105] SageMaker Profiler is not enabled. The timeline writer thread will not be started, future recorded events will be dropped.
[36m7rrh9djrip-algo-1-gy43h |[0m [1,4]<stderr>:2021-10-04 05:12:15.346450: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:460] Initializing the SageMaker Profiler.
[36m7rrh9djrip-algo-1-gy43h |[0m [1,4]<stderr>:2021-10-04 05:12:15.346627: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:105] SageMaker Profiler is not enabled. The timeline writer thread will not be started, future recorded events will be dropped.
[36m7rrh9djrip-algo-1-gy43h |[0m [1,5]<stderr>:2021-10-04 05:12:15.346450: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:460] Initializing the SageMaker Profiler.
[36m7rrh9djrip-algo-1-gy43h |[0m [1,5]<stderr>:2021-10-04 05:12:15.346628: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:105] SageMaker Profiler is not enabled. The timeline writer thread will not be started, future recorded events will be dropped.
[36m7rrh9djrip-algo-1-gy43h |[0m [1,6]<stderr>:2021-10-04 05:12:15.346453: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:460] Initializing the SageMaker Profiler.
[36m7rrh9djrip-algo-1-gy43h |[0m [1,6]<stderr>:2021-10-04 05:12:15.346626: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:105] SageMaker Profiler is not enabled. The timeline writer thread will not be started, future recorded events will be dropped.
[36m7rrh9djrip-algo-1-gy43h |[0m [1,7]<stderr>:2021-10-04 05:12:15.346456: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:460] Initializing the SageMaker Profiler.
[36m7rrh9djrip-algo-1-gy43h |[0m [1,7]<stderr>:2021-10-04 05:12:15.346628: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:105] SageMaker Profiler is not enabled. The timeline writer thread will not be started, future recorded events will be dropped.
[36m7rrh9djrip-algo-1-gy43h |[0m [1,1]<stderr>:2021-10-04 05:12:15.346627: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:105] SageMaker Profiler is not enabled. The timeline writer thread will not be started, future recorded events will be dropped.
[36m7rrh9djrip-algo-1-gy43h |[0m [1,7]<stderr>:2021-10-04 05:12:15.387822: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:460] Initializing the SageMaker Profiler.
[36m7rrh9djrip-algo-1-gy43h |[0m [1,6]<stderr>:2021-10-04 05:12:15.387821: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:460] Initializing the SageMaker Profiler.
[36m7rrh9djrip-algo-1-gy43h |[0m [1,1]<stderr>:2021-10-04 05:12:15.387821: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:460] Initializing the SageMaker Profiler.
[36m7rrh9djrip-algo-1-gy43h |[0m [1,3]<stderr>:2021-10-04 05:12:15.387920: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:460] Initializing the SageMaker Profiler.
[36m7rrh9djrip-algo-1-gy43h |[0m [1,5]<stderr>:2021-10-04 05:12:15.387981: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:460] Initializing the SageMaker Profiler.
[36m7rrh9djrip-algo-1-gy43h |[0m [1,4]<stderr>:2021-10-04 05:12:15.388115: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:460] Initializing the SageMaker Profiler.
[36m7rrh9djrip-algo-1-gy43h |[0m [1,2]<stderr>:2021-10-04 05:12:15.388178: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:460] Initializing the SageMaker Profiler.
[36m7rrh9djrip-algo-1-gy43h |[0m [1,0]<stderr>:2021-10-04 05:12:16.127185: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:460] Initializing the SageMaker Profiler.
[36m7rrh9djrip-algo-1-gy43h |[0m [1,0]<stderr>:2021-10-04 05:12:16.127382: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:105] SageMaker Profiler is not enabled. The timeline writer thread will not be started, future recorded events will be dropped.
[36m7rrh9djrip-algo-1-gy43h |[0m [1,0]<stderr>:2021-10-04 05:12:16.170094: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:460] Initializing the SageMaker Profiler.
[36m7rrh9djrip-algo-1-gy43h |[0m [1,0]<stderr>:2021-10-04 05:14:00.726473: W tensorflow/python/util/util.cc:348] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.
[36m7rrh9djrip-algo-1-gy43h |[0m [1,0]<stderr>:INFO:tensorflow:Assets written to: /opt/ml/model/1/assets
[36m7rrh9djrip-algo-1-gy43h |[0m [1,0]<stderr>:INFO:tensorflow:Assets written to: /opt/ml/model/1/assets
[36m7rrh9djrip-algo-1-gy43h |[0m
[36m7rrh9djrip-algo-1-gy43h |[0m 2021-10-04 05:14:04,116 sagemaker-training-toolkit INFO Reporting training SUCCESS
[36m7rrh9djrip-algo-1-gy43h exited with code 0[0m
Aborting on container exit...
===== Job Complete =====
###Markdown
4. DDP SM Hostmode
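Here we run the training as a fully managed SageMaker training job ("host mode") rather than a local-mode container: two `ml.p3.16xlarge` instances with 8 GPUs each give 16 data-parallel ranks, and the `distribution={"smdistributed": {"dataparallel": {"enabled": True}}}` setting turns on the SMDataParallel backend that you can see initializing NCCL across `algo-1` and `algo-2` in the logs below.

The entry point `src/mnist_train_DDP.py` itself is not reproduced in this notebook. As a rough, illustrative sketch only (the real script may differ; the model, hyperparameters, and loop details below are assumptions), the usual `smdistributed.dataparallel` TensorFlow 2 pattern behind log lines like `dist.rank(): 0`, `mnist files: mnist-0.npz`, and `Step #1000 Loss: ...` looks like this:

```
import tensorflow as tf
import smdistributed.dataparallel.tensorflow as dist

dist.init()

# Pin each process to one GPU, chosen by its local rank on the instance
gpus = tf.config.experimental.list_physical_devices("GPU")
if gpus:
    tf.config.experimental.set_visible_devices(gpus[dist.local_rank()], "GPU")

print("dist.rank():", dist.rank())

# Each rank keeps its own copy of MNIST under a rank-specific file name
(mnist_images, mnist_labels), _ = tf.keras.datasets.mnist.load_data(
    path="mnist-%d.npz" % dist.rank())

dataset = tf.data.Dataset.from_tensor_slices(
    (tf.cast(mnist_images[..., tf.newaxis] / 255.0, tf.float32),
     tf.cast(mnist_labels, tf.int64))
).shuffle(10000).repeat().batch(128)

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28, 1)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()
# Scale the learning rate with the number of workers, a common data-parallel heuristic
opt = tf.keras.optimizers.Adam(0.001 * dist.size())

@tf.function
def training_step(images, labels, first_batch):
    with tf.GradientTape() as tape:
        loss_value = loss_fn(labels, model(images, training=True))
    # Wrap the tape so gradients are all-reduced across every GPU in the job
    tape = dist.DistributedGradientTape(tape)
    grads = tape.gradient(loss_value, model.trainable_variables)
    opt.apply_gradients(zip(grads, model.trainable_variables))
    if first_batch:
        # Make every rank start from identical weights after the first step
        dist.broadcast_variables(model.variables, root_rank=0)
        dist.broadcast_variables(opt.variables(), root_rank=0)
    return loss_value

for batch, (images, labels) in enumerate(dataset.take(10000)):
    loss_value = training_step(images, labels, batch == 0)
    if batch % 1000 == 0 and dist.rank() == 0:
        print("Step #%d\tLoss: %.6f" % (batch, loss_value))

# Only the leader writes the model artifact that SageMaker uploads to S3
if dist.rank() == 0:
    model.save("/opt/ml/model/1")
```

The important pieces are `dist.init()`, the `DistributedGradientTape` wrapper that all-reduces gradients across ranks, and the one-time `broadcast_variables` call that synchronizes the initial weights before training proceeds.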
###Code
from sagemaker.tensorflow import TensorFlow
instance_type = 'ml.p3.16xlarge'
estimator = TensorFlow(
    base_job_name="tensorflow2-smdataparallel-mnist",
    source_dir="src",
    entry_point="mnist_train_DDP.py",
    role=role,
    py_version="py37",
    framework_version="2.4.1",
    instance_count=2,
    # For p3dn instances use ml.p3dn.24xlarge; for p4d instances use ml.p4d.24xlarge
    instance_type=instance_type,
    # sagemaker_session=sagemaker_session,
    # Training using the SMDataParallel distributed training framework
    distribution={"smdistributed": {"dataparallel": {"enabled": True}}},
)
estimator.fit(wait=False)
estimator.logs()
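# Note (added for clarity, not part of the original cell): fit(wait=False) returns as soon
# as the job is submitted, so training runs asynchronously in SageMaker; estimator.logs()
# then blocks and streams the job's CloudWatch logs, which is the output shown below.
# If the notebook kernel restarts, the job can be re-attached by name, e.g.
# TensorFlow.attach("tensorflow2-smdataparallel-mnist-2021-10-02-08-01-24-159").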
###Output
2021-10-02 08:01:24 Starting - Starting the training job...
2021-10-02 08:01:49 Starting - Launching requested ML instancesProfilerReport-1633161684: InProgress
............
2021-10-02 08:03:50 Starting - Preparing the instances for training.........
2021-10-02 08:05:18 Downloading - Downloading input data
2021-10-02 08:05:18 Training - Downloading the training image...............
2021-10-02 08:07:51 Training - Training image download completed. Training in progress..[34m2021-10-02 08:07:49.844216: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:460] Initializing the SageMaker Profiler.[0m
[34m2021-10-02 08:07:49.848628: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:105] SageMaker Profiler is not enabled. The timeline writer thread will not be started, future recorded events will be dropped.[0m
[34m2021-10-02 08:07:49.934099: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0[0m
[34m2021-10-02 08:07:50.032883: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:460] Initializing the SageMaker Profiler.[0m
[35m2021-10-02 08:07:50.632856: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:460] Initializing the SageMaker Profiler.[0m
[35m2021-10-02 08:07:50.638068: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:105] SageMaker Profiler is not enabled. The timeline writer thread will not be started, future recorded events will be dropped.[0m
[35m2021-10-02 08:07:50.737472: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0[0m
[35m2021-10-02 08:07:50.838092: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:460] Initializing the SageMaker Profiler.[0m
[34m2021-10-02 08:07:54,197 sagemaker-training-toolkit INFO Imported framework sagemaker_tensorflow_container.training[0m
[34m2021-10-02 08:07:54,878 sagemaker-training-toolkit INFO Starting MPI run as worker node.[0m
[34m2021-10-02 08:07:54,878 sagemaker-training-toolkit INFO Creating SSH daemon.[0m
[34m2021-10-02 08:07:54,888 sagemaker-training-toolkit INFO Waiting for MPI workers to establish their SSH connections[0m
[34m2021-10-02 08:07:54,890 sagemaker-training-toolkit INFO Cannot connect to host algo-2 at port 22. Retrying...[0m
[34m2021-10-02 08:07:54,890 sagemaker-training-toolkit INFO Connection closed[0m
[35m2021-10-02 08:07:54,589 sagemaker-training-toolkit INFO Imported framework sagemaker_tensorflow_container.training[0m
[35m2021-10-02 08:07:55,261 sagemaker-training-toolkit INFO Starting MPI run as worker node.[0m
[35m2021-10-02 08:07:55,261 sagemaker-training-toolkit INFO Waiting for MPI Master to create SSH daemon.[0m
[35m2021-10-02 08:07:55,270 paramiko.transport INFO Connected (version 2.0, client OpenSSH_7.6p1)[0m
[35m2021-10-02 08:07:55,342 paramiko.transport INFO Authentication (publickey) successful![0m
[35m2021-10-02 08:07:55,342 sagemaker-training-toolkit INFO Can connect to host algo-1[0m
[35m2021-10-02 08:07:55,342 sagemaker-training-toolkit INFO MPI Master online, creating SSH daemon.[0m
[35m2021-10-02 08:07:55,342 sagemaker-training-toolkit INFO Writing environment variables to /etc/environment for the MPI process.[0m
[34m2021-10-02 08:07:55,898 paramiko.transport INFO Connected (version 2.0, client OpenSSH_7.6p1)[0m
[34m2021-10-02 08:07:55,978 paramiko.transport INFO Authentication (publickey) successful![0m
[34m2021-10-02 08:07:55,979 sagemaker-training-toolkit INFO Can connect to host algo-2 at port 22[0m
[34m2021-10-02 08:07:55,979 sagemaker-training-toolkit INFO Connection closed[0m
[34m2021-10-02 08:07:55,979 sagemaker-training-toolkit INFO Worker algo-2 available for communication[0m
[34m2021-10-02 08:07:55,979 sagemaker-training-toolkit INFO Network interface name: eth0[0m
[34m2021-10-02 08:07:55,979 sagemaker-training-toolkit INFO Host: ['algo-1', 'algo-2'][0m
[34m2021-10-02 08:07:55,981 sagemaker-training-toolkit INFO instance type: ml.p3.16xlarge[0m
[34m2021-10-02 08:07:56,067 sagemaker-training-toolkit INFO Invoking user script
[0m
[34mTraining Env:
[0m
[34m{
"additional_framework_parameters": {
"sagemaker_distributed_dataparallel_enabled": true,
"sagemaker_distributed_dataparallel_custom_mpi_options": "",
"sagemaker_instance_type": "ml.p3.16xlarge"
},
"channel_input_dirs": {},
"current_host": "algo-1",
"framework_module": "sagemaker_tensorflow_container.training:main",
"hosts": [
"algo-1",
"algo-2"
],
"hyperparameters": {
"model_dir": "s3://sagemaker-us-east-1-057716757052/tensorflow2-smdataparallel-mnist-2021-10-02-08-01-24-159/model"
},
"input_config_dir": "/opt/ml/input/config",
"input_data_config": {},
"input_dir": "/opt/ml/input",
"is_master": true,
"job_name": "tensorflow2-smdataparallel-mnist-2021-10-02-08-01-24-159",
"log_level": 20,
"master_hostname": "algo-1",
"model_dir": "/opt/ml/model",
"module_dir": "s3://sagemaker-us-east-1-057716757052/tensorflow2-smdataparallel-mnist-2021-10-02-08-01-24-159/source/sourcedir.tar.gz",
"module_name": "mnist_train_DDP",
"network_interface_name": "eth0",
"num_cpus": 64,
"num_gpus": 8,
"output_data_dir": "/opt/ml/output/data",
"output_dir": "/opt/ml/output",
"output_intermediate_dir": "/opt/ml/output/intermediate",
"resource_config": {
"current_host": "algo-1",
"hosts": [
"algo-1",
"algo-2"
],
"network_interface_name": "eth0"
},
"user_entry_point": "mnist_train_DDP.py"[0m
[34m}
[0m
[34mEnvironment variables:
[0m
[34mSM_HOSTS=["algo-1","algo-2"][0m
[34mSM_NETWORK_INTERFACE_NAME=eth0[0m
[34mSM_HPS={"model_dir":"s3://sagemaker-us-east-1-057716757052/tensorflow2-smdataparallel-mnist-2021-10-02-08-01-24-159/model"}[0m
[34mSM_USER_ENTRY_POINT=mnist_train_DDP.py[0m
[34mSM_FRAMEWORK_PARAMS={"sagemaker_distributed_dataparallel_custom_mpi_options":"","sagemaker_distributed_dataparallel_enabled":true,"sagemaker_instance_type":"ml.p3.16xlarge"}[0m
[34mSM_RESOURCE_CONFIG={"current_host":"algo-1","hosts":["algo-1","algo-2"],"network_interface_name":"eth0"}[0m
[34mSM_INPUT_DATA_CONFIG={}[0m
[34mSM_OUTPUT_DATA_DIR=/opt/ml/output/data[0m
[34mSM_CHANNELS=[][0m
[34mSM_CURRENT_HOST=algo-1[0m
[34mSM_MODULE_NAME=mnist_train_DDP[0m
[34mSM_LOG_LEVEL=20[0m
[34mSM_FRAMEWORK_MODULE=sagemaker_tensorflow_container.training:main[0m
[34mSM_INPUT_DIR=/opt/ml/input[0m
[34mSM_INPUT_CONFIG_DIR=/opt/ml/input/config[0m
[34mSM_OUTPUT_DIR=/opt/ml/output[0m
[34mSM_NUM_CPUS=64[0m
[34mSM_NUM_GPUS=8[0m
[34mSM_MODEL_DIR=/opt/ml/model[0m
[34mSM_MODULE_DIR=s3://sagemaker-us-east-1-057716757052/tensorflow2-smdataparallel-mnist-2021-10-02-08-01-24-159/source/sourcedir.tar.gz[0m
[34mSM_TRAINING_ENV={"additional_framework_parameters":{"sagemaker_distributed_dataparallel_custom_mpi_options":"","sagemaker_distributed_dataparallel_enabled":true,"sagemaker_instance_type":"ml.p3.16xlarge"},"channel_input_dirs":{},"current_host":"algo-1","framework_module":"sagemaker_tensorflow_container.training:main","hosts":["algo-1","algo-2"],"hyperparameters":{"model_dir":"s3://sagemaker-us-east-1-057716757052/tensorflow2-smdataparallel-mnist-2021-10-02-08-01-24-159/model"},"input_config_dir":"/opt/ml/input/config","input_data_config":{},"input_dir":"/opt/ml/input","is_master":true,"job_name":"tensorflow2-smdataparallel-mnist-2021-10-02-08-01-24-159","log_level":20,"master_hostname":"algo-1","model_dir":"/opt/ml/model","module_dir":"s3://sagemaker-us-east-1-057716757052/tensorflow2-smdataparallel-mnist-2021-10-02-08-01-24-159/source/sourcedir.tar.gz","module_name":"mnist_train_DDP","network_interface_name":"eth0","num_cpus":64,"num_gpus":8,"output_data_dir":"/opt/ml/output/data","output_dir":"/opt/ml/output","output_intermediate_dir":"/opt/ml/output/intermediate","resource_config":{"current_host":"algo-1","hosts":["algo-1","algo-2"],"network_interface_name":"eth0"},"user_entry_point":"mnist_train_DDP.py"}[0m
[34mSM_USER_ARGS=["--model_dir","s3://sagemaker-us-east-1-057716757052/tensorflow2-smdataparallel-mnist-2021-10-02-08-01-24-159/model"][0m
[34mSM_OUTPUT_INTERMEDIATE_DIR=/opt/ml/output/intermediate[0m
[34mSM_HP_MODEL_DIR=s3://sagemaker-us-east-1-057716757052/tensorflow2-smdataparallel-mnist-2021-10-02-08-01-24-159/model[0m
[34mPYTHONPATH=/opt/ml/code:/usr/local/bin:/usr/local/lib/python37.zip:/usr/local/lib/python3.7:/usr/local/lib/python3.7/lib-dynload:/usr/local/lib/python3.7/site-packages
[0m
[34mInvoking script with the following command:
[0m
[34mmpirun --host algo-1:8,algo-2:8 -np 16 --allow-run-as-root --tag-output --oversubscribe -mca btl_tcp_if_include eth0 -mca oob_tcp_if_include eth0 -mca plm_rsh_no_tree_spawn 1 -mca pml ob1 -mca btl ^openib -mca orte_abort_on_non_zero_status 1 -mca btl_vader_single_copy_mechanism none -mca plm_rsh_num_concurrent 2 -x NCCL_SOCKET_IFNAME=eth0 -x NCCL_DEBUG=INFO -x LD_LIBRARY_PATH -x PATH -x SMDATAPARALLEL_USE_HOMOGENEOUS=1 -x FI_PROVIDER=efa -x RDMAV_FORK_SAFE=1 -x LD_PRELOAD=/usr/local/lib/python3.7/site-packages/gethostname.cpython-37m-x86_64-linux-gnu.so -x SMDATAPARALLEL_SERVER_ADDR=algo-1 -x SMDATAPARALLEL_SERVER_PORT=7592 -x SAGEMAKER_INSTANCE_TYPE=ml.p3.16xlarge smddprun /usr/local/bin/python3.7 -m mpi4py mnist_train_DDP.py --model_dir s3://sagemaker-us-east-1-057716757052/tensorflow2-smdataparallel-mnist-2021-10-02-08-01-24-159/model
[0m
[35m2021-10-02 08:07:55,353 sagemaker-training-toolkit INFO Waiting for MPI process to finish.[0m
[35m2021-10-02 08:07:57,359 sagemaker-training-toolkit INFO Process[es]: [psutil.Process(pid=172, name='orted', status='disk-sleep', started='08:07:56')][0m
[35m2021-10-02 08:07:57,359 sagemaker-training-toolkit INFO Orted process found [psutil.Process(pid=172, name='orted', status='disk-sleep', started='08:07:56')][0m
[35m2021-10-02 08:07:57,359 sagemaker-training-toolkit INFO Waiting for orted process [psutil.Process(pid=172, name='orted', status='disk-sleep', started='08:07:56')][0m
[34m[1,9]<stdout>:TensorFlow version: 2.4.1[0m
[34m[1,9]<stdout>:Eager execution: True[0m
[34m[1,6]<stdout>:TensorFlow version: 2.4.1[0m
[34m[1,6]<stdout>:Eager execution: True[0m
[34m[1,14]<stdout>:TensorFlow version: 2.4.1[0m
[34m[1,14]<stdout>:Eager execution: True[0m
[34m[1,7]<stdout>:TensorFlow version: 2.4.1[0m
[34m[1,7]<stdout>:Eager execution: True[0m
[34m[1,5]<stdout>:TensorFlow version: 2.4.1[0m
[34m[1,5]<stdout>:Eager execution: True[0m
[34m[1,11]<stdout>:TensorFlow version: 2.4.1[0m
[34m[1,11]<stdout>:Eager execution: True[0m
[34m[1,2]<stdout>:TensorFlow version: 2.4.1[0m
[34m[1,2]<stdout>:Eager execution: True[0m
[34m[1,1]<stdout>:TensorFlow version: 2.4.1[0m
[34m[1,1]<stdout>:Eager execution: True[0m
[34m[1,3]<stdout>:TensorFlow version: 2.4.1[0m
[34m[1,3]<stdout>:Eager execution: True[0m
[34m[1,13]<stdout>:TensorFlow version: 2.4.1[0m
[34m[1,13]<stdout>:Eager execution: True[0m
[34m[1,12]<stdout>:TensorFlow version: 2.4.1[0m
[34m[1,12]<stdout>:Eager execution: True[0m
[34m[1,4]<stdout>:TensorFlow version: 2.4.1[0m
[34m[1,4]<stdout>:Eager execution: True[0m
[34m[1,10]<stdout>:TensorFlow version: 2.4.1[0m
[34m[1,10]<stdout>:Eager execution: True[0m
[34m[1,15]<stdout>:TensorFlow version: 2.4.1[0m
[34m[1,15]<stdout>:Eager execution: True[0m
[34m[1,0]<stdout>:TensorFlow version: 2.4.1[0m
[34m[1,0]<stdout>:Eager execution: True[0m
[34m[1,8]<stdout>:TensorFlow version: 2.4.1[0m
[34m[1,8]<stdout>:Eager execution: True[0m
[34m[1,0]<stdout>:algo-1:670:670 [0] NCCL INFO Bootstrap : Using [0]eth0:10.0.241.168<0>[0m
[34m[1,8]<stdout>:algo-2:678:678 [0] NCCL INFO Bootstrap : Using [0]eth0:10.0.223.118<0>[0m
[34m[1,0]<stdout>:[0m
[34m[1,0]<stdout>:algo-1:670:670 [0] find_ofi_provider:542 NCCL WARN NET/OFI Couldn't find any optimal provider[0m
[34m[1,0]<stdout>:algo-1:670:670 [0] NCCL INFO NET/IB : No device found.[0m
[34m[1,0]<stdout>:algo-1:670:670 [0] NCCL INFO NET/Socket : Using [0]eth0:10.0.241.168<0>[0m
[34m[1,0]<stdout>:algo-1:670:670 [0] NCCL INFO Using network Socket[0m
[34m[1,0]<stdout>:NCCL version 2.7.8+cuda11.0[0m
[34m[1,8]<stdout>:[0m
[34m[1,8]<stdout>:algo-2:678:678 [0] find_ofi_provider:542 NCCL WARN NET/OFI Couldn't find any optimal provider[0m
[34m[1,8]<stdout>:algo-2:678:678 [0] NCCL INFO NET/IB : No device found.[0m
[34m[1,8]<stdout>:algo-2:678:678 [0] NCCL INFO NET/Socket : Using [0]eth0:10.0.223.118<0>[0m
[34m[1,8]<stdout>:algo-2:678:678 [0] NCCL INFO Using network Socket[0m
[34m[1,8]<stdout>:NCCL version 2.7.8+cuda11.0[0m
[34m[1,3]<stdout>:algo-1:219:219 [3] NCCL INFO Bootstrap : Using [0]eth0:10.0.241.168<0>[0m
[34m[1,2]<stdout>:algo-1:216:216 [2] NCCL INFO Bootstrap : Using [0]eth0:10.0.241.168<0>[0m
[34m[1,1]<stdout>:algo-1:217:217 [1] NCCL INFO Bootstrap : Using [0]eth0:10.0.241.168<0>[0m
[34m[1,3]<stdout>:[0m
[34m[1,3]<stdout>:algo-1:219:219 [3] find_ofi_provider:542 NCCL WARN NET/OFI Couldn't find any optimal provider[0m
[34m[1,3]<stdout>:algo-1:219:219 [3] NCCL INFO NET/IB : No device found.[0m
[34m[1,3]<stdout>:algo-1:219:219 [3] NCCL INFO NET/Socket : Using [0]eth0:10.0.241.168<0>[0m
[34m[1,3]<stdout>:algo-1:219:219 [3] NCCL INFO Using network Socket[0m
[34m[1,1]<stdout>:[0m
[34m[1,1]<stdout>:algo-1:217:217 [1] find_ofi_provider:542 NCCL WARN NET/OFI Couldn't find any optimal provider[0m
[34m[1,2]<stdout>:[0m
[34m[1,2]<stdout>:algo-1:216:216 [2] find_ofi_provider:542 NCCL WARN NET/OFI Couldn't find any optimal provider[0m
[34m[1,1]<stdout>:algo-1:217:217 [1] NCCL INFO NET/IB : No device found.[0m
[34m[1,2]<stdout>:algo-1:216:216 [2] NCCL INFO NET/IB : No device found.[0m
[34m[1,1]<stdout>:algo-1:217:217 [1] NCCL INFO NET/Socket : Using [0]eth0:10.0.241.168<0>[0m
[34m[1,1]<stdout>:algo-1:217:217 [1] NCCL INFO Using network Socket[0m
[34m[1,2]<stdout>:algo-1:216:216 [2] NCCL INFO NET/Socket : Using [0]eth0:10.0.241.168<0>[0m
[34m[1,2]<stdout>:algo-1:216:216 [2] NCCL INFO Using network Socket[0m
[34m[1,11]<stdout>:algo-2:225:225 [3] NCCL INFO Bootstrap : Using [0]eth0:10.0.223.118<0>[0m
[34m[1,9]<stdout>:algo-2:224:224 [1] NCCL INFO Bootstrap : Using [0]eth0:10.0.223.118<0>[0m
[34m[1,10]<stdout>:algo-2:226:226 [2] NCCL INFO Bootstrap : Using [0]eth0:10.0.223.118<0>[0m
[34m[1,9]<stdout>:[0m
[34m[1,9]<stdout>:algo-2:224:224 [1] find_ofi_provider:542 NCCL WARN NET/OFI Couldn't find any optimal provider[0m
[34m[1,11]<stdout>:[0m
[34m[1,11]<stdout>:algo-2:225:225 [3] find_ofi_provider:542 NCCL WARN NET/OFI Couldn't find any optimal provider[0m
[34m[1,10]<stdout>:[0m
[34m[1,10]<stdout>:algo-2:226:226 [2] find_ofi_provider:542 NCCL WARN NET/OFI Couldn't find any optimal provider[0m
[34m[1,9]<stdout>:algo-2:224:224 [1] NCCL INFO NET/IB : No device found.[0m
[34m[1,10]<stdout>:algo-2:226:226 [2] NCCL INFO NET/IB : No device found.[0m
[34m[1,11]<stdout>:algo-2:225:225 [3] NCCL INFO NET/IB : No device found.[0m
[34m[1,9]<stdout>:algo-2:224:224 [1] NCCL INFO NET/Socket : Using [0]eth0:10.0.223.118<0>[0m
[34m[1,9]<stdout>:algo-2:224:224 [1] NCCL INFO Using network Socket[0m
[34m[1,11]<stdout>:algo-2:225:225 [3] NCCL INFO NET/Socket : Using [0]eth0:10.0.223.118<0>[0m
[34m[1,11]<stdout>:algo-2:225:225 [3] NCCL INFO Using network Socket[0m
[34m[1,10]<stdout>:algo-2:226:226 [2] NCCL INFO NET/Socket : Using [0]eth0:10.0.223.118<0>[0m
[34m[1,10]<stdout>:algo-2:226:226 [2] NCCL INFO Using network Socket[0m
[34m[1,4]<stdout>:algo-1:218:218 [4] NCCL INFO Bootstrap : Using [0]eth0:10.0.241.168<0>[0m
[34m[1,5]<stdout>:algo-1:220:220 [5] NCCL INFO Bootstrap : Using [0]eth0:10.0.241.168<0>[0m
[34m[1,7]<stdout>:algo-1:221:221 [7] NCCL INFO Bootstrap : Using [0]eth0:10.0.241.168<0>[0m
[34m[1,6]<stdout>:algo-1:215:215 [6] NCCL INFO Bootstrap : Using [0]eth0:10.0.241.168<0>[0m
[34m[1,14]<stdout>:algo-2:228:228 [6] NCCL INFO Bootstrap : Using [0]eth0:10.0.223.118<0>[0m
[34m[1,13]<stdout>:algo-2:223:223 [5] NCCL INFO Bootstrap : Using [0]eth0:10.0.223.118<0>[0m
[34m[1,15]<stdout>:algo-2:229:229 [7] NCCL INFO Bootstrap : Using [0]eth0:10.0.223.118<0>[0m
[34m[1,12]<stdout>:algo-2:227:227 [4] NCCL INFO Bootstrap : Using [0]eth0:10.0.223.118<0>[0m
[34m[1,6]<stdout>:[0m
[34m[1,6]<stdout>:algo-1:215:215 [6] find_ofi_provider:542 NCCL WARN NET/OFI Couldn't find any optimal provider[0m
[34m[1,4]<stdout>:[0m
[34m[1,4]<stdout>:algo-1:218:218 [4] find_ofi_provider:542 NCCL WARN NET/OFI Couldn't find any optimal provider[0m
[34m[1,5]<stdout>:[0m
[34m[1,5]<stdout>:algo-1:220:220 [5] find_ofi_provider:542 NCCL WARN NET/OFI Couldn't find any optimal provider[0m
[34m[1,7]<stdout>:[0m
[34m[1,7]<stdout>:algo-1:221:221 [7] find_ofi_provider:542 NCCL WARN NET/OFI Couldn't find any optimal provider[0m
[34m[1,6]<stdout>:algo-1:215:215 [6] NCCL INFO NET/IB : No device found.[0m
[34m[1,6]<stdout>:algo-1:215:215 [6] NCCL INFO NET/Socket : Using [0]eth0:10.0.241.168<0>[0m
[34m[1,6]<stdout>:algo-1:215:215 [6] NCCL INFO Using network Socket[0m
[34m[1,7]<stdout>:algo-1:221:221 [7] NCCL INFO NET/IB : No device found.[0m
[34m[1,4]<stdout>:algo-1:218:218 [4] NCCL INFO NET/IB : No device found.[0m
[34m[1,5]<stdout>:algo-1:220:220 [5] NCCL INFO NET/IB : No device found.[0m
[34m[1,7]<stdout>:algo-1:221:221 [7] NCCL INFO NET/Socket : Using [0]eth0:10.0.241.168<0>[0m
[34m[1,7]<stdout>:algo-1:221:221 [7] NCCL INFO Using network Socket[0m
[34m[1,4]<stdout>:algo-1:218:218 [4] NCCL INFO NET/Socket : Using [0]eth0:10.0.241.168<0>[0m
[34m[1,4]<stdout>:algo-1:218:218 [4] NCCL INFO Using network Socket[0m
[34m[1,5]<stdout>:algo-1:220:220 [5] NCCL INFO NET/Socket : Using [0]eth0:10.0.241.168<0>[0m
[34m[1,5]<stdout>:algo-1:220:220 [5] NCCL INFO Using network Socket[0m
[34m[1,14]<stdout>:[0m
[34m[1,14]<stdout>:algo-2:228:228 [6] find_ofi_provider:542 NCCL WARN NET/OFI Couldn't find any optimal provider[0m
[34m[1,13]<stdout>:[0m
[34m[1,13]<stdout>:algo-2:223:223 [5] find_ofi_provider:542 NCCL WARN NET/OFI Couldn't find any optimal provider[0m
[34m[1,15]<stdout>:[0m
[34m[1,15]<stdout>:algo-2:229:229 [7] find_ofi_provider:542 NCCL WARN NET/OFI Couldn't find any optimal provider[0m
[34m[1,14]<stdout>:algo-2:228:228 [6] NCCL INFO NET/IB : No device found.[0m
[34m[1,12]<stdout>:[0m
[34m[1,12]<stdout>:algo-2:227:227 [4] find_ofi_provider:542 NCCL WARN NET/OFI Couldn't find any optimal provider[0m
[34m[1,14]<stdout>:algo-2:228:228 [6] NCCL INFO NET/Socket : Using [0]eth0:10.0.223.118<0>[0m
[34m[1,14]<stdout>:algo-2:228:228 [6] NCCL INFO Using network Socket[0m
[34m[1,13]<stdout>:algo-2:223:223 [5] NCCL INFO NET/IB : No device found.[0m
[34m[1,15]<stdout>:algo-2:229:229 [7] NCCL INFO NET/IB : No device found.[0m
[34m[1,13]<stdout>:algo-2:223:223 [5] NCCL INFO NET/Socket : Using [0]eth0:10.0.223.118<0>[0m
[34m[1,13]<stdout>:algo-2:223:223 [5] NCCL INFO Using network Socket[0m
[34m[1,15]<stdout>:algo-2:229:229 [7] NCCL INFO NET/Socket : Using [0]eth0:10.0.223.118<0>[0m
[34m[1,12]<stdout>:algo-2:227:227 [4] NCCL INFO NET/IB : No device found.[0m
[34m[1,15]<stdout>:algo-2:229:229 [7] NCCL INFO Using network Socket[0m
[34m[1,12]<stdout>:algo-2:227:227 [4] NCCL INFO NET/Socket : Using [0]eth0:10.0.223.118<0>[0m
[34m[1,12]<stdout>:algo-2:227:227 [4] NCCL INFO Using network Socket[0m
[34m[1,6]<stdout>:algo-1:215:215 [6] NCCL INFO threadThresholds 8/8/64 | 64/8/64 | 8/8/64[0m
[34m[1,6]<stdout>:algo-1:215:215 [6] NCCL INFO Trees [0] 7/-1/-1->6->5|5->6->7/-1/-1 [1] 7/-1/-1->6->5|5->6->7/-1/-1 [2] 5/-1/-1->6->7|7->6->5/-1/-1 [3] 5/-1/-1->6->7|7->6->5/-1/-1 [4] 2/-1/-1->6->4|4->6->2/-1/-1 [5] 4/-1/-1->6->2|2->6->4/-1/-1 [6] 7/-1/-1->6->5|5->6->7/-1/-1 [7] 7/-1/-1->6->5|5->6->7/-1/-1 [8] 5/-1/-1->6->7|7->6->5/-1/-1 [9] 5/-1/-1->6->7|7->6->5/-1/-1 [10] 2/-1/-1->6->4|4->6->2/-1/-1 [11] 4/-1/-1->6->2|2->6->4/-1/-1[0m
[34m[1,0]<stdout>:algo-1:670:670 [0] NCCL INFO Channel 00/12 : 0 3 2 1 5 6 7 4[0m
[34m[1,0]<stdout>:algo-1:670:670 [0] NCCL INFO Channel 01/12 : 0 3 2 1 5 6 7 4[0m
[34m[1,0]<stdout>:algo-1:670:670 [0] NCCL INFO Channel 02/12 : 0 4 7 6 5 1 2 3[0m
[34m[1,0]<stdout>:algo-1:670:670 [0] NCCL INFO Channel 03/12 : 0 4 7 6 5 1 2 3[0m
[34m[1,0]<stdout>:algo-1:670:670 [0] NCCL INFO Channel 04/12 : 0 1 3 7 5 4 6 2[0m
[34m[1,7]<stdout>:algo-1:221:221 [7] NCCL INFO threadThresholds 8/8/64 | 64/8/64 | 8/8/64[0m
[34m[1,7]<stdout>:algo-1:221:221 [7] NCCL INFO Trees [0] 4/-1/-1->7->6|6->7->4/-1/-1 [1] 4/-1/-1->7->6|6->7->4/-1/-1 [2] 6/-1/-1->7->4|4->7->6/-1/-1 [3] 6/-1/-1->7->4|4->7->6/-1/-1 [4] 5/-1/-1->7->3|3->7->5/-1/-1 [5] 3/-1/-1->7->5|5->7->3/-1/-1 [6] 4/-1/-1->7->6|6->7->4/-1/-1 [7] 4/-1/-1->7->6|6->7->4/-1/-1 [8] 6/-1/-1->7->4|4->7->6/-1/-1 [9] 6/-1/-1->7->4|4->7->6/-1/-1 [10] 5/-1/-1->7->3|3->7->5/-1/-1 [11] 3/-1/-1->7->5|5->7->3/-1/-1[0m
[34m[1,2]<stdout>:algo-1:216:216 [2] NCCL INFO threadThresholds 8/8/64 | 64/8/64 | 8/8/64[0m
[34m[1,1]<stdout>:algo-1:217:217 [1] NCCL INFO threadThresholds 8/8/64 | 64/8/64 | 8/8/64[0m
[34m[1,1]<stdout>:algo-1:217:217 [1] NCCL INFO Trees [0] 5/-1/-1->1->2|2->1->5/-1/-1 [1] 5/-1/-1->1->2|2->1->5/-1/-1 [2] 2/-1/-1->1->5|5->1->2/-1/-1 [3] 2/-1/-1->1->5|5->1->2/-1/-1 [4] 3/-1/-1->1->0|0->1->3/-1/-1 [5] -1/-1/-1->1->3|3->1->-1/-1/-1 [6] 5/-1/-1->1->2|2->1->5/-1/-1 [7] 5/-1/-1->1->2|2->1->5/-1/-1 [8] 2/-1/-1->1->5|5->1->2/-1/-1 [9] 2/-1/-1->1->5|5->1->2/-1/-1 [10] 3/-1/-1->1->0|0->1->3/-1/-1 [11] -1/-1/-1->1->3|3->1->-1/-1/-1[0m
[34m[1,0]<stdout>:algo-1:670:670 [0] NCCL INFO Channel 05/12 : 0 2 6 4 5 7 3 1[0m
[34m[1,0]<stdout>:algo-1:670:670 [0] NCCL INFO Channel 06/12 : 0 3 2 1 5 6 7 4[0m
[34m[1,3]<stdout>:algo-1:219:219 [3] NCCL INFO threadThresholds 8/8/64 | 64/8/64 | 8/8/64[0m
[34m[1,2]<stdout>:algo-1:216:216 [2] NCCL INFO Trees [0] 1/-1/-1->2->3|3->2->1/-1/-1 [1] 1/-1/-1->2->3|3->2->1/-1/-1 [2] 3/-1/-1->2->1|1->2->3/-1/-1 [3] 3/-1/-1->2->1|1->2->3/-1/-1 [4] -1/-1/-1->2->6|6->2->-1/-1/-1 [5] 6/-1/-1->2->0|0->2->6/-1/-1 [6] 1/-1/-1->2->3|3->2->1/-1/-1 [7] 1/-1/-1->2->3|3->2->1/-1/-1 [8] 3/-1/-1->2->1|1->2->3/-1/-1 [9] 3/-1/-1->2->1|1->2->3/-1/-1 [10] -1/-1/-1->2->6|6->2->-1/-1/-1 [11] 6/-1/-1->2->0|0->2->6/-1/-1[0m
[34m[1,0]<stdout>:algo-1:670:670 [0] NCCL INFO Channel 07/12 : 0 3 2 1 5 6 7 4[0m
[34m[1,0]<stdout>:algo-1:670:670 [0] NCCL INFO Channel 08/12 : 0 4 7 6 5 1 2 3[0m
[34m[1,0]<stdout>:algo-1:670:670 [0] NCCL INFO Channel 09/12 : 0 4 7 6 5 1 2 3[0m
[34m[1,4]<stdout>:algo-1:218:218 [4] NCCL INFO threadThresholds 8/8/64 | 64/8/64 | 8/8/64[0m
[34m[1,0]<stdout>:algo-1:670:670 [0] NCCL INFO Channel 10/12 : 0 1 3 7 5 4 6 2[0m
[34m[1,0]<stdout>:algo-1:670:670 [0] NCCL INFO Channel 11/12 : 0 2 6 4 5 7 3 1[0m
[34m[1,5]<stdout>:algo-1:220:220 [5] NCCL INFO threadThresholds 8/8/64 | 64/8/64 | 8/8/64[0m
[34m[1,3]<stdout>:algo-1:219:219 [3] NCCL INFO Trees [0] 2/-1/-1->3->0|0->3->2/-1/-1 [1] 2/-1/-1->3->0|0->3->2/-1/-1 [2] -1/-1/-1->3->2|2->3->-1/-1/-1 [3] -1/-1/-1->3->2|2->3->-1/-1/-1 [4] 7/-1/-1->3->1|1->3->7/-1/-1 [5] 1/-1/-1->3->7|7->3->1/-1/-1 [6] 2/-1/-1->3->0|0->3->2/-1/-1 [7] 2/-1/-1->3->0|0->3->2/-1/-1 [8] -1/-1/-1->3->2|2->3->-1/-1/-1 [9] -1/-1/-1->3->2|2->3->-1/-1/-1 [10] 7/-1/-1->3->1|1->3->7/-1/-1 [11] 1/-1/-1->3->7|7->3->1/-1/-1[0m
[34m[1,4]<stdout>:algo-1:218:218 [4] NCCL INFO Trees [0] -1/-1/-1->4->7|7->4->-1/-1/-1 [1] -1/-1/-1->4->7|7->4->-1/-1/-1 [2] 7/-1/-1->4->0|0->4->7/-1/-1 [3] 7/-1/-1->4->0|0->4->7/-1/-1 [4] 6/-1/-1->4->5|5->4->6/-1/-1 [5] 5/-1/-1->4->6|6->4->5/-1/-1 [6] -1/-1/-1->4->7|7->4->-1/-1/-1 [7] -1/-1/-1->4->7|7->4->-1/-1/-1 [8] 7/-1/-1->4->0|0->4->7/-1/-1 [9] 7/-1/-1->4->0|0->4->7/-1/-1 [10] 6/-1/-1->4->5|5->4->6/-1/-1 [11] 5/-1/-1->4->6|6->4->5/-1/-1[0m
[34m[1,5]<stdout>:algo-1:220:220 [5] NCCL INFO Trees [0] 6/-1/-1->5->1|1->5->6/-1/-1 [1] 6/-1/-1->5->1|1->5->6/-1/-1 [2] 1/-1/-1->5->6|6->5->1/-1/-1 [3] 1/-1/-1->5->6|6->5->1/-1/-1 [4] 4/-1/-1->5->7|7->5->4/-1/-1 [5] 7/-1/-1->5->4|4->5->7/-1/-1 [6] 6/-1/-1->5->1|1->5->6/-1/-1 [7] 6/-1/-1->5->1|1->5->6/-1/-1 [8] 1/-1/-1->5->6|6->5->1/-1/-1 [9] 1/-1/-1->5->6|6->5->1/-1/-1 [10] 4/-1/-1->5->7|7->5->4/-1/-1 [11] 7/-1/-1->5->4|4->5->7/-1/-1[0m
[34m[1,0]<stdout>:algo-1:670:670 [0] NCCL INFO threadThresholds 8/8/64 | 64/8/64 | 8/8/64[0m
[34m[1,0]<stdout>:algo-1:670:670 [0] NCCL INFO Trees [0] 3/-1/-1->0->-1|-1->0->3/-1/-1 [1] 3/-1/-1->0->-1|-1->0->3/-1/-1 [2] 4/-1/-1->0->-1|-1->0->4/-1/-1 [3] 4/-1/-1->0->-1|-1->0->4/-1/-1 [4] 1/-1/-1->0->-1|-1->0->1/-1/-1 [5] 2/-1/-1->0->-1|-1->0->2/-1/-1 [6] 3/-1/-1->0->-1|-1->0->3/-1/-1 [7] 3/-1/-1->0->-1|-1->0->3/-1/-1 [8] 4/-1/-1->0->-1|-1->0->4/-1/-1 [9] 4/-1/-1->0->-1|-1->0->4/-1/-1 [10] 1/-1/-1->0->-1|-1->0->1/-1/-1 [11] 2/-1/-1->0->-1|-1->0->2/-1/-1[0m
[34m[1,6]<stdout>:algo-1:215:215 [6] NCCL INFO Channel 00 : 6[1d0] -> 7[1e0] via P2P/IPC[0m
[34m[1,7]<stdout>:algo-1:221:221 [7] NCCL INFO Channel 00 : 7[1e0] -> 4[1b0] via P2P/IPC[0m
[34m[1,1]<stdout>:algo-1:217:217 [1] NCCL INFO Channel 00 : 1[180] -> 5[1c0] via P2P/IPC[0m
[34m[1,2]<stdout>:algo-1:216:216 [2] NCCL INFO Channel 00 : 2[190] -> 1[180] via P2P/IPC[0m
[34m[1,4]<stdout>:algo-1:218:218 [4] NCCL INFO Channel 00 : 4[1b0] -> 0[170] via P2P/IPC[0m
[34m[1,3]<stdout>:algo-1:219:219 [3] NCCL INFO Channel 00 : 3[1a0] -> 2[190] via P2P/IPC[0m
[34m[1,5]<stdout>:algo-1:220:220 [5] NCCL INFO Channel 00 : 5[1c0] -> 6[1d0] via P2P/IPC[0m
[34m[1,0]<stdout>:algo-1:670:670 [0] NCCL INFO Channel 00 : 0[170] -> 3[1a0] via P2P/IPC[0m
[34m[1,13]<stdout>:algo-2:223:223 [5] NCCL INFO threadThresholds 8/8/64 | 64/8/64 | 8/8/64[0m
[34m[1,13]<stdout>:algo-2:223:223 [5] NCCL INFO Trees [0] 6/-1/-1->5->1|1->5->6/-1/-1 [1] 6/-1/-1->5->1|1->5->6/-1/-1 [2] 1/-1/-1->5->6|6->5->1/-1/-1 [3] 1/-1/-1->5->6|6->5->1/-1/-1 [4] 4/-1/-1->5->7|7->5->4/-1/-1 [5] 7/-1/-1->5->4|4->5->7/-1/-1 [6] 6/-1/-1->5->1|1->5->6/-1/-1 [7] 6/-1/-1->5->1|1->5->6/-1/-1 [8] 1/-1/-1->5->6|6->5->1/-1/-1 [9] 1/-1/-1->5->6|6->5->1/-1/-1 [10] 4/-1/-1->5->7|7->5->4/-1/-1 [11] 7/-1/-1->5->4|4->5->7/-1/-1[0m
[34m[1,14]<stdout>:algo-2:228:228 [6] NCCL INFO threadThresholds 8/8/64 | 64/8/64 | 8/8/64[0m
[34m[1,14]<stdout>:algo-2:228:228 [6] NCCL INFO Trees [0] 7/-1/-1->6->5|5->6->7/-1/-1 [1] 7/-1/-1->6->5|5->6->7/-1/-1 [2] 5/-1/-1->6->7|7->6->5/-1/-1 [3] 5/-1/-1->6->7|7->6->5/-1/-1 [4] 2/-1/-1->6->4|4->6->2/-1/-1 [5] 4/-1/-1->6->2|2->6->4/-1/-1 [6] 7/-1/-1->6->5|5->6->7/-1/-1 [7] 7/-1/-1->6->5|5->6->7/-1/-1 [8] 5/-1/-1->6->7|7->6->5/-1/-1 [9] 5/-1/-1->6->7|7->6->5/-1/-1 [10] 2/-1/-1->6->4|4->6->2/-1/-1 [11] 4/-1/-1->6->2|2->6->4/-1/-1[0m
[34m[1,15]<stdout>:algo-2:229:229 [7] NCCL INFO threadThresholds 8/8/64 | 64/8/64 | 8/8/64[0m
[34m[1,15]<stdout>:algo-2:229:229 [7] NCCL INFO Trees [0] 4/-1/-1->7->6|6->7->4/-1/-1 [1] 4/-1/-1->7->6|6->7->4/-1/-1 [2] 6/-1/-1->7->4|4->7->6/-1/-1 [3] 6/-1/-1->7->4|4->7->6/-1/-1 [4] 5/-1/-1->7->3|3->7->5/-1/-1 [5] 3/-1/-1->7->5|5->7->3/-1/-1 [6] 4/-1/-1->7->6|6->7->4/-1/-1 [7] 4/-1/-1->7->6|6->7->4/-1/-1 [8] 6/-1/-1->7->4|4->7->6/-1/-1 [9] 6/-1/-1->7->4|4->7->6/-1/-1 [10] 5/-1/-1->7->3|3->7->5/-1/-1 [11] 3/-1/-1->7->5|5->7->3/-1/-1[0m
[34m[1,10]<stdout>:algo-2:226:226 [2] NCCL INFO threadThresholds 8/8/64 | 64/8/64 | 8/8/64[0m
[34m[1,10]<stdout>:algo-2:226:226 [2] NCCL INFO Trees [0] 1/-1/-1->2->3|3->2->1/-1/-1 [1] 1/-1/-1->2->3|3->2->1/-1/-1 [2] 3/-1/-1->2->1|1->2->3/-1/-1 [3] 3/-1/-1->2->1|1->2->3/-1/-1 [4] -1/-1/-1->2->6|6->2->-1/-1/-1 [5] 6/-1/-1->2->0|0->2->6/-1/-1 [6] 1/-1/-1->2->3|3->2->1/-1/-1 [7] 1/-1/-1->2->3|3->2->1/-1/-1 [8] 3/-1/-1->2->1|1->2->3/-1/-1 [9] 3/-1/-1->2->1|1->2->3/-1/-1 [10] -1/-1/-1->2->6|6->2->-1/-1/-1 [11] 6/-1/-1->2->0|0->2->6/-1/-1[0m
[34m[1,8]<stdout>:algo-2:678:678 [0] NCCL INFO Channel 00/12 : 0 3 2 1 5 6 7 4[0m
[34m[1,8]<stdout>:algo-2:678:678 [0] NCCL INFO Channel 01/12 : 0 3 2 1 5 6 7 4[0m
[34m[1,8]<stdout>:algo-2:678:678 [0] NCCL INFO Channel 02/12 : 0 4 7 6 5 1 2 3[0m
[34m[1,8]<stdout>:algo-2:678:678 [0] NCCL INFO Channel 03/12 : 0 4 7 6 5 1 2 3[0m
[34m[1,8]<stdout>:algo-2:678:678 [0] NCCL INFO Channel 04/12 : 0 1 3 7 5 4 6 2[0m
[34m[1,8]<stdout>:algo-2:678:678 [0] NCCL INFO Channel 05/12 : 0 2 6 4 5 7 3 1[0m
[34m[1,8]<stdout>:algo-2:678:678 [0] NCCL INFO Channel 06/12 : 0 3 2 1 5 6 7 4[0m
[34m[1,8]<stdout>:algo-2:678:678 [0] NCCL INFO Channel 07/12 : 0 3 2 1 5 6 7 4[0m
[34m[1,8]<stdout>:algo-2:678:678 [0] NCCL INFO Channel 08/12 : 0 4 7 6 5 1 2 3[0m
[34m[1,9]<stdout>:algo-2:224:224 [1] NCCL INFO threadThresholds 8/8/64 | 64/8/64 | 8/8/64[0m
[34m[1,9]<stdout>:algo-2:224:224 [1] NCCL INFO Trees [0] 5/-1/-1->1->2|2->1->5/-1/-1 [1] 5/-1/-1->1->2|2->1->5/-1/-1 [2] 2/-1/-1->1->5|5->1->2/-1/-1 [3] 2/-1/-1->1->5|5->1->2/-1/-1 [4] 3/-1/-1->1->0|0->1->3/-1/-1 [5] -1/-1/-1->1->3|3->1->-1/-1/-1 [6] 5/-1/-1->1->2|2->1->5/-1/-1 [7] 5/-1/-1->1->2|2->1->5/-1/-1 [8] 2/-1/-1->1->5|5->1->2/-1/-1 [9] 2/-1/-1->1->5|5->1->2/-1/-1 [10] 3/-1/-1->1->0|0->1->3/-1/-1 [11] -1/-1/-1->1->3|3->1->-1/-1/-1[0m
[34m[1,12]<stdout>:algo-2:227:227 [4] NCCL INFO threadThresholds 8/8/64 | 64/8/64 | 8/8/64[0m
[34m[1,8]<stdout>:algo-2:678:678 [0] NCCL INFO Channel 09/12 : 0 4 7 6 5 1 2 3[0m
[34m[1,8]<stdout>:algo-2:678:678 [0] NCCL INFO Channel 10/12 : 0 1 3 7 5 4 6 2[0m
[34m[1,8]<stdout>:algo-2:678:678 [0] NCCL INFO Channel 11/12 : 0 2 6 4 5 7 3 1[0m
[34m[1,11]<stdout>:algo-2:225:225 [3] NCCL INFO threadThresholds 8/8/64 | 64/8/64 | 8/8/64[0m
[34m[1,11]<stdout>:algo-2:225:225 [3] NCCL INFO Trees [0] 2/-1/-1->3->0|0->3->2/-1/-1 [1] 2/-1/-1->3->0|0->3->2/-1/-1 [2] -1/-1/-1->3->2|2->3->-1/-1/-1 [3] -1/-1/-1->3->2|2->3->-1/-1/-1 [4] 7/-1/-1->3->1|1->3->7/-1/-1 [5] 1/-1/-1->3->7|7->3->1/-1/-1 [6] 2/-1/-1->3->0|0->3->2/-1/-1 [7] 2/-1/-1->3->0|0->3->2/-1/-1 [8] -1/-1/-1->3->2|2->3->-1/-1/-1 [9] -1/-1/-1->3->2|2->3->-1/-1/-1 [10] 7/-1/-1->3->1|1->3->7/-1/-1 [11] 1/-1/-1->3->7|7->3->1/-1/-1[0m
[34m[1,12]<stdout>:algo-2:227:227 [4] NCCL INFO Trees [0] -1/-1/-1->4->7|7->4->-1/-1/-1 [1] -1/-1/-1->4->7|7->4->-1/-1/-1 [2] 7/-1/-1->4->0|0->4->7/-1/-1 [3] 7/-1/-1->4->0|0->4->7/-1/-1 [4] 6/-1/-1->4->5|5->4->6/-1/-1 [5] 5/-1/-1->4->6|6->4->5/-1/-1 [6] -1/-1/-1->4->7|7->4->-1/-1/-1 [7] -1/-1/-1->4->7|7->4->-1/-1/-1 [8] 7/-1/-1->4->0|0->4->7/-1/-1 [9] 7/-1/-1->4->0|0->4->7/-1/-1 [10] 6/-1/-1->4->5|5->4->6/-1/-1 [11] 5/-1/-1->4->6|6->4->5/-1/-1[0m
[34m[1,8]<stdout>:algo-2:678:678 [0] NCCL INFO threadThresholds 8/8/64 | 64/8/64 | 8/8/64[0m
[34m[1,8]<stdout>:algo-2:678:678 [0] NCCL INFO Trees [0] 3/-1/-1->0->-1|-1->0->3/-1/-1 [1] 3/-1/-1->0->-1|-1->0->3/-1/-1 [2] 4/-1/-1->0->-1|-1->0->4/-1/-1 [3] 4/-1/-1->0->-1|-1->0->4/-1/-1 [4] 1/-1/-1->0->-1|-1->0->1/-1/-1 [5] 2/-1/-1->0->-1|-1->0->2/-1/-1 [6] 3/-1/-1->0->-1|-1->0->3/-1/-1 [7] 3/-1/-1->0->-1|-1->0->3/-1/-1 [8] 4/-1/-1->0->-1|-1->0->4/-1/-1 [9] 4/-1/-1->0->-1|-1->0->4/-1/-1 [10] 1/-1/-1->0->-1|-1->0->1/-1/-1 [11] 2/-1/-1->0->-1|-1->0->2/-1/-1[0m
[34m[1,13]<stdout>:algo-2:223:223 [5] NCCL INFO Channel 00 : 5[1c0] -> 6[1d0] via P2P/IPC[0m
[34m[1,14]<stdout>:algo-2:228:228 [6] NCCL INFO Channel 00 : 6[1d0] -> 7[1e0] via P2P/IPC[0m
[34m[1,15]<stdout>:algo-2:229:229 [7] NCCL INFO Channel 00 : 7[1e0] -> 4[1b0] via P2P/IPC[0m
[34m[1,9]<stdout>:algo-2:224:224 [1] NCCL INFO Channel 00 : 1[180] -> 5[1c0] via P2P/IPC[0m
[34m[1,4]<stdout>:algo-1:218:218 [4] NCCL INFO Channel 00 : 4[1b0] -> 7[1e0] via P2P/IPC[0m
[34m[1,10]<stdout>:algo-2:226:226 [2] NCCL INFO Channel 00 : 2[190] -> 1[180] via P2P/IPC[0m
[34m[1,12]<stdout>:algo-2:227:227 [4] NCCL INFO Channel 00 : 4[1b0] -> 0[170] via P2P/IPC[0m
[34m[1,11]<stdout>:algo-2:225:225 [3] NCCL INFO Channel 00 : 3[1a0] -> 2[190] via P2P/IPC[0m
[34m[1,8]<stdout>:algo-2:678:678 [0] NCCL INFO Channel 00 : 0[170] -> 3[1a0] via P2P/IPC[0m
[34m[1,6]<stdout>:algo-1:215:215 [6] NCCL INFO Channel 00 : 6[1d0] -> 5[1c0] via P2P/IPC[0m
[34m[1,7]<stdout>:algo-1:221:221 [7] NCCL INFO Channel 00 : 7[1e0] -> 6[1d0] via P2P/IPC[0m
[34m[1,1]<stdout>:algo-1:217:217 [1] NCCL INFO Channel 00 : 1[180] -> 2[190] via P2P/IPC[0m
[34m[1,2]<stdout>:algo-1:216:216 [2] NCCL INFO Channel 00 : 2[190] -> 3[1a0] via P2P/IPC[0m
[34m[1,3]<stdout>:algo-1:219:219 [3] NCCL INFO Channel 00 : 3[1a0] -> 0[170] via P2P/IPC[0m
[34m[1,5]<stdout>:algo-1:220:220 [5] NCCL INFO Channel 00 : 5[1c0] -> 1[180] via P2P/IPC[0m
[34m[1,4]<stdout>:algo-1:218:218 [4] NCCL INFO Channel 01 : 4[1b0] -> 0[170] via P2P/IPC[0m
[34m[1,12]<stdout>:algo-2:227:227 [4] NCCL INFO Channel 00 : 4[1b0] -> 7[1e0] via P2P/IPC[0m
[34m[1,0]<stdout>:algo-1:670:670 [0] NCCL INFO Channel 01 : 0[170] -> 3[1a0] via P2P/IPC[0m
[34m[1,6]<stdout>:algo-1:215:215 [6] NCCL INFO Channel 01 : 6[1d0] -> 7[1e0] via P2P/IPC[0m
[34m[1,1]<stdout>:algo-1:217:217 [1] NCCL INFO Channel 01 : 1[180] -> 5[1c0] via P2P/IPC[0m
[34m[1,7]<stdout>:algo-1:221:221 [7] NCCL INFO Channel 01 : 7[1e0] -> 4[1b0] via P2P/IPC[0m
[34m[1,2]<stdout>:algo-1:216:216 [2] NCCL INFO Channel 01 : 2[190] -> 1[180] via P2P/IPC[0m
[34m[1,15]<stdout>:algo-2:229:229 [7] NCCL INFO Channel 00 : 7[1e0] -> 6[1d0] via P2P/IPC[0m
[34m[1,14]<stdout>:algo-2:228:228 [6] NCCL INFO Channel 00 : 6[1d0] -> 5[1c0] via P2P/IPC[0m
[34m[1,13]<stdout>:algo-2:223:223 [5] NCCL INFO Channel 00 : 5[1c0] -> 1[180] via P2P/IPC[0m
[34m[1,9]<stdout>:algo-2:224:224 [1] NCCL INFO Channel 00 : 1[180] -> 2[190] via P2P/IPC[0m
[34m[1,10]<stdout>:algo-2:226:226 [2] NCCL INFO Channel 00 : 2[190] -> 3[1a0] via P2P/IPC[0m
[34m[1,11]<stdout>:algo-2:225:225 [3] NCCL INFO Channel 00 : 3[1a0] -> 0[170] via P2P/IPC[0m
[34m[1,3]<stdout>:algo-1:219:219 [3] NCCL INFO Channel 01 : 3[1a0] -> 2[190] via P2P/IPC[0m
[34m[1,5]<stdout>:algo-1:220:220 [5] NCCL INFO Channel 01 : 5[1c0] -> 6[1d0] via P2P/IPC[0m
[34m[1,4]<stdout>:algo-1:218:218 [4] NCCL INFO Channel 01 : 4[1b0] -> 7[1e0] via P2P/IPC[0m
[34m[1,12]<stdout>:algo-2:227:227 [4] NCCL INFO Channel 01 : 4[1b0] -> 0[170] via P2P/IPC[0m
[34m[1,8]<stdout>:algo-2:678:678 [0] NCCL INFO Channel 01 : 0[170] -> 3[1a0] via P2P/IPC[0m
[34m[1,6]<stdout>:algo-1:215:215 [6] NCCL INFO Channel 01 : 6[1d0] -> 5[1c0] via P2P/IPC[0m
[34m[1,1]<stdout>:algo-1:217:217 [1] NCCL INFO Channel 01 : 1[180] -> 2[190] via P2P/IPC[0m
[34m[1,7]<stdout>:algo-1:221:221 [7] NCCL INFO Channel 01 : 7[1e0] -> 6[1d0] via P2P/IPC[0m
[34m[1,2]<stdout>:algo-1:216:216 [2] NCCL INFO Channel 01 : 2[190] -> 3[1a0] via P2P/IPC[0m
[34m[1,13]<stdout>:algo-2:223:223 [5] NCCL INFO Channel 01 : 5[1c0] -> 6[1d0] via P2P/IPC[0m
[34m[1,14]<stdout>:algo-2:228:228 [6] NCCL INFO Channel 01 : 6[1d0] -> 7[1e0] via P2P/IPC[0m
[34m[1,15]<stdout>:algo-2:229:229 [7] NCCL INFO Channel 01 : 7[1e0] -> 4[1b0] via P2P/IPC[0m
[34m[1,9]<stdout>:algo-2:224:224 [1] NCCL INFO Channel 01 : 1[180] -> 5[1c0] via P2P/IPC[0m
[34m[1,3]<stdout>:algo-1:219:219 [3] NCCL INFO Channel 01 : 3[1a0] -> 0[170] via P2P/IPC[0m
[34m[1,5]<stdout>:algo-1:220:220 [5] NCCL INFO Channel 01 : 5[1c0] -> 1[180] via P2P/IPC[0m
[34m[1,10]<stdout>:algo-2:226:226 [2] NCCL INFO Channel 01 : 2[190] -> 1[180] via P2P/IPC[0m
[34m[1,11]<stdout>:algo-2:225:225 [3] NCCL INFO Channel 01 : 3[1a0] -> 2[190] via P2P/IPC[0m
[34m[1,4]<stdout>:algo-1:218:218 [4] NCCL INFO Channel 02 : 4[1b0] -> 7[1e0] via P2P/IPC[0m
[34m[1,12]<stdout>:algo-2:227:227 [4] NCCL INFO Channel 01 : 4[1b0] -> 7[1e0] via P2P/IPC[0m
[34m[1,0]<stdout>:algo-1:670:670 [0] NCCL INFO Channel 02 : 0[170] -> 4[1b0] via P2P/IPC[0m
[34m[1,6]<stdout>:algo-1:215:215 [6] NCCL INFO Channel 02 : 6[1d0] -> 5[1c0] via P2P/IPC[0m
[34m[1,2]<stdout>:algo-1:216:216 [2] NCCL INFO Channel 02 : 2[190] -> 3[1a0] via P2P/IPC[0m
[34m[1,1]<stdout>:algo-1:217:217 [1] NCCL INFO Channel 02 : 1[180] -> 2[190] via P2P/IPC[0m
[34m[1,7]<stdout>:algo-1:221:221 [7] NCCL INFO Channel 02 : 7[1e0] -> 6[1d0] via P2P/IPC[0m
[34m[1,13]<stdout>:algo-2:223:223 [5] NCCL INFO Channel 01 : 5[1c0] -> 1[180] via P2P/IPC[0m
[34m[1,14]<stdout>:algo-2:228:228 [6] NCCL INFO Channel 01 : 6[1d0] -> 5[1c0] via P2P/IPC[0m
[34m[1,15]<stdout>:algo-2:229:229 [7] NCCL INFO Channel 01 : 7[1e0] -> 6[1d0] via P2P/IPC[0m
[34m[1,9]<stdout>:algo-2:224:224 [1] NCCL INFO Channel 01 : 1[180] -> 2[190] via P2P/IPC[0m
[34m[1,10]<stdout>:algo-2:226:226 [2] NCCL INFO Channel 01 : 2[190] -> 3[1a0] via P2P/IPC[0m
[34m[1,5]<stdout>:algo-1:220:220 [5] NCCL INFO Channel 02 : 5[1c0] -> 1[180] via P2P/IPC[0m
[34m[1,3]<stdout>:algo-1:219:219 [3] NCCL INFO Channel 02 : 3[1a0] -> 0[170] via P2P/IPC[0m
[34m[1,11]<stdout>:algo-2:225:225 [3] NCCL INFO Channel 01 : 3[1a0] -> 0[170] via P2P/IPC[0m
[34m[1,12]<stdout>:algo-2:227:227 [4] NCCL INFO Channel 02 : 4[1b0] -> 7[1e0] via P2P/IPC[0m
[34m[1,4]<stdout>:algo-1:218:218 [4] NCCL INFO Channel 02 : 4[1b0] -> 0[170] via P2P/IPC[0m
[34m[1,3]<stdout>:algo-1:219:219 [3] NCCL INFO Channel 02 : 3[1a0] -> 2[190] via P2P/IPC[0m
[34m[1,8]<stdout>:algo-2:678:678 [0] NCCL INFO Channel 02 : 0[170] -> 4[1b0] via P2P/IPC[0m
[34m[1,6]<stdout>:algo-1:215:215 [6] NCCL INFO Channel 02 : 6[1d0] -> 7[1e0] via P2P/IPC[0m
[34m[1,13]<stdout>:algo-2:223:223 [5] NCCL INFO Channel 02 : 5[1c0] -> 1[180] via P2P/IPC[0m
[34m[1,1]<stdout>:algo-1:217:217 [1] NCCL INFO Channel 02 : 1[180] -> 5[1c0] via P2P/IPC[0m
[34m[1,2]<stdout>:algo-1:216:216 [2] NCCL INFO Channel 02 : 2[190] -> 1[180] via P2P/IPC[0m
[34m[1,7]<stdout>:algo-1:221:221 [7] NCCL INFO Channel 02 : 7[1e0] -> 4[1b0] via P2P/IPC[0m
[34m[1,15]<stdout>:algo-2:229:229 [7] NCCL INFO Channel 02 : 7[1e0] -> 6[1d0] via P2P/IPC[0m
[34m[1,14]<stdout>:algo-2:228:228 [6] NCCL INFO Channel 02 : 6[1d0] -> 5[1c0] via P2P/IPC[0m
[34m[1,9]<stdout>:algo-2:224:224 [1] NCCL INFO Channel 02 : 1[180] -> 2[190] via P2P/IPC[0m
[34m[1,5]<stdout>:algo-1:220:220 [5] NCCL INFO Channel 02 : 5[1c0] -> 6[1d0] via P2P/IPC[0m
[34m[1,10]<stdout>:algo-2:226:226 [2] NCCL INFO Channel 02 : 2[190] -> 3[1a0] via P2P/IPC[0m
[34m[1,11]<stdout>:algo-2:225:225 [3] NCCL INFO Channel 02 : 3[1a0] -> 0[170] via P2P/IPC[0m
[34m[1,0]<stdout>:algo-1:670:670 [0] NCCL INFO Channel 03 : 0[170] -> 4[1b0] via P2P/IPC[0m
[34m[1,3]<stdout>:algo-1:219:219 [3] NCCL INFO Channel 03 : 3[1a0] -> 0[170] via P2P/IPC[0m
[34m[1,12]<stdout>:algo-2:227:227 [4] NCCL INFO Channel 02 : 4[1b0] -> 0[170] via P2P/IPC[0m
[34m[1,4]<stdout>:algo-1:218:218 [4] NCCL INFO Channel 03 : 4[1b0] -> 7[1e0] via P2P/IPC[0m
[34m[1,11]<stdout>:algo-2:225:225 [3] NCCL INFO Channel 02 : 3[1a0] -> 2[190] via P2P/IPC[0m
[34m[1,6]<stdout>:algo-1:215:215 [6] NCCL INFO Channel 03 : 6[1d0] -> 5[1c0] via P2P/IPC[0m
[34m[1,1]<stdout>:algo-1:217:217 [1] NCCL INFO Channel 03 : 1[180] -> 2[190] via P2P/IPC[0m
[34m[1,2]<stdout>:algo-1:216:216 [2] NCCL INFO Channel 03 : 2[190] -> 3[1a0] via P2P/IPC[0m
[34m[1,7]<stdout>:algo-1:221:221 [7] NCCL INFO Channel 03 : 7[1e0] -> 6[1d0] via P2P/IPC[0m
[34m[1,13]<stdout>:algo-2:223:223 [5] NCCL INFO Channel 02 : 5[1c0] -> 6[1d0] via P2P/IPC[0m
[34m[1,14]<stdout>:algo-2:228:228 [6] NCCL INFO Channel 02 : 6[1d0] -> 7[1e0] via P2P/IPC[0m
[34m[1,15]<stdout>:algo-2:229:229 [7] NCCL INFO Channel 02 : 7[1e0] -> 4[1b0] via P2P/IPC[0m
[34m[1,5]<stdout>:algo-1:220:220 [5] NCCL INFO Channel 03 : 5[1c0] -> 1[180] via P2P/IPC[0m
[34m[1,9]<stdout>:algo-2:224:224 [1] NCCL INFO Channel 02 : 1[180] -> 5[1c0] via P2P/IPC[0m
[34m[1,10]<stdout>:algo-2:226:226 [2] NCCL INFO Channel 02 : 2[190] -> 1[180] via P2P/IPC[0m
[34m[1,3]<stdout>:algo-1:219:219 [3] NCCL INFO Channel 03 : 3[1a0] -> 2[190] via P2P/IPC[0m
[34m[1,8]<stdout>:algo-2:678:678 [0] NCCL INFO Channel 03 : 0[170] -> 4[1b0] via P2P/IPC[0m
[34m[1,11]<stdout>:algo-2:225:225 [3] NCCL INFO Channel 03 : 3[1a0] -> 0[170] via P2P/IPC[0m
[34m[1,4]<stdout>:algo-1:218:218 [4] NCCL INFO Channel 03 : 4[1b0] -> 0[170] via P2P/IPC[0m
[34m[1,6]<stdout>:algo-1:215:215 [6] NCCL INFO Channel 03 : 6[1d0] -> 7[1e0] via P2P/IPC[0m
[34m[1,1]<stdout>:algo-1:217:217 [1] NCCL INFO Channel 03 : 1[180] -> 5[1c0] via P2P/IPC[0m
[34m[1,12]<stdout>:algo-2:227:227 [4] NCCL INFO Channel 03 : 4[1b0] -> 7[1e0] via P2P/IPC[0m
[34m[1,2]<stdout>:algo-1:216:216 [2] NCCL INFO Channel 03 : 2[190] -> 1[180] via P2P/IPC[0m
[34m[1,7]<stdout>:algo-1:221:221 [7] NCCL INFO Channel 03 : 7[1e0] -> 4[1b0] via P2P/IPC[0m
[34m[1,5]<stdout>:algo-1:220:220 [5] NCCL INFO Channel 03 : 5[1c0] -> 6[1d0] via P2P/IPC[0m
[34m[1,13]<stdout>:algo-2:223:223 [5] NCCL INFO Channel 03 : 5[1c0] -> 1[180] via P2P/IPC[0m
[34m[1,14]<stdout>:algo-2:228:228 [6] NCCL INFO Channel 03 : 6[1d0] -> 5[1c0] via P2P/IPC[0m
[34m[1,15]<stdout>:algo-2:229:229 [7] NCCL INFO Channel 03 : 7[1e0] -> 6[1d0] via P2P/IPC[0m
[34m[1,9]<stdout>:algo-2:224:224 [1] NCCL INFO Channel 03 : 1[180] -> 2[190] via P2P/IPC[0m
[34m[1,10]<stdout>:algo-2:226:226 [2] NCCL INFO Channel 03 : 2[190] -> 3[1a0] via P2P/IPC[0m
[34m[1,3]<stdout>:algo-1:219:219 [3] NCCL INFO Channel 04 : 3[1a0] -> 7[1e0] via P2P/IPC[0m
[34m[1,0]<stdout>:algo-1:670:670 [0] NCCL INFO Channel 04 : 0[170] -> 1[180] via P2P/IPC[0m
[34m[1,11]<stdout>:algo-2:225:225 [3] NCCL INFO Channel 03 : 3[1a0] -> 2[190] via P2P/IPC[0m
[34m[1,4]<stdout>:algo-1:218:218 [4] NCCL INFO Channel 04 : 4[1b0] -> 6[1d0] via P2P/IPC[0m
[34m[1,6]<stdout>:algo-1:215:215 [6] NCCL INFO Channel 04 : 6[1d0] -> 2[190] via P2P/IPC[0m
[34m[1,12]<stdout>:algo-2:227:227 [4] NCCL INFO Channel 03 : 4[1b0] -> 0[170] via P2P/IPC[0m
[34m[1,1]<stdout>:algo-1:217:217 [1] NCCL INFO Channel 04 : 1[180] -> 3[1a0] via P2P/IPC[0m
[34m[1,2]<stdout>:algo-1:216:216 [2] NCCL INFO Channel 04 : 2[190] -> 0[170] via P2P/IPC[0m
[34m[1,7]<stdout>:algo-1:221:221 [7] NCCL INFO Channel 04 : 7[1e0] -> 5[1c0] via P2P/IPC[0m
[34m[1,13]<stdout>:algo-2:223:223 [5] NCCL INFO Channel 03 : 5[1c0] -> 6[1d0] via P2P/IPC[0m
[34m[1,14]<stdout>:algo-2:228:228 [6] NCCL INFO Channel 03 : 6[1d0] -> 7[1e0] via P2P/IPC[0m
[34m[1,5]<stdout>:algo-1:220:220 [5] NCCL INFO Channel 04 : 5[1c0] -> 4[1b0] via P2P/IPC[0m
[34m[1,15]<stdout>:algo-2:229:229 [7] NCCL INFO Channel 03 : 7[1e0] -> 4[1b0] via P2P/IPC[0m
[34m[1,9]<stdout>:algo-2:224:224 [1] NCCL INFO Channel 03 : 1[180] -> 5[1c0] via P2P/IPC[0m
[34m[1,10]<stdout>:algo-2:226:226 [2] NCCL INFO Channel 03 : 2[190] -> 1[180] via P2P/IPC[0m
[34m[1,11]<stdout>:algo-2:225:225 [3] NCCL INFO Channel 04 : 3[1a0] -> 7[1e0] via P2P/IPC[0m
[34m[1,8]<stdout>:algo-2:678:678 [0] NCCL INFO Channel 04 : 0[170] -> 1[180] via P2P/IPC[0m
[34m[1,3]<stdout>:algo-1:219:219 [3] NCCL INFO Channel 04 : 3[1a0] -> 1[180] via P2P/IPC[0m
[34m[1,2]<stdout>:algo-1:216:216 [2] NCCL INFO Channel 04 : 2[190] -> 6[1d0] via P2P/IPC[0m
[34m[1,12]<stdout>:algo-2:227:227 [4] NCCL INFO Channel 04 : 4[1b0] -> 6[1d0] via P2P/IPC[0m
[34m[1,4]<stdout>:algo-1:218:218 [4] NCCL INFO Channel 04 : 4[1b0] -> 5[1c0] via P2P/IPC[0m
[34m[1,13]<stdout>:algo-2:223:223 [5] NCCL INFO Channel 04 : 5[1c0] -> 4[1b0] via P2P/IPC[0m
[34m[1,14]<stdout>:algo-2:228:228 [6] NCCL INFO Channel 04 : 6[1d0] -> 2[190] via P2P/IPC[0m
[34m[1,6]<stdout>:algo-1:215:215 [6] NCCL INFO Channel 04 : 6[1d0] -> 4[1b0] via P2P/IPC[0m
[34m[1,1]<stdout>:algo-1:217:217 [1] NCCL INFO Channel 04 : 1[180] -> 0[170] via P2P/IPC[0m
[34m[1,7]<stdout>:algo-1:221:221 [7] NCCL INFO Channel 04 : 7[1e0] -> 3[1a0] via P2P/IPC[0m
[34m[1,15]<stdout>:algo-2:229:229 [7] NCCL INFO Channel 04 : 7[1e0] -> 5[1c0] via P2P/IPC[0m
[34m[1,9]<stdout>:algo-2:224:224 [1] NCCL INFO Channel 04 : 1[180] -> 3[1a0] via P2P/IPC[0m
[34m[1,5]<stdout>:algo-1:220:220 [5] NCCL INFO Channel 04 : 5[1c0] -> 7[1e0] via P2P/IPC[0m
[34m[1,10]<stdout>:algo-2:226:226 [2] NCCL INFO Channel 04 : 2[190] -> 0[170] via P2P/IPC[0m
[34m[1,2]<stdout>:algo-1:216:216 [2] NCCL INFO Channel 05 : 2[190] -> 6[1d0] via P2P/IPC[0m
[34m[1,0]<stdout>:algo-1:670:670 [0] NCCL INFO Channel 05 : 0[170] -> 2[190] via P2P/IPC[0m
[34m[1,4]<stdout>:algo-1:218:218 [4] NCCL INFO Channel 05 : 4[1b0] -> 5[1c0] via P2P/IPC[0m
[34m[1,3]<stdout>:algo-1:219:219 [3] NCCL INFO Channel 05 : 3[1a0] -> 1[180] via P2P/IPC[0m
[34m[1,6]<stdout>:algo-1:215:215 [6] NCCL INFO Channel 05 : 6[1d0] -> 4[1b0] via P2P/IPC[0m
[34m[1,11]<stdout>:algo-2:225:225 [3] NCCL INFO Channel 04 : 3[1a0] -> 1[180] via P2P/IPC[0m
[34m[1,10]<stdout>:algo-2:226:226 [2] NCCL INFO Channel 04 : 2[190] -> 6[1d0] via P2P/IPC[0m
[34m[1,12]<stdout>:algo-2:227:227 [4] NCCL INFO Channel 04 : 4[1b0] -> 5[1c0] via P2P/IPC[0m
[34m[1,1]<stdout>:algo-1:217:217 [1] NCCL INFO Channel 05 : 1[180] -> 0[170] via P2P/IPC[0m
[34m[1,7]<stdout>:algo-1:221:221 [7] NCCL INFO Channel 05 : 7[1e0] -> 3[1a0] via P2P/IPC[0m
[34m[1,13]<stdout>:algo-2:223:223 [5] NCCL INFO Channel 04 : 5[1c0] -> 7[1e0] via P2P/IPC[0m
[34m[1,14]<stdout>:algo-2:228:228 [6] NCCL INFO Channel 04 : 6[1d0] -> 4[1b0] via P2P/IPC[0m
[34m[1,5]<stdout>:algo-1:220:220 [5] NCCL INFO Channel 05 : 5[1c0] -> 7[1e0] via P2P/IPC[0m
[34m[1,9]<stdout>:algo-2:224:224 [1] NCCL INFO Channel 04 : 1[180] -> 0[170] via P2P/IPC[0m
[34m[1,15]<stdout>:algo-2:229:229 [7] NCCL INFO Channel 04 : 7[1e0] -> 3[1a0] via P2P/IPC[0m
[34m[1,2]<stdout>:algo-1:216:216 [2] NCCL INFO Channel 05 : 2[190] -> 0[170] via P2P/IPC[0m
[34m[1,1]<stdout>:algo-1:217:217 [1] NCCL INFO Channel 05 : 1[180] -> 3[1a0] via P2P/IPC[0m
[34m[1,10]<stdout>:algo-2:226:226 [2] NCCL INFO Channel 05 : 2[190] -> 6[1d0] via P2P/IPC[0m
[34m[1,3]<stdout>:algo-1:219:219 [3] NCCL INFO Channel 05 : 3[1a0] -> 7[1e0] via P2P/IPC[0m
[34m[1,4]<stdout>:algo-1:218:218 [4] NCCL INFO Channel 05 : 4[1b0] -> 6[1d0] via P2P/IPC[0m
[34m[1,12]<stdout>:algo-2:227:227 [4] NCCL INFO Channel 05 : 4[1b0] -> 5[1c0] via P2P/IPC[0m
[34m[1,6]<stdout>:algo-1:215:215 [6] NCCL INFO Channel 05 : 6[1d0] -> 2[190] via P2P/IPC[0m
[34m[1,11]<stdout>:algo-2:225:225 [3] NCCL INFO Channel 05 : 3[1a0] -> 1[180] via P2P/IPC[0m
[34m[1,8]<stdout>:algo-2:678:678 [0] NCCL INFO Channel 05 : 0[170] -> 2[190] via P2P/IPC[0m
[34m[1,7]<stdout>:algo-1:221:221 [7] NCCL INFO Channel 05 : 7[1e0] -> 5[1c0] via P2P/IPC[0m
[34m[1,13]<stdout>:algo-2:223:223 [5] NCCL INFO Channel 05 : 5[1c0] -> 7[1e0] via P2P/IPC[0m
[34m[1,14]<stdout>:algo-2:228:228 [6] NCCL INFO Channel 05 : 6[1d0] -> 4[1b0] via P2P/IPC[0m
[34m[1,5]<stdout>:algo-1:220:220 [5] NCCL INFO Channel 05 : 5[1c0] -> 4[1b0] via P2P/IPC[0m
[34m[1,15]<stdout>:algo-2:229:229 [7] NCCL INFO Channel 05 : 7[1e0] -> 3[1a0] via P2P/IPC[0m
[34m[1,9]<stdout>:algo-2:224:224 [1] NCCL INFO Channel 05 : 1[180] -> 0[170] via P2P/IPC[0m
[34m[1,0]<stdout>:algo-1:670:670 [0] NCCL INFO Channel 06 : 0[170] -> 3[1a0] via P2P/IPC[0m
[34m[1,1]<stdout>:algo-1:217:217 [1] NCCL INFO Channel 06 : 1[180] -> 5[1c0] via P2P/IPC[0m
[34m[1,2]<stdout>:algo-1:216:216 [2] NCCL INFO Channel 06 : 2[190] -> 1[180] via P2P/IPC[0m
[34m[1,3]<stdout>:algo-1:219:219 [3] NCCL INFO Channel 06 : 3[1a0] -> 2[190] via P2P/IPC[0m
[34m[1,10]<stdout>:algo-2:226:226 [2] NCCL INFO Channel 05 : 2[190] -> 0[170] via P2P/IPC[0m
[34m[1,4]<stdout>:algo-1:218:218 [4] NCCL INFO Channel 06 : 4[1b0] -> 0[170] via P2P/IPC[0m
[34m[1,6]<stdout>:algo-1:215:215 [6] NCCL INFO Channel 06 : 6[1d0] -> 7[1e0] via P2P/IPC[0m
[34m[1,9]<stdout>:algo-2:224:224 [1] NCCL INFO Channel 05 : 1[180] -> 3[1a0] via P2P/IPC[0m
[34m[1,12]<stdout>:algo-2:227:227 [4] NCCL INFO Channel 05 : 4[1b0] -> 6[1d0] via P2P/IPC[0m
[34m[1,11]<stdout>:algo-2:225:225 [3] NCCL INFO Channel 05 : 3[1a0] -> 7[1e0] via P2P/IPC[0m
[34m[1,7]<stdout>:algo-1:221:221 [7] NCCL INFO Channel 06 : 7[1e0] -> 4[1b0] via P2P/IPC[0m
[34m[1,5]<stdout>:algo-1:220:220 [5] NCCL INFO Channel 06 : 5[1c0] -> 6[1d0] via P2P/IPC[0m
[34m[1,13]<stdout>:algo-2:223:223 [5] NCCL INFO Channel 05 : 5[1c0] -> 4[1b0] via P2P/IPC[0m
[34m[1,14]<stdout>:algo-2:228:228 [6] NCCL INFO Channel 05 : 6[1d0] -> 2[190] via P2P/IPC[0m
[34m[1,15]<stdout>:algo-2:229:229 [7] NCCL INFO Channel 05 : 7[1e0] -> 5[1c0] via P2P/IPC[0m
[34m[1,4]<stdout>:algo-1:218:218 [4] NCCL INFO Channel 06 : 4[1b0] -> 7[1e0] via P2P/IPC[0m
[34m[1,2]<stdout>:algo-1:216:216 [2] NCCL INFO Channel 06 : 2[190] -> 3[1a0] via P2P/IPC[0m
[34m[1,8]<stdout>:algo-2:678:678 [0] NCCL INFO Channel 06 : 0[170] -> 3[1a0] via P2P/IPC[0m
[34m[1,3]<stdout>:algo-1:219:219 [3] NCCL INFO Channel 06 : 3[1a0] -> 0[170] via P2P/IPC[0m
[34m[1,1]<stdout>:algo-1:217:217 [1] NCCL INFO Channel 06 : 1[180] -> 2[190] via P2P/IPC[0m
[34m[1,9]<stdout>:algo-2:224:224 [1] NCCL INFO Channel 06 : 1[180] -> 5[1c0] via P2P/IPC[0m
[34m[1,6]<stdout>:algo-1:215:215 [6] NCCL INFO Channel 06 : 6[1d0] -> 5[1c0] via P2P/IPC[0m
[34m[1,10]<stdout>:algo-2:226:226 [2] NCCL INFO Channel 06 : 2[190] -> 1[180] via P2P/IPC[0m
[34m[1,7]<stdout>:algo-1:221:221 [7] NCCL INFO Channel 06 : 7[1e0] -> 6[1d0] via P2P/IPC[0m
[34m[1,12]<stdout>:algo-2:227:227 [4] NCCL INFO Channel 06 : 4[1b0] -> 0[170] via P2P/IPC[0m
[34m[1,11]<stdout>:algo-2:225:225 [3] NCCL INFO Channel 06 : 3[1a0] -> 2[190] via P2P/IPC[0m
[34m[1,5]<stdout>:algo-1:220:220 [5] NCCL INFO Channel 06 : 5[1c0] -> 1[180] via P2P/IPC[0m
[34m[1,13]<stdout>:algo-2:223:223 [5] NCCL INFO Channel 06 : 5[1c0] -> 6[1d0] via P2P/IPC[0m
[34m[1,14]<stdout>:algo-2:228:228 [6] NCCL INFO Channel 06 : 6[1d0] -> 7[1e0] via P2P/IPC[0m
[34m[1,15]<stdout>:algo-2:229:229 [7] NCCL INFO Channel 06 : 7[1e0] -> 4[1b0] via P2P/IPC[0m
[34m[1,4]<stdout>:algo-1:218:218 [4] NCCL INFO Channel 07 : 4[1b0] -> 0[170] via P2P/IPC[0m
[34m[1,0]<stdout>:algo-1:670:670 [0] NCCL INFO Channel 07 : 0[170] -> 3[1a0] via P2P/IPC[0m
[34m[1,2]<stdout>:algo-1:216:216 [2] NCCL INFO Channel 07 : 2[190] -> 1[180] via P2P/IPC[0m
[34m[1,3]<stdout>:algo-1:219:219 [3] NCCL INFO Channel 07 : 3[1a0] -> 2[190] via P2P/IPC[0m
[34m[1,12]<stdout>:algo-2:227:227 [4] NCCL INFO Channel 06 : 4[1b0] -> 7[1e0] via P2P/IPC[0m
[34m[1,9]<stdout>:algo-2:224:224 [1] NCCL INFO Channel 06 : 1[180] -> 2[190] via P2P/IPC[0m
[34m[1,1]<stdout>:algo-1:217:217 [1] NCCL INFO Channel 07 : 1[180] -> 5[1c0] via P2P/IPC[0m
[34m[1,6]<stdout>:algo-1:215:215 [6] NCCL INFO Channel 07 : 6[1d0] -> 7[1e0] via P2P/IPC[0m
[34m[1,10]<stdout>:algo-2:226:226 [2] NCCL INFO Channel 06 : 2[190] -> 3[1a0] via P2P/IPC[0m
[34m[1,7]<stdout>:algo-1:221:221 [7] NCCL INFO Channel 07 : 7[1e0] -> 4[1b0] via P2P/IPC[0m
[34m[1,11]<stdout>:algo-2:225:225 [3] NCCL INFO Channel 06 : 3[1a0] -> 0[170] via P2P/IPC[0m
[34m[1,13]<stdout>:algo-2:223:223 [5] NCCL INFO Channel 06 : 5[1c0] -> 1[180] via P2P/IPC[0m
[34m[1,5]<stdout>:algo-1:220:220 [5] NCCL INFO Channel 07 : 5[1c0] -> 6[1d0] via P2P/IPC[0m
[34m[1,14]<stdout>:algo-2:228:228 [6] NCCL INFO Channel 06 : 6[1d0] -> 5[1c0] via P2P/IPC[0m
[34m[1,15]<stdout>:algo-2:229:229 [7] NCCL INFO Channel 06 : 7[1e0] -> 6[1d0] via P2P/IPC[0m
[34m[1,4]<stdout>:algo-1:218:218 [4] NCCL INFO Channel 07 : 4[1b0] -> 7[1e0] via P2P/IPC[0m
[34m[1,2]<stdout>:algo-1:216:216 [2] NCCL INFO Channel 07 : 2[190] -> 3[1a0] via P2P/IPC[0m
[34m[1,3]<stdout>:algo-1:219:219 [3] NCCL INFO Channel 07 : 3[1a0] -> 0[170] via P2P/IPC[0m
[34m[1,12]<stdout>:algo-2:227:227 [4] NCCL INFO Channel 07 : 4[1b0] -> 0[170] via P2P/IPC[0m
[34m[1,1]<stdout>:algo-1:217:217 [1] NCCL INFO Channel 07 : 1[180] -> 2[190] via P2P/IPC[0m
[34m[1,8]<stdout>:algo-2:678:678 [0] NCCL INFO Channel 07 : 0[170] -> 3[1a0] via P2P/IPC[0m
[34m[1,6]<stdout>:algo-1:215:215 [6] NCCL INFO Channel 07 : 6[1d0] -> 5[1c0] via P2P/IPC[0m
[34m[1,9]<stdout>:algo-2:224:224 [1] NCCL INFO Channel 07 : 1[180] -> 5[1c0] via P2P/IPC[0m
[34m[1,10]<stdout>:algo-2:226:226 [2] NCCL INFO Channel 07 : 2[190] -> 1[180] via P2P/IPC[0m
[34m[1,7]<stdout>:algo-1:221:221 [7] NCCL INFO Channel 07 : 7[1e0] -> 6[1d0] via P2P/IPC[0m
[34m[1,11]<stdout>:algo-2:225:225 [3] NCCL INFO Channel 07 : 3[1a0] -> 2[190] via P2P/IPC[0m
[34m[1,13]<stdout>:algo-2:223:223 [5] NCCL INFO Channel 07 : 5[1c0] -> 6[1d0] via P2P/IPC[0m
[34m[1,5]<stdout>:algo-1:220:220 [5] NCCL INFO Channel 07 : 5[1c0] -> 1[180] via P2P/IPC[0m
[34m[1,14]<stdout>:algo-2:228:228 [6] NCCL INFO Channel 07 : 6[1d0] -> 7[1e0] via P2P/IPC[0m
[34m[1,15]<stdout>:algo-2:229:229 [7] NCCL INFO Channel 07 : 7[1e0] -> 4[1b0] via P2P/IPC[0m
[34m[1,4]<stdout>:algo-1:218:218 [4] NCCL INFO Channel 08 : 4[1b0] -> 7[1e0] via P2P/IPC[0m
[34m[1,0]<stdout>:algo-1:670:670 [0] NCCL INFO Channel 08 : 0[170] -> 4[1b0] via P2P/IPC[0m
[34m[1,2]<stdout>:algo-1:216:216 [2] NCCL INFO Channel 08 : 2[190] -> 3[1a0] via P2P/IPC[0m
[34m[1,3]<stdout>:algo-1:219:219 [3] NCCL INFO Channel 08 : 3[1a0] -> 0[170] via P2P/IPC[0m
[34m[1,12]<stdout>:algo-2:227:227 [4] NCCL INFO Channel 07 : 4[1b0] -> 7[1e0] via P2P/IPC[0m
[34m[1,1]<stdout>:algo-1:217:217 [1] NCCL INFO Channel 08 : 1[180] -> 2[190] via P2P/IPC[0m
[34m[1,9]<stdout>:algo-2:224:224 [1] NCCL INFO Channel 07 : 1[180] -> 2[190] via P2P/IPC[0m
[34m[1,10]<stdout>:algo-2:226:226 [2] NCCL INFO Channel 07 : 2[190] -> 3[1a0] via P2P/IPC[0m
[34m[1,6]<stdout>:algo-1:215:215 [6] NCCL INFO Channel 08 : 6[1d0] -> 5[1c0] via P2P/IPC[0m
[34m[1,11]<stdout>:algo-2:225:225 [3] NCCL INFO Channel 07 : 3[1a0] -> 0[170] via P2P/IPC[0m
[34m[1,7]<stdout>:algo-1:221:221 [7] NCCL INFO Channel 08 : 7[1e0] -> 6[1d0] via P2P/IPC[0m
[34m[1,13]<stdout>:algo-2:223:223 [5] NCCL INFO Channel 07 : 5[1c0] -> 1[180] via P2P/IPC[0m
[34m[1,14]<stdout>:algo-2:228:228 [6] NCCL INFO Channel 07 : 6[1d0] -> 5[1c0] via P2P/IPC[0m
[34m[1,5]<stdout>:algo-1:220:220 [5] NCCL INFO Channel 08 : 5[1c0] -> 1[180] via P2P/IPC[0m
[34m[1,15]<stdout>:algo-2:229:229 [7] NCCL INFO Channel 07 : 7[1e0] -> 6[1d0] via P2P/IPC[0m
[34m[1,3]<stdout>:algo-1:219:219 [3] NCCL INFO Channel 08 : 3[1a0] -> 2[190] via P2P/IPC[0m
[34m[1,4]<stdout>:algo-1:218:218 [4] NCCL INFO Channel 08 : 4[1b0] -> 0[170] via P2P/IPC[0m
[34m[1,2]<stdout>:algo-1:216:216 [2] NCCL INFO Channel 08 : 2[190] -> 1[180] via P2P/IPC[0m
[34m[1,12]<stdout>:algo-2:227:227 [4] NCCL INFO Channel 08 : 4[1b0] -> 7[1e0] via P2P/IPC[0m
[34m[1,8]<stdout>:algo-2:678:678 [0] NCCL INFO Channel 08 : 0[170] -> 4[1b0] via P2P/IPC[0m
[34m[1,9]<stdout>:algo-2:224:224 [1] NCCL INFO Channel 08 : 1[180] -> 2[190] via P2P/IPC[0m
[34m[1,1]<stdout>:algo-1:217:217 [1] NCCL INFO Channel 08 : 1[180] -> 5[1c0] via P2P/IPC[0m
[34m[1,10]<stdout>:algo-2:226:226 [2] NCCL INFO Channel 08 : 2[190] -> 3[1a0] via P2P/IPC[0m
[34m[1,6]<stdout>:algo-1:215:215 [6] NCCL INFO Channel 08 : 6[1d0] -> 7[1e0] via P2P/IPC[0m
[34m[1,7]<stdout>:algo-1:221:221 [7] NCCL INFO Channel 08 : 7[1e0] -> 4[1b0] via P2P/IPC[0m
[34m[1,11]<stdout>:algo-2:225:225 [3] NCCL INFO Channel 08 : 3[1a0] -> 0[170] via P2P/IPC[0m
[34m[1,13]<stdout>:algo-2:223:223 [5] NCCL INFO Channel 08 : 5[1c0] -> 1[180] via P2P/IPC[0m
[34m[1,5]<stdout>:algo-1:220:220 [5] NCCL INFO Channel 08 : 5[1c0] -> 6[1d0] via P2P/IPC[0m
[34m[1,14]<stdout>:algo-2:228:228 [6] NCCL INFO Channel 08 : 6[1d0] -> 5[1c0] via P2P/IPC[0m
[34m[1,15]<stdout>:algo-2:229:229 [7] NCCL INFO Channel 08 : 7[1e0] -> 6[1d0] via P2P/IPC[0m
[34m[1,3]<stdout>:algo-1:219:219 [3] NCCL INFO Channel 09 : 3[1a0] -> 0[170] via P2P/IPC[0m
[34m[1,0]<stdout>:algo-1:670:670 [0] NCCL INFO Channel 09 : 0[170] -> 4[1b0] via P2P/IPC[0m
[34m[1,2]<stdout>:algo-1:216:216 [2] NCCL INFO Channel 09 : 2[190] -> 3[1a0] via P2P/IPC[0m
[34m[1,11]<stdout>:algo-2:225:225 [3] NCCL INFO Channel 08 : 3[1a0] -> 2[190] via P2P/IPC[0m
[34m[1,4]<stdout>:algo-1:218:218 [4] NCCL INFO Channel 09 : 4[1b0] -> 7[1e0] via P2P/IPC[0m
[34m[1,1]<stdout>:algo-1:217:217 [1] NCCL INFO Channel 09 : 1[180] -> 2[190] via P2P/IPC[0m
[34m[1,9]<stdout>:algo-2:224:224 [1] NCCL INFO Channel 08 : 1[180] -> 5[1c0] via P2P/IPC[0m
[34m[1,6]<stdout>:algo-1:215:215 [6] NCCL INFO Channel 09 : 6[1d0] -> 5[1c0] via P2P/IPC[0m
[34m[1,10]<stdout>:algo-2:226:226 [2] NCCL INFO Channel 08 : 2[190] -> 1[180] via P2P/IPC[0m
[34m[1,12]<stdout>:algo-2:227:227 [4] NCCL INFO Channel 08 : 4[1b0] -> 0[170] via P2P/IPC[0m
[34m[1,7]<stdout>:algo-1:221:221 [7] NCCL INFO Channel 09 : 7[1e0] -> 6[1d0] via P2P/IPC[0m
[34m[1,13]<stdout>:algo-2:223:223 [5] NCCL INFO Channel 08 : 5[1c0] -> 6[1d0] via P2P/IPC[0m
[34m[1,5]<stdout>:algo-1:220:220 [5] NCCL INFO Channel 09 : 5[1c0] -> 1[180] via P2P/IPC[0m
[34m[1,14]<stdout>:algo-2:228:228 [6] NCCL INFO Channel 08 : 6[1d0] -> 7[1e0] via P2P/IPC[0m
[34m[1,3]<stdout>:algo-1:219:219 [3] NCCL INFO Channel 09 : 3[1a0] -> 2[190] via P2P/IPC[0m
[34m[1,15]<stdout>:algo-2:229:229 [7] NCCL INFO Channel 08 : 7[1e0] -> 4[1b0] via P2P/IPC[0m
[34m[1,11]<stdout>:algo-2:225:225 [3] NCCL INFO Channel 09 : 3[1a0] -> 0[170] via P2P/IPC[0m
[34m[1,2]<stdout>:algo-1:216:216 [2] NCCL INFO Channel 09 : 2[190] -> 1[180] via P2P/IPC[0m
[34m[1,4]<stdout>:algo-1:218:218 [4] NCCL INFO Channel 09 : 4[1b0] -> 0[170] via P2P/IPC[0m
[34m[1,8]<stdout>:algo-2:678:678 [0] NCCL INFO Channel 09 : 0[170] -> 4[1b0] via P2P/IPC[0m
[34m[1,6]<stdout>:algo-1:215:215 [6] NCCL INFO Channel 09 : 6[1d0] -> 7[1e0] via P2P/IPC[0m
[34m[1,1]<stdout>:algo-1:217:217 [1] NCCL INFO Channel 09 : 1[180] -> 5[1c0] via P2P/IPC[0m
[34m[1,9]<stdout>:algo-2:224:224 [1] NCCL INFO Channel 09 : 1[180] -> 2[190] via P2P/IPC[0m
[34m[1,10]<stdout>:algo-2:226:226 [2] NCCL INFO Channel 09 : 2[190] -> 3[1a0] via P2P/IPC[0m
[34m[1,7]<stdout>:algo-1:221:221 [7] NCCL INFO Channel 09 : 7[1e0] -> 4[1b0] via P2P/IPC[0m
[34m[1,5]<stdout>:algo-1:220:220 [5] NCCL INFO Channel 09 : 5[1c0] -> 6[1d0] via P2P/IPC[0m
[34m[1,12]<stdout>:algo-2:227:227 [4] NCCL INFO Channel 09 : 4[1b0] -> 7[1e0] via P2P/IPC[0m
[34m[1,13]<stdout>:algo-2:223:223 [5] NCCL INFO Channel 09 : 5[1c0] -> 1[180] via P2P/IPC[0m
[34m[1,3]<stdout>:algo-1:219:219 [3] NCCL INFO Channel 10 : 3[1a0] -> 7[1e0] via P2P/IPC[0m
[34m[1,14]<stdout>:algo-2:228:228 [6] NCCL INFO Channel 09 : 6[1d0] -> 5[1c0] via P2P/IPC[0m
[34m[1,15]<stdout>:algo-2:229:229 [7] NCCL INFO Channel 09 : 7[1e0] -> 6[1d0] via P2P/IPC[0m
[34m[1,11]<stdout>:algo-2:225:225 [3] NCCL INFO Channel 09 : 3[1a0] -> 2[190] via P2P/IPC[0m
[34m[1,0]<stdout>:algo-1:670:670 [0] NCCL INFO Channel 10 : 0[170] -> 1[180] via P2P/IPC[0m
[34m[1,2]<stdout>:algo-1:216:216 [2] NCCL INFO Channel 10 : 2[190] -> 0[170] via P2P/IPC[0m
[34m[1,4]<stdout>:algo-1:218:218 [4] NCCL INFO Channel 10 : 4[1b0] -> 6[1d0] via P2P/IPC[0m
[34m[1,1]<stdout>:algo-1:217:217 [1] NCCL INFO Channel 10 : 1[180] -> 3[1a0] via P2P/IPC[0m
[34m[1,6]<stdout>:algo-1:215:215 [6] NCCL INFO Channel 10 : 6[1d0] -> 2[190] via P2P/IPC[0m
[34m[1,10]<stdout>:algo-2:226:226 [2] NCCL INFO Channel 09 : 2[190] -> 1[180] via P2P/IPC[0m
[34m[1,9]<stdout>:algo-2:224:224 [1] NCCL INFO Channel 09 : 1[180] -> 5[1c0] via P2P/IPC[0m
[34m[1,7]<stdout>:algo-1:221:221 [7] NCCL INFO Channel 10 : 7[1e0] -> 5[1c0] via P2P/IPC[0m
[34m[1,5]<stdout>:algo-1:220:220 [5] NCCL INFO Channel 10 : 5[1c0] -> 4[1b0] via P2P/IPC[0m
[34m[1,12]<stdout>:algo-2:227:227 [4] NCCL INFO Channel 09 : 4[1b0] -> 0[170] via P2P/IPC[0m
[34m[1,13]<stdout>:algo-2:223:223 [5] NCCL INFO Channel 09 : 5[1c0] -> 6[1d0] via P2P/IPC[0m
[34m[1,14]<stdout>:algo-2:228:228 [6] NCCL INFO Channel 09 : 6[1d0] -> 7[1e0] via P2P/IPC[0m
[34m[1,15]<stdout>:algo-2:229:229 [7] NCCL INFO Channel 09 : 7[1e0] -> 4[1b0] via P2P/IPC[0m
[34m[1,11]<stdout>:algo-2:225:225 [3] NCCL INFO Channel 10 : 3[1a0] -> 7[1e0] via P2P/IPC[0m
[34m[1,2]<stdout>:algo-1:216:216 [2] NCCL INFO Channel 10 : 2[190] -> 6[1d0] via P2P/IPC[0m
[34m[1,3]<stdout>:algo-1:219:219 [3] NCCL INFO Channel 10 : 3[1a0] -> 1[180] via P2P/IPC[0m
[34m[1,4]<stdout>:algo-1:218:218 [4] NCCL INFO Channel 10 : 4[1b0] -> 5[1c0] via P2P/IPC[0m
[34m[1,1]<stdout>:algo-1:217:217 [1] NCCL INFO Channel 10 : 1[180] -> 0[170] via P2P/IPC[0m
[34m[1,6]<stdout>:algo-1:215:215 [6] NCCL INFO Channel 10 : 6[1d0] -> 4[1b0] via P2P/IPC[0m
[34m[1,10]<stdout>:algo-2:226:226 [2] NCCL INFO Channel 10 : 2[190] -> 0[170] via P2P/IPC[0m
[34m[1,8]<stdout>:algo-2:678:678 [0] NCCL INFO Channel 10 : 0[170] -> 1[180] via P2P/IPC[0m
[34m[1,9]<stdout>:algo-2:224:224 [1] NCCL INFO Channel 10 : 1[180] -> 3[1a0] via P2P/IPC[0m
[34m[1,7]<stdout>:algo-1:221:221 [7] NCCL INFO Channel 10 : 7[1e0] -> 3[1a0] via P2P/IPC[0m
[34m[1,5]<stdout>:algo-1:220:220 [5] NCCL INFO Channel 10 : 5[1c0] -> 7[1e0] via P2P/IPC[0m
[34m[1,13]<stdout>:algo-2:223:223 [5] NCCL INFO Channel 10 : 5[1c0] -> 4[1b0] via P2P/IPC[0m
[34m[1,12]<stdout>:algo-2:227:227 [4] NCCL INFO Channel 10 : 4[1b0] -> 6[1d0] via P2P/IPC[0m
[34m[1,14]<stdout>:algo-2:228:228 [6] NCCL INFO Channel 10 : 6[1d0] -> 2[190] via P2P/IPC[0m
[34m[1,15]<stdout>:algo-2:229:229 [7] NCCL INFO Channel 10 : 7[1e0] -> 5[1c0] via P2P/IPC[0m
[34m[1,2]<stdout>:algo-1:216:216 [2] NCCL INFO Channel 11 : 2[190] -> 6[1d0] via P2P/IPC[0m
[34m[1,0]<stdout>:algo-1:670:670 [0] NCCL INFO Channel 11 : 0[170] -> 2[190] via P2P/IPC[0m
[34m[1,10]<stdout>:algo-2:226:226 [2] NCCL INFO Channel 10 : 2[190] -> 6[1d0] via P2P/IPC[0m
[34m[1,3]<stdout>:algo-1:219:219 [3] NCCL INFO Channel 11 : 3[1a0] -> 1[180] via P2P/IPC[0m
[34m[1,4]<stdout>:algo-1:218:218 [4] NCCL INFO Channel 11 : 4[1b0] -> 5[1c0] via P2P/IPC[0m
[34m[1,1]<stdout>:algo-1:217:217 [1] NCCL INFO Channel 11 : 1[180] -> 0[170] via P2P/IPC[0m
[34m[1,9]<stdout>:algo-2:224:224 [1] NCCL INFO Channel 10 : 1[180] -> 0[170] via P2P/IPC[0m
[34m[1,11]<stdout>:algo-2:225:225 [3] NCCL INFO Channel 10 : 3[1a0] -> 1[180] via P2P/IPC[0m
[34m[1,6]<stdout>:algo-1:215:215 [6] NCCL INFO Channel 11 : 6[1d0] -> 4[1b0] via P2P/IPC[0m
[34m[1,7]<stdout>:algo-1:221:221 [7] NCCL INFO Channel 11 : 7[1e0] -> 3[1a0] via P2P/IPC[0m
[34m[1,12]<stdout>:algo-2:227:227 [4] NCCL INFO Channel 10 : 4[1b0] -> 5[1c0] via P2P/IPC[0m
[34m[1,13]<stdout>:algo-2:223:223 [5] NCCL INFO Channel 10 : 5[1c0] -> 7[1e0] via P2P/IPC[0m
[34m[1,5]<stdout>:algo-1:220:220 [5] NCCL INFO Channel 11 : 5[1c0] -> 7[1e0] via P2P/IPC[0m
[34m[1,14]<stdout>:algo-2:228:228 [6] NCCL INFO Channel 10 : 6[1d0] -> 4[1b0] via P2P/IPC[0m
[34m[1,15]<stdout>:algo-2:229:229 [7] NCCL INFO Channel 10 : 7[1e0] -> 3[1a0] via P2P/IPC[0m
[34m[1,1]<stdout>:algo-1:217:217 [1] NCCL INFO Channel 11 : 1[180] -> 3[1a0] via P2P/IPC[0m
[34m[1,2]<stdout>:algo-1:216:216 [2] NCCL INFO Channel 11 : 2[190] -> 0[170] via P2P/IPC[0m
[34m[1,10]<stdout>:algo-2:226:226 [2] NCCL INFO Channel 11 : 2[190] -> 6[1d0] via P2P/IPC[0m
[34m[1,8]<stdout>:algo-2:678:678 [0] NCCL INFO Channel 11 : 0[170] -> 2[190] via P2P/IPC[0m
[34m[1,9]<stdout>:algo-2:224:224 [1] NCCL INFO Channel 11 : 1[180] -> 0[170] via P2P/IPC[0m
[34m[1,3]<stdout>:algo-1:219:219 [3] NCCL INFO Channel 11 : 3[1a0] -> 7[1e0] via P2P/IPC[0m
[34m[1,4]<stdout>:algo-1:218:218 [4] NCCL INFO Channel 11 : 4[1b0] -> 6[1d0] via P2P/IPC[0m
[34m[1,6]<stdout>:algo-1:215:215 [6] NCCL INFO Channel 11 : 6[1d0] -> 2[190] via P2P/IPC[0m
[34m[1,7]<stdout>:algo-1:221:221 [7] NCCL INFO Channel 11 : 7[1e0] -> 5[1c0] via P2P/IPC[0m
[34m[1,11]<stdout>:algo-2:225:225 [3] NCCL INFO Channel 11 : 3[1a0] -> 1[180] via P2P/IPC[0m
[34m[1,13]<stdout>:algo-2:223:223 [5] NCCL INFO Channel 11 : 5[1c0] -> 7[1e0] via P2P/IPC[0m
[34m[1,12]<stdout>:algo-2:227:227 [4] NCCL INFO Channel 11 : 4[1b0] -> 5[1c0] via P2P/IPC[0m
[34m[1,1]<stdout>:algo-1:217:217 [1] NCCL INFO 12 coll channels, 16 p2p channels, 2 p2p channels per peer[0m
[34m[1,1]<stdout>:algo-1:217:217 [1] NCCL INFO comm 0x560b30e78d40 rank 1 nranks 8 cudaDev 1 busId 180 - Init COMPLETE[0m
[34m[1,5]<stdout>:algo-1:220:220 [5] NCCL INFO Channel 11 : 5[1c0] -> 4[1b0] via P2P/IPC[0m
[34m[1,14]<stdout>:algo-2:228:228 [6] NCCL INFO Channel 11 : 6[1d0] -> 4[1b0] via P2P/IPC[0m
[34m[1,0]<stdout>:algo-1:670:670 [0] NCCL INFO 12 coll channels, 16 p2p channels, 2 p2p channels per peer[0m
[34m[1,0]<stdout>:algo-1:670:670 [0] NCCL INFO comm 0x5613ecd1f110 rank 0 nranks 8 cudaDev 0 busId 170 - Init COMPLETE[0m
[34m[1,15]<stdout>:algo-2:229:229 [7] NCCL INFO Channel 11 : 7[1e0] -> 3[1a0] via P2P/IPC[0m
[34m[1,2]<stdout>:algo-1:216:216 [2] NCCL INFO 12 coll channels, 16 p2p channels, 2 p2p channels per peer[0m
[34m[1,2]<stdout>:algo-1:216:216 [2] NCCL INFO comm 0x561a97f8a190 rank 2 nranks 8 cudaDev 2 busId 190 - Init COMPLETE[0m
[34m[1,3]<stdout>:algo-1:219:219 [3] NCCL INFO 12 coll channels, 16 p2p channels, 2 p2p channels per peer[0m
[34m[1,4]<stdout>:algo-1:218:218 [4] NCCL INFO 12 coll channels, 16 p2p channels, 2 p2p channels per peer[0m
[34m[1,3]<stdout>:algo-1:219:219 [3] NCCL INFO comm 0x562332e4f3f0 rank 3 nranks 8 cudaDev 3 busId 1a0 - Init COMPLETE[0m
[34m[1,6]<stdout>:algo-1:215:215 [6] NCCL INFO 12 coll channels, 16 p2p channels, 2 p2p channels per peer[0m
[34m[1,4]<stdout>:algo-1:218:218 [4] NCCL INFO comm 0x559663492c10 rank 4 nranks 8 cudaDev 4 busId 1b0 - Init COMPLETE[0m
[34m[1,6]<stdout>:algo-1:215:215 [6] NCCL INFO comm 0x55aa9b038970 rank 6 nranks 8 cudaDev 6 busId 1d0 - Init COMPLETE[0m
[34m[1,7]<stdout>:algo-1:221:221 [7] NCCL INFO 12 coll channels, 16 p2p channels, 2 p2p channels per peer[0m
[34m[1,7]<stdout>:algo-1:221:221 [7] NCCL INFO comm 0x556c52fb34f0 rank 7 nranks 8 cudaDev 7 busId 1e0 - Init COMPLETE[0m
[34m[1,9]<stdout>:algo-2:224:224 [1] NCCL INFO Channel 11 : 1[180] -> 3[1a0] via P2P/IPC[0m
[34m[1,5]<stdout>:algo-1:220:220 [5] NCCL INFO 12 coll channels, 16 p2p channels, 2 p2p channels per peer[0m
[34m[1,5]<stdout>:algo-1:220:220 [5] NCCL INFO comm 0x5573d4fa2780 rank 5 nranks 8 cudaDev 5 busId 1c0 - Init COMPLETE[0m
[34m[1,10]<stdout>:algo-2:226:226 [2] NCCL INFO Channel 11 : 2[190] -> 0[170] via P2P/IPC[0m
[34m[1,11]<stdout>:algo-2:225:225 [3] NCCL INFO Channel 11 : 3[1a0] -> 7[1e0] via P2P/IPC[0m
[34m[1,12]<stdout>:algo-2:227:227 [4] NCCL INFO Channel 11 : 4[1b0] -> 6[1d0] via P2P/IPC[0m
[34m[1,13]<stdout>:algo-2:223:223 [5] NCCL INFO Channel 11 : 5[1c0] -> 4[1b0] via P2P/IPC[0m
[34m[1,14]<stdout>:algo-2:228:228 [6] NCCL INFO Channel 11 : 6[1d0] -> 2[190] via P2P/IPC[0m
[34m[1,9]<stdout>:algo-2:224:224 [1] NCCL INFO 12 coll channels, 16 p2p channels, 2 p2p channels per peer[0m
[34m[1,9]<stdout>:algo-2:224:224 [1] NCCL INFO comm 0x55ade25e6290 rank 1 nranks 8 cudaDev 1 busId 180 - Init COMPLETE[0m
[34m[1,15]<stdout>:algo-2:229:229 [7] NCCL INFO Channel 11 : 7[1e0] -> 5[1c0] via P2P/IPC[0m
[34m[1,8]<stdout>:algo-2:678:678 [0] NCCL INFO 12 coll channels, 16 p2p channels, 2 p2p channels per peer[0m
[34m[1,8]<stdout>:algo-2:678:678 [0] NCCL INFO comm 0x556fa0ae6150 rank 0 nranks 8 cudaDev 0 busId 170 - Init COMPLETE[0m
[34m[1,10]<stdout>:algo-2:226:226 [2] NCCL INFO 12 coll channels, 16 p2p channels, 2 p2p channels per peer[0m
[34m[1,10]<stdout>:algo-2:226:226 [2] NCCL INFO comm 0x5605be7e7570 rank 2 nranks 8 cudaDev 2 busId 190 - Init COMPLETE[0m
[34m[1,11]<stdout>:algo-2:225:225 [3] NCCL INFO 12 coll channels, 16 p2p channels, 2 p2p channels per peer[0m
[34m[1,11]<stdout>:algo-2:225:225 [3] NCCL INFO comm 0x55ba1bbe4a50 rank 3 nranks 8 cudaDev 3 busId 1a0 - Init COMPLETE[0m
[34m[1,12]<stdout>:algo-2:227:227 [4] NCCL INFO 12 coll channels, 16 p2p channels, 2 p2p channels per peer[0m
[34m[1,13]<stdout>:algo-2:223:223 [5] NCCL INFO 12 coll channels, 16 p2p channels, 2 p2p channels per peer[0m
[34m[1,12]<stdout>:algo-2:227:227 [4] NCCL INFO comm 0x558ac85ec320 rank 4 nranks 8 cudaDev 4 busId 1b0 - Init COMPLETE[0m
[34m[1,13]<stdout>:algo-2:223:223 [5] NCCL INFO comm 0x55940b219660 rank 5 nranks 8 cudaDev 5 busId 1c0 - Init COMPLETE[0m
[34m[1,14]<stdout>:algo-2:228:228 [6] NCCL INFO 12 coll channels, 16 p2p channels, 2 p2p channels per peer[0m
[34m[1,14]<stdout>:algo-2:228:228 [6] NCCL INFO comm 0x564ce028b720 rank 6 nranks 8 cudaDev 6 busId 1d0 - Init COMPLETE[0m
[34m[1,15]<stdout>:algo-2:229:229 [7] NCCL INFO 12 coll channels, 16 p2p channels, 2 p2p channels per peer[0m
[34m[1,15]<stdout>:algo-2:229:229 [7] NCCL INFO comm 0x55e5d7421f90 rank 7 nranks 8 cudaDev 7 busId 1e0 - Init COMPLETE[0m
[34m[1,7]<stdout>:algo-1:221:221 [7] NCCL INFO threadThresholds 8/8/64 | 128/8/64 | 8/8/64[0m
[34m[1,7]<stdout>:algo-1:221:221 [7] NCCL INFO Trees [0] 4/-1/-1->7->6|6->7->4/-1/-1 [1] 4/-1/-1->7->6|6->7->4/-1/-1[0m
[34m[1,1]<stdout>:algo-1:217:217 [1] NCCL INFO threadThresholds 8/8/64 | 128/8/64 | 8/8/64[0m
[34m[1,1]<stdout>:algo-1:217:217 [1] NCCL INFO Trees [0] 5/-1/-1->1->2|2->1->5/-1/-1 [1] 5/-1/-1->1->2|2->1->5/-1/-1[0m
[34m[1,2]<stdout>:algo-1:216:216 [2] NCCL INFO threadThresholds 8/8/64 | 128/8/64 | 8/8/64[0m
[34m[1,2]<stdout>:algo-1:216:216 [2] NCCL INFO Trees [0] 1/-1/-1->2->3|3->2->1/-1/-1 [1] 1/-1/-1->2->3|3->2->1/-1/-1[0m
[34m[1,4]<stdout>:algo-1:218:218 [4] NCCL INFO threadThresholds 8/8/64 | 128/8/64 | 8/8/64[0m
[34m[1,4]<stdout>:algo-1:218:218 [4] NCCL INFO Trees [0] -1/-1/-1->4->7|7->4->-1/-1/-1 [1] -1/-1/-1->4->7|7->4->-1/-1/-1[0m
[34m[1,0]<stdout>:algo-1:670:670 [0] NCCL INFO Channel 00/02 : 0 3 2 1 5 6 7 4 8 11 10 9 13 14 15 12[0m
[34m[1,0]<stdout>:algo-1:670:670 [0] NCCL INFO Channel 01/02 : 0 3 2 1 5 6 7 4 8 11 10 9 13 14 15 12[0m
[34m[1,6]<stdout>:algo-1:215:215 [6] NCCL INFO threadThresholds 8/8/64 | 128/8/64 | 8/8/64[0m
[34m[1,6]<stdout>:algo-1:215:215 [6] NCCL INFO Trees [0] 7/-1/-1->6->5|5->6->7/-1/-1 [1] 7/-1/-1->6->5|5->6->7/-1/-1[0m
[34m[1,3]<stdout>:algo-1:219:219 [3] NCCL INFO threadThresholds 8/8/64 | 128/8/64 | 8/8/64[0m
[34m[1,3]<stdout>:algo-1:219:219 [3] NCCL INFO Trees [0] 2/8/-1->3->0|0->3->2/8/-1 [1] 2/-1/-1->3->0|0->3->2/-1/-1[0m
[34m[1,5]<stdout>:algo-1:220:220 [5] NCCL INFO threadThresholds 8/8/64 | 128/8/64 | 8/8/64[0m
[34m[1,5]<stdout>:algo-1:220:220 [5] NCCL INFO Trees [0] 6/-1/-1->5->1|1->5->6/-1/-1 [1] 6/-1/-1->5->1|1->5->6/-1/-1[0m
[34m[1,0]<stdout>:algo-1:670:670 [0] NCCL INFO threadThresholds 8/8/64 | 128/8/64 | 8/8/64[0m
[34m[1,0]<stdout>:algo-1:670:670 [0] NCCL INFO Trees [0] 3/-1/-1->0->-1|-1->0->3/-1/-1 [1] 3/-1/-1->0->11|11->0->3/-1/-1[0m
[34m[1,14]<stdout>:algo-2:228:228 [6] NCCL INFO threadThresholds 8/8/64 | 128/8/64 | 8/8/64[0m
[34m[1,14]<stdout>:algo-2:228:228 [6] NCCL INFO Trees [0] 15/-1/-1->14->13|13->14->15/-1/-1 [1] 15/-1/-1->14->13|13->14->15/-1/-1[0m
[34m[1,15]<stdout>:algo-2:229:229 [7] NCCL INFO threadThresholds 8/8/64 | 128/8/64 | 8/8/64[0m
[34m[1,15]<stdout>:algo-2:229:229 [7] NCCL INFO Trees [0] 12/-1/-1->15->14|14->15->12/-1/-1 [1] 12/-1/-1->15->14|14->15->12/-1/-1[0m
[34m[1,8]<stdout>:algo-2:678:678 [0] NCCL INFO threadThresholds 8/8/64 | 128/8/64 | 8/8/64[0m
[34m[1,8]<stdout>:algo-2:678:678 [0] NCCL INFO Trees [0] 11/-1/-1->8->3|3->8->11/-1/-1 [1] 11/-1/-1->8->-1|-1->8->11/-1/-1[0m
[34m[1,11]<stdout>:algo-2:225:225 [3] NCCL INFO threadThresholds 8/8/64 | 128/8/64 | 8/8/64[0m
[34m[1,9]<stdout>:algo-2:224:224 [1] NCCL INFO threadThresholds 8/8/64 | 128/8/64 | 8/8/64[0m
[34m[1,10]<stdout>:algo-2:226:226 [2] NCCL INFO threadThresholds 8/8/64 | 128/8/64 | 8/8/64[0m
[34m[1,10]<stdout>:algo-2:226:226 [2] NCCL INFO Trees [0] 9/-1/-1->10->11|11->10->9/-1/-1 [1] 9/-1/-1->10->11|11->10->9/-1/-1[0m
[34m[1,12]<stdout>:algo-2:227:227 [4] NCCL INFO threadThresholds 8/8/64 | 128/8/64 | 8/8/64[0m
[34m[1,12]<stdout>:algo-2:227:227 [4] NCCL INFO Trees [0] -1/-1/-1->12->15|15->12->-1/-1/-1 [1] -1/-1/-1->12->15|15->12->-1/-1/-1[0m
[34m[1,11]<stdout>:algo-2:225:225 [3] NCCL INFO Trees [0] 10/-1/-1->11->8|8->11->10/-1/-1 [1] 10/0/-1->11->8|8->11->10/0/-1[0m
[34m[1,13]<stdout>:algo-2:223:223 [5] NCCL INFO threadThresholds 8/8/64 | 128/8/64 | 8/8/64[0m
[34m[1,13]<stdout>:algo-2:223:223 [5] NCCL INFO Trees [0] 14/-1/-1->13->9|9->13->14/-1/-1 [1] 14/-1/-1->13->9|9->13->14/-1/-1[0m
[34m[1,9]<stdout>:algo-2:224:224 [1] NCCL INFO Trees [0] 13/-1/-1->9->10|10->9->13/-1/-1 [1] 13/-1/-1->9->10|10->9->13/-1/-1[0m
[34m[1,7]<stdout>:algo-1:221:221 [7] NCCL INFO Channel 00 : 7[1e0] -> 4[1b0] via P2P/IPC[0m
[34m[1,15]<stdout>:algo-2:229:229 [7] NCCL INFO Channel 00 : 15[1e0] -> 12[1b0] via P2P/IPC[0m
[34m[1,14]<stdout>:algo-2:228:228 [6] NCCL INFO Channel 00 : 14[1d0] -> 15[1e0] via P2P/IPC[0m
[34m[1,5]<stdout>:algo-1:220:220 [5] NCCL INFO Channel 00 : 5[1c0] -> 6[1d0] via P2P/IPC[0m
[34m[1,6]<stdout>:algo-1:215:215 [6] NCCL INFO Channel 00 : 6[1d0] -> 7[1e0] via P2P/IPC[0m
[34m[1,1]<stdout>:algo-1:217:217 [1] NCCL INFO Channel 00 : 1[180] -> 5[1c0] via P2P/IPC[0m
[34m[1,3]<stdout>:algo-1:219:219 [3] NCCL INFO Channel 00 : 3[1a0] -> 2[190] via P2P/IPC[0m
[34m[1,2]<stdout>:algo-1:216:216 [2] NCCL INFO Channel 00 : 2[190] -> 1[180] via P2P/IPC[0m
[34m[1,13]<stdout>:algo-2:223:223 [5] NCCL INFO Channel 00 : 13[1c0] -> 14[1d0] via P2P/IPC[0m
[34m[1,11]<stdout>:algo-2:225:225 [3] NCCL INFO Channel 00 : 11[1a0] -> 10[190] via P2P/IPC[0m
[34m[1,9]<stdout>:algo-2:224:224 [1] NCCL INFO Channel 00 : 9[180] -> 13[1c0] via P2P/IPC[0m
[34m[1,0]<stdout>:algo-1:670:670 [0] NCCL INFO Channel 00 : 12[1b0] -> 0[170] [receive] via NET/Socket/0[0m
[34m[1,0]<stdout>:algo-1:670:670 [0] NCCL INFO NET/Socket: Using 2 threads and 8 sockets per thread[0m
[34m[1,10]<stdout>:algo-2:226:226 [2] NCCL INFO Channel 00 : 10[190] -> 9[180] via P2P/IPC[0m
[34m[1,8]<stdout>:algo-2:678:678 [0] NCCL INFO Channel 00 : 4[1b0] -> 8[170] [receive] via NET/Socket/0[0m
[34m[1,8]<stdout>:algo-2:678:678 [0] NCCL INFO NET/Socket: Using 2 threads and 8 sockets per thread[0m
[34m[1,4]<stdout>:algo-1:218:218 [4] NCCL INFO Channel 00 : 4[1b0] -> 8[170] [send] via NET/Socket/0[0m
[34m[1,0]<stdout>:algo-1:670:670 [0] NCCL INFO Channel 00 : 0[170] -> 3[1a0] via P2P/IPC[0m
[34m[1,12]<stdout>:algo-2:227:227 [4] NCCL INFO Channel 00 : 12[1b0] -> 0[170] [send] via NET/Socket/0[0m
[34m[1,8]<stdout>:algo-2:678:678 [0] NCCL INFO Channel 00 : 8[170] -> 11[1a0] via P2P/IPC[0m
[34m[1,7]<stdout>:algo-1:221:221 [7] NCCL INFO Channel 00 : 7[1e0] -> 6[1d0] via P2P/IPC[0m
[34m[1,5]<stdout>:algo-1:220:220 [5] NCCL INFO Channel 00 : 5[1c0] -> 1[180] via P2P/IPC[0m
[34m[1,6]<stdout>:algo-1:215:215 [6] NCCL INFO Channel 00 : 6[1d0] -> 5[1c0] via P2P/IPC[0m
[34m[1,1]<stdout>:algo-1:217:217 [1] NCCL INFO Channel 00 : 1[180] -> 2[190] via P2P/IPC[0m
[34m[1,2]<stdout>:algo-1:216:216 [2] NCCL INFO Channel 00 : 2[190] -> 3[1a0] via P2P/IPC[0m
[34m[1,15]<stdout>:algo-2:229:229 [7] NCCL INFO Channel 00 : 15[1e0] -> 14[1d0] via P2P/IPC[0m
[34m[1,14]<stdout>:algo-2:228:228 [6] NCCL INFO Channel 00 : 14[1d0] -> 13[1c0] via P2P/IPC[0m
[34m[1,13]<stdout>:algo-2:223:223 [5] NCCL INFO Channel 00 : 13[1c0] -> 9[180] via P2P/IPC[0m
[34m[1,9]<stdout>:algo-2:224:224 [1] NCCL INFO Channel 00 : 9[180] -> 10[190] via P2P/IPC[0m
[34m[1,10]<stdout>:algo-2:226:226 [2] NCCL INFO Channel 00 : 10[190] -> 11[1a0] via P2P/IPC[0m
[34m[1,4]<stdout>:algo-1:218:218 [4] NCCL INFO Channel 00 : 4[1b0] -> 7[1e0] via P2P/IPC[0m
[34m[1,12]<stdout>:algo-2:227:227 [4] NCCL INFO Channel 00 : 12[1b0] -> 15[1e0] via P2P/IPC[0m
[34m[1,11]<stdout>:algo-2:225:225 [3] NCCL INFO Channel 00 : 11[1a0] -> 8[170] via P2P/IPC[0m
[34m[1,3]<stdout>:algo-1:219:219 [3] NCCL INFO Channel 00 : 8[170] -> 3[1a0] [receive] via NET/Socket/0[0m
[34m[1,3]<stdout>:algo-1:219:219 [3] NCCL INFO NET/Socket: Using 2 threads and 8 sockets per thread[0m
[34m[1,5]<stdout>:algo-1:220:220 [5] NCCL INFO Channel 01 : 5[1c0] -> 6[1d0] via P2P/IPC[0m
[34m[1,8]<stdout>:algo-2:678:678 [0] NCCL INFO Channel 00 : 8[170] -> 3[1a0] [send] via NET/Socket/0[0m
[34m[1,3]<stdout>:algo-1:219:219 [3] NCCL INFO Channel 00 : 3[1a0] -> 0[170] via P2P/IPC[0m
[34m[1,6]<stdout>:algo-1:215:215 [6] NCCL INFO Channel 01 : 6[1d0] -> 7[1e0] via P2P/IPC[0m
[34m[1,1]<stdout>:algo-1:217:217 [1] NCCL INFO Channel 01 : 1[180] -> 5[1c0] via P2P/IPC[0m
[34m[1,2]<stdout>:algo-1:216:216 [2] NCCL INFO Channel 01 : 2[190] -> 1[180] via P2P/IPC[0m
[34m[1,7]<stdout>:algo-1:221:221 [7] NCCL INFO Channel 01 : 7[1e0] -> 4[1b0] via P2P/IPC[0m
[34m[1,14]<stdout>:algo-2:228:228 [6] NCCL INFO Channel 01 : 14[1d0] -> 15[1e0] via P2P/IPC[0m
[34m[1,13]<stdout>:algo-2:223:223 [5] NCCL INFO Channel 01 : 13[1c0] -> 14[1d0] via P2P/IPC[0m
[34m[1,15]<stdout>:algo-2:229:229 [7] NCCL INFO Channel 01 : 15[1e0] -> 12[1b0] via P2P/IPC[0m
[34m[1,9]<stdout>:algo-2:224:224 [1] NCCL INFO Channel 01 : 9[180] -> 13[1c0] via P2P/IPC[0m
[34m[1,10]<stdout>:algo-2:226:226 [2] NCCL INFO Channel 01 : 10[190] -> 9[180] via P2P/IPC[0m
[34m[1,4]<stdout>:algo-1:218:218 [4] NCCL INFO Channel 01 : 4[1b0] -> 8[170] [send] via NET/Socket/0[0m
[34m[1,11]<stdout>:algo-2:225:225 [3] NCCL INFO Channel 01 : 11[1a0] -> 10[190] via P2P/IPC[0m
[34m[1,0]<stdout>:algo-1:670:670 [0] NCCL INFO Channel 01 : 12[1b0] -> 0[170] [receive] via NET/Socket/0[0m
[34m[1,0]<stdout>:algo-1:670:670 [0] NCCL INFO NET/Socket: Using 2 threads and 8 sockets per thread[0m
[34m[1,5]<stdout>:algo-1:220:220 [5] NCCL INFO Channel 01 : 5[1c0] -> 1[180] via P2P/IPC[0m
[34m[1,6]<stdout>:algo-1:215:215 [6] NCCL INFO Channel 01 : 6[1d0] -> 5[1c0] via P2P/IPC[0m
[34m[1,12]<stdout>:algo-2:227:227 [4] NCCL INFO Channel 01 : 12[1b0] -> 0[170] [send] via NET/Socket/0[0m
[34m[1,1]<stdout>:algo-1:217:217 [1] NCCL INFO Channel 01 : 1[180] -> 2[190] via P2P/IPC[0m
[34m[1,7]<stdout>:algo-1:221:221 [7] NCCL INFO Channel 01 : 7[1e0] -> 6[1d0] via P2P/IPC[0m
[34m[1,3]<stdout>:algo-1:219:219 [3] NCCL INFO Channel 00 : 3[1a0] -> 8[170] [send] via NET/Socket/0[0m
[34m[1,0]<stdout>:algo-1:670:670 [0] NCCL INFO Channel 01 : 0[170] -> 3[1a0] via P2P/IPC[0m
[34m[1,5]<stdout>:algo-1:220:220 [5] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer[0m
[34m[1,5]<stdout>:algo-1:220:220 [5] NCCL INFO comm 0x5573d7c78960 rank 5 nranks 16 cudaDev 5 busId 1c0 - Init COMPLETE[0m
[34m[1,6]<stdout>:algo-1:215:215 [6] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer[0m
[34m[1,6]<stdout>:algo-1:215:215 [6] NCCL INFO comm 0x55aa9dd0eb50 rank 6 nranks 16 cudaDev 6 busId 1d0 - Init COMPLETE[0m
[34m[1,8]<stdout>:algo-2:678:678 [0] NCCL INFO Channel 00 : 3[1a0] -> 8[170] [receive] via NET/Socket/0[0m
[34m[1,8]<stdout>:algo-2:678:678 [0] NCCL INFO NET/Socket: Using 2 threads and 8 sockets per thread[0m
[34m[1,14]<stdout>:algo-2:228:228 [6] NCCL INFO Channel 01 : 14[1d0] -> 13[1c0] via P2P/IPC[0m
[34m[1,13]<stdout>:algo-2:223:223 [5] NCCL INFO Channel 01 : 13[1c0] -> 9[180] via P2P/IPC[0m
[34m[1,15]<stdout>:algo-2:229:229 [7] NCCL INFO Channel 01 : 15[1e0] -> 14[1d0] via P2P/IPC[0m
[34m[1,9]<stdout>:algo-2:224:224 [1] NCCL INFO Channel 01 : 9[180] -> 10[190] via P2P/IPC[0m
[34m[1,10]<stdout>:algo-2:226:226 [2] NCCL INFO Channel 01 : 10[190] -> 11[1a0] via P2P/IPC[0m
[34m[1,14]<stdout>:algo-2:228:228 [6] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer[0m
[34m[1,14]<stdout>:algo-2:228:228 [6] NCCL INFO comm 0x564ce2f61900 rank 14 nranks 16 cudaDev 6 busId 1d0 - Init COMPLETE[0m
[34m[1,13]<stdout>:algo-2:223:223 [5] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer[0m
[34m[1,12]<stdout>:algo-2:227:227 [4] NCCL INFO Channel 01 : 12[1b0] -> 15[1e0] via P2P/IPC[0m
[34m[1,13]<stdout>:algo-2:223:223 [5] NCCL INFO comm 0x55940deef840 rank 13 nranks 16 cudaDev 5 busId 1c0 - Init COMPLETE[0m
[34m[1,9]<stdout>:algo-2:224:224 [1] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer[0m
[34m[1,9]<stdout>:algo-2:224:224 [1] NCCL INFO comm 0x55ade52bc470 rank 9 nranks 16 cudaDev 1 busId 180 - Init COMPLETE[0m
[34m[1,15]<stdout>:algo-2:229:229 [7] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer[0m
[34m[1,15]<stdout>:algo-2:229:229 [7] NCCL INFO comm 0x55e5da0f8170 rank 15 nranks 16 cudaDev 7 busId 1e0 - Init COMPLETE[0m
[34m[1,12]<stdout>:algo-2:227:227 [4] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer[0m
[34m[1,12]<stdout>:algo-2:227:227 [4] NCCL INFO comm 0x558acb2c2500 rank 12 nranks 16 cudaDev 4 busId 1b0 - Init COMPLETE[0m
[34m[1,3]<stdout>:algo-1:219:219 [3] NCCL INFO Channel 01 : 3[1a0] -> 2[190] via P2P/IPC[0m
[34m[1,2]<stdout>:algo-1:216:216 [2] NCCL INFO Channel 01 : 2[190] -> 3[1a0] via P2P/IPC[0m
[34m[1,3]<stdout>:algo-1:219:219 [3] NCCL INFO Channel 01 : 3[1a0] -> 0[170] via P2P/IPC[0m
[34m[1,1]<stdout>:algo-1:217:217 [1] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer[0m
[34m[1,1]<stdout>:algo-1:217:217 [1] NCCL INFO comm 0x560b33b4ef20 rank 1 nranks 16 cudaDev 1 busId 180 - Init COMPLETE[0m
[34m[1,8]<stdout>:algo-2:678:678 [0] NCCL INFO Channel 01 : 4[1b0] -> 8[170] [receive] via NET/Socket/0[0m
[34m[1,8]<stdout>:algo-2:678:678 [0] NCCL INFO NET/Socket: Using 2 threads and 8 sockets per thread[0m
[34m[1,8]<stdout>:algo-2:678:678 [0] NCCL INFO Channel 01 : 8[170] -> 11[1a0] via P2P/IPC[0m
[34m[1,2]<stdout>:algo-1:216:216 [2] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer[0m
[34m[1,2]<stdout>:algo-1:216:216 [2] NCCL INFO comm 0x561a9ac60370 rank 2 nranks 16 cudaDev 2 busId 190 - Init COMPLETE[0m
[34m[1,3]<stdout>:algo-1:219:219 [3] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer[0m
[34m[1,3]<stdout>:algo-1:219:219 [3] NCCL INFO comm 0x562335b255d0 rank 3 nranks 16 cudaDev 3 busId 1a0 - Init COMPLETE[0m
[34m[1,0]<stdout>:algo-1:670:670 [0] NCCL INFO Channel 01 : 0[170] -> 11[1a0] [send] via NET/Socket/0[0m
[34m[1,10]<stdout>:algo-2:226:226 [2] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer[0m
[34m[1,10]<stdout>:algo-2:226:226 [2] NCCL INFO comm 0x5605c14bd750 rank 10 nranks 16 cudaDev 2 busId 190 - Init COMPLETE[0m
[34m[1,4]<stdout>:algo-1:218:218 [4] NCCL INFO Channel 01 : 4[1b0] -> 7[1e0] via P2P/IPC[0m
[34m[1,7]<stdout>:algo-1:221:221 [7] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer[0m
[34m[1,4]<stdout>:algo-1:218:218 [4] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer[0m
[34m[1,7]<stdout>:algo-1:221:221 [7] NCCL INFO comm 0x556c55c896d0 rank 7 nranks 16 cudaDev 7 busId 1e0 - Init COMPLETE[0m
[34m[1,4]<stdout>:algo-1:218:218 [4] NCCL INFO comm 0x559666168df0 rank 4 nranks 16 cudaDev 4 busId 1b0 - Init COMPLETE[0m
[34m[1,11]<stdout>:algo-2:225:225 [3] NCCL INFO Channel 01 : 0[170] -> 11[1a0] [receive] via NET/Socket/0[0m
[34m[1,11]<stdout>:algo-2:225:225 [3] NCCL INFO NET/Socket: Using 2 threads and 8 sockets per thread[0m
[34m[1,11]<stdout>:algo-2:225:225 [3] NCCL INFO Channel 01 : 11[1a0] -> 8[170] via P2P/IPC[0m
[34m[1,8]<stdout>:algo-2:678:678 [0] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer[0m
[34m[1,8]<stdout>:algo-2:678:678 [0] NCCL INFO comm 0x556fa37bc330 rank 8 nranks 16 cudaDev 0 busId 170 - Init COMPLETE[0m
[34m[1,11]<stdout>:algo-2:225:225 [3] NCCL INFO Channel 01 : 11[1a0] -> 0[170] [send] via NET/Socket/0[0m
[34m[1,0]<stdout>:algo-1:670:670 [0] NCCL INFO Channel 01 : 11[1a0] -> 0[170] [receive] via NET/Socket/0[0m
[34m[1,0]<stdout>:algo-1:670:670 [0] NCCL INFO NET/Socket: Using 2 threads and 8 sockets per thread[0m
[34m[1,0]<stdout>:algo-1:670:670 [0] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer[0m
[34m[1,11]<stdout>:algo-2:225:225 [3] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer[0m
[34m[1,0]<stdout>:algo-1:670:670 [0] NCCL INFO comm 0x5613ef9f52f0 rank 0 nranks 16 cudaDev 0 busId 170 - Init COMPLETE[0m
[34m[1,11]<stdout>:algo-2:225:225 [3] NCCL INFO comm 0x55ba1e8bac30 rank 11 nranks 16 cudaDev 3 busId 1a0 - Init COMPLETE[0m
[34m[1,0]<stdout>:Running smdistributed.dataparallel v1.2.0[0m
[34m[1,1]<stdout>:dist.rank(): 1[0m
[34m[1,1]<stdout>:mnist files: mnist-1.npz[0m
[34m[1,7]<stdout>:dist.rank(): 7[0m
[34m[1,7]<stdout>:mnist files: mnist-7.npz[0m
[34m[1,6]<stdout>:dist.rank(): 6[0m
[34m[1,1]<stdout>:Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz[0m
[34m[1,6]<stdout>:mnist files: mnist-6.npz[0m
[34m[1,4]<stdout>:dist.rank(): 4[0m
[34m[1,7]<stdout>:Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz[0m
[34m[1,4]<stdout>:mnist files: mnist-4.npz[0m
[34m[1,2]<stdout>:dist.rank(): 2[0m
[34m[1,2]<stdout>:mnist files: mnist-2.npz[0m
[34m[1,6]<stdout>:Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz[0m
[34m[1,4]<stdout>:Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz[0m
[34m[1,5]<stdout>:dist.rank(): 5[0m
[34m[1,2]<stdout>:Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz[0m
[34m[1,5]<stdout>:mnist files: mnist-5.npz[0m
[34m[1,0]<stdout>:dist.rank(): 0[0m
[34m[1,0]<stdout>:mnist files: mnist-0.npz[0m
[34m[1,3]<stdout>:dist.rank(): 3[0m
[34m[1,3]<stdout>:mnist files: mnist-3.npz[0m
[34m[1,5]<stdout>:Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz[0m
[34m[1,0]<stdout>:Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz[0m
[34m[1,3]<stdout>:Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz[0m
[34m[1,4]<stdout>: 8192/11490434 [..............................] - ETA: 0s[0m
[34m[1,2]<stdout>: 8192/11490434 [..............................] - ETA: 0s[0m
[34m[1,3]<stdout>: 8192/11490434 [..............................] - ETA: 0s[0m
[34m[1,6]<stdout>: 8192/11490434 [..............................] - ETA: 0s[0m
[34m[1,0]<stdout>: 8192/11490434 [..............................] - ETA: 0s[0m
[34m[1,5]<stdout>: 8192/11490434 [..............................] - ETA: 0s[0m
[34m[1,1]<stdout>: 8192/11490434 [..............................] - ETA: 0s[0m
[34m[1,7]<stdout>: 8192/11490434 [..............................] - ETA: 0s[0m
[34m[1,11]<stdout>:dist.rank(): 11[0m
[34m[1,11]<stdout>:mnist files: mnist-11.npz[0m
[34m[1,13]<stdout>:dist.rank(): 13[0m
[34m[1,13]<stdout>:mnist files: mnist-13.npz[0m
[34m[1,8]<stdout>:dist.rank(): 8[0m
[34m[1,8]<stdout>:mnist files: mnist-8.npz[0m
[34m[1,13]<stdout>:Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz[0m
[34m[1,11]<stdout>:Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz[0m
[34m[1,8]<stdout>:Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz[0m
[34m[1,9]<stdout>:dist.rank(): 9[0m
[34m[1,9]<stdout>:mnist files: mnist-9.npz[0m
[34m[1,14]<stdout>:dist.rank(): 14[0m
[34m[1,14]<stdout>:mnist files: mnist-14.npz[0m
[34m[1,9]<stdout>:Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz[0m
[34m[1,15]<stdout>:dist.rank(): 15[0m
[34m[1,15]<stdout>:mnist files: mnist-15.npz[0m
[34m[1,10]<stdout>:dist.rank(): 10[0m
[34m[1,10]<stdout>:mnist files: mnist-10.npz[0m
[34m[1,14]<stdout>:Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz[0m
[34m[1,15]<stdout>:Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz[0m
[34m[1,12]<stdout>:dist.rank(): 12[0m
[34m[1,12]<stdout>:mnist files: mnist-12.npz[0m
[34m[1,10]<stdout>:Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz[0m
[34m[1,12]<stdout>:Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz[0m
[34m[1,2]<stdout>:11493376/11490434 [==============================] - 0s 0us/step[0m
[34m[1,1]<stdout>:11493376/11490434 [==============================] - 0s 0us/step[0m
[34m[1,5]<stdout>:11493376/11490434 [==============================] - 0s 0us/step[0m
[34m[1,7]<stdout>:11493376/11490434 [==============================] - 0s 0us/step[0m
[34m[1,4]<stdout>:11493376/11490434 [==============================] - 0s 0us/step[0m
[34m[1,3]<stdout>:11493376/11490434 [==============================] - 0s 0us/step[0m
[34m[1,6]<stdout>:11493376/11490434 [==============================] - 0s 0us/step[0m
[34m[1,0]<stdout>:11493376/11490434 [==============================] - 0s 0us/step[0m
[34m[1,8]<stdout>: 3047424/11490434 [======>.......................] - ETA: 0s[0m
[34m[1,9]<stdout>: 1630208/11490434 [===>..........................] - ETA: 0s[0m
[34m[1,10]<stdout>: 2818048/11490434 [======>.......................] - ETA: 0s[0m
[34m[1,11]<stdout>: 8192/11490434 [..............................] - ETA: 0s[0m
[34m[1,12]<stdout>: 4202496/11490434 [=========>....................] - ETA: 0s[0m
[34m[1,14]<stdout>: 2285568/11490434 [====>.........................] - ETA: 0s[0m
[34m[1,15]<stdout>: 884736/11490434 [=>............................] - ETA: 0s[0m
[34m[1,13]<stdout>: 4202496/11490434 [=========>....................]
- ETA: 0s[1,15]<stdout>:#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#015[1,15]<stdout>: 4259840/11490434 [==========>...................][1,15]<stdout>: - ETA: 0s[1,11]<stdout>:#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#015 4202496/11490434 [=========>....................][1,11]<stdout>: - ETA: 0s[1,8]<stdout>:#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#015 5439488/11490434 [=============>................] - ETA: 0s[1,14]<stdout>:#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#015 4202496/11490434 [=========>....................] - ETA: 0s[1,9]<stdout>:#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#015 4202496/11490434 [=========>....................] - ETA: 0s[1,12]<stdout>:#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#015 9576448/11490434 [========================>.....] - ETA: 0s[1,10]<stdout>:#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#015 9330688/11490434 [=======================>......][1,10]<stdout>: - ETA: 0s[1,13]<stdout>:#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#01511493376/11490434 [==============================] - 0s 0us/step[0m
[34m[1,12]<stdout>:#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#01511493376/11490434 [==============================] - 0s 0us/step[0m
[34m[1,11]<stdout>:#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#01511493376/11490434 [==============================] - 0s 0us/step[0m
[34m[1,10]<stdout>:#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#01511493376/11490434 [==============================] - 0s 0us/step[0m
[34m[1,15]<stdout>:#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#015 9240576/11490434 [=======================>......] - ETA: 0s[1,8]<stdout>:#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#01511493376/11490434 [==============================] - 0s 0us/step[0m
[34m[1,15]<stdout>:#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#01511493376/11490434 [==============================] - 0s 0us/step[0m
[34m[1,14]<stdout>:#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#015 8388608/11490434 [====================>.........] - ETA: 0s[1,9]<stdout>:#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#015[1,9]<stdout>: 8355840/11490434 [====================>.........][1,9]<stdout>: - ETA: 0s[1,14]<stdout>:#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#01511493376/11490434 [==============================] - 0s 0us/step[0m
[34m[1,9]<stdout>:#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#01511493376/11490434 [==============================] - 0s 0us/step[0m
[34m[1,2]<stdout>:type: <class 'numpy.ndarray'>[0m
[34m[1,2]<stdout>:shape: (60000, 28, 28)[0m
[34m[1,2]<stdout>:mnist_labels: (60000,)[0m
[34m[1,5]<stdout>:type: <class 'numpy.ndarray'>[0m
[34m[1,5]<stdout>:shape: (60000, 28, 28)[0m
[34m[1,5]<stdout>:mnist_labels: (60000,)[0m
[34m[1,1]<stdout>:type: <class 'numpy.ndarray'>[0m
[34m[1,1]<stdout>:shape: (60000, 28, 28)[0m
[34m[1,1]<stdout>:mnist_labels: (60000,)[0m
[34m[1,7]<stdout>:type: <class 'numpy.ndarray'>[0m
[34m[1,7]<stdout>:shape: (60000, 28, 28)[0m
[34m[1,7]<stdout>:mnist_labels: (60000,)[0m
[34m[1,4]<stdout>:type: <class 'numpy.ndarray'>[0m
[34m[1,4]<stdout>:shape: (60000, 28, 28)[0m
[34m[1,4]<stdout>:mnist_labels: (60000,)[0m
[34m[1,3]<stdout>:type: <class 'numpy.ndarray'>[0m
[34m[1,3]<stdout>:shape: (60000, 28, 28)[0m
[34m[1,3]<stdout>:mnist_labels: (60000,)[0m
[34m[1,6]<stdout>:type: <class 'numpy.ndarray'>[0m
[34m[1,6]<stdout>:shape: (60000, 28, 28)[0m
[34m[1,6]<stdout>:mnist_labels: (60000,)[0m
[34m[1,0]<stdout>:type: <class 'numpy.ndarray'>[0m
[34m[1,0]<stdout>:shape: (60000, 28, 28)[0m
[34m[1,0]<stdout>:mnist_labels: (60000,)[0m
[34m[1,12]<stdout>:type: <class 'numpy.ndarray'>[0m
[34m[1,12]<stdout>:shape: (60000, 28, 28)[0m
[34m[1,12]<stdout>:mnist_labels: (60000,)[0m
[34m[1,11]<stdout>:type: <class 'numpy.ndarray'>[0m
[34m[1,11]<stdout>:shape: (60000, 28, 28)[0m
[34m[1,11]<stdout>:mnist_labels: (60000,)[0m
[34m[1,13]<stdout>:type: <class 'numpy.ndarray'>[0m
[34m[1,13]<stdout>:shape: (60000, 28, 28)[0m
[34m[1,13]<stdout>:mnist_labels: (60000,)[0m
[34m[1,10]<stdout>:type: <class 'numpy.ndarray'>[0m
[34m[1,10]<stdout>:shape: (60000, 28, 28)[0m
[34m[1,10]<stdout>:mnist_labels: (60000,)[0m
[34m[1,15]<stdout>:type: <class 'numpy.ndarray'>[0m
[34m[1,15]<stdout>:shape: (60000, 28, 28)[0m
[34m[1,15]<stdout>:mnist_labels: (60000,)[0m
[34m[1,8]<stdout>:type: <class 'numpy.ndarray'>[0m
[34m[1,8]<stdout>:shape: (60000, 28, 28)[0m
[34m[1,8]<stdout>:mnist_labels: (60000,)[0m
[34m[1,14]<stdout>:type: <class 'numpy.ndarray'>[0m
[34m[1,14]<stdout>:shape: (60000, 28, 28)[0m
[34m[1,14]<stdout>:mnist_labels: (60000,)[0m
[34m[1,9]<stdout>:type: <class 'numpy.ndarray'>[0m
[34m[1,9]<stdout>:shape: (60000, 28, 28)[0m
[34m[1,9]<stdout>:mnist_labels: (60000,)[0m
[34m[1,1]<stdout>:[2021-10-02 08:08:10.429 algo-1:217 INFO utils.py:27] RULE_JOB_STOP_SIGNAL_FILENAME: None[0m
[34m[1,2]<stdout>:[2021-10-02 08:08:10.429 algo-1:216 INFO utils.py:27] RULE_JOB_STOP_SIGNAL_FILENAME: None[0m
[34m[1,7]<stdout>:[2021-10-02 08:08:10.429 algo-1:221 INFO utils.py:27] RULE_JOB_STOP_SIGNAL_FILENAME: None[0m
[34m[1,5]<stdout>:[2021-10-02 08:08:10.429 algo-1:220 INFO utils.py:27] RULE_JOB_STOP_SIGNAL_FILENAME: None[0m
[34m[1,6]<stdout>:[2021-10-02 08:08:10.429 algo-1:215 INFO utils.py:27] RULE_JOB_STOP_SIGNAL_FILENAME: None[0m
[34m[1,3]<stdout>:[2021-10-02 08:08:10.429 algo-1:219 INFO utils.py:27] RULE_JOB_STOP_SIGNAL_FILENAME: None[0m
[34m[1,4]<stdout>:[2021-10-02 08:08:10.429 algo-1:218 INFO utils.py:27] RULE_JOB_STOP_SIGNAL_FILENAME: None[0m
[34m[1,0]<stdout>:[2021-10-02 08:08:10.429 algo-1:670 INFO utils.py:27] RULE_JOB_STOP_SIGNAL_FILENAME: None[0m
[34m[1,4]<stdout>:[2021-10-02 08:08:10.523 algo-1:218 INFO profiler_config_parser.py:102] User has disabled profiler.[0m
[34m[1,1]<stdout>:[2021-10-02 08:08:10.523 algo-1:217 INFO profiler_config_parser.py:102] User has disabled profiler.[0m
[34m[1,2]<stdout>:[2021-10-02 08:08:10.523 algo-1:216 INFO profiler_config_parser.py:102] User has disabled profiler.[0m
[34m[1,7]<stdout>:[2021-10-02 08:08:10.523 algo-1:221 INFO profiler_config_parser.py:102] User has disabled profiler.[0m
[34m[1,5]<stdout>:[2021-10-02 08:08:10.523 algo-1:220 INFO profiler_config_parser.py:102] User has disabled profiler.[0m
[34m[1,6]<stdout>:[2021-10-02 08:08:10.523 algo-1:215 INFO profiler_config_parser.py:102] User has disabled profiler.[0m
[34m[1,3]<stdout>:[2021-10-02 08:08:10.523 algo-1:219 INFO profiler_config_parser.py:102] User has disabled profiler.[0m
[34m[1,0]<stdout>:[2021-10-02 08:08:10.523 algo-1:670 INFO profiler_config_parser.py:102] User has disabled profiler.[0m
[34m[1,3]<stdout>:[2021-10-02 08:08:10.524 algo-1:219 INFO json_config.py:91] Creating hook from json_config at /opt/ml/input/config/debughookconfig.json.[0m
[34m[1,4]<stdout>:[2021-10-02 08:08:10.524 algo-1:218 INFO json_config.py:91] Creating hook from json_config at /opt/ml/input/config/debughookconfig.json.[0m
[34m[1,1]<stdout>:[2021-10-02 08:08:10.524 algo-1:217 INFO json_config.py:91] Creating hook from json_config at /opt/ml/input/config/debughookconfig.json.[0m
[34m[1,2]<stdout>:[2021-10-02 08:08:10.524 algo-1:216 INFO json_config.py:91] Creating hook from json_config at /opt/ml/input/config/debughookconfig.json.[0m
[34m[1,7]<stdout>:[2021-10-02 08:08:10.524 algo-1:221 INFO json_config.py:91] Creating hook from json_config at /opt/ml/input/config/debughookconfig.json.[0m
[34m[1,5]<stdout>:[2021-10-02 08:08:10.524 algo-1:220 INFO json_config.py:91] Creating hook from json_config at /opt/ml/input/config/debughookconfig.json.[0m
[34m[1,6]<stdout>:[2021-10-02 08:08:10.524 algo-1:215 INFO json_config.py:91] Creating hook from json_config at /opt/ml/input/config/debughookconfig.json.[0m
[34m[1,0]<stdout>:[2021-10-02 08:08:10.524 algo-1:670 INFO json_config.py:91] Creating hook from json_config at /opt/ml/input/config/debughookconfig.json.[0m
[34m[1,6]<stdout>:[2021-10-02 08:08:10.525 algo-1:215 INFO hook.py:199] tensorboard_dir has not been set for the hook. SMDebug will not be exporting tensorboard summaries.[0m
[34m[1,4]<stdout>:[2021-10-02 08:08:10.525 algo-1:218 INFO hook.py:199] tensorboard_dir has not been set for the hook. SMDebug will not be exporting tensorboard summaries.[0m
[34m[1,7]<stdout>:[2021-10-02 08:08:10.525 algo-1:221 INFO hook.py:199] tensorboard_dir has not been set for the hook. SMDebug will not be exporting tensorboard summaries.[0m
[34m[1,5]<stdout>:[2021-10-02 08:08:10.525 algo-1:220 INFO hook.py:199] tensorboard_dir has not been set for the hook. SMDebug will not be exporting tensorboard summaries.[0m
[34m[1,0]<stdout>:[2021-10-02 08:08:10.525 algo-1:670 INFO hook.py:199] tensorboard_dir has not been set for the hook. SMDebug will not be exporting tensorboard summaries.[0m
[34m[1,3]<stdout>:[2021-10-02 08:08:10.525 algo-1:219 INFO hook.py:199] tensorboard_dir has not been set for the hook. SMDebug will not be exporting tensorboard summaries.[0m
[34m[1,1]<stdout>:[2021-10-02 08:08:10.525 algo-1:217 INFO hook.py:199] tensorboard_dir has not been set for the hook. SMDebug will not be exporting tensorboard summaries.[0m
[34m[1,2]<stdout>:[2021-10-02 08:08:10.525 algo-1:216 INFO hook.py:199] tensorboard_dir has not been set for the hook. SMDebug will not be exporting tensorboard summaries.[0m
[34m[1,0]<stdout>:[2021-10-02 08:08:10.528 algo-1:670 INFO hook.py:253] Saving to /opt/ml/output/tensors[0m
[34m[1,3]<stdout>:[2021-10-02 08:08:10.528 algo-1:219 INFO hook.py:253] Saving to /opt/ml/output/tensors[0m
[34m[1,7]<stdout>:[2021-10-02 08:08:10.528 algo-1:221 INFO hook.py:253] Saving to /opt/ml/output/tensors[0m
[34m[1,6]<stdout>:[2021-10-02 08:08:10.528 algo-1:215 INFO hook.py:253] Saving to /opt/ml/output/tensors[0m
[34m[1,4]<stdout>:[2021-10-02 08:08:10.528 algo-1:218 INFO hook.py:253] Saving to /opt/ml/output/tensors[0m
[34m[1,3]<stdout>:[2021-10-02 08:08:10.528 algo-1:219 INFO state_store.py:77] The checkpoint config file /opt/ml/input/config/checkpointconfig.json does not exist.[0m
[34m[1,7]<stdout>:[2021-10-02 08:08:10.528 algo-1:221 INFO state_store.py:77] The checkpoint config file /opt/ml/input/config/checkpointconfig.json does not exist.[0m
[34m[1,0]<stdout>:[2021-10-02 08:08:10.528 algo-1:670 INFO state_store.py:77] The checkpoint config file /opt/ml/input/config/checkpointconfig.json does not exist.[0m
[34m[1,1]<stdout>:[2021-10-02 08:08:10.528 algo-1:217 INFO hook.py:253] Saving to /opt/ml/output/tensors[0m
[34m[1,5]<stdout>:[2021-10-02 08:08:10.528 algo-1:220 INFO hook.py:253] Saving to /opt/ml/output/tensors[0m
[34m[1,6]<stdout>:[2021-10-02 08:08:10.528 algo-1:215 INFO state_store.py:77] The checkpoint config file /opt/ml/input/config/checkpointconfig.json does not exist.[0m
[34m[1,4]<stdout>:[2021-10-02 08:08:10.528 algo-1:218 INFO state_store.py:77] The checkpoint config file /opt/ml/input/config/checkpointconfig.json does not exist.[0m
[34m[1,1]<stdout>:[2021-10-02 08:08:10.528 algo-1:217 INFO state_store.py:77] The checkpoint config file /opt/ml/input/config/checkpointconfig.json does not exist.[0m
[34m[1,5]<stdout>:[2021-10-02 08:08:10.528 algo-1:220 INFO state_store.py:77] The checkpoint config file /opt/ml/input/config/checkpointconfig.json does not exist.[0m
[34m[1,3]<stdout>:[2021-10-02 08:08:10.528 algo-1:219 INFO hook.py:413] Monitoring the collections: metrics, sm_metrics, losses[0m
[34m[1,7]<stdout>:[2021-10-02 08:08:10.528 algo-1:221 INFO hook.py:413] Monitoring the collections: losses, metrics, sm_metrics[0m
[34m[1,6]<stdout>:[2021-10-02 08:08:10.528 algo-1:215 INFO hook.py:413] Monitoring the collections: metrics, losses, sm_metrics[0m
[34m[1,4]<stdout>:[2021-10-02 08:08:10.528 algo-1:218 INFO hook.py:413] Monitoring the collections: metrics, sm_metrics, losses[0m
[34m[1,2]<stdout>:[2021-10-02 08:08:10.528 algo-1:216 INFO hook.py:253] Saving to /opt/ml/output/tensors[0m
[34m[1,0]<stdout>:[2021-10-02 08:08:10.528 algo-1:670 INFO hook.py:413] Monitoring the collections: losses, metrics, sm_metrics[0m
[34m[1,1]<stdout>:[2021-10-02 08:08:10.528 algo-1:217 INFO hook.py:413] Monitoring the collections: sm_metrics, metrics, losses[0m
[34m[1,2]<stdout>:[2021-10-02 08:08:10.528 algo-1:216 INFO state_store.py:77] The checkpoint config file /opt/ml/input/config/checkpointconfig.json does not exist.[0m
[34m[1,5]<stdout>:[2021-10-02 08:08:10.529 algo-1:220 INFO hook.py:413] Monitoring the collections: metrics, sm_metrics, losses[0m
[34m[1,2]<stdout>:[2021-10-02 08:08:10.529 algo-1:216 INFO hook.py:413] Monitoring the collections: metrics, sm_metrics, losses[0m
[34m[1,12]<stdout>:[2021-10-02 08:08:10.684 algo-2:227 INFO utils.py:27] RULE_JOB_STOP_SIGNAL_FILENAME: None[0m
[34m[1,15]<stdout>:[2021-10-02 08:08:10.684 algo-2:229 INFO utils.py:27] RULE_JOB_STOP_SIGNAL_FILENAME: None[0m
[34m[1,14]<stdout>:[2021-10-02 08:08:10.684 algo-2:228 INFO utils.py:27] RULE_JOB_STOP_SIGNAL_FILENAME: None[0m
[34m[1,10]<stdout>:[2021-10-02 08:08:10.684 algo-2:226 INFO utils.py:27] RULE_JOB_STOP_SIGNAL_FILENAME: None[0m
[34m[1,13]<stdout>:[2021-10-02 08:08:10.684 algo-2:223 INFO utils.py:27] RULE_JOB_STOP_SIGNAL_FILENAME: None[0m
[34m[1,11]<stdout>:[2021-10-02 08:08:10.684 algo-2:225 INFO utils.py:27] RULE_JOB_STOP_SIGNAL_FILENAME: None[0m
[34m[1,8]<stdout>:[2021-10-02 08:08:10.684 algo-2:678 INFO utils.py:27] RULE_JOB_STOP_SIGNAL_FILENAME: None[0m
[34m[1,9]<stdout>:[2021-10-02 08:08:10.684 algo-2:224 INFO utils.py:27] RULE_JOB_STOP_SIGNAL_FILENAME: None[0m
[34m[1,8]<stdout>:[2021-10-02 08:08:10.791 algo-2:678 INFO profiler_config_parser.py:102] User has disabled profiler.[0m
[34m[1,12]<stdout>:[2021-10-02 08:08:10.791 algo-2:227 INFO profiler_config_parser.py:102] User has disabled profiler.[0m
[34m[1,15]<stdout>:[2021-10-02 08:08:10.791 algo-2:229 INFO profiler_config_parser.py:102] User has disabled profiler.[0m
[34m[1,14]<stdout>:[2021-10-02 08:08:10.791 algo-2:228 INFO profiler_config_parser.py:102] User has disabled profiler.[0m
[34m[1,10]<stdout>:[2021-10-02 08:08:10.791 algo-2:226 INFO profiler_config_parser.py:102] User has disabled profiler.[0m
[34m[1,13]<stdout>:[2021-10-02 08:08:10.791 algo-2:223 INFO profiler_config_parser.py:102] User has disabled profiler.[0m
[34m[1,11]<stdout>:[2021-10-02 08:08:10.791 algo-2:225 INFO profiler_config_parser.py:102] User has disabled profiler.[0m
[34m[1,9]<stdout>:[2021-10-02 08:08:10.791 algo-2:224 INFO profiler_config_parser.py:102] User has disabled profiler.[0m
[34m[1,8]<stdout>:[2021-10-02 08:08:10.793 algo-2:678 INFO json_config.py:91] Creating hook from json_config at /opt/ml/input/config/debughookconfig.json.[0m
[34m[1,12]<stdout>:[2021-10-02 08:08:10.793 algo-2:227 INFO json_config.py:91] Creating hook from json_config at /opt/ml/input/config/debughookconfig.json.[0m
[34m[1,15]<stdout>:[2021-10-02 08:08:10.793 algo-2:229 INFO json_config.py:91] Creating hook from json_config at /opt/ml/input/config/debughookconfig.json.[0m
[34m[1,14]<stdout>:[2021-10-02 08:08:10.793 algo-2:228 INFO json_config.py:91] Creating hook from json_config at /opt/ml/input/config/debughookconfig.json.[0m
[34m[1,10]<stdout>:[2021-10-02 08:08:10.793 algo-2:226 INFO json_config.py:91] Creating hook from json_config at /opt/ml/input/config/debughookconfig.json.[0m
[34m[1,13]<stdout>:[2021-10-02 08:08:10.793 algo-2:223 INFO json_config.py:91] Creating hook from json_config at /opt/ml/input/config/debughookconfig.json.[0m
[34m[1,9]<stdout>:[2021-10-02 08:08:10.793 algo-2:224 INFO json_config.py:91] Creating hook from json_config at /opt/ml/input/config/debughookconfig.json.[0m
[34m[1,11]<stdout>:[2021-10-02 08:08:10.793 algo-2:225 INFO json_config.py:91] Creating hook from json_config at /opt/ml/input/config/debughookconfig.json.[0m
[34m[1,13]<stdout>:[2021-10-02 08:08:10.793 algo-2:223 INFO hook.py:199] tensorboard_dir has not been set for the hook. SMDebug will not be exporting tensorboard summaries.[0m
[34m[1,8]<stdout>:[2021-10-02 08:08:10.793 algo-2:678 INFO hook.py:199] tensorboard_dir has not been set for the hook. SMDebug will not be exporting tensorboard summaries.[0m
[34m[1,15]<stdout>:[2021-10-02 08:08:10.793 algo-2:229 INFO hook.py:199] tensorboard_dir has not been set for the hook. SMDebug will not be exporting tensorboard summaries.[0m
[34m[1,10]<stdout>:[2021-10-02 08:08:10.793 algo-2:226 INFO hook.py:199] tensorboard_dir has not been set for the hook. SMDebug will not be exporting tensorboard summaries.[0m
[34m[1,9]<stdout>:[2021-10-02 08:08:10.793 algo-2:224 INFO hook.py:199] tensorboard_dir has not been set for the hook. SMDebug will not be exporting tensorboard summaries.[0m
[34m[1,11]<stdout>:[2021-10-02 08:08:10.793 algo-2:225 INFO hook.py:199] tensorboard_dir has not been set for the hook. SMDebug will not be exporting tensorboard summaries.[0m
[34m[1,12]<stdout>:[2021-10-02 08:08:10.793 algo-2:227 INFO hook.py:199] tensorboard_dir has not been set for the hook. SMDebug will not be exporting tensorboard summaries.[0m
[34m[1,14]<stdout>:[2021-10-02 08:08:10.793 algo-2:228 INFO hook.py:199] tensorboard_dir has not been set for the hook. SMDebug will not be exporting tensorboard summaries.[0m
[34m[1,14]<stdout>:[2021-10-02 08:08:10.798 algo-2:228 INFO hook.py:253] Saving to /opt/ml/output/tensors[0m
[34m[1,10]<stdout>:[2021-10-02 08:08:10.798 algo-2:226 INFO hook.py:253] Saving to /opt/ml/output/tensors[0m
[34m[1,9]<stdout>:[2021-10-02 08:08:10.798 algo-2:224 INFO hook.py:253] Saving to /opt/ml/output/tensors[0m
[34m[1,11]<stdout>:[2021-10-02 08:08:10.798 algo-2:225 INFO hook.py:253] Saving to /opt/ml/output/tensors[0m
[34m[1,14]<stdout>:[2021-10-02 08:08:10.798 algo-2:228 INFO state_store.py:77] The checkpoint config file /opt/ml/input/config/checkpointconfig.json does not exist.[0m
[34m[1,10]<stdout>:[2021-10-02 08:08:10.798 algo-2:226 INFO state_store.py:77] The checkpoint config file /opt/ml/input/config/checkpointconfig.json does not exist.[0m
[34m[1,13]<stdout>:[2021-10-02 08:08:10.798 algo-2:223 INFO hook.py:253] Saving to /opt/ml/output/tensors[0m
[34m[1,9]<stdout>:[2021-10-02 08:08:10.798 algo-2:224 INFO state_store.py:77] The checkpoint config file /opt/ml/input/config/checkpointconfig.json does not exist.[0m
[34m[1,8]<stdout>:[2021-10-02 08:08:10.798 algo-2:678 INFO hook.py:253] Saving to /opt/ml/output/tensors[0m
[34m[1,8]<stdout>:[2021-10-02 08:08:10.798 algo-2:678 INFO state_store.py:77] The checkpoint config file /opt/ml/input/config/checkpointconfig.json does not exist.[0m
[34m[1,12]<stdout>:[2021-10-02 08:08:10.798 algo-2:227 INFO hook.py:253] Saving to /opt/ml/output/tensors[0m
[34m[1,12]<stdout>:[2021-10-02 08:08:10.798 algo-2:227 INFO state_store.py:77] The checkpoint config file /opt/ml/input/config/checkpointconfig.json does not exist.[0m
[34m[1,15]<stdout>:[2021-10-02 08:08:10.798 algo-2:229 INFO hook.py:253] Saving to /opt/ml/output/tensors[0m
[34m[1,11]<stdout>:[2021-10-02 08:08:10.798 algo-2:225 INFO state_store.py:77] The checkpoint config file /opt/ml/input/config/checkpointconfig.json does not exist.[0m
[34m[1,15]<stdout>:[2021-10-02 08:08:10.798 algo-2:229 INFO state_store.py:77] The checkpoint config file /opt/ml/input/config/checkpointconfig.json does not exist.[0m
[34m[1,13]<stdout>:[2021-10-02 08:08:10.798 algo-2:223 INFO state_store.py:77] The checkpoint config file /opt/ml/input/config/checkpointconfig.json does not exist.[0m
[34m[1,14]<stdout>:[2021-10-02 08:08:10.798 algo-2:228 INFO hook.py:413] Monitoring the collections: metrics, sm_metrics, losses[0m
[34m[1,9]<stdout>:[2021-10-02 08:08:10.798 algo-2:224 INFO hook.py:413] Monitoring the collections: sm_metrics, metrics, losses[0m
[34m[1,10]<stdout>:[2021-10-02 08:08:10.798 algo-2:226 INFO hook.py:413] Monitoring the collections: sm_metrics, losses, metrics[0m
[34m[1,11]<stdout>:[2021-10-02 08:08:10.798 algo-2:225 INFO hook.py:413] Monitoring the collections: losses, metrics, sm_metrics[0m
[34m[1,8]<stdout>:[2021-10-02 08:08:10.798 algo-2:678 INFO hook.py:413] Monitoring the collections: losses, sm_metrics, metrics[0m
[34m[1,12]<stdout>:[2021-10-02 08:08:10.798 algo-2:227 INFO hook.py:413] Monitoring the collections: losses, sm_metrics, metrics[0m
[34m[1,13]<stdout>:[2021-10-02 08:08:10.798 algo-2:223 INFO hook.py:413] Monitoring the collections: sm_metrics, metrics, losses[0m
[34m[1,15]<stdout>:[2021-10-02 08:08:10.798 algo-2:229 INFO hook.py:413] Monitoring the collections: metrics, sm_metrics, losses[0m
[34m[1,0]<stdout>:algo-1:670:1466 [0] NCCL INFO Launch mode Parallel[0m
[34m[1,0]<stdout>:Step #0    Loss: 2.316157[0m
[34m[1,0]<stdout>:algo-1:670:1458 [0] NCCL INFO Launch mode Parallel[0m
[34m[1,8]<stdout>:algo-2:678:1454 [0] NCCL INFO Launch mode Parallel[0m
[34m[1,0]<stdout>:Step #1000    Loss: 0.046124[0m
[34m[1,0]<stdout>:Step #2000    Loss: 0.033531[0m
[34m[1,0]<stdout>:Step #3000    Loss: 0.005425[0m
[34m[1,0]<stdout>:Step #4000    Loss: 0.018390[0m
[34m[1,0]<stdout>:Step #5000    Loss: 0.025407[0m
[34m[1,0]<stdout>:Step #6000    Loss: 0.000447[0m
[34m[1,0]<stdout>:Step #7000    Loss: 0.056060[0m
[34m[1,0]<stdout>:Step #8000    Loss: 0.001000[0m
[34m[1,0]<stdout>:Step #9000    Loss: 0.000856[0m
[35m2021-10-02 08:11:13,566 sagemaker-training-toolkit INFO Orted process exited[0m
[34mWarning: Permanently added 'algo-2,10.0.223.118' (ECDSA) to the list of known hosts.[0m
[34m[1,13]<stderr>:2021-10-02 08:07:58.031277: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:460] Initializing the SageMaker Profiler.[0m
[34m[1,13]<stderr>:2021-10-02 08:07:58.031428: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:105] SageMaker Profiler is not enabled. The timeline writer thread will not be started, future recorded events will be dropped.[0m
[34m[1,6]<stderr>:2021-10-02 08:07:58.058049: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:460] Initializing the SageMaker Profiler.[0m
[34m[1,6]<stderr>:2021-10-02 08:07:58.058216: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:105] SageMaker Profiler is not enabled. The timeline writer thread will not be started, future recorded events will be dropped.[0m
[34m[1,13]<stderr>:2021-10-02 08:07:58.072822: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:460] Initializing the SageMaker Profiler.[0m
[34m[1,9]<stderr>:2021-10-02 08:07:58.088642: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:460] Initializing the SageMaker Profiler.[0m
[34m[1,9]<stderr>:2021-10-02 08:07:58.088841: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:105] SageMaker Profiler is not enabled. The timeline writer thread will not be started, future recorded events will be dropped.[0m
[34m[1,6]<stderr>:2021-10-02 08:07:58.101103: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:460] Initializing the SageMaker Profiler.[0m
[34m[1,10]<stderr>:2021-10-02 08:07:58.105745: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:460] Initializing the SageMaker Profiler.[0m
[34m[1,11]<stderr>:2021-10-02 08:07:58.105755: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:460] Initializing the SageMaker Profiler.[0m
[34m[1,10]<stderr>:2021-10-02 08:07:58.105895: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:105] SageMaker Profiler is not enabled. The timeline writer thread will not be started, future recorded events will be dropped.[0m
[34m[1,11]<stderr>:2021-10-02 08:07:58.105895: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:105] SageMaker Profiler is not enabled. The timeline writer thread will not be started, future recorded events will be dropped.[0m
[34m[1,12]<stderr>:2021-10-02 08:07:58.128052: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:460] Initializing the SageMaker Profiler.[0m
[34m[1,14]<stderr>:2021-10-02 08:07:58.128056: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:460] Initializing the SageMaker Profiler.[0m
[34m[1,12]<stderr>:2021-10-02 08:07:58.128224: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:105] SageMaker Profiler is not enabled. The timeline writer thread will not be started, future recorded events will be dropped.[0m
[34m[1,14]<stderr>:2021-10-02 08:07:58.128224: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:105] SageMaker Profiler is not enabled. The timeline writer thread will not be started, future recorded events will be dropped.[0m
[34m[1,9]<stderr>:2021-10-02 08:07:58.131387: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:460] Initializing the SageMaker Profiler.[0m
[34m[1,1]<stderr>:2021-10-02 08:07:58.148153: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:460] Initializing the SageMaker Profiler.[0m
[34m[1,2]<stderr>:2021-10-02 08:07:58.148149: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:460] Initializing the SageMaker Profiler.[0m
[34m[1,10]<stderr>:2021-10-02 08:07:58.147203: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:460] Initializing the SageMaker Profiler.[0m
[34m[1,1]<stderr>:2021-10-02 08:07:58.148327: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:105] SageMaker Profiler is not enabled. The timeline writer thread will not be started, future recorded events will be dropped.[0m
[34m[1,2]<stderr>:2021-10-02 08:07:58.148327: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:105] SageMaker Profiler is not enabled. The timeline writer thread will not be started, future recorded events will be dropped.[0m
[34m[1,15]<stderr>:2021-10-02 08:07:58.147387: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:460] Initializing the SageMaker Profiler.[0m
[34m[1,15]<stderr>:2021-10-02 08:07:58.147871: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:105] SageMaker Profiler is not enabled. The timeline writer thread will not be started, future recorded events will be dropped.[0m
[34m[1,11]<stderr>:2021-10-02 08:07:58.148301: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:460] Initializing the SageMaker Profiler.[0m
[34m[1,4]<stderr>:2021-10-02 08:07:58.158762: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:460] Initializing the SageMaker Profiler.[0m
[34m[1,4]<stderr>:2021-10-02 08:07:58.158903: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:105] SageMaker Profiler is not enabled. The timeline writer thread will not be started, future recorded events will be dropped.[0m
[34m[1,3]<stderr>:2021-10-02 08:07:58.165570: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:460] Initializing the SageMaker Profiler.[0m
[34m[1,3]<stderr>:2021-10-02 08:07:58.165702: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:105] SageMaker Profiler is not enabled. The timeline writer thread will not be started, future recorded events will be dropped.[0m
[34m[1,14]<stderr>:2021-10-02 08:07:58.169923: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:460] Initializing the SageMaker Profiler.[0m
[34m[1,12]<stderr>:2021-10-02 08:07:58.171409: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:460] Initializing the SageMaker Profiler.[0m
[34m[1,5]<stderr>:2021-10-02 08:07:58.173056: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:460] Initializing the SageMaker Profiler.[0m
[34m[1,7]<stderr>:2021-10-02 08:07:58.173061: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:460] Initializing the SageMaker Profiler.[0m
[34m[1,5]<stderr>:2021-10-02 08:07:58.173206: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:105] SageMaker Profiler is not enabled. The timeline writer thread will not be started, future recorded events will be dropped.[0m
[34m[1,7]<stderr>:2021-10-02 08:07:58.173206: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:105] SageMaker Profiler is not enabled. The timeline writer thread will not be started, future recorded events will be dropped.[0m
[34m[1,2]<stderr>:2021-10-02 08:07:58.190356: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:460] Initializing the SageMaker Profiler.[0m
[34m[1,1]<stderr>:2021-10-02 08:07:58.190356: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:460] Initializing the SageMaker Profiler.[0m
[34m[1,4]<stderr>:2021-10-02 08:07:58.200729: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:460] Initializing the SageMaker Profiler.[0m
[34m[1,3]<stderr>:2021-10-02 08:07:58.206991: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:460] Initializing the SageMaker Profiler.[0m
[34m[1,7]<stderr>:2021-10-02 08:07:58.214832: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:460] Initializing the SageMaker Profiler.[0m
[34m[1,5]<stderr>:2021-10-02 08:07:58.214817: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:460] Initializing the SageMaker Profiler.[0m
[34m[1,15]<stderr>:2021-10-02 08:07:58.324977: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:460] Initializing the SageMaker Profiler.[0m
[34m[1,0]<stderr>:2021-10-02 08:07:58.660719: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:460] Initializing the SageMaker Profiler.[0m
[34m[1,0]<stderr>:2021-10-02 08:07:58.660905: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:105] SageMaker Profiler is not enabled. The timeline writer thread will not be started, future recorded events will be dropped.[0m
[34m[1,0]<stderr>:2021-10-02 08:07:58.702990: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:460] Initializing the SageMaker Profiler.[0m
[34m[1,8]<stderr>:2021-10-02 08:07:59.182765: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:460] Initializing the SageMaker Profiler.[0m
[34m[1,8]<stderr>:2021-10-02 08:07:59.182932: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:105] SageMaker Profiler is not enabled. The timeline writer thread will not be started, future recorded events will be dropped.[0m
[34m[1,8]<stderr>:2021-10-02 08:07:59.223940: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:460] Initializing the SageMaker Profiler.[0m
[34m[1,0]<stderr>:2021-10-02 08:11:09.012526: W tensorflow/python/util/util.cc:348] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.[0m
[34m[1,0]<stderr>:INFO:tensorflow:Assets written to: /opt/ml/model/1/assets[0m
[34m2021-10-02 08:11:13,548 sagemaker-training-toolkit INFO Reporting training SUCCESS[0m
2021-10-02 08:11:52 Uploading - Uploading generated training model
[35m2021-10-02 08:11:43,578 sagemaker-training-toolkit INFO MPI process finished.[0m
[35m2021-10-02 08:11:43,579 sagemaker_tensorflow_container.training WARNING No model artifact is saved under path /opt/ml/model. Your training job will not save any model files to S3.[0m
[35mFor details of how to construct your training script see:[0m
[35mhttps://sagemaker.readthedocs.io/en/stable/using_tf.html#adapting-your-local-tensorflow-script[0m
[35m2021-10-02 08:11:43,579 sagemaker-training-toolkit INFO Reporting training SUCCESS[0m
2021-10-02 08:12:12 Completed - Training job completed
ProfilerReport-1633161684: NoIssuesFound
Training seconds: 838
Billable seconds: 838
|
deeplearning.ai/COURSE5 Sequence Models/Week 01/Building a Recurrent Neural Network - Step by Step/Building+a+Recurrent+Neural+Network+-+Step+by+Step+-+v2.ipynb | ###Markdown
Building your Recurrent Neural Network - Step by StepWelcome to Course 5's first assignment! In this assignment, you will implement your first Recurrent Neural Network in numpy.Recurrent Neural Networks (RNN) are very effective for Natural Language Processing and other sequence tasks because they have "memory". They can read inputs $x^{\langle t \rangle}$ (such as words) one at a time, and remember some information/context through the hidden layer activations that get passed from one time-step to the next. This allows a uni-directional RNN to take information from the past to process later inputs. A bidirectional RNN can take context from both the past and the future. **Notation**:- Superscript $[l]$ denotes an object associated with the $l^{th}$ layer. - Example: $a^{[4]}$ is the $4^{th}$ layer activation. $W^{[5]}$ and $b^{[5]}$ are the $5^{th}$ layer parameters.- Superscript $(i)$ denotes an object associated with the $i^{th}$ example. - Example: $x^{(i)}$ is the $i^{th}$ training example input.- Superscript $\langle t \rangle$ denotes an object at the $t^{th}$ time-step. - Example: $x^{\langle t \rangle}$ is the input x at the $t^{th}$ time-step. $x^{(i)\langle t \rangle}$ is the input at the $t^{th}$ timestep of example $i$. - Subscript $i$ denotes the $i^{th}$ entry of a vector. - Example: $a^{[l]}_i$ denotes the $i^{th}$ entry of the activations in layer $l$.We assume that you are already familiar with `numpy` and/or have completed the previous courses of the specialization. Let's get started! Let's first import all the packages that you will need during this assignment.
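As a quick example of how these conventions combine (the specific indices below are chosen purely for illustration):
$$a^{[2](3)\langle 4 \rangle}_5 \text{ denotes the } 5^{th} \text{ entry of the layer-2 activation, for training example } 3, \text{ at time-step } 4.$$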
###Code
import numpy as np
from rnn_utils import *
###Output
_____no_output_____
###Markdown
1 - Forward propagation for the basic Recurrent Neural NetworkLater this week, you will generate music using an RNN. The basic RNN that you will implement has the structure below. In this example, $T_x = T_y$. **Figure 1**: Basic RNN model Here's how you can implement an RNN: **Steps**:1. Implement the calculations needed for one time-step of the RNN.2. Implement a loop over $T_x$ time-steps in order to process all the inputs, one at a time. Let's go! 1.1 - RNN cellA Recurrent neural network can be seen as the repetition of a single cell. You are first going to implement the computations for a single time-step. The following figure describes the operations for a single time-step of an RNN cell. **Figure 2**: Basic RNN cell. Takes as input $x^{\langle t \rangle}$ (current input) and $a^{\langle t - 1\rangle}$ (previous hidden state containing information from the past), and outputs $a^{\langle t \rangle}$ which is given to the next RNN cell and also used to predict $y^{\langle t \rangle}$ **Exercise**: Implement the RNN-cell described in Figure (2).**Instructions**:1. Compute the hidden state with tanh activation: $a^{\langle t \rangle} = \tanh(W_{aa} a^{\langle t-1 \rangle} + W_{ax} x^{\langle t \rangle} + b_a)$.2. Using your new hidden state $a^{\langle t \rangle}$, compute the prediction $\hat{y}^{\langle t \rangle} = softmax(W_{ya} a^{\langle t \rangle} + b_y)$. We provided you a function: `softmax`.3. Store $(a^{\langle t \rangle}, a^{\langle t-1 \rangle}, x^{\langle t \rangle}, parameters)$ in cache4. Return $a^{\langle t \rangle}$ , $y^{\langle t \rangle}$ and cacheWe will vectorize over $m$ examples. Thus, $x^{\langle t \rangle}$ will have dimension $(n_x,m)$, and $a^{\langle t \rangle}$ will have dimension $(n_a,m)$.
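The `softmax` helper mentioned above (and the `sigmoid` helper used later for the LSTM gates) are imported from `rnn_utils`, whose source is not shown in this notebook. Below is a minimal sketch of what these helpers presumably look like; the implementations are assumptions made for illustration, not code taken from `rnn_utils` itself.
```
import numpy as np

def softmax(x):
    # column-wise softmax; subtracting the per-column max keeps the exponentials numerically stable
    e_x = np.exp(x - np.max(x, axis=0, keepdims=True))
    return e_x / e_x.sum(axis=0, keepdims=True)

def sigmoid(x):
    # element-wise logistic function
    return 1 / (1 + np.exp(-x))
```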
###Code
# GRADED FUNCTION: rnn_cell_forward
def rnn_cell_forward(xt, a_prev, parameters):
"""
Implements a single forward step of the RNN-cell as described in Figure (2)
Arguments:
xt -- your input data at timestep "t", numpy array of shape (n_x, m).
a_prev -- Hidden state at timestep "t-1", numpy array of shape (n_a, m)
parameters -- python dictionary containing:
Wax -- Weight matrix multiplying the input, numpy array of shape (n_a, n_x)
Waa -- Weight matrix multiplying the hidden state, numpy array of shape (n_a, n_a)
Wya -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
ba -- Bias, numpy array of shape (n_a, 1)
by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)
Returns:
a_next -- next hidden state, of shape (n_a, m)
yt_pred -- prediction at timestep "t", numpy array of shape (n_y, m)
cache -- tuple of values needed for the backward pass, contains (a_next, a_prev, xt, parameters)
"""
# Retrieve parameters from "parameters"
Wax = parameters["Wax"]
Waa = parameters["Waa"]
Wya = parameters["Wya"]
ba = parameters["ba"]
by = parameters["by"]
### START CODE HERE ### (≈2 lines)
# compute next activation state using the formula given above
a_next = np.tanh((np.dot(Wax, xt) + np.dot(Waa, a_prev) + ba))
# compute output of the current cell using the formula given above
yt_pred = softmax(np.dot(Wya, a_next) + by)
### END CODE HERE ###
# store values you need for backward propagation in cache
cache = (a_next, a_prev, xt, parameters)
return a_next, yt_pred, cache
np.random.seed(1)
xt = np.random.randn(3,10)
a_prev = np.random.randn(5,10)
Waa = np.random.randn(5,5)
Wax = np.random.randn(5,3)
Wya = np.random.randn(2,5)
ba = np.random.randn(5,1)
by = np.random.randn(2,1)
parameters = {"Waa": Waa, "Wax": Wax, "Wya": Wya, "ba": ba, "by": by}
a_next, yt_pred, cache = rnn_cell_forward(xt, a_prev, parameters)
print("a_next[4] = ", a_next[4])
print("a_next.shape = ", a_next.shape)
print("yt_pred[1] =", yt_pred[1])
print("yt_pred.shape = ", yt_pred.shape)
###Output
a_next[4] = [ 0.59584544 0.18141802 0.61311866 0.99808218 0.85016201 0.99980978
-0.18887155 0.99815551 0.6531151 0.82872037]
a_next.shape = (5, 10)
yt_pred[1] = [ 0.9888161 0.01682021 0.21140899 0.36817467 0.98988387 0.88945212
0.36920224 0.9966312 0.9982559 0.17746526]
yt_pred.shape = (2, 10)
###Markdown
**Expected Output**: **a_next[4]**: [ 0.59584544 0.18141802 0.61311866 0.99808218 0.85016201 0.99980978 -0.18887155 0.99815551 0.6531151 0.82872037] **a_next.shape**: (5, 10) **yt[1]**: [ 0.9888161 0.01682021 0.21140899 0.36817467 0.98988387 0.88945212 0.36920224 0.9966312 0.9982559 0.17746526] **yt.shape**: (2, 10) 1.2 - RNN forward pass You can see an RNN as the repetition of the cell you've just built. If your input sequence of data is carried over 10 time steps, then you will copy the RNN cell 10 times. Each cell takes as input the hidden state from the previous cell ($a^{\langle t-1 \rangle}$) and the current time-step's input data ($x^{\langle t \rangle}$). It outputs a hidden state ($a^{\langle t \rangle}$) and a prediction ($y^{\langle t \rangle}$) for this time-step. **Figure 3**: Basic RNN. The input sequence $x = (x^{\langle 1 \rangle}, x^{\langle 2 \rangle}, ..., x^{\langle T_x \rangle})$ is carried over $T_x$ time steps. The network outputs $y = (y^{\langle 1 \rangle}, y^{\langle 2 \rangle}, ..., y^{\langle T_x \rangle})$. **Exercise**: Code the forward propagation of the RNN described in Figure (3).**Instructions**:1. Create a vector of zeros ($a$) that will store all the hidden states computed by the RNN.2. Initialize the "next" hidden state as $a_0$ (initial hidden state).3. Start looping over each time step, your incremental index is $t$ : - Update the "next" hidden state and the cache by running `rnn_cell_forward` - Store the "next" hidden state in $a$ ($t^{th}$ position) - Store the prediction in y - Add the cache to the list of caches4. Return $a$, $y$ and caches
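Before filling in the function, it can help to see the chaining written out by hand for two time-steps. The sketch below is illustrative only: it reuses `rnn_cell_forward` and the `parameters` dictionary from the test cell above, while `x_demo` and `a0_demo` are made-up inputs for this example.
```
# Illustrative only: manually unrolling two time-steps to show how each hidden
# state produced by rnn_cell_forward is fed back in as a_prev for the next step.
np.random.seed(2)
x_demo = np.random.randn(3, 10, 2)     # (n_x, m, T_x) with T_x = 2
a0_demo = np.random.randn(5, 10)       # initial hidden state of shape (n_a, m)
a1, y1, _ = rnn_cell_forward(x_demo[:, :, 0], a0_demo, parameters)  # t = 0 starts from the initial state
a2, y2, _ = rnn_cell_forward(x_demo[:, :, 1], a1, parameters)       # t = 1 consumes a1
print(a1.shape, a2.shape, y2.shape)    # (5, 10) (5, 10) (2, 10)
```
`rnn_forward` below generalizes this pattern to $T_x$ steps while also storing every $a^{\langle t \rangle}$, $\hat{y}^{\langle t \rangle}$ and cache.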
###Code
# GRADED FUNCTION: rnn_forward
def rnn_forward(x, a0, parameters):
"""
Implement the forward propagation of the recurrent neural network described in Figure (3).
Arguments:
x -- Input data for every time-step, of shape (n_x, m, T_x).
a0 -- Initial hidden state, of shape (n_a, m)
parameters -- python dictionary containing:
Waa -- Weight matrix multiplying the hidden state, numpy array of shape (n_a, n_a)
Wax -- Weight matrix multiplying the input, numpy array of shape (n_a, n_x)
Wya -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
ba -- Bias numpy array of shape (n_a, 1)
by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)
Returns:
a -- Hidden states for every time-step, numpy array of shape (n_a, m, T_x)
y_pred -- Predictions for every time-step, numpy array of shape (n_y, m, T_x)
caches -- tuple of values needed for the backward pass, contains (list of caches, x)
"""
# Initialize "caches" which will contain the list of all caches
caches = []
# Retrieve dimensions from shapes of x and Wy
n_x, m, T_x = x.shape
n_y, n_a = parameters["Wya"].shape
### START CODE HERE ###
# initialize "a" and "y" with zeros (≈2 lines)
a = np.zeros((n_a, m, T_x))
y_pred = np.zeros((n_y, m, T_x))
# Initialize a_next (≈1 line)
a_next = a0
# loop over all time-steps
for t in range(T_x):
# Update next hidden state, compute the prediction, get the cache (≈1 line)
a_next, yt_pred, cache = rnn_cell_forward(x[:,:,t], a_next, parameters)
# Save the value of the new "next" hidden state in a (≈1 line)
a[:,:,t] = a_next
# Save the value of the prediction in y (≈1 line)
y_pred[:,:,t] = yt_pred
# Append "cache" to "caches" (≈1 line)
caches.append(cache)
### END CODE HERE ###
# store values needed for backward propagation in cache
caches = (caches, x)
return a, y_pred, caches
np.random.seed(1)
x = np.random.randn(3,10,4)
a0 = np.random.randn(5,10)
Waa = np.random.randn(5,5)
Wax = np.random.randn(5,3)
Wya = np.random.randn(2,5)
ba = np.random.randn(5,1)
by = np.random.randn(2,1)
parameters = {"Waa": Waa, "Wax": Wax, "Wya": Wya, "ba": ba, "by": by}
a, y_pred, caches = rnn_forward(x, a0, parameters)
print("a[4][1] = ", a[4][1])
print("a.shape = ", a.shape)
print("y_pred[1][3] =", y_pred[1][3])
print("y_pred.shape = ", y_pred.shape)
print("caches[1][1][3] =", caches[1][1][3])
print("len(caches) = ", len(caches))
###Output
a[4][1] = [-0.99999375 0.77911235 -0.99861469 -0.99833267]
a.shape = (5, 10, 4)
y_pred[1][3] = [ 0.79560373 0.86224861 0.11118257 0.81515947]
y_pred.shape = (2, 10, 4)
caches[1][1][3] = [-1.1425182 -0.34934272 -0.20889423 0.58662319]
len(caches) = 2
###Markdown
**Expected Output**: **a[4][1]**: [-0.99999375 0.77911235 -0.99861469 -0.99833267] **a.shape**: (5, 10, 4) **y[1][3]**: [ 0.79560373 0.86224861 0.11118257 0.81515947] **y.shape**: (2, 10, 4) **cache[1][1][3]**: [-1.1425182 -0.34934272 -0.20889423 0.58662319] **len(cache)**: 2 Congratulations! You've successfully built the forward propagation of a recurrent neural network from scratch. This will work well enough for some applications, but it suffers from vanishing gradient problems. So it works best when each output $y^{\langle t \rangle}$ can be estimated using mainly "local" context (meaning information from inputs $x^{\langle t' \rangle}$ where $t'$ is not too far from $t$). In the next part, you will build a more complex LSTM model, which is better at addressing vanishing gradients. The LSTM will be better able to remember a piece of information and keep it saved for many timesteps. 2 - Long Short-Term Memory (LSTM) networkThe following figure shows the operations of an LSTM-cell. **Figure 4**: LSTM-cell. This tracks and updates a "cell state" or memory variable $c^{\langle t \rangle}$ at every time-step, which can be different from $a^{\langle t \rangle}$. Similar to the RNN example above, you will start by implementing the LSTM cell for a single time-step. Then you can iteratively call it from inside a for-loop to have it process an input with $T_x$ time-steps. About the gates - Forget gateFor the sake of this illustration, let's assume we are reading words in a piece of text, and want to use an LSTM to keep track of grammatical structures, such as whether the subject is singular or plural. If the subject changes from a singular word to a plural word, we need to find a way to get rid of our previously stored memory value of the singular/plural state. In an LSTM, the forget gate lets us do this: $$\Gamma_f^{\langle t \rangle} = \sigma(W_f[a^{\langle t-1 \rangle}, x^{\langle t \rangle}] + b_f)\tag{1} $$Here, $W_f$ are weights that govern the forget gate's behavior. We concatenate $[a^{\langle t-1 \rangle}, x^{\langle t \rangle}]$ and multiply by $W_f$. The equation above results in a vector $\Gamma_f^{\langle t \rangle}$ with values between 0 and 1. This forget gate vector will be multiplied element-wise by the previous cell state $c^{\langle t-1 \rangle}$. So if one of the values of $\Gamma_f^{\langle t \rangle}$ is 0 (or close to 0) then it means that the LSTM should remove that piece of information (e.g. the singular subject) in the corresponding component of $c^{\langle t-1 \rangle}$. If one of the values is 1, then it will keep the information. - Update gateOnce we forget that the subject being discussed is singular, we need to find a way to update it to reflect that the new subject is now plural. Here is the formula for the update gate: $$\Gamma_u^{\langle t \rangle} = \sigma(W_u[a^{\langle t-1 \rangle}, x^{\langle t \rangle}] + b_u)\tag{2} $$ Similar to the forget gate, here $\Gamma_u^{\langle t \rangle}$ is again a vector of values between 0 and 1. This will be multiplied element-wise with $\tilde{c}^{\langle t \rangle}$, in order to compute $c^{\langle t \rangle}$. - Updating the cell To update the new subject we need to create a new vector of numbers that we can add to our previous cell state. 
The equation we use is: $$ \tilde{c}^{\langle t \rangle} = \tanh(W_c[a^{\langle t-1 \rangle}, x^{\langle t \rangle}] + b_c)\tag{3} $$Finally, the new cell state is: $$ c^{\langle t \rangle} = \Gamma_f^{\langle t \rangle}* c^{\langle t-1 \rangle} + \Gamma_u^{\langle t \rangle} *\tilde{c}^{\langle t \rangle} \tag{4} $$ - Output gateTo decide which outputs we will use, we will use the following two formulas: $$ \Gamma_o^{\langle t \rangle}= \sigma(W_o[a^{\langle t-1 \rangle}, x^{\langle t \rangle}] + b_o)\tag{5}$$ $$ a^{\langle t \rangle} = \Gamma_o^{\langle t \rangle}* \tanh(c^{\langle t \rangle})\tag{6} $$Where in equation 5 you decide what to output using a sigmoid function and in equation 6 you multiply that by the $\tanh$ of the current cell state. 2.1 - LSTM cell**Exercise**: Implement the LSTM cell described in Figure (4).**Instructions**:1. Concatenate $a^{\langle t-1 \rangle}$ and $x^{\langle t \rangle}$ in a single matrix: $concat = \begin{bmatrix} a^{\langle t-1 \rangle} \\ x^{\langle t \rangle} \end{bmatrix}$2. Compute all the formulas 1-6. You can use `sigmoid()` (provided) and `np.tanh()`.3. Compute the prediction $y^{\langle t \rangle}$. You can use `softmax()` (provided).
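As a small numerical illustration of equation (4): the forget gate multiplies the previous cell state element-wise, so a gate value near 0 erases the corresponding memory component while a value near 1 preserves it. The numbers below are invented purely for this illustration and are not part of the graded code.
```
import numpy as np

# Toy forget-gate example (illustrative only):
gamma_f = np.array([[0.02], [0.97]])      # one gate value close to 0, one close to 1
c_prev_demo = np.array([[3.0], [-1.5]])   # a made-up previous cell state
print(gamma_f * c_prev_demo)              # [[ 0.06 ], [-1.455]] -> first entry is almost erased, second is kept
```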
###Code
# GRADED FUNCTION: lstm_cell_forward
def lstm_cell_forward(xt, a_prev, c_prev, parameters):
"""
Implement a single forward step of the LSTM-cell as described in Figure (4)
Arguments:
xt -- your input data at timestep "t", numpy array of shape (n_x, m).
a_prev -- Hidden state at timestep "t-1", numpy array of shape (n_a, m)
c_prev -- Memory state at timestep "t-1", numpy array of shape (n_a, m)
parameters -- python dictionary containing:
Wf -- Weight matrix of the forget gate, numpy array of shape (n_a, n_a + n_x)
bf -- Bias of the forget gate, numpy array of shape (n_a, 1)
Wi -- Weight matrix of the update gate, numpy array of shape (n_a, n_a + n_x)
bi -- Bias of the update gate, numpy array of shape (n_a, 1)
Wc -- Weight matrix of the first "tanh", numpy array of shape (n_a, n_a + n_x)
bc -- Bias of the first "tanh", numpy array of shape (n_a, 1)
Wo -- Weight matrix of the output gate, numpy array of shape (n_a, n_a + n_x)
bo -- Bias of the output gate, numpy array of shape (n_a, 1)
Wy -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)
Returns:
a_next -- next hidden state, of shape (n_a, m)
c_next -- next memory state, of shape (n_a, m)
yt_pred -- prediction at timestep "t", numpy array of shape (n_y, m)
cache -- tuple of values needed for the backward pass, contains (a_next, c_next, a_prev, c_prev, xt, parameters)
Note: ft/it/ot stand for the forget/update/output gates, cct stands for the candidate value (c tilde),
c stands for the memory value
"""
# Retrieve parameters from "parameters"
Wf = parameters["Wf"]
bf = parameters["bf"]
Wi = parameters["Wi"]
bi = parameters["bi"]
Wc = parameters["Wc"]
bc = parameters["bc"]
Wo = parameters["Wo"]
bo = parameters["bo"]
Wy = parameters["Wy"]
by = parameters["by"]
# Retrieve dimensions from shapes of xt and Wy
n_x, m = xt.shape
n_y, n_a = Wy.shape
### START CODE HERE ###
# Concatenate a_prev and xt (≈3 lines)
concat = np.concatenate((a_prev, xt), axis=0)
concat[: n_a, :] = a_prev
concat[n_a :, :] = xt
# Compute values for ft, it, cct, c_next, ot, a_next using the formulas given figure (4) (≈6 lines)
ft = sigmoid(np.dot(Wf, concat) + bf)
it = sigmoid(np.dot(Wi, concat) + bi)
cct = np.tanh(np.dot(Wc, concat) + bc)
c_next = ft * c_prev + it * cct
ot = sigmoid(np.dot(Wo, concat) + bo)
a_next = ot * np.tanh(c_next)
# Compute prediction of the LSTM cell (≈1 line)
yt_pred = softmax(np.dot(Wy, a_next) + by)
### END CODE HERE ###
# store values needed for backward propagation in cache
cache = (a_next, c_next, a_prev, c_prev, ft, it, cct, ot, xt, parameters)
return a_next, c_next, yt_pred, cache
np.random.seed(1)
xt = np.random.randn(3,10)
a_prev = np.random.randn(5,10)
c_prev = np.random.randn(5,10)
Wf = np.random.randn(5, 5+3)
bf = np.random.randn(5,1)
Wi = np.random.randn(5, 5+3)
bi = np.random.randn(5,1)
Wo = np.random.randn(5, 5+3)
bo = np.random.randn(5,1)
Wc = np.random.randn(5, 5+3)
bc = np.random.randn(5,1)
Wy = np.random.randn(2,5)
by = np.random.randn(2,1)
parameters = {"Wf": Wf, "Wi": Wi, "Wo": Wo, "Wc": Wc, "Wy": Wy, "bf": bf, "bi": bi, "bo": bo, "bc": bc, "by": by}
a_next, c_next, yt, cache = lstm_cell_forward(xt, a_prev, c_prev, parameters)
print("a_next[4] = ", a_next[4])
print("a_next.shape = ", c_next.shape)
print("c_next[2] = ", c_next[2])
print("c_next.shape = ", c_next.shape)
print("yt[1] =", yt[1])
print("yt.shape = ", yt.shape)
print("cache[1][3] =", cache[1][3])
print("len(cache) = ", len(cache))
###Output
a_next[4] = [-0.66408471 0.0036921 0.02088357 0.22834167 -0.85575339 0.00138482
0.76566531 0.34631421 -0.00215674 0.43827275]
a_next.shape = (5, 10)
c_next[2] = [ 0.63267805 1.00570849 0.35504474 0.20690913 -1.64566718 0.11832942
0.76449811 -0.0981561 -0.74348425 -0.26810932]
c_next.shape = (5, 10)
yt[1] = [ 0.79913913 0.15986619 0.22412122 0.15606108 0.97057211 0.31146381
0.00943007 0.12666353 0.39380172 0.07828381]
yt.shape = (2, 10)
cache[1][3] = [-0.16263996 1.03729328 0.72938082 -0.54101719 0.02752074 -0.30821874
0.07651101 -1.03752894 1.41219977 -0.37647422]
len(cache) = 10
###Markdown
**Expected Output**: **a_next[4]**: [-0.66408471 0.0036921 0.02088357 0.22834167 -0.85575339 0.00138482 0.76566531 0.34631421 -0.00215674 0.43827275] **a_next.shape**: (5, 10) **c_next[2]**: [ 0.63267805 1.00570849 0.35504474 0.20690913 -1.64566718 0.11832942 0.76449811 -0.0981561 -0.74348425 -0.26810932] **c_next.shape**: (5, 10) **yt[1]**: [ 0.79913913 0.15986619 0.22412122 0.15606108 0.97057211 0.31146381 0.00943007 0.12666353 0.39380172 0.07828381] **yt.shape**: (2, 10) **cache[1][3]**: [-0.16263996 1.03729328 0.72938082 -0.54101719 0.02752074 -0.30821874 0.07651101 -1.03752894 1.41219977 -0.37647422] **len(cache)**: 10 2.2 - Forward pass for LSTMNow that you have implemented one step of an LSTM, you can now iterate this over this using a for-loop to process a sequence of $T_x$ inputs. **Figure 4**: LSTM over multiple time-steps. **Exercise:** Implement `lstm_forward()` to run an LSTM over $T_x$ time-steps. **Note**: $c^{\langle 0 \rangle}$ is initialized with zeros.
###Code
# GRADED FUNCTION: lstm_forward
def lstm_forward(x, a0, parameters):
"""
Implement the forward propagation of the recurrent neural network using an LSTM-cell described in Figure (3).
Arguments:
x -- Input data for every time-step, of shape (n_x, m, T_x).
a0 -- Initial hidden state, of shape (n_a, m)
parameters -- python dictionary containing:
Wf -- Weight matrix of the forget gate, numpy array of shape (n_a, n_a + n_x)
bf -- Bias of the forget gate, numpy array of shape (n_a, 1)
Wi -- Weight matrix of the update gate, numpy array of shape (n_a, n_a + n_x)
bi -- Bias of the update gate, numpy array of shape (n_a, 1)
Wc -- Weight matrix of the first "tanh", numpy array of shape (n_a, n_a + n_x)
bc -- Bias of the first "tanh", numpy array of shape (n_a, 1)
Wo -- Weight matrix of the output gate, numpy array of shape (n_a, n_a + n_x)
bo -- Bias of the output gate, numpy array of shape (n_a, 1)
Wy -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)
Returns:
a -- Hidden states for every time-step, numpy array of shape (n_a, m, T_x)
y -- Predictions for every time-step, numpy array of shape (n_y, m, T_x)
caches -- tuple of values needed for the backward pass, contains (list of all the caches, x)
"""
# Initialize "caches", which will track the list of all the caches
caches = []
### START CODE HERE ###
# Retrieve dimensions from shapes of x and Wy (≈2 lines)
n_x, m, T_x = x.shape
n_y, n_a = parameters["Wy"].shape
# initialize "a", "c" and "y" with zeros (≈3 lines)
a = np.zeros((n_a, m, T_x))
c = np.zeros((n_a, m, T_x))
y = np.zeros((n_y, m, T_x))
# Initialize a_next and c_next (≈2 lines)
a_next = a0
c_next = np.zeros((n_a, m))
# loop over all time-steps
for t in range(T_x):
# Update next hidden state, next memory state, compute the prediction, get the cache (≈1 line)
a_next, c_next, yt, cache = lstm_cell_forward(x[:,:,t], a_next, c_next, parameters)
# Save the value of the new "next" hidden state in a (≈1 line)
a[:,:,t] = a_next
# Save the value of the prediction in y (≈1 line)
y[:,:,t] = yt
# Save the value of the next cell state (≈1 line)
c[:,:,t] = c_next
# Append the cache into caches (≈1 line)
caches.append(cache)
### END CODE HERE ###
# store values needed for backward propagation in cache
caches = (caches, x)
return a, y, c, caches
np.random.seed(1)
x = np.random.randn(3,10,7)
a0 = np.random.randn(5,10)
Wf = np.random.randn(5, 5+3)
bf = np.random.randn(5,1)
Wi = np.random.randn(5, 5+3)
bi = np.random.randn(5,1)
Wo = np.random.randn(5, 5+3)
bo = np.random.randn(5,1)
Wc = np.random.randn(5, 5+3)
bc = np.random.randn(5,1)
Wy = np.random.randn(2,5)
by = np.random.randn(2,1)
parameters = {"Wf": Wf, "Wi": Wi, "Wo": Wo, "Wc": Wc, "Wy": Wy, "bf": bf, "bi": bi, "bo": bo, "bc": bc, "by": by}
a, y, c, caches = lstm_forward(x, a0, parameters)
print("a[4][3][6] = ", a[4][3][6])
print("a.shape = ", a.shape)
print("y[1][4][3] =", y[1][4][3])
print("y.shape = ", y.shape)
print("caches[1][1[1]] =", caches[1][1][1])
print("c[1][2][1]", c[1][2][1])
print("len(caches) = ", len(caches))
###Output
a[4][3][6] = 0.0696683641954
a.shape = (5, 10, 7)
y[1][4][3] = 0.9383155806
y.shape = (2, 10, 7)
caches[1][1[1]] = [ 0.82797464 0.23009474 0.76201118 -0.22232814 -0.20075807 0.18656139
0.41005165]
c[1][2][1] 0.0813960565034
len(caches) = 2
###Markdown
**Expected Output**: **a[4][3][6]** = 0.172117767533 **a.shape** = (5, 10, 7) **y[1][4][3]** = 0.95087346185 **y.shape** = (2, 10, 7) **caches[1][1][1]** = [ 0.82797464 0.23009474 0.76201118 -0.22232814 -0.20075807 0.18656139 0.41005165] **c[1][2][1]** = -0.855544916718 **len(caches)** = 2 Congratulations! You have now implemented the forward passes for the basic RNN and the LSTM. When using a deep learning framework, implementing the forward pass is sufficient to build systems that achieve great performance. The rest of this notebook is optional, and will not be graded. 3 - Backpropagation in recurrent neural networks (OPTIONAL / UNGRADED)In modern deep learning frameworks, you only have to implement the forward pass, and the framework takes care of the backward pass, so most deep learning engineers do not need to bother with the details of the backward pass. If however you are an expert in calculus and want to see the details of backprop in RNNs, you can work through this optional portion of the notebook. When in an earlier course you implemented a simple (fully connected) neural network, you used backpropagation to compute the derivatives with respect to the cost to update the parameters. Similarly, in recurrent neural networks you need to calculate the derivatives with respect to the cost in order to update the parameters. The backprop equations are quite complicated and we did not derive them in lecture. However, we will briefly present them below. 3.1 - Basic RNN backward passWe will start by computing the backward pass for the basic RNN-cell. **Figure 5**: RNN-cell's backward pass. Just like in a fully-connected neural network, the derivative of the cost function $J$ backpropagates through the RNN by following the chain-rule from calculus. The chain-rule is also used to calculate $(\frac{\partial J}{\partial W_{ax}},\frac{\partial J}{\partial W_{aa}},\frac{\partial J}{\partial b})$ to update the parameters $(W_{ax}, W_{aa}, b_a)$. Deriving the one step backward functions: To compute the `rnn_cell_backward` you need to compute the following equations. It is a good exercise to derive them by hand. The derivative of $\tanh$ is $1-\tanh(x)^2$. You can find the complete proof [here](https://www.wyzant.com/resources/lessons/math/calculus/derivative_proofs/tanx). Note that: $ \mathrm{sech}(x)^2 = 1 - \tanh(x)^2$Similarly for $\frac{ \partial a^{\langle t \rangle} } {\partial W_{ax}}, \frac{ \partial a^{\langle t \rangle} } {\partial W_{aa}}, \frac{ \partial a^{\langle t \rangle} } {\partial b}$, the derivative of $\tanh(u)$ is $(1-\tanh(u)^2)du$. The final two equations also follow the same rule and are derived using the $\tanh$ derivative. Note that the arrangement is done in a way to get the same dimensions to match.
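For reference, carrying the chain rule through $a^{\langle t \rangle} = \tanh(W_{ax}x^{\langle t \rangle} + W_{aa}a^{\langle t-1 \rangle} + b_a)$ with $d\text{tanh} = (1 - (a^{\langle t \rangle})^2) * da_{next}$ gives one consistent set of gradients to check your own derivation against (a sketch only, using the shapes from the docstring below): $$ dx^{\langle t \rangle} = W_{ax}^T \, d\text{tanh}, \quad dW_{ax} = d\text{tanh} \, x^{\langle t \rangle T}, \quad da_{prev} = W_{aa}^T \, d\text{tanh}, \quad dW_{aa} = d\text{tanh} \, a^{\langle t-1 \rangle T}, \quad db_a = \sum_{batch} d\text{tanh} $$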
###Code
def rnn_cell_backward(da_next, cache):
"""
Implements the backward pass for the RNN-cell (single time-step).
Arguments:
da_next -- Gradient of loss with respect to next hidden state
cache -- python dictionary containing useful values (output of rnn_cell_forward())
Returns:
gradients -- python dictionary containing:
dx -- Gradients of input data, of shape (n_x, m)
da_prev -- Gradients of previous hidden state, of shape (n_a, m)
dWax -- Gradients of input-to-hidden weights, of shape (n_a, n_x)
dWaa -- Gradients of hidden-to-hidden weights, of shape (n_a, n_a)
dba -- Gradients of bias vector, of shape (n_a, 1)
"""
# Retrieve values from cache
(a_next, a_prev, xt, parameters) = cache
# Retrieve values from parameters
Wax = parameters["Wax"]
Waa = parameters["Waa"]
Wya = parameters["Wya"]
ba = parameters["ba"]
by = parameters["by"]
### START CODE HERE ###
# compute the gradient of tanh with respect to a_next (≈1 line)
dtanh = None
# compute the gradient of the loss with respect to Wax (≈2 lines)
dxt = None
dWax = None
# compute the gradient with respect to Waa (≈2 lines)
da_prev = None
dWaa = None
# compute the gradient with respect to b (≈1 line)
dba = None
### END CODE HERE ###
# Store the gradients in a python dictionary
gradients = {"dxt": dxt, "da_prev": da_prev, "dWax": dWax, "dWaa": dWaa, "dba": dba}
return gradients
np.random.seed(1)
xt = np.random.randn(3,10)
a_prev = np.random.randn(5,10)
Wax = np.random.randn(5,3)
Waa = np.random.randn(5,5)
Wya = np.random.randn(2,5)
ba = np.random.randn(5,1)
by = np.random.randn(2,1)
parameters = {"Wax": Wax, "Waa": Waa, "Wya": Wya, "ba": ba, "by": by}
a_next, yt, cache = rnn_cell_forward(xt, a_prev, parameters)
da_next = np.random.randn(5,10)
gradients = rnn_cell_backward(da_next, cache)
print("gradients[\"dxt\"][1][2] =", gradients["dxt"][1][2])
print("gradients[\"dxt\"].shape =", gradients["dxt"].shape)
print("gradients[\"da_prev\"][2][3] =", gradients["da_prev"][2][3])
print("gradients[\"da_prev\"].shape =", gradients["da_prev"].shape)
print("gradients[\"dWax\"][3][1] =", gradients["dWax"][3][1])
print("gradients[\"dWax\"].shape =", gradients["dWax"].shape)
print("gradients[\"dWaa\"][1][2] =", gradients["dWaa"][1][2])
print("gradients[\"dWaa\"].shape =", gradients["dWaa"].shape)
print("gradients[\"dba\"][4] =", gradients["dba"][4])
print("gradients[\"dba\"].shape =", gradients["dba"].shape)
###Output
_____no_output_____
###Markdown
**Expected Output**: **gradients["dxt"][1][2]** = -0.460564103059 **gradients["dxt"].shape** = (3, 10) **gradients["da_prev"][2][3]** = 0.0842968653807 **gradients["da_prev"].shape** = (5, 10) **gradients["dWax"][3][1]** = 0.393081873922 **gradients["dWax"].shape** = (5, 3) **gradients["dWaa"][1][2]** = -0.28483955787 **gradients["dWaa"].shape** = (5, 5) **gradients["dba"][4]** = [ 0.80517166] **gradients["dba"].shape** = (5, 1) Backward pass through the RNNComputing the gradients of the cost with respect to $a^{\langle t \rangle}$ at every time-step $t$ is useful because it is what helps the gradient backpropagate to the previous RNN-cell. To do so, you need to iterate through all the time steps starting at the end, and at each step, you increment the overall $db_a$, $dW_{aa}$, $dW_{ax}$ and you store $dx$.**Instructions**:Implement the `rnn_backward` function. Initialize the return variables with zeros first and then loop through all the time steps while calling the `rnn_cell_backward` at each time timestep, update the other variables accordingly.
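A hint on the only subtle line (a sketch using the variable names of the template below): inside the loop the gradient that enters the cell at step $t$ is the upstream gradient plus the gradient flowing back from the following time-step, i.e. `gradients = rnn_cell_backward(da[:, :, t] + da_prevt, caches[t])`; the rest of the loop stores `dxt` into `dx[:, :, t]`, accumulates `dWax`, `dWaa` and `dba`, and after the loop `da0` is the final `da_prevt`.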
###Code
def rnn_backward(da, caches):
"""
Implement the backward pass for a RNN over an entire sequence of input data.
Arguments:
da -- Upstream gradients of all hidden states, of shape (n_a, m, T_x)
caches -- tuple containing information from the forward pass (rnn_forward)
Returns:
gradients -- python dictionary containing:
dx -- Gradient w.r.t. the input data, numpy-array of shape (n_x, m, T_x)
da0 -- Gradient w.r.t the initial hidden state, numpy-array of shape (n_a, m)
dWax -- Gradient w.r.t the input's weight matrix, numpy-array of shape (n_a, n_x)
dWaa -- Gradient w.r.t the hidden state's weight matrix, numpy-arrayof shape (n_a, n_a)
dba -- Gradient w.r.t the bias, of shape (n_a, 1)
"""
### START CODE HERE ###
# Retrieve values from the first cache (t=1) of caches (≈2 lines)
(caches, x) = None
(a1, a0, x1, parameters) = None
# Retrieve dimensions from da's and x1's shapes (≈2 lines)
n_a, m, T_x = None
n_x, m = None
# initialize the gradients with the right sizes (≈6 lines)
dx = None
dWax = None
dWaa = None
dba = None
da0 = None
da_prevt = None
# Loop through all the time steps
for t in reversed(range(None)):
# Compute gradients at time step t. Choose wisely the "da_next" and the "cache" to use in the backward propagation step. (≈1 line)
gradients = None
# Retrieve derivatives from gradients (≈ 1 line)
dxt, da_prevt, dWaxt, dWaat, dbat = gradients["dxt"], gradients["da_prev"], gradients["dWax"], gradients["dWaa"], gradients["dba"]
# Increment global derivatives w.r.t parameters by adding their derivative at time-step t (≈4 lines)
dx[:, :, t] = None
dWax += None
dWaa += None
dba += None
# Set da0 to the gradient of a which has been backpropagated through all time-steps (≈1 line)
da0 = None
### END CODE HERE ###
# Store the gradients in a python dictionary
gradients = {"dx": dx, "da0": da0, "dWax": dWax, "dWaa": dWaa,"dba": dba}
return gradients
np.random.seed(1)
x = np.random.randn(3,10,4)
a0 = np.random.randn(5,10)
Wax = np.random.randn(5,3)
Waa = np.random.randn(5,5)
Wya = np.random.randn(2,5)
ba = np.random.randn(5,1)
by = np.random.randn(2,1)
parameters = {"Wax": Wax, "Waa": Waa, "Wya": Wya, "ba": ba, "by": by}
a, y, caches = rnn_forward(x, a0, parameters)
da = np.random.randn(5, 10, 4)
gradients = rnn_backward(da, caches)
print("gradients[\"dx\"][1][2] =", gradients["dx"][1][2])
print("gradients[\"dx\"].shape =", gradients["dx"].shape)
print("gradients[\"da0\"][2][3] =", gradients["da0"][2][3])
print("gradients[\"da0\"].shape =", gradients["da0"].shape)
print("gradients[\"dWax\"][3][1] =", gradients["dWax"][3][1])
print("gradients[\"dWax\"].shape =", gradients["dWax"].shape)
print("gradients[\"dWaa\"][1][2] =", gradients["dWaa"][1][2])
print("gradients[\"dWaa\"].shape =", gradients["dWaa"].shape)
print("gradients[\"dba\"][4] =", gradients["dba"][4])
print("gradients[\"dba\"].shape =", gradients["dba"].shape)
###Output
_____no_output_____
###Markdown
**Expected Output**: **gradients["dx"][1][2]** = [-2.07101689 -0.59255627 0.02466855 0.01483317] **gradients["dx"].shape** = (3, 10, 4) **gradients["da0"][2][3]** = -0.314942375127 **gradients["da0"].shape** = (5, 10) **gradients["dWax"][3][1]** = 11.2641044965 **gradients["dWax"].shape** = (5, 3) **gradients["dWaa"][1][2]** = 2.30333312658 **gradients["dWaa"].shape** = (5, 5) **gradients["dba"][4]** = [-0.74747722] **gradients["dba"].shape** = (5, 1) 3.2 - LSTM backward pass 3.2.1 One Step backwardThe LSTM backward pass is slighltly more complicated than the forward one. We have provided you with all the equations for the LSTM backward pass below. (If you enjoy calculus exercises feel free to try deriving these from scratch yourself.) 3.2.2 gate derivatives$$d \Gamma_o^{\langle t \rangle} = da_{next}*\tanh(c_{next}) * \Gamma_o^{\langle t \rangle}*(1-\Gamma_o^{\langle t \rangle})\tag{7}$$$$d\tilde c^{\langle t \rangle} = dc_{next}*\Gamma_i^{\langle t \rangle}+ \Gamma_o^{\langle t \rangle} (1-\tanh(c_{next})^2) * i_t * da_{next} * \tilde c^{\langle t \rangle} * (1-\tanh(\tilde c)^2) \tag{8}$$$$d\Gamma_u^{\langle t \rangle} = dc_{next}*\tilde c^{\langle t \rangle} + \Gamma_o^{\langle t \rangle} (1-\tanh(c_{next})^2) * \tilde c^{\langle t \rangle} * da_{next}*\Gamma_u^{\langle t \rangle}*(1-\Gamma_u^{\langle t \rangle})\tag{9}$$$$d\Gamma_f^{\langle t \rangle} = dc_{next}*\tilde c_{prev} + \Gamma_o^{\langle t \rangle} (1-\tanh(c_{next})^2) * c_{prev} * da_{next}*\Gamma_f^{\langle t \rangle}*(1-\Gamma_f^{\langle t \rangle})\tag{10}$$ 3.2.3 parameter derivatives $$ dW_f = d\Gamma_f^{\langle t \rangle} * \begin{pmatrix} a_{prev} \\ x_t\end{pmatrix}^T \tag{11} $$$$ dW_u = d\Gamma_u^{\langle t \rangle} * \begin{pmatrix} a_{prev} \\ x_t\end{pmatrix}^T \tag{12} $$$$ dW_c = d\tilde c^{\langle t \rangle} * \begin{pmatrix} a_{prev} \\ x_t\end{pmatrix}^T \tag{13} $$$$ dW_o = d\Gamma_o^{\langle t \rangle} * \begin{pmatrix} a_{prev} \\ x_t\end{pmatrix}^T \tag{14}$$To calculate $db_f, db_u, db_c, db_o$ you just need to sum across the horizontal (axis= 1) axis on $d\Gamma_f^{\langle t \rangle}, d\Gamma_u^{\langle t \rangle}, d\tilde c^{\langle t \rangle}, d\Gamma_o^{\langle t \rangle}$ respectively. Note that you should have the `keep_dims = True` option.Finally, you will compute the derivative with respect to the previous hidden state, previous memory state, and input.$$ da_{prev} = W_f^T*d\Gamma_f^{\langle t \rangle} + W_u^T * d\Gamma_u^{\langle t \rangle}+ W_c^T * d\tilde c^{\langle t \rangle} + W_o^T * d\Gamma_o^{\langle t \rangle} \tag{15}$$Here, the weights for equations 13 are the first n_a, (i.e. $W_f = W_f[:n_a,:]$ etc...)$$ dc_{prev} = dc_{next}\Gamma_f^{\langle t \rangle} + \Gamma_o^{\langle t \rangle} * (1- \tanh(c_{next})^2)*\Gamma_f^{\langle t \rangle}*da_{next} \tag{16}$$$$ dx^{\langle t \rangle} = W_f^T*d\Gamma_f^{\langle t \rangle} + W_u^T * d\Gamma_u^{\langle t \rangle}+ W_c^T * d\tilde c_t + W_o^T * d\Gamma_o^{\langle t \rangle}\tag{17} $$where the weights for equation 15 are from n_a to the end, (i.e. $W_f = W_f[n_a:,:]$ etc...)**Exercise:** Implement `lstm_cell_backward` by implementing equations $7-17$ below. Good luck! :)
###Code
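# A sketch of how equations (7)-(17) can be written in numpy, assuming the cache layout
# returned by lstm_cell_forward above. It is kept as comments so the exercise template
# below stays intact, and it is one possible formulation rather than the graded solution:
# dot = da_next * np.tanh(c_next) * ot * (1 - ot)
# dcct = (dc_next * it + ot * (1 - np.tanh(c_next) ** 2) * it * da_next) * (1 - cct ** 2)
# dit = (dc_next * cct + ot * (1 - np.tanh(c_next) ** 2) * cct * da_next) * it * (1 - it)
# dft = (dc_next * c_prev + ot * (1 - np.tanh(c_next) ** 2) * c_prev * da_next) * ft * (1 - ft)
# concat = np.concatenate((a_prev, xt), axis=0)
# dWf, dWi, dWc, dWo = (np.dot(d, concat.T) for d in (dft, dit, dcct, dot))
# dbf, dbi, dbc, dbo = (np.sum(d, axis=1, keepdims=True) for d in (dft, dit, dcct, dot))
# Since each W has shape (n_a, n_a + n_x), its first n_a columns act on a_prev and the
# remaining n_x columns act on xt, e.g.
# da_prev = (np.dot(parameters["Wf"][:, :n_a].T, dft) + np.dot(parameters["Wi"][:, :n_a].T, dit)
#            + np.dot(parameters["Wc"][:, :n_a].T, dcct) + np.dot(parameters["Wo"][:, :n_a].T, dot))
# dxt = (np.dot(parameters["Wf"][:, n_a:].T, dft) + np.dot(parameters["Wi"][:, n_a:].T, dit)
#        + np.dot(parameters["Wc"][:, n_a:].T, dcct) + np.dot(parameters["Wo"][:, n_a:].T, dot))
# dc_prev = dc_next * ft + ot * (1 - np.tanh(c_next) ** 2) * ft * da_next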
def lstm_cell_backward(da_next, dc_next, cache):
"""
Implement the backward pass for the LSTM-cell (single time-step).
Arguments:
da_next -- Gradients of next hidden state, of shape (n_a, m)
dc_next -- Gradients of next cell state, of shape (n_a, m)
cache -- cache storing information from the forward pass
Returns:
gradients -- python dictionary containing:
dxt -- Gradient of input data at time-step t, of shape (n_x, m)
da_prev -- Gradient w.r.t. the previous hidden state, numpy array of shape (n_a, m)
dc_prev -- Gradient w.r.t. the previous memory state, of shape (n_a, m, T_x)
dWf -- Gradient w.r.t. the weight matrix of the forget gate, numpy array of shape (n_a, n_a + n_x)
dWi -- Gradient w.r.t. the weight matrix of the update gate, numpy array of shape (n_a, n_a + n_x)
dWc -- Gradient w.r.t. the weight matrix of the memory gate, numpy array of shape (n_a, n_a + n_x)
dWo -- Gradient w.r.t. the weight matrix of the output gate, numpy array of shape (n_a, n_a + n_x)
dbf -- Gradient w.r.t. biases of the forget gate, of shape (n_a, 1)
dbi -- Gradient w.r.t. biases of the update gate, of shape (n_a, 1)
dbc -- Gradient w.r.t. biases of the memory gate, of shape (n_a, 1)
dbo -- Gradient w.r.t. biases of the output gate, of shape (n_a, 1)
"""
# Retrieve information from "cache"
(a_next, c_next, a_prev, c_prev, ft, it, cct, ot, xt, parameters) = cache
### START CODE HERE ###
# Retrieve dimensions from xt's and a_next's shape (≈2 lines)
n_x, m = None
n_a, m = None
# Compute gates related derivatives, you can find their values can be found by looking carefully at equations (7) to (10) (≈4 lines)
dot = None
dcct = None
dit = None
dft = None
# Code equations (7) to (10) (≈4 lines)
dit = None
dft = None
dot = None
dcct = None
# Compute parameters related derivatives. Use equations (11)-(14) (≈8 lines)
dWf = None
dWi = None
dWc = None
dWo = None
dbf = None
dbi = None
dbc = None
dbo = None
# Compute derivatives w.r.t previous hidden state, previous memory state and input. Use equations (15)-(17). (≈3 lines)
da_prev = None
dc_prev = None
dxt = None
### END CODE HERE ###
# Save gradients in dictionary
gradients = {"dxt": dxt, "da_prev": da_prev, "dc_prev": dc_prev, "dWf": dWf,"dbf": dbf, "dWi": dWi,"dbi": dbi,
"dWc": dWc,"dbc": dbc, "dWo": dWo,"dbo": dbo}
return gradients
np.random.seed(1)
xt = np.random.randn(3,10)
a_prev = np.random.randn(5,10)
c_prev = np.random.randn(5,10)
Wf = np.random.randn(5, 5+3)
bf = np.random.randn(5,1)
Wi = np.random.randn(5, 5+3)
bi = np.random.randn(5,1)
Wo = np.random.randn(5, 5+3)
bo = np.random.randn(5,1)
Wc = np.random.randn(5, 5+3)
bc = np.random.randn(5,1)
Wy = np.random.randn(2,5)
by = np.random.randn(2,1)
parameters = {"Wf": Wf, "Wi": Wi, "Wo": Wo, "Wc": Wc, "Wy": Wy, "bf": bf, "bi": bi, "bo": bo, "bc": bc, "by": by}
a_next, c_next, yt, cache = lstm_cell_forward(xt, a_prev, c_prev, parameters)
da_next = np.random.randn(5,10)
dc_next = np.random.randn(5,10)
gradients = lstm_cell_backward(da_next, dc_next, cache)
print("gradients[\"dxt\"][1][2] =", gradients["dxt"][1][2])
print("gradients[\"dxt\"].shape =", gradients["dxt"].shape)
print("gradients[\"da_prev\"][2][3] =", gradients["da_prev"][2][3])
print("gradients[\"da_prev\"].shape =", gradients["da_prev"].shape)
print("gradients[\"dc_prev\"][2][3] =", gradients["dc_prev"][2][3])
print("gradients[\"dc_prev\"].shape =", gradients["dc_prev"].shape)
print("gradients[\"dWf\"][3][1] =", gradients["dWf"][3][1])
print("gradients[\"dWf\"].shape =", gradients["dWf"].shape)
print("gradients[\"dWi\"][1][2] =", gradients["dWi"][1][2])
print("gradients[\"dWi\"].shape =", gradients["dWi"].shape)
print("gradients[\"dWc\"][3][1] =", gradients["dWc"][3][1])
print("gradients[\"dWc\"].shape =", gradients["dWc"].shape)
print("gradients[\"dWo\"][1][2] =", gradients["dWo"][1][2])
print("gradients[\"dWo\"].shape =", gradients["dWo"].shape)
print("gradients[\"dbf\"][4] =", gradients["dbf"][4])
print("gradients[\"dbf\"].shape =", gradients["dbf"].shape)
print("gradients[\"dbi\"][4] =", gradients["dbi"][4])
print("gradients[\"dbi\"].shape =", gradients["dbi"].shape)
print("gradients[\"dbc\"][4] =", gradients["dbc"][4])
print("gradients[\"dbc\"].shape =", gradients["dbc"].shape)
print("gradients[\"dbo\"][4] =", gradients["dbo"][4])
print("gradients[\"dbo\"].shape =", gradients["dbo"].shape)
###Output
_____no_output_____
###Markdown
**Expected Output**: **gradients["dxt"][1][2]** = 3.23055911511 **gradients["dxt"].shape** = (3, 10) **gradients["da_prev"][2][3]** = -0.0639621419711 **gradients["da_prev"].shape** = (5, 10) **gradients["dc_prev"][2][3]** = 0.797522038797 **gradients["dc_prev"].shape** = (5, 10) **gradients["dWf"][3][1]** = -0.147954838164 **gradients["dWf"].shape** = (5, 8) **gradients["dWi"][1][2]** = 1.05749805523 **gradients["dWi"].shape** = (5, 8) **gradients["dWc"][3][1]** = 2.30456216369 **gradients["dWc"].shape** = (5, 8) **gradients["dWo"][1][2]** = 0.331311595289 **gradients["dWo"].shape** = (5, 8) **gradients["dbf"][4]** = [ 0.18864637] **gradients["dbf"].shape** = (5, 1) **gradients["dbi"][4]** = [-0.40142491] **gradients["dbi"].shape** = (5, 1) **gradients["dbc"][4]** = [ 0.25587763] **gradients["dbc"].shape** = (5, 1) **gradients["dbo"][4]** = [ 0.13893342] **gradients["dbo"].shape** = (5, 1) 3.3 Backward pass through the LSTM RNNThis part is very similar to the `rnn_backward` function you implemented above. You will first create variables of the same dimension as your return variables. You will then iterate over all the time steps starting from the end and call the one step function you implemented for LSTM at each iteration. You will then update the parameters by summing them individually. Finally return a dictionary with the new gradients. **Instructions**: Implement the `lstm_backward` function. Create a for loop starting from $T_x$ and going backward. For each step call `lstm_cell_backward` and update the your old gradients by adding the new gradients to them. Note that `dxt` is not updated but is stored.
###Code
def lstm_backward(da, caches):
"""
Implement the backward pass for the RNN with LSTM-cell (over a whole sequence).
Arguments:
da -- Gradients w.r.t the hidden states, numpy-array of shape (n_a, m, T_x)
dc -- Gradients w.r.t the memory states, numpy-array of shape (n_a, m, T_x)
caches -- cache storing information from the forward pass (lstm_forward)
Returns:
gradients -- python dictionary containing:
dx -- Gradient of inputs, of shape (n_x, m, T_x)
da0 -- Gradient w.r.t. the previous hidden state, numpy array of shape (n_a, m)
dWf -- Gradient w.r.t. the weight matrix of the forget gate, numpy array of shape (n_a, n_a + n_x)
dWi -- Gradient w.r.t. the weight matrix of the update gate, numpy array of shape (n_a, n_a + n_x)
dWc -- Gradient w.r.t. the weight matrix of the memory gate, numpy array of shape (n_a, n_a + n_x)
dWo -- Gradient w.r.t. the weight matrix of the save gate, numpy array of shape (n_a, n_a + n_x)
dbf -- Gradient w.r.t. biases of the forget gate, of shape (n_a, 1)
dbi -- Gradient w.r.t. biases of the update gate, of shape (n_a, 1)
dbc -- Gradient w.r.t. biases of the memory gate, of shape (n_a, 1)
dbo -- Gradient w.r.t. biases of the save gate, of shape (n_a, 1)
"""
# Retrieve values from the first cache (t=1) of caches.
(caches, x) = caches
(a1, c1, a0, c0, f1, i1, cc1, o1, x1, parameters) = caches[0]
### START CODE HERE ###
# Retrieve dimensions from da's and x1's shapes (≈2 lines)
n_a, m, T_x = None
n_x, m = None
# initialize the gradients with the right sizes (≈12 lines)
dx = None
da0 = None
da_prevt = None
dc_prevt = None
dWf = None
dWi = None
dWc = None
dWo = None
dbf = None
dbi = None
dbc = None
dbo = None
# loop back over the whole sequence
for t in reversed(range(None)):
# Compute all gradients using lstm_cell_backward
gradients = None
# Store or add the gradient to the parameters' previous step's gradient
dx[:,:,t] = None
dWf = None
dWi = None
dWc = None
dWo = None
dbf = None
dbi = None
dbc = None
dbo = None
# Set the first activation's gradient to the backpropagated gradient da_prev.
da0 = None
### END CODE HERE ###
# Store the gradients in a python dictionary
gradients = {"dx": dx, "da0": da0, "dWf": dWf,"dbf": dbf, "dWi": dWi,"dbi": dbi,
"dWc": dWc,"dbc": dbc, "dWo": dWo,"dbo": dbo}
return gradients
np.random.seed(1)
x = np.random.randn(3,10,7)
a0 = np.random.randn(5,10)
Wf = np.random.randn(5, 5+3)
bf = np.random.randn(5,1)
Wi = np.random.randn(5, 5+3)
bi = np.random.randn(5,1)
Wo = np.random.randn(5, 5+3)
bo = np.random.randn(5,1)
Wc = np.random.randn(5, 5+3)
bc = np.random.randn(5,1)
parameters = {"Wf": Wf, "Wi": Wi, "Wo": Wo, "Wc": Wc, "Wy": Wy, "bf": bf, "bi": bi, "bo": bo, "bc": bc, "by": by}
a, y, c, caches = lstm_forward(x, a0, parameters)
da = np.random.randn(5, 10, 4)
gradients = lstm_backward(da, caches)
print("gradients[\"dx\"][1][2] =", gradients["dx"][1][2])
print("gradients[\"dx\"].shape =", gradients["dx"].shape)
print("gradients[\"da0\"][2][3] =", gradients["da0"][2][3])
print("gradients[\"da0\"].shape =", gradients["da0"].shape)
print("gradients[\"dWf\"][3][1] =", gradients["dWf"][3][1])
print("gradients[\"dWf\"].shape =", gradients["dWf"].shape)
print("gradients[\"dWi\"][1][2] =", gradients["dWi"][1][2])
print("gradients[\"dWi\"].shape =", gradients["dWi"].shape)
print("gradients[\"dWc\"][3][1] =", gradients["dWc"][3][1])
print("gradients[\"dWc\"].shape =", gradients["dWc"].shape)
print("gradients[\"dWo\"][1][2] =", gradients["dWo"][1][2])
print("gradients[\"dWo\"].shape =", gradients["dWo"].shape)
print("gradients[\"dbf\"][4] =", gradients["dbf"][4])
print("gradients[\"dbf\"].shape =", gradients["dbf"].shape)
print("gradients[\"dbi\"][4] =", gradients["dbi"][4])
print("gradients[\"dbi\"].shape =", gradients["dbi"].shape)
print("gradients[\"dbc\"][4] =", gradients["dbc"][4])
print("gradients[\"dbc\"].shape =", gradients["dbc"].shape)
print("gradients[\"dbo\"][4] =", gradients["dbo"][4])
print("gradients[\"dbo\"].shape =", gradients["dbo"].shape)
###Output
_____no_output_____ |
On the Composition of the Long Tail of Business Processes.ipynb | ###Markdown
Load Data Download and Extract Logs
###Code
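# Imports and small helpers used throughout this notebook; no import cell is included in
# this dump, so a plausible one is reconstructed here. The gunzip() and save_obj() helpers
# below are assumptions (project-specific helpers that are not shown), and
# plotGraphFromVariantsSimple() used in the last cell is assumed to come from a
# project-specific helper module and is not reconstructed.
import copy
import gzip
import pickle
import shutil
from math import pi
from urllib.request import urlretrieve
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from joblib import Parallel, delayed
from scipy.cluster import hierarchy
from scipy.cluster.hierarchy import fcluster
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import silhouette_score, davies_bouldin_score, calinski_harabasz_score
from pm4py.objects.log.importer.xes import importer as xes_importer
from pm4py.objects.conversion.log import converter

def gunzip(target):
    # assumed helper: decompress "<target>.gz" into "<target>"
    with gzip.open(target + ".gz", "rb") as f_in, open(target, "wb") as f_out:
        shutil.copyfileobj(f_in, f_out)

def save_obj(obj, name):
    # assumed helper: pickle an object to disk
    with open(name, "wb") as f:
        pickle.dump(obj, f)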
urlretrieve("https://data.4tu.nl/ndownloader/files/24060575", "BPI_2014.csv")
urlretrieve("https://data.4tu.nl/ndownloader/files/24027287", "BPI_2012.xes.gz")
gunzip("BPI_2012.xes")
urlretrieve("https://data.4tu.nl/ndownloader/files/24063818", "BPI2015_1.xes")
urlretrieve("https://data.4tu.nl/ndownloader/files/24044639", "BPI2015_2.xes")
urlretrieve("https://data.4tu.nl/ndownloader/files/24076154", "BPI2015_3.xes")
urlretrieve("https://data.4tu.nl/ndownloader/files/24045332", "BPI2015_4.xes")
urlretrieve("https://data.4tu.nl/ndownloader/files/24069341", "BPI2015_5.xes")
urlretrieve("https://data.4tu.nl/ndownloader/files/24044117", "BPI_2017.xes.gz")
gunzip("BPI_2017.xes")
###Output
_____no_output_____
###Markdown
Import Logs
###Code
dfs = []
# BPI Challenge 2014 is only available as CSV. Column Names have to be standardized to work with the rest of the code. Incomplete Traces were removed
df = pd.read_csv("BPI_2014.csv", sep=";")
df = df.loc[df["Incident ID"].isin(df.loc[df["IncidentActivity_Type"] == "Open", "Incident ID"].tolist()) & df["Incident ID"].isin(df.loc[df["IncidentActivity_Type"] == "Closed", "Incident ID"].tolist())]
df.columns = ["case:concept:name", "time:timestamp", "actnumber", "concept:name", "org:resource", "kmnumber", "intid"]
df["time:timestamp"] = pd.to_datetime(df["time:timestamp"], format="%d-%m-%Y %H:%M:%S")
df = df.sort_values(["case:concept:name", "time:timestamp"])
dfs.append(df)
# BPI Challenge 2012
log = xes_importer.apply('BPI_2012.xes')
df = converter.apply(log, variant=converter.Variants.TO_DATA_FRAME)
dfs.append(df)
# BPI Challenge 2017
log = xes_importer.apply('BPI_2017.xes')
df = converter.apply(log, variant=converter.Variants.TO_DATA_FRAME)
dfs.append(df)
# BPI Challenge 2015 consists of 5 logs. We append the logs to get one large log
df = pd.DataFrame()
for i in range(1,6):
log = xes_importer.apply('BPI2015_'+ str(i) +'.xes')
dd = converter.apply(log, variant=converter.Variants.TO_DATA_FRAME)
df = df.append(dd)
dfs.append(df)
###Output
_____no_output_____
###Markdown
Ngram Vectorizing
###Code
ngramlist = []
tracelist = []
for df in dfs:
traces = df.groupby("case:concept:name")["concept:name"].agg(lambda col: ",".join(col.tolist())).reset_index()
tracelist.append(traces)
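# Represent every trace by the counts of its activity 3-grams: with this token_pattern each
# comma-separated activity name (letters, digits, spaces, parentheses) becomes one token, and
# ngram_range=(3, 3) then counts triples of consecutive activities per trace.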
vectorizer = CountVectorizer(ngram_range=(3, 3), token_pattern='(?u)[\w \(\)]+', analyzer='word')
ngrams = vectorizer.fit_transform(traces["concept:name"].tolist())
ngramlist.append(ngrams)
###Output
_____no_output_____
###Markdown
Clustering
###Code
clusterlist = []
for ngrams in ngramlist:
clusters = hierarchy.ward(ngrams.toarray())
clusterlist.append(clusters)
fig, axs = plt.subplots(1,4, figsize=(12,3))
for clusters, ax in zip(clusterlist, axs):
hierarchy.dendrogram(clusters, no_labels=True, ax=ax, truncate_mode="level", p=15, color_threshold=0, above_threshold_color='k')
axs[0].set_title("IT Service Management Log", fontname="Times New Roman")
axs[1].set_title("Loan Application Log 2012", fontname="Times New Roman")
axs[2].set_title("Loan Application Log 2017", fontname="Times New Roman")
axs[3].set_title("Building Permit Application Log", fontname="Times New Roman")
plt.tight_layout()
plt.savefig("Figure_7.svg")
###Output
_____no_output_____
###Markdown
Determine Number of Clusters (!Long Runtime!) This section can be skipped, the number of clusters is hard-coded in the next section Elbow Criterion
###Code
fig, axs = plt.subplots(1,4, figsize=(12,3))
for clusters, ax in zip(clusterlist, axs):
last = clusters[-1200000:, 2]
last_rev = last[::-1]
idxs = np.arange(1, len(last) + 1)
ax.plot(idxs, last_rev)
###Output
_____no_output_____
###Markdown
Silhouette Score
###Code
fig, axs = plt.subplots(1,4, figsize=(12,3))
for clusters, ngrams, ax in zip(clusterlist, ngramlist, axs):
silhouette = Parallel(n_jobs=-1)(delayed(silhouette_score)(ngrams.toarray(), fcluster(clusters, i, criterion='maxclust')) for i in range(2,100))
print(np.argmax(silhouette)+2)
ax.plot(silhouette)
###Output
_____no_output_____
###Markdown
Davies-Bouldin Index
###Code
fig, axs = plt.subplots(1,4, figsize=(12,3))
for clusters, ngrams, ax in zip(clusterlist, ngramlist, axs):
db_index = Parallel(n_jobs=-1)(delayed(davies_bouldin_score)(ngrams.toarray(), fcluster(clusters, i, criterion='maxclust')) for i in range(2,100))
print(np.argmin(db_index)+2)
ax.plot(db_index)
###Output
_____no_output_____
###Markdown
Calinski-Harabasz Index
###Code
fig, axs = plt.subplots(1,4, figsize=(12,3))
for clusters, ngrams, ax in zip(clusterlist, ngramlist, axs):
ch_score = Parallel(n_jobs=-1)(delayed(calinski_harabasz_score)(ngrams.toarray(), fcluster(clusters, i, criterion='maxclust')) for i in range(2,100))
ax.plot(ch_score)
###Output
_____no_output_____
###Markdown
Dataset Description
###Code
df = dfs[0] # IT Service Management Log
act = df.groupby("concept:name").count()["case:concept:name"].tolist()
df = df.sort_values(["case:concept:name", "time:timestamp"])
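# The three lines below pair every event with the next row of the sorted log to get the time
# (in seconds) until the following event; only positive gaps are used when the per-activity
# averages are computed further down.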
df["time:next"] = df["time:timestamp"].tolist()[1:] + [pd.to_datetime("2000-01-01")]
df["case:next"] = df["case:concept:name"].tolist()[1:] + [0]
df["time"] = (pd.to_datetime(df["time:next"], utc=True) - pd.to_datetime(df["time:timestamp"], utc=True)).apply(lambda x: x.total_seconds())
actdur = (df.loc[(df["time"] > 0)].groupby("concept:name").mean()["time"] / 3600).tolist()
resources = df.groupby("org:resource").count()["concept:name"].tolist()
fig, axs = plt.subplots(1,3, figsize=(10,5))
axs[0].bar([i for i,j in enumerate(act)], sorted(act, reverse=True))
axs[1].bar([i for i,j in enumerate(actdur)], sorted(actdur, reverse=True))
axs[2].bar([i for i,j in enumerate(resources)], sorted(resources, reverse=True))
axs[0].set_xlabel("Activities", fontname="Times New Roman")
axs[1].set_xlabel("Activities", fontname="Times New Roman")
axs[2].set_xlabel("Resources", fontname="Times New Roman")
axs[0].set_ylabel("Number of Executions", fontname="Times New Roman")
axs[1].set_ylabel("Average Duration in Hours", fontname="Times New Roman")
axs[2].set_ylabel("Number of Involvements", fontname="Times New Roman")
fig.tight_layout()
plt.savefig("figure_4.svg")
df = dfs[1] # Loan Application Log 2012
act = df.groupby("concept:name").count()["case:concept:name"].tolist()
df = df.sort_values(["case:concept:name", "time:timestamp"])
df["time:next"] = df["time:timestamp"].tolist()[1:] + [pd.to_datetime("2000-01-01")]
df["case:next"] = df["case:concept:name"].tolist()[1:] + [0]
df["time"] = (pd.to_datetime(df["time:next"], utc=True) - pd.to_datetime(df["time:timestamp"], utc=True)).apply(lambda x: x.total_seconds())
actdur = (df.loc[df["time"] > 0].groupby("concept:name").mean()["time"] / 3600).tolist()
resources = df.groupby("org:resource").count()["concept:name"].tolist()
print(max(actdur))
print(max(resources))
fig, axs = plt.subplots(1,3, figsize=(10,5))
axs[0].bar([i-.25 for i,j in enumerate(act)], sorted(act, reverse=True), width=.5)
axs[1].bar([i-.25 for i,j in enumerate(actdur)], sorted(actdur, reverse=True), width=.5)
axs[2].bar([i-.25 for i,j in enumerate(resources)], sorted(resources, reverse=True), width=.5)
df = dfs[2] # Loan Application Log 2017
act = df.groupby("concept:name").count()["case:concept:name"].tolist()
df = df.sort_values(["case:concept:name", "time:timestamp"])
df["time:next"] = df["time:timestamp"].tolist()[1:] + [pd.to_datetime("2000-01-01")]
df["case:next"] = df["case:concept:name"].tolist()[1:] + [0]
df["time"] = (pd.to_datetime(df["time:next"], utc=True) - pd.to_datetime(df["time:timestamp"], utc=True)).apply(lambda x: x.total_seconds())
actdur = (df.loc[(df["time"] > 0)].groupby("concept:name").mean()["time"] / 3600).tolist()
resources = df.groupby("org:resource").count()["concept:name"].tolist()
print(max(actdur))
print(max(resources))
axs[0].twinx().bar([i+.25 for i,j in enumerate(act)], sorted(act, reverse=True), width=.5, color="tab:orange")
axs[1].twinx().bar([i+.25 for i,j in enumerate(actdur)], sorted(actdur, reverse=True), width=.5, color="tab:orange")
axs[2].twinx().bar([i+.25 for i,j in enumerate(resources)], sorted(resources, reverse=True), width=.5, color="tab:orange")
axs[0].set_xlabel("Activities", fontname="Times New Roman")
axs[1].set_xlabel("Activities", fontname="Times New Roman")
axs[2].set_xlabel("Resources", fontname="Times New Roman")
axs[0].set_ylabel("Number of Executions", fontname="Times New Roman")
axs[1].set_ylabel("Average Duration in Hours", fontname="Times New Roman")
axs[2].set_ylabel("Number of Involvements", fontname="Times New Roman")
#fig.tight_layout()
plt.savefig("figure_5.svg")
df = dfs[3] # Building Permit Log
act = df.groupby("concept:name").count()["case:concept:name"].tolist()
df = df.sort_values(["case:concept:name", "time:timestamp"])
df["time:next"] = df["time:timestamp"].tolist()[1:] + [pd.to_datetime("2000-01-01")]
df["case:next"] = df["case:concept:name"].tolist()[1:] + [0]
df["time"] = (pd.to_datetime(df["time:next"], utc=True) - pd.to_datetime(df["time:timestamp"], utc=True)).apply(lambda x: x.total_seconds())
actdur = (df.loc[(df["time"] > 0)].groupby("concept:name").mean()["time"] / 3600).tolist()
resources = df.groupby("org:resource").count()["concept:name"].tolist()
print(max(act))
print(max(actdur))
print(max(resources))
fig, axs = plt.subplots(1,3, figsize=(10,5))
axs[0].bar([i for i,j in enumerate(act)], sorted(act, reverse=True))
axs[1].bar([i for i,j in enumerate(actdur)], sorted(actdur, reverse=True))
axs[2].bar([i for i,j in enumerate(resources)], sorted(resources, reverse=True))
axs[0].set_xlabel("Activities", fontname="Times New Roman")
axs[1].set_xlabel("Activities", fontname="Times New Roman")
axs[2].set_xlabel("Resources", fontname="Times New Roman")
axs[0].set_ylabel("Number of Executions", fontname="Times New Roman")
axs[1].set_ylabel("Average Duration in Hours", fontname="Times New Roman")
axs[2].set_ylabel("Number of Involvements", fontname="Times New Roman")
#fig.tight_layout()
plt.savefig("figure_6.svg")
###Output
5644
5806.293055555556
15748
###Markdown
Calculate Scores
###Code
scorelist = []
nclusterlist = [150, 200, 200, 300]
for df, traces, nclusters, clusters in zip(dfs, tracelist, nclusterlist, clusterlist):
df = df.sort_values(["case:concept:name", "time:timestamp"])
df["time:next"] = df["time:timestamp"].tolist()[1:] + [pd.to_datetime("2000-01-01")]
df["case:next"] = df["case:concept:name"].tolist()[1:] + [0]
df["time"] = (pd.to_datetime(df["time:next"], utc=True) - pd.to_datetime(df["time:timestamp"], utc=True)).apply(lambda x: x.total_seconds())
labels = fcluster(clusters, nclusters, criterion='maxclust')
traces["cluster"] = labels
clusterdict = {}
for i in range(1,nclusters+1):
clusterdict[i] = traces.loc[traces["cluster"] == i]["case:concept:name"].tolist()
# execution Frequency
scores = []
for i in range(1,nclusters+1):
cases = clusterdict[i]
temp = df.loc[df["case:concept:name"].isin(cases)]
score = temp["case:concept:name"].nunique()
scores.append(score)
ef = scores
# stakeholder scores
stakeholer_scores = df.groupby("org:resource").count()["concept:name"].to_dict()
stakeholer_scores = {s:(1 / stakeholer_scores[s]) for s in stakeholer_scores}
df["sscore"] = df["org:resource"].replace(stakeholer_scores)
scores = []
for i in range(1,nclusters+1):
cases = clusterdict[i]
temp = df.loc[df["case:concept:name"].isin(cases)]
score = np.sum(temp.groupby(["case:concept:name"]).sum()["sscore"] / temp.shape[0])
scores.append(score)
ss = scores
#customer contacts
scores = []
for i in range(1,nclusters+1):
cases = clusterdict[i]
temp = df.loc[df["case:concept:name"].isin(cases)]
if nclusters == 300:
score = temp.loc[(temp["activityNameEN"].str.contains("send"))].shape[0] / temp.shape[0]
else:
score = temp.loc[(temp["concept:name"].str.contains("Sent")) | (temp["concept:name"].str.contains("SENT")) | (temp["concept:name"].str.contains("customer"))].shape[0] / temp.shape[0]
#
scores.append(score)
cc = scores
# activity variance scores
scores = []
for i in range(1,nclusters+1):
cases = clusterdict[i]
temp = df.loc[df["case:concept:name"].isin(cases) & (df["case:next"] == df["case:concept:name"]) & (df["time"] > 1)]
score = temp["time"].var()
scores.append(score)
av = scores
# process variance scores
scores = []
for i in range(1,nclusters+1):
cases = clusterdict[i]
temp = df.loc[df["case:concept:name"].isin(cases)]
score = (pd.to_datetime(temp.groupby("case:concept:name").last()["time:timestamp"], utc=True) - pd.to_datetime(temp.groupby("case:concept:name").first()["time:timestamp"], utc=True)).apply(lambda x: x.total_seconds()).var()
scores.append(score)
pv = [s if str(s) != "nan" else 0 for s in scores ]
# redundancies
scores = []
for i in range(1,nclusters+1):
cases = clusterdict[i]
temp = traces.loc[traces["case:concept:name"].isin(cases)]["concept:name"].tolist()
ngram_vectorizer = CountVectorizer(input = temp, ngram_range=(2,2), tokenizer=lambda x: x.split(','))
counts = ngram_vectorizer.fit_transform(temp)
#names = ngram_vectorizer.get_feature_names()
counts[counts == 1] = 0
trlen = [len(t.split(",")) for t in temp]
scores.append(np.sum(counts) / np.sum(trlen))
rs = scores
# shared activity contexts
ngram_vectorizer = CountVectorizer(input = traces["concept:name"].tolist(), ngram_range=(3,3), tokenizer=lambda x: x.split(','))
counts = ngram_vectorizer.fit_transform(traces["concept:name"].tolist())
names = ngram_vectorizer.get_feature_names()
activities = df["concept:name"].unique().tolist()
contexts = []
actcount = {}
for activity in activities:
for name in names:
if " " + str(activity).lower() + " " in name:
if activity in actcount:
actcount[activity] += 1
else:
actcount[activity] = 1
actcount["A_Create Application"] = 0
actcount["A_SUBMITTED"] = 0
actcount["01_BB_680"] = 0
actcount["14_VRIJ_060_2"] = 0
actcount["01_BB_601"] = 0
actcount["01_BB_650_2"] = 0
df["actscores"] = df["concept:name"].replace(actcount)
scores = []
for i in range(1,nclusters+1):
cases = clusterdict[i]
temp = df.loc[df["case:concept:name"].isin(cases)]
#print(temp["actscores"])
score = (temp["actscores"].sum() / temp.shape[0]) **-1
scores.append(score)
sas = scores
# stakeholder count scores
scores = []
for i in range(1,nclusters+1):
cases = clusterdict[i]
temp = df.loc[df["case:concept:name"].isin(cases)]
score = temp["org:resource"].nunique() ** -1
scores.append(score)
sts = scores
# average process length
scores = []
for i in range(1,nclusters+1):
cases = clusterdict[i]
temp = df.loc[df["case:concept:name"].isin(cases)]
score = temp.shape[0] / temp["case:concept:name"].nunique()
scores.append(score)
pls = scores
scorelist.append(copy.deepcopy((ef, ss, cc, av, pv, rs, sas, sts, pls)))
###Output
_____no_output_____
###Markdown
Plot all Scores for each Log
###Code
fig, axs = plt.subplots(3,3, figsize=(7,5), sharex=True, sharey=True)
i = 8
j = 0
xticks = [[0,75,150], [0,100,200],[0,100,200], [0,150,300]]
for (ef, ss, cc, av, pv, rs, sas, sts, pls), xtick in zip(scorelist, xticks):
sns.set_style("white")
scaler = MinMaxScaler()
axs[0, 0].set_title("Execution Frequency", size=8, fontweight="bold", fontname="Times New Roman")
axs[0, 0].plot(sorted(scaler.fit_transform(np.array([[e] for e in ef])), reverse=True))
axs[0 ,0].set_xticks(xtick)
axs[0, 1].set_title("Resource Utilization", size=8, fontweight="bold", fontname="Times New Roman")
axs[0, 1].plot(sorted(scaler.fit_transform(np.array([[e] for e in ss])), reverse=True))
axs[0, 2].set_title("Customer Contacts", size=8, fontweight="bold", fontname="Times New Roman")
axs[0, 2].plot(sorted(scaler.fit_transform(np.array([[e] for e in cc])), reverse=True))
axs[1, 0].set_title("Activity Duration Variance", size=8, fontweight="bold", fontname="Times New Roman")
axs[1, 0].plot(sorted(scaler.fit_transform(np.array([[e] for e in av])), reverse=True))
axs[1, 1].set_title("Execution Time Variance", size=8, fontweight="bold", fontname="Times New Roman")
axs[1, 1].plot(sorted(scaler.fit_transform(np.array([[e] for e in pv])), reverse=True))
axs[1, 2].set_title("Execution Redundancies", size=8, fontweight="bold", fontname="Times New Roman")
axs[1, 2].plot(sorted(scaler.fit_transform(np.array([[e] for e in rs])), reverse=True))
axs[2, 0].set_title("Shared Activity Contexts", size=8, fontweight="bold", fontname="Times New Roman")
axs[2, 0].plot(sorted(scaler.fit_transform(np.array([[e] for e in sas])), reverse=True))
axs[2, 1].set_title("Stakeholder Involvement", size=8, fontweight="bold", fontname="Times New Roman")
axs[2, 1].plot(sorted(scaler.fit_transform(np.array([[e] for e in sts])), reverse=True))
axs[2, 2].set_title("Process Length", size=8, fontweight="bold", fontname="Times New Roman")
axs[2, 2].plot(sorted(scaler.fit_transform(np.array([[e] for e in pls])), reverse=True))
#fig.tight_layout()
if (j == 0 )| (j == 2) | (j == 3):
print("Figure_" + str(i) + ".svg")
fig.savefig("Figure_" + str(i) + ".svg")
#plt.show()
fig.clf()
fig, axs = plt.subplots(3,3, figsize=(7,5), sharex=True, sharey=True)
i+=1
j+= 1
###Output
Figure_8.svg
Figure_9.svg
Figure_10.svg
###Markdown
Plot Accumulated Scores
###Code
fig, axs = plt.subplots(1,3, figsize=(10,3), sharey=True)
sclistnorm = []
for score in scorelist:
scorenorm = []
for sc in score:
scorenorm.append([x**2 for x in scaler.fit_transform(np.array([[e] for e in sc]))])
sclistnorm.append(scorenorm)
sns.reset_orig()
axs[0].plot((scaler.fit_transform(sorted(np.array(sclistnorm[0]).sum(axis=0), reverse=True))))
axs[1].plot((scaler.fit_transform(sorted(np.array(sclistnorm[1]).sum(axis=0), reverse=True))))
axs[1].plot((scaler.fit_transform(sorted(np.array(sclistnorm[2]).sum(axis=0), reverse=True))))
axs[2].plot((scaler.fit_transform(sorted(np.array(sclistnorm[3]).sum(axis=0), reverse=True))))
axs[0].set_title("IT Service Management Log", fontname="Times New Roman")
axs[1].set_title("Loan Application Log", fontname="Times New Roman")
axs[2].set_title("Building Permit Application Log", fontname="Times New Roman")
plt.tight_layout()
plt.savefig("Figure_11.svg")
###Output
_____no_output_____
###Markdown
Save Data
###Code
save_obj(scorelist[0], "loan2014.pkl")
save_obj(scorelist[1], "loan2012.pkl")
save_obj(scorelist[2], "bpi2017.pkl")
save_obj(scorelist[3], "bpi2015.pkl")
###Output
_____no_output_____
###Markdown
Spider Plots
###Code
scoredfs = []
for scores in scorelist:
scorenorm = []
for sc in scores:
scorenorm.append([x**2 for x in scaler.fit_transform(np.array([[e] for e in sc]))])
d = {
"EF":[a[0] for a in list(scaler.fit_transform(np.array([[e] for e in scores[0]])))],
"RU":[a[0] for a in list(scaler.fit_transform(np.array([[e] for e in scores[1]])))],
"CC":[a[0] for a in list(scaler.fit_transform(np.array([[e] for e in scores[2]])))],
"AD":[a[0] for a in list(scaler.fit_transform(np.array([[e] for e in scores[3]])))],
"ET":[a[0] for a in list(scaler.fit_transform(np.array([[e] for e in scores[4]])))],
"ER":[a[0] for a in list(scaler.fit_transform(np.array([[e] for e in scores[5]])))],
"SA":[a[0] for a in list(scaler.fit_transform(np.array([[e] for e in scores[6]])))],
"SI":[a[0] for a in list(scaler.fit_transform(np.array([[e] for e in scores[7]])))],
"PL":[a[0] for a in list(scaler.fit_transform(np.array([[e] for e in scores[8]])))],
"total": [a[0] for a in list(np.array(scorenorm).sum(axis=0))]
}
scoredfs.append(pd.DataFrame(d).sort_values("total"))
scorenames= []
scorenames.append("Execution Frequency")
scorenames.append("Resource Utilization")
scorenames.append("Customer Contacts")
scorenames.append("Activity Duration Variance")
scorenames.append("Execution Time Variance")
scorenames.append("Execution Redundancies")
scorenames.append("Shared Activity Contexts")
scorenames.append("Stakeholder Involvement")
scorenames.append("Process Length")
scorenames.append("Execution Frequency")
titles = ["Loan Application Log 2012", "Loan Application Log 2017", "IT Service Management Log", "Building Permit Application Log"]
#breakpoints = [40, 40, 30, 150]
angles = [n / float(9) * 2 * pi for n in range(9)]
angles += angles[:1]
sns.set(style="whitegrid", rc={"lines.linewidth": 2})
sns.set_style("whitegrid")
#plt.style.use("ggplot")
f, axs = plt.subplots(2,2, subplot_kw=dict(projection='polar'), figsize=(12,12))
i = 0
j = 0
for d in [scoredfs[1], scoredfs[2], scoredfs[0], scoredfs[3]]:
ratio = []
#print(d.head())
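# For each of the nine score dimensions, compute the share of the total score mass coming
# from the 80% of clusters with the lowest aggregate score (plotted in blue) versus the
# remaining top 20% (plotted in red); d is sorted ascending by the "total" column above.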
breakpoint = d.shape[0] // 5 * 4
for t in range(9):
c = d.values[:,t]
ratio.append(sum([a for a in c[:breakpoint]]) / sum([a for a in c]))
ratio.append(ratio[0])
axs[i][j%2].set_theta_offset(pi / 2)
axs[i][j%2].set_theta_direction(-1)
axs[i][j%2].set_xticks(angles)
axs[i][j%2].set_xticklabels(scorenames, fontweight="bold", fontname="Times New Roman", fontsize=10.5)
axs[i][j%2].set_rlabel_position(0)
axs[i][j%2].axes.set_ylim(0,1)
axs[i][j%2].set_title(titles[j] + "\n", size=14, fontname="Times New Roman", fontweight="bold")
axs[i][j%2].plot(angles, ratio, linewidth=1, linestyle='solid', color="blue")
axs[i][j%2].fill(angles, ratio, 'b', color="blue", alpha=0.1)
axs[i][j%2].plot(angles, [1-r for r in ratio], linewidth=1, linestyle='solid', color="red")
axs[i][j%2].fill(angles, [1-r for r in ratio], 'b',color="red", alpha=0.1)
j += 1
if( j%2 == 0 ):
i +=1
plt.tight_layout()
plt.savefig("Figure 12.svg")
###Output
_____no_output_____
###Markdown
Plot Example Process Traces for Clusters
###Code
df = dfs[0]  # choose BPI2014 (the IT Service Management log) as the example
traces = tracelist[0]
clusters = clusterlist[0]
nclusters = nclusterlist[0]
df = df.sort_values(["case:concept:name", "time:timestamp"])
df["time:next"] = df["time:timestamp"].tolist()[1:] + [pd.to_datetime("2000-01-01")]
df["case:next"] = df["case:concept:name"].tolist()[1:] + [0]
df["time"] = (pd.to_datetime(df["time:next"], utc=True) - pd.to_datetime(df["time:timestamp"], utc=True)).apply(lambda x: x.total_seconds())
labels = fcluster(clusters, nclusters, criterion='maxclust')
traces["cluster"] = labels
clusterdict = {}
for i in range(1,nclusters+1):
clusterdict[i] = traces.loc[traces["cluster"] == i]["case:concept:name"].tolist()
varcount = traces.loc[traces["cluster"] == 10]
varcount.columns = ["asdf", "Activity", "CaseID"]
varcount = varcount[["Activity", "CaseID"]].to_dict()
variants = [(["Start"] + variant.split(",") + ["End"] , count) for variant, count in zip(varcount["Activity"].values(), varcount["CaseID"].values())]
plotGraphFromVariantsSimple(variants).draw('process10.png')
varcount = traces.loc[traces["cluster"] == 12]
varcount.columns = ["asdf", "Activity", "CaseID"]
varcount = varcount[["Activity", "CaseID"]].to_dict()
variants = [(["Start"] + variant.split(",") + ["End"] , count) for variant, count in zip(varcount["Activity"].values(), varcount["CaseID"].values())]
plotGraphFromVariantsSimple(variants).draw('process12.png')
###Output
_____no_output_____ |
.ipynb_checkpoints/CSV tag creation -checkpoint.ipynb | ###Markdown
Intro Create Image annotation CSV First we load the images from the folders into a list, then create a matching list of tags for each image. Here 1 stands for a closed hand (Faust) and 2 for an open hand (Offen)
###Code
import os
import csv #provides functionality to both read from and write to CSV files. Designed to work out of the box with Excel-generated CSV files
import pandas as pd #pandas is a great data handling library
root_dir = "T:/werth/Messe/hände/small/" # Path relative to current dir (os.getcwd())
folder1 = "Faust"
folder2 = "Offen"
#os.listdir(root_dir + folder) # List all the items in root_dir
Faust=[item for item in os.listdir(root_dir+ folder1) if os.path.isfile(os.path.join(root_dir+ folder1, item))] # Filter items and only keep files (strip out directories)
Offen=[item for item in os.listdir(root_dir+ folder2) if os.path.isfile(os.path.join(root_dir+ folder2, item))] # Filter items and only keep files (strip out directories)
Faust=[os.path.join(root_dir, folder1, s) for s in Faust] # Join the folder path and file name to create the full path to each image
Offen=[os.path.join(root_dir, folder2, s) for s in Offen]
Annot1= [1] * len(Faust) # Same length list with annotation key
Annot2= [2] * len(Offen)
###Output
_____no_output_____
###Markdown
Then we create a data frame with pandas in the way we want the images and tags merged. In this case we need the format: Imagepath,tag
###Code
df1= pd.DataFrame(list(zip(*[Faust,Annot1]))).add_prefix('Col_')
df2= pd.DataFrame(list(zip(*[Offen,Annot2]))).add_prefix('Col_')
frames = [df1,df2] # Merge list
F_and_A = pd.concat(frames, sort=False) #Merge into the pandas dataframe
F_and_A.to_csv('Hand_Annotations.csv', index=False) # Write to CSV file
#print(frames)
###Output
_____no_output_____
###Markdown
Alternatively we could also create a dictionary:
###Code
d1 = dict(zip(Faust,Annot1)) # Create dict from data list and annotaions keys
d2 = dict(zip(Offen,Annot2))
d = {**d1, **d2} #Merge both dicts. Attention, if the keys have the same name, they will be overwritten by (here) d2
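# The merged dict can also be written out with the csv module imported above; a minimal
# sketch (the output file name is just an example):
with open('Hand_Annotations_from_dict.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerows(d.items())  # one "imagepath,tag" row per dictionary entry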
###Output
_____no_output_____
###Markdown
But the transformation is not as straightforward as with pandas to_csv Create ID mapping CSV Now we create another CSV file which maps the tag or ID to an annotation. First create a simple dict with the keys and values as the ID mapping and then transfer that into CSV again
###Code
df3 = {'Faust': [1], 'Offen': [2]} #regular dict with the annotation mapping
df3 = pd.DataFrame.from_dict(data=df3,orient='index') #convert to a pandas dataframe for easy conversion into CSV
df3.to_csv('IDmapping.csv', index=True, header=False) # Write to CSV file (Can also just be added in the line before with: ...).to_csv(...))
###Output
_____no_output_____ |
DSE_GL1_VIF_Eliminated.ipynb | ###Markdown
Data Ingestion
###Code
Newyork = pd.read_csv("Newyork.csv")
Newyork.head(2)
pd.set_option('max_columns',1000)
pd.set_option('max_rows',1000)
np.set_printoptions(threshold=np.inf)
pd.set_option('display.width', 1000)
Newyork.shape
df = Newyork.copy()
###Output
_____no_output_____
###Markdown
Oakland = pd.read_csv("Oakland.csv")Oakland.head(2) Oakland.shape df = Newyork.append(Oakland) df.head()
###Code
df.shape
column_names = df.columns
print(column_names)
###Output
Index(['id', 'listing_url', 'scrape_id', 'last_scraped', 'name', 'summary', 'space', 'description', 'experiences_offered', 'neighborhood_overview',
...
'instant_bookable', 'is_business_travel_ready', 'cancellation_policy', 'require_guest_profile_picture', 'require_guest_phone_verification', 'calculated_host_listings_count', 'calculated_host_listings_count_entire_homes', 'calculated_host_listings_count_private_rooms', 'calculated_host_listings_count_shared_rooms', 'reviews_per_month'], dtype='object', length=106)
###Markdown
Dropping columns which are unnecessary or require in-depth processing
###Code
# id - listing identifier that can be used to create a join with other files
# last_scraped - we will use it to calculate reviews_per_month
# listing_url - interesting if we want to analyse the pictures as well but out of scope otherwise
# scrape_id - same for all the records
# name - textual description already extracted as continous variables in other columns
# summary - as above
# space - as above
# description - as above
# experiences_offered - contains only none value
# neighborhood_overview - requires a lot of preprocessing to turn into a useful feature
# notes - requires a lot of preprocessing to turn into a useful feature
# transit - requires a lot of preprocessing to turn into a useful feature
# access - requires a lot of preprocessing to turn into a useful feature
# interaction - requires a lot of preprocessing to turn into a useful feature
# house_rules - requires a lot of preprocessing to turn into a useful feature
# thumbnail_url - contains no values
# medium_url - contains no values
# picture_url - interesting if we want to analyse the pictures as well but out of scope otherwise
# xl_picture_url - contains no values
# host_id - id that is not used anywhere else
df.drop('listing_url', inplace=True, axis=1) # dropping as it is not usable
df.drop('scrape_id', inplace=True, axis=1) # dropping as it is not usable
df.drop('name',inplace=True, axis=1) # dropping as it is not usable
df.drop('summary',inplace=True, axis=1) # dropping as it is not usable
df.drop('description',inplace=True, axis=1) # dropping as it is not usable
df.drop('experiences_offered',inplace=True, axis=1) # dropping as it is not usable
df.drop('neighborhood_overview',inplace=True, axis=1) # dropping as it is not usable
df.drop('notes',inplace=True, axis=1) # dropping as it is not usable
df.drop('access',inplace=True, axis=1) # dropping as it is not usable
df.drop('interaction',inplace=True, axis=1) # dropping as it is not usable
df.drop('house_rules',inplace=True, axis=1) # dropping as it is not usable
df.drop('thumbnail_url',inplace=True, axis=1) # dropping as it is not usable
df.drop('medium_url',inplace=True, axis=1) # dropping as it is not usable
df.drop('picture_url',inplace=True, axis=1) # dropping as it is not usable
df.drop('xl_picture_url',inplace=True, axis=1) # dropping as it is not usable
df.drop('host_id',inplace=True, axis=1) # dropping as it is not usable
df.drop('host_location',inplace=True, axis=1) # dropping as it is not usable
df.head()
column_names = df.columns
print(column_names)
# From the next 20 columns we will keep the following:
# host_name - can be used to identify words associated with the host in reviews
# host_since - can be used to calculate host experience based on duration since the first listing
# host_location - we can use it to establish if host is local or not
# host_about - since it's only text, we will count the number of characters
# host_is_superhost - categorical t or f - describing highly rated and reliable hosts - https://www.airbnb.co.uk/superhost
# host_has_profile_pic - categorical t or f - profiles with pictures are seen as more credible
# host_identity_verified - categorical t or f - another credibility metric
# And remove all the below:
# host_url - host profile is out of scope
# host_response_time - this value could be useful but contains a high percentage of N/A and is contained within score_communication
# host_response_rate - same as above
# host_acceptance_rate - either NA or blank
# host_thumbnail_url - host picture is out of scope
# host_picture_url - host picture is out of scope
# host_neighbourhood - host_location to be used instead
# host_listings_count - we will use more accurate calculated_host_listings_count
# host_total_listings_count - as above
# host_verifications - list of host verification methods - information already contained in host_identity_verified
# street - neighbourhood_cleansed will be used instead
# neighbourhood - neighbourhood_cleansed will be used instead
df.drop('host_url', inplace=True, axis=1) # dropping as it is not usable
df.drop('host_response_time', inplace=True, axis=1) # dropping as it is not usable
df.drop('host_response_rate',inplace=True, axis=1) # dropping as it is not usable
df.drop('host_acceptance_rate',inplace=True, axis=1) # dropping as it is not usable
df.drop('host_thumbnail_url',inplace=True, axis=1) # dropping as it is not usable
df.drop('host_picture_url',inplace=True, axis=1) # dropping as it is not usable
df.drop('host_neighbourhood',inplace=True, axis=1) # dropping as it is not usable
df.drop('host_listings_count',inplace=True, axis=1) # dropping as it is not usable
df.drop('host_total_listings_count',inplace=True, axis=1) # dropping as it is not usable
df.drop('host_verifications',inplace=True, axis=1) # dropping as it is not usable
df.drop('neighbourhood',inplace=True, axis=1) # dropping as it is not usable
df.head()
column_names = df.columns
print(column_names)
# From the next 20 columns we will keep the following:
# neighbourhood_cleansed - we will use this only for visualisation due to the number of neighbourhoods; we use grouped neighbourhoods instead
# neighbourhood_group_cleansed - categorical value which will be used to identify the most popular parts of Barcelona
# latitude - we will use it later to visualise the data on the map
# longitude - we will use it later to visualise the data on the map
# property_type - categorical variable
# room_type - categorical variable
# accommodates - discrete value describing property
# bathrooms - another discrete value describing property
# bedrooms - another discrete value describing property
# beds - another discrete value describing property
# bed_type - categorical value describing property
# amenities - due to number of unique features (over 100) we will only concentrate on the total number of amenities
# And remove all the below:
# city - we already know the city
# state - and region being Catalonia
# zipcode - we will use neighbourhood
# market - it is mainly Barcelona
# smart_location - it is mainly Barcelona
# country_code - we already know the country
# country - as above
# is_location_exact - unimportant as it could be inaccurate by up to 150 meters http://insideairbnb.com/about.html#disclaimers
df.drop('city', inplace=True, axis=1) # dropping as it is not usable
df.drop('state', inplace=True, axis=1) # dropping as it is not usable
df.drop('zipcode',inplace=True, axis=1) # dropping as it is not usable
df.drop('market',inplace=True, axis=1) # dropping as it is not usable
df.drop('smart_location',inplace=True, axis=1) # dropping as it is not usable
df.drop('country_code',inplace=True, axis=1) # dropping as it is not usable
df.drop('country',inplace=True, axis=1) # dropping as it is not usable
df.drop('is_location_exact',inplace=True, axis=1) # dropping as it is not usable
df.head()
column_names = df.columns
print(column_names)
# From the next 20 columns we will keep the following:
# price - price per night for number of included guests
# security_deposit - another continuous value associated with the cost
# cleaning_fee - additional cost on top of the rent
# guests_included - discrete value which we will use to evaluate the cost per person
# extra_people - cost of an additional person per night
# minimum_nights - another discrete value that is cost related. Listings with a high minimum-nights value are likely sublets
# first_review - we will use it to calculate reviews_per_month
# last_review - we will use this field to filter out no longer active listings
# number_of_reviews - total number of reviews in entire listing history
# And remove all the below:
# square_feet - could be used to evaluate the property size but most of the values are missing
# weekly_price - mostly blank so we will use price instead
# monthly_price - mostly blank so we will use price instead
# maximum_nights - most of the values are above 30 days, suggesting it's used as an open-ended upper bound
# calendar_updated - we are not interested in future data that is a subject to daily updates
# has_availability - as above
# availability_30 - as above
# availability_60 - as above
# availability_90 - as above
# availability_365 - as above
# calendar_last_scraped - as above
df.drop('square_feet', inplace=True, axis=1) # dropping as it is not usable
df.drop('weekly_price', inplace=True, axis=1) # dropping as it is not usable
df.drop('monthly_price',inplace=True, axis=1) # dropping as it is not usable
df.drop('maximum_nights',inplace=True, axis=1) # dropping as it is not usable
df.drop('calendar_updated',inplace=True, axis=1) # dropping as it is not usable
df.drop('has_availability',inplace=True, axis=1) # dropping as it is not usable
df.drop('availability_30',inplace=True, axis=1) # dropping as it is not usable
df.drop('availability_60',inplace=True, axis=1) # dropping as it is not usable
df.drop('availability_90',inplace=True, axis=1) # dropping as it is not usable
df.drop('availability_365',inplace=True, axis=1) # dropping as it is not usable
df.drop('calendar_last_scraped',inplace=True, axis=1) # dropping as it is not usable
df.head()
column_names = df.columns
print(column_names)
# From the final set of columns we will keep the following:
# review_scores_accuracy - discrete value - numbers between 2 and 10
# review_scores_cleanliness - discrete value - numbers between 2 and 10
# review_scores_checkin - discrete value - numbers between 2 and 10
# review_scores_communication - discrete value - numbers between 2 and 10
# review_scores_location - discrete value - numbers between 2 and 10
# review_scores_value - discrete value - numbers between 2 and 10
# instant_bookable - categorical value - t or false
# cancellation_policy - ordinal value with 5 categories that can be ordered from lowest to highest level of flexibility
# require_guest_profile_picture - categorical value - t or false
# require_guest_phone_verification categorical value - t or false
# calculated_host_listings_count - continuous value which is the actual number of host listings - another metric to measure host experience or to distinguish business hosts from individuals
# And remove all the below:
# review_scores_rating - this value is calculated as weighted sum of other scores
# requires_license - all values are t
# license - textual value that is mostly null
# jurisdiction_names - contains only nulls
# is_business_travel_ready - contains one value of f
# reviews_per_month - we will re-calculate this field using our formula
df.drop('review_scores_rating', inplace=True, axis=1) # dropping as it is not usable
df.drop('requires_license', inplace=True, axis=1) # dropping as it is not usable
df.drop('license',inplace=True, axis=1) # dropping as it is not usable
df.drop('minimum_minimum_nights',inplace=True, axis=1) # dropping as it is not usable
df.drop('maximum_minimum_nights',inplace=True, axis=1) # dropping as it is not usable
df.drop('minimum_maximum_nights',inplace=True, axis=1) # dropping as it is not usable
df.drop('maximum_maximum_nights',inplace=True, axis=1) # dropping as it is not usable
df.drop('minimum_nights_avg_ntm',inplace=True, axis=1) # dropping as it is not usable
df.drop('maximum_nights_avg_ntm',inplace=True, axis=1) # dropping as it is not usable
df.drop('jurisdiction_names',inplace=True, axis=1) # dropping as it is not usable
df.drop('is_business_travel_ready',inplace=True, axis=1) # dropping as it is not usable
df.drop('reviews_per_month',inplace=True, axis=1) # dropping as it is not usable
df.head()
column_names = df.columns
print(column_names)
df.drop('number_of_reviews_ltm', inplace=True, axis=1) # dropping as it is not usable
df.drop('street', inplace=True, axis=1) # dropping as it is not usable
df.drop('transit',inplace=True, axis=1) # dropping as it is not usable
df.drop('calculated_host_listings_count_entire_homes',inplace=True, axis=1) # dropping as it is not usable
df.drop('calculated_host_listings_count_private_rooms',inplace=True, axis=1) # dropping as it is not usable
df.drop('calculated_host_listings_count_shared_rooms',inplace=True, axis=1) # dropping as it is not usable
df.drop('space',inplace=True, axis=1) # dropping as it is not usable
df.head()
column_names = df.columns
print(column_names)
df_sel = df.copy()
df_sel.head()
###Output
_____no_output_____
###Markdown
Dropping rows where number_of_reviews is less than or equal to 0, since such listings have no usable review information
###Code
df_sel.drop(df_sel[df_sel['number_of_reviews'] <= 0].index, inplace = True)
# dropping all values less than or equal to 0 as it is equal to NAN or NA
df_sel['number_of_reviews'].unique()
df_sel['price']=df_sel['price'].str.replace('$','')
df_sel['price']=df_sel['price'].str.replace(',','')
df_sel['price']=df_sel['price'].str.replace('.','').astype(float)
df_sel['extra_people']=df_sel['extra_people'].str.replace('$','')
df_sel['extra_people']=df_sel['extra_people'].str.replace(',','')
df_sel['extra_people']=df_sel['extra_people'].str.replace('.','').astype(float)
# security_deposit - conversion from $ to numeric values
df_sel['security_deposit']=df_sel['security_deposit'].str.replace('$','')
df_sel['security_deposit']=df_sel['security_deposit'].str.replace(',','')
df_sel['security_deposit']=df_sel['security_deposit'].str.replace('.','').astype(float)
df_sel['cleaning_fee']=df_sel['cleaning_fee'].str.replace('$','')
df_sel['cleaning_fee']=df_sel['cleaning_fee'].str.replace(',','')
df_sel['cleaning_fee']=df_sel['cleaning_fee'].str.replace('.','').astype(float)
df_sel['security_deposit'].isnull().sum()
df_sel['cleaning_fee'].isnull().sum()
###Output
_____no_output_____
###Markdown
As these columns contain missing values, we impute them with the column mean
###Code
df_sel['cleaning_fee'] = df_sel ['cleaning_fee'].fillna(df_sel['cleaning_fee'].mean()).astype(float)
df_sel['cleaning_fee'].isnull().sum()
df_sel['security_deposit'] = df_sel ['security_deposit'].fillna(df_sel['security_deposit'].mean()).astype(float)
df_sel['security_deposit'].isnull().sum()
df_sel['host_about'].isnull().sum()
df_sel['host_about'] = df_sel.host_about.fillna('')
df_sel['host_about'].isnull().sum()
df_sel1 = df_sel.copy()
df_sel = df_sel.dropna()
df_sel.isnull().sum()
df_sel.head()
###Output
_____no_output_____
###Markdown
Performing label encoding for the categorical columns: host_is_superhost, host_has_profile_pic, host_identity_verified, instant_bookable, require_guest_profile_picture, require_guest_phone_verification, cancellation_policy, bed_type, room_type, neighbourhood_group_cleansed, and property_type
###Code
from sklearn import preprocessing
label_encoder = preprocessing.LabelEncoder()
df_sel['host_is_superhost']= label_encoder.fit_transform(df_sel['host_is_superhost'])
df_sel.head()
df_sel['host_has_profile_pic'] = label_encoder.fit_transform(df_sel['host_has_profile_pic'])
df_sel['host_identity_verified'] = label_encoder.fit_transform(df_sel['host_identity_verified'])
df_sel['instant_bookable'] = label_encoder.fit_transform(df_sel['instant_bookable'])
df_sel['require_guest_profile_picture'] = label_encoder.fit_transform(df_sel['require_guest_profile_picture'])
df_sel['require_guest_phone_verification'] = label_encoder.fit_transform(df_sel['require_guest_phone_verification'])
df_sel['cancellation_policy'] = label_encoder.fit_transform(df_sel['cancellation_policy'])
df_sel['bed_type'] = label_encoder.fit_transform(df_sel['bed_type'])
df_sel['room_type'] = label_encoder.fit_transform(df_sel['room_type'])
df_sel['neighbourhood_group_cleansed'] = label_encoder.fit_transform(df_sel['neighbourhood_group_cleansed'])
df_sel['property_type'] = label_encoder.fit_transform(df_sel['property_type'])
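# Equivalent, more compact alternative (sketch): loop over the categorical columns
# encoded above. cat_cols simply re-lists those columns; re-running this after the
# per-column calls is harmless because LabelEncoder maps the existing integer
# codes 0..n-1 back onto themselves.
cat_cols = ['host_is_superhost', 'host_has_profile_pic', 'host_identity_verified',
            'instant_bookable', 'require_guest_profile_picture',
            'require_guest_phone_verification', 'cancellation_policy', 'bed_type',
            'room_type', 'neighbourhood_group_cleansed', 'property_type']
for col in cat_cols:
    df_sel[col] = preprocessing.LabelEncoder().fit_transform(df_sel[col])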
df_sel.head()
df_sel.info()
df_sel.select_dtypes(include='object').columns
# listing_duration =
df_sel['last_review']= pd.to_datetime(df_sel['last_review'])
df_sel['first_review']= pd.to_datetime(df_sel['first_review'])
df_sel['listing_duration'] = df_sel['last_review'] - df_sel['first_review']
# hosting_duration =
df_sel['host_since']= pd.to_datetime(df_sel['host_since'])
df_sel['hosting_duration'] = df_sel['last_review'] - df_sel['host_since']
# host_about_len =
df_sel['host_about_len'] = df_sel['host_about'].str.len()  # number of characters in the host description
df.drop('host_about',inplace=True, axis=1) # dropping as it is not usable
# price_per_person - (price/accommodates)
df_sel['price_per_person'] =df_sel['price'] / df_sel['accommodates']
a_latitude = 40.7128    # New York City latitude
a_longitude = -74.0060  # New York City longitude (west of Greenwich, hence negative)
from math import radians, cos, sin, asin, sqrt
def haversine(lon1, lat1, lon2, lat2):
"""
Calculate the great circle distance between two points
on the earth (specified in decimal degrees)
"""
# convert decimal degrees to radians
lon1, lat1, lon2, lat2 = map(radians, [lon1, lat1, lon2, lat2])
# haversine formula
dlon = lon2 - lon1
dlat = lat2 - lat1
a = sin(dlat/2)**2 + cos(lat1)*cos(lat2)*sin(dlon/2)**2
c = 2 * asin(sqrt(a))
km = 6367 * c
return km
for index, row in df_sel.iterrows():
df_sel.loc[index, 'distance'] = haversine(a_longitude, a_latitude, row['longitude'], row['latitude'])
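# Vectorized alternative (sketch): the same haversine formula applied to whole
# columns with numpy, which is much faster than iterrows on large frames; it
# recomputes the same 'distance' column as the loop above.
lon1, lat1 = np.radians(a_longitude), np.radians(a_latitude)
lon2 = np.radians(df_sel['longitude'].values)
lat2 = np.radians(df_sel['latitude'].values)
h = np.sin((lat2 - lat1) / 2) ** 2 + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2
df_sel['distance'] = 6367 * 2 * np.arcsin(np.sqrt(h))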
df_sel.head(5)
df_sel['last_scraped']= pd.DatetimeIndex(df_sel.last_scraped)
df_sel['first_review']= pd.DatetimeIndex(df_sel.first_review)
df_sel['last_review']= pd.DatetimeIndex(df_sel.last_review)
df_sel['host_since']= pd.DatetimeIndex(df_sel.host_since)
df_sel.head()
df_sel = df_sel.drop(['last_scraped','host_name','host_since','host_about','neighbourhood_cleansed','amenities','first_review','last_review','listing_duration','hosting_duration','host_about_len'], axis=1)
df_sel.head()
X=df_sel.drop(['price'],1)
y = df_sel['price']
###Output
_____no_output_____
###Markdown
Dropping all categorical variables and considering only numerical variables for train and test; retaining the remaining columns for prediction
###Code
X.head(5)
from sklearn.model_selection import train_test_split
X_train, X_test , y_train, y_test = train_test_split(X,y, test_size = 0.30, random_state = 1)
print(X_train.shape)
print(X_test.shape)
print(y_test.shape)
lin_reg = LinearRegression()
model = lin_reg.fit(X_train,y_train)
print(f'R^2 score for train: {lin_reg.score(X_train, y_train)}')
print(f'R^2 score for test: {lin_reg.score(X_test, y_test)}')
###Output
R^2 score for train: 0.6987829570744384
R^2 score for test: 0.7899394341859203
###Markdown
The base model has an R^2 of about 0.70 on the train set and about 0.79 on the test set
###Code
X.columns
import warnings
warnings.filterwarnings('ignore')
import statsmodels.api as sm
X=df_sel.drop(['price','id'],1)
y = df_sel['price']
X_constant = sm.add_constant(X)
lin_reg = sm.OLS(y,X_constant).fit()
predictions = lin_reg.predict(X_constant)
lin_reg.summary()
###Output
_____no_output_____
###Markdown
AutoCorrelation
###Code
import statsmodels.tsa.api as smt
acf = smt.graphics.plot_acf(lin_reg.resid)
acf.show()
import scipy.stats as stats
print(stats.jarque_bera(lin_reg.resid))
###Output
(2862431836.2239127, 0.0)
###Markdown
The p-value is less than the alpha value, so we reject the null hypothesis that the error terms are normally distributed.
###Code
sns.distplot(lin_reg.resid)
sns.set_style('darkgrid')
sns.mpl.rcParams['figure.figsize'] = (15.0, 9.0)
def linearity_test(model, y):
'''
Function for visually inspecting the assumption of linearity in a linear regression model.
It plots observed vs. predicted values and residuals vs. predicted values.
Args:
* model - fitted OLS model from statsmodels
* y - observed values
'''
fitted_vals = model.predict()
resids = model.resid
fig, ax = plt.subplots(1,2)
sns.regplot(x=fitted_vals, y=y, lowess=True, ax=ax[0], line_kws={'color': 'red'})
ax[0].set_title('Observed vs. Predicted Values', fontsize=16)
ax[0].set(xlabel='Predicted', ylabel='Observed')
sns.regplot(x=fitted_vals, y=resids, lowess=True, ax=ax[1], line_kws={'color': 'red'})
ax[1].set_title('Residuals vs. Predicted Values', fontsize=16)
ax[1].set(xlabel='Predicted', ylabel='Residuals')
linearity_test(lin_reg, y)
###Output
_____no_output_____
###Markdown
The plot is roughly symmetrical about the zero-residual line, so the relationship seems linear. To confirm this, the rainbow test is performed.
###Code
print(sm.stats.linear_rainbow(res = lin_reg,frac = 0.5))
lin_reg.resid.mean() # it is close to zero linearity is present
%config InlineBackend.figure_format ='retina'
import scipy.stats as stats
import pylab
from statsmodels.graphics.gofplots import ProbPlot
st_residual = lin_reg.get_influence().resid_studentized_internal
stats.probplot(st_residual, dist="norm", plot = pylab)
plt.show()
from statsmodels.compat import lzip
import numpy as np
from statsmodels.compat import lzip
%matplotlib inline
%config InlineBackend.figure_format ='retina'
import seaborn as sns
import matplotlib.pyplot as plt
import statsmodels.stats.api as sms
sns.set_style('darkgrid')
sns.mpl.rcParams['figure.figsize'] = (15.0, 9.0)
model = lin_reg
fitted_vals = model.predict()
resids = model.resid
resids_standardized = model.get_influence().resid_studentized_internal
fig, ax = plt.subplots(1,2)
sns.regplot(x=fitted_vals, y=resids, lowess=True, ax=ax[0], line_kws={'color': 'red'})
ax[0].set_title('Residuals vs Fitted', fontsize=16)
ax[0].set(xlabel='Fitted Values', ylabel='Residuals')
sns.regplot(x=fitted_vals, y=np.sqrt(np.abs(resids_standardized)), lowess=True, ax=ax[1], line_kws={'color': 'red'})
ax[1].set_title('Scale-Location', fontsize=16)
ax[1].set(xlabel='Fitted Values', ylabel='sqrt(abs(Residuals))')
name = ['F statistic', 'p-value']
test = sms.het_goldfeldquandt(model.resid, model.model.exog)
lzip(name, test)
# P-value is more than alpha, so homoscedasticity may hold in the model
from statsmodels.stats.outliers_influence import variance_inflation_factor
vif = [variance_inflation_factor(X_constant.values, i) for i in range(X_constant.shape[1])]
pd.DataFrame({'vif': vif[1:]}, index=X.columns).T
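# Optional helper (sketch): iteratively drop the feature with the highest VIF until
# every remaining VIF is below a threshold. The 10.0 cut-off and the function name
# are assumptions, not part of the original analysis.
from statsmodels.stats.outliers_influence import variance_inflation_factor
import statsmodels.api as sm
def drop_high_vif(features_df, threshold=10.0):
    feats = list(features_df.columns)
    while True:
        Xc = sm.add_constant(features_df[feats])
        vifs = pd.Series([variance_inflation_factor(Xc.values, i)
                          for i in range(1, Xc.shape[1])], index=feats)
        if vifs.max() <= threshold:
            return feats
        feats.remove(vifs.idxmax())
# Example usage (can be slow with many columns): low_vif_feats = drop_high_vif(X)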
X=df_sel.drop(['price','latitude'],1)
y = df_sel['price']
from sklearn.model_selection import train_test_split
X_train, X_test , y_train, y_test = train_test_split(X,y, test_size = 0.30, random_state = 1)
print(X_train.shape)
print(X_test.shape)
print(y_test.shape)
import warnings
warnings.filterwarnings('ignore')
import statsmodels.api as sm
X=df_sel.drop(['price','id','latitude'],1)
y = df_sel['price']
X_constant = sm.add_constant(X)
lin_reg = sm.OLS(y,X_constant).fit()
predictions = lin_reg.predict(X_constant)
lin_reg.summary()
from statsmodels.stats.outliers_influence import variance_inflation_factor
vif = [variance_inflation_factor(X_constant.values, i) for i in range(X_constant.shape[1])]
pd.DataFrame({'vif': vif[1:]}, index=X.columns).T
X.columns
X=df_sel.drop(['price','latitude','longitude','distance','require_guest_phone_verification','instant_bookable','bed_type','review_scores_accuracy','host_has_profile_pic','review_scores_communication','cancellation_policy','host_is_superhost','extra_people','review_scores_value','host_identity_verified'],1)
y = df_sel['price']
from sklearn.model_selection import train_test_split
X_train, X_test , y_train, y_test = train_test_split(X,y, test_size = 0.30, random_state = 1)
print(X_train.shape)
print(X_test.shape)
print(y_test.shape)
import warnings
warnings.filterwarnings('ignore')
import statsmodels.api as sm
X=df_sel.drop(['price','id','latitude','longitude','distance','require_guest_phone_verification','instant_bookable','bed_type','review_scores_accuracy','host_has_profile_pic','review_scores_communication','cancellation_policy','host_is_superhost','extra_people','review_scores_value','host_identity_verified'],1)
y = df_sel['price']
X_constant = sm.add_constant(X)
lin_reg = sm.OLS(y,X_constant).fit()
predictions = lin_reg.predict(X_constant)
lin_reg.summary()
from statsmodels.stats.outliers_influence import variance_inflation_factor
vif = [variance_inflation_factor(X_constant.values, i) for i in range(X_constant.shape[1])]
pd.DataFrame({'vif': vif[1:]}, index=X.columns).T
X.columns
df1 = pd.concat([X,df_sel['price']],axis = 1)
df1.head()
df1.isnull().sum()
X=df1.drop(['price'],1)
y = df1['price']
# RFE (Recursive Feature Elimination)
lin_reg = LinearRegression()
rfe = RFE(lin_reg, 5)
#Transforming data using RFE
X_rfe = rfe.fit_transform(X,y)
#Fitting the data to model
lin_reg.fit(X_rfe,y)
print(rfe.support_)
print(rfe.ranking_)
nof_list=np.arange(1,32)
high_score=0
#Variable to store the optimum features
nof=0
score_list =[]
for n in range(len(nof_list)):
X_train, X_test, y_train, y_test = train_test_split(X,y, test_size = 0.2, random_state = 0)
lin_reg = LinearRegression()
    rfe = RFE(lin_reg, nof_list[n])  # use the freshly created LinearRegression estimator
X_train_rfe = rfe.fit_transform(X_train,y_train)
X_test_rfe = rfe.transform(X_test)
lin_reg.fit(X_train_rfe,y_train)
score = lin_reg.score(X_test_rfe,y_test)
score_list.append(score)
if(score>high_score):
high_score = score
nof = nof_list[n]
print("Optimum number of features: %d" %nof)
print("Score with %d features: %f" % (nof, high_score))
cols = list(X.columns)
lin_reg = LinearRegression()
#Initializing RFE model
rfe = RFE(lin_reg, 20)
#Transforming data using RFE
X_rfe = rfe.fit_transform(X,y)
#Fitting the data to model
lin_reg.fit(X_rfe,y)
temp = pd.Series(rfe.support_,index = cols)
selected_features_rfe = temp[temp==True].index
print(selected_features_rfe)
X = df1[['neighbourhood_group_cleansed', 'property_type', 'room_type', 'accommodates', 'bathrooms', 'bedrooms', 'beds', 'security_deposit', 'cleaning_fee', 'guests_included', 'minimum_nights', 'number_of_reviews', 'review_scores_cleanliness', 'review_scores_checkin', 'review_scores_location', 'require_guest_profile_picture', 'calculated_host_listings_count', 'price_per_person']]
y = df1.price
X_constant = sm.add_constant(X)
lin_reg = sm.OLS(y, X_constant).fit()
predictions = lin_reg.predict(X_constant)
lin_reg.summary()
###Output
_____no_output_____
###Markdown
Ridge
###Code
from sklearn.linear_model import Ridge
ridgeReg = Ridge(alpha=1, normalize=True)
ridgeReg.fit(X_train,y_train)
pred = ridgeReg.predict(X_test)
ridgeReg.score(X_test,y_test)
ridgeReg.score(X_train,y_train)
###Output
_____no_output_____
###Markdown
Lasso
###Code
from sklearn.linear_model import Lasso
lassoReg = Lasso(alpha=18, normalize=True)
lassoReg.fit(X_train,y_train)
pred = lassoReg.predict(X_test)
lassoReg.score(X_test,y_test)
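# Sketch: report error metrics alongside R^2 so the regularised models can be
# compared on the same scale as the target (price). The helper name is an assumption.
from sklearn.metrics import mean_squared_error, mean_absolute_error
def report_errors(reg, X_eval, y_eval):
    preds = reg.predict(X_eval)
    print("RMSE: %.2f | MAE: %.2f" % (np.sqrt(mean_squared_error(y_eval, preds)),
                                      mean_absolute_error(y_eval, preds)))
report_errors(ridgeReg, X_test, y_test)
report_errors(lassoReg, X_test, y_test)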
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor
import matplotlib.gridspec as gridspec
from sklearn.model_selection import cross_val_score, train_test_split
import itertools
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold
from sklearn.model_selection import GridSearchCV
###Output
_____no_output_____
###Markdown
Hyperparameter tuning of decision tree
###Code
# Create the parameter grid
param_grid = {
'max_depth':range(5,10,5),
'min_samples_leaf': range(50, 150, 50),
'min_samples_split': range(50, 150, 50),
'criterion': ["mse", "mae"]
}
n_folds = 5
# Instantiate the grid search model
dtree = DecisionTreeRegressor()
grid_search = GridSearchCV(estimator = dtree, param_grid = param_grid,
cv = n_folds, verbose = 1)
# Fit the grid search to the data
grid_search.fit(X_train,y_train)
print("best accuracy", grid_search.best_score_)
print(grid_search.best_estimator_)
tr=grid_search.best_estimator_
tr
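# Sketch: score the tuned tree on the held-out test split for comparison with the
# earlier models (GridSearchCV refits the best estimator on the full training data
# by default, so tr is ready to use).
print("Tuned decision tree test R^2: %.3f" % tr.score(X_test, y_test))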
###Output
_____no_output_____
###Markdown
Hyperparameter tuning of random forest
###Code
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV
# Create the parameter grid based on the results of random search
param_grid = {
'bootstrap': [True],
'max_depth': [80, 90,],
'max_features': [2, 3],
'min_samples_leaf': [3, 4, 5],
'min_samples_split': [8, 10, 12],
'n_estimators': [100, 200]
}
# Create a based model
rf = RandomForestRegressor()
# Instantiate the grid search model
grid_search = GridSearchCV(estimator =rf, param_grid = param_grid,
cv = 3, n_jobs = -1, verbose = 2)
grid_search.fit(X_train,y_train)
print("best accuracy", grid_search.best_score_)
print(grid_search.best_estimator_)
rft=grid_search.best_estimator_
rft
###Output
_____no_output_____
###Markdown
Bagging
###Code
from sklearn.ensemble import RandomForestRegressor
clf1 = DecisionTreeRegressor(max_depth=1)
clf2 = LinearRegression()
clf3 = Ridge()
clf4 = Lasso()
bagging1 = BaggingRegressor(base_estimator=clf1, n_estimators=10, max_samples=0.8, max_features=0.8)
bagging2 = BaggingRegressor(base_estimator=clf2, n_estimators=10, max_samples=0.8, max_features=0.8)
bagging3 = BaggingRegressor(base_estimator=clf3, n_estimators=10, max_samples=0.8, max_features=0.8)
bagging4 = BaggingRegressor(base_estimator=clf4, n_estimators=10, max_samples=0.8, max_features=0.8)
label = ['Decision Tree','Bagging Tree','Linear','bagg_lr','Ridge','bagg_ridge','Lasso','bagg_lasso']
clf_list = [clf1,bagging1,clf2,bagging2,clf3,bagging3,clf4,bagging4]
grid = itertools.product([0,1],repeat=4)
for clf, label, grd in zip(clf_list, label, grid):
scores =cross_val_score(clf,X_train,y_train, cv=10)
print ("Accuracy: %.2f (+/- %.2f) [%s]" %(scores.mean(), scores.std(), label))
clf.fit(X_train, y_train)
from sklearn.ensemble import BaggingRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor
import matplotlib.gridspec as gridspec
from sklearn.model_selection import cross_val_score, train_test_split
import itertools
from sklearn.linear_model import LinearRegression
import xgboost
from sklearn.metrics import explained_variance_score
xgb = xgboost.XGBRegressor(n_estimators=100, learning_rate=0.08, gamma=0, subsample=0.75,
colsample_bytree=1, max_depth=7)
xgb.fit(X_train,y_train)
predictions = xgb.predict(X_test)
print(explained_variance_score(y_test,predictions))
accuracy = explained_variance_score(y_test, predictions)
print("Accuracy: %.2f%%" % (accuracy * 100.0))
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor
bag_tree = BaggingRegressor(DecisionTreeRegressor(),
max_features=0.8, n_estimators=200,
random_state=0)
dtree= DecisionTreeRegressor()
bag_tree.fit(X_train, y_train)
bag_tree.score(X_test, y_test)
from sklearn.ensemble import AdaBoostRegressor
ada_clf=AdaBoostRegressor(base_estimator=DecisionTreeRegressor(), learning_rate=1.0, loss='linear',
n_estimators=100, random_state=0)
ada_clf.fit(X_train, y_train)
ada_clf.score(X_test, y_test)
bag_tree = BaggingRegressor(RandomForestRegressor(),
max_features=0.8, n_estimators=200,
random_state=0)
rf= RandomForestRegressor()
bag_tree.fit(X_train, y_train)
bag_tree.score(X_test, y_test)
bag_tree.score(X_train, y_train)
from sklearn.ensemble import AdaBoostRegressor
regr_1 = DecisionTreeRegressor(max_depth=4)
regr_2 =AdaBoostRegressor(DecisionTreeRegressor(max_depth=4),
n_estimators=10)
num_est = [1, 2, 3, 10]
label = ['AdaBoost (n_est=1)', 'AdaBoost (n_est=2)', 'AdaBoost (n_est=3)', 'AdaBoost (n_est=10)']
print(X.shape)
print(y.shape)
clf1 = DecisionTreeRegressor(max_depth=1)
clf2 = LinearRegression()
clf3 = Ridge()
clf4 = Lasso()
boster1 = AdaBoostRegressor(base_estimator=clf1, n_estimators=10)
boster2 = AdaBoostRegressor(base_estimator=clf2, n_estimators=10)
boster3 = AdaBoostRegressor(base_estimator=clf3, n_estimators=10)
boster4 = AdaBoostRegressor(base_estimator=clf4, n_estimators=10)
label = ['Decision Tree','Bos_Tree','Linear','bos_lr','Ridge','bos_ridge','Lasso','bos_lasso']
clf_list = [clf1,boster1,clf2,boster2,clf3,boster3,clf4,boster4]
grid = itertools.product([0,1],repeat=4)
for clf, label, grd in zip(clf_list, label, grid):
scores =cross_val_score(clf,X_train,y_train, cv=10)
print ("Accuracy: %.2f (+/- %.2f) [%s]" %(scores.mean(), scores.std(), label))
clf.fit(X_train, y_train)
###Output
Accuracy: 0.10 (+/- 0.18) [Decision Tree]
Accuracy: 0.13 (+/- 0.33) [Bos_Tree]
Accuracy: 0.66 (+/- 0.22) [Linear]
Accuracy: -1.13 (+/- 1.01) [bos_lr]
Accuracy: 0.66 (+/- 0.22) [Ridge]
Accuracy: -1.02 (+/- 0.90) [bos_ridge]
Accuracy: 0.66 (+/- 0.22) [Lasso]
Accuracy: -1.63 (+/- 1.98) [bos_lasso]
|
day12/Lab/lab11-problem1-RedBlackImage.ipynb | ###Markdown
DeepLab Demo. This demo will demonstrate the steps to run the DeepLab semantic segmentation model on sample input images.
###Code
#@title Imports
import os
from io import BytesIO
import tarfile
import tempfile
from six.moves import urllib
from matplotlib import gridspec
from matplotlib import pyplot as plt
import numpy as np
from PIL import Image
import tensorflow as tf
#@title Helper methods
class DeepLabModel(object):
"""Class to load deeplab model and run inference."""
INPUT_TENSOR_NAME = 'ImageTensor:0'
OUTPUT_TENSOR_NAME = 'SemanticPredictions:0'
INPUT_SIZE = 513
FROZEN_GRAPH_NAME = 'frozen_inference_graph'
def __init__(self, tarball_path):
"""Creates and loads pretrained deeplab model."""
self.graph = tf.Graph()
graph_def = None
# Extract frozen graph from tar archive.
tar_file = tarfile.open(tarball_path)
for tar_info in tar_file.getmembers():
if self.FROZEN_GRAPH_NAME in os.path.basename(tar_info.name):
file_handle = tar_file.extractfile(tar_info)
graph_def = tf.GraphDef.FromString(file_handle.read())
break
tar_file.close()
if graph_def is None:
raise RuntimeError('Cannot find inference graph in tar archive.')
with self.graph.as_default():
tf.import_graph_def(graph_def, name='')
self.sess = tf.Session(graph=self.graph)
def run(self, image):
"""Runs inference on a single image.
Args:
image: A PIL.Image object, raw input image.
Returns:
resized_image: RGB image resized from original input image.
seg_map: Segmentation map of `resized_image`.
"""
width, height = image.size
resize_ratio = 1.0 * self.INPUT_SIZE / max(width, height)
target_size = (int(resize_ratio * width), int(resize_ratio * height))
resized_image = image.convert('RGB').resize(target_size, Image.ANTIALIAS)
batch_seg_map = self.sess.run(
self.OUTPUT_TENSOR_NAME,
feed_dict={self.INPUT_TENSOR_NAME: [np.asarray(resized_image)]})
seg_map = batch_seg_map[0]
return resized_image, seg_map
def create_pascal_label_colormap():
"""Creates a label colormap used in PASCAL VOC segmentation benchmark.
Returns:
A Colormap for visualizing segmentation results.
"""
colormap = np.zeros((256, 3), dtype=int)
ind = np.arange(256, dtype=int)
for shift in reversed(range(8)):
for channel in range(3):
colormap[:, channel] |= ((ind >> channel) & 1) << shift
ind >>= 3
# Change color maps here
objects = np.array([[0, 0, 0]] * 255)
background = np.array([[90, 11, 0]])
colormap = np.concatenate((background, objects))
return colormap
def label_to_color_image(label):
"""Adds color defined by the dataset colormap to the label.
Args:
label: A 2D array with integer type, storing the segmentation label.
Returns:
result: A 2D array with floating type. The element of the array
is the color indexed by the corresponding element in the input label
to the PASCAL color map.
Raises:
ValueError: If label is not of rank 2 or its value is larger than color
map maximum entry.
"""
if label.ndim != 2:
raise ValueError('Expect 2-D input label')
colormap = create_pascal_label_colormap()
if np.max(label) >= len(colormap):
raise ValueError('label value too large.')
return colormap[label]
def vis_segmentation(image, seg_map):
"""Visualizes input image, segmentation map and overlay view."""
plt.figure(figsize=(15, 5))
grid_spec = gridspec.GridSpec(1, 4, width_ratios=[6, 6, 6, 1])
plt.subplot(grid_spec[0])
plt.imshow(image)
plt.axis('off')
plt.title('input image')
plt.subplot(grid_spec[1])
seg_image = label_to_color_image(seg_map).astype(np.uint8)
plt.imshow(seg_image)
plt.axis('off')
plt.title('segmentation map')
plt.subplot(grid_spec[2])
plt.imshow(image)
plt.imshow(seg_image, alpha=0.7)
plt.axis('off')
plt.title('segmentation overlay')
unique_labels = np.unique(seg_map)
ax = plt.subplot(grid_spec[3])
plt.imshow(
FULL_COLOR_MAP[unique_labels].astype(np.uint8), interpolation='nearest')
ax.yaxis.tick_right()
plt.yticks(range(len(unique_labels)), LABEL_NAMES[unique_labels])
plt.xticks([], [])
ax.tick_params(width=0.0)
plt.grid('off')
plt.show()
LABEL_NAMES = np.asarray([
'background', 'aeroplane', 'bicycle', 'bird', 'boat', 'bottle', 'bus',
'car', 'cat', 'chair', 'cow', 'diningtable', 'dog', 'horse', 'motorbike',
'person', 'pottedplant', 'sheep', 'sofa', 'train', 'tv'
])
FULL_LABEL_MAP = np.arange(len(LABEL_NAMES)).reshape(len(LABEL_NAMES), 1)
FULL_COLOR_MAP = label_to_color_image(FULL_LABEL_MAP)
#@title Select and download models {display-mode: "form"}
MODEL_NAME = 'xception_coco_voctrainaug' # @param ['mobilenetv2_coco_voctrainaug', 'mobilenetv2_coco_voctrainval', 'xception_coco_voctrainaug', 'xception_coco_voctrainval']
_DOWNLOAD_URL_PREFIX = 'http://download.tensorflow.org/models/'
_MODEL_URLS = {
'mobilenetv2_coco_voctrainaug':
'deeplabv3_mnv2_pascal_train_aug_2018_01_29.tar.gz',
'mobilenetv2_coco_voctrainval':
'deeplabv3_mnv2_pascal_trainval_2018_01_29.tar.gz',
'xception_coco_voctrainaug':
'deeplabv3_pascal_train_aug_2018_01_04.tar.gz',
'xception_coco_voctrainval':
'deeplabv3_pascal_trainval_2018_01_04.tar.gz',
}
_TARBALL_NAME = 'deeplab_model.tar.gz'
model_dir = tempfile.mkdtemp()
tf.gfile.MakeDirs(model_dir)
download_path = os.path.join(model_dir, _TARBALL_NAME)
print('downloading model, this might take a while...')
urllib.request.urlretrieve(_DOWNLOAD_URL_PREFIX + _MODEL_URLS[MODEL_NAME],
download_path)
print('download completed! loading DeepLab model...')
MODEL = DeepLabModel(download_path)
print('model loaded successfully!')
###Output
downloading model, this might take a while...
download completed! loading DeepLab model...
model loaded successfully!
###Markdown
Run on sample images. Select one of the sample images (leave `IMAGE_URL` empty) or feed any internet image URL for inference. Note that we are using single-scale inference in the demo for fast computation, so the results may differ slightly from the visualizations in the [README](https://github.com/tensorflow/models/blob/master/research/deeplab/README.md), which uses multi-scale and left-right flipped inputs.
###Code
#@title Run on sample images {display-mode: "form"}
SAMPLE_IMAGE = 'image1' # @param ['image1', 'image2', 'image3']
IMAGE_URL = 'https://scontent.fbkk9-2.fna.fbcdn.net/v/t1.0-9/10696383_10152401136638175_5508535560271200458_n.jpg?_nc_cat=109&_nc_eui2=AeEUgu1BU9jkKV-efr73ECfAA_YmMOzfAYOd7dQZl-b3AhvPQyKXCd3e0ChuGDv3lCpvxW5f0ReOogpglFRmOAxDEJRvJFDaR9LPRHcGIdREZA&_nc_ht=scontent.fbkk9-2.fna&oh=64229db2a580fcba1ce12cb4449ecd21&oe=5D0B607C' #@param {type:"string"}
_SAMPLE_URL = ('https://github.com/tensorflow/models/blob/master/research/'
'deeplab/g3doc/img/%s.jpg?raw=true')
def run_visualization(url):
"""Inferences DeepLab model and visualizes result."""
try:
f = urllib.request.urlopen(url)
jpeg_str = f.read()
original_im = Image.open(BytesIO(jpeg_str))
except IOError:
print('Cannot retrieve image. Please check url: ' + url)
return
print('running deeplab on image %s...' % url)
resized_im, seg_map = MODEL.run(original_im)
vis_segmentation(resized_im, seg_map)
return seg_map
image_url = IMAGE_URL or _SAMPLE_URL % SAMPLE_IMAGE
segmap = run_visualization(image_url)
seg_image = label_to_color_image(segmap).astype(np.uint8)
plt.grid('off')
plt.imshow(seg_image)
import cv2
from PIL import Image
seg_image_pil = Image.fromarray(seg_image)
imgray = cv2.cvtColor(seg_image, cv2.COLOR_BGR2GRAY)
image_blurred = cv2.GaussianBlur(imgray, (5, 5), 0)
image_blurred = cv2.dilate(image_blurred, None)
ret, thresh = cv2.threshold(image_blurred, 8, 255, 0)
im2, contours, hierarchy = cv2.findContours(thresh, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
contours_area = []
for contour in contours:
contours_area.append(cv2.contourArea(contour))
largest_contour_index = contours_area.index(max(contours_area))
plt.grid('off')
plt.imshow(image_blurred)
len(contours)
cv2.drawContours(seg_image, contours, -1, (179,48,9), 3)
cv2.drawContours(seg_image, contours, -1, (163,73,49), 1)
plt.grid('off')
plt.imshow(seg_image)
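# Sketch: the largest contour (by area) was identified above but not used; draw it
# alone on a fresh copy of the coloured segmentation for comparison.
seg_largest = label_to_color_image(segmap).astype(np.uint8)
cv2.drawContours(seg_largest, contours, largest_contour_index, (255, 255, 255), 2)
plt.grid('off')
plt.imshow(seg_largest)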
###Output
_____no_output_____ |
notebooks/plot_tracks.ipynb | ###Markdown
Plot data tracks (e.g., from ChIP-seq)
###Code
from __future__ import division
%pylab inline
import matplotlib.gridspec as gridspec
import matplotlib.ticker as ticker
from scipy.signal import medfilt
import seaborn as sns
from Bio import SeqIO
import glob
sns.set_style('ticks')
sns.set_context('paper')
def parse_sist_melt(fn):
"""SIST melt file to numpy array"""
data = []
with open(fn,'r') as f:
for line in f:
line = line.strip()
if 'Position' in line or 'WARNING' in line:
continue
line = line.split()
line[0],line[1],line[2] = int(line[0])-1,float(line[1]),float(line[2])
data.append(line[1])
return np.array(data)
def parse_sist_cruciform(fn):
"""SIST cruciform file to numpy array"""
data = []
with open(fn,'r') as f:
for line in f:
line = line.strip()
if 'Position' in line or 'WARNING' in line:
continue
line = line.split()
line[0],line[1] = int(line[0])-1,float(line[1])
data.append(line[1])
return np.array(data)
def stitch_sist(fns,dtype='melt',maxe = 200000):
"""Stitch together a SIST file based in information contained in
the filename:
# Example fn format: II_1603582-1643583_0.algM.txt
"""
data = None
for fn in fns:
fn_split = fn.split('_')
fn_split[-1] = fn_split[-1].split('.')[0]
offset = int(fn_split[-1])
try:
[s,e] = fn_split[-2].split('-')
s,e = int(s),int(e)
except:
s = 0
e = maxe
n = e-s+1
if data is None:
data = np.zeros(n)
if dtype=='melt':
sdata = parse_sist_melt(fn)
else:
sdata = parse_sist_cruciform(fn)
data[offset:offset+len(sdata)] = np.maximum(sdata,data[offset:offset+len(sdata)])
return data
def movingaverage (values, window):
"""Compute the moving average for a specified window width"""
weights = np.repeat(1.0, window)/window
sma = np.convolve(values, weights, 'valid')
return sma
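# Quick sanity check (sketch): a window of 3 over [1, 2, 3, 4, 5] yields [2, 3, 4].
print(movingaverage(np.array([1, 2, 3, 4, 5], dtype=float), 3))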
def major_formatter(x,pos):
xs = np.floor(np.log(abs(x)+1))
if xs <= 2:
xs = 0
elif xs >= 3 and xs < 6:
xs = 2
elif xs >= 6:
xs = 5
return "%.1f" % (x/(10**xs))
def format_fill_axis(ax,data,xvals=None,xlim=None,ylim=None,xlabel=None,
ylabel=None,xticks=None,yticks=None,xticklabels=None,
yticklabels=None,ticklen=5,color='black',pos=''):
"""Format an axis object to produce a filled in ChIP-seq track;
specifiy pos='bottom' to create the bottom-most track, which contains
the x-axis"""
if xvals is None:
xvals = np.arange(len(data))
ax.fill_between(xvals,data,0,facecolor=color,lw=1,edgecolor=color,rasterized=True)
if xlim is not None:
ax.set_xlim(xlim)
else:
ax.set_xlim(np.min(xvals),np.max(xvals))
if ylim is not None:
ax.set_ylim(ylim)
else:
ax.set_ylim(np.min(data),np.max(data))
if xticks is not None and xticklabels is not None:
ax.set_xticks(xticks)
ax.set_xticklabels(xticklabels)
else:
ax.xaxis.set_major_locator(ticker.AutoLocator())
ax.xaxis.set_major_formatter(ticker.ScalarFormatter())
setp(ax.get_yticklabels(),fontsize=10)
ax.tick_params('y',length=ticklen)
if pos != 'bottom':
setp(ax.get_xticklabels(),visible=False)
ax.tick_params('x',length=0)
else:
setp(ax.get_xticklabels(),fontsize=10)
ax.tick_params('x',length=ticklen)
if xlabel is not None:
ax.set_xlabel(xlabel,size=10)
if ylabel is not None:
ax.set_ylabel(ylabel,rotation=0,size=12,ha='right')
ax.yaxis.set_label_coords(-0.2,0.25)
if yticks is not None:
ax.set_yticks(yticks)
else:
ax.yaxis.set_major_locator(ticker.MaxNLocator(3,prune=None))
ax.yaxis.set_major_formatter(ticker.ScalarFormatter())
if yticklabels is not None:
ax.set_yticklabels(yticklabels)
if pos != 'bottom':
sns.despine(ax=ax,bottom=True,trim=False)
else:
sns.despine(ax=ax,trim=False)
ax.xaxis.offsetText.set_visible(False)
def format_img_axis(ax,data,xlabel=None,ylabel=None,xticks=None,colormap=None,ticklen=5,pos='mid'):
ax.imshow(data.reshape(1,len(data)),aspect='auto',cmap=colormap,rasterized=True)
ax.set_yticks([])
if pos != 'bottom':
ax.tick_params('x',size=0)
setp(ax.get_xticklabels(), visible=False)
else:
if xticks is None:
ax.xaxis.set_major_locator(ticker.AutoLocator())
ax.xaxis.set_major_formatter(ticker.FuncFormatter(major_formatter))
else:
ax.set_xticks(xticks)
ax.tick_params('x',size=ticklen)
if xlabel is not None:
ax.set_xlabel(xlabel,size=10)
setp(ax.get_xticklabels(),fontsize=10)
if ylabel is not None:
ax.set_ylabel(ylabel,rotation=0,size=12,ha='right')
ax.yaxis.set_label_coords(-0.2,0)
setp(ax.get_yticklabels(),fontsize=10)
ax.xaxis.offsetText.set_visible(False)
def bed2arr(fn,mine,maxe,ignorescore=False,chrom=None):
arr = np.zeros(maxe-mine+1)
with open(fn,'r') as f:
for line in f:
line = line.strip().split()
if chrom is not None and line[0] != chrom:
continue
try:
line[1],line[2],line[3] = int(line[1]),int(line[2]),float(line[3])
incr = float(line[3])
except:
line[1],line[2] = int(line[1]),int(line[2])
incr = 1
s = line[1]-mine
e = line[2]-mine+1
arr[s:e] += incr
return arr
def properbed2arr(fn,mine,maxe,chrom=None,useScore=False):
arr = np.zeros(maxe-mine+1)
with open(fn,'r') as f:
for line in f:
line = line.strip().split()
if chrom is not None and line[0] != chrom:
continue
line[1],line[2],line[4] = int(line[1]),int(line[2]),float(line[4])
s = line[1]-mine
e = line[2]-mine+1
if (s < 0 or e >= len(arr)):
continue
if (useScore):
arr[s:e] = np.maximum(arr[s:e],line[4])
else:
arr[s:e] += 1
return arr
def read_bed_coords(fn):
coords = []
with open(fn,'r') as f:
for line in f:
line = line.strip().split()
line[1],line[2] = int(line[1]),int(line[2])
coords.append((line[0],line[1],line[2]))
return coords
###Output
Populating the interactive namespace from numpy and matplotlib
###Markdown
Chicken
###Code
chrom='chr5'
s=3007475
e=3087475
cenpa = bed2arr('../data/tracks/dt40.cenpa.avg.bed',mine=s,maxe=e,chrom=chrom,ignorescore=True)
melt = stitch_sist(glob.glob('../data/tracks/chicken.cen.unique.mid.win40000/'+chrom+'*algM.txt'),'melt')
cruc = stitch_sist(glob.glob('../data/tracks/chicken.cen.unique.mid.win40000/'+chrom+'*algC.txt'),'cruc')
g = glob.glob('../data/palindrome/chicken/unique_cen/'+chrom+'*emboss.bed')
dyads = bed2arr(g[0],0,e-s,ignorescore=False)
xts = np.linspace(0,e-s+1,5)
plt.figure(figsize=(2.5,3.2))
G = gridspec.GridSpec(4,1,height_ratios=[0.5,0.5,0.5,0.5],hspace=0.5)
ax1 = plt.subplot(G[0,:])
format_fill_axis(ax1,cenpa,ylabel='CENP-A ChIP',ylim=[0,25000],yticklabels=[0,250],yticks=[0,25000],color='#e31a1c',
xticks=xts)
ax2 = plt.subplot(G[1,:],sharex = ax1)
format_fill_axis(ax2,dyads,ylabel='Dyad symmetry',ylim=[5,50],yticklabels=[5,50],yticks=[5,50],color='black')
ax3 = plt.subplot(G[2,:],sharex = ax1)
format_fill_axis(ax3,melt,ylabel='DNA melting',ylim=[0,1],yticklabels=[0,1],yticks=[0,1],color='#1f78b4')
ax4 = plt.subplot(G[3,:],sharex = ax1)
format_fill_axis(ax4,cruc,ylabel='Cruc. extrusion',ylim=[0,1],yticklabels=[0,1],yticks=[0,1],color='#b15928',
pos='bottom',xlabel='Position (Mb)',xticks=xts)
newlabels = ["%.2f" % ((s+x)/1000000) for x in xts]
ax4.set_xticklabels(newlabels)
ax1.set_title('chr5',size=12)
plt.savefig('../figures/chicken_cen5.svg',dpi=300)
###Output
_____no_output_____
###Markdown
Mouse
###Code
chrom='gnl|ti|184700396'
s=0
e=1007
# chrom='gnl|ti|71556253'
# s=0
# e=1012
# chrom ='gnl|ti|19360812'
# s =0
# e=1060
rname = chrom.replace('|','_')
cenpa = bed2arr('../data/tracks/misat.per_base.cvg',mine=s,maxe=e,chrom=chrom,ignorescore=False)
misat = properbed2arr('../data/tracks/misat.118-122.1kb.misat.blast.bed',s,e,chrom=chrom,useScore=False)
misat = misat >= 1
cenpb = properbed2arr('../data/tracks/misat_118-122.1kb.cenp_b.fimo.bed',s,e,chrom=chrom,useScore=False)
cenpb = cenpb >=1
ssdna_activ = bed2arr('../data/tracks/misat.ssdna_activ.cvg',mine=s,maxe=e,chrom=chrom,ignorescore=False)
ssdna_control = bed2arr('../data/tracks/misat.ssdna_control.cvg',mine=s,maxe=e,chrom=chrom,ignorescore=False)
ssdna = np.log2(ssdna_activ+1) - np.log2(ssdna_control+1)
g = glob.glob('../data/palindrome/mouse/'+rname+'*emboss.bed')
dyads = bed2arr(g[0],0,e-s+1,ignorescore=False)
xts = np.linspace(0,1000,5)
plt.figure(figsize=(1.6,2))
G = gridspec.GridSpec(5,1,height_ratios=[0.2,0.2,0.5,0.5,0.5],hspace=0.6)
ax1 = plt.subplot(G[0,:])
format_img_axis(ax1,misat,colormap=cm.binary,ylabel='MiSat')
ax2 = plt.subplot(G[1,:],sharex=ax1)
format_img_axis(ax2,cenpb,colormap=cm.binary,ylabel='CENP-B boxes')
ax3 = plt.subplot(G[2,:],sharex=ax1)
format_fill_axis(ax3,np.log(cenpa+1),ylabel='CENP-A ChIP',ylim=[0,15],yticklabels=[0,15],yticks=[0,15],color='#e31a1c')
ax4 = plt.subplot(G[3,:],sharex=ax1)
format_fill_axis(ax4,ssdna,ylabel='Permanganate-seq',ylim=[0,5],yticklabels=[0,5],yticks=[0,5],color='#fb9a99',
xlabel='Position (bp)')
ax5 = plt.subplot(G[4,:],sharex=ax1)
format_fill_axis(ax5,dyads,ylabel='Dyad symmetry',xlim=[0,e],ylim=[4,20],yticklabels=[4,20],yticks=[4,20],color='black',
xlabel='Position (bp)', pos='bottom')
ax1.set_title(chrom,size=12)
plt.savefig('../figures/mouse'+rname+'.svg',dpi=300)
###Output
_____no_output_____
###Markdown
Pombe
###Code
coords = [('I',3736553,3806554),('II',1588582,1658583),('III',1068953,1138954)]
N = len(coords)
cenpa_mu = None
melt_mu = None
cruc_mu = None
dyad_mu = None
for chrom,s,e in coords:
cenpa = bed2arr('../data/tracks/cnp1.pombe.cov.bed',mine=s,maxe=e,chrom=chrom)
melt = stitch_sist(glob.glob('../data/tracks/pombe.cen.mid.win70k_sist/'+chrom+'_'+str(s)+'-*algM.txt'),'melt')
cruc = stitch_sist(glob.glob('../data/tracks/pombe.cen.mid.win70k_sist/'+chrom+'_'+str(s)+'-*algC.txt'),'cruc')
dyads = bed2arr('../data/palindrome/pombe/pombe_cen/'+chrom+'_'+str(s)+'_'+str(e)+'.emboss.bed',0,e-s,ignorescore=False)
if cenpa_mu is None:
cenpa_mu = cenpa
melt_mu = melt
cruc_mu = cruc
dyad_mu = dyads
else:
cenpa_mu += cenpa
melt_mu += melt
cruc_mu += cruc
dyad_mu += dyads
cenpa_mu/=N
melt_mu/=N
cruc_mu/=N
dyad_mu/=N
plt.figure(figsize=(2.25,3.2))
G = gridspec.GridSpec(4,1,height_ratios=[0.5,0.5,0.5,0.5],hspace=0.5)
xts = [0,17500,35000,52500,70000]
labs = [-35,-17.5,0,17.5,35]
ax1 = plt.subplot(G[0,:])
format_fill_axis(ax1,cenpa_mu,ylabel='Cnp1 ChIP',ylim=[0,50000],yticklabels=[0,50],yticks=[0,50000],color='#e31a1c'
,xticks=xts,xticklabels=labs)
ax2 = plt.subplot(G[1,:],sharex=ax1)
format_fill_axis(ax2,dyad_mu,ylabel='Dyad symmetry',ylim=[4,50],yticklabels=[4,50],yticks=[4,50],color='black')
ax3 = plt.subplot(G[2,:],sharex=ax1)
format_fill_axis(ax3,melt_mu,ylabel='DNA melting',ylim=[0,0.5],yticklabels=[0,0.5],yticks=[0,0.5],color='#1f78b4')
ax4 = plt.subplot(G[3,:],sharex=ax1)
format_fill_axis(ax4,cruc_mu,ylabel='Cruc. extrusion',ylim=[0,0.5],yticklabels=[0,0.5],yticks=[0,0.5],color='#b15928',
pos='bottom',xticks=xts,xticklabels=labs,xlabel=('Distance from cen midpoint (kb)'))
ax4.set_xticklabels(labs)
plt.savefig('../figures/pombe_cen_avg.svg',dpi=300)
###Output
_____no_output_____
###Markdown
S. cerevisiae
###Code
cse4_mu = np.zeros(2002)
dyad_mu = np.zeros(2002)
ssdna_mu = np.zeros(2001)
cruc_mu = np.zeros(2001)
coords = read_bed_coords('../data/yeast/sist/sacCer2.cen.mid.win.1kb.bed')
for c in coords:
ch,s,e = c[0],c[1],c[2]
cse4_mu += bed2arr('../data/yeast/cse4_krassovsky.cov.bed',s,e,chrom=ch)
palfn = '../data/yeast/emboss/'+ch+'.sc2.palindrome.min5.max100.gap10.mismatch0.ovl.bedgraph'
dyad_mu += bed2arr(palfn,s,e,ignorescore=False,chrom=ch)
ssdna_mu += parse_sist_melt('../data/yeast/sist/sc2.'+ch+'.algM.txt')
cruc_mu += parse_sist_cruciform('../data/yeast/sist/sc2.'+ch+'.algC.txt')
cse4_mu /= len(coords)
dyad_mu /= len(coords)
ssdna_mu /= len(coords)
ssdna_mu /= len(coords)
plt.figure(figsize=(2.25,3.2))
G = gridspec.GridSpec(4,1,height_ratios=[0.5,0.5,0.5,0.5],hspace=0.5)
xts = [0,500,1000,1500,2000]
labs = [-1,-0.5,0,0.5,1]
ax1 = plt.subplot(G[0,:])
format_fill_axis(ax1,cse4_mu,ylabel='CenH3 ChIP',ylim=[0,75000],yticklabels=[0,75],yticks=[0,75000],color='#e31a1c'
,xticks=xts,xticklabels=labs)
ax2 = plt.subplot(G[1,:],sharex=ax1)
format_fill_axis(ax2,dyad_mu,ylabel='Dyad symmetry',ylim=[3,12],yticklabels=[3,12],yticks=[3,12],color='black')
ax3 = plt.subplot(G[2,:],sharex=ax1)
format_fill_axis(ax3,ssdna_mu,ylabel='DNA melting',ylim=[0,0.1],yticklabels=[0,0.1],yticks=[0,0.1],color='#1f78b4')
ax4 = plt.subplot(G[3,:],sharex=ax1)
format_fill_axis(ax4,cruc_mu,ylabel='Cruc. extrusion',ylim=[0,1],yticklabels=[0,1],yticks=[0,1],color='#b15928',
pos='bottom',xticks=xts,xticklabels=labs,xlabel=('Distance from cen midpoint (kb)'))
ax4.set_xticklabels(labs)
ax1.set_title(r'$\it{S. cerevisiae}$',size=12)
plt.savefig('../figures/sc2_average.svg',dpi=300)
###Output
_____no_output_____
###Markdown
Human neocen
###Code
chrom='chr4'
s = 88100000
e = 88600000
cenpa = bed2arr('../data/tracks/neocen/chr4.pdcn4_cenpa.cov.bed',s,e)
pal = bed2arr('../data/tracks/neocen/PDNC4.emboss.bed',0,e-s,ignorescore=True)
gapf = bed2arr('../data/tracks/neocen/chr4.gapf.cov.bed',s,e)
melt = stitch_sist(glob.glob('../data/human_neocen_sist/'+chrom+'_*algM.txt'),'melt')[50000:-50001]
cruc = stitch_sist(glob.glob('../data/human_neocen_sist/'+chrom+'_*algC.txt'),'cruc')[50000:-50001]
plt.figure(figsize=(2.8,4))
G = gridspec.GridSpec(5,1,height_ratios=[0.5,0.5,0.5,0.5,0.5],hspace=0.5)
ax1 = plt.subplot(G[0,:])
format_fill_axis(ax1,cenpa,xvals=np.arange(s,e+1),ylabel='CENP-A ChIP',color='#e31a1c',ylim=[0,325],yticks=[0,325])
ax2 = plt.subplot(G[1,:],sharex=ax1)
format_fill_axis(ax2,gapf,xvals=np.arange(s,e+1),ylabel='GAP-seq',color='#1f78b4',ylim=[2,15],yticks=[2,15])
ax3 = plt.subplot(G[2,:],sharex=ax1)
format_fill_axis(ax3,pal,xvals=np.arange(s,e+1),ylabel='Dyad symmetry',color='black',ylim=[0,100],yticks=[0,100])
ax4 = plt.subplot(G[3,:],sharex=ax1)
format_fill_axis(ax4,melt,xvals=np.arange(s,e+1),ylabel='DNA melting',color='#1f78b4',ylim=[0,1],yticks=[0,1])
ax5 = plt.subplot(G[4,:],sharex=ax1)
format_fill_axis(ax5,cruc,xvals=np.arange(s,e+1),ylabel='Cruc. extrusion',color='#b15928',ylim=[0,1],yticks=[0,1],pos='bottom',
xlabel = 'Position (Mb)')
newlabels = ["%.2f" % ((x)/1000000) for x in ax3.get_xticks()]
ax4.set_xticklabels(newlabels)
ax1.set_title('chr4 neocentromere',size=12)
plt.savefig('../figures/chr4neocen.svg',dpi=300)
chrom='chr13'
s = 97650000
e = 97850000
cenpa = bed2arr('../data/neocen/chr13.ims13q_cenpa.cov.bed',s,e)
pal = bed2arr('../data/neocen/IMS13q.emboss.bed',0,e-s,ignorescore=True)
gapf = bed2arr('../data/neocen/chr13.gapf.cov.bed',s,e)
melt = stitch_sist(glob.glob('../data/human_neocen_sist/'+chrom+'_*algM.txt'),'melt')[200000:-200001]
cruc = stitch_sist(glob.glob('../data/human_neocen_sist/'+chrom+'_*algC.txt'),'cruc')[200000:-200001]
plt.figure(figsize=(2.8,4))
G = gridspec.GridSpec(5,1,height_ratios=[0.5,0.5,0.5,0.5,0.5],hspace=0.5)
ax1 = plt.subplot(G[0,:])
format_fill_axis(ax1,cenpa,xvals=np.arange(s,e+1),ylabel='CENP-A ChIP',color='#e31a1c',ylim=[0,2000],yticks=[0,2000])
ax2 = plt.subplot(G[1,:],sharex=ax1)
format_fill_axis(ax2,gapf,xvals=np.arange(s,e+1),ylabel='GAP-seq',color='#1f78b4',ylim=[2,15],yticks=[2,15])
ax3 = plt.subplot(G[2,:],sharex=ax1)
format_fill_axis(ax3,pal,xvals=np.arange(s,e+1),ylabel='Dyad symmetry',color='black',ylim=[0,100],yticks=[0,100])
ax4 = plt.subplot(G[3,:],sharex=ax1)
format_fill_axis(ax4,melt,xvals=np.arange(s,e+1),ylabel='DNA melting',color='#1f78b4',ylim=[0,1],yticks=[0,1])
ax5 = plt.subplot(G[4,:],sharex=ax1)
format_fill_axis(ax5,cruc,xvals=np.arange(s,e+1),ylabel='Cruc. extrusion',color='#b15928',ylim=[0,1],yticks=[0,1],pos='bottom',
xlabel = 'Position (Mb)')
newlabels = ["%.2f" % ((x)/1000000) for x in ax3.get_xticks()]
ax4.set_xticklabels(newlabels)
ax1.set_title('chr13 neocentromere',size=12)
plt.savefig('../figures/chr13neocen.svg',dpi=300)
chrom='chr8'
s = 86400000
e = 87000000
cenpa = bed2arr('../data/neocen/chr8.ims13q_cenpa.cov.bed',s,e)
pal = bed2arr('../data/neocen/MS4221q.emboss.bed',0,e-s,ignorescore=True)
gapf = bed2arr('../data/neocen/chr8.gapf.cov.bed',s,e)
melt = stitch_sist(glob.glob('../data/human_neocen_sist/'+chrom+'_*algM.txt'),'melt')[:-1]
cruc = stitch_sist(glob.glob('../data/human_neocen_sist/'+chrom+'_*algC.txt'),'cruc')[:-1]
plt.figure(figsize=(2.8,4))
G = gridspec.GridSpec(5,1,height_ratios=[0.5,0.5,0.5,0.5,0.5],hspace=0.5)
ax1 = plt.subplot(G[0,:])
format_fill_axis(ax1,cenpa,xvals=np.arange(s,e+1),ylabel='CENP-A ChIP',color='#e31a1c',ylim=[0,50],yticks=[0,50])
ax2 = plt.subplot(G[1,:],sharex=ax1)
format_fill_axis(ax2,gapf,xvals=np.arange(s,e+1),ylabel='GAP-seq',color='#1f78b4',ylim=[2,15],yticks=[2,15])
ax3 = plt.subplot(G[2,:],sharex=ax1)
format_fill_axis(ax3,pal,xvals=np.arange(s,e+1),ylabel='Dyad symmetry',color='black',ylim=[0,100],yticks=[0,100])
ax4 = plt.subplot(G[3,:],sharex=ax1)
format_fill_axis(ax4,melt,xvals=np.arange(s,e+1),ylabel='DNA melting',color='#1f78b4',ylim=[0,1],yticks=[0,1])
ax5 = plt.subplot(G[4,:],sharex=ax1)
format_fill_axis(ax5,cruc,xvals=np.arange(s,e+1),ylabel='Cruc. extrusion',color='#b15928',ylim=[0,1],yticks=[0,1],pos='bottom',
xlabel = 'Position (Mb)')
newlabels = ["%.2f" % ((x)/1000000) for x in ax3.get_xticks()]
ax4.set_xticklabels(newlabels)
ax1.set_title('chr8 neocentromere',size=12)
plt.savefig('../figures/chr8neocen.svg',dpi=300)
###Output
_____no_output_____
###Markdown
Chicken neocen
###Code
chrom,s,e = 'chrZ',3770000,3820000
cenpa = bed2arr('../data/tracks/bm23.cenpa.neocen.avg.bed',mine=s,maxe=e,chrom=chrom,ignorescore=True)
melt = stitch_sist(glob.glob('../data/tracks/chicken.neocen.sist/*algM.txt'),'melt')
cruc = stitch_sist(glob.glob('../data/tracks/chicken.neocen.sist/*algC.txt'),'cruc')
g = glob.glob('../data/tracks/chicken_palindrome/neocen/'+chrom+'*.emboss.bed')
dyads = bed2arr(g[0],0,e-s,ignorescore=False)
plt.figure(figsize=(1.5,3.2))
G = gridspec.GridSpec(4,1,height_ratios=[0.5,0.5,0.5,0.5],hspace=0.5)
xv = np.arange(s,e+1)
ax1 = plt.subplot(G[0,:])
format_fill_axis(ax1,cenpa,xvals=xv,ylabel='CENP-A ChIP',color='#e31a1c',ylim=[0,15000],yticks=[0,15000],
yticklabels=[0,150],xlim=(s,e),xticks=[s,e])
ax2 = plt.subplot(G[1,:],sharex=ax1)
format_fill_axis(ax2,dyads,xvals=xv,ylabel='Dyad symmetry',color='black',ylim=[4,40],yticks=[4,40],yticklabels=[4,40])
ax3 = plt.subplot(G[2,:],sharex=ax1)
format_fill_axis(ax3,melt,xvals=xv,ylabel='DNA melting',color='#78aed2',ylim=[0,1],yticks=[0,1],yticklabels=[0,1])
ax4 = plt.subplot(G[3,:],sharex=ax1)
format_fill_axis(ax4,cruc,xvals=xv,ylabel='Cruc. extrusion',color='#b15928',ylim=[0,1],yticks=[0,1],yticklabels=[0,1],
pos='bottom',xlabel = 'Position (Mb)',xlim=[s,e],xticks=[s,e])
# newlabels = ["%.2f" % ((x)/1000000) for x in ax4.get_xticks()]
# ax4.set_xticklabels(newlabels)
ax4.set_xticks(np.linspace(s,e+1,4))
ax4.set_xticklabels(["%.2f" %z for z in np.linspace(s,e+1,4)/1000000])
ax1.set_title('chrZ neocentromere',size=12)
plt.savefig('../figures/chicken_cenZ.svg',dpi=300)
###Output
_____no_output_____
###Markdown
BACs
###Code
# chrom='chrUnplaced_BAC2'
# s,e = 0,3921
# chrom = 'D5Z2'
# chrom = 'D7Z2'
# s,e = 0,6205
# chrom = 'D5Z1'
s,e = 0,6295
chrom = 'DYZ3'
s,e = 0,6205
asat = properbed2arr('../data/tracks/6kb_BACs.alphoid.bed',s,e,chrom=chrom,useScore=False)
# asat = asat.reshape((1,len(asat)))
boxes = properbed2arr('../data/tracks/6kb_BACs.cenp_b.fimo.bed',s,e,chrom=chrom,useScore=False)
# boxes = boxes.reshape((1,len(boxes)))
cenpa = bed2arr('../data/tracks/huref_cenpa.bacs.cov.bed',s,e,chrom=chrom)
ssdna = bed2arr('../data/tracks/raji_ssdna.bacs.cov.5p.bed',s,e,chrom=chrom)
ssdna_s = movingaverage(ssdna,100)
dyad = bed2arr('../data/palindrome/bacs_palindrome/'+chrom+'_0_'+str(e)+'.emboss.bed',s,e,ignorescore=False)
dyad_s = medfilt(dyad,5)
melt = parse_sist_melt('../data/6kb_BACs_sist/'+chrom+'.algM.txt')
cruc = parse_sist_cruciform('../data/6kb_BACs_sist/'+chrom+'.algC.txt')
plt.figure(figsize=(2.75,3.5))
G = gridspec.GridSpec(7,1,height_ratios=[0.2,0.2,0.5,0.5,0.5,0.5,0.5],hspace=0.5)
ax1 = plt.subplot(G[0,:])
format_img_axis(ax1,asat>0,ylabel=r'$\alpha$'+'-satellite')
ax2 = plt.subplot(G[1,:],sharex=ax1)
format_img_axis(ax2,boxes>0,ylabel='CENP-B boxes')
ax3 = plt.subplot(G[2,:],sharex = ax1)
# format_fill_axis(ax3,cenpa,color='#e31a1c',ylabel='CENP-A ChIP',ylim=[0,500000],yticks=[0,500000],yticklabels=[0,50])
# format_fill_axis(ax3,cenpa,color='#e31a1c',ylabel='CENP-A ChIP',ylim=[0,10000],yticks=[0,10000],yticklabels=[0,1])
format_fill_axis(ax3,cenpa,color='#e31a1c',ylabel='CENP-A ChIP',ylim=[0,40000],yticks=[0,40000],yticklabels=[0,4])
ax4 = plt.subplot(G[3,:],sharex = ax1)
# format_fill_axis(ax4,ssdna_s,color='#fb9a99',ylabel='Permanganate-seq',ylim=[0,1000],yticks=[0,1000],yticklabels=[0,10])
# DYZ3
# format_fill_axis(ax4,ssdna_s,color='#fb9a99',ylabel='Permanganate-seq',ylim=[0,400],yticks=[0,400],yticklabels=[0,10])
ax5 = plt.subplot(G[4,:],sharex=ax1)
format_fill_axis(ax5,dyad_s,color='black',ylabel='Dyad symmetry',ylim=[4,25],yticks=[4,25])
ax6 = plt.subplot(G[5,:],sharex=ax1)
format_fill_axis(ax6,melt,color='#1f78b4',ylabel='DNA melting',ylim=[0,1],yticks=[0,1])
ax7 = plt.subplot(G[6,:],sharex=ax1)
format_fill_axis(ax7,cruc,color='#b15928',ylabel='Cruc. extrusion',ylim=[0,1],yticks=[0,1],pos='bottom',xlabel='Position (kb)')
ax7.set_xticklabels((ax6.get_xticks()/1000).astype(int))
ax1.set_title(chrom,size=12)
plt.savefig('../figures/'+chrom+'.svg',dpi=300)
###Output
_____no_output_____ |
chapter 11- A Pythonic Object/example 11-13-Saving Memory with __slots__.ipynb | ###Markdown
By default, Python stores the attributes of each instance in a dict named `__dict__`. As we saw in "Practical Consequences of How dict Works", a dict has a significant memory overhead, even with the optimizations mentioned in that section. But if you define a class attribute named `__slots__` holding a sequence of attribute names, Python uses an alternative storage model for the instance attributes: the attributes named in `__slots__` are stored in a hidden array of references that uses less memory than a dict. Example 11-13. The Pixel class uses `__slots__`.
###Code
class Pixel:
__slots__ = ('x', 'y')
p = Pixel()
p.__dict__
###Output
_____no_output_____
###Markdown
Now let’s create a subclass of Pixel to see the counterintuitive side of `__slots__`: Example 11-14. The OpenPixel is a subclass of Pixel.
###Code
class OpenPixel(Pixel):
pass
op = OpenPixel()
op.__dict__
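# Illustration (beyond the original example, added here for clarity): because
# OpenPixel does not declare __slots__ of its own, its instances still get a
# per-instance __dict__, so arbitrary attributes can be assigned even though
# the base class uses __slots__.
op.color = 'green'   # works; stored in op.__dict__ rather than in a slot
op.__dict__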
###Output
_____no_output_____ |
ConvertType.ipynb | ###Markdown
This example comes from the ML.NET documentation: https://docs.microsoft.com/en-us/dotnet/api/microsoft.ml.conversionsextensionscatalog.converttype?view=ml-dotnet
###Code
class InputData
{
public bool Feature1;
public string Feature2;
public DateTime Feature3;
public double Feature4;
}
class TransformedData : InputData
{
public float Converted1 { get; set; }
public float Converted2 { get; set; }
public float Converted3 { get; set; }
public float Converted4 { get; set; }
}
var mlContext = new MLContext(seed: 1);
var rawData = new[] {
new InputData() { Feature1 = true, Feature2 = "0.4", Feature3 = DateTime.Now, Feature4 = 0.145},
new InputData() { Feature1 = false, Feature2 = "0.5", Feature3 = DateTime.Today, Feature4 = 3.14},
new InputData() { Feature1 = false, Feature2 = "14", Feature3 = DateTime.Today, Feature4 = 0.2046},
new InputData() { Feature1 = false, Feature2 = "23", Feature3 = DateTime.Now, Feature4 = 0.1206},
new InputData() { Feature1 = true, Feature2 = "8904", Feature3 = DateTime.UtcNow, Feature4 = 8.09},
};
var data = mlContext.Data.LoadFromEnumerable(rawData);
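// ConvertType below maps each listed input column to a new output column of type
// Single (the four Converted* properties on TransformedData).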
var pipeline = mlContext.Transforms.Conversion.ConvertType(new[]
{
new InputOutputColumnPair("Converted1", "Feature1"),
new InputOutputColumnPair("Converted2", "Feature2"),
new InputOutputColumnPair("Converted3", "Feature3"),
new InputOutputColumnPair("Converted4", "Feature4"),
}, DataKind.Single);
var transformer = pipeline.Fit(data);
var transformedData = transformer.Transform(data);
mlContext.Data.CreateEnumerable<TransformedData>(transformedData, true)
###Output
_____no_output_____ |
lecture_03/03_transpose.ipynb | ###Markdown
Transpose. Transposing a matrix swaps its rows and columns; transposes are used frequently in AI code. What is a transpose? An important operation on matrices is the **transpose**. Transposing a matrix exchanges its rows and columns. Below are examples of transposition; for instance, the transpose of a matrix $A$ is written $A^{\mathrm{T}}$. $$ \begin{aligned} \\ A & = \left( \begin{array}{ccc} 1 & 2 & 3 \\ 4 & 5 & 6 \\ \end{array} \right) \\ A^{\mathrm{T}} & = \left( \begin{array}{cc} 1 & 4 \\ 2 & 5 \\ 3 & 6 \\ \end{array} \right) \\\end{aligned} $$ $$ \begin{aligned} \\ B & = \left( \begin{array}{cc} a & b \\ c & d \\ e & f \\ \end{array} \right) \\ B^{\mathrm{T}} & = \left( \begin{array}{ccc} a & c & e \\ b & d & f \\ \end{array} \right) \\\end{aligned} $$ Implementing the transpose: in NumPy, appending `.T` to the name of the array that represents a matrix gives its transpose.
###Code
import numpy as np
a = np.array([[1, 2, 3],
[4, 5, 6]])
print(a.T) # transpose
###Output
[[1 4]
[2 5]
[3 6]]
###Markdown
Implementing transpose with the matrix product: the following example transposes a NumPy array and then computes a matrix product. Appending `.T` to an array name gives the transposed matrix.
###Code
import numpy as np
a = np.array([[0, 1, 2],
[1, 2, 3]]) # 2x3
b = np.array([[0, 1, 2],
              [1, 2, 3]]) # 2x3; matrix product not possible as is
# print(np.dot(a, b)) # error
print(np.dot(a, b.T)) # transposing makes the matrix product possible
###Output
[[ 5 8]
[ 8 14]]
###Markdown
In the code above, transposing matrix `b` gives it 3 rows, which matches the number of columns of matrix `a`, so the matrix product becomes possible. Exercise: in the cell below, transpose matrix a or matrix b, and compute the matrix product of matrices a and b.
###Code
import numpy as np
a = np.array([[0, 1, 2],
[1, 2, 3]])
b = np.array([[0, 1, 2],
[2, 3, 4]])
# matrix product
print(np.dot(a, b.T))
###Output
[[ 5 11]
[ 8 20]]
|
topic_translation/dated/topic_translation_v002 translate.ipynb | ###Markdown
Text Translation - Support Both English & Chinese Inputs www.KudosData.com By: Sam GU Zhan March, 2017 Imports
###Code
# coding=UTF-8
from __future__ import division
import re
# Python2 unicode & float-division support:
# from __future__ import unicode_literals, division
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import io
# Chinese character and language processing library
import jieba
# Machine learning: sklearn classification model library
#from sklearn import linear_model
from sklearn.feature_extraction import DictVectorizer # data structure transform: convert a Dict into a sparse matrix
# from sklearn.linear_model import LogisticRegression # logistic regression classification model
# from sklearn.pipeline import make_pipeline # wrap the machine learning model workflow
# from sklearn.metrics import confusion_matrix, roc_curve, auc
# Chinese display settings
from pylab import *
mpl.rcParams['font.sans-serif'] = ['SimHei'] # set the default font
mpl.rcParams['axes.unicode_minus'] = False # fix the minus sign '-' rendering as a box when saving figures
mpl.rcParams['font.size'] = 14 # set the font size
np.random.seed(88)
###Output
_____no_output_____
###Markdown
Define Functions
###Code
# Python3
# Small Chinese word-segmentation helper; outputs a string with tokens separated by spaces
def KudosData_word_tokenizer(foo):
# remove lead & tail spaces firstly:
foo = foo.strip()
seg_token = jieba.cut(str(foo), cut_all=True)
seg_str = str(' '.join(seg_token)).strip()
return seg_str
# Python2
# Small Chinese word-segmentation helper; outputs a string with tokens separated by spaces
# def KudosData_word_tokenizer(foo):
# seg_token = jieba.cut(foo, cut_all=True)
# seg_str = ' '.join(seg_token)
# return seg_str
# Python3
# Small Chinese word-segmentation and counting helper; outputs a dictionary { token: count }
def KudosData_word_count(foo):
# remove lead & tail spaces firstly:
foo = foo.strip()
seg_token = jieba.cut(str(foo), cut_all=True)
seg_str = str(' '.join(seg_token)).strip()
seg_count = pd.value_counts(str(seg_str).lower().split(' '))
seg_count = seg_count.to_dict()
seg_count.pop('', None) # remove EMPTY dict key: ''
    # output dictionary: { key: token, value: count }
# return seg_count.to_dict()
return seg_count
# Python2
# Small Chinese word-segmentation helper; outputs a dictionary: { key: token, value: count }
# def KudosData_word_count(foo):
# seg_token = jieba.cut(foo, cut_all=True)
# seg_str = '^'.join(seg_token)
# seg_count = pd.value_counts(seg_str.lower().split('^'))
# return seg_count.to_dict()
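# Illustrative usage of the two helpers above (hypothetical input, output shape only):
# KudosData_word_tokenizer(some_chinese_text) -> space-separated token string
# KudosData_word_count(some_chinese_text)     -> {token: count, ...} dictionary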
###Output
_____no_output_____
###Markdown
Input text
###Code
# process Unicode text input
with io.open('output_topic_summary.txt','r',encoding='utf8') as f:
content = f.read()
title = '''
<Dummy Title>
'''
content
###Output
_____no_output_____
###Markdown
https://pypi.python.org/pypi/translate
###Code
import translate
translate.translator('en', 'zh', content)
###Output
_____no_output_____ |
0-Analysis/Integral_Method.ipynb | ###Markdown
Integral Method*(If you are not interested in the code itself, you can collapse it by selecting the cell and then clicking on the bar to its left. You will still be able to run it and view its output. Please note that the code for the second image depends on the code for the first, so they must be run in order. If you'd like to see the code in full, consider looking at `../2-Additional_Figures/Fig10_Distribution-Yield-Curves-and-Components.ipynb`, which has the same code, but presents it in a cleaner fashion.)* The "integral method" of fitting refers to using integration to get a near-perfect mathematical "fit" for a set of data. This method doesn't rely on choosing a model to fit beforehand, which means it can give results that aren't entirely physical. However, it is a useful tool for comparing to multiple fits from different models. Our integral method fit relied on several assumptions. We assumed that each event could be characterized by a single recoil energy representing the sum of all hit recoil energies, which is important for NRs with multiple scatters. We also assumed that the yield $Y$ is monotonic, allowing us to treat the measured rate (in $eV_{ee}$) above some energy $E_{ee,i}$ as equal to the rate from electron recoils above that energy plus that of nuclear recoils above the corresponding recoil energy $E_{nr,i}$. For the sake of computation, we fixed the maximum value $E_{ee,max}$ to 2 keV and integrated from $E_{ee,i}$ to $E_{ee,max}$.
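In equation form (a sketch of what the helper functions below implement, with $V$ the detector bias and $\epsilon$ the energy per electron-hole pair, i.e. the `V` and `eps` constants used throughout the code), the matching condition is
$$R_{meas}(E_{ee}>E_{ee,i}) \;=\; R_{ER}(E_{ee}>E_{ee,i}) \;+\; R_{NR}(E_{nr}>E_{nr,i}) \;+\; R_{(n,\gamma)}(E_{nr}>E_{nr,i}),$$
where each simulated component carries its livetime scale factor, and the yield at each threshold follows from inverting the electron-equivalent energy relation
$$E_{ee} = E_{nr}\,\frac{1+Y\,V/\epsilon}{1+V/\epsilon} \quad\Rightarrow\quad Y(E_{nr,i}) = \left[\frac{E_{ee,i}}{E_{nr,i}}\left(1+\frac{V}{\epsilon}\right)-1\right]\frac{\epsilon}{V}.$$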
###Code
#Import libraries & data
exec(open("../python/nb_setup.py").read())#Is there a better way to do this?
from IPython.core.display import display, HTML
from matplotlib.pyplot import *
style.use('../mplstyles/stylelib/standard.mplstyle')
from tqdm.notebook import tqdm
from scipy.optimize import fsolve
from scipy.special import erf
from scipy.interpolate import CubicSpline
import pickle
import sys
sys.path.append('../python')
import R68_yield as Yield
import R68_spec_tools as spec
import R68_plot_tools as pt
display(HTML("<style>.container { width:100% !important; }</style>"))
import warnings
warnings.filterwarnings("ignore",category=RuntimeWarning)
#Set up notebook and load some R68 constants (V, eps, etc.)
from constants import *
#Load the data
import R68_load as r68
meas=r68.load_measured(keVmax=10)
g4=r68.load_G4(load_frac=1)
cap=r68.load_simcap(file='../data/v3_400k.pkl',
rcapture=0.218, load_frac=1)
#Function Definitions
def extract_Y_v2(E_er, E_nr, E_ng, fer, fnr, fng, Y_max, Ebins=None):
#Assumed global variables:
#Ebins: eVee bins
#R_meas: Measured, bkg-subtracted, efficiency-corrected rate
#tlive_er(nr,ng): livetime of ER(NR,NG) hits
global tlive_er,tlive_nr,tlive_ng, V, eps
if Ebins is None:
Ebins=np.linspace(0,2e3,201)
Ebin_ctr=(Ebins[:-1]+Ebins[1:])/2
R_meas,dR_meas=spec.doBkgSub(meas, Ebins, Efit_min=50,Efit_max=2e3,
doEffsyst=False, doBurstLeaksyst=False,
output='reco-rate')
E_er_max=Ebins[-1]
E_nr_max=ERtoNR(E_er_max,Y_max,V,eps)
Ebin_ctr_rev=Ebin_ctr[::-1]
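    #Reverse cumulative sums (highest bin first): integrated rate above each eVee bin edge,
    #for the measured spectrum and for the scaled simulated ER component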
rev_csum_meas=np.cumsum(R_meas[::-1])
R_sim_er=fer*np.histogram(E_er,Ebins)[0]/tlive_er
rev_csum_er=np.cumsum(R_sim_er[::-1])
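    #Each simulated NR / (n,gamma) event below E_nr_max contributes a livetime-scaled weight
    #(fnr/tlive_nr or fng/tlive_ng); sorting by recoil energy in descending order and
    #cumulatively summing gives the integrated simulated rate above any candidate E_nr threshold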
w_nr=fnr/tlive_nr*np.ones(np.sum(E_nr<=E_nr_max))
w_ng=fng/tlive_ng*np.ones(np.sum(E_ng<=E_nr_max))
E_nrng=np.concatenate((E_nr[E_nr<=E_nr_max],E_ng[E_ng<=E_nr_max]))
w_nrng=np.concatenate((w_nr,w_ng))
E_nrng_rev_srt=(E_nrng[np.argsort(E_nrng)])[::-1]
w_nrng_rev_srt=(w_nrng[np.argsort(E_nrng)])[::-1]
rev_csum_nrng=np.cumsum(w_nrng_rev_srt)
diff=rev_csum_meas-rev_csum_er
E_nrs=[]
error=[]
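    #Walk down from the highest eVee bin: each 'entry' is the measured rate above that bin edge
    #minus the ER contribution, i.e. what must come from NR + (n,gamma). Take the recoil energy
    #at which the weighted cumulative sum first reaches that value, keeping the E_nr vs E_ee
    #mapping monotonic.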
for entry in diff:
if np.isfinite(entry):
args=np.argwhere(rev_csum_nrng>=entry)
if len(args)==0:
E_nrs.append(-99)
else:
E_nr_this=E_nrng_rev_srt[args[0][0]]
error.append(rev_csum_nrng[args[0][0]]-entry)
if len(E_nrs)>0:
E_nrs.append(min(E_nr_this,E_nrs[-1]))
else:
E_nrs.append(E_nr_this)
else:
E_nrs.append(-999)
error.append(-999)
E_nrs=np.array(E_nrs[::-1])
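    #Invert E_ee = E_nr*(1+Y*V/eps)/(1+V/eps) to get the yield at each eVee bin edge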
Ys=((Ebins[:-1]/E_nrs)*(1+V/eps)-1)*eps/V
error=np.array(error)
return (E_nrs,Ys,error)
#Extract yield curve using the integral method
#Treats each event as a single scatter of the total energy
#fer: ER livetime factor
#fnr: NR livetime factor
#fng: NG livetime factor
#Y_max: Yield value that corresponds to the highest bin edge of Ebins
def extract_Y_wSmear_v2(E_er, E_nr, E_ng, fer, fnr, fng, Y_max, nIt=2, F=0, Ebins=None, seed=None):
#Assumed global variables:
#Ebins: eVee bins
#R_meas: Measured, bkg-subtracted, efficiency-corrected rate
#tlive_er(nr,ng): livetime of ER(NR,NG) hits
global tlive_er,tlive_nr,tlive_ng, V, eps
if Ebins is None:
Ebins=np.linspace(0,2e3,201)
Ebin_ctr=(Ebins[:-1]+Ebins[1:])/2
#Initial yield, with no resolution effects
E_nrs,Ys,errors=extract_Y_v2(E_er, E_nr, E_ng, fer, fnr, fng, Y_max, Ebins)
iIt=0
while iIt<nIt:
iIt+=1
cFit=(Ebin_ctr>50) &(E_nrs>0) & (np.isfinite(E_nrs)) & (np.insert(np.diff(E_nrs)>0,-1,True))
#Y_fCS=CubicSpline(E_nrs[cFit],Ys[cFit],extrapolate=True)
Y_fCS=lambda E: np.interp(E,E_nrs[cFit],Ys[cFit])
Y_fit = lambda E: Y_conditioned_test(E,Y_fCS,E_nrs[cFit],Ys[cFit])
Y=Yield.Yield('User',[Y_fit])
E_nr_eVee=NRtoER(E_nr,Y,V,eps)
E_ng_eVee=NRtoER(E_ng,Y,V,eps)
#Use this correspondence to map back to NR
#But need to condition it outside of the spline region.
#Just extrapolate with linear from each end
xx=NRtoER(E_nrs[cFit],Y,V,eps)
yy=E_nrs[cFit]
#ERtoNR_fCS=CubicSpline(xx,yy,extrapolate=True)
ERtoNR_fCS=lambda E: np.interp(E,xx,yy)
pf_low=np.poly1d(np.polyfit([0,xx[0]], [0,yy[0]], 1))
pf_hi=np.poly1d(np.polyfit(xx[-10:], yy[-10:], 1))
ERtoNR_fcombo = lambda E: (E<xx[0])*pf_low(E) + (E>=xx[0])*(E<=xx[-1])*ERtoNR_fCS(E) + (E>xx[-1])*pf_hi(E)
E_er_sm=spec.getSmeared(E_er,seed=seed,F=F)
E_er_sm[E_er_sm<0]=0
E_nr_sm=ERtoNR_fcombo(spec.getSmeared(E_nr_eVee,seed=seed,F=F))
E_nr_sm[E_nr_sm<0]=0
E_ng_sm=ERtoNR_fcombo(spec.getSmeared(E_ng_eVee,seed=seed,F=F))
E_ng_sm[E_ng_sm<0]=0
E_nrs,Ys,errors=extract_Y_v2(E_er_sm, E_nr_sm, E_ng_sm, fer, fnr, fng, Y_max, Ebins)
return (E_nrs,Ys,errors)
def Y_conditioned(E, Y_fCS, Emin, Ymin, Emax, Ymax):
y=Y_fCS(E)
y[E>=Emax]=Ymax
y[E<=Emin]=Ymin
return y
def Y_conditioned_test(E, Y_fCS, E_nrs_fit, Ys_fit):
y=Y_fCS(E)
ylow=np.poly1d(np.polyfit(E_nrs_fit[:2],Ys_fit[:2], 1))
y[E<=E_nrs_fit[0]]=ylow(E[E<=E_nrs_fit[0]])
#y[E<=E_nrs_fit[0]]=Ys_fit[0]
yhi=np.poly1d(np.polyfit(E_nrs_fit[-2:],Ys_fit[-2:], 1))
y[E>=E_nrs_fit[-1]]=yhi(E[E>=E_nrs_fit[-1]])
#y[E>=E_nrs_fit[-1]]=Ys_fit[-1]
y[y<0]=0
return y
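#Convert between nuclear-recoil energy and electron-equivalent energy via
#E_ee = E_nr*(1+Y*V/eps)/(1+V/eps); ERtoNR numerically inverts this when Y depends on E_nr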
def ERtoNR(ER,Y,V,eps):
if isinstance(Y,(float,int)):
return ER*(1+V/eps)/(1+Y*V/eps)
else:
func = lambda NR : NR-ER*(1+V/eps)/(1+Y.calc(NR)*V/eps)
NR_guess = ER
return fsolve(func, NR_guess)
def NRtoER(NR,Y,V,eps):
if isinstance(Y,(float,int)):
return NR*(1+Y*V/eps)/(1+V/eps)
else:
return NR*(1+Y.calc(NR)*V/eps)/(1+V/eps)
def Nint(Es,Emin,Emax):
return np.sum((Es>=Emin)&(Es<Emax))
def extract_Y(E_er, E_nr, E_ng, fer, fnr, fng, Y_max, E_nr_step=1):
#Assumed global variables:
#Ebins: eVee bins
#R_meas: Measured, bkg-subtracted, efficiency-corrected rate
#tlive_er(nr,ng): livetime of ER(NR,NG) hits
global Ebins,R_meas,tlive_er,tlive_nr,tlive_ng, V, eps
Ebin_ctr=(Ebins[:-1]+Ebins[1:])/2
E_er_max=Ebins[-1]
E_nr_max=ERtoNR(E_er_max,Y_max,V,eps)
E_nrs=[]
E_nr_test=E_nr_max
for i in tqdm(range(len(Ebin_ctr))[::-1]):
if np.isfinite(R_meas[i]):
            #Is there a more efficient way to do this? Yep
# Am I going to spend time working it out? Nope
while True:
R_meas_this=np.sum(R_meas[(Ebin_ctr>=Ebin_ctr[i])&(Ebin_ctr<E_er_max)])
R_sim_er=fer*Nint(E_er,Ebin_ctr[i],E_er_max)/tlive_er
R_sim_nr=fnr*Nint(E_nr,E_nr_test,E_nr_max)/tlive_nr
R_sim_ng=fng*Nint(E_ng,E_nr_test,E_nr_max)/tlive_ng
R_sim_this=R_sim_er+R_sim_nr+R_sim_ng
if (R_sim_this>=R_meas_this) or (E_nr_test<0):
break
E_nr_test-=E_nr_step
E_nrs.append(E_nr_test)
else:
E_nrs.append(-999)
E_nrs=np.array(E_nrs[::-1])
#E_ee=E_nr*(1+Y*V/eps)/(1+V/eps)
#=> Y=((E_ee/E_nr)*(1+V/eps)-1)*eps/V
Ys=((Ebin_ctr/E_nrs)*(1+V/eps)-1)*eps/V
return (E_nrs,Ys)
def Y_fit(E):
y=Y_fCS(E)
y[E>E_nrs[-1]]=Ys[-1]
y[E<0]=0
return y
def extract_Y_wSmear(E_er, E_nr, E_ng, fer, fnr, fng, Y_max, E_nr_step=1,nIt=2):
#Assumed global variables:
#Ebins: eVee bins
#R_meas: Measured, bkg-subtracted, efficiency-corrected rate
#tlive_er(nr,ng): livetime of ER(NR,NG) hits
global Ebins,R_meas,tlive_er,tlive_nr,tlive_ng, V, eps
#Initial yield, with no resolution effects
E_nrs,Ys=extract_Y(E_er, E_nr, E_ng, fer, fnr, fng, Y_max, E_nr_step)
iIt=0
while iIt<nIt:
iIt+=1
cFit=(E_nrs>0) & (np.isfinite(E_nrs)) & (np.insert(np.diff(E_nrs)>0,-1,True))
Y_fCS=CubicSpline(E_nrs[cFit],Ys[cFit],extrapolate=True)
Y_fit = lambda E: Y_conditioned(E,Y_fCS,E_nrs[0],0,E_nrs[-1],Ys[-1])
Y=Yield.Yield('User',[Y_fit])
#E_nr_sm=ERtoNR(spec.getSmeared(NRtoER(E_nr,Y,V,eps)),Y,V,eps)#Overflow and slow
#E_ng_sm1=ERtoNR(spec.getSmeared(NRtoER(E_ng,Y,V,eps)),Y,V,eps)
E_nr_eVee=NRtoER(E_nr,Y,V,eps)
E_ng_eVee=NRtoER(E_ng,Y,V,eps)
#Use this correspondence to map back to NR
cFit=(E_nrs>0) & (np.isfinite(E_nrs)) & (np.insert(np.diff(E_nrs)>0,-1,True))
ERtoNR_fCS=CubicSpline(NRtoER(E_nrs[cFit],Y,V,eps),E_nrs[cFit])
E_er_sm=spec.getSmeared(E_er)
E_er_sm[E_er_sm<0]=0
E_nr_sm=ERtoNR_fCS(spec.getSmeared(E_nr_eVee))
E_nr_sm[E_nr_sm<0]=0
E_ng_sm=ERtoNR_fCS(spec.getSmeared(E_ng_eVee))
E_ng_sm[E_ng_sm<0]=0
E_nrs,Ys=extract_Y(E_er_sm, E_nr_sm, E_ng_sm, fer, fnr, fng, Y_max, E_nr_step)
return (E_nrs,Ys)
def Y_conditioned(E, Y_fCS, Emin, Ymin, Emax, Ymax):
y=Y_fCS(E)
y[E>=Emax]=Ymax
y[E<=Emin]=Ymin
return y
#TODO: use this same function every time we do this
def getYfitCond(E_nrs,Ys):
cFit=(Ebin_ctr>50) &(E_nrs>0) & (np.isfinite(E_nrs)) & (np.insert(np.diff(E_nrs)>0,-1,True))
Y_fCS=CubicSpline(E_nrs[cFit],Ys[cFit],extrapolate=True)
Y_fit = lambda E: Y_conditioned(E,Y_fCS,E_nrs[cFit][0],Ys[cFit][0],E_nrs[-1],Ys[-1])
return Yield.Yield('User',[Y_fit])
#Find the full envelope of yield curves
#Includes first and last point of each curve and the min and max Y at each Enr
def getEYenvelope(lE_nrs_sample,lYs_sample,eVeeMin=50):
Yenv_left=[]
Yenv_right=[]
Enr_env_left=[]
Enr_env_right=[]
for E_nrs,Ys in zip(lE_nrs_sample,lYs_sample):
cFit=(Ebin_ctr>50) &(E_nrs>0) & (np.isfinite(E_nrs)) & (np.insert(np.diff(E_nrs)>0,-1,True))
Yenv_left.append(Ys[cFit&(Ebin_ctr>eVeeMin)][0])
Enr_env_left.append(E_nrs[cFit&(Ebin_ctr>eVeeMin)][0])
Yenv_right.append(Ys[cFit&(Ebin_ctr>eVeeMin)][-1])
Enr_env_right.append(E_nrs[cFit&(Ebin_ctr>eVeeMin)][-1])
Enr_env_right=np.array(Enr_env_right)
Yenv_right=np.array(Yenv_right)
Enr_env_left=np.array(Enr_env_left)
Yenv_left=np.array(Yenv_left)
Enr_env_top=np.linspace(Enr_env_left[np.argmax(Yenv_left)],Enr_env_right[np.argmax(Yenv_right)],1000)
Ytestmax=[]
for E_nrs,Ys in zip(lE_nrs_sample,lYs_sample):
cFit=(Ebin_ctr>50) &(E_nrs>0) & (np.isfinite(E_nrs)) & (np.insert(np.diff(E_nrs)>0,-1,True))
Y=getYfitCond(E_nrs,Ys)
Ytesti=Y.calc(Enr_env_top)
cgoodval=(Enr_env_top>=np.min(E_nrs[Ebin_ctr>eVeeMin]))
Ytesti[~cgoodval]=-99
Ytestmax.append(Ytesti)
Yenv_top=np.max(np.array(Ytestmax),axis=0)
Enr_env_bottom=np.linspace(Enr_env_left[np.argmin(Yenv_left)],Enr_env_right[np.argmin(Yenv_right)],1000)
Ytestmin=[]
for E_nrs,Ys in zip(lE_nrs_sample,lYs_sample):
cFit=(Ebin_ctr>50) &(E_nrs>0) & (np.isfinite(E_nrs)) & (np.insert(np.diff(E_nrs)>0,-1,True))
Y=getYfitCond(E_nrs,Ys)
Ytesti=Y.calc(Enr_env_bottom)
cgoodval=(Enr_env_bottom>=np.min(E_nrs[Ebin_ctr>eVeeMin]))
Ytesti[~cgoodval]=99
Ytestmin.append(Ytesti)
Yenv_bottom=np.min(np.array(Ytestmin),axis=0)
#Need to sort the points so that they form a closed polygon
#Go clockwise from top left
Enr_env=np.concatenate( (Enr_env_top, Enr_env_right[np.argsort(Enr_env_right)], Enr_env_bottom[::-1], Enr_env_left[np.argsort(Enr_env_left)][::-1]) )
Yenv=np.concatenate((Yenv_top, Yenv_right[np.argsort(Enr_env_right)], Yenv_bottom[::-1], Yenv_left[np.argsort(Enr_env_left)][::-1]))
return (Enr_env, Yenv)
#Find the full range of rates for each component for plotting
def getERminmax(lE_nrs_sample,lYs_sample,lfer_sample,lfnr_sample,lfng_sample,dosmear=True,FanoER=0.1161,FanoNR=0.1161):
R_er_test=[]
R_nr_test=[]
R_ng_test=[]
for E_nrs,Ys,fer,fnr,fng in zip(lE_nrs_sample,lYs_sample,lfer_sample,lfnr_sample,lfng_sample):
Y=getYfitCond(E_nrs,Ys)
if dosmear:
E_er_sm=spec.getSmeared(E_er,seed=seed,F=FanoER)
#E_er_sm[E_er_sm<0]=0
E_nr_eVee_sm=spec.getSmeared(NRtoER(E_nr,Y,V,eps),seed=seed,F=FanoNR)
#E_nr_eVee_sm[E_nr_eVee_sm<0]=0
E_ng_eVee_sm=spec.getSmeared(NRtoER(E_ng,Y,V,eps),seed=seed,F=FanoNR)
#E_ng_eVee_sm[E_ng_eVee_sm<0]=0
else:
E_er_sm=E_er
E_nr_eVee_sm=NRtoER(E_nr,Y,V,eps)
E_ng_eVee_sm=NRtoER(E_ng,Y,V,eps)
C_er,_=np.histogram(E_er_sm,bins=Ebins)
R_er=fer*C_er/tlive_er
C_nr,_=np.histogram(E_nr_eVee_sm,bins=Ebins)
R_nr=fnr*C_nr/tlive_nr
C_ng,_=np.histogram(E_ng_eVee_sm,bins=Ebins)
R_ng=fng*C_ng/tlive_ng
R_er_test.append(R_er)
R_nr_test.append(R_nr)
R_ng_test.append(R_ng)
R_er_test=np.array(R_er_test)
R_nr_test=np.array(R_nr_test)
R_ng_test=np.array(R_ng_test)
R_total_test=R_er_test+R_nr_test+R_ng_test
Renvelopes={'eVee':Ebin_ctr,
'ER':{'max':np.max(R_er_test,axis=0),'min':np.min(R_er_test,axis=0)},
'NR':{'max':np.max(R_nr_test,axis=0),'min':np.min(R_nr_test,axis=0)},
'NG':{'max':np.max(R_ng_test,axis=0),'min':np.min(R_ng_test,axis=0)},
'Total':{'max':np.max(R_total_test,axis=0),'min':np.min(R_total_test,axis=0)},
}
return Renvelopes
def extract_Y_v3(E_er, E_nr, E_ng, fer, fnr, fng, Y_max, Ebins=None):
#Assumed global variables:
#Ebins: eVee bins
#R_meas: Measured, bkg-subtracted, efficiency-corrected rate
#tlive_er(nr,ng): livetime of ER(NR,NG) hits
global tlive_er,tlive_nr,tlive_ng, V, eps
if Ebins is None:
Ebins=np.linspace(0,2e3,201)
Ebin_ctr=(Ebins[:-1]+Ebins[1:])/2
R_meas,dR_meas=spec.doBkgSub(meas, Ebins, Efit_min=50,Efit_max=2e3,
doEffsyst=False, doBurstLeaksyst=False,
output='reco-rate')
E_er_max=Ebins[-1]
E_nr_max=ERtoNR(E_er_max,Y_max,V,eps)
Ebin_ctr_rev=Ebin_ctr[::-1]
rev_csum_meas=np.cumsum(R_meas[::-1])
R_sim_er=fer*np.histogram(E_er,Ebins)[0]/tlive_er
rev_csum_er=np.cumsum(R_sim_er[::-1])
w_nr=fnr/tlive_nr*np.ones(np.sum(E_nr<=E_nr_max))
w_ng=fng/tlive_ng*np.ones(np.sum(E_ng<=E_nr_max))
E_nrng=np.concatenate((E_nr[E_nr<=E_nr_max],E_ng[E_ng<=E_nr_max]))
w_nrng=np.concatenate((w_nr,w_ng))
E_nrng_rev_srt=(E_nrng[np.argsort(E_nrng)])[::-1]
w_nrng_rev_srt=(w_nrng[np.argsort(E_nrng)])[::-1]
rev_csum_nrng=np.cumsum(w_nrng_rev_srt)
diff=rev_csum_meas-rev_csum_er
E_nrs=[]
error=[]
for entry in diff:
if np.isfinite(entry):
args=np.argwhere(rev_csum_nrng>=entry)
if len(args)==0:
E_nrs.append(-99)
else:
E_nr_this=E_nrng_rev_srt[args[0][0]]
error.append(rev_csum_nrng[args[0][0]]-entry)
if len(E_nrs)>0:
E_nrs.append(min(E_nr_this,E_nrs[-1]))
else:
E_nrs.append(E_nr_this)
else:
E_nrs.append(-999)
error.append(-999)
E_nrs=np.array(E_nrs[::-1])
Ys=((Ebins[:-1]/E_nrs)*(1+V/eps)-1)*eps/V
error=np.array(error)
return (E_nrs,Ys,error)
def extract_Y_wSmear_v3(E_er, E_nr, E_ng, fer, fnr, fng, Y_max, nIt=2, FanoER=0.1161, FanoNR=0.1161, Ebins=None, seed=None):
#Assumed global variables:
#Ebins: eVee bins
#R_meas: Measured, bkg-subtracted, efficiency-corrected rate
#tlive_er(nr,ng): livetime of ER(NR,NG) hits
global tlive_er,tlive_nr,tlive_ng, V, eps
if Ebins is None:
Ebins=np.linspace(0,2e3,201)
Ebin_ctr=(Ebins[:-1]+Ebins[1:])/2
#Initial yield, with no resolution effects
E_nrs,Ys,errors=extract_Y_v2(E_er, E_nr, E_ng, fer, fnr, fng, Y_max, Ebins)
iIt=0
while iIt<nIt:
iIt+=1
Y=getYfitCond(E_nrs,Ys)
E_nr_eVee=NRtoER(E_nr,Y,V,eps)
E_ng_eVee=NRtoER(E_ng,Y,V,eps)
ERtoNR_fit=getEEtoNRfitCond(E_nrs,Y)
E_er_sm=spec.getSmeared(E_er,seed=seed,F=FanoER)
E_nr_sm=ERtoNR_fit(spec.getSmeared(E_nr_eVee,seed=seed,F=FanoNR))
E_ng_sm=ERtoNR_fit(spec.getSmeared(E_ng_eVee,seed=seed,F=FanoNR))
E_nrs,Ys,errors=extract_Y_v3(E_er_sm, E_nr_sm, E_ng_sm, fer, fnr, fng, Y_max, Ebins)
return (E_nrs,Ys,errors)
#Find the full range of rates for each component for plotting
def getERminmax_v3(scanData,cut,dosmear=True,seed=None,nAvg=1):
R_er_test=[]
R_nr_test=[]
R_ng_test=[]
for i in range(len(scanData['lE_nrs'][cut])):
E_nrs=scanData['lE_nrs'][cut][i]
Ys=scanData['lYs'][cut][i]
fer=scanData['lfer'][cut][i]
fnr=scanData['lfnr'][cut][i]
fng=scanData['lfng'][cut][i]
FanoER=scanData['lFanoER'][cut][i]
FanoNR=scanData['lFanoNR'][cut][i]
Y=getYfitCond(E_nrs,Ys)
R_er_avg=[]
R_nr_avg=[]
R_ng_avg=[]
for iteration in range(nAvg):
if dosmear:
E_er_sm=spec.getSmeared(E_er,seed=seed,F=FanoER)
E_nr_eVee_sm=spec.getSmeared(NRtoER(E_nr,Y,V,eps),seed=seed,F=FanoNR)
E_ng_eVee_sm=spec.getSmeared(NRtoER(E_ng,Y,V,eps),seed=seed,F=FanoNR)
else:
E_er_sm=E_er
E_nr_eVee_sm=NRtoER(E_nr,Y,V,eps)
E_ng_eVee_sm=NRtoER(E_ng,Y,V,eps)
C_er,_=np.histogram(E_er_sm,bins=Ebins)
R_er=fer*C_er/tlive_er
C_nr,_=np.histogram(E_nr_eVee_sm,bins=Ebins)
R_nr=fnr*C_nr/tlive_nr
C_ng,_=np.histogram(E_ng_eVee_sm,bins=Ebins)
R_ng=fng*C_ng/tlive_ng
R_er_avg.append(R_er)
R_nr_avg.append(R_nr)
R_ng_avg.append(R_ng)
R_er_test.append(np.mean(np.array(R_er_avg),axis=0))
R_nr_test.append(np.mean(np.array(R_nr_avg),axis=0))
R_ng_test.append(np.mean(np.array(R_ng_avg),axis=0))
R_er_test=np.array(R_er_test)
R_nr_test=np.array(R_nr_test)
R_ng_test=np.array(R_ng_test)
R_total_test=R_er_test+R_nr_test+R_ng_test
Renvelopes={'eVee':Ebin_ctr,
'ER':{'max':np.max(R_er_test,axis=0),'min':np.min(R_er_test,axis=0)},
'NR':{'max':np.max(R_nr_test,axis=0),'min':np.min(R_nr_test,axis=0)},
'NG':{'max':np.max(R_ng_test,axis=0),'min':np.min(R_ng_test,axis=0)},
'Total':{'max':np.max(R_total_test,axis=0),'min':np.min(R_total_test,axis=0)},
}
return Renvelopes
def Y_conditioned(E, Y_fCS, Emin, Ymin, Emax, Ymax):
y=Y_fCS(E)
y[E>=Emax]=Ymax
y[E<=Emin]=Ymin
return y
def getYfitCond(E_nrs,Ys):
cFit=(Ebin_ctr>50) &(E_nrs>0) & (np.isfinite(E_nrs)) & (np.insert(np.diff(E_nrs)>0,-1,True))
Y_fCS=CubicSpline(E_nrs[cFit],Ys[cFit],extrapolate=True)
Y_fit = lambda E: Y_conditioned(E,Y_fCS,E_nrs[cFit][0],Ys[cFit][0],E_nrs[-1],Ys[-1])
return Yield.Yield('User',[Y_fit])
#Fitted function to map from eVee back to eVnr
#But need to condition it outside of the spline region.
#Just extrapolate with linear from each end
def getEEtoNRfitCond(E_nrs,Y):
cFit=(Ebin_ctr>50) &(E_nrs>0) & (np.isfinite(E_nrs)) & (np.insert(np.diff(E_nrs)>0,-1,True))
xx=NRtoER(E_nrs[cFit],Y,V,eps)
yy=E_nrs[cFit]
ERtoNR_fCS=CubicSpline(xx,yy,extrapolate=True)
pf_low=np.poly1d(np.polyfit([0,xx[0]], [0,yy[0]], 1))#Should maintain const Y at low end
pf_hi=np.poly1d(np.polyfit(xx[-10:], yy[-10:], 1))
EEtoNR_fcombo = lambda E: (E<xx[0])*pf_low(E) + (E>=xx[0])*(E<=xx[-1])*ERtoNR_fCS(E) + (E>xx[-1])*pf_hi(E)
return EEtoNR_fcombo
#v4: Remove R_meas calculation and use global value.
#Assumes R_meas matches Ebins.
def extract_Y_v4(E_er, E_nr, E_ng, fer, fnr, fng, Y_max, Ebins=None):
#Assumed global variables:
#Ebins: eVee bins
#R_meas: Measured, bkg-subtracted, efficiency-corrected rate
#tlive_er(nr,ng): livetime of ER(NR,NG) hits
global tlive_er,tlive_nr,tlive_ng, V, eps, R_meas
if Ebins is None:
Ebins=np.linspace(0,2e3,201)
Ebin_ctr=(Ebins[:-1]+Ebins[1:])/2
#R_meas,dR_meas=spec.doBkgSub(meas, Ebins, Efit_min=50,Efit_max=2e3,
# doEffsyst=False, doBurstLeaksyst=False,
# output='reco-rate')
E_er_max=Ebins[-1]
E_nr_max=ERtoNR(E_er_max,Y_max,V,eps)
Ebin_ctr_rev=Ebin_ctr[::-1]
rev_csum_meas=np.cumsum(R_meas[::-1])
R_sim_er=fer*np.histogram(E_er,Ebins)[0]/tlive_er
rev_csum_er=np.cumsum(R_sim_er[::-1])
w_nr=fnr/tlive_nr*np.ones(np.sum(E_nr<=E_nr_max))
w_ng=fng/tlive_ng*np.ones(np.sum(E_ng<=E_nr_max))
E_nrng=np.concatenate((E_nr[E_nr<=E_nr_max],E_ng[E_ng<=E_nr_max]))
w_nrng=np.concatenate((w_nr,w_ng))
E_nrng_rev_srt=(E_nrng[np.argsort(E_nrng)])[::-1]
w_nrng_rev_srt=(w_nrng[np.argsort(E_nrng)])[::-1]
rev_csum_nrng=np.cumsum(w_nrng_rev_srt)
diff=rev_csum_meas-rev_csum_er
E_nrs=[]
error=[]
for entry in diff:
if np.isfinite(entry):
args=np.argwhere(rev_csum_nrng>=entry)
if len(args)==0:
E_nrs.append(-99)
else:
E_nr_this=E_nrng_rev_srt[args[0][0]]
error.append(rev_csum_nrng[args[0][0]]-entry)
if len(E_nrs)>0:
E_nrs.append(min(E_nr_this,E_nrs[-1]))
else:
E_nrs.append(E_nr_this)
else:
E_nrs.append(-999)
error.append(-999)
E_nrs=np.array(E_nrs[::-1])
Ys=((Ebins[:-1]/E_nrs)*(1+V/eps)-1)*eps/V
error=np.array(error)
return (E_nrs,Ys,error)
def extract_Y_wSmear_v4(E_er, E_nr, E_ng, fer, fnr, fng, Y_max,
nItMax=2, fit_frac_all_goal=0.8, fit_frac_low_goal=1,
FanoER=0.1161, FanoNR=0.1161, Ebins=None, seed=None):
#Assumed global variables:
#Ebins: eVee bins
#R_meas: Measured, bkg-subtracted, efficiency-corrected rate
#tlive_er(nr,ng): livetime of ER(NR,NG) hits
global tlive_er,tlive_nr,tlive_ng, V, eps
if Ebins is None:
Ebins=np.linspace(0,2e3,201)
Ebin_ctr=(Ebins[:-1]+Ebins[1:])/2
#Initial yield, with no resolution effects
E_nrs,Ys,errors=extract_Y_v4(E_er, E_nr, E_ng, fer, fnr, fng, Y_max, Ebins)
iIt=0
while iIt<nItMax:
Y=getYfitCond_v4(E_nrs,Ys)
E_nr_eVee=NRtoER(E_nr,Y,V,eps)
E_ng_eVee=NRtoER(E_ng,Y,V,eps)
ERtoNR_fit=getEEtoNRfitCond_v4(E_nrs,Y)
E_er_sm=spec.getSmeared(E_er,seed=seed,F=FanoER)
E_nr_eVee_sm=spec.getSmeared(E_nr_eVee,seed=seed,F=FanoNR)
E_ng_eVee_sm=spec.getSmeared(E_ng_eVee,seed=seed,F=FanoNR)
#Check if the currently smeared version agrees with the measurement
C_er,_=np.histogram(E_er_sm,bins=Ebins)
R_er=fer*C_er/tlive_er
C_nr,_=np.histogram(E_nr_eVee_sm,bins=Ebins)
R_nr=fnr*C_nr/tlive_nr
C_ng,_=np.histogram(E_ng_eVee_sm,bins=Ebins)
R_ng=fng*C_ng/tlive_ng
R_tot=R_er+R_nr+R_ng
#Some goodness of fit condition
#Hard to get this right because we want the whole thing to fit well, but are
# especially concerned about the lowest few bins which tend to go astray
R_max=R_meas[Ebin_ctr>50]+1*dR_meas[0][Ebin_ctr>50]
R_min=R_meas[Ebin_ctr>50]-1*dR_meas[1][Ebin_ctr>50]
        #fraction of bins within error bars
fit_frac_all=np.sum((R_tot[Ebin_ctr>50]<=R_max)&(R_tot[Ebin_ctr>50]>=R_min))/np.sum(Ebin_ctr>50)
#Fraction of lowest 10 bins within error bars
fit_frac_low=np.sum((R_tot[Ebin_ctr>50][:10]<=R_max[:10])&(R_tot[Ebin_ctr>50][:10]>=R_min[:10]))/10
if (fit_frac_all>=fit_frac_all_goal) and (fit_frac_low>=fit_frac_low_goal):
break
#Continue to the next iteration
iIt+=1
E_nr_sm=ERtoNR_fit(E_nr_eVee_sm)
E_ng_sm=ERtoNR_fit(E_ng_eVee_sm)
E_nrs,Ys,errors=extract_Y_v4(E_er_sm, E_nr_sm, E_ng_sm, fer, fnr, fng, Y_max, Ebins)
return (E_nrs,Ys,errors,iIt)
def Y_conditioned_v4(E, Y_fit_func, E_nrs_fit, Ys_fit):
y=Y_fit_func(E)
ylow=np.poly1d(np.polyfit(E_nrs_fit[:2],Ys_fit[:2], 1))
y[E<=E_nrs_fit[0]]=ylow(E[E<=E_nrs_fit[0]])
yhi=np.poly1d(np.polyfit(E_nrs_fit[-2:],Ys_fit[-2:], 1))
y[E>=E_nrs_fit[-1]]=yhi(E[E>=E_nrs_fit[-1]])
y[y<0]=0
return y
def getYfitCond_v4(E_nrs,Ys):
cFit=(Ebin_ctr>50) &(E_nrs>0) & (np.isfinite(E_nrs)) & (np.insert(np.diff(E_nrs)>0,-1,True))
Y_fit_func=lambda E: np.interp(E,E_nrs[cFit],Ys[cFit])
Y_fit = lambda E: Y_conditioned_v4(E,Y_fit_func,E_nrs[cFit],Ys[cFit])
return Yield.Yield('User',[Y_fit])
#Fitted function to map from eVee back to eVnr
#But need to condition it outside of the spline region.
#Just extrapolate with linear from each end
def getEEtoNRfitCond_v4(E_nrs,Y):
cFit=(Ebin_ctr>50) &(E_nrs>0) & (np.isfinite(E_nrs)) & (np.insert(np.diff(E_nrs)>0,-1,True))
xx=NRtoER(E_nrs[cFit],Y,V,eps)
yy=E_nrs[cFit]
ERtoNR_fit_func=lambda E: np.interp(E,xx,yy)
pf_low=np.poly1d(np.polyfit([0,xx[0]], [0,yy[0]], 1))#Should maintain const Y at low end
pf_hi=np.poly1d(np.polyfit(xx[-10:], yy[-10:], 1))
EEtoNR_fcombo = lambda E: (E<xx[0])*pf_low(E) + (E>=xx[0])*(E<=xx[-1])*ERtoNR_fit_func(E) + (E>xx[-1])*pf_hi(E)
return EEtoNR_fcombo
#Find the full range of rates for each component for plotting
#v4: Updated getYfitCond version
def getERminmax_v4(scanData,cut,dosmear=True,seed=None,nAvg=1):
R_er_test=[]
R_nr_test=[]
R_ng_test=[]
for i in range(len(scanData['lE_nrs'][cut])):
E_nrs=scanData['lE_nrs'][cut][i]
Ys=scanData['lYs'][cut][i]
fer=scanData['lfer'][cut][i]
fnr=scanData['lfnr'][cut][i]
fng=scanData['lfng'][cut][i]
FanoER=scanData['lFanoER'][cut][i]
FanoNR=scanData['lFanoNR'][cut][i]
Y=getYfitCond_v4(E_nrs,Ys)
R_er_avg=[]
R_nr_avg=[]
R_ng_avg=[]
for iteration in range(nAvg):
if dosmear:
E_er_sm=spec.getSmeared(E_er,seed=seed,F=FanoER)
E_nr_eVee_sm=spec.getSmeared(NRtoER(E_nr,Y,V,eps),seed=seed,F=FanoNR)
E_ng_eVee_sm=spec.getSmeared(NRtoER(E_ng,Y,V,eps),seed=seed,F=FanoNR)
else:
E_er_sm=E_er
E_nr_eVee_sm=NRtoER(E_nr,Y,V,eps)
E_ng_eVee_sm=NRtoER(E_ng,Y,V,eps)
C_er,_=np.histogram(E_er_sm,bins=Ebins)
R_er=fer*C_er/tlive_er
C_nr,_=np.histogram(E_nr_eVee_sm,bins=Ebins)
R_nr=fnr*C_nr/tlive_nr
C_ng,_=np.histogram(E_ng_eVee_sm,bins=Ebins)
R_ng=fng*C_ng/tlive_ng
R_er_avg.append(R_er)
R_nr_avg.append(R_nr)
R_ng_avg.append(R_ng)
R_er_test.append(np.mean(np.array(R_er_avg),axis=0))
R_nr_test.append(np.mean(np.array(R_nr_avg),axis=0))
R_ng_test.append(np.mean(np.array(R_ng_avg),axis=0))
R_er_test=np.array(R_er_test)
R_nr_test=np.array(R_nr_test)
R_ng_test=np.array(R_ng_test)
R_total_test=R_er_test+R_nr_test+R_ng_test
Renvelopes={'eVee':Ebin_ctr,
'ER':{'max':np.max(R_er_test,axis=0),'min':np.min(R_er_test,axis=0)},
'NR':{'max':np.max(R_nr_test,axis=0),'min':np.min(R_nr_test,axis=0)},
'NG':{'max':np.max(R_ng_test,axis=0),'min':np.min(R_ng_test,axis=0)},
'Total':{'max':np.max(R_total_test,axis=0),'min':np.min(R_total_test,axis=0)},
}
return Renvelopes
#Find the full envelope of yield curves
#Includes first and last point of each curve and the min and max Y at each Enr
def getEYenvelope_v4(lE_nrs_sample,lYs_sample,eVeeMin=50):
Yenv_left=[]
Yenv_right=[]
Enr_env_left=[]
Enr_env_right=[]
for E_nrs,Ys in zip(lE_nrs_sample,lYs_sample):
cFit=(Ebin_ctr>50) &(E_nrs>0) & (np.isfinite(E_nrs)) & (np.insert(np.diff(E_nrs)>0,-1,True))
Yenv_left.append(Ys[cFit&(Ebin_ctr>eVeeMin)][0])
Enr_env_left.append(E_nrs[cFit&(Ebin_ctr>eVeeMin)][0])
Yenv_right.append(Ys[cFit&(Ebin_ctr>eVeeMin)][-1])
Enr_env_right.append(E_nrs[cFit&(Ebin_ctr>eVeeMin)][-1])
Enr_env_right=np.array(Enr_env_right)
Yenv_right=np.array(Yenv_right)
Enr_env_left=np.array(Enr_env_left)
Yenv_left=np.array(Yenv_left)
Enr_env_top=np.linspace(Enr_env_left[np.argmax(Yenv_left)],Enr_env_right[np.argmax(Yenv_right)],1000)
Ytestmax=[]
for E_nrs,Ys in zip(lE_nrs_sample,lYs_sample):
cFit=(Ebin_ctr>50) &(E_nrs>0) & (np.isfinite(E_nrs)) & (np.insert(np.diff(E_nrs)>0,-1,True))
Y=getYfitCond_v4(E_nrs,Ys)
Ytesti=Y.calc(Enr_env_top)
cgoodval=(Enr_env_top>=np.min(E_nrs[Ebin_ctr>eVeeMin]))
Ytesti[~cgoodval]=-99
Ytestmax.append(Ytesti)
Yenv_top=np.max(np.array(Ytestmax),axis=0)
Enr_env_bottom=np.linspace(Enr_env_left[np.argmin(Yenv_left)],Enr_env_right[np.argmin(Yenv_right)],1000)
Ytestmin=[]
for E_nrs,Ys in zip(lE_nrs_sample,lYs_sample):
cFit=(Ebin_ctr>50) &(E_nrs>0) & (np.isfinite(E_nrs)) & (np.insert(np.diff(E_nrs)>0,-1,True))
Y=getYfitCond_v4(E_nrs,Ys)
Ytesti=Y.calc(Enr_env_bottom)
cgoodval=(Enr_env_bottom>=np.min(E_nrs[Ebin_ctr>eVeeMin]))
Ytesti[~cgoodval]=99
Ytestmin.append(Ytesti)
Yenv_bottom=np.min(np.array(Ytestmin),axis=0)
#Need to sort the points so that they form a closed polygon
#Go clockwise from top left
Enr_env=np.concatenate( (Enr_env_top, Enr_env_right[np.argsort(Enr_env_right)], Enr_env_bottom[::-1], Enr_env_left[np.argsort(Enr_env_left)][::-1]) )
Yenv=np.concatenate((Yenv_top, Yenv_right[np.argsort(Enr_env_right)], Yenv_bottom[::-1], Yenv_left[np.argsort(Enr_env_left)][::-1]))
return (Enr_env, Yenv)
Emax = 2000 #eVee
Ebins=np.linspace(0,Emax,201)
Ebin_ctr=(Ebins[:-1]+Ebins[1:])/2
tlive_er=g4['ER']['tlive']
tlive_nr=g4['NR']['tlive']
tlive_ng=cap['tlive']
#uncertainty is (high,low)
R_meas,dR_meas=spec.doBkgSub(meas, Ebins, Efit_min=50,Efit_max=2e3,\
doEffsyst=True, doBurstLeaksyst=True,\
output='reco-rate')
#Illustration of method
Elim_er=[255.0,505.0,1005.0,1505.0,1995.0]
Elim_nr=[806.3832567888599, 1967.2490338155576, 4045.3075738134753, 5739.940139258986, 7281.31517699986]
for Elim in Elim_er[:-1]:
cut=(Ebin_ctr>=Elim)&(Ebin_ctr<=Elim_er[-1])
c,b=np.histogram(np.sum(g4['ER']['E'],axis=1),bins=Ebins)
bctr=(b[:-1]+b[1:])/2
for Elim in Elim_er[:-1]:
cut=(bctr>=Elim)&(bctr<=Elim_er[-1])
Ebnr=np.linspace(0,7.3e3,200)
c,b=np.histogram(np.sum(g4['NR']['E'],axis=1),bins=Ebnr)
bctr=(b[:-1]+b[1:])/2
for Elim in Elim_nr[:-1]:
cut=(bctr>=Elim)&(bctr<=Elim_nr[-1])
c,b=np.histogram(np.sum(cap['dE'],axis=1),bins=Ebnr)
bctr=(b[:-1]+b[1:])/2
for Elim in Elim_nr[:-1]:
cut=(bctr>=Elim)&(bctr<=Elim_nr[-1])
#For this analysis, we'll just use the total Edep of each event and apply yield to that.
#How big of an assumption is this?
E_er=np.sum(g4['ER']['E'],axis=1)
E_nr=np.sum(g4['NR']['E'],axis=1)
E_ng=np.sum(cap['dE'],axis=1)
Emax_frac_er=np.max(g4['ER']['E'],axis=1)/np.sum(g4['ER']['E'],axis=1)
Emax_frac_nr=np.max(g4['NR']['E'],axis=1)/np.sum(g4['NR']['E'],axis=1)
Emax_frac_ng=np.max(cap['dE'],axis=1)/np.sum(cap['dE'],axis=1)
#Trim events that won't figure into the analysis range
E_er=E_er[(E_er>0) & (E_er<10e3)]
E_nr=E_nr[(E_nr>0) & (E_nr<1000e3)]
#Spectra with default livetimes and standard yield, Fano
#Y=Yield.Yield('Lind',[0.146])
Y=Yield.Yield('Chav',[0.146,1e3/0.247])
N_er,_=np.histogram(E_er,bins=Ebins)
N_nr,_=np.histogram(NRtoER(E_nr,Y,V,eps),bins=Ebins)
N_ng,_=np.histogram(NRtoER(E_ng,Y,V,eps),bins=Ebins)
R_er=N_er/g4['ER']['tlive']
R_nr=N_nr/g4['NR']['tlive']
R_ng=N_ng/cap['tlive']
#Need to set some NR max I think.
#Not sure how to choose this because there's NRs up to 1 MeV
#Do we need a fixed (Er,Y) to work from?
Y=Yield.Yield('Lind',[0.146])
E_nr_max=ERtoNR(Ebin_ctr[-1],Y,V,eps)[0]
fg4=np.sum(R_meas[(Ebin_ctr>1.9e3)&(Ebin_ctr<2e3)]) / (Nint(E_er,1.9e3,2e3)/g4['ER']['tlive'] + Nint(E_nr,ERtoNR(1.9e3,Y,V,eps)[0],E_nr_max)/g4['NR']['tlive'])
fng=0
E_nrs=[]
E_nr_step=1
E_nr_test=E_nr_max
for i in tqdm(range(len(Ebin_ctr))[::-1]):
if np.isfinite(R_meas[i]):
while True:
R_meas_this=np.sum(R_meas[(Ebin_ctr>Ebin_ctr[i])&(Ebin_ctr<2e3)])
R_sim_this=fg4*(Nint(E_er,Ebin_ctr[i],2e3)/g4['ER']['tlive'] + Nint(E_nr,E_nr_test,E_nr_max)/g4['NR']['tlive']) + fng*Nint(E_ng,E_nr_test,E_nr_max)/cap['tlive']
if (R_meas_this<R_sim_this) or (E_nr_test<0):
break
E_nr_test-=E_nr_step
E_nrs.append(E_nr_test)
else:
E_nrs.append(np.inf)
E_nrs=np.array(E_nrs[::-1])
Ys=((Ebin_ctr/E_nrs)*(1+V/eps)-1)*eps/V
cFit=(np.isfinite(E_nrs)) & (np.insert(np.diff(E_nrs)>0,-1,True))
Y_fCS=CubicSpline(E_nrs[cFit],Ys[cFit])
Y=Yield.Yield('Chav',[0.146,1e3/0.247])
C_er,_=np.histogram(E_er,bins=Ebins)
R_er=fg4*C_er/g4['ER']['tlive']
Y=Yield.Yield('User',[Y_fCS])
C_nr,_=np.histogram(NRtoER(E_nr,Y,V,eps),bins=Ebins)
R_nr=fg4*C_nr/g4['NR']['tlive']
C_ng,_=np.histogram(NRtoER(E_ng,Y,V,eps),bins=Ebins)
R_ng=fng*C_ng/cap['tlive']
#Extract yield curve using the integral method
#Treats each event as a single scatter of the total energy
#fer: ER livetime factor
#fnr: NR livetime factor
#fng: NG livetime factor
#Y_max: Yield value that corresponds to the highest bin edge of Ebins
tlive_er=g4['ER']['tlive']
tlive_nr=g4['NR']['tlive']
tlive_ng=cap['tlive']
lY_max=np.linspace(0.1,0.6,6)
lfer=[]
lfnr=[]
lE_nrs=[]
lYs=[]
for Y_max in tqdm(lY_max):
#Normalize so that ER+NR matches data near 2 keV
fg4=np.sum(R_meas[(Ebin_ctr>1.9e3)&(Ebin_ctr<2e3)]) / (Nint(E_er,1.9e3,2e3)/g4['ER']['tlive'] + Nint(E_nr,ERtoNR(1.9e3,Y_max,V,eps),ERtoNR(2e3,Y_max,V,eps))/g4['NR']['tlive'])
lfer.append(fg4)
lfnr.append(fg4)
E_nrs,Ys=extract_Y(E_er, E_nr, E_ng, fer=fg4, fnr=fg4, fng=0, Y_max=Y_max, E_nr_step=1)
lE_nrs.append(E_nrs)
lYs.append(Ys)
lfer=np.array(lfer)
lfnr=np.array(lfnr)
lE_nrs=np.array(lE_nrs)
lYs=np.array(lYs)
for E_nrs,Ys in zip(lE_nrs,lYs):
cFit=(np.isfinite(E_nrs)) & (np.insert(np.diff(E_nrs)>0,-1,True))
Y_fCS=CubicSpline(E_nrs[cFit],Ys[cFit],extrapolate=True)
for E_nrs,Ys,fer,fnr in zip(lE_nrs,lYs,lfer,lfnr):
cFit=(np.isfinite(E_nrs)) & (np.insert(np.diff(E_nrs)>0,-1,True))
Y_fCS=CubicSpline(E_nrs[cFit],Ys[cFit],extrapolate=True)
C_er,_=np.histogram(E_er,bins=Ebins)
R_er=fer*C_er/tlive_er
Y=Yield.Yield('User',[Y_fCS])
C_nr,_=np.histogram(NRtoER(E_nr,Y,V,eps),bins=Ebins)
R_nr=fnr*C_nr/tlive_nr
C_ng,_=np.histogram(NRtoER(E_ng,Y,V,eps),bins=Ebins)
R_ng=fng*C_ng/tlive_ng
bins=np.linspace(-100,2500,100)
#Looks like that's doing the right thing. Maybe need to truncate at 0
ERsmeared=spec.getSmeared(NRtoER(E_ng,0.2,V,eps))
ERsmeared[ERsmeared<0]=0
Y_max=0.25
#Normalize so that ER+NR matches data near 2 keV
fg4=np.sum(R_meas[(Ebin_ctr>1.9e3)&(Ebin_ctr<2e3)]) / (Nint(E_er,1.9e3,2e3)/g4['ER']['tlive'] + Nint(E_nr,ERtoNR(1.9e3,Y_max,V,eps),ERtoNR(2e3,Y_max,V,eps))/g4['NR']['tlive'])
E_nrs,Ys=extract_Y(E_er, E_nr, E_ng, fer=fg4, fnr=fg4, fng=1, Y_max=Y_max, E_nr_step=1)
cFit=(np.isfinite(E_nrs)) & (np.insert(np.diff(E_nrs)>0,-1,True))
Y_fCS=CubicSpline(E_nrs[cFit],Ys[cFit],extrapolate=True)
Y=Yield.Yield('User',[Y_fit])
E_nr_eVee=NRtoER(E_nr,Y,V,eps)
E_ng_eVee=NRtoER(E_ng,Y,V,eps)
#Use this correspondence to map back to NR
cFit=(E_nrs>0) & (np.isfinite(E_nrs)) & (np.insert(np.diff(E_nrs)>0,-1,True))
ERtoNR_fCS=CubicSpline(NRtoER(E_nrs[cFit],Y,V,eps),E_nrs[cFit])
E_nr_sm=ERtoNR_fCS(spec.getSmeared(E_nr_eVee))
E_ng_sm=ERtoNR_fCS(spec.getSmeared(E_ng_eVee))
E_ng_sm2=ERtoNR_fCS(spec.getSmeared(E_ng_eVee))
Ebnr=np.linspace(0,3e3,200)
E_nrs_0=E_nrs
Ys_0=Ys
E_nrs,Ys=extract_Y(E_er, E_nr_sm, E_ng_sm, fer=fg4, fnr=fg4, fng=1, Y_max=Y_max, E_nr_step=1)
cFit=(E_nrs>0) & (np.isfinite(E_nrs)) & (np.insert(np.diff(E_nrs)>0,-1,True))
Y_fCS=CubicSpline(E_nrs[cFit],Ys[cFit],extrapolate=True)
tlive_er=g4['ER']['tlive']
tlive_nr=g4['NR']['tlive']
tlive_ng=cap['tlive']
Y_max=0.25
#Normalize so that ER+NR matches data near 2 keV
fg4=np.sum(R_meas[(Ebin_ctr>1.9e3)&(Ebin_ctr<2e3)]) / (Nint(E_er,1.9e3,2e3)/g4['ER']['tlive'] + Nint(E_nr,ERtoNR(1.9e3,Y_max,V,eps),ERtoNR(2e3,Y_max,V,eps))/g4['NR']['tlive'])
E_nrs,Ys=extract_Y(E_er, E_nr, E_ng, fer=fg4, fnr=fg4, fng=1, Y_max=Y_max, E_nr_step=1)
cFit=(E_nrs>0) & (np.isfinite(E_nrs)) & (np.insert(np.diff(E_nrs)>0,-1,True))
Y_fCS=CubicSpline(E_nrs[cFit],Ys[cFit],extrapolate=True)
Y_fit = lambda E: Y_conditioned(E,Y_fCS,E_nrs[E_nrs>0][0],0,E_nrs[-1],Ys[-1])
E_nrs,Ys=extract_Y_wSmear(E_er, E_nr, E_ng, fer=fg4, fnr=fg4, fng=1, Y_max=Y_max, nIt=1, E_nr_step=1)
cFit=(E_nrs>0) & (np.isfinite(E_nrs)) & (np.insert(np.diff(E_nrs)>0,-1,True))
Y_fCS=CubicSpline(E_nrs[cFit],Ys[cFit],extrapolate=True)
Y_fit = lambda E: Y_conditioned(E,Y_fCS,E_nrs[E_nrs>0][0],0,E_nrs[-1],Ys[-1])
lY_max=[0.3]
lfer=[]
lfnr=[]
lfng=[]
lE_nrs=[]
lYs=[]
for Y_max in tqdm(lY_max):
#Normalize so that ER+NR matches data near 2 keV
fg4=np.sum(R_meas[(Ebin_ctr>1.9e3)&(Ebin_ctr<2e3)]) / (Nint(E_er,1.9e3,2e3)/g4['ER']['tlive'] + Nint(E_nr,ERtoNR(1.9e3,Y_max,V,eps),ERtoNR(2e3,Y_max,V,eps))/g4['NR']['tlive'])
lfer.append(fg4)
lfnr.append(fg4)
lfng.append(1)
E_nrs,Ys=extract_Y_wSmear(E_er, E_nr, E_ng, fer=fg4, fnr=fg4, fng=1, Y_max=Y_max,
nIt=1, E_nr_step=1)
lE_nrs.append(E_nrs)
lYs.append(Ys)
lfer=np.array(lfer)
lfnr=np.array(lfnr)
lE_nrs=np.array(lE_nrs)
lYs=np.array(lYs)
for E_nrs,Ys,fer,fnr,fng in zip(lE_nrs,lYs,lfer,lfnr,lfng):
cFit=(np.isfinite(E_nrs)) & (np.insert(np.diff(E_nrs)>0,-1,True))
Y_fCS=CubicSpline(E_nrs[cFit],Ys[cFit],extrapolate=True)
#Smear
Y_fit = lambda E: Y_conditioned(E,Y_fCS,E_nrs[0],0,E_nrs[-1],Ys[-1])
Y=Yield.Yield('User',[Y_fit])
E_er_sm=spec.getSmeared(E_er)
E_nr_eVee_sm=spec.getSmeared(NRtoER(E_nr,Y,V,eps))
E_ng_eVee_sm=spec.getSmeared(NRtoER(E_ng,Y,V,eps))
C_er,_=np.histogram(E_er_sm,bins=Ebins)
R_er=fer*C_er/tlive_er
C_nr,_=np.histogram(E_nr_eVee_sm,bins=Ebins)
R_nr=fnr*C_nr/tlive_nr
C_ng,_=np.histogram(E_ng_eVee_sm,bins=Ebins)
R_ng=fng*C_ng/tlive_ng
Y_max=0.3
R0_meas=np.sum(R_meas[(Ebin_ctr>1.9e3)&(Ebin_ctr<2e3)])
R0_er=Nint(E_er,1.9e3,2e3)/g4['ER']['tlive']
R0_nr=Nint(E_nr,ERtoNR(1.9e3,Y_max,V,eps),ERtoNR(2e3,Y_max,V,eps))/g4['NR']['tlive']
fer=0
fnr=(R0_meas)/R0_nr
fng=0
E_er_max=2e3
E_nr_max=ERtoNR(E_er_max,Y_max,V,eps)
Ebin_ctr_rev=Ebin_ctr[::-1]
rev_csum_meas=np.cumsum(R_meas[::-1])
R_sim_er=fer*np.histogram(E_er,Ebins)[0]/tlive_er
rev_csum_er=np.cumsum(R_sim_er[::-1])
w_nr=fnr/tlive_nr*np.ones(np.sum(E_nr<=E_nr_max))
w_ng=fng/tlive_ng*np.ones(np.sum(E_ng<=E_nr_max))
E_nrng=np.concatenate((E_nr[E_nr<=E_nr_max],E_ng[E_ng<=E_nr_max]))
w_nrng=np.concatenate((w_nr,w_ng))
E_nrng_rev_srt=(E_nrng[np.argsort(E_nrng)])[::-1]
w_nrng_rev_srt=(w_nrng[np.argsort(E_nrng)])[::-1]
rev_csum_nrng=np.cumsum(w_nrng_rev_srt)
diff=rev_csum_meas-rev_csum_er
E_nrs=[]
error=[]
for entry in diff:
if np.isfinite(entry):
args=np.argwhere(rev_csum_nrng>=entry)
if len(args)==0:
E_nrs.append(-99)
else:
E_nr_this=E_nrng_rev_srt[args[0][0]]
error.append(rev_csum_nrng[args[0][0]]-entry)
if len(E_nrs)>0:
E_nrs.append(min(E_nr_this,E_nrs[-1]))
else:
E_nrs.append(E_nr_this)
else:
E_nrs.append(-999)
error.append(-999)
E_nrs=np.array(E_nrs[::-1])
Ys=((Ebins[:-1]/E_nrs)*(1+V/eps)-1)*eps/V
cFit=(Ebin_ctr>50) & (E_nrs>0) & (np.isfinite(E_nrs)) & (np.insert(np.diff(E_nrs)>0,-1,True))
Y_fCS=CubicSpline(E_nrs[cFit],Ys[cFit],extrapolate=True)
Y_fit = lambda E: Y_conditioned(E,Y_fCS,E_nrs[cFit][0],Ys[cFit][0],E_nrs[-1],Ys[-1])
Y=Yield.Yield('User',[Y_fit])
E_er_sm=E_er
E_nr_eVee_sm=NRtoER(E_nr,Y,V,eps)
E_ng_eVee_sm=NRtoER(E_ng,Y,V,eps)
C_er,_=np.histogram(E_er_sm,bins=Ebins)
R_er=fer*C_er/tlive_er
C_nr,_=np.histogram(E_nr_eVee_sm,bins=Ebins)
R_nr=fnr*C_nr/tlive_nr
C_ng,_=np.histogram(E_ng_eVee_sm,bins=Ebins)
R_ng=fng*C_ng/tlive_ng
E_nr_eVee=NRtoER(E_nr,Y,V,eps)
E_ng_eVee=NRtoER(E_ng,Y,V,eps)
#Use this correspondence to map back to NR
#But need to condition it outside of the spline region.
#Just extrapolate with linear from each end
xx=NRtoER(E_nrs[cFit],Y,V,eps)
yy=E_nrs[cFit]
ERtoNR_fCS=CubicSpline(xx,yy,extrapolate=True)
pf_low=np.poly1d(np.polyfit([0,xx[0]], [0,yy[0]], 1))
pf_hi=np.poly1d(np.polyfit(xx[-10:], yy[-10:], 1))
ERtoNR_fcombo = lambda E: (E<xx[0])*pf_low(E) + (E>=xx[0])*(E<=xx[-1])*ERtoNR_fCS(E) + (E>xx[-1])*pf_hi(E)
E_er_sm=spec.getSmeared(E_er,seed=None,F=F)
E_er_sm[E_er_sm<0]=0
E_nr_sm=ERtoNR_fcombo(spec.getSmeared(E_nr_eVee,seed=None,F=F))
E_ng_sm=ERtoNR_fcombo(spec.getSmeared(E_ng_eVee,seed=None,F=F))
E_nrs,Ys,errors=extract_Y_v2(E_er_sm, E_nr_sm, E_ng_sm, fer, fnr, fng, Y_max, Ebins)
cFit=(Ebin_ctr>50) & (E_nrs>0) & (np.isfinite(E_nrs)) & (np.insert(np.diff(E_nrs)>0,-1,True))
Y_fCS=CubicSpline(E_nrs[cFit],Ys[cFit],extrapolate=True)
Y_fit = lambda E: Y_conditioned(E,Y_fCS,E_nrs[cFit][0],Ys[cFit][0],E_nrs[-1],Ys[-1])
Y=Yield.Yield('User',[Y_fit])
E_nr_eVee=NRtoER(E_nr,Y,V,eps)
E_ng_eVee=NRtoER(E_ng,Y,V,eps)
#Use this correspondence to map back to NR
#But need to condition it outside of the spline region.
#Just extrapolate with linear from each end
xx=NRtoER(E_nrs[cFit],Y,V,eps)
yy=E_nrs[cFit]
ERtoNR_fCS=CubicSpline(xx,yy,extrapolate=True)
pf_low=np.poly1d(np.polyfit([0,xx[0]], [0,yy[0]], 1))
pf_hi=np.poly1d(np.polyfit(xx[-10:], yy[-10:], 1))
ERtoNR_fcombo = lambda E: (E<xx[0])*pf_low(E) + (E>=xx[0])*(E<=xx[-1])*ERtoNR_fCS(E) + (E>xx[-1])*pf_hi(E)
E_er_sm2=spec.getSmeared(E_er,seed=None,F=F)
E_nr_sm2=ERtoNR_fcombo(spec.getSmeared(E_nr_eVee,seed=None,F=F))
E_ng_sm2=ERtoNR_fcombo(spec.getSmeared(E_ng_eVee,seed=None,F=F))
E_nrs,Ys,errors=extract_Y_v2(E_er_sm2, E_nr_sm2, E_ng_sm2, fer, fnr, fng, Y_max, Ebins)
cFit=(Ebin_ctr>50) & (E_nrs>0) & (np.isfinite(E_nrs)) & (np.insert(np.diff(E_nrs)>0,-1,True))
Y_fCS=CubicSpline(E_nrs[cFit],Ys[cFit],extrapolate=True)
Y_fit = lambda E: Y_conditioned(E,Y_fCS,E_nrs[cFit][0],Ys[cFit][0],E_nrs[-1],Ys[-1])
Y=Yield.Yield('User',[Y_fit])
E_nr_eVee=NRtoER(E_nr,Y,V,eps)
E_ng_eVee=NRtoER(E_ng,Y,V,eps)
#Use this correspondence to map back to NR
#But need to condition it outside of the spline region.
#Just extrapolate with linear from each end
xx=NRtoER(E_nrs[cFit],Y,V,eps)
yy=E_nrs[cFit]
ERtoNR_fCS=CubicSpline(xx,yy,extrapolate=True)
pf_low=np.poly1d(np.polyfit([0,xx[0]], [0,yy[0]], 1))
pf_hi=np.poly1d(np.polyfit(xx[-10:], yy[-10:], 1))
ERtoNR_fcombo = lambda E: (E<xx[0])*pf_low(E) + (E>=xx[0])*(E<=xx[-1])*ERtoNR_fCS(E) + (E>xx[-1])*pf_hi(E)
E_er_sm3=spec.getSmeared(E_er,seed=None,F=F)
E_nr_sm3=ERtoNR_fcombo(spec.getSmeared(E_nr_eVee,seed=None,F=F))
E_ng_sm3=ERtoNR_fcombo(spec.getSmeared(E_ng_eVee,seed=None,F=F))
E_nrs,Ys,errors=extract_Y_v2(E_er_sm, E_nr_sm, E_ng_sm, fer, fnr, fng, Y_max, Ebins)
cFit=(Ebin_ctr>50) & (E_nrs>0) & (np.isfinite(E_nrs)) & (np.insert(np.diff(E_nrs)>0,-1,True))
Y_fCS=CubicSpline(E_nrs[cFit],Ys[cFit],extrapolate=True)
Y_fit = lambda E: Y_conditioned(E,Y_fCS,E_nrs[cFit][0],Ys[cFit][0],E_nrs[-1],Ys[-1])
Y=Yield.Yield('User',[Y_fit])
tlive_er=g4['ER']['tlive']
tlive_nr=g4['NR']['tlive']
tlive_ng=cap['tlive']
lY_max=np.linspace(0.2,0.3,5)
lfer=[]
lfnr=[]
lfng=[]
lE_nrs=[]
lYs=[]
lerrors=[]
for Y_max in tqdm(lY_max):
#Normalize so that ER+NR matches data near 2 keV
R0_meas=np.sum(R_meas[(Ebin_ctr>1.99e3)&(Ebin_ctr<2e3)])
R0_er=Nint(E_er,1.99e3,2e3)/g4['ER']['tlive']
R0_nr=Nint(E_nr,ERtoNR(1.99e3,Y_max,V,eps),ERtoNR(2e3,Y_max,V,eps))/g4['NR']['tlive']
fnr=6
fer=(R0_meas-fnr*R0_nr)/R0_er
fng=2#2.037
lfer.append(fer)
lfnr.append(fnr)
lfng.append(fng)
E_nrs,Ys,errors=extract_Y_wSmear_v2(E_er, E_nr, E_ng, fer, fnr, fng, Y_max=Y_max,
nIt=1, Ebins=np.linspace(0,2e3,201), seed=None)
#If binning is too small, will get some errors and things won't work.
#Probably in bkg_sub, but not exactly sure
lE_nrs.append(E_nrs)
lYs.append(Ys)
lerrors.append(errors)
lfer=np.array(lfer)
lfnr=np.array(lfnr)
lE_nrs=np.array(lE_nrs)
lYs=np.array(lYs)
lerrors=np.array(lerrors)
dosmear=True
seed=None
#Add other measurements from lit
for E_nrs,Ys,fer,fnr,fng in zip(lE_nrs,lYs,lfer,lfnr,lfng):
cFit=(Ebin_ctr>50) &(E_nrs>0) & (np.isfinite(E_nrs)) & (np.insert(np.diff(E_nrs)>0,-1,True))
Y_fCS=CubicSpline(E_nrs[cFit],Ys[cFit],extrapolate=True)
Y_fCS=lambda E: np.interp(E,E_nrs[cFit],Ys[cFit])
#Smear
Y_fit = lambda E: Y_conditioned_test(E,Y_fCS,E_nrs[cFit],Ys[cFit])
Y=Yield.Yield('User',[Y_fit])
if dosmear:
E_er_sm=spec.getSmeared(E_er,seed=seed)
E_er_sm[E_er_sm<0]=0
E_nr_eVee_sm=spec.getSmeared(NRtoER(E_nr,Y,V,eps),seed=seed)
E_nr_eVee_sm[E_nr_eVee_sm<0]=0
E_nr_sm=NRtoER(E_nr,Y,V,eps)
E_ng_eVee_sm=spec.getSmeared(NRtoER(E_ng,Y,V,eps),seed=seed)
E_ng_eVee_sm[E_ng_eVee_sm<0]=0
else:
E_er_sm=E_er
E_nr_eVee_sm=NRtoER(E_nr,Y,V,eps)
E_ng_eVee_sm=NRtoER(E_ng,Y,V,eps)
C_er,_=np.histogram(E_er_sm,bins=Ebins)
R_er=fer*C_er/tlive_er
C_nr,_=np.histogram(E_nr_eVee_sm,bins=Ebins)
R_nr=fnr*C_nr/tlive_nr
C_ng,_=np.histogram(E_ng_eVee_sm,bins=Ebins)
R_ng=fng*C_ng/tlive_ng
R_tot=R_er+R_nr+R_ng
chi=np.mean((((R_tot-R_meas)/((dR_meas[0]+dR_meas[1])/2))**2)[Ebin_ctr>50])
lnIt=[0,1,2,5,10,15,20,30]
lY_max=[]
lfer=[]
lfnr=[]
lfng=[]
lE_nrs=[]
lYs=[]
lerrors=[]
for nIt in tqdm(lnIt):
Y_max=0.25
R0_meas=np.sum(R_meas[(Ebin_ctr>1.99e3)&(Ebin_ctr<2e3)])
R0_er=Nint(E_er,1.99e3,2e3)/g4['ER']['tlive']
R0_nr=Nint(E_nr,ERtoNR(1.99e3,Y_max,V,eps),ERtoNR(2e3,Y_max,V,eps))/g4['NR']['tlive']
lY_max.append(Y_max)
fnr=6
fer=(R0_meas-fnr*R0_nr)/R0_er
fng=4#2.037+0.41
lfer.append(fer)
lfnr.append(fnr)
lfng.append(fng)
E_nrs,Ys,errors=extract_Y_wSmear_v2(E_er, E_nr, E_ng, fer, fnr, fng, Y_max=Y_max,
nIt=nIt, Ebins=np.linspace(0,2e3,201), seed=None)
#If binning is too small, will get some errors and things won't work.
#Probably in bkg_sub, but not exactly sure
lE_nrs.append(E_nrs)
lYs.append(Ys)
lerrors.append(errors)
lfer=np.array(lfer)
lfnr=np.array(lfnr)
lE_nrs=np.array(lE_nrs)
lYs=np.array(lYs)
lerrors=np.array(lerrors)
dosmear=True
seed=None
#Add other measurements from lit
for E_nrs,Ys,fer,fnr,fng,nIt in zip(lE_nrs,lYs,lfer,lfnr,lfng,lnIt):
cFit=(Ebin_ctr>50) & (E_nrs>0) & (np.isfinite(E_nrs)) & (np.insert(np.diff(E_nrs)>0,-1,True))
Y_fCS=lambda E: np.interp(E,E_nrs[cFit],Ys[cFit])
#Smear
Y_fit = lambda E: Y_conditioned_test(E,Y_fCS,E_nrs[cFit],Ys[cFit])
Y=Yield.Yield('User',[Y_fit])
if nIt>0:
E_er_sm=spec.getSmeared(E_er,seed=seed)
E_er_sm[E_er_sm<0]=0
E_nr_eVee_sm=spec.getSmeared(NRtoER(E_nr,Y,V,eps),seed=seed)
E_nr_eVee_sm[E_nr_eVee_sm<0]=0
E_nr_sm=NRtoER(E_nr,Y,V,eps)
E_ng_eVee_sm=spec.getSmeared(NRtoER(E_ng,Y,V,eps),seed=seed)
E_ng_eVee_sm[E_ng_eVee_sm<0]=0
else:
E_er_sm=E_er
E_nr_eVee_sm=NRtoER(E_nr,Y,V,eps)
E_ng_eVee_sm=NRtoER(E_ng,Y,V,eps)
C_er,_=np.histogram(E_er_sm,bins=Ebins)
R_er=fer*C_er/tlive_er
C_nr,_=np.histogram(E_nr_eVee_sm,bins=Ebins)
R_nr=fnr*C_nr/tlive_nr
C_ng,_=np.histogram(E_ng_eVee_sm,bins=Ebins)
R_ng=fng*C_ng/tlive_ng
R_tot=R_er+R_nr+R_ng
chi=np.mean((((R_tot-R_meas)/((dR_meas[0]+dR_meas[1])/2))**2)[Ebin_ctr>50])
E_nrs=lE_nrs[4]
Ys=lYs[4]
cFit=(Ebin_ctr>50) & (E_nrs>0) & (np.isfinite(E_nrs)) & (np.insert(np.diff(E_nrs)>0,-1,True))
Y_fCS=CubicSpline(E_nrs[cFit],Ys[cFit],extrapolate=True)
Y_fCS=lambda E: np.interp(E,E_nrs[cFit],Ys[cFit])
#Smear
Y_fit = lambda E: Y_conditioned_test(E,Y_fCS,E_nrs[cFit],Ys[cFit])
Y=Yield.Yield('User',[Y_fit])
lY_max=np.concatenate((np.linspace(0.2,0.3,5),np.linspace(0.2,0.3,5)))
lfer=[]
lfnr=[]
lfng=np.concatenate(((2.037+0.408)*np.ones(5),(2.037-0.408)*np.ones(5)))
lE_nrs=[]
lYs=[]
lerrors=[]
for Y_max,fng in zip(tqdm(lY_max),lfng):
#Normalize near 2keV
R0_meas=np.sum(R_meas[(Ebin_ctr>1.99e3)&(Ebin_ctr<2e3)])
R0_er=Nint(E_er,1.99e3,2e3)/g4['ER']['tlive']
R0_nr=Nint(E_nr,ERtoNR(1.99e3,Y_max,V,eps),ERtoNR(2e3,Y_max,V,eps))/g4['NR']['tlive']
fer=(R0_meas)/(R0_er+R0_nr)
fnr=fer
lfer.append(fer)
lfnr.append(fnr)
E_nrs,Ys,errors=extract_Y_wSmear_v2(E_er, E_nr, E_ng, fer, fnr, fng, Y_max=Y_max,
nIt=1, Ebins=np.linspace(0,2e3,201), seed=0)
#If binning is too small, will get some errors and things won't work.
#Probably in bkg_sub, but not exactly sure
lE_nrs.append(E_nrs)
lYs.append(Ys)
lerrors.append(errors)
lfer=np.array(lfer)
lfnr=np.array(lfnr)
lE_nrs=np.array(lE_nrs)
lYs=np.array(lYs)
lerrors=np.array(lerrors)
dosmear=True
seed=0
#Add other measurements from lit
N=len(lE_nrs)
for i in range(int(N/2)):
cFit1=(Ebin_ctr>50) &(lE_nrs[i]>0) & (np.isfinite(lE_nrs[i])) & (np.insert(np.diff(lE_nrs[i])>0,-1,True))
E_nrs1=lE_nrs[i][cFit1]
Ys1=lYs[i][cFit1]
Y_fCS1=CubicSpline(E_nrs1,Ys1,extrapolate=True)
cFit2=(Ebin_ctr>50) &(lE_nrs[i+int(N/2)]>0) & (np.isfinite(lE_nrs[i+int(N/2)])) & (np.insert(np.diff(lE_nrs[i+int(N/2)])>0,-1,True))
E_nrs2=lE_nrs[i+int(N/2)][cFit2]
Ys2=lYs[i+int(N/2)][cFit2]
Y_fCS2=CubicSpline(E_nrs2,Ys2,extrapolate=True)
#Smear
Y_fit1 = lambda E: Y_conditioned(E,Y_fCS1,E_nrs1[0],Ys1[0],E_nrs1[-1],Ys1[-1])
Y1=Yield.Yield('User',[Y_fit1])
Y_fit2 = lambda E: Y_conditioned(E,Y_fCS2,E_nrs2[0],Ys2[0],E_nrs2[-1],Ys2[-1])
Y2=Yield.Yield('User',[Y_fit2])
if dosmear:
E_er_sm=spec.getSmeared(E_er,seed=seed)
E_er_sm[E_er_sm<0]=0
E_nr_eVee_sm=spec.getSmeared(NRtoER(E_nr,Y1,V,eps),seed=seed)
E_nr_eVee_sm[E_nr_eVee_sm<0]=0
E_nr_sm=NRtoER(E_nr,Y1,V,eps)
E_ng_eVee_sm=spec.getSmeared(NRtoER(E_ng,Y1,V,eps),seed=seed)
E_ng_eVee_sm[E_ng_eVee_sm<0]=0
else:
E_er_sm=E_er
E_nr_eVee_sm=NRtoER(E_nr,Y1,V,eps)
E_ng_eVee_sm=NRtoER(E_ng,Y1,V,eps)
C_er,_=np.histogram(E_er_sm,bins=Ebins)
R_er=lfer[i]*C_er/tlive_er
C_nr,_=np.histogram(E_nr_eVee_sm,bins=Ebins)
R_nr=lfnr[i]*C_nr/tlive_nr
C_ng,_=np.histogram(E_ng_eVee_sm,bins=Ebins)
R_ng=lfng[i]*C_ng/tlive_ng
R_tot=R_er+R_nr+R_ng
chi=np.mean((((R_tot-R_meas)/((dR_meas[0]+dR_meas[1])/2))**2)[Ebin_ctr>50])
izr=pt.get_old_Y_data()
Y_izr_up=CubicSpline(izr['Enr'],izr['Y'],extrapolate=True)
Y_fit = lambda E: Y_conditioned(E,Y_izr_up,izr['Enr'][0],(izr['Y'])[0],izr['Enr'][-1],(izr['Y'])[-1])
Y=Yield.Yield('User',[Y_fit])
xx=np.linspace(0,30e3,1000)
izr=pt.get_old_Y_data()
Y_izr_up=CubicSpline(izr['Enr'],izr['Y'],extrapolate=True)
Y_fit = lambda E: Y_conditioned(E,Y_izr_up,izr['Enr'][0],(izr['Y'])[0],izr['Enr'][-1],(izr['Y'])[-1])
Y=Yield.Yield('User',[Y_fit])
xx=np.linspace(0,30e3,1000)
#Load data if possible. If not possible, save for future use.
save = False
try:
with open( "data/cdf_results.p", "rb" ) as file:
results = pickle.load( file )
lY_max=results['lY_max']
lfer=results['lfer']
lfnr=results['lfnr']
lfng=results['lfng']
lE_nrs=results['lE_nrs']
lYs=results['lYs']
lerrors=results['lerrors']
except:
save = True
#Let's scan through a bunch of scalings and then only retain those which are consistent with Izr
if save:
lY_max=[]
lfer=[]
lfnr=[]
lfng=[]
lE_nrs=[]
lYs=[]
lerrors=[]
for Y_max in tqdm(np.linspace(0.25,0.29,20)):
for fnr in np.linspace(4,9,20):
for fng in [0,2.037+0.408,2.037-0.408]:
lY_max.append(Y_max)
#Normalize near 2keV
R0_meas=np.sum(R_meas[(Ebin_ctr>1.99e3)&(Ebin_ctr<2e3)])
R0_er=Nint(E_er,1.99e3,2e3)/g4['ER']['tlive']
R0_nr=Nint(E_nr,ERtoNR(1.99e3,Y_max,V,eps),ERtoNR(2e3,Y_max,V,eps))/g4['NR']['tlive']
fer=(R0_meas-fnr*R0_nr)/R0_er
lfer.append(fer)
lfnr.append(fnr)
lfng.append(fng)
E_nrs,Ys,errors=extract_Y_wSmear_v2(E_er, E_nr, E_ng, fer, fnr, fng, Y_max=Y_max,
nIt=1, Ebins=np.linspace(0,2e3,201), seed=0, F=0.1161)
#If binning is too small, will get some errors and things won't work.
#Probably in bkg_sub, but not exactly sure
lE_nrs.append(E_nrs)
lYs.append(Ys)
lerrors.append(errors)
results={'lY_max':lY_max, 'lfer':lfer, 'lfnr':lfnr, 'lfng':lfng, 'lE_nrs':lE_nrs, 'lYs':lYs, 'lerrors':lerrors}
with open( "data/cdf_results.p", "wb" ) as file:
pickle.dump( results, file )
lY_max=np.array(lY_max)
lfer=np.array(lfer)
lfnr=np.array(lfnr)
lfng=np.array(lfng)
lE_nrs=np.array(lE_nrs)
lYs=np.array(lYs)
lerrors=np.array(lerrors)
#Find those which are consistent with Izr
cgood=[]
Y_1keV=[]
for E_nrs,Ys in zip(lE_nrs,lYs):
Y=getYfitCond(E_nrs,Ys)
cizr=izr['Enr']<E_nrs[-1]
Y_1keV.append(Y.calc(1e3))
cgood.append(((np.abs(Y.calc(izr['Enr'])-izr['Y'])<1*izr['dY'])[cizr]).all())
cgood=np.array(cgood)
Y_1keV=np.array(Y_1keV)
dosmear=True
seed=0
Fthis=0.1161
#Add other measurements from lit
for E_nrs,Ys,fer,fnr,fng,good in zip(lE_nrs,lYs,lfer,lfnr,lfng,cgood):
if not good:
continue
cFit=(Ebin_ctr>50) &(E_nrs>0) & (np.isfinite(E_nrs)) & (np.insert(np.diff(E_nrs)>0,-1,True))
Y_fCS=CubicSpline(E_nrs[cFit],Ys[cFit],extrapolate=True)
if fng==0:
color='red'
else:
color='gray'
#Smear
Y_fit = lambda E: Y_conditioned(E,Y_fCS,E_nrs[cFit][0],Ys[cFit][0],E_nrs[-1],Ys[-1])
Y=Yield.Yield('User',[Y_fit])
if dosmear:
E_er_sm=spec.getSmeared(E_er,seed=seed,F=Fthis)
E_er_sm[E_er_sm<0]=0
E_nr_eVee_sm=spec.getSmeared(NRtoER(E_nr,Y,V,eps),seed=seed,F=Fthis)
E_nr_eVee_sm[E_nr_eVee_sm<0]=0
E_ng_eVee_sm=spec.getSmeared(NRtoER(E_ng,Y,V,eps),seed=seed,F=Fthis)
E_ng_eVee_sm[E_ng_eVee_sm<0]=0
else:
E_er_sm=E_er
E_nr_eVee_sm=NRtoER(E_nr,Y,V,eps)
E_ng_eVee_sm=NRtoER(E_ng,Y,V,eps)
C_er,_=np.histogram(E_er_sm,bins=Ebins)
R_er=fer*C_er/tlive_er
C_nr,_=np.histogram(E_nr_eVee_sm,bins=Ebins)
R_nr=fnr*C_nr/tlive_nr
C_ng,_=np.histogram(E_ng_eVee_sm,bins=Ebins)
R_ng=fng*C_ng/tlive_ng
#Pick mins and maxes at a given energy
#This isn't quite right, since the envelope is not just from a single curve
ifng0=np.argwhere(cgood&(lfng==0))
ifng0_min=ifng0[np.argmin(Y_1keV[ifng0])][0]
ifng0_max=ifng0[np.argmax(Y_1keV[ifng0])][0]
ifng=np.argwhere(cgood&(lfng!=0))
ifng_min=ifng[np.argmin(Y_1keV[ifng])][0]
ifng_max=ifng[np.argmax(Y_1keV[ifng])][0]
dosmear=True
seed=0
Fthis=0.1161
#Add other measurements from lit
labels=[r'no (n,$\gamma$)',r'with (n,$\gamma$)']
colors=['red','gray']
for inds,label,color in zip([[ifng0_max,ifng0_min],[ifng_max,ifng_min]],labels,colors):
#for E_nrs,Ys,fer,fnr,fng,good in zip(lE_nrs,lYs,lfer,lfnr,lfng,cgood):
i=inds[0]
j=inds[1]
cFit1=(Ebin_ctr>50) &(lE_nrs[i]>0) & (np.isfinite(lE_nrs[i])) & (np.insert(np.diff(lE_nrs[i])>0,-1,True))
E_nrs1=lE_nrs[i][cFit1]
Ys1=lYs[i][cFit1]
Y_fCS1=CubicSpline(E_nrs1,Ys1,extrapolate=True)
cFit2=(Ebin_ctr>50) &(lE_nrs[j]>0) & (np.isfinite(lE_nrs[j])) & (np.insert(np.diff(lE_nrs[j])>0,-1,True))
E_nrs2=lE_nrs[j][cFit2]
Ys2=lYs[j][cFit2]
Y_fCS2=CubicSpline(E_nrs2,Ys2,extrapolate=True)
#Smear
Y_fit1 = lambda E: Y_conditioned(E,Y_fCS1,E_nrs1[0],Ys1[0],E_nrs1[-1],Ys1[-1])
Y1=Yield.Yield('User',[Y_fit1])
Y_fit2 = lambda E: Y_conditioned(E,Y_fCS2,E_nrs2[0],Ys2[0],E_nrs2[-1],Ys2[-1])
Y2=Yield.Yield('User',[Y_fit2])
if dosmear:
E_er_sm=spec.getSmeared(E_er,seed=seed,F=Fthis)
E_er_sm[E_er_sm<0]=0
E_nr_eVee_sm1=spec.getSmeared(NRtoER(E_nr,Y1,V,eps),seed=seed,F=Fthis)
E_nr_eVee_sm1[E_nr_eVee_sm1<0]=0
E_nr_eVee_sm2=spec.getSmeared(NRtoER(E_nr,Y2,V,eps),seed=seed,F=Fthis)
E_nr_eVee_sm2[E_nr_eVee_sm2<0]=0
E_ng_eVee_sm1=spec.getSmeared(NRtoER(E_ng,Y1,V,eps),seed=seed,F=Fthis)
E_ng_eVee_sm1[E_ng_eVee_sm1<0]=0
E_ng_eVee_sm2=spec.getSmeared(NRtoER(E_ng,Y2,V,eps),seed=seed,F=Fthis)
E_ng_eVee_sm2[E_ng_eVee_sm2<0]=0
else:
E_er_sm=E_er
E_nr_eVee_sm1=NRtoER(E_nr,Y1,V,eps)
E_nr_eVee_sm2=NRtoER(E_nr,Y2,V,eps)
E_ng_eVee_sm1=NRtoER(E_ng,Y1,V,eps)
E_ng_eVee_sm2=NRtoER(E_ng,Y2,V,eps)
C_er1,_=np.histogram(E_er_sm,bins=Ebins)
R_er1=lfer[i]*C_er1/tlive_er
C_er2,_=np.histogram(E_er_sm,bins=Ebins)
R_er2=lfer[j]*C_er2/tlive_er
C_nr1,_=np.histogram(E_nr_eVee_sm1,bins=Ebins)
R_nr1=lfnr[i]*C_nr1/tlive_nr
C_nr2,_=np.histogram(E_nr_eVee_sm2,bins=Ebins)
R_nr2=lfnr[j]*C_nr2/tlive_nr
C_ng1,_=np.histogram(E_ng_eVee_sm1,bins=Ebins)
R_ng1=lfng[i]*C_ng1/tlive_ng
C_ng2,_=np.histogram(E_ng_eVee_sm2,bins=Ebins)
R_ng2=lfng[j]*C_ng2/tlive_ng
cut=cgood&(lfng!=0)
ERenvData=getERminmax(lE_nrs[cut],lYs[cut],lfer[cut],lfnr[cut],lfng[cut])
cut=cgood&(lfng!=0)
cut=cgood&(lfng==0)
#Add other measurements from lit
ERenvData=getERminmax(lE_nrs[cut],lYs[cut],lfer[cut],lfnr[cut],lfng[cut])
#Extract yield curve using the integral method
#Treats each event as a single scatter of the total energy
#fer: ER livetime factor
#fnr: NR livetime factor
#fng: NG livetime factor
#Y_max: Yield value that corresponds to the highest bin edge of Ebins
#v3: Separate ER and NR Fanos. Also allow smeared energies to be negative
tlive_er=g4['ER']['tlive']
tlive_nr=g4['NR']['tlive']
tlive_ng=cap['tlive']
save = False
try:
with open( "data/intmeth_scan_v3.p", "rb" ) as file:
scanData = pickle.load( file )
except:
save = True
#Single data structure to hold all those arrays of stuff
if save:
scanData={'lY_max':[], 'lfer':[], 'lfnr':[], 'lfng':[],
'lE_nrs':[], 'lYs':[], 'lerrors':[], 'lFanoER':[],'lFanoNR':[]}
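    #v3 scan grid: Y_max in [0.25,0.29], fnr in [4,9], fng in {0, 2.037, 2.037+/-0.408},
    #FanoNR in {0.1161, 1, 2, 5}; FanoER is held fixed at 0.1161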
for Y_max in tqdm(np.linspace(0.25,0.29,20)):
for fnr in np.linspace(4,9,20):
for fng in [0,2.037+0.408,2.037,2.037-0.408]:
for FanoNR in [0.1161,1,2,5]:
scanData['lY_max'].append(Y_max)
#Normalize near 2keV
R0_meas=np.sum(R_meas[(Ebin_ctr>1.99e3)&(Ebin_ctr<2e3)])
R0_er=Nint(E_er,1.99e3,2e3)/g4['ER']['tlive']
R0_nr=Nint(E_nr,ERtoNR(1.99e3,Y_max,V,eps),ERtoNR(2e3,Y_max,V,eps))/g4['NR']['tlive']
fer=(R0_meas-fnr*R0_nr)/R0_er
scanData['lfer'].append(fer)
scanData['lfnr'].append(fnr)
scanData['lfng'].append(fng)
scanData['lFanoER'].append(0.1161)
scanData['lFanoNR'].append(FanoNR)
E_nrs,Ys,errors=extract_Y_wSmear_v3(E_er, E_nr, E_ng, fer, fnr, fng, Y_max=Y_max,
nIt=1, Ebins=np.linspace(0,2e3,201), seed=0,
FanoER=0.1161, FanoNR=FanoNR)
scanData['lE_nrs'].append(E_nrs)
scanData['lYs'].append(Ys)
scanData['lerrors'].append(errors)
with open( "data/intmeth_scan_v3.p", "wb" ) as file:
pickle.dump( scanData, file )
for key in scanData.keys():
scanData[key]=np.array(scanData[key])
scanData['N']=len(scanData['lY_max'])
#Find those which are consistent with Izr
scanData['cgood']=[]
scanData['IzrChi']=[]
for i in range(scanData['N']):
Y=getYfitCond(scanData['lE_nrs'][i],scanData['lYs'][i])
cizr=izr['Enr']<scanData['lE_nrs'][i][-1]
scanData['IzrChi'].append(np.sum((((Y.calc(izr['Enr'])-izr['Y'])/izr['dY'])[cizr])**2))
scanData['cgood'].append(((np.abs(Y.calc(izr['Enr'])-izr['Y'])<1*izr['dY'])[cizr]).all())
scanData['cgood']=np.array(scanData['cgood'])
scanData['IzrChi']=np.array(scanData['IzrChi'])
fig_w=9
#fig,axs=subplots(1,2,figsize=(2*fig_w, fig_w*(.75)))
cut=scanData['cgood']&(scanData['lfng']==0)&(scanData['lFanoNR']==0.1161)
iPlot=0
#Best fit to Izr
iBest=np.argwhere(cut)[:,0][np.argmin(scanData['IzrChi'][cut])]
#Add other measurements from lit
#pt.plotOldYs_noSat(axs[0],fmt='o',markersize=6)
Yiso = lambda Enr,Eee: Eee/Enr*(1+eps/V)-eps/V
ERenvData=getERminmax_v3(scanData,cut,nAvg=1)
ERmidData=getERminmax_v3(scanData,np.arange(len(scanData['lE_nrs']))==iBest,nAvg=5)#Cheat to get mid. min==max
#Extract yield curve using the integral method
#Treats each event as a single scatter of the total energy
#fer: ER livetime factor
#fnr: NR livetime factor
#fng: NG livetime factor
#Y_max: Yield value that corresponds to the highest bin edge of Ebins
#v3: Separate ER and NR Fanos. Also allow smeared energies to be negative
#v4: Add dynamic smearing iteration. Stop if smeared matches measured via some measure of closeness.
tlive_er=g4['ER']['tlive']
tlive_nr=g4['NR']['tlive']
tlive_ng=cap['tlive']
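#Convergence test of the v4 dynamic smearing: a single point (Y_max=0.25, fnr=4, fng=4,
#FanoNR=0) is re-extracted with nItMax from 0 to 30 to see how many smearing iterations
#are needed before the extracted curve stabilizes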
#Single data structure to hold all those arrays of stuff
scanData={'lY_max':[], 'lfer':[], 'lfnr':[], 'lfng':[],
'lE_nrs':[], 'lYs':[], 'lerrors':[], 'lFanoER':[],'lFanoNR':[],
'lnItMax':[],'liIt':[]}
Y_max=0.25
FanoNR=0#0.1161
fnr=4
fng=4#2.037+0.41
for nIt in tqdm([0,1,2,5,10,15,20,30]):
scanData['lnItMax'].append(nIt)
scanData['lY_max'].append(Y_max)
#Normalize near 2keV
R0_meas=np.sum(R_meas[(Ebin_ctr>1.99e3)&(Ebin_ctr<2e3)])
R0_er=Nint(E_er,1.99e3,2e3)/g4['ER']['tlive']
R0_nr=Nint(E_nr,ERtoNR(1.99e3,Y_max,V,eps),ERtoNR(2e3,Y_max,V,eps))/g4['NR']['tlive']
fer=(R0_meas-fnr*R0_nr)/R0_er
scanData['lfer'].append(fer)
scanData['lfnr'].append(fnr)
scanData['lfng'].append(fng)
scanData['lFanoER'].append(0.1161)
scanData['lFanoNR'].append(FanoNR)
E_nrs,Ys,errors,iIt=extract_Y_wSmear_v4(E_er, E_nr, E_ng, fer, fnr, fng, Y_max=Y_max,
nItMax=nIt, fit_frac_all_goal=0.83, Ebins=np.linspace(0,2e3,201),
seed=0,FanoER=0.1161, FanoNR=FanoNR)
scanData['liIt'].append(iIt)
scanData['lE_nrs'].append(E_nrs)
scanData['lYs'].append(Ys)
scanData['lerrors'].append(errors)
for key in scanData.keys():
scanData[key]=np.array(scanData[key])
scanData['N']=len(scanData['lY_max'])
save = False
try:
with open( "data/R_Cal.p", "rb" ) as file:
temp = pickle.load( file )
R_er = temp['R_er']
R_nr = temp['R_nr']
R_ng = temp['R_ng']
R_tot = temp['R_tot']
R_max = temp['R_max']
R_min = temp['R_min']
except:
save = True
seed=0
#Tried speeding this up by only including the last entry as intermediate ones aren't saved
#But that resulted in errors later on... :/
if save:
for i in range(scanData['N']):
E_nrs=scanData['lE_nrs'][i]
Ys=scanData['lYs'][i]
fer=scanData['lfer'][i]
fnr=scanData['lfnr'][i]
fng=scanData['lfng'][i]
FanoER=scanData['lFanoER'][i]
FanoNR=scanData['lFanoNR'][i]
Y=getYfitCond_v4(E_nrs,Ys)
E_nr_eVee=NRtoER(E_nr,Y,V,eps)
E_ng_eVee=NRtoER(E_ng,Y,V,eps)
        if scanData['lnItMax'][i]>0: #smear only when this entry used smearing iterations
E_er_sm=spec.getSmeared(E_er,seed=seed,F=FanoER)
E_nr_eVee_sm=spec.getSmeared(NRtoER(E_nr,Y,V,eps),seed=seed,F=FanoNR)
E_ng_eVee_sm=spec.getSmeared(NRtoER(E_ng,Y,V,eps),seed=seed,F=FanoNR)
else:
E_er_sm=E_er
E_nr_eVee_sm=NRtoER(E_nr,Y,V,eps)
E_ng_eVee_sm=NRtoER(E_ng,Y,V,eps)
C_er,_=np.histogram(E_er_sm,bins=Ebins)
R_er=fer*C_er/tlive_er
C_nr,_=np.histogram(E_nr_eVee_sm,bins=Ebins)
R_nr=fnr*C_nr/tlive_nr
C_ng,_=np.histogram(E_ng_eVee_sm,bins=Ebins)
R_ng=fng*C_ng/tlive_ng
R_tot=R_er+R_nr+R_ng
R_max=R_meas[Ebin_ctr>50]+1*dR_meas[0][Ebin_ctr>50]
R_min=R_meas[Ebin_ctr>50]-1*dR_meas[1][Ebin_ctr>50]
with open( "data/R_Cal.p" , "wb" ) as file:
temp = {'R_er':R_er, 'R_nr':R_nr, 'R_ng':R_ng, 'R_tot':R_tot, 'R_max':R_max, 'R_min': R_min}
pickle.dump( temp, file )
save = False
try:
with open( "data/intmeth_prescan_v4.p", "rb" ) as file:
temp = pickle.load( file )
Y_max_test=temp['Y_max_test']
fnr_test=temp['fnr_test']
matchIzr_test=temp['matchIzr_test']
except:
save = True
if save:
#Do a first pass w/o smearing to determine the set of Y_max,fnr values that are even close.
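    #A 100x100 grid in (Y_max,fnr) is evaluated with nItMax=0; only points whose curve
    #matches Izraelevitch within 1 sigma (matchIzr_test) are carried into the full scan below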
Y_max_test_1d=np.linspace(0.25,0.29,100)
fnr_test_1d=np.linspace(4,9,100)
Y_max_test,fnr_test= np.meshgrid(Y_max_test_1d,fnr_test_1d)
Y_max_test=Y_max_test.flatten()
fnr_test=fnr_test.flatten()
matchIzr_test=[]
for Y_max,fnr in zip(tqdm(Y_max_test),fnr_test):
#Normalize near 2keV
R0_meas=np.sum(R_meas[(Ebin_ctr>1.99e3)&(Ebin_ctr<2e3)])
R0_er=Nint(E_er,1.99e3,2e3)/g4['ER']['tlive']
R0_nr=Nint(E_nr,ERtoNR(1.99e3,Y_max,V,eps),ERtoNR(2e3,Y_max,V,eps))/g4['NR']['tlive']
fer=(R0_meas-fnr*R0_nr)/R0_er
E_nrs,Ys,errors,iIt=extract_Y_wSmear_v4(E_er, E_nr, E_ng, fer, fnr, fng, Y_max=Y_max,
nItMax=0, fit_frac_all_goal=0.8, fit_frac_low_goal=1,
Ebins=np.linspace(0,2e3,201),
seed=None,FanoER=0.1161, FanoNR=0.1161)
Y=getYfitCond_v4(E_nrs,Ys)
cizr=izr['Enr']<E_nrs[-1]
matchIzr_test.append(((np.abs(Y.calc(izr['Enr'])-izr['Y'])<1*izr['dY'])[cizr]).all())
matchIzr_test=np.array(matchIzr_test)
#save
temp={'Y_max_test':Y_max_test, 'fnr_test':fnr_test, 'matchIzr_test':matchIzr_test}
with open( "data/intmeth_prescan_v4.p", "wb" ) as file:
pickle.dump( temp, file )
save = False
try:
with open( "data/intmeth_scan_v4.p", "rb" ) as file:
scanData = pickle.load( file )
except:
save = True
#Calculate using those initially good pairs of values
# But now we'll allow a few rounds of smearing and try different fng and FanoNR values.
#Single data structure to hold all those arrays of stuff
if save:
scanData={'lY_max':[], 'lfer':[], 'lfnr':[], 'lfng':[],
'lE_nrs':[], 'lYs':[], 'lerrors':[], 'lFanoER':[],'lFanoNR':[],
'lnItMax':[],'liIt':[]}
nItMax=4
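    #Full v4 scan: rerun the pre-selected (Y_max,fnr) pairs with up to 4 smearing iterations,
    #for each fng hypothesis {0, 2.037, 2.037+/-0.408} and FanoNR in {0.1161, 1, 2, 5}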
#for Y_max in tqdm(np.linspace(0.25,0.29,2)):
# for fnr in np.linspace(4,9,2):
for Y_max,fnr in zip(tqdm(Y_max_test[matchIzr_test]),fnr_test[matchIzr_test]):
for fng in [0,2.037+0.408,2.037,2.037-0.408]:
for FanoNR in [0.1161,1,2,5]:
scanData['lnItMax'].append(nItMax)
scanData['lY_max'].append(Y_max)
#Normalize near 2keV
R0_meas=np.sum(R_meas[(Ebin_ctr>1.99e3)&(Ebin_ctr<2e3)])
R0_er=Nint(E_er,1.99e3,2e3)/g4['ER']['tlive']
R0_nr=Nint(E_nr,ERtoNR(1.99e3,Y_max,V,eps),ERtoNR(2e3,Y_max,V,eps))/g4['NR']['tlive']
fer=(R0_meas-fnr*R0_nr)/R0_er
scanData['lfer'].append(fer)
scanData['lfnr'].append(fnr)
scanData['lfng'].append(fng)
scanData['lFanoER'].append(0.1161)
scanData['lFanoNR'].append(FanoNR)
E_nrs,Ys,errors,iIt=extract_Y_wSmear_v4(E_er, E_nr, E_ng, fer, fnr, fng, Y_max=Y_max,
nItMax=nItMax, fit_frac_all_goal=0.8, fit_frac_low_goal=1,
Ebins=np.linspace(0,2e3,201),
seed=None,FanoER=0.1161, FanoNR=FanoNR)
scanData['liIt'].append(iIt)
scanData['lE_nrs'].append(E_nrs)
scanData['lYs'].append(Ys)
scanData['lerrors'].append(errors)
for key in scanData.keys():
scanData[key]=np.array(scanData[key])
scanData['N']=len(scanData['lY_max'])
with open( "data/intmeth_scan_v4.p", "wb" ) as file:
pickle.dump( scanData, file )
#Save results
save = False
try:
with open( "data/intmeth_scan_v6.p", "rb" ) as file:
scanData = pickle.load( file )
except:
save = True
#Find those which are consistent with Izr
if save:
scanData['cgood']=[]
scanData['IzrChi']=[]
scanData['Y1keV']=[]
    for i in range(scanData['N']):
Y=getYfitCond_v4(scanData['lE_nrs'][i],scanData['lYs'][i])
cizr=izr['Enr']<scanData['lE_nrs'][i][-1]
scanData['Y1keV'].append(Y.calc(np.array([1e3])))
scanData['IzrChi'].append(np.sum((((Y.calc(izr['Enr'])-izr['Y'])/izr['dY'])[cizr])**2))
scanData['cgood'].append(((np.abs(Y.calc(izr['Enr'])-izr['Y'])<1*izr['dY'])[cizr]).all())
scanData['cgood']=np.array(scanData['cgood'])
scanData['IzrChi']=np.array(scanData['IzrChi'])
scanData['Y1keV']=np.array(scanData['Y1keV'])
with open( "data/intmeth_scan_v6.p", "wb" ) as file:
pickle.dump( scanData, file )
save = False
try:
with open( "data/collect.p", "rb") as file:
temp = pickle.load(file)
EYenvelopes = temp['EYenvelopes']
ERenvData = temp['ERenvData']
ERmidData = temp['ERmidData']
iBest = temp['iBest']
cut_noNG = temp['cut_noNG']
mask = temp['mask']
except:
save = True
#Collect the things we want to plot since it can take a while
if save:
EYenvelopes=[]
ERenvData=[]
ERmidData=[]
iBest=[]
mask=np.zeros(len(cut),dtype=bool)
mask[:]=True
#No NG
cut_noNG=(scanData['cgood'])&(scanData['lfng']==0)&(scanData['lFanoNR']==0.1161)&(scanData['liIt']<=3)
    cut=cut_noNG&mask
with open( "data/collect.p", "wb") as file:
temp = {'EYenvelopes':EYenvelopes, 'ERenvData':ERenvData, 'ERmidData':ERmidData, 'iBest':iBest, 'cut_noNG':cut_noNG, 'mask':mask}
pickle.dump( temp, file )
iBest=[]
cut_noNG=(scanData['cgood'])&(scanData['lfng']==0)&(scanData['lFanoNR']==0.1161)&(scanData['liIt']<=3)
iPlot=0
if iPlot==0:
cut=cut_noNG
else:
cut=cut_wNG
save = False
try:
with open( "data/collect2.p", "rb") as file:
temp = pickle.load(file)
EYenvelopes = temp['EYenvelopes']
ERenvData = temp['ERenvData']
ERmidData = temp['ERmidData']
iBest = temp['iBest']
cut_noNG = temp['cut_noNG']
mask = temp['mask']
except:
save = True
#Collect the things we want to plot since it can take a while
if save:
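    #For each of the two cases (without and with (n,gamma)): EYenvelopes holds the (Enr,Y)
    #envelope over all accepted curves, ERenvData the min/max reconstructed-rate bands, and
    #ERmidData the rates of the single best-fit-to-Izraelevitch curve (min==max by construction)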
EYenvelopes=[]
ERenvData=[]
ERmidData=[]
iBest=[]
mask=np.zeros(len(cut),dtype=bool)
mask[:]=True
#No NG
cut_noNG=(scanData['cgood'])&(scanData['lfng']==0)&(scanData['lFanoNR']==0.1161)&(scanData['liIt']<=3)
    cut=cut_noNG&mask
iBest=np.argwhere(cut)[:,0][np.argmin(scanData['IzrChi'][cut])]#Best fit to Izr
EYenvelopes.append(getEYenvelope_v4(scanData['lE_nrs'][cut],scanData['lYs'][cut],eVeeMin=50))
#This part is slow, please be patient
ERenvData.append(getERminmax_v4(scanData,cut,nAvg=5))
#Cheat to get mid. min==max
ERmidData.append(getERminmax_v4(scanData,np.arange(len(scanData['lE_nrs']))==iBest,nAvg=5))
#With NG
cut_wNG=(scanData['cgood'])&(scanData['lfng']!=0)&(scanData['lFanoNR']==0.1161)&(scanData['liIt']<=3)
cut=cut_wNG&mask
iBest=np.argwhere(cut)[:,0][np.argmin(scanData['IzrChi'][cut])]#Best fit to Izr
EYenvelopes.append(getEYenvelope_v4(scanData['lE_nrs'][cut],scanData['lYs'][cut],eVeeMin=50))
ERenvData.append(getERminmax_v4(scanData,cut,nAvg=5))
#Cheat to get mid. min==max
ERmidData.append(getERminmax_v4(scanData,np.arange(len(scanData['lE_nrs']))==iBest,nAvg=5))
with open( "data/collect2.p", "wb") as file:
temp = {'EYenvelopes':EYenvelopes, 'ERenvData':ERenvData, 'ERmidData':ERmidData, 'iBest':iBest, 'cut_noNG':cut_noNG, 'mask':mask}
pickle.dump( temp, file )
iBest=[]
cut_noNG=(scanData['cgood'])&(scanData['lfng']==0)&(scanData['lFanoNR']==0.1161)&(scanData['liIt']<=3)
cut=cut_noNG&mask
iBest.append(np.argwhere(cut)[:,0][np.argmin(scanData['IzrChi'][cut])])
cut_wNG=(scanData['cgood'])&(scanData['lfng']!=0)&(scanData['lFanoNR']==0.1161)&(scanData['liIt']<=3)
cut=cut_wNG&mask
iBest.append(np.argwhere(cut)[:,0][np.argmin(scanData['IzrChi'][cut])])
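#Left panel: extracted yield envelope (fill) and best-fit curve (dashed) for the selected
#(n,gamma) case, compared with literature yield measurements and the analysis-range contours.
#Right panel: measured rate with the reconstructed ER, NR, (n,gamma) and Total bands.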
fig_w=9
fig,axs=subplots(1,2,figsize=(2*fig_w, fig_w*(.75)))
iPlot=0
if iPlot==0:
cut=cut_noNG
else:
cut=cut_wNG
labels=[r'no (n,$\gamma$)',r'with (n,$\gamma$)']
colors=['gray','green']
#Add other measurements from lit
pt.plotOldYs(axs[0],datasets=['chav','izr','dough','gerb','zech','agnese'],
labels=['Chavarria','Izraelevitch','Dougherty','Gerbier','Zecher','Agnese'],
fmt='o',markersize=6)
axs[0].fill(*EYenvelopes[iPlot],colors[iPlot],alpha=0.5,label=labels[iPlot])
axs[0].plot(scanData['lE_nrs'][iBest[iPlot]][Ebin_ctr>50],scanData['lYs'][iBest[iPlot]][Ebin_ctr>50], colors[iPlot],linestyle='--')
axs[1].errorbar(Ebin_ctr[Ebin_ctr>50],R_meas[Ebin_ctr>50],(dR_meas.T[Ebin_ctr>50]).T,
ecolor='k', marker='o',markersize=6,color='k', linestyle='none',label='Measured',zorder=5)
axs[0].set_prop_cycle(None)#Reset color cycle
axs[1].set_prop_cycle(None)
axs[1].step(ERmidData[iPlot]['eVee'],ERmidData[iPlot]['NR']['min'],color='r',where='mid')
axs[1].step(ERmidData[iPlot]['eVee'],ERmidData[iPlot]['ER']['min'],color='k',where='mid')
axs[1].step(ERmidData[iPlot]['eVee'],ERmidData[iPlot]['NG']['min'],color='b',where='mid')
axs[1].step(ERmidData[iPlot]['eVee'],ERmidData[iPlot]['Total']['min'],color='g',where='mid')
axs[1].fill_between(ERenvData[iPlot]['eVee'],ERenvData[iPlot]['NR']['min'],ERenvData[iPlot]['NR']['max'],color='r',alpha=0.5,step='mid',label='NR')
axs[1].fill_between(ERenvData[iPlot]['eVee'],ERenvData[iPlot]['ER']['min'],ERenvData[iPlot]['ER']['max'],color='k',alpha=0.5,step='mid',label='ER')
axs[1].fill_between(ERenvData[iPlot]['eVee'],ERenvData[iPlot]['NG']['min'],ERenvData[iPlot]['NG']['max'],color='b',alpha=0.5,step='mid',label=r'(n,$\gamma)$')
axs[1].fill_between(ERenvData[iPlot]['eVee'],ERenvData[iPlot]['Total']['min'],ERenvData[iPlot]['Total']['max'],color='g',alpha=0.5,step='mid',label='Total')
#Analysis Range
axs[1].axvline(50,linestyle='--',color='m',label='Threshold')
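#Iso-eVee contours in the (Enr,Y) plane: Yiso(Enr,Eee) is the yield at which a recoil of
#energy Enr reconstructs at exactly Eee; the two dashed magenta curves mark the 50 eVee
#threshold and the 2 keVee analysis limit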
Yiso = lambda Enr,Eee: Eee/Enr*(1+eps/V)-eps/V
axs[0].plot(np.logspace(-2,5,100),Yiso(np.logspace(-2,5,100),50),'--m')
axs[0].plot(np.logspace(-2,5,100),Yiso(np.logspace(-2,5,100),2e3),'--m')
axs[0].text(2e2,0.2,r'50 $eV_{ee}$',size=16,color='m',rotation=-72)
axs[0].text(1e4,0.15,r'2 $keV_{ee}$',size=16,color='m',rotation=-65)
#Axes
axs[0].set_xlim(1e2,5e4);
axs[0].set_xscale('log')
axs[0].set_ylim(0,0.4)
axs[0].yaxis.set_major_locator(plt.MultipleLocator(0.1))
axs[0].set_xlabel('Energy [eVnr]')
axs[0].set_ylabel('Y')
axs[0].legend(loc='lower right',ncol=2,prop={'size': 16})
axs[1].set_ylim(0,0.04)
axs[1].yaxis.set_major_locator(plt.MultipleLocator(0.01))
axs[1].set_xlim(0,1e3)
axs[1].set_xlabel('Energy [eVee]')
axs[1].set_ylabel('Rate [1/bin/s]')
axs[1].legend(loc='upper right', prop={'size': 16})
tight_layout()
savefig('../figures/intmeth_izr_benchmark_noNG_FNRFER.png')
#Extract yield curve using the integral method
#Treats each event as a single scatter of the total energy
#fer: ER livetime factor
#fnr: NR livetime factor
#fng: NG livetime factor
#Y_max: Yield value that corresponds to the highest bin edge of Ebins
#v3: Separate ER and NR Fanos. Also allow smeared energies to be negative
#v4: Add dynamic smearing iteration. Stop if smeared matches measured via some measure of closeness.
tlive_er=g4['ER']['tlive']
tlive_nr=g4['NR']['tlive']
tlive_ng=cap['tlive']
#Single data structure to hold all those arrays of stuff
scanData={'lY_max':[], 'lfer':[], 'lfnr':[], 'lfng':[],
'lE_nrs':[], 'lYs':[], 'lerrors':[], 'lFanoER':[],'lFanoNR':[],
'lnItMax':[],'liIt':[]}
Y_max=0.25
FanoNR=0#0.1161
fnr=4
fng=4#2.037+0.41
for nIt in tqdm([0,1,2,5,10,15,20,30]):
scanData['lnItMax'].append(nIt)
scanData['lY_max'].append(Y_max)
#Normalize near 2keV
R0_meas=np.sum(R_meas[(Ebin_ctr>1.99e3)&(Ebin_ctr<2e3)])
R0_er=Nint(E_er,1.99e3,2e3)/g4['ER']['tlive']
R0_nr=Nint(E_nr,ERtoNR(1.99e3,Y_max,V,eps),ERtoNR(2e3,Y_max,V,eps))/g4['NR']['tlive']
fer=(R0_meas-fnr*R0_nr)/R0_er
scanData['lfer'].append(fer)
scanData['lfnr'].append(fnr)
scanData['lfng'].append(fng)
scanData['lFanoER'].append(0.1161)
scanData['lFanoNR'].append(FanoNR)
E_nrs,Ys,errors,iIt=extract_Y_wSmear_v4(E_er, E_nr, E_ng, fer, fnr, fng, Y_max=Y_max,
nItMax=nIt, fit_frac_all_goal=0.83, Ebins=np.linspace(0,2e3,201),
seed=0,FanoER=0.1161, FanoNR=FanoNR)
scanData['liIt'].append(iIt)
scanData['lE_nrs'].append(E_nrs)
scanData['lYs'].append(Ys)
scanData['lerrors'].append(errors)
for key in scanData.keys():
scanData[key]=np.array(scanData[key])
scanData['N']=len(scanData['lY_max'])
fig_w=9
seed=0
for i in range(scanData['N']):
E_nrs=scanData['lE_nrs'][i]
Ys=scanData['lYs'][i]
fer=scanData['lfer'][i]
fnr=scanData['lfnr'][i]
fng=scanData['lfng'][i]
FanoER=scanData['lFanoER'][i]
FanoNR=scanData['lFanoNR'][i]
Y=getYfitCond_v4(E_nrs,Ys)
E_nr_eVee=NRtoER(E_nr,Y,V,eps)
E_ng_eVee=NRtoER(E_ng,Y,V,eps)
    if scanData['lnItMax'][i]>0: #smear only when this entry used smearing iterations
E_er_sm=spec.getSmeared(E_er,seed=seed,F=FanoER)
E_nr_eVee_sm=spec.getSmeared(NRtoER(E_nr,Y,V,eps),seed=seed,F=FanoNR)
E_ng_eVee_sm=spec.getSmeared(NRtoER(E_ng,Y,V,eps),seed=seed,F=FanoNR)
else:
E_er_sm=E_er
E_nr_eVee_sm=NRtoER(E_nr,Y,V,eps)
E_ng_eVee_sm=NRtoER(E_ng,Y,V,eps)
C_er,_=np.histogram(E_er_sm,bins=Ebins)
R_er=fer*C_er/tlive_er
C_nr,_=np.histogram(E_nr_eVee_sm,bins=Ebins)
R_nr=fnr*C_nr/tlive_nr
C_ng,_=np.histogram(E_ng_eVee_sm,bins=Ebins)
R_ng=fng*C_ng/tlive_ng
R_tot=R_er+R_nr+R_ng
R_max=R_meas[Ebin_ctr>50]+1*dR_meas[0][Ebin_ctr>50]
R_min=R_meas[Ebin_ctr>50]-1*dR_meas[1][Ebin_ctr>50]
save = False
try:
with open( "data/intmeth_prescan_v4.p", "rb" ) as file:
temp = pickle.load( file )
Y_max_test=temp['Y_max_test']
fnr_test=temp['fnr_test']
matchIzr_test=temp['matchIzr_test']
except:
save = True
#Do a first pass w/o smearing to determine the set of Y_max,fnr values that are even close.
if save:
Y_max_test_1d=np.linspace(0.25,0.29,100)
fnr_test_1d=np.linspace(4,9,100)
Y_max_test,fnr_test= np.meshgrid(Y_max_test_1d,fnr_test_1d)
Y_max_test=Y_max_test.flatten()
fnr_test=fnr_test.flatten()
matchIzr_test=[]
for Y_max,fnr in zip(tqdm(Y_max_test),fnr_test):
#Normalize near 2keV
R0_meas=np.sum(R_meas[(Ebin_ctr>1.99e3)&(Ebin_ctr<2e3)])
R0_er=Nint(E_er,1.99e3,2e3)/g4['ER']['tlive']
R0_nr=Nint(E_nr,ERtoNR(1.99e3,Y_max,V,eps),ERtoNR(2e3,Y_max,V,eps))/g4['NR']['tlive']
fer=(R0_meas-fnr*R0_nr)/R0_er
E_nrs,Ys,errors,iIt=extract_Y_wSmear_v4(E_er, E_nr, E_ng, fer, fnr, fng, Y_max=Y_max,
nItMax=0, fit_frac_all_goal=0.8, fit_frac_low_goal=1,
Ebins=np.linspace(0,2e3,201),
seed=None,FanoER=0.1161, FanoNR=0.1161)
Y=getYfitCond_v4(E_nrs,Ys)
cizr=izr['Enr']<E_nrs[-1]
matchIzr_test.append(((np.abs(Y.calc(izr['Enr'])-izr['Y'])<1*izr['dY'])[cizr]).all())
matchIzr_test=np.array(matchIzr_test)
temp={'Y_max_test':Y_max_test, 'fnr_test':fnr_test, 'matchIzr_test':matchIzr_test}
with open( "data/intmeth_prescan_v4.p", "wb" ) as file:
pickle.dump( temp, file )
save = False
try:
with open( "data/intmeth_scan_v4.p", "rb" ) as file:
scanData = pickle.load( file )
except:
save=True
#Calculate using those initially good pairs of values
# But now we'll allow a few rounds of smearing and try different fng and FanoNR values.
if save:
#Single data structure to hold all those arrays of stuff
scanData={'lY_max':[], 'lfer':[], 'lfnr':[], 'lfng':[],
'lE_nrs':[], 'lYs':[], 'lerrors':[], 'lFanoER':[],'lFanoNR':[],
'lnItMax':[],'liIt':[]}
nItMax=4
for Y_max,fnr in zip(tqdm(Y_max_test[matchIzr_test]),fnr_test[matchIzr_test]):
for fng in [0,2.037+0.408,2.037,2.037-0.408]:
for FanoNR in [0.1161,1,2,5]:
scanData['lnItMax'].append(nItMax)
scanData['lY_max'].append(Y_max)
#Normalize near 2keV
R0_meas=np.sum(R_meas[(Ebin_ctr>1.99e3)&(Ebin_ctr<2e3)])
R0_er=Nint(E_er,1.99e3,2e3)/g4['ER']['tlive']
R0_nr=Nint(E_nr,ERtoNR(1.99e3,Y_max,V,eps),ERtoNR(2e3,Y_max,V,eps))/g4['NR']['tlive']
fer=(R0_meas-fnr*R0_nr)/R0_er
scanData['lfer'].append(fer)
scanData['lfnr'].append(fnr)
scanData['lfng'].append(fng)
scanData['lFanoER'].append(0.1161)
scanData['lFanoNR'].append(FanoNR)
E_nrs,Ys,errors,iIt=extract_Y_wSmear_v4(E_er, E_nr, E_ng, fer, fnr, fng, Y_max=Y_max,
nItMax=nItMax, fit_frac_all_goal=0.8, fit_frac_low_goal=1,
Ebins=np.linspace(0,2e3,201),
seed=None,FanoER=0.1161, FanoNR=FanoNR)
scanData['liIt'].append(iIt)
scanData['lE_nrs'].append(E_nrs)
scanData['lYs'].append(Ys)
scanData['lerrors'].append(errors)
for key in scanData.keys():
scanData[key]=np.array(scanData[key])
scanData['N']=len(scanData['lY_max'])
with open( "data/intmeth_scan_v4.p", "wb" ) as file:
pickle.dump( scanData, file )
#Save results
save = False
try:
with open( "data/intmeth_scan_v5.p", "rb" ) as file:
scanData = pickle.load( file )
except:
save = True
#Find those which are consistent with Izr
if save:
scanData['cgood']=[]
scanData['IzrChi']=[]
scanData['Y1keV']=[]
    for i in range(scanData['N']):
Y=getYfitCond_v4(scanData['lE_nrs'][i],scanData['lYs'][i])
cizr=izr['Enr']<scanData['lE_nrs'][i][-1]
scanData['Y1keV'].append(Y.calc(np.array([1e3])))
scanData['IzrChi'].append(np.sum((((Y.calc(izr['Enr'])-izr['Y'])/izr['dY'])[cizr])**2))
scanData['cgood'].append(((np.abs(Y.calc(izr['Enr'])-izr['Y'])<1*izr['dY'])[cizr]).all())
scanData['cgood']=np.array(scanData['cgood'])
scanData['IzrChi']=np.array(scanData['IzrChi'])
scanData['Y1keV']=np.array(scanData['Y1keV'])
with open( "data/intmeth_scan_v5.p", "wb" ) as file:
pickle.dump( scanData, file )
#Collect the things we want to plot since it can take a while
EYenvelopes=[]
ERenvData=[]
ERmidData=[]
iBest=[]
mask=np.zeros(len(cut),dtype=bool)
mask[:]=True
#No NG
cut_noNG=(scanData['cgood'])&(scanData['lfng']==0)&(scanData['lFanoNR']==0.1161)&(scanData['liIt']<=3)
iBest=[]
cut_noNG=(scanData['cgood'])&(scanData['lfng']==0)&(scanData['lFanoNR']==0.1161)&(scanData['liIt']<=3)
#NR-equivalent threshold values for some yield models
#Also compare with an arbitrary threshold of 10 eVee
#Lindhard (k=0.146 for Si)
Y=Yield.Yield('Lind',[0.146])
#Should really put these calculations somewhere more useful
#Lindhard for Ge (Si) at 100(110) eV, that's the assumed SNOLAB iZIP threshold
Y=Yield.Yield('Lind',[0.157]) #Used <A>=72.8
Y=Yield.Yield('Lind',[0.146])
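#--- Hedged illustration (not part of the original analysis) -------------------------------
#A minimal standalone sketch of how an NR-equivalent threshold could be obtained from an
#eVee threshold under the standard Lindhard parameterization and the NTL relation
#Eee = Enr*(eps + Y*V)/(eps + V). Names prefixed with "toy_" are hypothetical, and the
#k, Z, V, eps defaults are example values only; the notebook's Yield/ERtoNR helpers remain
#the authoritative implementation.
import numpy as np
from scipy.optimize import brentq

def toy_lindhard_Y(Enr_eV, k=0.146, Z=14):
    """Standard Lindhard ionization yield for a recoil energy given in eV."""
    E_keV = np.asarray(Enr_eV, dtype=float)/1e3
    eps_L = 11.5*Z**(-7.0/3.0)*E_keV
    g = 3.0*eps_L**0.15 + 0.7*eps_L**0.6 + eps_L
    return k*g/(1.0 + k*g)

def toy_eVee(Enr_eV, V=100.0, eps=3.8):
    """Measured (NTL-amplified) energy in eVee for a nuclear recoil of Enr_eV."""
    Yl = toy_lindhard_Y(Enr_eV)
    return Enr_eV*(eps + Yl*V)/(eps + V)

def toy_nr_threshold(thr_eVee, V=100.0, eps=3.8):
    """Numerically invert toy_eVee to get the NR-equivalent threshold in eVnr."""
    return brentq(lambda E: toy_eVee(E, V, eps) - thr_eVee, 1.0, 1e5)

#Example: toy_nr_threshold(50.0) gives the recoil energy that reconstructs at 50 eVee
#under these example settings.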
importlib.reload(pt)
Emax = 2000 #eVee
Ebins=np.linspace(0,Emax,201)
Ebin_ctr=(Ebins[:-1]+Ebins[1:])/2
tlive_er=g4['ER']['tlive']
tlive_nr=g4['NR']['tlive']
tlive_ng=cap['tlive']
#uncertainty is (high,low)
R_meas,dR_meas=spec.doBkgSub(meas, Ebins, Efit_min=50,Efit_max=2e3,\
doEffsyst=True, doBurstLeaksyst=True,\
output='reco-rate')
#Illustration of method
Elim_er=[255.0,505.0,1005.0,1505.0,1995.0]
Elim_nr=[806.3832567888599, 1967.2490338155576, 4045.3075738134753, 5739.940139258986, 7281.31517699986]
for Elim in Elim_er[:-1]:
cut=(Ebin_ctr>=Elim)&(Ebin_ctr<=Elim_er[-1])
c,b=np.histogram(np.sum(g4['ER']['E'],axis=1),bins=Ebins)
bctr=(b[:-1]+b[1:])/2
for Elim in Elim_er[:-1]:
cut=(bctr>=Elim)&(bctr<=Elim_er[-1])
Ebnr=np.linspace(0,7.3e3,200)
c,b=np.histogram(np.sum(g4['NR']['E'],axis=1),bins=Ebnr)
bctr=(b[:-1]+b[1:])/2
for Elim in Elim_nr[:-1]:
cut=(bctr>=Elim)&(bctr<=Elim_nr[-1])
c,b=np.histogram(np.sum(cap['dE'],axis=1),bins=Ebnr)
bctr=(b[:-1]+b[1:])/2
for Elim in Elim_nr[:-1]:
cut=(bctr>=Elim)&(bctr<=Elim_nr[-1])
#For this analysis, we'll just use the total Edep of each event and apply yield to that.
#How big of an assumption is this?
E_er=np.sum(g4['ER']['E'],axis=1)
E_nr=np.sum(g4['NR']['E'],axis=1)
E_ng=np.sum(cap['dE'],axis=1)
Emax_frac_er=np.max(g4['ER']['E'],axis=1)/np.sum(g4['ER']['E'],axis=1)
Emax_frac_nr=np.max(g4['NR']['E'],axis=1)/np.sum(g4['NR']['E'],axis=1)
Emax_frac_ng=np.max(cap['dE'],axis=1)/np.sum(cap['dE'],axis=1)
#Trim events that won't figure into the analysis range
E_er=E_er[(E_er>0) & (E_er<10e3)]
E_nr=E_nr[(E_nr>0) & (E_nr<1000e3)]
#Spectra with default livetimes and standard yield, Fano
#Y=Yield.Yield('Lind',[0.146])
Y=Yield.Yield('Chav',[0.146,1e3/0.247])
N_er,_=np.histogram(E_er,bins=Ebins)
N_nr,_=np.histogram(NRtoER(E_nr,Y,V,eps),bins=Ebins)
N_ng,_=np.histogram(NRtoER(E_ng,Y,V,eps),bins=Ebins)
R_er=N_er/g4['ER']['tlive']
R_nr=N_nr/g4['NR']['tlive']
R_ng=N_ng/cap['tlive']
#Need to set some NR max I think.
#Not sure how to choose this because there are NRs up to 1 MeV
#Do we need a fixed (Er,Y) to work from?
Y=Yield.Yield('Lind',[0.146])
E_nr_max=ERtoNR(Ebin_ctr[-1],Y,V,eps)[0]
fg4=np.sum(R_meas[(Ebin_ctr>1.9e3)&(Ebin_ctr<2e3)]) / (Nint(E_er,1.9e3,2e3)/g4['ER']['tlive'] + Nint(E_nr,ERtoNR(1.9e3,Y,V,eps)[0],E_nr_max)/g4['NR']['tlive'])
fng=0
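#Integral (CDF-matching) extraction, simplest version: a single scale fg4 for both ER and NR,
#fixed so the summed simulated rate matches the measurement in the 1.9-2.0 keVee window, and
#(n,gamma) switched off. Stepping down from 2 keV, for each measured eVee bin lower E_nr_test
#(in 1 eV steps) until the simulated rate integrated above the bin (ER above Ebin_ctr[i] plus
#NR above E_nr_test) reaches the measured rate integrated above that bin; the resulting
#Ebin_ctr <-> E_nrs pairing defines the yield curve. Bins with non-finite measured rate get inf.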
E_nrs=[]
E_nr_step=1
E_nr_test=E_nr_max
for i in tqdm(range(len(Ebin_ctr))[::-1]):
if np.isfinite(R_meas[i]):
while True:
R_meas_this=np.sum(R_meas[(Ebin_ctr>Ebin_ctr[i])&(Ebin_ctr<2e3)])
R_sim_this=fg4*(Nint(E_er,Ebin_ctr[i],2e3)/g4['ER']['tlive'] + Nint(E_nr,E_nr_test,E_nr_max)/g4['NR']['tlive']) + fng*Nint(E_ng,E_nr_test,E_nr_max)/cap['tlive']
if (R_meas_this<R_sim_this) or (E_nr_test<0):
break
E_nr_test-=E_nr_step
E_nrs.append(E_nr_test)
else:
E_nrs.append(np.inf)
E_nrs=np.array(E_nrs[::-1])
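#Invert the NTL relation Eee = Enr*(eps + Y*V)/(eps + V) to get Y at each bin center,
#pairing Ebin_ctr (eVee) with the extracted E_nrs (eVnr)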
Ys=((Ebin_ctr/E_nrs)*(1+V/eps)-1)*eps/V
cFit=(np.isfinite(E_nrs)) & (np.insert(np.diff(E_nrs)>0,-1,True))
Y_fCS=CubicSpline(E_nrs[cFit],Ys[cFit])
Y=Yield.Yield('Chav',[0.146,1e3/0.247])
C_er,_=np.histogram(E_er,bins=Ebins)
R_er=fg4*C_er/g4['ER']['tlive']
Y=Yield.Yield('User',[Y_fCS])
C_nr,_=np.histogram(NRtoER(E_nr,Y,V,eps),bins=Ebins)
R_nr=fg4*C_nr/g4['NR']['tlive']
C_ng,_=np.histogram(NRtoER(E_ng,Y,V,eps),bins=Ebins)
R_ng=fng*C_ng/cap['tlive']
#Extract yield curve using the integral method
#Treats each event as a single scatter of the total energy
#fer: ER livetime factor
#fnr: NR livetime factor
#fng: NG livetime factor
#Y_max: Yield value that corresponds to the highest bin edge of Ebins
tlive_er=g4['ER']['tlive']
tlive_nr=g4['NR']['tlive']
tlive_ng=cap['tlive']
lY_max=np.linspace(0.1,0.6,6)
lfer=[]
lfnr=[]
lE_nrs=[]
lYs=[]
for Y_max in tqdm(lY_max):
#Normalize so that ER+NR matches data near 2 keV
fg4=np.sum(R_meas[(Ebin_ctr>1.9e3)&(Ebin_ctr<2e3)]) / (Nint(E_er,1.9e3,2e3)/g4['ER']['tlive'] + Nint(E_nr,ERtoNR(1.9e3,Y_max,V,eps),ERtoNR(2e3,Y_max,V,eps))/g4['NR']['tlive'])
lfer.append(fg4)
lfnr.append(fg4)
E_nrs,Ys=extract_Y(E_er, E_nr, E_ng, fer=fg4, fnr=fg4, fng=0, Y_max=Y_max, E_nr_step=1)
lE_nrs.append(E_nrs)
lYs.append(Ys)
lfer=np.array(lfer)
lfnr=np.array(lfnr)
lE_nrs=np.array(lE_nrs)
lYs=np.array(lYs)
for E_nrs,Ys in zip(lE_nrs,lYs):
cFit=(np.isfinite(E_nrs)) & (np.insert(np.diff(E_nrs)>0,-1,True))
Y_fCS=CubicSpline(E_nrs[cFit],Ys[cFit],extrapolate=True)
for E_nrs,Ys,fer,fnr in zip(lE_nrs,lYs,lfer,lfnr):
cFit=(np.isfinite(E_nrs)) & (np.insert(np.diff(E_nrs)>0,-1,True))
Y_fCS=CubicSpline(E_nrs[cFit],Ys[cFit],extrapolate=True)
C_er,_=np.histogram(E_er,bins=Ebins)
R_er=fer*C_er/tlive_er
Y=Yield.Yield('User',[Y_fCS])
C_nr,_=np.histogram(NRtoER(E_nr,Y,V,eps),bins=Ebins)
R_nr=fnr*C_nr/tlive_nr
C_ng,_=np.histogram(NRtoER(E_ng,Y,V,eps),bins=Ebins)
R_ng=fng*C_ng/tlive_ng
bins=np.linspace(-100,2500,100)
#Looks like that's doing the right thing. Maybe need to truncate at 0
ERsmeared=spec.getSmeared(NRtoER(E_ng,0.2,V,eps))
ERsmeared[ERsmeared<0]=0
Y_max=0.25
#Normalize so that ER+NR matches data near 2 keV
fg4=np.sum(R_meas[(Ebin_ctr>1.9e3)&(Ebin_ctr<2e3)]) / (Nint(E_er,1.9e3,2e3)/g4['ER']['tlive'] + Nint(E_nr,ERtoNR(1.9e3,Y_max,V,eps),ERtoNR(2e3,Y_max,V,eps))/g4['NR']['tlive'])
E_nrs,Ys=extract_Y(E_er, E_nr, E_ng, fer=fg4, fnr=fg4, fng=1, Y_max=Y_max, E_nr_step=1)
cFit=(np.isfinite(E_nrs)) & (np.insert(np.diff(E_nrs)>0,-1,True))
Y_fCS=CubicSpline(E_nrs[cFit],Ys[cFit],extrapolate=True)
Y=Yield.Yield('User',[Y_fit])
E_nr_eVee=NRtoER(E_nr,Y,V,eps)
E_ng_eVee=NRtoER(E_ng,Y,V,eps)
#Use this correspondence to map back to NR
cFit=(E_nrs>0) & (np.isfinite(E_nrs)) & (np.insert(np.diff(E_nrs)>0,-1,True))
ERtoNR_fCS=CubicSpline(NRtoER(E_nrs[cFit],Y,V,eps),E_nrs[cFit])
E_nr_sm=ERtoNR_fCS(spec.getSmeared(E_nr_eVee))
E_ng_sm=ERtoNR_fCS(spec.getSmeared(E_ng_eVee))
E_ng_sm2=ERtoNR_fCS(spec.getSmeared(E_ng_eVee))
Ebnr=np.linspace(0,3e3,200)
E_nrs_0=E_nrs
Ys_0=Ys
E_nrs,Ys=extract_Y(E_er, E_nr_sm, E_ng_sm, fer=fg4, fnr=fg4, fng=1, Y_max=Y_max, E_nr_step=1)
cFit=(E_nrs>0) & (np.isfinite(E_nrs)) & (np.insert(np.diff(E_nrs)>0,-1,True))
Y_fCS=CubicSpline(E_nrs[cFit],Ys[cFit],extrapolate=True)
tlive_er=g4['ER']['tlive']
tlive_nr=g4['NR']['tlive']
tlive_ng=cap['tlive']
Y_max=0.25
#Normalize so that ER+NR matches data near 2 keV
fg4=np.sum(R_meas[(Ebin_ctr>1.9e3)&(Ebin_ctr<2e3)]) / (Nint(E_er,1.9e3,2e3)/g4['ER']['tlive'] + Nint(E_nr,ERtoNR(1.9e3,Y_max,V,eps),ERtoNR(2e3,Y_max,V,eps))/g4['NR']['tlive'])
E_nrs,Ys=extract_Y(E_er, E_nr, E_ng, fer=fg4, fnr=fg4, fng=1, Y_max=Y_max, E_nr_step=1)
cFit=(E_nrs>0) & (np.isfinite(E_nrs)) & (np.insert(np.diff(E_nrs)>0,-1,True))
Y_fCS=CubicSpline(E_nrs[cFit],Ys[cFit],extrapolate=True)
Y_fit = lambda E: Y_conditioned(E,Y_fCS,E_nrs[E_nrs>0][0],0,E_nrs[-1],Ys[-1])
E_nrs,Ys=extract_Y_wSmear(E_er, E_nr, E_ng, fer=fg4, fnr=fg4, fng=1, Y_max=Y_max, nIt=1, E_nr_step=1)
cFit=(E_nrs>0) & (np.isfinite(E_nrs)) & (np.insert(np.diff(E_nrs)>0,-1,True))
Y_fCS=CubicSpline(E_nrs[cFit],Ys[cFit],extrapolate=True)
Y_fit = lambda E: Y_conditioned(E,Y_fCS,E_nrs[E_nrs>0][0],0,E_nrs[-1],Ys[-1])
lY_max=[0.3]
lfer=[]
lfnr=[]
lfng=[]
lE_nrs=[]
lYs=[]
for Y_max in tqdm(lY_max):
#Normalize so that ER+NR matches data near 2 keV
fg4=np.sum(R_meas[(Ebin_ctr>1.9e3)&(Ebin_ctr<2e3)]) / (Nint(E_er,1.9e3,2e3)/g4['ER']['tlive'] + Nint(E_nr,ERtoNR(1.9e3,Y_max,V,eps),ERtoNR(2e3,Y_max,V,eps))/g4['NR']['tlive'])
lfer.append(fg4)
lfnr.append(fg4)
lfng.append(1)
E_nrs,Ys=extract_Y_wSmear(E_er, E_nr, E_ng, fer=fg4, fnr=fg4, fng=1, Y_max=Y_max,
nIt=1, E_nr_step=1)
lE_nrs.append(E_nrs)
lYs.append(Ys)
lfer=np.array(lfer)
lfnr=np.array(lfnr)
lE_nrs=np.array(lE_nrs)
lYs=np.array(lYs)
for E_nrs,Ys,fer,fnr,fng in zip(lE_nrs,lYs,lfer,lfnr,lfng):
cFit=(np.isfinite(E_nrs)) & (np.insert(np.diff(E_nrs)>0,-1,True))
Y_fCS=CubicSpline(E_nrs[cFit],Ys[cFit],extrapolate=True)
#Smear
Y_fit = lambda E: Y_conditioned(E,Y_fCS,E_nrs[0],0,E_nrs[-1],Ys[-1])
Y=Yield.Yield('User',[Y_fit])
E_er_sm=spec.getSmeared(E_er)
E_nr_eVee_sm=spec.getSmeared(NRtoER(E_nr,Y,V,eps))
E_ng_eVee_sm=spec.getSmeared(NRtoER(E_ng,Y,V,eps))
C_er,_=np.histogram(E_er_sm,bins=Ebins)
R_er=fer*C_er/tlive_er
C_nr,_=np.histogram(E_nr_eVee_sm,bins=Ebins)
R_nr=fnr*C_nr/tlive_nr
C_ng,_=np.histogram(E_ng_eVee_sm,bins=Ebins)
R_ng=fng*C_ng/tlive_ng
Y_max=0.3
R0_meas=np.sum(R_meas[(Ebin_ctr>1.9e3)&(Ebin_ctr<2e3)])
R0_er=Nint(E_er,1.9e3,2e3)/g4['ER']['tlive']
R0_nr=Nint(E_nr,ERtoNR(1.9e3,Y_max,V,eps),ERtoNR(2e3,Y_max,V,eps))/g4['NR']['tlive']
fer=0
fnr=(R0_meas)/R0_nr
fng=0
E_er_max=2e3
E_nr_max=ERtoNR(E_er_max,Y_max,V,eps)
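#Inline version of the extraction using reverse cumulative sums: rev_csum_meas is the measured
#rate integrated above each eVee bin and rev_csum_er the simulated ER contribution; their
#difference must be supplied by NR and (n,gamma) events above some Enr. Each simulated event
#carries a rate weight f/tlive; walking down the energy-sorted NR/NG events, the first index
#where the cumulative weight reaches the target fixes E_nr for that bin (enforcing that E_nrs
#decreases monotonically). Sentinels: -99 = no solution, -999 = non-finite measured rate.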
Ebin_ctr_rev=Ebin_ctr[::-1]
rev_csum_meas=np.cumsum(R_meas[::-1])
R_sim_er=fer*np.histogram(E_er,Ebins)[0]/tlive_er
rev_csum_er=np.cumsum(R_sim_er[::-1])
w_nr=fnr/tlive_nr*np.ones(np.sum(E_nr<=E_nr_max))
w_ng=fng/tlive_ng*np.ones(np.sum(E_ng<=E_nr_max))
E_nrng=np.concatenate((E_nr[E_nr<=E_nr_max],E_ng[E_ng<=E_nr_max]))
w_nrng=np.concatenate((w_nr,w_ng))
E_nrng_rev_srt=(E_nrng[np.argsort(E_nrng)])[::-1]
w_nrng_rev_srt=(w_nrng[np.argsort(E_nrng)])[::-1]
rev_csum_nrng=np.cumsum(w_nrng_rev_srt)
diff=rev_csum_meas-rev_csum_er
E_nrs=[]
error=[]
for entry in diff:
if np.isfinite(entry):
args=np.argwhere(rev_csum_nrng>=entry)
if len(args)==0:
E_nrs.append(-99)
else:
E_nr_this=E_nrng_rev_srt[args[0][0]]
error.append(rev_csum_nrng[args[0][0]]-entry)
if len(E_nrs)>0:
E_nrs.append(min(E_nr_this,E_nrs[-1]))
else:
E_nrs.append(E_nr_this)
else:
E_nrs.append(-999)
error.append(-999)
E_nrs=np.array(E_nrs[::-1])
Ys=((Ebins[:-1]/E_nrs)*(1+V/eps)-1)*eps/V
cFit=(Ebin_ctr>50) & (E_nrs>0) & (np.isfinite(E_nrs)) & (np.insert(np.diff(E_nrs)>0,-1,True))
Y_fCS=CubicSpline(E_nrs[cFit],Ys[cFit],extrapolate=True)
Y_fit = lambda E: Y_conditioned(E,Y_fCS,E_nrs[cFit][0],Ys[cFit][0],E_nrs[-1],Ys[-1])
Y=Yield.Yield('User',[Y_fit])
E_er_sm=E_er
E_nr_eVee_sm=NRtoER(E_nr,Y,V,eps)
E_ng_eVee_sm=NRtoER(E_ng,Y,V,eps)
C_er,_=np.histogram(E_er_sm,bins=Ebins)
R_er=fer*C_er/tlive_er
C_nr,_=np.histogram(E_nr_eVee_sm,bins=Ebins)
R_nr=fnr*C_nr/tlive_nr
C_ng,_=np.histogram(E_ng_eVee_sm,bins=Ebins)
R_ng=fng*C_ng/tlive_ng
E_nr_eVee=NRtoER(E_nr,Y,V,eps)
E_ng_eVee=NRtoER(E_ng,Y,V,eps)
#Use this correspondence to map back to NR
#But need to condition it outside of the spline region.
#Just extrapolate with linear from each end
xx=NRtoER(E_nrs[cFit],Y,V,eps)
yy=E_nrs[cFit]
ERtoNR_fCS=CubicSpline(xx,yy,extrapolate=True)
pf_low=np.poly1d(np.polyfit([0,xx[0]], [0,yy[0]], 1))
pf_hi=np.poly1d(np.polyfit(xx[-10:], yy[-10:], 1))
ERtoNR_fcombo = lambda E: (E<xx[0])*pf_low(E) + (E>=xx[0])*(E<=xx[-1])*ERtoNR_fCS(E) + (E>xx[-1])*pf_hi(E)
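#One smearing iteration: apply the detector resolution in eVee, map the smeared NR/(n,gamma)
#energies back to Enr with the conditioned inverse (spline inside the fitted range, linear
#extrapolation outside), then re-run the extraction on the smeared samples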
E_er_sm=spec.getSmeared(E_er,seed=None,F=F)
E_er_sm[E_er_sm<0]=0
E_nr_sm=ERtoNR_fcombo(spec.getSmeared(E_nr_eVee,seed=None,F=F))
E_ng_sm=ERtoNR_fcombo(spec.getSmeared(E_ng_eVee,seed=None,F=F))
E_nrs,Ys,errors=extract_Y_v2(E_er_sm, E_nr_sm, E_ng_sm, fer, fnr, fng, Y_max, Ebins)
cFit=(Ebin_ctr>50) & (E_nrs>0) & (np.isfinite(E_nrs)) & (np.insert(np.diff(E_nrs)>0,-1,True))
Y_fCS=CubicSpline(E_nrs[cFit],Ys[cFit],extrapolate=True)
Y_fit = lambda E: Y_conditioned(E,Y_fCS,E_nrs[cFit][0],Ys[cFit][0],E_nrs[-1],Ys[-1])
Y=Yield.Yield('User',[Y_fit])
E_nr_eVee=NRtoER(E_nr,Y,V,eps)
E_ng_eVee=NRtoER(E_ng,Y,V,eps)
#Use this correspondence to map back to NR
#But need to condition it outside of the spline region.
#Just extrapolate with linear from each end
xx=NRtoER(E_nrs[cFit],Y,V,eps)
yy=E_nrs[cFit]
ERtoNR_fCS=CubicSpline(xx,yy,extrapolate=True)
pf_low=np.poly1d(np.polyfit([0,xx[0]], [0,yy[0]], 1))
pf_hi=np.poly1d(np.polyfit(xx[-10:], yy[-10:], 1))
ERtoNR_fcombo = lambda E: (E<xx[0])*pf_low(E) + (E>=xx[0])*(E<=xx[-1])*ERtoNR_fCS(E) + (E>xx[-1])*pf_hi(E)
E_er_sm2=spec.getSmeared(E_er,seed=None,F=F)
E_nr_sm2=ERtoNR_fcombo(spec.getSmeared(E_nr_eVee,seed=None,F=F))
E_ng_sm2=ERtoNR_fcombo(spec.getSmeared(E_ng_eVee,seed=None,F=F))
E_nrs,Ys,errors=extract_Y_v2(E_er_sm2, E_nr_sm2, E_ng_sm2, fer, fnr, fng, Y_max, Ebins)
cFit=(Ebin_ctr>50) & (E_nrs>0) & (np.isfinite(E_nrs)) & (np.insert(np.diff(E_nrs)>0,-1,True))
Y_fCS=CubicSpline(E_nrs[cFit],Ys[cFit],extrapolate=True)
Y_fit = lambda E: Y_conditioned(E,Y_fCS,E_nrs[cFit][0],Ys[cFit][0],E_nrs[-1],Ys[-1])
Y=Yield.Yield('User',[Y_fit])
E_nr_eVee=NRtoER(E_nr,Y,V,eps)
E_ng_eVee=NRtoER(E_ng,Y,V,eps)
#Use this correspondence to map back to NR
#But need to condition it outside of the spline region.
#Just extrapolate with linear from each end
xx=NRtoER(E_nrs[cFit],Y,V,eps)
yy=E_nrs[cFit]
ERtoNR_fCS=CubicSpline(xx,yy,extrapolate=True)
pf_low=np.poly1d(np.polyfit([0,xx[0]], [0,yy[0]], 1))
pf_hi=np.poly1d(np.polyfit(xx[-10:], yy[-10:], 1))
ERtoNR_fcombo = lambda E: (E<xx[0])*pf_low(E) + (E>=xx[0])*(E<=xx[-1])*ERtoNR_fCS(E) + (E>xx[-1])*pf_hi(E)
E_er_sm3=spec.getSmeared(E_er,seed=None,F=F)
E_nr_sm3=ERtoNR_fcombo(spec.getSmeared(E_nr_eVee,seed=None,F=F))
E_ng_sm3=ERtoNR_fcombo(spec.getSmeared(E_ng_eVee,seed=None,F=F))
E_nrs,Ys,errors=extract_Y_v2(E_er_sm, E_nr_sm, E_ng_sm, fer, fnr, fng, Y_max, Ebins)
cFit=(Ebin_ctr>50) & (E_nrs>0) & (np.isfinite(E_nrs)) & (np.insert(np.diff(E_nrs)>0,-1,True))
Y_fCS=CubicSpline(E_nrs[cFit],Ys[cFit],extrapolate=True)
Y_fit = lambda E: Y_conditioned(E,Y_fCS,E_nrs[cFit][0],Ys[cFit][0],E_nrs[-1],Ys[-1])
Y=Yield.Yield('User',[Y_fit])
tlive_er=g4['ER']['tlive']
tlive_nr=g4['NR']['tlive']
tlive_ng=cap['tlive']
lY_max=np.linspace(0.2,0.3,5)
lfer=[]
lfnr=[]
lfng=[]
lE_nrs=[]
lYs=[]
lerrors=[]
for Y_max in tqdm(lY_max):
#Normalize so that ER+NR matches data near 2 keV
R0_meas=np.sum(R_meas[(Ebin_ctr>1.99e3)&(Ebin_ctr<2e3)])
R0_er=Nint(E_er,1.99e3,2e3)/g4['ER']['tlive']
R0_nr=Nint(E_nr,ERtoNR(1.99e3,Y_max,V,eps),ERtoNR(2e3,Y_max,V,eps))/g4['NR']['tlive']
fnr=6
fer=(R0_meas-fnr*R0_nr)/R0_er
fng=2#2.037
lfer.append(fer)
lfnr.append(fnr)
lfng.append(fng)
E_nrs,Ys,errors=extract_Y_wSmear_v2(E_er, E_nr, E_ng, fer, fnr, fng, Y_max=Y_max,
nIt=1, Ebins=np.linspace(0,2e3,201), seed=None)
#If binning is too small, will get some errors and things won't work.
#Probably in bkg_sub, but not exactly sure
lE_nrs.append(E_nrs)
lYs.append(Ys)
lerrors.append(errors)
lfer=np.array(lfer)
lfnr=np.array(lfnr)
lE_nrs=np.array(lE_nrs)
lYs=np.array(lYs)
lerrors=np.array(lerrors)
dosmear=True
seed=None
#Add other measurements from lit
for E_nrs,Ys,fer,fnr,fng in zip(lE_nrs,lYs,lfer,lfnr,lfng):
cFit=(Ebin_ctr>50) &(E_nrs>0) & (np.isfinite(E_nrs)) & (np.insert(np.diff(E_nrs)>0,-1,True))
Y_fCS=CubicSpline(E_nrs[cFit],Ys[cFit],extrapolate=True)
Y_fCS=lambda E: np.interp(E,E_nrs[cFit],Ys[cFit])
#Smear
Y_fit = lambda E: Y_conditioned_test(E,Y_fCS,E_nrs[cFit],Ys[cFit])
Y=Yield.Yield('User',[Y_fit])
if dosmear:
E_er_sm=spec.getSmeared(E_er,seed=seed)
E_er_sm[E_er_sm<0]=0
E_nr_eVee_sm=spec.getSmeared(NRtoER(E_nr,Y,V,eps),seed=seed)
E_nr_eVee_sm[E_nr_eVee_sm<0]=0
E_nr_sm=NRtoER(E_nr,Y,V,eps)
E_ng_eVee_sm=spec.getSmeared(NRtoER(E_ng,Y,V,eps),seed=seed)
E_ng_eVee_sm[E_ng_eVee_sm<0]=0
else:
E_er_sm=E_er
E_nr_eVee_sm=NRtoER(E_nr,Y,V,eps)
E_ng_eVee_sm=NRtoER(E_ng,Y,V,eps)
C_er,_=np.histogram(E_er_sm,bins=Ebins)
R_er=fer*C_er/tlive_er
C_nr,_=np.histogram(E_nr_eVee_sm,bins=Ebins)
R_nr=fnr*C_nr/tlive_nr
C_ng,_=np.histogram(E_ng_eVee_sm,bins=Ebins)
R_ng=fng*C_ng/tlive_ng
R_tot=R_er+R_nr+R_ng
chi=np.mean((((R_tot-R_meas)/((dR_meas[0]+dR_meas[1])/2))**2)[Ebin_ctr>50])
#lnIt=np.arange(11)
lnIt=[0,1,2,5,10,15,20,30]
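#Convergence check: repeat the extraction with an increasing number of smearing iterations
#at fixed scalings (Y_max=0.25, fnr=6, fng=4) to see where the curve stabilizes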
lY_max=[]
lfer=[]
lfnr=[]
lfng=[]
lE_nrs=[]
lYs=[]
lerrors=[]
for nIt in tqdm(lnIt):
Y_max=0.25
R0_meas=np.sum(R_meas[(Ebin_ctr>1.99e3)&(Ebin_ctr<2e3)])
R0_er=Nint(E_er,1.99e3,2e3)/g4['ER']['tlive']
R0_nr=Nint(E_nr,ERtoNR(1.99e3,Y_max,V,eps),ERtoNR(2e3,Y_max,V,eps))/g4['NR']['tlive']
lY_max.append(Y_max)
fnr=6
fer=(R0_meas-fnr*R0_nr)/R0_er
fng=4#2.037+0.41
lfer.append(fer)
lfnr.append(fnr)
lfng.append(fng)
E_nrs,Ys,errors=extract_Y_wSmear_v2(E_er, E_nr, E_ng, fer, fnr, fng, Y_max=Y_max,
nIt=nIt, Ebins=np.linspace(0,2e3,201), seed=None)
#If binning is too small, will get some errors and things won't work.
#Probably in bkg_sub, but not exactly sure
lE_nrs.append(E_nrs)
lYs.append(Ys)
lerrors.append(errors)
lfer=np.array(lfer)
lfnr=np.array(lfnr)
lE_nrs=np.array(lE_nrs)
lYs=np.array(lYs)
lerrors=np.array(lerrors)
dosmear=True
seed=None
#Add other measurements from lit
for E_nrs,Ys,fer,fnr,fng,nIt in zip(lE_nrs,lYs,lfer,lfnr,lfng,lnIt):
cFit=(Ebin_ctr>50) & (E_nrs>0) & (np.isfinite(E_nrs)) & (np.insert(np.diff(E_nrs)>0,-1,True))
Y_fCS=lambda E: np.interp(E,E_nrs[cFit],Ys[cFit])
#Smear
Y_fit = lambda E: Y_conditioned_test(E,Y_fCS,E_nrs[cFit],Ys[cFit])
Y=Yield.Yield('User',[Y_fit])
if nIt>0:
E_er_sm=spec.getSmeared(E_er,seed=seed)
E_er_sm[E_er_sm<0]=0
E_nr_eVee_sm=spec.getSmeared(NRtoER(E_nr,Y,V,eps),seed=seed)
E_nr_eVee_sm[E_nr_eVee_sm<0]=0
E_nr_sm=NRtoER(E_nr,Y,V,eps)
E_ng_eVee_sm=spec.getSmeared(NRtoER(E_ng,Y,V,eps),seed=seed)
E_ng_eVee_sm[E_ng_eVee_sm<0]=0
else:
E_er_sm=E_er
E_nr_eVee_sm=NRtoER(E_nr,Y,V,eps)
E_ng_eVee_sm=NRtoER(E_ng,Y,V,eps)
C_er,_=np.histogram(E_er_sm,bins=Ebins)
R_er=fer*C_er/tlive_er
C_nr,_=np.histogram(E_nr_eVee_sm,bins=Ebins)
R_nr=fnr*C_nr/tlive_nr
C_ng,_=np.histogram(E_ng_eVee_sm,bins=Ebins)
R_ng=fng*C_ng/tlive_ng
R_tot=R_er+R_nr+R_ng
chi=np.mean((((R_tot-R_meas)/((dR_meas[0]+dR_meas[1])/2))**2)[Ebin_ctr>50])
E_nrs=lE_nrs[4]
Ys=lYs[4]
cFit=(Ebin_ctr>50) & (E_nrs>0) & (np.isfinite(E_nrs)) & (np.insert(np.diff(E_nrs)>0,-1,True))
Y_fCS=CubicSpline(E_nrs[cFit],Ys[cFit],extrapolate=True)
Y_fCS=lambda E: np.interp(E,E_nrs[cFit],Ys[cFit])
#Smear
Y_fit = lambda E: Y_conditioned_test(E,Y_fCS,E_nrs[cFit],Ys[cFit])
Y=Yield.Yield('User',[Y_fit])
lY_max=np.concatenate((np.linspace(0.2,0.3,5),np.linspace(0.2,0.3,5)))
lfer=[]
lfnr=[]
lfng=np.concatenate(((2.037+0.408)*np.ones(5),(2.037-0.408)*np.ones(5)))
lE_nrs=[]
lYs=[]
lerrors=[]
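#Scan Y_max from 0.2 to 0.3 with the (n,gamma) normalization at its +1 sigma and -1 sigma
#values (2.037+/-0.408); here a single scale factor, fixed by the 2 keV normalization,
#is used for both ER and NR (fer=fnr)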
for Y_max,fng in zip(tqdm(lY_max),lfng):
#Normalize near 2keV
R0_meas=np.sum(R_meas[(Ebin_ctr>1.99e3)&(Ebin_ctr<2e3)])
R0_er=Nint(E_er,1.99e3,2e3)/g4['ER']['tlive']
R0_nr=Nint(E_nr,ERtoNR(1.99e3,Y_max,V,eps),ERtoNR(2e3,Y_max,V,eps))/g4['NR']['tlive']
fer=(R0_meas)/(R0_er+R0_nr)
fnr=fer
lfer.append(fer)
lfnr.append(fnr)
E_nrs,Ys,errors=extract_Y_wSmear_v2(E_er, E_nr, E_ng, fer, fnr, fng, Y_max=Y_max,
nIt=1, Ebins=np.linspace(0,2e3,201), seed=0)
#If binning is too small, will get some errors and things won't work.
#Probably in bkg_sub, but not exactly sure
lE_nrs.append(E_nrs)
lYs.append(Ys)
lerrors.append(errors)
lfer=np.array(lfer)
lfnr=np.array(lfnr)
lE_nrs=np.array(lE_nrs)
lYs=np.array(lYs)
lerrors=np.array(lerrors)
dosmear=True
seed=0
#Add other measurements from lit
N=len(lE_nrs)
for i in range(int(N/2)):
cFit1=(Ebin_ctr>50) &(lE_nrs[i]>0) & (np.isfinite(lE_nrs[i])) & (np.insert(np.diff(lE_nrs[i])>0,-1,True))
E_nrs1=lE_nrs[i][cFit1]
Ys1=lYs[i][cFit1]
Y_fCS1=CubicSpline(E_nrs1,Ys1,extrapolate=True)
cFit2=(Ebin_ctr>50) &(lE_nrs[i+int(N/2)]>0) & (np.isfinite(lE_nrs[i+int(N/2)])) & (np.insert(np.diff(lE_nrs[i+int(N/2)])>0,-1,True))
E_nrs2=lE_nrs[i+int(N/2)][cFit2]
Ys2=lYs[i+int(N/2)][cFit2]
Y_fCS2=CubicSpline(E_nrs2,Ys2,extrapolate=True)
#Smear
Y_fit1 = lambda E: Y_conditioned(E,Y_fCS1,E_nrs1[0],Ys1[0],E_nrs1[-1],Ys1[-1])
Y1=Yield.Yield('User',[Y_fit1])
Y_fit2 = lambda E: Y_conditioned(E,Y_fCS2,E_nrs2[0],Ys2[0],E_nrs2[-1],Ys2[-1])
Y2=Yield.Yield('User',[Y_fit2])
if dosmear:
E_er_sm=spec.getSmeared(E_er,seed=seed)
E_er_sm[E_er_sm<0]=0
E_nr_eVee_sm=spec.getSmeared(NRtoER(E_nr,Y1,V,eps),seed=seed)
E_nr_eVee_sm[E_nr_eVee_sm<0]=0
E_nr_sm=NRtoER(E_nr,Y1,V,eps)
E_ng_eVee_sm=spec.getSmeared(NRtoER(E_ng,Y1,V,eps),seed=seed)
E_ng_eVee_sm[E_ng_eVee_sm<0]=0
else:
E_er_sm=E_er
E_nr_eVee_sm=NRtoER(E_nr,Y1,V,eps)
E_ng_eVee_sm=NRtoER(E_ng,Y1,V,eps)
C_er,_=np.histogram(E_er_sm,bins=Ebins)
R_er=lfer[i]*C_er/tlive_er
C_nr,_=np.histogram(E_nr_eVee_sm,bins=Ebins)
R_nr=lfnr[i]*C_nr/tlive_nr
C_ng,_=np.histogram(E_ng_eVee_sm,bins=Ebins)
R_ng=lfng[i]*C_ng/tlive_ng
R_tot=R_er+R_nr+R_ng
chi=np.mean((((R_tot-R_meas)/((dR_meas[0]+dR_meas[1])/2))**2)[Ebin_ctr>50])
izr=pt.get_old_Y_data()
Y_izr_up=CubicSpline(izr['Enr'],izr['Y'],extrapolate=True)
Y_fit = lambda E: Y_conditioned(E,Y_izr_up,izr['Enr'][0],(izr['Y'])[0],izr['Enr'][-1],(izr['Y'])[-1])
Y=Yield.Yield('User',[Y_fit])
xx=np.linspace(0,30e3,1000)
save = False
try:
with open( "data/cdf_results.p", "rb" ) as file:
results = pickle.load( file )
lY_max=results['lY_max']
lfer=results['lfer']
lfnr=results['lfnr']
lfng=results['lfng']
lE_nrs=results['lE_nrs']
lYs=results['lYs']
lerrors=results['lerrors']
lY_max=np.array(lY_max)
lfer=np.array(lfer)
lfnr=np.array(lfnr)
lfng=np.array(lfng)
lE_nrs=np.array(lE_nrs)
lYs=np.array(lYs)
lerrors=np.array(lerrors)
except:
save = True
#Let's scan through a bunch of scalings and then only retain those which are consistent with Izr
if save:
lY_max=[]
lfer=[]
lfnr=[]
lfng=[]
lE_nrs=[]
lYs=[]
lerrors=[]
for Y_max in tqdm(np.linspace(0.25,0.29,20)):
for fnr in np.linspace(4,9,20):
for fng in [0,2.037+0.408,2.037-0.408]:
lY_max.append(Y_max)
#Normalize near 2keV
R0_meas=np.sum(R_meas[(Ebin_ctr>1.99e3)&(Ebin_ctr<2e3)])
R0_er=Nint(E_er,1.99e3,2e3)/g4['ER']['tlive']
R0_nr=Nint(E_nr,ERtoNR(1.99e3,Y_max,V,eps),ERtoNR(2e3,Y_max,V,eps))/g4['NR']['tlive']
fer=(R0_meas-fnr*R0_nr)/R0_er
lfer.append(fer)
lfnr.append(fnr)
lfng.append(fng)
E_nrs,Ys,errors=extract_Y_wSmear_v2(E_er, E_nr, E_ng, fer, fnr, fng, Y_max=Y_max,
nIt=1, Ebins=np.linspace(0,2e3,201), seed=0, F=0.1161)
#If binning is too small, will get some errors and things won't work.
#Probably in bkg_sub, but not exactly sure
lE_nrs.append(E_nrs)
lYs.append(Ys)
lerrors.append(errors)
lY_max=np.array(lY_max)
lfer=np.array(lfer)
lfnr=np.array(lfnr)
lfng=np.array(lfng)
lE_nrs=np.array(lE_nrs)
lYs=np.array(lYs)
lerrors=np.array(lerrors)
results={'lY_max':lY_max, 'lfer':lfer, 'lfnr':lfnr, 'lfng':lfng, 'lE_nrs':lE_nrs, 'lYs':lYs, 'lerrors':lerrors}
with open( "data/cdf_results.p", "wb" ) as file:
pickle.dump( results, file )
dosmear=True
seed=0
Fthis=0.1161
#Add other measurements from lit
for E_nrs,Ys,fer,fnr,fng,good in zip(lE_nrs,lYs,lfer,lfnr,lfng,cgood):
if not good:
continue
cFit=(Ebin_ctr>50) &(E_nrs>0) & (np.isfinite(E_nrs)) & (np.insert(np.diff(E_nrs)>0,-1,True))
Y_fCS=CubicSpline(E_nrs[cFit],Ys[cFit],extrapolate=True)
if fng==0:
color='red'
else:
color='gray'
#Smear
Y_fit = lambda E: Y_conditioned(E,Y_fCS,E_nrs[cFit][0],Ys[cFit][0],E_nrs[-1],Ys[-1])
Y=Yield.Yield('User',[Y_fit])
if dosmear:
E_er_sm=spec.getSmeared(E_er,seed=seed,F=Fthis)
E_er_sm[E_er_sm<0]=0
E_nr_eVee_sm=spec.getSmeared(NRtoER(E_nr,Y,V,eps),seed=seed,F=Fthis)
E_nr_eVee_sm[E_nr_eVee_sm<0]=0
E_ng_eVee_sm=spec.getSmeared(NRtoER(E_ng,Y,V,eps),seed=seed,F=Fthis)
E_ng_eVee_sm[E_ng_eVee_sm<0]=0
else:
E_er_sm=E_er
E_nr_eVee_sm=NRtoER(E_nr,Y,V,eps)
E_ng_eVee_sm=NRtoER(E_ng,Y,V,eps)
C_er,_=np.histogram(E_er_sm,bins=Ebins)
R_er=fer*C_er/tlive_er
C_nr,_=np.histogram(E_nr_eVee_sm,bins=Ebins)
R_nr=fnr*C_nr/tlive_nr
C_ng,_=np.histogram(E_ng_eVee_sm,bins=Ebins)
R_ng=fng*C_ng/tlive_ng
#Pick mins and maxes at a given energy
#This isn't quite right, since the envelope is not just from a single curve
ifng0=np.argwhere(cgood&(lfng==0))
ifng0_min=ifng0[np.argmin(Y_1keV[ifng0])][0]
ifng0_max=ifng0[np.argmax(Y_1keV[ifng0])][0]
ifng=np.argwhere(cgood&(lfng!=0))
ifng_min=ifng[np.argmin(Y_1keV[ifng])][0]
ifng_max=ifng[np.argmax(Y_1keV[ifng])][0]
dosmear=True
seed=0
Fthis=0.1161
#Add other measurements from lit
labels=[r'no (n,$\gamma$)',r'with (n,$\gamma$)']
colors=['red','gray']
for inds,label,color in zip([[ifng0_max,ifng0_min],[ifng_max,ifng_min]],labels,colors):
#for E_nrs,Ys,fer,fnr,fng,good in zip(lE_nrs,lYs,lfer,lfnr,lfng,cgood):
i=inds[0]
j=inds[1]
cFit1=(Ebin_ctr>50) &(lE_nrs[i]>0) & (np.isfinite(lE_nrs[i])) & (np.insert(np.diff(lE_nrs[i])>0,-1,True))
E_nrs1=lE_nrs[i][cFit1]
Ys1=lYs[i][cFit1]
Y_fCS1=CubicSpline(E_nrs1,Ys1,extrapolate=True)
cFit2=(Ebin_ctr>50) &(lE_nrs[j]>0) & (np.isfinite(lE_nrs[j])) & (np.insert(np.diff(lE_nrs[j])>0,-1,True))
E_nrs2=lE_nrs[j][cFit2]
Ys2=lYs[j][cFit2]
Y_fCS2=CubicSpline(E_nrs2,Ys2,extrapolate=True)
#Smear
Y_fit1 = lambda E: Y_conditioned(E,Y_fCS1,E_nrs1[0],Ys1[0],E_nrs1[-1],Ys1[-1])
Y1=Yield.Yield('User',[Y_fit1])
Y_fit2 = lambda E: Y_conditioned(E,Y_fCS2,E_nrs2[0],Ys2[0],E_nrs2[-1],Ys2[-1])
Y2=Yield.Yield('User',[Y_fit2])
if dosmear:
E_er_sm=spec.getSmeared(E_er,seed=seed,F=Fthis)
E_er_sm[E_er_sm<0]=0
E_nr_eVee_sm1=spec.getSmeared(NRtoER(E_nr,Y1,V,eps),seed=seed,F=Fthis)
E_nr_eVee_sm1[E_nr_eVee_sm1<0]=0
E_nr_eVee_sm2=spec.getSmeared(NRtoER(E_nr,Y2,V,eps),seed=seed,F=Fthis)
E_nr_eVee_sm2[E_nr_eVee_sm2<0]=0
E_ng_eVee_sm1=spec.getSmeared(NRtoER(E_ng,Y1,V,eps),seed=seed,F=Fthis)
E_ng_eVee_sm1[E_ng_eVee_sm1<0]=0
E_ng_eVee_sm2=spec.getSmeared(NRtoER(E_ng,Y2,V,eps),seed=seed,F=Fthis)
E_ng_eVee_sm2[E_ng_eVee_sm2<0]=0
else:
E_er_sm=E_er
E_nr_eVee_sm1=NRtoER(E_nr,Y1,V,eps)
E_nr_eVee_sm2=NRtoER(E_nr,Y2,V,eps)
E_ng_eVee_sm1=NRtoER(E_ng,Y1,V,eps)
E_ng_eVee_sm2=NRtoER(E_ng,Y2,V,eps)
C_er1,_=np.histogram(E_er_sm,bins=Ebins)
R_er1=lfer[i]*C_er1/tlive_er
C_er2,_=np.histogram(E_er_sm,bins=Ebins)
R_er2=lfer[j]*C_er2/tlive_er
C_nr1,_=np.histogram(E_nr_eVee_sm1,bins=Ebins)
R_nr1=lfnr[i]*C_nr1/tlive_nr
C_nr2,_=np.histogram(E_nr_eVee_sm2,bins=Ebins)
R_nr2=lfnr[j]*C_nr2/tlive_nr
C_ng1,_=np.histogram(E_ng_eVee_sm1,bins=Ebins)
R_ng1=lfng[i]*C_ng1/tlive_ng
C_ng2,_=np.histogram(E_ng_eVee_sm2,bins=Ebins)
R_ng2=lfng[j]*C_ng2/tlive_ng
cut=cgood&(lfng!=0)
ERenvData=getERminmax(lE_nrs[cut],lYs[cut],lfer[cut],lfnr[cut],lfng[cut])
cut=cgood&(lfng!=0)
cut=cgood&(lfng==0)
#Add other measurements from lit
ERenvData=getERminmax(lE_nrs[cut],lYs[cut],lfer[cut],lfnr[cut],lfng[cut])
#Extract yield curve using the integral method
#Treats each event as a single scatter of the total energy
#fer: ER livetime factor
#fnr: NR livetime factor
#fng: NG livetime factor
#Y_max: Yield value that corresponds to the highest bin edge of Ebins
#v3: Separate ER and NR Fanos. Also allow smeared energies to be negative
tlive_er=g4['ER']['tlive']
tlive_nr=g4['NR']['tlive']
tlive_ng=cap['tlive']
#Find those which are consistent with Izr
scanData['cgood']=[]
scanData['IzrChi']=[]
for i in range(scanData['N']):
Y=getYfitCond(scanData['lE_nrs'][i],scanData['lYs'][i])
cizr=izr['Enr']<scanData['lE_nrs'][i][-1]
scanData['IzrChi'].append(np.sum((((Y.calc(izr['Enr'])-izr['Y'])/izr['dY'])[cizr])**2))
scanData['cgood'].append(((np.abs(Y.calc(izr['Enr'])-izr['Y'])<1*izr['dY'])[cizr]).all())
scanData['cgood']=np.array(scanData['cgood'])
scanData['IzrChi']=np.array(scanData['IzrChi'])
"""
fig_w=9
fig,axs=subplots(1,2,figsize=(2*fig_w, fig_w*(.75)))
cut=scanData['cgood']&(scanData['lfng']==0)&(scanData['lFanoNR']==0.1161)
iPlot=0
#Best fit to Izr
iBest=np.argwhere(cut)[:,0][np.argmin(scanData['IzrChi'][cut])]
labels=[r'no (n,$\gamma$)',r'with (n,$\gamma$)']
colors=['gray','green']
#Add other measurements from lit
pt.plotOldYs_noSat(axs[0],fmt='o',markersize=6)
axs[0].fill(*getEYenvelope(scanData['lE_nrs'][cut],scanData['lYs'][cut],eVeeMin=70),
colors[iPlot],alpha=0.5,label=labels[iPlot])
axs[0].plot(scanData['lE_nrs'][iBest][Ebin_ctr>70],scanData['lYs'][iBest][Ebin_ctr>70], colors[iPlot], linestyle='--')
Yiso = lambda Enr,Eee: Eee/Enr*(1+eps/V)-eps/V
axs[0].plot(np.logspace(-2,5,100),Yiso(np.logspace(-2,5,100),50),'--m')
axs[0].plot(np.logspace(-2,5,100),Yiso(np.logspace(-2,5,100),2e3),'--m')
axs[0].text(2e2,0.2,r'50 $eV_{ee}$',size=16,color='m',rotation=-72)
axs[0].text(1e4,0.15,r'2 $keV_{ee}$',size=16,color='m',rotation=-65)
axs[1].errorbar(Ebin_ctr[Ebin_ctr>50],R_meas[Ebin_ctr>50],(dR_meas.T[Ebin_ctr>50]).T,
ecolor='k', marker='o',markersize=6,color='k', linestyle='none',label='Measured',zorder=5)
axs[0].set_prop_cycle(None)#Reset color cycle
axs[1].set_prop_cycle(None)
ERenvData=getERminmax_v3(scanData,cut,nAvg=1)
ERmidData=getERminmax_v3(scanData,np.arange(len(scanData['lE_nrs']))==iBest,nAvg=5)#Cheat to get mid. min==max
axs[1].step(ERmidData['eVee'],ERmidData['NR']['min'],color='r',where='mid')
axs[1].step(ERmidData['eVee'],ERmidData['ER']['min'],color='k',where='mid')
axs[1].step(ERmidData['eVee'],ERmidData['NG']['min'],color='b',where='mid')
axs[1].step(ERmidData['eVee'],ERmidData['Total']['min'],color='g',where='mid')
axs[1].fill_between(ERenvData['eVee'],ERenvData['NR']['min'],ERenvData['NR']['max'],color='r',alpha=0.5,step='mid',label='NR')
axs[1].fill_between(ERenvData['eVee'],ERenvData['ER']['min'],ERenvData['ER']['max'],color='k',alpha=0.5,step='mid',label='ER')
axs[1].fill_between(ERenvData['eVee'],ERenvData['NG']['min'],ERenvData['NG']['max'],color='b',alpha=0.5,step='mid',label=r'(n,$\gamma)$')
axs[1].fill_between(ERenvData['eVee'],ERenvData['Total']['min'],ERenvData['Total']['max'],color='g',alpha=0.5,step='mid',label='Total')
axs[0].set_xlim(1e2,5e4);
axs[0].set_xscale('log')
axs[0].set_ylim(0,0.4)
axs[0].yaxis.set_major_locator(plt.MultipleLocator(0.1))
axs[0].set_xlabel('Energy [eVnr]')
axs[0].set_ylabel('Y')
axs[0].legend(loc='lower right',ncol=2,prop={'size': 16})
axs[1].axvline(50,linestyle='--',color='m')
axs[1].set_ylim(0,0.04)
axs[1].yaxis.set_major_locator(plt.MultipleLocator(0.01))
axs[1].set_xlim(0,1e3)
axs[1].set_xlabel('Energy [eVee]')
axs[1].set_ylabel('Rate [1/bin/s]')
axs[1].legend(loc='upper right', prop={'size': 16})
tight_layout()"""
#Extract yield curve using the integral method
#Treats each event as a single scatter of the total energy
#fer: ER livetime factor
#fnr: NR livetime factor
#fng: NG livetime factor
#Y_max: Yield value that corresponds to the highest bin edge of Ebins
#v3: Separate ER and NR Fanos. Also allow smeared energies to be negative
#v4: Add dynamic smearing iteration. Stop if smeared matches measured via some measure of closeness.
tlive_er=g4['ER']['tlive']
tlive_nr=g4['NR']['tlive']
tlive_ng=cap['tlive']
save = False
try:
with open( "data/R_Cal.p", "rb" ) as file:
temp = pickle.load( file )
R_er = temp['R_er']
R_nr = temp['R_nr']
R_ng = temp['R_ng']
R_tot = temp['R_tot']
R_max = temp['R_max']
R_min = temp['R_min']
except:
save = True
fig_w=9
seed=0
#Tried speeding this up by only including the last entry as intermediate ones aren't saved
#But that resulted in errors later on... :/
if save:
for i in range(scanData['N']):
E_nrs=scanData['lE_nrs'][i]
Ys=scanData['lYs'][i]
fer=scanData['lfer'][i]
fnr=scanData['lfnr'][i]
fng=scanData['lfng'][i]
FanoER=scanData['lFanoER'][i]
FanoNR=scanData['lFanoNR'][i]
Y=getYfitCond_v4(E_nrs,Ys)
E_nr_eVee=NRtoER(E_nr,Y,V,eps)
E_ng_eVee=NRtoER(E_ng,Y,V,eps)
        if scanData['lnItMax'][i]>0: #smear only when this entry used smearing iterations
E_er_sm=spec.getSmeared(E_er,seed=seed,F=FanoER)
E_nr_eVee_sm=spec.getSmeared(NRtoER(E_nr,Y,V,eps),seed=seed,F=FanoNR)
E_ng_eVee_sm=spec.getSmeared(NRtoER(E_ng,Y,V,eps),seed=seed,F=FanoNR)
else:
E_er_sm=E_er
E_nr_eVee_sm=NRtoER(E_nr,Y,V,eps)
E_ng_eVee_sm=NRtoER(E_ng,Y,V,eps)
C_er,_=np.histogram(E_er_sm,bins=Ebins)
R_er=fer*C_er/tlive_er
C_nr,_=np.histogram(E_nr_eVee_sm,bins=Ebins)
R_nr=fnr*C_nr/tlive_nr
C_ng,_=np.histogram(E_ng_eVee_sm,bins=Ebins)
R_ng=fng*C_ng/tlive_ng
R_tot=R_er+R_nr+R_ng
R_max=R_meas[Ebin_ctr>50]+1*dR_meas[0][Ebin_ctr>50]
R_min=R_meas[Ebin_ctr>50]-1*dR_meas[1][Ebin_ctr>50]
with open( "data/R_Cal.p" , "wb" ) as file:
temp = {'R_er':R_er, 'R_nr':R_nr, 'R_ng':R_ng, 'R_tot':R_tot, 'R_max':R_max, 'R_min': R_min}
pickle.dump( temp, file )
#Find those which are consistent with Izr
if save:
scanData['cgood']=[]
scanData['IzrChi']=[]
scanData['Y1keV']=[]
    for i in range(scanData['N']):
Y=getYfitCond_v4(scanData['lE_nrs'][i],scanData['lYs'][i])
cizr=izr['Enr']<scanData['lE_nrs'][i][-1]
scanData['Y1keV'].append(Y.calc(np.array([1e3])))
scanData['IzrChi'].append(np.sum((((Y.calc(izr['Enr'])-izr['Y'])/izr['dY'])[cizr])**2))
scanData['cgood'].append(((np.abs(Y.calc(izr['Enr'])-izr['Y'])<1*izr['dY'])[cizr]).all())
scanData['cgood']=np.array(scanData['cgood'])
scanData['IzrChi']=np.array(scanData['IzrChi'])
scanData['Y1keV']=np.array(scanData['Y1keV'])
with open( "data/intmeth_scan_v6.p", "wb" ) as file:
pickle.dump( scanData, file )
save = False
try:
with open( "data/collect.p", "rb") as file:
temp = pickle.load(file)
EYenvelopes = temp['EYenvelopes']
ERenvData = temp['ERenvData']
ERmidData = temp['ERmidData']
iBest = temp['iBest']
cut_noNG = temp['cut_noNG']
mask = temp['mask']
except:
save = True
#Collect the things we want to plot since it can take a while
#save = True
if save:
EYenvelopes=[]
ERenvData=[]
ERmidData=[]
iBest=[]
mask=np.zeros(len(cut),dtype=bool)
mask[:]=True
#No NG
cut_noNG=(scanData['cgood'])&(scanData['lfng']==0)&(scanData['lFanoNR']==0.1161)&(scanData['liIt']<=3)
    cut=cut_noNG&mask
iBest=np.argwhere(cut)[:,0][np.argmin(scanData['IzrChi'][cut])]#Best fit to Izr
EYenvelopes.append(getEYenvelope_v4(scanData['lE_nrs'][cut],scanData['lYs'][cut],eVeeMin=50))
#This part is slow, please be patient
ERenvData.append(getERminmax_v4(scanData,cut,nAvg=5))
#Cheat to get mid. min==max
ERmidData.append(getERminmax_v4(scanData,np.arange(len(scanData['lE_nrs']))==iBest,nAvg=5))
#With NG
cut_wNG=(scanData['cgood'])&(scanData['lfng']!=0)&(scanData['lFanoNR']==0.1161)&(scanData['liIt']<=3)
cut=cut_wNG&mask
iBest=np.argwhere(cut)[:,0][np.argmin(scanData['IzrChi'][cut])]#Best fit to Izr
EYenvelopes.append(getEYenvelope_v4(scanData['lE_nrs'][cut],scanData['lYs'][cut],eVeeMin=50))
ERenvData.append(getERminmax_v4(scanData,cut,nAvg=5))
#Cheat to get mid. min==max
ERmidData.append(getERminmax_v4(scanData,np.arange(len(scanData['lE_nrs']))==iBest,nAvg=5))
with open( "data/collect.p", "wb") as file:
temp = {'EYenvelopes':EYenvelopes, 'ERenvData':ERenvData, 'ERmidData':ERmidData, 'iBest':iBest, 'cut_noNG':cut_noNG, 'mask':mask}
pickle.dump( temp, file )
iBest=[]
cut_noNG=(scanData['cgood'])&(scanData['lfng']==0)&(scanData['lFanoNR']==0.1161)&(scanData['liIt']<=3)
print(len(cut_noNG))
print(len(mask))
cut=cut_noNG&mask
iBest.append(np.argwhere(cut)[:,0][np.argmin(scanData['IzrChi'][cut])])
cut_wNG=(scanData['cgood'])&(scanData['lfng']!=0)&(scanData['lFanoNR']==0.1161)&(scanData['liIt']<=3)
cut=cut_wNG&mask
iBest.append(np.argwhere(cut)[:,0][np.argmin(scanData['IzrChi'][cut])])
fig_w=9
fig,axs=subplots(1,2,figsize=(2*fig_w, fig_w*(.75)))
iPlot=1
if iPlot==0:
cut=cut_noNG
else:
cut=cut_wNG
labels=[r'no (n,$\gamma$)',r'with (n,$\gamma$)']
colors=['gray','green']
#Add other measurements from lit
pt.plotOldYs(axs[0],datasets=['chav','izr','dough','gerb','zech','agnese'],
labels=['Chavarria','Izraelevitch','Dougherty','Gerbier','Zecher','Agnese'],
fmt='o',markersize=6)
axs[0].fill(*EYenvelopes[iPlot],colors[iPlot],alpha=0.5,label=labels[iPlot])
axs[0].plot(scanData['lE_nrs'][iBest[iPlot]][Ebin_ctr>50],scanData['lYs'][iBest[iPlot]][Ebin_ctr>50], colors[iPlot],linestyle='--')
axs[1].errorbar(Ebin_ctr[Ebin_ctr>50],R_meas[Ebin_ctr>50],(dR_meas.T[Ebin_ctr>50]).T,
ecolor='k', marker='o',markersize=6,color='k', linestyle='none',label='Measured',zorder=5)
axs[0].set_prop_cycle(None)#Reset color cycle
axs[1].set_prop_cycle(None)
axs[1].step(ERmidData[iPlot]['eVee'],ERmidData[iPlot]['NR']['min'],color='r',where='mid')
axs[1].step(ERmidData[iPlot]['eVee'],ERmidData[iPlot]['ER']['min'],color='k',where='mid')
axs[1].step(ERmidData[iPlot]['eVee'],ERmidData[iPlot]['NG']['min'],color='b',where='mid')
axs[1].step(ERmidData[iPlot]['eVee'],ERmidData[iPlot]['Total']['min'],color='g',where='mid')
axs[1].fill_between(ERenvData[iPlot]['eVee'],ERenvData[iPlot]['NR']['min'],ERenvData[iPlot]['NR']['max'],color='r',alpha=0.5,step='mid',label='NR')
axs[1].fill_between(ERenvData[iPlot]['eVee'],ERenvData[iPlot]['ER']['min'],ERenvData[iPlot]['ER']['max'],color='k',alpha=0.5,step='mid',label='ER')
axs[1].fill_between(ERenvData[iPlot]['eVee'],ERenvData[iPlot]['NG']['min'],ERenvData[iPlot]['NG']['max'],color='b',alpha=0.5,step='mid',label=r'(n,$\gamma)$')
axs[1].fill_between(ERenvData[iPlot]['eVee'],ERenvData[iPlot]['Total']['min'],ERenvData[iPlot]['Total']['max'],color='g',alpha=0.5,step='mid',label='Total')
#Analysis Range
axs[1].axvline(50,linestyle='--',color='m',label='Threshold')
Yiso = lambda Enr,Eee: Eee/Enr*(1+eps/V)-eps/V
axs[0].plot(np.logspace(-2,5,100),Yiso(np.logspace(-2,5,100),50),'--m')
axs[0].plot(np.logspace(-2,5,100),Yiso(np.logspace(-2,5,100),2e3),'--m')
axs[0].text(2e2,0.2,r'50 $eV_{ee}$',size=16,color='m',rotation=-72)
axs[0].text(1e4,0.15,r'2 $keV_{ee}$',size=16,color='m',rotation=-65)
#Axes
axs[0].set_xlim(1e2,5e4);
axs[0].set_xscale('log')
axs[0].set_ylim(0,0.4)
axs[0].yaxis.set_major_locator(plt.MultipleLocator(0.1))
axs[0].set_xlabel('Energy [eVnr]')
axs[0].set_ylabel('Y')
axs[0].legend(loc='lower right',ncol=2,prop={'size': 16})
axs[1].set_ylim(0,0.04)
axs[1].yaxis.set_major_locator(plt.MultipleLocator(0.01))
axs[1].set_xlim(0,1e3)
axs[1].set_xlabel('Energy [eVee]')
axs[1].set_ylabel('Rate [1/bin/s]')
axs[1].legend(loc='upper right', prop={'size': 16})
tight_layout()
###Output
<>:1187: DeprecationWarning: invalid escape sequence \g
<>:1187: DeprecationWarning: invalid escape sequence \g
<>:1187: DeprecationWarning: invalid escape sequence \g
<ipython-input-2-52c5c0a66521>:1187: DeprecationWarning: invalid escape sequence \g
tight_layout()"""
|
Section 2/2.1_ImportingTimeSeriesInPython.ipynb | ###Markdown
Importing Time Series in Python Importing and Inspecting Datasets We will go over how to import time series in Python into a pandas DataFrame. We will then inspect the DataFrame for missing values, rename the columns if necessary, convert the date column to datetime, and set the index for each DataFrame. We will then provide the descriptive (summary) statistics, plot the time series, display the frequency of the data with a histogram, and show the distribution of the data using a kernel density plot. Finally, we will save the DataFrames. We will look at the following datasets: 1. Google Trends -- search count of the term "vacation" 2. Retail furniture and home furnishings data in millions of dollars 3. Adjusted close stock price data for Bank of America 4. Adjusted close stock price data for J.P. Morgan 5. Monthly average temperature data in Fahrenheit for St. Louis
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plot
###Output
_____no_output_____
###Markdown
Example 1: Vacation dataset
###Code
# Read in data
# https://trends.google.com/trends/explore?date=all&geo=US&q=vacation, google trends, term search of the word "vacation", count data
# Date period range: January 2004 to October 2019, 15 years, data is monthly
vacation = pd.read_csv("~/Desktop/section_2/vacation.csv", skiprows=2)
vacation.head()
# Check for missing values
vacation.isna().sum()
# Fix column names
col_names = ['Month','Num_Search_Vacation']
vacation.columns = col_names
vacation.tail()
# Convert to datetime
from datetime import datetime
vacation['Month'] = pd.to_datetime(vacation['Month'])
# Set the 'Month' as index
vacation.set_index('Month', inplace=True)
vacation.head(3)
# Provide the descriptive (summary) statistics
# Generate descriptive statistics that summarize the central tendency, dispersion and
# shape of a dataset’s distribution, excluding NaN values.
# Percentile values (quantile 1, 2, and 3) on numeric values
vacation.describe()
# Calculate median value (middle value), which is the 50% percentile value, quantile 2
# Mean > median implies that data is right skewed
# Mean < median implies that data is left skewed
vacation.median()
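# Illustrative check of the skewness rule of thumb above: compare the mean and median
# directly and let pandas report the sample skewness.
print('mean  :', vacation['Num_Search_Vacation'].mean())
print('median:', vacation['Num_Search_Vacation'].median())
print('skew  :', vacation['Num_Search_Vacation'].skew())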
# Plot the time series of google searches of the word "vacation"
plot.style.use('seaborn-deep')
ax = vacation.plot(color='coral', grid=True, linewidth=3)
ax.set_xlabel('Year')
ax.set_ylabel('Number of Searches')
ax.set_title('Google Trend of the word "vacation"')
plot.show()
###Output
_____no_output_____
###Markdown
Visually inspecting the time series above, we can see that it trends downward and then stabilizes around 2013. There are also periodic patterns or cycles, with the low points in the search for the word "vacation" mostly occurring in October of each year, though occasionally in November. There is a notable spike in June 2015, with 75 counts of the search term "vacation". The grid lines help us to see that the pattern repeats every year.
###Code
# Check options for fonts, lines, styles
print(plot.style.available)
# Plot histogram (frequency of counts), change num of bins to see different plots
vacation.plot(kind='hist', bins=30, color='pink', grid=True)
# Calculate kernel density plot
# A density plot shows the distribution of the data over a continuous interval.
# Kernel density plot smoothes out the noise in time series data.
# The peaks of a density plot help display where values are concentrated over the interval.
# A Kernel density plot is a better way to display the distribution because it's not affected by
# the number of bins used (each bar used in a typical histogram).
vacation.plot(kind='density', color="red", grid=True, linewidth=3, fontsize=10)
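# The smoothness of the curve above depends on the kernel bandwidth; bw_method is
# forwarded to scipy's gaussian_kde, and the value below is only illustrative.
vacation.plot(kind='density', bw_method=0.3, color='blue', grid=True, linewidth=2)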
# saving the dataframe
vacation.to_csv('df_vacation.csv')
###Output
_____no_output_____
###Markdown
Example 2: Furniture dataset
###Code
# Source https://fred.stlouisfed.org/series/RSFHFSN
# Advance Retail Sales: Furniture and Home Furnishings Stores
# Units are in Millions of Dollars, not seasonally adjusted, price
# Date period range is 01/01/1992 to 07/01/2019, monthly data
# Read in data
# Advance Retail Sales: Furniture and Home Furnishings Stores
furniture = pd.read_csv("/Users/karenyang/Desktop/section_2/furniture.csv")
furniture.head()
# Rename columns for ease of reference
col_names = ['Month', 'Millions of Dollars']
furniture.columns = col_names
furniture.head()
# Check for any null values
furniture.isna().sum()
from datetime import datetime
# Convert the Date column to datetime, notice data is in months
furniture['Month'] = pd.to_datetime(furniture['Month'])
# Set index, use inplace=True
furniture.set_index('Month', inplace=True)
furniture.head()
# Obtain the descriptive (summary) statistics
furniture.describe()
###Output
_____no_output_____
###Markdown
Notice that the mean is 7553.8 and the median is 7651.0. The maximum value in the dataset is 11297.0 and the minimum value is 3846.0.
###Code
# Plot
plot.style.use('Solarize_Light2')
ax = furniture.plot(color='blue', grid=False, figsize=(8,5))
ax.set_xlabel('Year')
ax.set_ylabel('Millions of Dollars')
ax.set_title('Retail Sales of Furniture and Home Furnishings Stores')
# Add a brown vertical line
ax.axvline('2001-03-01', color='brown', linestyle='--')
ax.axvline('2001-10-01', color='brown', linestyle='--')
ax.axvline('2007-12-01', color='brown', linestyle='--')
ax.axvline('2009-06-01', color='brown', linestyle='--')
plot.show()
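# Alternative presentation (sketch): shade the spans between the same start/end dates
# with axvspan instead of drawing single vertical lines; the string dates are assumed
# to be converted the same way they are for axvline above.
ax2 = furniture.plot(color='blue', grid=False, figsize=(8,5))
ax2.axvspan('2001-03-01', '2001-10-01', color='brown', alpha=0.2)
ax2.axvspan('2007-12-01', '2009-06-01', color='brown', alpha=0.2)
plot.show()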
# Plot histogram (frequency of counts), change num of bins to see different plots
furniture.plot(kind='hist', bins=40, color='green', grid=True)
# frequency count of column 'Millions of Dollars'
#count = furniture['Millions of Dollars'].value_counts()
#print(count)
# Calculate kernel density plot
# A density plot shows the distribution of the data over a continuous interval.
# Kernel density plot smoothes out the noise in time series data.
# The peaks of a density plot help display where values are concentrated over the interval.
# A Kernel density plot is a better way to display the distribution because it's not affected by
# the number of bins used (each bar used in a typical histogram).
furniture.plot(kind='kde', color="purple", grid=False)
###Output
_____no_output_____
###Markdown
Price Adjustment
###Code
# https://fred.stlouisfed.org/series/CPIAUCSL
# Consumer Price Index: All Items in U.S. City Average, All Urban Consumers (CPIAUCSL)
# Index 1982-1984=100, Seasonally Adjusted
# Period is 1992-01-01 to 2019-07-01, monthly
# Unit is millions of dollars
# Updated Oct. 10, 2019
# Read in cpi data
# # https://www.minneapolisfed.org/community/financial-and-economic-education/cpi-calculator-information
cpi = pd.read_csv('~/Desktop/section_2/CPI.csv')
cpi.head()
cpi.tail()
###Output
_____no_output_____
###Markdown
July 2019 is the most current CPI data that we have.
###Code
# convert to python list
cpi_list = cpi['CPIAUCNS'].to_list()
# Create a new column in the dataframe
furniture['CPI'] = cpi_list
furniture.head()
# We will use July 2019 (last value in series)
july2019_cpi = 256.161
# Calculate the adjustment factor for each month: the July 2019 CPI divided by that month's CPI,
# so that multiplying nominal sales by this factor expresses them in July 2019 dollars
furniture['CPI_July19_rate'] = july2019_cpi/furniture['CPI']
furniture.head()
# Calculate the furniture sales (millions of dollars) in terms of July 2019 dollars
furniture['furniture_price_adjusted'] = furniture['Millions of Dollars'] * furniture['CPI_July19_rate']
furniture.head(10)
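# Generic form of this kind of adjustment, as a sketch: real = nominal * CPI_base / CPI_current.
# to_base_dollars is a hypothetical helper name, not part of the dataset or of pandas.
def to_base_dollars(nominal, cpi_current, cpi_base=july2019_cpi):
    return nominal * cpi_base / cpi_current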
# Create a new dataframe that specifies the column we want
furniture_adjusted = furniture[['furniture_price_adjusted']]
furniture_adjusted.head()
###Output
_____no_output_____
###Markdown
The sales figures are adjusted using the Consumer Price Index and expressed in July 2019 dollars.
###Code
# saving the dataframe, which has the sales adjusted to July 2019 prices
furniture_adjusted.to_csv('df_furniture.csv')
###Output
_____no_output_____
###Markdown
Example 3: Bank of America dataset
###Code
import pandas_datareader
from pandas_datareader import data
# Adjusted Close Stock Price data for Bank of America, source is Yahoo finance
# Only get the adjusted close.
# bac = data.DataReader("BAC",
# start='1990-1-1',
# end='2019-10-15',
# data_source='yahoo')['Adj Close']
# bac.plot(title=' Bank of America Adj. Closing Price', figsize=(10,5))
# bac.to_csv('bac.csv')
bac = pd.read_csv('~/Desktop/section_2/bac.csv')
# Check for missing values
bac.isna().sum()
bac.columns
# Convert to datetime
from datetime import datetime
bac['Date'] = pd.to_datetime(bac['Date'])
# Set index, use inplace=True
bac.set_index('Date', inplace=True)
bac.head()
# Provide the descriptive (summary) statistics
# Generate descriptive statistics that summarize the central tendency, dispersion and
# shape of a dataset’s distribution, excluding NaN values.
# Percentile values (quantile 1, 2, and 3) on numeric values
bac.describe()
# Plot the time series for Bank of America Adjusted Close Price
plot.style.use('_classic_test')
ax = bac.plot(color='red', grid=True, linewidth=3)
ax.set_xlabel('Year')
ax.set_ylabel('Adjusted Close Price')
ax.set_title('Bank of America Adjusted Close Price')
plot.show()
# Plot histogram (frequency of counts), change num of bins to see different plots
bac.plot(kind='hist', bins=50, color='violet', grid=True)
# Calculate kernel density plot
# A density plot shows the distribution of the data over a continuous interval.
# Kernel density plot smoothes out the noise in time series data.
# The peaks of a density plot help display where values are concentrated over the interval.
# A Kernel density plot is a better way to display the distribution because it's not affected by
# the number of bins used (each bar used in a typical histogram).
bac.plot(kind='density', color="red", grid=True, linewidth=3, fontsize=10)
# saving the dataframe
bac.to_csv('df_bankofamerica.csv')
###Output
_____no_output_____
###Markdown
Example 4: J.P. Morgan dataset
###Code
import pandas_datareader
from pandas_datareader import data
# Adjusted Close Stock Price data for J.P. Morgan, source is Yahoo finance
# jpm = data.DataReader("JPM",
# start='1990-1-1',
# end='2019-10-15',
# data_source='yahoo')['Adj Close']
# jpm.plot(title='J.P. Morgan Adj. Closing Price', figsize=(10,5))
# jpm.to_csv('jpm.csv')
# Read in data
jpm = pd.read_csv('~/Desktop/section_2/jpm.csv')
jpm.head()
# Check for missing values
jpm.isna().sum()
# Convert to datetime
from datetime import datetime
jpm['Date'] = pd.to_datetime(jpm['Date'])
# Set index, use inplace=True
jpm.set_index('Date', inplace=True)
jpm.head()
# Provide the descriptive (summary) statistics
# Generate descriptive statistics that summarize the central tendency, dispersion and
# shape of a dataset’s distribution, excluding NaN values.
# Percentile values (quantile 1, 2, and 3) on numeric values
jpm.describe()
# Plot the time series for J.P. Morgan Adjusted Close Price
plot.style.use('tableau-colorblind10')
ax = jpm.plot(color='blue', grid=True, linewidth=3)
ax.set_xlabel('Year')
ax.set_ylabel('Adjusted Close Price')
ax.set_title('J.P. Morgan Adjusted Close Price')
plot.show()
# Plot histogram (frequency of counts), change num of bins to see different plots
jpm.plot(kind='hist', bins=50, color='brown', grid=True)
# Calculate kernel density plot
# A density plot shows the distribution of the data over a continuous interval.
# Kernel density plot smoothes out the noise in time series data.
# The peaks of a density plot help display where values are concentrated over the interval.
# A Kernel density plot is a better way to display the distribution because it's not affected by
# the number of bins used (each bar used in a typical histogram).
jpm.plot(kind='density', color="red", grid=True, linewidth=3, fontsize=10)
# saving the dataframe
jpm.to_csv('df_jpmorgan.csv')
###Output
_____no_output_____
###Markdown
Example 5: Average Temperature dataset
###Code
# Source: National Centers for Environmental Information, National Oceanic and Atmospheric Administration
# https://www.ncdc.noaa.gov/cag/city/time-series/USW00013994/tavg/all/1/1930-2019?base_prd=true&begbaseyear=1930&endbaseyear=2000
# Average Temperature, all months, Saint Louis, Missouri, 1938-04 to 2019-01
# Anomaly: Departure from mean (relative to month) 1938-2000 base period, Missing value is -99.0
# https://www.ncdc.noaa.gov/cag/city/time-series/USW00013994-tavg-all-1-1930-2019.csv?base_prd=true&begbaseyear=1938&endbaseyear=2000
temp = pd.read_csv("~/Desktop/section_2/stl_temp.csv", skiprows=4, infer_datetime_format=True)
temp.head()
# Recall that missing value is set to -99.0 so isna().sum() will not help in this case
temp.isna().sum()
# Query to find missing value assigned to -99.0, determine the index position
Index_position = temp.query('Value == -99.0').index.tolist()
Index_position
temp['Value'].loc[898,]
temp['Value'].loc[900,]
new_val = (temp['Value'].loc[898,] + temp['Value'].loc[900,])/2
new_val
# Let's put NaN instead of -99.0. You will need to use numpy's nan
# At row 899 and column Value, set to NaN, using Numpy way
temp.at[899, 'Value'] = np.nan
temp['Value'].loc[899,]
# Now, check for Nan
temp.isna().sum()
# Now, let's use interpolation method to put a value in the Nan's place
temp = temp.interpolate(method ='linear', limit_direction ='forward')
# Check the value where previously nan (null value) originally coded as -99.0 was at
temp['Value'].loc[899,]
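# A more general alternative, shown on a copy so the cleaning above is left untouched:
# replace every -99.0 sentinel with NaN in one pass and then interpolate.
temp_alt = temp.copy().replace(-99.0, np.nan).interpolate(method='linear', limit_direction='forward')
temp_alt.isna().sum()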
# Convert to datetime format
temp['Date'] = pd.to_datetime(temp['Date'], format='%Y%m')
# Set the index as Date column
temp.set_index('Date', inplace=True)
temp.head()
temp.describe()
# Subset out the column of interest
temp = temp[['Value']]
temp.head()
# Plot the time series for Temperature
plot.style.use('fivethirtyeight')
ax = temp.plot(color='blue', grid=True, linewidth=1)
ax.set_xlabel('Monthly')
ax.set_ylabel('Temperature')
ax.set_title('Temperature Average of St. Louis in Fahrenheit')
plot.show()
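# Seasonal summary (sketch): the mean temperature for each calendar month, which makes
# the annual cycle visible in the plot above explicit.
monthly_avg = temp.groupby(temp.index.month)['Value'].mean()
print(monthly_avg)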
# Plot histogram (frequency of counts), change num of bins to see different plots
temp.plot(kind='hist', bins=55, color='orange', grid=True)
# Calculate kernel density plot
# A density plot shows the distribution of the data over a continuous interval.
# Kernel density plot smoothes out the noise in time series data.
# The peaks of a density plot help display where values are concentrated over the interval.
# A Kernel density plot is a better way to display the distribution because it's not affected by
# the number of bins used (each bar used in a typical histogram).
temp.plot(kind='density', color="red", grid=True, linewidth=3, fontsize=10)
# saving the dataframe
temp.to_csv('df_temp.csv')
# end
###Output
_____no_output_____ |
notebooks/PyTorch On-Demand Compute Grid - User Story.ipynb | ###Markdown
PyTorch On-Demand Compute Grid - User StoryThis notebook shows existing functionality which allows PyTorch users to train models on our decentralized compute grid.
###Code
import torch
import numpy as np
from grid.clients.torch import TorchClient
g = TorchClient(min_om_nodes=2,verbose=False)
a = torch.FloatTensor([1,2,3,4])
g.refresh(False,True)
b = a + a
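# The next calls follow PySyft-style remote-tensor semantics (an assumption about this
# client): send() ships the tensor to the chosen grid worker and leaves a pointer here,
# while get() later pulls the data back into the local process.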
a.send(g[3])
a.owner
a
a.get()
a.owner
a
###Output
_____no_output_____ |
sample-notebooks/Hello_worker_sample.ipynb | ###Markdown
Introduction to CI Template Basic Setup You need to change the __IP__ variable to point to your Docker host.
###Code
IP = '192.168.99.100'
BASE = 'http://' + IP + '/v1/' # You need to change this to your service server
import requests
import json
def jprint(data):
print(json.dumps(data, indent=4))
# Change this to your Docker container's IP
HEADERS = {'Content-Type': 'application/json'}
# Show list of available services:
services_url = BASE + 'services'
print(services_url)
res = requests.get(services_url)
jprint(res.json())
res = requests.get(BASE + 'services/hello-python')
jprint(res.json())
query = {
'message': "sample message 1"
}
res = requests.post(BASE + 'services/hello-python', data=json.dumps(query), headers=HEADERS)
jprint(res.json())
query = {
# MAPK Signaling Pathway network
'network_id': '99bea41b-6194-11e5-8ac5-06603eb7f303'
}
res = requests.post(BASE + 'services/ndex', data=json.dumps(query), headers=HEADERS)
jprint(res.json())
query = {
'gene_id': 'brca1_human'
}
res = requests.post(BASE + 'services/shell', data=json.dumps(query), headers=HEADERS)
jprint(res.json())
res = requests.get(BASE + 'queue')
job_id1 = res.json()[0]['job_id']
jprint(res.json())
result_url = BASE + 'queue/' + job_id1 + '/result'
print(result_url)
res = requests.get(result_url)
result_str = res.json()
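# Sketch of a simple polling loop for long-running jobs; it assumes the result endpoint
# returns a non-200 status until the job has finished, and the retry count and delay
# are arbitrary choices rather than part of the API.
import time
def wait_for_result(job_id, attempts=10, delay=2):
    for _ in range(attempts):
        r = requests.get(BASE + 'queue/' + job_id + '/result')
        if r.status_code == 200:
            return r.json()
        time.sleep(delay)
    return None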
# jprint(result_str)
# Deletion
res = requests.delete(BASE + 'queue/' + job_id1)
jprint(res.json())
res = requests.get(BASE + 'queue')
jprint(res.json())
# Delete All jobs and results
res = requests.delete(BASE + 'queue')
jprint(res.json())
###Output
{
"deletedJobs": [
"705f291b-5e6e-4135-abf9-2adcd37c95e7",
"29e8f2e6-2711-4fd3-9873-12dfb53d3b15",
"d1f168bc-8b4c-4c59-98ce-43f01415067c",
"db6b146d-30d8-447d-afaf-aae2efdda9bd",
"1d3f9a2a-752b-4a21-b521-a34cad1436be",
"722b9783-945d-4d3e-a528-e47a0f81c781",
"cdea76ac-2a73-40f0-bac7-8f40df58ed59",
"8b836b9d-b8c1-4181-99d8-fca21a06ec3c",
"6a8abf2d-6704-4b7e-9ba9-410f195f2d29",
"7cd23266-2fc4-46c1-b503-cddcc8312ad5"
]
}
|
figures/Figure4_infer_activities.ipynb | ###Markdown
Figure 4: Infer activities
###Code
from os import path
import seaborn as sns
import matplotlib.pyplot as plt
from pymodulon.compare import compare_ica
from pymodulon.io import load_json_model
from pymodulon.plotting import *
from pymodulon.example_data import load_bsub_data, load_ecoli_data
###Output
_____no_output_____
###Markdown
Set plotting style
###Code
sns.set_style('ticks')
plt.style.use('custom.mplstyle')
###Output
_____no_output_____
###Markdown
Load data
###Code
figure_dir = 'raw_figures'
data_dir = path.join('..','data','processed_data')
data_file = path.join(data_dir,'bsu.json.gz')
ica_data = load_json_model(data_file)
###Output
_____no_output_____
###Markdown
Load new data
###Code
new_data = pd.read_csv(path.join('..','data','raw_data','GSE141305','log_tpm.csv'),index_col=0)
multiqc = pd.read_csv(path.join('..','data','raw_data','GSE141305','multiqc_stats.tsv'),index_col=0,sep='\t')
metadata = pd.read_csv(path.join('..','data','raw_data','GSE141305','GSE141305_metadata.tsv'),index_col=0,sep='\t')
drop_samples = multiqc[multiqc['Assigned'] < 5000000].index
# Drop 3 samples with <5M reads
new_data = new_data.drop(drop_samples,axis=1)
metadata = metadata.drop(drop_samples,axis=0)
# Center to reference
reference = metadata[metadata.condition == 'LC'].index
new_data = new_data.sub(new_data[reference].mean(axis=1),axis=0)
drop_samples
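# Quick sanity check on the centering above: the reference (LC) samples should now
# average to approximately zero for every gene.
new_data[reference].mean(axis=1).abs().max()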
###Output
_____no_output_____
###Markdown
Infer activities
###Code
from pymodulon.util import infer_activities
new_A = infer_activities(ica_data,new_data)
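# Assumption about the internals: infer_activities projects the centered expression
# matrix onto the iModulon gene weights already stored in ica_data (conceptually
# solving X_new ≈ M · A_new for the new activity matrix A_new).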
new_A.columns = metadata.condition
# Only plot iModulon activities with high absolute activities
top_A = new_A[(abs(new_A) > 20).any(axis=1)]
# Average replicates
avg_top_A = top_A.T.reset_index().groupby('condition').mean().T
# Reorder data
avg_top_A = avg_top_A[['LC','6H','12H','1D','2D','3D','5D','7D','14D','1M']]
cg = sns.clustermap(avg_top_A,
center=0,
col_cluster=False,
cmap='RdBu_r',
figsize=(3.5,3),
cbar_pos=(0.18,0.05,.4,.025),
cbar_kws={'orientation':'horizontal'})
cg.ax_heatmap.set_xlabel('')
plt.savefig(path.join('raw_figures','Fig4_infer_activities.pdf'))
###Output
_____no_output_____ |
Chinese scene text recognition.ipynb | ###Markdown
Regular Competition: Chinese Scene Text Recognition, December 2021 -- 3rd-Place Technical Solution This project shares the 3rd-place technical solution for the December 2021 round of the [Chinese Scene Text Recognition regular competition](https://aistudio.baidu.com/aistudio/competition/detail/20). The final score was 84.22158. The project uses PaddleOCR-develop (the static-graph version). PaddleOCR consists of three stages: DB text detection, detection-box rectification, and CRNN text recognition; for this Chinese scene text recognition task only the third stage, the text recognizer, is needed. A CRNN text recognition model serves as the baseline; see [PaddleOCR-develop](https://github.com/PaddlePaddle/PaddleOCR/tree/develop) for more information about PaddleOCR. The rest of the notebook covers four parts: environment setup, data processing, model tuning, and training & prediction. 1. Environment Setup 1.1 Install PaddleOCR AI Studio already provides a paddlepaddle 1.8.4 and Python 3.7 environment, so you only need to install PaddleOCR following the [official installation guide](https://github.com/PaddlePaddle/PaddleOCR/blob/develop/doc/doc_ch/installation.md).
###Code
!cd ~/work && git clone -b develop https://gitee.com/paddlepaddle/PaddleOCR.git
!cd ~/work/PaddleOCR && pip install -r requirements.txt && python setup.py install
###Output
Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple
Collecting shapely (from -r requirements.txt (line 1))
[?25l Downloading https://pypi.tuna.tsinghua.edu.cn/packages/ae/20/33ce377bd24d122a4d54e22ae2c445b9b1be8240edb50040b40add950cd9/Shapely-1.8.0-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.whl (1.1MB)
[K |████████████████████████████████| 1.1MB 5.2MB/s eta 0:00:01
[?25hCollecting imgaug (from -r requirements.txt (line 2))
[?25l Downloading https://pypi.tuna.tsinghua.edu.cn/packages/66/b1/af3142c4a85cba6da9f4ebb5ff4e21e2616309552caca5e8acefe9840622/imgaug-0.4.0-py2.py3-none-any.whl (948kB)
[K |████████████████████████████████| 952kB 7.8MB/s eta 0:00:01
[?25hCollecting pyclipper (from -r requirements.txt (line 3))
[?25l Downloading https://pypi.tuna.tsinghua.edu.cn/packages/c5/fa/2c294127e4f88967149a68ad5b3e43636e94e3721109572f8f17ab15b772/pyclipper-1.3.0.post2-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.whl (603kB)
[K |████████████████████████████████| 604kB 10.1MB/s eta 0:00:01
[?25hCollecting lmdb (from -r requirements.txt (line 4))
[?25l Downloading https://pypi.tuna.tsinghua.edu.cn/packages/2e/dd/ada2fd91cd7832979069c556607903f274470c3d3d2274e0a848908272e8/lmdb-1.2.1-cp37-cp37m-manylinux2010_x86_64.whl (299kB)
[K |████████████████████████████████| 307kB 8.3MB/s eta 0:00:01
[?25hRequirement already satisfied: tqdm in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from -r requirements.txt (line 5)) (4.36.1)
Requirement already satisfied: numpy in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from -r requirements.txt (line 6)) (1.16.4)
Collecting opencv-python==4.2.0.32 (from -r requirements.txt (line 7))
[?25l Downloading https://pypi.tuna.tsinghua.edu.cn/packages/34/a3/403dbaef909fee9f9f6a8eaff51d44085a14e5bb1a1ff7257117d744986a/opencv_python-4.2.0.32-cp37-cp37m-manylinux1_x86_64.whl (28.2MB)
[K |████████████████████████████████| 28.2MB 5.2MB/s eta 0:00:01
[?25hRequirement already satisfied: scipy in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from imgaug->-r requirements.txt (line 2)) (1.3.0)
Requirement already satisfied: imageio in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from imgaug->-r requirements.txt (line 2)) (2.6.1)
Collecting scikit-image>=0.14.2 (from imgaug->-r requirements.txt (line 2))
[?25l Downloading https://pypi.tuna.tsinghua.edu.cn/packages/9a/44/8f8c7f9c9de7fde70587a656d7df7d056e6f05192a74491f7bc074a724d0/scikit_image-0.19.1-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (13.3MB)
[K |████████████████████████████████| 13.3MB 12.1MB/s eta 0:00:01
[?25hRequirement already satisfied: Pillow in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from imgaug->-r requirements.txt (line 2)) (7.1.2)
Requirement already satisfied: matplotlib in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from imgaug->-r requirements.txt (line 2)) (2.2.3)
Requirement already satisfied: six in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from imgaug->-r requirements.txt (line 2)) (1.15.0)
Requirement already satisfied: packaging>=20.0 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from scikit-image>=0.14.2->imgaug->-r requirements.txt (line 2)) (20.9)
Collecting PyWavelets>=1.1.1 (from scikit-image>=0.14.2->imgaug->-r requirements.txt (line 2))
[?25l Downloading https://pypi.tuna.tsinghua.edu.cn/packages/a1/9c/564511b6e1c4e1d835ed2d146670436036960d09339a8fa2921fe42dad08/PyWavelets-1.2.0-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.whl (6.1MB)
[K |████████████████████████████████| 6.2MB 4.3MB/s eta 0:00:01 |███████████████████▉ | 3.8MB 4.3MB/s eta 0:00:01
[?25hCollecting tifffile>=2019.7.26 (from scikit-image>=0.14.2->imgaug->-r requirements.txt (line 2))
[?25l Downloading https://pypi.tuna.tsinghua.edu.cn/packages/d8/38/85ae5ed77598ca90558c17a2f79ddaba33173b31cf8d8f545d34d9134f0d/tifffile-2021.11.2-py3-none-any.whl (178kB)
[K |████████████████████████████████| 184kB 10.7MB/s eta 0:00:01
[?25hRequirement already satisfied: networkx>=2.2 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from scikit-image>=0.14.2->imgaug->-r requirements.txt (line 2)) (2.4)
Requirement already satisfied: pytz in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from matplotlib->imgaug->-r requirements.txt (line 2)) (2019.3)
Requirement already satisfied: python-dateutil>=2.1 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from matplotlib->imgaug->-r requirements.txt (line 2)) (2.8.0)
Requirement already satisfied: kiwisolver>=1.0.1 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from matplotlib->imgaug->-r requirements.txt (line 2)) (1.1.0)
Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from matplotlib->imgaug->-r requirements.txt (line 2)) (2.4.2)
Requirement already satisfied: cycler>=0.10 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from matplotlib->imgaug->-r requirements.txt (line 2)) (0.10.0)
Requirement already satisfied: decorator>=4.3.0 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from networkx>=2.2->scikit-image>=0.14.2->imgaug->-r requirements.txt (line 2)) (4.4.0)
Requirement already satisfied: setuptools in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from kiwisolver>=1.0.1->matplotlib->imgaug->-r requirements.txt (line 2)) (41.4.0)
[31mERROR: pywavelets 1.2.0 has requirement numpy>=1.17.3, but you'll have numpy 1.16.4 which is incompatible.[0m
[31mERROR: scikit-image 0.19.1 has requirement numpy>=1.17.0, but you'll have numpy 1.16.4 which is incompatible.[0m
[31mERROR: scikit-image 0.19.1 has requirement scipy>=1.4.1, but you'll have scipy 1.3.0 which is incompatible.[0m
Installing collected packages: shapely, opencv-python, PyWavelets, tifffile, scikit-image, imgaug, pyclipper, lmdb
Found existing installation: opencv-python 4.1.1.26
Uninstalling opencv-python-4.1.1.26:
Successfully uninstalled opencv-python-4.1.1.26
Successfully installed PyWavelets-1.2.0 imgaug-0.4.0 lmdb-1.2.1 opencv-python-4.2.0.32 pyclipper-1.3.0.post2 scikit-image-0.19.1 shapely-1.8.0 tifffile-2021.11.2
running install
running bdist_egg
running egg_info
creating paddleocr.egg-info
writing paddleocr.egg-info/PKG-INFO
writing dependency_links to paddleocr.egg-info/dependency_links.txt
writing entry points to paddleocr.egg-info/entry_points.txt
writing requirements to paddleocr.egg-info/requires.txt
writing top-level names to paddleocr.egg-info/top_level.txt
writing manifest file 'paddleocr.egg-info/SOURCES.txt'
reading manifest file 'paddleocr.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no files found matching 'LICENSE.txt'
writing manifest file 'paddleocr.egg-info/SOURCES.txt'
installing library code to build/bdist.linux-x86_64/egg
running install_lib
running build_py
creating build
creating build/lib
creating build/lib/paddleocr
copying __init__.py -> build/lib/paddleocr
copying paddleocr.py -> build/lib/paddleocr
copying MANIFEST.in -> build/lib/paddleocr
copying README.md -> build/lib/paddleocr
creating build/lib/paddleocr/paddleocr.egg-info
copying paddleocr.egg-info/PKG-INFO -> build/lib/paddleocr/paddleocr.egg-info
copying paddleocr.egg-info/SOURCES.txt -> build/lib/paddleocr/paddleocr.egg-info
copying paddleocr.egg-info/dependency_links.txt -> build/lib/paddleocr/paddleocr.egg-info
copying paddleocr.egg-info/entry_points.txt -> build/lib/paddleocr/paddleocr.egg-info
copying paddleocr.egg-info/requires.txt -> build/lib/paddleocr/paddleocr.egg-info
copying paddleocr.egg-info/top_level.txt -> build/lib/paddleocr/paddleocr.egg-info
creating build/lib/paddleocr/ppocr
creating build/lib/paddleocr/ppocr/data
creating build/lib/paddleocr/ppocr/data/det
copying ppocr/data/det/__init__.py -> build/lib/paddleocr/ppocr/data/det
copying ppocr/data/det/data_augment.py -> build/lib/paddleocr/ppocr/data/det
copying ppocr/data/det/dataset_traversal.py -> build/lib/paddleocr/ppocr/data/det
copying ppocr/data/det/db_process.py -> build/lib/paddleocr/ppocr/data/det
copying ppocr/data/det/east_process.py -> build/lib/paddleocr/ppocr/data/det
copying ppocr/data/det/make_border_map.py -> build/lib/paddleocr/ppocr/data/det
copying ppocr/data/det/make_shrink_map.py -> build/lib/paddleocr/ppocr/data/det
copying ppocr/data/det/random_crop_data.py -> build/lib/paddleocr/ppocr/data/det
copying ppocr/data/det/sast_process.py -> build/lib/paddleocr/ppocr/data/det
creating build/lib/paddleocr/ppocr/postprocess
copying ppocr/postprocess/__init__.py -> build/lib/paddleocr/ppocr/postprocess
copying ppocr/postprocess/db_postprocess.py -> build/lib/paddleocr/ppocr/postprocess
copying ppocr/postprocess/east_postprocess.py -> build/lib/paddleocr/ppocr/postprocess
copying ppocr/postprocess/locality_aware_nms.py -> build/lib/paddleocr/ppocr/postprocess
copying ppocr/postprocess/sast_postprocess.py -> build/lib/paddleocr/ppocr/postprocess
creating build/lib/paddleocr/ppocr/postprocess/lanms
copying ppocr/postprocess/lanms/.gitignore -> build/lib/paddleocr/ppocr/postprocess/lanms
copying ppocr/postprocess/lanms/.ycm_extra_conf.py -> build/lib/paddleocr/ppocr/postprocess/lanms
copying ppocr/postprocess/lanms/__init__.py -> build/lib/paddleocr/ppocr/postprocess/lanms
copying ppocr/postprocess/lanms/__main__.py -> build/lib/paddleocr/ppocr/postprocess/lanms
copying ppocr/postprocess/lanms/adaptor.cpp -> build/lib/paddleocr/ppocr/postprocess/lanms
copying ppocr/postprocess/lanms/lanms.h -> build/lib/paddleocr/ppocr/postprocess/lanms
creating build/lib/paddleocr/ppocr/postprocess/lanms/include
creating build/lib/paddleocr/ppocr/postprocess/lanms/include/clipper
copying ppocr/postprocess/lanms/include/clipper/clipper.cpp -> build/lib/paddleocr/ppocr/postprocess/lanms/include/clipper
copying ppocr/postprocess/lanms/include/clipper/clipper.hpp -> build/lib/paddleocr/ppocr/postprocess/lanms/include/clipper
creating build/lib/paddleocr/ppocr/postprocess/lanms/include/pybind11
copying ppocr/postprocess/lanms/include/pybind11/attr.h -> build/lib/paddleocr/ppocr/postprocess/lanms/include/pybind11
copying ppocr/postprocess/lanms/include/pybind11/buffer_info.h -> build/lib/paddleocr/ppocr/postprocess/lanms/include/pybind11
copying ppocr/postprocess/lanms/include/pybind11/cast.h -> build/lib/paddleocr/ppocr/postprocess/lanms/include/pybind11
copying ppocr/postprocess/lanms/include/pybind11/chrono.h -> build/lib/paddleocr/ppocr/postprocess/lanms/include/pybind11
copying ppocr/postprocess/lanms/include/pybind11/class_support.h -> build/lib/paddleocr/ppocr/postprocess/lanms/include/pybind11
copying ppocr/postprocess/lanms/include/pybind11/common.h -> build/lib/paddleocr/ppocr/postprocess/lanms/include/pybind11
copying ppocr/postprocess/lanms/include/pybind11/complex.h -> build/lib/paddleocr/ppocr/postprocess/lanms/include/pybind11
copying ppocr/postprocess/lanms/include/pybind11/descr.h -> build/lib/paddleocr/ppocr/postprocess/lanms/include/pybind11
copying ppocr/postprocess/lanms/include/pybind11/eigen.h -> build/lib/paddleocr/ppocr/postprocess/lanms/include/pybind11
copying ppocr/postprocess/lanms/include/pybind11/embed.h -> build/lib/paddleocr/ppocr/postprocess/lanms/include/pybind11
copying ppocr/postprocess/lanms/include/pybind11/eval.h -> build/lib/paddleocr/ppocr/postprocess/lanms/include/pybind11
copying ppocr/postprocess/lanms/include/pybind11/functional.h -> build/lib/paddleocr/ppocr/postprocess/lanms/include/pybind11
copying ppocr/postprocess/lanms/include/pybind11/numpy.h -> build/lib/paddleocr/ppocr/postprocess/lanms/include/pybind11
copying ppocr/postprocess/lanms/include/pybind11/operators.h -> build/lib/paddleocr/ppocr/postprocess/lanms/include/pybind11
copying ppocr/postprocess/lanms/include/pybind11/options.h -> build/lib/paddleocr/ppocr/postprocess/lanms/include/pybind11
copying ppocr/postprocess/lanms/include/pybind11/pybind11.h -> build/lib/paddleocr/ppocr/postprocess/lanms/include/pybind11
copying ppocr/postprocess/lanms/include/pybind11/pytypes.h -> build/lib/paddleocr/ppocr/postprocess/lanms/include/pybind11
copying ppocr/postprocess/lanms/include/pybind11/stl.h -> build/lib/paddleocr/ppocr/postprocess/lanms/include/pybind11
copying ppocr/postprocess/lanms/include/pybind11/stl_bind.h -> build/lib/paddleocr/ppocr/postprocess/lanms/include/pybind11
copying ppocr/postprocess/lanms/include/pybind11/typeid.h -> build/lib/paddleocr/ppocr/postprocess/lanms/include/pybind11
creating build/lib/paddleocr/ppocr/utils
copying ppocr/utils/character.py -> build/lib/paddleocr/ppocr/utils
copying ppocr/utils/check.py -> build/lib/paddleocr/ppocr/utils
copying ppocr/utils/ic15_dict.txt -> build/lib/paddleocr/ppocr/utils
copying ppocr/utils/ppocr_keys_v1.txt -> build/lib/paddleocr/ppocr/utils
copying ppocr/utils/utility.py -> build/lib/paddleocr/ppocr/utils
creating build/lib/paddleocr/ppocr/utils/corpus
copying ppocr/utils/corpus/occitan_corpus.txt -> build/lib/paddleocr/ppocr/utils/corpus
creating build/lib/paddleocr/ppocr/utils/dict
copying ppocr/utils/dict/french_dict.txt -> build/lib/paddleocr/ppocr/utils/dict
copying ppocr/utils/dict/german_dict.txt -> build/lib/paddleocr/ppocr/utils/dict
copying ppocr/utils/dict/japan_dict.txt -> build/lib/paddleocr/ppocr/utils/dict
copying ppocr/utils/dict/korean_dict.txt -> build/lib/paddleocr/ppocr/utils/dict
copying ppocr/utils/dict/occitan_dict.txt -> build/lib/paddleocr/ppocr/utils/dict
creating build/lib/paddleocr/tools
creating build/lib/paddleocr/tools/infer
copying tools/infer/__init__.py -> build/lib/paddleocr/tools/infer
copying tools/infer/predict_cls.py -> build/lib/paddleocr/tools/infer
copying tools/infer/predict_det.py -> build/lib/paddleocr/tools/infer
copying tools/infer/predict_rec.py -> build/lib/paddleocr/tools/infer
copying tools/infer/predict_system.py -> build/lib/paddleocr/tools/infer
copying tools/infer/utility.py -> build/lib/paddleocr/tools/infer
creating build/bdist.linux-x86_64
creating build/bdist.linux-x86_64/egg
creating build/bdist.linux-x86_64/egg/paddleocr
copying build/lib/paddleocr/README.md -> build/bdist.linux-x86_64/egg/paddleocr
creating build/bdist.linux-x86_64/egg/paddleocr/ppocr
creating build/bdist.linux-x86_64/egg/paddleocr/ppocr/postprocess
copying build/lib/paddleocr/ppocr/postprocess/__init__.py -> build/bdist.linux-x86_64/egg/paddleocr/ppocr/postprocess
copying build/lib/paddleocr/ppocr/postprocess/sast_postprocess.py -> build/bdist.linux-x86_64/egg/paddleocr/ppocr/postprocess
copying build/lib/paddleocr/ppocr/postprocess/locality_aware_nms.py -> build/bdist.linux-x86_64/egg/paddleocr/ppocr/postprocess
copying build/lib/paddleocr/ppocr/postprocess/db_postprocess.py -> build/bdist.linux-x86_64/egg/paddleocr/ppocr/postprocess
copying build/lib/paddleocr/ppocr/postprocess/east_postprocess.py -> build/bdist.linux-x86_64/egg/paddleocr/ppocr/postprocess
creating build/bdist.linux-x86_64/egg/paddleocr/ppocr/postprocess/lanms
copying build/lib/paddleocr/ppocr/postprocess/lanms/__main__.py -> build/bdist.linux-x86_64/egg/paddleocr/ppocr/postprocess/lanms
creating build/bdist.linux-x86_64/egg/paddleocr/ppocr/postprocess/lanms/include
creating build/bdist.linux-x86_64/egg/paddleocr/ppocr/postprocess/lanms/include/clipper
copying build/lib/paddleocr/ppocr/postprocess/lanms/include/clipper/clipper.cpp -> build/bdist.linux-x86_64/egg/paddleocr/ppocr/postprocess/lanms/include/clipper
copying build/lib/paddleocr/ppocr/postprocess/lanms/include/clipper/clipper.hpp -> build/bdist.linux-x86_64/egg/paddleocr/ppocr/postprocess/lanms/include/clipper
creating build/bdist.linux-x86_64/egg/paddleocr/ppocr/postprocess/lanms/include/pybind11
copying build/lib/paddleocr/ppocr/postprocess/lanms/include/pybind11/options.h -> build/bdist.linux-x86_64/egg/paddleocr/ppocr/postprocess/lanms/include/pybind11
copying build/lib/paddleocr/ppocr/postprocess/lanms/include/pybind11/stl_bind.h -> build/bdist.linux-x86_64/egg/paddleocr/ppocr/postprocess/lanms/include/pybind11
copying build/lib/paddleocr/ppocr/postprocess/lanms/include/pybind11/typeid.h -> build/bdist.linux-x86_64/egg/paddleocr/ppocr/postprocess/lanms/include/pybind11
copying build/lib/paddleocr/ppocr/postprocess/lanms/include/pybind11/stl.h -> build/bdist.linux-x86_64/egg/paddleocr/ppocr/postprocess/lanms/include/pybind11
copying build/lib/paddleocr/ppocr/postprocess/lanms/include/pybind11/class_support.h -> build/bdist.linux-x86_64/egg/paddleocr/ppocr/postprocess/lanms/include/pybind11
copying build/lib/paddleocr/ppocr/postprocess/lanms/include/pybind11/pytypes.h -> build/bdist.linux-x86_64/egg/paddleocr/ppocr/postprocess/lanms/include/pybind11
copying build/lib/paddleocr/ppocr/postprocess/lanms/include/pybind11/eigen.h -> build/bdist.linux-x86_64/egg/paddleocr/ppocr/postprocess/lanms/include/pybind11
copying build/lib/paddleocr/ppocr/postprocess/lanms/include/pybind11/embed.h -> build/bdist.linux-x86_64/egg/paddleocr/ppocr/postprocess/lanms/include/pybind11
copying build/lib/paddleocr/ppocr/postprocess/lanms/include/pybind11/descr.h -> build/bdist.linux-x86_64/egg/paddleocr/ppocr/postprocess/lanms/include/pybind11
copying build/lib/paddleocr/ppocr/postprocess/lanms/include/pybind11/buffer_info.h -> build/bdist.linux-x86_64/egg/paddleocr/ppocr/postprocess/lanms/include/pybind11
copying build/lib/paddleocr/ppocr/postprocess/lanms/include/pybind11/pybind11.h -> build/bdist.linux-x86_64/egg/paddleocr/ppocr/postprocess/lanms/include/pybind11
copying build/lib/paddleocr/ppocr/postprocess/lanms/include/pybind11/functional.h -> build/bdist.linux-x86_64/egg/paddleocr/ppocr/postprocess/lanms/include/pybind11
copying build/lib/paddleocr/ppocr/postprocess/lanms/include/pybind11/attr.h -> build/bdist.linux-x86_64/egg/paddleocr/ppocr/postprocess/lanms/include/pybind11
copying build/lib/paddleocr/ppocr/postprocess/lanms/include/pybind11/common.h -> build/bdist.linux-x86_64/egg/paddleocr/ppocr/postprocess/lanms/include/pybind11
copying build/lib/paddleocr/ppocr/postprocess/lanms/include/pybind11/eval.h -> build/bdist.linux-x86_64/egg/paddleocr/ppocr/postprocess/lanms/include/pybind11
copying build/lib/paddleocr/ppocr/postprocess/lanms/include/pybind11/chrono.h -> build/bdist.linux-x86_64/egg/paddleocr/ppocr/postprocess/lanms/include/pybind11
copying build/lib/paddleocr/ppocr/postprocess/lanms/include/pybind11/operators.h -> build/bdist.linux-x86_64/egg/paddleocr/ppocr/postprocess/lanms/include/pybind11
copying build/lib/paddleocr/ppocr/postprocess/lanms/include/pybind11/numpy.h -> build/bdist.linux-x86_64/egg/paddleocr/ppocr/postprocess/lanms/include/pybind11
copying build/lib/paddleocr/ppocr/postprocess/lanms/include/pybind11/cast.h -> build/bdist.linux-x86_64/egg/paddleocr/ppocr/postprocess/lanms/include/pybind11
copying build/lib/paddleocr/ppocr/postprocess/lanms/include/pybind11/complex.h -> build/bdist.linux-x86_64/egg/paddleocr/ppocr/postprocess/lanms/include/pybind11
copying build/lib/paddleocr/ppocr/postprocess/lanms/__init__.py -> build/bdist.linux-x86_64/egg/paddleocr/ppocr/postprocess/lanms
copying build/lib/paddleocr/ppocr/postprocess/lanms/.ycm_extra_conf.py -> build/bdist.linux-x86_64/egg/paddleocr/ppocr/postprocess/lanms
copying build/lib/paddleocr/ppocr/postprocess/lanms/.gitignore -> build/bdist.linux-x86_64/egg/paddleocr/ppocr/postprocess/lanms
copying build/lib/paddleocr/ppocr/postprocess/lanms/lanms.h -> build/bdist.linux-x86_64/egg/paddleocr/ppocr/postprocess/lanms
copying build/lib/paddleocr/ppocr/postprocess/lanms/adaptor.cpp -> build/bdist.linux-x86_64/egg/paddleocr/ppocr/postprocess/lanms
creating build/bdist.linux-x86_64/egg/paddleocr/ppocr/utils
copying build/lib/paddleocr/ppocr/utils/ic15_dict.txt -> build/bdist.linux-x86_64/egg/paddleocr/ppocr/utils
copying build/lib/paddleocr/ppocr/utils/utility.py -> build/bdist.linux-x86_64/egg/paddleocr/ppocr/utils
copying build/lib/paddleocr/ppocr/utils/character.py -> build/bdist.linux-x86_64/egg/paddleocr/ppocr/utils
copying build/lib/paddleocr/ppocr/utils/ppocr_keys_v1.txt -> build/bdist.linux-x86_64/egg/paddleocr/ppocr/utils
copying build/lib/paddleocr/ppocr/utils/check.py -> build/bdist.linux-x86_64/egg/paddleocr/ppocr/utils
creating build/bdist.linux-x86_64/egg/paddleocr/ppocr/utils/dict
copying build/lib/paddleocr/ppocr/utils/dict/occitan_dict.txt -> build/bdist.linux-x86_64/egg/paddleocr/ppocr/utils/dict
copying build/lib/paddleocr/ppocr/utils/dict/french_dict.txt -> build/bdist.linux-x86_64/egg/paddleocr/ppocr/utils/dict
copying build/lib/paddleocr/ppocr/utils/dict/korean_dict.txt -> build/bdist.linux-x86_64/egg/paddleocr/ppocr/utils/dict
copying build/lib/paddleocr/ppocr/utils/dict/japan_dict.txt -> build/bdist.linux-x86_64/egg/paddleocr/ppocr/utils/dict
copying build/lib/paddleocr/ppocr/utils/dict/german_dict.txt -> build/bdist.linux-x86_64/egg/paddleocr/ppocr/utils/dict
creating build/bdist.linux-x86_64/egg/paddleocr/ppocr/utils/corpus
copying build/lib/paddleocr/ppocr/utils/corpus/occitan_corpus.txt -> build/bdist.linux-x86_64/egg/paddleocr/ppocr/utils/corpus
creating build/bdist.linux-x86_64/egg/paddleocr/ppocr/data
creating build/bdist.linux-x86_64/egg/paddleocr/ppocr/data/det
copying build/lib/paddleocr/ppocr/data/det/dataset_traversal.py -> build/bdist.linux-x86_64/egg/paddleocr/ppocr/data/det
copying build/lib/paddleocr/ppocr/data/det/data_augment.py -> build/bdist.linux-x86_64/egg/paddleocr/ppocr/data/det
copying build/lib/paddleocr/ppocr/data/det/sast_process.py -> build/bdist.linux-x86_64/egg/paddleocr/ppocr/data/det
copying build/lib/paddleocr/ppocr/data/det/__init__.py -> build/bdist.linux-x86_64/egg/paddleocr/ppocr/data/det
copying build/lib/paddleocr/ppocr/data/det/random_crop_data.py -> build/bdist.linux-x86_64/egg/paddleocr/ppocr/data/det
copying build/lib/paddleocr/ppocr/data/det/east_process.py -> build/bdist.linux-x86_64/egg/paddleocr/ppocr/data/det
copying build/lib/paddleocr/ppocr/data/det/make_border_map.py -> build/bdist.linux-x86_64/egg/paddleocr/ppocr/data/det
copying build/lib/paddleocr/ppocr/data/det/make_shrink_map.py -> build/bdist.linux-x86_64/egg/paddleocr/ppocr/data/det
copying build/lib/paddleocr/ppocr/data/det/db_process.py -> build/bdist.linux-x86_64/egg/paddleocr/ppocr/data/det
copying build/lib/paddleocr/__init__.py -> build/bdist.linux-x86_64/egg/paddleocr
copying build/lib/paddleocr/MANIFEST.in -> build/bdist.linux-x86_64/egg/paddleocr
creating build/bdist.linux-x86_64/egg/paddleocr/tools
creating build/bdist.linux-x86_64/egg/paddleocr/tools/infer
copying build/lib/paddleocr/tools/infer/utility.py -> build/bdist.linux-x86_64/egg/paddleocr/tools/infer
copying build/lib/paddleocr/tools/infer/__init__.py -> build/bdist.linux-x86_64/egg/paddleocr/tools/infer
copying build/lib/paddleocr/tools/infer/predict_det.py -> build/bdist.linux-x86_64/egg/paddleocr/tools/infer
copying build/lib/paddleocr/tools/infer/predict_system.py -> build/bdist.linux-x86_64/egg/paddleocr/tools/infer
copying build/lib/paddleocr/tools/infer/predict_rec.py -> build/bdist.linux-x86_64/egg/paddleocr/tools/infer
copying build/lib/paddleocr/tools/infer/predict_cls.py -> build/bdist.linux-x86_64/egg/paddleocr/tools/infer
copying build/lib/paddleocr/paddleocr.py -> build/bdist.linux-x86_64/egg/paddleocr
creating build/bdist.linux-x86_64/egg/paddleocr/paddleocr.egg-info
copying build/lib/paddleocr/paddleocr.egg-info/dependency_links.txt -> build/bdist.linux-x86_64/egg/paddleocr/paddleocr.egg-info
copying build/lib/paddleocr/paddleocr.egg-info/top_level.txt -> build/bdist.linux-x86_64/egg/paddleocr/paddleocr.egg-info
copying build/lib/paddleocr/paddleocr.egg-info/PKG-INFO -> build/bdist.linux-x86_64/egg/paddleocr/paddleocr.egg-info
copying build/lib/paddleocr/paddleocr.egg-info/entry_points.txt -> build/bdist.linux-x86_64/egg/paddleocr/paddleocr.egg-info
copying build/lib/paddleocr/paddleocr.egg-info/SOURCES.txt -> build/bdist.linux-x86_64/egg/paddleocr/paddleocr.egg-info
copying build/lib/paddleocr/paddleocr.egg-info/requires.txt -> build/bdist.linux-x86_64/egg/paddleocr/paddleocr.egg-info
byte-compiling build/bdist.linux-x86_64/egg/paddleocr/ppocr/postprocess/__init__.py to __init__.cpython-37.pyc
byte-compiling build/bdist.linux-x86_64/egg/paddleocr/ppocr/postprocess/sast_postprocess.py to sast_postprocess.cpython-37.pyc
byte-compiling build/bdist.linux-x86_64/egg/paddleocr/ppocr/postprocess/locality_aware_nms.py to locality_aware_nms.cpython-37.pyc
byte-compiling build/bdist.linux-x86_64/egg/paddleocr/ppocr/postprocess/db_postprocess.py to db_postprocess.cpython-37.pyc
byte-compiling build/bdist.linux-x86_64/egg/paddleocr/ppocr/postprocess/east_postprocess.py to east_postprocess.cpython-37.pyc
byte-compiling build/bdist.linux-x86_64/egg/paddleocr/ppocr/postprocess/lanms/__main__.py to __main__.cpython-37.pyc
byte-compiling build/bdist.linux-x86_64/egg/paddleocr/ppocr/postprocess/lanms/__init__.py to __init__.cpython-37.pyc
byte-compiling build/bdist.linux-x86_64/egg/paddleocr/ppocr/postprocess/lanms/.ycm_extra_conf.py to .ycm_extra_conf.cpython-37.pyc
byte-compiling build/bdist.linux-x86_64/egg/paddleocr/ppocr/utils/utility.py to utility.cpython-37.pyc
byte-compiling build/bdist.linux-x86_64/egg/paddleocr/ppocr/utils/character.py to character.cpython-37.pyc
byte-compiling build/bdist.linux-x86_64/egg/paddleocr/ppocr/utils/check.py to check.cpython-37.pyc
byte-compiling build/bdist.linux-x86_64/egg/paddleocr/ppocr/data/det/dataset_traversal.py to dataset_traversal.cpython-37.pyc
byte-compiling build/bdist.linux-x86_64/egg/paddleocr/ppocr/data/det/data_augment.py to data_augment.cpython-37.pyc
byte-compiling build/bdist.linux-x86_64/egg/paddleocr/ppocr/data/det/sast_process.py to sast_process.cpython-37.pyc
byte-compiling build/bdist.linux-x86_64/egg/paddleocr/ppocr/data/det/__init__.py to __init__.cpython-37.pyc
byte-compiling build/bdist.linux-x86_64/egg/paddleocr/ppocr/data/det/random_crop_data.py to random_crop_data.cpython-37.pyc
byte-compiling build/bdist.linux-x86_64/egg/paddleocr/ppocr/data/det/east_process.py to east_process.cpython-37.pyc
byte-compiling build/bdist.linux-x86_64/egg/paddleocr/ppocr/data/det/make_border_map.py to make_border_map.cpython-37.pyc
byte-compiling build/bdist.linux-x86_64/egg/paddleocr/ppocr/data/det/make_shrink_map.py to make_shrink_map.cpython-37.pyc
byte-compiling build/bdist.linux-x86_64/egg/paddleocr/ppocr/data/det/db_process.py to db_process.cpython-37.pyc
byte-compiling build/bdist.linux-x86_64/egg/paddleocr/__init__.py to __init__.cpython-37.pyc
byte-compiling build/bdist.linux-x86_64/egg/paddleocr/tools/infer/utility.py to utility.cpython-37.pyc
byte-compiling build/bdist.linux-x86_64/egg/paddleocr/tools/infer/__init__.py to __init__.cpython-37.pyc
byte-compiling build/bdist.linux-x86_64/egg/paddleocr/tools/infer/predict_det.py to predict_det.cpython-37.pyc
byte-compiling build/bdist.linux-x86_64/egg/paddleocr/tools/infer/predict_system.py to predict_system.cpython-37.pyc
byte-compiling build/bdist.linux-x86_64/egg/paddleocr/tools/infer/predict_rec.py to predict_rec.cpython-37.pyc
byte-compiling build/bdist.linux-x86_64/egg/paddleocr/tools/infer/predict_cls.py to predict_cls.cpython-37.pyc
byte-compiling build/bdist.linux-x86_64/egg/paddleocr/paddleocr.py to paddleocr.cpython-37.pyc
creating build/bdist.linux-x86_64/egg/EGG-INFO
copying paddleocr.egg-info/PKG-INFO -> build/bdist.linux-x86_64/egg/EGG-INFO
copying paddleocr.egg-info/SOURCES.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
copying paddleocr.egg-info/dependency_links.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
copying paddleocr.egg-info/entry_points.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
copying paddleocr.egg-info/requires.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
copying paddleocr.egg-info/top_level.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
zip_safe flag not set; analyzing archive contents...
paddleocr.__pycache__.paddleocr.cpython-37: module references __file__
paddleocr.ppocr.postprocess.__pycache__.east_postprocess.cpython-37: module references __file__
paddleocr.ppocr.postprocess.__pycache__.sast_postprocess.cpython-37: module references __file__
paddleocr.ppocr.postprocess.lanms.__pycache__..ycm_extra_conf.cpython-37: module references __file__
paddleocr.ppocr.postprocess.lanms.__pycache__.__init__.cpython-37: module references __file__
paddleocr.tools.infer.__pycache__.predict_cls.cpython-37: module references __file__
paddleocr.tools.infer.__pycache__.predict_det.cpython-37: module references __file__
paddleocr.tools.infer.__pycache__.predict_rec.cpython-37: module references __file__
paddleocr.tools.infer.__pycache__.predict_system.cpython-37: module references __file__
creating dist
creating 'dist/paddleocr-1.1.2-py3.7.egg' and adding 'build/bdist.linux-x86_64/egg' to it
removing 'build/bdist.linux-x86_64/egg' (and everything under it)
Processing paddleocr-1.1.2-py3.7.egg
creating /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddleocr-1.1.2-py3.7.egg
Extracting paddleocr-1.1.2-py3.7.egg to /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages
Adding paddleocr 1.1.2 to easy-install.pth file
Installing paddleocr script to /opt/conda/envs/python35-paddle120-env/bin
Installed /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddleocr-1.1.2-py3.7.egg
Processing dependencies for paddleocr==1.1.2
error: scipy 1.3.0 is installed but scipy>=1.4.1 is required by {'scikit-image'}
###Markdown
2. Data Processing 2.1 Extract the Datasets
###Code
!cd ~/data/data62842/ && unzip train_images.zip
!cd ~/data/data62843/ && unzip test_images.zip
!cd ~/data/data62842/ && mv train_images ../ && mv train_label.csv ../
!cd ~/data/data62843/ && mv test_images ../
###Output
_____no_output_____
###Markdown
2.2 Data Augmentation
* In this competition, data augmentation is used to prevent overfitting, and augmentation is most useful when the dataset is small.
* I chose [text_render](https://github.com/Sanster/text_renderer) for data augmentation. The operations used mainly include brightness changes, text-border adjustment, added noise, color adjustment, and variations in font effects.
* After installing text_render, manually edit the text_render/configs/default.yaml configuration as shown below (an example generation command is sketched after the installation cell).
```
# Small font_size will make text looks like blured/prydown
font_size:
  min: 14
  max: 23
# choose Text color range
# color boundary is in R,G,B format
font_color:
  enable: true
  blue:
    fraction: 0.5
    l_boundary: [0,0,150]
    h_boundary: [60,60,255]
  brown:
    fraction: 0.5
    l_boundary: [139,70,19]
    h_boundary: [160,82,43]
# By default, text is drawed by Pillow with (https://stackoverflow.com/questions/43828955/measuring-width-of-text-python-pil)
# If `random_space` is enabled, some text will be drawed char by char with a random space
random_space:
  enable: false
  fraction: 0.3
  min: -0.1 # -0.1 will make chars very close or even overlapped
  max: 0.1
# Do remap with sin()
# Currently this process is very slow!
curve:
  enable: false
  fraction: 0.3
  period: 360 # degree, period of the sin function
  min: 1 # amplitude range of the sin function
  max: 5
# random crop text height
crop:
  enable: false
  fraction: 0.5
  # top and bottom will applied equally
  top:
    min: 5
    max: 10 # in pixel, this value should small than img_height
  bottom:
    min: 5
    max: 10 # in pixel, this value should small than img_height
# Use image in bg_dir as background for text
img_bg:
  enable: false
  fraction: 0.5
# Not work when random_space applied
text_border:
  enable: true
  fraction: 0.3
  # lighter than word color
  light:
    enable: true
    fraction: 0.5
  # darker than word color
  dark:
    enable: true
    fraction: 0.5
# https://docs.opencv.org/3.4/df/da0/group__photo__clone.html#ga2bf426e4c93a6b1f21705513dfeca49d
# https://www.cs.virginia.edu/~connelly/class/2014/comp_photo/proj2/poisson.pdf
# Use opencv seamlessClone() to draw text on background
# For some background image, this will make text image looks more real
seamless_clone:
  enable: true
  fraction: 0.5
perspective_transform:
  max_x: 25
  max_y: 25
  max_z: 3
blur:
  enable: true
  fraction: 0.03
# If an image is applied blur, it will not be applied prydown
prydown:
  enable: true
  fraction: 0.03
  max_scale: 1.5 # Image will first resize to 1.5x, and than resize to 1x
noise:
  enable: true
  fraction: 0.3
  gauss:
    enable: true
    fraction: 0.25
  uniform:
    enable: true
    fraction: 0.25
  salt_pepper:
    enable: true
    fraction: 0.25
  poisson:
    enable: true
    fraction: 0.25
line:
  enable: false
  fraction: 0.05
  under_line:
    enable: false
    fraction: 0.2
  table_line:
    enable: false
    fraction: 0.3
  middle_line:
    enable: false
    fraction: 0.5
line_color:
  enable: false
  black:
    fraction: 0.5
    l_boundary: 0,0,0
    h_boundary: 64,64,64
  blue:
    fraction: 0.5
    l_boundary: [0,0,150]
    h_boundary: [60,60,255]
# These operates are applied on the final output image, so actually it can also be applied in training process as an data augmentation method.
# By default, text is darker than background. If `reverse_color` is enabled, some images will have dark background and light text
reverse_color:
  enable: true
  fraction: 0.3
emboss:
  enable: true
  fraction: 0.3
sharp:
  enable: true
  fraction: 0.3
```
* As noted in PaddleOCR's [FAQ 1.1.8](https://github.com/PaddlePaddle/PaddleOCR/blob/develop/doc/doc_ch/FAQ.md), the PaddleOCR recognition model was trained on roughly 5.2 million samples (about 260 thousand real images plus 5 million synthetic ones), which shows how important data augmentation is.
###Code
!cd ~/work && git clone https://github.com/Sanster/text_renderer
!cd ~/work/text_renderer && pip install -r requirements.txt
###Output
Cloning into 'text_renderer'...
remote: Enumerating objects: 694, done.[K
remote: Counting objects: 100% (6/6), done.[K
remote: Compressing objects: 100% (6/6), done.[K
remote: Total 694 (delta 1), reused 1 (delta 0), pack-reused 688[K
Receiving objects: 100% (694/694), 12.91 MiB | 26.00 KiB/s, done.
Resolving deltas: 100% (384/384), done.
Checking connectivity... done.
Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple
Requirement already satisfied: Cython in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from -r requirements.txt (line 1)) (0.29)
Requirement already satisfied: opencv-python in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from -r requirements.txt (line 2)) (4.2.0.32)
Requirement already satisfied: pillow in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from -r requirements.txt (line 3)) (7.1.2)
Requirement already satisfied: numpy in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from -r requirements.txt (line 4)) (1.16.4)
Requirement already satisfied: matplotlib in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from -r requirements.txt (line 5)) (2.2.3)
Collecting fontTools (from -r requirements.txt (line 6))
[?25l Downloading https://pypi.tuna.tsinghua.edu.cn/packages/c0/77/6570a4cc3f706f1afb217a1603d1b05ebf8e259d5a04256904ef2575e108/fonttools-4.28.5-py3-none-any.whl (890kB)
[K |████████████████████████████████| 890kB 4.8MB/s eta 0:00:01 |███████████████████████████▎ | 757kB 4.8MB/s eta 0:00:01
[?25hCollecting tenacity (from -r requirements.txt (line 7))
Downloading https://pypi.tuna.tsinghua.edu.cn/packages/f2/a5/f86bc8d67c979020438c8559cc70cfe3a1643fd160d35e09c9cca6a09189/tenacity-8.0.1-py3-none-any.whl
Requirement already satisfied: easyDict in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from -r requirements.txt (line 8)) (1.9)
Collecting pyyaml==5.1 (from -r requirements.txt (line 9))
[?25l Downloading https://pypi.tuna.tsinghua.edu.cn/packages/9f/2c/9417b5c774792634834e730932745bc09a7d36754ca00acf1ccd1ac2594d/PyYAML-5.1.tar.gz (274kB)
[K |████████████████████████████████| 276kB 3.8MB/s eta 0:00:01
[?25hRequirement already satisfied: kiwisolver>=1.0.1 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from matplotlib->-r requirements.txt (line 5)) (1.1.0)
Requirement already satisfied: python-dateutil>=2.1 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from matplotlib->-r requirements.txt (line 5)) (2.8.0)
Requirement already satisfied: six>=1.10 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from matplotlib->-r requirements.txt (line 5)) (1.15.0)
Requirement already satisfied: pytz in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from matplotlib->-r requirements.txt (line 5)) (2019.3)
Requirement already satisfied: cycler>=0.10 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from matplotlib->-r requirements.txt (line 5)) (0.10.0)
Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from matplotlib->-r requirements.txt (line 5)) (2.4.2)
Requirement already satisfied: setuptools in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from kiwisolver>=1.0.1->matplotlib->-r requirements.txt (line 5)) (41.4.0)
Building wheels for collected packages: pyyaml
Building wheel for pyyaml (setup.py) ... [?25ldone
[?25h Created wheel for pyyaml: filename=PyYAML-5.1-cp37-cp37m-linux_x86_64.whl size=44074 sha256=8c91cf41d6533ef9cfe09ddf97d07c343f44d6b5559318620b5d854300f7973c
Stored in directory: /home/aistudio/.cache/pip/wheels/91/55/8c/94c56b8cb6f33a264cd157b400283e7fca8c2f7c9ca25d27e6
Successfully built pyyaml
Installing collected packages: fontTools, tenacity, pyyaml
Found existing installation: PyYAML 5.1.2
Uninstalling PyYAML-5.1.2:
Successfully uninstalled PyYAML-5.1.2
Successfully installed fontTools-4.28.5 pyyaml-5.1 tenacity-8.0.1
###Markdown
* Statistics over the training-set image sizes show that the image height is fixed at 48, while the width depends on the number of characters in the image.
###Code
import glob
import os
import cv2
def get_aspect_ratio(img_set_dir):
m_width = 0
m_height = 0
width_dict = {}
height_dict = {}
images = glob.glob(img_set_dir+'*.jpg')
for image in images:
img = cv2.imread(image)
width_dict[int(img.shape[1])] = 1 if (int(img.shape[1])) not in width_dict else 1 + width_dict[int(img.shape[1])]
height_dict[int(img.shape[0])] = 1 if (int(img.shape[0])) not in height_dict else 1 + height_dict[int(img.shape[0])]
m_width += img.shape[1]
m_height += img.shape[0]
m_width = m_width/len(images)
m_height = m_height/len(images)
aspect_ratio = m_width/m_height
width_dict = dict(sorted(width_dict.items(), key=lambda item: item[1], reverse=True))
height_dict = dict(sorted(height_dict.items(), key=lambda item: item[1], reverse=True))
return aspect_ratio,m_width,m_height,width_dict,height_dict
aspect_ratio,m_width,m_height,width_dict,height_dict = get_aspect_ratio("/home/aistudio/data/train_images/")
print("aspect ratio is: {}, mean width is: {}, mean height is: {}".format(aspect_ratio,m_width,m_height))
print("Width dict:{}".format(width_dict))
print("Height dict:{}".format(height_dict))
import pandas as pd
def Q2B(s):
    """Convert a full-width character to its half-width form"""
inside_code=ord(s)
if inside_code==0x3000:
inside_code=0x0020
else:
inside_code-=0xfee0
    if inside_code<0x0020 or inside_code>0x7e: # if it is not a half-width character after conversion, return the original character
return s
return chr(inside_code)
def stringQ2B(s):
    """Convert all full-width characters in a string to half-width"""
return "".join([Q2B(c) for c in s])
def is_chinese(s):
    """Check whether the unicode string consists of Chinese characters"""
for c in s:
if c < u'\u4e00' or c > u'\u9fa5':
return False
return True
def is_number(s):
    """Check whether the unicode string consists of digits"""
for c in s:
if c < u'\u0030' or c > u'\u0039':
return False
return True
def is_alphabet(s):
    """Check whether the unicode string consists of lowercase English letters"""
for c in s:
if c < u'\u0061' or c > u'\u007a':
return False
return True
def del_other(s):
    """Remove every character that is not a Chinese character, a digit or a lowercase English letter"""
res = str()
for c in s:
if not (is_chinese(c) or is_number(c) or is_alphabet(c)):
c = ""
res += c
return res
df = pd.read_csv("/home/aistudio/data/train_label.csv", encoding="gbk")
name, value = list(df.name), list(df.value)
for i, label in enumerate(value):
    # full-width to half-width
label = stringQ2B(label)
    # uppercase to lowercase
label = "".join([c.lower() for c in label])
    # remove spaces and all other symbols
label = del_other(label)
value[i] = label
# drop rows whose label is ""
data = zip(name, value)
data = list(filter(lambda c: c[1]!="", list(data)))
# save to the work directory
with open("/home/aistudio/data/train_label.txt", "w") as f:
for line in data:
f.write(line[0] + "\t" + line[1] + "\n")
# record the longest label in the training set
label_max_len = 0
with open("/home/aistudio/data/train_label.txt", "r") as f:
for line in f:
name, label = line.strip().split("\t")
if len(label) > label_max_len:
label_max_len = len(label)
print("label max len: ", label_max_len)
def create_label_list(train_list):
classSet = set()
with open(train_list) as f:
next(f)
for line in f:
img_name, label = line.strip().split("\t")
for e in label:
classSet.add(e)
    # a blank label is added on top of these classes
classList = sorted(list(classSet))
with open("/home/aistudio/data/label_list.txt", "w") as f:
for idx, c in enumerate(classList):
f.write("{}\t{}\n".format(c, idx))
    # provide the character vocabulary for data augmentation
with open("/home/aistudio/work/text_renderer/data/chars/ch.txt", "w") as f:
for idx, c in enumerate(classList):
f.write("{}\n".format(c))
return classSet
classSet = create_label_list("/home/aistudio/data/train_label.txt")
print("classify num: ", len(classSet))
###Output
aspect ratio is: 3.451128333333333, mean width is: 165.65416, mean height is: 48.0
Width dict:{48: 741, 96: 539, 44: 392, 42: 381, 144: 365, 45: 345, 43: 323, 88: 318, 72: 318, 40: 312, 52: 301, 36: 298, 50: 297, 120: 294, 54: 288, 84: 286, 51: 283, 32: 283, 24: 281, 100: 277, 64: 276, 80: 276, 76: 275, 102: 272, 81: 270, 90: 269, 56: 268, 66: 267, 78: 266, 37: 262, 82: 261, 41: 259, 89: 258, 92: 257, 46: 256, 60: 251, 86: 249, 53: 246, 168: 246, 105: 243, 61: 242, 57: 241, 128: 241, 112: 240, 85: 239, 91: 237, 39: 237, 68: 235, 98: 234, 93: 233, 192: 233, 75: 232, 34: 229, 33: 229, 74: 229, 70: 228, 87: 226, 25: 226, 110: 224, 104: 224, 58: 223, 101: 221, 108: 221, 62: 221, 49: 221, 132: 221, 150: 218, 126: 218, 73: 217, 94: 217, 28: 216, 129: 216, 69: 215, 30: 215, 99: 212, 160: 211, 38: 210, 136: 209, 26: 207, 109: 207, 55: 206, 118: 205, 35: 205, 116: 204, 115: 203, 174: 201, 117: 200, 106: 200, 148: 200, 122: 199, 113: 198, 67: 197, 77: 197, 172: 195, 114: 195, 156: 194, 130: 191, 138: 190, 140: 190, 83: 187, 103: 186, 124: 186, 147: 185, 59: 183, 139: 182, 146: 180, 123: 180, 27: 179, 176: 179, 97: 177, 65: 176, 161: 174, 137: 173, 162: 173, 154: 172, 158: 171, 133: 171, 31: 169, 240: 169, 29: 168, 125: 168, 186: 166, 169: 166, 141: 166, 121: 165, 165: 164, 152: 161, 134: 161, 157: 160, 153: 160, 135: 159, 166: 156, 149: 155, 189: 154, 177: 154, 63: 151, 163: 151, 142: 150, 181: 149, 107: 146, 183: 145, 111: 145, 173: 144, 79: 144, 170: 144, 178: 143, 184: 143, 164: 141, 216: 141, 171: 137, 204: 137, 159: 136, 185: 135, 127: 134, 196: 134, 187: 132, 200: 131, 210: 130, 194: 130, 151: 127, 155: 126, 208: 126, 145: 125, 180: 125, 71: 123, 131: 123, 198: 121, 182: 120, 217: 119, 220: 119, 188: 118, 201: 116, 195: 115, 202: 114, 175: 111, 179: 111, 228: 111, 206: 110, 256: 109, 222: 109, 232: 109, 205: 108, 252: 106, 197: 105, 211: 104, 219: 103, 214: 102, 234: 100, 119: 100, 288: 99, 47: 99, 213: 99, 218: 99, 203: 98, 264: 97, 225: 97, 199: 96, 209: 96, 190: 96, 242: 96, 226: 94, 236: 94, 224: 93, 229: 91, 193: 91, 246: 89, 212: 87, 249: 86, 243: 86, 207: 84, 262: 84, 231: 82, 268: 82, 245: 81, 237: 81, 235: 80, 221: 78, 233: 78, 230: 78, 260: 75, 261: 74, 248: 74, 167: 74, 250: 74, 244: 74, 273: 73, 274: 72, 227: 72, 247: 70, 336: 70, 276: 67, 270: 67, 241: 66, 253: 65, 272: 64, 277: 64, 300: 64, 265: 64, 223: 64, 267: 63, 279: 61, 282: 60, 254: 60, 259: 60, 271: 60, 280: 59, 258: 59, 292: 59, 278: 59, 255: 58, 238: 58, 312: 58, 215: 57, 284: 57, 95: 57, 283: 56, 251: 56, 304: 54, 306: 54, 296: 54, 266: 53, 290: 53, 269: 52, 285: 52, 360: 52, 143: 51, 384: 51, 297: 50, 291: 49, 309: 49, 313: 49, 330: 49, 324: 49, 320: 49, 302: 48, 257: 47, 340: 46, 328: 45, 325: 45, 368: 45, 314: 45, 333: 45, 318: 45, 308: 44, 275: 43, 332: 43, 281: 43, 310: 42, 369: 42, 295: 41, 303: 41, 191: 41, 341: 41, 294: 40, 293: 40, 342: 40, 298: 40, 322: 40, 400: 39, 289: 39, 378: 39, 346: 38, 286: 38, 321: 38, 307: 37, 316: 37, 299: 37, 432: 37, 338: 37, 327: 36, 323: 36, 331: 36, 344: 35, 329: 35, 339: 34, 305: 34, 348: 34, 317: 32, 361: 31, 396: 31, 375: 31, 349: 31, 263: 31, 311: 31, 347: 30, 381: 30, 372: 30, 301: 30, 373: 30, 334: 29, 403: 29, 434: 29, 362: 29, 345: 29, 326: 29, 480: 28, 366: 28, 351: 28, 337: 28, 402: 28, 387: 28, 404: 27, 376: 27, 239: 27, 352: 27, 315: 27, 405: 26, 354: 26, 358: 26, 389: 26, 377: 26, 386: 25, 394: 25, 364: 25, 363: 25, 319: 25, 382: 24, 408: 24, 464: 24, 388: 24, 353: 24, 391: 24, 365: 24, 371: 23, 374: 23, 429: 23, 379: 23, 355: 23, 393: 23, 416: 23, 390: 23, 468: 22, 401: 22, 423: 22, 457: 22, 392: 22, 444: 22, 528: 21, 477: 21, 406: 21, 398: 21, 
436: 21, 357: 21, 395: 21, 343: 20, 445: 20, 425: 20, 459: 20, 356: 20, 438: 20, 447: 20, 496: 19, 411: 19, 414: 19, 418: 18, 413: 18, 417: 18, 410: 18, 367: 18, 359: 18, 419: 17, 428: 17, 385: 17, 380: 17, 453: 17, 412: 17, 424: 16, 427: 16, 370: 16, 472: 16, 430: 16, 473: 16, 483: 16, 399: 15, 454: 15, 422: 15, 437: 15, 420: 15, 350: 15, 462: 15, 510: 15, 450: 15, 397: 14, 287: 14, 474: 14, 440: 14, 460: 14, 439: 14, 544: 14, 409: 13, 535: 13, 481: 13, 461: 13, 488: 13, 492: 13, 421: 13, 451: 13, 456: 13, 465: 13, 537: 13, 471: 13, 433: 13, 446: 13, 509: 13, 493: 13, 335: 12, 562: 12, 426: 12, 558: 12, 482: 12, 458: 12, 645: 11, 536: 11, 539: 11, 616: 11, 484: 11, 512: 11, 504: 11, 549: 11, 415: 11, 435: 11, 572: 11, 490: 11, 441: 11, 505: 11, 476: 11, 576: 10, 443: 10, 592: 10, 552: 10, 452: 10, 534: 10, 486: 10, 550: 10, 514: 10, 580: 10, 523: 10, 469: 10, 495: 10, 538: 10, 570: 10, 506: 10, 556: 10, 442: 10, 522: 10, 499: 10, 448: 10, 612: 10, 516: 9, 568: 9, 485: 9, 520: 9, 463: 9, 467: 9, 529: 9, 546: 9, 383: 9, 487: 9, 601: 9, 491: 9, 508: 9, 524: 9, 470: 9, 517: 9, 407: 9, 475: 8, 518: 8, 497: 8, 634: 8, 553: 8, 571: 8, 640: 8, 466: 8, 574: 8, 501: 8, 573: 8, 560: 8, 542: 8, 478: 8, 532: 8, 489: 7, 533: 7, 557: 7, 531: 7, 593: 7, 547: 7, 513: 7, 565: 7, 545: 7, 540: 7, 664: 7, 507: 7, 449: 7, 502: 7, 603: 7, 577: 6, 567: 6, 519: 6, 759: 6, 543: 6, 724: 6, 498: 6, 625: 6, 688: 6, 585: 6, 578: 6, 530: 6, 515: 6, 503: 6, 628: 6, 559: 6, 600: 6, 615: 6, 720: 6, 521: 6, 525: 6, 605: 6, 598: 5, 639: 5, 654: 5, 630: 5, 638: 5, 731: 5, 586: 5, 455: 5, 548: 5, 618: 5, 624: 5, 595: 5, 655: 5, 651: 5, 704: 5, 622: 5, 610: 5, 805: 5, 617: 5, 589: 5, 566: 5, 660: 5, 661: 5, 594: 5, 583: 5, 587: 5, 744: 5, 554: 5, 656: 4, 659: 4, 658: 4, 635: 4, 627: 4, 666: 4, 564: 4, 588: 4, 663: 4, 646: 4, 714: 4, 644: 4, 686: 4, 511: 4, 643: 4, 500: 4, 678: 4, 826: 4, 668: 4, 606: 4, 810: 4, 584: 4, 680: 4, 699: 4, 872: 4, 590: 4, 563: 4, 551: 4, 800: 4, 792: 4, 797: 4, 692: 4, 732: 4, 608: 4, 648: 4, 613: 4, 619: 4, 629: 4, 604: 4, 777: 4, 431: 4, 672: 4, 736: 4, 620: 4, 742: 4, 602: 4, 770: 4, 691: 4, 581: 3, 631: 3, 667: 3, 711: 3, 609: 3, 807: 3, 702: 3, 675: 3, 676: 3, 729: 3, 751: 3, 868: 3, 793: 3, 780: 3, 754: 3, 725: 3, 870: 3, 685: 3, 696: 3, 763: 3, 614: 3, 708: 3, 657: 3, 673: 3, 756: 3, 761: 3, 541: 3, 569: 3, 859: 3, 746: 3, 681: 3, 591: 3, 989: 3, 632: 3, 925: 3, 611: 3, 679: 3, 697: 3, 626: 3, 689: 3, 494: 3, 816: 3, 555: 3, 526: 3, 637: 3, 841: 3, 713: 3, 842: 3, 607: 3, 597: 3, 1114: 2, 760: 2, 733: 2, 963: 2, 710: 2, 878: 2, 669: 2, 726: 2, 866: 2, 914: 2, 734: 2, 701: 2, 916: 2, 707: 2, 799: 2, 705: 2, 848: 2, 694: 2, 818: 2, 874: 2, 930: 2, 853: 2, 932: 2, 887: 2, 723: 2, 1106: 2, 1062: 2, 690: 2, 865: 2, 712: 2, 896: 2, 738: 2, 804: 2, 757: 2, 839: 2, 735: 2, 698: 2, 683: 2, 996: 2, 903: 2, 817: 2, 755: 2, 967: 2, 840: 2, 674: 2, 894: 2, 882: 2, 579: 2, 662: 2, 765: 2, 1104: 2, 682: 2, 727: 2, 821: 2, 633: 2, 824: 2, 830: 2, 641: 2, 743: 2, 665: 2, 582: 2, 787: 2, 693: 2, 709: 2, 836: 2, 931: 2, 833: 2, 884: 2, 778: 2, 784: 2, 889: 2, 835: 2, 952: 2, 822: 2, 1008: 2, 988: 2, 1092: 1, 1237: 1, 737: 1, 965: 1, 717: 1, 946: 1, 650: 1, 1071: 1, 758: 1, 1176: 1, 1011: 1, 924: 1, 1080: 1, 1126: 1, 1194: 1, 1165: 1, 834: 1, 1287: 1, 753: 1, 1409: 1, 1314: 1, 945: 1, 785: 1, 813: 1, 741: 1, 716: 1, 987: 1, 825: 1, 789: 1, 1097: 1, 730: 1, 814: 1, 1113: 1, 819: 1, 803: 1, 929: 1, 898: 1, 652: 1, 1203: 1, 876: 1, 772: 1, 862: 1, 1073: 1, 636: 1, 767: 1, 621: 1, 1039: 1, 1014: 1, 728: 1, 976: 1, 
647: 1, 670: 1, 1321: 1, 768: 1, 796: 1, 1027: 1, 1144: 1, 977: 1, 934: 1, 703: 1, 900: 1, 953: 1, 962: 1, 966: 1, 873: 1, 771: 1, 1147: 1, 1280: 1, 1400: 1, 918: 1, 561: 1, 790: 1, 1066: 1, 479: 1, 1514: 1, 706: 1, 764: 1, 779: 1, 947: 1, 745: 1, 575: 1, 943: 1, 1168: 1, 941: 1, 1084: 1, 684: 1, 599: 1, 1624: 1, 883: 1, 935: 1, 820: 1, 827: 1, 978: 1, 527: 1, 957: 1, 906: 1, 852: 1, 695: 1, 1009: 1, 642: 1, 908: 1, 1003: 1, 917: 1, 794: 1, 1260: 1, 1010: 1, 831: 1, 892: 1, 1208: 1, 847: 1, 979: 1, 715: 1, 1505: 1, 1188: 1, 1111: 1, 881: 1, 1225: 1, 1283: 1, 1050: 1, 781: 1, 1051: 1, 776: 1, 997: 1, 747: 1, 984: 1, 1077: 1, 895: 1, 596: 1, 861: 1, 1076: 1, 769: 1, 1381: 1, 773: 1, 1282: 1, 1250: 1, 812: 1, 1234: 1, 774: 1, 750: 1, 823: 1, 687: 1, 749: 1, 860: 1, 986: 1, 721: 1, 901: 1, 973: 1, 926: 1, 1018: 1, 718: 1, 766: 1, 748: 1, 1087: 1, 1626: 1, 1044: 1, 649: 1, 899: 1, 1083: 1}
Height dict:{48: 50000}
label max len: 77
classify num: 3096
###Markdown
- Generate 2000 images for each text length from 1 to 5 characters, 10000 synthetic images in total; see the sketch below for how the widths are chosen.
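The --img_width values passed to main.py in the next cell follow roughly 32 pixels per character at the fixed height of 48; a tiny sketch of how they are derived:
```
# Width chosen for each synthetic text length: 32 px per character, height fixed at 48.
char_width = 32
for length in range(1, 6):
    print("length={} -> --img_width {} --img_height 48".format(length, length * char_width))
```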
###Code
# clear any previously generated images
!cd ~/work/text_renderer/output/default && rm ./*
!cd ~/work/text_renderer && python main.py --length 1 --img_width 32 --img_height 48 --chars_file "./data/chars/ch.txt" --corpus_mode 'random' --num_img 2000
!cd ~/work/text_renderer && python main.py --length 2 --img_width 64 --img_height 48 --chars_file "./data/chars/ch.txt" --corpus_mode 'random' --num_img 2000
!cd ~/work/text_renderer && python main.py --length 3 --img_width 96 --img_height 48 --chars_file "./data/chars/ch.txt" --corpus_mode 'random' --num_img 2000
!cd ~/work/text_renderer && python main.py --length 4 --img_width 128 --img_height 48 --chars_file "./data/chars/ch.txt" --corpus_mode 'random' --num_img 2000
!cd ~/work/text_renderer && python main.py --length 5 --img_width 160 --img_height 48 --chars_file "./data/chars/ch.txt" --corpus_mode 'random' --num_img 2000
###Output
Total fonts num: 1
Background num: 1
Generate text images in ./output/default
Retry gen_img: Range cannot be empty (low >= high) unless no samples are taken
Traceback (most recent call last):
File "main.py", line 75, in gen_img_retry
return renderer.gen_img(img_index)
File "/home/aistudio/work/text_renderer/textrenderer/renderer.py", line 55, in gen_img
word_img, text_box_pnts, word_color = self.draw_text_on_bg(word, font, bg)
File "/home/aistudio/work/text_renderer/textrenderer/renderer.py", line 283, in draw_text_on_bg
np_img = self.draw_text_seamless(font, bg, word, word_color, word_height, word_width, offset)
File "/home/aistudio/work/text_renderer/textrenderer/renderer.py", line 318, in draw_text_seamless
font, word_color)
File "/home/aistudio/work/text_renderer/textrenderer/renderer.py", line 394, in draw_text_wrapper
self.draw_border_text(draw, text, x, y, font, text_color)
File "/home/aistudio/work/text_renderer/textrenderer/renderer.py", line 419, in draw_border_text
text_color[0] + np.random.randint(0, 255 - text_color[0] - 1),
File "mtrand.pyx", line 992, in mtrand.RandomState.randint
ValueError: Range cannot be empty (low >= high) unless no samples are taken
Retry gen_img: Range cannot be empty (low >= high) unless no samples are taken
Traceback (most recent call last):
File "main.py", line 75, in gen_img_retry
return renderer.gen_img(img_index)
File "/home/aistudio/work/text_renderer/textrenderer/renderer.py", line 55, in gen_img
word_img, text_box_pnts, word_color = self.draw_text_on_bg(word, font, bg)
File "/home/aistudio/work/text_renderer/textrenderer/renderer.py", line 283, in draw_text_on_bg
np_img = self.draw_text_seamless(font, bg, word, word_color, word_height, word_width, offset)
File "/home/aistudio/work/text_renderer/textrenderer/renderer.py", line 318, in draw_text_seamless
font, word_color)
File "/home/aistudio/work/text_renderer/textrenderer/renderer.py", line 394, in draw_text_wrapper
self.draw_border_text(draw, text, x, y, font, text_color)
File "/home/aistudio/work/text_renderer/textrenderer/renderer.py", line 419, in draw_border_text
text_color[0] + np.random.randint(0, 255 - text_color[0] - 1),
File "mtrand.pyx", line 992, in mtrand.RandomState.randint
ValueError: Range cannot be empty (low >= high) unless no samples are taken
2000/2000 100%
Finish generate data: 8.510 s
Total fonts num: 1
Background num: 1
Generate more text images in ./output/default. Start index 2000
Retry gen_img: Range cannot be empty (low >= high) unless no samples are taken
Traceback (most recent call last):
File "main.py", line 75, in gen_img_retry
return renderer.gen_img(img_index)
File "/home/aistudio/work/text_renderer/textrenderer/renderer.py", line 55, in gen_img
word_img, text_box_pnts, word_color = self.draw_text_on_bg(word, font, bg)
File "/home/aistudio/work/text_renderer/textrenderer/renderer.py", line 283, in draw_text_on_bg
np_img = self.draw_text_seamless(font, bg, word, word_color, word_height, word_width, offset)
File "/home/aistudio/work/text_renderer/textrenderer/renderer.py", line 318, in draw_text_seamless
font, word_color)
File "/home/aistudio/work/text_renderer/textrenderer/renderer.py", line 394, in draw_text_wrapper
self.draw_border_text(draw, text, x, y, font, text_color)
File "/home/aistudio/work/text_renderer/textrenderer/renderer.py", line 419, in draw_border_text
text_color[0] + np.random.randint(0, 255 - text_color[0] - 1),
File "mtrand.pyx", line 992, in mtrand.RandomState.randint
ValueError: Range cannot be empty (low >= high) unless no samples are taken
2000/2000 100%
Finish generate data: 10.232 s
Total fonts num: 1
Background num: 1
Generate more text images in ./output/default. Start index 4000
2000/2000 100%
Finish generate data: 12.927 s
Total fonts num: 1
Background num: 1
Generate more text images in ./output/default. Start index 6000
2000/2000 100%
Finish generate data: 16.569 s
Total fonts num: 1
Background num: 1
Generate more text images in ./output/default. Start index 8000
2000/2000 100%
Finish generate data: 20.731 s
###Markdown
- Merge the generated images and labels into the original training set (a quick line-count check is sketched below).
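After running the next cell, the merged label file should hold roughly 60000 lines (up to 50000 original labels, minus the empty ones dropped earlier, plus the 10000 synthetic ones). A quick sanity check, using the same path as above:
```
# Count the entries in the merged label file.
with open("/home/aistudio/data/train_label.txt", "r", encoding="utf-8") as f:
    print(sum(1 for _ in f))
```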
###Code
!cp ~/work/text_renderer/output/default/*.jpg ~/data/train_images
import os
with open('work/text_renderer/output/default/tmp_labels.txt','r',encoding='utf-8') as src_label:
with open('data/train_label.txt','a',encoding='utf-8') as dst_label:
lines = src_label.readlines()
for line in lines:
[img,text] = line.split(' ')
print('{}.jpg\t{}'.format(img,text),file=dst_label,end='')
###Output
_____no_output_____
###Markdown
3. Model Tuning
* Start from the CRNN pre-trained model provided by PaddleOCR, or pick one of the [other models](https://github.com/PaddlePaddle/PaddleOCR/blob/develop/doc/doc_ch/recognition.md).
* Based on the training-set statistics above, the model input shape is set to a height of 48 and a width of 256.
* cosine_decay together with warmup is used to speed up convergence (the resulting learning-rate curve is sketched below).
- The CRNN model was proposed in the 2015 paper "An End-to-End Trainable Neural Network for Image-based Sequence Recognition and Its Application to Scene Text Recognition" [[paper]](https://arxiv.org/abs/1507.05717) [[code]](https://github.com/bgshih/crnn) for recognizing text sequences of variable length.
- The CRNN architecture consists of three parts: a CNN feature extractor, an RNN layer and a CTC transcription layer. The CNN extracts features from the input image; the original paper uses VGG, whereas PaddleOCR offers ResNet34 and MobileNetV3 backbones. Larger models usually give better accuracy, so this project takes ResNet34 as the baseline, and the direction for improvement is to adjust the CNN feature extractor and try ResNet50 or deeper structures.
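The cosine_decay_warmup schedule configured in my_rec_ch_train.yml below can be pictured with a few lines of plain Python. The numbers (base_lr 0.0001, 1000 steps per epoch, 2000 warmup minibatches, 161 epochs) are the ones used in this project; the formula is the usual linear warmup followed by a cosine decay and is only a sketch, not PaddleOCR's exact implementation:
```
import math

base_lr = 0.0001
step_each_epoch = 1000
warmup_steps = 2000
total_steps = step_each_epoch * 161

def lr_at(step):
    # Linear warmup from 0 to base_lr, then a single cosine decay towards 0.
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return 0.5 * base_lr * (1 + math.cos(math.pi * progress))

for s in (0, 1000, 2000, 80000, 160000):
    print(s, lr_at(s))
```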
###Code
!cd ~/work/PaddleOCR && mkdir pretrain_weights && cd pretrain_weights && wget https://paddleocr.bj.bcebos.com/20-09-22/server/rec/ch_ppocr_server_v1.1_rec_pre.tar
!cd ~/work/PaddleOCR/pretrain_weights && tar -xf ch_ppocr_server_v1.1_rec_pre.tar
###Output
_____no_output_____
###Markdown
* In PaddleOCR/configs/rec, add the two training config files my_rec_ch_train.yml and my_rec_ch_reader.yml.
* Tuning used for the competition result: 161 epochs (epoch 0 to epoch 160), an initial learning rate of 0.0001, fc_decay of 0.00001 and an l2 decay of 0.00001.
* To speed up training, cosine_decay and warmup are used, with step_each_epoch set to 1000, warmup_minibatch set to 2000 and a total decay length of 161 epochs.
* In my tests these settings achieve good results.
```
my_rec_ch_train.yml

Global:
  algorithm: CRNN
  use_gpu: true
  epoch_num: 161            # number of training epochs
  log_smooth_window: 20
  print_batch_step: 10
  save_model_dir: ./output/my_rec_ch
  save_epoch_step: 20       # save a checkpoint every this many epochs
  eval_batch_step: 1000
  train_batch_size_per_card: 256
  test_batch_size_per_card: 128
  image_shape: [3, 48, 256]
  max_text_length: 80
  character_type: ch
  character_dict_path: ./ppocr/utils/ppocr_keys_v1.txt
  loss_type: ctc
  distort: true
  use_space_char: true
  reader_yml: ./configs/rec/my_rec_ch_reader.yml
  pretrain_weights: ./pretrain_weights/ch_ppocr_server_v1.1_rec_pre/best_accuracy
  checkpoints:
  save_inference_dir:
  infer_img:

Architecture:
  function: ppocr.modeling.architectures.rec_model,RecModel

Backbone:
  function: ppocr.modeling.backbones.rec_resnet_vd,ResNet
  layers: 34

Head:
  function: ppocr.modeling.heads.rec_ctc_head,CTCPredict
  encoder_type: rnn
  fc_decay: 0.00001
  SeqRNN:
    hidden_size: 256

Loss:
  function: ppocr.modeling.losses.rec_ctc_loss,CTCLoss

Optimizer:
  function: ppocr.optimizer,AdamDecay
  base_lr: 0.0001           # initial learning rate
  l2_decay: 0.00001         # learning-rate decay
  beta1: 0.9
  beta2: 0.999
  decay:
    function: cosine_decay_warmup
    step_each_epoch: 1000
    total_epoch: 161
    warmup_minibatch: 2000
```
```
my_rec_ch_reader.yml

TrainReader:
  reader_function: ppocr.data.rec.dataset_traversal,SimpleReader
  num_workers: 1
  img_set_dir: /home/aistudio/data/train_images
  label_file_path: /home/aistudio/data/train_label.txt

EvalReader:
  reader_function: ppocr.data.rec.dataset_traversal,SimpleReader
  img_set_dir: /home/aistudio/data/train_images
  label_file_path: /home/aistudio/data/train_label.txt

TestReader:
  reader_function: ppocr.data.rec.dataset_traversal,SimpleReader
```

4. Training and Prediction
- Train the model following the [official PaddleOCR tutorial](https://github.com/PaddlePaddle/PaddleOCR/blob/develop/doc/doc_ch/recognition.md).

4.1 Train the model
- After adding the config files my_rec_ch_train.yml and my_rec_ch_reader.yml, run the command below to start training (a quick consistency check against the dataset statistics is sketched first).
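Before launching training it is worth confirming that the configured input shape and max_text_length actually cover the data prepared earlier (fixed height 48, longest label 77 characters). A small sanity check:
```
# Values taken from my_rec_ch_train.yml
image_shape = [3, 48, 256]
max_text_length = 80

# Values measured on the training data earlier in this notebook
dataset_height = 48
label_max_len = 77

assert image_shape[1] == dataset_height, "input height should match the fixed image height"
assert max_text_length >= label_max_len, "max_text_length must cover the longest label"
print("config is consistent with the dataset")
```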
###Code
!cd ~/work/PaddleOCR && python tools/train.py -c configs/rec/my_rec_ch_train.yml
###Output
2021-12-24 19:19:21,079-INFO: Already save model in ./output/my_rec_ch/iter_epoch_160
###Markdown
4.2 Export the model (optional, depending on your needs)
- After training finishes, models and parameters are saved under PaddleOCR/output. Choose the checkpoint to export as the final model; the command below exports the iter_epoch_160 checkpoint and writes the inference model to PaddleOCR/inference/CRNN_R34. Both paths can be changed as needed (a sketch of loading the exported model back follows).
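Once exported, the model/params pair under inference/CRNN_R34 can be loaded back with the static inference API of the PaddlePaddle 1.x fluid environment used here; a minimal sketch:
```
from paddle import fluid

place = fluid.CPUPlace()
exe = fluid.Executor(place)

# Load the exported inference program together with its parameters.
program, feed_names, fetch_targets = fluid.io.load_inference_model(
    dirname="./inference/CRNN_R34",
    executor=exe,
    model_filename="model",
    params_filename="params")
print(feed_names, [t.name for t in fetch_targets])
```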
###Code
!cd ~/work/PaddleOCR && python tools/export_model.py -c configs/rec/my_rec_ch_train.yml -o Global.checkpoints=./output/my_rec_ch/iter_epoch_160 Global.save_inference_dir=./inference/CRNN_R34
###Output
2021-12-24 19:50:10,162-INFO: {'Global': {'debug': False, 'algorithm': 'CRNN', 'use_gpu': True, 'epoch_num': 161, 'log_smooth_window': 20, 'print_batch_step': 10, 'save_model_dir': './output/my_rec_ch', 'save_epoch_step': 20, 'eval_batch_step': 1000, 'train_batch_size_per_card': 256, 'test_batch_size_per_card': 128, 'image_shape': [3, 48, 256], 'max_text_length': 80, 'character_type': 'ch', 'character_dict_path': './ppocr/utils/ppocr_keys_v1.txt', 'loss_type': 'ctc', 'distort': True, 'use_space_char': True, 'reader_yml': './configs/rec/my_rec_ch_reader.yml', 'pretrain_weights': './pretrain_weights/ch_ppocr_server_v1.1_rec_pre/best_accuracy', 'checkpoints': './output/my_rec_ch/iter_epoch_160', 'save_inference_dir': './inference/CRNN_R34', 'infer_img': None}, 'Architecture': {'function': 'ppocr.modeling.architectures.rec_model,RecModel'}, 'Backbone': {'function': 'ppocr.modeling.backbones.rec_resnet_vd,ResNet', 'layers': 34}, 'Head': {'function': 'ppocr.modeling.heads.rec_ctc_head,CTCPredict', 'encoder_type': 'rnn', 'fc_decay': 1e-05, 'SeqRNN': {'hidden_size': 256}}, 'Loss': {'function': 'ppocr.modeling.losses.rec_ctc_loss,CTCLoss'}, 'Optimizer': {'function': 'ppocr.optimizer,AdamDecay', 'base_lr': 0.0001, 'l2_decay': 1e-05, 'beta1': 0.9, 'beta2': 0.999, 'decay': {'function': 'cosine_decay_warmup', 'step_each_epoch': 1000, 'total_epoch': 161, 'warmup_minibatch': 2000}}, 'TrainReader': {'reader_function': 'ppocr.data.rec.dataset_traversal,SimpleReader', 'num_workers': 1, 'img_set_dir': '/home/aistudio/data/train_images', 'label_file_path': '/home/aistudio/data/train_label.txt'}, 'EvalReader': {'reader_function': 'ppocr.data.rec.dataset_traversal,SimpleReader', 'img_set_dir': '/home/aistudio/data/train_images', 'label_file_path': '/home/aistudio/data/train_label.txt'}, 'TestReader': {'reader_function': 'ppocr.data.rec.dataset_traversal,SimpleReader'}}
W1224 19:50:10.413158 29038 device_context.cc:252] Please NOTE: device: 0, CUDA Capability: 70, Driver API Version: 10.1, Runtime API Version: 9.0
W1224 19:50:10.418545 29038 device_context.cc:260] device: 0, cuDNN Version: 7.6.
2021-12-24 19:50:14,478-INFO: Finish initing model from ./output/my_rec_ch/iter_epoch_160
inference model saved in ./inference/CRNN_R34/model and ./inference/CRNN_R34/params
save success, output_name_list: ['decoded_out', 'predicts']
###Markdown
4.3 Prediction
Under the work/PaddleOCR/tools/ directory, create a new python file infer_rec_new.py and copy the following code into it (a sketch of the CTC decoding it relies on follows at the end of this section):
```
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import numpy as np
import os
import sys
import glob
import re

__dir__ = os.path.dirname(os.path.abspath(__file__))
sys.path.append(__dir__)
sys.path.append(os.path.abspath(os.path.join(__dir__, '..')))


def set_paddle_flags(**kwargs):
    for key, value in kwargs.items():
        if os.environ.get(key, None) is None:
            os.environ[key] = str(value)


# NOTE(paddle-dev): All of these flags should be set before
# `import paddle`. Otherwise, it would not take any effect.
set_paddle_flags(
    FLAGS_eager_delete_tensor_gb=0  # enable GC to save memory
)

import tools.program as program
from paddle import fluid
from ppocr.utils.utility import initial_logger
logger = initial_logger()
from ppocr.utils.utility import enable_static_mode
from ppocr.data.reader_main import reader_main
from ppocr.utils.save_load import init_model
from ppocr.utils.character import CharacterOps
from ppocr.utils.utility import create_module
from ppocr.utils.utility import get_image_file_list


def main():
    config = program.load_config(FLAGS.config)
    program.merge_config(FLAGS.opt)
    logger.info(config)
    char_ops = CharacterOps(config['Global'])
    config['Global']['char_ops'] = char_ops
    # check if set use_gpu=True in paddlepaddle cpu version
    use_gpu = config['Global']['use_gpu']
    # check_gpu(use_gpu)

    place = fluid.CUDAPlace(0) if use_gpu else fluid.CPUPlace()
    exe = fluid.Executor(place)

    rec_model = create_module(config['Architecture']['function'])(params=config)
    startup_prog = fluid.Program()
    eval_prog = fluid.Program()
    with fluid.program_guard(eval_prog, startup_prog):
        with fluid.unique_name.guard():
            _, outputs = rec_model(mode="test")
            fetch_name_list = list(outputs.keys())
            fetch_varname_list = [outputs[v].name for v in fetch_name_list]
            print(fetch_varname_list)
    eval_prog = eval_prog.clone(for_test=True)
    exe.run(startup_prog)
    init_model(config, eval_prog, exe)

    blobs = reader_main(config, 'test')()
    # print(blobs)
    infer_img = config['Global']['infer_img']
    infer_list = get_image_file_list(infer_img)
    infer_list.sort(key=lambda x: int(re.split('/home/aistudio/data/test_images/|.jpg', x)[1]))
    # print(infer_list)
    images = glob.glob("/home/aistudio/data/test_images/*.jpg")
    images.sort(key=lambda x: int(re.split('/home/aistudio/data/test_images/|.jpg', x)[1]))
    max_img_num = len(infer_list)
    if len(infer_list) == 0:
        logger.info("Can not find img in infer_img dir.")
    from tqdm import tqdm
    f = open('test2.txt', mode='w', encoding='utf8')
    f.write('new_name\tvalue\n')
    for i in tqdm(range(max_img_num)):
        # for image in images:
        #     print("infer_img:", infer_list[i])
        img = next(blobs)
        predict = exe.run(program=eval_prog,
                          feed={"image": img},
                          fetch_list=fetch_varname_list,
                          return_numpy=False)
        preds = np.array(predict[0])
        if preds.shape[1] == 1:
            preds = preds.reshape(-1)
            preds_lod = predict[0].lod()[0]
            preds_text = char_ops.decode(preds)
        else:
            end_pos = np.where(preds[0, :] == 1)[0]
            if len(end_pos) <= 1:
                preds_text = preds[0, 1:]
            else:
                preds_text = preds[0, 1:end_pos[1]]
            preds_text = preds_text.reshape(-1)
            preds_text = char_ops.decode(preds_text)
        # f.write('{}\t{}\n'.format(os.path.basename(img_path), preds_text))
        f.write('{}\t{}\n'.format(infer_list[i].replace('/home/aistudio/data/test_images/', ''), preds_text))
        # print(image)
        # print("\t index:", preds)
        # print("\t word :", preds_text)
    f.close()

    # save for inference model
    target_var = []
    for key, values in outputs.items():
        target_var.append(values)
    fluid.io.save_inference_model(
        "./output/",
        feeded_var_names=['image'],
        target_vars=target_var,
        executor=exe,
        main_program=eval_prog,
        model_filename="model",
        params_filename="params")


if __name__ == '__main__':
    enable_static_mode()
    parser = program.ArgsParser()
    FLAGS = parser.parse_args()
    FLAGS.config = 'configs/rec/my_rec_ch_train.yml'
    main()
```
* The result (.txt) is saved under the work/PaddleOCR/ directory as test2.txt.

**Note: the contents of test2.txt are in an arbitrary order; the competition rules require the predictions to be sorted before submission.**

* For the final competition submission, the checkpoint used is **best_accuracy** under /home/aistudio/work/PaddleOCR/output/my_rec_ch/.
* The command below runs prediction on the test set images.
* Global.checkpoints: the model checkpoint file
* -c: the config file
* Global.infer_img: path of the images to predict, either a single image file or an image directory
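The fetch list printed by the script contains the greedy CTC decode result, which char_ops.decode then maps back to characters from ppocr_keys_v1.txt. Conceptually, greedy CTC decoding takes the most likely label per time step, merges consecutive repeats and drops the blank label. A standalone illustration with a made-up three-character table (not PaddleOCR's CharacterOps, which also handles options such as use_space_char):
```
def ctc_greedy_decode(indices, charset, blank=0):
    """Collapse repeated indices, drop the blank, then map indices to characters."""
    out = []
    prev = None
    for idx in indices:
        if idx != blank and idx != prev:
            out.append(charset[idx - 1])  # index 0 is reserved for the blank in this sketch
        prev = idx
    return "".join(out)

# Hypothetical character table, for illustration only.
charset = ["水", "饺", "店"]
print(ctc_greedy_decode([1, 1, 0, 2, 2, 0, 0, 3], charset))  # -> 水饺店
```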
###Code
%cd ~/work/PaddleOCR
!python tools/infer_rec_new.py \
-c configs/rec/my_rec_ch_train.yml \
-o Global.checkpoints=./output/my_rec_ch/best_accuracy \
Global.infer_img=/home/aistudio/data/test_images
###Output
/home/aistudio/work/PaddleOCR
2021-12-24 20:22:46,421-INFO: {'Global': {'debug': False, 'algorithm': 'CRNN', 'use_gpu': True, 'epoch_num': 161, 'log_smooth_window': 20, 'print_batch_step': 10, 'save_model_dir': './output/my_rec_ch', 'save_epoch_step': 20, 'eval_batch_step': 1000, 'train_batch_size_per_card': 256, 'test_batch_size_per_card': 128, 'image_shape': [3, 48, 256], 'max_text_length': 80, 'character_type': 'ch', 'character_dict_path': './ppocr/utils/ppocr_keys_v1.txt', 'loss_type': 'ctc', 'distort': True, 'use_space_char': True, 'reader_yml': './configs/rec/my_rec_ch_reader.yml', 'pretrain_weights': './pretrain_weights/ch_ppocr_server_v1.1_rec_pre/best_accuracy', 'checkpoints': './output/my_rec_ch/iter_epoch_160', 'save_inference_dir': None, 'infer_img': '/home/aistudio/data/test_images'}, 'Architecture': {'function': 'ppocr.modeling.architectures.rec_model,RecModel'}, 'Backbone': {'function': 'ppocr.modeling.backbones.rec_resnet_vd,ResNet', 'layers': 34}, 'Head': {'function': 'ppocr.modeling.heads.rec_ctc_head,CTCPredict', 'encoder_type': 'rnn', 'fc_decay': 1e-05, 'SeqRNN': {'hidden_size': 256}}, 'Loss': {'function': 'ppocr.modeling.losses.rec_ctc_loss,CTCLoss'}, 'Optimizer': {'function': 'ppocr.optimizer,AdamDecay', 'base_lr': 0.0001, 'l2_decay': 1e-05, 'beta1': 0.9, 'beta2': 0.999, 'decay': {'function': 'cosine_decay_warmup', 'step_each_epoch': 1000, 'total_epoch': 161, 'warmup_minibatch': 2000}}, 'TrainReader': {'reader_function': 'ppocr.data.rec.dataset_traversal,SimpleReader', 'num_workers': 1, 'img_set_dir': '/home/aistudio/data/train_images', 'label_file_path': '/home/aistudio/data/train_label.txt'}, 'EvalReader': {'reader_function': 'ppocr.data.rec.dataset_traversal,SimpleReader', 'img_set_dir': '/home/aistudio/data/train_images', 'label_file_path': '/home/aistudio/data/train_label.txt'}, 'TestReader': {'reader_function': 'ppocr.data.rec.dataset_traversal,SimpleReader'}}
['ctc_greedy_decoder_0.tmp_0', 'softmax_0.tmp_0']
W1224 20:22:46.662438 31520 device_context.cc:252] Please NOTE: device: 0, CUDA Capability: 70, Driver API Version: 10.1, Runtime API Version: 9.0
W1224 20:22:46.667984 31520 device_context.cc:260] device: 0, cuDNN Version: 7.6.
2021-12-24 20:22:50,809-INFO: Finish initing model from ./output/my_rec_ch/iter_epoch_160
100%|█████████████████████████████████████| 10000/10000 [05:04<00:00, 32.80it/s]
###Markdown
4.4 Sort the contents of the txt file
* A small python script sorts the contents of the prediction txt file and writes the sorted result to test1112.txt; the script is named ZhuanHuan.py (a quick check of the result is sketched below).
* Under work/PaddleOCR/, create the file ZhuanHuan.py and copy the following code into it.
```
# Read the unordered prediction file (the first line is the header "new_name\tvalue")
f = open('test7.txt', 'r', encoding='utf8')
lines = f.readlines()
f.close()

# Split every data line into [name, value]; lines without a prediction keep only the name
rows = [line.strip('\n').split() for line in lines[1:]]
print(rows[0][1])
print(rows)

# Sort the rows by the numeric part of the file name, e.g. "123.jpg" -> 123
rows.sort(key=lambda r: int(r[0].replace('.jpg', '')))

# Write the sorted submission file
f = open('test1112.txt', mode='w', encoding='utf8')
f.write('new_name\tvalue\n')
for row in rows:
    if len(row) == 2:
        f.write('{}\t{}\n'.format(row[0], row[1]))
    else:
        f.write('{}\t{}\n'.format(row[0], ''))
f.close()
print("finish")
```
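After generating the sorted file it is easy to verify the ordering and the row count before submitting; a small check using the output file name from the script above:
```
# Verify that the rows of the sorted submission file are in ascending numeric order.
with open('test1112.txt', 'r', encoding='utf8') as f:
    next(f)  # skip the header line
    ids = [int(line.split('\t')[0].replace('.jpg', '')) for line in f]
print(len(ids), ids == sorted(ids))
```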
###Code
!python /home/aistudio/work/PaddleOCR/ZhuanHuan.py
###Output
水饺
[['new_name', 'value'], [2316, '水饺'], [4500, '84426783'], [5402, '60'], [1347, 'caohejing'], [7928, '政兴装饰'], [4192, 'welcome'], [9608, '中华老字号'], [2912, '15135125785'], [631, '传真文具办公用品批发'], [410, '曹记食品店'], [1547, '唐记'], [3463, 'g'], [765, '271号'], [6174, '江苏炒货'], [899, '瑞英制衣'], [8727, '营业时间'], [2626, '急慢性扭挂伤经验丰富'], [4561, 'wesom'], [1764, '欢迎光临'], [1658, '68329118'], [1355, '人防门'], [3262, '通体大理石'], [259, '红焖狗肉'], [6110, '广场'], [5729, '华铭印健'], [1315, '主营婚宴喜宴喜饼电话15'], [2107, '净水机'], [5142, '13'], [4073, '永发'], [3819, '机械'], [4601, '炭知味'], [1088, '书香景苑物业管理服务中心'], [2728, '天翼'], [5139, '助残志愿服务基地'], [2756, '新潮发廊'], [8737, '停车场'], [8049, '正宗飘香鸡'], [3635, '缝纫订做床上用品'], [2506, '专业值得信赖'], [6027, '助人育人与奋斗者同行'], [2082, '形象设计'], [2215, '炒饭'], [7575, '遇见餐厅'], [5473, '沙县小吃'], [3367, '平价超市'], [6500, 'bakeddurian'], [7866, 'a9'], [1511, '13327783161'], [6771, '金川'], [5948, '电话03163318564'], [4769, '周一二四五'], [4036, '康馨家'], [2303, '电话5975078959754421手机13601863296'], [7306, '烧烤时代'], [2000, '丽涵'], [9172, '提0大'], [3023, '易鑫商贸有限公司'], [6134, 'honmeu'], [7984, '麻辣鲜香好吃不贵'], [9796, '专业组装电脑维修电脑上门服务'], [2538, '辣妈服饰'], [5131, '中介'], [6643, '开关'], [1782, '针对颈肩腰腿痛风湿骨病'], [4547, '扬帆托管'], [6238, '御族缘'], [2061, '易'], [1311, '厂'], [5976, '浴室'], [7026, 'express'], [6292, 's'], [7941, '招牌广告印刷'], [7934, '出兑'], [1118, '液压破碎锤设备'], [1814, 'fefeshoop'], [6804, '上二'], [3818, '沙县'], [5796, '摩卡'], [897, '原汁原味正宗地道绿色健康'], [4747, 'j瑞丽发道'], [3446, '德聪'], [9731, '淞虹路一00五弄'], [1491, '地址文百'], [4866, 'chaulturalkartisticasset'], [8285, '上海市物业管理行业协会'], [5708, '适用于小松komatsu日立hitachi神钢kobelco沃乐沃volvo斗山doosan'], [9552, '紫光药业'], [2694, '雷'], [2957, '专卖店'], [7790, 'cp'], [1506, 'lottery'], [4755, '天和气球装饰'], [3995, 'sinain'], [2241, '上'], [4140, '配菜刀'], [3740, '承接宴席'], [2448, '珠江'], [9694, '酒十路报名点'], [3332, '批发'], [5196, '销售真'], [528, '地址a区26号电话15834115655'], [649, '匹克棉衣99元件150元2件'], [3362, '八大功效'], [8402, '教辅教参办公文具电话65928164'], [3930, '储藏室24万11'], [766, '特别加'], [7456, 'gp'], [7844, '老唐早餐店'], [2022, '维修保养各类进口国产轿车发动机底盘电路空调钣金喷漆13955718062'], [5960, '红焖驴肉'], [2781, 'y330'], [1576, 'iphone7'], [2688, '刺青工作室'], [4770, '豆浆'], [3303, '仟滋牛杂'], [6966, '66526569'], [4929, '汇贤府'], [4483, 'kcbc'], [4372, 'no14020029'], [4099, '地址武汉市青'], [4157, 'walnuttree'], [6640, '饮料'], [430, '812'], [5865, '人民来访接待室'], [5175, 'ctnr健身工作室'], [9326, '玉茗阁'], [5229, 'beifang'], [6591, '有色宝石'], [6035, '面条水饺'], [5893, '热线电话021689421501331169109713918159319'], [1662, '压神'], [4365, '电话4822566'], [8699, '新更家万'], [6233, '振龙箱包皮具城人口'], [7868, '晨光火'], [9462, 'hu'], [2127, '安心早餐'], [1041, '燕高星'], [1012, '0'], [5149, '办公室'], [7398, 'epyoung'], [6715, '消防器材'], [1396, '因为是小鸟所以在高'], [2928, '冒菜'], [7899, '盖饭'], [2083, '凹陷修复'], [9692, '518岁'], [9529, '797'], [8373, '连锁超市'], [4380, '羊'], [527, '地热13523961161'], [7231, '奶粉'], [560, '话60576033'], [407, '充电宝'], [950, '192ko特色治疗各'], [6226, '美'], [416, '无限极'], [8234, '河大曲'], [4015, '控'], [5325, 'chinawelfarelottery'], [1256, '展厅'], [4953, '理想洗'], [7957, '1005'], [2309, '源地'], [8299, '森迷专卖'], [8389, 'lily'], [5809, '378'], [5719, 'mg'], [9932, '江苏红旗电缆'], [1462, '烟酒'], [8513, '圆通速递'], [5284, '秀域'], [4485, '长城电动工具'], [7009, '95338'], [7276, '的服务满'], [9945, '绿地金融'], [1390, '电话13914321822'], [6856, '鼎睿轩'], [4260, '厂家直销'], [6173, 'huastation'], [7272, '福'], [4059, '国珍健康会所'], [9112, '转'], [1606, '联系电话15921966906'], [851, '尼彩手机工厂店'], [1675, 'chinamobie'], [5507, '批'], [198, '164'], [6312, 'shanghaihengyuanmarneequipmentcompany'], [2599, '洁超市'], [2034, '华帝'], [441, '福'], [6461, '三府'], [977, '搏客'], [7137, '电话13878063479'], [2731, '广州市侨港汽车服务有限公司'], [357, '麦田房产'], 
[7977, '中加科洋'], [5478, 'd000002650'], [1002, '进口滤纸专业制造'], [6814, '兴业银行'], [6026, '超'], [9888, 'tattoostudio'], [9538, '轻质砖隔墙'], [3549, '269264213881134688'], [7092, '集成灯'], [4638, '修电动车'], [5177, '金盛大厦'], [9163, 'kanglee'], [4436, '2送3'], [9996, '279'], [1343, '乐园店'], [8539, '广度策莉连锁药店'], [7939, 'dun'], [6661, '电话13940802791'], [3312, '新征程'], [508, 'no沪r1603036'], [5618, '茶园'], [9162, '号30020'], [4205, '栗记烩面'], [8192, 'wapein'], [7988, '大城小厨'], [3548, '徐小厨'], [4558, '电话03513632469'], [5313, '中百工贸电器'], [9836, 'shanghai'], [371, 'telecom'], [7315, '华莱士'], [1543, '沙县小吃'], [6040, '汗蒸'], [1096, '蜀湘园'], [4339, '杜氏骨科'], [76, 'pizza'], [7722, '服务员'], [3693, '室内外整体装修'], [7387, '杰西克网咖'], [6944, '乐晨卫浴'], [80, '凡克猫'], [9485, '签约包治祛斑祛痘'], [691, '复合式'], [4930, '晓靖家宴'], [5066, 'smile'], [3785, '96885'], [9647, '电话'], [3064, '你家阿尔法才e'], [5990, '欢迎光临'], [4604, '016岁女生男生特价店'], [3889, '美容养生减肥纹绣'], [242, '五星购门390821180'], [9343, '镇中路'], [8777, '宏星窗帘'], [575, '香花桥街道郏一村二00二年八月'], [9831, '北京二锅头'], [8016, '主营钢材水暖彩钢瓦'], [2458, '台头综合商店'], [7376, '鲁西肥牛'], [2652, '黄焖鸡米饭'], [5933, '蛋炒饭盖浇饭'], [8955, '电影花苑1号楼地下室一层'], [6132, '富易地产'], [718, 'sunflower'], [2270, '13783849126'], [2016, '中国民生银行'], [5817, 'haolue'], [1917, '18221537208'], [1452, '电话13731002178'], [8020, '君子'], [8107, '公共卫生间'], [7017, 'more'], [2773, '服务热线13924282625'], [3296, 'hbxt1710002'], [1542, '馨爽滑饺子王'], [8190, '酒业'], [1128, '门唐巷超市'], [286, '纸袋'], [9127, '羊火锅城'], [9300, '营业时间08230120013002100预约电话13512220927'], [1136, '沙县小吃'], [9192, '黄砂水泥'], [7242, '招聘'], [8923, '阳光'], [367, '旭泰内衣'], [7107, '云南菜'], [2141, '蓄电池'], [1838, '中国体'], [3830, '梅陇西路'], [4409, '塑胶地板'], [5282, '周祝公路2216号'], [3612, '7装18928806898'], [7666, '北京热力'], [4346, '上海淘气宝'], [3190, 'com'], [5727, 'g'], [9853, '中国邮政储蓄银行'], [5711, '烟'], [1999, '214'], [3123, '防安全防火墙改善全社'], [9906, '禁止鸣号'], [9110, '国学托管'], [2922, '风味正宗'], [8637, '向i'], [1163, '电瓶机油超市'], [3044, '卷帘门'], [6434, 'anma8613570016004'], [7473, '宁波埃美柯铜阀门有限公司'], [8721, '紫蕊星'], [8811, '苏威漆'], [1024, '日用百货'], [8092, '世界500强'], [4060, '伸缩门活动房高档隐形纱窗电话1506220831115295558665'], [9881, '华日烟酒超市'], [588, '国珍健康生活超市'], [1352, '信阳菜馆'], [9642, '电话03947866662'], [2125, '阳烟花炮竹'], [2306, 'brother'], [6671, '碗工'], [7367, '楼顶大字'], [1688, '热力站'], [4636, '95554'], [7980, '女人街'], [4677, '明家老铺'], [6015, '科技企业孵化司'], [1379, '九周年店庆感恩大回馈'], [7478, '广式云吞面'], [2514, '电148123778'], [7312, '大众堂'], [8612, '中国电信义乌分公司'], [2119, '泸定路339号'], [1927, '成广告'], [5611, 'gue'], [2111, '有限公司'], [8158, '送10手机13868538396'], [8450, '龙腾狮跃礼在三林'], [1654, '主营中策电'], [9219, '广州市黄埔区鱼珠街'], [6393, '证书奖牌'], [817, '废边角料等铸造炉料'], [8859, '华联'], [5804, '安装日期二0一四年十一月'], [8194, '有电危险'], [7301, '福'], [987, 'adidas'], [8793, '鲜鱼宝宝早教中心'], [3195, 'vivo'], [7125, '组合音响'], [6836, '楼宇党建服务站'], [2271, '通系绕方紧解决者'], [3251, '食品公司'], [5581, '磨头广生车行电话55427'], [5200, '炎腰椎间盘突出症小关节'], [2456, '中国工商银行'], [7647, '交持发展'], [5416, '金'], [1535, '康泰'], [4128, '工厂直销'], [2238, '园'], [279, '泳具泳'], [5672, '烧烤'], [3615, '靓尚e家'], [7532, '版权'], [9987, '香包'], [9973, '条幅锦旗画册宣传单展板展架名片'], [9723, '街最奶茶'], [9889, '133'], [1528, '第四中学'], [7673, '二楼'], [5361, '视行专业音响导航'], [9216, '补胎充电修理'], [1808, '球娱乐厅'], [3179, 'xxqr2018'], [5410, '易视界'], [4654, '土木工程1358181056518810466056肖师傅'], [1350, 'shanghaikangyangbaojiecoltd'], [9237, '招商热线line'], [7180, '韩城跆烙馆'], [6341, '服外'], [3352, '静时光咖啡馆'], [6480, '请上305室'], [1827, '避风塘'], [5192, '金逸影城'], [7055, 'w'], [6125, '大清仓'], [4931, '湖南纯粮酒坊'], [7316, 'on'], [7756, '选择鸿泰货款更快'], [3284, '劳动争议'], [6934, '中国体育彩票'], [6232, '雅'], [431, '芊草堂养生'], [1083, '中国共产党广州市海珠区海幢街道'], [2281, '8235080'], [7379, 'amani'], 
[4079, '青德'], [7645, '司标酒质'], [6690, '我爱我家'], [7265, '花儿音像'], [8375, '策魅力女'], [2467, '重型汽车点盘修理'], [6088, '会员之家'], [9165, '07714846536'], [7864, '炒粉炒'], [6307, '瑞泰人寿'], [6948, '佳明机电冷气维修部'], [9649, '亿康丽'], [2149, '派出所警务室'], [5679, '承格新能源'], [5165, '加盟热线4000213'], [3874, '他汀钢片优力平买三多一赠'], [5222, '健身'], [585, '斗牛体育'], [271, '更多'], [1569, '杨三婚庆图文广告'], [423, '请上三楼'], [9902, '品螺蛳粉'], [8462, '联系电话02034103783'], [7625, '5089'], [9074, '红豆'], [4704, 'oppo'], [9392, '手机13605051164'], [2989, '专车专用导'], [8184, '协媳好时尚自助火锅'], [46, '海淀区车道沟南里社区卫生服务站'], [2807, '水晶字发光字不锈钢字'], [9486, '钢不锈钢钢结构等室内外装饰工程'], [5539, '联华'], [3199, '细选'], [9506, '汶家'], [7114, '2003年2008年'], [1309, '上海卢浦进修学校龙营路校区'], [1419, 'ems'], [4391, '285'], [4081, '惠众玩具'], [7030, '粮油开业大吉'], [8038, '安'], [590, '湖北省工商业联合会'], [7600, 'tubiao'], [1074, '花光烙饼拌汤村'], [411, '平阳路111弄'], [3374, '潘多拉魔盒'], [1084, '迎器'], [2237, '杨图福'], [6485, '兴安盟景'], [7724, '佛山市顺德区'], [6296, 'hagpy'], [8460, '73838986'], [1014, '闲人免进'], [408, 'china'], [5213, '冰红茶'], [6476, 'chinamobile'], [3864, '力衣站'], [8623, 'faculty'], [5886, '五福缘酒楼'], [4908, '4697555'], [3638, 'pielipara'], [716, '天天都是特价'], [9095, '麦儿香面制品'], [2654, 'youshe'], [870, '凉皮'], [1386, '国际花园'], [2055, '周一至周五'], [7989, '我家酸菜鱼'], [4542, '162'], [775, '粮油鲜蛋'], [3857, '经'], [3898, '欧诺'], [484, '广东省防伪行业协会'], [6596, '4g'], [8897, 'house'], [4957, '小郎酒'], [2676, '各类干锅'], [621, '快餐'], [8001, '蒸馍店'], [9233, '928'], [6535, '众凯健康之家'], [4289, '品'], [9242, '达人'], [1993, 'drplant'], [7329, '阿玛尼优洗'], [6561, 'lcishnsanking'], [8113, '强力巨彩'], [5542, '童'], [7608, '留佳那禁年的味道'], [4816, '宽带'], [305, '曼天雨'], [3058, '电话6271828715221248380'], [1198, '一室一厅'], [4352, '6094555'], [2545, '具'], [7335, '爱芳'], [2974, '帝造绿色的世界encch人造草'], [4153, '工脚手架出售'], [2647, '中交四航岩土工程有限公司'], [5281, 'napistam'], [7978, '驻素社街检察室'], [4975, '电话581812085085970213818080988秀沿路313号方舟建材市场b区910号'], [4632, '潮数码'], [3912, '伴依'], [7326, '残疾车免费'], [1777, 'here'], [9060, '雅'], [3741, '新鲜卫生'], [8491, '复印'], [9652, '鲜'], [906, '15893273118'], [2408, '柳谐'], [4897, 'supor'], [6213, '烧烤海鲜专门店'], [8064, '批发'], [3229, '敦和综合市场'], [7139, '下午14001700'], [7010, '桃x醉'], [9042, '13'], [6525, '办公b'], [6405, '会所'], [200, '099'], [5126, '手机1'], [5887, '知己好茶'], [8687, '弘文广告'], [6073, '买1'], [7747, '武大众厂乡风味'], [8295, '电话15034009688'], [8195, '奶粉专卖'], [3733, '490078'], [4731, '电话3237103'], [6319, '3l'], [1625, '诱费'], [7542, '浦东东方有线六灶营业厅'], [3491, '东佳信电缆'], [6530, '土鸡土鸡蛋出售内有小番茄无花果'], [2581, '441'], [540, 'gb'], [9520, '联系电话075782214164'], [660, '黄焖鸡米饭'], [6070, '寿司'], [7843, 'kom'], [8656, 'frayfishrestaurant'], [5531, '衣时尚'], [7534, 'ni'], [7971, '调漆中心'], [9967, '小苏断桥门窗'], [9747, '三十三针打揽拉橡筋拉条拉裤头人字游花'], [4498, '百'], [4976, '电话23568661手机13055637086'], [1774, '中国福利彩票'], [2380, '打字复印'], [5029, '立森白沙营业厅'], [3776, '1805889053'], [3298, 'postal'], [8627, '长乐路'], [7732, '家常大馅水饺'], [2802, '叉车辆吊车'], [5867, '绿叶为美而生'], [4910, '电话15827502884'], [5659, '印象小厨'], [894, '广州市荔湾区金花街道'], [6362, '喷绘写真'], [3443, '格流物大世界b12021'], [9243, '修脚'], [9863, '15807189616'], [2201, '光钟表'], [205, '二'], [2889, '京九'], [3934, '世调味'], [3097, 'rhome'], [6288, '药'], [422, '长虹'], [5140, '海珠区'], [2870, 't15900932227'], [8132, '北京大饼'], [3203, '羊奶专卖连锁'], [1133, '衣网琴深'], [8563, '严禁驶出'], [1121, '电话52271708'], [9555, '电饭锅电磁炉塑料制品'], [2997, '酒窝'], [171, '按摩'], [8369, '威乐斯'], [6024, '周苗便利店'], [42, '欢'], [3304, '美容美体瘦身'], [9353, '早点'], [1330, '晋南特色'], [9425, '小琴羊绒羊毛店'], [9561, '上海广佳信息技术有限公司'], [9917, '定慧北里一居社区'], [2842, '眼镜'], [6105, '尚美容美发沙龙'], [6243, '贵的'], [4787, '文明小区'], [6034, '149'], [4270, '佳伟'], [3236, '乐趣足道'], [1629, 
'internationalexpression'], [5163, '小饮吧'], [7331, 'arthome'], [6957, '周六周日'], [8600, '京世salon'], [2460, '海海健街器店'], [4247, '纽福克斯光电科技上海有限公司'], [8525, '999'], [5913, 'toilet'], [6735, '吉祥小'], [943, 'brgn'], [5728, '一贴式婚庆广场'], [3054, '椎间盘突出颈椎病肠炎'], [6943, '31'], [1835, '盖浇饭'], [2875, '冠马横特道具品牌研发中心'], [8484, '永平置业有限公司'], [4625, '一车一杆'], [9851, 'servicebanking'], [4090, 'ytt'], [1578, '556417655518175381996'], [6921, '9438288282'], [1520, '地板'], [7292, '隆祥五金水暖商店'], [3140, '祖慈肩颈'], [9992, '天翼互联网手机卖场'], [2579, '彩婷内花'], [2901, '易发卷闸门经营部'], [3784, '外贸童'], [8919, '因外出学习'], [1089, 'toilet'], [1219, 'ere'], [2645, '1002'], [8709, '久'], [1471, '专业装裱'], [841, '广州市海珠区海幢街道'], [8747, '天然大理石电视墙'], [4830, '竹纤维'], [4203, '水印'], [2025, '家乐福'], [1443, 'ping'], [9797, '思达沃歌健康管理中心no005'], [8628, '锦生堂'], [4419, '旺达酒'], [957, '来'], [8465, '林盛红木馆'], [6180, '28627'], [8168, '大燕金店'], [5051, '13592219629'], [9021, 'dr'], [7513, 'ooon'], [132, '第一品牌'], [2154, '文化艺术教育培训中心'], [7813, 'tel15170367915'], [4678, 'pinjunexpress'], [8209, 'jiyangiifiae'], [6945, '瑞泰酒店'], [14, '酒水'], [7740, '寄卖'], [7846, '丽华'], [3324, '鹿溪饲料店'], [3563, '235'], [4640, 'chigo'], [6800, '中国门业十大品牌浙江省著名商标'], [2516, '上海卓波鞋业有限公司'], [8336, '形象设计中心'], [7414, '佛山市'], [3107, '时尚女鞋'], [4198, '特约经销'], [1090, '清真'], [1416, '上海速腾汽车服务'], [203, 'img'], [2722, '原轮胎修理部'], [1817, '本小区装有电子监控担头'], [5615, '一车一杆损坏赔偿2000元'], [2740, '婚庆一条龙'], [1303, '卷饼秘制鸡柳'], [6270, '手工水饺'], [8982, '皮森'], [1978, '价目表'], [7345, '海磨具'], [5571, '手工面'], [8672, 'ya'], [2635, '金融'], [333, '中国邮政'], [733, '全屋家居定制馆'], [1156, '本店专业订做男女西服羊绒大衣衬老'], [2447, '水果大卖场'], [5899, '钓鱼'], [9008, '中交上海三航科学研究院有限公司'], [8829, '山西顶音文化艺术'], [2776, '蒲草把子肉'], [1016, '向里20米'], [7760, '鸡眼灰指甲'], [5061, '居民委员会'], [7551, '育山'], [6095, '女装童装'], [4343, 'versace'], [1858, '部'], [1292, '0时代文体'], [8263, '481'], [9250, '老樊家串串'], [6872, '烫染造型'], [6352, 'yadeo'], [4480, 'cc'], [2411, '巴西'], [7562, 'wewanjia'], [7087, '专业公司命名店名人名改名起名'], [3956, '上门取件'], [5020, '知己好茶'], [6443, '美发'], [8906, '上海市电力公司'], [1481, '健康促进咨询服务点'], [1643, '免收加工费'], [3331, '门'], [882, '267'], [3103, '原广西大'], [6772, 'stategrid'], [3836, '慧君'], [4194, '治安'], [4712, '品蓉艺术美容spa'], [9745, 'hour'], [9667, '凉皮米线肉夹馍电话13837261065'], [4633, 'health'], [9264, '啦'], [921, '水红砂罐碗'], [6092, '创新'], [4304, '标杆教育卢湾分校'], [3877, 'ktv'], [5239, '青岛啤酒'], [9232, '特步'], [1552, '鸿强批发部'], [7924, '高端婚纱摄影机构婚礼策划儿童摄影'], [4377, '品'], [1696, '红牛'], [7343, '24h'], [2169, '处理本店主营'], [4994, '15975595470家直销'], [709, '复兴路32号社区服务站'], [2013, '复印'], [4796, '安局'], [1238, '电瓶批发部换电池'], [2864, '105'], [2727, '方'], [8604, '永'], [392, 'vivo'], [6511, '电话15539008555'], [145, '3027'], [1061, 'sweetwoman'], [6264, '滋味小厨'], [3326, '159790745'], [4589, '瑞丰配件'], [6551, '欢迎茶发前来品鉴'], [2950, '京东到家'], [2007, '建设电动车'], [149, '上海艾美克电子有限公司'], [7065, '专业维修手机'], [3775, '饰品'], [5194, '美宜佳'], [5180, '奔德利名车维修'], [1048, '中药理疗'], [5153, '社区服务站'], [1013, '施冲百货综合商店'], [8981, '电玩城'], [9951, 'eroup'], [4511, '约凯撒比萨'], [9744, '2017102820171212'], [9080, 'ju'], [4535, '雅艾皮草服饰'], [8624, '北京中京北盛门窗经营部'], [257, '田林器旗'], [8427, '锌钢护栏'], [2245, '雪佛兰'], [9369, 'oppo'], [8716, '阿华水果店'], [6650, '富华'], [806, '监生18973765389'], [7622, '凯票专票馆'], [8722, '子女临时183515小时两班可转正'], [9596, '海河阳光爱心家园'], [5862, 'chemicalengineering'], [6147, '爆屏修复'], [7525, 'callback'], [9340, '移动'], [7519, 'teng'], [9487, '欧尚驿站'], [8313, '电话18665051911'], [9159, '美发'], [7422, 'iongwon'], [7717, '可上门提货'], [8221, '成人'], [2823, '奈形象'], [9545, '艺剪缘'], [7872, '主营高端系统门窗隔音窗系列安全窗系列套装门推拉门'], [8081, '15536983331'], [5464, '曼天雨'], [4261, '地址海珠区石溪蚝壳洲东街77号'], [3273, 'li'], [9291, 
'shangnal'], [6471, '黄浦区公共法队人民调解中心'], [8033, 'express'], [7450, '长江相馆'], [6068, 'xiao'], [1941, 'jassie'], [1952, '02032380455'], [5822, '招聘'], [6349, '韵达'], [2866, '上海淮胜贸易有限公司'], [1924, '万寿路街道保洁中心'], [6106, '恒公路药店'], [8688, '石科'], [5079, '356'], [2658, '超市'], [5461, '银座假日酒店'], [8626, '琴钢'], [2590, '白金贵之樽酒'], [7151, '冰淇淋'], [6209, '成人保健品'], [2457, '一庆灸'], [5761, '美艺廊美容美发'], [7405, '伟云锁具'], [4253, '本栋二楼电子厂招收女作'], [5577, '滋信'], [8750, '亚格厨电'], [5203, '天正'], [7800, '汽车24小时服务高价'], [2416, '便民玻璃门窗店'], [2331, '电池'], [6613, '广外街道二热社区'], [5437, '372'], [8217, '联城市热力集团输配分公司'], [2733, '灯饰安装'], [2084, '西湖粮店'], [9094, '英圆纽百愉'], [7799, '寻'], [5132, '仓储基地'], [6815, 'c区23栏'], [6235, '现包现做才好吃动'], [9662, '绿叶'], [2038, '1848'], [6000, '4g'], [4228, '朗顿男装'], [3156, '流花站'], [2120, '核素'], [8632, '限极'], [1713, '主营各种系列五'], [9843, '干休所食堂'], [2301, '泸'], [4993, '兴创汽车会所'], [3230, 'taiyuan战'], [6175, '百花'], [7483, '电话0278873250688732325'], [1624, '牛肉'], [4130, 'theeditorialboard'], [5843, '佳振典'], [6009, '订座1363305539'], [9196, '茶具'], [2091, '大食小亭'], [677, '三楼2667718'], [8653, '禾青源商贸'], [6107, '医创空间'], [9597, '订餐87335064'], [2426, '维修'], [5003, '上门服务'], [4668, '亚宾客房'], [9097, '洪达汽配汽修中心'], [293, '于屋'], [5807, '不锈钢卷闸门铁艺'], [855, '申通快递'], [9806, '电话58161540'], [646, '小秦'], [8556, '泰州市海康实验器材'], [5953, '学生指定配镜中心'], [9175, '美妆屋'], [1593, '广州花卉质量监督'], [7639, '飞扬旅行社'], [5223, '电话62662999'], [2606, '24102'], [9696, '刘老二烧烤'], [8226, 'more'], [6508, 'openi'], [1690, '声卡'], [3998, 'dreamfly'], [1261, '星达旅馆'], [5188, '房屋鉴定'], [4881, '国际'], [5997, '电话号码13511244943'], [7237, '省钱'], [5549, '新式热卤'], [5652, '欢快家人'], [7375, '宾馆'], [197, '13482326388'], [5835, '专业工具批发及维修'], [6447, '兼营音响设备出租'], [1884, '火疗'], [3390, '回族'], [2601, '门'], [9817, '艺术培训中心'], [8322, '广安北滨路'], [4656, '4525'], [6101, '18m'], [1374, '花架大门隔断阳'], [743, '何塞阀门制造上海有限公司'], [5127, '电话13757232468禹越网661468'], [1866, 'timsstudio'], [6008, '知名品牌酒店'], [7609, '内设高中低档客店'], [1500, '电话15061012372'], [3347, '西城区月坛街道'], [5360, '东方名剪'], [6736, '鑫盛电脑'], [8210, '推拿按摩'], [3634, '上海同信机电有限公司'], [7313, '27'], [2113, '断桥铝门窗'], [3869, '恒会'], [2221, '打印复印装订'], [5482, '经营范围劳务派遣劳务外包劳务咨询'], [5869, '客运站'], [8140, 'funsoul'], [8339, '恒飞'], [1446, '24'], [5955, '121'], [5479, '安防行业领跑者'], [5314, '顺发装潢'], [993, '阳光花卉'], [3335, '中国戏曲学院'], [6590, '南店'], [3067, '顺达兰苑'], [7046, '新俊略鞋业'], [5522, '英菜壳润滑油'], [6080, '芳云烟酒'], [7580, '电话1839775832415197454024'], [3051, '您身边的厨卫专家'], [6423, '金阳光不锈钢'], [2077, '羊血'], [8358, '30'], [7681, '古井贡酒'], [4626, '文定路'], [6126, '话03517921088'], [730, '上海凤凰'], [9676, 'po'], [3669, '订餐热线'], [4514, '装饰开关'], [7206, '小丰阁家常菜'], [8850, '万家厨房'], [6774, 'h'], [5056, '中国工商银行'], [436, '十月'], [7637, '江淮汽车'], [725, '手机15921180151'], [7541, '卧'], [4582, '南园路99弄59号'], [1821, '合兴隆五金行'], [4680, 'oppo'], [2872, 'bank'], [9344, '各种锅电磁炉厨房'], [2428, '接受预定各种车型款式电池置换电话18761543869'], [126, 'sipe'], [2530, '中小企业孵化基地'], [1877, '59'], [8749, 'lanzh'], [8898, '榆树'], [7759, '彩虹'], [4133, '电话0208237160882371098'], [3095, '汽校招'], [8198, '欢迎光'], [1775, '美国泰克轮胎修理'], [814, '爱伊文具'], [6306, 'dandan365365'], [5256, '品骏快递'], [2140, '直营店'], [2682, '快捷'], [7127, '坊艺唐'], [8584, '世纪苑'], [2192, '上海合愉劳务派遣有限公司'], [3080, '平价'], [4297, '六里坪营业厅'], [4158, '出租'], [3457, '18971530500'], [8555, '新瞿溪菜市场'], [8034, '女洗手间'], [4000, '176219'], [1848, '5531'], [5436, 'l'], [1885, '手工面馆'], [1291, '男人帮'], [9295, '颜范医美'], [1262, 'mahoganyship'], [3821, '4008117600'], [4748, '陈记炸酱面馆'], [7925, '川汇环卫'], [318, '建之佳'], [9058, '13986043'], [2209, '汽车空调'], [2438, '综治工作站'], [3853, '三枪'], [5366, 't'], [9103, '天盛汽配'], [1630, '格力'], [5032, 
'ruitaihotel'], [6062, '铝合金不锈钢钢构工程13903704633'], [3594, '俏厨房专卖店'], [9975, '电话李先生15327345605双'], [4290, '恒源机电'], [9716, 'benhui'], [662, '电瓶'], [3148, '橱柜品牌中心'], [469, '跆拳道散打'], [5425, '法国柏宁'], [2115, '平世nehome'], [1600, 'zto'], [8383, 'eli得力办公'], [531, '好再来'], [45, '精品'], [7245, '布艺窗帘直销店'], [9239, '河南一防技术部'], [3610, '家电维修'], [6330, '多书o'], [7366, '有礼北京人'], [6328, 'oniessor'], [6825, '药物依赖治疗中心'], [9942, 'ktno2014635165'], [7677, '报名处'], [8010, '广州州市东刚中学'], [3989, '电话18234015618'], [8791, '空中充值'], [3280, '禾石园'], [5556, '造饭师'], [9332, '曼大冷面'], [5739, 'creditharmony'], [7431, '菊兰美发'], [5678, '城19号2'], [2366, '上海烟草网络销售单位'], [5498, '正宗牛肉板面'], [5541, '保利菜馆'], [9521, '高价回收'], [1071, '印社'], [3198, '全场'], [7399, '450'], [8060, '新读书院'], [4789, '充值卡u盘等手机配件'], [486, '削'], [9246, '剪神'], [5945, '旗舰店'], [9469, 'me'], [4874, '蛋黄卷7'], [936, 'helx壳牌喜力'], [803, '旅'], [7590, '1009密胶餐具国家认证密房行业高端品牌'], [5952, '总代理'], [4692, '莓双寸有范'], [3376, '15942046034'], [30, '武汉全华光电科技股份有限公司'], [4421, '595'], [6900, '斯柯达'], [5579, '美鲜美可'], [7464, '大同鞋业'], [3052, '赵军家私'], [4371, '东京商城'], [3124, '推'], [1426, '联系人刘先生1370174633'], [7424, '福'], [5082, '专注烧鸡三十年'], [8609, '福彩'], [2931, 'yesocksline'], [2915, '丁丁五金工具批发部'], [6478, 'h'], [4495, '745'], [7626, 'goyy'], [9507, '私人摄影机构'], [2510, '流量'], [3526, '中国移动'], [719, '狗年大吉'], [840, '雅欣特种专机'], [905, '全日'], [3188, '欢迎新老顾客光临永洪串串香'], [9357, '76'], [8246, '58'], [2763, '香锅'], [227, '染168元更'], [752, 'uageeducationpress'], [2179, '200m500m1000m'], [3378, '经营单位广州电子泊车管理'], [4259, '一邦豆一'], [6884, '手工水饺擀面皮炒面条'], [1715, '235'], [3447, '炒酸奶鲜榨果汁水果携手工'], [425, '从虾开始'], [3459, '老林钢圈店'], [4628, '老黎'], [5372, '正常营业'], [7906, 'kungfumdlatang'], [6263, '潭平'], [9057, 'cac'], [2197, '艾米商务客房'], [907, '十元三双'], [3438, 'meiningjinrong'], [789, '中国体育彩票'], [772, '设计共和销售'], [8931, '1508891'], [4752, '友润化工'], [8179, '卷帘'], [7933, '酒店'], [7042, '地址合川区喜'], [1029, '旺旺制衣厂'], [6055, '重庆小面苏式汤面'], [6192, '电话15823413081'], [8476, '用品'], [1141, '智能家安保系统'], [7675, 'no0000027'], [1898, '立马电动车'], [4033, '上海农商银行srcb'], [5159, '社区服务站'], [1170, '黄金'], [6542, '胖胖水果店'], [5959, '我蹈考级单位'], [9712, '换电池'], [4114, '农家生鲜'], [4083, '油压'], [7504, '视频'], [5596, '地址'], [8320, '岩茶'], [2208, '公用电话'], [5235, '鑫希国际'], [4214, '和颐商行'], [2137, 'huaw'], [9629, 'oniu'], [2401, '开心宠物生活'], [3093, '鑫佳门窗'], [2670, '武汉鸭脖王'], [6427, '配件刷机解锁'], [3979, 'meetingpark光断宠物公园'], [8502, '嘻范'], [1158, '快照'], [7784, 'toyota'], [1117, '悦顺电动轿车'], [4694, '请推车慢行'], [4966, '茶具茶叶包装批发'], [3561, '贵州羊肉粉'], [2474, '兴达烧鸡店'], [7048, '新外大街南社区警务工作室'], [5592, '宠物医院'], [7798, '上市品牌股权代码202743tel13255667928'], [3450, '湖北'], [777, '中国移动'], [1655, '仁源生医院'], [2259, 'qingpudistrictofshanghaiteachingpracticecenter'], [6563, '设计制作安装一站式服务'], [6279, '永信建材34164933'], [9339, '钢铁路店电话2211222'], [2934, '王记'], [5896, '蔬菜供应'], [9961, '水云间花卉园艺'], [2900, '电话15189904086高港专卖店wfi小已覆服'], [2631, '宜园大酒店'], [7086, '本小区'], [3118, 'researchcenterforgovernmentbylaw'], [8433, '各类精品羊绒量身定做精工编织'], [3541, '各类名车维修保养'], [8385, '量体裁衣个性订制'], [8215, '15814881'], [1699, '计生用品'], [1851, '订水热机8加加盈店'], [5824, '张'], [3654, '现价698元'], [668, '北京市丰台区5202号5335163613051413292'], [7062, '批发零售剪庆订购电话15538611499'], [2551, '社区卫生室'], [9675, 'shanghaichonggubusinessdevelopmentcoltd'], [4031, '花圈鲜花花篮'], [4740, '万达旅业'], [6118, '店面'], [9436, '包办酒席'], [2812, '品'], [1086, '北京金融大街证券营业部'], [4743, '主治'], [5078, '印巷'], [2990, '劲霸男装'], [3993, '文绣'], [9541, '二手'], [225, '首尔小站'], [5608, 'jinpeng'], [3684, '苑'], [9866, '利农商业'], [9495, 'riza'], [3852, '芬谷满天粥铺'], [1781, '保健'], [6566, '8302000'], [9296, 'adidas'], [1066, '空夹胶之术外装饰工程'], [6725, 
... [output truncated for readability: additional [index, 'text'] annotation pairs (Chinese sign/label strings) omitted; the raw listing continues below] ...
'招租热线'], [7415, '定做安装消音卷'], [8943, '18898813887'], [2508, '美甲'], [4938, '兰州拉面'], [6693, '黄埔区岗'], [9968, '抛池静电纸'], [9958, '集成吊顶'], [3082, 'led照明'], [4171, '英子18620698034'], [4429, '栗小姐家'], [6100, '福利彩票'], [8809, '前爱快位'], [941, '21号'], [9399, 'panasonic'], [5695, '站点'], [6870, '音响'], [3813, '入口'], [8440, '丹体育'], [206, '256'], [6424, '地址万水装饰城1区9号'], [361, '瑞晶宝'], [5455, '永昌摩托车配件'], [3882, '工工ne'], [5572, 'c字'], [4358, '美甲拔火罐刮痧艾灸采背泡脚'], [4643, '小儿'], [478, '武汉石化'], [509, '莎莎社区饭堂'], [9292, '惠便利店'], [1678, '内衣红豆家纺'], [1963, '河酒店'], [8125, '眼镜'], [4607, '品牌工厂'], [9924, '大众餐馆'], [9666, '大冶市国药'], [6862, '酒茶生'], [6372, '1326'], [5780, 'e'], [4219, '施耐'], [4341, '协同分中心'], [4299, '擀面皮'], [180, '承接餐饮住宿培训会议'], [389, 'threegunliyingconsept'], [2895, '开放时'], [1043, '佳达货运东兴防城钦州北海合浦灵山'], [3476, '金昊丰装饰'], [3596, '竹'], [5364, '睿晟en'], [9102, '电话13775788326'], [3822, 'art'], [123, '05335555112'], [629, '换油保养'], [2289, '午餐精美副尖盖饭擦尖夹口凉菜家常小炒'], [4516, '福鑫记'], [1348, '李猛铁制工艺'], [7646, 'samsungoppo'], [8238, '梵廊朵'], [9190, '震霆装饰'], [4006, '浴场宾馆'], [4882, '彩色输出工程图复印文本装订印刷绿地国际店'], [4310, 'mandy'], [5591, '服装商场'], [3024, '公主坊'], [8880, '经营项目内科电话15151262639'], [5988, '轮胎轮辋'], [1632, '一'], [3735, 'no1001石台路店'], [2730, 'hodo'], [6380, '快照'], [4160, '福奈特'], [6763, '电话18694523893157'], [1092, '美的中央空调'], [7153, '企业集团'], [391, '我的游'], [4374, 'jademansion'], [2881, '价目表'], [6757, '56095'], [5414, '奥迪奔驰宝马车灯低件成'], [1271, '地址万水装饰城电话189349165513'], [2513, '鸭太阳能'], [877, '千品通讯'], [5113, '业快修'], [6711, '福场路'], [1752, '中进店有惊喜欢迎进店选购引尚堂中'], [4565, '湛江乡下土猪'], [7128, '送餐电话13521991577'], [4257, '全国连锁专治皮肤'], [9443, '红'], [2450, '63'], [6731, '学生用品'], [9064, '有家您的置业专家66808008'], [6681, '电话13781223539'], [8933, '白马中场'], [5516, '海鲜烧'], [9887, '15'], [7589, '换3'], [6541, '中国共产党'], [9956, '太原市晋源区药厂社区卫生服务站'], [7755, '换超威电池'], [6188, '程氏快装'], [4398, '泰山区卫生局发'], [5317, 'suembroiderygallery'], [2905, '招商管理'], [8799, '全场2'], [256, '浮来青国旅'], [9225, 'kelon'], [313, '柯尼卡美能达'], [2633, '鸿美宝回卫城'], [750, '定点药店'], [6625, '665'], [3254, '中华老字号'], [7564, '72146720'], [5583, '图文快印'], [5047, 'wantfitness'], [1868, '时尚文具礼品饰品'], [4048, '成天面庄'], [1365, '上海爱杰劳务派遣有限公司'], [1518, '432'], [9035, '外卖'], [2853, 'rawmaterial'], [5876, '欢迎光临'], [7805, '各类电线'], [7317, '美佳润换油'], [5491, 'cake'], [1312, '上海市科技创业中心闽行基地'], [2517, '地址金'], [8078, '加1元多1件'], [5374, '源自深层岩石'], [8517, '85375'], [5626, '13102356471'], [561, '移动便利店'], [1432, '正新鸡排'], [7596, '汽车空调清洗'], [5117, '恒昌摩托车电动车维修部'], [9816, '方正宽带'], [3135, '怀'], [3396, '优嘉'], [319, '电话1553307384413832018861'], [3747, '二手厨具家'], [5892, '大型弹花'], [4803, '乐平商务宾馆'], [8572, 'nk'], [2068, '晾苑店'], [7997, '35线'], [5129, '歌士顿电器行'], [9702, '特色餐厅'], [1731, '香逢酒业'], [5232, '武汉理工大学'], [9314, '信通手机'], [5736, '朋来阁美食'], [816, '188'], [1447, '28'], [7695, 'transportation'], [5367, '万家灯饰'], [6961, 'i'], [3627, 'zcrubber'], [94, 'pudonglogsticsbranchjinhuiwarehouse'], [6851, '欢光'], [6127, 'm'], [9837, '照相机'], [537, '南南'], [412, '申请平安通'], [2395, '新鲜'], [5038, 'truetzschlertextilemachineryshanghaicoltd'], [8840, '友丽拌饭'], [454, '陵美酒'], [4681, 'd00000823'], [6726, '鑫威毛织厂'], [4996, '驳接爪'], [8417, '电话8468830'], [1239, '美发美发'], [5766, '请走西门'], [4206, '广州市振戎燃气有限公司'], [7550, '电话13857250761'], [5645, '邦妮佳家政服务有限公司'], [5588, '13934601796'], [5691, '誉防水'], [1380, '电话13917807870二楼'], [9048, '香港福来林爱心板材生活馆'], [1286, 'vvivo'], [5426, '排毒养颜'], [460, '卤肉'], [6639, '联系电话64'], [1882, '2896号'], [5278, '城润滑油'], [7071, '电话18841169528'], [3150, '燕琴舞蹈服饰'], [1134, '办公室'], [2963, '批发防盗门钢板门实木门钢木门不锈钢门玻璃门工程门卷闸门'], [7839, '玲珑包子铺'], [3293, 'bank'], [7958, 'ncn'], [2882, '进风塘'], 
[761, '丽岙店'], [1929, '楼下'], [654, '1'], [7708, '9554601061573333田村分拨中心负'], [9517, '明志摩托维修'], [1329, 'ainakidscomoanyltd'], [9659, '请勿泊车'], [3049, '烤穆'], [9698, '精品男装'], [8106, '老干部活动中心'], [6738, 'antui'], [5752, '国通快递'], [6953, '兰芳照像馆'], [2679, '华昌办小'], [4337, '水泥河沙'], [6946, '广州市工业大道南瑞宝茶叶城a2区46档电话18022879168'], [2584, 'yuanschuanchuanxiang'], [3671, '武汉三环球通'], [3977, '蔡甸特色牛骨头'], [5169, 'vip'], [5584, '嘉禾'], [8394, '老教育工作者协会'], [396, '青少年教育基地'], [7226, 'an'], [939, '神话网咖'], [9358, '7x799'], [8810, '报装热线1008616'], [8338, '擎丰风机'], [1018, '电话1380394603913673433288'], [8467, '小马修车行'], [2696, '康富源'], [4667, '1502年'], [3167, 'm'], [5747, 'yuexiulibrary'], [124, '娇婷美容会所'], [8480, 'aijia'], [3712, '经营部'], [4065, '河北天尚旅行社'], [9761, '窗帘'], [9645, '沙县'], [893, '东片工作站'], [4161, '斤年换新'], [6565, '善美'], [4306, '特色'], [9466, '德奥楼'], [9627, '00'], [8110, '房屋出租'], [6206, '美容院'], [6584, 'ocn'], [4150, '300x300地起壁别墅砖'], [6069, '朋子轮胎修补'], [896, '奶源发'], [9279, '天一嫁衣婚庆'], [292, '菜美甲'], [3765, '6xt18012'], [8884, '红柳炭火串烧'], [4520, '真'], [7182, '电话13408315588'], [6674, '加盟热线18270972260'], [5984, 'vivo'], [5821, '公共交通卡销售处'], [3256, '66号'], [7905, '君泽轩'], [5502, '庆祝香爵生活319号常宁店试业大吉'], [6446, '西北风味'], [27, '膜'], [9320, '九江农商银'], [1710, '汗蒸'], [9059, '良品铺子'], [9137, '单位'], [3441, 'eys'], [2226, 'haval'], [113, 'dur'], [5554, '38'], [6260, 'air'], [4592, '家木五金交电经销部'], [6899, '一园o回'], [9980, '电话13829422349'], [845, 'hj'], [9051, '房运美不动产'], [8978, 'medoca'], [2005, '13620561879'], [4719, '苏十市红老'], [3479, '豆豆烟酒'], [8945, '堡店'], [7198, 'ongtema'], [9330, '吉哥'], [5074, '190'], [2361, '酒店管理专业实训室'], [5884, '烟酒副食批零兼营电话15807198926'], [3680, '天竺工坊'], [1910, '加盟电话15196784366'], [8029, '张姐'], [3114, '梦之缘'], [6994, '为美广告材料'], [6686, '整'], [1497, '订餐电话15163369836'], [9269, '复印'], [3149, '注册窗口周五下午办公时间'], [6692, '传承韵文化'], [3670, '入网缴费'], [7577, '侨城社区居民委员会'], [128, '经营早巾晚餐承接酒席各种小炒丰俭由君电话15807479486'], [9907, '化黑茶'], [9170, '13533007202'], [6599, '提神抗疲劳'], [7080, '不锈钢饰品'], [1664, '地址'], [7966, '霸王'], [7004, '世贸物流中心'], [599, '18537311669'], [118, '中国展'], [4066, '技术协会'], [6753, '林青副食店'], [9531, '生活'], [6445, '干色汇'], [9228, '03档'], [4737, '壹号街市'], [5233, '可音可国际早教'], [1987, '一顶好隆江猪脚饭'], [4143, '微信关注万能小哥4006633750合作商家'], [3370, 'chinalighi'], [1065, '战略合作伙伴'], [50, '局部改造维修水电路检测维修墙面地面局部维修'], [9660, '富强'], [7337, 'theorgansofguangzhoukangyuanchnic'], [4039, '德诚行地产'], [7099, '丝绸体验馆'], [7268, '家无缺'], [4885, 'topflavor'], [8244, 'health'], [9367, 'b205'], [3076, '宪法'], [6112, '黑土猪'], [8755, '24h'], [8512, 'huwaizhuangbeitetifushi'], [9706, '大促'], [9377, 'gs'], [2026, '门面到期'], [4635, '首家放心馍'], [3783, '始创于1990年'], [8304, '齐鲁地产'], [9030, '亿声插座'], [4852, '订餐电话18234757959'], [2926, '在重庆地道火锅'], [409, '下午13301700'], [9217, 'businesstime'], [3726, '上海冠权贸易有限公司'], [3609, '小东京鞋'], [6229, '鸿泰科成'], [4517, '新金路'], [290, '88折'], [8758, 'tel15961003176'], [120, 'nissan'], [2440, '特约o布页运部'], [8686, 'shanghaijinhuivillacampus'], [6014, '启明星教育文体'], [679, '浦东北茶'], [3779, '分红高'], [4240, '涂料片装匹装3d数码'], [8156, '中国体育彩票'], [8924, '琴杯'], [9395, '中兴社区'], [7819, '海信智能体验厅'], [8153, '爱你宝贝'], [6790, '宝玉直社区民委员会'], [7878, '61167795'], [1017, '文'], [5662, '涮羊肉麻辣火锅'], [260, '304'], [2491, '话费充值手机维修手机配件电脑维修复印打字'], [8635, '帆酸酒'], [2904, '海外乐淘'], [4893, 'express'], [3214, '碧山冲湘菜馆'], [8744, 'botei'], [4470, '四季快餐'], [7953, '地坪漆'], [9885, '各种型号棉'], [6356, '西湾东路14号'], [2512, '电话63456075'], [9031, '电话33074026'], [6799, '未然'], [4275, '北京阜外医心健康科技有限公司商店'], [7524, '鱼珠街茅岗股份经济联合社委员会'], [6251, '寄押'], [6257, '0电话18379719936'], [4287, '管'], [8579, '170'], [4353, '足疗按摩'], [8227, '洋河蓝色经典名烟名酒'], 
[6089, '104'], [5550, '51hydcom'], [6881, '准乘5人'], [5969, '吃'], [7869, '好艺'], [8257, '孕婴童'], [7173, '13871489109'], [1162, '20'], [710, '滤清器批发'], [5299, '车来'], [8443, '北海市青浦区重固镇老年的务'], [4327, '公司'], [4799, 'hotel'], [5301, '手机'], [8076, '竹苑轩美食'], [6766, '3223200732235443'], [9451, 'x'], [3351, '美之缘'], [7070, 'tle1309127278'], [8505, '东方广场3楼人人乐超市后门'], [2987, '浙江e'], [9004, '打车'], [822, '果真多水果炒货'], [9434, '工会委员会'], [1968, '3403191928'], [3749, '拆'], [2261, '牛肉面'], [7377, 'ucc'], [2873, '美容美体护肤体验店'], [6907, '字号'], [4026, '炒面条6元'], [7746, '6829272'], [5762, '热线18016463200地址秀沿路周康一村49号'], [8531, '城东北公寓'], [7944, '强记钟表眼镜'], [3108, '芝超市'], [6543, '社区家庭教育指导服务点'], [2079, 'apartment'], [3656, '团外卖'], [3420, '香港爆汁流沙包'], [1928, '车艺尚'], [7663, '托箱'], [3536, '欧帘雅布艺'], [9200, '电话5324478'], [3418, '31'], [7402, 'jianshe'], [4880, '老祖宗石磨坊'], [6904, '萧山道源路43217号电话15657558827'], [8468, '胸部养护中心'], [9986, '170号'], [334, '鹿港小镇'], [4807, '祥和xianghe'], [9988, '话17794301023'], [7096, '煲仔粥'], [4772, '小炒'], [7771, '液压油管'], [933, '托电动车修理'], [3226, '顺势疗法调理肩周颈椎腰椎等慢性疾病'], [5350, '养生'], [156, '杨高南路918号'], [7974, '麦乐汉堡'], [3000, 'm层'], [1389, '出入平安'], [3372, '件科技'], [8996, '家常小炒'], [9402, '东音泵业'], [7536, '客辰店'], [1631, '宏扬广告'], [1124, '永盛车行'], [9955, '轮胎大'], [5732, '24h'], [6086, '标书证件'], [9813, '幕墙春生配件'], [4477, '燕窝'], [5064, '康缘养生'], [82, '电话15830702885'], [320, '全家共享'], [6971, '公用品'], [9787, '治安联防大队'], [9089, '大量招工免费推荐'], [4724, '办号'], [6911, '公牛西门子开关插座led灯泡工程制灯路灯'], [28, '网吧'], [5940, '娃哈哈'], [2666, '烟酒超市'], [1730, '虹桥镇'], [8063, '皮肤管理'], [7179, '放心奶'], [6085, '给健康加道菜'], [7388, 'spa'], [7496, '凌周轩阁'], [6792, '食品机械制冷设备中西餐厨房工'], [9553, '迈可地板'], [6702, '尚线发型'], [4146, '得莫利炖活鱼'], [4188, '室内'], [9288, '金世盾门业'], [6298, '接待区域'], [3337, '安房书画'], [6823, '沙正街店'], [3986, '长宁区残疾人联合会'], [3613, '是朵幸福花'], [7979, '金源'], [4189, '雀巢'], [5792, '电话13437263727'], [4452, '千姿惠总店'], [6267, '电线电缆劳保用品万向轮防水木'], [6457, '主营'], [2806, '发现最美的音符'], [4964, 'peimeng'], [5033, '植物保护'], [2414, '电134731908186685559'], [9610, '经营餐桌实木床衣柜办公家具各种小家具沙发定做'], [4727, '08'], [8108, '方杰消防'], [4386, '孝丰网664596'], [455, '39'], [9751, '甲沟炎'], [9909, '酷壹'], [7445, 'zhangliang'], [8290, '喝出男人味'], [3588, '56277277加盟电话15038001'], [473, '地瓜大事'], [3620, '251131913592300223'], [4809, 'c区355'], [7454, '重庆万州烤鱼狗二'], [3137, '致远机械'], [1227, '照'], [7671, '白云路28号'], [4603, '煲仔饭砂锅麻饺'], [3343, '饮品'], [1982, '羊本田'], [2713, '用家具批发厂家直销'], [7942, '15513687373'], [4912, '面粉'], [7090, '高雄路邮政所'], [713, '改水改电'], [9890, '爱琴海咖啡'], [8430, '全家familymart'], [6930, '4f'], [8354, '月福'], [1962, '香渝坊'], [4946, '全国加盟热线18606610980'], [721, '南洋'], [4875, '红十字护理医院'], [1618, '著名品牌'], [1079, '绿地么馆'], [7965, '海纳利尔'], [3921, '吃好菜请到三峡来'], [4360, '主营ppr管马桶龙头水槽油烟机热水器增压泵毛巾架'], [5474, 'lets'], [1905, '同创全国文明城区'], [4383, '酒'], [9571, '甘其食'], [8469, '董军西诊所'], [2459, '欢聚一堂'], [2286, '鲜花婚庆'], [1647, '豪而'], [4091, 'proshop'], [4736, '电话88737581'], [9013, '办公室具'], [8804, '13661817923'], [1191, 'pororo'], [9850, '新生活化妆品'], [4534, '蜜雪冰城'], [8650, '全国连锁'], [9412, '黄打非工作站'], [5770, '艾米客房欢迎您'], [223, '提货部'], [3876, '晨工'], [4730, '丹麦香酥牛奶棒'], [7328, '图文'], [3763, '郑州朱屯米粉'], [4109, '承接室内外装饰商场'], [3917, '东座'], [1921, '盐巴客'], [2929, '直销'], [7838, 'colinaint'], [6050, 'bull公牛'], [8617, '合星大厦'], [611, '曙光花园社区'], [965, 'pizza'], [799, '江工'], [9406, '麻棉'], [9914, '品牌直供阿里直营正品保证'], [1944, '车友汇'], [3739, '快印'], [3959, '美丽健康长寿'], [7591, '唯炎'], [5351, '6214'], [9776, '意就是'], [1737, '小铃羊s'], [6148, '复印胶装'], [9413, '电脑房'], [8910, '安全插座专家'], [8533, '李师傅铁艺加工'], [1966, '服务院国资费建材机关服务局'], [7324, '佑路333号水晶'], [364, 'adidas'], [5603, '欧雅'], [5183, 
'omot'], [7802, '震轩'], [9656, '旗杆'], [902, '家常小炒'], [2326, '优惠大酬宾给力会实惠'], [1207, '童床'], [4009, '114'], [73, '二0一一年十二月'], [6378, 'slzptown'], [29, '翠微幼儿园'], [400, '农业部微生物产品质量监督检验测试中心'], [2725, '东床品超市'], [1236, '高价回收旧车'], [9928, '动车'], [4057, '15021323271'], [7069, '欢迎光临'], [5167, '59287813851808048870'], [3762, '和面机脱毛机切菜机绞切肉机'], [283, '询'], [2311, '风尚购'], [2951, '视贺安全插座'], [8892, '话138829504226422'], [1323, '民安广告'], [3453, '华丽米'], [2681, '推'], [233, '日子布艺'], [2230, '一品'], [8382, '不锈钢拉丝扶手玻璃扶'], [77, '电瓶车专卖'], [5866, '匹克棉衣清仓处理'], [7598, '脚干裂脱皮雀'], [8827, '中百超市'], [8065, '种子农药化肥电话15837590801'], [497, '健康幸'], [3522, '小来水果超市'], [2820, '罗丽丝'], [8937, '92283749'], [4213, '高清监控'], [7709, 'o康博男装'], [1388, '新疆大漠果香食品有限公司'], [9720, '电话58490865'], [9123, '传'], [8975, '美团'], [1727, '都说这里'], [7679, '纹'], [6592, '中国交建管理学院广州航务分院'], [2789, '图像采集区域'], [7631, '汽车维修保养'], [6487, '新街口街道北京jcyservicecenter'], [6428, '北店'], [9823, '亨通'], [7187, '梦想起航'], [3678, '艺术背景墙大理石'], [117, 'hiffainm'], [7828, '伟星管'], [6729, '通久'], [1943, '95625987587617515859992143'], [6712, '电话1378895094813501768280'], [9446, '电13037526143'], [5176, '祺澳'], [7168, '雅洁'], [62, 'sii'], [4232, 'professienal'], [7079, '胡世修脚堂'], [6915, '晨光文具'], [1551, '脱落翻新汽车安全气囊修复汽车仪表台蒙皮翻新'], [7254, '原邦'], [2927, '华盛世'], [157, 'oppo'], [5818, 'meilala'], [8162, '过桥米线'], [5279, '电话18729988766'], [4834, 'professiona'], [7603, '民巷脑美茶威阳泾清茯茶有限公司'], [9007, '精购名小吃'], [3366, '长领钢'], [2422, '消防器材'], [4705, '公司'], [637, '电话15836878947'], [8673, '同立盛'], [2678, 'tv'], [8911, '茶基服务站'], [5819, 'nacy'], [1081, '严禁驶入'], [2001, '形象会所'], [1087, '电话18025291992'], [1537, 'mthub'], [4596, '新型室内外防水施工销售综合门市部'], [2468, '墟沟西园店'], [1826, '海鲜水饺'], [7725, '悦昊服装城一楼139'], [8497, '名烟名'], [2940, '广州市'], [9788, '中国电信'], [4314, '120'], [4870, '各种牛津布箱包面料里布'], [569, '用户做好水表水管'], [221, '汇成集团'], [5355, 'no0371201707010'], [8272, '光明'], [4504, '9501号'], [7792, '10'], [9416, '盖洗饭西老'], [6784, '92'], [6741, '新市社区居委会'], [3675, '2793警'], [4627, 'southbundsoftlspirningmaterialmarke'], [3734, '3f'], [4510, '郏一村'], [8704, '有氧'], [7916, 'carrefour'], [9391, '龙虾夜宵烤鱼'], [2755, '江涛装饰'], [9149, '大市直街58996'], [1454, '采耳'], [2597, 'elm'], [9684, '织'], [5858, '充氧'], [1935, '二甲店'], [9078, '装修知名品牌'], [9767, '交通银行'], [8977, '西锐铖汽车服务有限公司'], [7687, '天天'], [997, '冰清玉'], [9644, '99'], [2170, 'food'], [7584, '青云隆百货便利店'], [4416, '增高鞋'], [2587, '格宠物幸福时'], [5267, '发艺造型'], [8481, 'complant'], [8199, '卑装饰'], [2488, '华威汽车维修中心'], [3142, '房'], [7126, '新款热卖中移动卡全国'], [9908, '定做'], [3512, '四味羊肉馆'], [922, '步云'], [1592, 'ngcommunityservicestation'], [9195, '台湾骨汤麻辣烫'], [6052, '烤鱼'], [7199, '加工油烟机'], [1933, '开关'], [9194, '面拌面炒面米饭'], [9803, '彩票'], [5578, '集成吊顶'], [6099, '箭牌街浴'], [4163, '党记'], [7686, 'mobile'], [16, 'littleisg'], [3582, '美容护肤'], [2855, '牙科'], [4750, '喜盈工艺品'], [7155, '上海浦黎国际贸易有限公司'], [4487, '逍轩足浴'], [1235, '焊机批发'], [4325, 'bull'], [8875, 'china'], [9841, '务热线4001088066'], [5625, '西村一社'], [7075, '牛肉粉'], [9974, '耳鼻咽喉科学系'], [6573, 'chnt正'], [3077, 'vip专线4008656000'], [2150, '宇杰'], [4797, '客湘聚'], [8293, '17715053096'], [9422, '迁安置指捍部'], [3180, 'iobtcste'], [6138, '美颜馆'], [6140, '沙画陶艺'], [3141, '皮'], [86, '棋牌简餐茗茶'], [5575, '120通道'], [8343, '莫干山木门'], [4659, '明真'], [1359, '实木定制家居领导者'], [6123, '念旧来不如喜新'], [9210, '雅洁五金'], [8363, '饮料'], [3012, 'surfing'], [1998, '美甲'], [8036, '护会所总部'], [548, 'kevin形象工作室'], [5294, '广州拉斯卡工程技术有限公司'], [3241, '王家码头路'], [8164, '整体形象设计'], [3026, '经营不锈钢剪板折弯家装设计不锈钢门套厨房台面雨棚广告牌楼梯扶手板材管材加工各种大小工程设计'], [2433, '38'], [9773, '竹之信仰'], [1748, '专业换屏配件批发'], [2672, '外青松'], [6065, '15803952518'], [1989, '企业外包'], [7412, 
'电话13838399416'], [6969, 't'], [5156, '办公设备'], [6199, '百兆光纤免费上网'], [4578, '锁商水店'], [6557, '加工'], [1942, '上海'], [5144, 'publicttoilet'], [6779, '小郎酒'], [9054, '运动季'], [9748, 'a19394'], [8298, '防盗门防火门工程门非标门学校门银行联动'], [6258, '紫花舍'], [4614, '软包'], [7074, '试验'], [1913, '02076958'], [2268, '5i5j'], [615, '艾维庭美容纤体spa'], [1745, '德国y43'], [7817, '旧车换新车'], [6124, '上海市嘉定区安亭镇'], [5428, '电话0052693458135705933813669699888'], [4127, '平衡车'], [5289, '云泽万达装饰'], [1981, '中共万载县委驻上海流动党员总支委员会'], [4836, '中国兰州牛肉拉面'], [6214, '科琳'], [2043, 'c'], [6067, '乳品专卖'], [5777, '三六羊肉店'], [937, '布兰诺'], [3, '猪脚饭店'], [4288, '电动车'], [5329, '洁威科技'], [3166, '批发零售'], [3048, '特色水牛奶甜品'], [4916, '2'], [5974, '事荣腾菜'], [742, '经典电子制作'], [9821, '1新州通讯'], [2318, '家用百宝城'], [3890, '刘杰理'], [1196, '欧文办公'], [8288, '订做各类纸房子'], [9760, '纹'], [8807, '臭豆'], [273, '源盛网吧'], [4144, '话13915283698'], [2017, '订餐电话1863店'], [1529, '裕德路126号'], [3798, '产后妈味'], [465, '内丘二餐饭店'], [690, '真彩'], [4212, '交费'], [7247, '只中介入小区必须办理好'], [4538, 'busstation'], [7731, '5燃卡部落'], [5816, '五金水暖电料'], [7570, '手工水饺'], [2225, '咖啡简餐上网游艺'], [6806, '望城'], [9455, '睿兴中屋'], [6793, '金汇'], [860, '面包车1399865498车全顺瑞风'], [5850, '盈记'], [8335, '楼艾灸拿刮痧拔罐'], [2760, '新农牛骨'], [3509, '热线受理电话'], [7793, '少儿艺术培'], [6048, '学习用品文体用品玩具饰'], [6734, '卷鹅'], [3897, '31006151'], [5137, '秘制酱香鸡30元份'], [1590, '启智幼儿园'], [3208, '康健路'], [8934, '电动车'], [1728, '百'], [3863, '学徒工致名'], [5705, '文百18号0351281848715234199176'], [847, '严桥路410号c栋4层'], [6191, '速购超市'], [386, '元防水日盛五金'], [8316, '全屋定制衣柜书柜鞋柜酒柜推拉门等'], [5102, '中心'], [7810, '停车场'], [9774, '文杰通讯'], [6239, 'nd'], [8352, '6639315'], [7273, '信正明'], [3348, '大盘骨'], [8047, '尊敬的各位善信游客'], [6977, '二00九年十月'], [406, '地址南昌市洛阳东路287号街区东一号电话079188298346传真079188298346手机13979193309道美华'], [680, '意和天盛'], [2236, '电话18606823763'], [6931, 'wwwjinkcn'], [2884, '聚鑫二手车'], [8105, '生鲜超市'], [7157, '唯品'], [6038, '电动车电池领导者'], [3200, '白云边'], [7881, '13'], [415, '百度'], [6113, '5'], [4821, '新龙龙虾'], [862, '招聘'], [8775, '生活超市'], [9328, '入住宾馆'], [2400, '财'], [3748, 'nogd02003'], [5946, 'cov央视挂广品牌十大家庭首品牌'], [9186, '开利水装'], [5323, '空调拆装'], [4210, '豆沙包等'], [6074, '广州'], [7738, '家委会'], [6743, '招收'], [5443, '汤圆'], [513, 'shi'], [9573, '24营'], [915, '89'], [3046, '经营项'], [755, '24小时收授电话1508316084818339433308'], [5248, '代言人董崇华'], [8738, '地址旧青钢路万东区大四'], [1976, '读多年'], [9665, '地址君楼园8号北区171'], [444, '虾'], [7820, '27'], [3909, 'theinstituteof'], [1805, '俊瓜餐厅'], [5563, '老'], [3845, '宏亿机床配件经营部'], [9268, '发艺'], [8666, 'weibu'], [7713, '冠峰家常菜'], [6398, '正宗按摩'], [6822, '上海市工商行政管理局'], [6094, 'elcphantcheesprokedricr'], [167, '韵顺驾驶证'], [9569, '到255'], [9877, '美'], [5394, '电动门铝合金门车库门拉闸门水晶门感应门'], [4884, '诚聘'], [2994, '美容美发'], [2081, '新鲜绿色精品'], [3674, '低至99'], [8622, '东北风味'], [9920, '周浦镇志愿者服务中心'], [8889, '北京天诺物业管理有限责任公司'], [1459, '洪升全屋定制'], [4708, '服务热线7185269'], [1445, '洗发1元5元'], [9847, '刷'], [8447, '好梦来'], [1628, '古井贡酒红运'], [1640, '国'], [994, '剪版折弯报栏不锈'], [4697, '服装'], [6282, '养生零食'], [2405, '注免费赠送拉链'], [4584, 'castrol'], [456, 'kitchen'], [474, '床上用品'], [43, '健康养生会馆'], [6579, '菲琳'], [1776, '菜鱼'], [2762, '肉丁'], [8583, '装裱'], [2920, '园qwyuan'], [1925, '营养早餐'], [7256, '严禁撞自插电违者'], [8568, '防盗门锁芯升级'], [5005, 'p促销'], [9427, '牛骨汤面馆'], [6619, '大'], [8659, '太'], [8760, '外贸休闲服饰'], [2562, '混凝土搅拌站建筑机械钢筋机械'], [7359, 'feichenghao'], [6431, '衣去污保养窗帘'], [3770, '交营胡同'], [790, '本店'], [4568, '洋河蓝色经典'], [4088, '上海糖果批'], [3243, 'taizhouhonganfirecontroleouipmentcolt'], [9579, '电话64337733'], [2228, '1391759944'], [6761, '天花春景玩具厂'], [5857, '泰禾红桥'], [2595, '宝生园'], [7508, '六满居酒楼'], [2878, '御福轩老北京'], [7131, 'aolyo'], [5812, '大塘清明酒'], [5546, '水沙沙'], 
[6807, '匹克体育'], [3478, 'internationalexhibitioncenter'], [7116, '旧'], [8018, '角老五美发'], [4871, '正大食品'], [9002, '业评估线13861234715'], [3586, '农业国际影城'], [850, '15366725698'], [2011, '福'], [7566, '靓汤粉'], [6685, '监督电话86550533'], [1274, '武商量贩一楼内'], [7453, '油漆墙纸墙布防水上门服务'], [358, '运通汽贸'], [4202, '能做的还有很多一'], [5931, '厂家'], [9874, '永强路店'], [9822, '车位已满'], [849, '美颜馆'], [7969, '糖馨烘焙糕点'], [9636, '中国体育彩票'], [9805, '广场水电'], [2263, '爱玛电动车'], [8028, '迪瑞羊绒'], [1289, '世界首创'], [2092, '东盟店'], [1870, '地址广州市海珠区广州大道南7808'], [3100, '药房'], [6684, '飞达筛网滤布'], [9839, '太行海'], [3434, '槐店'], [3968, '中国福利彩票'], [773, '输配部'], [7787, '文博托辅'], [1340, '大同刀削面'], [9220, '电脑刻章'], [4097, '轮胎轮辋'], [5649, '美容养生'], [7033, '春'], [2148, '0透普轮胎'], [9347, '联系电话13395130802'], [7310, '订做'], [2591, '玉龙诊所'], [9442, '童装女装'], [4302, '统一企业'], [3978, '1053'], [1294, 'oel'], [5717, '玉器批发'], [3581, '电话18'], [5362, '013938928139'], [3344, 'brillianceauto'], [6135, '主营手机液晶充电宝电池充电器贴膜'], [757, 'createadaptlntegration'], [3271, '专业手机维修'], [4251, '修'], [7891, '湖南品骏物流'], [2717, '特色早点兰州拉面'], [2695, '兰州拉面'], [3557, '内衣厂新家园'], [5527, '足道'], [7263, '安全'], [9643, '柯达'], [5146, '学校门前'], [1911, '工大'], [7923, '小鱼儿俱乐部'], [8154, '发型工作室'], [9663, '天栓寺过桥米线'], [8212, '13453142813'], [5128, '加盟热线18635570366'], [9533, 'shanghailalegendeculturalheritagecoltd'], [2835, '德奈福'], [3215, '碳纤维电暖电热水龙头'], [9099, 'aux'], [3742, '时尚运动'], [3282, '酷发'], [2723, '百年仁源生'], [1577, '汤可以喝'], [3918, '商水金'], [9590, 'mingjia'], [3106, '彩婷内花'], [5481, '1600746698办证'], [6361, '嘉美装饰'], [4709, '订房热线59700082'], [6315, '尚品汇皮具'], [3416, '18252699823'], [6691, '代还信用卡'], [8007, '鸿泰'], [6017, '714'], [4286, '采全球高端精品水果'], [774, '我的挑战'], [6265, '中理通商标代理'], [7146, '福'], [3073, 'abe'], [6993, '广州市市级高新技术创业服务中心'], [2764, '热水器'], [940, '飞龙钣金油漆'], [3465, '酒坊'], [8135, 'earcon'], [4237, '96'], [4291, '富利民超市'], [5030, '外贸成衣工厂直销'], [6276, '己'], [5559, '三方签约'], [1582, 'axo'], [4863, '图店'], [9546, '前程锦绣'], [1931, '中国福利彩票'], [2073, '湛江烧烤'], [8915, '郑州超越汽配'], [2158, '推拿踩背'], [3929, '地址方舟建材市场b1112号电话0215085891651325778手机13524236634'], [9088, '德商务中心'], [7413, '钢'], [3808, '电话13037528252'], [4499, '婚庆照相'], [6120, '婚庆服务礼仪生日一条龙'], [9646, '517'], [2487, '工厂直销地址万水装饰城c区29号电话15110310358'], [5297, '招聘'], [8155, '4105'], [3452, 'acontinentallifequalitypreferedberogaenter'], [3583, '号'], [7736, '3f'], [999, '振文舞蹈用品'], [2765, '墨染'], [2248, 'nox77317080251'], [7492, 'bd'], [5271, '纯框油画框话13368548788'], [2586, '欢迎光'], [9870, '海之星海鲜城'], [955, '60'], [5746, 'spider'], [4700, '美佳福'], [1832, '特'], [848, '阳山人家'], [8729, '问天阁艺术交流中心'], [4913, 'nd'], [1173, '割机配件'], [4533, '红蚂蚁高端休闲代步车'], [6886, '欢迎光临'], [4599, '金行区群团基层服务室'], [4717, '中共'], [9009, '速椒鲜生'], [5799, '江苏杨家酒业股价珠交所挂牌企业版权代码36524514'], [3249, 'shqp1711316'], [8740, '传承壶行'], [2185, '家常菜'], [8255, '舒雅美发'], [2388, '乐饮'], [4919, '螺杆加工法兰加工'], [9578, '大锅全羊'], [1059, '专营'], [9873, '服饰'], [163, '海警一支队'], [5420, '梦和木'], [9500, '中批发城园区3233号电话1323617310913914420188'], [8303, '中际眼镜'], [7862, '瑞泰酒店'], [5841, '美容美发沙龙'], [9329, 'inone'], [2850, '中部集团'], [8575, '联系电话'], [2319, '药博氏大药房'], [8331, '胖子百货批发零售'], [8848, '仙乐'], [5870, '广东宏际线管线槽'], [5310, '孕婴生活馆'], [3551, '13818872900'], [8270, '田旺家具窗帘'], [4904, '718'], [3193, '福'], [4595, '肌肤管'], [9783, '硕博印务'], [8586, '美东'], [3146, '鑫玉足浴指压'], [800, '茶叶柜雪糕柜'], [6841, '路易香浓美容蒸'], [866, '中国建筑设计研究院'], [1228, '广告装饰'], [6044, '外贸精品'], [4685, 'loney'], [7721, '床位出租'], [5243, '主营'], [2350, '物业服务站'], [9359, '午托晚托全托电话82688785'], [9511, '中央国信登记结算有限责任公司'], [9604, '1388950306915942601338'], [8717, '罗莎蛋糕'], [962, 'skoda'], [619, '热水器'], [9930, '考点'], [1212, '迪土尼英语'], 
[7718, 'b33'], [2548, '职工书屋欢'], [7195, '驳接爪'], [2686, '金牛管业'], [3957, '背部放松'], [2939, '电话13838693973'], [8603, '补胎'], [693, 'el18257272801'], [9758, '16'], [1425, '迎日中春食品大促销'], [7912, '西安'], [3971, '鑫源超市'], [4987, '正宗兰州牛肉拉面'], [8218, '大盘鸡麻辣鱼'], [628, '电话7806688'], [7489, '四轮定位'], [5972, '雅迪'], [8237, '随易餐馆'], [1810, '电话13759240518'], [8043, '102'], [734, '楼'], [6084, '申通快递'], [6275, '中国体育彩票'], [1798, '联系电话'], [7471, '1987'], [5116, '3m防雾鑫口罩'], [2583, '诚意鞋业'], [1849, 'suningcom'], [8983, '2018'], [6323, '87521588'], [5266, '加盟预订热线1'], [2914, '116'], [8437, '体验店'], [5900, '电话01069599999'], [9193, '老年活动室'], [1926, '电视空调冰箱'], [7877, '如意汽修'], [5776, '服务电话2311099'], [9079, 'max米旗'], [9915, '50'], [3744, 'ebeca'], [5148, '小笼包'], [8326, '冰淇淋'], [686, '商悦'], [6269, '学士街26号'], [5373, 'dunlop'], [595, '诚信门窗'], [6986, '欧普'], [7468, '社区经济合作社'], [9205, '正在营业'], [2233, '太平路小学'], [492, 'nob16m3'], [6256, '晨视之光'], [8103, '东辰钢结构'], [9202, '出租'], [6137, '给健康加道菜'], [4348, '103597'], [3923, '金科'], [6178, 'two'], [9381, '李记烧烤'], [7648, '自由空间'], [8453, '沙新山红色物业管理有限限作8'], [635, '我家衣柜'], [9650, '1031'], [102, '正宗香米发粑'], [6895, '让转租139663'], [7995, 'henanhengy'], [7531, '大根渔具店'], [3846, 'wifi'], [1725, '8档'], [5525, '承接塑铝金门推门'], [8011, '文汇图文'], [2574, '海和芳早点'], [9668, '信达财产保险'], [6783, '卫生室'], [6187, '地址大盈镇香大路15231525号电话1855015258117091921889'], [9899, '天'], [2235, '招木'], [5459, '聚香'], [6042, '外贸袜子'], [6176, 'bar'], [450, 'lanxun'], [5842, '珍诊地'], [4484, '紫燕百味鸡'], [4462, '旗舰店'], [3056, 'x21'], [5904, 'hour'], [2339, '苑'], [7185, '电话29992202'], [7588, '高档品牌光机布'], [261, '仁厚直社区委员会'], [9311, '嘉实多润滑油'], [5553, '安能快运'], [220, '拉'], [4814, '双休日法定节假日休息'], [6039, '车匠'], [8695, '517元'], [6527, '精修'], [6665, '夫妻保健'], [4258, '赵华路'], [7147, '黄埔区鱼珠街瓦壶岗社区'], [4968, '易盛客'], [9524, '小时工承包'], [9584, 'wudoncfenc'], [4684, '外贸工厂店'], [9168, '慧捷上海科技股份有限公司'], [7856, '贤'], [9583, '上海集团制氧机上海生产基地'], [7518, '欢迎入住'], [4539, 'energychina'], [179, '泰兴'], [5754, '欢迎光临'], [1619, '迎'], [3666, '1号楼'], [5613, '米线饵块'], [4096, 'c'], [3174, '药房'], [3755, '苏州瑞米达日用品有限公司'], [4424, '香河肉饼'], [1364, '台北风情'], [2423, '东平大马路'], [2888, '订房电话15171246150'], [3139, '小炒'], [5947, '广场'], [6839, '代理'], [5685, '45026'], [7643, '厂家直销'], [6155, '今点'], [5910, '味道江哦'], [7207, '空调网过滤袋加工高温布滤油纸滤棉'], [6060, '凉露'], [3760, '参茸燕窝'], [15, '放心保健店'], [7968, 'uanjia'], [4661, '天'], [5112, '洛阳牛肉汤'], [9388, '残疾人之家'], [9736, '美国'], [3400, '开原旅游'], [3175, '网点编号02470'], [4898, '机'], [4347, '门面'], [550, '建设实业发展总公司'], [4937, '龙豪装饰'], [1489, '经营范围不锈钢铝仓'], [2847, '学库教育'], [6003, '货安装一条龙服务电话58388865'], [751, '棋牌'], [6611, 'apicyamadatechnologyshanghadcoltd'], [6658, '购生活超市'], [6972, '逸阳'], [8580, '轻拿新主纯就迎光豆'], [8582, '寿'], [7365, '圣宝'], [2826, '海水快递营业部队'], [9136, '5484755354473160'], [1434, '30年实体老店专业维修各类品牌手表小心碰头'], [1773, '办公'], [3601, 'shoppingmall'], [324, '央视上榜品牌'], [9969, '厂家直销'], [5883, '远成讯运'], [2183, '主题酒店'], [1237, '学生就业创业指导服务中心'], [6030, '2车优美'], [5879, '亲'], [4077, '1833706231'], [2664, '2656'], [2780, '美'], [2936, '项城运营中心'], [7158, '湘辣快餐'], [9389, '道q一下'], [7730, '家居'], [5763, '经络古法餐生海珠区二店'], [3467, 'f11'], [7618, '杨园福'], [9010, '池爆门窗'], [3191, 'luyuan'], [970, '电话15290640285'], [390, '阿五美发所'], [9313, '滴答'], [9299, '15131775150'], [7038, '银饰全'], [4478, '此门市'], [6558, '主麻'], [1058, '扶元堂'], [7378, '郭记饭馆'], [9465, '发电机'], [9298, '的自己一定会感谢当初那个坚持不解的自己'], [1038, '盖饭炒饼'], [3269, '北黑茶'], [3605, '电话137112863931392887691'], [6855, '品牌'], [8636, '厦衣达'], [5967, '旭豪'], [7727, '胡辣汤八宝粥豆浆豆腐脑小米绿豆粥油条酱香饼手抓饼电话13838606340'], [6838, '20'], [7210, '海市长宁区仙霞新村街业'], [5407, '楼梯扶手'], [485, 'citic'], [1936, 
'毛线鲜花'], [317, '中国体育彩票'], [2796, '房屋租售'], [5888, '让墙上多一点艺术'], [3552, '烟机'], [5080, '阿华水果店'], [9341, '航凯暑山'], [7824, '鑫天地便利店'], [8948, '15234017449'], [692, '大鹏采暖'], [9519, '盘福店'], [404, '老北京布鞋'], [3983, '953'], [8413, 'sale'], [282, '内进10米'], [3546, '炒'], [7348, '麻'], [5197, '步步高辅导中心'], [4494, '卡特彼勒租赁店'], [1711, '轮胎修补'], [8082, '国祥冷饮批发'], [6985, 'hour'], [5620, '山东名吃传承百年'], [1406, 'tel1896100944418344838396'], [3616, '男厕'], [8715, '欢乐空调'], [1864, '老姜'], [7130, 'hop'], [5982, '力之星'], [602, '工程图打印复印扫'], [7544, '门面'], [1126, '福'], [9475, '特色蒸豆花'], [5452, '金鸿燃气营业厅'], [1051, '材子婴才好品质决定品牌'], [2716, '专业祛痘祛斑机构'], [6046, '沙钢'], [7318, '大姐大'], [9155, '定中'], [4431, 'xiu'], [9489, '36'], [9599, '建峰钢绳经营部'], [148, '羊羔肉铁'], [7040, '广州埃信电信设备有限公司'], [5726, 'keylaboratoryofintegratedpestylanagementoncropsinsouthchinaministryofagricultureprchina'], [5431, '便利超市'], [5671, 'the'], [5104, '地址武汉市大东门机电市场4栋12号电话02780101601820710488'], [8235, '鹏程'], [1740, '电话13327002706'], [2008, '调味冷冻食品涮锅烧烤订货电话13613510689'], [9133, '高平汽修'], [3176, '15615141313'], [8477, '10'], [6350, 'icbc'], [239, '出租'], [4676, '晓晓美容'], [9581, '租售'], [3941, '8'], [4623, '放心早点'], [5533, '知心营养工作室'], [5845, '培训'], [9503, '出售生水饺'], [6053, '广汽丰田'], [7394, '艾迪斯国际口腔'], [4609, '24'], [9284, '小心鹅'], [5262, '欢迎光临'], [7246, '送货电话13916343907'], [4280, '复印'], [4873, '21'], [230, '沪公'], [7106, '更高端的电动车'], [8601, '多厂直销'], [2285, '盲人'], [229, '机械工业第十计量测试中心站'], [2123, '天天坚果'], [9147, 'oppo'], [2144, '门头制作'], [2039, '上海钻石行业旅司'], [1823, '积分兑换礼品'], [5331, '上海市浦东新区莲溪小学'], [6520, '机油电瓶'], [4979, '心连心大酒店'], [7311, 's'], [8953, '精品饰品'], [7832, '共创幸福和谐家园依法'], [3761, '动力'], [6867, '出口'], [3919, '凉菜'], [3429, '一元个'], [1642, '电话13831900680'], [9868, '营养健康'], [5944, '罗普斯金门窗'], [1485, '天源广业烟酒商行'], [6979, '国连'], [8989, '大卖场'], [2499, '米'], [4501, '羊肉'], [4926, '蛋炒饭'], [5393, '中英剑桥'], [8300, 'style'], [5744, '冷气开放'], [8353, '34'], [9733, '彩游艺厅'], [2567, '个人形象照'], [7430, '垃圾分类督导员工作站'], [6502, '凯富宜宾馆'], [8013, '15396351283'], [546, '伊诺房产'], [7446, '鱼美'], [3980, '牛本道'], [9041, '张家装'], [7946, '拉面'], [5676, '花王水漆'], [9183, '聚歌城'], [7917, '清香池'], [6179, '欢迎司机师傅广大市场商户进店品尝'], [8504, '专业委员会'], [440, '旭日房地'], [1553, '价布鞋'], [9213, '东北'], [861, '电话13849445168'], [4426, '华泰保险'], [3725, '方儿童城'], [565, '棋牌'], [2614, '住明汤'], [4342, '鑫旺烟酒安阳特产'], [9376, '名家紫砂壶'], [6675, '豪铁健身会所'], [6609, '翠园社区卫生服务站'], [4092, '串串'], [4216, 'loufogei'], [6759, '乐康生活馆'], [1383, '携手共建卫生城苏心同台立明市'], [8869, '绿色自然'], [8746, '温馨提示'], [1375, '数码冲洗'], [5230, '室内装饰品'], [8406, '金城'], [166, 'garmentaccessoriesco'], [4126, '大医汇'], [5813, '电话15835135512'], [8309, '小萌宝'], [6847, '主营名烟名酒名茶电话13952652240'], [9549, '推'], [4063, '营业时间10002030'], [7501, '套装门衣柜门推拉门木地板橱柜'], [4815, '更有膜力'], [9672, '易记牛肉汤'], [3035, '全国连锁0001店定慧北桥店'], [9734, '爱尚麻辣烫'], [6660, '宾馆'], [4224, '中国工商银行'], [3992, '53'], [7848, '电瓶车'], [4482, '永发模料'], [6721, '骏鸣酒店'], [268, '人民调解委员会'], [1135, '严禁停车'], [1006, '福'], [8334, '联系电话18055'], [3642, '车公庄大街'], [8930, '外城'], [3901, 'bestor'], [2213, '宁波市'], [7472, '河具烟酒批发'], [9512, '馄饨皮饺子皮饼丝'], [1812, '男装女装'], [3105, 'n'], [9317, '华依化妆'], [2240, 'ir'], [4050, '创丰种子经销处'], [4276, '料明烟酒'], [7545, '广州方邦机械设备有限公司'], [4230, '风格名剪美容美发连锁'], [9101, 'cssc'], [1734, '显视屏'], [1820, '幸福汇'], [8449, '4001118868'], [7061, '碧莹翡翠'], [3494, '玉石背景马赛克'], [6272, '长城装饰'], [432, '兰州拉面'], [8528, '电话22019079'], [8148, '专业烫染店'], [6456, '199'], [1502, '仪表电线电缆'], [9898, 'pulcher至美化妆品'], [8912, '港式甜品'], [3932, 'dm'], [1321, '回收'], [7851, '人口和家庭计划指导室'], [4058, '8993967711'], [506, '电脑维修'], [1220, '出租'], [8726, '旧车信息'], [7160, '道粥'], [4901, 'no隆回店'], 
[116, '13247687211'], [9092, '北极星贸易公司'], [9556, '老虾烤羊腿花甲'], [4075, '泰兴市联发液压机械厂'], [187, '汽车装饰行'], [6648, '平财险'], [3410, '便民服务点'], [2052, '盟店'], [6304, '站点号35015538'], [9029, 'ziyanfoodschainlink'], [284, 'russia'], [1533, '禁止停车'], [2534, '京'], [7178, '中国调银行业名牌广东省著名商标'], [535, '商丘第一家'], [1983, '4008208898'], [7902, '保姆'], [9809, '50分钟'], [2021, '炸鸡大王'], [9375, '广兰苑'], [4869, '百花幼儿园'], [8705, '带幸福回家'], [3342, '家新铁工大学化学化工学院大代中用品务究务'], [6714, '自在乐享'], [1487, 'shanghai'], [875, '中国建设银行'], [3691, '医疗'], [1422, '静静水果'], [5669, '狐妆'], [2249, '600'], [998, '极速空间'], [7894, '大家砂锅粥烤鱼'], [8956, '中小企业服务中心'], [1176, '家庭个人护理品专卖店'], [6247, 'mingjiangdesign'], [5171, '181681676'], [4455, '时代'], [4179, '上海美都环卫服务有限公司'], [399, '手机13951172973'], [633, '秘制大锅卤肉'], [1673, '助银行服务'], [1595, '蒲东铜门'], [3246, '炖菜'], [3752, '靓美家灯饰'], [9093, '叫了只炸鸡'], [5580, '炫彩造型'], [1875, '崧复路'], [379, '大气明理'], [130, '龙华烟酒粮油'], [4572, '烤牛肉'], [2174, '宏星门窗不锈钢'], [5037, '各种材质阀门法兰弯头二通大小头'], [1377, '芳村一广州培'], [2066, '救援电话13569436925'], [208, '旅馆'], [6577, '清仓'], [5853, '赛赛生活超市'], [6318, '月欣名师堂'], [5214, '青浦供销便利店'], [4361, '二海云旗舰店tel88185408'], [9785, '平阳路'], [7703, '瑞博思广告'], [8598, '宵夜24小时营业订座电话2235283918819169456'], [9661, '北京工商'], [3472, '61351'], [5698, '芳圆珠宝'], [6261, '电路空调'], [8976, '编号1003002'], [8454, '刘妈妈'], [4716, '15902738849'], [7509, 'aihao'], [6882, '金号毛巾家纺'], [1405, '米其林轮胎'], [8587, '01063568716'], [6196, '麦好乐'], [2304, '今'], [7882, '福彩'], [8591, '百鑫堂大药房'], [7491, '复旦梅陇科创中心'], [5783, '化妆品'], [7742, '电话18824899961'], [923, 'bar'], [7893, '重庆小面'], [4490, '广州市大普通信设备有限公司'], [2049, '酒'], [8992, '配钥匙'], [4615, '杨园分部'], [7914, '主营双安绝缘鞋手套力达安瑞得安全帽澳翔扬沪安全鞋'], [9386, '西北风味'], [5769, '国酒之源清香之祖文化之根'], [1830, '颐养e家'], [3111, '棒锤岛海参海鲜鱼'], [5971, '知竹面馆'], [4883, '原味猪杂汤饭'], [5326, '茸锦青浦科创园'], [636, '教育学院'], [6947, '热线'], [2561, '文化市场综合行政执法队'], [7119, '启需产品经销点'], [7429, '顶城'], [1802, '5973272568'], [795, '豆浆机'], [207, '哈尔滨啤酒'], [1148, '润华电器商行'], [2136, 'ttyoyan'], [8762, '陈泰昌水仙茶始创'], [5826, '打印复印'], [6507, '蒸包'], [7129, '百老泉'], [5002, '803'], [2625, '内设空调'], [3394, '13613706358'], [3386, '960'], [182, '更换'], [2946, '手机15975526354a07b'], [51, 'pizza'], [1546, '承德老酒'], [2602, '电话0351878888303512176996'], [2925, '网格管理站'], [7174, '自编号563'], [5052, '6848'], [8175, 'sanfu'], [5019, '批零各种五金工具电动工具电线电缆'], [4742, '装饰'], [767, 'aas'], [4602, '即日起买任意一款鸡排送酸梅汤一杯'], [5341, '外在动人内在动'], [2504, 'pawn'], [844, '法国原'], [3144, '厕所'], [1296, '037060768001779605'], [439, '真'], [9790, '正在营业'], [2719, '砂锅'], [1393, '1872838536315282339170'], [9960, 'agriculturalbankofchina'], [8888, '狗粮猫粮狗'], [4184, '电话13508989192'], [7205, '电话03948282777'], [2112, '宽蝶'], [7135, '瑞丽纤体美容'], [4357, '杨光生批萨'], [9454, '农乐农机维修'], [1165, '全国连锁'], [5641, '212'], [1100, '形象工作室'], [5120, '来伊份'], [2305, '上海市药材有限公司'], [9014, '电话15993987898'], [2116, 'ktv'], [417, '牛蛙海鲜鱼粉'], [383, '沙龙'], [9669, 'iehaaasorb'], [6981, '239'], [4895, '绿园春'], [3746, '战略合作民音艺校'], [8282, '晋南'], [797, '13527050511'], [59, '熟练打边2名'], [4878, '地址文化街9799号'], [4768, '辛集宅恩诊所'], [2328, '为了回馈新老顾客'], [8252, '专业妇科'], [7188, '星光电子城'], [4670, '好又多超市'], [2729, '中春北新路步之'], [4822, '安入出'], [5568, '创盟客'], [7031, '手机维'], [234, '光'], [4235, '年'], [8188, '瑞莲棉胎'], [9473, '地址茂盛装饰城3排14号电话152340427561354545552'], [5073, '福'], [6604, '森林德宝地板'], [525, '中国建设银行'], [3949, '南园大厦'], [9900, '1365068460'], [3570, '中国企业十大最具魅力培训师'], [2650, '订做近视太阳镜'], [312, '电话15800590659'], [3999, '蔡伦'], [3192, '空调拆装清洗加氧租售回收电话13297921105'], [6515, '众人鞋业'], [9743, '一材易板'], [9027, '特价28元碗品'], [3573, '贝乐时尚童装'], [661, '比德文'], [7191, '恒新'], [6287, '龙'], [144, '128号'], [964, 
'私家鞋柜'], [2816, '现磨咖啡特色小吃'], [9153, '外卖电话15867195686'], [6622, '31'], [5348, '悦昊服装商贸城'], [7007, '健康专业团有限公司'], [7346, '天河体育场青少年业余体校'], [7512, 'no0336'], [5741, '鞋子医院'], [2393, '电话15167176432'], [2636, '批发'], [2157, '爱关多更源奶机有'], [7927, '四川棉絮加工处'], [9171, '川菜'], [244, '电话13287321366'], [6574, 'tel18791435527'], [8514, '电话13667333813'], [1947, '者美的空调'], [5684, '订餐电话13603594928'], [236, '1002887'], [5720, '专业'], [4004, '复秤台'], [2621, '菜水果副食'], [4435, '甜点diy'], [8066, '七区'], [3417, '尚顶好'], [996, '443'], [4312, '282'], [3782, 'a102'], [9854, '培训'], [3797, '冉厨农家菜馆'], [8868, '略合作伙伴'], [3985, 'on'], [2429, 'hd'], [5854, '聚宝盆'], [3354, '誉满中华'], [2522, '单元'], [876, '丽源广告'], [2046, '屋中介'], [6331, 'cmdm'], [7993, '出酱骨头外卖每'], [7913, '内设'], [6746, '祝视传媒'], [3933, '批'], [1965, 'ems'], [3039, '现磨豆浆'], [212, '各种配件齐全'], [2436, '5005号'], [9770, '话13112273070'], [3578, '福'], [4544, '电话18635127610'], [2282, '53'], [8826, '新华广告'], [9615, '叉车出租'], [9989, '13832126629'], [612, '小艾数码'], [9791, 'red'], [4734, 'jh'], [604, 'saic'], [8109, '地址浦东六灶镇花苑路35号'], [4902, '经营范围加工改制定做各种传动轴'], [8590, 'maide'], [2639, '特色'], [4154, '酒店家具'], [581, '208'], [356, '精品配件'], [461, '3775440823'], [4832, '鸿雁会展'], [6816, '原托比'], [4326, '电话58701241'], [3357, '订做生日蛋糕18252663192'], [2839, '中策线缆'], [3289, '潮童馆'], [5124, '24h'], [2369, '365'], [3982, '售泰投资'], [397, '小五瓷砖'], [5382, '电话035130556981393460649'], [9372, 'ch'], [5590, '东风'], [1019, '鄱阳销售服务中心'], [4970, '完美人生完美活'], [6952, '新品上市欢迎进店选购'], [5456, '巷品咖啡'], [2965, '莱'], [5406, 'epso'], [4412, '老开元米粉'], [5043, '联系电话15277'], [753, '金庆'], [8172, '兰'], [5305, '照明灯具'], [9297, '恒美华成广安路店'], [559, '儿童画画廊字画装裱13016584345'], [7728, '槟榔'], [5922, '老风味烧饼'], [9167, '57885688'], [7903, '咨询电话4008770370'], [2834, '电话13171926616李经理13171926981刘经理03127895692'], [5788, '24小时营业'], [8084, '冲动任性'], [3327, '欢迎光'], [3294, '天'], [1400, '水果批发'], [2568, '温鲜面条'], [1791, '话15635168338'], [4986, '煎饺每两五个'], [1420, 'liguortobacc'], [805, '复印洗相快速照相菜单刻章喷绘'], [9293, '滤'], [6588, '鞋包一亩地袜'], [538, '泸'], [7569, '设备总汇'], [8781, '桃酥王'], [1042, '出租车报废换新车'], [7357, '北京市残疾人就业保障金'], [5597, '13283941851'], [1334, 'noc0271407410'], [1369, '特色正宗酸烂肉酸烂排骨稀饭'], [2616, '甲古z方'], [8995, '杰宝大王'], [1450, '孙记'], [2638, '四四理'], [4733, '窗台板'], [4523, '发城6区463637手机1519527593'], [4927, '包子面包水饺豆制品休闲糕点'], [2099, '顺务项目'], [6581, '健身综合馆'], [2752, '大拇指'], [2858, '温控'], [7801, '1782'], [1721, '防盗网'], [9842, '干洗设备水洗设备熨烫设备干洗水洗耗材皮化材料干洗店加盟洗衣皮衣技术培训'], [699, '北京天诺物业管'], [658, '制品'], [1749, '翡翠'], [7641, '天然'], [7954, '生活百货食品等咨询电话17317578110'], [9997, '公牛装饰开关'], [8503, '未成年人活动室'], [7264, '自助银行服务'], [3391, '2816317八线2811545fax2816317808'], [794, '金三光馆1'], [6620, '本座一楼'], [4466, '手机15821386275'], [6379, 'wwwchaoyuedoorcom'], [9838, '鞋业产品检测认证实验您'], [632, '港角'], [5241, '如意广告装饰'], [2525, 'service'], [419, '13996424890'], [8802, '金牛'], [6834, '门面转让'], [3695, '木艺阁石'], [9400, '飞'], [3032, '批发部'], [2397, '贵州'], [4989, 'agrlcuituro'], [9726, '服务中心'], [6482, '宝岛美发厅'], [7068, '入口'], [5386, '广州市历史建筑'], [5154, '上海唯爱纸业有限公司'], [3069, '87533157'], [4959, 'mcdonalds'], [294, '盛大开业'], [3053, '福'], [2364, '金辉副'], [7216, '鑫山保险代理'], [7281, 'balabala'], [7334, '更好喝的'], [3395, '考拉甜品'], [5134, 'ken'], [8873, '金州管业'], [6177, 'sbhscorrugatedmachineryshanghaicoltd'], [2690, '万起'], [5035, '离石电缆'], [9036, '15738111277'], [4757, '复印'], [8969, '身符工艺品及佛道'], [6375, '安全'], [6115, '传智所'], [2425, '31号'], [3319, '轮胎平衡1377616238'], [9209, '健康热线18380582282'], [1032, '保福幼雅酒'], [1457, '宏汇园社区居民委员会'], [7678, '叶般米线'], [2856, '正宗'], [375, '大团永春五金商店'], [2697, 'hotel'], [2204, '537'], [5377, '田林'], [5524, 
'tel15805262678'], [5217, '靓宝贝'], [9593, '严选教育'], [2048, '复理疗生发养发白发转黑苗熏拔毒'], [8841, '无机油行驶成功'], [9384, '双节狂欢购'], [4829, '韩星机械'], [8522, '简单开启美好'], [150, '油量上飞新'], [8967, '门面出租'], [5686, '桃酥大王'], [9199, '皮子'], [9784, '21世纪不动产'], [2996, '15852998895'], [2006, '本店招工'], [925, '中国福利彩票'], [5065, '中建锦绣天地'], [1504, '招聘'], [2264, '新村第社区'], [6059, '静兔'], [4305, '电话13022441520'], [7494, '字画装裱'], [5730, '舒'], [9025, 'thesocialvenue'], [7078, '南村'], [2543, '10元'], [4125, '排水净化中心'], [3633, '典足浴'], [4131, '小杨车行'], [6751, '56档'], [3861, '汇鑫烟酒电话13778805048'], [4016, 'appliance'], [3619, '电话13365206048'], [8361, '8865号'], [7516, '开放时间周一至周五'], [1571, '24'], [8928, '千盛'], [31, '成记名烟名酒出'], [448, '18971164026'], [4958, '欢迎光临'], [9574, '88624612'], [1438, '闵行'], [8942, '中铁十五局集团有限公司云贵指辉部'], [4526, '姐妹'], [0, '邦佳洗衣'], [424, '平安促进会'], [1939, '蓝色经典'], [1070, '兰州'], [2604, '订餐电话13817458672'], [9040, 'hos'], [4646, '口游19'], [7676, '百门'], [9757, '订餐电话15890519807'], [2785, '茶场'], [8530, '2845'], [972, 'bin'], [1702, 'oppo'], [4493, '永兴店'], [3234, '环保线缆'], [9671, '欢迎品尝'], [6032, 'oppo'], [5731, '溪嘴佳'], [6395, '茶'], [7286, 'deli得力文具'], [8407, '金吉星'], [563, '若干名'], [1958, 'lyfen'], [4042, 'cooper'], [6939, '初恋滋味'], [4355, 'selfservicestorageboxbusiness'], [839, '招租'], [4617, '大理啤酒'], [8035, '电话136'], [9832, '13938725388'], [7574, '二层'], [5844, '服务社'], [6641, '东北家常菜'], [9164, '旺旺饭店'], [5158, '天能电池'], [9468, '现磨咖啡西餐茗茶果汁套'], [3129, '本店开业有礼活动3月29日4月4'], [4576, '武汉市'], [3924, '二楼雅间灯台电话15713829135'], [6437, 'dentalclinic'], [9952, '小大人'], [5121, '医牙'], [74, '十'], [5540, '德力西电气'], [6885, '多点'], [9510, '主营'], [3673, '焦娜烟酒商行'], [9227, '普及消防知识曹强防火意识'], [1211, '宿小美发'], [503, '汤包'], [8136, '阳'], [2643, '山羊王子羊绒'], [6832, '早点'], [5864, 'zhengxin'], [8189, '实有人口管理服务站'], [4844, '面馆'], [8019, '秘制盐鸡翅'], [1604, '花篮'], [4308, '04017665915940220781'], [3767, '货超市'], [663, '海纳百川'], [1264, '学生辅导'], [8340, 'lichangyuforensicscienceineneaicne'], [6659, 'shawbu1ldingofhuman1t1es'], [2902, '白发转黑'], [6845, '水光美白补水净颜排毒'], [4779, '恒安药房'], [1915, '爱车港湾象服务中心'], [3840, '1925'], [7653, '电话18018675686'], [1897, '骏童健康理疗水育龙'], [2720, '动补胎'], [8916, '羊肉汤拉'], [9588, '高级轿车机油'], [3119, '烟酒超市'], [5700, '中国东方航空'], [8638, 'appol'], [1616, '品制作胎毛笔水晶手足印'], [5098, '新鲜羊印'], [7196, '一件88折'], [8357, '金宇机电公司'], [2403, '订座电话80925926'], [5333, '新鲜大肉'], [8046, '开关插座水'], [1251, '中国移动'], [9309, '柴油电控维修'], [8846, '宜家'], [6789, 'postalsavingsbankofchina'], [4551, '上海原弓'], [6484, '诚聘'], [5528, '西窗帘'], [4041, 'mg'], [8557, '贵州黔润康'], [5093, 'r15'], [3159, '箱包钱包皮带'], [1475, '甜妞棋牌'], [1937, '电话13567972619溪龙网660366'], [8863, '公共厕所'], [1556, '12'], [9594, 'enter'], [3083, '138'], [6652, '金龙不锈钢制作加工中心'], [477, '水质部'], [2986, '请防手关门'], [8554, '轻松阳光健身俱乐部'], [9903, '宠物生活馆'], [6430, '变尚宠物'], [9777, '文华图书'], [3305, '网点编号'], [966, '电话1529850111115298502222'], [2547, '请上二楼'], [1056, '24'], [4072, '农家乐饭庄'], [3202, 'by'], [1171, 'firm'], [7691, '13783188693'], [2242, '吃食吃站休闲美食店是'], [7973, '伟冰装饰'], [3300, '宁大族冠华印刷科技股份有限公司'], [1399, '东华门直营店13485331126'], [1505, '无线上网卡'], [8610, '电话'], [7269, 'g59'], [8965, '13673422239'], [634, '肉夹馍凉皮米线饺子稀饭'], [2805, '服务员'], [1102, '3377288'], [6460, '润宝实体店'], [926, '33'], [2667, '32381085'], [4914, '太平洋经销'], [7652, '电话15527859859'], [382, '传媒'], [3862, 'shbjcom'], [8519, '74'], [2206, '价格表'], [4580, '阜成门配送中心'], [5675, '不锈钢门'], [9544, 'keylabofnaturalpesticidessndchemical'], [7211, '专业断桥铝'], [8365, '洋河蓝色经典上海乐农超市'], [3757, '中国体育彩票'], [1739, '空m诊e'], [9432, '王明房'], [8858, '北展北街'], [5510, '炒菜'], [1513, '红酒折扣店订酒电话152249'], [523, '东都'], [973, 'f5o'], [325, '自行车'], [4104, 
'蛋期土鸭煲76元只'], [9570, '十里洋小笼馆'], [6840, '新广告'], [3558, '魏王园'], [6245, '养发疏通经络'], [1845, '110'], [7654, '中信介绍所'], [3060, '传吾颜'], [938, '手机抵押'], [1423, '红都商务会馆'], [6632, 'dusto'], [3433, '特色'], [6344, '出租'], [8984, '公牛爱眼'], [4591, '名居'], [7573, '纹绣烫染'], [3778, '电话81493336'], [8751, '乐明轩'], [6031, '商电'], [337, '订做床上用品'], [2778, '宣美烫染'], [4323, '电话13615585188'], [556, 'boder'], [6215, '小军自行车电动车配件大全'], [5205, '迎泽家园店'], [6303, 'fishchips'], [6429, '我安示范苏'], [3795, '湖南省著名商标'], [5586, '双鞭鸡汤'], [5709, '可口可乐'], [3718, '开翔汽车美容'], [1874, '温州老窖特曲'], [4663, '现蒸'], [9173, '1607g005'], [4428, '惠民生活'], [2947, '鸡淼精品打火机批发'], [115, '1832'], [6936, '一品香油饼'], [7271, '24'], [7833, '马'], [8684, '招工'], [4139, '29'], [953, '国广中'], [6630, '招宝水'], [3038, '船模试验池'], [6589, '集成吊顶'], [7554, '电话13761733680'], [2712, '金旺源珠宝'], [1278, 'china'], [2353, 'china'], [6339, 'chinatelecom'], [583, '北京市南礼士路'], [9215, '车库入口减速慢行'], [7466, '养生美容美发'], [5004, '老子'], [2555, '木子家'], [8966, 'a5115floor'], [5011, '宣溪宽带指定报装点'], [1674, '农牛肉面'], [7193, '市容环卫监督管理所'], [7277, '电话15939305169'], [9143, '专业'], [2536, 'wif'], [7578, '稻花香'], [6150, '无骨凤爪'], [6451, 'washsystems'], [1738, '洗衣'], [6262, '电话13866121012'], [3743, '地址烽火机电大市场c区4栋910号'], [3481, 'songhe'], [954, '电话15927480649'], [9483, '山西桦达乐龄养老服务有限公司'], [1907, 'ct'], [5463, '广东省名牌产品'], [745, '红布租'], [7233, '财源广进'], [6241, '早930晚2100'], [3505, '卡唯伊'], [6673, '私人订制'], [5048, '造型'], [4846, '红宝石'], [3125, '项目经理部'], [6255, '美蛙'], [9534, '金鑫移门'], [9148, '晚餐'], [1707, '佰福味'], [8054, '铝合金不锈钢配玻璃'], [5225, 'bvgena'], [9120, '地址搬迁至平桥大道东段十八里小学路口'], [6937, '智能机'], [8089, '13661805305'], [6996, '丰收电动'], [4943, '见田设计'], [4446, 'bloy'], [427, '婚庆'], [2671, '铁东营业厅'], [9083, '保养'], [1354, '强强家电制冷维修'], [3484, '城市超市'], [638, '康吉利'], [3498, '祥吉光香'], [4664, '康华推拿'], [7896, '弘毅教育'], [9354, '兴佳'], [3334, '中茶普洱'], [2972, '防水防火零印醛环保家居'], [2191, '保健品护肤品日用品化妆品'], [7766, '唯镁门业'], [1254, '山西金泰源汽车服务中心'], [1113, '洋洋宾馆'], [2879, '自行车童'], [9527, '18980918780'], [791, '新日'], [5086, '饰'], [6876, '菇'], [7511, '太阳教育本'], [1421, 'threegunlivingconcept'], [1700, '技术领先'], [6617, '美的'], [4116, 'hua'], [3164, '班比尼的家'], [1401, 'cctv合作品牌'], [5932, '招聘'], [8794, '中国体'], [2163, '笨小鸭'], [1353, '锦江之星'], [6018, '高雄'], [6578, '鑫泽亿成建材'], [4019, '08001午0500凌晨'], [1142, '福家装饰'], [6636, '院'], [3914, '动力鸡车'], [5640, '88古树普洱'], [9408, '海之韵'], [8384, '极分类工作领导小组'], [7440], [4998, '市文物保护研究中心'], [1588, '重庆面馆'], [6336, '云芬公司'], [1701, '物业服务中心在a6区'], [5023, '润华电器商行'], [38, '地址c区3132号'], [5964, 'yanjiu'], [48, '小民排挡兰烧烤'], [8565, '芒果'], [183, '建材内购会'], [2565, '9家配天能电池'], [5637, '代网吧'], [6925, '顺源药房'], [232, '爱上七佰岁'], [169, 'ile'], [362, '水电气'], [8834, '眼保眼镜'], [9393, 'laundry'], [8268, '沙镇杨家瓜嗒百年老字号'], [867, '69'], [4653, '海马汽车'], [6706, '乐园五金水电'], [3865, 'loton'], [4118, '41'], [7320, '动物医院'], [8959, '订餐电话18839365300'], [4382, '上海安得物业管理有限公司'], [7300, '蚕家'], [2830, '名烟名酒'], [8836, '山丹丹'], [5054, 'picc中国人民保险'], [6992, '龙牌'], [712, 'sna'], [6954, '美团'], [2759, '申天下'], [5622, '户'], [8614, '纳百川药材站'], [3085, '易'], [1159, '无限极'], [7867, '美发连锁'], [9394, '3元'], [7339, '福'], [6108, '宝校'], [9946, '托管'], [131, '甲沟炎'], [6663, '健康热线13915100907'], [6817, '面'], [5337, 'chnt'], [8971, '东山净水'], [8548, '主营烟机灶具净水器集成灶热水器电话13805266945苏中专卖'], [6221, '942'], [3427, 'fitness'], [3238, '金盾安防'], [3426, '垂姚村'], [1341, '古世'], [9098, '15285136586'], [1895, '泸州老窖官方授权旗舰店'], [2324, 'enlight'], [1683, '洗鞋'], [4444, '主经营护肤品彩妆日用品食品竹纤维纺织品'], [6234, 'street'], [6418, '川沙店'], [5978, '设计'], [9833, 'a'], [3245, '憨小猪'], [4272, '中药泡脚按摩修脚40元'], [9690, 'r'], [3393, '木真'], [5447, 'podinn'], [7873, 
'支部委员会'], [7705, '劳保用品小五金电动工具'], [9947, '四川嘉州'], [5245, 'iang'], [7505, '亿'], [3490, '歇'], [393, '锦城帅小厨炖汤'], [5242, '鸟腿饭'], [5091, '7人'], [9280, '各种炒菜'], [6413, '联友'], [6314, '斯达克助听器'], [8249, '坐便浴柜钢盆花酒电话138346'], [7803, '源学生图'], [101, '水管安装'], [640, '装订'], [1166, 'of'], [5914, '国铭液压油缸'], [6718, '配匙'], [2600, '热线电话02029047003'], [1729, '渔具'], [18, '2468'], [7624, '春饼'], [4725, '砂锅酸辣'], [1317, '益力'], [4472, '4008111659'], [2308, '广州市海珠服装商会办公室'], [5753, '哈尼宝贝'], [2793, '18932970307'], [3730, '美发店'], [3952, '地址汉阳区琴台大道505号国信新城1期电话02762436342135541'], [8138, '脚踏实地'], [3267, '刁'], [6897, '广东省外语艺术职业学院'], [1076, '150'], [9721, '批发晾衣架'], [8890, '圆通'], [7576, 'b129号'], [8315, '智能'], [9146, '治'], [3224, '成套设备电能计量设备等电力产品'], [2975, '编发'], [4729, '医药物流'], [2307, '农家吊锅'], [3074, '蒸饺'], [4250, '婚礼策划'], [674, '烧烤'], [2109, 'laguaiya'], [8510, '祝'], [3560, '艺星幼儿园'], [2869, '郎牌特曲'], [3697, '联系人赵珑云电话'], [2451, '全景'], [2437, '广州市越秀区流花街'], [2784, '丽水百货批发部'], [4726, '小华面馆'], [1194, '减肥瘦身束形美胸护胸理疗'], [9154, '酱辣风味'], [4167, '术术点亮成长'], [3094, '鸿平五金机电'], [6550, '水品'], [6791, '崇德水族'], [9641, '乐'], [84, 'hiuannoodles'], [2528, '湾运居'], [4945, 'health'], [792, '营养早'], [5929, '地址3号楼c11铺电话13662526410'], [3817, '青城饭店牛羊肉部'], [5375, '水材料直销'], [6901, '浦电路店'], [8366, '美达印务'], [1536, '烟酒'], [9515, '新二华合联合素工厂特酒店'], [7651, '水果干果'], [493, '选择'], [1149, '联系电话13701437143'], [6980, '期'], [160, '室内清洗'], [8273, 'niao菜鸟驿站'], [8420, '扣板地板批发'], [9342, '芝根芝底'], [4420, 'microvolunteerfirestation'], [8550, 'ryh'], [4476, 'tel15727379996'], [458, '开启'], [4085, '咨订房热线58359555扫黑恶净坏境促柜'], [3554, '孝昌昆泰专卖店'], [7826, '312817718635570616'], [6426, '为人类服'], [8957, '招聘'], [5694, '533'], [464, 'skisea'], [8488, '顺泉汽车修理中心'], [6854, '输液'], [3667, '旗舰店'], [4666, 'srcb'], [2075, '宝生园'], [4492, '上海银海阀门制造有限公司'], [2190, '名点'], [6488, 'newilfe'], [5515, '专业烫染机构'], [763, '广东省著名品牌'], [2479, 'demask'], [5704, '刘生'], [1507, '如家酒店集团'], [63, '收购抵押'], [2293, '外送电话18367436651'], [2502, '苏北龙虾香辣蟹特色凉菜家常炒菜'], [748, '长城宽带'], [8348, '洗脚'], [5259, '619'], [8376, '泰兴市华丽灯饰'], [6606, '正漏贴'], [3456, '有限公司'], [1788, '广东外贸厂站'], [3685, '中国质量认证中心华东实验室'], [6169, '6f'], [4848, '649'], [6168, '经营部'], [1327, '1353'], [9383, '芝根芝底'], [5076, '全国连锁'], [4521, '新航道'], [9904, '满宝馄饨'], [9982, '馨居墙纸'], [3290, '买新车换新品比比'], [6102, '驻北京科影保安队'], [6144, '全球精选生活好物'], [2492, '百年老店哈森路分店'], [714, '手机13527719229'], [2949, '伟龙亚行进口各车展示厅'], [4655, 'postal'], [580, '28号'], [7752, '板栗烤红县山楂电话15158926181'], [732, '黎明服务站服务热线95079'], [8908, '235'], [4795, '5'], [79, '年'], [4255, '正新鸡排门市价15元现促销价10元'], [1530, '金凤厦'], [3423, '碳纤维电暖'], [4707, '物业管理服务中心'], [5071, '美尚唯品'], [676, 'j做各木'], [6682, '专营曲轴缸盖发动机配件'], [1257, '加工羊绒衫'], [1094, '幸福365'], [7480, '庆典'], [4169, '魔力卡奶'], [7218, '迎春单身公寓'], [5938, '万立石膏线'], [2057, '新车交车区'], [4003, 'b'], [58, '二'], [2578, 'iciliina'], [326, '最高奖金100万'], [5712, 'aldola'], [3659, '格力'], [8149, '牛肉面刀削面炒面'], [8446, '大华玻璃'], [4864, '首家砂煲饭'], [2334, '野徐干洗店'], [5290, '13223025669'], [2556, '山'], [5638, '雅格手电筒'], [2494, '书乐工业园区'], [6902, '日丰管业'], [414, '雨润冷鲜肉'], [4682, '岭'], [9896, '物食品'], [6474, '贝贝乐'], [5014, '052386964488'], [6297, '胖子平价南北干货'], [68, '127'], [4335, '方盗报警器材总汇'], [6699, '欢迎光临'], [4248, '鲜花水果超市'], [1639, '代发'], [4350, '7891919'], [9923, '喝王者'], [7606, '园'], [442, '肠粉'], [6889, '世外桃源心清净'], [2435, '阿华斋店'], [4054, '皮'], [7251, '聚佑社区商'], [1867, '修理'], [5617, 'seekingtranquilityinurbanlife'], [9111, '84805189'], [5147, '泰鹏绿居设计装饰'], [918, '好德'], [2121, '电话18737681035'], [9727, '平价果行'], [7683, '家庭食堂'], [6212, '手机'], [7970, '营部'], [1923, '荣星'], [2837, '中国驰名商标'], [2653, '凤凰地板'], [7497, 
'卷国区西溪街道文三北4'], [9379, 'c'], [8720, '维修'], [520, '臭豆腐'], [4111, '高粱散酒'], [6553, '沙宣造理'], [5567, '锅圈'], [121, '厂'], [1946, 'a座'], [6357, '全国特殊教育学校'], [600, '佰'], [4199, 'welcome'], [7506, '美容养生'], [4395, 'communitypoliceoffice'], [2552, '光芒厨卫电器'], [4508, '益雪气'], [195, '招聘'], [5757, '美食城'], [2846, '点'], [1439, '盛火广告'], [8359, 'hx'], [8657, '精品女装'], [4714, '地址青浦区白鹤镇纪鹤公路5522529号'], [2250, '54'], [8764, '北京用达安康商贸有限公司'], [3355, '精压器变频器'], [6582, '15664002662'], [3806, 'honorvivo'], [2903, 'p空'], [3132, '同升商行'], [8946, '玫瑰花坊'], [2444, '驾驶证'], [8507, '驾材'], [3553, '百利'], [41, '13924089477'], [1240, '电话2238189'], [6334, '红艳便利店'], [2126, '电话15511970337'], [1769, '保健'], [7172, '地旺宝中中州市场南区5排13号'], [5814, '便利店'], [3216, '2f'], [9802, 'tw'], [3409, '最厚'], [2814, '订餐电话134826909'], [7112, '银都驾校'], [5095, '牛牛烧烤'], [3622, '北渔路店'], [1479, '老油坊'], [2195, '经营范围各种高中低档窗帘'], [1305, '东风东路'], [4376, '东南社区居民委员会'], [4826, '锁具大全'], [2893, '25号'], [2673, 'a'], [7044, '处康'], [8725, '18841206007'], [3915, '钢木门'], [2297, '甲尔申皮鞋'], [9536, '京海顺兴食品店'], [1431, '4g'], [1548, '名车售后处'], [7767, '109'], [856, 'gj018'], [17, '电话2679663'], [1277, '利宝陶瓷'], [6152, '海信名酒行'], [3566, '大之居民区'], [5924, 'ju'], [8178, 'zebra'], [6893, '春天理发店'], [3041, 'return'], [2430, '晨光文具'], [4180, '培训点'], [5036, '地址合肥市'], [4018, '2238'], [2325, '2300'], [1093, '送风白钢罩及电气焊'], [3268, '保香免费提供'], [4843, '兰州牛'], [2573, '生活馆'], [2844, '门另有4米剪板折弯'], [7001, '上海市田林第工中心'], [1368, '69696'], [3604, '价优惠房间'], [56, '承接婚宴会议'], [8805, '至慧学堂'], [9737, '平洋'], [8152, '金'], [597, '司'], [3136, '全国加盟热线'], [2742, 'giant'], [4669, '英泰灯饰会馆'], [5486, '梦想起航'], [2399, 'mi'], [2291, '好'], [9856, '百岁山'], [6368, '祝君平安'], [7694, '申宗'], [2167, '中华美食'], [6767, 'kunda'], [4120, '海利304工程部'], [6576, '雅迪电动车'], [8351, '13578031819'], [1531, '内'], [87, '名表'], [4545, '霍尔塞特增压器'], [3503, '高端男装定制'], [1716, '面'], [5057, 'a1铺'], [9283, '电话'], [2063, '江汉油田'], [8493, '科亦美集成墙'], [8835, '净水机'], [682, '松滋鸡甲鱼'], [3888, 'p'], [6369, '沙冰元宵'], [1345, 'd区'], [2024, '依江春'], [1555, '捷通汽车销售'], [4507, '铺'], [3406, '集乘科技'], [5773, 'bq'], [9957, '15090601303'], [6694, 'shang'], [5751, '813920'], [8654, '电话035166924841303800084418635103036'], [6181, '窗帘布艺'], [6367, '来沪'], [7143, '施南府'], [769, '良子足浴'], [4226, 'haier海尔星级服务中心'], [1861, '团结五金'], [3485, '香洲店'], [2791, 'medical'], [7884, '专业焊接不锈钢塑钢'], [2836, '美容装潢区'], [7994, 'tata木门'], [4385, 'cai'], [3810, '莫家牛奶甜品'], [616, '鸿运装饰'], [5283, '泰兴市泰通公共交通有限公司'], [8552, '此房转让'], [7018, '电脑配件'], [9352, '斜土路2897弄50号'], [5178, 'pralseapartment'], [9994, '萧记早点'], [1973, 'en'], [3515, '国货'], [2028, '13925189202何先生'], [9602, '绿缘'], [7213, '大刘卫生室'], [219, '收购'], [1580, '西安名吃'], [9699, '小陈羊肉馆'], [6873, '神灸推拿保健电话63046326'], [4364, '蚕桑科普基地'], [3145, '给你一个绿色的家'], [3075, 'd'], [9336, '专注电动三十年'], [2918, '美味炸鸡'], [1890, '中国体育彩票'], [3820, '天地壹号'], [8700, '营业中'], [1598, '本铃'], [1199, 'interstatechina'], [8938, '全国no1609'], [6164, '面'], [6728, 'cheblo'], [159, '时令小炒火锅干锅'], [7951, '彩发电裤包接打等出品口工'], [5303, '航空售票'], [2151, '配件'], [2783, 'fin'], [7460, '特色手工粉鸭血粉丝汤时尚水果捞花甲大咖'], [6390, '社会主义核心价值观'], [7522, '甘肃总代理8502998'], [8900, '电话15890537667'], [3466, 'hatalinsurancegroup'], [8015, '修胎打黄由'], [8786, '中国电信'], [8896, '手机卖场'], [6941, 'joyou中字卫浴'], [2956, '福'], [5448, '华发租售中心'], [7325, 'executiveresidences'], [3907, 'klp'], [2475, '圣洁蕾诗'], [8534, '发'], [4960, '电话34422871380644店'], [5994, '出入境签证护照采集点儿童写真复印快照'], [5442, '旺足'], [6469, '美团外卖'], [3621, '沐丹阳农资'], [8039, '原生态'], [3061, '庆鲜面条'], [9254, 'chinamobile'], [2253, '迎光临'], [6005, '出租脚手架钢管架'], [1461, '自助银行服务'], [7795, 'beijing'], [8732, '石磨豆浆油茶'], [1363, 
'足'], [3486, '艺人'], [7527, 'blanoicecream'], [8392, 'ctm'], [4997, '友华'], [6083, '好又省专业汽车位'], [7148, 'd86'], [1299, 'pers'], [7382, '印象成都欢迎您'], [7964, '茶'], [7110, '新亮点'], [8094, '印花'], [7749, '和田'], [6978, '果庄'], [5151, '热饮'], [2461, '双碗'], [8710, '保平安'], [614, '面条'], [9564, '假工寒假工实鸡理'], [5189, '中原房'], [1980, '鸭蛋'], [380, '加盟热线4000896911'], [7144, '摩卡'], [3138, '38'], [981, '姐妹坊'], [7707, 'tel13914540950'], [9492, '完美'], [2958, '电话15926352218'], [1270, '主营铝合金门窗阳光房移门办公室隔断不锈钢门套宣传栏'], [578, '1366'], [8817, 'fabaijirnocom'], [109, '只给全爱'], [2519, '品味餐饮管理有限公司'], [8448, '主营生品礼花'], [6852, '电话18901438368'], [8997, '1898262657313882683987'], [8641, '为农村'], [7095, 'zcinema'], [6724, '81845655'], [1030, '火警电话119'], [6932, '摩美护中心'], [9253, '配件'], [1666, '小军电器'], [9345, '爱儿乐'], [1852, '专线'], [9824, '天夫地'], [7279, '社文'], [2374, '米行'], [8163, 'baby'], [3649, '风行茶饮'], [7175, '电脑手机'], [935, 'tel18970868961'], [6401, '中心卫生院'], [7915, '文'], [6717, '婚车'], [9893, '老字号桂林米粉'], [1062, '销售各种规格灭火器消'], [8668, '无添'], [7360, '报名热线13564457038'], [3087, '碗约'], [2566, '富强商店'], [24, '远成快运'], [315, '心城艺和会仪之行'], [1244, '佳优惠超市'], [5184, '长城人寿保险股份有限公司'], [9062, '折扣店'], [2338, '有限公司'], [1147, '佳家房产'], [6193, '奥州贵酒'], [2415, '报名热线'], [2178, '公牛'], [9939, '可诺丹婷美颜美体'], [2973, '二手车改齿形'], [2406, '营业时间'], [1490, '185'], [9017, '二0一二年五月一日'], [433, '社会'], [1349, '967'], [9139, '电话54160822'], [8643, '电话4266746'], [5718, '电话2802913101804898'], [1453, '花花牛'], [2243, '重庆鲜面条'], [6528, '68'], [8121, '949'], [4817, '店'], [2749, '20180618星期一143320'], [9244, '翻新'], [5424, '广州市体育局海珠区体育局'], [5224, '睿通讯'], [9169, 'express'], [9380, '箭牌'], [7910, '上海富都物业管理有限公司'], [4812, '纯手工打程'], [6629, '鑫磐招标'], [6548, 'h'], [4403, '猪蹄'], [3758, '金苹果乐园'], [5186, '管机'], [7697, 'istheline'], [8681, '地岐香'], [5538, '足浴刮痧拔罐'], [3504, '威士雅'], [6407, '吉安市青原区速捷物流有限公司'], [5322, '咖啡茶艺'], [4271, '农家小炒'], [5706, '热烈庆祝绥宁海尚阁会所隆重开业'], [7091, '拉'], [2379, '佳佳美家具城'], [5534, 'up'], [1011, '信阳申通快递'], [481, '24小时'], [8844, '水电改1932号813460733956'], [280, '北方饺子'], [2074, '狐妖'], [2219, '78'], [8130, 'oppo'], [5379, '熟练技师数名'], [1815, '梦想桌球'], [323, '新居民自我服务'], [8925, '步步'], [6166, '至宝'], [4978, '和定意休闲会所'], [542, '扶手地板木门'], [6547, '身边的'], [6719, '中国驰名商标'], [4393, '异形'], [4397, '机构'], [4618, '中西医结合常见病多发病诊治及慢性病的调理'], [2396, '新吉无总儿童创意艺术室间'], [5028, '米'], [4613, '电话13885277412'], [8788, '流花街维护稳定及'], [1466, '回'], [9037, '服饰系列'], [297, '妇科'], [8857, 'communityhealthcare'], [9106, '吉星高照'], [6756, '菜鸟驿站'], [2644, '电话13720233361'], [2424, '地址上海市闵行区中贵路88'], [9651, '麻将'], [9132, 'welcomeojre'], [1787, '五金工具电线电料油漆涂料洁具卫'], [4145, 'enaul'], [7684, '元初出口精选'], [1757, '反玩杂项'], [1503, '柴油机'], [1209, '中国移动'], [2090, '领'], [3086, '釉宝'], [9984, '清'], [5295, '由由二村'], [1617, '法律服务'], [6345, '涮鱼馆'], [1691, '强国旅江西首家旅游上市企业五星级旅行社'], [8310, 'christopherwayne'], [353, '0589'], [8634, '锦绣服饰'], [2891, '林永五金'], [328, '复印'], [103, '8301200'], [2907, '力'], [2383, '牛肉50元斤'], [1665, '小雪羊绒店'], [3461, 'kfc'], [9122, '炖品螺蛳粉电话18977825739'], [982, '艺花'], [1841, '刘公粥铺开业了'], [2718, '业工程有限公司'], [4738, '禁止泊车'], [6295, '中洁管安装专用管'], [6710, '你在哪儿'], [7547, '品艺新文'], [3320, '电话'], [7064, '华景挖机配件'], [6539, '完美服务中心'], [4336, '巴开装修动'], [8986, '丸美美容spa会所'], [3893, '5822'], [8743, '33'], [9445, '104'], [1413, 'monarch'], [2446, '沙画积木'], [127, '汤阴分店'], [1474, '口'], [2985, '邮编210028'], [1099, 'linglongyujia'], [7154, '美味新升级'], [1339, 'shanghaihaozhongautomobilerartscoltd'], [5145, '百年老字'], [567, '沙县小吃'], [248, '西塔墨苑'], [8947, '炒饭'], [3027, '联系电话'], [6268, '电热器到货'], [2166, '雪中飞'], [344, '10'], [1671, '社保刷卡'], [7716, '吸尘器记录忆'], [8797, '风风行'], 
[2118, '心'], [2330, '时尚鞋'], [2349, '白鞍山市巨人数控机床厂'], [8644, '开关插座配电箱离石电线电缆'], [2702, '原厂技术'], [8527, 'affairs'], [1484, 'eti'], [4368, '酵素浴'], [9191, '会所'], [3837, 'le'], [2747, '北方天宇'], [7617, '气焊二保焊'], [1795, '日本扶商标式会社'], [3436, '新街口北大街'], [5422, '途虎养车'], [510, '有限公司'], [8141, '闲人免进'], [2736, '商务港'], [6524, '二楼'], [7986, '微运力'], [8097, 'chegongzhuangdajie'], [3078, '浙江远东电缆湖北总代理'], [4560, '高中低档化'], [435, 'chinamerchantsbank'], [6967, 'k生活时尚'], [4776, '电话17605231557'], [1325, '开心面馆'], [6286, '小炒'], [106, 'b'], [8618, '修斯林烧烤涮'], [6506, '电话58018111'], [1772, '电话136'], [7295, '中国共产党'], [4185, '18元个'], [7948, '啤酒小菜手工水饺'], [6827, '玉林市天安'], [4024, '红太阳'], [3316, '园'], [6733, '福利彩票'], [6154, '品水果店'], [6037, '一手烤锅面'], [9203, 'basicelementcoffee'], [3454, '科技公司'], [826, '装饰泥凝土展示体验中心'], [1722, '关爱之家'], [6468, '鲜源生鲜超市'], [3036, '武汉热干面馆'], [7149, '广州市飞马科技开发有限公司'], [9782, 'family'], [7821, 'tel3161118'], [9867, 'welcometofoshanchem'], [2266, 'modelling'], [7659, '虾'], [3639, '欢迎'], [5105, '妇女保健室'], [8291, '行天下旅行社'], [5551, '福地烟酒超市'], [7582, '电动汽车'], [832, '植物染发养发'], [1429, '中国烟酒'], [4831, '名师制作'], [4294, '电话18329527145'], [8647, '美味热线2291'], [864, '风格汽车维修服务部'], [5647, '2006年3月'], [8832, '金星'], [214, '治安巡逻'], [5995, '日丰管'], [100, '210'], [2953, 'joincli'], [5667, '7o商旅宾馆'], [1415, 'x3'], [7629, '曹辉'], [4414, '2666'], [8621, '海州区利民社区'], [756, '钢花村街南苑社区文体活动场所'], [4038, '公牛插座领导者'], [6689, 'wdy'], [1523, '请上二楼'], [6396, '春季新款'], [741, '入口'], [5220, '社交电竞'], [671, '471'], [514, '怡佳仁休闲零食'], [9714, '地板'], [3687, '馅旺鲜手工水饺'], [5629, 'shanghaihealcarepharmaceuticalcoltd'], [9409, '7986316'], [9901, '南'], [1949, '127'], [7791, 'vivid'], [6308, '批发墙纸墙布'], [3070, '4000027886'], [7138, '箱看戏机对讲机电子狗平板电脑电子类产品'], [7243, 'real'], [9711, '郑州交通运输集团有限责任公司'], [4443, '家'], [8314, 'sn'], [5401, 'gym'], [5839, 'dc'], [5293, '华宁足浴保健院'], [3960, '地'], [652, '节'], [216, '鲜奶冰淇淋一冷热饮甜品'], [2384, '名犬买卖'], [9362, '2018'], [536, '江'], [8328, 'gs'], [3013, '鸿运美食'], [3339, '豆浆'], [3170, '3层'], [3529, '少儿舞蹈英语'], [2470, '美发造型'], [6419, '中国威士总达人会'], [4701, '的'], [8929, '0305'], [7809, 'hl'], [3925, '四公子'], [307, '本地专业防水'], [2352, '优客快捷酒店'], [6466, '倩衣绵绵'], [6562, '中国舞'], [5123, '加盟热线1516'], [7406, '双鼎名烟名酒'], [5465, '两目渔具'], [8330, '主安经全石牌圈线室上精工雕刻各种'], [8800, 'es汤普森陶瓷'], [5530, '运城饭店'], [8883, 'beverage'], [788, '佬土世家酒专卖'], [8894, '烟'], [6567, '快捷筑牌门'], [2660, 'ta'], [7827, '老兵酒'], [2663, 'jianqiao剑桥'], [342, '唐立批发行'], [5668, '油发电机电焊机水泵及配件'], [3516, 'lacesar'], [7764, '麻辣粉凉面米皮擀面皮稀饭'], [5919, '29'], [2097, '尚品时尚男装'], [4900, '热线'], [4309, '重固镇综治工作中心'], [5409, '槎龙第一经济合作社'], [3411, '大众驾校培训处'], [8327, '13044287123'], [863, '上海上工物业发展有限公司'], [7118, '有的公共英语等级考ue'], [5353, '超威'], [9895, '璐凯帝'], [3637, '编辑部'], [5573, '真'], [7700, '联系电话13768186225'], [2362, '面馆'], [7940, '东洋鱼庄'], [2832, '地库出口'], [5935, '猪脚饭'], [4, '及各种配件'], [8905, 'loepspvcsh'], [9160, '金冠养生'], [6999, '香烟啤酒饮料百货'], [5268, '联系电话15861078819'], [9363, '艾奇恩贝'], [2224, '电话13313939187'], [4564, '美团'], [9880, '端鱼肉时携手食味车'], [7274, '上海纺织'], [2351, '鱼牌'], [2744, '全球安全门领导品牌'], [7699, '电话13163269551提供送货上门'], [5247, '创意视学餐'], [7258, '美容'], [1657, '可'], [3562, '本店担保承诺'], [4222, 'bc'], [5306, '品'], [3856, '美食广场现正招租'], [2211, '新日电动车'], [5989, '宫雪国际'], [2490, '公正法治'], [5238, '电话81071812'], [34, '平安人家'], [5106, '精锐教育'], [858, '开关选泰力'], [3460, '电话13939587869'], [6704, '赛维健康洗衣生活馆'], [8658, '简约服饰'], [7776, 'b'], [9697, '福建石业'], [3709, '生意兴隆'], [7314, '日用'], [1790, '包子粥'], [1473, '年路分店'], [4137, 'za'], [1342, 'sockscacabinet'], [4067, '糕月饼'], [1653, '国'], [4899, '刷机解锁'], [5908, '24小时营业'], [8388, '瑞香茶叶'], [9801, 
'艺术中心'], [8571, '业盲人按摩'], [726, '东兰新村第二居民委员会'], [8489, '02013'], [1282, '意'], [3976, '乐经营部'], [7859, 'top1oil'], [7390, '15717177398'], [7282, '乡村浴室'], [2273, '工旗涮串'], [4007, '必胜寄卖'], [296, 'china'], [8861, '平安物流有限公司'], [4890, '告示'], [9628, '木桶饭'], [8242, '宏运铝合金'], [4965, 'wetflook'], [8952, 'intaibao'], [2876, '兴发'], [2200, '世界'], [8830, '电话13053455998'], [1831, '瑞丰广告材料'], [1887, '供二批'], [6473, '地址交警大队向南180米'], [8374, '文厂'], [8878, '15835733'], [9735, '丰象地板'], [6631, '95519'], [1245, '东方纽约'], [6545, 'lashawarmp'], [8305, '英圆纽百价'], [4820, 'bicyclepark'], [2227, '清汤羊'], [4518, '美容肩颈理疗淋巴排毒暖宫肾疗拔罐艾灸'], [9247, '电话13358090090'], [2629, '接诊电话13839433052'], [6849, '34'], [3265, '招聘'], [9261, '零售'], [700, '中信房方'], [7249, 'g5'], [3510, '中国电信'], [9498, 'cm'], [8144, '地址江苏街38幢108109号电话882'], [8364, '消火栓'], [946, '花花公子'], [2711, '农舞厅'], [6805, '聘'], [5981, '幼儿'], [9114, '电器维修'], [8405, '顺切星'], [6601, 'weicome'], [387, '星级享婴低档消费电话86967366'], [8436, 'dro1b6室'], [8091, '美丽加油站'], [5302, '绿源'], [8459, 'hairdressing'], [3030, '13791124317'], [3838, '换油'], [288, '烤全鱼面皮米线'], [8589, '江'], [6667, '自动变速箱专业换油保养'], [9435, '微型消防站'], [6301, '川卤味'], [9670, '电话13193607058'], [8685, '国际'], [8379, '绝对纯洁'], [3250, 'tsingtao'], [1440, '电动工具'], [8796, '梯'], [8083, '60分钟'], [5292, '专营'], [9715, 'nailbeautydtudie'], [1879, '泰州市鸿雨消防器材有限公司'], [2563, '进修学校'], [4489, '赵巷物业收费处'], [1470, '大瀛'], [9011, 'kingbul'], [1620, '建坤卫生室'], [7680, '金纸生团和炒7'], [1132, '汽车'], [1411, '上海幸塑胎制品有限公司'], [3908, '牛奶'], [4220, '诚信有家超市'], [1476, '西丹精品木门'], [2993, '忠信养殖专业合作社'], [9882, '中国电信'], [1409, '地暖管地暖分水器球阀水龙头地漏软管剪刀热焙器'], [4123, 'financialstreet'], [9804, '有口罩'], [5312, '文具店'], [9430, '招和'], [4316, '39元'], [7543, '金虎便利'], [9397, '牌口腔美牙专家全国连锁'], [1050, '饭'], [1842, '盛开业'], [9913, 'yam8675502203'], [6406, '森林手机维修'], [1613, '名视打'], [7769, '头原布婴儿帽子婴儿洗澡巾床单被罩联系电话13598632499'], [4940, 'nationalseedcenterofexperimentaidogc'], [6408, '鲜会'], [8821, '黄焖鸡皮设鱼'], [1034, '会雯起名'], [1756, '超大派'], [5432, '世纪华联超重'], [4879, '2018年6月148208'], [2376, 'wushankaoquanyu'], [5831, 'youyou2cun'], [8236, 'h'], [7860, '地医有科技医学图像信息研究部'], [529, '12'], [7209, '港'], [1410, '选手机'], [8473, '18981185515'], [6538, 'threegunlif'], [9496, '一心一客'], [3920, '自制香'], [1177, '美利达自行车'], [9827, 'tianmei天义'], [4396, '鼎尚烟酒'], [92, '山东鲁霸商丘总代理'], [9470, '足疗保健'], [878, 'dm星贝儿童馆'], [5545, '万家福卷闸'], [9126, '承接各种夏'], [1622, '2参加跳舞者请星期一三五身'], [7602, '美容美发用品'], [262, '永信达'], [6346, '冰煮羊火锅'], [8485, '95079'], [6227, '无美味不了吃'], [1464, '世达工具终身保用'], [1645, '98号1栋5号'], [8228, '方科'], [3721, 'casa'], [9829, '金石汽配'], [3880, '学苑图文'], [7383, '丽车行汽车美容装饰'], [3790, '18721067201'], [6224, '欢迎光临'], [7750, '清边光'], [6587, 'joinus'], [764, '小炒'], [7059, '主营各种名优奶粉奶瓶婴儿洗护用品'], [9972, '包'], [2254, '拔罐'], [3474, '批发彩钢扣板门头'], [4375, '花钵陶瓷'], [8165, 'jih'], [6848, '中铁国际集团'], [1063, '棋牌'], [3592, '三人行'], [6501, '稀土竞速馆'], [8177, '医院'], [277, '空调'], [9681, '海尔旗下'], [5389, '永红白铁'], [3005, '内设台球桌游影吧卡拉ok棋牌等'], [6210, '门'], [3572, '毛尖大红袍铁观音普洱等承接87219688'], [5013, 'orystalstory'], [5454, '锦上添花'], [8371, '生活馆'], [1145, '全天'], [9522, '订餐公众号foodmade18071063005'], [8253, '四'], [9746, '联系电话'], [5274, '上海标准件机械厂'], [530, '书记工作室'], [1000, '电线电缆等'], [9621, '自助银行服务'], [2367, '品'], [9739, '拉'], [7782, '15298173003'], [6497, '千居房产'], [986, '88'], [501, '莲宝路社区卫生服务站'], [3189, '联系电话13110118098'], [6117, 'bearb'], [8577, '开锁'], [2608, '康益眼镜'], [4242, '理疗预约电话134343947'], [8142, '新源商店'], [4040, '事故理赔钣金喷漆汽车保养汽车维修联系电话15216638212'], [7229, 'royalchic'], [4954, '电话13903298716'], [8779, '手艺手工打鸡眼'], [4657, '服务单位监督电话15分钟8'], [7604, '招工'], [310, '墙纸'], [9562, 
'coop'], [1458, '铜'], [7283, 'scckgad'], [5018, '电话507773815967206795虚拟网666795'], [620, '中国电信股份有限公司上海浦东电信局'], [9792, 'ltd'], [1771, '112'], [49, '柔情'], [8922, '499'], [2382, '著名商标'], [4252, '大胖涮锅'], [6644, '圆通快递'], [3385, '电话02081789420地址广州市增槎路522号118档'], [241, '摩托车'], [8276, '楼上单间出租'], [4051, '家的口的苏州面馆'], [8041, '平价'], [4170, '万寿路街道工委'], [6458, '短裤68元'], [5966, '午1400至1630'], [3831, '厕所'], [711, '上海市人民政府颁发'], [398, '订做各种时装团体服装'], [7021, '中国铁建'], [6745, '五金日杂'], [9231, '电话180093208977725278'], [6949, 'r15'], [6933, '7223'], [645, '巨'], [3946, '东晓南路'], [5458, '铝合金门窗制作封闭阳台不锈钢防盗窗隐形纱窗金刚网电话15839481614'], [6010, '忠兴数控'], [6153, '大众汽车'], [8570, '上海市长宁区城市管理行政执法局执法大队'], [5460, '养生之道'], [72, '告牌电焊'], [8999, '回春大药房'], [3065, '主营烟酒副食日常生活用品单习用品大虾'], [1532, '瀚林源烫泉'], [6688, '麻辣鸡头'], [8197, 'express'], [7232, '峰峰酒家'], [1804, '儿童平衡车体验店'], [2930, '珠江宾馆'], [5863, '百房产'], [880, '0898'], [6670, '依恋足疗保健'], [2687, '国医堂'], [1960, '核心'], [9978, '兴根汽修'], [7520, '山东炒货'], [9251, '小时'], [3242, 'communityheaithaucarenstation'], [1901, '岳伟羊'], [9999, '乐之家'], [5445, '装潢美容精洗饰品保养改装轮胎电话1310992'], [4811, 'weddingdress'], [8677, '党员群众服务中心'], [6540, '出租'], [5772, '60'], [6111, 'coffee'], [6116, '上海银行'], [3646, '空座机理连机配件耗材批发及维修'], [3209, 'tel0234936792618580866'], [4351, '地板墙纸上海红日厨电'], [1378, '铂金通路'], [7419, '平价'], [868, '换油中心'], [2757, '代'], [384, '牛肉汤'], [9410, '空调房双人间'], [2044, '墙宝岛纸'], [4136, '龙达电器修理部'], [1659, 'hc'], [300, '瑞花水果店'], [8395, '电话微信1375177167815818839616'], [2669, '光疗'], [4453, '珠江纯生'], [2124, '进店满488元送668元'], [929, '125110'], [2368, '全国连锁胶南二店'], [6828, 'forstudentsmajoring'], [3369, 'lot'], [2828, 'ajiatechinology'], [4922, '169'], [5663, '都市美厨'], [6103, '国石油'], [2593, 'nocz535b1604005'], [6908, '专业'], [6687, '城市民'], [9290, 'of'], [3131, '生活3686217733'], [1843, 'gree格力'], [1418, 'qiaoyushouji'], [4284, '川菜馆'], [7321, '进口'], [1404, '拉'], [9950, '水煮早点粉面汤'], [6997, 'police'], [2754, '勾帮子'], [3009, '电力工程机房工程安防工程装修工程'], [1356, '胡辣'], [35, '禁止'], [2014, 'vivo'], [1047, '快'], [6311, '亿通汽修'], [8640, '吉'], [2340, '38067777'], [8171, '旺铺出租'], [1862, '铝合金不锈钢塑钢及玻璃15952617988'], [6600, '影花艺婚庆'], [7355, '鸡蛋汤'], [1955, '电建阀门有限公司'], [1442, '生活超市'], [5015, '1487'], [5791, '教师简介'], [3444, '禁个'], [3206, '陶瓷十大品牌'], [2160, '营业'], [4381, '纤边改活'], [3727, '青柠萌美术教育'], [487, '山东酵子馍'], [4838, 'cqeg石'], [6316, '童贩减'], [9105, '烟酒百'], [834, '海佳彩爽'], [3559, 'fashion木'], [3550, 'office'], [2919, '毛豆'], [5504, '48'], [3161, '133643633'], [8429, '一帆风顺'], [9922, 'ministryofagriculture'], [9323, 'perf'], [696, 'since1997冰淇淋与茶'], [8651, '净化行业'], [4967, '山工机械'], [889, '大中电器'], [7835, '汽车全车件修复翻新'], [7733, '预订电话13651331195'], [1644, '去资金安全无保障网络安全为人民'], [6203, 'learning'], [8885, '清'], [1020, '13688236048'], [6903, '汤锅荤豆花火锅每客13元'], [2910, '金泉'], [4553, '池之'], [883, '菱智'], [6054, 'soreca'], [6493, '嘉华酒店沐足'], [874, '百日鲜果'], [3403, '品牌服饰折扣店'], [7837, '源自中国博士县世界自然遗产'], [5949, '嘿哈精酿'], [2619, '美人鱼钓具'], [1155, '公牛'], [517, 'shanghaiminyingchengtoupropertymanagementcoltd'], [70, '美发留学'], [7526, 'chier'], [7901, '电话13876012519'], [4030, '建设'], [9464, '过桥米线'], [4948, 'forfunforloveforhome'], [4464, '178'], [7698, '美团'], [7204, '就选宽带通'], [4437, '上中广中酒中白方巾强花毛巾电话153833952'], [6416, '思铭平价超市'], [1332, '海联店电话13826100831曾小姐'], [8289, '护理48起'], [9730, 'led滚'], [9962, '50'], [9830, '上海交通大学医学院'], [3617, '超市'], [8789, '联单'], [2698, '主营红酒白酒饮料电话8894045'], [7192, '特价'], [6322, '江苏分部专注高端实木画册'], [189, '蓝色'], [2003, 'anjdiradioandivstati'], [7194, '文服装'], [9530, '电话18670729555'], [1060, '田涛'], [9725, '凉皮'], [5519, '信访办公室'], [1322, '电动卷闸门'], [1724, '文具复印纸计算器'], [6360, 
'童装工厂店'], [593, '电话6425212'], [1080, '大胖涮锅'], [8843, '紫'], [4552, '鹏大汽修'], [3598, '双恒木门'], [8362, '东风'], [5624, '荠红百果园'], [5811, '文天阁快印'], [7297, '成人少儿分层教授精品小班随到随学'], [5725, 'sinopec'], [6489, '泰兴市花园服装厂'], [3155, '翔能电池'], [8867, '5号'], [543, '辣炒观'], [2860, '盛泰房产'], [6984, '十年老店'], [8521, '广州新品贸易有限公司'], [4764, '东华门烤鸭店'], [667, '10米'], [8787, '喜利莱'], [524, '13989182608'], [1172, 'martin'], [3014, '102'], [9759, '比德文电动车'], [3365, '用品'], [8870, '原生态'], [3487, 'yt'], [9953, '用品玩'], [4168, '武汉市交通科技学校'], [5901, '13301878658'], [6131, '加盟热线15993267168150368126'], [9929, '川渠不锈钢工程部'], [6844, '思学微塾'], [4410, '远大广告'], [21, 'citic'], [8139, '徐邵路'], [5692, '新华人寿保险股份有限公司'], [6063, '农用车机修保养换'], [9145, '老杜肉菜店'], [5847, '西园饭店'], [3154, '地址烽火机电市场c区11栋1112号'], [7895, '沃惠商城'], [7244, '日丰管'], [7661, '澳美亚铝材'], [5218, '承接室内外装修'], [6532, '享菜迎来3周年店庆'], [1909, '绿泰'], [1175, 'hisense'], [7327, '公司地址成都市温江区海峡科技园温泉大道三段200号'], [7275, '35米'], [2298, '电话'], [9349, '万胜广场'], [4566, '军琳烟酒直供'], [2771, '梁园区第一幼儿'], [4529, '顺达棋牌'], [8191, '订货电话13949978798'], [5665, '丁字复印'], [8763, '大图复印'], [9184, '家'], [5494, '商业城'], [5446, '13408243941'], [1205, '刷机'], [2041, '鸭煲烤鱼'], [6490, '中国能建'], [1073, '精品男装'], [3495, '计生用品壮阳礼胃情趣内衣情趣用品'], [989, '祥茶行'], [4129, '全部'], [2175, '菜尔烟酒店窗'], [9121, '食味居'], [5740, '29269'], [3699, '改衣服扦裤'], [2217, '汽修轮胎电瓶年审过户13641588691'], [4211, '同城帮'], [472, '蛋糕坊'], [1463, 'neqta'], [7439, '590'], [6342, '汉光政诊所'], [2967, '筒子骨鲜粉'], [6824, '15225749719'], [3424, 'deli'], [770, '酸汤'], [6768, '奇蕾'], [7789, '上海市长宁区北新泾街道'], [4606, '豆豆菜馆'], [888, '漫游网吧'], [6146, '鑫灿商贸'], [2661, '9295789'], [5388, '华联商厦4层'], [4113, 'store'], [1712, '复印打字照相电话185615117'], [2529, '智能彩音体验店'], [1755, 'harbin'], [5512, '上海精品羊毛羊绒貂绒衫'], [6646, '电话0373728369613200档38证卡制作'], [4108, '品尝生活的甜度'], [1853, '社区家长学校'], [9709, '舞蹈器乐主持表演'], [5703, '价格实惠再送精美礼品一'], [7753, '产自销'], [960, '波司登'], [8860, '炒菜炖'], [7162, '神墨教育'], [6420, '全国首家sgs认证婴幼儿产品'], [4215, 'mt'], [8329, '小时'], [2810, '家尚造型'], [5273, '进口轴承洛阳轴承'], [5042, '40'], [3120, '华一村沪太路'], [1486, '订餐热线13215561111'], [4460, '花园'], [6496, '人文教育'], [7197, 'shou'], [2537, 'qfo'], [576, '美的空调'], [7410, '好滋味'], [2100, '电话1552725588513986100384'], [6374, '电话15366603586'], [7333, 'a'], [9780, 'service'], [7565, '烤漆门竹木门生态门柜门'], [9207, '手机1589996977'], [1333, '民权总代理'], [403, '兴达不锈钢'], [9894, '名步'], [2341, '首强烟酒'], [4471, '炒面'], [584, '虫虫联盟网咖'], [7931, '铝钛镁招拉门折叠门衣柜门电表画'], [5687, '84111391'], [4195, '手'], [3538, '日式商务套餐'], [2656, '半自动捆包机'], [4977, '光明盲人按摩'], [8181, '显示屏发光字'], [5339, '开鲁路438号'], [135, '专业减肥16年签约减肥不反弹无效退款'], [4525, '3228'], [2464, '床单被套'], [6785, '同瑾装饰'], [785, '财源广进'], [413, '月健康'], [6833, '天用品'], [729, '奶茶果汁冰淇发食吧'], [9457, '徐记大包'], [4445, '打印'], [2257, '不干胶印刷'], [2102, '缘艺'], [8532, '州市桂花路桂花岗独立一号广州美学实化开租提对小'], [250, '福星高照'], [5412, '办公区'], [8486, '贵州茅台酒特约经销商'], [2315, 'xindeji'], [5421, 'r度'], [9440, 'gemmy'], [9630, '食来一'], [8444, '2205'], [2501, '烈士陵园'], [3580, 'car神州租车'], [2035, 'tel22661812'], [2693, '时尚坊'], [8126, '招租热线02165183660'], [1382, '红烧排骨面'], [1985, '公正法治爱国'], [9605, '莎蒙拉利窗帘软包'], [5215, '粮油超市'], [2589, '8518'], [9109, '自助银行服务'], [434, '特约服务站'], [6504, '皇茶天下'], [3488, '式盛大开业啦'], [6376, '小时'], [6440, '荣获央视cn'], [7351, '子'], [8516, '起初你会有快死掉的感觉'], [6880, '曦月宝贝'], [6259, '工程综花电话18262516630'], [871, '禁止停'], [1733, '柏白友'], [1055, '永福户外帐蓬'], [4818, '5990025039'], [1424, '方式'], [8397, '金川线缆'], [3134, '加盟电话4006726669'], [8864, '汽修'], [4320, '短租长租'], [3514, '耐磨环保'], [4847, '离石线缆'], [9061, '魔都'], [7528, '电话18878198936'], [7758, '回民烧烤'], [7433, '一卡通'], [7356, '上岛'], [3872, 'vivo'], [2058, '地'], 
[6537, '大药房'], [1242, '小火锅'], [489, '中心'], [568, '1361519205818752614188'], [589, '报装点'], [4660, '整体橱柜'], [1714, '服务不是'], [4905, '车行'], [1780, '不锈钢加工'], [9152, '联塑'], [8500, '花圈店成人保健'], [9477, '180'], [1495, '电话1810703289918107032654'], [7423, '瓜子花生副食炒货送货电话13476808612程'], [9688, '红片一'], [8478, '色'], [648, '汇'], [2232, '电话05728324089'], [1494, 'keng'], [9689, '诚邦物流有限公司'], [552, '订热餐线0575873907201800'], [9449, '存卡'], [8801, 'happy'], [1672, '文化东路店'], [3799, '专业制作防盗网不锈钢门扶手雨棚家装'], [3950, '中国志愿服务'], [3706, '印刷'], [4407, '虹梅路2838弄'], [6114, '汇民清真食品店'], [1687, '20元份'], [2955, 'livingstyles'], [5790, '入口'], [5537, 'chacanting'], [1869, '工程技术研究中心'], [8259, '请上二楼'], [7883, '联系电话13020198226地址新渔东路481号'], [3270, '电器海绵'], [9769, '主营钢梯钢构隔层铁艺门窗'], [564, '迈吉咖啡'], [8151, '拉'], [5383, 'jm'], [4468, '周记正宗'], [8630, '香花'], [5385, '小乔零食店'], [8820, '诚物流有限公司'], [3178, '博世地暖皇冠玉石'], [4138, '宝盛丰'], [40, '广州羊城铁路劳动服务公司招待所'], [1192, 'shudaxia'], [823, '雪云制衣'], [854, '老味道老价钱'], [1308, '曹操贡'], [1919, '惠达卫浴'], [5598, '881'], [8795, '祝美'], [6273, '金手指造型'], [3345, 'zh造型'], [6777, '中国铁建'], [7476, '友邦汽车服务中心'], [6121, 'researchcenterforgovernmentbylaw'], [6614, '窖'], [5998, '五粮液'], [1794, '开设课程'], [7581, '毛家饭'], [3825, '运'], [1534, '地址严丰路202号'], [7425, '典典养车'], [6277, '烤鸭鱼头泡饼'], [944, '加盟电话15824328616'], [6683, 'et'], [9682, '政府网668068'], [1801, '新鲜水果'], [1986, '179'], [9176, '造型'], [6677, '名酒'], [8077, '全国热线4007006146'], [222, '越'], [5012, '订餐电话03728703258滑县店'], [7493, '出租'], [4860, '豆浆店'], [1574, '鑫仁人力资源'], [2598, '店生'], [3031, '茶楼'], [4486, '广州市巨行汽车销售服务有限公司'], [7593, '工艺礼品'], [5240, '车维修中心'], [4696, '香酥芝麻大饼'], [8461, '食吃站'], [6254, '188'], [2117, 'michelin'], [3415, '炒菜涮锅酱大骨各种主食'], [7814, '宁'], [2768, '李漫花田'], [7599, '迁老字号'], [4687, '海尔授权经销商广昌现代家电销售服务热'], [3445, '国家电动工具出口商品技术服务中心'], [5094, '购销部'], [4021, '泰兴市环院农民工业分会中'], [9779, '出售草鸡蛋'], [78, '建材胶类齐全'], [7467, '造型空间'], [4349, 'nd'], [3585, '联系电话18952693219'], [8442, '发艺'], [1465, '专业电动工具'], [7510, '慎德里社区居委会消防巡查联络点'], [3207, '免费'], [3291, 'town'], [6006, '收发传真'], [4363, '康明斯东亚研发中心'], [4802, '无皮烤地瓜'], [9331, '精品女装时尚女鞋'], [549, '特'], [2577, '第五道菜'], [6236, '电话1580038252302152265696地址闵行区老北翟路4858号益企居建材市场h3号'], [1694, '日租月租'], [1069, '依生源便利优质干洗'], [2146, '儿童百货'], [3043, '哈尔滨扎啤美食城'], [9272, '绿色健康无专染'], [8739, '发源早点'], [90, '八字起名婚姻企业风水'], [7660, '集雅楼'], [4070, '手工大饼'], [2481, '富丽胶'], [3556, '赛微干洗连锁店'], [1478, '经营移门橱柜地板集成吊顶承接室内外装潢tel189368382'], [4451, '注销等业务'], [5403, '拉'], [8839, '智胜木门'], [4313, '集成吊顶'], [4827, '75a'], [7757, 'wc'], [2335, '华信地产'], [7027, '早1000'], [7350, '内丘家杰通讯'], [9289, '404023'], [4047, '手机15901610142'], [6664, '考勤机'], [5765, 'hangover'], [8790, '宾馆'], [2251, '联系电话18278812870'], [1615, '广东琦锦线业有限公司'], [4406, '上海市浦东新区工人文化宫'], [9066, '宽带安装'], [8806, '之丰党建服务站'], [3809, '老北京王府井'], [601, '蔬菜水果'], [3629, '一单元'], [500, '手机'], [3569, 'chinamobie'], [6868, '天启窗业'], [9810, '卫生部社区居民委员会'], [1295, '龙高中'], [8185, '工地断桥铝塑钢门窗铁艺不锈钢'], [451, '空'], [5162, '6'], [1717, '墩仁里a4档'], [3940, '二手车'], [8277, '永丰自制辣酱'], [4234, '饭'], [7920, '自助骨头馆'], [6075, '汽车服务站'], [4298, '北'], [4022, '区岗'], [6208, 'taobao'], [6626, '金水暖批发中心'], [5457, '29'], [8278, '顺利房产'], [8941, '达伽玛瓷砖'], [3153, '停车场'], [12, '创'], [9807, '无人售货店'], [5291, '海报名片写真广告精装展板布标'], [2375, '黄焖鸡米饭'], [4387, '欢迎光临'], [8708, '星隆服饰'], [9840, '禁入车'], [6708, '生态厨卫'], [3068, '工程复印'], [1243, '主营筒灯射灯浴霸电料'], [9944, '玻璃屋餐厅'], [557, '6楼后座牛排馆电话68695100'], [6415, '16'], [6197, '景岗卫浴'], [5384, '钣金喷漆'], [2402, '国存百货店'], [2342, '丰群铝材'], [2738, '新城社区幼儿园'], [5934, '地板木门衣柜门'], [8690, '得乐教育'], [2202, '医疗器械'], [3655, '温州指压'], [7890, '新用户'], [2064, 
'酱鸡酱鸭'], [1900, '诊疗科目口'], [8416, '牌室'], [7555, '纤之味'], [1184, '南泰公路'], [3247, '原味'], [8286, '茜施尔'], [7449, '牛'], [4392, '北京捷诚信通知识产权公司'], [6929, '全插座领导者'], [2365, '微瓦缓冲材料及运输包装研发基地'], [9701, '金调各建物业管理有限公司益设金水湾小区实培物空管理维作'], [9346, '欢迎加盟'], [6962, '买10元送5元'], [5304, 'v'], [5101, '上海杉海建筑工程有限公司'], [8370, 'guaiozhouzhujiangbre'], [958, 'xichengdistrictjuspacecommunityyouthclub'], [6676, 'ooniu2i'], [1467, '拉'], [4841, '洗包'], [3387, '15779553448'], [5962, '诚信石材'], [3408, '彩色打印'], [7373, '广发银行cgb'], [5521, '尽美装饰'], [5103, '三欣美'], [2732, 'bigfuli'], [5623, '便利'], [2288, '雷丁电动汽车'], [570, '有可为'], [6366, '央视推广品牌'], [8551, '招聘'], [6382, '015m'], [737, '地址龙阳路1188号'], [5201, '各种炒菜'], [1469, '佰瑞医药'], [1140, '汉釜宫自助'], [4328, '代办从业资'], [459, '童装童鞋'], [4674, '置换认证店'], [5701, '东北水饺可口可乐'], [6739, '铜城莱街'], [7592, '泸上阿姨'], [1287, '开业大吉'], [6829, '学生科学研究训练实验室'], [7741, '因为太爱你们了'], [9677, '的中央空调'], [553, 'bestpharmacistshop'], [3177, '田旺家具窗帘'], [6974, '嘉华'], [9033, '面条水饺炒饭'], [1320, '食冷饮百货'], [2360, '柏盛'], [1168, '中国移动'], [6760, 'r'], [1741, '纸盒'], [8166, '世界标来了'], [9799, '电话158025755815874158854'], [9493, '雄'], [4531, '13822295070'], [1693, '180'], [5359, 'foreignlanguage'], [5658, '089'], [9927, '金融街购物中心'], [3524, '栏杆扶手防盗网钛金雨阳篷'], [3286, '实惠绿色环保13401242099'], [7260, '上海时佳包装制品有限公司'], [71, '炒面炒河粉肉'], [9028, '食为天'], [8056, '农家小炒经济实惠'], [3340, 'dikeni'], [5920, '甬汽修'], [760, '卫浴晾衣架'], [5272, '请上'], [1634, '欢迎光临'], [8055, '周一至周'], [1053, '家布艺窗饰'], [6615, '商丘仅此一家'], [869, '浦博市张店区'], [2622, '安井'], [3737, 'xtskiper'], [264, '龙文教育'], [8426, '婚礼'], [3910, '执益图文'], [9373, '渡鸡臻选'], [7487, '373'], [3210, '网咖'], [8003, '第三代'], [5016, '13857263759663759'], [4721, '2122'], [3028, 'c区9栋25号'], [4356, '猛男哥炒饭'], [1116, 'cun'], [9433, '兄弟装璜'], [8398, '入口'], [2632, '名仕阁酒店'], [7235, '65'], [3489, '总部服务热线'], [1174, 'ltd'], [924, '华夏中介'], [3101, '维修保养'], [9488, 'hris'], [193, '广字便销店'], [9444, 'tge亭'], [5911, '荔湾店富二层'], [5321, '袁记鸭脖'], [5130, '尚艺'], [7261, '吉家'], [3081, '养生馆'], [8458, '羊肉汤'], [6411, '维修服务有限公司'], [7341, '幼儿园'], [1022, '策划设计庆典'], [7879, '2682'], [5889, '姜瑞汽车中介欢迎您'], [4101, '再生资源回收'], [3277, '上海新天地典当有限责任公司'], [4842, '防盗门套间门不锈钢门窗铝合金门窗10385306065'], [9738, '新方通金属激光切割不锈钢制品厂'], [2640, '实心叉车胎'], [6454, '卡年13001500'], [7998, '也1000起普吉岛1500起'], [9959, '上海龙澄专用车辆有限公司'], [9750, '绥芬河分店'], [5469, '乐生活电器'], [7613, '上海嘉泰模具制造有限公司'], [6185, '橱柜'], [4301, '茶软舒适高雅'], [3878, 'traditionalchineseart'], [7053, '8308881215059876812'], [9686, '足疾修复'], [5926, '江苏'], [6028, 'chosen'], [7032, '尧烤'], [8950, '广州化学试剂力'], [3816, '15961047386'], [9775, '铭家'], [1844, '展式独家肠道排毒手法'], [4867, '管理'], [237, '四清批发超市'], [6927, '连锁'], [5088, '看得见的实惠尝得到的美味'], [3859, '起寻梦沙巴'], [7627, '专业包饺子工人数名'], [3007, '0058'], [6388, '柯森装饰'], [6803, '圣美广'], [9859, '大型演艺婚礼主持'], [6732, '2020'], [3399, '24h13710601110'], [8333, '工程地毯'], [343, '尚品宅配'], [5316, '电话13283545445'], [6190, '主营东北手工水饺凉拌菜翻销手电话18970275'], [4010, '银狐'], [3855, '亮亮'], [4783, '加工制作各种不锈钢'], [3181, '团'], [1570, '美蛙'], [5738, '汽车保养'], [3807, '牌18802055717'], [7950, '真'], [7066, '地址南堰万水机电城特一区18号'], [4317, '衣佳人'], [1538, '禁止停车'], [7825, '卫生室'], [9925, '逸彩科技美容会所'], [2020, '新浪电脑'], [8865, '外环'], [3606, '仓民医所'], [466, '康之'], [6289, '早一'], [5260, '爱博二村'], [1540, '擀面'], [4548, '陌陌'], [4690, '勤房出租'], [4132, '坊餐'], [1743, 'chigo'], [8452, '川汇区'], [3539, 'coko'], [3753, 'lano'], [6656, '手机维修'], [4554, '支行欢迎您'], [2646, '海燕奶制品批发部'], [2069, '十年老店'], [6202, '洋'], [184, '针织生活馆'], [8127, '财务共享服务中心'], [830, '博'], [7650, '鸡腿饭大馅馄饨'], [7852, '正宗淮扬菜'], [3517, '2106'], [7778, 'furla'], [5794, '见真情'], [4622, '专休闲7'], [4985, 
'联系电话1867586584802081790645'], [7241, '承接尾部灾房'], [5276, '美的'], [9732, '经营范围百货服装家用电器文具用品卷烟零售'], [5643, '私人订制'], [7374, '事如意'], [5985, '业'], [4082, '电话1351387213813838677730'], [673, '合富欢迎您'], [4947, '波飞音乐'], [8558, '旋转小火锅'], [6205, '四股桥墩明车行'], [5764, '众康盲人按摩中心'], [884, 'sino'], [5174, '米'], [8137, '脚澄三轮车'], [5600, '工'], [8932, '万磊房屋'], [201, 'ar'], [3084, '福'], [7557, '成人用品专卖'], [332, '343200'], [3944, '广安门内街道'], [9070, '金鑫'], [6913, '中华名小吃'], [8730, '07'], [1992, '广航星'], [2218, '工作站'], [8173, 'so'], [7401, '来喜把'], [3967, '入口'], [4196, 'caict'], [2594, '大盘鸡'], [8205, '福田安全插座'], [2808, '佳美门窗'], [4394, '桥汽修养护中心'], [1938, '立邦多乐士墙布'], [8566, 'sindmo喜丹奴家纺'], [4941, '五金'], [6425, '维佳康口腔'], [2300, '冰吧'], [8599, '塘桥社会组织服务中心'], [265, '扬州'], [5495, '厅'], [4098, '中国移动'], [7568, '国家级上海产品质量监督检验研究院'], [4307, '立体停车设备'], [3611, 'chinagold'], [9664, 'donno'], [7967, '简小姐的衣柜'], [1708, '温馨玉器店'], [8606, '工程'], [9700, '阳阳超市'], [1232, '学生文具办公用品精美饰品批发零售'], [7437, '诚'], [5991, '大卡司'], [2788, '园区快餐小吃'], [4982, '电话15939090815'], [8767, '另有棉花出售'], [8669, '电子屏发光字'], [8544, '恒旅店'], [6386, '石膏线'], [3258, '盟映缘'], [4411, '野兔'], [2795, '科学智板健康万家泰兴店电话40086789011365526636'], [9985, '周托'], [2, '太原铁路局'], [4324, '卫浴'], [5798, '衣柜移门'], [3212, '湖北北茶价格鉴定评估'], [2203, '泰闽茶业'], [4759, '平价超市'], [7421, '出步高'], [1584, '五福临门'], [5208, '川粤快餐'], [1572, '10元小火锅'], [545, '南'], [4193, '水煮鱼土豆粉米线'], [4583, '油进酒'], [1803, '味酒楼'], [452, '改衣服'], [104, '电话'], [8120, '邯郸市公台区'], [8605, '锦绣美妆'], [7992, 'misstea'], [2943, 'land'], [2549, '牙白素'], [2774, '电话1387228070213872400105'], [3833, '韩都'], [4610, 'new'], [7255, '云南普洱名山系列'], [4449, '臭豆腐'], [3110, '出售'], [7842, '2018'], [2449, '高价回收'], [4909, 'dental'], [445, '购秋冬季新货上市'], [2923, '113073'], [490, 'zhengiboby'], [2182, '7699'], [6157, '三'], [6093, '烟草网络1679'], [4121, '盛铝业'], [3928, '好利来'], [3513, '梅陇镇嘉和花苑'], [4921, '快运专线'], [5269, '联系电话13870302414'], [5484, '电话02787879134'], [3109, '柳市路1弄110号'], [2582, '5大促高端智能马桶0元购'], [5785, '小额家电维修'], [9076, '广弘名车服务'], [7259, '嘉应明东'], [5648, '订购电话18328268626'], [9050, 'td75dti型输送机输送皮带及上进璃提计各种零配件'], [571, '配匙'], [8317, '废品回收'], [3165, '小和高语数处'], [1293, '66'], [2096, '口腔专科门诊'], [6230, '微晶石'], [8027, '蝶乐商'], [650, 'hamasar'], [6335, '喜'], [4823, '46'], [4370, '广东省授权经销商'], [8990, '员工生活区'], [2585, '宽带淋浴'], [2913, '159572693'], [9459, '丁环境个个爱益'], [2792, '拉'], [2782, '广州小记者站'], [4851, '万源网咖'], [7288, '京东罡杨合作'], [1488, '小龙牛肉丸'], [355, 'infinitus'], [9568, '吕斌13795354319'], [5547, '广州森龙工程机械贸部沃ading'], [2002, '港蒙上新商务'], [1670, '原汁牛肉汤'], [1565, 'pretisia'], [347, '上海华生'], [168, '人王朝'], [1977, '豆浆莲子粥'], [9876, '绿豆面条重庆小面批零兼营'], [7490, '电话13937081434'], [5784, '全国连锁专卖1018放汉店'], [2877, '台湾宏声古筝'], [6204, '丰华驿站'], [1742, '淳真沙县小吃'], [3261, '盖饭'], [657, '慈善缘流通店'], [8180, '主题美甲'], [7855, '继续教育学院国际教育中心'], [4200, '迎新老客户光'], [7772, '银座和谐广'], [1436, '电话13937680879'], [4423, '万生源'], [6955, '13602775045'], [385, '国家电网公司'], [8581, '电话18728117375'], [4473, '特产'], [5044, '红木家具印结构体验中心'], [8428, '吉楼出租'], [8881, 'opo'], [172, '陈老五麻辣烫'], [4962, '集中商业'], [2147, '电话15070355948'], [2371, '耶嘉智能集成墙面'], [8714, '来京人员'], [6283, '手机'], [9938, '565051898手机5c34obwanchiryandelcuronicscoltd'], [4134, '爱帝特卖'], [1784, '武汉钟家村工厂店no2'], [5612, '开封灌汤包手抓'], [2692, '景观'], [4295, 'vivo'], [291, '皮草'], [1638, '中关村街道'], [6811, '67'], [5075, '江淮汽'], [5868, 'eleven'], [2937, '建筑图集主编单位'], [2851, '广告'], [5742, '分店18736887722'], [7380, '凉皮'], [6299, '中国'], [3766, '伊隆特'], [2558, '胡聪'], [6347, '6136'], [5114, '脊椎理疗养生馆'], [3045, 'face'], [6526, '52'], [1103, '生'], [7234, 'noentry'], [1210, '排骨面1520元'], [2995, '皮具护'], [843, 
'电话575111415117043618'], [2960, '满899元减200元'], [4651, '联系电话05238950211913852862366'], [8128, '24小时电话40006851'], [2135, '凤凰'], [5449, '896'], [2071, '零食'], [1075, '公共电子阅览室'], [8123, '15323887905'], [1217, '地下一层停车场出口'], [8012, '天天低价'], [7416, '牌匾'], [9613, '世弘达物流有限公司'], [9405, '剪爱造型'], [2486, '标书装订写真喷绘工程复印展览展示画册印刷会议服务'], [3777, '59'], [6047, '答案'], [9703, '主营各'], [3213, '信用卡业务'], [3975, '10'], [5602, 'jodo'], [4263, '中事文化'], [5069, '电话13523909049'], [9179, '顺心门窗'], [7922, '333'], [1449, '工作室'], [1283, '三包子店'], [8200, '1363710857213866670150'], [547, '苏'], [1855, '沐丝'], [4430, '电话13778846491'], [2964, 'no0116'], [7145, '南昌总代理'], [5221, '13'], [7960, 'policestation'], [3333, '左岸'], [4675, '鲜肉批发零售'], [7538, 'cosmo'], [793, '电脑刻字'], [9020, '天天'], [759, '擦鞋修'], [3533, 'statenuclearpowerplantservicecompany'], [7016, '烧烤'], [1186, 'h'], [9472, '1021'], [6139, '宏志汽修'], [7332, '胜华浴池'], [5155, '南鑫'], [7006, '城管执法社区工作室'], [4191, 'qitinghealthprotcction'], [5419, '誉满中华通达天下'], [1768, 'ctao'], [7762, '带墙壁开关公牛插座'], [3848, '一付或隐形眼镜一付'], [653, 'kley'], [675, 'fotile方太'], [2441, '除敏抗衰'], [8411, '猫宁'], [1260, '上海盈卡汽车用品有限公司'], [8824, '专业承接大中小型水景工程水族箱清洁换景技术咨询'], [3186, '义利街'], [626, '天艺发型欢迎光临天艺发型欢迎光临'], [6860, '九洲石锅鱼'], [4135, '主营'], [2505, '临保月保'], [842, '七风度'], [8080, '明仁眼镜'], [3875, '榆次区郭家堡小学'], [577, '经营'], [8274, '5月1213全大城春华龙'], [5252, '86'], [252, '拉'], [3278, '13099058690'], [8816, '曦博副食'], [6023, '斜土路2899甲号'], [7170, '小吃'], [3804, 'ww'], [3688, '烟酒'], [8319, '订购电话13140543'], [526, '文明'], [1796, '金来富装饰'], [1187, 'xh'], [7294, '各种自制熟食凉菜'], [8876, '流花街兵设登记'], [9603, '宜言爱高端定制中心'], [7484, '辣爆烧烤'], [8642, '厂家直销质量第一'], [5810, '39'], [9919, '紫燕百味鸡'], [3302, '安全管'], [6317, '24小时营业15273456717'], [7815, '预约加盟电话18713195850'], [5466, '7880'], [8115, '小王汽车快'], [3126, '少儿舞蹈民族'], [1525, '398'], [2524, '克华群特辣王特价6'], [10, '诊疗科目中医科'], [2463, '安心午服'], [6924, '经典'], [952, '主修奥迪丰田本田尼桑大众等'], [5031, '福'], [6519, 'bankorjilin'], [1448, '中心'], [4833, '小炒砂锅粥晚饭'], [5111, '永泰旅馆'], [136, '五金电动工具'], [3842, '酒店化管理24小时营业五楼1'], [9772, '百联百货超市'], [8823, '5552'], [8287, '兴隆'], [6019, 'a09档'], [3981, '德星'], [670, 'no粤番g1704126'], [3987, '绿'], [3228, 'chinapacificinsurance'], [4338, '万科中心'], [5973, '7daysinn'], [7885, '批发晾衣架'], [947, '海苑后'], [7735, '院内停'], [7287, '面香园'], [7098, '社区妇女儿童维权工作站'], [3593, '太'], [9431, '韩式半永久'], [7418, '滋本家'], [8239, '地址大团镇永春中路73号电话1370161602013023208556'], [6, '电话051487582370'], [1280, '发型设计'], [1310, '衣柜'], [2355, '2141298'], [3322, '宝贤民兵营'], [9024, '电话13403733877'], [1881, '欢迎光池'], [3402, '东都'], [7841, '胖都哪女装'], [3158, '优家'], [4465, '擀面'], [3963, '大本丰旧机动车经纪有限公司'], [4405, '活鸡批发零售许家杠子'], [2205, 'gyschinaelectricalmachmerycshanghaicolta'], [4274, '精品店男装'], [8854, '皮肤科'], [6240, '电话1553663554913007093984'], [9949, '爱利凤凰电二轮'], [3275, '四海聚饭店'], [3455, 'kasuna'], [2018, '稻花香'], [3001, '位线内违者后果自负'], [5716, '铁锅鸡铁锅鹅'], [3314, '吕粮山猪'], [9032, '山西面食馆'], [5699, '佳程地产'], [8683, '地址上海市青浦区徐泾镇会恒'], [8415, '宾馆'], [2819, '和创新能源'], [5805, '电话1583407999315135182229'], [7050, '统美德培育和践行'], [6159, '禁止'], [6975, 'marina'], [7488, '炖菜'], [4862, '电话861001198665111713065021879'], [6742, '唐派联业'], [1206, '检测及维修'], [8711, '1丹'], [9278, '销售光盘刻录打印婴印电缆开级20童装系统'], [7870, 'b2'], [2539, '本店特'], [5968, '昂立智立方'], [112, '蛋糕'], [3063, '赛军区'], [2220, '冲印'], [4049, '中国移动'], [2493, '电话583'], [1526, '生产基地'], [9214, '车加家'], [2009, '自觉'], [3595, '公共法律服务中心'], [6223, 'autodecorationbeautycenter'], [9180, '电话0202987964618620675720'], [37, 'lyfen'], [6201, '头疗养生'], [5801, 'chinalife'], [376, '我牛'], [3827, '电话13580442936'], [2500, '打造无甲醛装修'], [8466, '今油茶'], 
[8645, '手机13609729205伍生'], [8963, '60'], [4055, '台铃'], [9619, '田家金店'], [534, '茶艺棋牌休闲'], [5000, '盖饭'], [2004, '健步鞋款式多'], [901, '面鲜生'], [5873, 'mincoshanghaimetallurgicalood'], [2387, '00不会的免费培训所有人'], [9771, '4008205566'], [1799, '虎'], [1563, '量身定做'], [9825, '国博乐新风'], [5836, '违意收分买芬'], [622, '中国工程机械工业协会推荐'], [6387, 'pc'], [4810, '汽车美容'], [4819, '汽车维修俱乐部'], [158, 'china'], [1095, '糖尿病健康咨询'], [778, '大骨羊'], [174, '家具配件'], [373, 'huaxinspace信空间'], [4311, '砂浆王防冻剂影胀剂袋装'], [1932, '衣柜'], [8886, 'tm'], [3281, '钟家村店'], [1394, '374'], [3984, '金彭'], [9818, 'tm'], [3532, '小酥肉玉米排骨'], [4124, '3y'], [5560, 'mm'], [7605, '加盟订餐'], [7985, '创新百货'], [1562, '通顺'], [4650, 'jm'], [985, '诚招'], [5657, 'liugong'], [586, '1013315'], [3540, '汉康门诊'], [5453, 'bull公牛'], [1515, '1840403688'], [9334, '大骨头'], [4229, '鸡公煲'], [7962, '致美花店'], [890, '窗帘布艺'], [5837, '老北京'], [697, '新起点'], [3707, '地址友谊南路长堤街1125'], [2896, '隆兴玻璃'], [4020, 'olub'], [5656, '康复服务指导站'], [4839, '批发零售'], [6858, '70'], [9964, '文轩'], [1167, '装潢'], [3899, '品牌窗帘'], [7043, '上文用品白事大全手机18331803586'], [4512, '西湖茶行'], [603, '火锅桌'], [2637, '高雅美'], [5593, '关动时间品每周一至周日900170'], [1635, '169'], [7642, '15070434833许老师'], [7217, 'shaokaodabing'], [6871, '国家'], [8675, '环岛游订房'], [1106, '经销'], [6555, '中penghaikelole'], [7088, '电话15823887868'], [928, '学教育'], [7266, '买就怕你不尝板栗就吃粒'], [574, '琶洲分会'], [5049, '莫民之家'], [4760, 'cifengtang'], [9104, '老城'], [4056, '651'], [3713, '厂家直销全场4折'], [7005, '自养自销水产批发'], [2770, 'nike'], [1594, '各种炒菜'], [4373, '聚泷缘'], [2176, '801477826'], [3002, '伸拉防盗窗不锈钢'], [9685, '小时候'], [3736, '假日酒店'], [5897, '女装'], [7549, '门人口'], [4790, '3店'], [6970, '泽一'], [825, '上明大'], [3614, '时尚鞋馆'], [2015, 'lesso'], [7563, '519'], [3815, '地址太原市尖草坪北方商贸城酒店用品e区43号'], [2845, '电器'], [5670, '13383126301'], [4693, '南郊卫生所'], [302, '招聘'], [2797, 'cofco'], [9238, 'g63'], [5535, '自助服务'], [7039, '台球'], [5026, '满101元'], [5280, '文售3980元卧'], [5226, '移'], [6498, '13525438810'], [5122, '戏剧与影视评论'], [9966, '大众家常炒菜'], [3922, '联通电信'], [5808, '主营工作服椅套台布酒店家具'], [7212, '理疗养生店'], [7219, '秋季'], [2370, '三枪'], [3358, '四川正宗'], [1298, '时尚'], [7085, '言悦'], [4084, '豆花'], [9623, '祖n'], [3022, '辣面'], [6905, '79'], [8765, '金逸影城'], [818, 'castrol嘉实多润滑油'], [4849, '15639430096'], [903, '拔'], [5880, '车险'], [3413, '中国银行'], [8052, '闵行区学雷锋志愿服务站点'], [3279, '租电话6833200'], [5433, '电话'], [8701, '中国农业银行'], [8882, '海之星海鲜城'], [3555, '天意造型'], [6290, 'noc02717100904'], [6332, '15929552277'], [9197, '主营'], [8874, '大王'], [9458, '1000元'], [5697, '上号充值手机电脑贴膜手机电脑维修'], [1961, '农业部重点学科'], [194, '4008117117'], [8573, 'seebest视贝'], [6404, '电话13084298800'], [8312, 'aome'], [7567, '保险银行投资'], [8506, 'arrow'], [5257, '家福万家超市'], [5508, '东北'], [7290, '德高防水'], [25, '沙洛炒面'], [8100, '诚信烟酒'], [8101, '玉柴重工'], [7734, '尚雅足浴'], [8006, '中堂精品蛋行'], [1561, '34'], [1189, '龙欣旅馆'], [4785, '福'], [4268, '水槽'], [4793, '595'], [9912, '委会'], [3841, '弘福堂'], [1451, 'no152'], [3645, '售票处'], [2871, '门面转让'], [5871, '外贸部展剪油液压件电器件主圆液流锁'], [146, '怕甲醛'], [9312, '如家酒店'], [971, '正宏业'], [5564, '天玺小区停车场出入口'], [4665, 'dlp'], [314, '复印'], [2128, '月台18'], [1871, '贝丽出人'], [2036, '百年老店经典小吃'], [9707, '理中天天有'], [3698, '早干彩轮电瓶补胎换胎壳牌授权指定换油中心'], [1918, '3187166'], [8241, '维修家用商用中央空调冷库清洗拆装空调'], [4029, '手机专卖上号维修配件'], [2033, '卡尺'], [7224, '地址青浦区青赵公路5278号'], [9755, '叮叮布艺'], [8569, '良教村委员会'], [1403, '2017'], [4225, '高价'], [521, '生活艺术家'], [9768, '赏湖轩'], [8639, '紫光药业'], [2188, '134'], [4732, '开放'], [532, '老红木家具'], [4159, '新宁小区'], [9356, '电话13881201715'], [5774, '300'], [1138, 'oppo'], [6722, '文玩'], [7252, '沁芳里'], [9595, '工程饰材'], [9754, '5叶茶'], [238, '电话1387022607215070239351'], [4112, 
'金龙管件'], [4751, '鲜花蛋糕'], [9532, '百丽服饰'], [462, '寻求客户'], [4984, '康怡街'], [1934, '削面水粉米线面送餐电话15102322302'], [9206, '招聘'], [2840, '皮水'], [249, '安利体验馆'], [9948, '地址'], [2030, 'jn大a'], [544, '上海波涛泵业'], [1361, '碧云轩'], [2477, 'melo广告制作'], [9387, '销售中心'], [3883, '手护'], [2234, 'noc'], [6066, '华诚机械'], [7808, '姚王专卖店电话87541920'], [468, 'bull公牛'], [6198, '河源特产'], [2284, '阿辉'], [7701, 'jianshen'], [1297, '金彭'], [4722, '兽药'], [6481, '少儿拉丁舞民族舞电招生中前十名'], [3858, '限'], [7034, '注册公司'], [780, 'zhn'], [5072, '钓鱼岛打火机'], [2162, '电话15236491034'], [1115, '17元'], [6859, '车辆禁止在行'], [6602, '出租'], [8667, '小薯美食城'], [2798, '男装女装童装'], [9936, '包'], [9695, '21010117'], [5954, '香樟树'], [7788, '申通教育品食您'], [9068, '全聚德起源店欢迎您'], [4629, '扶手货架卷闸'], [5346, 'nationalengineeringresearchcenter'], [5859, '86789977'], [5552, '工作时间上午8301100'], [9018, '地址上海闵行区平吉路87号'], [3891, 'didiou迪笛欧'], [2734, '彼佳地产'], [8044, '电脑耗材'], [322, '1348538942813327517661'], [1010, '广州市体育传统项目学校'], [9308, 'spf'], [579, '金王星辰'], [1800, '金金美甲'], [5848, '批发各类超市货架'], [8245, '油漆涂料木门楼梯防盗门玻璃水管'], [5055, '幸福美满家园拥有大唐门窗'], [3991, '120'], [5957, '炸酱面'], [8075, '厂'], [7014, '金丰花园'], [1550, '纯中草药'], [6403, '24小时'], [9208, 'unicom中国联通'], [6219, '592'], [2824, '福'], [9963, '胖子'], [5488, '畅儿家的糖水铺'], [5918, '过桥米线'], [1428, '女裤'], [9419, '名盛'], [2745, '青岛啤酒'], [2685, 'aux'], [4892, '品百味餐厅'], [7485, '许华钧诊所'], [572, '小'], [7726, '洗护师'], [7561, '电话1873'], [7743, '形'], [2056, '娇娇饭堂'], [2569, '导冠烟酒行'], [9350, '72号'], [4233, 'deautyshop'], [1150, '鼻炎配方'], [1656, 'doors'], [9766, '131370286'], [983, '电话1877150081713797322720'], [6020, '欢迎光临'], [8183, '咖啡'], [8311, 'tel5414810113774461462'], [9937, '豪斯菲尔酒店'], [3143, '北京金融街'], [5775, 'yuirungroup'], [5369, '1330714111'], [8396, '形象设计'], [3626, '洲际房地产'], [3497, 'lzsk'], [623, '午餐卤面大米'], [3405, '内设标准间普通间婴儿洗澡男女大池矿泉水直销'], [1151, 'missyan'], [3025, '住宿'], [7202, 'j5造型'], [9781, '白'], [1385, 'b栋'], [5342, '861'], [2283, '12元'], [5861, '地址徐泾镇华徐公路68号10电话13636441236'], [4089, '星巴克咖啡'], [7930, '业'], [1859, '中料劳保田口'], [9598, '首饰加工'], [5338, '畅'], [5906, '农伴'], [5376, '壁画'], [3338, 'jeep'], [3244, '白茶'], [7443, 'china'], [2924, '华茂'], [9129, 'kunda'], [786, '维修'], [7597, 'glorialeans'], [8696, '医堂'], [3839, '收银台沙发'], [8837, '重点防火单位'], [3500, '烟酒'], [7451, 'nationalinstituteofqualityinspectionandresearchonproductinshanghai'], [881, 'lx35'], [8223, '电话15018321633'], [1704, '古筝古琴'], [2588, '重庆片片鱼'], [3728, 'china'], [1259, '95546'], [815, '萌萌宠物'], [9763, '恒力电器'], [9157, '新潮移门'], [6320, '棋牌娱'], [1522, '天力桌'], [6061, 'services'], [754, '办公室'], [5185, '汤小笼包'], [2863, 'tel15061078598'], [9448, 'comm'], [4408, '酱香典范红花郎'], [2705, '复印'], [4600, '常州'], [9683, 'ma'], [4992, '信用卡取现代还小额贷款18051187333'], [2898, '手机18094508333'], [3196, '同昆'], [3130, 'dunran'], [9585, '49'], [5793, '金岸物流'], [8182, 'cacnio'], [2012, '革新路店'], [7836, '张记餐馆'], [8256, '省医保'], [8597, '电话15827502884'], [4061, '新城五金电器'], [6776, '话15099158313'], [6472, '电车第四分公司'], [1077, '政务'], [9263, '主治'], [824, '1392428955']]
9999
finish
|
HW2/HW2_20XXXXXXXX_name.ipynb | ###Markdown
HW2 Machine Learning in Korea University COSE362, Fall 2018 Due: 11/26 (TUE) 11:59 PM In this assignment, you will learn various classification methods using the given datasets.* Implementation detail: Anaconda 5.3 with Python 3.7* Use the given dataset. Please do not change the train / valid / test split.* Use the numpy, scikit-learn, and matplotlib libraries* You don't have to use all of the imported packages below (some are optional). Also, you can import additional packages in the "(Option) Other Classifiers" part. * *DO NOT MODIFY ANY PART OF THE CODE EXCEPT "Your Code Here"*
###Code
# Basic packages
%matplotlib inline
import numpy as np
import pandas as pd
import csv
import matplotlib.pyplot as plt
# Machine Learning Models
from sklearn.preprocessing import LabelEncoder
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
# Additional packages
from sklearn.model_selection import cross_val_score
from sklearn.metrics import f1_score
# Import your own packages if you need them (only from scikit-learn, numpy, and pandas).
# Your Code Here
# End Your Code
###Output
_____no_output_____
###Markdown
Process > 1. Load "train.csv". It includes all samples' features and labels. > 2. Train four types of classifiers (logistic regression, decision tree, random forest, support vector machine) and validate them in your own way. (You can't get full credit if you don't conduct validation.) > 3. Optionally, if you would like to train your own classifier (e.g. ensembling or gradient boosting), you can evaluate your own model on the development data. > 4. Submit your predicted results on the test data with the classifier you selected. Task & dataset description 1. 6 Features (1~6) Feature 2, 4, 6 : Real-valued Feature 1, 3, 5 : Categorical 2. Samples > In development set : 2,000 samples > In test set : 1,500 samples Load development dataset Load your development dataset. You should read "train.csv". This is a classification task, and you need to preprocess your data before training your model. > You need to use a 1-of-K coding scheme to convert the categorical features to one-hot vectors. > For example, if a feature takes 3 categorical values, you can encode them as [1,0,0], [0,1,0], [0,0,1] with the 1-of-K coding scheme.
###Code
# For training your model, you need to convert categorical features to one-hot encoding vectors.
# Your Code Here
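# --- Example sketch (one possible approach, not a required solution) ---
# Assumptions (adjust to the real file): "train.csv" sits next to this notebook,
# its label column is named "label", and the categorical features are the columns
# named "1", "3", "5" (features 2, 4, 6 are real-valued). Check dev_df.columns first.
dev_df = pd.read_csv("train.csv")
categorical_cols = ["1", "3", "5"]          # assumed column names for the categorical features
X = pd.get_dummies(dev_df.drop(columns=["label"]), columns=categorical_cols)
y = dev_df["label"]
print(X.shape, y.shape)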
# End Your Code
###Output
_____no_output_____
###Markdown
Logistic Regression Train and validate your logistic regression classifier, and print out your validation (or cross-validation) error. > If you want, you can use cross-validation, regularization, or feature selection methods. > You should use the F1 score (with the 'macro' option) as the evaluation metric.
###Code
# Train your logistic regression classifier, and print out your validation (or cross-validation) error.
# Save your own model
# Your Code Here
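# --- Example sketch: 5-fold cross-validation with the macro F1 score ---
# (Assumes X and y were built in the preprocessing cell above.)
logreg = LogisticRegression(max_iter=1000)
logreg_cv = cross_val_score(logreg, X, y, cv=5, scoring="f1_macro")
print("Logistic regression CV macro-F1: %.3f +/- %.3f" % (logreg_cv.mean(), logreg_cv.std()))
logreg.fit(X, y)   # refit on the full development set so it can be used for prediction later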
# End Your Code
###Output
_____no_output_____
###Markdown
Decision Tree Train and validate your decision tree classifier, and print out your validation (or cross-validation) error. > If you want, you can use cross-validation, regularization, or feature selection methods. > You should use the F1 score (with the 'macro' option) as the evaluation metric.
###Code
# Train your decision tree classifier, and print out your validation (or cross-validation) error.
# Save your own model
# Your Code Here
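# --- Example sketch: pick max_depth by cross-validated macro F1 ---
# (Assumes X and y exist; the candidate depths are an arbitrary illustrative choice.)
best_depth, best_score = None, -1.0
for depth in [3, 5, 10, None]:
    candidate = DecisionTreeClassifier(max_depth=depth, random_state=0)
    score = cross_val_score(candidate, X, y, cv=5, scoring="f1_macro").mean()
    if score > best_score:
        best_depth, best_score = depth, score
print("Decision tree: best max_depth =", best_depth, "with CV macro-F1 = %.3f" % best_score)
tree = DecisionTreeClassifier(max_depth=best_depth, random_state=0).fit(X, y)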
# End Your Code
###Output
_____no_output_____
###Markdown
Random Forest Train and validate your random forest classifier, and print out your validation (or cross-validation) error. > If you want, you can use cross-validation, regularization, or feature selection methods. > You should use the F1 score (with the 'macro' option) as the evaluation metric.
###Code
# Train your random forest classifier, and print out your validation (or cross-validation) error.
# Save your own model
# Your Code Here
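# --- Example sketch: a random forest with a moderate number of trees ---
# (Assumes X and y exist; n_estimators=200 is an arbitrary starting point.)
forest = RandomForestClassifier(n_estimators=200, random_state=0)
forest_cv = cross_val_score(forest, X, y, cv=5, scoring="f1_macro")
print("Random forest CV macro-F1: %.3f +/- %.3f" % (forest_cv.mean(), forest_cv.std()))
forest.fit(X, y)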
# End Your Code
###Output
_____no_output_____
###Markdown
Support Vector Machine Train and validate your support vector machine classifier, and print out your validation (or cross-validation) error. > If you want, you can use cross-validation, regularization, or feature selection methods. > You should use the F1 score (with the 'macro' option) as the evaluation metric.
###Code
# Train your support vector machine classifier, and print out your validation (or cross-validation) error.
# Save your own model
# Your Code Here
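# --- Example sketch: SVMs are sensitive to feature scale, so standardize the features first ---
# (StandardScaler / make_pipeline are extra scikit-learn imports; you may prefer to move them
#  to the import cell at the top. Assumes X and y from the preprocessing cell.)
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
svm_cv = cross_val_score(svm, X, y, cv=5, scoring="f1_macro")
print("SVM CV macro-F1: %.3f +/- %.3f" % (svm_cv.mean(), svm_cv.std()))
svm.fit(X, y)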
# End Your Code
###Output
_____no_output_____
###Markdown
(Option) Other Classifiers. Train and validate other classifiers in your own manner. > If you need to, you can import other models, but only in this cell and only from scikit-learn.
###Code
# If you need additional packages, import your own packages below.
# Your Code Here
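# --- Example sketch (optional): gradient boosting, as mentioned in the "Process" section above ---
# (Assumes X and y exist; default hyperparameters are used only for illustration.)
from sklearn.ensemble import GradientBoostingClassifier
gbc = GradientBoostingClassifier(random_state=0)
gbc_cv = cross_val_score(gbc, X, y, cv=5, scoring="f1_macro")
print("Gradient boosting CV macro-F1: %.3f +/- %.3f" % (gbc_cv.mean(), gbc_cv.std()))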
# End Your Code
###Output
_____no_output_____
###Markdown
Submit your prediction on the test data.* Select your model and explain it briefly.* You should read "test.csv".* Predict with your model and store the predictions in array form.* Prediction example: [2, 6, 14, 8, $\cdots$]* We will rank your result by the F1 metric (with the 'macro' option).* If you don't submit a prediction file, or submit it in the wrong format, you can't get the points for this part.
###Code
# Explain your final model
# Load test dataset.
# Your Code Here
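# --- Example sketch: load the test features and apply the SAME encoding used for train.csv ---
# (Assumption: "test.csv" has the same feature columns as train.csv, but no label column;
#  categorical_cols and X come from the preprocessing cell above.)
test_df = pd.read_csv("test.csv")
X_test = pd.get_dummies(test_df, columns=categorical_cols)
# Align columns with the training matrix in case some category never appears in the test data.
X_test = X_test.reindex(columns=X.columns, fill_value=0)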
# End Your Code
# Predict target class
# Make a variable "my_answer" (an array) and fill it with your class predictions.
# Change the file name to include your student number and your name.
# Your Code Here
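# --- Example sketch: predict with whichever validated model scored best above ---
# ("forest" is only a placeholder choice from the sketches above; substitute your selected model.)
my_answer = forest.predict(X_test)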
file_name = "HW2_20XXXXXXXX_name.csv"
# End Your Code
# This section is for saving predicted answers. DO NOT MODIFY.
pd.Series(my_answer).to_csv("./data/" + file_name, header=None, index=None)
###Output
_____no_output_____ |
pi_lecture.ipynb | ###Markdown
Usage Instructions* Use "ALT+r" key combination to go to slideshow mode* Use Spacebar (SHIFT+Spacebar) to go forward (backward) through the slideshow. [More info is available in the RISE documentation](https://damianavila.github.io/RISE/usage.html).This notebook is available from https://github.com/stephensekula/pi_monte_carlo_lecture. To run in Binder, click the "Launch Binder" badge: [Launch Binder](https://mybinder.org/v2/gh/stephensekula/pi_monte_carlo_lecture/master?filepath=pi_lecture.ipynb) Monte Carlo Techniques Professor Stephen Sekula (SMU)Guest Lecture - SMU Physics (PHYS) 4321/7305 What are “Monte Carlo Techniques”? * Computational algorithms that rely on repeated random sampling in order to obtain numerical results * Basically, you run a scenario of some kind over and over again to calculate the possible outcomes, either based on probability or to determine probability * Like playing a casino game over and over again and recording all the game outcomes to determine the underlying rules of the game * Monte Carlo is a city famous for its gambling - hence the name of this class of techniques A Question Have you ever (knowingly) used "Monte Carlo techniques"? Ever played "Battleship"? Two opponents try to sink each other's ships by dropping torpedoes on grid locations. The finding and sinking of ships is a Monte Carlo process. Especially at the beginning, you are forced to throw a random torpedo onto the grid because you don't know your opponent's layout. The more torpedoes you throw, the more confident you become in the opponent's layout. Computing $\mathbf{\pi}$ by Monte Carlo Techniques A Simple Physical Example * Let's illustrate this class of techniques with a simple physical example: numerical computation of $\pi$ * $\pi$: the ratio of the circumference of a circle to its diameter: {{math.pi}}... * It's difficult to whip out a measuring tape or a ruler and accurately measure the circumference of an arbitrary circle. * The Monte Carlo method avoids this problem entirely Illustration of the Method Begin by drawing a square. The properties of the square are very easy to measure or establish.
###Code
# Imports used throughout this notebook (later cells rely on plt, math, numpy, and widgets):
import math
import numpy
import matplotlib.pyplot as plt
import ipywidgets as widgets

def figureObject():
    return plt.subplots(figsize = (4,4))

def configureAxes(axes=None):
    if axes is None:
        print("No axis object provided")
        return
    # Configure the axes object that was passed in (not a global variable)
    axes.set_xlim([-1,1])
    axes.set_ylim([-1,1])
    return

fig, ax = figureObject()
configureAxes(axes=ax)
plt.show()
###Output
_____no_output_____
###Markdown
Illustration of the Method Inscribe a circle inside the square.
###Code
def circleObject():
return plt.Circle((0, 0), 1.0, color='k', fill=False)
def inscribeCircle():
circle1 = circleObject()
fig, ax = figureObject()
ax.set_xlim([-1,1])
ax.set_ylim([-1,1])
ax.add_artist(circle1)
plt.show()
return ax
ax = inscribeCircle()
###Output
_____no_output_____
###Markdown
Let's take stock of what we know mathematically about the above picture. Taking Stock of the Inscribed Circle Because the circle is *inscribed*, it touches the square at exactly 4 points. If the length of a side of the square is $2r$, then the radius of the circle is, by definition, $r$. For my picture, I have defined a square so as to get a *unit circle*. We know the mathematical relationship between the *radius* of the circle and a point, $(x,y)$, on the boundary of the circle. $$r = \sqrt{x^2 + y^2}$$
###Code
def inscribeCircleWithArrow():
circle1 = circleObject()
r=1.00
x=0.50
y=math.sqrt(r**2-x**2)
arrow1 = plt.Arrow(0,0,x,y,color='g',width=0.1)
fig, ax = figureObject()
ax.set_xlim([-1,1])
ax.set_ylim([-1,1])
ax.add_artist(circle1)
ax.add_artist(arrow1)
plt.show()
inscribeCircleWithArrow()
###Output
_____no_output_____
###Markdown
Knowns (so far):$$\color{green}{r = \sqrt{x^2 + y^2}}$$ Let us imagine that we have a way of randomly throwing a dot into the square (imagine a game of darts being played, with the square as the board...)It must be completely random, with no bias toward a particular number in $x$ or $y$ or a region of $(x,y)$ space.
###Code
global dot_in_circle, dot_out_circle
dot_in_circle = 0
dot_out_circle = 0
def throwDot(x=100, y=100):
global dot_in_circle, dot_out_circle
this_x = 0
this_y = 0
if (math.fabs(x)>1 or math.fabs(y)>1):
this_x = numpy.random.uniform(-1,1)
this_y = numpy.random.uniform(-1,1)
else:
this_x = x
this_y = y
color = 'r'
if math.sqrt(this_x**2 + this_y**2) > 1.0:
color = 'b'
dot_out_circle += 1
else:
dot_in_circle += 1
dot = plt.Circle((this_x, this_y), 0.05, color=color, fill=True)
return dot
ax = inscribeCircle()
ax.add_artist(throwDot(-0.2,0.4))
plt.grid()
plt.show()
###Output
_____no_output_____
###Markdown
There is a probability that a uniformly, randomly thrown dot will land in the circle (or on its boundary), and a probability that it will land out of the circle. What are those probabilities? ProbabilitiesThe probability (P) of landing in the circle ("in") given ("|") that a dot is somewhere on the square ("dot") is merely given by the ratio of the areas of the two objects: \begin{eqnarray}P(\mathrm{\left. in \right| dot}) & = & \frac{A_{circle}}{A_{square}} \\\end{eqnarray} \begin{eqnarray} & = & \frac{\pi r^2}{(2r)^2} \\\end{eqnarray} \begin{eqnarray} & = & \frac{\pi}{4} \\\end{eqnarray} Knowns (so far):$$\color{green}{r = \sqrt{x^2 + y^2}}$$\begin{eqnarray}\color{green}{P(\mathrm{\left. in \right| dot})} & \color{green}{=} & \color{green}{\frac{\pi}{4}} \\\end{eqnarray} That's nice that we can *geometrically* write a formula to determine this probability, but it doesn't help us to figure out $\pi$ - after all we have one equation and two unknowns. We are missing a piece... *... just what is that probability on the left side? How can we determine it?* **In other words:** Is there a way to determine the value of $P(\mathrm{\left. in \right| dot})$ *independently* so that we might then get at $\pi$? Determine P(in|dot) NumericallyWe can figure out this probability by literally throwing dots, via some unbiased process, into the square and then counting...1. The number that land inside the circle or on its boundary ($N_{in}$)1. The number that land outside the circle but still on the square gameboard ($N_{out}$) Then divide the number in the circle by the total number of dots to get at P(in|dot):$$P(\mathrm{\left. in \right| dot}) = \frac{N_{in}}{N_{in}+N_{out}} = \frac{N_{in}}{N_{total}}$$
###Code
button = widgets.Button(
description='Add Random Dot',
)
global game_axes
game_axes = None
dot_in_circle = 0
dot_out_circle = 0
@button.on_click
def addRandomDot(b):
global game_axes
game_axes.add_artist(throwDot())
game_axes = inscribeCircle()
game_axes.add_artist(throwDot())
plt.grid()
plt.show()
widgets.VBox(children=[button])
###Output
_____no_output_____
###Markdown
Determining $\mathbf{\pi}$ NumericallyWe combine our two pieces of information:$$P(\mathrm{\left. in \right| dot}) = \frac{\pi}{4}$$and$$P(\mathrm{\left. in \right| dot}) = \frac{N_{in}}{N_{total}}$$Rearrange to solve for $\pi$ from this numerical approach:$$\pi = 4 \frac{N_{in}}{N_{total}}$$Using the game I just played on the previous slide, $N_{in}=$ {{dot_in_circle}} while $N_{out}=$ {{dot_out_circle}}. Therefore, from this one game $\pi=$ {{round(4*dot_in_circle/(dot_in_circle+dot_out_circle),3)}}. The Pieces * Random numbers * needed to “throw dots” at the board * Uniformity of coverage * we want to pepper the board using uniform random numbers, to avoid creating artificial pileups that create new underlying probabilities ("biasing the outcome") * Code/Programming * You can do this manually with a square, an inscribed circle, coordinate axes, and a many-sided die. * But that limits your time and precision - computers are faster for such repetitive tasks A Computational Approach Tools* A programming language (Python)* A Jupyter notebook that allows you to play around with the programming exercises as we go forwardPython is free and open-source - you can install it on any platform you own (Windows, Mac, Linux, and even many mobile platforms like iOS and Android). Jupyter is just a framework for providing interactive access to an underlying programming language. Basics of Programming ("Coding") * Numbers – all programming languages can minimally handle numbers: integers, decimals * Variables – placeholders for numbers, whose values can be set at any time by the programmer * Functions – any time you have to repeatedly perform an action, write a function. A “function” is just like in math – it represents a complicated set of actions on variables * Code – an assembly of variables and functions whose goal is determined by the programmer. “Task-oriented mathematics” * Coding is the poetry of mathematics – it takes the basic rules of mathematics and does something awesome with them. Python and Jupyter * Python is a programming language * Jupyter is a framework for developing web-based interactive python software * We will use Python and Jupyter together today. I think it makes programming more fun and also more share-able. * Jupyter notebooks can be shared with other people, who can improve them and share them again. Running on the SMU Physics JupyterHub Server Login to the SMU High-Performance Computing System[hpc.smu.edu](https://hpc.smu.edu) Start a Jupyter Notebook and Upload the CodeUpload this code (and accompanying graphics and files) to ManeFrame and then click on it to run. Select the notebook to play with the code Running on Binder (MyBinder.org) Visit [github.com/stephensekula/pi_monte_carlo_lecture](https://github.com/stephensekula/pi_monte_carlo_lecture) and click the "Launch Binder" badge. This will take several minutes to start. Run the Notebook all the way through TWICEClick the "Cell" menu item at the top of the notebook. Select ```"Cell->Run All"```. This will execute every cell in the notebook from the first to the last. Do the above TWICE.When it's done, scroll up until you find the section entitled **"Python Coding Basics"** Building Up the Code Python Coding Basics variables, values, printing
###Code
# A line beginning with a "#" symbol is a comment -> non-executing statement!
Ntotal = 100
# Print the value of the variable, Ntotal, using the print() function defined in Python:
print(Ntotal)
# Double-click this cell to edit the code, change the value of Ntotal, and play around!
###Output
100
###Markdown
Arithmetic Operations
###Code
# Define a new variable, Nin, and set its value
Nin = 55.0
# Print an arithmetic operation using Nin and Ntotal
print(Nin/Ntotal)
# Double-click this cell to edit the code and try other operations!
# + ADDITION
# - SUBTRACTION
# * MULTIPLICATION
# / DIVISION
# **N RAISE TO THE POWER OF N, e.g. x**2 is x*x
###Output
0.55
###Markdown
Random Numbers: Uniformly Distributed Random Numbers
###Code
uniform_random_button = widgets.Button(
description='Uniform in [0,1]',
)
@uniform_random_button.on_click
def generateUniformRandomNumber(b):
print(random.uniform(0.0,1.0))
import random
print( random.uniform(0.0,1.0) )
# Create a little widget holding a button that lets me repeatedly generate a new random number
widgets.VBox(children=[uniform_random_button])
# ...or, just keep single-clicking this cell and executing it again using SHIFT+ENTER on your keyboard
###Output
0.9800586302235588
###Markdown
Defining our "Game Board" Simplifying our "Game Board" Generating "dots" - uniform random x, random y, and calculate $\mathbf{r}$
###Code
import math
import random
Ntotal = 100
Nin = 0
x = random.uniform(0.0,1.0)
y = random.uniform(0.0,1.0)
# To take the square-root, use the predefined math function math.sqrt():
r = math.sqrt(x**2 + y**2)
print(f"x = {x}")
print(f"y = {y}")
print(f"r = {r}")
# Execute me over and over by clicking on this cell and using SHIFT+ENTER on your keyboard.
###Output
x = 0.2752280411080341
y = 0.7517820491863422
r = 0.8005789930362784
###Markdown
Lists of numbers and the range() function for generating sequencesIf you want to generate a series of numbers in a sequence, e.g. 1,2,3,4..., then you want the range() *immutable sequence type* (or its equivalent in some other Python library). See the example below. ```range()``` doesn't directly create a list of numbers; it is its own type in Python. To get a list from it, see below.
###Code
# Manual list
my_list = [1,2,3,4,5]
print("Hand-generated list: ", my_list)
list_of_numbers=list(range(1,5))
print("List from range(1,5): ", list_of_numbers)
# Note that the list EXCLUDES the end-point (5). If you want to get a list of 5 numbers, from 1-5,
# then you need to extend the end-point by 1 (from 5 to 6):
list_of_numbers=list(range(1,6))
print("List from range(1,6): ", list_of_numbers)
# or this
list_of_numbers=list(range(0,5))
print("List from range(0,5): ", list_of_numbers)
###Output
Hand-generated list: [1, 2, 3, 4, 5]
List from range(1,5): [1, 2, 3, 4]
List from range(1,6): [1, 2, 3, 4, 5]
List from range(0,5): [0, 1, 2, 3, 4]
###Markdown
Repetition in Code: Loop Structures* You don't want to manually type 100 (or more) computations of your dot throwing* You need a loop!* A “loop” is a small structure that automatically repeats your computation a specified number of times
###Code
for i in [0,1,2,3,4]:
print(i)
print(i+1)
###Output
0
1
1
2
2
3
3
4
4
5
###Markdown
Putting things together: "looping" 100 times and printing r each timeLet's put things together now. We can create a variable that stores the number of iterations ("loops") of the calculation we want to do. Let's call that ```Ntotal```. Let's then loop 100 times and each time make a random ```x```, random ```y```, and from that compute $r=\sqrt{x^2+y^2}$.Note that Python uses indentation to indicate a related block of code acting as a "subroutine" - a program within the program.
###Code
import math
import random
Ntotal = 100
Nin = 0
# Use a for-loop to automatically execute the same block of code a bunch of times:
for i in range(0,Ntotal):
x = random.uniform(0.0,1.0)
y = random.uniform(0.0,1.0)
r = math.sqrt(x**2 + y**2)
print("x=%f, y=%f, r=%f" % (x,y,r))
###Output
x=0.830275, y=0.194788, r=0.852818
x=0.746310, y=0.509520, r=0.903653
x=0.445308, y=0.538662, r=0.698896
x=0.258602, y=0.130819, r=0.289807
x=0.618215, y=0.837685, r=1.041108
x=0.067571, y=0.801316, r=0.804160
x=0.768795, y=0.835800, r=1.135609
x=0.567568, y=0.597307, r=0.823959
x=0.694144, y=0.299043, r=0.755819
x=0.216593, y=0.916168, r=0.941423
x=0.639832, y=0.119284, r=0.650856
x=0.961414, y=0.516736, r=1.091482
x=0.943925, y=0.510082, r=1.072930
x=0.664189, y=0.632937, r=0.917473
x=0.485825, y=0.639248, r=0.802909
x=0.778569, y=0.020922, r=0.778850
x=0.388044, y=0.280656, r=0.478901
x=0.611469, y=0.611371, r=0.864678
x=0.179344, y=0.272885, r=0.326543
x=0.742566, y=0.760847, r=1.063152
x=0.853150, y=0.146068, r=0.865564
x=0.789054, y=0.952214, r=1.236656
x=0.326214, y=0.326016, r=0.461196
x=0.805160, y=0.199264, r=0.829451
x=0.626568, y=0.916126, r=1.109898
x=0.096376, y=0.856497, r=0.861902
x=0.144138, y=0.631063, r=0.647315
x=0.555552, y=0.762330, r=0.943284
x=0.254633, y=0.394022, r=0.469139
x=0.738202, y=0.283470, r=0.790758
x=0.766103, y=0.033632, r=0.766841
x=0.680290, y=0.342429, r=0.761612
x=0.770985, y=0.686524, r=1.032343
x=0.829862, y=0.329908, r=0.893034
x=0.179273, y=0.542500, r=0.571353
x=0.905942, y=0.246375, r=0.938846
x=0.307415, y=0.596575, r=0.671123
x=0.486968, y=0.498555, r=0.696918
x=0.127846, y=0.771835, r=0.782351
x=0.596949, y=0.594127, r=0.842220
x=0.467184, y=0.751332, r=0.884737
x=0.156188, y=0.071414, r=0.171740
x=0.239271, y=0.735493, r=0.773434
x=0.947543, y=0.948520, r=1.340719
x=0.270710, y=0.359608, r=0.450113
x=0.949933, y=0.952015, r=1.344881
x=0.679883, y=0.430772, r=0.804864
x=0.739725, y=0.774306, r=1.070861
x=0.042589, y=0.643595, r=0.645002
x=0.439615, y=0.051720, r=0.442647
x=0.456282, y=0.126063, r=0.473377
x=0.620563, y=0.780312, r=0.996988
x=0.700527, y=0.218752, r=0.733887
x=0.312976, y=0.362585, r=0.478980
x=0.880643, y=0.724931, r=1.140639
x=0.817736, y=0.633377, r=1.034340
x=0.458745, y=0.499625, r=0.678286
x=0.480191, y=0.975861, r=1.087606
x=0.142278, y=0.431656, r=0.454499
x=0.087119, y=0.004813, r=0.087252
x=0.408031, y=0.459075, r=0.614198
x=0.861995, y=0.058035, r=0.863947
x=0.567610, y=0.849273, r=1.021492
x=0.857978, y=0.910512, r=1.251063
x=0.352837, y=0.099517, r=0.366603
x=0.490809, y=0.681197, r=0.839597
x=0.613778, y=0.959375, r=1.138913
x=0.549977, y=0.505852, r=0.747236
x=0.976394, y=0.208785, r=0.998467
x=0.458908, y=0.505078, r=0.682422
x=0.017411, y=0.613503, r=0.613750
x=0.946694, y=0.545579, r=1.092651
x=0.609850, y=0.804428, r=1.009466
x=0.649153, y=0.865428, r=1.081834
x=0.489487, y=0.891577, r=1.017107
x=0.476881, y=0.298748, r=0.562731
x=0.430068, y=0.558177, r=0.704642
x=0.619441, y=0.779330, r=0.995521
x=0.856609, y=0.112479, r=0.863962
x=0.513990, y=0.792721, r=0.944771
x=0.557005, y=0.476597, r=0.733075
x=0.291337, y=0.829030, r=0.878731
x=0.718065, y=0.579264, r=0.922586
x=0.888817, y=0.284228, r=0.933157
x=0.404780, y=0.755473, r=0.857080
x=0.153603, y=0.848236, r=0.862032
x=0.047899, y=0.195019, r=0.200816
x=0.621465, y=0.184344, r=0.648230
x=0.302663, y=0.985982, r=1.031390
x=0.823164, y=0.398509, r=0.914553
x=0.598380, y=0.769478, r=0.974759
x=0.500138, y=0.609735, r=0.788616
x=0.374911, y=0.864624, r=0.942408
x=0.764175, y=0.366466, r=0.847503
x=0.866703, y=0.999405, r=1.322870
x=0.939119, y=0.166900, r=0.953834
x=0.675410, y=0.757137, r=1.014611
x=0.685615, y=0.101307, r=0.693060
x=0.995601, y=0.525353, r=1.125707
x=0.366813, y=0.042017, r=0.369211
###Markdown
Making this into a Monte Carlo Calculation: The Accept/Reject MethodNow that we can make random "dots" in x,y and compute the radius (relative to 0,0) of each dot, let's employ "Accept/Reject" to see if something is in/on the circle (a "HIT"!) or outside the circle (a "MISS"!). All we have to do is:* for each $r$, test whether $r \le R$ or $r > R$. * If the former, we have a dot in the circle - a hit! * If the latter, then we have a dot out of the circle - a miss! * We "accept" the hits and reject the misses. * The total number of points, (x,y), define all the moves in the game, and the ratio of hits to the total will tell us about the area of the circle, and thus get us closer to $\pi$.
###Code
Ntotal = 100000 # Do this many "trials" (dot throws)
Nin = 0 # Default Nin to zero
R = 1.0 # Unit Circle Radius = 1.0
for i in range(0,Ntotal):
x = random.uniform(0.0,1.0)
y = random.uniform(0.0,1.0)
r = math.sqrt(x**2 + y**2)
if r <= R:
Nin = Nin + 1
# alternatively, Nin += 1 (auto-increment by 1)
print(f"Number of dots: {Ntotal}")
print(f"Number of hits: {Nin}")
Nmiss = Ntotal - Nin
print(f"Number of misses: {Nmiss}")
# Now, compute pi using pi = 4*(N_in/N_total)
my_pi = 4.0*float(Nin)/float(Ntotal)
print(f"pi = {my_pi}")
###Output
Number of dots: 100000
Number of hits: 78487
Number of misses: 21513
pi = 3.13948
###Markdown
A working program! You can increase ```Ntotal``` (standing in for $N_{in}+N_{out}$) to increase the precision of your computation. Additional Topics (As Time Allows) Defining a "function" in PythonYou can define a custom function in Python to wrap up our masterpiece, and then run the code, passing new parameters, just by calling the function. For example:
###Code
def computePi(Ntotal=100, silent=False):
Nin = 0 # Default Nin to zero
R = 1.0 # Unit Circle Radius = 1.0
for i in range(0,Ntotal):
x = random.uniform(0.0,1.0)
y = random.uniform(0.0,1.0)
r = math.sqrt(x**2 + y**2)
if r <= R:
Nin += 1
Nmiss = Ntotal - Nin
my_pi = 4.0*float(Nin)/float(Ntotal)
    if not silent:
print(f"Throws: {Ntotal}; Hits: {Nin}; Misses: {Nmiss}; Pi = {my_pi}")
return [Ntotal, Nin, my_pi]
# call the function
computePi(Ntotal=100)
computePi(Ntotal=1000)
computePi(Ntotal=10000)
###Output
Throws: 100; Hits: 79; Misses: 21; Pi = 3.16
Throws: 1000; Hits: 781; Misses: 219; Pi = 3.124
Throws: 10000; Hits: 7829; Misses: 2171; Pi = 3.1316
###Markdown
Precision in Monte Carlo SimulationThe number above is probably close to what you know as $\pi$, but likely not very precise. After all, we only threw 100 dots. We can increase precision by *increasing the number of dots*. In the code block below, feel free to play with ```Ntotal```, trying different values. Observe how the computed value of $\pi$ changes. Would you say it's "closing in" on the value you know, or not? Try ```Ntotal``` at 1000, 5000, 10000, 50000, and 100000.In the code block below, I have also added a computation of *statistical error*. The error should be a *binomial error*. Binomial errors occur when you have a bunch of things and you can classify them in two ways: as $A$ and $\bar{A}$ ("A" and "Not A"). In our case, they are hits or "not hits" (misses). So binomial errors apply. Let's look at the details. Applying Binomial Errors to the Pi ComputationGiven finite statistics (a non-infinite number of trials), each set of trials (e.g. ```Ntotal=100```) carries an uncertainty on the computed value of $\pi$, $\pi_{est}$, such that we should really quote ($\pi_{est} \pm \sigma_{\pi}$). Since the number of points that land *inside or on* the circle is a subset of the total number thrown, $N_{in}$ and $N_{total}$ have to be treated using *binomial errors* (a dot can be in 1 of 2 non-overlapping states: in or out of the circle).$$\sigma_{n} = \sqrt{N_{total} \cdot p(1-p)}$$where $n=N_{in}$ and $p = P(\left. in \right| dot) = N_{in}/N_{total}$ in our specific case. If we propagate this into $\pi_{est}$, we obtain:$$\sigma_{\pi} = 4 \sigma_{N_{in}}/N_{total} = 4\sqrt{\frac{N_{in}}{N^2_{total}} \left( 1 - \frac{N_{in}}{N_{total}}\right)}$$ Relative Error on $\mathbf{\pi_{est}}$The *relative error* is the ratio of the uncertainty to the estimated value, e.g. $\sigma_{\pi}/\pi_{est}$. The *percent error* is just 100 times the relative error. That is given by:$$\textrm{% error} \equiv 100 \times \frac{\sigma_{\pi}}{\pi_{est}} = 100 \times \sqrt{\frac{1}{N_{in}}-\frac{1}{N_{total}}}$$
###Code
def computePiBinomialError(Ntotal=100):
my_pi_data = computePi(Ntotal, silent=True)
# Bonus: use Binomial Error calculation to determine numerical uncertainty on pi!
Nin = my_pi_data[1]
my_pi = my_pi_data[2]
    my_pi_uncertainty = my_pi * math.sqrt(1.0/float(Nin) - 1.0/float(Ntotal))  # relative error = sqrt(1/N_in - 1/N_total), matching the formula above
my_pi_relative_uncertainty = 100*(my_pi_uncertainty/my_pi)
output_text = f"pi = {my_pi:.6f} +/- {my_pi_uncertainty:.6f} (percent error= {my_pi_relative_uncertainty:.2f}%)"
return output_text
trial_100 = computePiBinomialError(Ntotal=100)
trial_1000 = computePiBinomialError(Ntotal=1000)
trial_10000 = computePiBinomialError(Ntotal=10000)
trial_100000 = computePiBinomialError(Ntotal=100000)
###Output
_____no_output_____
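###Markdown
As a quick sanity check on the binomial error formula above (an added sketch, not part of the original notebook), we can repeat the whole calculation many times with the ```computePi``` function defined earlier and compare the observed spread of the estimates with the binomial prediction. The choice of 200 repetitions is arbitrary.
###Code
# Added sketch: empirical check of the binomial error prediction
n_runs = 200       # number of independent repetitions (arbitrary choice)
n_throws = 1000    # dots thrown per repetition
pi_estimates = [computePi(Ntotal=n_throws, silent=True)[2] for _ in range(n_runs)]
mean_pi = sum(pi_estimates)/n_runs
empirical_sigma = (sum((p - mean_pi)**2 for p in pi_estimates)/(n_runs - 1))**0.5
# Binomial prediction: sigma_pi = 4*sqrt(p*(1-p)/N_total), with p = pi/4
p_hit = 3.14159/4.0
predicted_sigma = 4.0*math.sqrt(p_hit*(1.0 - p_hit)/n_throws)
print(f"Empirical sigma: {empirical_sigma:.4f}; binomial prediction: {predicted_sigma:.4f}")
###Output
_____no_output_____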
###Markdown
* For 100 trials: {{trial_100}}* For 1000 trials: {{trial_1000}}* For 10000 trials: {{trial_10000}}* For 100000 trials: {{trial_100000}}Note that the uncertainty scales as $1/\sqrt{N_{total}}$. Why is this powerful?* You have just learned how to *compute an integral numerically** You can apply this technique to *any function whose integral (area) you wish to determine** Consider the example on the next slide...Aside: Imagine if you had learned this earlier – how much better a homework result could you have prepared by attacking problems both analytically AND numerically? A General Approach * Given an arbitrary function, f(x), determine its integral numerically using the “Accept/Reject Method” * First, find the maximum value of the function (e.g. either analytically, if you like, or by calculating the value of f(x) over steps in x to find the maximum value, which I denote F(x)) * Second, enclose the function in a box, h(x), whose height is F(x) and whose length encloses as much of f(x) as is possible. * Third, compute the area of the box (easy!) * Fourth, throw points in the box using uniform random numbers. Throw a value for $x$, denoted $x^{\prime}$. Throw a value for $y$, denoted $y^{\prime}$. If $y^{\prime} < f(x^{\prime})$, it's a hit! If not, it's a miss! A General Approach $$\frac{N_{hits}}{N_{total}} = \frac{I(f(x))}{A(h(x))}$$This, in the real world, is how physicists, engineers, statisticians, mathematicians, etc. compute integrals ofarbitrary functions. **Learn it. Love it. It will save you.** Simulating Experiments - Another Use of Monte Carlo Techniques* The Monte Carlo technique, given a function that represents the probability of an outcome, can beused to generate “simulated data”* Simulated data is useful in designing an experiment (e.g. during the design phase, when it's too expensive to prototype the entirety of a one-of-a-kind instrument), or even “running” an experiment over and over to see all possible outcomes Simulating the Double-Slit Wave Interference Experiment Consider slits of width, b, separated by a distance, d. Light of wavelength, $\lambda$, can pass through the slits. We want to know the intensity of light in the resulting pattern as a function of scattering angle, $\theta$. Denote this intensity $I(\theta)$. $I(\theta)$ is a measure of the probability of finding a photon scattered at the angle $\theta$ On the next page we see the "theoretical prediction" of what will happen in then next experiment based on a mathematical description of past experiments. We want to then simulate the "next experiment"... Simulating the Double-Slit Wave Interference Experiment $$I(\theta) \propto \cos^2 \left(\frac{\pi d \sin(\theta)}{\lambda}\right)\mathrm{sinc}^2\left(\frac{\pi b \sin(\theta)}{\lambda}\right)$$ where $$\mathrm{sinc}(x) \equiv \begin{cases} \sin(x)/x \; (x \ne 0) \\ 1 \; (x=0) \end{cases}$$ Next Steps* Need the maximum value of $I(\theta)$ $\longrightarrow$ occurs when $\theta=0$ (for example)* Use that to compute the height of the box; the width of the box is $\pi$ (ranging from $-\pi/2$ to $+\pi/2$)* “Throw” random points in the box until you get 1000 “accepts”* Now you have a “simulated data” sample of 1000 photons scattered in the two-slit experiment. Implementation of the Double-Slit Monte Carlo Approach
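###Markdown
Before the double-slit implementation below, here is a minimal sketch of the general accept/reject recipe just described (an added example, not from the original notebook), applied to an assumed toy function $f(x)=x^2$ on $[0,1]$, whose exact integral is $1/3$. The box height is simply the maximum of $f$ on the interval.
###Code
# Added sketch: accept/reject integration of an assumed toy function f(x) = x**2 on [0, 1]
def f(x):
    return x**2

N_throws = 100000
F = 1.0                      # maximum of f on [0, 1]
box_area = (1.0 - 0.0) * F   # area of the enclosing box h(x)
hits = 0
for _ in range(N_throws):
    x_prime = random.uniform(0.0, 1.0)
    y_prime = random.uniform(0.0, F)
    if y_prime < f(x_prime):   # below the curve: a hit!
        hits += 1
integral_estimate = box_area * hits / N_throws
print(f"Estimated integral: {integral_estimate:.4f} (exact value: 0.3333)")
###Output
_____no_output_____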
###Code
# Imports used in this cell (they may already be loaded earlier in the notebook)
import numpy
import matplotlib.pyplot as plt
import ipywidgets as widgets

def sinc(x):
if x == 0.0:
return 1
else:
return math.sin(x)/x
def Intensity(theta, wavelength=500e-9, b=1.0e-6, d=0.01):
# wavelength is the light wavelength in meters
# b is the slit-width (in meters)
# d is the separation distance between slits (in meters)
return (math.cos((math.pi * d * math.sin(theta))/wavelength)**2 * sinc((math.pi*b*math.sin(theta))/wavelength)**2)
global photon_list, wavelength, slit_width, slit_spacing, photon_plot, photon_figure
photon_list = numpy.array([])
wavelength = 500e-9
slit_width=1e-6
slit_spacing=0.01
# Generate a number of Double-Slit photons
# add each new one to a data list
# plotting the list as a histogram using MatPlotLib
global photon_max
slider_label_style = {'description_width': 'initial'}
photon_max = widgets.FloatLogSlider(
value=1.,
base=10,
min=0.,
max=5.,
step=0.2,
description='Photons To Generate:',
style=slider_label_style,
layout=widgets.Layout(width='50%', height='30px')
)
photon_button = widgets.Button(
description='Generate Scattered Photons',
layout=widgets.Layout(width='50%', height='30px')
)
@photon_button.on_click
def DoubleSlitPhoton(button_object):
global photon_list, wavelength, slit_width, slit_spacing, photon_plot, photon_figure, photon_max
# Generate a single double-slit scattered photon
I_max = Intensity(0,wavelength, slit_width, slit_spacing)
photon_count = 0
theta = -999.
while photon_count < photon_max.value:
theta = numpy.random.uniform(-math.pi/2, math.pi/2)
I_generated = numpy.random.uniform(0, I_max)
if I_generated < Intensity(theta, wavelength, slit_width, slit_spacing):
photon_count += 1
photon_list = numpy.append(photon_list, theta)
# update the histogram plot
yields, bins = numpy.histogram(photon_list, bins=500, range=(-math.pi/2, math.pi/2))
bincenter = lambda x, i: x[i]+(x[i+1] - x[i])/2.0
bincenters = [bincenter(bins, i) for i in range(0, len(bins)-1)]
#photon_plot.hist(x=photon_list, bins=100, color=['k'])
plt.cla()
photon_plot.scatter(x=bincenters, y=yields, color=['k'])
photon_plot.set_xlim(-math.pi/2, math.pi/2)
photon_plot.set_xlabel("Scattering Angle (radians)")
photon_plot.set_ylabel("Number of Scattered Photons per radian")
plt.grid()
photon_figure.canvas.draw()
###Output
_____no_output_____
###Markdown
Slit width (b): {{slit_width}}m ; Slit Spacing (d): {{slit_spacing}} m ; Light wavelength ($\lambda$): {{wavelength/1e-9}} nm
###Code
photon_figure, photon_plot = plt.subplots(figsize = (6,4))
widgets.VBox(children=[photon_button, photon_max])
###Output
_____no_output_____ |
notebooks/tepcott.ipynb | ###Markdown
============ DIVISION 1ohdarnobihoernchenserennielcasper258smithyonetwelvemindlessrifftony_soprano1985doompenguin44grmplsfriendlybaronradas59tsupernaminommaschdivaitfab.iceman============ DIVISION 2scrazor92grievous_94michiski22mistarradam10603dogdajustus2036esquzzserviinutlestoodlesnthetruearyandv8rxfireflyp4ulin4toroldholborn============ DIVISION 3thisisqlimaxhibergthekillswitchhdeafplayerofmikeandmenpositivetensionherogutcountach92thedelgadic1j3rryacinonyximfishysunkenvipersilkieakajayjohnkili31============ DIVISION 4nikoflakisdeclassedhorsvarkamishindmitriyprreviltohrazerabe.cedefivedownactualpseudonymous8128killmeisterrazvanmc23mrbeattboxamlegendary96============ DIVISION 5abe.cedeadam10603akajayjohnamlegendary96casper258countach92deafplayerdeclasseddivaitdogdadoompenguin44dv8rxesquzzfab.icemanfireflyfivedownactualfriendlybarongrievous_94grmplsheroguthiberghorsvarkaimfishyj3rryacinonyxjustus2036kili31killmeistermichiski22mindlessriffmishindmitriymistarrmrbeattboxnikoflakisnommaschobihoernchenofmikeandmenohdarnoldholbornp4ulin4torpositivetensionprrevilpseudonymous8128radas59razvanmc23scrazor92serennielserviinutlessilkiesmithyonetwelvesunkenviperthedelgadic1thekillswitchhthetruearyanthisisqlimaxtohrazertony_soprano1985toodlesntsupernami
###Code
Brioso
Burgershot Stallion
Carbonizzare
Comet
Feltzer
Futo
Glendale
Huntley S
Jackal
Monroe
Prairie
Rhapsody
Stirling GT
Warrener
###Output
_____no_output_____ |
mdpi_sensors/hypnospy_general_code.ipynb | ###Markdown
In this Notebook...we present how different algorithms can be applied for a given participant with multiple days of recording from the MESA cohort, showing the differences in results from each one of those approaches.Further, in this example we show how HypnosPy's built-in non-wear detection can be used to process the file and get rid of days without enough recording time or where the device was not worn appropriately. Through this process, we show the ease of use of our basic software functionalities in a multi-signal setting.
###Code
from hypnospy import Wearable
from hypnospy.data import MESAPreProcessing
from hypnospy.analysis import SleepWakeAnalysis, Viewer, NonWearingDetector
# MESAPreProcessing is a specialized class to preprocess Actiwatch devices used in the MESA Sleep experiment
preprocessed = MESAPreProcessing("../data/examples_mesa/mesa-sample.csv")
# Wearable is the main object in HypnosPy.
w = Wearable(preprocessed)
# In HypnosPy, we have the concept of ``experiment day'' which by default starts at midnight (00 hours).
# We can easily change it to any other time we wish. For example, lets run this script with experiment days
# that start at 3pm (15h)
w.change_start_hour_for_experiment_day(15)
# Sleep Wake Analysis module
sw = SleepWakeAnalysis(w)
sw.run_sleep_algorithm("ScrippsClinic", inplace=True) # runs alg and creates new col named 'ScrippsClinic'
sw.run_sleep_algorithm("Cole-Kripke", inplace=True) # runs alg and creates new col named 'Cole-Kripke'
sw.run_sleep_algorithm("Oakley", inplace=True) # runs alg and creates new col named 'Oakley'
w.data[["linetime", "activity", "ScrippsClinic", "Cole-Kripke", "Oakley"]]
v = Viewer(w)
# Now we will plot each experiment day per row, all starting at 3pm.
# Left-hand plot: Note how the last two days have plenty of non-wear epochs
v.view_signals(signal_categories=["activity"], signal_as_area=["ScrippsClinic", "Cole-Kripke", "Oakley"],
colors={"area": ["green", "red", "blue"]}, alphas={"area": 0.6})
# Those can be removed with an algorithm like choi's.
nwd = NonWearingDetector(w)
nwd.detect_non_wear(strategy="choi")
nwd.check_valid_days(max_non_wear_minutes_per_day=180)
nwd.drop_invalid_days()
# Right-hand plot after removing days with large amount of non-wearing
v.view_signals(signal_categories=["activity"], signal_as_area=["ScrippsClinic", "Cole-Kripke", "Oakley"],
colors={"area": ["green", "red", "blue"]}, alphas={"area": 0.6})
###Output
_____no_output_____ |
src/NN-parallel--GPU.ipynb | ###Markdown
验证集
###Code
val_probability = pd.DataFrame(val_probability)
print(val_probability.shape)
print(val_probability.head())
val_probability.drop(labels=[0],axis=1,inplace=True)
val_probability.to_csv(r'../processed/val_probability_13100.csv',header=None,index=False)
###Output
_____no_output_____
###Markdown
Test set
###Code
import os
model_file = r'../model/model13100_NN_'
csr_testData = sparse.load_npz(r'../trainTestData/testData13100.npz')
gc.collect()
age_test = pd.read_csv(r'../data/age_test.csv',header=None,usecols=[0])
pt.printTime()
proflag = True
model_Num = 0
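# Ensemble: load each of the 10 saved models, accumulate their predicted probabilities and average them below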
for i in list(range(10)):
model = load_model(model_file + str(i) + '.h5')
if proflag==True:
probability = model.predict(csr_testData,batch_size=1024,verbose=1)
proflag = False
else:
probability += model.predict(csr_testData,batch_size=1024,verbose=1)
model_Num += 1
print(model_Num)
K.clear_session()
del model
pt.printTime()
model_Num
probability /= model_Num
age = np.argmax(probability,axis=1)
age_test = pd.read_csv(r'../data/age_test.csv',header=None,usecols=[0])
age_test = age_test.values
type(age_test)
print(probability.shape)
pro = np.column_stack((age_test,probability))
pro = pd.DataFrame(pro)
pro.drop(labels=[0,1],axis=1,inplace=True)
print(pro.shape)
pro.to_csv(r'../processed/test_probability_13100.csv',index=False,header=False)
###Output
_____no_output_____ |
notebooks/code_reviews/5.0_fitting_mi.ipynb | ###Markdown
Code Review V - Fitting MIIn this code review, we look at how to fit a curve for the relationship between mutual information and the centered kernel alignment (CKA) scorer. Code Preamble
###Code
# toy datasets
import sys
from pyprojroot import here
sys.path.insert(0, str(here()))
import warnings
from typing import Optional, Tuple
from tqdm import tqdm
import random
import pandas as pd
import numpy as np
import argparse
from sklearn.utils import check_random_state
# toy datasets
from src.data.distribution import DataParams, Inputs
# Kernel Dependency measure
from sklearn.preprocessing import StandardScaler
from sklearn.gaussian_process.kernels import RBF
from src.models.dependence import HSICModel
# RBIG IT measures
from src.features.utils import df_query, subset_dataframe
# Plotting
from src.visualization.distribution import plot_scorer, plot_score_vs_mi
# experiment helpers
from src.experiments.utils import dict_product, run_parallel_step
from tqdm import tqdm
# Plotting Procedures
import seaborn as sns
import matplotlib
import matplotlib.pyplot as plt
sns.reset_defaults()
# sns.set_style('whitegrid')
#sns.set_context('talk')
sns.set_context(context='poster',font_scale=0.7, rc={'font.family': 'sans-serif'})
# sns.set(font='sans-serif')
%matplotlib inline
%load_ext autoreload
%autoreload 2
###Output
The autoreload extension is already loaded. To reload it, use:
%reload_ext autoreload
###Markdown
Query Data
###Code
DATA_PATH = "data/results/distributions/mutual_info/"
results_df = pd.concat([
pd.read_csv(here() / f"{DATA_PATH}v5_gauss.csv"),
pd.read_csv(here() / f"{DATA_PATH}v5_tstudent.csv")
], axis=1)
results_df = results_df.loc[:, ~results_df.columns.str.match('Unnamed')]
results_df = results_df.astype(object).replace(np.nan, 'None')
###Output
/home/emmanuel/.conda/envs/hsic_align/lib/python3.8/site-packages/IPython/core/interactiveshell.py:3062: DtypeWarning: Columns (12,13) have mixed types.Specify dtype option on import or set low_memory=False.
has_raised = await self.run_ast_nodes(code_ast.body, cell_name,
###Markdown
Gaussian Distribution
###Code
# initialize list of queries
queries = []
# query dataframe for median
dataset_methods = ['gauss']
queries.append(df_query('dataset', dataset_methods))
# query dataframe for median
sigma_methods = ['median']
queries.append(df_query('sigma_method', sigma_methods))
# query dataframe for scott and silverman methods
sigma_percents = [40., 50., 60.]
queries.append(df_query('sigma_percent', sigma_percents))
# query dataframe for RBF Kernel
dimension_query = [False]
queries.append(df_query('per_dimension', dimension_query))
# query dataframe for HSIC
scorer_query = ['cka']
queries.append(df_query('scorer', scorer_query))
sub_df = subset_dataframe(results_df, queries)
# # plot - score vs mi
# plot_score_vs_mi(sub_df, scorer='cka', compare='dimension');
sub_df.head(3)
###Output
_____no_output_____
###Markdown
Extreme ValuesSo there are a few extreme values (i.e. values that appear to fall outside of the trend). I would like to highlight in what settings they were found.
###Code
# necessary columns for plotting
columns = ['score', 'mutual_info', 'dimensions', 'samples']
sub_df = sub_df[columns]
# change column types to categorical for plotting
ind_cols = [
'samples',
'dimensions'
]
sub_df[ind_cols] = sub_df[ind_cols].astype('category')
# Plot
fig, ax = plt.subplots(ncols=2, figsize=(12, 5))
sns.scatterplot(
ax=ax[0], x='score', y='mutual_info',
data=sub_df,
marker='.',
hue='samples',
)
ax[0].set_title("Comparing Samples")
ax[0].set_xlabel('CKA Score')
ax[0].set_ylabel('Mutual Information')
ax[0].set_yscale('symlog')
sns.scatterplot(
ax=ax[1], x='score', y='mutual_info',
data=sub_df,
marker='.',
hue='dimensions',
)
ax[1].set_title("Comparing Dimensions")
ax[1].set_xlabel('CKA Score')
ax[1].set_ylabel('Mutual Information')
ax[1].set_yscale('symlog')
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
So it appears that our estimation is at its worst in settings with a low number of samples and a high number of dimensions when there is a low amount of mutual information.**Note**: I find this a bit funny because kernels are known for being good for situations with a high number of samples and a low number of dimensions. Exact RelationSo there is a formula that describes the exact relationship between mutual information and the linear kernel for a Gaussian distribution. It's:$$I(\mathbf{X;Y}) = - \frac{1}{2} \log(1-\rho)$$where $\rho= \frac{|C|}{|C_{XX}||C_{YY}|}$. This is essentially the closed form solution for the MI between two Gaussian distributions. And $\rho$ is the score that we should obtain. I didn't actually calculate the closed-form solution (although I could in the future). But I would like to see if the score that I estimated approximates the true score that we should obtain if we were to assume a Gaussian. So I'll solve this equation for $\rho$ and then plot my estimated $\hat{\rho}$.$$ \rho = 1 - e^{-2 I}$$
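###Markdown
Since the stated goal of this code review is to fit a curve between the CKA score and mutual information, here is a minimal curve-fitting sketch (my addition, not part of the original analysis). It assumes the Gaussian-inspired functional form $MI \approx -a \log(1 - score)$ and estimates $a$ with ```scipy.optimize.curve_fit```; the clipping of the score to $[0, 0.999]$ is only there to keep the logarithm finite.
###Code
# Added sketch: fit MI ~ -a*log(1 - score) on the queried Gaussian subset
from scipy.optimize import curve_fit

def mi_model(score, a):
    return -a * np.log(1.0 - score)

scores = pd.to_numeric(sub_df["score"], errors="coerce").clip(0.0, 0.999)
mis = pd.to_numeric(sub_df["mutual_info"], errors="coerce")
mask = scores.notna() & mis.notna()
popt, pcov = curve_fit(mi_model, scores[mask].values, mis[mask].values, p0=[0.5])
print(f"Fitted a = {popt[0]:.3f} (the Gaussian closed form below corresponds to a = 0.5)")
###Output
_____no_output_____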
###Code
# calculate the real score based on the MI
sub_df['score_real'] = 1 - np.exp(- 2 * sub_df['mutual_info'])
# calculate the pearson, spearman between our estimate and the real score
from scipy import stats
p_score = stats.pearsonr(
sub_df['score'],
sub_df['score_real']
)[0]
sp_score = stats.spearmanr(
sub_df['score'],
sub_df['score_real']
)[0]
# Plot
fig, ax = plt.subplots(ncols=1, figsize=(7, 7))
sns.regplot(
ax=ax, x='score_real', y='score',
data=sub_df,
marker='.',
color='black',
scatter_kws={'color': 'lightblue', 'label': 'Points'}
)
ax.set_title("Approximate Relationship")
ax.set_xlabel('CKA Score')
ax.set_ylabel('True Score')
# ax.set_ylim([0.0, 8])
# Plot I
# ax.plot(np.sort(sub_df['score']), sub_df['mi_kernel'],
# linewidth=3, color='black', label='Fitted Curve')
ax.legend(['Regression Line', 'Points'])
ax.annotate(f"Pearson: {p_score:.2f}\nSpearman: {sp_score:.2f}", (-0.025, .75), fontsize=15)
plt.show()
###Output
_____no_output_____ |
Section 1/notebooks/section1/Strings_Numbers.ipynb | ###Markdown
Strings & Numbers Presenting text (strings)
###Code
print("{:<15} {:<15} {:<15}".format("Change", "Return", "Volatility"))
print("-"*42)
###Output
Change Return Volatility
------------------------------------------
###Markdown
Number Classes
###Code
print(type(5))
print(type(10.0))
###Output
<class 'int'>
<class 'float'>
###Markdown
Combining Strings and Numbers in Output
###Code
print("{:<15}{:<15}{:<15}".format("Change", "Return", "Volatility"))
print('-'*42)
print("{:<15.2f} {:<15.3f} {:<15.4f}".format(2.31,.0145, .2345))
x = 5
y = 10.0
###Output
_____no_output_____
###Markdown
Mathematical Operators
###Code
print(x + y)
print(x - y)
print(x * y)
print(x / y)
print(x ** y)
print(x // y)
print(x % y)
###Output
15.0
-5.0
50.0
0.5
9765625.0
0.0
5.0
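###Markdown
A quick note on the division operators above (added example): ```/``` always performs true division and returns a float, while ```//``` performs floor division and ```%``` returns the remainder.
###Code
print(7 / 2)    # 3.5 -> true division, always a float
print(7 // 2)   # 3   -> floor division
print(7 % 2)    # 1   -> remainder
###Output
_____no_output_____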
###Markdown
Incrementing
###Code
x = 5
x += x
x
###Output
_____no_output_____ |
AFML 2.1.ipynb | ###Markdown
Data StructureThis notebook will cover:Exercise 2.*Using different types of data structures to transform time-series into an event-driven order.Instead of sampling the market price as a time-series, we resample the data whenever an expected fraction of the total transaction volume has been traded.Using such data structures can generate better market signals for different plausible quant strategies.In order to appreciate this technique, I highly recommend reading the research below.[Volume Clock SSRN](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2034858)**Note**If actual high-frequency data is not within your reach, you may wish to try the code below: def create_price_data(start_price: float = 1000.00, mu: float = .0, var: float = 1.0, n_samples: int = 1000000): import numpy as np import pandas as pd i = np.random.normal(mu, var, n_samples) df0 = pd.date_range(periods=n_samples, freq=pd.tseries.offsets.Minute(), end=pd.datetime.today()) X = pd.Series(i, index=df0, name = "close").to_frame() X.close.iat[0] = start_price X.cumsum().plot.line() return X.cumsum()The above function can generate about 2 years of synthetic raw data, so that you can start to structure your data into dollar bars.Please remember to save this sample data to csv format, otherwise your results may not be consistent. (This sample data can get you started for the first 5 chapters)Contact: [email protected]
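###Markdown
For readers without access to the precomputed sample files, the sketch below (my addition) shows one simple way dollar bars could be built from raw tick data. It assumes a DataFrame with 'price' and 'volume' columns and an arbitrary dollar threshold, and it is not necessarily the exact procedure used to generate the sample bars loaded next.
###Code
# Added sketch: naive dollar-bar construction from tick data (assumptions noted above)
def make_dollar_bars(ticks, threshold=1_000_000):
    import pandas as pd
    bars, bucket, dollar_sum = [], [], 0.0
    for ts, row in ticks.iterrows():
        bucket.append((ts, row['price'], row['volume']))
        dollar_sum += row['price'] * row['volume']
        if dollar_sum >= threshold:  # close the bar once enough dollar value has traded
            prices = [p for _, p, _ in bucket]
            bars.append({'date_time': bucket[-1][0],
                         'open': prices[0], 'high': max(prices),
                         'low': min(prices), 'close': prices[-1],
                         'volume': sum(v for _, _, v in bucket)})
            bucket, dollar_sum = [], 0.0
    return pd.DataFrame(bars).set_index('date_time')
###Output
_____no_output_____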
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
p = print
#pls take note of version
#numpy 1.17.3
#pandas 1.0.3
#sklearn 0.21.3
# Intraday sample data courtesy of mlfinlab
dollar = pd.read_csv('./Sample_data/dollar_bars.txt',
sep=',',
header=0,
parse_dates = True,
index_col=['date_time'])
volume = pd.read_csv('./Sample_data/volume_bars.txt',
sep=',',
header=0,
parse_dates = True,
index_col=['date_time'])
tick = pd.read_csv('./Sample_data/tick_bars.txt',
sep=',',
header=0,
parse_dates = True,
index_col=['date_time'])
db_ = dollar['close'].resample('W').count()
vb_ = volume['close'].resample('W').count()
tb_ = tick['close'].resample('W').count()
count_df = pd.concat([tb_, vb_, db_], axis=1)
count_df.columns = ['tick', 'volume', 'dollar']
count_df.loc[:, ['tick', 'volume', 'dollar']].plot(kind = 'bar', figsize=[25, 5])
# Tick bars have the most irregular count
# While Dollar bar produces the most stable count per week
p(count_df['dollar'].autocorr())
p(count_df['volume'].autocorr())
p(count_df['tick'].autocorr())
#Dollar bars has the lowest autocorr
db1_ = dollar['close'].resample('M').mean().pct_change().var()
vb1_ = volume['close'].resample('M').mean().pct_change().var()
tb1_ = tick['close'].resample('M').mean().pct_change().var()
p(tb1_, vb1_, db1_)
# Still dollar bar has the lowest variance
# But I suspect you have to resample 1D and 1W as well
from scipy import stats
p(stats.jarque_bera(dollar['close'].pct_change().dropna())[0],
stats.jarque_bera(volume['close'].pct_change().dropna())[0],
stats.jarque_bera(tick['close'].pct_change().dropna())[0])
# Again.. dollar bar.. we r seeing a pattern here
import statsmodels.stats.diagnostic as sm
import statsmodels.api as smi
def bband(data: pd.Series, window: int = 21, width: float = 0.005):
avg = data.ewm(span = window).mean()
std0 = avg * width
lower = avg - std0
upper = avg + std0
return avg, upper, lower, std0
dollar['avg'], dollar['upper'], dollar['lower'], dollar['std0'] = bband(dollar['close'])
count_dn = dollar[dollar['lower'] > dollar['close']]
count_up = dollar[dollar['upper'] < dollar['close']]
bband_dollar = pd.concat([count_dn, count_up])
raw_bbd = bband_dollar.copy()
p("Total count: {0}\nUpper Bound exceed: {1}\nLower Bound exceed: {2}".format(bband_dollar.count()[0],
count_up.count()[0],
count_dn.count()[0]))
# when you import research as rs
# the below func can be used as rs.cs_filters()
def cumsum_events(df: pd.Series, limit: float):
idx, _up, _dn = [], 0, 0
diff = df.diff()
for i in diff.index[1:]:
_up, _dn = max(0, _up + diff.loc[i]), min(0, _dn + diff.loc[i])
if _up > limit:
_up = 0; idx.append(i)
elif _dn < - limit:
_dn = 0; idx.append(i)
return pd.DatetimeIndex(idx)
def cumsum_events1(df: pd.Series, limit: float):
idx, _up, _dn = [], 0, 0
diff = df.pct_change()
for i in diff.index[1:]:
_up, _dn = max(0, _up + diff.loc[i]), min(0, _dn + diff.loc[i])
if _up > limit:
_up = 0; idx.append(i)
elif _dn < - limit:
_dn = 0; idx.append(i)
return pd.DatetimeIndex(idx)
event = cumsum_events(bband_dollar['close'], limit = 0.005) # benchmark
event_pct = cumsum_events1(bband_dollar['close'], limit = 0.005)
event_abs = cumsum_events(bband_dollar['close'], limit = bband_dollar['std0'].mean()) # based on ewma std abs estimate 0.005
event_count0 = dollar.reindex(event)
event_count1 = dollar.reindex(event_abs)
event_count2 = dollar.reindex(event_pct)
p("Total count after filter (close price): {0}".format(event_count0.count()[0]))
p("Total count after filter (absolute change): {0}".format(event_count1.count()[0]))
p("Total count after filter (pct change): {0}".format(event_count2.count()[0]))
###Output
Total count after filter (close price): 1782
Total count after filter (absolute change): 424
Total count after filter (pct change): 426
###Markdown
White TestHeteroscedasticity tests imply the two following hypotheses.H0 (null hypothesis): data is homoscedastic.Ha (alternative hypothesis): data is heteroscedastic.Therefore, if the p-value associated to a heteroscedasticity test falls below a certain threshold (0.05 for example), we would conclude that the data is significantly heteroscedastic.
###Code
#event_count['std'] = event_count['close'].rolling(21).std()
#event_count.dropna(inplace= True)
def white_test(data: pd.DataFrame, window: int = 21):
    data['std1'] = data['close'].rolling(window).std()
data.dropna(inplace= True)
X = smi.tools.tools.add_constant(data['close'])
results = smi.regression.linear_model.OLS(data['std1'], X).fit()
resid = results.resid
exog = results.model.exog
p("White-Test p-Value: {0}".format(sm.het_white(resid, exog)[1]))
if sm.het_white(resid, exog)[1] > 0.05:
p("White test outcome at 5% signficance: homoscedastic")
else:
p("White test outcome at 5% signficance: heteroscedastic")
# Without the cumsum filter, the percentage return based on the Bollinger band would be more heteroscedastic
# Main reason: the filter removes those signals that do not meet the threshold requirement.
white_test(raw_bbd) # without filter (less heteroscedastic)
white_test(event_count0) # with filter (close price)
# As compared to percentage change vs absolute
# Absolute change in daily price will yield lower p-value (more heteroscedastic)
white_test(event_count1) # absolute return as a filter (less heteroscedastic)
white_test(event_count2) # percent return as a filter
###Output
White-Test p-Value: 0.27030043663183695
White test outcome at 5% signficance: homoscedastic
White-Test p-Value: 0.07936943626566823
White test outcome at 5% signficance: homoscedastic
|
Jupyter_notebooks/Gmix.ipynb | ###Markdown
Monte Carlo simulation The microstructure often determines the mechanical properties of many materials. Hence, it is essential to know what variables it depends on. Temperature, pressure and composition all affect the microstructure at equilibrium according to the Gibbs Phase Rule. The Gibbs energy for mixing ($\Delta G_{mix}$) determines if two components are soluble or will precipitate and it can be approximated by the regular solution model:$\Delta G_{mix} = zN_a\Big[\varepsilon_{AB}-\frac{1}{2}\Big(\varepsilon_{AA}+\varepsilon_{BB}\Big)\Big] + RT(\chi_A\cdot ln(\chi_A)+\chi_B ln(\chi_B))$This model assumes that the volume of pure A equals that of B, which omits the elastic strain field contribution to $\Delta H_{mix}$. Since the regular solution model is not perfect and it is difficult to obtain experimental data, computer models like [Metropolis Monte Carlo simulations](https://web.northeastern.edu/afeiguin/phys5870/phys5870/node80.html) are often used. This allows us to determine under what conditions two components would randomly mix, form precipitates or form inter-metallic phases.The microstructure of a material is determined both by the thermodynamics and the kinetics of the different phases that can form. To simulate this, our A and B atoms are initially in a random configuration. The variables that we can then play with are:* Time (n): the number of iterations that the algorithm runs for* Composition ($\chi_A$): determined by the atomic fraction of A atoms (Xa)* The energy of mixing ($\varepsilon_{AB}$): determines if A and B atoms thermodynamically prefer to be next to each other (intermetallic) or to form precipitates* Temperature (T): which determines the amount of diffusion in the systemThe way a Monte Carlo simulation works is that it randomly selects an atom and determines whether to swap it with one of its neighbours or not. The swap automatically occurs if it is energetically favourable to do so, determined by the enthalpy of switching the two atoms. The two atoms can also be swapped if there is enough energy to do so, as determined by the Boltzmann distribution $\Big(exp \Big(\frac{-\Delta E}{k_BT}\Big)\Big)$. This term is temperature-dependent and encapsulates the idea of diffusivity and entropy.This process is repeated for n steps, after which we have our final microstructure. To then determine the statistics, we can calculate the number of A-B bonds and thus numerically determine if we have an ideal solution, a precipitate or an inter-metallic phase.Here we will concentrate mostly on the thermodynamics and see how $\varepsilon_{AB}$, temperature and composition affect the resulting microstructure. 2D lattice For simplicity, we are going to model our lattice as a $N \times N$ matrix with periodic boundary conditions.This will start in a random configuration of atoms A and B, which we can represent as 1 and 0.We then need to be able to select random indices of our matrix and apply our Monte Carlo model to determine whether to swap the atoms or not.
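###Markdown
The actual lattice update is implemented in the ```alloy_generator``` module imported below; as a rough illustration (my addition), a single Metropolis swap attempt could look like the following sketch, where the energy change ```dE``` of swapping two neighbouring sites is assumed to be computed elsewhere.
###Code
# Added sketch of one Metropolis accept/reject step (illustrative only; the real logic lives in alloy_generator)
import numpy as np

def metropolis_swap(lattice, dE, i, j, ni, nj, T, kB=8.617e-5):
    # dE: energy change (in eV) of swapping site (i, j) with its neighbour (ni, nj), computed elsewhere
    if dE <= 0 or np.random.rand() < np.exp(-dE / (kB * T)):
        lattice[i, j], lattice[ni, nj] = lattice[ni, nj], lattice[i, j]
        return True   # swap accepted
    return False      # swap rejected
###Output
_____no_output_____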
###Code
import numpy as np
import matplotlib.pyplot as plt
import alloy_generator as alloy
# Parameters used:
N = 100 # Size of the matrix used
Xa = 0.5 # atomic fraction of A atoms
n = 10**4 # Number of iterations taken
T = 1000 # Temperature in K
E = 0.5 # Energy of mixing
# Calculate and plot final configuration
Alloy = alloy.Create_alloy(N, Xa, n, T, E)
plt.figure(num=None, figsize=(8, 6), dpi=80, facecolor='w', edgecolor='k')
plt.pcolor(Alloy)
plt.figure()
alloy.test2D(Alloy, N, Xa)
###Output
_____no_output_____ |
notebooks/01b-scarlet-measure.ipynb | ###Markdown
SCARLET implementationThis notebook provides a measure function using [SCARLET](https://www.sciencedirect.com/science/article/abs/pii/S2213133718300301), a deblending algorithm based on matrix factorization. **NOTE:** It requires that you install the scarlet python package from the [source](https://github.com/pmelchior/scarlet), the pip installation being outdated. Please follow the instructions for installing scarlet [here](https://pmelchior.github.io/scarlet/install.html). If you have not done so already, we encourage you to follow the BTK [intro tutorial](https://lsstdesc.org/BlendingToolKit/tutorials.html), which will help you understand what is done in this notebook.
###Code
catalog_name = "../data/sample_input_catalog.fits"
stamp_size = 24
surveys = btk.survey.get_surveys(["Rubin","HSC"])
catalog = btk.catalog.CatsimCatalog.from_file(catalog_name)
draw_blend_generator = btk.draw_blends.CatsimGenerator(
catalog,
btk.sampling_functions.DefaultSampling(max_number=5,maxshift=6),
surveys,
stamp_size=stamp_size,
batch_size=10
)
def scarlet_measure(batch,idx,channels_last=False, is_multiresolution=False,**kwargs):
"""Measure function for SCARLET
"""
sigma_noise = kwargs.get("sigma_noise", 1.5)
surveys = kwargs.get("surveys", None)
#Fist we carry out the detection, using SExtractor (sep being the python implementation)
# We need to differentiate between the multiresolution and the regular case
if is_multiresolution:
survey_name = surveys[0].name
image = batch["blend_images"][survey_name][idx]
# Put the image in the channels first format if not already the case
image = np.moveaxis(image,-1,0) if channels_last else image
avg_image = np.mean(image, axis=0)
wcs_ref = batch["wcs"][survey_name]
psf = np.array([p.drawImage(galsim.Image(image.shape[1],image.shape[2]),scale=surveys[0].pixel_scale).array for p in batch["psf"][survey_name]])
else:
image = batch["blend_images"][idx]
image = np.moveaxis(image,-1,0) if channels_last else image
avg_image = np.mean(image, axis=0)
psf = np.array([p.drawImage(galsim.Image(image.shape[1],image.shape[2]),scale=surveys[0].pixel_scale).array for p in batch["psf"]])
wcs_ref = batch["wcs"]
stamp_size = avg_image.shape[0]
bkg = sep.Background(avg_image)
catalog, segmentation = sep.extract(
avg_image, sigma_noise, err=bkg.globalrms, segmentation_map=True
)
if len(catalog) == 0:
t = astropy.table.Table()
t["ra"], t["dec"] = wcs_ref.pixel_to_world_values(catalog["x"], catalog["y"])
if is_multiresolution:
return {"catalog":t,"segmentation":None,"deblended_images":{s.name: np.array([np.zeros((len(s.filters),batch["blend_images"][s.name][idx].shape[1],
batch["blend_images"][s.name][idx].shape[1]))]) for s in surveys}}
else:
s = surveys[0]
return {"catalog":t,"segmentation":None,"deblended_images":np.array([np.zeros((len(s.filters),batch["blend_images"][idx].shape[1],
batch["blend_images"][idx].shape[1]))])}
#Drawing the PSFs
mean_sky_level = [btk.survey.get_mean_sky_level(surveys[0],filt) for filt in surveys[0].filters]
### Initializing scarlet ###
bands=[f.name for f in surveys[0].filters]
model_psf = scarlet.GaussianPSF(sigma=(0.8,) * len(bands))
# The observation contains the blended images as well as additionnal informations
observations = [scarlet.Observation(
image, psf=scarlet.ImagePSF(psf), weights=np.ones(image.shape)/ (bkg.globalrms**2), channels=bands, wcs=wcs_ref
)]
# We define an observation for each survey
for survey in surveys[1:]:
image = batch["blend_images"][survey.name][idx]
image = np.moveaxis(image,-1,0) if channels_last else image
psf = np.array([p.drawImage(galsim.Image(image.shape[1],image.shape[2]),scale=survey.pixel_scale).array for p in batch["psf"][survey.name]])
bands=[f.name for f in survey.filters]
avg_image = np.mean(image, axis=0)
bkg = sep.Background(avg_image)
wcs = batch["wcs"][survey.name]
observations.append(scarlet.Observation(
image, psf=scarlet.ImagePSF(psf), weights=np.ones(image.shape)/ (bkg.globalrms**2), channels=bands, wcs=wcs
))
# We create a frame grouping all the observations
model_frame = scarlet.Frame.from_observations(observations, coverage='intersection', model_psf=model_psf)
# We define a source for each detection
sources = []
for n, detection in enumerate(catalog):
result = scarlet.ExtendedSource(
model_frame,
model_frame.get_sky_coord((detection["y"], detection["x"])),
observations,
thresh=1,
shifting=True,
)
sources.append(result)
scarlet.initialization.set_spectra_to_match(sources, observations)
### Fitting the sources to the blend ###
try:
blend = scarlet.Blend(sources, observations)
blend.fit(200, e_rel=1e-5)
### Returning the results in a BTK compatible form ###
deblended_images = {}
for i in range(len(surveys)):
im, selected_peaks = [], []
for k, component in enumerate(sources):
y, x = component.center
selected_peaks.append([x, y])
model = component.get_model(frame=model_frame)
model_ = observations[i].render(model)
model_ = np.transpose(model_, axes=(1, 2, 0)) if channels_last else model_
im.append(model_)
selected_peaks = np.array(selected_peaks)
deblended_images[surveys[i].name] = np.array(im)
t = astropy.table.Table()
t["ra"], t["dec"] = wcs_ref.pixel_to_world_values(selected_peaks[:,0], selected_peaks[:,1])
t["ra"] *= 3600 #Converting to arcseconds
t["dec"] *= 3600
except AssertionError: #If the fitting fails
t = astropy.table.Table()
t["ra"], t["dec"] = wcs_ref.pixel_to_world_values(catalog["x"], catalog["y"])
t["ra"] *= 3600 #Converting to arcseconds
t["dec"] *= 3600
if is_multiresolution:
deblended_images={s.name: np.array([np.zeros((len(s.filters),batch["blend_images"][s.name][idx].shape[1],
batch["blend_images"][s.name][idx].shape[1])) for c in catalog]) for s in surveys}
else:
deblended_images={s.name: np.array([np.zeros((len(s.filters),batch["blend_images"][idx].shape[1],
batch["blend_images"][idx].shape[1])) for c in catalog]) for s in surveys}
print("failed")
if len(surveys) == 1:
deblended_images = deblended_images[surveys[0].name]
return {"catalog":t,"segmentation":None,"deblended_images":deblended_images}
measure_kwargs=[{"sigma_noise": 2.0}]
meas_generator = btk.measure.MeasureGenerator(
[btk.measure.sep_measure,scarlet_measure], draw_blend_generator, measure_kwargs=measure_kwargs
)
metrics_generator = btk.metrics.MetricsGenerator(
meas_generator,
target_meas={"ellipticity": btk.metrics.meas_ksb_ellipticity},
meas_band_num=(0,0)
)
blend_results,measure_results,metrics_results = next(metrics_generator)
###Output
No flux in morphology model for source at [ 3.59998307e+02 -4.46731636e-04]
/home/thuiop/Documents/stageAPC/BlendingToolKit/env/lib64/python3.7/site-packages/skimage/metrics/_structural_similarity.py:108: UserWarning: Inputs have mismatched dtype. Setting data_range based on im1.dtype.
im2[..., ch], **args)
###Markdown
Plot Metrics from Scarlet Results
###Code
btk.plot_utils.plot_metrics_summary(metrics_results,interactive=True)
survey = "HSC"
btk.plot_utils.plot_with_deblended(
blend_results["blend_images"][survey],
blend_results["isolated_images"][survey],
blend_results["blend_list"][survey],
measure_results["catalog"]["scarlet_measure"][survey],
measure_results["deblended_images"]["scarlet_measure"][survey],
metrics_results["matches"]["scarlet_measure"][survey],
indexes=list(range(5)),
band_indices=[1, 2, 3]
)
###Output
_____no_output_____ |
Replication and Causal Graphs/.ipynb_checkpoints/Replication-checkpoint.ipynb | ###Markdown
Replication Corruption Dynamics: The Golden Goose Effect, Paul Niehaus and Sandip Sukhtankar (2013)
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
This notebook presents the replicated results from [***Corruption Dynamics: The Golden Goose Effect***](https://www.aeaweb.org/articles?id=10.1257/pol.5.4.230), Paul Niehaus and Sandip Sukthankar; (American Economic Journal: Economic Policy 2013, 5(4); 230-269).I rely on the programming language *Python* and the packages *matplotlib*, *numpy*, *pandas*, *scipy*, *statsmodels* to replicate the results. I am able to replicate nearly all results from Niehaus and Sukhtankar. Due to limited space I will only present the most important findings. [Tables](https://github.com/HumanCapitalAnalysis/student-project-LeonardMK/tree/master/Replication%20and%20Causal%20Graphs/tables) and [figures](https://github.com/HumanCapitalAnalysis/student-project-LeonardMK/tree/master/Replication%20and%20Causal%20Graphs/figures) can be found here. The code for replicating the results can be found [here](https://github.com/HumanCapitalAnalysis/student-project-LeonardMK/tree/master/Replication_Program.ipynb/). Table of Contents* Motivation* Policy and Data* Theoretical Results* Descriptive Statistics and Figures* Model Specification and Causal Graphs* Results * Theft from Daily-Wage Projects * Theft from Piece-Rate Projects* Robustness Checks* Is Monitoring Affected* Magnitude of Effect* Conclusion* References Motivation Niehaus and Sukthankar build upon a paper by Gary S. Becker and George J. Stigler (1974) in which they argue that a principal can reduce corruption by paying an efficiency wage. From here Niehaus and Sukthankar pick up and argue that a wage increase for the official's subordinates could change the official's rent extraction expectations as well. Officials face a tradeoff between extracting higher rents today and surviving in office to extract rents in the future. If officials expect higher rents in the future, the increased utility of those future rents can offset the immediate gain, which could lead to a weaker increase in corruption. Niehaus and Sukhtankar call this the "golden goose" effect since officials want to preserve the goose that lays the golden eggs, which is a reference to the fable. To underpin their argument Niehaus and Sukthankar propose an economic model describing the official's optimization problem, which I will outline in the theoretical chapter. Policy and Data To test for "golden goose" effects Niehaus and Sukhtankar use panel data from the National Rural Employment Guarantee Scheme (NREGS), which is India's largest rural welfare program. Under this scheme every rural household has the right to 100 days of paid labor and can otherwise claim unemployment benefits. NREGS projects typically include construction of roads and irrigation systems. At the beginning of each year a shelf of projects is approved either at the panchayat level (village level) or at the block level (intermediate administrative unit), although higher up officials often propose or approve shelves of projects. Work is either compensated on a daily-wage or on a piece-rate basis. In practice each project uses the same type of payment and there is a clear separation between daily-wage and piece-rate projects. For example all irrigation projects involving digging trenches use piece-rate payments. This difference in payment schemes is important to keep in mind for the later part of Niehaus and Sukhtankar's analysis. Officials can gain illicit rents in this context through the following forms: project selection, overreporting of labor, underpaying workers or stealing materials.
Consequently officials are monitored by workers and by higher up officials. Officials that get caught either face suspension or relocation to dead-end jobs. For their analysis Niehaus and Sukthankar use data from the state of Orissa, which announced a wage increase for daily-wage projects on 28.04.2007 that came into effect on 01.05.2007. The policy change was proposed by the state government of Orissa and saw the wage increase from 50Rs. to 70Rs. for one day of labor. This wage hike was limited to daily-wage projects only. Niehaus and Sukthankar argue for the exogeneity of the policy since the officials are removed from the policymakers. Niehaus and Sukthankar use individual level [official data](http://nrega.nic.in) which includes personal information like age and residency. Also they get information about the official amount paid and the type of project. Besides data from Orissa they also gathered observations from Andra Pradesh to use as a control in later analysis. To get an actual measure of corruption Niehaus and Sukthankar sampled 1938 households from the official data to ask them about the type of project (daily-wage or piece-rate), the amount of work done and the payment they received. Additionally village officials were questioned about the local labor market conditions, seasons and official visits. Theoretical Results and Propositions Niehaus and Sukthankar propose a model where time is discrete. An infinitely lived official and $N$ workers maximize their discounted earnings.$$ u_i(t) = \sum_{\tau = t}^{\infty} \beta^{\tau - t}y_i(\tau) $$Here $y_i(\tau)$ are the earnings of agent $i$ in period $\tau$. It is assumed that identical agents wait to replace a fired official. In each period at most one project can be active. Variable $\omega^t$ equals $1$ if the project is a daily-wage project and $0$ if it is a piece-rate project. The shelf is defined as a stochastic stream of projects from which at the beginning of each period a random project is drawn. $$ \phi \equiv \textbf{P}(\omega^t = 1 |\omega^{t - 1}, \omega^{t - 2}, ...) $$It is assumed that all agents know $\phi$ and that it is exogenous. Each worker is assumed to have one unit of labor he can supply. Further, the following notation holds: 1. $\underline{w}^t, \underline{r}^t$: Daily-wage and piece-rate earnings in the private sector.2. $w_i^t, r_i^t$: Wage a NREGS worker receives. Can differ from the statutory wage.3. $\bar{w}, \bar{r}$: Statutory wage rate.4. $n^t, q^t$: The amount of work supplied for daily-wage/ piece-rate projects.5. $\hat{n}^t, \hat{q}^t$: The amount of work reported by the official on daily-wage respectively piece-rate projects.6. $\bar{n}, \bar{q}$: The number of registered workers in a village.If the project is a daily-wage project the official gains the following illicit rents$$ y_o^t(\omega^t = 1) = \underbrace{(\bar{w} - w)}_{Underpayment} n + \underbrace{(\hat{n}^t - n)}_{Overreporting} \bar{w}$$A similar formula holds in the case of piece-rate projects.$$ y_o^t(\omega^t = 0) = \underbrace{(\bar{r} - r)}_{Underpayment} q + \underbrace{(\hat{q}^t - q)}_{Overreporting} \bar{r}$$The probability of detection is modeled as a function $\pi(\hat{n}, n), \mu(\hat{q}, q)$ for daily-wage respectively piece-rate projects. Further it is assumed that $\pi(n, n)=\mu(q, q)=0$, meaning there is no punishment for honesty, and it is assumed that the official's problem has an interior solution.
If an official is caught he receives a continuation payoff normalized to zero.The recursive formulation of the official's objective function is$$\bar{V}\equiv \phi V(\bar{w}, 1, \phi) + (1 - \phi) V(\bar{w}, 0, \phi)$$where $V(\bar{w}, 1, \phi)$ is the official's expected continuation payoff in a period with a daily-wage project. Consequently $V(\bar{w}, 0, \phi)$ marks the case of a piece-rate project.$$ V(\bar{w}, 1, \phi) = \max_{\hat{n}}[(\bar{w} - w) n + (\hat{n} - n) \bar{w} + \beta (1 - \pi(\hat{n}, n)) \bar{V}(\bar{w}, \phi)]$$$$ V(\bar{w}, 0, \phi) = \max_{\hat{q}}[(\bar{r} - r) q + (\hat{q} - q) \bar{r} + \beta (1 - \mu(\hat{q}, q)) \bar{V}(\bar{w}, \phi)]$$From this model Niehaus and Sukthankar derive three propositions which they plan to test in the coming regressions.1. Overreporting $\hat{n}^t - n$ on daily-wage projects is increasing in $\bar{w}$ if $\frac{\bar{w}}{\bar{V}} \frac{\partial{\bar{V}}}{\partial{\bar{w}}} < 1$ and decreasing otherwise. A higher wage increases utility from future overreporting, raising the importance of keeping one's job.2. Total theft from piece-rate projects $(\hat{q}^t \bar{r} - qr)$ is decreasing in $\bar{w}$. The intuition here is that an increase in $\bar{w}$ leads to a substitution effect in corruption from daily-wage to piece-rate.3. Restrict attention to any closed, bounded set of parameters $(\phi, \bar{w}, \bar{r}, \underline{w}, \underline{r})$. Then for $|y_o(1) - y_o(0)|$ sufficiently small,$$ \frac{\partial^2(\hat{n}^t - n)}{\partial \bar{w} \partial \phi} < 0\ and\ \frac{\partial^2(\hat{q}^t \bar{r} - qr)}{\partial \bar{w} \partial \phi} < 0 $$ Descriptive Statistics and Figures Table 1: Summary Statistics of Main Regression Variables
###Code
pd.read_csv("tables/Table1.csv", index_col = [0])
###Output
_____no_output_____
###Markdown
Table 1 gives descriptive statistics for the official and actual daily-wage days and piece-rate project payments. The Forward DW (daily-wage) Fraction is the fraction of projects for each panchayat that are on a daily-wage basis in the next two months. It is evident from Table 1 that only a fraction of the days reported is actually paid. The same holds true for piece-rate project payments. We also see that most of the panchayats in our sample use daily-wage projects. We can further see from Figure 1 that most panchayats either plan on employing only daily-wage or only piece-rate projects.__Figure 1: Histogram of Forward Daily Wage Fraction__ Figure 2 presents the mean daily-wage rate for Orissa over the sample period. We can see an almost immediate wage increase in the official data. However, official reports are still below 70Rs. for most of the observed time period. Interestingly the actual wage rate from the survey data didn't reflect this change. The mean actual wage rate deteriorated for workers after the shock. Niehaus and Sukthankar claim that this effect is only compositional and disappears once they control for district fixed effects.__Figure 2: Mean Daily-Wage Rate Orissa__ Model Specification and Causal Graphs Niehaus and Sukthankar use the statistical software *R* to fit mostly least-squares models. An exception is Table 8, whose results stem from maximizing the likelihood function of the model. All least-squares regression models were reported with robust standard errors clustered by panchayat and day. The first regression checks whether the reform affected the project shelf composition. Niehaus and Sukthankar estimate the following equation.$$ {Fwd.\ DW\ Frac.}_{pt} = \beta_0 + \beta_1 Shock_t + \textbf{T}_t^{'} \gamma + \textbf{R}_p^{'} \zeta + \delta_{d(p)} + \epsilon_{pt} \tag{1}$$Where $Fwd.\ DW\ Frac._{pt}$ is the share of planned daily-wage work days in the panchayat's shelf for the next two months, $\textbf{T}$ is a matrix of time controls, $\textbf{R}$ is a matrix including panchayat level controls and $\delta_{d(p)}$ denotes district fixed effects. The meaning of $\textbf{T}$, $\textbf{R}$ and $\delta_{d(p)}$ remains the same in this section. In principle we wouldn't expect to find a relation since a shelf of projects is fixed at the beginning of the fiscal year (March 2007). In the following regression equations $y_{pt}$ either refers to the officially reported number of daily-wage days or to the total amount paid for a piece-rate project.$$ {y}_{pt} = \beta_0 + \beta_1 y_{pt} + \beta_2 Shock_t + \textbf{T}_t^{'} \gamma + \textbf{R}_p^{'} \zeta + \delta_{d(p)} + \epsilon_{pt} \tag{2} $$For daily-wage projects we expect $\beta_2$ to be positive (Proposition 1.) while for piece-rate projects the model predicts a negative coefficient (Proposition 2.). The causal graph (Figure 3) underlying the regression models can be seen below. Niehaus and Sukhtankar assume that all back-door paths can be blocked by controlling for $\textbf{T}_t,\ \textbf{R}_p,\ \delta_{d(p)}, y_{pt}$. Here $ \textbf{U} $ denotes the set of unobserved variables.__Figure 3: Causal Graph Assumed by Niehaus and Sukthankar__ To further control for omitted variables Niehaus and Sukthankar use data from the neighboring state of Andra Pradesh. However, Andra Pradesh only employed piece-rate projects during the sample frame, so its use as a control is limited to regression models explaining variation in piece-rates.
For this case the following equation is estimated.$$ {y}_{pt} = \beta_0 + \beta_1 y_{pt} + \beta_2 (OR\ Shock)_t \times OR_p + \beta_3 (AP\ Shock\ 1)_t \times AP_p + \beta_4 (AP\ Shock\ 2)_t \times AP_p + \beta_5 (OR\ Shock)_t + \beta_6 (AP\ Shock\ 1)_t + \beta_7 (AP\ Shock\ 2)_t + OR_p + \textbf{T}_t^{'} \gamma + \textbf{R}_p^{'} \zeta + \delta_{d(p)} + \epsilon_{pt} \tag{3} $$Here we are interested in the coefficient $\beta_2$ which is the post-shock change in corruption in Orissa. Note that in the case where $y$ is the rate paid for a piece-rate project, $Always\ DW$ is replaced by $Always\ PR$. (No it is not. It is the post shock effect on the officially reported $y$). The following two regression equations are used to test proposition 3. To do so Niehaus and Sukthankar define the following variables: $Always\ DW/PR$ is $1$ if the panchayat implemented only daily-wage respectively piece-rate projects. Here $Always\ DW/PR$ is supposed to be the empirical counterpart to $\phi$. We therefore expect that $\beta_3 < 0$ for daily-wage projects and $\beta_3 > 0$ for piece-rate projects. The expectation for the $Shock$ variable is still $> (<) 0$.$$ {y}_{pt} = \beta_0 + \beta_1 y_{pt} + \beta_2 Shock_t + \beta_3 Shock_t \times (Always\ DW)_{pt} + \beta_4 (Always\ DW)_{pt} + \textbf{T}_t^{'} \gamma + \textbf{R}_p^{'} \zeta + \delta_{d(p)} + \epsilon_{pt} \tag{4} $$From the variable $Fwd.\ DW\ Frac.$ Niehaus and Sukthankar form the variable $Fwd.\ DW\ All$, which is equal to one if $Fwd.\ DW\ Frac. = 1$. In the same way they categorize $Fwd.\ DW\ Some$ if $0 < Fwd.\ DW\ Frac. < 1$ and the omitted category if $Fwd.\ DW\ Frac. = 0$. Using interaction terms Niehaus and Sukthankar allow for differing effects between categories. The estimated equation is depicted below. This way Niehaus and Sukthankar hope for a better representation of proposition 3.$$ {y}_{pt} = \beta_0 + \beta_1 y_{pt} + \beta_2 Shock_t + \beta_3 Shock_t \times (Fwd.\ DW\ All)_{pt} + \beta_4 (Fwd.\ DW\ All)_{pt} + \beta_5 Shock_t \times (Fwd.\ DW\ Some)_{pt}\\ + \beta_6 (Fwd.\ DW\ Some)_{pt} + \textbf{T}_t^{'} \gamma + \textbf{R}_p^{'} \zeta + \delta_{d(p)} + \epsilon_{pt} \tag{5} $$In order to test whether past corruption opportunities influence the present level of corruption they estimate one last regression model. Similarly to above, variables are derived from $Bwd.\ DW\ Frac.$, which measures the fraction of daily-wage projects in the past two months. $$ {y}_{pt} = \beta_0 + \beta_1 y_{pt} + \beta_2 Shock_t + \beta_3 Shock_t \times (Fwd.\ DW\ All)_{pt} + \beta_4 Shock_t \times (Bwd.\ DW\ All)_{pt} + \beta_5 Shock_t \times (Fwd.\ DW\ Some)_{pt} + \beta_6 Shock_t \times (Bwd.\ DW\ Some)_{pt} + \beta_7 (Fwd.\ DW\ All)_{pt} + \beta_8 (Bwd.\ DW\ All)_{pt} + \beta_9 (Fwd.\ DW\ Some)_{pt} + \beta_{10} (Bwd.\ DW\ Some)_{pt} + \textbf{T}_t^{'} \gamma + \textbf{R}_p^{'} \zeta + \delta_{d(p)} + \epsilon_{pt} \tag{6} $$The theoretical model predicts that $\beta_3 < 0$ and makes no prediction for $\beta_4$. Results Theft from daily-wage projects Due to limited space and similar results I will omit the estimates from the robustness checks reported in Tables 6 and 7. Since I wasn't able to replicate the results reported in Table 8 (even from the original code) I report my results and compare them to the results shown in the original paper. To start, I present the estimation results from equation $(1)$, which can be seen in Table 2 below. The results suggest no significant relationship between the composition of future projects and the wage hike for daily-wage projects.
This means that officials didn't try to change the project composition to gain better rent-extraction opportunities. Besides the effect of the shock, Niehaus and Sukhtankar also report the effect of a linear/quadratic time trend, which is likewise insignificant. (A minimal Python sketch of how such a clustered regression could be set up is shown after the table.) Table 2: Wage Shock Effects on Project Composition
###Code
pd.read_csv("tables/Table2.csv", index_col = [0], usecols = [0, 1, 2, 3], keep_default_na = False)
###Output
_____no_output_____
###Markdown
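As a rough illustration of the estimation strategy, the sketch below shows how a specification like equation (2) could be set up in Python with `statsmodels`. This is not the authors' code (the original models were estimated in *R*): the file name and the column names (`official_days`, `actual_days`, `shock`, `day`, `panchayat`, `district`) are hypothetical placeholders, the time controls are reduced to a quadratic trend, the panchayat-level controls are omitted, and the standard errors are clustered one-way by panchayat only rather than two-way by panchayat and day as in the paper.

```python
# Hedged sketch only: hypothetical column names, not the authors' R implementation.
import pandas as pd
import statsmodels.formula.api as smf

# One row per panchayat-day with (hypothetical) columns:
# official_days, actual_days, shock, day, panchayat, district
df = pd.read_csv("survey_panel.csv")

# Equation (2): official quantity on actual quantity, a shock dummy,
# a simple quadratic time trend, and district fixed effects via C(district).
model = smf.ols(
    "official_days ~ actual_days + shock + day + I(day**2) + C(district)",
    data=df,
)

# Cluster-robust standard errors by panchayat (the paper clusters
# two-way by panchayat and day).
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["panchayat"]})
print(result.summary())
```

Equations (3) to (6) follow the same pattern, with the corresponding interaction terms added to the formula.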
\*\*\*, \*\*, \* Denotes significance at the 1, 5, 10 percent level Next I present the results for theft on daily-wage projects. In Figure 4 below we can see the aggregated theft from daily-wage projects plotted over time. Niehaus and Sukhtankar find that the large downward spikes generally occur on holidays, implying that officials perceive overreporting as especially risky on those days. The superimposed regression line stems from estimating the following equation.$$ Diff.\ DW = \beta_0 + \beta_1 Day + \beta_2 Day^2 + \beta_3 Shock + \beta_4 (Shock \times Day) + \beta_5 (Shock \times Day)^2 + \epsilon \tag{7}$$From this regression one can see that there was a slight increase in wage overreporting at the aggregate level after the wage hike.__Figure 4: Daily-Wage Corruption Measures with Discontinuous Polynomial Fit__Table 3A and Table 3B below report the regression results for the disaggregated case. Columns 1-3 in Table 3A are based on regression equation $(2)$: column 2 adds district fixed effects $\delta_{d(p)}$ and column 3 adds a linear trend interacted with the shock term. Columns 4-6 are estimates of equation $(4)$. As expected, $\beta_2$ is positive; however, the effect is only significant at the 10% level in column 4. This means that for panchayats that also make use of piece-rate projects the shock didn't increase overreporting significantly. The prediction for $\beta_3$ holds as well: in all specifications the effect of the shock is negative for panchayats that employ only daily-wage projects. Again, the effects are only weakly significant (p < 0.1). (A small Python sketch of a discontinuous polynomial fit in the spirit of equation $(7)$ is shown after the table.) Table 3A: Wage Shock Effects on Daily-Wage Reports
###Code
pd.read_csv("tables/Table3A.csv", index_col = [0], keep_default_na = False)
###Output
_____no_output_____
###Markdown
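The discontinuous polynomial fit superimposed in Figures 4 and 5 can be reproduced in spirit with a single OLS regression in which the quadratic time trend is allowed to jump and change shape at the shock date, as in equation (7). The sketch below is only an illustration: the data file, the column names (`diff_dw`, `day`) and the shock day are hypothetical placeholders.

```python
# Hedged sketch: hypothetical aggregate data, not the authors' code.
import pandas as pd
import statsmodels.formula.api as smf
import matplotlib.pyplot as plt

agg = pd.read_csv("daily_aggregate_diff.csv").sort_values("day")  # one row per day (hypothetical)
shock_day = 120                                                   # hypothetical day of the wage hike
agg["shock"] = (agg["day"] >= shock_day).astype(int)

# Equation (7): quadratic trend allowed to jump and change slope/curvature at the shock.
fit = smf.ols(
    "diff_dw ~ day + I(day**2) + shock + shock:day + shock:I(day**2)",
    data=agg,
).fit()

# Raw series with the discontinuous fit superimposed, drawn separately
# on each side of the shock so the jump remains visible.
agg["fitted"] = fit.fittedvalues
plt.scatter(agg["day"], agg["diff_dw"], s=8, alpha=0.5)
for _, part in agg.groupby("shock"):
    plt.plot(part["day"], part["fitted"], color="black")
plt.xlabel("Day")
plt.ylabel("Official minus actual daily-wage days")
plt.show()
```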
\*\*\*, \*\*, \* Denotes significance at the 1, 5, 10 percent level Table 3B below presents the results from estimating equations $(5)$ and $(6)$ to assess whether future and past project composition have an influence on the size of corruption. Here the effect of the shock itself is significant at the 5% level in all specifications. Since panchayats with $Fwd.\ DW\ Frac. = 0$ are the reference class, this implies an increase in corruption for those panchayats. The effect for panchayats with $Fwd.\ DW\ Some = 1$ is insignificant, implying similar effects as for panchayats with $Fwd.\ DW\ Frac. = 0$. For panchayats that have only planned daily-wage projects in the next two months $(Fwd.\ DW\ All = 1)$ the effect is, in contrast, negative and highly significant after controlling for backward shelf composition (columns 4-6). From this Niehaus and Sukhtankar conclude that there are substitution mechanics at play for the latter panchayats. Although the effect for $Shock \times (Bwd.\ DW\ Some)$ is negative and significant, Niehaus and Sukhtankar note that when the categories of $Fwd.\ DW\ Frac.$ are replaced with the actual variable the effect turns out to be insignificant. Table 3B: Dynamic Wage Shock Effects for Daily-Wage Reports
###Code
pd.read_csv("tables/Table3B.csv", index_col = [0], keep_default_na = False)
###Output
_____no_output_____
###Markdown
\*\*\*, \*\*, \* Denotes significance at the 1, 5, 10 percent level Theft from piece-rate projects To test proposition 2 Niehaus and Sukhtankar focus next on the effect of the shock on theft from piece-rate projects. The models estimated are of a similar form as those reported in Tables 3A and 3B. To start, Niehaus and Sukhtankar perform a similar aggregate-level analysis as for daily-wage corruption.__Figure 5: Piece-Rate Corruption Measures with Discontinuous Polynomial Fit__From Figure 5 we can see that there was a strong decrease in theft from piece-rate projects which coincided with the shock. Yet the effect seems to be temporary, with a strong increase in rent extraction in June (Day 150 to 170). Niehaus and Sukhtankar speculate that this could be due to the monsoon season starting in late June in Orissa, during which NREGS activities are usually suspended. Officials would then try to extract more before the monsoon season starts. Table 4A and Table 4B are estimated using a similar regression setup as for daily-wage projects. Remember from proposition 2 that we expect the effect of the wage hike to be a decrease in reported piece-rate payments. We see that throughout Table 4A the effect of the shock is negative (as expected) and that it is significant at the 5% level (except columns 2-3). As in Table 3A, columns 4-6 allow for differing effects for panchayats that ran only piece-rate projects and those that didn't. The interaction term is negative though insignificant, suggesting that these panchayats aren't incentivized more than other panchayats to reduce piece-rate project reports. Table 4A: Wage Shock Effects on Piece-Rate Reports
###Code
pd.read_csv("tables/Table4A.csv", index_col = [0], keep_default_na = False)
###Output
_____no_output_____
###Markdown
\*\*\*, \*\*, \* Denotes significance at the 1, 5, 10 percent level As before, Table 4B includes dynamic effects by estimating equations $(5)$ and $(6)$. Again, the shock term is interacted with measures of forward and backward project shelf composition. As can be seen below, the effects have the expected signs; however, this time they are all insignificant. Table 4B: Dynamic Wage-Shock Effects on Piece-Rate Reports
###Code
pd.read_csv("tables/Table4B.csv", index_col = [0], keep_default_na = False)
###Output
_____no_output_____
###Markdown
\*\*\*, \*\*, \* Denotes significance at the 1, 5, 10 percent level To increase the power of their test of proposition 2, Niehaus and Sukhtankar use data collected from Andhra Pradesh and estimate equation $(3)$. Table 5 reports the estimates from this difference-in-differences equation. The effect we are interested in, the coefficient on $OR\ Shock \times OR$, is negative, stronger than before, and this time significant at the 5 percent level. From these results Niehaus and Sukhtankar derive support for proposition 2 and their "golden goose" hypothesis. Table 5: Effects on Piece-Rate Reports using Andhra Pradesh as a Control
###Code
pd.read_csv("tables/Table5.csv", index_col = [0], keep_default_na = False)
###Output
_____no_output_____
###Markdown
\*\*\*, \*\*, \* Denotes significance at the 1, 5, 10 percent level Robustness checks Niehaus and Sukhtankar perform various robustness checks to test the validity of their findings. Due to their similarity to the original findings these results are omitted here. Their robustness checks include changing the time window of the forward and backward project-shelf composition variables ($Fwd.\ DW\ Frac.,\ Bwd.\ DW\ Frac.$) to one- and three-month periods. Further, they estimate regression models with the difference between official and actual quantities as the dependent variable. It turns out that all these approaches yield qualitatively similar results. The results of these regressions can be found [here](https://github.com/HumanCapitalAnalysis/student-project-LeonardMK/tree/master/Replication%20and%20Causal%20Graphs/tables) and were reproduced using this [file](https://github.com/HumanCapitalAnalysis/student-project-LeonardMK/blob/master/Replication_Program.ipynb). Is Monitoring Affected? Another possible reaction to the wage change is a change in monitoring intensity. If those panchayats that reported high usage of daily-wage projects were controlled more frequently, this could explain the negative and significant effect of $Shock \times (Fwd.\ DW\ All)$. However, the effect of $Bwd.\ DW\ Frac.$ should then play a greater role. To test for these changes Niehaus and Sukhtankar use data from their village-level survey on the most recent visits of block development officers (BDOs) and the district collector. For the panchayats that were actually visited they test whether the likelihood of a visit went up after May 2007. Niehaus and Sukhtankar assume that the probability of a panchayat receiving a visit is independent across months. Let $t$ be the month in which a panchayat was last visited, and let $p(\tau|\theta, d)$ be the probability that district $d$ receives a visit at time $\tau$. They further assume that $p$ is in logit form.$$ p(t|\theta, d)\ =\ \frac{\exp\big(\delta_d + \gamma\, 1(t \geq t^{*}) + f(t)\big)}{1 + \exp\big(\delta_d + \gamma\, 1(t \geq t^{*}) + f(t)\big)} \tag{8} $$Since data on all visits isn't available, Niehaus and Sukhtankar instead estimate the probability that the panchayat's last visit was at time $t$:$$ f(t|\theta, d)\ =\ p(t|\theta, d)\ \cdot\ \prod_{\tau = t+1}^{T} (1 - p(\tau| \theta, d)) \tag{9} $$In the same way, the probability that the panchayat didn't receive a visit since the beginning of the NREGS is$$ \prod_{\tau = \underline{t}}^{T} (1 - p(\tau | \theta, d)) \tag{10} $$Where $\underline{t}$ is the start date of the NREGS. All models were estimated using maximum likelihood. The parameter of interest here is $\gamma$, which is tested for being different from zero. If the wage change had an effect on monitoring we would see significant positive $\gamma$ values. A caveat is that I was unable to replicate these results exactly, even with the original *R* code. However, the results are qualitatively nearly the same: all coefficients are either insignificant or negative, leading to the same interpretation that the probability of a visit didn't increase after the wage hike. Table 6: ML Estimates of Changing Audit Probabilities over Time
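As a complement (Table 6 itself is shown just below the sketch), the code below illustrates one way the likelihood in equations (8) to (10) could be written down and maximized in Python with `scipy`. It is not the authors' implementation: the toy data, the district coding, the month calendar, and the assumption that $f(t)$ is a simple linear trend are all placeholders.

```python
# Hedged sketch with toy data and a linear f(t); the original likelihood was maximized in R.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit  # logistic function

months = np.arange(1, 18)   # hypothetical months since the start of the NREGS
t_star = 12                 # hypothetical month of the wage shock (May 2007)

def visit_prob(params, d, tau):
    # Equation (8): logit visit probability for district d at month(s) tau.
    delta_d, gamma, beta = params[d], params[-2], params[-1]
    return expit(delta_d + gamma * (tau >= t_star) + beta * tau)

def neg_log_lik(params, last_visit, district_idx):
    ll = 0.0
    for t, d in zip(last_visit, district_idx):
        p = visit_prob(params, d, months)
        if np.isnan(t):
            # Equation (10): no visit observed since the NREGS started.
            ll += np.log(1 - p).sum()
        else:
            # Equation (9): last visit in month t, no visit in any later month.
            ll += np.log(p[months == t]).sum() + np.log(1 - p[months > t]).sum()
    return -ll

# Toy inputs: month of the most recent visit (NaN = never visited) and district index 0..D-1.
last_visit = np.array([5.0, 14.0, np.nan, 9.0])
district_idx = np.array([0, 1, 1, 0])
x0 = np.zeros(2 + 2)  # two district intercepts, then gamma and beta
res = minimize(neg_log_lik, x0, args=(last_visit, district_idx), method="BFGS")
print(res.x)          # estimated (delta_1, delta_2, gamma, beta)
```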
###Code
pd.read_csv("tables/Table8.csv", index_col = [0], keep_default_na = False)
###Output
_____no_output_____ |
crop_modeling_landsat_sentinel.ipynb | ###Markdown
Modeling Crop Yield: Landsat + Sentinel Python modules
###Code
import warnings
import time
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.colors as colors
import geopandas
import pyarrow
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score
from sklearn.metrics import mean_squared_error
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from scipy.stats import spearmanr
from scipy.linalg import LinAlgWarning
from scipy.stats import pearsonr
import math
import seaborn as sns
###Output
_____no_output_____
###Markdown
Satellite Parameters - For a description of the Landsat 8 mission, see the US Geological metadata [here.]() - For a description of the Sentinel 2 mission, see the US Geological metadata [here.]()We'll use **ls8** and **sn2** for Landsat 8 and Sentinel 2 missions, respectively, throughout this notebook to denote satellite specific parameters.
###Code
ls8_satellite = "landsat-8-c2-l2"
sn2_satellite = "sentinel-2-l2a"
###Output
_____no_output_____
###Markdown
Choose band combination. - For a description of **Landsat 8** bands, see the [US Geological Survey documentation here.](https://www.usgs.gov/faqs/what-are-band-designations-landsat-satellites) - For a description of **Sentinel 2** bands, see the [US Geological Survey documentation here.](https://www.usgs.gov/centers/eros/science/usgs-eros-archive-sentinel-2) According to our results, bands **(insert band selection here)** result in the best model performance for Landsat, and **(insert band selection here)** result in the best model performance for Sentinel for the task of predicting maize yields in Zambia.
###Code
#### Landsat bands
ls8_bands = "1-2-3-4-5-6-7"
#### Sentinel bands
# sn2_bands = "2-3-4"
# sn2_bands = "2-3-4-8"
sn2_bands = "2-3-4-5-6-7-8-11-12"
###Output
_____no_output_____
###Markdown
Choose the number of points that were featurized.Each value in the following chunk represents the number of points, in thousands, that were featurized in each respective feature file. These points represent a uniform subset of the spatial grid of Zambia. Points are spaced at uniform intervals for each selection, measured in kilometers in the longitudinal direction for each set of features. Selecting a greater quantity of points results in a denser spatial sample and increases the spatial resolution of the model. Regardless of the quantity of points selected, each point is buffered by the same distance, resulting in a 1 km^2 cell around each point.Selection remains the same for Landsat and Sentinel.
###Code
# points = 15
points = 20
###Output
_____no_output_____
###Markdown
Choose which months to use in the model.Note that months 10, 11, and 12 get pushed to the next year because the growing season (November - May) spans the calendar year. Maize is planted in November, starts to change color with maturity in May, and is harvested in June - August. According to our results, subsetting the months to **(insert month selection here)** increases model performance.Selection remains the same for Sentinel and Landsat
###Code
month_range = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]
# month_range = [ 4, 5, 6, 7, 8, 9 ]
# month_range = [ 3, 4, 5, 6, 7, 8, 9 ]
# month_range = [ 5, 6, 7, 8, 9 ]
# month_range = [ 4, 5, 6, 7, 8 ]
# month_range = [ 5, 6, 7, 8 ]
###Output
_____no_output_____
###Markdown
Choose to keep only areas with crops (`True`) or to keep all points (`False`)Selecting `True` applies a "cropland mask" to the spatial grid of Zambia. This retains only the regions of the country in which maize is grown, according to the **(insert source here)**. As a result, the spatial extent of the features that are fed into the model are highly subset for the specific task at hand: modeling maize yields. According to our results, selecting `True` **(insert increases or decreases here)** model performance.Selecting `False` results in modeling with the maximum spatial extent of the features, with more generalized features as a result.
###Code
# crop_mask = True
crop_mask = False
###Output
_____no_output_____
###Markdown
Choose a weighted average (`True`) or a simple mean (`False`) to use when collapsing features to administrative boundary level.
###Code
# weighted_avg = True
weighted_avg = False
###Output
_____no_output_____
###Markdown
Impute NA values by descending group levels (True) or `scikit learn`'s simple imputer (False)Imputing "manually" by descending group levels imputes NA values in multiple "cascading" steps, decreasing the proportion of imputed values with each step. First, the NA values are imputed by both `year` and `district`, which should yield imputed values that most closely match the feature values that would be present in the data if there were no clouds obscuring the satellite images. Next, the remaining NA values that could not be imputed by both `year` and `district` are imputed by only `district`. Lastly, the remaining NA values that could not be imputed by both `year` and `district` or by just `district` are imputed by `year` only. This option gives the user more control and transparency over how the imputation is executed.Imputing using `scikit learn`'s simple imputer executes standard imputation, the details of which can be found in the `scikit-learn` documentation [here.](https://scikit-learn.org/stable/modules/generated/sklearn.impute.SimpleImputer.html)
###Code
impute_manual = True
# impute_manual = False
###Output
_____no_output_____
###Markdown
Unchanging parametersThe parameters in the following chunk are set for the country of Zambia with 1000 features, regardless of the satellite selected. The start years for each satellite reflect the respective years that the Landsat 8 and Sentinel 2A missions began. The number of features is fixed at 1000 to serve as a constant parameter among the several other parameters varied during the model optimization process. Changing this parameter in the following code chunk will result in an error because featurizing Landsat imagery for a different number of features was outside the scope of this project.
###Code
# set home directory
data_dir = "/capstone/cropmosaiks/data"
# set plot sizes
if points == 4:
    marker_sz = 60
elif points == 15:
    marker_sz = 15
elif points == 24:
    marker_sz = 10
else:
    marker_sz = 8
country_code = "ZMB"
num_features = 1000
year_start = 2015
year_end = 2018
# set file paths
ls8_feature_file_name = (f'{ls8_satellite}_bands-{ls8_bands}_{country_code}_{points}k-points_{num_features}-features')
sn2_feature_file_name = (f'{sn2_satellite}_bands-{sn2_bands}_{country_code}_{points}k-points_{num_features}-features')
weight_file_name = (f'{country_code}_crop_weights_{points}k-points')
###Output
_____no_output_____
###Markdown
Administrative boundaries Administrative boundaries reflect the **(insert number of districts in dataset)** district boundaries within the country of Zambia. A district can be likened to a state within the larger U.S.A. We subset the spatial grid to the district level because the crop yield data is at the district level of specificity. The features are originally produced at a higher spatial resolution, then summarized to the district level in order to train the model with ground-truth crop data.
###Code
country_shp = geopandas.read_file(f'{data_dir}/boundaries/gadm36_{country_code}_2.shp')
country_shp = country_shp.rename(columns = {'NAME_2': 'district'})[['district', 'geometry']]
country_shp.district = country_shp.district.replace("MPongwe", 'Mpongwe', regex=True)
country_districts = country_shp.district.sort_values().unique().tolist()
country_shp = country_shp.set_index('district')
country_shp.shape
country_shp.plot(figsize = (12,10), linewidth = 1, edgecolor = 'black' )
# country_shp.plot()
###Output
_____no_output_____
###Markdown
Crop yieldZambian maize yield data reflects the predicted annual maize yield provided by farmers in the month of May, when the maize matures and changes colors prior to harvest, which allows the farmers to estimate what their yield will be in the following months. These predictions are in units of metric tons per hectare and provide valuable insight to the Zambian government as they plan for the quantities of food to import into the country in the future. For more metadata, see the websites for the [Central Statistics Office of Zambia (CSO)](https://www.zamstats.gov.zm/) and the [Summary statistics from CSO.](https://www.zamstats.gov.zm/agriculture-environment-statistics/)In order to standardize the names of all districts shared between the geoboundaries and the crop yield data, we correct for spelling, dashes, and apostrophes.
###Code
crop_df = pd.read_csv(data_dir+'/crops/cfs_maize_districts_zambia_2009_2018.csv')
crop_df.district = crop_df.district.replace(
{"Itezhi-tezhi": 'Itezhi-Tezhi',
"Kapiri-Mposhi": 'Kapiri Mposhi',
"Shang'ombo": 'Shangombo',
"Chienge": 'Chiengi'
}, regex=True)
crop_districts = crop_df.district.sort_values().unique().tolist()
crop_df = crop_df[['district', 'year', 'yield_mt']]
ln = len(crop_df[crop_df.year == 2016].district)
crop_df = crop_df.set_index('district')
ln
# crop_df
list(set(crop_districts) - set(country_districts))
list(set(country_districts) - set(crop_districts))
country_crop = geopandas.GeoDataFrame(crop_df.join(country_shp), crs = country_shp.crs)
###Output
_____no_output_____
###Markdown
Crop land
###Code
weights = pd.read_feather(f"{data_dir}/weights/{weight_file_name}.feather")
# weights
weights_gdf = geopandas.GeoDataFrame(
weights,
geometry = geopandas.points_from_xy(x = weights.lon, y = weights.lat),
crs='EPSG:4326'
)
weights_gdf.plot(figsize = (12,10),
cmap = 'inferno',
markersize = marker_sz,
alpha = .9,
column = 'crop_perc')
# plt.axis('off')
weights.crop_perc = weights.crop_perc.fillna(0)
# #weights.crop_perc = weights.crop_perc + 0.0001
###Output
_____no_output_____
###Markdown
FeaturesAppend annual features files together into one file: `features_raw`. Landsat 8
###Code
features_ls8_raw = geopandas.GeoDataFrame()
for yr in range(year_start, year_end + 1):
print(f"Opening: {ls8_feature_file_name}_{yr}.feather")
features_ls8 = pd.read_feather(f"{data_dir}/features/{ls8_satellite}/{ls8_feature_file_name}_{yr}.feather")
if (yr == year_start):
features_ls8 = features_ls8[features_ls8.month > 9]
else:
pass
# concatenate the feather files together, axis = 0 specifies to stack rows (rather than adding columns)
features_ls8_raw = pd.concat([features_ls8_raw, features_ls8], axis=0)
print("feature.shape", features_ls8_raw.shape)
print("Appending:", yr)
print("")
###Output
Opening: landsat-8-c2-l2_bands-1-2-3-4-5-6-7_ZMB_20k-points_1000-features_2015.feather
feature.shape (29275, 1004)
Appending: 2015
Opening: landsat-8-c2-l2_bands-1-2-3-4-5-6-7_ZMB_20k-points_1000-features_2016.feather
feature.shape (185311, 1004)
Appending: 2016
Opening: landsat-8-c2-l2_bands-1-2-3-4-5-6-7_ZMB_20k-points_1000-features_2017.feather
feature.shape (307320, 1004)
Appending: 2017
Opening: landsat-8-c2-l2_bands-1-2-3-4-5-6-7_ZMB_20k-points_1000-features_2018.feather
feature.shape (451570, 1004)
Appending: 2018
###Markdown
Sentinel 2
###Code
features_sn2_raw = geopandas.GeoDataFrame()
for yr in range(year_start, year_end + 1):
print(f"Opening: {sn2_feature_file_name}_{yr}.feather")
features_sn2 = pd.read_feather(f"{data_dir}/features/{sn2_satellite}/{sn2_feature_file_name}_{yr}.feather")
if (yr == year_start):
features_sn2 = features_sn2[features_sn2.month > 9]
else:
pass
# concatenate the feather files together, axis = 0 specifies to stack rows (rather than adding columns)
features_sn2_raw = pd.concat([features_sn2_raw, features_sn2], axis=0)
print("feature.shape", features_sn2_raw.shape)
print("Appending:", yr)
print("")
# Create copies of both feature datasets
features_ls8 = features_ls8_raw.copy()
features_sn2 = features_sn2_raw.copy()
# features_ls8_raw
# plt.figure(figsize = (15,10))
# sns.heatmap(features_ls8_raw.replace([np.inf, -np.inf], np.nan).drop(['lon', 'lat', 'year'], axis = 1), annot=False, cmap = 'viridis')
###Output
_____no_output_____
###Markdown
We want to carry the months October, November, and December over to the following year's date. These months represent the start of the growing season for the following year's maize yield
###Code
# Landsat
features_ls8['year'] = np.where(
features_ls8['month'].isin([10, 11, 12]),
features_ls8['year'] + 1,
features_ls8['year'])
features_ls8 = features_ls8[features_ls8['year'] <= year_end]
features_ls8.sort_values(['year', 'month'], inplace=True)
# Sentinel
features_sn2['year'] = np.where(
features_sn2['month'].isin([10, 11, 12]),
features_sn2['year'] + 1,
features_sn2['year'])
features_sn2 = features_sn2[features_sn2['year'] <= year_end]
features_sn2.sort_values(['year', 'month'], inplace=True)
###Output
_____no_output_____
###Markdown
Filter month range
###Code
# subset the features to only the month range selected at the top of the notebook
features_ls8 = features_ls8[features_ls8.month.isin(month_range)]
features_sn2 = features_sn2[features_sn2.month.isin(month_range)]
###Output
_____no_output_____
###Markdown
Pivot widerHere we pivot the data from long format to wide by indexing on 'lon', 'lat', 'year', 'month' and using the unstack function. We then map column names based on the month index and the associated features so month '01' is appended to each feature for that month making 0_01, 1_01 etc. This results in a Tidy data structure, with each row representing an image, and each column representing a feature for a certain month.
###Code
# Landsat
features_ls8 = features_ls8.set_index(['lon','lat', "year", 'month']).unstack()
features_ls8.columns = features_ls8.columns.map(lambda x: '{}_{}_ls8'.format(*x))
# Sentinel
features_sn2 = features_sn2.set_index(['lon','lat', "year", 'month']).unstack()
features_sn2.columns = features_sn2.columns.map(lambda x: '{}_{}_sn2'.format(*x))
# features_sn2
# plt.figure(figsize = (15,10))
# sns.heatmap(features_sn2.replace([np.inf, -np.inf], np.nan).reset_index(drop =True), annot=False, cmap = 'viridis')
# num_cells = len(features_sn2) * len(month_range) * num_features
# (features_sn2.isna().sum().sum() / num_cells)*100
# plt.figure(figsize = (15,10))
# sns.heatmap(features_ls8.replace([np.inf, -np.inf], np.nan).reset_index(drop =True), annot=False, cmap = 'viridis')
# num_cells = len(features_ls8) * len(month_range) * num_features
# (features_ls8.isna().sum().sum() / num_cells)*100
###Output
_____no_output_____
###Markdown
Join Landsat & Sentinel dataframes
###Code
features = features_ls8.join(features_sn2, how = 'left')
# features
###Output
_____no_output_____
###Markdown
Replace "inf" values with `NaN`Infinity values are the result of **(insert reason here)**. We replace them with `NaN` because **(insert reason here)**.
###Code
features.replace([np.inf, -np.inf], np.nan, inplace=True)
features = features.reset_index()
# features
# features.reset_index(drop =True).drop(['lon', 'lat', 'year'], axis = 1)
# plt.figure(figsize = (15,10))
# sns.heatmap(features.reset_index(drop =True).drop(['lon', 'lat', 'year'], axis = 1), annot=False, cmap = 'viridis')
# num_cells = len(features) * len(month_range) * 2 * num_features
# (features.isna().sum().sum() / num_cells)*100
###Output
_____no_output_____
###Markdown
Attach crop weightsAttach weight to each point (% area cropped of surrounding 1 km^2).
###Code
features = features.join(weights.set_index(['lon', 'lat']), on = ['lon', 'lat'])
features = features.drop(["geometry"], axis = 1)
# features
###Output
_____no_output_____
###Markdown
Mask cropped regionsAny 1 km^2 cell with a crop percentage > 0 will be retained. The mask will not be applied if `crop_mask` is set to `False` at the top of this notebook.
###Code
if crop_mask:
features = features[features.crop_perc > 0]
else:
pass
# features
###Output
_____no_output_____
###Markdown
Make "features" a `GeoDataFrame`The coordinate reference system is set to **EPSG 4326 - WGS 84**, the latitude/longitude coordinate system based on the Earth's center of mass, used by the Global Positioning System.
###Code
features = geopandas.GeoDataFrame(
features,
geometry = geopandas.points_from_xy(x = features.lon, y = features.lat),
crs='EPSG:4326'
)
###Output
_____no_output_____
###Markdown
Plot any single features
###Code
# mn = 7
# yr = 2017
# sat = 'sn2' # ls8
# feature = 854
# features[features.year == yr].plot(
# column = f"{feature}_{mn}_{sat}",
# figsize = (10,10),
# marker='H',
# # legend = True,
# markersize = marker_sz,
# )
###Output
_____no_output_____
###Markdown
Drop 'lat' and 'lon' columns
###Code
# Drop the redundant independent lon and lat columns now that they are in a geometry column
features = features.drop(['lon', 'lat'], axis = 1)
###Output
_____no_output_____
###Markdown
Join features to country geometry
###Code
features = features.sjoin(country_shp, how = 'left', predicate = 'within')
# features
###Output
_____no_output_____
###Markdown
Correct column names and drop geometry
###Code
features = (
features
.dropna(subset=['index_right'])
.rename(columns = {"index_right": "district",})
.reset_index(drop = True)
)
points = features.copy()
points = features[['geometry']]
features = features.drop(['geometry'], axis = 1)
# features
###Output
_____no_output_____
###Markdown
Impute missing valuesImputing "manually" by descending group levels imputes NA values in multiple "cascading" steps, decreasing the proportion of imputed values with each step. This manual imputation of values gives the user more control and transparency over how the imputation is executed. Imputation occurs in three steps. 1. The NA values are imputed by `month`, `year`, and `district`, which should yield imputed values that most closely match the feature values that would be present in the data if there were no clouds obscuring the satellite images. 2. The remaining NA values that could not be imputed by step 1 are imputed by only `district` across every `year`. 3. Lastly, the remaining NA values are crudely dropped. Imputing using `scikit learn`'s simple imputer executes standard imputation, the details of which can be found in the `scikit-learn` documentation [here.](https://scikit-learn.org/stable/modules/generated/sklearn.impute.SimpleImputer.html)The imputation approach depends on the selection made at the top of this notebook for `impute_manual`.
###Code
# compute the number of cells in the features dataframe, based on the amount of rows (images), months, and feature columns
num_cells = len(features) * len(month_range) * 2 * num_features
num_cells
class bcolors:
    BL = '\x1b[1;34m'  # BLUE
    GR = '\x1b[1;36m'  # CYAN
    YL = '\x1b[1;33m'  # YELLOW
    RD = '\x1b[1;31m'  # RED
    RESET = '\033[0m'  # RESET COLOR
%%time
if impute_manual:
ln_ft = len(features)
ln_na = len(features.dropna())
print(f'Starting total row count: {bcolors.BL}{ln_ft}{bcolors.RESET}',
f'\nPre-Impute NaN row count: {bcolors.RD}{ln_ft - ln_na}{bcolors.RESET}',
f'\nPre-Impute NaN row %: {bcolors.RD}{((ln_ft - ln_na) / ln_ft)*100:.02f}{bcolors.RESET}',
f'\nPre-Impute NaN cell %: {bcolors.RD}{(features.isna().sum().sum() / num_cells)*100:.02f}{bcolors.RESET}',
f'\n\nStep 1: Filling NaN values by month, year, and district group average')
features = (
features
.fillna(features
.groupby(['year', 'district'], as_index=False)
.transform('mean')
)
)
ln_ft = len(features)
ln_na = len(features.dropna())
print(f'Post step 1 NaN row count: {bcolors.YL}{ln_ft - ln_na}{bcolors.RESET}',
f'\nPost step 1 NaN row %: {bcolors.YL}{((ln_ft - ln_na) / ln_ft)*100:.02f}{bcolors.RESET}',
f'\nPost step 1 NaN cell %: {bcolors.YL}{(features.isna().sum().sum() / num_cells)*100:.02f}{bcolors.RESET}',
f'\n\nStep 2: Filling NaN values by month and district group average')
features = (
features
.fillna(features
.groupby(['district'], as_index=False)
.transform('mean')
)
)
ln_ft = len(features)
ln_na = len(features.dropna())
print(f'Post step 2 NaN row count: {bcolors.GR}{ln_ft - ln_na}{bcolors.RESET}',
f'\nPost step 2 NaN row %: {bcolors.GR}{((ln_ft - ln_na) / ln_ft)*100:.02f}{bcolors.RESET}',
f'\nPost step 2 NaN cell %: {bcolors.GR}{(features.isna().sum().sum() / num_cells)*100:.02f}{bcolors.RESET}',
f'\n\nStep 3: Drop remaining NaN values')
features = features.dropna(axis=0)
print(f'Ending total row count: {bcolors.BL}{len(features)}{bcolors.RESET}\n')
else:
features = features.set_index(['year', 'district'])
imputer = SimpleImputer(missing_values=np.nan, strategy='mean')
imputer.fit_transform(features)
features[:] = imputer.transform(features)
features = features.reset_index()
# plt.figure(figsize = (15,10))
# sns.heatmap(features.drop(['year', 'crop_perc', 'district'], axis = 1), annot=False, cmap = 'viridis')
###Output
_____no_output_____
###Markdown
Save copy of completed data
###Code
features_copy = features.copy()
features_copy['geometry'] = points.geometry
###Output
_____no_output_____
###Markdown
Summarise to administrative boundary levelWeighted by cropped area, or simple mean, depending on the selection at the top of this notebook for `weighted_avg`.
###Code
features.columns
var_cols = features.columns[1:-2].values.tolist()
features.columns[1:-2]
%%time
if weighted_avg:
features_summary = (
features
.groupby(['year', 'district'], as_index=False)
.apply(lambda x: pd.Series([sum(x[v] * x.crop_perc) / sum(x.crop_perc) for v in var_cols]))
)
else:
features_summary = features.groupby(['district',"year"], as_index = False).mean()
# features_summary
###Output
CPU times: user 1.89 s, sys: 988 ms, total: 2.88 s
Wall time: 2.86 s
###Markdown
Join crop data
###Code
# crop_df_x = crop_df[crop_df.year >= year_start]
crop_df_x = crop_df[crop_df.year >= year_start + 1]
crop_df_x = crop_df_x[~crop_df_x.index.isin(['Mafinga', 'Ikelenge'])]
crop_df_x.reset_index(inplace=True)
# crop_df_x
features_summary = (
features_summary
.set_index(["district", "year"])
.join(other = crop_df_x.set_index(["district", "year"]))
.reset_index())
# features_summary
###Output
_____no_output_____
###Markdown
Model
###Code
model_year = features_summary[features_summary.year.isin([
# 2013,
# 2014,
# 2015,
2016,
2017,
2018,
])]
###Output
_____no_output_____
###Markdown
Define `x's` and `y's`
###Code
if weighted_avg:
drop_cols = ['district', 'year', 'yield_mt']
else:
drop_cols = ['district', 'year', 'yield_mt', "crop_perc"]
x_all = model_year.drop(drop_cols, axis = 1)
# y_all = features_summary.yield_mt
y_all = np.log10(model_year.yield_mt.to_numpy() + 1)
# model_year[model_year.year == 2016].iloc[: , :20]
# x_all
###Output
_____no_output_____
###Markdown
Standardize FeaturesWe will use the default configuration to scale all `x` values. We will subtract the mean to center `x's` on 0.0 and divide by the standard deviation to give the standard deviation of 1.0. First, a StandardScaler instance is defined with default hyperparameters.
###Code
scalar = StandardScaler().fit_transform(x_all)
x_all = pd.DataFrame(scalar)
# x_all[3].hist()
###Output
_____no_output_____
###Markdown
Split into train and test sets
###Code
x_train, x_test, y_train, y_test = train_test_split(
x_all, y_all, test_size=0.2, random_state=0
)
print("Total N: ", len(x_all), "\n",
"Train N: ", len(x_train), "\n",
"Test N: ", len(x_test), sep = "")
###Output
Total N: 57
Train N: 45
Test N: 12
###Markdown
Train modelNow that our data has been standardized, we can use the same penalization factor for every feature.
###Code
ridge_cv_random = RidgeCV(cv=5, alphas=np.logspace(-8, 8, base=10, num=17))
ridge_cv_random.fit(x_train, y_train)
print(f"Estimated regularization parameter {ridge_cv_random.alpha_}")
###Output
Estimated regularization parameter 1000.0
###Markdown
Validation set $R^2$ performance
###Code
print(f"Validation R2 performance {ridge_cv_random.best_score_:0.2f}")
###Output
Validation R2 performance 0.80
###Markdown
Train Set
###Code
y_pred = np.maximum(ridge_cv_random.predict(x_train), 0)
fig, ax = plt.subplots()
ax.axline([0, 0], [1, 1])
# fig, ax = plt.figure()
plt.scatter(y_pred, y_train, alpha=1, s=4)
plt.xlabel("Predicted", fontsize=15, x = .3)
plt.ylabel("Ground Truth", fontsize=15)
plt.suptitle(r"$\log_{10}(1 + Crop Yield)$", fontsize=20, y=1.02)
plt.title((f"Model applied to train data n = {len(x_train)}, R$^2$ = {(r2_score(y_train, y_pred)):0.2f}"),
fontsize=12, y=1.01)
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
m, b = np.polyfit(y_pred, y_train, 1)
plt.plot(y_pred, m * y_pred + b, color="black")
plt.gca().spines.right.set_visible(False)
plt.gca().spines.top.set_visible(False)
# plt.savefig(f'images/{feature_file_name}_train_data.jpg', dpi=300)
plt.show()
plt.close()
print(f"Training R^2 = {r2_score(y_train, y_pred):0.2f}\nPearsons R = {pearsonr(y_pred, y_train)[0]:0.2f}")
ridge_cv_random.score(x_train, y_train) ## Same as r2_score above
pearsonr(y_pred, y_train)[0] ** 2 ## little r2
###Output
_____no_output_____
###Markdown
Test set
###Code
y_pred = np.maximum(ridge_cv_random.predict(x_test), 0)
plt.figure()
plt.scatter(y_pred, y_test, alpha=1, s=4)
plt.xlabel("Predicted", fontsize=15)
plt.ylabel("Ground Truth", fontsize=15)
plt.suptitle(r"$\log_{10}(1 + Crop Yield)$", fontsize=20, y=1.02)
plt.title(f"Model applied to test data n = {len(x_test)}, R$^2$ = {(r2_score(y_test, y_pred)):0.2f}",
fontsize=12, y=1)
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
m, b = np.polyfit(np.squeeze(y_pred), np.squeeze(y_test), 1)
plt.plot(y_pred, m * y_pred + b, color="black")
plt.gca().spines.right.set_visible(False)
plt.gca().spines.top.set_visible(False)
# plt.savefig(f'images/{feature_file_name}_test_data.jpg', dpi=300)
plt.show()
plt.close()
print(f"Testing set R^2 = {r2_score(y_test, y_pred):0.2f}")
print(f"Testing set pearsons R = {pearsonr(y_pred, y_test)[0]:0.2f}")
###Output
Testing set R^2 = 0.79
Testing set pearsons R = 0.90
###Markdown
Plot the fitted features
###Code
pred_features = features_copy.copy()
x_all = pred_features.drop([
'year',
'geometry',
'district',
'crop_perc'
], axis = 1)
pred_features['fit'] = np.maximum(ridge_cv_random.predict(x_all), 0)
pred_features = geopandas.GeoDataFrame(pred_features)
pred_features['fit'].mask(pred_features['crop_perc']==0, 0, inplace=True)
# pred_features.loc[pred_features["crop_perc"] == 0, "fit"] = 0 ### Does same thing but differently
# pred_features = pred_features[pred_features.crop_perc > 0].reset_index(drop = True)
# pred_features['fit'].mask(pred_features['fit'] > 2, 0, inplace=True)
plot_features = pred_features[pred_features.year == 2018]
# plot_features
plot_features.plot(figsize = (10,10),
marker='H',
legend = True,
markersize = marker_sz,
# alpha = .9,
column = 'fit')
###Output
_____no_output_____
###Markdown
Yield and Residual Plots Create data frame
###Code
x_all = features_summary.drop(drop_cols, axis = 1)
residual_df = pd.DataFrame()
residual_df["yield_mt"] = features_summary.yield_mt.to_numpy()
residual_df["log_yield"] = np.log10(features_summary.yield_mt.to_numpy() + 1)
residual_df["prediction"] = np.maximum(ridge_cv_random.predict(x_all), 0)
residual_df["residual"] = residual_df["log_yield"] - residual_df["prediction"]
residual_df["year"] = features_summary.year
residual_df["district"] = features_summary.district
residual_df = residual_df.join(country_shp, how = "left", on = "district")
#demean by location
residual_df["district_yield_mean"] = residual_df.groupby('district')['log_yield'].transform('mean')
residual_df["district_prediction_mean"] = residual_df.groupby('district')['prediction'].transform('mean')
residual_df["demean_yield"] = residual_df["log_yield"] - residual_df["district_yield_mean"]
residual_df["demean_prediction"] = residual_df["prediction"] - residual_df["district_prediction_mean"]
residual_gdf = geopandas.GeoDataFrame(residual_df)
# residual_gdf
###Output
_____no_output_____
###Markdown
Crop yield histogram
###Code
g = sns.FacetGrid(
residual_gdf,
col="year",
# col_wrap = 3,
height=4,
aspect=1
)
g.map(sns.histplot, "yield_mt", bins = 20)
g.set_axis_labels("Yield (MT)")
###Output
_____no_output_____
###Markdown
Log transform crop yield histogram
###Code
g = sns.FacetGrid(
residual_gdf,
col="year",
# col_wrap = 3,
height=4,
aspect=1
)
g.map(sns.histplot, "log_yield", bins = 20)
g.set_axis_labels(r"$\log_{10}(1 + Crop Yield)$")
###Output
_____no_output_____
###Markdown
Crop prediction histogram
###Code
g = sns.FacetGrid(
residual_gdf,
col="year",
# col_wrap = 3,
height=4,
aspect=1
)
g.map(sns.histplot, "prediction", bins = 20)
g.set_axis_labels(r"Crop yield predictions")
###Output
_____no_output_____
###Markdown
Residual histogram
###Code
g = sns.FacetGrid(
residual_gdf,
col="year",
# col_wrap = 3,
height=4,
aspect=1
)
g.map(sns.histplot, "residual", bins = 20)
g.set_axis_labels(r"Residuals")
residual_gdf.residual.min()
residual_gdf.residual.max()
###Output
_____no_output_____
###Markdown
Log crop yield vs residuals
###Code
g = sns.FacetGrid(
residual_gdf,
col="year",
# col_wrap = 3,
height=4,
aspect=1
)
g.map(sns.scatterplot, "log_yield", "residual")
g.set_axis_labels(r"$\log_{10}(1 + Crop Yield)$")
###Output
_____no_output_____
###Markdown
District residuals
###Code
fig, (ax1,ax2,ax3) = plt.subplots(nrows=1, ncols=3, figsize=(20, 5))
ax1 = (residual_gdf[residual_gdf.year == 2016]
.plot(ax = ax1, column = "residual", legend = True, norm=colors.Normalize(vmin= -0.4, vmax=0.4), cmap = "BrBG")
.set_title("2016 Residuals"))
ax2 = (residual_gdf[residual_gdf.year == 2017]
.plot(ax = ax2, column = "residual", legend = True, norm=colors.Normalize(vmin= -0.4, vmax=0.4), cmap = "BrBG")
.set_title("2017 Residuals"))
ax3 = (residual_gdf[residual_gdf.year == 2018]
.plot(ax = ax3, column = "residual", legend = True, norm=colors.Normalize(vmin= -0.4, vmax=0.4), cmap = "BrBG")
.set_title("2018 Residuals"))
caption = "A positive value is an underestimated prediction (the prediction is lower than the actual yield), a negative value is an over estimated prediction"
plt.figtext(0.5, 0.01, caption, wrap=True, horizontalalignment='center', fontsize=12)
###Output
_____no_output_____
###Markdown
Difference from the Mean
###Code
g = sns.FacetGrid(
residual_gdf,
col="year",
# col_wrap = 3,
height=4,
aspect=1
)
g.map(sns.scatterplot, "demean_yield", "demean_prediction")
g.set_axis_labels('Difference from Yield Mean', 'Difference from Prediction Mean')
fig, ax = plt.subplots()
ax.axline([-.15, -.15], [.2, .2])
plt.scatter(residual_gdf.demean_yield, residual_gdf.demean_prediction)
plt.title("Demeaned truth and predictions by district")
plt.xlabel('Difference from Yield Mean')
plt.ylabel('Difference from Predictions Mean')
for yr in range(year_start+1, year_end+1):
r_squared = r2_score(residual_gdf[residual_gdf.year == yr]["demean_yield"], residual_gdf[residual_gdf.year == yr]["demean_prediction"])
pearson_r = pearsonr(residual_gdf[residual_gdf.year == yr]["demean_yield"], residual_gdf[residual_gdf.year == yr]["demean_prediction"])
print(yr, f" R^2: {r_squared:.2f}\n",
f"Pearson's r: {pearson_r[0]:.2f}\n",
sep = "")
r_squared = r2_score(residual_gdf["demean_yield"], residual_gdf["demean_prediction"])
pearson_r = pearsonr(residual_gdf["demean_yield"], residual_gdf["demean_prediction"])
print(f"All R^2: {r_squared:.2f}\n",
f"Pearson's r: {pearson_r[0]:.2f}", sep = "")
r2 = round(pearson_r[0] ** 2, 2)
r2
###Output
_____no_output_____ |
corpus_description/basic_corpus_stats.ipynb | ###Markdown
Basic Corpus StatsThis script provides some basic corpus stats for the Thomas T. Eckert collection telegrams. In addition to calculating these stats for the whole corpus, the script calculates the stats for each telegram ledger category. ReferencesBengfort, B., Bilbro, R., & Ojeda, T. (2018). Applied text analysis with Python: Enabling language-aware data products with machine learning (First edition). O’Reilly Media, Inc.Bird, S., Klein, E., & Loper, E. (2009). Natural language processing with Python (First edition). O’Reilly. 1) Load CorpusThis script uses NLTK's [CategorizedPlaintextCorpusReader](https://www.nltk.org/api/nltk.corpus.reader.html?highlight=categorizedplaintextcorpusreader#nltk.corpus.reader.CategorizedPlaintextCorpusReader) to create a corpus of telegrams. Based on folder structure, the script can consume:- all the telegram ledgers- telegram ledgers that contain only telegrams in the clear (i.e., 'clear_telegrams')- telegram ledgers that contain only telegrams in code (i.e., 'coded_telegrams')- telegram ledgers that contain both telegrams in the clear and telegrams in code (i.e., 'clear_and_coded_telegrams')
###Code
import os
import pandas as pd
from nltk.corpus import stopwords
from nltk.corpus.reader import CategorizedPlaintextCorpusReader

english_stop_words = stopwords.words('english')  # English stopword list used for filtering below

doc_pattern = r'.*/preprocessed_.*.txt'
category_pattern = r'.*?/(\w+_telegrams)/'
path_to_corpus = os.getenv('ECKERT_PAPERS_CORPUS_PATH')
telegram_corpus = CategorizedPlaintextCorpusReader(
path_to_corpus,
doc_pattern,
cat_pattern=category_pattern
)
###Output
_____no_output_____
###Markdown
To see all of the category labels, use the corpus class method `categories()`.
###Code
categories = telegram_corpus.categories()
categories
###Output
_____no_output_____
###Markdown
2) Generate General StatsAfter the corpus is set up, we can more easily describe the corpus in terms of general stats. For the whole corpus as well as each corpus category, this script calculates the number of files, words, non-stopwords, unique non-stopwords, lexical diversity, and average word use.
###Code
number_of_files = []
number_of_words = []
number_of_words_wo_stop_words = []
number_of_unique_stopwords = []
frequency_distribution_of_corpus_no_stop_words = []
lexical_diversity = []
avg_use_of_word = []
# Add data for whole corpus
number_of_files.append(len(telegram_corpus.fileids()))
# create a list of all the words in a corpus
whole_corpus_words = telegram_corpus.words()
number_of_words.append(len(whole_corpus_words))
# filter the corpus words list for english stopwords
corpus_no_stopwords = [word for word in whole_corpus_words if word not in english_stop_words]
number_of_words_no_stopwords = len(corpus_no_stopwords)
number_of_words_wo_stop_words.append(number_of_words_no_stopwords)
# set of unique words in the corpus minus stopwords
set_corpus_no_stopwords = set(corpus_no_stopwords)
number_of_unique_stopwords.append(len(set_corpus_no_stopwords))
lexical_diversity.append(len(set_corpus_no_stopwords)/len(corpus_no_stopwords))
avg_use_of_word.append(len(corpus_no_stopwords)/len(set_corpus_no_stopwords))
for category in categories:
# number of files in the current category
number_of_files.append(len(telegram_corpus.fileids(categories=category)))
# number of words in the corpus
corpus_words = telegram_corpus.words(categories=category)
number_of_words.append(len(corpus_words))
# for effeciency comparisons, remove stopwords
corpus_no_stopwords = [word for word in corpus_words if word not in english_stop_words]
number_of_words_no_stopwords = len(corpus_no_stopwords)
number_of_words_wo_stop_words.append(number_of_words_no_stopwords)
# how many of the unique words are there, excluding stopwords?
set_corpus_no_stopwords = set(corpus_no_stopwords)
number_of_unique_stopwords.append(len(set_corpus_no_stopwords))
lexical_diversity.append(len(set_corpus_no_stopwords)/len(corpus_no_stopwords))
avg_use_of_word.append(len(corpus_no_stopwords)/len(set_corpus_no_stopwords))
indices = ['whole_corpus', 'clear_and_coded_telegrams', 'clear_telegrams', 'coded_telegrams']
category_data = {
'Number of Files': number_of_files,
'Number of Words': number_of_words,
'Number of Non-Stopwords': number_of_words_wo_stop_words,
'Number of Unique Non-Stopwords': number_of_unique_stopwords,
'Lexical Diversity': lexical_diversity,
'Average Word Use': avg_use_of_word
}
caption_text = "Table 1. General Statistics for the Telegrams in The Thomas T. Eckert Papers"
category_comparison_data_frame = pd.DataFrame(data=category_data, index=indices).style.set_caption(caption_text).set_table_styles([
{'selector': 'caption', 'props': [('font-size','18px'),('color','black'), ('font-weight','bold')]},
{'td': 'caption', 'props': [('margin','1em')]}
])
category_comparison_data_frame
###Output
_____no_output_____ |
Hackerearth-Predict-the-genetic-disorders/2_genetic_testing_modelling.ipynb | ###Markdown
###Code
import pandas as pd
import numpy as np
import io
import gc
import time
from pprint import pprint
from datetime import date
# settings
import warnings
warnings.filterwarnings("ignore")
gc.enable()
# !pip3 install xgboost > /dev/null
!pip3 install tune-sklearn ray[tune] > /dev/null
# Global Variables
random_state = 50
# connect to google drive
from google.colab import drive
drive.mount('/content/drive')
gDrivePath = '/content/drive/MyDrive/Datasets/Hackerearth_genetic_testing/dataset/'
df_train = pd.read_csv(gDrivePath+'train_preprocessed.csv')
df_test = pd.read_csv(gDrivePath+'test_preprocessed.csv')
df_train.shape
df_test.shape
df_train.sample(3)
###Output
_____no_output_____
###Markdown
Checking if the dataset is balanced/imbalanced - Genetic Disorder
###Code
target_count = df_train['Genetic Disorder'].value_counts()
target_count
###Output
_____no_output_____
###Markdown
Checking if the dataset is balanced/imbalanced - Disorder Subclass
###Code
target_count = df_train['Disorder Subclass'].value_counts()
target_count
###Output
_____no_output_____
###Markdown
Splitting Data into train-cv
###Code
genetic_disorder_labels = df_train['Genetic Disorder'].values
disorder_subclass_labels = df_train['Disorder Subclass'].values
df_train.drop(['Genetic Disorder','Disorder Subclass'], axis=1, inplace=True)
df_test.drop(['Genetic Disorder','Disorder Subclass'], axis=1, inplace=True, errors='ignore')
# classification split for genetic_disorder_labels
from sklearn.model_selection import train_test_split
X_train_genetic_disorder, X_cv_genetic_disorder, y_train_genetic_disorder, y_cv_genetic_disorder = train_test_split(df_train, genetic_disorder_labels, test_size=0.1, random_state=random_state)
# classification split for disorder_subclass_labels
X_train_disorder_subclass, X_cv_disorder_subclass, y_train_disorder_subclass, y_cv_disorder_subclass = train_test_split(df_train, disorder_subclass_labels, test_size=0.1, random_state=random_state)
###Output
_____no_output_____
###Markdown
Over Sampling using SMOTE for Genetic Disorder
###Code
# https://machinelearningmastery.com/smote-oversampling-for-imbalanced-classification/
from imblearn.over_sampling import SMOTE
smote_overSampling = SMOTE()
X_train_genetic_disorder,y_train_genetic_disorder = smote_overSampling.fit_resample(X_train_genetic_disorder,y_train_genetic_disorder)
unique, counts = np.unique(y_train_genetic_disorder, return_counts=True)
dict(zip(unique, counts))
###Output
_____no_output_____
###Markdown
Over Sampling using SMOTE for Disorder Subclass
###Code
# https://machinelearningmastery.com/smote-oversampling-for-imbalanced-classification/
from imblearn.over_sampling import SMOTE
smote_overSampling = SMOTE()
X_train_disorder_subclass,y_train_disorder_subclass = smote_overSampling.fit_resample(X_train_disorder_subclass,y_train_disorder_subclass)
unique, counts = np.unique(y_train_disorder_subclass, return_counts=True)
dict(zip(unique, counts))
###Output
_____no_output_____
###Markdown
Scaling data : genetic_disorder
###Code
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X_train_genetic_disorder_scaled = scaler.fit_transform(X_train_genetic_disorder)
X_cv_genetic_disorder_scaled = scaler.transform(X_cv_genetic_disorder)
X_test_scaled = scaler.transform(df_test)
# X_train_genetic_disorder_scaled
###Output
_____no_output_____
###Markdown
Scaling data : disorder_subclass
###Code
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X_train_disorder_subclass_scaled = scaler.fit_transform(X_train_disorder_subclass)
X_cv_disorder_subclass_scaled = scaler.transform(X_cv_disorder_subclass)
X_test_scaled = scaler.transform(df_test)
# X_train_disorder_subclass_scaled
###Output
_____no_output_____
###Markdown
Modelling & Cross-Validation for genetic_disorder
###Code
# %%time
# # Train multiple models : https://www.kaggle.com/tflare/testing-multiple-models-with-scikit-learn-0-79425
# from sklearn.linear_model import LogisticRegression
# from sklearn.svm import SVC, LinearSVC
# from sklearn.neighbors import KNeighborsClassifier
# from sklearn.tree import DecisionTreeClassifier
# from sklearn.ensemble import RandomForestClassifier
# from sklearn.ensemble import AdaBoostClassifier
# from sklearn.ensemble import BaggingClassifier
# from sklearn.ensemble import ExtraTreesClassifier
# from sklearn.ensemble import GradientBoostingClassifier
# from sklearn.linear_model import LogisticRegressionCV
# from xgboost import XGBClassifier
# from sklearn.model_selection import cross_val_score
# models = []
# # LogisticRegression = LogisticRegression(n_jobs=-1)
# # LinearSVC = LinearSVC()
# # KNeighbors = KNeighborsClassifier(n_jobs=-1)
# # DecisionTree = DecisionTreeClassifier()
# # AdaBoost = AdaBoostClassifier()
# # Bagging = BaggingClassifier()
# # GradientBoosting = GradientBoostingClassifier()
# # LogisticRegressionCV = LogisticRegressionCV(n_jobs=-1)
# # XGBClassifier = XGBClassifier(nthread=-1)
# RandomForest = RandomForestClassifier()
# ExtraTrees = ExtraTreesClassifier()
# # models.append(("LogisticRegression",LogisticRegression))
# # models.append(("LinearSVC", LinearSVC))
# # models.append(("KNeighbors", KNeighbors))
# # models.append(("DecisionTree", DecisionTree))
# # models.append(("AdaBoost", AdaBoost))
# # models.append(("Bagging", Bagging))
# # models.append(("GradientBoosting", GradientBoosting))
# # models.append(("LogisticRegressionCV", LogisticRegressionCV))
# # models.append(("XGBClassifier", XGBClassifier))
# models.append(("RandomForest", RandomForest))
# models.append(("ExtraTrees", ExtraTrees))
# # metric_names = ['f1', 'average_precision', 'accuracy', 'precision', 'recall']
# metric_names = ['f1_weighted']
# results = []
# names = []
# nested_dict = {}
# for name,model in models:
# nested_dict[name] = {}
# for metric in metric_names:
# print("\nRunning : {}, with metric : {}".format(name, metric))
# score = cross_val_score(model, X_train_genetic_disorder_scaled, y_train_genetic_disorder, n_jobs=-1, scoring=metric, cv=5)
# nested_dict[name][metric] = score.mean()
# import json
# print(json.dumps(nested_dict, sort_keys=True, indent=4))
###Output
_____no_output_____
###Markdown
Modelling & Cross-Validation for disorder_subclass
###Code
# %%time
# # Train multiple models : https://www.kaggle.com/tflare/testing-multiple-models-with-scikit-learn-0-79425
# from sklearn.linear_model import LogisticRegression
# from sklearn.svm import SVC, LinearSVC
# from sklearn.neighbors import KNeighborsClassifier
# from sklearn.tree import DecisionTreeClassifier
# from sklearn.ensemble import RandomForestClassifier
# from sklearn.ensemble import AdaBoostClassifier
# from sklearn.ensemble import BaggingClassifier
# from sklearn.ensemble import ExtraTreesClassifier
# from sklearn.ensemble import GradientBoostingClassifier
# from sklearn.linear_model import LogisticRegressionCV
# from xgboost import XGBClassifier
# from sklearn.model_selection import cross_val_score
# models = []
# # LogisticRegression = LogisticRegression(n_jobs=-1)
# # LinearSVC = LinearSVC()
# # KNeighbors = KNeighborsClassifier(n_jobs=-1)
# # DecisionTree = DecisionTreeClassifier()
# # AdaBoost = AdaBoostClassifier()
# # Bagging = BaggingClassifier()
# # GradientBoosting = GradientBoostingClassifier()
# # LogisticRegressionCV = LogisticRegressionCV(n_jobs=-1)
# # XGBClassifier = XGBClassifier(nthread=-1)
# RandomForest = RandomForestClassifier()
# ExtraTrees = ExtraTreesClassifier()
# # models.append(("LogisticRegression",LogisticRegression))
# # models.append(("LinearSVC", LinearSVC))
# # models.append(("KNeighbors", KNeighbors))
# # models.append(("DecisionTree", DecisionTree))
# # models.append(("AdaBoost", AdaBoost))
# # models.append(("Bagging", Bagging))
# # models.append(("GradientBoosting", GradientBoosting))
# # models.append(("LogisticRegressionCV", LogisticRegressionCV))
# # models.append(("XGBClassifier", XGBClassifier))
# models.append(("RandomForest", RandomForest))
# models.append(("ExtraTrees", ExtraTrees))
# # metric_names = ['f1', 'average_precision', 'accuracy', 'precision', 'recall']
# metric_names = ['f1_weighted']
# results = []
# names = []
# nested_dict = {}
# for name,model in models:
# nested_dict[name] = {}
# for metric in metric_names:
# print("\nRunning : {}, with metric : {}".format(name, metric))
# score = cross_val_score(model, X_train_disorder_subclass_scaled, y_train_disorder_subclass, n_jobs=-1, scoring=metric, cv=5)
# nested_dict[name][metric] = score.mean()
# import json
# print(json.dumps(nested_dict, sort_keys=True, indent=4))
###Output
_____no_output_____
###Markdown
Hyperparameter tuning Tuning for : genetic_disorder
###Code
# from sklearn.model_selection import GridSearchCV
from tune_sklearn import TuneGridSearchCV
from sklearn.ensemble import ExtraTreesClassifier
# model_classifier = ExtraTreesClassifier(max_depth=15, n_estimators=400)
model_classifier = ExtraTreesClassifier(criterion='gini', bootstrap=False, max_features='auto', warm_start=False)
# # Best Params: {'criterion': 'gini', 'max_depth': 15, 'bootstrap': False, 'max_features': 'auto', 'warm_start': False, 'n_estimators': 400}
# Parameters to tune:
parameters = {
'n_estimators': np.arange(100, 3000, 100, dtype=int),
'max_depth': np.arange(5, 16, 1, dtype=int),
# 'criterion': ['gini', 'entropy'],
# 'bootstrap': [True, False],
# 'max_features': ['auto', 'sqrt', 'log2'],
# 'warm_start': [True, False],
}
tune_search_genetic_disorder = TuneGridSearchCV(
model_classifier,
parameters,
scoring='f1_weighted',
verbose=1,
n_jobs=-1,
)
tune_search_genetic_disorder.fit(X_train_genetic_disorder_scaled, y_train_genetic_disorder)
pred = tune_search_genetic_disorder.predict(X_cv_genetic_disorder_scaled)
accuracy = np.count_nonzero(np.array(pred) == np.array(y_cv_genetic_disorder)) / len(pred)
print("Tune Accuracy:", accuracy)
print("Best Params:", tune_search_genetic_disorder.best_params_)
###Output
_____no_output_____
###Markdown
Tuning for : disorder_subclass
###Code
# from sklearn.model_selection import GridSearchCV
from tune_sklearn import TuneGridSearchCV
from sklearn.ensemble import ExtraTreesClassifier
# model_classifier = ExtraTreesClassifier(max_depth=15, n_estimators=400)
model_classifier = ExtraTreesClassifier(criterion='entropy', bootstrap=False, max_features='log2', warm_start=True)
## Best Params: {'criterion': 'entropy', 'max_depth': 15, 'bootstrap': False, 'max_features': 'log2', 'warm_start': True}
# Parameters to tune:
parameters = {
'n_estimators': np.arange(100, 3000, 100, dtype=int),
'max_depth': np.arange(5, 16, 1, dtype=int),
# 'criterion': ['gini', 'entropy'],
# 'bootstrap': [True, False],
# 'max_features': ['auto', 'sqrt', 'log2'],
# 'warm_start': [True, False],
}
tune_search_disorder_subclass = TuneGridSearchCV(
model_classifier,
parameters,
scoring='f1_weighted',
verbose=1,
n_jobs=-1,
)
tune_search_disorder_subclass.fit(X_train_disorder_subclass_scaled, y_train_disorder_subclass)
# Check accuracy
pred = tune_search_disorder_subclass.predict(X_cv_disorder_subclass_scaled)
accuracy = np.count_nonzero(np.array(pred) == np.array(y_cv_disorder_subclass)) / len(pred)
print("Tune Accuracy:", accuracy)
print("Best Params:", tune_search_disorder_subclass.best_params_)
import joblib
joblib.dump(tune_search_genetic_disorder, 'genetic_disorder_model.pkl')
joblib.dump(tune_search_disorder_subclass, 'disorder_subclass_model.pkl')
trained_model_genetic_disorder = joblib.load('genetic_disorder_model.pkl')
trained_model_disorder_subclass = joblib.load('disorder_subclass_model.pkl')
###Output
_____no_output_____
###Markdown
Predicting on CV data
###Code
#
###Output
_____no_output_____
###Markdown
Predicting on test Data
###Code
predictions_genetic_disorder_test = trained_model_genetic_disorder.predict(X_test_scaled)
predictions_disorder_subclass_test = trained_model_disorder_subclass.predict(X_test_scaled)
len(predictions_genetic_disorder_test)
len(predictions_disorder_subclass_test)
read = pd.read_csv(gDrivePath + 'test.csv')
read.shape
submission = pd.DataFrame({
"Patient Id": read["Patient Id"],
"Genetic Disorder": predictions_genetic_disorder_test,
"Disorder Subclass": predictions_disorder_subclass_test,
})
submission.head()
submission.to_csv('submission.csv', index=False)
###Output
_____no_output_____ |
prediction/transfer learning fine-tuning/code comment generation/small_model.ipynb | ###Markdown
**Generate the comment for java code using the codeTrans transfer learning fine-tuning model** You can make a free prediction online through this Link. (When using the online prediction, you need to parse and tokenize the code first.) **1. Load necessary libraries including huggingface transformers**
###Code
!pip install -q transformers sentencepiece
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
###Output
_____no_output_____
###Markdown
**2. Load the summarization pipeline and move it onto the GPU if available**
###Code
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_code_comment_generation_java_transfer_learning_finetune"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_code_comment_generation_java_transfer_learning_finetune", skip_special_tokens=True),
device=0
)
###Output
/usr/local/lib/python3.6/dist-packages/transformers/models/auto/modeling_auto.py:852: FutureWarning: The class `AutoModelWithLMHead` is deprecated and will be removed in a future version. Please use `AutoModelForCausalLM` for causal language models, `AutoModelForMaskedLM` for masked language models and `AutoModelForSeq2SeqLM` for encoder-decoder models.
FutureWarning,
###Markdown
**3. Give the code for summarization, then parse and tokenize it**
###Code
code = "protected String renderUri(URI uri){\n return uri.toASCIIString();\n}\n" #@param {type:"raw"}
!pip install javalang
import javalang
def tokenize_java_code(code):
tokenList = []
tokens = list(javalang.tokenizer.tokenize(code))
for token in tokens:
tokenList.append(token.value)
return ' '.join(tokenList)
tokenized_code = tokenize_java_code(code)
print("Output after tokenization: " + tokenized_code)
###Output
Output after tokenization: protected String renderUri ( URI uri ) { return uri . toASCIIString ( ) ; }
###Markdown
**4. Make Prediction**
###Code
pipeline([tokenized_code])
###Output
Your max_length is set to 512, but you input_length is only 24. You might consider decreasing max_length manually, e.g. summarizer('...', max_length=50)
|
Fahrenheit_to_Celsius.ipynb | ###Markdown
Training Our Model The problem we will solve is to convert from Fahrenheit to Celsius, where the conversion formula is:$$ c = (f - 32)/ 1.8 $$ Import dependencies
###Code
import tensorflow as tf
import numpy as np
###Output
_____no_output_____
###Markdown
Set up training data Supervised Machine Learning is all about figuring out an algorithm given a set of inputs and outputs. Since the task in this Codelab is to create a model that can give the temperature in Celsius when given the degrees in Fahrenheit, we create two lists `celsius_q` and `fahrenheit_a` that we can use to train our model.
###Code
fahrenheit_a = np.array([-40, 14, 32, 46, 59, 72, 100], dtype=float)
celsius_q = np.array([-40, -10, 0, 8, 15, 22, 38], dtype=float)
for i,f in enumerate(fahrenheit_a):
print("{} degrees Fahrenheit = {} degrees Celsius".format(f, celsius_q[i]))
###Output
_____no_output_____
###Markdown
Some Machine Learning terminology - **Feature** — The input(s) to our model. In this case, a single value — the degrees in Fahrenheit. - **Labels** — The output our model predicts. In this case, a single value — the degrees in Celsius. - **Example** — A pair of inputs/outputs used during training. In our case a pair of values from `fahrenheit_a` and `celsius_q` at a specific index, such as `(72,22)`. Create the model
###Code
model = tf.keras.Sequential([
tf.keras.layers.Dense(units=1, input_shape=[1])
])
###Output
_____no_output_____
###Markdown
Compile the model, with loss and optimizer functionsBefore training, the model has to be compiled. When compiled for training, the model is given:- **Loss function** — A way of measuring how far off predictions are from the desired outcome. (The measured difference is called the "loss".)- **Optimizer function** — A way of adjusting internal values in order to reduce the loss.
###Code
model.compile(loss='mean_squared_error',
optimizer=tf.keras.optimizers.Adam(0.1))
###Output
_____no_output_____
###Markdown
Train the modelTrain the model by calling the `fit` method. During training, the model takes in Fahrenheit values, performs a calculation using the current internal variables (called "weights") and outputs values which are meant to be the Celsius equivalent. Since the weights are initially set randomly, the output will not be close to the correct value. The difference between the actual output and the desired output is calculated using the loss function, and the optimizer function directs how the weights should be adjusted. This cycle of calculate, compare, adjust is controlled by the `fit` method. The first argument is the inputs, the second argument is the desired outputs. The `epochs` argument specifies how many times this cycle should be run, and the `verbose` argument controls how much output the method produces.
###Code
history = model.fit(fahrenheit_a,celsius_q, epochs=500, verbose=True)
print("Finished training the model")
###Output
_____no_output_____
###Markdown
Display training statisticsThe `fit` method returns a history object. We can use this object to plot how the loss of our model goes down after each training epoch. A high loss means that the Celsius degrees the model predicts is far from the corresponding value in `celsius_q`. We'll use [Matplotlib](https://matplotlib.org/) to visualize this. As you can see, our model improves very quickly at first, and then has a steady, slow improvement until it is very near "perfect" towards the end.
###Code
import matplotlib.pyplot as plt
plt.xlabel('Epoch Number')
plt.ylabel("Loss Magnitude")
plt.plot(history.history['loss'])
###Output
_____no_output_____
###Markdown
Use the model to predict values Now you have a model that has been trained to learn the relationship between `celsius_q` and `fahrenheit_a`. You can use the predict method to have it calculate the Celsius degrees for a previously unknown Fahrenheit value. So, for example, if the Fahrenheit value is 212, what do you think the Celsius result will be? Take a guess before you run this code. (For reference, the exact formula is evaluated in the short snippet below.)
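A minimal plain-Python check of the exact conversion formula (not part of the trained model), useful for comparing against the network's prediction in the next cell:

```python
def fahrenheit_to_celsius(f):
    # Exact conversion: c = (f - 32) / 1.8
    return (f - 32) / 1.8

print(fahrenheit_to_celsius(212.0))  # 100.0
```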
###Code
print(model.predict([212.0]))
###Output
_____no_output_____ |
python/d2l-en/mxnet/chapter_optimization/adadelta.ipynb | ###Markdown
Adadelta:label:`sec_adadelta`Adadelta is yet another variant of AdaGrad (:numref:`sec_adagrad`). The main difference lies in the fact that it decreases the amount by which the learning rate is adaptive to coordinates. Moreover, it is traditionally referred to as not having a learning rate, since it uses the amount of change itself as calibration for future change. The algorithm was proposed in :cite:`Zeiler.2012`. It is fairly straightforward, given the discussion of previous algorithms so far. The AlgorithmIn a nutshell, Adadelta uses two state variables, $\mathbf{s}_t$ to store a leaky average of the second moment of the gradient and $\Delta\mathbf{x}_t$ to store a leaky average of the second moment of the change of parameters in the model itself. Note that we use the original notation and naming of the authors for compatibility with other publications and implementations (there is no other real reason why one should use different Greek variables to indicate a parameter serving the same purpose in momentum, Adagrad, RMSProp, and Adadelta). Here are the technical details of Adadelta. Given the parameter du jour is $\rho$, we obtain the following leaky updates similarly to :numref:`sec_rmsprop`:$$\begin{aligned} \mathbf{s}_t & = \rho \mathbf{s}_{t-1} + (1 - \rho) \mathbf{g}_t^2.\end{aligned}$$The difference to :numref:`sec_rmsprop` is that we perform updates with the rescaled gradient $\mathbf{g}_t'$, i.e.,$$\begin{aligned} \mathbf{x}_t & = \mathbf{x}_{t-1} - \mathbf{g}_t'. \\\end{aligned}$$So what is the rescaled gradient $\mathbf{g}_t'$? We can calculate it as follows:$$\begin{aligned} \mathbf{g}_t' & = \frac{\sqrt{\Delta\mathbf{x}_{t-1} + \epsilon}}{\sqrt{{\mathbf{s}_t + \epsilon}}} \odot \mathbf{g}_t, \\\end{aligned}$$where $\Delta \mathbf{x}_{t-1}$ is the leaky average of the squared rescaled gradients $\mathbf{g}_t'$. We initialize $\Delta \mathbf{x}_{0}$ to be $0$ and update it at each step with $\mathbf{g}_t'$, i.e.,$$\begin{aligned} \Delta \mathbf{x}_t & = \rho \Delta\mathbf{x}_{t-1} + (1 - \rho) {\mathbf{g}_t'}^2,\end{aligned}$$and $\epsilon$ (a small value such as $10^{-5}$) is added to maintain numerical stability. ImplementationAdadelta needs to maintain two state variables for each variable, $\mathbf{s}_t$ and $\Delta\mathbf{x}_t$. This yields the following implementation.
###Code
%matplotlib inline
from mxnet import np, npx
from d2l import mxnet as d2l
npx.set_np()
def init_adadelta_states(feature_dim):
s_w, s_b = np.zeros((feature_dim, 1)), np.zeros(1)
delta_w, delta_b = np.zeros((feature_dim, 1)), np.zeros(1)
return ((s_w, delta_w), (s_b, delta_b))
def adadelta(params, states, hyperparams):
rho, eps = hyperparams['rho'], 1e-5
for p, (s, delta) in zip(params, states):
# In-place updates via [:]
s[:] = rho * s + (1 - rho) * np.square(p.grad)
g = (np.sqrt(delta + eps) / np.sqrt(s + eps)) * p.grad
p[:] -= g
delta[:] = rho * delta + (1 - rho) * g * g
###Output
_____no_output_____
###Markdown
Choosing $\rho = 0.9$ amounts to a half-life time of 10 for each parameter update. This tends to work quite well. We get the following behavior.
###Code
data_iter, feature_dim = d2l.get_data_ch11(batch_size=10)
d2l.train_ch11(adadelta, init_adadelta_states(feature_dim),
{'rho': 0.9}, data_iter, feature_dim);
###Output
loss: 0.253, 0.120 sec/epoch
###Markdown
For a concise implementation we simply use the `adadelta` algorithm from the `Trainer` class. This yields the following one-liner for a much more compact invocation.
###Code
d2l.train_concise_ch11('adadelta', {'rho': 0.9}, data_iter)
###Output
loss: 0.243, 0.137 sec/epoch
|
docs/source/auto_examples/demo_OT_2D_sampleslarge.ipynb | ###Markdown
Demo for 2D Optimal transport between empirical distributions@author: rflamary
###Code
import numpy as np
import matplotlib.pylab as pl
import ot
#%% parameters and data generation
n=5000 # nb samples
mu_s=np.array([0,0])
cov_s=np.array([[1,0],[0,1]])
mu_t=np.array([4,4])
cov_t=np.array([[1,-.8],[-.8,1]])
xs=ot.datasets.get_2D_samples_gauss(n,mu_s,cov_s)
xt=ot.datasets.get_2D_samples_gauss(n,mu_t,cov_t)
a,b = ot.unif(n),ot.unif(n) # uniform distribution on samples
# loss matrix
M=ot.dist(xs,xt)
M/=M.max()
#%% plot samples
#pl.figure(1)
#pl.plot(xs[:,0],xs[:,1],'+b',label='Source samples')
#pl.plot(xt[:,0],xt[:,1],'xr',label='Target samples')
#pl.legend(loc=0)
#pl.title('Source and target distributions')
#
#pl.figure(2)
#pl.imshow(M,interpolation='nearest')
#pl.title('Cost matrix M')
#
#%% EMD
G0=ot.emd(a,b,M)
#pl.figure(3)
#pl.imshow(G0,interpolation='nearest')
#pl.title('OT matrix G0')
#
#pl.figure(4)
#ot.plot.plot2D_samples_mat(xs,xt,G0,c=[.5,.5,1])
#pl.plot(xs[:,0],xs[:,1],'+b',label='Source samples')
#pl.plot(xt[:,0],xt[:,1],'xr',label='Target samples')
#pl.legend(loc=0)
#pl.title('OT matrix with samples')
#%% sinkhorn
# reg term
lambd=5e-3
Gs=ot.sinkhorn(a,b,M,lambd)
#pl.figure(5)
#pl.imshow(Gs,interpolation='nearest')
#pl.title('OT matrix sinkhorn')
#
#pl.figure(6)
#ot.plot.plot2D_samples_mat(xs,xt,Gs,color=[.5,.5,1])
#pl.plot(xs[:,0],xs[:,1],'+b',label='Source samples')
#pl.plot(xt[:,0],xt[:,1],'xr',label='Target samples')
#pl.legend(loc=0)
#pl.title('OT matrix Sinkhorn with samples')
#
###Output
_____no_output_____ |
Tests/Baseline_approach_test.ipynb | ###Markdown
Models and loss
###Code
def get_denoise_model(shape, do = 0, activate = 'selu'):
inputs = Input(shape)
## Encoder starts
conv1 = Conv2D(16, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(inputs)
pool1 = MaxPooling2D(pool_size=(2, 2))(conv1)
## Bottleneck
conv2 = Conv2D(32, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool1)
## Now the decoder starts
up3 = Conv2D(64, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(conv2))
merge3 = concatenate([conv1,up3], axis = -1)
conv3 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(merge3)
conv4 = Conv2D(1, 3, padding = 'same')(conv3)
shallow_net = Model(inputs = inputs, outputs = conv4)
return shallow_net
def get_descriptor_model(shape, activate= 'relu'):
'''Architecture copies HardNet architecture'''
init_weights = keras.initializers.he_normal()
descriptor_model = Sequential()
descriptor_model.add(Conv2D(32, 3, padding='same', input_shape=shape, use_bias = True, kernel_initializer=init_weights))
descriptor_model.add(BatchNormalization(axis = -1))
descriptor_model.add(Activation(activate))
descriptor_model.add(Conv2D(32, 3, padding='same', use_bias = True, kernel_initializer=init_weights))
descriptor_model.add(BatchNormalization(axis = -1))
descriptor_model.add(Activation(activate))
descriptor_model.add(Conv2D(64, 3, padding='same', strides=2, use_bias = True, kernel_initializer=init_weights))
descriptor_model.add(BatchNormalization(axis = -1))
descriptor_model.add(Activation(activate))
descriptor_model.add(Conv2D(64, 3, padding='same', use_bias = True, kernel_initializer=init_weights))
descriptor_model.add(BatchNormalization(axis = -1))
descriptor_model.add(Activation(activate))
descriptor_model.add(Conv2D(128, 3, padding='same', strides=2, use_bias = True, kernel_initializer=init_weights))
descriptor_model.add(BatchNormalization(axis = -1))
descriptor_model.add(Activation(activate))
descriptor_model.add(Conv2D(128, 3, padding='same', use_bias = True, kernel_initializer=init_weights))
descriptor_model.add(BatchNormalization(axis = -1))
descriptor_model.add(Activation(activate))
descriptor_model.add(Dropout(0.3))
descriptor_model.add(Conv2D(128, 8, padding='valid', use_bias = True, kernel_initializer=init_weights))
# Final descriptor reshape
descriptor_model.add(Reshape((128,)))
return descriptor_model
def triplet_loss(x):
output_dim = 128
a, p, n = x
_alpha = 1.0
positive_distance = K.mean(K.square(a - p), axis=-1)
negative_distance = K.mean(K.square(a - n), axis=-1)
return K.expand_dims(K.maximum(0.0, positive_distance - negative_distance + _alpha), axis = 1)
###Output
_____no_output_____
###Markdown
Denoising Image Patches
###Code
from keras.layers import LeakyReLU
shape = (32, 32, 1)
denoise_model = keras.models.load_model('./denoise_base.h5')
###Output
_____no_output_____
###Markdown
Vary Learning Rate
###Code
from keras.layers import Lambda
shape = (32, 32, 1)
xa = Input(shape=shape, name='a')
xp = Input(shape=shape, name='p')
xn = Input(shape=shape, name='n')
descriptor_model = get_descriptor_model( shape)
ea = descriptor_model(xa)
ep = descriptor_model(xp)
en = descriptor_model(xn)
loss = Lambda(triplet_loss)([ea, ep, en])
sgd1 = keras.optimizers.SGD(lr=0.00001, momentum=0.9, nesterov=True)
sgd2 = keras.optimizers.SGD(lr=0.0001, momentum=0.9, nesterov=True)
sgd3 = keras.optimizers.SGD(lr=0.001, momentum=0.9, nesterov=True)
sgd4 = keras.optimizers.SGD(lr=0.01, momentum=0.9, nesterov=True)
sgd5 = keras.optimizers.SGD(lr=0.1, momentum=0.9, nesterov=True)
descriptor_model_trip_sgd1 = Model(inputs=[xa, xp, xn], outputs=loss)
descriptor_model_trip_sgd2 = Model(inputs=[xa, xp, xn], outputs=loss)
descriptor_model_trip_sgd3 = Model(inputs=[xa, xp, xn], outputs=loss)
descriptor_model_trip_sgd4 = Model(inputs=[xa, xp, xn], outputs=loss)
descriptor_model_trip_sgd5 = Model(inputs=[xa, xp, xn], outputs=loss)
descriptor_model_trip_sgd1.compile(loss='mean_absolute_error', optimizer=sgd1)
descriptor_model_trip_sgd2.compile(loss='mean_absolute_error', optimizer=sgd2)
descriptor_model_trip_sgd3.compile(loss='mean_absolute_error', optimizer=sgd3)
descriptor_model_trip_sgd4.compile(loss='mean_absolute_error', optimizer=sgd4)
descriptor_model_trip_sgd5.compile(loss='mean_absolute_error', optimizer=sgd5)
### Descriptor loading and training
# Loading images
hPatches = HPatches(train_fnames=train_fnames, test_fnames=test_fnames,
denoise_model=denoise_model, use_clean=False)
# Creating training generator
training_generator = DataGeneratorDesc(*hPatches.read_image_file(hpatches_dir, train=1), num_triplets=10000)
# Creating validation generator
val_generator = DataGeneratorDesc(*hPatches.read_image_file(hpatches_dir, train=0), num_triplets=10000)
plot_triplet(training_generator)
#epochs = 1
### As with the denoising model, we use a loop to save for each epoch
## #the weights in an external website in case colab stops.
### reset, so e.g. calling 5 times fit(epochs=1) behave as fit(epochs=5)
### If you have a model saved from a previous training session
### Load it in the next line
# descriptor_model_trip.set_weights(keras.models.load_model('./descriptor.h5').get_weights())
# descriptor_model_trip.optimizer = keras.models.load_model('./descriptor.h5').optimizer
#for e in range(epochs):
descriptor_history_sgd1 = descriptor_model_trip_sgd1.fit_generator(generator=training_generator, epochs=5, verbose=1, validation_data=val_generator)
descriptor_history_sgd2 = descriptor_model_trip_sgd2.fit_generator(generator=training_generator, epochs=5, verbose=1, validation_data=val_generator)
descriptor_history_sgd3 = descriptor_model_trip_sgd3.fit_generator(generator=training_generator, epochs=5, verbose=1, validation_data=val_generator)
descriptor_history_sgd4 = descriptor_model_trip_sgd4.fit_generator(generator=training_generator, epochs=5, verbose=1, validation_data=val_generator)
descriptor_history_sgd5 = descriptor_model_trip_sgd5.fit_generator(generator=training_generator, epochs=5, verbose=1, validation_data=val_generator)
###Output
_____no_output_____
###Markdown
Plot Losses
###Code
import matplotlib.pyplot as plt
def plot_history(history, history2, history3, history4, history5, metric = None):
# Plots the loss history of training and validation (if existing)
# and a given metric
if metric != None:
fig, axes = plt.subplots(2,1, figsize=(8, 10))
#axes[0].plot(history.history[metric])
#axes[0].plot(history2.history[metric])
#axes[0].plot(history3.history[metric])
#axes[0].plot(history4.history[metric])
#axes[0].plot(history5.history[metric])
#axes[0].plot(history6.history[metric])
try:
#axes[0].plot(history.history['val_'+metric])
#axes[0].plot(history2.history['val2_'+metric])
#axes[0].plot(history3.history['val3_'+metric])
axes[0].legend(['lr=1e-5', 'lr=1e-4', 'lr=1e-3', 'lr=1e-2', 'lr=1e-1'], loc='upper right')
except:
pass
axes[0].set_title('MAE Vs. No of Epochs for Various Learning Rates')
axes[0].set_ylabel('Mean Absolute Error')
axes[0].set_xlabel('Epoch')
fig.subplots_adjust(hspace=0.5)
axes[1].plot(history.history['loss'])
axes[1].plot(history2.history['loss'])
axes[1].plot(history3.history['loss'])
axes[1].plot(history4.history['loss'])
axes[1].plot(history5.history['loss'])
try:
#axes[1].plot(history.history['val_loss'])
axes[1].legend(['lr=1e-5', 'lr=1e-4', 'lr=1e-3', 'lr=1e-2', 'lr=1e-1'], loc='upper right')
except:
pass
axes[1].set_title('MAE Vs. No of Epochs for Various Learning Rates')
axes[1].set_ylabel('Mean Absolute Error')
axes[1].set_xlabel('Epoch')
else:
plt.plot(history.history['loss'])
try:
plt.plot(history.history['val_loss'])
plt.legend(['Train', 'Val'])
except:
pass
plt.title('Model Loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plot_history(descriptor_history_sgd1, descriptor_history_sgd2, descriptor_history_sgd3, descriptor_history_sgd4, descriptor_history_sgd5, 'mean_absolute_error')
def plot_val_history(history, history2, history3, history4, history5, metric = None):
# Plots the loss history of training and validation (if existing)
# and a given metric
if metric != None:
fig, axes = plt.subplots(2,1, figsize=(8, 10))
#axes[0].plot(history.history[metric])
#axes[0].plot(history2.history[metric])
#axes[0].plot(history3.history[metric])
try:
#axes[0].plot(history.history['val_'+metric])
#axes[0].plot(history2.history['val_'+metric])
#axes[0].plot(history3.history['val_'+metric])
#axes[0].plot(history4.history['val_'+metric])
#axes[0].plot(history5.history['val_'+metric])
#axes[0].plot(history6.history['val_'+metric])
axes[0].legend(['lr=1e-5', 'lr=1e-4', 'lr=1e-3', 'lr=1e-2', 'lr=1e-1'], loc='upper right')
except:
pass
axes[0].set_title('Validation Loss Vs. No of Epochs for Various Learning Rates')
axes[0].set_ylabel('Validation Loss')
axes[0].set_xlabel('Epoch')
fig.subplots_adjust(hspace=0.5)
#axes[1].plot(history.history['loss'])
#axes[1].plot(history2.history['loss'])
#axes[1].plot(history3.history['loss'])
try:
axes[1].plot(history.history['val_loss'])
axes[1].plot(history2.history['val_loss'])
axes[1].plot(history3.history['val_loss'])
axes[1].plot(history4.history['val_loss'])
axes[1].plot(history5.history['val_loss'])
#axes[1].plot(history6.history['val_loss'])
axes[1].legend(['lr=1e-5', 'lr=1e-4', 'lr=1e-3', 'lr=1e-2', 'lr=1e-1'], loc='upper right')
except:
pass
axes[1].set_title('Validation Loss Vs. No of Epochs for Various Learning Rates')
axes[1].set_ylabel('Validation Loss')
axes[1].set_xlabel('Epoch')
else:
plt.plot(history.history['loss'])
try:
plt.plot(history.history['val_loss'])
plt.legend(['Train', 'Val'])
except:
pass
plt.title('Model Loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plot_val_history(descriptor_history_sgd1, descriptor_history_sgd2, descriptor_history_sgd3, descriptor_history_sgd4, descriptor_history_sgd5, 'mean_absolute_error')
###Output
_____no_output_____
###Markdown
Vary Momentum
###Code
sgd1 = keras.optimizers.SGD(lr=0.1, momentum=0.9, nesterov=True)
sgd2 = keras.optimizers.SGD(lr=0.1, momentum=0.8, nesterov=True)
sgd3 = keras.optimizers.SGD(lr=0.1, momentum=0.7, nesterov=True)
sgd4 = keras.optimizers.SGD(lr=0.1, momentum=0.6, nesterov=True)
sgd5 = keras.optimizers.SGD(lr=0.1, momentum=0.5, nesterov=True)
descriptor_model_trip_sgd1 = Model(inputs=[xa, xp, xn], outputs=loss)
descriptor_model_trip_sgd2 = Model(inputs=[xa, xp, xn], outputs=loss)
descriptor_model_trip_sgd3 = Model(inputs=[xa, xp, xn], outputs=loss)
descriptor_model_trip_sgd4 = Model(inputs=[xa, xp, xn], outputs=loss)
descriptor_model_trip_sgd5 = Model(inputs=[xa, xp, xn], outputs=loss)
descriptor_model_trip_sgd1.compile(loss='mean_absolute_error', optimizer=sgd1)
descriptor_model_trip_sgd2.compile(loss='mean_absolute_error', optimizer=sgd2)
descriptor_model_trip_sgd3.compile(loss='mean_absolute_error', optimizer=sgd3)
descriptor_model_trip_sgd4.compile(loss='mean_absolute_error', optimizer=sgd4)
descriptor_model_trip_sgd5.compile(loss='mean_absolute_error', optimizer=sgd5)
#epochs = 1
### As with the denoising model, we use a loop to save for each epoch
## #the weights in an external website in case colab stops.
### reset, so e.g. calling 5 times fit(epochs=1) behave as fit(epochs=5)
### If you have a model saved from a previous training session
### Load it in the next line
# descriptor_model_trip.set_weights(keras.models.load_model('./descriptor.h5').get_weights())
# descriptor_model_trip.optimizer = keras.models.load_model('./descriptor.h5').optimizer
#for e in range(epochs):
descriptor_history_sgd1 = descriptor_model_trip_sgd1.fit_generator(generator=training_generator, epochs=5, verbose=1, validation_data=val_generator)
descriptor_history_sgd2 = descriptor_model_trip_sgd2.fit_generator(generator=training_generator, epochs=5, verbose=1, validation_data=val_generator)
descriptor_history_sgd3 = descriptor_model_trip_sgd3.fit_generator(generator=training_generator, epochs=5, verbose=1, validation_data=val_generator)
descriptor_history_sgd4 = descriptor_model_trip_sgd4.fit_generator(generator=training_generator, epochs=5, verbose=1, validation_data=val_generator)
descriptor_history_sgd5 = descriptor_model_trip_sgd5.fit_generator(generator=training_generator, epochs=5, verbose=1, validation_data=val_generator)
### Saves optimizer and weights
#descriptor_model_trip.save('descriptor.h5')
### Uploads files to external hosting
#!curl -F "[email protected]" https://file.io
###Output
_____no_output_____
###Markdown
Plot Losses
###Code
def plot_history(history, history2, history3, history4, history5, metric = None):
# Plots the loss history of training and validation (if existing)
# and a given metric
if metric != None:
fig, axes = plt.subplots(2,1, figsize=(8, 10))
#axes[0].plot(history.history[metric])
#axes[0].plot(history2.history[metric])
#axes[0].plot(history3.history[metric])
#axes[0].plot(history4.history[metric])
#axes[0].plot(history5.history[metric])
try:
#axes[0].plot(history.history['val_'+metric])
#axes[0].plot(history2.history['val2_'+metric])
#axes[0].plot(history3.history['val3_'+metric])
axes[0].legend(['0.9', '0.8', '0.7', '0.6', '0.5'], loc='best')
except:
pass
axes[0].set_title('MAE Vs. No of Epochs for Various Momentum Values')
axes[0].set_ylabel('Mean Absolute Error')
axes[0].set_xlabel('Epoch')
fig.subplots_adjust(hspace=0.5)
axes[1].plot(history.history['loss'])
axes[1].plot(history2.history['loss'])
axes[1].plot(history3.history['loss'])
axes[1].plot(history4.history['loss'])
axes[1].plot(history5.history['loss'])
try:
#axes[1].plot(history.history['val_loss'])
axes[1].legend(['0.9', '0.8', '0.7', '0.6', '0.5'], loc='best')
except:
pass
axes[1].set_title('MAE Vs. No of Epochs for Various Momentum Values')
axes[1].set_ylabel('Mean Absolute Error')
axes[1].set_xlabel('Epoch')
else:
plt.plot(history.history['loss'])
try:
plt.plot(history.history['val_loss'])
plt.legend(['Train', 'Val'])
except:
pass
plt.title('Model Loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plot_history(descriptor_history_sgd1, descriptor_history_sgd2, descriptor_history_sgd3, descriptor_history_sgd4, descriptor_history_sgd5, 'mean_absolute_error')
def plot_val_history(history, history2, history3, history4, history5, metric = None):
# Plots the loss history of training and validation (if existing)
# and a given metric
if metric != None:
fig, axes = plt.subplots(2,1, figsize=(8, 10))
#axes[0].plot(history.history[metric])
#axes[0].plot(history2.history[metric])
#axes[0].plot(history3.history[metric])
try:
#axes[0].plot(history.history['val_'+metric])
#axes[0].plot(history2.history['val_'+metric])
#axes[0].plot(history3.history['val_'+metric])
#axes[0].plot(history4.history['val_'+metric])
#axes[0].plot(history5.history['val_'+metric])
axes[0].legend(['0.9', '0.8', '0.7', '0.6', '0.5'], loc='best')
except:
pass
axes[0].set_title('Validation Loss Vs. No of Epochs for for Various Momentum Values')
axes[0].set_ylabel('Validation Loss')
axes[0].set_xlabel('Epoch')
fig.subplots_adjust(hspace=0.5)
#axes[1].plot(history.history['loss'])
#axes[1].plot(history2.history['loss'])
#axes[1].plot(history3.history['loss'])
try:
axes[1].plot(history.history['val_loss'])
axes[1].plot(history2.history['val_loss'])
axes[1].plot(history3.history['val_loss'])
axes[1].plot(history4.history['val_loss'])
axes[1].plot(history5.history['val_loss'])
axes[1].legend(['0.9', '0.8', '0.7', '0.6', '0.5'], loc='best')
except:
pass
axes[1].set_title('Validation Loss Vs. No of Epochs for Various Momentum Values')
axes[1].set_ylabel('Validation Loss')
axes[1].set_xlabel('Epoch')
else:
plt.plot(history.history['loss'])
try:
plt.plot(history.history['val_loss'])
plt.legend(['Train', 'Val'])
except:
pass
plt.title('Model Loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plot_val_history(descriptor_history_sgd1, descriptor_history_sgd2, descriptor_history_sgd3, descriptor_history_sgd4, descriptor_history_sgd5, 'mean_absolute_error')
###Output
_____no_output_____
###Markdown
Save Baseline Model
###Code
sgd1 = keras.optimizers.SGD(lr=0.1, momentum=0.7, nesterov=True)
descriptor_model_trip = Model(inputs=[xa, xp, xn], outputs=loss)
descriptor_model_trip.compile(loss='mean_absolute_error', optimizer=sgd1)
#epochs = 1
### As with the denoising model, we use a loop to save for each epoch
## #the weights in an external website in case colab stops.
### reset, so e.g. calling 5 times fit(epochs=1) behave as fit(epochs=5)
### If you have a model saved from a previous training session
### Load it in the next line
# descriptor_model_trip.set_weights(keras.models.load_model('./descriptor.h5').get_weights())
# descriptor_model_trip.optimizer = keras.models.load_model('./descriptor.h5').optimizer
#for e in range(epochs):
descriptor_history = descriptor_model_trip.fit_generator(generator=training_generator, epochs=20, verbose=1, validation_data=val_generator)
descriptor_model_trip.save('descriptor_base.h5')
### Saves optimizer and weights
#descriptor_model_trip.save('descriptor.h5')
### Uploads files to external hosting
#!curl -F "[email protected]" https://file.io
###Output
_____no_output_____
###Markdown
Test mAP on Baseline Approach (descriptor CSVs generated via `generate_desc_csv(descriptor_model, seqs_test, denoise_model=denoise_model, use_clean=False)`) Verification
###Code
!python ./hpatches-benchmark/hpatches_eval.py --descr-name=custom --descr-dir=/content/keras_triplet_descriptor/out/ --task=verification --delimiter=";"
!python ./hpatches-benchmark/hpatches_results.py --descr=custom --results-dir=./hpatches-benchmark/results/ --task=verification
###Output
_____no_output_____
###Markdown
Matching
###Code
!python ./hpatches-benchmark/hpatches_eval.py --descr-name=custom --descr-dir=/content/keras_triplet_descriptor/out/ --task=matching --delimiter=";"
!python ./hpatches-benchmark/hpatches_results.py --descr=custom --results-dir=./hpatches-benchmark/results/ --task=matching
###Output
_____no_output_____
###Markdown
Retrieval
###Code
!python ./hpatches-benchmark/hpatches_eval.py --descr-name=custom --descr-dir=/content/keras_triplet_descriptor/out/ --task=retrieval --delimiter=";"
!python ./hpatches-benchmark/hpatches_results.py --descr=custom --results-dir=./hpatches-benchmark/results/ --task=retrieval
###Output
_____no_output_____ |
examples/examples/06_vortex2d.ipynb | ###Markdown
Isentropic Vortex > WORK IN PROGRESS !!! In this example we are going to solve the Euler equations for an isentropic two-dimensional vortex in a fully periodic square domain. Since the problem is not diffusive, the expected behavior is for the vortex to be convected unchanged forever. This is a useful example for testing the numerical diffusion of our method, as well as its numerical stability.\begin{equation} \begin{array}{c} \rho_t + \nabla \cdot (\rho \mathbf{u}) = 0 \\ (\rho \mathbf{u})_t + \nabla \cdot (\rho \mathbf{u} \otimes \mathbf{u}) + \nabla p = 0 \\ (\rho e)_t + \nabla \cdot(\mathbf{u} ( \rho e + p )) = 0 \end{array}\end{equation}The inputs to the network will be the independent variables $x$, $y$ and $t$, and the outputs will be the primitive variables $\rho$, $u$, $v$ and $p$, from which the conserved quantities $\rho \mathbf{u}$ and $\rho e$ are computed inside the loss; here $\rho$ is the density, $\mathbf{u} = (u, v)$ is the velocity and $e$ is the specific energy.
###Code
# autoreload nangs
%reload_ext autoreload
%autoreload 2
%matplotlib inline
#imports
import math
import numpy as np
import matplotlib.pyplot as plt
import torch
###Output
_____no_output_____
###Markdown
First we define our PDE and set the values for training.
###Code
from nangs.pde import PDE
from nangs.bocos import PeriodicBoco, DirichletBoco, NeumannBoco
from nangs.solutions import MLP
class MyPDE(PDE):
def __init__(self, inputs, outputs, params):
super().__init__(inputs, outputs, params)
def computePDELoss(self, grads, inputs, outputs, params):
drdt = grads['r']['t']
r, u, v, p = outputs['r'], outputs['u'], outputs['v'], outputs['p']
ru = r*u
drudx = self.computeGrad(ru, 'x')
drudt = self.computeGrad(ru, 't')
rv = r*v
drvdy = self.computeGrad(rv, 'y')
drvdt = self.computeGrad(rv, 't')
ruup = r*u*u + p
druupdx = self.computeGrad(ruup, 'x')
ruv = r*u*v
druvdy = self.computeGrad(ruv, 'y')
rvvp = r*v*v + p
drvvpdy = self.computeGrad(rvvp, 'y')
rvu = r*v*u
drvudx = self.computeGrad(rvu, 'x')
re = p/(params['g']-1) + 0.5*r*(u*u + v*v)
dredt = self.computeGrad(re, 't')
repu = (re+p)*u
drepudx = self.computeGrad(repu, 'x')
repv = (re+p)*v
drepvdy = self.computeGrad(repv, 'y')
return [
drdt + drudx + drvdy,
drudt + druupdx + druvdy,
drvdt + drvudx + drvvpdy,
dredt + drepudx + drepvdy
]
# instanciate pde
pde = MyPDE(inputs=['x', 'y', 't'], outputs=['r', 'u', 'v', 'p'], params=['g'])
# define input values
x = np.linspace(0,2,40)
y = np.linspace(-1,1,40)
t = np.linspace(0,0.5,20)
pde.setValues({'x': x, 'y': y, 't': t, 'g': np.array([1.4])})
pde.setValues({'x': x, 'y': y, 't': t}, train=False)
###Output
_____no_output_____
###Markdown
Boundary conditions.
###Code
# periodic b.c for the space dimension
x1, x2 = np.array([0]), np.array([1])
boco = PeriodicBoco('boco_x', {'x': x1, 'y': y, 't': t}, {'x': x2, 'y':y, 't': t})
pde.addBoco(boco)
y1, y2 = np.array([0]), np.array([1])
boco = PeriodicBoco('boco_y', {'x': x, 'y': y1, 't': t}, {'x': x, 'y':y2, 't': t})
pde.addBoco(boco)
# initial condition (dirichlet for temporal dimension)
gamma, Ma = 1.4, 0.6
Rgas = 1./(gamma*Ma*Ma)
Rc, C = 0.2, -0.1
r0 = np.zeros((len(y)*len(x)))
u0 = np.zeros((len(y)*len(x)))
v0 = np.zeros((len(y)*len(x)))
p0 = np.zeros((len(y)*len(x)))
for i, _y in enumerate(y):
for j, _x in enumerate(x):
# vortex center at (5, 5)
x1, x2 = x[j] - 1, y[i]
e = -(x1*x1+x2*x2)/(2.*Rc*Rc)
ro = 1.
T = 1.
p = ro*Rgas*T+ro*C*C/(Rc*Rc)*math.exp(e)
u = 1.+C/ro*e*math.exp(e)*(-2.*x2/(2*Rc*Rc))
v = 0.-C/ro*e*math.exp(e)*(-2.*x1/(2*Rc*Rc))
r0[i*len(x) + j] = ro
u0[i*len(x) + j] = u
v0[i*len(x) + j] = v
p0[i*len(x) + j] = p
boco = DirichletBoco('initial_condition', {'x': x, 'y': y, 't': np.array([0])}, {'r': r0, 'u': u0, 'v': v0, 'p': p0})
pde.addBoco(boco)
# visualize initial condition
fig, ax = plt.subplots()
u, v = u0.reshape(len(y),len(x)), v0.reshape(len(y),len(x))
q = ax.quiver(x, y, u, v)
ax.quiverkey(q, X=0.3, Y=1.1, U=10, label='Quiver key, length = 10', labelpos='E')
plt.show()
# removing free-streem velocity component
fig, ax = plt.subplots()
q = ax.quiver(x, y, u - 1, v)
ax.quiverkey(q, X=0.3, Y=1.1, U=10, label='Quiver key, length = 10', labelpos='E')
plt.show()
###Output
_____no_output_____
###Markdown
Now we define a topology for our solution and set the training parameters. Then we can find a solution for our PDE.
###Code
# define solution topology
mlp = MLP(pde.n_inputs, pde.n_outputs, 5, 4096)
optimizer = torch.optim.Adam(mlp.parameters(), lr=3e-4)
pde.compile(mlp, optimizer)
# find the solution
hist = pde.solve(epochs=20)
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(15,5))
ax1.plot(hist['train_loss'], label="train_loss")
ax1.plot(hist['val_loss'], label="val_loss")
ax1.grid(True)
ax1.legend()
ax1.set_yscale("log")
for boco in pde.bocos:
ax2.plot(hist['bocos'][boco.name], label=boco.name)
ax2.legend()
ax2.grid(True)
ax2.set_yscale("log")
plt.show()
###Output
_____no_output_____
###Markdown
Finally, we can evaluate our solution.
###Code
# evaluate the solution
x = np.linspace(0,2,50)
y = np.linspace(-1,1,25)
t = np.array([0.5])
pde.evaluate({'x': x, 'y': y, 't': t})
r = pde.outputs['r']
plt.imshow(r.reshape(len(y),len(x)))
plt.show()
###Output
_____no_output_____ |
examples/2D_QLIPP_simulation/2D_QLIPP_forward.ipynb | ###Markdown
2D QLIPP forward simulationThis simulation is based on the QLIPP paper ([here](https://elifesciences.org/articles/55502)): ``` S.-M. Guo, L.-H. Yeh, J. Folkesson, I. E. Ivanov, A. P. Krishnan, M. G. Keefe, E. Hashemi, D. Shin, B. B. Chhun, N. H. Cho, M. D. Leonetti, M. H. Han, T. J. Nowakowski, S. B. Mehta , "Revealing architectural order with quantitative label-free imaging and deep learning," eLife 9:e55502 (2020).```
###Code
import numpy as np
import matplotlib.pyplot as plt
from numpy.fft import fft2, ifft2, fftshift, ifftshift
import pickle
import waveorder as wo
%matplotlib inline
plt.style.use(['dark_background']) # Plotting option for dark background
###Output
_____no_output_____
###Markdown
Key parameters
###Code
N = 256 # number of pixel in y dimension
M = 256 # number of pixel in x dimension
mag = 40 # magnification
ps = 6.5/mag # effective pixel size
lambda_illu = 0.532 # wavelength
n_media = 1 # refractive index in the media
NA_obj = 0.55 # objective NA
NA_illu = 0.4 # illumination NA (condenser)
NA_illu_in = 0.4 # illumination NA (phase contrast inner ring)
z_defocus = (np.r_[:5]-2)*1.757 # a set of defocus plane
chi = 0.03*2*np.pi # swing of Polscope analyzer
###Output
_____no_output_____
###Markdown
Sample : star with uniform phase, uniform retardance, and radial orientation
###Code
# generate Siemens star pattern
star, theta, _ = wo.genStarTarget(N,M)
wo.plot_multicolumn(np.array([star, theta]), num_col=2, size=5)
# Assign uniform phase, uniform retardance, and radial slow axes to the star pattern
phase_value = 1 # average phase in radians (optical path length)
phi_s = star*(phase_value + 0.15) # slower OPL across target
phi_f = star*(phase_value - 0.15) # faster OPL across target
mu_s = np.zeros((N,M)) # absorption
mu_f = mu_s.copy()
t_eigen = np.zeros((2, N, M), complex) # complex specimen transmission
t_eigen[0] = np.exp(-mu_s + 1j*phi_s)
t_eigen[1] = np.exp(-mu_f + 1j*phi_f)
sa = theta%np.pi #slow axes.
wo.plot_multicolumn(np.array([phi_s, phi_f, mu_s, sa]), \
num_col=2, size=5, set_title=True, \
titles=['Phase (slow)', 'Phase (fast)', 'absorption', 'slow axis'], origin='lower')
###Output
_____no_output_____
###Markdown
Forward model of QLIPP (polarization-diverse and depth-diverse acquisition) Source pupil
###Code
# Subsample source pattern for speed
xx, yy, fxx, fyy = wo.gen_coordinate((N, M), ps)
Source_cont = wo.gen_Pupil(fxx, fyy, NA_illu, lambda_illu)
Source_discrete = wo.Source_subsample(Source_cont, lambda_illu*fxx, lambda_illu*fyy, subsampled_NA = 0.1)
plt.figure(figsize=(10,10))
plt.imshow(fftshift(Source_discrete),cmap='gray')
###Output
_____no_output_____
###Markdown
Initialize microscope simulator with above source pattern and uniform imaging pupil
###Code
# Microscope object generation
simulator = wo.waveorder_microscopy_simulator((N,M), lambda_illu, ps, NA_obj, NA_illu, z_defocus, chi, n_media=n_media,\
illu_mode='Arbitrary', Source=Source_discrete)
setup = wo.waveorder_microscopy((N,M), lambda_illu, ps, NA_obj, NA_illu, z_defocus, chi, n_media=n_media,\
cali=False, bg_option='global', illu_mode='Arbitrary',Source=Source_cont)
plt.figure(figsize=(10,10))
plt.imshow(np.abs(fftshift(setup.Pupil_obj)),cmap='gray')
###Output
_____no_output_____
###Markdown
2D phase transfer function of the microscope at different z
###Code
wo.image_stack_viewer(np.real(np.transpose(fftshift(setup.Hp,axes=(0,1)),(2,0,1))))
###Output
_____no_output_____
###Markdown
Compute image volumes and Stokes volumes
###Code
I_meas, Stokes_out = simulator.simulate_waveorder_measurements(t_eigen, sa, multiprocess=False)
wo.parallel_4D_viewer(np.transpose(Stokes_out,(3,0,1,2)), num_col=2, size=5, origin='lower', \
set_title=True, titles=[r'$S_0$',r'$S_1$',r'$S_2$',r'$S_3$'])
# Add noise to the measurement
photon_count = 14000
ext_ratio = 10000
const_bg = photon_count/(0.5*(1-np.cos(chi)))/ext_ratio
I_meas_noise = (np.random.poisson(I_meas/np.max(I_meas) * photon_count + const_bg)).astype('float64')
wo.parallel_4D_viewer(np.transpose(I_meas_noise,(3,0,1,2)), num_col=3, size=5, origin='lower', \
set_title=True, titles=[r'$I_{ext}$',r'$I_{0}$',r'$I_{45}$',r'$I_{90}$', r'$I_{135}$'])
###Output
_____no_output_____
###Markdown
Save simulation
###Code
output_file = '2D_QLIPP_simulation'
np.savez(output_file, I_meas=I_meas_noise, Stokes_out=Stokes_out, lambda_illu=lambda_illu, \
n_media=n_media, NA_obj=NA_obj, NA_illu=NA_illu, ps=ps, Source_cont=Source_cont, \
z_defocus=z_defocus, chi=chi)
import cupy as cp
import gc
gc.collect()
cp.get_default_memory_pool().free_all_blocks()
###Output
_____no_output_____ |
ConvexOptimization/Part 3, Matrix completion and nuclear norm regularization.ipynb | ###Markdown
The matrix completion problemNext, we will look at another commonly used non-differentiable regularization term using the matrix completion problem. The basic idea is that we have a matrix (known to be low-rank) which has some missing entries. Let $Y\in\mathbb{R}^{m\times n}$ be the matrix, $(i,j)\in \Omega$ the entries that we have observed, and $B$ our estimate of the full matrix. We construct an objective function ($\mathcal{E}$) by quantifying the squared error of our estimated matrix $B$ and by penalizing high-rank solutions using a regularization term:$$\mathcal{E}(B):=\frac{1}{2} \sum_{(i,j)\in\Omega} (Y_{ij}-B_{ij})^2 + \lambda||B||_{tr},$$where $||B||_{tr}$ is the nuclear norm of $B$, i.e. the sum of its singular values, and $\lambda$ the regularization weight. For clarity of notation, let $P_\Omega$ be the projection operator onto the set $\Omega$, defined as:$$[P_\Omega(B)]_{ij}:=\begin{cases} B_{ij} &\mbox{if } (i,j)\in\Omega, \\0&\mbox{otherwise}. \end{cases}$$We can then simplify our objective function by rewriting it as:\begin{align*} \mathcal{E}(B) &= g(B) + h(B), \\ g(B) &= \frac{1}{2} || P_\Omega(Y)-P_\Omega(B)||^2_F, \\ h(B) &= \lambda||B||_{tr},\end{align*}where $||\ \cdot\ ||_F$ is the Frobenius norm. We thus have a definition suitable for the proximal gradient method. That is, we have an objective function $\mathcal{E}$ consisting of a sum of two convex functions $g$ and $h$, where only $g$ is smooth. Thus, the problem that remains is to compute the gradient of $g$ and the proximity operator for $h$. Rather straightforwardly, we obtain the gradient as:$$\nabla g(B)=-(P_\Omega(Y)-P_\Omega(B)),$$since$$(\nabla g(B))_{kl}=\frac{\partial}{\partial B_{kl}} \frac{1}{2} \sum_{i=1}^m \sum_{j=1}^n (P_\Omega(Y)_{ij}-P_\Omega(B)_{ij})^2=\frac{1}{2} \frac{\partial}{\partial B_{kl}} (P_\Omega(Y)_{kl}-P_\Omega(B)_{kl})^2=-(P_\Omega(Y)-P_\Omega(B))_{kl}.$$The proximity operator for $h$ is slightly trickier to derive, but the end result is that$$\operatorname{prox}_{\eta h}(B)=S_{\eta \lambda}(B),$$where $S_{\eta \lambda}$ is the matrix soft-thresholding operator (a sketch of the derivation is given below). Let now $B=U\Sigma V^T$ be the Singular Value Decomposition (SVD) of $B$ and $\Sigma_{\eta \lambda}$ a diagonal matrix such that $(\Sigma_{\eta \lambda})_{ii}=\operatorname{max}\{\Sigma_{ii}-\eta \lambda,0\}$. Then, the matrix soft-thresholding operator is defined as$$S_{\eta \lambda}:=U\Sigma_{\eta \lambda} V^T.$$That is, the matrix soft-thresholding operator shrinks the singular values and sets them to zero whenever they fall below $\eta \lambda$. A higher value of $\lambda$ will thus cause more singular values to be zero, and consequently cause $B$ to be of lower rank.
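A short sketch of the derivation of this proximity operator (notation as above; a sketch, not a full proof). The operator solves$$\operatorname{prox}_{\eta h}(B) = \operatorname{arg\,min}_{Z}\ \frac{1}{2}\|Z-B\|_F^2 + \eta\lambda\|Z\|_{tr}.$$Since both the Frobenius norm and the nuclear norm are unitarily invariant, with $B=U\Sigma V^T$ the minimizer can be taken of the form $Z=UDV^T$ with $D$ diagonal, so the problem decouples over the singular values:$$\min_{d_i \ge 0}\ \frac{1}{2}(d_i-\Sigma_{ii})^2 + \eta\lambda\, d_i \quad\Longrightarrow\quad d_i=\max\{\Sigma_{ii}-\eta\lambda,\ 0\},$$which is exactly the matrix soft-thresholding operator $S_{\eta\lambda}(B)$ defined above.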
###Code
def gFun(B, Y, Omega):
return (np.sum((B[Omega]-Y[Omega])**2))/2
def hFun(B, regLambda):
U, Sigma, VT = np.linalg.svd(B, full_matrices=False)
return regLambda*np.linalg.norm(Sigma, 1)
def objFun(B, Y, Omega, regLambda):
return gFun(B, Y, Omega) + hFun(B, regLambda)
def gFunDer(B, Y, Omega):
negGrad = np.zeros(B.shape)
negGrad[Omega] = -Y[Omega] + B[Omega]
return negGrad
def matrixSoftThresholdingFun(B, th):
U, Sigma, VT = np.linalg.svd(B, full_matrices=False)
Sigma = Sigma - th
Sigma[Sigma<0] = 0
return np.dot(U * Sigma, VT)
###Output
_____no_output_____
###Markdown
Matrix completion example
###Code
m = 60 # Rows of Y
n = 80 # Columns of Y
k = 2 # N outer products
nZeros = int(0.5*m*n) # N missing elements in Y
# Create a random full Y matrix
W1 = np.random.randn(m, k)
W2 = np.random.randn(k, n)
Y = np.matmul(W1, W2)
# Obtain omega by randomly selecting elements to exlude from Y
randPerm = np.random.permutation(m*n)
Omega = np.ones(m*n, dtype=bool)
Omega[randPerm[:nZeros]] = False
Omega = np.reshape(Omega, [m, n])
# Plotting
fig = plt.figure(figsize=(15, 5))
# Y
ax = fig.add_subplot(1, 2, 1)
ax.imshow(Y, cmap=cm.coolwarm)
ax.set_title('$Y$');
ax.set_xticks([])
ax.set_yticks([])
# Omega
ax = fig.add_subplot(1, 2, 2)
ax.imshow(Omega, cmap=cm.gray)
ax.set_title('$\Omega$');
ax.set_xticks([])
ax.set_yticks([]);
###Output
_____no_output_____
###Markdown
Predict the missing entries using the proximal gradient method
###Code
# Proximal gradient parameters
i = 2
eta = 1
beta = 0.8
regLambda = 1
epsilon = 1e-5
# Pre-allocation of memory for algorithm variables
BNew = np.zeros((m, n))
BOld = np.zeros((m, n))
BTmp = np.zeros((m, n))
# Initialization
objFunVals = []
objFunVals.append(objFun(BNew, Y, Omega, regLambda))
# Proximal gradient with backtracking and accerlation
converged = False
while not converged:
gradTmp = gFunDer(BTmp, Y, Omega)
gTmp = gFun(BTmp, Y, Omega)
# Backtracking loop
while True:
BNew = matrixSoftThresholdingFun(BTmp - eta*gradTmp, eta*regLambda)
diff = BNew - BTmp
if gFun(BNew, Y, Omega) > gTmp + np.sum(np.multiply(gradTmp, diff)) + np.sum(diff**2)/(2*eta):
eta *= beta
else:
break
objFunVals.append(objFun(BNew, Y, Omega, regLambda))
BTmp = BNew + (i-2)/(i-1)*(BNew-BOld) # Acceleration
BOld = BNew
i += 1
# Convergence check: mean over previous 10 iterations due to Nesterov ripples with acceleration
if len(objFunVals) > 12:
if (np.mean(objFunVals[-12:-2]) - objFunVals[-1]) / objFunVals[-1] < epsilon:
converged = True
objFunVals = np.array(objFunVals)
U, Sigma, VT = np.linalg.svd(BNew, full_matrices=False)
print('Rank(B):', np.sum(Sigma > 1e-10))
# Plotting
fig = plt.figure(figsize=(18, 3))
# Objective function
ax = fig.add_subplot(1, 4, 1)
ax.plot(range(objFunVals.size-1), objFunVals[1:], c=[0.5, 0.5, 0.5])
ax.set_xlabel('Iteration')
ax.set_ylabel('Obj. fun.')
# Estimated and real element values
ax = fig.add_subplot(1, 4, 2)
ax.plot([Y.min(), Y.max()], [Y.min(), Y.max()], 'k:', color=[0.5, 0.5, 0.5])
ph1, = ax.plot(Y[Omega].ravel(), BNew[Omega].ravel(), 'b.', ms=8, label='$\Omega$')
ph2, = ax.plot(Y[~Omega].ravel(), BNew[~Omega].ravel(), 'r.', ms=8, label='$\Omega^c$')
ax.legend(handles=[ph1, ph2])
ax.set_ylabel('$B_{ij}$')
ax.set_xlabel('$Y_{ij}$')
# Y
ax = fig.add_subplot(1, 4, 3)
ax.imshow(Y, cmap=cm.coolwarm, clim=[-np.max(np.abs(Y)), np.max(np.abs(Y))])
ax.set_title('$Y$');
ax.set_xticks([])
ax.set_yticks([])
# B
ax = fig.add_subplot(1, 4, 4)
ax.imshow(BNew, cmap=cm.coolwarm, clim=[-np.max(np.abs(BNew)), np.max(np.abs(BNew))])
ax.set_title('$B$');
ax.set_xticks([])
ax.set_yticks([]);
###Output
Rank(B): 2
###Markdown
Comparing the original and found vectors
###Code
# Plotting
fig = plt.figure(figsize=(9, 3*(k+1)))
for kIdx in range(k):
# W part
ax = fig.add_subplot(k+1, 2, kIdx*2+1)
WTmp = np.outer(W1[:,kIdx], W2[kIdx, :])
ax.imshow(WTmp, cmap=cm.coolwarm, clim=[-np.max(np.abs(WTmp)), np.max(np.abs(WTmp))])
ax.set_title('$W_{:1d}$'.format(kIdx+1));
ax.set_xticks([])
ax.set_yticks([]);
# B part
ax = fig.add_subplot(k+1, 2, kIdx*2+2)
BTmp = np.outer(U[:,kIdx], VT[kIdx, :])
ax.imshow(BTmp, cmap=cm.coolwarm, clim=[-np.max(np.abs(BTmp)), np.max(np.abs(BTmp))])
ax.set_title('$B_{:1d}$'.format(kIdx+1));
ax.set_xticks([])
ax.set_yticks([]);
# W full
ax = fig.add_subplot(k+1, 2, kIdx*2+3)
WTmp = np.matmul(W1, W2)
ax.imshow(WTmp, cmap=cm.coolwarm)
ax.set_title('$W_{Full}$');
ax.set_xticks([])
ax.set_yticks([]);
# B full
ax = fig.add_subplot(k+1, 2, kIdx*2+4)
BTmp = np.matmul(U*Sigma, VT)
ax.imshow(BTmp, cmap=cm.coolwarm, )
ax.set_title('$B_{Full}$');
ax.set_xticks([])
ax.set_yticks([]);
###Output
_____no_output_____ |
docs/source/user_guide/clean/clean_mx_rfc.ipynb | ###Markdown
Mexican Tax Numbers Introduction The function `clean_mx_rfc()` cleans a column containing Mexican tax number (RFC) strings, and standardizes them in a given format. The function `validate_mx_rfc()` validates either a single RFC strings, a column of RFC strings or a DataFrame of RFC strings, returning `True` if the value is valid, and `False` otherwise. RFC strings can be converted to the following formats via the `output_format` parameter:* `compact`: only number strings without any seperators or whitespace, like "GODE561231GR8"* `standard`: RFC strings with proper whitespace in the proper places, like "GODE 561231 GR8"Invalid parsing is handled with the `errors` parameter:* `coerce` (default): invalid parsing will be set to NaN* `ignore`: invalid parsing will return the input* `raise`: invalid parsing will raise an exceptionThe following sections demonstrate the functionality of `clean_mx_rfc()` and `validate_mx_rfc()`. An example dataset containing RFC strings
###Code
import pandas as pd
import numpy as np
df = pd.DataFrame(
{
"rfc": [
"GODE561231GR8",
"BUEI591231GH9",
"51824753556",
"51 824 753 556",
"hello",
np.nan,
"NULL"
],
"address": [
"123 Pine Ave.",
"main st",
"1234 west main heights 57033",
"apt 1 789 s maple rd manhattan",
"robie house, 789 north main street",
"(staples center) 1111 S Figueroa St, Los Angeles",
"hello",
]
}
)
df
###Output
_____no_output_____
###Markdown
1. Default `clean_mx_rfc`By default, `clean_mx_rfc` will clean rfc strings and output them in the standard format with proper separators.
###Code
from dataprep.clean import clean_mx_rfc
clean_mx_rfc(df, column = "rfc")
###Output
_____no_output_____
###Markdown
2. Output formats This section demonstrates the output parameter. `standard` (default)
###Code
clean_mx_rfc(df, column = "rfc", output_format="standard")
###Output
_____no_output_____
###Markdown
`compact`
###Code
clean_mx_rfc(df, column = "rfc", output_format="compact")
###Output
_____no_output_____
###Markdown
3. `inplace` parameterThis deletes the given column from the returned DataFrame. A new column containing cleaned RFC strings is added with a title in the format `"{original title}_clean"`.
###Code
clean_mx_rfc(df, column="rfc", inplace=True)
###Output
_____no_output_____
###Markdown
4. `errors` parameter `coerce` (default)
###Code
clean_mx_rfc(df, "rfc", errors="coerce")
###Output
_____no_output_____
###Markdown
`ignore`
###Code
clean_mx_rfc(df, "rfc", errors="ignore")
###Output
_____no_output_____
###Markdown
4. `validate_mx_rfc()` `validate_mx_rfc()` returns `True` when the input is a valid RFC. Otherwise it returns `False`.The input of `validate_mx_rfc()` can be a string, a Pandas DataSeries, a Dask DataSeries, a Pandas DataFrame and a dask DataFrame.When the input is a string, a Pandas DataSeries or a Dask DataSeries, user doesn't need to specify a column name to be validated. When the input is a Pandas DataFrame or a dask DataFrame, user can both specify or not specify a column name to be validated. If user specify the column name, `validate_mx_rfc()` only returns the validation result for the specified column. If user doesn't specify the column name, `validate_mx_rfc()` returns the validation result for the whole DataFrame.
###Code
from dataprep.clean import validate_mx_rfc
print(validate_mx_rfc("GODE561231GR8"))
print(validate_mx_rfc("BUEI591231GH9"))
print(validate_mx_rfc("51824753556"))
print(validate_mx_rfc("51 824 753 556"))
print(validate_mx_rfc("hello"))
print(validate_mx_rfc(np.nan))
print(validate_mx_rfc("NULL"))
###Output
_____no_output_____
###Markdown
Series
###Code
validate_mx_rfc(df["rfc"])
###Output
_____no_output_____
###Markdown
DataFrame + Specify Column
###Code
validate_mx_rfc(df, column="rfc")
###Output
_____no_output_____
###Markdown
Only DataFrame
###Code
validate_mx_rfc(df)
###Output
_____no_output_____ |
__Project Files/.ipynb_checkpoints/ImputeNaN__KNN_backup_dropnan_class-checkpoint.ipynb | ###Markdown
Data Dictionary

Descriptors:

| Feature Name | Description | Metrics |
| --- | --- | --- |
| RecordID | A unique integer for each ICU stay | Integer |
| Age | Age | (years) |
| Height | Height | (cm) |
| ICUtype | ICU Type (1: Coronary Care Unit, 2: Cardiac Surgery Recovery Unit, 3: Medical ICU, or 4: Surgical ICU) | |
| Gender | Gender (0: female, or 1: male) | |

These 37 variables may be observed once, more than once, or not at all in some cases:

| Feature Name | Description | Metrics |
| --- | --- | --- |
| Albumin | Albumin | (g/dL) |
| ALP | Alkaline phosphatase | (IU/L) |
| ALT | Alanine transaminase | (IU/L) |
| AST | Aspartate transaminase | (IU/L) |
| Bilirubin | Bilirubin | (mg/dL) |
| BUN | Blood urea nitrogen | (mg/dL) |
| Cholesterol | Cholesterol | (mg/dL) |
| Creatinine | Serum creatinine | (mg/dL) |
| DiasABP | Invasive diastolic arterial blood pressure | (mmHg) |
| FiO2 | Fractional inspired O2 | (0-1) |
| GCS | Glasgow Coma Score | (3-15) |
| Glucose | Serum glucose | (mg/dL) |
| HCO3 | Serum bicarbonate | (mmol/L) |
| HCT | Hematocrit | (%) |
| HR | Heart rate | (bpm) |
| K | Serum potassium | (mEq/L) |
| Lactate | Lactate | (mmol/L) |
| Mg | Serum magnesium | (mmol/L) |
| MAP | Invasive mean arterial blood pressure | (mmHg) |
| MechVent | Mechanical ventilation respiration (0: false, or 1: true) | |
| Na | Serum sodium | (mEq/L) |
| NIDiasABP | Non-invasive diastolic arterial blood pressure | (mmHg) |
| NIMAP | Non-invasive mean arterial blood pressure | (mmHg) |
| NISysABP | Non-invasive systolic arterial blood pressure | (mmHg) |
| PaCO2 | Partial pressure of arterial CO2 | (mmHg) |
| PaO2 | Partial pressure of arterial O2 | (mmHg) |
| pH | Arterial pH | (0-14) |
| Platelets | Platelets | (cells/nL) |
| RespRate | Respiration rate | (bpm) |
| SaO2 | O2 saturation in hemoglobin | (%) |
| SysABP | Invasive systolic arterial blood pressure | (mmHg) |
| Temp | Temperature | (°C) |
| TropI | Troponin-I | (μg/L) |
| TropT | Troponin-T | (μg/L) |
| Urine | Urine output | (mL) |
| WBC | White blood cell count | (cells/nL) |
| Weight | Weight | (kg) |

Outcomes-Related Descriptors:

| Outcomes | Description | Metrics |
| --- | --- | --- |
| SAPS-I | score (Le Gall et al., 1984) | between 0 to 163 |
| SOFA | score (Ferreira et al., 2001) | between 0 to 4 |
| Length of stay | Length of stay | (days) |
| Survival | Survival | (days) |
| In-hospital death | Target Variable (0: survivor, or 1: died in-hospital) | |

Import Packages
###Code
# Import packages
import glob
import csv
import pandas as pd
import numpy as np
from sqlalchemy import create_engine
import psycopg2
import re
import seaborn as sns
from sklearn.preprocessing import scale
from fancyimpute import SoftImpute, KNN
pd.set_option('display.max_columns', 200)
pd.set_option('display.max_rows',200)
sns.set_style('whitegrid')
sns.set(rc={"figure.figsize": (15, 8)})
%config InlineBackend.figure_format = 'retina'
%matplotlib inline
mortality = pd.read_csv('mortality.csv')
###Output
_____no_output_____
###Markdown
Data CleaningStep 1: Drop all columns with more than 25% NaN valuesStep 2: Drop all rows with more than 50% NaN valuesStep 3: Drop all other outcomes labels except 'in-hospital_death'
###Code
# NaN values count (columns)
null_col = mortality.isnull().sum()
null_col[null_col > 1000].index # columns with more than 1000 NaN values (i.e. missing in more than ~25% of the rows)
mortality.drop(null_col[null_col > 1000].index,axis=1,inplace = True)
# NaN values count (rows)
mortality['NaNs'] = mortality.isnull().sum(axis=1)
mortality[mortality['NaNs']>57].index # rows that contain more than 57 NaN values (50% of the features)
mortality.drop(mortality[mortality['NaNs']>57].index,inplace=True)
# Drop other labels
label_others = mortality.drop(['saps-i','sofa','length_of_stay','survival'],axis=1)
mortality.drop(['saps-i','sofa','length_of_stay','survival'],axis=1,inplace=True)
# Drop NaN column
mortality.drop(['NaNs'],axis=1,inplace=True)
from sklearn.base import BaseEstimator, TransformerMixin

class DropNaN(BaseEstimator, TransformerMixin):
    def __init__(self, drop_col=1000, drop_row=57,
                 deslabels=('saps-i', 'sofa', 'length_of_stay', 'survival')):
        self.drop_col = drop_col            # max NaN count allowed per column
        self.drop_row = drop_row            # max NaN count allowed per row
        self.deslabels = list(deslabels)    # other outcome labels to drop
    def remove_columns_(self, df):
        """Remove columns with more than `drop_col` NaN values"""
        null_col = df.isnull().sum()
        return df.drop(null_col[null_col > self.drop_col].index, axis=1)
    def remove_rows_(self, df):
        """Remove rows with more than `drop_row` NaN values"""
        nan_counts = df.isnull().sum(axis=1)
        return df.drop(df[nan_counts > self.drop_row].index)
    def drop_deslabels_(self, df):
        """Drop the other outcome labels, keeping only in-hospital_death"""
        return df.drop(self.deslabels, axis=1)
    def transform(self, main_df, *args):
        main_df = self.remove_columns_(main_df)
        main_df = self.remove_rows_(main_df)
        main_df = self.drop_deslabels_(main_df)
        self.feature_names = main_df.columns
        return main_df
    def fit(self, X, *args):
        return self
###Output
_____no_output_____
###Markdown
Imputing Missing Data: impute values based on KNN
###Code
mortality.isnull().sum()
###Output
_____no_output_____
###Markdown
Finding the best K (for each column): Step 1: Extract rows with no missing values for a particular column (train set). Step 2: Do a CV on the train set to find the optimal K. Step 3: Use the optimal K with fancyimpute's KNN.
###Code
# Obtain one subset
subset_col = mortality.columns[mortality.columns.str.contains('mean_\w+')]
subset_mortality = mortality[subset_col]
subset_mortality.isnull().sum()
# Loop through each column
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import cross_val_score

def find_best_k_cls(X, y, k_min=1, k_max=51, step=1, cv=5):
    k_range = range(k_min, k_max+1, step)
    accs = []
    for k in k_range:
        knn = KNeighborsRegressor(n_neighbors=k, weights='distance')
        scores = cross_val_score(knn, X, y, cv=cv)
        accs.append(np.mean(scores))
    print('accuracy:', np.max(accs), 'best k:', k_range[np.argmax(accs)])
    return (np.max(accs), k_range[np.argmax(accs)])

best_ks = []
feature_list = []
for feature in subset_mortality.columns:
    impute_missing = subset_mortality.loc[subset_mortality[feature].isnull(), :]   # rows to impute (test)
    impute_valid = subset_mortality.loc[~subset_mortality[feature].isnull(), :]    # rows with observed values (train)
    y = impute_valid[feature].values
    X = impute_valid.drop([feature], axis=1)
    # Standard-scale each predictor column, ignoring NaNs via a masked array
    scaled_array = []
    for col in X.columns:
        masked_X = np.ma.array(X[col], mask=np.isnan(X[col]))
        masked_mean = np.mean(masked_X)
        masked_std = np.std(masked_X)
        scaled_col = np.array((masked_X - masked_mean) / masked_std)
        scaled_array.append(scaled_col)
    Xn = pd.DataFrame(data=np.transpose(scaled_array), columns=X.columns, index=X.index)
    Xn.fillna(Xn.mean(), inplace=True)
    feature_list.append(feature)
    result = find_best_k_cls(Xn, y)   # run the CV search once per feature
    best_ks.append(result)
    print(feature, result)
###Output
accuracy: 0.457916808193 best k: 8
mean_bun accuracy: 0.457916808193 best k: 8
(0.45791680819271596, 8)
accuracy: 0.427825948718 best k: 8
mean_creatinine accuracy: 0.427825948718 best k: 8
(0.42782594871808322, 8)
accuracy: 0.203645564615 best k: 20
mean_gcs accuracy: 0.203645564615 best k: 20
(0.20364556461510813, 20)
accuracy: 0.0288583942041 best k: 46
mean_glucose accuracy: 0.0288583942041 best k: 46
(0.028858394204136585, 46)
accuracy: 0.350347928361 best k: 18
mean_hco3 accuracy: 0.350347928361 best k: 18
(0.35034792836107587, 18)
accuracy: 0.16623157253 best k: 41
mean_hct accuracy: 0.16623157253 best k: 41
(0.16623157253026999, 41)
accuracy: 0.136066885231 best k: 35
mean_hr accuracy: 0.136066885231 best k: 35
(0.13606688523051452, 35)
accuracy: 0.226601404218 best k: 35
mean_k accuracy: 0.226601404218 best k: 35
(0.22660140421766664, 35)
accuracy: 0.120951935352 best k: 46
mean_mg accuracy: 0.120951935352 best k: 46
(0.12095193535204911, 46)
accuracy: 0.0837829317291 best k: 31
mean_na accuracy: 0.0837829317291 best k: 31
(0.083782931729132357, 31)
accuracy: 0.614678707945 best k: 13
mean_nidiasabp accuracy: 0.614678707945 best k: 13
(0.61467870794452073, 13)
accuracy: 0.799074417988 best k: 10
mean_nimap accuracy: 0.799074417988 best k: 10
(0.79907441798788836, 10)
accuracy: 0.479485922519 best k: 17
mean_nisysabp accuracy: 0.479485922519 best k: 17
(0.4794859225187606, 17)
accuracy: 0.40163544286 best k: 9
mean_paco2 accuracy: 0.40163544286 best k: 9
(0.40163544285950853, 9)
accuracy: 0.111286021619 best k: 24
mean_pao2 accuracy: 0.111286021619 best k: 24
(0.11128602161854842, 24)
accuracy: -24.9207078922 best k: 51
mean_ph accuracy: -24.9207078922 best k: 51
(-24.920707892175791, 51)
accuracy: 0.0912651705636 best k: 49
mean_platelets accuracy: 0.0912651705636 best k: 49
(0.091265170563644221, 49)
accuracy: 0.0956271130096 best k: 49
mean_temp accuracy: 0.0956271130096 best k: 49
(0.095627113009622899, 49)
accuracy: 0.133683543128 best k: 51
mean_urine accuracy: 0.133683543128 best k: 51
(0.13368354312775296, 51)
accuracy: 0.0942643467068 best k: 31
mean_wbc accuracy: 0.0942643467068 best k: 31
(0.094264346706828012, 31)
accuracy: 0.0708490946018 best k: 41
mean_weight accuracy: 0.0708490946018 best k: 41
(0.070849094601784096, 41)
###Markdown
From the various k values, I will take the mean of the k values with R² > 0.4 (the optimal k value that can account for most predictions with an estimated R² of at least 0.4).
###Code
# Standard Scale
#mortality_predictors = mortality.drop('in-hospital_death',axis=1)
#masked_mortality_predictors = np.ma.array(mortality_predictors, mask=np.isnan(mortality_predictors))
#masked_mean = np.mean(masked_mortality_predictors)
#masked_std = np.std(masked_mortality_predictors)
#norm_mortality_predictors = (masked_mortality_predictors - masked_mean)/masked_std
#mortality_scaled = pd.DataFrame(data=norm_mortality_predictors,columns=mortality_predictors.columns,index=mortality_predictors.index)
#mortality_scaled.head()
# mortality_filled = KNN(k=10).complete(mortality_scaled)
#mortality_filled = pd.DataFrame(data=mortality_filled,
# columns=mortality_predictors.columns,index=mortality_predictors.index)
#mortality_filled.isnull().sum()
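# --- Added sketch (my addition, not part of the original analysis): implements the
# "mean of the k values with R^2 > 0.4" rule described above, using `best_ks` from the
# previous cell. The imputation line is left commented out, like the rest of this cell,
# because `mortality_scaled` above is also commented out.
good_ks = [k for r2, k in best_ks if r2 > 0.4]
optimal_k = int(round(np.mean(good_ks)))
#mortality_filled = KNN(k=optimal_k).complete(mortality_scaled)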
###Output
_____no_output_____ |
Trabajo_Parcial.ipynb | ###Markdown
Midterm Project (Trabajo Parcial): Dataset creation
###Code
from random import randint
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import math
nd = np.random.randint(50,100)
ne = np.random.randint(2500, 5000)
npuntos = nd+ne
distribucion, entrega = [], []
for i in range(npuntos):
if i < nd:
a = (randint(0,999),randint(0,999))
while a in distribucion:
a = (randint(0,999),randint(0,999))
distribucion.append(a)
else:
a = (randint(0,999),randint(0,999))
while a in distribucion or a in entrega:
a = (randint(0,999),randint(0,999))
entrega.append(a)
puntos=distribucion+entrega
np.savetxt("puntos_entrega.csv", entrega, delimiter=",",fmt='%.1i')
np.savetxt("almacenes.csv", distribucion, delimiter=",",fmt='%.1i')
'''def distancia(a,b):
x1, y1 = a
x2, y2 = b
return math.sqrt((x2-x1)**2+(y2-y1)**2)
def adyacencia(l):
n=len(l)
G=[0]*n
for i in range(n):
G[i]=[0]*n
for i in range(n):
for j in range(n):
if j>i:
d=distancia(l[i],l[j])
gas=randint(5,15)
G[i][j]=(round(d),gas)
elif i>j:
G[i][j]=G[j][i]
return G'''
#G = adyacencia(puntos)
#G = []  # format: node(i), node(f), distance
# first the distribution (warehouse) points are considered,
# then the delivery points
#for i in range(nd):  # distribution
#for j in range(nd, nd + ne):  # delivery
#dist = distancia( distribucionx[i], entregax[i], distribuciony[i], entregay[i] )
#if dist < 5000: #G.append( i, j, )
#for i in range(nd, nd + ne):
#for j in range(i+1, nd + ne -1):
#G.append(i)
#
#pd.DataFrame([0,1,2])
#df = pd.DataFrame(data=G, columns=['Inicio','Fin','Distancia'])
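# Quick sanity check (my addition, not in the original notebook): reload the saved CSV
# files and confirm how many warehouses and delivery points were generated.
almacenes_df = pd.read_csv("almacenes.csv", header=None, names=["x", "y"])
entrega_df = pd.read_csv("puntos_entrega.csv", header=None, names=["x", "y"])
print(len(almacenes_df), "warehouses and", len(entrega_df), "delivery points saved")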
###Output
_____no_output_____
###Markdown
Algorithm (Coralain)
###Code
###Output
_____no_output_____
###Markdown
Algorithm (Diego)
###Code
###Output
_____no_output_____
###Markdown
Algorithm (Julio)
###Code
###Output
_____no_output_____ |
results/in_silico_study/predict_dose.ipynb | ###Markdown
Predict 1-log and 6-log Reduction Dose for Ciprofloxacin Treatment
###Code
import os
import chi
import chi.plots
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import pints
import seaborn as sns
import xarray as xr
###Output
_____no_output_____
###Markdown
Define convenience functions.
###Code
def format_posterior(posterior, param_names):
"""
Returns posterior samples as numpy.ndarray.
"""
n_chains = len(posterior.chain.values)
n_draws = len(posterior.draw.values)
n_parameters = len(posterior)
numpy_posterior = np.empty(shape=(n_chains * n_draws, n_parameters))
for param_id, param in enumerate(param_names):
# Get samples as numpy array
samples = posterior[param].values
# Flatten samples and make sure order is preserved
numpy_posterior[:, param_id] = samples.flatten()
return numpy_posterior
def load_inference_data(end_experiment):
"""
Returns the posterior samples and the model likelihoods. The posterior
samples for each model are formatted as numpy.ndarray of shape
(n_samples, n_parameters).
"""
# Specify model names
model_names = ['K_model', 'KP_model', 'KR_model']
# Define a map from model names to names in saved posteriors
parameter_names = [
[ # K model
'pooled_initial_bacterial_count',
'pooled_ec50',
'pooled_growth_rate',
'pooled_max_kill_rate',
'pooled_sigma_log'],
[ # KP model
'pooled_initial_bacterial_count',
'pooled_death_rate',
'pooled_growth_rate',
'pooled_kill_rate',
'pooled_rate_to_dividing',
'pooled_rate_to_nondividing',
'pooled_sigma_log'],
[ # KR model
'pooled_initial_bacterial_count',
'pooled_adapted_ec50',
'pooled_growth_rate',
'pooled_adapted_max_kill_rate',
'pooled_wild_type_kill_rate',
'pooled_mutation_rate',
'pooled_sigma_log']
]
# Load posteriors and AIC scores
n_models = len(model_names)
aic_scores = np.empty(shape=n_models)
parameters = []
directory = os.getcwd()
for id_m, model in enumerate(model_names):
# Load inference data
inf_data = xr.load_dataset(
directory + '/derived_data/%s_posterior_%dh.nc'
% (model, int(end_experiment))
)
parameters.append(format_posterior(inf_data, parameter_names[id_m]))
aic_scores[id_m] = inf_data.attrs['AIC score']
    # Compute AIC deltas (relative to the best model) to avoid numerical overflow
aic_scores -= np.min(aic_scores)
# Compute weights
weights = np.exp(-aic_scores / 2) / np.sum(np.exp(-aic_scores / 2))
return parameters, weights
###Output
_____no_output_____
###Markdown
Define candidate models. Recall that the data-generating model is the KR model. We will use the data-generating model (with the true model parameters) as a reference to evaluate the reliability of the model predictions.
###Code
# Define K model
directory = os.path.dirname(os.path.dirname(os.getcwd()))
k_model = chi.PharmacokineticModel(directory + '/models/K_model.xml')
k_model.set_administration(compartment='central')
k_model.set_parameter_names(names={
'myokit.bacterial_count': 'Initial bacterial count in CFU/ml',
'myokit.concentration_e50': 'EC 50 in ng/ml',
'myokit.growth_rate': 'Growth rate in 1/h',
'myokit.kappa': 'Max. kill rate in 1/h'})
k_model.set_outputs(['myokit.bacterial_count'])
k_model.set_output_names({
'myokit.bacterial_count': 'Bacterial count in CFU/ml'})
k_model = chi.ReducedMechanisticModel(
k_model)
k_model.fix_parameters({
'central.drug_amount': 0,
'dose.drug_amount': 0,
'myokit.bacterial_count_adapted': 0,
'dose.absorption_rate': 2.7, # in 1/h, Sanchez et al
'central.size': 3.7 * 70, # in L/kg, Sanchez et al for 70 kg
'myokit.elimination_rate': 0.17, # in 1/h, Sanchez et al
'myokit.gamma': 1})
# Define KP model
kp_model = chi.PharmacokineticModel(directory + '/models/KP_model.xml')
kp_model.set_administration(compartment='central', direct=False)
kp_model.set_parameter_names(names={
'myokit.bacterial_count_susceptible': 'Initial bacterial count in CFU/ml',
'myokit.death_rate': 'Death rate in 1/h',
'myokit.growth_rate': 'Growth rate in 1/h',
'myokit.kappa': 'Kill rate in ml/ng/h',
'myokit.transition_rate_12': 'Transition rate to dividing in 1/h',
'myokit.transition_rate_21': 'Transition rate to non-dividing in ml/ng/h'})
kp_model.set_outputs(['myokit.total_bacterial_count'])
kp_model.set_output_names({
'myokit.total_bacterial_count': 'Bacterial count in CFU/ml'})
kp_model = chi.ReducedMechanisticModel(
kp_model)
kp_model.fix_parameters({
'central.drug_amount': 0,
'dose.drug_amount': 0,
'myokit.bacterial_count_adapted': 0,
'dose.absorption_rate': 2.7, # in 1/h, Sanchez et al
'central.size': 3.7 * 70, # in L/kg, Sanchez et al for 70 kg
'myokit.elimination_rate': 0.17, # in 1/h, Sanchez et al
'myokit.gamma': 1})
# Define KR model
kr_model = chi.PharmacokineticModel(directory + '/models/KR_model.xml')
kr_model.set_administration(compartment='central', direct=False)
kr_model.set_parameter_names(names={
'myokit.bacterial_count_susceptible': 'Initial bacterial count in CFU/ml',
'myokit.concentration_e50_adapted': 'Adapted EC 50 in ng/ml',
'myokit.growth_rate': 'Growth rate in 1/h',
'myokit.kappa_adapted': 'Adapted max. kill rate in 1/h',
'myokit.kappa_susceptible': 'Wild type kill rate in ml/ng/h',
'myokit.mutation_rate': 'Mutation rate in ml/ng/h'})
kr_model.set_outputs(['myokit.total_bacterial_count'])
kr_model.set_output_names({
'myokit.total_bacterial_count': 'Bacterial count in CFU/ml'})
kr_model = chi.ReducedMechanisticModel(
kr_model)
kr_model.fix_parameters({
'central.drug_amount': 0,
'dose.drug_amount': 0,
'myokit.bacterial_count_adapted': 0,
'dose.absorption_rate': 2.7, # in 1/h, Sanchez et al
'central.size': 3.7 * 70, # in L/kg, Sanchez et al for 70 kg
'myokit.elimination_rate': 0.17, # in 1/h, Sanchez et al
'myokit.gamma': 1})
###Output
_____no_output_____
###Markdown
Predictions of log reduction doses. We want to predict the dose that leads after 24h to a 1-log (10-fold) reduction of the bacterial population. We consider a dosing regimen $r$ that administers ciprofloxacin orally twice a day. Find true reduction dose: The true 1-log reduction dose can be straightforwardly computed from the data-generating model by solving $$ \bar{y}(\psi , t=24, r) = \bar{y}_0 / 10$$ for the dosing regimen $r$, where $\bar{y}_0 = \bar{y}(\psi , t=0, r)$ is the true bacterial count at $t=0$.
###Code
class TrueXLogReductionDose(pints.ErrorMeasure):
"""
Defines a pints.ErrorMeasure that can be used to find the true
dose that leads to a X-log reduction at time t.
Can be called with a dose amount and returns the squared distance
of the log reduction at time t from the target log reduced bacterial
population.
"""
def __init__(self, log_reduction, time):
self._model = kr_model
self._parameters = [
1E6, # Initial bacterial count in CFU/ml
191, # Adapated EC50 in ng/ml
0.77, # Growth rate in 1/h
1.61, # Adapted max. kill rate in 1/h
0.003, # Wild type max. kill rate in 1/h
5E-5, # Max. mutation rate in 1/h
]
self._time = [float(time)]
self._target_reduction = -int(log_reduction)
def __call__(self, dose):
"""
Returns the squared error of the target population size to the
true population size at time t for the administered dose.
Input: dose in mg
"""
# Convert dose to ng
dose = dose[0] * 1E3
# Compute error
self._model.set_dosing_regimen(dose=dose, start=0, period=12)
predicted_pop = self._model.simulate(
parameters=self._parameters, times=self._time)[0, 0]
log_reduction = np.log10(predicted_pop / self._parameters[0])
squared_error = (self._target_reduction - log_reduction) ** 2
return squared_error
def n_parameters(self):
"""
Returns the number of parameters of the optimisation problem.
(We just want to find the dose, so it is 1)
"""
return 1
# Find true 1-log reduction dose
problem = TrueXLogReductionDose(log_reduction=1, time=24)
initial_guess = [200]
optimiser = pints.OptimisationController(
problem, x0=initial_guess, method=pints.NelderMead)
optimiser.set_log_to_screen(False)
true_dose, squared_error = optimiser.run()
print('True dose: ', true_dose)
print('Squared error: ', squared_error)
###Output
True dose: [163.82753914]
Squared error: 3.4902336099777226e-22
###Markdown
Predict reduction dose with the MAA approach. We can find the 1-log reduction dose analogously to the optimisation routine above, i.e. by solving $\bar{y}(\psi , t=24, r) = \bar{y}_0 / 10$. The main difference for MAA is that we have to compute the predicted bacterial count at time t as the weighted average of the mean predictions across the models $$ \bar{y}(\mathcal{D}, t, r) = \sum_m w_m\, \mathbb{E}_m [y | \mathcal{D}, t, r].$$ We will approximate the exact expectation of $y$ by an empirical average over the posterior samples $$ \mathbb{E}_m [y | \mathcal{D}, t, r] \approx \frac{1}{S} \sum _{s=1}^S y(\psi ^s, \sigma ^s, t, r),$$ where $S$ is the sample size and the superscript indexes the samples from the posterior distribution.
###Code
class MAAXLogReductionDose(pints.ErrorMeasure):
"""
Defines a pints.ErrorMeasure that can be used to find the MAA prediction of
the dose that leads to a X-log reduction at time t.
Can be called with a dose amount and returns the squared distance
of the log reduction at time t from the target log reduced bacterial
population.
"""
def __init__(self, log_reduction, time, end_experiment):
self._models = [k_model, kp_model, kr_model]
self._parameters, self._weights = load_inference_data(
end_experiment=end_experiment)
self._time = [float(time)]
self._target_reduction = -int(log_reduction)
self._initial_pop = 1E6
self._sample_size = 1000
# Make sure all models start with the correct initial population size
# shape: (n_samples, n_parameters)
self._parameters[0][:, 0] = self._initial_pop
self._parameters[1][:, 0] = self._initial_pop
self._parameters[2][:, 0] = self._initial_pop
def __call__(self, dose):
"""
Returns the squared error of the target population size to the
true population size at time t for the administered dose.
Input: dose in mg
"""
# Convert dose to ng
dose = dose[0] * 1E3
# Set dosing regimen
for model in self._models:
model.set_dosing_regimen(dose=dose, start=0, period=12)
# Compute error
predicted_pop = self._predict_population()
log_reduction = np.log10(predicted_pop / self._initial_pop)
squared_error = (self._target_reduction - log_reduction) ** 2
return squared_error
def _predict_population(self):
"""
Returns the predicted population size.
"""
# Estimate mean y for each model
mean_y = np.empty(shape=3)
for model_id, posterior in enumerate(self._parameters):
model = self._models[model_id]
mean_y[model_id] = self._predict_population_with_candidate_model(
model, posterior)
# Compute MAA prediction as weighted average of the means
predicted_pop = np.average(mean_y, weights=self._weights)
return predicted_pop
def _predict_population_with_candidate_model(self, model, posterior):
"""
Returns an estimate of the expected y for the model.
"""
# Predict y_bar for each posterior sample
y_bar = np.empty(shape=self._sample_size)
posterior_samples = posterior[
np.random.randint(0, len(posterior), size=self._sample_size)]
for sample_id, parameter_sample in enumerate(posterior_samples):
y_bar[sample_id] = model.simulate(
parameters=parameter_sample[:-1], times=self._time)[0, 0]
# Add noise to predict y
noise = np.random.normal(size=self._sample_size)
sigma = posterior_samples[:, -1]
y = y_bar * np.exp(-sigma**2/2 + sigma * noise)
return np.mean(y)
def n_parameters(self):
"""
Returns the number of parameters of the optimisation problem.
(We just want to find the dose, so it is 1)
"""
return 1
def predict_reduction_dose_maa(log_reduction, time, end_experiment):
"""
Returns the log-reduction dose associated with the time.
The end of the experiment specifies which model parameters are used,
i.e. how much data was available when the model parameters were learned.
"""
problem = MAAXLogReductionDose(log_reduction, time, end_experiment)
initial_guess = [200]
optimiser = pints.OptimisationController(
problem, x0=initial_guess, method=pints.NelderMead)
optimiser.set_log_to_screen(False)
optimiser.set_max_iterations(100)
dose, squared_error = optimiser.run()
return dose, squared_error
# Example: Find 1-log reduction dose after 30h
maa_dose, squared_error = predict_reduction_dose_maa(
log_reduction=1, time=24, end_experiment=30)
print('MAA dose prediction after learning from 30h of experiments: ', maa_dose)
print('Squared error: ', squared_error)
###Output
MAA dose prediction after learning from 30h of experiments: [163.12499993]
Squared error: 3.1740333255404387e-07
###Markdown
Predict 1-log reduction dose with MS and PAM. In order to predict the reduction dose with the candidate models in a way that reflects the remaining uncertainty, we solve conceptually the same optimisation problem as before $$ y(\mathcal{D}, t=24, r) = \bar{y}_0 / 10.$$ Note, however, that the limited data $\mathcal{D}$ leaves uncertainty about the bacterial count prediction, i.e. $p_m(y | \mathcal{D}, t=24, r)$. As a result, we will only be able to predict a distribution of probable reduction doses, which reflects the remaining uncertainty. One way to infer this distribution of probable reduction doses is to repeatedly sample a realisation of $y$ from $p_m(y | \mathcal{D}, t=24, r)$ and then to find the dose amount that satisfies $y = \bar{y}_0 / 10$ for this realisation. The PAM prediction of the reduction doses is given by the combined distributions of probable reduction doses of all likely candidate models.
###Code
class MSXLogReductionDose(pints.ErrorMeasure):
"""
Defines a pints.ErrorMeasure that can be used to find the
dose that leads to a X-log reduction at time t.
Can be called with a dose amount and returns the squared distance
of the log reduction at time t from the target log reduced bacterial
population.
"""
def __init__(self, model, log_reduction, time):
self._model = model
self._parameters = None
self._noise = None
self._sigma = None
self._time = [float(time)]
self._target_reduction = -int(log_reduction)
def __call__(self, dose):
"""
Returns the squared error of the target population size to the
true population size at time t for the administered dose.
Input: dose in mg
"""
# Convert dose to ng and set dosing regimen
dose = dose[0] * 1E3
self._model.set_dosing_regimen(dose=dose, start=0, period=12)
# Sample predicted y at t=24
y_bar = self._model.simulate(
parameters=self._parameters, times=self._time)[0, 0]
y = y_bar * np.exp(-self._sigma**2/2 + self._sigma * self._noise)
y0 = self._parameters[0]
log_reduction = np.log10(y / y0)
squared_error = (self._target_reduction - log_reduction) ** 2
return squared_error
def n_parameters(self):
"""
Returns the number of parameters of the optimisation problem.
(We just want to find the dose, so it is 1)
"""
return 1
def set_model_parameters(self, parameters):
"""
Sets the parameters of the candidate model.
Assumes that the first k-1 parameters are the parameters of the
candidate model, and the last parameter is the standard deviation
of the lognormal error.
"""
assert self._model.n_parameters() == (len(parameters) - 1)
self._parameters = parameters[:-1]
self._sigma = parameters[-1]
def set_noise(self, noise):
"""
Sets realisation of the noise. Expects a sample from a standard
Gaussian distribution.
"""
self._noise = float(noise)
def prediction_reduction_dose_pam(
log_reduction, time, end_experiment, weight_thresh=1E-3):
"""
Returns the log-reduction dose associated with the time.
The end of the experiment specifies which model parameters are used,
i.e. how much data was available when the model parameters were learned.
"""
n_samples = 2000
models = [k_model, kp_model, kr_model]
n_models = len(models)
initial_guess = [200]
tolerated_squared_error = 1E-6
# Sample reduction doses
doses = np.empty(shape=(n_models, n_samples))
posteriors, weights = load_inference_data(end_experiment)
for m_id, model in enumerate(models):
        # Skip models whose weight is below the threshold
if weights[m_id] < weight_thresh:
doses[m_id] = np.nan
continue
# Sample from posterior and sample noise realisations
posterior = posteriors[m_id]
posterior = posterior[
np.random.randint(0, len(posterior), size=n_samples)]
noise = np.random.normal(size=n_samples)
# Find reduction dose for sample
problem = MSXLogReductionDose(model, log_reduction, time)
for sample_id in range(n_samples):
problem.set_model_parameters(posterior[sample_id])
problem.set_noise(noise[sample_id])
optimiser = pints.OptimisationController(
problem, x0=initial_guess, method=pints.NelderMead)
optimiser.set_log_to_screen(False)
optimiser.set_max_iterations(100)
dose, squared_error = optimiser.run()
if squared_error < tolerated_squared_error:
doses[m_id, sample_id] = dose
else:
doses[m_id, sample_id] = np.nan
return doses, weights
# Example: Find 1-log reduction dose distribution after 30h with KR model
np.random.seed(1)
doses, weights = prediction_reduction_dose_pam(
log_reduction=1, time=24, end_experiment=30)
# Visualise reduction dose distribution
sns.kdeplot(
data=pd.DataFrame({'1-log reduction dose in mg': doses[2]}),
x='1-log reduction dose in mg', fill=True,
common_norm=False, alpha=.5, linewidth=1,
legend=False)
plt.show()
###Output
_____no_output_____
###Markdown
Predict 1-log reduction doses and 6-log reduction doses with all approaches (MS, MAA, PAM) for different amounts of available data
###Code
# Setup
time = 24
log_reductions = [1, 6]
experiment_durations = [10, 15, 20, 30]
n_experiments = len(experiment_durations)
weight_thresh = 1E-3
# Fix seed for reproducibility
np.random.seed(1)
# Find true log reduction doses
results = []
for log_reduction in log_reductions:
problem = TrueXLogReductionDose(log_reduction, time)
initial_guess = [200]
optimiser = pints.OptimisationController(
problem, x0=initial_guess, method=pints.NelderMead)
optimiser.set_log_to_screen(False)
true_dose, _ = optimiser.run()
# Predict reduction doses and visualise results
maa_doses = []
pam_doses = []
model_weights = []
for exp_id, end_experiment in enumerate(experiment_durations):
# Predict reduction dose with MAA
maa_dose, _ = predict_reduction_dose_maa(
log_reduction, time, end_experiment)
maa_doses.append(maa_dose)
# Predict reduction dose with PAM
pam_dose, weights = prediction_reduction_dose_pam(
log_reduction, time, end_experiment, weight_thresh)
pam_doses.append(pam_dose)
model_weights.append(weights)
# Store results
results.append([true_dose, maa_doses, pam_doses, model_weights])
# Plots results
fontsize = '20'
plt.rcParams['font.size'] = fontsize
fig, axes = plt.subplots(
2, n_experiments, figsize=(20, 10), sharey='row', sharex='row')
plt.subplots_adjust(wspace=0.1, hspace=0.3)
plt.xticks(fontsize=fontsize)
plt.yticks(fontsize=fontsize)
axes[0, 0].set_ylabel('Probability density', fontsize=fontsize)
axes[1, 0].set_ylabel('Probability density', fontsize=fontsize)
axes[0, 0].set_xlim([100, 240])
axes[1, 0].set_xlim([200, 625])
# Predict reduction doses and visualise results
for log_id, result in enumerate(results):
true_dose, maa_doses, pam_doses, model_weights = result
for exp_id, end_experiment in enumerate(experiment_durations):
# Add title to subfigure
if log_id == 0:
axes[log_id, exp_id].set_title('%dh:' % (end_experiment), y=1.7, x=0.1)
# Plot true dose response
axes[log_id, exp_id].axvline(
x=true_dose, color='black', linestyle='--', linewidth=3,
label='True')
# Plot PAM prediction: Dose distributions
df = []
colors = []
pam_dose = pam_doses[exp_id]
weights = model_weights[exp_id]
for model_id, weight in enumerate(weights):
# Skip models with insufficient weight
if weight < weight_thresh:
continue
dataframe = pd.DataFrame(
data=pam_dose[model_id], columns=['Dose in mg'])
dataframe['Model'] = ['K', 'KP', 'KR'][model_id]
df.append(dataframe)
colors.append(sns.color_palette()[model_id])
df = pd.concat(df, ignore_index=True)
sns.kdeplot(
data=df, x='Dose in mg', hue="Model", fill=True,
common_norm=False, alpha=.5, linewidth=1, ax=axes[log_id, exp_id],
palette=colors, legend=False)
# Plot PAM prediction: Weights
if log_id == 0:
# inset_axes = axes[log_id, exp_id].inset_axes(
# bounds=[0.575, 0.45, 0.4, 0.4])
inset_axes = axes[log_id, exp_id].inset_axes(
bounds=[0.3, 1.15, 0.4, 0.4])
n_models = 3
x_pos = np.arange(n_models)
colors = sns.color_palette()[:n_models]
inset_axes.bar(
x_pos, weights, align='center', alpha=0.5, color=colors,
edgecolor=colors)
inset_axes.set_xticks(x_pos)
inset_axes.set_xticklabels(['K', 'KP', 'KR'])
inset_axes.set_yticks([])
inset_axes.set_yticklabels([])
inset_axes.set_title('Weights')
inset_axes.set_ylim([0, 1.05])
# Highlight MS prediction
idx = 2 - np.argmax(weights)
axes[log_id, exp_id].collections[idx]._linewidths = np.array([3])
axes[log_id, exp_id].collections[idx]._original_edgecolor = np.array(
[[0., 0., 0., 1.]])
# Add MS label
if (log_id == 0) and (exp_id == 0):
# Add a line oustide visible range to add MS to legend
axes[0, 0].axvline(x=1E3, color='black', linewidth=3, label='MS')
# Plot MAA prediction
maa_dose = maa_doses[exp_id]
axes[log_id, exp_id].axvline(
x=maa_dose, color='darkred', linewidth=3, label='MAA')
# Label rows
right_ax = axes[0, -1].twinx()
right_ax.set_ylabel('1-log reduction', rotation=-90, labelpad=25)
right_ax.set_yticks([])
right_ax.set_yticklabels([])
right_ax = axes[1, -1].twinx()
right_ax.set_ylabel('6-log reduction', rotation=-90, labelpad=25)
right_ax.set_yticks([])
right_ax.set_yticklabels([])
# Set legend
axes[0, 0].legend(bbox_to_anchor=(4.95 ,1.4))
directory = os.getcwd()
plt.savefig(
directory + '/log_reduction_doses.pdf', bbox_inches='tight')
plt.show()
###Output
_____no_output_____ |
MBA_Earning_Comparison.ipynb | ###Markdown
###Code
import numpy as np
import matplotlib.pyplot as plt
############# ASSUMPTIONS ###############
#Age assumptions
Current_Age = 25
MBA_Age = 28
Retirement_Age = 65
#Initial Salary assumptions
base_salary = 90200
Total_comp = base_salary*(1.03)
#401k assumptions
Balance_401 = 51000 #initial 401k balance
Annual_ROR = 8 #interest rate for 401k
Personal_401_contribution = 15 #percent
Employer_401_contribution = 10 #percent
#Personal savings assumtions
Personal_savings = 15000 #initial savings
Personal_savings_contribution = 15 #percent of salary saved each year, non 401k
############# COMMON PATH (MBA / Current) ###############
#Common calculation until start of MBA
#Paths diverge in Fall 2024 @ age 28
#Update values / compound monthly
num_months_common = (MBA_Age-Current_Age)*12
num_months_total = (Retirement_Age-Current_Age)*12
common_401 = np.zeros(num_months_total+1)
common_401[0] = Balance_401
common_savings = np.zeros(num_months_total+1)
common_savings[0] = Personal_savings
net_worth = np.zeros(num_months_total+1)
net_worth[0] = common_savings[0] + common_401[0]
salary_common = np.zeros(num_months_total+1)
salary_common[0] = base_salary
total_comp_common = np.zeros(num_months_total+1)
total_comp_common[0] = Total_comp
for month in range(1,num_months_common+1):
common_401[month] = common_401[month-1]*(1+ Annual_ROR/(12*100)) + (Personal_401_contribution/100)*Total_comp/12 + (Employer_401_contribution/100)*Total_comp/12
common_savings[month] = common_savings[month-1] + (Personal_savings_contribution/100)*Total_comp/12
net_worth[month] = common_savings[month] + common_401[month]
if (month % 12 == 0): #update salary each year by 3%
base_salary = base_salary*(1.03)
Total_comp = base_salary*(1.03)
salary_common[month] = base_salary
total_comp_common[month] = Total_comp
#print(common_401)
#print(common_savings)
############# STAY WITH CURRENT PATH ###############
#Assumptions
Promotion_every = 4 #promotion every 4 years
Promotion_amount = 15000
num_months = num_months_total - num_months_common
current_path_401 = np.copy(common_401)
current_path_savings = np.copy(common_savings)
current_path_net_worth = np.copy(net_worth)
current_path_salary = np.copy(salary_common)
current_path_total_comp = np.copy(total_comp_common)
for month in range(num_months_common+1,num_months_total+1):
current_path_401[month] = current_path_401[month-1]*(1+ Annual_ROR/(12*100)) + (Personal_401_contribution/100)*Total_comp/12 + (Employer_401_contribution/100)*Total_comp/12
current_path_savings[month] = current_path_savings[month-1] + (Personal_savings_contribution/100)*Total_comp/12
current_path_net_worth[month] = current_path_401[month] + current_path_savings[month]
if (month % (Promotion_every*12) == 0): #update salary
base_salary += Promotion_amount
elif (month % 12 == 0):
base_salary = base_salary*(1.03)
Total_comp = base_salary*(1.03)
current_path_salary[month] = base_salary
current_path_total_comp[month] = Total_comp
############# CHOOSE MBA ###############
#MBA assumptions
Annual_MBA_Total_Cost = 111818 #https://poetsandquants.com/2020/11/30/what-a-harvard-mba-now-costs-2/?pq-category=business-school-news&pq-category-2=financing-your-mba
base_salary = 150000
Total_comp = (base_salary)*1.2 #big bonus for MBA, 30k typical for 150k
MBA_path_401 = np.copy(common_401)
MBA_path_savings = np.copy(common_savings)
MBA_path_net_worth = np.copy(net_worth)
MBA_path_salary = np.copy(salary_common)
MBA_path_total_comp = np.copy(total_comp_common)
end_MBA = 2*12 + num_months_common+1 #2 year full time MBA
#During MBA, losing money
for month in range(num_months_common+1,end_MBA):
MBA_path_401[month] = MBA_path_401[month-1]*(1+ Annual_ROR/(12*100)) #Accumulates interest, no contributions
    MBA_path_savings[month] = MBA_path_savings[month-1] - Annual_MBA_Total_Cost/12  # draw down savings to pay tuition and costs
MBA_path_net_worth[month] = MBA_path_401[month] + MBA_path_savings[month]
MBA_path_salary[month] = 0
MBA_path_total_comp[month] = 0
#Post MBA to 10 years out -> https://www.reddit.com/r/MBA/comments/ipvy7l/what_are_salaries_like_10_years_post_mba/
mid_career_base_salary = 350000 #Approximate w/ linear increase from initial base to mid career base
years_to_mid_career = 15
salary_MBA = np.linspace(base_salary,mid_career_base_salary,years_to_mid_career*12) #reach mid career salary 15 years in
Total_comp_MBA = np.linspace(Total_comp,mid_career_base_salary*1.2,years_to_mid_career*12) #reach mid career salary 15 years in
i = 0
for month in range(end_MBA,num_months_total+1):
if i < years_to_mid_career*12:
Total_comp = Total_comp_MBA[i]
base_salary = salary_MBA[i]
elif (i == years_to_mid_career*12):
Total_comp = Total_comp_MBA[-1] #Reached mid career salary
base_salary = salary_MBA[-1]
elif (i > years_to_mid_career*12 and i % 12 == 0):
Total_comp = Total_comp*1.03 #3% raise each year
base_salary = base_salary*1.03
MBA_path_401[month] = MBA_path_401[month-1]*(1+ Annual_ROR/(12*100)) + (Personal_401_contribution/100)*Total_comp/12 + (Employer_401_contribution/100)*Total_comp/12
MBA_path_savings[month] = MBA_path_savings[month-1] + (Personal_savings_contribution/100)*Total_comp/12
MBA_path_net_worth[month] = MBA_path_401[month] + MBA_path_savings[month]
MBA_path_salary[month] = base_salary
MBA_path_total_comp[month] = Total_comp
i+=1
############# ANALYSIS ###############
age = np.arange(0,num_months_total+1)/12 + Current_Age
age_next_10 = np.arange(0,10*12)/12 + Current_Age
fig = plt.figure()
plt.plot(age,current_path_net_worth/1e6,'b',label="Current Path")
plt.plot(age,MBA_path_net_worth/1e6,'r',label="Harvard/MIT MBA Path")
plt.xlabel("Age (years)")
plt.ylabel("Dollars (millions)")
plt.legend()
plt.title("Net Worth")
fig = plt.figure()
plt.plot(age_next_10,current_path_net_worth[0:len(age_next_10)]/1e6,'b',label="Current Path")
plt.plot(age_next_10,MBA_path_net_worth[0:len(age_next_10)]/1e6,'r',label="Harvard/MIT MBA Path")
plt.xlabel("Age (years)")
plt.ylabel("Dollars (millions)")
plt.legend()
plt.title("Net Worth = Next 10 Years")
fig = plt.figure()
plt.plot(age,current_path_total_comp/1e5,'b',label="Current Path Total Comp")
plt.plot(age,current_path_salary/1e5,'b--',label="Current Path Salary")
plt.plot(age,MBA_path_total_comp/1e5,'r',label="MBA Path Total Comp")
plt.plot(age,MBA_path_salary/1e5,'r--',label="MBA Path Salary")
plt.xlabel("Age (years)")
plt.ylabel("Salary (hundred thousands)")
plt.legend()
plt.title("Salary / Total Compensation")
fig = plt.figure()
plt.plot(age,current_path_401/1e6,'b',label="Current Path")
plt.plot(age,MBA_path_401/1e6,'r',label="Harvard/MIT MBA Path")
plt.xlabel("Age (years)")
plt.ylabel("Dollars (millions)")
plt.legend()
plt.title("401k Balance")
fig = plt.figure()
plt.plot(age,current_path_savings/1e6,'b',label="Current Path")
plt.plot(age,MBA_path_savings/1e6,'r',label="Harvard/MIT MBA Path")
plt.xlabel("Age (years)")
plt.ylabel("Dollars (millions)")
plt.legend()
plt.title("Personal Savings + Loans")
###Output
_____no_output_____ |
2ndMeetup-Resources/Demo/MatplotLib/AnatomyOfMatplotlib-master/AnatomyOfMatplotlib-Part5-Artists.ipynb | ###Markdown
Artists. Anything that can be displayed in a Figure is an [`Artist`](http://matplotlib.org/users/artists.html). There are two main classes of Artists: primitives and containers. Below is a sample of these primitives.
###Code
"""
Show examples of matplotlib artists
http://matplotlib.org/api/artist_api.html
Several examples of standard matplotlib graphics primitives (artists)
are drawn using matplotlib API. Full list of artists and the
documentation is available at
http://matplotlib.org/api/artist_api.html
Copyright (c) 2010, Bartosz Telenczuk
License: This work is licensed under the BSD. A copy should be
included with this source code, and is also available at
http://www.opensource.org/licenses/bsd-license.php
"""
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.collections import PatchCollection
import matplotlib.path as mpath
import matplotlib.patches as mpatches
import matplotlib.lines as mlines
fig, ax = plt.subplots(1, 1, figsize=(7,7))
# create 3x3 grid to plot the artists
pos = np.mgrid[0.2:0.8:3j, 0.2:0.8:3j].reshape(2, -1)
patches = []
# add a circle
art = mpatches.Circle(pos[:, 0], 0.1, ec="none")
patches.append(art)
plt.text(pos[0, 0], pos[1, 0] - 0.15, "Circle", ha="center", size=14)
# add a rectangle
art = mpatches.Rectangle(pos[:, 1] - [0.025, 0.05], 0.05, 0.1, ec="none")
patches.append(art)
plt.text(pos[0, 1], pos[1, 1] - 0.15, "Rectangle", ha="center", size=14)
# add a wedge
wedge = mpatches.Wedge(pos[:, 2], 0.1, 30, 270, ec="none")
patches.append(wedge)
plt.text(pos[0, 2], pos[1, 2] - 0.15, "Wedge", ha="center", size=14)
# add a Polygon
polygon = mpatches.RegularPolygon(pos[:, 3], 5, 0.1)
patches.append(polygon)
plt.text(pos[0, 3], pos[1, 3] - 0.15, "Polygon", ha="center", size=14)
#add an ellipse
ellipse = mpatches.Ellipse(pos[:, 4], 0.2, 0.1)
patches.append(ellipse)
plt.text(pos[0, 4], pos[1, 4] - 0.15, "Ellipse", ha="center", size=14)
#add an arrow
arrow = mpatches.Arrow(pos[0, 5] - 0.05, pos[1, 5] - 0.05, 0.1, 0.1, width=0.1)
patches.append(arrow)
plt.text(pos[0, 5], pos[1, 5] - 0.15, "Arrow", ha="center", size=14)
# add a path patch
Path = mpath.Path
verts = np.array([
(0.158, -0.257),
(0.035, -0.11),
(-0.175, 0.20),
(0.0375, 0.20),
(0.085, 0.115),
(0.22, 0.32),
(0.3, 0.005),
(0.20, -0.05),
(0.158, -0.257),
])
verts = verts - verts.mean(0)
codes = [Path.MOVETO,
Path.CURVE4, Path.CURVE4, Path.CURVE4, Path.LINETO,
Path.CURVE4, Path.CURVE4, Path.CURVE4, Path.CLOSEPOLY]
path = mpath.Path(verts / 2.5 + pos[:, 6], codes)
patch = mpatches.PathPatch(path)
patches.append(patch)
plt.text(pos[0, 6], pos[1, 6] - 0.15, "PathPatch", ha="center", size=14)
# add a fancy box
fancybox = mpatches.FancyBboxPatch(
pos[:, 7] - [0.025, 0.05], 0.05, 0.1,
boxstyle=mpatches.BoxStyle("Round", pad=0.02))
patches.append(fancybox)
plt.text(pos[0, 7], pos[1, 7] - 0.15, "FancyBoxPatch", ha="center", size=14)
# add a line
x,y = np.array([[-0.06, 0.0, 0.1], [0.05,-0.05, 0.05]])
line = mlines.Line2D(x+pos[0, 8], y+pos[1, 8], lw=5.)
plt.text(pos[0, 8], pos[1, 8] - 0.15, "Line2D", ha="center", size=14)
collection = PatchCollection(patches)
ax.add_collection(collection)
ax.add_line(line)
ax.set_axis_off()
plt.show()
###Output
_____no_output_____
###Markdown
Containers are objects like *Figure* and *Axes*. Containers are given primitives to draw. The plotting functions we discussed back in Parts 1 & 2 are convenience functions that generate these primitives and place them into the appropriate containers. In fact, most of those functions will return artist objects (or a list of artist objects) as well as store them into the appropriate axes container. As discussed in Part 3, there is a wide range of properties that can be defined for your plots. These properties are processed and applied to their primitives. Ultimately, you can override anything you want just by directly setting a property on the object itself.
###Code
fig, ax = plt.subplots(1, 1)
lines = plt.plot([1, 2, 3, 4], [1, 2, 3, 4], 'b', [1, 2, 3, 4], [4, 3, 2, 1], 'r')
lines[0].set(linewidth=5)
lines[1].set(linewidth=10, alpha=0.7)
plt.show()
###Output
_____no_output_____
###Markdown
To see what properties are set for an artist, use [`getp()`](https://matplotlib.org/api/artist_api.htmlfunctions)
###Code
fig = plt.figure()
print(plt.getp(fig.patch))
plt.close(fig)
###Output
aa = False
agg_filter = None
alpha = None
animated = False
antialiased or aa = False
bbox = Bbox(x0=0.0, y0=0.0, x1=1.0, y1=1.0)
capstyle = butt
children = []
clip_box = None
clip_on = True
clip_path = None
contains = None
data_transform = BboxTransformTo( TransformedBbox( Bbox...
ec = (1.0, 1.0, 1.0, 1)
edgecolor or ec = (1.0, 1.0, 1.0, 1)
extents = Bbox(x0=0.0, y0=0.0, x1=640.0, y1=480.0)
facecolor or fc = (1.0, 1.0, 1.0, 1)
fc = (1.0, 1.0, 1.0, 1)
figure = Figure(640x480)
fill = True
gid = None
hatch = None
height = 1
joinstyle = miter
label =
linestyle or ls = solid
linewidth or lw = 0.0
ls = solid
lw = 0.0
patch_transform = CompositeGenericTransform( BboxTransformTo( ...
path = Path(array([[0., 0.], [1., 0.], [1.,...
path_effects = []
picker = None
rasterized = None
sketch_params = None
snap = None
transform = CompositeGenericTransform( CompositeGenericTra...
transformed_clip_path_and_affine = (None, None)
url = None
verts = [[ 0. 0.] [640. 0.] [640. 480.] [ 0. 480....
visible = True
width = 1
window_extent = Bbox(x0=0.0, y0=0.0, x1=640.0, y1=480.0)
x = 0
xy = (0, 0)
y = 0
zorder = 1
None
###Markdown
Collections. In addition to the Figure and Axes containers, there is another special type of container called a [`Collection`](http://matplotlib.org/api/collections_api.html). A Collection usually contains a list of primitives of the same kind that should all be treated similarly. For example, a [`CircleCollection`](http://matplotlib.org/api/collections_api.html#matplotlib.collections.CircleCollection) would have a list of [`Circle`](https://matplotlib.org/api/_as_gen/matplotlib.patches.Circle.html) objects all with the same color, size, and edge width. Individual property values for artists in the collection can also be set (in some cases).
###Code
from matplotlib.collections import LineCollection
fig, ax = plt.subplots(1, 1)
# A collection of 3 lines
lc = LineCollection([[(4, 10), (16, 10)],
[(2, 2), (10, 15), (6, 7)],
[(14, 3), (1, 1), (3, 5)]])
lc.set_color('r')
lc.set_linewidth(5)
ax.add_collection(lc)
ax.set_xlim(0, 18)
ax.set_ylim(0, 18)
plt.show()
# Now set individual properties in a collection
fig, ax = plt.subplots(1, 1)
lc = LineCollection([[(4, 10), (16, 10)],
[(2, 2), (10, 15), (6, 7)],
[(14, 3), (1, 1), (3, 5)]])
lc.set_color(['r', 'blue', (0.2, 0.9, 0.3)])
lc.set_linewidth([4, 3, 6])
ax.add_collection(lc)
ax.set_xlim(0, 18)
ax.set_ylim(0, 18)
plt.show()
###Output
_____no_output_____
###Markdown
There are other kinds of collections that are not just simply a list of primitives, but are Artists in their own right. These special kinds of collections take advantage of various optimizations that can be assumed when rendering similar or identical things. You use these collections all the time whether you realize it or not! Markers are implemented this way (so, whenever you do `plot()` or `scatter()`, for example).
###Code
from matplotlib.collections import RegularPolyCollection
fig, ax = plt.subplots(1, 1)
offsets = np.random.rand(20, 2)
collection = RegularPolyCollection(
numsides=5, # a pentagon
sizes=(150,),
offsets=offsets,
transOffset=ax.transData,
)
ax.add_collection(collection)
plt.show()
###Output
_____no_output_____
###Markdown
Exercise 5.1: Give yourselves 4 gold stars! Hint: [StarPolygonCollection](http://matplotlib.org/api/collections_api.html#matplotlib.collections.StarPolygonCollection)
###Code
# %load exercises/5.1-goldstar.py
from matplotlib.collections import StarPolygonCollection
fig, ax = plt.subplots(1, 1)
collection = StarPolygonCollection(5,
offsets=[(0.5, 0.5)],
transOffset=ax.transData)
ax.add_collection(collection)
plt.show()
from matplotlib.collections import StarPolygonCollection
fig, ax = plt.subplots(1, 1)
collection = StarPolygonCollection(5,
offsets=[(0.5, 0.5)],
transOffset=ax.transData)
ax.add_collection(collection)
plt.show()
###Output
_____no_output_____ |
pydata2017-astropy/1-units/Astropy-Units.ipynb | ###Markdown
Astropy Units, Quantities, and Constants Astropy includes a powerful framework for representing, converting, and displaying units that works with both scalar numbers and arrays. The object that contains both numeric values and associated units is the `Quantity` object. `Quantity` objects support arithmetic with other `Quantity` objects and numbers, and work naturally with many `numpy` functions, all the while keeping track of the units for you. For more detailed information about the features presented below, please see the [astropy.units](http://docs.astropy.org/en/stable/units/index.html) documentation. Note that this tutorial assumes you have little or no knowledge of astropy units. If you're already moderately familiar with them and are interested in some more complex examples, you might instead prefer the more advanced units tutorial at the [astropy-tutorials](http://tutorials.astropy.org) website. Imports:
###Code
# We'll need these packages for the examples below:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Representing units First, we need to import the astropy units subpackage (**`astropy.units`**). We will be using units in many expressions, so we want to have a concise way to refer to the units package; the typical convention is to import this as the letter **`u`**. Keep in mind that this would conflict with any variable named **`u`**:
###Code
import astropy.units as u
###Output
_____no_output_____
###Markdown
Units can then be accessed simply as `u.<unit name>`. For example, the meter unit is:
###Code
u.meter
###Output
_____no_output_____
###Markdown
Units have docstrings, which give some explanatory text about them:
###Code
u.meter.__doc__
u.parsec.__doc__
###Output
_____no_output_____
###Markdown
and a physical type:
###Code
u.meter.physical_type
u.second.physical_type
###Output
_____no_output_____
###Markdown
Many units also have other names or abbreviations:
###Code
u.m.names
u.m
u.parsec.names
u.pc
u.arcsec.names
u.arcsecond
###Output
_____no_output_____
###Markdown
All SI prefixes are also defined for a given unit, which also may have short names:
###Code
u.kilometer
u.kilometer.names
u.yoctobarn
u.microJansky * u.fortnight
###Output
_____no_output_____
###Markdown
SI and cgs units are available by default, but Imperial units require the **`imperial`** prefix:
###Code
# this is not defined
u.inch
# use this
u.imperial.inch, u.imperial.mile
###Output
_____no_output_____
###Markdown
Please see the complete list of [available units](https://astropy.readthedocs.org/en/stable/units/index.htmlmodule-astropy.units.si). You may have noticed that, in the Jupyter notebook, units render using latex:
###Code
u.micrometer
###Output
_____no_output_____
###Markdown
In a terminal, they instead default to plain text:
###Code
print(u.micrometer)
###Output
_____no_output_____
###Markdown
Composite units Composite units are created using Python numeric operators, e.g. "`*`" (multiplication), "`/`" (division), and "`**`" (power).
###Code
u.km / u.s
u.imperial.mile / u.h
(u.eV * u.Mpc) / u.Gyr
u.cm**3
u.m / u.kg / u.s**2
###Output
_____no_output_____
###Markdown
``Quantity`` objects. The most useful feature of units is the ability to attach them to scalars or arrays, creating `Quantity` objects. A `Quantity` object contains both a value and a unit. The easiest way to create a `Quantity` object is by multiplying the value with a unit:
###Code
# this produces a Quantity object:
q = 3.7 * u.au
type(q)
q
###Output
_____no_output_____
###Markdown
A completely equivalent (but more verbose) way of doing the same thing is to use the `Quantity` object's initializer, demonstrated below. In general, the simpler form (above) is preferred, as it is closer to how such a quantity would actually be written in text. The initalizer form has more options, though, which you can learn about from the [astropy reference documentation on Quantity](http://docs.astropy.org/en/stable/api/astropy.units.quantity.Quantity.html).
###Code
u.Quantity(3.7, unit=u.au)
###Output
_____no_output_____
###Markdown
Quantities are extremely useful with array data. You can create an array Quantity by multiplying a container type (e.g., a list) or a `numpy` array by a unit object:
###Code
[1., 2., 3.] * u.pc / u.yr
###Output
_____no_output_____
###Markdown
is equivalent to:
###Code
x = np.array([1.2, 6.8, 3.7]) * u.pc / u.year
x
###Output
_____no_output_____
###Markdown
We can also use the `numpy` array-generating functions multiplied by a unit:
###Code
np.random.normal(0, 1., size=10) * u.km
###Output
_____no_output_____
###Markdown
`Quantity` attributes The units and value of a `Quantity` can be accessed separately via the ``value`` and ``unit`` attributes:
###Code
q = 5. * u.Mpc
q
q.value
q.unit
x = [1.2, 6.8, 3.7] * u.pc / u.year
x
x.value
x.unit
###Output
_____no_output_____
###Markdown
`Quantity` Arithmetic Operations. "`*`" (multiplication), "`/`" (division), and "`**`" (power) operations can be performed on `Quantity` objects with `float`/`int` values.
###Code
q = 3.1 * u.km
q * 2
q / 2.
q ** 2
###Output
_____no_output_____
###Markdown
Combining Quantities. Quantities can be combined using Python numeric operators. Addition and subtraction are supported if the units are compatible:
###Code
x1 = 3 * u.km
x2 = 500 * u.m
x1 + x2
# but this won't work:
x1 = 3 * u.m
x2 = 5 * u.s
x1 + x2
###Output
_____no_output_____
###Markdown
Division and multiplication will always work:
###Code
q1 = 3. * u.m / u.s
q1
q2 = 5. * u.cm / u.s / u.g**2
q2
q1 * q2
q1 / q2 # note the "seconds" cancelled out
###Output
_____no_output_____
###Markdown
This also works with arrays:
###Code
x = [1.2, 6.8, 3.7] * u.pc / u.year
x * 3 # elementwise multiplication
###Output
_____no_output_____
###Markdown
Converting units. Units can be converted to other equivalent units.
###Code
q = 2.5 * u.year
q
q.to(u.s)
(7. * u.deg**2).to(u.steradian)
(550. * u.imperial.mile / u.hr).to(u.km / u.s)
q1 = 3. * u.m / u.s
q2 = 5. * u.cm / u.s / u.g**2
q1 * q2
(q1 * q2).to( (u.m / u.kg / u.s)**2 )
###Output
_____no_output_____
###Markdown
**Important Note**: Converting a unit (not a `Quantity`) gives only the scale factor:
###Code
u.Msun.to(u.kg)
###Output
_____no_output_____
###Markdown
To keep the units, use a `Quantity` (value and unit) object:
###Code
(1. * u.Msun).to(u.kg)
###Output
_____no_output_____
###Markdown
Decomposing units The units of a `Quantity` object can be decomposed into a set of *base units* using the ``decompose()`` method. By default, units will be decomposed to SI unit bases:
###Code
q = 8. * u.cm * u.pc / u.g / u.year**2
q
q.decompose()
###Output
_____no_output_____
###Markdown
To decompose into cgs unit bases:
###Code
q.decompose(u.cgs.bases)
u.cgs.bases
u.si.bases
###Output
_____no_output_____
###Markdown
Units will not cancel out unless they are identical:
###Code
q = 7 * u.m / (7 * u.km)
q
###Output
_____no_output_____
###Markdown
But they will cancel by using the `decompose()` method:
###Code
x = q.decompose()
x # this is a "dimensionless" Quantity
repr(x.unit)
###Output
_____no_output_____
###Markdown
Formatting units. Units or quantities can be formatted using Python string formatting:
###Code
q = 15.5872359234 * u.km/u.s
'{0:.2f}'.format(q)
###Output
_____no_output_____
###Markdown
You can also get out latex source for units, which helps generate plot labels, e.g.:
###Code
x = [1., 5., 8., 21.] * u.km
v = [13., 2.4, 1., 0.3] * u.km/u.s
plt.scatter(x, v)
plt.xlabel('$x$ [{0:latex_inline}]'.format(x.unit))
plt.ylabel('$v$ [{0:latex_inline}]'.format(v.unit))
###Output
_____no_output_____
###Markdown
Integration with Numpy functions Most [Numpy](http://www.numpy.org) functions understand `Quantity` objects:
###Code
np.sin(np.deg2rad(30)) # np.sin assumes the input is in radians
np.sin(30 * u.degree) # awesome!
q = 100 * u.kg * u.kg
np.sqrt(q)
x = np.arange(10) * u.km
x
np.mean(x)
###Output
_____no_output_____
###Markdown
Some numpy functions require dimensionless quantities.
###Code
np.log10(4 * u.m) # this doesn't make sense
np.log10(4 * u.m / (4 * u.km)) # note the units cancelled
###Output
_____no_output_____
###Markdown
Care needs to be taken with dimensionless units. For example, passing ordinary values to an inverse trigonometric function gives a result without units:
###Code
np.arcsin(1.0)
###Output
_____no_output_____
###Markdown
`u.dimensionless_unscaled` creates a ``Quantity`` with a "dimensionless unit" and therefore gives a result *with* units:
###Code
np.arcsin(1.0 * u.dimensionless_unscaled)
np.arcsin(1.0 * u.dimensionless_unscaled).to(u.degree)
###Output
_____no_output_____
###Markdown
**Important Note:** In-place array operations will silently drop the unit:
###Code
a = np.arange(10.)
a *= 1.0 * u.kg # in-place operator
a
###Output
_____no_output_____
###Markdown
Assign to a *new* array instead:
###Code
a = a * 1.0 * u.kg
a
###Output
_____no_output_____
###Markdown
Also, Quantities lose their units with some Numpy operations, e.g.: `np.append`, `np.dot`, `np.hstack`, `np.vstack`, `np.where`, `np.choose`, `np.vectorize`. See [Quantity Known Issues](http://docs.astropy.org/en/stable/known_issues.html#quantities-lose-their-units-with-some-operations) for more details. Defining new units. You can also define custom units for something that isn't built in to astropy. Let's define a unit called **"sol"** that represents a Martian day.
###Code
sol = u.def_unit('sol', 1.0274912510 * u.day)
(1. * u.yr).to(sol) # 1 Earth year in Martian sol units
###Output
_____no_output_____
###Markdown
Now let's define Mark Watney's favorite unit, the [**Pirate-Ninja**](https://en.wikipedia.org/wiki/List_of_humorous_units_of_measurementPirate_Ninja):
###Code
pirate_ninja = u.def_unit('☠️👤', 1.0 * u.kW * u.hr / sol)
5.2 * pirate_ninja
# Mars oxygenator power requirement (for 6 people)
(44.1 * pirate_ninja).to(u.W)
###Output
_____no_output_____
###Markdown
Using physical constants The [astropy.constants](http://docs.astropy.org/en/v0.2.1/constants/index.html) module contains physical constants relevant for astronomy. They are defined as ``Quantity`` objects using the ``astropy.units`` framework.
###Code
from astropy.constants import G, c, R_earth
G
c
R_earth
###Output
_____no_output_____
###Markdown
Constants are Quantities, thus they can be converted to other units:
###Code
R_earth.to(u.km)
###Output
_____no_output_____
###Markdown
Please see the complete list of [available physical constants](http://docs.astropy.org/en/stable/constants/index.htmlmodule-astropy.constants). Additions are welcome! EquivalenciesEquivalencies can be used to convert quantities that are not strictly the same physical type, but in a specific context are nterchangable. A familiar physics example is the mass-energy equivalency: strictly these are different physical types, but it is often understood that you can convert between the two using $E=mc^2$:
###Code
from astropy.constants import m_p # proton mass
# this raises an error because mass and energy are different units
(m_p).to(u.eV)
# this succeeds, using equivalencies
(m_p).to(u.MeV, u.mass_energy())
###Output
_____no_output_____
###Markdown
This concept extends further in `astropy.units` to include some common practical astronomy situations where the units have no direct physical connection, but it is often useful to have a "quick shorthand". For example, astronomical spectra are often given as a function of wavelength, frequency, or even energy of the photon. Suppose, for instance, you want to find the Lyman-limit wavelength:
###Code
# this raises an error
(13.6 * u.eV).to(u.Angstrom)
###Output
_____no_output_____
###Markdown
Normally, one can convert `u.eV` only to the following units:
###Code
u.eV.find_equivalent_units()
###Output
_____no_output_____
###Markdown
But by using a spectral equivalency, one can also convert `u.eV` to the following units:
###Code
u.eV.find_equivalent_units(equivalencies=u.spectral())
(13.6 * u.eV).to(u.Angstrom, equivalencies=u.spectral())
###Output
_____no_output_____
###Markdown
Or if you remember the 21cm HI line, but can't remember the frequency, you could do:
###Code
(21. * u.cm).to(u.GHz, equivalencies=u.spectral())
###Output
_____no_output_____
###Markdown
To go one step further, the units of a spectrum's *flux* are further complicated by being dependent on the units of the spectrum's "x-axis" (i.e., $f_{\lambda}$ for flux per unit wavelength, and $f_{\nu}$ for flux per unit frequency). `astropy.units` supports this use case, but it is necessary to supply the location in the spectrum where the conversion is done:
###Code
f_lambda = (1e-18 * u.erg / u.s / u.cm**2 / u.AA)
f_lambda
f_nu = f_lambda.to(u.uJy, equivalencies=u.spectral_density(1. * u.um))
f_nu
###Output
_____no_output_____
###Markdown
There's a lot of flexibility with equivalencies, including a variety of other useful built-in equivalencies. So if you want to know more, you might want to check out the [equivalencies narrative documentation](http://docs.astropy.org/en/stable/units/equivalencies.html) or the [astropy.units.equivalencies reference docs](http://docs.astropy.org/en/stable/units/index.htmlmodule-astropy.units.equivalencies). Putting it all together A simple exampleLet's estimate the (circular) orbital speed of the Earth around the Sun using Kepler's Law:$$v = \sqrt{\frac{G M_{\odot}}{r}}$$
###Code
v = np.sqrt(G * 1 * u.M_sun / (1 * u.au))
v
###Output
_____no_output_____
###Markdown
That's a velocity unit... but it sure isn't obvious when you look at it!Let's use a variety of the available quantity methods to get something more sensible:
###Code
v.decompose() # remember the default uses SI bases
v.decompose(u.cgs.bases)
v.to(u.km / u.s)
v.to(u.imperial.mile / u.hr)
###Output
_____no_output_____
###Markdown
Exercise 1The *James Webb Space Telescope (JWST)* will be located at the second Sun-Earth Lagrange (L2) point: ☀️ 🌎 🛰 *(not to scale)*L2 is located at a distance from the Earth (opposite the Sun) of approximately:$$ r \approx R \left(\frac{M_{earth}}{3 M_{sun}}\right) ^{(1/3)} $$where $R$ is the Sun-Earth distance.Calculate the Earth-L2 distance in kilometers and miles.*Hints*:* $M_{earth}$ and $M_{sun}$ are defined [constants](http://docs.astropy.org/en/stable/constants/reference-api) * the mile unit is defined as ``u.imperial.mile`` (see [imperial units](http://docs.astropy.org/en/v0.2.1/units/index.htmlmodule-astropy.units.imperial))
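If you want to check your approach, here is one possible sketch, importing the two masses from `astropy.constants` as the hint suggests (the variable names are just illustrative):
```python
from astropy.constants import M_earth, M_sun
R = 1 * u.au
d_L2 = R * (M_earth / (3 * M_sun))**(1./3.)
d_L2.to(u.km), d_L2.to(u.imperial.mile)
```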
###Code
# answer here (km)
# answer here (mile)
###Output
_____no_output_____
###Markdown
Exercise 2The L2 point is about 1.5 million kilometers away from the Earth opposite the Sun.The total mass of the *James Webb Space Telescope (JWST)* is about 6500 kg.Using the value you obtained above for the Earth-L2 distance, calculate the gravitational force in Newtons between * *JWST* (at L2) and the Earth* *JWST* (at L2) and the Sun*Hint*: the gravitational force between two masses separated by a distance *r* is:$$ F_g = \frac{G m_1 m_2}{r^2} $$
###Code
# answer here (Earth)
# answer here (Sun)
###Output
_____no_output_____
###Markdown
Advanced Example: little *h* For this example, we'll consider how to support the practice of defining units in terms of the dimensionless Hubble constant $h=h_{100}=\frac{H_0}{100 \, {\rm km/s/Mpc}} $. We use the name 'h100' to differentiate from "h" as in "hours".
###Code
# define the h100 and h70 units
h100 = u.def_unit(['h100', 'littleh'])
h70 = u.def_unit('h70', h100 * 100. / 70)
# add as equivalent units
u.add_enabled_units([h100, h70])
h100.find_equivalent_units()
###Output
_____no_output_____
###Markdown
Define the Hubble constant ($H_0$) in terms of ``h100``:
###Code
H = 100 * h100 * u.km / u.s / u.Mpc
H
###Output
_____no_output_____
###Markdown
Now compute the Hubble time ($1 / H_0$) for h = h100 = 1:
###Code
t_H = (1/H).to(u.Gyr / h100)
t_H
###Output
_____no_output_____
###Markdown
and for h = 0.7:
###Code
t_H.to(u.Gyr / h70)
###Output
_____no_output_____ |
Week 3/Assignment 1/FINAL_ASSIGNMENT_3A.ipynb | ###Markdown
**Import Libraries and modules**
###Code
# https://keras.io/
!pip install -q keras
import keras
import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten, Add
from keras.layers import Convolution2D, MaxPooling2D, AveragePooling2D
from keras.utils import np_utils
from keras.layers.normalization import BatchNormalization
from keras.datasets import mnist
from keras import optimizers
from keras.preprocessing.image import ImageDataGenerator
###Output
_____no_output_____
###Markdown
Load pre-shuffled MNIST data into train and test sets
###Code
(X_train, y_train), (X_test, y_test) = mnist.load_data()
print (X_train.shape)
from matplotlib import pyplot as plt
%matplotlib inline
plt.imshow(X_train[0])
X_train = X_train.reshape(X_train.shape[0], 28, 28,1)
X_test = X_test.reshape(X_test.shape[0], 28, 28,1)
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_test /= 255
y_train[:10]
# Convert 1-dimensional class arrays to 10-dimensional class matrices
Y_train = np_utils.to_categorical(y_train, 10)
Y_test = np_utils.to_categorical(y_test, 10)
Y_train[:10]
from keras.layers import Activation
from keras.layers import SeparableConv2D
model = Sequential()
model.add(Convolution2D(32, (3, 3), activation='relu', input_shape=(28,28,1)))
model.add(BatchNormalization())
model.add(Convolution2D(32, 1, activation='relu'))
model.add(SeparableConv2D(32, (2, 2), activation='relu'))
model.add(BatchNormalization())
model.add((MaxPooling2D(pool_size=2, strides=None, padding='Valid')))
model.add(Convolution2D(16, (3, 3), activation='relu'))
model.add(Convolution2D(10, 1, activation='relu'))
model.add(Convolution2D(10, 10))
model.add(BatchNormalization())
model.add(Flatten())
model.add(BatchNormalization())
model.add(Activation('softmax'))
model.summary()
from keras import optimizers
opt = optimizers.adam(lr=0.0005)
model.compile(loss='categorical_crossentropy',
optimizer=opt,
metrics=['accuracy'])
model.fit(X_train, Y_train, batch_size=64, epochs=10, verbose=1)
score = model.evaluate(X_test, Y_test, verbose=0)
print(score)
y_pred = model.predict(X_test)
print(y_pred[:9])
print(y_test[:9])
layer_dict = dict([(layer.name, layer) for layer in model.layers])
import numpy as np
from matplotlib import pyplot as plt
from keras import backend as K
%matplotlib inline
# util function to convert a tensor into a valid image
def deprocess_image(x):
# normalize tensor: center on 0., ensure std is 0.1
x -= x.mean()
x /= (x.std() + 1e-5)
x *= 0.1
# clip to [0, 1]
x += 0.5
x = np.clip(x, 0, 1)
# convert to RGB array
x *= 255
#x = x.transpose((1, 2, 0))
x = np.clip(x, 0, 255).astype('uint8')
return x
def vis_img_in_filter(img = np.array(X_train[2]).reshape((1, 28, 28, 1)).astype(np.float64),
layer_name = 'conv2d_14'):
layer_output = layer_dict[layer_name].output
img_ascs = list()
for filter_index in range(layer_output.shape[3]):
# build a loss function that maximizes the activation
# of the nth filter of the layer considered
loss = K.mean(layer_output[:, :, :, filter_index])
# compute the gradient of the input picture wrt this loss
grads = K.gradients(loss, model.input)[0]
# normalization trick: we normalize the gradient
grads /= (K.sqrt(K.mean(K.square(grads))) + 1e-5)
# this function returns the loss and grads given the input picture
iterate = K.function([model.input], [loss, grads])
# step size for gradient ascent
step = 5.
img_asc = np.array(img)
# run gradient ascent for 20 steps
for i in range(20):
loss_value, grads_value = iterate([img_asc])
img_asc += grads_value * step
img_asc = img_asc[0]
img_ascs.append(deprocess_image(img_asc).reshape((28, 28)))
if layer_output.shape[3] >= 35:
plot_x, plot_y = 6, 6
elif layer_output.shape[3] >= 23:
plot_x, plot_y = 4, 6
elif layer_output.shape[3] >= 11:
plot_x, plot_y = 2, 6
else:
plot_x, plot_y = 1, 2
fig, ax = plt.subplots(plot_x, plot_y, figsize = (12, 12))
ax[0, 0].imshow(img.reshape((28, 28)), cmap = 'gray')
ax[0, 0].set_title('Input image')
fig.suptitle('Input image and %s filters' % (layer_name,))
fig.tight_layout(pad = 0.3, rect = [0, 0, 0.9, 0.9])
for (x, y) in [(i, j) for i in range(plot_x) for j in range(plot_y)]:
if x == 0 and y == 0:
continue
ax[x, y].imshow(img_ascs[x * plot_y + y - 1], cmap = 'gray')
ax[x, y].set_title('filter %d' % (x * plot_y + y - 1))
vis_img_in_filter()
###Output
_____no_output_____ |
test set and validation .ipynb | ###Markdown
Inference and ValidationNow that you have a trained network, you can use it for making predictions. This is typically called inference, a term borrowed from statistics. However, neural networks have a tendency to perform too well on the training data and aren't able to generalize to data that hasn't been seen before. This is called overfitting and it impairs inference performance.To test for overfitting while training, we measure the performance on data not in the training set called the validation set. We avoid overfitting through regularization such as dropout while monitoring the validation performance during training. In this notebook, I'll show you how to do this in PyTorch.As usual, let's start by loading the dataset through torchvision. You'll learn more about torchvision and loading data in a later part. This time we'll be taking advantage of the test set which you can get by setting `train=False` here: `testset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=False, transform=transform)`. The test set contains images just like the training set. Typically you'll see 10-20% of the original dataset held out for testing and validation with the rest being used for training.
###Code
import torch
from torchvision import datasets, transforms
# Define a transform to normalize the data
transform = transforms.Compose([transforms.ToTensor(),
transforms.Normalize((0.5,), (0.5,))])
# Download and load the training data
trainset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)
# Download and load the test data
testset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=False, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=64, shuffle=True)
from torch import nn, optim
import torch.nn.functional as F
class Classifier(nn.Module):
def __init__(self):
super().__init__()
self.fc1 = nn.Linear(784, 256)
self.fc2 = nn.Linear(256, 128)
self.fc3 = nn.Linear(128, 64)
self.fc4 = nn.Linear(64, 10)
def forward(self, x):
# make sure input tensor is flattened
x = x.view(x.shape[0], -1)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = F.relu(self.fc3(x))
x = F.log_softmax(self.fc4(x), dim=1)
return x
model = Classifier()
images, labels = next(iter(testloader))
# Get the class probabilities
ps = torch.exp(model(images))
# Make sure the shape is appropriate, we should get 10 class probabilities for 64 examples
print(ps.shape)
###Output
torch.Size([64, 10])
###Markdown
With the probabilities, we can get the most likely class using the ps.topk method.This returns the 𝑘 highest values. Since we just want the most likely class, we can use ps.topk(1).This returns a tuple of the top- 𝑘 values and the top- 𝑘 indices. If the highest value is the fifth element, we'll get back 4 as the index.
###Code
top_p, top_class = ps.topk(1, dim=1)
print(top_p.shape)
print(top_class.shape)
# Look at the most likely classes for the first 10 examples
print(top_p[:10,:])
print(top_class[:10,:])
###Output
torch.Size([64, 1])
torch.Size([64, 1])
tensor([[0.1192],
[0.1231],
[0.1226],
[0.1229],
[0.1229],
[0.1233],
[0.1213],
[0.1247],
[0.1211],
[0.1236]], grad_fn=<SliceBackward>)
tensor([[8],
[8],
[8],
[8],
[8],
[8],
[8],
[8],
[8],
[8]])
###Markdown
Now we can check if the predicted classes match the labels. This is simple to do by equating `top_class` and `labels`, but we have to be careful of the shapes. Here `top_class` is a 2D tensor with shape (64, 1) while `labels` is 1D with shape (64). To get the equality to work out the way we want, `top_class` and `labels` must have the same shape.If we do `equals = top_class == labels`, `equals` will have shape (64, 64), try it yourself. What it's doing is comparing the one element in each row of `top_class` with each element in `labels`, which returns 64 True/False boolean values for each row.
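To see the broadcasting pitfall concretely, here is a tiny illustration using dummy tensors of the same shapes (the values themselves don't matter):
```python
dummy_top_class = torch.zeros(64, 1, dtype=torch.long)
dummy_labels = torch.zeros(64, dtype=torch.long)
print((dummy_top_class == dummy_labels).shape)                               # torch.Size([64, 64]) -- broadcast
print((dummy_top_class == dummy_labels.view(*dummy_top_class.shape)).shape)  # torch.Size([64, 1]) -- what we want
```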
###Code
a=labels.view(*top_class.shape)
a
top_class
equals = top_class == labels.view(*top_class.shape)
equals.shape
###Output
_____no_output_____
###Markdown
Now we need to calculate the percentage of correct predictions. `equals` has binary values, either 0 or 1. This means that if we just sum up all the values and divide by the number of values, we get the percentage of correct predictions. This is the same operation as taking the mean, so we can get the accuracy with a call to `torch.mean`. If only it was that simple. If you try `torch.mean(equals)`, you'll get an error: `RuntimeError: mean is not implemented for type torch.ByteTensor`. This happens because `equals` has type `torch.ByteTensor` but `torch.mean` isn't implemented for tensors with that type. So we'll need to convert `equals` to a float tensor. Note that when we take `torch.mean` it returns a scalar tensor; to get the actual value as a float we'll need to do `accuracy.item()`.
###Code
accuracy = torch.mean(equals.type(torch.FloatTensor))
print(accuracy)
print(accuracy.type)
print(f'Accuracy: {accuracy.item()*100}%')
model = Classifier()
criterion = nn.NLLLoss()
optimizer = optim.Adam(model.parameters(), lr=0.003)
epochs = 30
steps = 0
train_losses, test_losses = [], []
for e in range(epochs):
running_loss = 0
for images, labels in trainloader:
optimizer.zero_grad()
log_ps = model(images)
loss = criterion(log_ps, labels)
loss.backward()
optimizer.step()
running_loss += loss.item()
else:
test_loss = 0
accuracy = 0
# Turn off gradients for validation, saves memory and computations
with torch.no_grad():
for images, labels in testloader:
log_ps = model(images)
test_loss += criterion(log_ps, labels)
ps = torch.exp(log_ps)
top_p, top_class = ps.topk(1, dim=1)
equals = top_class == labels.view(*top_class.shape)
accuracy += torch.mean(equals.type(torch.FloatTensor))
train_losses.append(running_loss/len(trainloader))
test_losses.append(test_loss/len(testloader))
print("Epoch: {}/{}.. ".format(e+1, epochs),
"Training Loss: {:.3f}.. ".format(running_loss/len(trainloader)),
"Test Loss: {:.3f}.. ".format(test_loss/len(testloader)),
"Test Accuracy: {:.3f}".format(accuracy/len(testloader)))
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
plt.plot(train_losses, label='Training loss')
plt.plot(test_losses, label='Validation loss')
plt.legend(frameon=False)
###Output
_____no_output_____
###Markdown
The network learns the training set better and better, resulting in lower training losses. However, it starts having problems generalizing to data outside the training set, leading to the validation loss increasing. The ultimate goal of any deep learning model is to make predictions on new data, so we should strive to get the lowest validation loss possible. One option is to use the version of the model with the lowest validation loss, here the one around 8-10 training epochs. This strategy is called early-stopping. In practice, you'd save the model frequently as you're training, then later choose the model with the lowest validation loss.The most common method to reduce overfitting (outside of early-stopping) is dropout, where we randomly drop input units. This forces the network to share information between weights, increasing its ability to generalize to new data. Adding dropout in PyTorch is straightforward using the `nn.Dropout` module.
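Before moving on to dropout, here is a minimal sketch of the "save the best model" idea mentioned above. It assumes `model`, `criterion`, `optimizer`, `trainloader`, `testloader` and `epochs` are defined as in the cells above; the checkpoint file name is just an example:
```python
best_val_loss = float('inf')
for e in range(epochs):
    model.train()
    for images, labels in trainloader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    model.eval()
    with torch.no_grad():
        val_loss = sum(criterion(model(images), labels).item()
                       for images, labels in testloader) / len(testloader)
    if val_loss < best_val_loss:                      # keep only the best checkpoint
        best_val_loss = val_loss
        torch.save(model.state_dict(), 'best_model.pt')
# later, reload the weights with the lowest validation loss:
# model.load_state_dict(torch.load('best_model.pt'))
```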
###Code
class Classifier(nn.Module):
def __init__(self):
super().__init__()
self.fc1 = nn.Linear(784, 256)
self.fc2 = nn.Linear(256, 128)
self.fc3 = nn.Linear(128, 64)
self.fc4 = nn.Linear(64, 10)
# Dropout module with 0.2 drop probability
self.dropout = nn.Dropout(p=0.2)
def forward(self, x):
# make sure input tensor is flattened
x = x.view(x.shape[0], -1)
# Now with dropout
x = self.dropout(F.relu(self.fc1(x)))
x = self.dropout(F.relu(self.fc2(x)))
x = self.dropout(F.relu(self.fc3(x)))
# output so no dropout here
x = F.log_softmax(self.fc4(x), dim=1)
return x
model = Classifier()
criterion = nn.NLLLoss()
optimizer = optim.Adam(model.parameters(), lr=0.003)
epochs = 30
steps = 0
train_losses, test_losses = [], []
for e in range(epochs):
running_loss = 0
for images, labels in trainloader:
optimizer.zero_grad()
log_ps = model(images)
loss = criterion(log_ps, labels)
loss.backward()
optimizer.step()
running_loss += loss.item()
else:
test_loss = 0
accuracy = 0
# Turn off gradients for validation, saves memory and computations
with torch.no_grad():
model.eval()
for images, labels in testloader:
log_ps = model(images)
test_loss += criterion(log_ps, labels)
ps = torch.exp(log_ps)
top_p, top_class = ps.topk(1, dim=1)
equals = top_class == labels.view(*top_class.shape)
accuracy += torch.mean(equals.type(torch.FloatTensor))
model.train()
train_losses.append(running_loss/len(trainloader))
test_losses.append(test_loss/len(testloader))
print("Epoch: {}/{}.. ".format(e+1, epochs),
"Training Loss: {:.3f}.. ".format(train_losses[-1]),
"Test Loss: {:.3f}.. ".format(test_losses[-1]),
"Test Accuracy: {:.3f}".format(accuracy/len(testloader)))
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
plt.plot(train_losses, label='Training loss')
plt.plot(test_losses, label='Validation loss')
plt.legend(frameon=False)
###Output
_____no_output_____
###Markdown
InferenceNow that the model is trained, we can use it for inference. We've done this before, but now we need to remember to set the model in inference mode with model.eval(). You'll also want to turn off autograd with the torch.no_grad() context.
###Code
# Import helper module (should be in the repo)
import helper
# Test out your network!
model.eval()
dataiter = iter(testloader)
images, labels = next(dataiter)
img = images[0]
# Convert 2D image to 1D vector
img = img.view(1, 784)
# Calculate the class probabilities (softmax) for img
with torch.no_grad():
output = model.forward(img)
ps = torch.exp(output)
top_p, top_class = ps.topk(1, dim=1)
equals = top_class == labels[0].view(*top_class.shape)
equals
###Output
_____no_output_____ |
AutoGIS/10.1-geostatistics-kriging.ipynb | ###Markdown
Spatial Statistics: Kriging Interpolation The previous chapters mainly covered basic GIS data processing and map visualization. Starting with this chapter we record some commonly used spatial data analysis methods, such as statistical computation. This section briefly introduces a classic spatial interpolation method: Kriging. The principle of Kriging interpolation Kriging has many variants; this article only briefly introduces the principle of basic Kriging, with the content mainly taken from [here](https://xg1990.com/blog/archives/222).
0 Preface: starting from Inverse Distance Weighting (IDW) The spatial interpolation problem is: given the observed values $z_i=z(x_i,y_i)$ of some attribute (e.g. air temperature or elevation) at a number of discrete points $(x_i,y_i)$, estimate the attribute value at an arbitrary point $(x,y)$. Intuitively, by the first law of geography, ```All attribute values on a geographic surface are related to each other, but closer values are more strongly related than are more distant ones.``` geographic attributes are spatially correlated and nearby things are more similar. This led to inverse distance weighting: for the attribute $z=z(x,y)$ at an arbitrary point $(x,y)$, define the IDW estimator $$\hat{z} = \sum^{n}_{i=1}{\frac{1}{d_i^\alpha}z_i}$$ where $\alpha$ is usually taken as 1 or 2. That is, the value at the unknown point is estimated as a weighted sum of the data at all known points, with weights given by the reciprocal of the distance (or of the squared distance): near points get large weights, distant points get small weights. IDW can effectively estimate the spatial distribution of an attribute based on the first law of geography, but it still has problems: (1) the value of $\alpha$ is not well determined; (2) a reciprocal function is not an accurate description of the strength of spatial association. The more accurate Kriging method was therefore proposed.
1 Definition of Kriging Kriging is a regression algorithm that uses a **covariance function** to model and predict (interpolate) a **random process**/random field in space. Compared with IDW, the Kriging formula is more abstract: $$\hat{z_o} = \sum^{n}_{i=0}{\lambda_iz_i} $$ where $\hat{z_o}$ is the estimate at the point $(x_o,y_o)$, i.e. $z_o=z(x_o,y_o)$, and the $\lambda_i$ are weight coefficients. The unknown value is again a weighted sum of the data at all known points, but the weights are not reciprocals of distance; they are the optimal set of coefficients that minimizes the difference between the estimate $\hat{z_o}$ at $(x_o,y_o)$ and the true value $z_o$, i.e. $$\min_{\lambda_i} Var(\hat{z_o}-z_o)$$ subject to the unbiasedness condition $$E(\hat{z_o}-z_o)=0$$ Unbiased means that the expectation of the difference between the estimate and the true value is zero; optimal means that the variance of this difference is minimal. In short, Kriging does the following: we already have some coordinates and the true values at those coordinates, which we call sample points; feeding these sample points into Kriging lets us estimate the value at any other, unknown location.
2 Algorithms - Ordinary Kriging (OK) **Refined algorithms:** - Universal Kriging (UK) - Co-Kriging (CK) - Disjunctive Kriging (DK) **Hybrid algorithms:** - regression-Kriging - neural Kriging - Bayesian Kriging
3 Assumptions **The main difference between the Kriging variants lies in their assumptions.** This article only covers the assumptions and application of ordinary Kriging. Ordinary Kriging assumes that the spatial attribute z is homogeneous: every point $(x,y)$ in space has the same expectation c and variance $\sigma^2$, i.e. for every point $(x,y)$ $$E[z(x,y)] = E[z] = c$$ $$Var[z(x,y)] = \sigma^2$$ Put differently, the value $z(x,y)$ at any point consists of the regional mean c and a random deviation $R(x,y)$ at that point, $$z(x,y)=E[z(x,y)] + R(x,y)= c + R(x,y)$$ where $R(x,y)$ is the deviation at $(x,y)$, whose variance is the constant $$Var[R(x,y)] = \sigma^2$$
4 Unbiasedness constraint First consider the unbiasedness condition $E(\hat{z_o}-z_o)=0$. Substituting $\hat{z_o} = \sum^{n}_{i=0}{\lambda_iz_i}$ gives $E(\sum^{n}_{i=0}{\lambda_iz_i}- z_o)=0$, and since $E[z] = c$ for every z, $c \sum^{n}_{i=0}{\lambda_i}- c=0$, i.e. $\sum^{n}_{i=0}{\lambda_i} = 1$. This is one of the constraints on the $\lambda_i$.
5 Optimization objective/cost function J Next consider the estimation error $Var(\hat{z_o}-z_o)$. For convenience denote it by $J$, i.e. $J = Var(\hat{z_o}-z_o)$. Then $$J= Var(\sum^{n}_{i=0}{\lambda_iz_i} - z_o) = Var(\sum^{n}_{i=0}{\lambda_iz_i}) - 2 Cov(\sum^{n}_{i=0}{\lambda_iz_i}, z_o) + Cov(z_o, z_o) = \sum^{n}_{i=0}\sum^{n}_{j=0}{\lambda_i\lambda_jCov( z_i, z_j)} - 2 \sum^{n}_{i=0}{\lambda_iCov(z_i, z_o)} + Cov(z_o, z_o) $$ To simplify the notation define $C_{ij} = Cov(z_i,z_j) = Cov(R_i,R_j)$, where $R_i = z_i - c$ is the deviation of the attribute value at $(x_i,y_i)$ from the regional mean. Then $J = \sum^{n}_{i=0}\sum^{n}_{j=0}{\lambda_i\lambda_jC_{ij}} - 2 \sum^{n}_{i=0}{\lambda_iC_{io}} + C_{oo}$.
6 Optimal solution of the cost function Define the semivariance function $r_{ij} = \sigma^2 -C_{ij}$ and substitute it into $J$: $$J = \sum^{n}_{i=0}\sum^{n}_{j=0}{\lambda_i\lambda_j(\sigma^2 - r_{ij})} - 2 \sum^{n}_{i=0}{\lambda_i(\sigma^2 - r_{io})} + \sigma^2 - r_{oo} =\sum^{n}_{i=0}\sum^{n}_{j=0}{\lambda_i\lambda_j(\sigma^2)} -\sum^{n}_{i=0}\sum^{n}_{j=0}{\lambda_i\lambda_j( r_{ij})}-2\sum^{n}_{i=0}{\lambda_i(\sigma^2)}+2 \sum^{n}_{i=0}{\lambda_i(r_{io})}+\sigma^2 - r_{oo} $$ Using $\sum^{n}_{i=0}{\lambda_i} = 1$, $$J = \sigma^2-\sum^{n}_{i=0}\sum^{n}_{j=0}{\lambda_i\lambda_j(r_{ij})}-2 \sigma^2 +2 \sum^{n}_{i=0}{\lambda_i(r_{io})}+ \sigma^2 - r_{oo}=2 \sum^{n}_{i=0}{\lambda_i(r_{io})} -\sum^{n}_{i=0}\sum^{n}_{j=0}{\lambda_i\lambda_j(r_{ij})} - r_{oo} $$ Our goal is to find the set of $\lambda_i$ that minimizes $J$; since $J$ is a function of the $\lambda_i$, we simply set the partial derivative of $J$ with respect to each $\lambda_i$ to zero, $\frac{\partial J}{\partial \lambda_i}= 0;i=1,2,\cdots$ Note, however, that the optimal $\lambda_i$ must also satisfy $\sum^{n}_{i=0}{\lambda_i} = 1$, so this is a constrained optimization problem. It is solved with the method of Lagrange multipliers by constructing a new objective function $$J + \phi(\sum^{n}_{i=0}{\lambda_i}-1)$$ where $\phi$ is the Lagrange multiplier. The parameter set ${\phi,\lambda_1,\lambda_2,\cdots,\lambda_n}$ that minimizes this objective minimizes $J$ under the constraint $\sum^{n}_{i=0}{\lambda_i} = 1$: $$\frac{\partial(J + \phi(\sum^{n}_{i=0}{\lambda_i}-1))}{\partial \lambda_i} = 0;i=1,2,\cdots,n\\ \frac{\partial(J + \phi(\sum^{n}_{i=0}{\lambda_i}-1))}{\partial \phi} = 0 $$ $$ \frac{\partial (2 \sum^{n}_{i=0}{\lambda_i(r_{io})} - \sum^{n}_{i=0}\sum^{n}_{j=0}{\lambda_i\lambda_j(r_{ij})} - r_{oo})}{\partial \lambda_i}= 0;i=1,2,\cdots$$ $$ \frac{\partial (2 \sum^{n}_{i=0}{\lambda_i(r_{io})} - \sum^{n}_{i=0}\sum^{n}_{j=0}{\lambda_i\lambda_j(r_{ij})} - r_{oo})}{\partial \phi} = 0 $$ $$2r_{io} - \sum^{n}_{j=1}{(r_{ij}+r_{ji})\lambda_j}=0;i=1,2,\cdots$$ $$ \sum^{n}_{i=0}{\lambda_i} = 1$$ Since $C_{ij}=Cov(z_i,z_j)=C_{ji}$, we likewise have $r_{ij}=r_{ji}$, and therefore $$ r_{io} - \sum^{n}_{j=1}{r_{ij}\lambda_j} = 0;i=1,2,\cdots$$ $$ \sum^{n}_{i=0}{\lambda_i} = 1 $$ The semivariance function $r_{ij}$ in these equations is essential; its computation and definition are explained in detail below. The calculation above yields the system of equations for the weight coefficients $\lambda_j$. Written as a linear system: $$ r_{11}\lambda_1 + r_{12}\lambda_2 + \cdots + r_{1n}\lambda_n + \phi= r_{1o}$$ $$r_{21}\lambda_1 + r_{22}\lambda_2 + \cdots + r_{2n}\lambda_n + \phi= r_{2o}$$ $$ r_{n1}\lambda_1 + r_{n2}\lambda_2 + \cdots + r_{nn}\lambda_n + \phi= r_{no}$$ $$ \lambda_1 + \lambda_2 + \cdots + \lambda_n = 1$$ In matrix form, $\begin{bmatrix}r_{11}&r_{12}&\cdots&r_{1n}&1\\ r_{21}&r_{22}&\cdots&r_{2n}&1\\\cdots&\cdots&\cdots&\cdots&\cdots\\r_{n1}&r_{n2}&\cdots&r_{nn}&1\\1&1&\cdots&1&0\end{bmatrix}\begin{bmatrix} \lambda_1\\ \lambda_2\\\cdots\\\lambda_n\\\phi\end{bmatrix}=\begin{bmatrix} r_{1o}\\ r_{2o}\\\cdots\\r_{no}\\1\end{bmatrix}$ and inverting the matrix solves the system. The only remaining unknown is the semivariance function $r_{ij}$ defined above, which is discussed in detail next.
7 The semivariance function Above, the semivariance function was defined as $r_{ij} = \sigma^2 -C_{ij}$. An equivalent form is $r_{ij} = \frac{1}{2}E[(z_i-z_j)^2]$, which is where the name semivariance comes from. To prove that the two are equivalent: by the earlier definition $R_i = z_i - c$, we have $z_i-z_j = R_i - R_j$, so $$ r_{ij} = \frac{1}{2}E[(R_i-R_j)^2]= \frac{1}{2}E[R_i^2-2R_iR_j+R_j^2]= \frac{1}{2}E[R_i^2]+\frac{1}{2}E[R_j^2]-E[R_iR_j] $$ and since $E[R_i^2] =E[R_j^2] = E[(z_i - c)^2] = Var(z_i) = \sigma^2 $ and $E[R_iR_j] = E[(z_i - c)(z_j-c)] = Cov(z_i,z_j) = C_{ij}$, we obtain $$ r_{ij} = \frac{1}{2}E[(z_i-z_j)^2]= \frac{1}{2}E[R_i^2]+\frac{1}{2}E[R_j^2]-E[R_iR_j]= \frac{1}{2}\sigma^2+\frac{1}{2}\sigma^2- C_{ij}=\sigma^2 -C_{ij} $$ which proves $ \sigma^2 -C_{ij} = \frac{1}{2}E[(z_i-z_j)^2]$. The question is now how to compute $$r_{ij} = \frac{1}{2}E[(z_i-z_j)^2]$$ Here the first law of geography is used again: nearby attributes are similar. The quantity $r_{ij} = \frac{1}{2}(z_i-z_j)^2$ expresses how similar the attributes are, while spatial similarity is expressed by distance; define the geometric distance between i and j as $d_{ij} = d(z_i,z_j) = d( (x_i,y_i), (x_j,y_j)) = \sqrt{(x_i-x_j)^2 + (y_i - y_j)^2}$. Kriging assumes that $r_{ij}$ and $d_{ij}$ are functionally related; the relationship may be linear, quadratic, exponential or logarithmic. To determine it, we first take the observed data set $\{z(x_1,y_1),z(x_2,y_2),z(x_3,y_3),\cdots,z(x_{n-1},y_{n-1}),z(x_n,y_n)\}$ and compute, for every pair of points, the distance $d_{ij}= \sqrt{(x_i-x_j)^2 + (y_i - y_j)^2}$ and the semivariance $\sigma^2 -C_{ij} =\frac{1}{2}E[(z_i-z_j)^2]$, which gives $n^2$ data pairs $(d_{ij}, r_{ij})$. Plotting all $d$ and $r$ as a scatter plot and finding the curve that best fits the relation between $d$ and $r$ yields the function $r = r(d)$. For any two points $(x_i,y_i), (x_j,y_j)$ we can then first compute their distance $d_{ij}$ and read off their semivariance $r_{ij}$ from the fitted function.
8 The difference between simple Kriging and ordinary Kriging Everything above is the formulation and derivation of ordinary Kriging. In fact there is also a simplified version, simple Kriging. The two differ in the form of the interpolator: as stated above, ordinary Kriging uses $\hat{z_o} = \sum^{n}_{i=0}{\lambda_iz_i}$, whereas simple Kriging uses $\hat{z_o} - c= \sum^{n}_{i=0}{\lambda_i(z_i-c)}$. The symbol c was introduced above: it is the expectation of the attribute value, $E[z] = c$. In other words, ordinary Kriging treats the value at the unknown point as a weighted sum of the values at the known points, while simple Kriging assumes that the deviation of the unknown value from the mean is a weighted sum of the deviations of the known values from the mean, i.e. $\hat{R_o} = \sum^{n}_{i=0}{\lambda_iR_i}$, where $R_i = z_i - c$ as defined above. Why is this variant called "simple" Kriging? Because of the assumption $E[z] = c$, that is $E(R_i + c) = c$, and hence $E(R_i) = 0$. Both sides of $\hat{R_o} = \sum^{n}_{i=0}{\lambda_iR_i}$ therefore automatically have the same expectation, so the unbiasedness constraint $\sum^{n}_{i=0}{\lambda_i} = 1$ is not needed when solving for the unknown $\lambda_i$; in other words, this estimator is unbiased by construction, which is why it is called simple Kriging. From section 5 above (optimization objective/cost function J) we know that the derivation and solution of the objective are carried out in terms of the deviations $R_i$ of the attribute values from their expectation. So although the two Kriging variants use different interpolation forms, the solution method is the same; the important difference is that simple Kriging does not need the constraint $\sum^{n}_{i=0}{\lambda_i} = 1$, and its system of equations is: $$r_{11}\lambda_1 + r_{12}\lambda_2 + \cdots + r_{1n}\lambda_n = r_{1o} $$ $$r_{21}\lambda_1 + r_{22}\lambda_2 + \cdots + r_{2n}\lambda_n = r_{2o} $$ $$r_{n1}\lambda_1 + r_{n2}\lambda_2 + \cdots + r_{nn}\lambda_n = r_{no} $$ One more important point: the simple Kriging interpolator is $\hat{z_o} = \sum^{n}_{i=0}{\lambda_i(z_i-c)}+c$, which means that before estimating the unknown value $\hat{z_o}$ we need to know the regional expectation c of the attribute. In practice it is hard to know the true regional expectation before interpolating. Some researchers simply average the observations to estimate c, but since the spatial sampling locations may not be representative (for instance when the samples are clustered in one small area), the expectation estimated by a simple average may also be biased. This is a limitation of simple Kriging.
9 Summary Overall, Kriging interpolation consists of the following steps: 1. for the **observed data**, compute the distance and the semivariance for every pair of points; 2. fit a curve relating distance and semivariance, so that the **semivariance corresponding to any distance can be computed**; 3. compute the semivariances $r_{ij}$ between all known points; 4. for the unknown point $z_0$, compute its semivariance $r_{io}$ to every known point $z_i$; 5. solve the system of equations from section 6 to obtain the optimal coefficients $\lambda_i$; 6. take the weighted sum of the known attribute values with the optimal coefficients to obtain the estimate at the unknown point $z_o$.
Python Kriging In Python there are two commonly used Kriging packages, **pykrige and pykriging**; in addition, PySAL integrates many spatial statistics methods including Kriging (to be covered later). Kriging algorithms in Python: ```OrdinaryKriging```: 2D ordinary kriging with estimated mean ```UniversalKriging```: 2D universal kriging providing drift terms ```OrdinaryKriging3D```: 3D ordinary kriging ```UniversalKriging3D```: 3D universal kriging ```RegressionKriging```: An implementation of Regression-Kriging ```ClassificationKriging```: An implementation of Simplicial Indicator Kriging Using pyKrige Ordinary Kriging Example
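As a rough illustration of steps 1 and 2 of the summary above (estimating an empirical semivariogram from scattered observations and fitting a simple model to it), here is a small sketch on synthetic data; pykrige performs an equivalent fit internally through its `variogram_model` argument.
```python
import numpy as np

def empirical_semivariogram(coords, values, n_bins=10):
    # pairwise distances d_ij and semivariances 0.5*(z_i - z_j)^2
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    r = 0.5 * (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices_from(d, k=1)          # use each pair only once
    d, r = d[iu], r[iu]
    bins = np.linspace(0, d.max(), n_bins + 1)
    centers = 0.5 * (bins[:-1] + bins[1:])
    gamma = np.array([r[(d >= lo) & (d < hi)].mean() if ((d >= lo) & (d < hi)).any() else np.nan
                      for lo, hi in zip(bins[:-1], bins[1:])])
    return centers, gamma

coords = np.random.rand(50, 2)                                   # synthetic sample locations
values = np.sin(10 * coords[:, 0]) + 0.1 * np.random.randn(50)   # synthetic attribute values
centers, gamma = empirical_semivariogram(coords, values)
mask = ~np.isnan(gamma)
a, b = np.polyfit(centers[mask], gamma[mask], 1)                 # linear variogram fit r(d) ~ a*d + b
```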
###Code
# The code below gives a simple example of interpolation with ordinary kriging; the other kriging variants work similarly.
from pykrige.ok import OrdinaryKriging
import numpy as np
from matplotlib import pyplot as plt
# Data at the known sample points: coordinates (x, y) and the value at each coordinate.
# In the matrix, the first column is x, the second column is y, and the third column is the value at that coordinate.
data = np.array(
[
[0.1, 0.1, 0.9],
[0.2, 0.1, 0.8],
[0.1, 0.3, 0.9],
[0.5, 0.4, 0.5],
[0.3, 0.3, 0.7],
])
# grid
x_range = 0.6
y_range = 0.6
range_step = 0.1
gridx = np.arange(0.0, x_range, range_step)  # the three arguments mean: range 0.0 - 0.6, one grid step every 0.1
gridy = np.arange(0.0, y_range, range_step)
ok3d = OrdinaryKriging(
data[:, 0],
data[:, 1],
data[:, 2],
    variogram_model="linear")  # variogram model
# variogram_model is the variogram (semivariance) model; pykrige offers linear, power, gaussian, spherical, exponential and hole-effect models, with linear as the default.
# Different variogram_model choices give different predictions, so choose one that suits your task.
k3d1, ss3d = ok3d.execute("grid", gridx, gridy)  # k3d1 is the result: the interpolated value at every grid point
print(np.round(k3d1,2))
# plot
fig, (ax1) = plt.subplots(1)
ax1.imshow(k3d1, origin="lower")
ax1.set_title("ordinary kriging")
plt.tight_layout()
plt.show()
# yellow marks the regions with larger values
###Output
_____no_output_____
###Markdown
Universal Kriging Example
###Code
from pykrige import UniversalKriging
UK = UniversalKriging(
data[:, 0],
data[:, 1],
data[:, 2],
variogram_model="linear",
drift_terms=["regional_linear"],
)
z, ss = UK.execute("grid", gridx, gridy)
plt.imshow(z)
plt.show()
###Output
_____no_output_____
###Markdown
Three-Dimensional Kriging Example
###Code
from pykrige import OrdinaryKriging3D
from pykrige import UniversalKriging3D
data = np.array(
[
[0.1, 0.1, 0.3, 0.9],
[0.2, 0.1, 0.4, 0.8],
[0.1, 0.3, 0.1, 0.9],
[0.5, 0.4, 0.4, 0.5],
[0.3, 0.3, 0.2, 0.7],
]
)
gridx = np.arange(0.0, 0.6, 0.05)
gridy = np.arange(0.0, 0.6, 0.01)
gridz = np.arange(0.0, 0.6, 0.1)
ok3d = OrdinaryKriging3D(
data[:, 0], data[:, 1], data[:, 2], data[:, 3], variogram_model="linear"
)
k3d1, ss3d = ok3d.execute("grid", gridx, gridy, gridz)
uk3d = UniversalKriging3D(
data[:, 0],
data[:, 1],
data[:, 2],
data[:, 3],
variogram_model="linear",
drift_terms=["regional_linear"],
)
k3d2, ss3d = uk3d.execute("grid", gridx, gridy, gridz)
zg, yg, xg = np.meshgrid(gridz, gridy, gridx, indexing="ij")
uk3d = UniversalKriging3D(
data[:, 0],
data[:, 1],
data[:, 2],
data[:, 3],
variogram_model="linear",
drift_terms=["specified"],
specified_drift=[data[:, 0], data[:, 1], data[:, 2]],
)
k3d3, ss3d = uk3d.execute(
"grid", gridx, gridy, gridz, specified_drift_arrays=[xg, yg, zg]
)
func = lambda x, y, z: x
uk3d = UniversalKriging3D(
data[:, 0],
data[:, 1],
data[:, 2],
data[:, 3],
variogram_model="linear",
drift_terms=["functional"],
functional_drift=[func],
)
k3d4, ss3d = uk3d.execute("grid", gridx, gridy, gridz)
fig, (ax1, ax2, ax3, ax4) = plt.subplots(4)
ax1.imshow(k3d1[:, :, 0], origin="lower")
ax1.set_title("ordinary kriging")
ax2.imshow(k3d2[:, :, 0], origin="lower")
ax2.set_title("regional lin. drift")
ax3.imshow(k3d3[:, :, 0], origin="lower")
ax3.set_title("specified drift")
ax4.imshow(k3d4[:, :, 0], origin="lower")
ax4.set_title("functional drift")
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Geometric example
###Code
#A small example script showing the usage of the 'geographic' coordinates type for ordinary kriging on a sphere.
from pykrige import OrdinaryKriging
import numpy as np
from matplotlib import pyplot as plt
# Make this example reproducible:
np.random.seed(89239413)
N = 7
lon = 360.0 * np.random.random(N)
lat = 180.0 / np.pi * np.arcsin(2 * np.random.random(N) - 1)
z = 3.5 * np.random.rand(N) + 2.0
# Generate a regular grid with 60° longitude and 30° latitude steps:
grid_lon = np.linspace(0.0, 360.0, 7)
grid_lat = np.linspace(-90.0, 90.0, 7)
# Create ordinary kriging object:
OK = OrdinaryKriging(
lon,
lat,
z,
variogram_model="linear",
verbose=False,
enable_plotting=False,
coordinates_type="geographic",
)
OK
# Execute on grid:
z1, ss1 = OK.execute("grid", grid_lon, grid_lat)
z1
# Create ordinary kriging object ignoring curvature:
OK = OrdinaryKriging(
lon, lat, z, variogram_model="linear", verbose=False, enable_plotting=False
)
# Execute on grid:
z2, ss2 = OK.execute("grid", grid_lon, grid_lat)
z2
# Print data at equator (last longitude index will show periodicity):
print("Original data:")
print("Longitude:", lon.astype(int))
print("Latitude: ", lat.astype(int))
print("z: ", np.array_str(z, precision=2))
print("\nKrige at 60° latitude:\n======================")
print("Longitude:", grid_lon)
print("Value: ", np.array_str(z1[5, :], precision=2))
print("Sigma²: ", np.array_str(ss1[5, :], precision=2))
print("\nIgnoring curvature:\n=====================")
print("Value: ", np.array_str(z2[5, :], precision=2))
print("Sigma²: ", np.array_str(ss2[5, :], precision=2))
fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.imshow(z1, extent=[0, 360, -90, 90], origin="lower")
ax1.set_title("geo-coordinates")
ax2.imshow(z2, extent=[0, 360, -90, 90], origin="lower")
ax2.set_title("non geo-coordinates")
plt.show()
###Output
_____no_output_____
###Markdown
Using pyKriging pyKriging aims to simplify the process of building surrogate models. The following example shows how to create a sampling plan, evaluate a test function at those locations, build and train a kriging model, and add infill points to reduce the model's mean squared error (MSE).
###Code
import pyKriging
from pyKriging.krige import kriging
from pyKriging.samplingplan import samplingplan
# The Kriging model starts by defining a sampling plan; we use an optimal Latin Hypercube here
sp = samplingplan(2)
X = sp.optimallhc(20)
X
# Next, we define the problem we would like to solve
testfun = pyKriging.testfunctions().branin
y = testfun(X)
# Now that we have our initial data, we can create an instance of a Kriging model
k = kriging(X, y, testfunction=testfun, name='simple')
k.train()
# Now, five infill points are added. Note that the model is re-trained after each point is added
numiter = 5
for i in range(numiter):
#print "Infill iteration {0} of {1}....".format(i + 1, numiter)
newpoints = k.infill(1)
for point in newpoints:
k.addPoint(point, testfun(point)[0])
k.train()
# And plot the results
k.plot()
###Output
_____no_output_____ |
Homeworks/Homework5/TF - IDF implementation.ipynb | ###Markdown
For greater than a score of 3 Create a TF - IDF implementation and analyze the following Sherlock Holmes books from Project Gutenberg (text versions):
- The Adventures of Sherlock Holmes - http://www.gutenberg.org/ebooks/1661.txt.utf-8
- A Study in Scarlet - http://www.gutenberg.org/files/244/244-0.txt
- The Hound of the Baskervilles - http://www.gutenberg.org/files/2852/2852-0.txt
- The Return of Sherlock Holmes - http://www.gutenberg.org/files/108/108-0.txt
- The Sign of the Four - http://www.gutenberg.org/ebooks/2097.txt.utf-8

Display the scores for the top 20 highest frequency terms and the relationship to the books **To display the frequency distribution of top 20 words in all 5 books**
###Code
import nltk
## book1
with open ('data/books/SH1.txt', 'r') as myfile:
data=myfile.read().replace('\n', ' ')
data = data.split(' ')
fdist1 = nltk.FreqDist(data)
print("Top 20 word frequency distribution of all 5 text books")
print '\n'
print("The Adventures of Sherlock Holmes:")
print (fdist1.most_common(20))
## book2
with open ('data/books/Scarlet.txt', 'r') as myfile:
data=myfile.read().replace('\n', ' ')
data = data.split(' ')
fdist2 = nltk.FreqDist(data)
print '\n'
print("A Study In Scarlet:")
print (fdist2.most_common(20))
## book3
with open ('data/books/baskervilles.txt', 'r') as myfile:
data=myfile.read().replace('\n', ' ')
data = data.split(' ')
fdist3 = nltk.FreqDist(data)
print '\n'
print("The Hound of the Baskervilles:")
print (fdist3.most_common(20))
## book4
with open ('data/books/SH2.txt', 'r') as myfile:
data=myfile.read().replace('\n', ' ')
data = data.split(' ')
fdist4 = nltk.FreqDist(data)
print '\n'
print("The Return of Sherlock Holmes:")
print (fdist4.most_common(20))
## book5
with open ('data/books/sign4.txt', 'r') as myfile:
data=myfile.read().replace('\n', ' ')
data = data.split(' ')
fdist5 = nltk.FreqDist(data)
print '\n'
print("The Sign of the Four:")
print (fdist5.most_common(20))
print '\n'
print("Flat list of all most common words")
flat_list = (fdist1.most_common(20)) + (fdist2.most_common(20)) + (fdist3.most_common(20)) + (fdist4.most_common(20)) + (fdist5.most_common(20))
print sorted(flat_list)
###Output
Top 20 word frequency distribution of all 5 text books
The Adventures of Sherlock Holmes:
[('the', 5404), ('', 3145), ('and', 2798), ('of', 2720), ('to', 2700), ('a', 2575), ('I', 2533), ('in', 1702), ('that', 1559), ('was', 1360), ('his', 1096), ('is', 1076), ('you', 1029), ('he', 1014), ('it', 976), ('my', 901), ('have', 893), ('with', 843), ('had', 806), ('as', 776)]
A Study In Scarlet:
[('the', 2512), ('', 1846), ('and', 1378), ('of', 1304), ('to', 1149), ('a', 1014), ('I', 769), ('in', 727), ('was', 633), ('he', 615), ('his', 612), ('that', 589), ('had', 464), ('with', 364), ('you', 354), ('it', 325), ('which', 318), ('for', 310), ('as', 310), ('is', 309)]
The Hound of the Baskervilles:
[('', 3848), ('the', 3221), ('of', 1690), ('and', 1560), ('to', 1450), ('a', 1280), ('I', 1260), ('that', 1040), ('in', 907), ('was', 779), ('you', 663), ('he', 658), ('his', 658), ('is', 605), ('it', 576), ('have', 525), ('had', 496), ('with', 467), ('which', 422), ('my', 420)]
The Return of Sherlock Holmes:
[('the', 5864), ('', 3086), ('of', 2852), ('and', 2810), ('to', 2624), ('a', 2582), ('I', 2519), ('that', 1913), ('in', 1779), ('was', 1769), ('his', 1266), ('you', 1177), ('he', 1132), ('is', 1077), ('it', 1057), ('had', 1000), ('have', 917), ('with', 893), ('my', 798), ('for', 751)]
The Sign of the Four:
[('', 2913), ('the', 2311), ('of', 1220), ('and', 1190), ('to', 1132), ('a', 1081), ('I', 1044), ('in', 689), ('was', 552), ('that', 545), ('is', 454), ('his', 452), ('he', 419), ('with', 415), ('you', 401), ('it', 367), ('had', 349), ('have', 337), ('at', 324), ('my', 323)]
Flat list of all most common words
[('', 1846), ('', 2913), ('', 3086), ('', 3145), ('', 3848), ('I', 769), ('I', 1044), ('I', 1260), ('I', 2519), ('I', 2533), ('a', 1014), ('a', 1081), ('a', 1280), ('a', 2575), ('a', 2582), ('and', 1190), ('and', 1378), ('and', 1560), ('and', 2798), ('and', 2810), ('as', 310), ('as', 776), ('at', 324), ('for', 310), ('for', 751), ('had', 349), ('had', 464), ('had', 496), ('had', 806), ('had', 1000), ('have', 337), ('have', 525), ('have', 893), ('have', 917), ('he', 419), ('he', 615), ('he', 658), ('he', 1014), ('he', 1132), ('his', 452), ('his', 612), ('his', 658), ('his', 1096), ('his', 1266), ('in', 689), ('in', 727), ('in', 907), ('in', 1702), ('in', 1779), ('is', 309), ('is', 454), ('is', 605), ('is', 1076), ('is', 1077), ('it', 325), ('it', 367), ('it', 576), ('it', 976), ('it', 1057), ('my', 323), ('my', 420), ('my', 798), ('my', 901), ('of', 1220), ('of', 1304), ('of', 1690), ('of', 2720), ('of', 2852), ('that', 545), ('that', 589), ('that', 1040), ('that', 1559), ('that', 1913), ('the', 2311), ('the', 2512), ('the', 3221), ('the', 5404), ('the', 5864), ('to', 1132), ('to', 1149), ('to', 1450), ('to', 2624), ('to', 2700), ('was', 552), ('was', 633), ('was', 779), ('was', 1360), ('was', 1769), ('which', 318), ('which', 422), ('with', 364), ('with', 415), ('with', 467), ('with', 843), ('with', 893), ('you', 354), ('you', 401), ('you', 663), ('you', 1029), ('you', 1177)]
###Markdown
**To display TF-IDF of text - The Adventures of Sherlock Holmes.**
###Code
import nltk
import re
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize, sent_tokenize
import math
text1 = open('data/books/SH1.txt','r')
stripped = text1.read()
def remove_spl_char(s):
stripped = re.sub('[^\w\s]', '', s)
stripped = re.sub('_', '', stripped)
stripped = re.sub('\s+', ' ', stripped)
stripped = stripped.strip()
return stripped
def get_doc(sent):
doc_info = []
i = 0
for sent in text_sents_clean:
i += 1
count = count_words(sent)
temp = {'doc_id' : i, 'doc_length' : count}
doc_info.append(temp)
return doc_info
def count_words(sent):
count = 0
words = word_tokenize(sent)
for word in words:
count += 1
return count
def create_freq_dict(sents):
i = 0
freqDict_list = []
for sent in sents:
i += 1
freq_dict = {}
words = word_tokenize(sent)
for word in words:
word = word.lower()
if word in freq_dict:
freq_dict[word] += 1
else:
freq_dict[word] = 1
temp = {'doc_id' : i, 'freq_dict' : freq_dict}
freqDict_list.append(temp)
return freqDict_list
def computeTF(doc_info, freqDict_list):
TF_scores = []
for tempDict in freqDict_list:
id = tempDict['doc_id']
for k in tempDict['freq_dict']:
temp = {'doc_id' : id,
               'TF_score' : float(tempDict['freq_dict'][k])/doc_info[id-1]['doc_length'],
'key' : k}
TF_scores.append(temp)
return TF_scores
def computeIDF(doc_info, freqDict_list):
IDF_scores = []
counter = 0
for dict in freqDict_list:
counter += 1
for k in dict['freq_dict'].keys():
count = sum([k in tempDict['freq_dict'] for tempDict in freqDict_list])
            temp = {'doc_id' : counter, 'IDF_score' : math.log(float(len(doc_info))/count), 'key' : k}
IDF_scores.append(temp)
return IDF_scores
def computeTFIDF(TF_scores, IDF_scores):
TFIDF_scores = []
for j in IDF_scores:
for i in TF_scores:
if j['key'] == i['key'] and j['doc_id'] == i['doc_id']:
temp = {'doc_id' : j['doc_id'],
'TFIDF_score' : j['IDF_score']*i['TF_score'],
'key' : i['key']}
TFIDF_scores.append(temp)
return TFIDF_scores
text_sents = sent_tokenize(stripped)
text_sents_clean = [remove_spl_char(s) for s in text_sents]
doc_info = get_doc(text_sents_clean)
freqDict_list = create_freq_dict(text_sents_clean)
TF_scores = computeTF(doc_info, freqDict_list)
IDF_scores = computeIDF(doc_info, freqDict_list)
doc_info
freqDict_list
TF_scores
IDF_scores
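# One possible final step (a sketch): combine the TF and IDF scores into TF-IDF and look at the
# highest-scoring (doc_id, term) pairs. Note that computeTFIDF as written above compares every
# IDF entry against every TF entry, so it can be slow on a whole book.
TFIDF_scores = computeTFIDF(TF_scores, IDF_scores)
sorted(TFIDF_scores, key=lambda s: s['TFIDF_score'], reverse=True)[:20]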
###Output
_____no_output_____ |
notebooks/2_explore_database.ipynb | ###Markdown
Physionet 2017 | ECG Rhythm Classification 2. Explore Database Sebastian D. Goodfellow, Ph.D. Setup Noteboook
###Code
# Import 3rd party libraries
import os
import sys
# Import local Libraries
sys.path.insert(0, r'C:\Users\sebig\Documents\code\deep_ecg')
from utils.data.ecg_tools.waveform_db import WaveformDB
from utils.plotting.waveforms import plot_waveforms_interact
# Configure Notebook
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
1. Import Waveform Database
###Code
# Set path
path = os.path.join(os.path.dirname(os.getcwd()), 'data')
# Sample frequency
fs = 300
# Initialize
waveform_db = WaveformDB(
path_waveforms=os.path.join(path, 'waveforms'),
path_labels=os.path.join(path, 'labels'),
fs=fs
)
# Build waveform database
waveform_db.load_database()
###Output
_____no_output_____
###Markdown
2. Plot Waveforms
###Code
# Plot waveforms
plot_waveforms_interact(waveform_db.waveforms)
###Output
_____no_output_____ |
final_kernels_project_solutions/.ipynb_checkpoints/prob5-sols-checkpoint.ipynb | ###Markdown
Problem 5 - Kernel Ridge Regression
###Code
#!pip3 install -U scikit-learn scipy matplotlib
import numpy as np
import matplotlib.pyplot as plt
%matplotlib notebook
%matplotlib inline
from sklearn.model_selection import GridSearchCV
from sklearn.kernel_ridge import KernelRidge
from sklearn.metrics.pairwise import pairwise_kernels
from sklearn.metrics import mean_squared_error as MSE
from ipywidgets import interactive
import ipywidgets as widgets
from ipywidgets import fixed
import pylab as p
import mpl_toolkits.mplot3d.axes3d as p3
import warnings
warnings.filterwarnings("ignore")
###Output
_____no_output_____
###Markdown
Question 5aOne common application of kernels is in regression. While regression can take lots of computation, using kernelization allows you to speed up the process by reusing the computed kernel. Let's observe a few examples, and learn how to implement regression with kernelization.
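For reference, the estimator used in the helper code below is kernel ridge regression in its dual form: given training data $(x_i, y_i)$, a kernel $k$ with kernel matrix $K_{ij} = k(x_i, x_j)$ and regularization strength $\lambda$, the dual coefficients are $$\alpha = (K + \lambda I)^{-1} y,$$ and the prediction at a new point $x$ is $$\hat{f}(x) = \sum_{i} \alpha_i \, k(x, x_i).$$ This is exactly what `compute_predictor` below evaluates via `pairwise_kernels` and `np.linalg.inv`.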
###Code
### Utility functions
#kernel
def generate_kernel_widget():
return widgets.Dropdown(
options=['rbf', 'laplacian'],
description='kernel: ',
disabled=False)
#parameter of the kernel
def generate_gamma_widget():
return widgets.FloatLogSlider(
value=-50,
base=np.sqrt(2),
min=-20,
max=40,
step=1,
description='$\gamma$',
continuous_update= False)
#level of regularization
def generate_reg_widget():
return widgets.FloatLogSlider(
value=0.,
base=np.sqrt(2),
min=-100,
max=40,
step=1,
description='Reg ($\lambda$):',
continuous_update= False)
# Ground truth function
def generate_function_widget():
return widgets.Dropdown(
value='sin(20x)',
options=['sin(20x)',
'piecewise linear'],
description='True function: ',
disabled=False)
def f_true(X, f_type):
if f_type == 'sin(20x)':
return np.sin(20 * X[:,0])
else:
TenX = 10 * X[:,0]
_ = 12345
return (TenX - np.floor(TenX)) * np.sin(_ * np.ceil(TenX)) - (TenX - np.ceil(TenX)) * np.sin(_ * np.floor(TenX))
def compute_predictor(X, y, lambda_reg, kernel, gamma):
"""
Input:
X: data matrix
y: outputs
lambda_reg: regularization term in kernel ridge regression
kernel: which kernel to use ['poly', 'rbf', 'laplacian']
gamma: kernel parameter
Output:
predictor: predict using the kernel ridge model
dual_coef: representation of weight vector(s) in kernel space, i.e. alpha
"""
K = pairwise_kernels(X, X, metric=kernel, gamma=gamma) + lambda_reg * np.eye(len(y))
alpha = np.linalg.inv(K) @ y
return lambda x: pairwise_kernels(x, X, metric=kernel, gamma=gamma) @ alpha, alpha
def plot_kernel_ridge_regression(kernel, n_samples, gamma, sigma, f_type, lambda_reg):
n_features = 1
rng = np.random.RandomState(1)
X = np.sort(rng.rand(n_samples, n_features), axis=0)
# Generate y
y = f_true(X, f_type) + rng.randn(n_samples) * sigma
# Kernel Ridge Regression Solver
predictor, alpha_ = compute_predictor(X, y, lambda_reg, kernel=kernel, gamma=gamma)
clip_bound = 2.5
# Sample test data points
X_test = np.concatenate([X.copy(), np.expand_dims(np.linspace(0., 1., 1000), axis=1)])
X_test = np.sort(X_test, axis=0)
# Visualization
plt.figure(figsize=(10,7))
plt.xlim(0, 1)
plt.ylim(-clip_bound, clip_bound)
plt.plot(X_test, predictor(X_test), '-', color='forestgreen', label='prediction')
plt.scatter(X[:,0], y, c='darkorange', s=40.0, label='training data points')
plt.plot(X_test, f_true(X_test, f_type), '--', color='royalblue', linewidth=2.0, label='Ground truth')
plt.legend()
# Uncomment the seeded random state if you wish to get the same result every time
#rng = np.random.RandomState(0)
rng = np.random.RandomState()
# Generate sample data
n_samples = 10000
sigma = 0.05 # Standard deviation of noise
X = np.expand_dims(np.linspace(0, 1, n_samples), axis=1)
y_sin_true = f_true(X, 'sin(20x)')
y_plf_true = f_true(X, 'piecewise linear')
y_sin_noisy = (y_sin_true + rng.rand(n_samples) * sigma)[::50]
y_plf_noisy = (y_plf_true + rng.rand(n_samples) * sigma)[::50]
plt.figure(figsize=(16, 5))
plt.subplot(121)
plt.plot(X, y_sin_true, c='b', label='true function')
plt.scatter(X[::50], y_sin_noisy, c='orange', label='training data points')
plt.title('sin(20x)');
plt.legend(loc='upper right');
plt.subplot(122)
plt.plot(X, y_plf_true, c='b', label='true function')
plt.scatter(X[::50], y_plf_noisy, c='orange', label='training data points')
plt.title('Piecewise Linear Function');
plt.legend(loc='upper right');
###Output
_____no_output_____
###Markdown
We'll be using the two following kernel functions: - RBF (radial basis function): $$K_{\text{rbf}}(x_i, x_j) = \exp(-\gamma(x_i - x_j)^2),$$- Laplacian: $$K_{\text{laplacian}}(x_i, x_j) = \exp(-\gamma|x_i-x_j|).$$Note: Depending on where you look, the parametrization of the kernel functions may be slightly different (by a constant factor, a reciprocal, etc.). Let's visualize the different predictions of the function when we use different parameters.
###Code
interactive_plot = interactive(plot_kernel_ridge_regression,
n_samples=fixed(50),
kernel=generate_kernel_widget(),
gamma=generate_gamma_widget(),
sigma=fixed(0),
f_type = generate_function_widget(),
lambda_reg=generate_reg_widget())
interactive_plot
###Output
_____no_output_____
###Markdown
Answer the following questions:- **What happens as you vary the parameters?**- **What pair of values for the parameters work best for each combination of kernelization and true function?**- **Does one kernel perform better than the other?** ANSWER HERE As we vary $\lambda$, we control the amount of regularization. As $\lambda$ increases, our model will become closer and closer to the constant line 0. When we increase $\gamma$, the model will begin fitting the true function. However, if we increase it too far, the model will become similar to the constant zero function except at the training data points, where the predicted function will spike to the value of the training point.For $\sin(20x)$, $\gamma=0.0110, \lambda = 3e-8$ works well for the Laplacian kernel, and $\gamma=128, \lambda = 3e-8$ with the RBF kernel seems to fit the function exactly.For the piecewise linear function, $\gamma=4, \lambda = 3e-8$ works well for the Laplacian kernel, and $\gamma=64, \lambda = 1.73e-4$ works well for the RBF kernel.The RBF kernel seems to perform better than the Laplacian when trying to predict either function. We'll see that this is actually the case in the next part. Question 5bNow, let's see what parameters perform best for each kernelization, and ultimately, which kernelization works better.We will do this via grid searching a set of parameters.__Hint__:- Use [GridSearchCV](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html) and [KernelRidge](https://scikit-learn.org/stable/modules/generated/sklearn.kernel_ridge.KernelRidge.html).
###Code
X_train = X[::50]
y_sin_train = y_sin_noisy
alphas = [1e-4, 1e-3, 1e-2, 0.1, 1, 10]
gammas = np.logspace(-2, 2, 5)
### BEGIN 5Bi ###
# Grid search the parameters above using kernel ridge regression
kr_rbf = GridSearchCV(KernelRidge(kernel='rbf', gamma=0.1),
param_grid={"alpha": alphas,
"gamma": gammas})
kr_lap = GridSearchCV(KernelRidge(kernel='laplacian', gamma=0.1),
param_grid={"alpha": alphas,
"gamma": gammas})
# Fit the data
kr_rbf.fit(X_train, y_sin_train)
kr_lap.fit(X_train, y_sin_train)
# Generate predictions from models
y_rbf_pred = kr_rbf.predict(X_train)
y_lap_pred = kr_lap.predict(X_train)
### END 5Bi ###
# Print best parameters and corresponding errors
print(f"Best parameters for RBF kernel: {kr_rbf.best_params_}")
print(f"\twith Mean Square Error = {MSE(y_sin_true[::50], y_rbf_pred)}")
print(f"Best parameters for Laplacian kernel: {kr_lap.best_params_}")
print(f"\twith Mean Square Error = {MSE(y_sin_true[::50], y_lap_pred)}")
plt.figure(figsize=(10, 5))
plt.plot(X, y_sin_true, c='b', label='true function')
# plt.scatter(X_train, y_sin_train, c='orange', label='training data points')
plt.plot(X_train, y_rbf_pred, c='r', label='predicted function using RBF kernel')
plt.plot(X_train, y_lap_pred, c='y', label='predicted function using Laplacian kernel')
plt.title('sin(20x)');
plt.legend(loc='upper right');
X_train = X[::50]
y_plf_train = y_plf_noisy
alphas = [1e-4, 1e-3, 1e-2, 0.1, 1, 10]
gammas = np.logspace(-2, 2, 5)
### BEGIN 5Bii ###
# Fit the data
kr_rbf.fit(X_train, y_plf_train)
kr_lap.fit(X_train, y_plf_train)
# Generate predictions from models
y_rbf_pred = kr_rbf.predict(X_train)
y_lap_pred = kr_lap.predict(X_train)
### END 5Bii ###
# Print best parameters and corresponding errors
print(f"Best parameters for RBF kernel: {kr_rbf.best_params_}")
print(f"\twith Mean Square Error = {MSE(y_sin_true[::50], y_rbf_pred)}")
print(f"Best parameters for Laplacian kernel: {kr_lap.best_params_}")
print(f"\twith Mean Square Error = {MSE(y_sin_true[::50], y_lap_pred)}")
plt.figure(figsize=(10, 5))
plt.plot(X, y_plf_true, c='b', label='true function')
# plt.scatter(X_train, y_sin_train, c='orange', label='training data points')
plt.plot(X_train, y_rbf_pred, c='r', label='predicted function using RBF kernel')
plt.plot(X_train, y_lap_pred, c='y', label='predicted function using Laplacian kernel')
plt.title('Piecewise Linear Function');
plt.legend(loc='upper right');
###Output
Best parameters for RBF kernel: {'alpha': 0.001, 'gamma': 100.0}
with Mean Square Error = 0.7195529634040612
Best parameters for Laplacian kernel: {'alpha': 0.0001, 'gamma': 10.0}
with Mean Square Error = 0.720393876806369
###Markdown
**Which kernel actually performs better? Will this always be the case?** ANSWER HERE Looking at both the sinusoidal and piecewise linear functions, the models using the RBF kernel have a lower MSE, and thus, perform better than the Laplacian kernel. That being said, it is not always the case that the RBF kernel outperforms the Laplacian kernel. When applying kernel ridge regression, try out different kernel functions to see which performs best! Question 5cIn the real world, models are often not so simple. It is rare that we find ourselves dealing with the 1d case; in fact, we will usually be dealing with models of high dimension that we are incapable of visualizing. To introduce you to higher dimension kernel ridge regression, we will consider the following 3d function:$$f(x, y) = \sin(\sqrt{x^2 + y^2}).$$Let's take a look at what the function we're trying to learn looks like.Note: This has been set up so that you can try arbitrary 3d functions by changing the function passed in.
###Code
def gen_labels(f, *args, noisy=False):
assert len(args) == 2, "Currently only supports 3d functions"
rng = np.random.RandomState(1)
data = np.meshgrid(*args)
Z = f(*data)
# Add noise to labels
if noisy:
Z += rng.rand(*Z.shape)
return data, Z
n_samples = 1000
x = np.linspace(-5, 5, n_samples)
y = np.linspace(-5, 5, n_samples)
data, Z_true = gen_labels(lambda x, y: np.sin(np.sqrt(x**2 + y**2)), x, y)
fig = plt.figure(figsize=(7, 7))
ax = plt.axes(projection='3d')
ax.contour3D(*data, Z_true, 50)
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_zlabel('z')
ax.set_title('$\sin(\sqrt{x^2 + y^2})$');
###Output
_____no_output_____
###Markdown
Create a kernel ridge regression model using the RBF kernel, and use cross validation to find the best parameters. This will not be too different from learning the 1d functions. What you will see that performing kernel ridge regression is quite algorithmic. Feel free to try out other kernels as well.
###Code
n_samples = 400
x_train = np.linspace(-5, 5, n_samples)
y_train = np.linspace(-5, 5, n_samples)
_, Z_train = gen_labels(lambda x, y: np.sin(np.sqrt(x**2 + y**2)), x_train, y_train, noisy=True);
X_train, Y_train = _
### START 5C ###
alphas = [1e-4, 1e-3, 1e-2, 0.1, 1, 10]
gammas = np.logspace(-2, 2, 5)
# Create the model
kr = GridSearchCV(KernelRidge(kernel='rbf', gamma=0.1),
param_grid={"alpha": alphas,
"gamma": gammas})
train_stacked = np.hstack((X_train, Y_train))
# Fit the data
kr.fit(train_stacked, Z_train)
# Make predictions with the fitted model
Z_pred = kr.predict(train_stacked)
### END 5C ###
fig = plt.figure(figsize=(8, 5))
ax1 = plt.axes(projection='3d')
ax1.contour3D(*data, Z_true, 50)
ax1.set_xlabel('x')
ax1.set_ylabel('y')
ax1.set_zlabel('z')
ax1.set_title('True Function');
fig = plt.figure(figsize=(8, 5))
ax2 = plt.axes(projection='3d')
ax2.contour3D(X_train, Y_train, Z_pred, 50, cmap='plasma', label='predicted function')
ax2.set_xlabel('x')
ax2.set_ylabel('y')
ax2.set_zlabel('z')
ax2.set_title('Predicted Function');
###Output
_____no_output_____ |
notebooks/Data_Preparation-V1.ipynb | ###Markdown
Data Preparation The focus is to understand the final data structure and to support each step by visual analytics. Johns Hopkins GitHub CSV Data
###Code
import pandas as pd
import numpy as np

data_path='C:/Users/Nitin/ds-covid19/data/raw/COVID-19/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_confirmed_global.csv'
pd_raw=pd.read_csv(data_path)
pd_raw
pd_raw.columns[4:]
time_idx=pd_raw.columns[4:]
df_plot = pd.DataFrame({
'date':time_idx})
df_plot.head()
pd_raw['Country/Region']
pd_raw[pd_raw['Country/Region']=='Germany'].iloc[:,4::].sum(axis=0)
country_list=['Italy',
'US',
'Spain',
'Germany',
'Korea,South',
]
for each in country_list:
df_plot[each]=np.array(pd_raw[pd_raw['Country/Region']==each].iloc[:,4::].sum(axis=0))
%matplotlib inline
df_plot.set_index('date').plot()
###Output
_____no_output_____
###Markdown
Data Type Date
###Code
df_plot.head()
from datetime import datetime
time_idx=[datetime.strptime(each,"%m/%d/%y") for each in df_plot.date] #convert to datetime
time_str=[each.strftime('%Y-%m-%d') for each in time_idx] #convert back to date ISO norm (str)
df_plot['date']=time_idx
type(df_plot['date'][0])
df_plot.head()
df_plot.to_csv('C:/Users/Nitin/ds-covid19/data/processed/COVID_small_flat_table.csv',sep=';',index=False)
###Output
_____no_output_____
###Markdown
Relational Data Model - defining a Primary Key In a relational model, a primary key is a specific choice of a minimal set of attributes (columns) that uniquely specify a tuple (row) in a relation (table) (Source: Wiki) The main features of a primary key are: It must contain a unique value for each row of data It cannot contain null values
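A quick way to sanity-check a candidate key in pandas (a sketch; it assumes the `pd_relational_model` dataframe built below, and note that `state` is legitimately missing for rows that describe a whole country):
```python
key_cols = ['date', 'state', 'country']
# every (date, state, country) combination should occur exactly once
assert not pd_relational_model.duplicated(subset=key_cols).any()
# date and country should never be null
assert pd_relational_model[['date', 'country']].notnull().all().all()
```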
###Code
data_path='C:/Users/Nitin/ds-covid19/data/raw/COVID-19/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_confirmed_global.csv'
pd_raw=pd.read_csv(data_path)
pd_raw.head()
pd_data_base=pd_raw.rename(columns={'Country/Region':'country',
'Province/State':'state'})
pd_data_base=pd_data_base.drop(['Lat','Long'],axis=1)
pd_data_base.head()
pd_relational_model=pd_data_base.set_index(['state','country'])\
.T \
.stack(level=[0,1]) \
.reset_index() \
.rename(columns={'level_0':'date',
0:'confirmed'},
)
pd_relational_model
pd_relational_model.dtypes
pd_relational_model['date']=pd_relational_model.date.astype('datetime64[ns]')
pd_relational_model.dtypes
df_plot.to_csv('C:/Users/Nitin/ds-covid19/data/processed/COVID_relational_confirmed.csv',sep=';')
###Output
_____no_output_____
###Markdown
Group-by Apply
###Code
pd_JH_data=pd.read_csv('C:/Users/Nitin/ds-covid19/data/processed/COVID_relational_confirmed.csv',sep=';',parse_dates=[0])
pd_JH_data=pd_JH_data.sort_values('date',ascending=True).reset_index(drop=True).copy()
pd_JH_data.head()
test_data=pd_JH_data[((pd_JH_data['country']=='US')|
(pd_JH_data['country']=='Germany'))&
(pd_JH_data['date']>'2020-03-20')]
test_data
test_data.groupby(['country']).agg(np.max)
# %load C:\Users\Nitin\ds-covid19\src\features\build_features.py
import numpy as np
from sklearn import linear_model
reg = linear_model.LinearRegression(fit_intercept=True)
def get_doubling_time_via_regression(in_array):
''' Use a linear regression to approximate the doubling rate'''
y = np.array(in_array)
X = np.arange(-1,2).reshape(-1,1)
    assert len(in_array) == 3
reg.fit(X,y)
intercept=reg.intercept_
slope=reg.coef_
return intercept/slope
if __name__ == '__main__':
test_data=np.array([2,4,6])
result=get_doubling_time_via_regression(test_data)
print('The test slope is: '+str(result))
test_data.groupby(['state','country']).agg(np.max)
#test_data.groupby(['state','country']).apply(get_doubling_time_via_regression)
def rolling_reg(df_input,col='confirmed'):
''' input has to be a data frame'''
''' return is single series (mandatory for group apply)'''
days_back=3
result=df_input[col].rolling(
window=days_back,
min_periods=days_back).apply(get_doubling_time_via_regression,raw=False)
return result
test_data[['state','country','confirmed']].groupby(['state','country']).apply(rolling_reg,'confirmed')
pd_DR_result=pd_JH_data[['state','country','confirmed']].groupby(['state','country']).apply(rolling_reg,'confirmed').reset_index()
pd_DR_result=pd_DR_result.rename(columns={'confirmed':'doubling_rate',
'level_2':'index'})
pd_DR_result.head()
pd_JH_data=pd_JH_data.reset_index()  # keep the full frame; the 'index' column is needed for the merge below
pd_JH_data.head()
pd_result_larg=pd.merge(pd_JH_data,pd_DR_result[['index','doubling_rate']],on=['index'],how='left')
#pd_result_larg[pd_result_larg['country']=='Germany']
###Output
_____no_output_____ |
Kaggle/Playgroud/RiskPrediction/.ipynb_checkpoints/introduction-to-manual-feature-engineering-checkpoint.ipynb | ###Markdown
Introduction: Manual Feature Engineering

If you are new to this competition, I highly suggest checking out [this notebook](https://www.kaggle.com/willkoehrsen/start-here-a-gentle-introduction/) to get started. In this notebook, we will explore making features by hand for the Home Credit Default Risk competition. In an earlier notebook, we used only the `application` data in order to build a model. The best model we made from this data achieved a score on the leaderboard around 0.74. In order to better this score, we will have to include more information from the other dataframes. Here, we will look at using information from the `bureau` and `bureau_balance` data. The definitions of these data files are:
* bureau: information about a client's previous loans with other financial institutions reported to Home Credit. Each previous loan has its own row.
* bureau_balance: monthly information about the previous loans. Each month has its own row.

Manual feature engineering can be a tedious process (which is why we use automated feature engineering with featuretools!) and often relies on domain expertise. Since I have limited domain knowledge of loans and what makes a person likely to default, I will instead concentrate on getting as much info as possible into the final training dataframe. The idea is that the model will then pick up on which features are important rather than us having to decide that. Basically, our approach is to make as many features as possible and then give them all to the model to use! Later, we can perform feature reduction using the feature importances from the model or other techniques such as PCA. The process of manual feature engineering will involve plenty of Pandas code, a little patience, and a lot of great practice manipulating data. Even though automated feature engineering tools are starting to be made available, feature engineering will still have to be done using plenty of data wrangling for a little while longer.
###Code
# pandas and numpy for data manipulation
import pandas as pd
import numpy as np
# matplotlib and seaborn for plotting
import matplotlib.pyplot as plt
import seaborn as sns
# Suppress warnings from pandas
import warnings
warnings.filterwarnings('ignore')
plt.style.use('fivethirtyeight')
###Output
_____no_output_____
###Markdown
Example: Counts of a client's previous loans

To illustrate the general process of manual feature engineering, we will first simply get the count of a client's previous loans at other financial institutions. This requires a number of Pandas operations we will make heavy use of throughout the notebook:
* `groupby`: group a dataframe by a column. In this case we will group by the unique client, the `SK_ID_CURR` column
* `agg`: perform a calculation on the grouped data such as taking the mean of columns. We can either call the function directly (`grouped_df.mean()`) or use the `agg` function together with a list of transforms (`grouped_df.agg([mean, max, min, sum])`)
* `merge`: match the aggregated statistics to the appropriate client. We need to merge the original training data with the calculated stats on the `SK_ID_CURR` column, which will insert `NaN` in any cell for which the client does not have the corresponding statistic

We also use the `rename` function quite a bit, specifying the columns to be renamed as a dictionary. This is useful in order to keep track of the new variables we create. This might seem like a lot, which is why we'll eventually write a function to do this process for us. Let's take a look at implementing this by hand first.
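As a warm-up, here is a minimal self-contained sketch of the same groupby → count → rename → merge pattern on toy data (hypothetical values, not the competition files); the real implementation on the `bureau` data follows below.

```python
import pandas as pd

clients = pd.DataFrame({'SK_ID_CURR': [1, 2, 3]})
loans = pd.DataFrame({'SK_ID_CURR': [1, 1, 3], 'SK_ID_BUREAU': [10, 11, 12]})

# count previous loans per client and give the count a descriptive name
counts = (loans.groupby('SK_ID_CURR', as_index=False)['SK_ID_BUREAU']
               .count()
               .rename(columns={'SK_ID_BUREAU': 'previous_loan_counts'}))

# left merge keeps every client; clients with no loans get NaN, which we fill with 0
clients = clients.merge(counts, on='SK_ID_CURR', how='left')
clients['previous_loan_counts'] = clients['previous_loan_counts'].fillna(0)
print(clients)
```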
###Code
# Read in bureau
bureau = pd.read_csv('/Users/szkfzx/datasets/home-credit-default-risk/bureau.csv')
bureau.head()
# Groupby the client id (SK_ID_CURR), count the number of previous loans, and rename the column
previous_loan_counts = bureau.groupby('SK_ID_CURR', as_index=False)['SK_ID_BUREAU'].count().rename(columns = {'SK_ID_BUREAU': 'previous_loan_counts'})
previous_loan_counts.head()
# Join to the training dataframe
train = pd.read_csv('/Users/szkfzx/datasets/home-credit-default-risk/application_train.csv')
train = train.merge(previous_loan_counts, on = 'SK_ID_CURR', how = 'left')
# Fill the missing values with 0
train['previous_loan_counts'] = train['previous_loan_counts'].fillna(0)
train.head()
train = train.fillna(0)
train.head()
###Output
_____no_output_____
###Markdown
Scroll all the way to the right to see the new column.

Assessing Usefulness of New Variable with r value

To determine if the new variable is useful, we can calculate the Pearson Correlation Coefficient (r-value) between this variable and the target. This measures the strength of a linear relationship between two variables and ranges from -1 (perfectly negatively linear) to +1 (perfectly positively linear). The r-value is not the best measure of the "usefulness" of a new variable, but it can give a first approximation of whether a variable will be helpful to a machine learning model. The larger the r-value of a variable with respect to the target, the more a change in this variable is likely to affect the value of the target. Therefore, we look for the variables with the greatest absolute r-value relative to the target. We can also visually inspect a relationship with the target using the Kernel Density Estimate (KDE) plot.

Kernel Density Estimate Plots

The kernel density estimate plot shows the distribution of a single variable (think of it as a smoothed histogram). To see the difference in distributions dependent on the value of a categorical variable, we can color the distributions differently according to the category. For example, we can show the kernel density estimate of the `previous_loan_counts` colored by whether the `TARGET` = 1 or 0. The resulting KDE will show any significant differences in the distribution of the variable between people who did not repay their loan (`TARGET == 1`) and the people who did (`TARGET == 0`). This can serve as an indicator of whether a variable will be 'relevant' to a machine learning model. We will put this plotting functionality in a function to re-use for any variable.
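Before defining that plotting function, here is a tiny standalone sketch of the r-value calculation itself on simulated data (assumed sizes; pandas and scipy should agree on the Pearson coefficient):

```python
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.RandomState(0)
x = rng.normal(size=1000)
y = 0.1 * x + rng.normal(size=1000)   # a deliberately weak linear relationship

r_pandas = pd.Series(x).corr(pd.Series(y))      # Pearson correlation by default
r_scipy, p_value = stats.pearsonr(x, y)
print(r_pandas, r_scipy, p_value)
```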
###Code
# Plots the disribution of a variable colored by value of the target
def kde_target(var_name, df):
# Calculate the correlation coefficient between the new variable and the target
corr = df['TARGET'].corr(df[var_name])
# Calculate medians for repaid vs not repaid
    avg_repaid = df.loc[df['TARGET'] == 0, var_name].median()
    avg_not_repaid = df.loc[df['TARGET'] == 1, var_name].median()
    plt.figure(figsize = (12, 6))
    # Plot the distribution for target == 0 and target == 1 (.loc replaces the removed .ix indexer)
    sns.kdeplot(df.loc[df['TARGET'] == 0, var_name], label = 'TARGET == 0')
    sns.kdeplot(df.loc[df['TARGET'] == 1, var_name], label = 'TARGET == 1')
# label the plot
plt.xlabel(var_name); plt.ylabel('Density'); plt.title('%s Distribution' % var_name)
plt.legend();
# print out the correlation
print('The correlation between %s and the TARGET is %0.4f' % (var_name, corr))
# Print out average values
print('Median value for loan that was not repaid = %0.4f' % avg_not_repaid)
print('Median value for loan that was repaid = %0.4f' % avg_repaid)
###Output
_____no_output_____
###Markdown
We can test this function using the `EXT_SOURCE_3` variable which we [found to be one of the most important variables ](https://www.kaggle.com/willkoehrsen/start-here-a-gentle-introduction) according to a Random Forest and Gradient Boosting Machine.
###Code
kde_target('EXT_SOURCE_3', train)
###Output
The correlation between EXT_SOURCE_3 and the TARGET is -0.1196
Median value for loan that was not repaid = 0.2881
Median value for loan that was repaid = 0.4741
###Markdown
Now for the new variable we just made, the number of previous loans at other institutions.
###Code
kde_target('previous_loan_counts', train)
###Output
The correlation between previous_loan_counts and the TARGET is -0.0100
Median value for loan that was not repaid = 3.0000
Median value for loan that was repaid = 4.0000
###Markdown
From this it's difficult to tell if this variable will be important. The correlation coefficient is extremely weak and there is almost no noticeable difference in the distributions. Let's move on to make a few more variables from the bureau dataframe. We will take the mean, min, and max of every numeric column in the bureau dataframe.

Aggregating Numeric Columns

To account for the numeric information in the `bureau` dataframe, we can compute statistics for all the numeric columns. To do so, we `groupby` the client id, `agg` the grouped dataframe, and merge the result back into the training data. The `agg` function will only calculate the values for the numeric columns where the operation is considered valid. We will stick to using `'mean', 'max', 'min', 'sum'` but any function can be passed in here. We can even write our own function and use it in an `agg` call.
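For instance, a custom aggregation can be mixed with the built-in ones (a toy sketch with a hypothetical `value_range` helper, not part of the notebook's pipeline):

```python
import pandas as pd

def value_range(x):
    """Custom aggregation: max minus min within each group."""
    return x.max() - x.min()

toy = pd.DataFrame({'SK_ID_CURR': [1, 1, 2], 'DAYS_CREDIT': [-100, -300, -50]})
# strings and callables can be combined in the same agg call
print(toy.groupby('SK_ID_CURR')['DAYS_CREDIT'].agg(['mean', 'max', 'min', 'sum', value_range]))
```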
###Code
# Group by the client id, calculate aggregation statistics
bureau_agg = bureau.drop(columns = ['SK_ID_BUREAU']).groupby('SK_ID_CURR', as_index = False).agg(['count', 'mean', 'max', 'min', 'sum']).reset_index()
bureau_agg.head()
###Output
_____no_output_____
###Markdown
We need to create new names for each of these columns. The following code makes new names by appending the stat to the name. Here we have to deal with the fact that the dataframe has a multi-level index. I find these confusing and hard to work with, so I try to reduce to a single level index as quickly as possible.
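For readers who have not worked with multi-level columns before, here is a minimal toy sketch (not the bureau data) of what such an index looks like and one common way to flatten it:

```python
import pandas as pd

toy = pd.DataFrame({'id': [1, 1, 2], 'x': [1.0, 2.0, 3.0]})
agg = toy.groupby('id').agg(['mean', 'max']).reset_index()
print(agg.columns)            # MultiIndex with tuples like ('x', 'mean')

# flatten to single-level names such as 'x_mean'
agg.columns = ['_'.join(col).rstrip('_') for col in agg.columns]
print(agg.columns.tolist())   # ['id', 'x_mean', 'x_max']
```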
###Code
# List of column names
columns = ['SK_ID_CURR']
# Iterate through the variables names
for var in bureau_agg.columns.levels[0]:
# Skip the id name
if var != 'SK_ID_CURR':
# Iterate through the stat names
for stat in bureau_agg.columns.levels[1][:-1]:
# Make a new column name for the variable and stat
columns.append('bureau_%s_%s' % (var, stat))
# Assign the list of columns names as the dataframe column names
bureau_agg.columns = columns
bureau_agg.head()
###Output
_____no_output_____
###Markdown
Now we simply merge with the training data as we did before.
###Code
# Merge with the training data
train = train.merge(bureau_agg, on = 'SK_ID_CURR', how = 'left')
train.head()
###Output
_____no_output_____
###Markdown
Correlations of Aggregated Values with TargetWe can calculate the correlation of all new values with the target. Again, we can use these as an approximation of the variables which may be important for modeling.
###Code
# List of new correlations
new_corrs = []
# Iterate through the columns
for col in columns:
# Calculate correlation with the target
corr = train['TARGET'].corr(train[col])
# Append the list as a tuple
new_corrs.append((col, corr))
###Output
_____no_output_____
###Markdown
In the code below, we sort the correlations by the magnitude (absolute value) using the `sorted` Python function. We also make use of an anonymous `lambda` function, another important Python operation that is good to know.
###Code
# Sort the correlations by the absolute value
# Make sure to reverse to put the largest values at the front of list
new_corrs = sorted(new_corrs, key = lambda x: abs(x[1]), reverse = True)
new_corrs[:15]
###Output
_____no_output_____
###Markdown
None of the new variables have a significant correlation with the TARGET. We can look at the KDE plot of the highest correlated variable with the target, `bureau_DAYS_CREDIT_mean`, in terms of absolute magnitude of correlation.
###Code
train = train.fillna(0)
train.head()
kde_target('bureau_DAYS_CREDIT_mean', train)
###Output
The correlation between bureau_DAYS_CREDIT_mean and the TARGET is 0.0840
Median value for loan that was not repaid = -678.7000
Median value for loan that was repaid = -949.3333
###Markdown
The definition of this column is: "How many days before current application did client apply for Credit Bureau credit". My interpretation is that this is the number of days that the previous loan was applied for before the application for a loan at Home Credit. Therefore, a larger negative number indicates the loan was further before the current loan application. We see an extremely weak positive relationship between the average of this variable and the target, meaning that clients who applied for loans further in the past are potentially more likely to repay loans at Home Credit. With a correlation this weak though, it is just as likely to be noise as a signal.

The Multiple Comparisons Problem

When we have lots of variables, we expect some of them to be correlated just by pure chance, a [problem known as multiple comparisons](https://towardsdatascience.com/the-multiple-comparisons-problem-e5573e8b9578). We can make hundreds of features, and some will turn out to be correlated with the target simply because of random noise in the data. Then, when our model trains, it may overfit to these variables because it thinks they have a relationship with the target in the training set, but this does not necessarily generalize to the test set. There are many considerations that we have to take into account when making features!

Function for Numeric Aggregations

Let's encapsulate all of the previous work into a function. This will allow us to compute aggregate stats for numeric columns across any dataframe. We will re-use this function when we want to apply the same operations for other dataframes.
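As a brief aside before writing the aggregation function, the multiple comparisons effect described above is easy to see in a small simulation (a toy sketch with made-up sizes; it does not touch the competition data):

```python
import numpy as np
import pandas as pd

rng = np.random.RandomState(42)
target = pd.Series(rng.randint(0, 2, size=1000))

# 300 features of pure noise: a handful will still show non-trivial correlation with the target
noise = pd.DataFrame(rng.normal(size=(1000, 300)))
corrs_with_target = noise.apply(lambda col: col.corr(target))
print((corrs_with_target.abs() > 0.06).sum(), 'noise features exceed |r| > 0.06 purely by chance')
```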
###Code
def agg_numeric(df, group_var, df_name):
"""Aggregates the numeric values in a dataframe. This can
be used to create features for each instance of the grouping variable.
Parameters
--------
df (dataframe):
the dataframe to calculate the statistics on
group_var (string):
the variable by which to group df
df_name (string):
the variable used to rename the columns
Return
--------
agg (dataframe):
a dataframe with the statistics aggregated for
all numeric columns. Each instance of the grouping variable will have
the statistics (mean, min, max, sum; currently supported) calculated.
The columns are also renamed to keep track of features created.
"""
# Remove id variables other than grouping variable
for col in df:
if col != group_var and 'SK_ID' in col:
df = df.drop(columns = col)
group_ids = df[group_var]
numeric_df = df.select_dtypes('number')
numeric_df[group_var] = group_ids
# Group by the specified variable and calculate the statistics
agg = numeric_df.groupby(group_var).agg(['count', 'mean', 'max', 'min', 'sum']).reset_index()
# Need to create new column names
columns = [group_var]
# Iterate through the variables names
for var in agg.columns.levels[0]:
# Skip the grouping variable
if var != group_var:
# Iterate through the stat names
for stat in agg.columns.levels[1][:-1]:
# Make a new column name for the variable and stat
columns.append('%s_%s_%s' % (df_name, var, stat))
agg.columns = columns
return agg
bureau_agg_new = agg_numeric(bureau.drop(columns = ['SK_ID_BUREAU']), group_var = 'SK_ID_CURR', df_name = 'bureau')
bureau_agg_new.head()
###Output
_____no_output_____
###Markdown
To make sure the function worked as intended, we should compare with the aggregated dataframe we constructed by hand.
###Code
bureau_agg.head()
###Output
_____no_output_____
###Markdown
If we go through and inspect the values, we do find that they are equivalent. We will be able to reuse this function for calculating numeric stats for other dataframes. Using functions allows for consistent results and decreases the amount of work we have to do in the future! Correlation FunctionBefore we move on, we can also make the code to calculate correlations with the target into a function.
###Code
# Function to calculate correlations with the target for a dataframe
def target_corrs(df):
# List of correlations
corrs = []
# Iterate through the columns
for col in df.columns:
print(col)
# Skip the target column
if col != 'TARGET':
# Calculate correlation with the target
corr = df['TARGET'].corr(df[col])
# Append the list as a tuple
corrs.append((col, corr))
# Sort by absolute magnitude of correlations
corrs = sorted(corrs, key = lambda x: abs(x[1]), reverse = True)
return corrs
###Output
_____no_output_____
###Markdown
Categorical Variables

Now we move from the numeric columns to the categorical columns. These are discrete string variables, so we cannot just calculate statistics such as mean and max which only work with numeric variables. Instead, we will rely on calculating value counts of each category within each categorical variable. As an example, if we have the following dataframe:

| SK_ID_CURR | Loan type |
|------------|-----------|
| 1 | home |
| 1 | home |
| 1 | home |
| 1 | credit |
| 2 | credit |
| 3 | credit |
| 3 | cash |
| 3 | cash |
| 4 | credit |
| 4 | home |
| 4 | home |

we will use this information counting the number of loans in each category for each client.

| SK_ID_CURR | credit count | cash count | home count | total count |
|------------|--------------|------------|------------|-------------|
| 1 | 1 | 0 | 3 | 4 |
| 2 | 1 | 0 | 0 | 1 |
| 3 | 1 | 2 | 0 | 3 |
| 4 | 1 | 0 | 2 | 3 |

Then we can normalize these value counts by the total number of occurrences of that categorical variable for that observation (meaning that the normalized counts must sum to 1.0 for each observation).

| SK_ID_CURR | credit count | cash count | home count | total count | credit count norm | cash count norm | home count norm |
|------------|--------------|------------|------------|-------------|-------------------|-----------------|-----------------|
| 1 | 1 | 0 | 3 | 4 | 0.25 | 0 | 0.75 |
| 2 | 1 | 0 | 0 | 1 | 1.00 | 0 | 0 |
| 3 | 1 | 2 | 0 | 3 | 0.33 | 0.66 | 0 |
| 4 | 1 | 0 | 2 | 3 | 0.33 | 0 | 0.66 |

Hopefully, encoding the categorical variables this way will allow us to capture the information they contain. If anyone has a better idea for this process, please let me know in the comments! We will now go through this process step-by-step. At the end, we will wrap up all the code into one function to be re-used for many dataframes. First we one-hot encode a dataframe with only the categorical columns (`dtype == 'object'`).
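The toy tables above can be reproduced with a few lines of pandas (a self-contained sketch; the real `bureau` data is handled in the cells that follow):

```python
import pandas as pd

toy = pd.DataFrame({'SK_ID_CURR': [1, 1, 1, 1, 2, 3, 3, 3, 4, 4, 4],
                    'Loan type': ['home', 'home', 'home', 'credit', 'credit',
                                  'credit', 'cash', 'cash', 'credit', 'home', 'home']})

dummies = pd.get_dummies(toy['Loan type'])
dummies['SK_ID_CURR'] = toy['SK_ID_CURR']

counts = dummies.groupby('SK_ID_CURR').sum()          # raw counts per category
normalized = counts.div(counts.sum(axis=1), axis=0)   # each row now sums to 1.0
print(counts)
print(normalized)
```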
###Code
categorical = pd.get_dummies(bureau.select_dtypes('object'))
categorical['SK_ID_CURR'] = bureau['SK_ID_CURR']
categorical.head()
categorical_grouped = categorical.groupby('SK_ID_CURR').agg(['sum', 'mean'])
categorical_grouped.head()
###Output
_____no_output_____
###Markdown
The `sum` columns represent the count of that category for the associated client and the `mean` represents the normalized count. One-hot encoding makes the process of calculating these figures very easy!We can use a similar function as before to rename the columns. Again, we have to deal with the multi-level index for the columns. We iterate through the first level (level 0) which is the name of the categorical variable appended with the value of the category (from one-hot encoding). Then we iterate stats we calculated for each client. We will rename the column with the level 0 name appended with the stat. As an example, the column with `CREDIT_ACTIVE_Active` as level 0 and `sum` as level 1 will become `CREDIT_ACTIVE_Active_count`.
###Code
categorical_grouped.columns.levels[0][:10]
categorical_grouped.columns.levels[1]
group_var = 'SK_ID_CURR'
# Need to create new column names
columns = []
# Iterate through the variables names
for var in categorical_grouped.columns.levels[0]:
# Skip the grouping variable
if var != group_var:
# Iterate through the stat names
for stat in ['count', 'count_norm']:
# Make a new column name for the variable and stat
columns.append('%s_%s' % (var, stat))
# Rename the columns
categorical_grouped.columns = columns
categorical_grouped.head()
###Output
_____no_output_____
###Markdown
The sum column records the counts and the mean column records the normalized count. We can merge this dataframe into the training data.
###Code
train = train.merge(categorical_grouped, left_on = 'SK_ID_CURR', right_index = True, how = 'left')
train.head()
train.shape
train.iloc[:10, 123:]
###Output
_____no_output_____
###Markdown
Function to Handle Categorical VariablesTo make the code more efficient, we can now write a function to handle the categorical variables for us. This will take the same form as the `agg_numeric` function in that it accepts a dataframe and a grouping variable. Then it will calculate the counts and normalized counts of each category for all categorical variables in the dataframe.
###Code
def count_categorical(df, group_var, df_name):
"""Computes counts and normalized counts for each observation
of `group_var` of each unique category in every categorical variable
Parameters
--------
df : dataframe
The dataframe to calculate the value counts for.
group_var : string
The variable by which to group the dataframe. For each unique
value of this variable, the final dataframe will have one row
df_name : string
Variable added to the front of column names to keep track of columns
Return
--------
categorical : dataframe
A dataframe with counts and normalized counts of each unique category in every categorical variable
with one row for every unique value of the `group_var`.
"""
# Select the categorical columns
categorical = pd.get_dummies(df.select_dtypes('object'))
# Make sure to put the identifying id on the column
categorical[group_var] = df[group_var]
# Groupby the group var and calculate the sum and mean
categorical = categorical.groupby(group_var).agg(['sum', 'mean'])
column_names = []
# Iterate through the columns in level 0
for var in categorical.columns.levels[0]:
# Iterate through the stats in level 1
for stat in ['count', 'count_norm']:
# Make a new column name
column_names.append('%s_%s_%s' % (df_name, var, stat))
categorical.columns = column_names
return categorical
bureau_counts = count_categorical(bureau, group_var = 'SK_ID_CURR', df_name = 'bureau')
bureau_counts.head()
###Output
_____no_output_____
###Markdown
Applying Operations to another dataframeWe will now turn to the bureau balance dataframe. This dataframe has monthly information about each client's previous loan(s) with other financial institutions. Instead of grouping this dataframe by the `SK_ID_CURR` which is the client id, we will first group the dataframe by the `SK_ID_BUREAU` which is the id of the previous loan. This will give us one row of the dataframe for each loan. Then, we can group by the `SK_ID_CURR` and calculate the aggregations across the loans of each client. The final result will be a dataframe with one row for each client, with stats calculated for their loans.
###Code
# Read in bureau balance
bureau_balance = pd.read_csv('/Users/szkfzx/datasets/home-credit-default-risk/bureau_balance.csv')
bureau_balance.head()
###Output
_____no_output_____
###Markdown
First, we can calculate the value counts of each status for each loan. Fortunately, we already have a function that does this for us!
###Code
# Counts of each type of status for each previous loan
bureau_balance_counts = count_categorical(bureau_balance, group_var = 'SK_ID_BUREAU', df_name = 'bureau_balance')
bureau_balance_counts.head()
###Output
_____no_output_____
###Markdown
Now we can handle the one numeric column. The `MONTHS_BALANCE` column has the "months of balance relative to application date." This might not necessarily be that important as a numeric variable, and in future work we might want to consider this as a time variable. For now, we can just calculate the same aggregation statistics as previously.
###Code
# Calculate value count statistics for each `SK_ID_CURR`
bureau_balance_agg = agg_numeric(bureau_balance, group_var = 'SK_ID_BUREAU', df_name = 'bureau_balance')
bureau_balance_agg.head()
###Output
_____no_output_____
###Markdown
The above dataframes have the calculations done on each _loan_. Now we need to aggregate these for each _client_. We can do this by merging the dataframes together first and then since all the variables are numeric, we just need to aggregate the statistics again, this time grouping by the `SK_ID_CURR`.
###Code
# Dataframe grouped by the loan
bureau_by_loan = bureau_balance_agg.merge(bureau_balance_counts, right_index = True, left_on = 'SK_ID_BUREAU', how = 'outer')
# Merge to include the SK_ID_CURR
bureau_by_loan = bureau_by_loan.merge(bureau[['SK_ID_BUREAU', 'SK_ID_CURR']], on = 'SK_ID_BUREAU', how = 'left')
bureau_by_loan.head()
bureau_balance_by_client = agg_numeric(bureau_by_loan.drop(columns = ['SK_ID_BUREAU']), group_var = 'SK_ID_CURR', df_name = 'client')
bureau_balance_by_client.head()
###Output
_____no_output_____
###Markdown
To recap, for the `bureau_balance` dataframe we:
1. Calculated numeric stats grouping by each loan
2. Made value counts of each categorical variable grouping by loan
3. Merged the stats and the value counts on the loans
4. Calculated numeric stats for the resulting dataframe grouping by the client id

The final resulting dataframe has one row for each client, with statistics calculated for all of their loans with monthly balance information. Some of these variables are a little confusing, so let's try to explain a few:
* `client_bureau_balance_MONTHS_BALANCE_mean_mean`: For each loan calculate the mean value of `MONTHS_BALANCE`. Then for each client, calculate the mean of this value for all of their loans.
* `client_bureau_balance_STATUS_X_count_norm_sum`: For each loan, calculate the number of occurrences of `STATUS` == X divided by the number of total `STATUS` values for the loan. Then, for each client, add up the values for each loan.

We will hold off on calculating the correlations until we have all the variables together in one dataframe.

Putting the Functions Together

We now have all the pieces in place to take the information from the previous loans at other institutions and the monthly payments information about these loans and put them into the main training dataframe. Let's do a reset of all the variables and then use the functions we built to do this from the ground up. This demonstrates the benefit of using functions for repeatable workflows!
###Code
# Free up memory by deleting old objects
import gc
gc.enable()
del train, bureau, bureau_balance, bureau_agg, bureau_agg_new, bureau_balance_agg, bureau_balance_counts, bureau_by_loan, bureau_balance_by_client, bureau_counts
gc.collect()
# Read in new copies of all the dataframes
train = pd.read_csv('/Users/szkfzx/datasets/home-credit-default-risk/application_train.csv')
bureau = pd.read_csv('/Users/szkfzx/datasets/home-credit-default-risk/bureau.csv')
bureau_balance = pd.read_csv('/Users/szkfzx/datasets/home-credit-default-risk/bureau_balance.csv')
###Output
_____no_output_____
###Markdown
Counts of Bureau Dataframe
###Code
bureau_counts = count_categorical(bureau, group_var = 'SK_ID_CURR', df_name = 'bureau')
bureau_counts.head()
###Output
_____no_output_____
###Markdown
Aggregated Stats of Bureau Dataframe
###Code
bureau_agg = agg_numeric(bureau.drop(columns = ['SK_ID_BUREAU']), group_var = 'SK_ID_CURR', df_name = 'bureau')
bureau_agg.head()
###Output
_____no_output_____
###Markdown
Value counts of Bureau Balance dataframe by loan
###Code
bureau_balance_counts = count_categorical(bureau_balance, group_var = 'SK_ID_BUREAU', df_name = 'bureau_balance')
bureau_balance_counts.head()
###Output
_____no_output_____
###Markdown
Aggregated stats of Bureau Balance dataframe by loan
###Code
bureau_balance_agg = agg_numeric(bureau_balance, group_var = 'SK_ID_BUREAU', df_name = 'bureau_balance')
bureau_balance_agg.head()
###Output
_____no_output_____
###Markdown
Aggregated Stats of Bureau Balance by Client
###Code
# Dataframe grouped by the loan
bureau_by_loan = bureau_balance_agg.merge(bureau_balance_counts, right_index = True, left_on = 'SK_ID_BUREAU', how = 'outer')
# Merge to include the SK_ID_CURR
bureau_by_loan = bureau[['SK_ID_BUREAU', 'SK_ID_CURR']].merge(bureau_by_loan, on = 'SK_ID_BUREAU', how = 'left')
# Aggregate the stats for each client
bureau_balance_by_client = agg_numeric(bureau_by_loan.drop(columns = ['SK_ID_BUREAU']), group_var = 'SK_ID_CURR', df_name = 'client')
###Output
_____no_output_____
###Markdown
Insert Computed Features into Training Data
###Code
original_features = list(train.columns)
print('Original Number of Features: ', len(original_features))
# Merge with the value counts of bureau
train = train.merge(bureau_counts, on = 'SK_ID_CURR', how = 'left')
# Merge with the stats of bureau
train = train.merge(bureau_agg, on = 'SK_ID_CURR', how = 'left')
# Merge with the monthly information grouped by client
train = train.merge(bureau_balance_by_client, on = 'SK_ID_CURR', how = 'left')
new_features = list(train.columns)
print('Number of features using previous loans from other institutions data: ', len(new_features))
###Output
Number of features using previous loans from other institutions data: 333
###Markdown
Feature Engineering Outcomes

After all that work, now we want to take a look at the variables we have created. We can look at the percentage of missing values, the correlations of variables with the target, and also the correlation of variables with the other variables. The correlations between variables can show if we have collinear variables, that is, variables that are highly correlated with one another. Often, we want to remove one in a pair of collinear variables because having both variables would be redundant. We can also use the percentage of missing values to remove features with a substantial majority of values that are not present.

__Feature selection__ will be an important focus going forward, because reducing the number of features can help the model learn during training and also generalize better to the testing data. The "curse of dimensionality" is the name given to the issues caused by having too many features (too high of a dimension). As the number of variables increases, the number of datapoints needed to learn the relationship between these variables and the target value increases exponentially. Feature selection is the process of removing variables to help our model to learn and generalize better to the testing set. The objective is to remove useless/redundant variables while preserving those that are useful. There are a number of tools we can use for this process, but in this notebook we will stick to removing columns with a high percentage of missing values and variables that have a high correlation with one another. Later we can look at using the feature importances returned from models such as the `Gradient Boosting Machine` or `Random Forest` to perform feature selection.

Missing Values

An important consideration is the missing values in the dataframe. Columns with too many missing values might have to be dropped.
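As a compact illustration of the missing-value idea (a toy sketch; the notebook's own, more detailed `missing_values_table` function follows), a threshold-based drop can be written in one line:

```python
import numpy as np
import pandas as pd

toy = pd.DataFrame({'a': [1, 2, 3, 4],
                    'b': [np.nan] * 4,
                    'c': [1, np.nan, np.nan, np.nan]})
threshold = 0.9                                        # drop columns with more than 90% missing
kept = toy.loc[:, toy.isnull().mean() <= threshold]
print(kept.columns.tolist())                           # ['a', 'c'] -- only 'b' is fully missing
```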
###Code
# Function to calculate missing values by column# Funct
def missing_values_table(df):
# Total missing values
mis_val = df.isnull().sum()
# Percentage of missing values
mis_val_percent = 100 * df.isnull().sum() / len(df)
# Make a table with the results
mis_val_table = pd.concat([mis_val, mis_val_percent], axis=1)
# Rename the columns
mis_val_table_ren_columns = mis_val_table.rename(
columns = {0 : 'Missing Values', 1 : '% of Total Values'})
# Sort the table by percentage of missing descending
mis_val_table_ren_columns = mis_val_table_ren_columns[
mis_val_table_ren_columns.iloc[:,1] != 0].sort_values(
'% of Total Values', ascending=False).round(1)
# Print some summary information
print ("Your selected dataframe has " + str(df.shape[1]) + " columns.\n"
"There are " + str(mis_val_table_ren_columns.shape[0]) +
" columns that have missing values.")
# Return the dataframe with missing information
return mis_val_table_ren_columns
missing_train = missing_values_table(train)
missing_train.head(10)
###Output
Your selected dataframe has 333 columns.
There are 278 columns that have missing values.
###Markdown
We see there are a number of columns with a high percentage of missing values. There is no well-established threshold for removing missing values, and the best course of action depends on the problem. Here, to reduce the number of features, we will remove any columns in either the training or the testing data that have greater than 90% missing values.
###Code
missing_train_vars = list(missing_train.index[missing_train['% of Total Values'] > 90])
len(missing_train_vars)
###Output
_____no_output_____
###Markdown
Before we remove the missing values, we will find the missing value percentages in the testing data. We'll then remove any columns with greater than 90% missing values in either the training or testing data.Let's now read in the testing data, perform the same operations, and look at the missing values in the testing data. We already have calculated all the counts and aggregation statistics, so we only need to merge the testing data with the appropriate data. Calculate Information for Testing Data
###Code
# Read in the test dataframe
test = pd.read_csv('/Users/szkfzx/datasets/home-credit-default-risk/application_test.csv')
# Merge with the value counts of bureau
test = test.merge(bureau_counts, on = 'SK_ID_CURR', how = 'left')
# Merge with the stats of bureau
test = test.merge(bureau_agg, on = 'SK_ID_CURR', how = 'left')
# Merge with the value counts of bureau balance
test = test.merge(bureau_balance_by_client, on = 'SK_ID_CURR', how = 'left')
print('Shape of Testing Data: ', test.shape)
###Output
Shape of Testing Data: (48744, 332)
###Markdown
We need to align the testing and training dataframes, which means matching up the columns so they have the exact same columns. This shouldn't be an issue here, but when we one-hot encode variables, we need to align the dataframes to make sure they have the same columns.
###Code
train_labels = train['TARGET']
# Align the dataframes, this will remove the 'TARGET' column
train, test = train.align(test, join = 'inner', axis = 1)
train['TARGET'] = train_labels
print('Training Data Shape: ', train.shape)
print('Testing Data Shape: ', test.shape)
###Output
Training Data Shape: (307511, 333)
Testing Data Shape: (48744, 332)
###Markdown
The dataframes now have the same columns (with the exception of the `TARGET` column in the training data). This means we can use them in a machine learning model which needs to see the same columns in both the training and testing dataframes.Let's now look at the percentage of missing values in the testing data so we can figure out the columns that should be dropped.
###Code
missing_test = missing_values_table(test)
missing_test.head(10)
missing_test_vars = list(missing_test.index[missing_test['% of Total Values'] > 90])
len(missing_test_vars)
missing_columns = list(set(missing_test_vars + missing_train_vars))
print('There are %d columns with more than 90%% missing in either the training or testing data.' % len(missing_columns))
# Drop the missing columns
train = train.drop(columns = missing_columns)
test = test.drop(columns = missing_columns)
###Output
_____no_output_____
###Markdown
We ended up removing no columns in this round because there are no columns with more than 90% missing values. We might have to apply another feature selection method to reduce the dimensionality. At this point we will save both the training and testing data. I encourage anyone to try different percentages for dropping the missing columns and compare the outcomes.
###Code
train.to_csv('./submission_2/train_bureau_raw.csv', index = False)
test.to_csv('./submission_2/test_bureau_raw.csv', index = False)
###Output
_____no_output_____
###Markdown
Correlations

First let's look at the correlations of the variables with the target. We can see if any of the variables we created have a greater correlation than those already present in the training data (from `application`).
###Code
# Calculate all correlations in dataframe
corrs = train.corr()
corrs = corrs.sort_values('TARGET', ascending = False)
# Ten most positive correlations
pd.DataFrame(corrs['TARGET'].head(10))
# Ten most negative correlations
pd.DataFrame(corrs['TARGET'].dropna().tail(10))
###Output
_____no_output_____
###Markdown
The highest correlated variable with the target (other than the `TARGET`, which of course has a correlation of 1) is a variable we created. However, just because the variable is correlated does not mean that it will be useful, and we have to remember that if we generate hundreds of new variables, some are going to be correlated with the target simply because of random noise. Viewing the correlations skeptically, it does appear that several of the newly created variables may be useful. To assess the "usefulness" of variables, we will look at the feature importances returned by the model. For curiosity's sake (and because we already wrote the function) we can make a kde plot of two of the newly created variables.
###Code
train = train.fillna(0)
train.head()
kde_target(var_name='client_bureau_balance_MONTHS_BALANCE_count_mean', df=train)
###Output
The correlation between client_bureau_balance_MONTHS_BALANCE_count_mean and the TARGET is -0.0247
Median value for loan that was not repaid = 0.0000
Median value for loan that was repaid = 0.0000
###Markdown
This variable represents the average number of monthly records per loan for each client. For example, if a client had three previous loans with 3, 4, and 5 records in the monthly data, the value of this variable for them would be 4. Based on the distribution, clients with a greater number of average monthly records per loan were more likely to repay their loans with Home Credit. Let's not read too much into this value, but it could indicate that clients who have had more previous credit history are generally more likely to repay a loan.
###Code
kde_target(var_name='bureau_CREDIT_ACTIVE_Active_count_norm', df=train)
###Output
The correlation between bureau_CREDIT_ACTIVE_Active_count_norm and the TARGET is 0.0487
Median value for loan that was not repaid = 0.4000
Median value for loan that was repaid = 0.3333
###Markdown
Well this distribution is all over the place. This variable represents the number of previous loans with a `CREDIT_ACTIVE` value of `Active` divided by the total number of previous loans for a client. The correlation here is so weak that I do not think we should draw any conclusions!

Collinear Variables

We can calculate not only the correlations of the variables with the target, but also the correlation of each variable with every other variable. This will allow us to see if there are highly collinear variables that should perhaps be removed from the data. Let's look for any variables that have a greater than 0.8 correlation with other variables.
###Code
# Set the threshold
threshold = 0.8
# Empty dictionary to hold correlated variables
above_threshold_vars = {}
# For each column, record the variables that are above the threshold
for col in corrs:
above_threshold_vars[col] = list(corrs.index[corrs[col] > threshold])
###Output
_____no_output_____
###Markdown
For each of these pairs of highly correlated variables, we only want to remove one of the variables. The following code creates a set of variables to remove by only adding one of each pair.
###Code
# Track columns to remove and columns already examined
cols_to_remove = []
cols_seen = []
cols_to_remove_pair = []
# Iterate through columns and correlated columns
for key, value in above_threshold_vars.items():
# Keep track of columns already examined
cols_seen.append(key)
for x in value:
if x == key:
            continue  # explicitly skip the variable itself (the original bare `next` was a no-op)
else:
# Only want to remove one in a pair
if x not in cols_seen:
cols_to_remove.append(x)
cols_to_remove_pair.append(key)
cols_to_remove = list(set(cols_to_remove))
print('Number of columns to remove: ', len(cols_to_remove))
###Output
Number of columns to remove: 134
###Markdown
We can remove these columns from both the training and the testing datasets. We will have to compare performance after removing these variables with performance keeping these variables (the raw csv files we saved earlier).
###Code
train_corrs_removed = train.drop(columns = cols_to_remove)
test_corrs_removed = test.drop(columns = cols_to_remove)
print('Training Corrs Removed Shape: ', train_corrs_removed.shape)
print('Testing Corrs Removed Shape: ', test_corrs_removed.shape)
train_corrs_removed.to_csv('./submission_2/train_bureau_corrs_removed.csv', index = False)
test_corrs_removed.to_csv('./submission_2/test_bureau_corrs_removed.csv', index = False)
###Output
_____no_output_____
###Markdown
Modeling To actually test the performance of these new datasets, we will try using them for machine learning! Here we will use a function I developed in another notebook to compare the features (the raw version with the highly correlated variables removed). We can run this kind of like an experiment, and the control will be the performance of just the `application` data in this function when submitted to the competition. I've already recorded that performance, so we can list out our control and our two test conditions:__For all datasets, use the model shown below (with the exact hyperparameters).__* control: only the data in the `application` files. * test one: the data in the `application` files with all of the data recorded from the `bureau` and `bureau_balance` files* test two: the data in the `application` files with all of the data recorded from the `bureau` and `bureau_balance` files with highly correlated variables removed.
###Code
import lightgbm as lgb
from sklearn.model_selection import KFold
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import LabelEncoder
import gc
import matplotlib.pyplot as plt
def model(features, test_features, encoding = 'ohe', n_folds = 5):
"""Train and test a light gradient boosting model using
cross validation.
Parameters
--------
features (pd.DataFrame):
dataframe of training features to use
for training a model. Must include the TARGET column.
test_features (pd.DataFrame):
dataframe of testing features to use
for making predictions with the model.
encoding (str, default = 'ohe'):
method for encoding categorical variables. Either 'ohe' for one-hot encoding or 'le' for integer label encoding
n_folds (int, default = 5): number of folds to use for cross validation
Return
--------
submission (pd.DataFrame):
dataframe with `SK_ID_CURR` and `TARGET` probabilities
predicted by the model.
feature_importances (pd.DataFrame):
dataframe with the feature importances from the model.
valid_metrics (pd.DataFrame):
dataframe with training and validation metrics (ROC AUC) for each fold and overall.
"""
# Extract the ids
train_ids = features['SK_ID_CURR']
test_ids = test_features['SK_ID_CURR']
# Extract the labels for training
labels = features['TARGET']
# Remove the ids and target
features = features.drop(columns = ['SK_ID_CURR', 'TARGET'])
test_features = test_features.drop(columns = ['SK_ID_CURR'])
# One Hot Encoding
if encoding == 'ohe':
features = pd.get_dummies(features)
test_features = pd.get_dummies(test_features)
# Align the dataframes by the columns
features, test_features = features.align(test_features, join = 'inner', axis = 1)
# No categorical indices to record
cat_indices = 'auto'
# Integer label encoding
elif encoding == 'le':
# Create a label encoder
label_encoder = LabelEncoder()
# List for storing categorical indices
cat_indices = []
# Iterate through each column
for i, col in enumerate(features):
if features[col].dtype == 'object':
# Map the categorical features to integers
features[col] = label_encoder.fit_transform(np.array(features[col].astype(str)).reshape((-1,)))
test_features[col] = label_encoder.transform(np.array(test_features[col].astype(str)).reshape((-1,)))
# Record the categorical indices
cat_indices.append(i)
# Catch error if label encoding scheme is not valid
else:
raise ValueError("Encoding must be either 'ohe' or 'le'")
print('Training Data Shape: ', features.shape)
print('Testing Data Shape: ', test_features.shape)
# Extract feature names
feature_names = list(features.columns)
# Convert to np arrays
features = np.array(features)
test_features = np.array(test_features)
# Create the kfold object
    k_fold = KFold(n_splits = n_folds, shuffle = False)  # random_state only applies when shuffle=True (newer scikit-learn raises otherwise)
# Empty array for feature importances
feature_importance_values = np.zeros(len(feature_names))
# Empty array for test predictions
test_predictions = np.zeros(test_features.shape[0])
# Empty array for out of fold validation predictions
out_of_fold = np.zeros(features.shape[0])
# Lists for recording validation and training scores
valid_scores = []
train_scores = []
# Iterate through each fold
for train_indices, valid_indices in k_fold.split(features):
# Training data for the fold
train_features, train_labels = features[train_indices], labels[train_indices]
# Validation data for the fold
valid_features, valid_labels = features[valid_indices], labels[valid_indices]
# Create the model
model = lgb.LGBMClassifier(n_estimators=10000, objective = 'binary',
class_weight = 'balanced', learning_rate = 0.05,
reg_alpha = 0.1, reg_lambda = 0.1,
subsample = 0.8, n_jobs = -1, random_state = 50)
# Train the model
model.fit(train_features, train_labels, eval_metric = 'auc',
eval_set = [(valid_features, valid_labels), (train_features, train_labels)],
eval_names = ['valid', 'train'], categorical_feature = cat_indices,
early_stopping_rounds = 100, verbose = 200)
# Record the best iteration
best_iteration = model.best_iteration_
# Record the feature importances
feature_importance_values += model.feature_importances_ / k_fold.n_splits
# Make predictions
test_predictions += model.predict_proba(test_features, num_iteration = best_iteration)[:, 1] / k_fold.n_splits
# Record the out of fold predictions
out_of_fold[valid_indices] = model.predict_proba(valid_features, num_iteration = best_iteration)[:, 1]
# Record the best score
valid_score = model.best_score_['valid']['auc']
train_score = model.best_score_['train']['auc']
valid_scores.append(valid_score)
train_scores.append(train_score)
# Clean up memory
gc.enable()
del model, train_features, valid_features
gc.collect()
# Make the submission dataframe
submission = pd.DataFrame({'SK_ID_CURR': test_ids, 'TARGET': test_predictions})
# Make the feature importance dataframe
feature_importances = pd.DataFrame({'feature': feature_names, 'importance': feature_importance_values})
# Overall validation score
valid_auc = roc_auc_score(labels, out_of_fold)
# Add the overall scores to the metrics
valid_scores.append(valid_auc)
train_scores.append(np.mean(train_scores))
# Needed for creating dataframe of validation scores
fold_names = list(range(n_folds))
fold_names.append('overall')
# Dataframe of validation scores
metrics = pd.DataFrame({'fold': fold_names,
'train': train_scores,
'valid': valid_scores})
return submission, feature_importances, metrics
def plot_feature_importances(df):
"""
Plot importances returned by a model. This can work with any measure of
feature importance provided that higher importance is better.
Args:
    df (dataframe): feature importances. Must have the features in a column
    called `feature` and the importances in a column called `importance`
Returns:
shows a plot of the 15 most importance features
df (dataframe): feature importances sorted by importance (highest to lowest)
with a column for normalized importance
"""
# Sort features according to importance
df = df.sort_values('importance', ascending = False).reset_index()
# Normalize the feature importances to add up to one
df['importance_normalized'] = df['importance'] / df['importance'].sum()
# Make a horizontal bar chart of feature importances
plt.figure(figsize = (10, 6))
ax = plt.subplot()
# Need to reverse the index to plot most important on top
ax.barh(list(reversed(list(df.index[:15]))),
df['importance_normalized'].head(15),
align = 'center', edgecolor = 'k')
# Set the yticks and labels
ax.set_yticks(list(reversed(list(df.index[:15]))))
ax.set_yticklabels(df['feature'].head(15))
# Plot labeling
plt.xlabel('Normalized Importance'); plt.title('Feature Importances')
plt.show()
return df
###Output
_____no_output_____
###Markdown
ControlThe first step in any experiment is establishing a control. For this we will use the function defined above (that implements a Gradient Boosting Machine model) and the single main data source (`application`).
###Code
train_control = pd.read_csv('/Users/szkfzx/datasets/home-credit-default-risk/application_train.csv')
test_control = pd.read_csv('/Users/szkfzx/datasets/home-credit-default-risk/application_test.csv')
###Output
_____no_output_____
###Markdown
Fortunately, once we have taken the time to write a function, using it is simple (if there's a central theme in this notebook, it's use functions to make things simpler and reproducible!). The function above returns a `submission` dataframe we can upload to the competition, a `fi` dataframe of feature importances, and a `metrics` dataframe with validation and test performance.
###Code
submission, fi, metrics = model(train_control, test_control)
metrics
###Output
_____no_output_____
###Markdown
The control slightly overfits because the training score is higher than the validation score. We can address this in later notebooks when we look at regularization (we already perform some regularization in this model by using `reg_lambda` and `reg_alpha` as well as early stopping). We can visualize the feature importance with another function, `plot_feature_importances`. The feature importances may be useful when it's time for feature selection.
###Code
fi_sorted = plot_feature_importances(fi)
submission.to_csv('./submission_2/control.csv', index = False)
###Output
_____no_output_____
###Markdown
__The control scores 0.745 when submitted to the competition.__ Test OneLet's conduct the first test. We will just need to pass in the data to the function, which does most of the work for us.
###Code
submission_raw, fi_raw, metrics_raw = model(train, test)
metrics_raw
###Output
_____no_output_____
###Markdown
Based on these numbers, the engineered features perform better than the control case. However, we will have to submit the predictions to the leaderboard before we can say if this better validation performance transfers to the testing data.
###Code
fi_raw_sorted = plot_feature_importances(fi_raw)
###Output
_____no_output_____
###Markdown
Examining the feature importances, it looks as if a few of the features we constructed are among the most important. Let's find the percentage of the top 100 most important features that we made in this notebook. However, rather than just compare to the original features, we need to compare to the _one-hot encoded_ original features. These are already recorded for us in `fi` (from the original data).
###Code
top_100 = list(fi_raw_sorted['feature'])[:100]
new_features = [x for x in top_100 if x not in list(fi['feature'])]
print('%% of Top 100 Features created from the bureau data = %d.00' % len(new_features))
###Output
_____no_output_____
###Markdown
Over half of the top 100 features were made by us! That should give us confidence that all the hard work we did was worthwhile.
###Code
submission_raw.to_csv('./submission_2/test_one.csv', index = False)
###Output
_____no_output_____
###Markdown
__Test one scores 0.759 when submitted to the competition.__ Test TwoThat was easy, so let's do another run! Same as before but with the highly collinear variables removed.
###Code
submission_corrs, fi_corrs, metrics_corr = model(train_corrs_removed, test_corrs_removed)
metrics_corr
###Output
_____no_output_____
###Markdown
These results are better than the control, but slightly lower than the raw features.
###Code
fi_corrs_sorted = plot_feature_importances(fi_corrs)
submission_corrs.to_csv('./submission_2/test_two.csv', index = False)
###Output
_____no_output_____ |
available_gates.ipynb | ###Markdown
Definition of all avalaible gates In the following table, we list and give the definition of all available gates on the QLM.<table align="center" style="width:100% ; border: 1px solid black" > Gate name pyAQASM name qubits Matrix Hadamard H 1 $ \begin{vmatrix} \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} \\ \end{vmatrix}$ Pauli X X 1 $ \begin{vmatrix} 0 & 1 \\ 1 & 0 \\ \end{vmatrix}$ Pauli Y Y 1 $ \begin{vmatrix} 0 & -i \\ i & 0 \\ \end{vmatrix}$ Pauli Z Z 1 $ \begin{vmatrix} 1 & 0 \\ 0 & -1 \\ \end{vmatrix}$ Identity I 1 $ \begin{vmatrix} 1 & 0 \\ 0 & 1 \\ \end{vmatrix}$ Phase Shift PH($\theta$) 1 $\forall \theta \in\rm I\!R$: $\begin{vmatrix} 1 & 0 \\ 0 & e^{i\theta} \\ \end{vmatrix}$ Phase Shift gate of $\frac{\pi}{2}$ S 1 $ \begin{vmatrix} 1 & 0 \\ 0 & i \\ \end{vmatrix}$ Phase Shift gate of $\frac{\pi}{4}$ T 1 $ \begin{vmatrix} 1 & 0 \\ 0 & e^{i\frac{\pi}{4}} \\ \end{vmatrix}$ X Rotation RX($\theta$) 1 $\forall \theta \in\rm I\!R$: $\begin{vmatrix} \cos(\frac{\theta}{2}) & -i\sin(\frac{\theta}{2}) ~\\ -i\sin(\frac{\theta}{2}) & \cos(\frac{\theta}{2}) \\ \end{vmatrix}$ Y Rotation RY($\theta$) 1 $\forall \theta \in\rm I\!R$: $\begin{vmatrix} \cos(\frac{\theta}{2}) & -\sin(\frac{\theta}{2}) ~\\ \sin(\frac{\theta}{2}) & \cos(\frac{\theta}{2}) \\ \end{vmatrix}$ Z Rotation RZ($\theta$) 1 $\forall \theta \in\rm I\!R$: $\begin{vmatrix} e^{-i\frac{\theta}{2}} & 0 \\ 0 & e^{i\frac{\theta}{2}} \\ \end{vmatrix}$ Controlled NOT CNOT 2 $\begin{vmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ \end{vmatrix}$ SWAP SWAP 2 $\begin{vmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ \end{vmatrix}$ iSWAP ISWAP 2 $\begin{vmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & i & 0 \\ 0 & i & 0 & 0 \\ 0 & 0 & 0 & 1 \\ \end{vmatrix}$ $\sqrt{\text{SWAP}}$ SQRTSWAP 2 $\begin{vmatrix} 1 & 0 & 0 & 0 \\ 0 & \frac{1}{2}(1 + i) & \frac{1}{2}(1 - i) & 0 \\ 0 & \frac{1}{2}(1 - i) & \frac{1}{2}(1 + i) & 0 \\ 0 & 0 & 0 & 1 \\ \end{vmatrix}$ Toffoli CCNOT 3 $\begin{vmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ \end{vmatrix}$ Short exampleHere is a short example to illustrate the creation of a quantum program with all those gates:
###Code
from qat.lang.AQASM import Program, H, X, Y, Z, I, PH, S, T, RX, RY, RZ, CNOT, ISWAP, SQRTSWAP, CCNOT, SWAP
p = Program()
reg = p.qalloc(3)
p.apply(H, reg[0])
p.apply(X, reg[0])
p.apply(Y, reg[2])
p.apply(Z, reg[1])
p.apply(I, reg[1])
p.apply(S, reg[0])
p.apply(T, reg[0])
p.apply(PH(0.3), reg[0])
p.apply(RX(-0.3), reg[0])
p.apply(RY(0.6), reg[1])
p.apply(RZ(0.3), reg[0])
p.apply(CNOT, reg[0:2])
p.apply(SWAP, reg[0], reg[2])
p.apply(ISWAP, reg[1:3])
p.apply(SQRTSWAP, reg[0:2])
p.apply(CCNOT, reg)
circuit = p.to_circ()
%qatdisplay circuit
###Output
_____no_output_____ |
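As a quick, framework-free check of the matrix definitions in the table above, here is a small NumPy-only sketch (an illustration, not part of the QLM API) that builds two of the matrices and verifies they are unitary:

```python
import numpy as np

theta = 0.3
RX = np.array([[np.cos(theta / 2), -1j * np.sin(theta / 2)],
               [-1j * np.sin(theta / 2), np.cos(theta / 2)]])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

# both gates are unitary: U^dagger U = I
for U in (RX, H):
    assert np.allclose(U.conj().T @ U, np.eye(2))
```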
IBF/.ipynb_checkpoints/benchmark-checkpoint.ipynb | ###Markdown
Insert Benchmark Evaluation
###Code
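# NOTE (added for completeness): the earlier cells that define `path` (the directory holding
# the benchmark JSON files) and `pattern` (the regex used to parse benchmark names) are not
# shown here. The imports below are assumed to be the ones this cell relies on.
import os
import json
import re
import pandas as pd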
filename = os.path.join(path, "insert.json")
with open(filename, "r") as f:
x = json.load(f)
row_list = []
for benchmark in x['benchmarks']:
row_dict = {}
[test, spec, alphabet, strategy, bitvector, bins, k, ram, h] = re.match(pattern, benchmark['name']).groups()
if ram is None:
ram = int(4**int(k)*int(bins)/1024/1024/8)
else:
ram = int(2**int(ram)/1024/1024/8)
if h is None:
h = 1
time = "{0:,.2f}".format(benchmark['real_time']/10**9/60)
size = "{0:,}".format(int(benchmark['Size']))
row_dict['Function'] = test
row_dict['BD'] = spec
row_dict['Alphabet'] = alphabet
row_dict['Strategy'] = strategy
row_dict['Bitvector'] = bitvector
row_dict['bins'] = bins
row_dict['k'] = k
row_dict['RAM'] = "{0:,}".format(ram)
row_dict['h'] = h
row_dict['Time'] = time
row_dict['Size'] = size
row_list.append(row_dict)
df = pd.DataFrame(row_list)
df = df[["Function",
"BD",
"Alphabet",
"Strategy",
"Bitvector",
"bins",
"k",
"h",
"RAM",
"Size",
"Time"]]
with pd.option_context('display.max_rows', None, 'display.max_columns', None):
display(df)
df.to_csv(os.path.join(path, "insertBenchmark.tsv"), sep='\t', index=False)
###Output
_____no_output_____
###Markdown
Select Benchmark Evaluation
###Code
filename = os.path.join(path, "select.json")
with open(filename, "r") as f:
x = json.load(f)
row_list = []
for benchmark in x['benchmarks']:
row_dict = {}
[test, spec, alphabet, strategy, bitvector, bins, k, ram, h] = re.match(pattern, benchmark['name']).groups()
if ram is None:
ram = int(4**int(k)*int(bins)/1024/1024/8)
else:
ram = int(2**int(ram)/1024/1024/8)
if h is None:
h = 1
#time = round(benchmark['real_time']/10**9,2)
#size = int(benchmark['Size'])
row_dict['Full Time'] = "{0:,.2f}".format(benchmark['fullTime'])
row_dict['load BD'] = "{0:,.2f}".format(benchmark['loadingTime'])
row_dict['load Reads'] = "{0:,.2f}".format(benchmark['ioTime'])
row_dict['sum Select'] = "{0:,.2f}".format(benchmark['selectTime'])
row_dict['avg Select'] = "{0:,.2f}".format(benchmark['selectTime'] / 32)
row_dict['Threads'] = 32
row_dict['TP'] = "{0:,}".format(int(benchmark['TP']))
row_dict['FN'] = "{0:,}".format(int(benchmark['FN']))
row_dict['FP'] = "{0:,}".format(int(benchmark['FP']))
row_dict['P'] = "{0:,}".format(int(benchmark['P']))
row_dict['readNo'] = "{0:,}".format(int(benchmark['readNo']))
row_dict['Absolute Verifications'] = "{0:,}".format(int(benchmark['verifications']))
row_dict['Verifications per read'] = "{0:,.2f}".format(benchmark['Verifications'])
row_dict['Sensitivity'] = benchmark['Sensitivity']
row_dict['Precision'] = benchmark['Precision']
row_dict['FNR'] = "{0:,.2f}".format(benchmark['FNR'])
row_dict['FDR'] = "{0:,.2f}".format(benchmark['FDR'])
row_dict['Function'] = test
row_dict['BD'] = spec
row_dict['Alphabet'] = alphabet
row_dict['Strategy'] = strategy
row_dict['Bitvector'] = bitvector
row_dict['bins'] = bins
row_dict['k'] = k
row_dict['RAM'] = "{0:,}".format(int(ram))
row_dict['h'] = h
#row_dict['Time'] = time
#row_dict['Size'] = size
row_list.append(row_dict)
df = pd.DataFrame(row_list)
df = df[["Function",
"BD",
"Alphabet",
"Strategy",
"Bitvector",
"bins",
"k",
"h",
"RAM",
"Full Time",
"load BD",
"load Reads",
"sum Select",
"avg Select",
"TP",
"FN",
"FP",
"P",
"readNo",
"Absolute Verifications",
"Verifications per read",
"Sensitivity",
"Precision",
"FNR",
"FDR",
]]
with pd.option_context('display.max_rows', None, 'display.max_columns', None):
display(df)
df.to_csv(os.path.join(path, "selectBenchmark.tsv"), sep='\t', index=False)
###Output
_____no_output_____ |
notebooks/1 - Intro.ipynb | ###Markdown
 About CitiCiti is one of the world's biggest financial service providers. The firm spans across multiple segments of the financial industry from institutional banking to retail wealth management and many more. The organisation is huge. It has over 200,000 employees in 160 countries. About us Markets Quantitative Analysis (MQA)MQA is part of the Institutional Clients Group (ICG) within Citi. Our main focus is to provide quantitative support for ICG Markets. MQA is present (by descending order of size) in New York, London, Budapest and other offices (Houston, Tokyo, Singapore, Hong Kong). In Budapest our team currently has 30+ colleagues sitting at Bank Center (two blocks from here). As MQA's remit spans multiple disciplines, we have team members with diverse backgrounds: Quants with Math and Physics degrees, Quantitative Developers and Quant Support with many types of IT background. MQA provides solutions to deal with the complex tasks around analysing OTC traded and market traded financial products. The main asset classes we work with are Rates, FX, Commodities, Equities, Mortgages and Multi asset products. We work in multiple programming languages and apply several tools. We are heavy users of C++, Python and Excel. Other coding languages we use frequently include Matlab, R and Perl. TradingCiti has both Sales and Trading desks in Budapest’s dealing room. Trading is responsible for market making of HUF denominated government securities and their short and long term derivatives. They are one of the main primary dealers of local currency denominated government securities. Sales covers Citi’s local and international corporate clients and financial institutions, and offers a full range of currency, interest rate or commodity related products and complex risk management solutions to them. Contact usYou can reach us at [email protected]__ About youWe will start the first class by asking you about your past courses, your experience, and your current interests. Course syllabusClasses are always from 17:30 to 19:10. Usually on Fridays, otherwise marked.| Date | Title | Lecturer(Assistant) | Description ||:-----|:------|:---------------------|:------------|| Jan 11 Fri | [Introduction](1 - Intro.ipynb) | Marton S (All) | Investment banks. Citi. MQA. Students. Python tools. Jupyter notebooks. || Jan 18 Fri | [Data cleaning](2 - Exploring, cleaning and preparing tabular data for analysis.ipynb) | Marton K (Illes) | Data cleaning. Regression tests. || Jan 24 Thu | [Loan default prediction](3 - Loan default prediction.ipynb) | Marton S (Tibor) | Prediction of Loan Default based on historical performance data using Scikit-learn. || Feb 01 Fri | [Products and Greeks](4 - Products and Greeks.ipynb) | Illes (Marton S) | Risk-neutral pricing. Derivative products. Greeks and hedging. || Feb 07 Thu | [Binomial tree](5 - Call_Binom.ipynb) | Illes (Marton K) | Risk-neutral pricing of European call option with binomial tree. Normalized n-step binomial tree. || Feb 22 Fri | [Black-Scholes](6 - Black-Scholes.ipynb) | Illes (Marton S) | Wiener process. Itô process. Black-Scholes-Merton option pricing. || Mar 01 Fri | [Monte Carlo](7 - Monte-Carlo simulations.ipynb) | Robi S (Tibor) | Comparison to analytic solution. Path dependence. Variance reduction to speed up convergence. || Mar 13 Wed | [FX Game](8 - FX Game.ipynb) | Robi K | Students play a trading game in teams of 3. Discussion of strategies and results. 
|| Mar 29 Fri | [Advanced topic](9 - Trade Compression.ipynb) | Robert S (Tibor) | Trade compression and its motivations. Unilateral case as a linear programming problem. || Apr 05 Fri | Final Exam | All | Final exam. | Learning PythonThis course requires some python skills to follow. We do not intend to teach how to write python code but we will obviously explain the more elaborate constructs. The following links can help you get up to speed in python:* If Python will be your first coding language: https://www.learningpython.org* If you have coded before: https://www.hackerrank.com/domains/python Jupyter Notebook__Please note: Throughout the course we will be using Python version 3__Jupyter notebooks are built from "cells". Each cell has its content saved separately in the notebook file (extension: .ipynb).A cell can contain text (type: Markdown) or python code (type: Code). You can change the type of a cell and add new cells in the menu. Jupyter notebooks can run a python process on your computer. Each browser tab is associated with a new python process. The code cells form the code that is executed in the python process. A cell can be executed by pressing ```CTRL-ENTER``` (the keys CTRL and ENTER at the same time). This sends the corresponding piece of code to the kernel. The kernel executes it and returns any result printed to the browser. The browser then prints the result below the cell and jumps to the next cell. A diagram of this process:Note that each browser tab has an associated python process (called kernel). The order of the cells gives you an idea of the particular course material, BUT it does not tell the process in what order it's going to be executed. This is determined by you running the cells. To warm up with the iPython notebook, you may want to try executing the two code cells below (move into the cell and press CTRL-ENTER).__Basic iPython notebook commands__CTRL-M L Show / Hide line numbers in a notebook cellCTRL-M J Move to next cellCTRL-ENTER Execute contents of a cell. Stay in the same cell.SHIFT-ENTER Execute contents of a cell. Move to next cell.__Execute the following two cells several times in different order__
###Code
a = 1
print(a)
a = a + 1
print(a)
###Output
2
|
colabs/notebook_test.ipynb | ###Markdown
###Code
print("Hello World")
# Swap two boolean values with tuple unpacking.
# (The original line tried to assign to the literals True/False, which is a
# SyntaxError in Python 3, so ordinary names are used instead.)
a, b = True, False
a, b = b, a
###Output
_____no_output_____ |
PyBoss/PyBoss.ipynb | ###Markdown
In this challenge, you get to be the _boss_. You oversee hundreds of employees across the country developing Tuna 2.0, a world-changing snack food based on canned tuna fish. Alas, being the boss isn't all fun, games, and self-adulation. The company recently decided to purchase a new HR system, and unfortunately for you, the new system requires employee records be stored completely differently.Your task is to help bridge the gap by creating a Python script able to convert your employee records to the required format. Your script will need to do the following:* Import the `employee_data1.csv` and `employee_data2.csv` files, which currently holds employee records like the below:Emp ID,Name,DOB,SSN,State214,Sarah Simpson,1985-12-04,282-01-8166,Florida15,Samantha Lara,1993-09-08,848-80-7526,Colorado411,Stacy Charles,1957-12-20,658-75-8526,Pennsylvania* Then convert and export the data to use the following format instead:Emp ID,First Name,Last Name,DOB,SSN,State214,Sarah,Simpson,12/04/1985,***-**-8166,FL15,Samantha,Lara,09/08/1993,***-**-7526,CO411,Stacy,Charles,12/20/1957,***-**-8526,PA
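For a single record, the per-field transformations look roughly like this (a small sketch using only the sample row shown above; the pandas cell below applies the same idea to the whole file):

```python
from datetime import datetime

record = {"Emp ID": "214", "Name": "Sarah Simpson", "DOB": "1985-12-04",
          "SSN": "282-01-8166", "State": "Florida"}

first, last = record["Name"].split(" ", 1)                               # 'Sarah', 'Simpson'
dob = datetime.strptime(record["DOB"], "%Y-%m-%d").strftime("%m/%d/%Y")  # '12/04/1985'
ssn = "***-**-" + record["SSN"][-4:]                                     # '***-**-8166'
# only the states from the sample data are mapped here
state = {"Florida": "FL", "Colorado": "CO", "Pennsylvania": "PA"}[record["State"]]  # 'FL'
```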
###Code
#PyBoss - via Pandas
import pandas as pd
# Store filepath in a variable
file_one = "raw_data/employee_data1.csv"
# Read our Data file with the pandas library
file_one_df = pd.read_csv(file_one, encoding = "ISO-8859-1")
file_one_df.head()
#create new data frame
new_hr_df = file_one_df[["Emp ID", "Name", "DOB", "SSN","State"]]
# Add new column to DF
#new_hr_df["First Name"]
#new_hr_df.head()
# Place the data series into a new column inside of the DataFrame
#new_hr_df["View Group"] = pd.cut(ted_df["views"],bins,labels=group_labels)
#ted_df.head()
new_hr_df['First Name'], new_hr_df['Last Name'] = new_hr_df['Name'].str.split(' ', 1).str
new_hr_df.head()
# Reformat the remaining fields: DOB -> MM/DD/YYYY, mask the SSN, abbreviate the state
new_hr_df["DOB"] = pd.to_datetime(new_hr_df["DOB"]).dt.strftime("%m/%d/%Y")
new_hr_df["SSN"] = "***-**-" + new_hr_df["SSN"].str[-4:]
# NOTE: only the states that appear in the sample data are mapped here; extend as needed
state_abbrev = {"Florida": "FL", "Colorado": "CO", "Pennsylvania": "PA"}
new_hr_df["State"] = new_hr_df["State"].map(state_abbrev).fillna(new_hr_df["State"])
# Keep the columns in the required output order
new_hr_df = new_hr_df[["Emp ID", "First Name", "Last Name", "DOB", "SSN", "State"]]
new_hr_df.head()
###Output
_____no_output_____ |
.ipynb_checkpoints/Get_track_INFO_Wikipedia-checkpoint.ipynb | ###Markdown
GET ARTIST INFO from WIKIPEDIA
###Code
# Using selenium to get TRACK info
import selenium
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
# html contents
from IPython.display import display, clear_output
from IPython.core.display import HTML
# FONT style
import fontstyle
# Widget
import ipywidgets as widget
# FUNCTION to GET information from wiki
# pass key word into the function
def wiki_info(srch_name):
browser_options= Options()
browser_options.add_argument('--headless')
# headless
#browser= webdriver.Chrome(options=browser_options)
browser= webdriver.Chrome(options= browser_options)
browser.get('https://en.wikipedia.org/wiki/Main_Page')
    # waiting time before scraping
browser.implicitly_wait(5)
# locate the placeholder, here looking for Search box
srch_box= browser.find_element_by_xpath('//*[@id="searchInput"]')
# searched strings
#usr_input= input() # SHOWING ERROR IN voila
#usr_input="50 cent"
# send Keys with STRINGS
srch_box.send_keys(srch_name)
# click for the searched string
login= browser.find_element_by_xpath('//*[@id="searchButton"]')
login.click()
try:
#browser.implicitly_wait(4)
# NEXT page
# GET the NAME of the ARTIST
name_artist= browser.find_element_by_xpath('//*[@id="firstHeading"]')
# get BIO from wiki, first paragraph
artist_info= browser.find_element_by_xpath('//*[@id="mw-content-text"]/div[1]/p[3]')
# GET image from wikipedia
image_artist= browser.find_element_by_xpath('//*[@id="mw-content-text"]/div[1]/table[1]/tbody/tr[2]/td/a/img')
img_src= image_artist.get_attribute('src')
print('Name: ',fontstyle.apply(name_artist.text, 'bold'))
print('Bio: \n', fontstyle.apply(artist_info.text, 'blue/blink'))
html_pic= '<img src=\"'+img_src+'\" width=\"200\" height=\"150\">'
display(HTML(html_pic))
# RETURNS the searched Name if not found on WIKI
except:
print(fontstyle.apply(srch_name.upper(), 'red/bold'))
print(fontstyle.apply("\"No info on WIKI\"",'red' ))
# closes selenium browser
browser.close()
# TEST: w widget and text
# display(wiki_info(srch_art_name.value))
button= widget.Button(description= "Search") # button
# with BUTTON onClick
outPut= widget.Output()
srch_art_name= widget.Text(
value= '',
placeholder= 'Enter the Name'
)
display(srch_art_name)
display(button,outPut)
# OUTPUT onClick Function
def on_button_activity(b):
# by on click action
with outPut:
# first, clear any output
clear_output()
wiki_info(srch_art_name.value)
# test
button.on_click(on_button_activity)
###Output
_____no_output_____
###Markdown
TEXT: Testing color font
###Code
info= 'yo mama so fat, she ate whole curry'
print( fontstyle.apply(info, 'blue/Italic/underline/inverse'))
info_two= fontstyle.apply('he ain\'t no mad, he just super happy', 'bold/GREEN/underline')
print(info_two)
# preserve text
print(fontstyle.preserve(info_two))
print(fontstyle.apply(info, 'yellow/strike'))
###Output
[94m[3m[4m[7myo mama so fat, she ate whole curry[0m
[1m[92m[4mhe ain't no mad, he just super happy[0m
he ain't no mad, he just super happy
[93m[9myo mama so fat, she ate whole curry[0m
|
load_models.ipynb | ###Markdown
Load and Process modelsThis script will load the M models in the collection using cobrapy, and convert them to a normalized format. They will also be exported to the "mat" format used by the COBRA toolbox.This requires [cobrapy](https://opencobra.github.io/cobrapy) version 0.4.0b1 or later.
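If you want to confirm the installed version before running the rest of the notebook, a quick optional check (the version string on your machine may of course differ):

```python
import cobra
print(cobra.__version__)  # should be 0.4.0b1 or later for this notebook
```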
###Code
import os
import warnings
import re
from itertools import chain
import sympy
import scipy
import scipy.io
import cobra
from read_excel import read_excel
###Output
_____no_output_____
###Markdown
Read in Models
###Code
def open_exchanges(model, amount=10):
for reaction in model.reactions:
if len(reaction.metabolites) == 1:
# Ensure we are not creating any new sinks
if reaction.metabolites.values()[0] > 0:
reaction.upper_bound = max(reaction.upper_bound, amount)
else:
reaction.lower_bound = min(reaction.lower_bound, -amount)
def add_exchanges(model, extracellular_suffix="[e]", uptake_amount=10):
for metabolite in model.metabolites:
if str(metabolite).endswith(extracellular_suffix):
if len(metabolite.reactions) == 0:
print "no reactions for " + metabolite.id
continue
if min(len(i.metabolites) for i in metabolite.reactions) > 1:
EX_reaction = cobra.Reaction("EX_" + metabolite.id)
EX_reaction.add_metabolites({metabolite: 1})
m.add_reaction(EX_reaction)
EX_reaction.upper_bound = uptake_amount
EX_reaction.lower_bound = -uptake_amount
###Output
_____no_output_____
###Markdown
SBML modelsThese models will be read in using [libSBML](http://sbml.org/Software/libSBML) through cobrapy. Some models will need their exchanges opened.
###Code
legacy_SBML = {"T_Maritima", "iNJ661m", "iSR432", "iTH366"}
open_boundaries = {"iRsp1095", "iWV1314", "iFF708", "iZM363"}
models = cobra.DictList()
for i in sorted(os.listdir("sbml")):
if not i.endswith(".xml"):
continue
model_id = i[:-4]
filepath = os.path.join("sbml", i)
with warnings.catch_warnings():
warnings.simplefilter("ignore")
m = cobra.io.read_legacy_sbml(filepath) if model_id in legacy_SBML \
else cobra.io.read_sbml_model(filepath)
m.id = m.description = model_id.replace(".", "_")
if m.id in open_boundaries:
open_exchanges(m)
models.append(m)
###Output
_____no_output_____
###Markdown
Models available in COBRA Toolbox "mat" format
###Code
for i in sorted(os.listdir("mat")):
if not i.endswith(".mat"):
continue
m = cobra.io.load_matlab_model(os.path.join("mat", i))
m.id = i[:-4]
if m.id in open_boundaries:
open_exchanges(m)
models.append(m)
###Output
_____no_output_____
###Markdown
Some models are only available as Microsoft Excel files
###Code
m = read_excel("xls/iJS747.xls",
verbose=False, rxn_sheet_header=7)
models.append(m)
m = read_excel("xls/iRM588.xls",
verbose=False, rxn_sheet_header=5)
models.append(m)
m = read_excel("xls/iSO783.xls", verbose=False, rxn_sheet_header=2)
models.append(m)
m = read_excel("xls/iCR744.xls", rxn_sheet_header=4, verbose=False)
models.append(m)
m = read_excel("xls/iNV213.xls", rxn_str_key="Reaction Formula", verbose=False)
# remove boundary metabolites
for met in list(m.metabolites):
if met.id.endswith("[b]"):
met.remove_from_model()
models.append(m)
m = read_excel("xls/iTL885.xls", verbose=False,
rxn_id_key="Rxn name", rxn_gpr_key="Gene-reaction association", met_sheet_name="ignore")
models.append(m)
m = read_excel("xls/iWZ663.xls", verbose=False,
rxn_id_key="auto", rxn_name_key="Reaction name", rxn_gpr_key="Local gene")
models.append(m)
m = read_excel("xls/iOR363.xls", verbose=False)
models.append(m)
m = read_excel("xls/iMA945.xls", verbose=False)
models.append(m)
m = read_excel("xls/iPP668.xls", verbose=False)
add_exchanges(m)
models.append(m)
m = read_excel("xls/iVM679.xls", verbose=False, met_sheet_name="ignore",
rxn_id_key="Name", rxn_name_key="Description", rxn_str_key="Reaction")
open_exchanges(m)
models.append(m)
m = read_excel("xls/iTY425.xls", rxn_sheet_header=1,
rxn_sheet_name="S8", rxn_id_key="Number", rxn_str_key="Reaction", verbose=False)
add_exchanges(m, "xt")
# Protein production reaction does not produce the "PROTEIN" metabolite
m.reactions.R511.add_metabolites({m.metabolites.PROTEIN: 1})
m.id = m.id + "_fixed"
models.append(m)
m = read_excel("xls/iSS724.xls", rxn_str_key="Reactions",
rxn_sheet_header=1, met_sheet_header=1, rxn_id_key="Name",
verbose=False)
add_exchanges(m, "xt")
models.append(m)
m = read_excel("xls/iCS400.xls", rxn_sheet_name="Complete Rxn List",
rxn_sheet_header=2, rxn_str_key="Reaction",
rxn_id_key="Name", verbose=False)
add_exchanges(m, "xt")
models.append(m)
m = read_excel("xls/iLL672.xls",
rxn_id_key="auto", met_sheet_name="Appendix 3 iLL672 metabolites",\
rxn_str_key="REACTION", rxn_gpr_key="skip", verbose=False,
rxn_sheet_name='Appendix 3 iLL672 reactions')
m.reactions[-1].objective_coefficient = 1
m.metabolites.BM.remove_from_model()
add_exchanges(m, "xt")
models.append(m)
plus_re = re.compile("(?<=\S)\+") # substitute H+ with H, etc.
m = read_excel("xls/iMH551.xls", rxn_sheet_name="GPR Annotation", rxn_sheet_header=4,
rxn_id_key="auto", rxn_str_key="REACTION", rxn_gpr_key="skip",
rxn_name_key="ENZYME", rxn_skip_rows=[625, 782, 787], verbose=False,
rxn_sheet_converters={"REACTION": lambda x: plus_re.sub("", x)})
for met in m.metabolites:
if met.id.endswith("(extracellular)"):
met.id = met.id[:-15] + "_e"
m.repair()
add_exchanges(m, "_e")
models.append(m)
m = read_excel("xls/iCS291.xls", rxn_sheet_name="Sheet1",
rxn_str_key="Reaction",
rxn_sheet_header=5, rxn_id_key="Name",
verbose=False)
add_exchanges(m, "xt")
# BIOMASS is just all model metabolites in the Demands list
m.add_reaction(cobra.Reaction("BIOMASS"))
# taken from Table 1 in publication
biomass_mets = {}
for i in {"ALA", "ARG", "ASN", "ASP", "CYS", "GLU", "GLN", "GLY",
"HIS", "ILE", "LEU", "LYS", "MET", "PHE", "PRO", "SER",
"THR", "TRP", "TYR", "VAL", "PTRC", "SPMD", "ATP", "GTP",
"CTP", "UTP", "DATP", "DGTP", "DCTP", "DTTP", "PS", "PE",
"PG", "PEPTIDO", "LPS", "OPP", "UDPP", "NAD", "NADP", "FAD",
"COA", "ACP", "PTH", "THIAMIN", "MTHF", "MK", "DMK"
}:
biomass_mets[m.metabolites.get_by_id(i)] = -1
dm = cobra.Reaction("DM_" + i)
m.add_reaction(dm)
dm.add_metabolites({m.metabolites.get_by_id(i): -1})
m.reactions.BIOMASS.add_metabolites(biomass_mets)
m.change_objective("BIOMASS")
add_exchanges(m, "xt")
models.append(m)
m = read_excel("xls/iYO844.xls", rxn_sheet_name="Reaction and locus", verbose=False, rxn_gpr_key="Locus name",
rxn_str_key=u'Equation (note [c] and [e] at the beginning refer to the compartment \n'
'the reaction takes place in, cytosolic and extracellular respectively)')
add_exchanges(m)
# create the biomass reaction from supplementary data table
# http://www.jbc.org/content/suppl/2007/06/29/M703759200.DC1/Biomass_composition.doc
r = cobra.Reaction("biomass")
r.objective_coefficient = 1.
m.add_reaction(r)
r.reaction = ("408.3 gly[c] + 266.9 ala-L[c] + 306.7 val-L[c] + 346.4 leu-L[c] + 269.9 ile-L[c] + "
"216.2 ser-L[c] + 186.3 thr-L[c] + 175.9 phe-L[c] + 110.8 tyr-L[c] + 54.3 trp-L[c] + "
"56.7 cys-L[c] + 113.3 met-L[c] + 323.1 lys-L[c] + 193.0 arg-L[c] + 81.7 his-L[c] + "
"148.0 asp-L[c] + 260.4 glu-L[c] + 148.0 asp-L[c] + 260.3 gln-L[c] + 160.6 pro-L[c] + "
"62.7 gtp[c] + 38.9 ctp[c] + 41.5 utp[c] + 23.0 datp[c] + 17.4 dgtp[c] + 17.4 dctp[c] + "
"22.9 dttp[c] + 0.085750 m12dg_BS[c] + 0.110292 d12dg_BS[c] + 0.065833 t12dg_BS[c] + "
"0.004642 cdlp_BS[c] + 0.175859 pgly_BS[c] + 0.022057 lysylpgly_BS[c] + 0.559509 psetha_BS[c] + "
"0.006837 lipo1-24_BS[c] + 0.006123 lipo2-24_BS[c] + 0.018162 lipo3-24_BS[c] + "
"0.014676 lipo4-24_BS[c] + 101.82 peptido_BS[c] + 3.62 gtca1-45_BS[c] + 2.35 gtca2-45_BS[c] + "
"1.82 gtca3-45_BS[c] + 3.11 tcam_BS[c] + 706.3 k[c] + 101.7 mg2[c] + 3.4 fe3[c] + 3.2 ca2[c] + "
"0.9 ppi[c] + 0.3 mql7[c] + 0.4 10fthf[c] + 16.2 nad[c] + 4.7 amp[c] + 2.6 adp[c] + 1.0 cmp[c] + "
"0.9 nadp[c] + 0.5 ctp[c] + 0.5 gmp[c] + 0.4 gtp[c] + 0.3 cdp[c] + 0.2 nadph[c] + 0.2 gdp[c] + "
"105053.5 atp[c] + 105000 h2o[c] --> 104985.6 pi[c] + 104997.4 adp[c] + 105000 h[c]")
# units are in mg for this reaction, so scale to grams
r *= 0.001
models.append(m)
models.sort()
###Output
_____no_output_____
###Markdown
Determine Objective ReactionsSome of these models do not specify an objective (or biomass) reaction. These will be automatically detected if possible, or set from a manually curated list.
###Code
# regular expression to detect "biomass"
biomass_re = re.compile("biomass", re.IGNORECASE)
# manually identified objective reactions
curated_objectives = {"VvuMBEL943": "R806",
"iAI549": "BIO_CBDB1_DM_855",
"mus_musculus": "BIO028",
"iRsp1095": "RXN1391",
"iLC915": "r1133",
"PpaMBEL1254": "R01288",
"AbyMBEL891": "R761",
"iAbaylyiV4": "GROWTH_DASH_RXN",
"iOG654": "RM00001",
"iOR363": "OF14e_Retli",
"iRM588": "agg_GS13m",
"iJS747": "agg_GS13m_2",
"iTL885": "SS1240",
"iMH551": "R0227"}
for m in models:
if len(m.reactions.query(lambda x: x > 0, "objective_coefficient")):
continue
if m.id in curated_objectives:
m.change_objective(curated_objectives[m.id])
continue
# look for reactions with "biomass" in the id or name
possible_objectives = m.reactions.query(biomass_re)
if len(possible_objectives) == 0:
possible_objectives = m.reactions.query(biomass_re, "name")
# In some cases, a biomass "metabolite" is produced, whose production
# should be the objective function.
possible_biomass_metabolites = m.metabolites.query(biomass_re)
if len(possible_biomass_metabolites) == 0:
possible_biomass_metabolites = m.metabolites.query(biomass_re, "name")
if len(possible_biomass_metabolites) > 0:
biomass_met = possible_biomass_metabolites[0]
r = cobra.Reaction("added_biomass_sink")
r.objective_coefficient = 1
r.add_metabolites({biomass_met: -1})
m.add_reaction(r)
print ("autodetected biomass metabolite '%s' for model '%s'" %
(biomass_met.id, m.id))
elif len(possible_objectives) > 0:
print("autodetected objective reaction '%s' for model '%s'" %
(possible_objectives[0].id, m.id))
m.change_objective(possible_objectives[0])
else:
print("no objective found for " + m.id)
# Ensure the biomass objective flux is unconstrained
for m in models:
for reaction in m.reactions.query(lambda x: x > 0, "objective_coefficient"):
reaction.lower_bound = min(reaction.lower_bound, 0)
reaction.upper_bound = max(reaction.upper_bound, 1000)
###Output
autodetected biomass metabolite 'BIOMASS_c' for model 'GSMN_TB'
autodetected objective reaction 'Biomass' for model 'PpuMBEL1071'
autodetected objective reaction 'RXNBiomass' for model 'SpoMBEL1693'
autodetected objective reaction 'BIOMASS_LM3' for model 'iAC560'
autodetected biomass metabolite '140' for model 'iAO358'
autodetected biomass metabolite 'PROT_LPL_v60[c]' for model 'iBT721_fixed'
autodetected objective reaction 'BIO_Rfer3' for model 'iCR744'
autodetected objective reaction 'BIOMASS' for model 'iCS400'
autodetected biomass metabolite 'BIOMASS' for model 'iMA871'
autodetected objective reaction 'ST_biomass_core' for model 'iMA945'
autodetected objective reaction 'biomass_mm_1_no_glygln' for model 'iMM1415'
autodetected objective reaction 'biomass_STR' for model 'iMP429_fixed'
autodetected objective reaction 'R_biomass_target' for model 'iNV213'
autodetected objective reaction 'BIOMASS' for model 'iPP668'
autodetected objective reaction 'Biomass_Chlamy_auto' for model 'iRC1080'
autodetected objective reaction 'ST_Biomass_Final' for model 'iRR1083'
autodetected objective reaction 'Biomass' for model 'iSO783'
autodetected objective reaction 'biomass_target' for model 'iSR432'
autodetected biomass metabolite 'BIOMASS' for model 'iSS724'
autodetected biomass metabolite 'm828' for model 'iSS884'
autodetected biomass metabolite 'BIOMASS' for model 'iTY425_fixed'
autodetected objective reaction 'Biomass' for model 'iVM679'
autodetected biomass metabolite 'Biomass' for model 'iWV1314'
autodetected objective reaction 'R0822' for model 'iWZ663'
###Markdown
Fixes of various encoding bugs General GSMN_TB does not use the convention of extracellular metabolites with exchanges. Although the model still solves with this formulation, this is still normalized here. This process does not change the mathematical structure of the model.
###Code
h_c = models.GSMN_TB.metabolites.H_c
for r in models.GSMN_TB.reactions:
if len(r.metabolites) == 2 and h_c in r.metabolites:
met = [i for i in r.metabolites if i is not h_c][0]
EX_met = cobra.Metabolite(met.id[:-1] + "e")
r.add_metabolites({EX_met: -r.metabolites[met]})
if "EX_" + EX_met.id not in models.GSMN_TB.reactions:
exchange = cobra.Reaction("EX_" + EX_met.id)
exchange.add_metabolites({EX_met: -1})
exchange.lower_bound = -1000000.0
exchange.upper_bound = 1000000.0
models.GSMN_TB.add_reaction(exchange)
###Output
_____no_output_____
###Markdown
Reaction and Metabolites id's
###Code
# reaction id's with spaces in them
models.iJS747.reactions.get_by_id("HDH [deleted 01/16/2007 12:02:30 PM]").id = "HDH_del"
models.iJS747.reactions.get_by_id("HIBD [deleted 03/21/2007 01:06:12 PM]").id = "HIBD_del"
models.iAC560.reactions.get_by_id("GLUDx [m]").id = "GLUDx[m]"
for r in models.iOR363.reactions:
if " " in r.id:
r.id = r.id.split()[0]
models.textbook.reactions.query("Biomass")[0].id = "Biomass_Ecoli_core"
###Output
_____no_output_____
###Markdown
Use the convention underscore + compartment i.e. _c instead of [c] (c) etc.
###Code
SQBKT_re = re.compile("\[([a-z])\]$")
def fix_brackets(id_str, compiled_re):
result = compiled_re.findall(id_str)
if len(result) > 0:
return compiled_re.sub("_" + result[0], id_str)
else:
return id_str
for r in models.iRS1597.reactions:
r.id = fix_brackets(r.id, re.compile("_LSQBKT_([a-z])_RSQBKT_$"))
for m_id in ["iJS747", "iRM588", "iSO783", "iCR744", "iNV213", "iWZ663", "iOR363", "iMA945", "iPP668",
"iTL885", "iVM679", "iYO844", "iZM363"]:
for met in models.get_by_id(m_id).metabolites:
met.id = fix_brackets(met.id, SQBKT_re)
for met in models.S_coilicolor_fixed.metabolites:
if met.id.endswith("_None_"):
met.id = met.id[:-6]
# Some models only have intra and extracellular metabolites, but don't use _c and _e.
for m_id in ["iCS291", "iCS400", "iTY425_fixed", "iSS724"]:
for metabolite in models.get_by_id(m_id).metabolites:
if metabolite.id.endswith("xt"):
metabolite.id = metabolite.id[:-2] + "_e"
elif len(metabolite.id) < 2 or metabolite.id[-2] != "_":
metabolite.id = metabolite.id + "_c"
# Exchange reactions should have the id of the metabolite after with the same convention
for m_id in ["iAF1260", "iJO1366", "iAF692", "iJN746", "iRC1080", "textbook", "iNV213",
"iIT341", "iJN678", "iJR904", "iND750", "iNJ661", "iPS189_fixed", "iSB619",
"iZM363", "iMH551"]:
for r in models.get_by_id(m_id).reactions:
if len(r.metabolites) != 1:
continue
if r.id.startswith("EX_"):
r.id = "EX_" + list(r.metabolites.keys())[0].id
if r.id.startswith("DM_"):
r.id = "DM_" + list(r.metabolites.keys())[0].id
for m in models:
m.repair()
###Output
_____no_output_____
###Markdown
Metabolite Formulas
###Code
for model in models:
for metabolite in model.metabolites:
if metabolite.formula is None:
metabolite.formula = ""
continue
if str(metabolite.formula).lower() == "none":
metabolite.formula = ""
continue
# some characters should not be in a formula
if "(" in metabolite.formula or \
")" in metabolite.formula or \
"." in metabolite.formula:
metabolite.formula = ""
###Output
_____no_output_____
###Markdown
Metabolite Compartments
###Code
compartments = {
'c': 'Cytoplasm',
'e': 'Extracellular',
'p': 'Periplasm',
'm': 'Mitochondria',
'g': 'Golgi',
'n': "Nucleus",
'r': "Endoplasmic reticulum",
'x': "Peroxisome",
'v': "Vacuole",
"h": "Chloroplast",
"x": "Glyoxysome",
"s": "Eyespot",
"default": "No Compartment"}
for model in models:
for metabolite in model.metabolites:
if metabolite.compartment is None or len(metabolite.compartment.strip()) == 0 or metabolite.compartment == "[":
if len(metabolite.id) > 2 and metabolite.id[-2] == "_" and metabolite.id[-1].isalpha():
metabolite.compartment = metabolite.id[-1]
else:
metabolite.compartment = "default"
if metabolite.compartment not in model.compartments:
model.compartments[metabolite.compartment] = compartments.get(metabolite.compartment, metabolite.compartment)
###Output
_____no_output_____
###Markdown
Metabolite and Reaction NamesNames which start with numbers don't need to be escaped with underscores.
###Code
for model in models:
for x in chain(model.metabolites, model.reactions):
if x.name is not None and x.name.startswith("_"):
x.name = x.name.lstrip("_")
if x.name is not None:
x.name = x.name.strip()
if x.name is None:
x.name = x.id
###Output
_____no_output_____
###Markdown
MISC fixes
###Code
models.iMM1415.reactions.EX_lnlc_dup_e.remove_from_model()
models.iMM1415.reactions.EX_retpalm_e.remove_from_model(remove_orphans=True)
# these reaction names are reaction strings
for r in models.iCac802.reactions:
r.name = ""
###Output
_____no_output_____
###Markdown
Fix Genes and GPR's A lot of genes have characters which won't work in their names
###Code
# nonbreaking spaces
models.iCB925.reactions.FDXNRy.gene_reaction_rule = '( Cbei_0661 or Cbei_2182 )'
for r in models.iCB925.reactions:
if "\xa0" in r.gene_reaction_rule:
r.gene_reaction_rule = r.gene_reaction_rule.replace("\xc2", " ").replace("\xa0", " ")
for g in list(models.iCB925.genes):
if len(g.reactions) == 0:
models.iCB925.genes.remove(g)
###Output
_____no_output_____
###Markdown
Some GPR's are not valid boolean expressions.
###Code
multiple_ors = re.compile("(\s*or\s+){2,}")
multiple_ands = re.compile("(\s*and\s+){2,}")
for model_id in ["iRS1563", "iRS1597", "iMM1415"]:
model = models.get_by_id(model_id)
for reaction in model.reactions:
gpr = reaction.gene_reaction_rule
gpr = multiple_ors.sub(" or ", gpr)
gpr = multiple_ands.sub(" and ", gpr)
if "[" in gpr:
gpr = gpr.replace("[", "(").replace("]", ")")
if gpr.endswith(" or"):
gpr = gpr[:-3]
if gpr.count("(") != gpr.count(")"):
gpr = "" # mismatched parenthesis somewhere
reaction.gene_reaction_rule = gpr
for gene in list(model.genes):
if gene.id.startswith("[") or gene.id.endswith("]"):
if len(gene.reactions) == 0:
model.genes.remove(gene.id)
# Some models are missing spaces between the ands/ors in some of their GPR's
for m_id in ["iJN678", "iTL885"]:
for r in models.get_by_id(m_id).reactions:
r.gene_reaction_rule = r.gene_reaction_rule.replace("and", " and ").replace("or", " or ")
models.iCac802.reactions.R0095.gene_reaction_rule = \
models.iCac802.reactions.R0095.gene_reaction_rule.replace(" AND ", " and ")
# make sbml3 output deterministic by sorting genes
for m in models:
m.genes.sort()
###Output
_____no_output_____
###Markdown
Ensure all ID's are SBML compliant
###Code
for m in models:
cobra.manipulation.escape_ID(m)
###Output
_____no_output_____
###Markdown
 Export Models SBML 3Export the models to use the fbc version 2 (draft RC6) extension to SBML level 3 version 1.
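If you want to spot-check the result, a written file can be read straight back in (an optional sanity check; it assumes the export cell below has already run and that the `sbml3/` directory is writable):

```python
m_check = cobra.io.read_sbml_model("sbml3/%s.xml" % models[0].id)
print(m_check.id, len(m_check.reactions), len(m_check.metabolites))
```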
###Code
for model in models:
cobra.io.write_sbml_model(model, "sbml3/%s.xml" % model.id)
###Output
_____no_output_____
###Markdown
matSave all the models into a single mat file. In addition to the usual fields in the "mat" struct, we will also include S_num and S_denom, which are the numerator and denominator of the stoichiometric coefficients encoded as rational numbers.
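As a small illustration of the rational encoding (using the same `sympy.Rational` construction as the helper below): a coefficient of 0.5 is stored with numerator 1 and denominator 2.

```python
import sympy
r = sympy.Rational("%.15g" % 0.5)
print(r.p, r.q)  # -> 1 2 (numerator and denominator)
```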
###Code
def convert_to_rational(value):
return sympy.Rational("%.15g" % value)
def construct_S_num_denom(model):
"""convert model to two S matrices
they encode the numerator and denominator of stoichiometric
coefficients encoded as rational numbers
"""
# intialize to 0
dimensions = (len(model.metabolites), len(model.reactions))
S_num = scipy.sparse.lil_matrix(dimensions)
S_denom = scipy.sparse.lil_matrix(dimensions)
# populate with stoichiometry
for i, r in enumerate(model.reactions):
for met, value in r._metabolites.iteritems():
rational_value = convert_to_rational(value)
num, denom = (rational_value.p, rational_value.q)
S_num[model.metabolites.index(met), i] = num
S_denom[model.metabolites.index(met), i] = denom
return S_num, S_denom
all_model_dict = {}
for model in models:
model_dict = cobra.io.mat.create_mat_dict(model)
model_dict["S_num"], model_dict["S_denom"] = construct_S_num_denom(model)
all_model_dict[model.id] = model_dict
scipy.io.savemat("all_models.mat", all_model_dict, oned_as="column")
###Output
_____no_output_____ |
Mathematics_for_Machine_Learning/Linear Algebra/ReflectingBear.ipynb | ###Markdown
 Reflecting Bear BackgroundPanda Bear is confused. He is trying to work out how things should look when reflected in a mirror, but is getting the wrong results.As is the way with bears, his coordinate system is not orthonormal: so what he thinks is the direction perpendicular to the mirror isn't actually the direction the mirror reflects in.Help Bear write code that will do his matrix calculations properly! InstructionsIn this assignment you will write a Python function that will produce a transformation matrix for reflecting vectors in an arbitrarily angled mirror.Building on the last assignment, where you wrote code to construct an orthonormal basis that spans a set of input vectors, here you will take a matrix which takes simple form in that basis, and transform it into our starting basis.Recall from the last video,\\( T = E T_E E^{-1} \\)You will write a function that will construct this matrix.This assessment is not conceptually complicated, but will build and test your ability to express mathematical ideas in code.As such, your final code submission will be relatively short, but you will receive less structure on how to write it. Matrices in PythonFor this exercise, we shall make use of the @ operator again.Recall from the last exercise, we used this operator to take the dot product of vectors.In general the operator will combine vectors and/or matrices in the expected linear algebra way,i.e. it will be either the vector dot product, matrix multiplication, or matrix operation on a vector, depending on its input.For example to calculate the following expressions,\\( a = \mathbf{s}\cdot\mathbf{t} \\)\\( \mathbf{s} = A\mathbf{t} \\)\\( M = A B \\),One would use the code,
```python
a = s @ t
s = A @ t
M = A @ B
```
(This is in contrast to the \\(*\\) operator, which performs element-wise multiplication, or multiplication by a scalar.)You may need to use some of the following functions:
```python
inv(A)
transpose(A)
gsBasis(A)
```
These, respectively, take the inverse of a matrix, give the transpose of a matrix, and produce a matrix of orthonormal column vectors given a general matrix of column vectors - i.e. perform the Gram-Schmidt process.This exercise will require you to combine some of these functions. How to submitEdit the code in the cell below to complete the assignment.Once you are finished and happy with it, press the *Submit Assignment* button at the top of this notebook.Please don't change any of the function names, as these will be checked by the grading script.If you have further questions about submissions or programming assignments, here is a [list](https://www.coursera.org/learn/linear-algebra-machine-learning/discussions/weeks/1/threads/jB4klkn5EeibtBIQyzFmQg) of Q&A. You can also raise an issue on the discussion forum. Good luck!
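As an informal sanity check you can use later: any true reflection matrix is its own inverse, so applying it twice should give the identity. For example, with the simple mirror along the line \\( y = x \\) (not Bear's mirror, just an illustration):

```python
import numpy as np
T_example = np.array([[0, 1],
                      [1, 0]])   # reflection in the line y = x
print(np.allclose(T_example @ T_example, np.identity(2)))  # True: reflecting twice is the identity
```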
###Code
# PACKAGE
# Run this cell once first to load the dependancies. There is no need to submit this cell.
import numpy as np
from numpy.linalg import norm, inv
from numpy import transpose
from readonly.bearNecessities import *
# GRADED FUNCTION
# This is the cell you should edit and submit.
# In this function, you will return the transformation matrix T,
# having built it out of an orthonormal basis set E that you create from Bear's Basis
# and a transformation matrix in the mirror's coordinates TE.
def build_reflection_matrix(bearBasis) : # The parameter bearBasis is a 2×2 matrix that is passed to the function.
# Use the gsBasis function on bearBasis to get the mirror's orthonormal basis.
E = gsBasis(bearBasis)
# Write a matrix in component form that perform's the mirror's reflection in the mirror's basis.
# Recall, the mirror operates by negating the last component of a vector.
# Replace a,b,c,d with appropriate values
TE = np.array([[1, 0],
[0, -1]])
# Combine the matrices E and TE to produce your transformation matrix.
T = E@TE@transpose(E)
# Finally, we return the result. There is no need to change this line.
return T
###Output
_____no_output_____
###Markdown
Test your code before submissionTo test the code you've written above, run the cell (select the cell above, then press the play button [ ▶| ] or press shift-enter).You can then use the code below to test out your function.You don't need to submit this cell; you can edit and run it as much as you like.The code below will show a picture of Panda Bear.If you have correctly implemented the function above, you will also see Bear's reflection in his mirror.
###Code
# First load Pyplot, a graph plotting library.
%matplotlib inline
import matplotlib.pyplot as plt
# This is the matrix of Bear's basis vectors.
bearBasis = np.array(
[[1, -1],
[1.5, 2]])
# This line uses your code to build a transformation matrix for us to use.
T = build_reflection_matrix(bearBasis)
# Bear is drawn as a set of polygons, the vertices of which are placed as a matrix list of column vectors.
# We have three of these non-square matrix lists: bear_white_fur, bear_black_fur, and bear_face.
# We'll make new lists of vertices by applying the T matrix you've calculated.
reflected_bear_white_fur = T @ bear_white_fur
reflected_bear_black_fur = T @ bear_black_fur
reflected_bear_face = T @ bear_face
# This next line runs a code to set up the graphics environment.
ax = draw_mirror(bearBasis)
# We'll first plot Bear, his white fur, his black fur, and his face.
ax.fill(bear_white_fur[0], bear_white_fur[1], color=bear_white, zorder=1)
ax.fill(bear_black_fur[0], bear_black_fur[1], color=bear_black, zorder=2)
ax.plot(bear_face[0], bear_face[1], color=bear_white, zorder=3)
# Next we'll plot Bear's reflection.
ax.fill(reflected_bear_white_fur[0], reflected_bear_white_fur[1], color=bear_white, zorder=1)
ax.fill(reflected_bear_black_fur[0], reflected_bear_black_fur[1], color=bear_black, zorder=2)
ax.plot(reflected_bear_face[0], reflected_bear_face[1], color=bear_white, zorder=3);
###Output
/home/jovyan/work/readonly/bearNecessities.py:31: MatplotlibDeprecationWarning: The set_axis_bgcolor function was deprecated in version 2.0. Use set_facecolor instead.
ax.set_axis_bgcolor(blue1)
|
Recurrent Neural Net/Building_a_Recurrent_Neural_Network_Step_by_Step.ipynb | ###Markdown
 Building your Recurrent Neural Network - Step by StepWelcome to Course 5's first assignment, where you'll be implementing key components of a Recurrent Neural Network, or RNN, in NumPy! By the end of this assignment, you'll be able to:* Define notation for building sequence models* Describe the architecture of a basic RNN* Identify the main components of an LSTM* Implement backpropagation through time for a basic RNN and an LSTM* Give examples of several types of RNN Recurrent Neural Networks (RNN) are very effective for Natural Language Processing and other sequence tasks because they have "memory." They can read inputs $x^{\langle t \rangle}$ (such as words) one at a time, and remember some contextual information through the hidden layer activations that get passed from one time step to the next. This allows a unidirectional (one-way) RNN to take information from the past to process later inputs. A bidirectional (two-way) RNN can take context from both the past and the future, much like Marty McFly. **Notation**:- Superscript $[l]$ denotes an object associated with the $l^{th}$ layer. - Superscript $(i)$ denotes an object associated with the $i^{th}$ example. - Superscript $\langle t \rangle$ denotes an object at the $t^{th}$ time step. - Subscript $i$ denotes the $i^{th}$ entry of a vector.**Example**: - $a^{(2)[3]\langle 4 \rangle}_5$ denotes the activation of the 2nd training example (2), 3rd layer [3], 4th time step $\langle 4 \rangle$, and 5th entry in the vector. Pre-requisites* You should already be familiar with `numpy`* To refresh your knowledge of numpy, you can review course 1 of the specialization "Neural Networks and Deep Learning": * Specifically, review the week 2's practice assignment ["Python Basics with Numpy (optional assignment)"](https://www.coursera.org/learn/neural-networks-deep-learning/programming/isoAV/python-basics-with-numpy) Be careful when modifying the starter code!* When working on graded functions, please remember to only modify the code that is between:```Python START CODE HERE```and:```Python END CODE HERE```* In particular, avoid modifying the first line of graded routines. These start with:```Python GRADED FUNCTION: routine_name```The automatic grader (autograder) needs these to locate the function - so even a change in spacing will cause issues with the autograder, returning 'failed' if any of these are modified or missing. Now, let's get started! Table of Content- [Packages](0)- [1 - Forward Propagation for the Basic Recurrent Neural Network](1) - [1.1 - RNN Cell](1-1) - [Exercise 1 - rnn_cell_forward](ex-1) - [1.2 - RNN Forward Pass](1-2) - [Exercise 2 - rnn_forward](ex-2)- [2 - Long Short-Term Memory (LSTM) Network](2) - [2.1 - LSTM Cell](2-1) - [Exercise 3 - lstm_cell_forward](ex-3) - [2.2 - Forward Pass for LSTM](2-2) - [Exercise 4 - lstm_forward](ex-4)- [3 - Backpropagation in Recurrent Neural Networks (OPTIONAL / UNGRADED)](3) - [3.1 - Basic RNN Backward Pass](3-1) - [Exercise 5 - rnn_cell_backward](ex-5) - [Exercise 6 - rnn_backward](ex-6) - [3.2 - LSTM Backward Pass](3-2) - [Exercise 7 - lstm_cell_backward](ex-7) - [3.3 Backward Pass through the LSTM RNN](3-3) - [Exercise 8 - lstm_backward](ex-8) Packages
###Code
import numpy as np
from rnn_utils import *
from public_tests import *
###Output
_____no_output_____
###Markdown
1 - Forward Propagation for the Basic Recurrent Neural NetworkLater this week, you'll get a chance to generate music using an RNN! The basic RNN that you'll implement has the following structure: In this example, $T_x = T_y$. Figure 1: Basic RNN model Dimensions of input $x$ Input with $n_x$ number of units* For a single time step of a single input example, $x^{(i) \langle t \rangle }$ is a one-dimensional input vector* Using language as an example, a language with a 5000-word vocabulary could be one-hot encoded into a vector that has 5000 units. So $x^{(i)\langle t \rangle}$ would have the shape (5000,) * The notation $n_x$ is used here to denote the number of units in a single time step of a single training example Time steps of size $T_{x}$* A recurrent neural network has multiple time steps, which you'll index with $t$.* In the lessons, you saw a single training example $x^{(i)}$ consisting of multiple time steps $T_x$. In this notebook, $T_{x}$ will denote the number of timesteps in the longest sequence. Batches of size $m$* Let's say we have mini-batches, each with 20 training examples * To benefit from vectorization, you'll stack 20 columns of $x^{(i)}$ examples* For example, this tensor has the shape (5000,20,10) * You'll use $m$ to denote the number of training examples * So, the shape of a mini-batch is $(n_x,m,T_x)$ 3D Tensor of shape $(n_{x},m,T_{x})$* The 3-dimensional tensor $x$ of shape $(n_x,m,T_x)$ represents the input $x$ that is fed into the RNN Taking a 2D slice for each time step: $x^{\langle t \rangle}$* At each time step, you'll use a mini-batch of training examples (not just a single example)* So, for each time step $t$, you'll use a 2D slice of shape $(n_x,m)$* This 2D slice is referred to as $x^{\langle t \rangle}$. The variable name in the code is `xt`. Definition of hidden state $a$* The activation $a^{\langle t \rangle}$ that is passed to the RNN from one time step to another is called a "hidden state." Dimensions of hidden state $a$* Similar to the input tensor $x$, the hidden state for a single training example is a vector of length $n_{a}$* If you include a mini-batch of $m$ training examples, the shape of a mini-batch is $(n_{a},m)$* When you include the time step dimension, the shape of the hidden state is $(n_{a}, m, T_x)$* You'll loop through the time steps with index $t$, and work with a 2D slice of the 3D tensor * This 2D slice is referred to as $a^{\langle t \rangle}$* In the code, the variable names used are either `a_prev` or `a_next`, depending on the function being implemented* The shape of this 2D slice is $(n_{a}, m)$ Dimensions of prediction $\hat{y}$* Similar to the inputs and hidden states, $\hat{y}$ is a 3D tensor of shape $(n_{y}, m, T_{y})$ * $n_{y}$: number of units in the vector representing the prediction * $m$: number of examples in a mini-batch * $T_{y}$: number of time steps in the prediction* For a single time step $t$, a 2D slice $\hat{y}^{\langle t \rangle}$ has shape $(n_{y}, m)$* In the code, the variable names are: - `y_pred`: $\hat{y}$ - `yt_pred`: $\hat{y}^{\langle t \rangle}$ Here's how you can implement an RNN: Steps:1. Implement the calculations needed for one time step of the RNN.2. Implement a loop over $T_x$ time steps in order to process all the inputs, one at a time. 1.1 - RNN CellYou can think of the recurrent neural network as the repeated use of a single cell. First, you'll implement the computations for a single time step. 
The following figure describes the operations for a single time step of an RNN cell: Figure 2: Basic RNN cell. Takes as input $x^{\langle t \rangle}$ (current input) and $a^{\langle t - 1\rangle}$ (previous hidden state containing information from the past), and outputs $a^{\langle t \rangle}$ which is given to the next RNN cell and also used to predict $\hat{y}^{\langle t \rangle}$ **`RNN cell` versus `RNN_cell_forward`**:* Note that an RNN cell outputs the hidden state $a^{\langle t \rangle}$. * `RNN cell` is shown in the figure as the inner box with solid lines * The function that you'll implement, `rnn_cell_forward`, also calculates the prediction $\hat{y}^{\langle t \rangle}$ * `RNN_cell_forward` is shown in the figure as the outer box with dashed lines Exercise 1 - rnn_cell_forwardImplement the RNN cell described in Figure 2.**Instructions**:1. Compute the hidden state with tanh activation: $a^{\langle t \rangle} = \tanh(W_{aa} a^{\langle t-1 \rangle} + W_{ax} x^{\langle t \rangle} + b_a)$2. Using your new hidden state $a^{\langle t \rangle}$, compute the prediction $\hat{y}^{\langle t \rangle} = softmax(W_{ya} a^{\langle t \rangle} + b_y)$. (The function `softmax` is provided)3. Store $(a^{\langle t \rangle}, a^{\langle t-1 \rangle}, x^{\langle t \rangle}, parameters)$ in a `cache`4. Return $a^{\langle t \rangle}$ , $\hat{y}^{\langle t \rangle}$ and `cache` Additional Hints* A little more information on [numpy.tanh](https://numpy.org/devdocs/reference/generated/numpy.tanh.html)* In this assignment, there's an existing `softmax` function for you to use. It's located in the file 'rnn_utils.py' and has already been imported.* For matrix multiplication, use [numpy.dot](https://docs.scipy.org/doc/numpy/reference/generated/numpy.dot.html)
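Before filling in the graded function, it can help to convince yourself of the shapes involved with toy sizes (an informal sketch, separate from the graded code below):

```Python
import numpy as np
n_x, n_a, m = 3, 5, 10
Wax = np.random.randn(n_a, n_x)
Waa = np.random.randn(n_a, n_a)
ba = np.random.randn(n_a, 1)
xt, a_prev = np.random.randn(n_x, m), np.random.randn(n_a, m)
a_next = np.tanh(np.dot(Waa, a_prev) + np.dot(Wax, xt) + ba)  # the a<t> equation above
print(a_next.shape)  # (5, 10)
```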
###Code
# UNQ_C1 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# GRADED FUNCTION: rnn_cell_forward
def rnn_cell_forward(xt, a_prev, parameters):
"""
Implements a single forward step of the RNN-cell as described in Figure (2)
Arguments:
xt -- your input data at timestep "t", numpy array of shape (n_x, m).
a_prev -- Hidden state at timestep "t-1", numpy array of shape (n_a, m)
parameters -- python dictionary containing:
Wax -- Weight matrix multiplying the input, numpy array of shape (n_a, n_x)
Waa -- Weight matrix multiplying the hidden state, numpy array of shape (n_a, n_a)
Wya -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
ba -- Bias, numpy array of shape (n_a, 1)
by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)
Returns:
a_next -- next hidden state, of shape (n_a, m)
yt_pred -- prediction at timestep "t", numpy array of shape (n_y, m)
cache -- tuple of values needed for the backward pass, contains (a_next, a_prev, xt, parameters)
"""
# Retrieve parameters from "parameters"
Wax = parameters["Wax"]
Waa = parameters["Waa"]
Wya = parameters["Wya"]
ba = parameters["ba"]
by = parameters["by"]
### START CODE HERE ### (≈2 lines)
# compute next activation state using the formula given above
a_next = np.tanh(np.dot(Waa, a_prev) + np.dot(Wax, xt) + ba)
# compute output of the current cell using the formula given above
yt_pred = softmax(np.dot(Wya, a_next) + by)
### END CODE HERE ###
# store values you need for backward propagation in cache
cache = (a_next, a_prev, xt, parameters)
return a_next, yt_pred, cache
np.random.seed(1)
xt_tmp = np.random.randn(3, 10)
a_prev_tmp = np.random.randn(5, 10)
parameters_tmp = {}
parameters_tmp['Waa'] = np.random.randn(5, 5)
parameters_tmp['Wax'] = np.random.randn(5, 3)
parameters_tmp['Wya'] = np.random.randn(2, 5)
parameters_tmp['ba'] = np.random.randn(5, 1)
parameters_tmp['by'] = np.random.randn(2, 1)
a_next_tmp, yt_pred_tmp, cache_tmp = rnn_cell_forward(xt_tmp, a_prev_tmp, parameters_tmp)
print("a_next[4] = \n", a_next_tmp[4])
print("a_next.shape = \n", a_next_tmp.shape)
print("yt_pred[1] =\n", yt_pred_tmp[1])
print("yt_pred.shape = \n", yt_pred_tmp.shape)
# UNIT TESTS
rnn_cell_forward_tests(rnn_cell_forward)
###Output
a_next[4] =
[ 0.59584544 0.18141802 0.61311866 0.99808218 0.85016201 0.99980978
-0.18887155 0.99815551 0.6531151 0.82872037]
a_next.shape =
(5, 10)
yt_pred[1] =
[0.9888161 0.01682021 0.21140899 0.36817467 0.98988387 0.88945212
0.36920224 0.9966312 0.9982559 0.17746526]
yt_pred.shape =
(2, 10)
[92mAll tests passed
###Markdown
**Expected Output**: ```Pythona_next[4] = [ 0.59584544 0.18141802 0.61311866 0.99808218 0.85016201 0.99980978 -0.18887155 0.99815551 0.6531151 0.82872037]a_next.shape = (5, 10)yt_pred[1] = [ 0.9888161 0.01682021 0.21140899 0.36817467 0.98988387 0.88945212 0.36920224 0.9966312 0.9982559 0.17746526]yt_pred.shape = (2, 10)``` 1.2 - RNN Forward Pass - A recurrent neural network (RNN) is a repetition of the RNN cell that you've just built. - If your input sequence of data is 10 time steps long, then you will re-use the RNN cell 10 times - Each cell takes two inputs at each time step: - $a^{\langle t-1 \rangle}$: The hidden state from the previous cell - $x^{\langle t \rangle}$: The current time step's input data- It has two outputs at each time step: - A hidden state ($a^{\langle t \rangle}$) - A prediction ($y^{\langle t \rangle}$)- The weights and biases $(W_{aa}, b_{a}, W_{ax}, b_{x})$ are re-used each time step - They are maintained between calls to `rnn_cell_forward` in the 'parameters' dictionaryFigure 3: Basic RNN. The input sequence $x = (x^{\langle 1 \rangle}, x^{\langle 2 \rangle}, ..., x^{\langle T_x \rangle})$ is carried over $T_x$ time steps. The network outputs $y = (y^{\langle 1 \rangle}, y^{\langle 2 \rangle}, ..., y^{\langle T_x \rangle})$. Exercise 2 - rnn_forwardImplement the forward propagation of the RNN described in Figure 3.**Instructions**:* Create a 3D array of zeros, $a$ of shape $(n_{a}, m, T_{x})$ that will store all the hidden states computed by the RNN* Create a 3D array of zeros, $\hat{y}$, of shape $(n_{y}, m, T_{x})$ that will store the predictions - Note that in this case, $T_{y} = T_{x}$ (the prediction and input have the same number of time steps)* Initialize the 2D hidden state `a_next` by setting it equal to the initial hidden state, $a_{0}$* At each time step $t$: - Get $x^{\langle t \rangle}$, which is a 2D slice of $x$ for a single time step $t$ - $x^{\langle t \rangle}$ has shape $(n_{x}, m)$ - $x$ has shape $(n_{x}, m, T_{x})$ - Update the 2D hidden state $a^{\langle t \rangle}$ (variable name `a_next`), the prediction $\hat{y}^{\langle t \rangle}$ and the cache by running `rnn_cell_forward` - $a^{\langle t \rangle}$ has shape $(n_{a}, m)$ - Store the 2D hidden state in the 3D tensor $a$, at the $t^{th}$ position - $a$ has shape $(n_{a}, m, T_{x})$ - Store the 2D $\hat{y}^{\langle t \rangle}$ prediction (variable name `yt_pred`) in the 3D tensor $\hat{y}_{pred}$ at the $t^{th}$ position - $\hat{y}^{\langle t \rangle}$ has shape $(n_{y}, m)$ - $\hat{y}$ has shape $(n_{y}, m, T_x)$ - Append the cache to the list of caches* Return the 3D tensor $a$ and $\hat{y}$, as well as the list of caches Additional Hints- Some helpful documentation on [np.zeros](https://docs.scipy.org/doc/numpy/reference/generated/numpy.zeros.html)- If you have a 3 dimensional numpy array and are indexing by its third dimension, you can use array slicing like this: `var_name[:,:,i]`
###Code
# UNQ_C2 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# GRADED FUNCTION: rnn_forward
def rnn_forward(x, a0, parameters):
"""
Implement the forward propagation of the recurrent neural network described in Figure (3).
Arguments:
x -- Input data for every time-step, of shape (n_x, m, T_x).
a0 -- Initial hidden state, of shape (n_a, m)
parameters -- python dictionary containing:
Waa -- Weight matrix multiplying the hidden state, numpy array of shape (n_a, n_a)
Wax -- Weight matrix multiplying the input, numpy array of shape (n_a, n_x)
Wya -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
ba -- Bias numpy array of shape (n_a, 1)
by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)
Returns:
a -- Hidden states for every time-step, numpy array of shape (n_a, m, T_x)
y_pred -- Predictions for every time-step, numpy array of shape (n_y, m, T_x)
caches -- tuple of values needed for the backward pass, contains (list of caches, x)
"""
# Initialize "caches" which will contain the list of all caches
caches = []
# Retrieve dimensions from shapes of x and parameters["Wya"]
n_x, m, T_x = x.shape
n_y, n_a = parameters["Wya"].shape
### START CODE HERE ###
# initialize "a" and "y_pred" with zeros (≈2 lines)
a = np.zeros((n_a, m, T_x))
y_pred = np.zeros((n_y, m, T_x))
# Initialize a_next (≈1 line)
a_next = a0
# loop over all time-steps
for t in range(T_x):
# Update next hidden state, compute the prediction, get the cache (≈1 line)
a_next, yt_pred, cache = rnn_cell_forward(x[:,:,t], a_next, parameters)
# Save the value of the new "next" hidden state in a (≈1 line)
a[:,:,t] = a_next
# Save the value of the prediction in y (≈1 line)
y_pred[:,:,t] = yt_pred
# Append "cache" to "caches" (≈1 line)
caches.append(cache)
### END CODE HERE ###
# store values needed for backward propagation in cache
caches = (caches, x)
return a, y_pred, caches
np.random.seed(1)
x_tmp = np.random.randn(3, 10, 4)
a0_tmp = np.random.randn(5, 10)
parameters_tmp = {}
parameters_tmp['Waa'] = np.random.randn(5, 5)
parameters_tmp['Wax'] = np.random.randn(5, 3)
parameters_tmp['Wya'] = np.random.randn(2, 5)
parameters_tmp['ba'] = np.random.randn(5, 1)
parameters_tmp['by'] = np.random.randn(2, 1)
a_tmp, y_pred_tmp, caches_tmp = rnn_forward(x_tmp, a0_tmp, parameters_tmp)
print("a[4][1] = \n", a_tmp[4][1])
print("a.shape = \n", a_tmp.shape)
print("y_pred[1][3] =\n", y_pred_tmp[1][3])
print("y_pred.shape = \n", y_pred_tmp.shape)
print("caches[1][1][3] =\n", caches_tmp[1][1][3])
print("len(caches) = \n", len(caches_tmp))
#UNIT TEST
rnn_forward_test(rnn_forward)
###Output
a[4][1] =
[-0.99999375 0.77911235 -0.99861469 -0.99833267]
a.shape =
(5, 10, 4)
y_pred[1][3] =
[0.79560373 0.86224861 0.11118257 0.81515947]
y_pred.shape =
(2, 10, 4)
caches[1][1][3] =
[-1.1425182 -0.34934272 -0.20889423 0.58662319]
len(caches) =
2
[92mAll tests passed
###Markdown
**Expected Output**:```Pythona[4][1] = [-0.99999375 0.77911235 -0.99861469 -0.99833267]a.shape = (5, 10, 4)y_pred[1][3] = [ 0.79560373 0.86224861 0.11118257 0.81515947]y_pred.shape = (2, 10, 4)caches[1][1][3] = [-1.1425182 -0.34934272 -0.20889423 0.58662319]len(caches) = 2``` Congratulations! You've successfully built the forward propagation of a recurrent neural network from scratch. Nice work! Situations when this RNN will perform better:- This will work well enough for some applications, but it suffers from vanishing gradients. - The RNN works best when each output $\hat{y}^{\langle t \rangle}$ can be estimated using "local" context. - "Local" context refers to information that is close to the prediction's time step $t$.- More formally, local context refers to inputs $x^{\langle t' \rangle}$ and predictions $\hat{y}^{\langle t \rangle}$ where $t'$ is close to $t$.What you should remember:* The recurrent neural network, or RNN, is essentially the repeated use of a single cell.* A basic RNN reads inputs one at a time, and remembers information through the hidden layer activations (hidden states) that are passed from one time step to the next. * The time step dimension determines how many times to re-use the RNN cell* Each cell takes two inputs at each time step: * The hidden state from the previous cell * The current time step's input data* Each cell has two outputs at each time step: * A hidden state * A predictionIn the next section, you'll build a more complex model, the LSTM, which is better at addressing vanishing gradients. The LSTM is better able to remember a piece of information and save it for many time steps. 2 - Long Short-Term Memory (LSTM) NetworkThe following figure shows the operations of an LSTM cell:Figure 4: LSTM cell. This tracks and updates a "cell state," or memory variable $c^{\langle t \rangle}$ at every time step, which can be different from $a^{\langle t \rangle}$. Note, the $softmax^{}$ includes a dense layer and softmax.Similar to the RNN example above, you'll begin by implementing the LSTM cell for a single time step. Then, you'll iteratively call it from inside a "for loop" to have it process an input with $T_x$ time steps. Overview of gates and states Forget gate $\mathbf{\Gamma}_{f}$* Let's assume you are reading words in a piece of text, and plan to use an LSTM to keep track of grammatical structures, such as whether the subject is singular ("puppy") or plural ("puppies"). * If the subject changes its state (from a singular word to a plural word), the memory of the previous state becomes outdated, so you'll "forget" that outdated state.* The "forget gate" is a tensor containing values between 0 and 1. * If a unit in the forget gate has a value close to 0, the LSTM will "forget" the stored state in the corresponding unit of the previous cell state. * If a unit in the forget gate has a value close to 1, the LSTM will mostly remember the corresponding value in the stored state. Equation$$\mathbf{\Gamma}_f^{\langle t \rangle} = \sigma(\mathbf{W}_f[\mathbf{a}^{\langle t-1 \rangle}, \mathbf{x}^{\langle t \rangle}] + \mathbf{b}_f)\tag{1} $$ Explanation of the equation:* $\mathbf{W_{f}}$ contains weights that govern the forget gate's behavior. * The previous time step's hidden state $[a^{\langle t-1 \rangle}$ and current time step's input $x^{\langle t \rangle}]$ are concatenated together and multiplied by $\mathbf{W_{f}}$. 
* A sigmoid function is used to make each of the gate tensor's values $\mathbf{\Gamma}_f^{\langle t \rangle}$ range from 0 to 1.* The forget gate $\mathbf{\Gamma}_f^{\langle t \rangle}$ has the same dimensions as the previous cell state $c^{\langle t-1 \rangle}$. * This means that the two can be multiplied together, element-wise.* Multiplying the tensors $\mathbf{\Gamma}_f^{\langle t \rangle} * \mathbf{c}^{\langle t-1 \rangle}$ is like applying a mask over the previous cell state.* If a single value in $\mathbf{\Gamma}_f^{\langle t \rangle}$ is 0 or close to 0, then the product is close to 0. * This keeps the information stored in the corresponding unit in $\mathbf{c}^{\langle t-1 \rangle}$ from being remembered for the next time step.* Similarly, if one value is close to 1, the product is close to the original value in the previous cell state. * The LSTM will keep the information from the corresponding unit of $\mathbf{c}^{\langle t-1 \rangle}$, to be used in the next time step. Variable names in the codeThe variable names in the code are similar to the equations, with slight differences. * `Wf`: forget gate weight $\mathbf{W}_{f}$* `bf`: forget gate bias $\mathbf{b}_{f}$* `ft`: forget gate $\Gamma_f^{\langle t \rangle}$ Candidate value $\tilde{\mathbf{c}}^{\langle t \rangle}$* The candidate value is a tensor containing information from the current time step that **may** be stored in the current cell state $\mathbf{c}^{\langle t \rangle}$.* The parts of the candidate value that get passed on depend on the update gate.* The candidate value is a tensor containing values that range from -1 to 1.* The tilde "~" is used to differentiate the candidate $\tilde{\mathbf{c}}^{\langle t \rangle}$ from the cell state $\mathbf{c}^{\langle t \rangle}$. Equation$$\mathbf{\tilde{c}}^{\langle t \rangle} = \tanh\left( \mathbf{W}_{c} [\mathbf{a}^{\langle t - 1 \rangle}, \mathbf{x}^{\langle t \rangle}] + \mathbf{b}_{c} \right) \tag{3}$$ Explanation of the equation* The *tanh* function produces values between -1 and 1. Variable names in the code* `cct`: candidate value $\mathbf{\tilde{c}}^{\langle t \rangle}$ Update gate $\mathbf{\Gamma}_{i}$* You use the update gate to decide what aspects of the candidate $\tilde{\mathbf{c}}^{\langle t \rangle}$ to add to the cell state $c^{\langle t \rangle}$.* The update gate decides what parts of a "candidate" tensor $\tilde{\mathbf{c}}^{\langle t \rangle}$ are passed onto the cell state $\mathbf{c}^{\langle t \rangle}$.* The update gate is a tensor containing values between 0 and 1. * When a unit in the update gate is close to 1, it allows the value of the candidate $\tilde{\mathbf{c}}^{\langle t \rangle}$ to be passed onto the hidden state $\mathbf{c}^{\langle t \rangle}$ * When a unit in the update gate is close to 0, it prevents the corresponding value in the candidate from being passed onto the hidden state.* Notice that the subscript "i" is used and not "u", to follow the convention used in the literature. Equation$$\mathbf{\Gamma}_i^{\langle t \rangle} = \sigma(\mathbf{W}_i[a^{\langle t-1 \rangle}, \mathbf{x}^{\langle t \rangle}] + \mathbf{b}_i)\tag{2} $$ Explanation of the equation* Similar to the forget gate, here $\mathbf{\Gamma}_i^{\langle t \rangle}$, the sigmoid produces values between 0 and 1.* The update gate is multiplied element-wise with the candidate, and this product ($\mathbf{\Gamma}_{i}^{\langle t \rangle} * \tilde{c}^{\langle t \rangle}$) is used in determining the cell state $\mathbf{c}^{\langle t \rangle}$. 
Variable names in code (Please note that they're different than the equations)In the code, you'll use the variable names found in the academic literature. These variables don't use "u" to denote "update".* `Wi` is the update gate weight $\mathbf{W}_i$ (not "Wu") * `bi` is the update gate bias $\mathbf{b}_i$ (not "bu")* `it` is the update gate $\mathbf{\Gamma}_i^{\langle t \rangle}$ (not "ut") Cell state $\mathbf{c}^{\langle t \rangle}$* The cell state is the "memory" that gets passed onto future time steps.* The new cell state $\mathbf{c}^{\langle t \rangle}$ is a combination of the previous cell state and the candidate value. Equation$$ \mathbf{c}^{\langle t \rangle} = \mathbf{\Gamma}_f^{\langle t \rangle}* \mathbf{c}^{\langle t-1 \rangle} + \mathbf{\Gamma}_{i}^{\langle t \rangle} *\mathbf{\tilde{c}}^{\langle t \rangle} \tag{4} $$ Explanation of equation* The previous cell state $\mathbf{c}^{\langle t-1 \rangle}$ is adjusted (weighted) by the forget gate $\mathbf{\Gamma}_{f}^{\langle t \rangle}$* and the candidate value $\tilde{\mathbf{c}}^{\langle t \rangle}$, adjusted (weighted) by the update gate $\mathbf{\Gamma}_{i}^{\langle t \rangle}$ Variable names and shapes in the code* `c`: cell state, including all time steps, $\mathbf{c}$ shape $(n_{a}, m, T_x)$* `c_next`: new (next) cell state, $\mathbf{c}^{\langle t \rangle}$ shape $(n_{a}, m)$* `c_prev`: previous cell state, $\mathbf{c}^{\langle t-1 \rangle}$, shape $(n_{a}, m)$ Output gate $\mathbf{\Gamma}_{o}$* The output gate decides what gets sent as the prediction (output) of the time step.* The output gate is like the other gates, in that it contains values that range from 0 to 1. Equation$$ \mathbf{\Gamma}_o^{\langle t \rangle}= \sigma(\mathbf{W}_o[\mathbf{a}^{\langle t-1 \rangle}, \mathbf{x}^{\langle t \rangle}] + \mathbf{b}_{o})\tag{5}$$ Explanation of the equation* The output gate is determined by the previous hidden state $\mathbf{a}^{\langle t-1 \rangle}$ and the current input $\mathbf{x}^{\langle t \rangle}$* The sigmoid makes the gate range from 0 to 1. Variable names in the code* `Wo`: output gate weight, $\mathbf{W_o}$* `bo`: output gate bias, $\mathbf{b_o}$* `ot`: output gate, $\mathbf{\Gamma}_{o}^{\langle t \rangle}$ Hidden state $\mathbf{a}^{\langle t \rangle}$* The hidden state gets passed to the LSTM cell's next time step.* It is used to determine the three gates ($\mathbf{\Gamma}_{f}, \mathbf{\Gamma}_{u}, \mathbf{\Gamma}_{o}$) of the next time step.* The hidden state is also used for the prediction $y^{\langle t \rangle}$. Equation$$ \mathbf{a}^{\langle t \rangle} = \mathbf{\Gamma}_o^{\langle t \rangle} * \tanh(\mathbf{c}^{\langle t \rangle})\tag{6} $$ Explanation of equation* The hidden state $\mathbf{a}^{\langle t \rangle}$ is determined by the cell state $\mathbf{c}^{\langle t \rangle}$ in combination with the output gate $\mathbf{\Gamma}_{o}$.* The cell state state is passed through the `tanh` function to rescale values between -1 and 1.* The output gate acts like a "mask" that either preserves the values of $\tanh(\mathbf{c}^{\langle t \rangle})$ or keeps those values from being included in the hidden state $\mathbf{a}^{\langle t \rangle}$ Variable names and shapes in the code* `a`: hidden state, including time steps. $\mathbf{a}$ has shape $(n_{a}, m, T_{x})$* `a_prev`: hidden state from previous time step. $\mathbf{a}^{\langle t-1 \rangle}$ has shape $(n_{a}, m)$* `a_next`: hidden state for next time step. 
$\mathbf{a}^{\langle t \rangle}$ has shape $(n_{a}, m)$ Prediction $\mathbf{y}^{\langle t \rangle}_{pred}$* The prediction in this use case is a classification, so you'll use a softmax.The equation is:$$\mathbf{y}^{\langle t \rangle}_{pred} = \textrm{softmax}(\mathbf{W}_{y} \mathbf{a}^{\langle t \rangle} + \mathbf{b}_{y})$$ Variable names and shapes in the code* `y_pred`: prediction, including all time steps. $\mathbf{y}_{pred}$ has shape $(n_{y}, m, T_{x})$. Note that $(T_{y} = T_{x})$ for this example.* `yt_pred`: prediction for the current time step $t$. $\mathbf{y}^{\langle t \rangle}_{pred}$ has shape $(n_{y}, m)$ 2.1 - LSTM Cell Exercise 3 - lstm_cell_forwardImplement the LSTM cell described in Figure 4.**Instructions**:1. Concatenate the hidden state $a^{\langle t-1 \rangle}$ and input $x^{\langle t \rangle}$ into a single matrix: $$concat = \begin{bmatrix} a^{\langle t-1 \rangle} \\ x^{\langle t \rangle} \end{bmatrix}$$ 2. Compute all formulas (1 through 6) for the gates, hidden state, and cell state.3. Compute the prediction $y^{\langle t \rangle}$. Additional Hints* You can use [numpy.concatenate](https://docs.scipy.org/doc/numpy/reference/generated/numpy.concatenate.html). Check which value to use for the `axis` parameter.* The functions `sigmoid()` and `softmax` are imported from `rnn_utils.py`.* Some docs for [numpy.tanh](https://docs.scipy.org/doc/numpy/reference/generated/numpy.tanh.html)* Use [numpy.dot](https://docs.scipy.org/doc/numpy/reference/generated/numpy.dot.html) for matrix multiplication.* Notice that the variable names `Wi`, `bi` refer to the weights and biases of the **update** gate. There are no variables named "Wu" or "bu" in this function.
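Before filling in the graded cell, it can help to sanity-check the shapes involved. The snippet below is only an illustrative warm-up with made-up toy dimensions (it is not part of the graded function): it concatenates a hidden state and an input, confirms that the stacked matrix lines up with the columns of $W_f$, and shows how a sigmoid gate acts as an element-wise mask over the previous cell state, as described above.
```Python
import numpy as np
from rnn_utils import sigmoid                  # provided with this assignment

n_a, n_x, m = 5, 3, 10                         # toy dimensions (assumed values)
a_prev = np.random.randn(n_a, m)
xt     = np.random.randn(n_x, m)
c_prev = np.random.randn(n_a, m)
Wf, bf = np.random.randn(n_a, n_a + n_x), np.random.randn(n_a, 1)

concat = np.concatenate((a_prev, xt), axis=0)  # stack a_prev on top of xt
print(concat.shape)                            # (8, 10) -> matches Wf's (n_a + n_x) columns

ft = sigmoid(np.dot(Wf, concat) + bf)          # forget gate: values in (0, 1), shape (n_a, m)
print((ft * c_prev).shape)                     # (5, 10): element-wise "mask" over c_prev
```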
###Code
# UNQ_C3 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# GRADED FUNCTION: lstm_cell_forward
def lstm_cell_forward(xt, a_prev, c_prev, parameters):
"""
Implement a single forward step of the LSTM-cell as described in Figure (4)
Arguments:
xt -- your input data at timestep "t", numpy array of shape (n_x, m).
a_prev -- Hidden state at timestep "t-1", numpy array of shape (n_a, m)
c_prev -- Memory state at timestep "t-1", numpy array of shape (n_a, m)
parameters -- python dictionary containing:
Wf -- Weight matrix of the forget gate, numpy array of shape (n_a, n_a + n_x)
bf -- Bias of the forget gate, numpy array of shape (n_a, 1)
Wi -- Weight matrix of the update gate, numpy array of shape (n_a, n_a + n_x)
bi -- Bias of the update gate, numpy array of shape (n_a, 1)
Wc -- Weight matrix of the first "tanh", numpy array of shape (n_a, n_a + n_x)
bc -- Bias of the first "tanh", numpy array of shape (n_a, 1)
Wo -- Weight matrix of the output gate, numpy array of shape (n_a, n_a + n_x)
bo -- Bias of the output gate, numpy array of shape (n_a, 1)
Wy -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)
Returns:
a_next -- next hidden state, of shape (n_a, m)
c_next -- next memory state, of shape (n_a, m)
yt_pred -- prediction at timestep "t", numpy array of shape (n_y, m)
cache -- tuple of values needed for the backward pass, contains (a_next, c_next, a_prev, c_prev, xt, parameters)
Note: ft/it/ot stand for the forget/update/output gates, cct stands for the candidate value (c tilde),
c stands for the cell state (memory)
"""
# Retrieve parameters from "parameters"
Wf = parameters["Wf"] # forget gate weight
bf = parameters["bf"]
Wi = parameters["Wi"] # update gate weight (notice the variable name)
bi = parameters["bi"] # (notice the variable name)
Wc = parameters["Wc"] # candidate value weight
bc = parameters["bc"]
Wo = parameters["Wo"] # output gate weight
bo = parameters["bo"]
Wy = parameters["Wy"] # prediction weight
by = parameters["by"]
# Retrieve dimensions from shapes of xt and Wy
n_x, m = xt.shape
n_y, n_a = Wy.shape
### START CODE HERE ###
# Concatenate a_prev and xt (≈1 line)
concat = np.concatenate((a_prev, xt), axis=0)
# Compute values for ft, it, cct, c_next, ot, a_next using the formulas given figure (4) (≈6 lines)
ft = sigmoid(np.dot(Wf, concat) + bf)
it = sigmoid(np.dot(Wi, concat) + bi)
cct = np.tanh(np.dot(Wc, concat) + bc)
c_next = (ft * c_prev) + (it * cct)
ot = sigmoid(np.dot(Wo, concat) + bo)
a_next = ot * np.tanh(c_next)
# Compute prediction of the LSTM cell (≈1 line)
yt_pred = softmax(np.dot(Wy, a_next) + by)
### END CODE HERE ###
# store values needed for backward propagation in cache
cache = (a_next, c_next, a_prev, c_prev, ft, it, cct, ot, xt, parameters)
return a_next, c_next, yt_pred, cache
np.random.seed(1)
xt_tmp = np.random.randn(3, 10)
a_prev_tmp = np.random.randn(5, 10)
c_prev_tmp = np.random.randn(5, 10)
parameters_tmp = {}
parameters_tmp['Wf'] = np.random.randn(5, 5 + 3)
parameters_tmp['bf'] = np.random.randn(5, 1)
parameters_tmp['Wi'] = np.random.randn(5, 5 + 3)
parameters_tmp['bi'] = np.random.randn(5, 1)
parameters_tmp['Wo'] = np.random.randn(5, 5 + 3)
parameters_tmp['bo'] = np.random.randn(5, 1)
parameters_tmp['Wc'] = np.random.randn(5, 5 + 3)
parameters_tmp['bc'] = np.random.randn(5, 1)
parameters_tmp['Wy'] = np.random.randn(2, 5)
parameters_tmp['by'] = np.random.randn(2, 1)
a_next_tmp, c_next_tmp, yt_tmp, cache_tmp = lstm_cell_forward(xt_tmp, a_prev_tmp, c_prev_tmp, parameters_tmp)
print("a_next[4] = \n", a_next_tmp[4])
print("a_next.shape = ", a_next_tmp.shape)
print("c_next[2] = \n", c_next_tmp[2])
print("c_next.shape = ", c_next_tmp.shape)
print("yt[1] =", yt_tmp[1])
print("yt.shape = ", yt_tmp.shape)
print("cache[1][3] =\n", cache_tmp[1][3])
print("len(cache) = ", len(cache_tmp))
# UNIT TEST
lstm_cell_forward_test(lstm_cell_forward)
###Output
a_next[4] =
[-0.66408471 0.0036921 0.02088357 0.22834167 -0.85575339 0.00138482
0.76566531 0.34631421 -0.00215674 0.43827275]
a_next.shape = (5, 10)
c_next[2] =
[ 0.63267805 1.00570849 0.35504474 0.20690913 -1.64566718 0.11832942
0.76449811 -0.0981561 -0.74348425 -0.26810932]
c_next.shape = (5, 10)
yt[1] = [0.79913913 0.15986619 0.22412122 0.15606108 0.97057211 0.31146381
0.00943007 0.12666353 0.39380172 0.07828381]
yt.shape = (2, 10)
cache[1][3] =
[-0.16263996 1.03729328 0.72938082 -0.54101719 0.02752074 -0.30821874
0.07651101 -1.03752894 1.41219977 -0.37647422]
len(cache) = 10
All tests passed
###Markdown
**Expected Output**:```Pythona_next[4] = [-0.66408471 0.0036921 0.02088357 0.22834167 -0.85575339 0.00138482 0.76566531 0.34631421 -0.00215674 0.43827275]a_next.shape = (5, 10)c_next[2] = [ 0.63267805 1.00570849 0.35504474 0.20690913 -1.64566718 0.11832942 0.76449811 -0.0981561 -0.74348425 -0.26810932]c_next.shape = (5, 10)yt[1] = [ 0.79913913 0.15986619 0.22412122 0.15606108 0.97057211 0.31146381 0.00943007 0.12666353 0.39380172 0.07828381]yt.shape = (2, 10)cache[1][3] = [-0.16263996 1.03729328 0.72938082 -0.54101719 0.02752074 -0.30821874 0.07651101 -1.03752894 1.41219977 -0.37647422]len(cache) = 10``` 2.2 - Forward Pass for LSTMNow that you have implemented one step of an LSTM, you can iterate this over it using a for loop to process a sequence of $T_x$ inputs. Figure 5: LSTM over multiple time steps. Exercise 4 - lstm_forward Implement `lstm_forward()` to run an LSTM over $T_x$ time steps. **Instructions*** Get the dimensions $n_x, n_a, n_y, m, T_x$ from the shape of the variables: `x` and `parameters`* Initialize the 3D tensors $a$, $c$ and $y$ - $a$: hidden state, shape $(n_{a}, m, T_{x})$ - $c$: cell state, shape $(n_{a}, m, T_{x})$ - $y$: prediction, shape $(n_{y}, m, T_{x})$ (Note that $T_{y} = T_{x}$ in this example) - **Note** Setting one variable equal to the other is a "copy by reference". In other words, don't do `c = a', otherwise both these variables point to the same underlying variable.* Initialize the 2D tensor $a^{\langle t \rangle}$ - $a^{\langle t \rangle}$ stores the hidden state for time step $t$. The variable name is `a_next`. - $a^{\langle 0 \rangle}$, the initial hidden state at time step 0, is passed in when calling the function. The variable name is `a0`. - $a^{\langle t \rangle}$ and $a^{\langle 0 \rangle}$ represent a single time step, so they both have the shape $(n_{a}, m)$ - Initialize $a^{\langle t \rangle}$ by setting it to the initial hidden state ($a^{\langle 0 \rangle}$) that is passed into the function.* Initialize $c^{\langle t \rangle}$ with zeros. - The variable name is `c_next` - $c^{\langle t \rangle}$ represents a single time step, so its shape is $(n_{a}, m)$ - **Note**: create `c_next` as its own variable with its own location in memory. Do not initialize it as a slice of the 3D tensor $c$. In other words, **don't** do `c_next = c[:,:,0]`.* For each time step, do the following: - From the 3D tensor $x$, get a 2D slice $x^{\langle t \rangle}$ at time step $t$ - Call the `lstm_cell_forward` function that you defined previously, to get the hidden state, cell state, prediction, and cache - Store the hidden state, cell state and prediction (the 2D tensors) inside the 3D tensors - Append the cache to the list of caches
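One pitfall worth seeing in isolation is the "copy by reference" note above: in NumPy, basic slicing returns a view that shares memory with the original array, so writing into the slice also writes into the 3D tensor. A minimal illustration with toy shapes (not part of the graded function):
```Python
import numpy as np

c = np.zeros((5, 10, 7))     # pretend this is the 3D cell-state tensor
c_view = c[:, :, 0]          # basic slicing -> a *view* that shares memory with c
c_view += 1.0                # updating the view in place...
print(c[:, :, 0].sum())      # 50.0 -> ...also changed c itself

c_next = np.zeros((5, 10))   # independent buffer, as the instructions require
c_next += 1.0
print(c[:, :, 1].sum())      # 0.0 -> c is untouched
```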
###Code
# UNQ_C4 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# GRADED FUNCTION: lstm_forward
def lstm_forward(x, a0, parameters):
"""
Implement the forward propagation of the recurrent neural network using an LSTM-cell described in Figure (4).
Arguments:
x -- Input data for every time-step, of shape (n_x, m, T_x).
a0 -- Initial hidden state, of shape (n_a, m)
parameters -- python dictionary containing:
Wf -- Weight matrix of the forget gate, numpy array of shape (n_a, n_a + n_x)
bf -- Bias of the forget gate, numpy array of shape (n_a, 1)
Wi -- Weight matrix of the update gate, numpy array of shape (n_a, n_a + n_x)
bi -- Bias of the update gate, numpy array of shape (n_a, 1)
Wc -- Weight matrix of the first "tanh", numpy array of shape (n_a, n_a + n_x)
bc -- Bias of the first "tanh", numpy array of shape (n_a, 1)
Wo -- Weight matrix of the output gate, numpy array of shape (n_a, n_a + n_x)
bo -- Bias of the output gate, numpy array of shape (n_a, 1)
Wy -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)
Returns:
a -- Hidden states for every time-step, numpy array of shape (n_a, m, T_x)
y -- Predictions for every time-step, numpy array of shape (n_y, m, T_x)
c -- The value of the cell state, numpy array of shape (n_a, m, T_x)
caches -- tuple of values needed for the backward pass, contains (list of all the caches, x)
"""
# Initialize "caches", which will track the list of all the caches
caches = []
### START CODE HERE ###
Wy = parameters['Wy'] # saving parameters['Wy'] in a local variable in case students use Wy instead of parameters['Wy']
# Retrieve dimensions from shapes of x and parameters['Wy'] (≈2 lines)
n_x, m, T_x = x.shape
n_y, n_a = parameters['Wy'].shape
# initialize "a", "c" and "y" with zeros (≈3 lines)
a = np.zeros((n_a, m, T_x))
c = np.zeros((n_a, m, T_x))
y = np.zeros((n_y, m, T_x))
# Initialize a_next and c_next (≈2 lines)
a_next = a0
c_next = np.zeros(a_next.shape)
# loop over all time-steps
for t in range(T_x):
# Get the 2D slice 'xt' from the 3D input 'x' at time step 't'
xt = x[:,:,t]
# Update next hidden state, next memory state, compute the prediction, get the cache (≈1 line)
a_next, c_next, yt, cache = lstm_cell_forward(xt, a_next, c_next,
parameters)
# Save the value of the new "next" hidden state in a (≈1 line)
a[:,:,t] = a_next
# Save the value of the next cell state (≈1 line)
c[:,:,t] = c_next
# Save the value of the prediction in y (≈1 line)
y[:,:,t] = yt
# Append the cache into caches (≈1 line)
caches.append(cache)
### END CODE HERE ###
# store values needed for backward propagation in cache
caches = (caches, x)
return a, y, c, caches
np.random.seed(1)
x_tmp = np.random.randn(3, 10, 7)
a0_tmp = np.random.randn(5, 10)
parameters_tmp = {}
parameters_tmp['Wf'] = np.random.randn(5, 5 + 3)
parameters_tmp['bf'] = np.random.randn(5, 1)
parameters_tmp['Wi'] = np.random.randn(5, 5 + 3)
parameters_tmp['bi']= np.random.randn(5, 1)
parameters_tmp['Wo'] = np.random.randn(5, 5 + 3)
parameters_tmp['bo'] = np.random.randn(5, 1)
parameters_tmp['Wc'] = np.random.randn(5, 5 + 3)
parameters_tmp['bc'] = np.random.randn(5, 1)
parameters_tmp['Wy'] = np.random.randn(2, 5)
parameters_tmp['by'] = np.random.randn(2, 1)
a_tmp, y_tmp, c_tmp, caches_tmp = lstm_forward(x_tmp, a0_tmp, parameters_tmp)
print("a[4][3][6] = ", a_tmp[4][3][6])
print("a.shape = ", a_tmp.shape)
print("y[1][4][3] =", y_tmp[1][4][3])
print("y.shape = ", y_tmp.shape)
print("caches[1][1][1] =\n", caches_tmp[1][1][1])
print("c[1][2][1]", c_tmp[1][2][1])
print("len(caches) = ", len(caches_tmp))
# UNIT TEST
lstm_forward_test(lstm_forward)
###Output
a[4][3][6] = 0.17211776753291672
a.shape = (5, 10, 7)
y[1][4][3] = 0.9508734618501101
y.shape = (2, 10, 7)
caches[1][1][1] =
[ 0.82797464 0.23009474 0.76201118 -0.22232814 -0.20075807 0.18656139
0.41005165]
c[1][2][1] -0.8555449167181981
len(caches) = 2
All tests passed
###Markdown
**Expected Output**:```Pythona[4][3][6] = 0.172117767533a.shape = (5, 10, 7)y[1][4][3] = 0.95087346185y.shape = (2, 10, 7)caches[1][1][1] = [ 0.82797464 0.23009474 0.76201118 -0.22232814 -0.20075807 0.18656139 0.41005165]c[1][2][1] -0.855544916718len(caches) = 2``` Congratulations! You have now implemented the forward passes for both the basic RNN and the LSTM. When using a deep learning framework, implementing the forward pass is sufficient to build systems that achieve great performance. The framework will take care of the rest. What you should remember: * An LSTM is similar to an RNN in that they both use hidden states to pass along information, but an LSTM also uses a cell state, which is like a long-term memory, to help deal with the issue of vanishing gradients* An LSTM cell consists of a cell state, or long-term memory, a hidden state, or short-term memory, along with 3 gates that constantly update the relevancy of its inputs: * A forget gate, which decides which input units should be remembered and passed along. It's a tensor with values between 0 and 1. * If a unit has a value close to 0, the LSTM will "forget" the stored state in the previous cell state. * If it has a value close to 1, the LSTM will mostly remember the corresponding value. * An update gate, again a tensor containing values between 0 and 1. It decides on what information to throw away, and what new information to add. * When a unit in the update gate is close to 1, the value of its candidate is passed on to the hidden state. * When a unit in the update gate is close to 0, it's prevented from being passed onto the hidden state. * And an output gate, which decides what gets sent as the output of the time step Let's recap all you've accomplished so far. You have: * Used notation for building sequence models* Become familiar with the architecture of a basic RNN and an LSTM, and can describe their componentsThe rest of this notebook is optional, and will not be graded, but as always, you are encouraged to push your own understanding! Good luck and have fun. 3 - Backpropagation in Recurrent Neural Networks (OPTIONAL / UNGRADED)In modern deep learning frameworks, you only have to implement the forward pass, and the framework takes care of the backward pass, so most deep learning engineers do not need to bother with the details of the backward pass. If, however, you are an expert in calculus (or are just curious) and want to see the details of backprop in RNNs, you can work through this optional portion of the notebook. When in an earlier [course](https://www.coursera.org/learn/neural-networks-deep-learning/lecture/0VSHe/derivatives-with-a-computation-graph) you implemented a simple (fully connected) neural network, you used backpropagation to compute the derivatives with respect to the cost to update the parameters. Similarly, in recurrent neural networks you can calculate the derivatives with respect to the cost in order to update the parameters. The backprop equations are quite complicated, and so were not derived in lecture. However, they're briefly presented for your viewing pleasure below. Note that this notebook does not implement the backward path from the Loss 'J' backwards to 'a'. This would have included the dense layer and softmax, which are a part of the forward path. This is assumed to be calculated elsewhere and the result passed to `rnn_backward` in 'da'. It is further assumed that loss has been adjusted for batch size (m) and division by the number of examples is not required here. 
This section is optional and ungraded, because it's more difficult and has fewer details regarding its implementation. Note that this section only implements key elements of the full path! Onward, brave one: 3.1 - Basic RNN Backward PassBegin by computing the backward pass for the basic RNN cell. Then, in the following sections, iterate through the cells. Figure 6: The RNN cell's backward pass. Just like in a fully-connected neural network, the derivative of the cost function $J$ backpropagates through the time steps of the RNN by following the chain rule from calculus. Internal to the cell, the chain rule is also used to calculate $(\frac{\partial J}{\partial W_{ax}},\frac{\partial J}{\partial W_{aa}},\frac{\partial J}{\partial b})$ to update the parameters $(W_{ax}, W_{aa}, b_a)$. The operation can utilize the cached results from the forward path. Recall from lecture that the shorthand for the partial derivative of cost relative to a variable is `dVariable`. For example, $\frac{\partial J}{\partial W_{ax}}$ is $dW_{ax}$. This will be used throughout the remaining sections. Figure 7: This implementation of `rnn_cell_backward` does **not** include the output dense layer and softmax which are included in `rnn_cell_forward`. $da_{next}$ is $\frac{\partial{J}}{\partial a^{\langle t \rangle}}$ and includes loss from previous stages and current stage output logic. The addition shown in green will be part of your implementation of `rnn_backward`. EquationsTo compute `rnn_cell_backward`, you can use the following equations. It's a good exercise to derive them by hand. Here, $*$ denotes element-wise multiplication while the absence of a symbol indicates matrix multiplication.\begin{align}\displaystyle a^{\langle t \rangle} &= \tanh(W_{ax} x^{\langle t \rangle} + W_{aa} a^{\langle t-1 \rangle} + b_{a})\tag{-} \\[8pt]\displaystyle \frac{\partial \tanh(x)} {\partial x} &= 1 - \tanh^2(x) \tag{-} \\[8pt]\displaystyle {dtanh} &= da_{next} * ( 1 - \tanh^2(W_{ax}x^{\langle t \rangle}+W_{aa} a^{\langle t-1 \rangle} + b_{a})) \tag{0} \\[8pt]\displaystyle {dW_{ax}} &= dtanh \cdot x^{\langle t \rangle T}\tag{1} \\[8pt]\displaystyle dW_{aa} &= dtanh \cdot a^{\langle t-1 \rangle T}\tag{2} \\[8pt]\displaystyle db_a& = \sum_{batch}dtanh\tag{3} \\[8pt]\displaystyle dx^{\langle t \rangle} &= { W_{ax}}^T \cdot dtanh\tag{4} \\[8pt]\displaystyle da_{prev} &= { W_{aa}}^T \cdot dtanh\tag{5}\end{align} Exercise 5 - rnn_cell_backwardImplementing `rnn_cell_backward`.The results can be computed directly by implementing the equations above. However, you have an option to simplify them by computing 'dz' and using the chain rule. This can be further simplified by noting that $\tanh(W_{ax}x^{\langle t \rangle}+W_{aa} a^{\langle t-1 \rangle} + b_{a})$ was computed and saved as `a_next` in the forward pass. To calculate `dba`, the 'batch' above is a sum across all 'm' examples (axis= 1). Note that you should use the `keepdims = True` option.It may be worthwhile to review Course 1 [Derivatives with a Computation Graph](https://www.coursera.org/learn/neural-networks-deep-learning/lecture/0VSHe/derivatives-with-a-computation-graph) through [Backpropagation Intuition](https://www.coursera.org/learn/neural-networks-deep-learning/lecture/6dDj7/backpropagation-intuition-optional), which decompose the calculation into steps using the chain rule. 
Matrix vector derivatives are described [here](http://cs231n.stanford.edu/vecDerivs.pdf), though the equations above incorporate the required transformations.**Note**: `rnn_cell_backward` does **not** include the calculation of loss from $y \langle t \rangle$. This is incorporated into the incoming `da_next`. This is a slight mismatch with `rnn_cell_forward`, which includes a dense layer and softmax. **Note on the code**: $\displaystyle dx^{\langle t \rangle}$ is represented by dxt, $\displaystyle d W_{ax}$ is represented by dWax, $\displaystyle da_{prev}$ is represented by da_prev, $\displaystyle dW_{aa}$ is represented by dWaa, $\displaystyle db_{a}$ is represented by dba, `dz` is not derived above but can optionally be derived by students to simplify the repeated calculations.
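If you want to convince yourself that the formulas above are correct, a centered finite-difference check is a handy optional tool. The helper below is a hypothetical sketch, not part of the assignment: it uses the scalar proxy $L = \sum da_{next} * a^{\langle t \rangle}$, whose gradient with respect to $W_{ax}$ should match the `dWax` returned by the `rnn_cell_backward` you are about to implement.
```Python
import numpy as np

def check_dWax(xt, a_prev, parameters, da_next, i=0, j=0, eps=1e-7):
    """Compare analytic dWax[i, j] with a centered finite difference.

    Assumes rnn_cell_forward (defined earlier in the notebook) and the
    rnn_cell_backward implemented in the next cell are both available.
    """
    _, _, cache = rnn_cell_forward(xt, a_prev, parameters)
    analytic = rnn_cell_backward(da_next, cache)["dWax"][i, j]

    def proxy_loss(params):
        a_next, _, _ = rnn_cell_forward(xt, a_prev, params)
        return np.sum(da_next * a_next)

    p_plus  = {k: v.copy() for k, v in parameters.items()}
    p_minus = {k: v.copy() for k, v in parameters.items()}
    p_plus["Wax"][i, j]  += eps
    p_minus["Wax"][i, j] -= eps
    numeric = (proxy_loss(p_plus) - proxy_loss(p_minus)) / (2 * eps)
    return analytic, numeric   # the two numbers should agree to ~7 decimals
```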
###Code
# UNGRADED FUNCTION: rnn_cell_backward
def rnn_cell_backward(da_next, cache):
"""
Implements the backward pass for the RNN-cell (single time-step).
Arguments:
da_next -- Gradient of loss with respect to next hidden state
cache -- python dictionary containing useful values (output of rnn_cell_forward())
Returns:
gradients -- python dictionary containing:
dx -- Gradients of input data, of shape (n_x, m)
da_prev -- Gradients of previous hidden state, of shape (n_a, m)
dWax -- Gradients of input-to-hidden weights, of shape (n_a, n_x)
dWaa -- Gradients of hidden-to-hidden weights, of shape (n_a, n_a)
dba -- Gradients of bias vector, of shape (n_a, 1)
"""
# Retrieve values from cache
(a_next, a_prev, xt, parameters) = cache
# Retrieve values from parameters
Wax = parameters["Wax"]
Waa = parameters["Waa"]
Wya = parameters["Wya"]
ba = parameters["ba"]
by = parameters["by"]
### START CODE HERE ###
# compute the gradient of dtanh term using a_next and da_next (≈1 line)
dtanh = (1 - a_next**2) * da_next
# compute the gradient of the loss with respect to Wax (≈2 lines)
dxt = np.dot(Wax.T, dtanh)
dWax = np.dot(dtanh, xt.T)
# compute the gradient with respect to Waa (≈2 lines)
da_prev = np.dot(Waa.T, dtanh)
dWaa = np.dot(dtanh, a_prev.T)
# compute the gradient with respect to b (≈1 line)
dba = np.sum(dtanh, axis=1, keepdims=1)
### END CODE HERE ###
# Store the gradients in a python dictionary
gradients = {"dxt": dxt, "da_prev": da_prev, "dWax": dWax, "dWaa": dWaa, "dba": dba}
return gradients
np.random.seed(1)
xt_tmp = np.random.randn(3,10)
a_prev_tmp = np.random.randn(5,10)
parameters_tmp = {}
parameters_tmp['Wax'] = np.random.randn(5,3)
parameters_tmp['Waa'] = np.random.randn(5,5)
parameters_tmp['Wya'] = np.random.randn(2,5)
parameters_tmp['ba'] = np.random.randn(5,1)
parameters_tmp['by'] = np.random.randn(2,1)
a_next_tmp, yt_tmp, cache_tmp = rnn_cell_forward(xt_tmp, a_prev_tmp, parameters_tmp)
da_next_tmp = np.random.randn(5,10)
gradients_tmp = rnn_cell_backward(da_next_tmp, cache_tmp)
print("gradients[\"dxt\"][1][2] =", gradients_tmp["dxt"][1][2])
print("gradients[\"dxt\"].shape =", gradients_tmp["dxt"].shape)
print("gradients[\"da_prev\"][2][3] =", gradients_tmp["da_prev"][2][3])
print("gradients[\"da_prev\"].shape =", gradients_tmp["da_prev"].shape)
print("gradients[\"dWax\"][3][1] =", gradients_tmp["dWax"][3][1])
print("gradients[\"dWax\"].shape =", gradients_tmp["dWax"].shape)
print("gradients[\"dWaa\"][1][2] =", gradients_tmp["dWaa"][1][2])
print("gradients[\"dWaa\"].shape =", gradients_tmp["dWaa"].shape)
print("gradients[\"dba\"][4] =", gradients_tmp["dba"][4])
print("gradients[\"dba\"].shape =", gradients_tmp["dba"].shape)
###Output
gradients["dxt"][1][2] = -1.3872130506020925
gradients["dxt"].shape = (3, 10)
gradients["da_prev"][2][3] = -0.15239949377395495
gradients["da_prev"].shape = (5, 10)
gradients["dWax"][3][1] = 0.4107728249354584
gradients["dWax"].shape = (5, 3)
gradients["dWaa"][1][2] = 1.1503450668497135
gradients["dWaa"].shape = (5, 5)
gradients["dba"][4] = [0.20023491]
gradients["dba"].shape = (5, 1)
###Markdown
**Expected Output**: gradients["dxt"][1][2] = -1.3872130506 gradients["dxt"].shape = (3, 10) gradients["da_prev"][2][3] = -0.152399493774 gradients["da_prev"].shape = (5, 10) gradients["dWax"][3][1] = 0.410772824935 gradients["dWax"].shape = (5, 3) gradients["dWaa"][1][2] = 1.15034506685 gradients["dWaa"].shape = (5, 5) gradients["dba"][4] = [ 0.20023491] gradients["dba"].shape = (5, 1) Exercise 6 - rnn_backwardComputing the gradients of the cost with respect to $a^{\langle t \rangle}$ at every time step $t$ is useful because it is what helps the gradient backpropagate to the previous RNN cell. To do so, you need to iterate through all the time steps starting at the end, and at each step, you increment the overall $db_a$, $dW_{aa}$, $dW_{ax}$ and you store $dx$.**Instructions**:Implement the `rnn_backward` function. Initialize the return variables with zeros first, and then loop through all the time steps while calling the `rnn_cell_backward` at each time time step, updating the other variables accordingly. * Note that this notebook does not implement the backward path from the Loss 'J' backwards to 'a'. * This would have included the dense layer and softmax which are a part of the forward path. * This is assumed to be calculated elsewhere and the result passed to `rnn_backward` in 'da'. * You must combine this with the loss from the previous stages when calling `rnn_cell_backward` (see figure 7 above).* It is further assumed that loss has been adjusted for batch size (m). * Therefore, division by the number of examples is not required here.
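The key subtlety is the green addition in Figure 7: at time step $t$ the hidden state receives gradient both from its own output path (the slice `da[:,:,t]`) and from the following time step (the `da_prev` handed back by step $t+1$), and the two pieces are simply summed before being passed to `rnn_cell_backward`. A tiny numerical illustration with made-up values:
```Python
import numpy as np

# Hypothetical gradients for n_a = 2, m = 1, just to show the bookkeeping:
da_from_output_t   = np.array([[0.30], [-0.10]])  # comes in via da[:, :, t]
da_from_step_tplus = np.array([[0.05], [ 0.20]])  # da_prev returned by step t+1

da_next_total = da_from_output_t + da_from_step_tplus
print(da_next_total)   # this sum is the da_next argument at time step t
```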
###Code
# UNGRADED FUNCTION: rnn_backward
def rnn_backward(da, caches):
"""
Implement the backward pass for a RNN over an entire sequence of input data.
Arguments:
da -- Upstream gradients of all hidden states, of shape (n_a, m, T_x)
caches -- tuple containing information from the forward pass (rnn_forward)
Returns:
gradients -- python dictionary containing:
dx -- Gradient w.r.t. the input data, numpy-array of shape (n_x, m, T_x)
da0 -- Gradient w.r.t the initial hidden state, numpy-array of shape (n_a, m)
dWax -- Gradient w.r.t the input's weight matrix, numpy-array of shape (n_a, n_x)
dWaa -- Gradient w.r.t the hidden state's weight matrix, numpy-arrayof shape (n_a, n_a)
dba -- Gradient w.r.t the bias, of shape (n_a, 1)
"""
### START CODE HERE ###
# Retrieve values from the first cache (t=1) of caches (≈2 lines)
(caches, x) = caches
(a1, a0, x1, parameters) = caches[0]
# Retrieve dimensions from da's and x1's shapes (≈2 lines)
n_a, m, T_x = da.shape
n_x, m = x1.shape
# initialize the gradients with the right sizes (≈6 lines)
dx = np.zeros((n_x, m, T_x))
dWax = np.zeros((n_a, n_x))
dWaa = np.zeros((n_a, n_a))
dba = np.zeros((n_a, 1))
da0 = np.zeros((n_a, m))
da_prevt = np.zeros((n_a, m))
# Loop through all the time steps
for t in reversed(range(T_x)):
# Compute gradients at time step t. Choose wisely the "da_next" and the "cache" to use in the backward propagation step. (≈1 line)
gradients = rnn_cell_backward(da[:,:,t] + da_prevt, caches[t])
# Retrieve derivatives from gradients (≈ 1 line)
dxt, da_prevt, dWaxt, dWaat, dbat = gradients["dxt"], gradients["da_prev"], gradients["dWax"], gradients["dWaa"], gradients["dba"]
# Increment global derivatives w.r.t parameters by adding their derivative at time-step t (≈4 lines)
dx[:, :, t] = dxt
dWax += dWaxt
dWaa += dWaat
dba += dbat
# Set da0 to the gradient of a which has been backpropagated through all time-steps (≈1 line)
da0 = da_prevt
### END CODE HERE ###
# Store the gradients in a python dictionary
gradients = {"dx": dx, "da0": da0, "dWax": dWax, "dWaa": dWaa,"dba": dba}
return gradients
np.random.seed(1)
x_tmp = np.random.randn(3,10,4)
a0_tmp = np.random.randn(5,10)
parameters_tmp = {}
parameters_tmp['Wax'] = np.random.randn(5,3)
parameters_tmp['Waa'] = np.random.randn(5,5)
parameters_tmp['Wya'] = np.random.randn(2,5)
parameters_tmp['ba'] = np.random.randn(5,1)
parameters_tmp['by'] = np.random.randn(2,1)
a_tmp, y_tmp, caches_tmp = rnn_forward(x_tmp, a0_tmp, parameters_tmp)
da_tmp = np.random.randn(5, 10, 4)
gradients_tmp = rnn_backward(da_tmp, caches_tmp)
print("gradients[\"dx\"][1][2] =", gradients_tmp["dx"][1][2])
print("gradients[\"dx\"].shape =", gradients_tmp["dx"].shape)
print("gradients[\"da0\"][2][3] =", gradients_tmp["da0"][2][3])
print("gradients[\"da0\"].shape =", gradients_tmp["da0"].shape)
print("gradients[\"dWax\"][3][1] =", gradients_tmp["dWax"][3][1])
print("gradients[\"dWax\"].shape =", gradients_tmp["dWax"].shape)
print("gradients[\"dWaa\"][1][2] =", gradients_tmp["dWaa"][1][2])
print("gradients[\"dWaa\"].shape =", gradients_tmp["dWaa"].shape)
print("gradients[\"dba\"][4] =", gradients_tmp["dba"][4])
print("gradients[\"dba\"].shape =", gradients_tmp["dba"].shape)
###Output
gradients["dx"][1][2] = [-2.07101689 -0.59255627 0.02466855 0.01483317]
gradients["dx"].shape = (3, 10, 4)
gradients["da0"][2][3] = -0.31494237512664996
gradients["da0"].shape = (5, 10)
gradients["dWax"][3][1] = 11.264104496527777
gradients["dWax"].shape = (5, 3)
gradients["dWaa"][1][2] = 2.303333126579893
gradients["dWaa"].shape = (5, 5)
gradients["dba"][4] = [-0.74747722]
gradients["dba"].shape = (5, 1)
###Markdown
**Expected Output**: gradients["dx"][1][2] = [-2.07101689 -0.59255627 0.02466855 0.01483317] gradients["dx"].shape = (3, 10, 4) gradients["da0"][2][3] = -0.314942375127 gradients["da0"].shape = (5, 10) gradients["dWax"][3][1] = 11.2641044965 gradients["dWax"].shape = (5, 3) gradients["dWaa"][1][2] = 2.30333312658 gradients["dWaa"].shape = (5, 5) gradients["dba"][4] = [-0.74747722] gradients["dba"].shape = (5, 1) 3.2 - LSTM Backward Pass 1. One Step BackwardThe LSTM backward pass is slightly more complicated than the forward pass. Figure 8: LSTM Cell Backward. Note the output functions, while part of the `lstm_cell_forward`, are not included in `lstm_cell_backward` The equations for the LSTM backward pass are provided below. (If you enjoy calculus exercises feel free to try deriving these from scratch yourself.) 2. Gate DerivativesNote the location of the gate derivatives ($\gamma$..) between the dense layer and the activation function (see graphic above). This is convenient for computing parameter derivatives in the next step. \begin{align}d\gamma_o^{\langle t \rangle} &= da_{next}*\tanh(c_{next}) * \Gamma_o^{\langle t \rangle}*\left(1-\Gamma_o^{\langle t \rangle}\right)\tag{7} \\[8pt]dp\widetilde{c}^{\langle t \rangle} &= \left(dc_{next}*\Gamma_u^{\langle t \rangle}+ \Gamma_o^{\langle t \rangle}* (1-\tanh^2(c_{next})) * \Gamma_u^{\langle t \rangle} * da_{next} \right) * \left(1-\left(\widetilde c^{\langle t \rangle}\right)^2\right) \tag{8} \\[8pt]d\gamma_u^{\langle t \rangle} &= \left(dc_{next}*\widetilde{c}^{\langle t \rangle} + \Gamma_o^{\langle t \rangle}* (1-\tanh^2(c_{next})) * \widetilde{c}^{\langle t \rangle} * da_{next}\right)*\Gamma_u^{\langle t \rangle}*\left(1-\Gamma_u^{\langle t \rangle}\right)\tag{9} \\[8pt]d\gamma_f^{\langle t \rangle} &= \left(dc_{next}* c_{prev} + \Gamma_o^{\langle t \rangle} * (1-\tanh^2(c_{next})) * c_{prev} * da_{next}\right)*\Gamma_f^{\langle t \rangle}*\left(1-\Gamma_f^{\langle t \rangle}\right)\tag{10}\end{align} 3. Parameter Derivatives $ dW_f = d\gamma_f^{\langle t \rangle} \begin{bmatrix} a_{prev} \\ x_t\end{bmatrix}^T \tag{11} $$ dW_u = d\gamma_u^{\langle t \rangle} \begin{bmatrix} a_{prev} \\ x_t\end{bmatrix}^T \tag{12} $$ dW_c = dp\widetilde c^{\langle t \rangle} \begin{bmatrix} a_{prev} \\ x_t\end{bmatrix}^T \tag{13} $$ dW_o = d\gamma_o^{\langle t \rangle} \begin{bmatrix} a_{prev} \\ x_t\end{bmatrix}^T \tag{14}$To calculate $db_f, db_u, db_c, db_o$ you just need to sum across all 'm' examples (axis= 1) on $d\gamma_f^{\langle t \rangle}, d\gamma_u^{\langle t \rangle}, dp\widetilde c^{\langle t \rangle}, d\gamma_o^{\langle t \rangle}$ respectively. Note that you should have the `keepdims = True` option.$\displaystyle db_f = \sum_{batch}d\gamma_f^{\langle t \rangle}\tag{15}$$\displaystyle db_u = \sum_{batch}d\gamma_u^{\langle t \rangle}\tag{16}$$\displaystyle db_c = \sum_{batch}d\gamma_c^{\langle t \rangle}\tag{17}$$\displaystyle db_o = \sum_{batch}d\gamma_o^{\langle t \rangle}\tag{18}$Finally, you will compute the derivative with respect to the previous hidden state, previous memory state, and input.$ da_{prev} = W_f^T d\gamma_f^{\langle t \rangle} + W_u^T d\gamma_u^{\langle t \rangle}+ W_c^T dp\widetilde c^{\langle t \rangle} + W_o^T d\gamma_o^{\langle t \rangle} \tag{19}$Here, to account for concatenation, the weights for equations 19 are the first n_a, (i.e. 
$W_f = W_f[:,:n_a]$ etc...)$ dc_{prev} = dc_{next}*\Gamma_f^{\langle t \rangle} + \Gamma_o^{\langle t \rangle} * (1- \tanh^2(c_{next}))*\Gamma_f^{\langle t \rangle}*da_{next} \tag{20}$$ dx^{\langle t \rangle} = W_f^T d\gamma_f^{\langle t \rangle} + W_u^T d\gamma_u^{\langle t \rangle}+ W_c^T dp\widetilde c^{\langle t \rangle} + W_o^T d\gamma_o^{\langle t \rangle}\tag{21} $where the weights for equation 21 are from n_a to the end, (i.e. $W_f = W_f[:,n_a:]$ etc...) Exercise 7 - lstm_cell_backwardImplement `lstm_cell_backward` by implementing equations $7-21$ below. **Note**: In the code:$d\gamma_o^{\langle t \rangle}$ is represented by `dot`, $dp\widetilde{c}^{\langle t \rangle}$ is represented by `dcct`, $d\gamma_u^{\langle t \rangle}$ is represented by `dit`, $d\gamma_f^{\langle t \rangle}$ is represented by `dft`
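Two mechanical details above are easy to get wrong, so here is a shape-only sketch with toy dimensions (illustrative, not the graded function): the weight matrices act on the concatenation $[a_{prev}; x_t]$, so their first $n_a$ columns route gradient back to $a_{prev}$ (equation 19) and the remaining $n_x$ columns route it back to $x_t$ (equation 21); and the bias gradients need `keepdims=True` to stay column vectors.
```Python
import numpy as np

n_a, n_x, m = 5, 3, 10                  # toy dimensions (assumed values)
Wf  = np.random.randn(n_a, n_a + n_x)   # acts on concat([a_prev; xt])
dft = np.random.randn(n_a, m)           # gate derivative from equation (10)

da_piece = np.dot(Wf[:, :n_a].T, dft)   # (5, 10) -> contribution to da_prev, eq. (19)
dx_piece = np.dot(Wf[:, n_a:].T, dft)   # (3, 10) -> contribution to dxt,     eq. (21)
print(da_piece.shape, dx_piece.shape)

dbf = np.sum(dft, axis=1, keepdims=True)
print(dbf.shape)                        # (5, 1) rather than (5,) -- why keepdims matters
```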
###Code
# UNGRADED FUNCTION: lstm_cell_backward
def lstm_cell_backward(da_next, dc_next, cache):
"""
Implement the backward pass for the LSTM-cell (single time-step).
Arguments:
da_next -- Gradients of next hidden state, of shape (n_a, m)
dc_next -- Gradients of next cell state, of shape (n_a, m)
cache -- cache storing information from the forward pass
Returns:
gradients -- python dictionary containing:
dxt -- Gradient of input data at time-step t, of shape (n_x, m)
da_prev -- Gradient w.r.t. the previous hidden state, numpy array of shape (n_a, m)
dc_prev -- Gradient w.r.t. the previous memory state, of shape (n_a, m, T_x)
dWf -- Gradient w.r.t. the weight matrix of the forget gate, numpy array of shape (n_a, n_a + n_x)
dWi -- Gradient w.r.t. the weight matrix of the update gate, numpy array of shape (n_a, n_a + n_x)
dWc -- Gradient w.r.t. the weight matrix of the memory gate, numpy array of shape (n_a, n_a + n_x)
dWo -- Gradient w.r.t. the weight matrix of the output gate, numpy array of shape (n_a, n_a + n_x)
dbf -- Gradient w.r.t. biases of the forget gate, of shape (n_a, 1)
dbi -- Gradient w.r.t. biases of the update gate, of shape (n_a, 1)
dbc -- Gradient w.r.t. biases of the memory gate, of shape (n_a, 1)
dbo -- Gradient w.r.t. biases of the output gate, of shape (n_a, 1)
"""
# Retrieve information from "cache"
(a_next, c_next, a_prev, c_prev, ft, it, cct, ot, xt, parameters) = cache
### START CODE HERE ###
# Retrieve dimensions from xt's and a_next's shape (≈2 lines)
n_x, m = xt.shape
n_a, m = a_next.shape
# Compute gates related derivatives. Their values can be found by looking carefully at equations (7) to (10) (≈4 lines)
dot = da_next * np.tanh(c_next) * ot * (1 - ot)
dcct = (da_next * ot * (1 - np.tanh(c_next) ** 2) + dc_next) * it * (1 - cct ** 2)
dit = (da_next * ot * (1 - np.tanh(c_next) ** 2) + dc_next) * cct * (1 - it) * it
dft = (da_next * ot * (1 - np.tanh(c_next) ** 2) + dc_next) * c_prev * ft * (1 - ft)
# Compute parameters related derivatives. Use equations (11)-(18) (≈8 lines)
dWf = np.dot(dft, np.hstack([a_prev.T, xt.T]))
dWi = np.dot(dit, np.hstack([a_prev.T, xt.T]))
dWc = np.dot(dcct, np.hstack([a_prev.T, xt.T]))
dWo = np.dot(dot, np.hstack([a_prev.T, xt.T]))
dbf = np.sum(dft, axis=1, keepdims=True)
dbi = np.sum(dit, axis=1, keepdims=True)
dbc = np.sum(dcct, axis=1, keepdims=True)
dbo = np.sum(dot, axis=1, keepdims=True)
# Compute derivatives w.r.t previous hidden state, previous memory state and input. Use equations (19)-(21). (≈3 lines)
da_prev = np.dot(parameters['Wf'][:, :n_a].T, dft) + np.dot(parameters['Wi'][:, :n_a].T, dit) + np.dot(parameters['Wc'][:, :n_a].T, dcct) + np.dot(parameters['Wo'][:, :n_a].T, dot)
dc_prev = dc_next * ft + ot * (1 - np.square(np.tanh(c_next))) * ft * da_next
dxt = np.dot(parameters['Wf'][:, n_a:].T, dft) + np.dot(parameters['Wi'][:, n_a:].T, dit) + np.dot(parameters['Wc'][:, n_a:].T, dcct) + np.dot(parameters['Wo'][:, n_a:].T, dot)
### END CODE HERE ###
# Save gradients in dictionary
gradients = {"dxt": dxt, "da_prev": da_prev, "dc_prev": dc_prev, "dWf": dWf,"dbf": dbf, "dWi": dWi,"dbi": dbi,
"dWc": dWc,"dbc": dbc, "dWo": dWo,"dbo": dbo}
return gradients
np.random.seed(1)
xt_tmp = np.random.randn(3,10)
a_prev_tmp = np.random.randn(5,10)
c_prev_tmp = np.random.randn(5,10)
parameters_tmp = {}
parameters_tmp['Wf'] = np.random.randn(5, 5+3)
parameters_tmp['bf'] = np.random.randn(5,1)
parameters_tmp['Wi'] = np.random.randn(5, 5+3)
parameters_tmp['bi'] = np.random.randn(5,1)
parameters_tmp['Wo'] = np.random.randn(5, 5+3)
parameters_tmp['bo'] = np.random.randn(5,1)
parameters_tmp['Wc'] = np.random.randn(5, 5+3)
parameters_tmp['bc'] = np.random.randn(5,1)
parameters_tmp['Wy'] = np.random.randn(2,5)
parameters_tmp['by'] = np.random.randn(2,1)
a_next_tmp, c_next_tmp, yt_tmp, cache_tmp = lstm_cell_forward(xt_tmp, a_prev_tmp, c_prev_tmp, parameters_tmp)
da_next_tmp = np.random.randn(5,10)
dc_next_tmp = np.random.randn(5,10)
gradients_tmp = lstm_cell_backward(da_next_tmp, dc_next_tmp, cache_tmp)
print("gradients[\"dxt\"][1][2] =", gradients_tmp["dxt"][1][2])
print("gradients[\"dxt\"].shape =", gradients_tmp["dxt"].shape)
print("gradients[\"da_prev\"][2][3] =", gradients_tmp["da_prev"][2][3])
print("gradients[\"da_prev\"].shape =", gradients_tmp["da_prev"].shape)
print("gradients[\"dc_prev\"][2][3] =", gradients_tmp["dc_prev"][2][3])
print("gradients[\"dc_prev\"].shape =", gradients_tmp["dc_prev"].shape)
print("gradients[\"dWf\"][3][1] =", gradients_tmp["dWf"][3][1])
print("gradients[\"dWf\"].shape =", gradients_tmp["dWf"].shape)
print("gradients[\"dWi\"][1][2] =", gradients_tmp["dWi"][1][2])
print("gradients[\"dWi\"].shape =", gradients_tmp["dWi"].shape)
print("gradients[\"dWc\"][3][1] =", gradients_tmp["dWc"][3][1])
print("gradients[\"dWc\"].shape =", gradients_tmp["dWc"].shape)
print("gradients[\"dWo\"][1][2] =", gradients_tmp["dWo"][1][2])
print("gradients[\"dWo\"].shape =", gradients_tmp["dWo"].shape)
print("gradients[\"dbf\"][4] =", gradients_tmp["dbf"][4])
print("gradients[\"dbf\"].shape =", gradients_tmp["dbf"].shape)
print("gradients[\"dbi\"][4] =", gradients_tmp["dbi"][4])
print("gradients[\"dbi\"].shape =", gradients_tmp["dbi"].shape)
print("gradients[\"dbc\"][4] =", gradients_tmp["dbc"][4])
print("gradients[\"dbc\"].shape =", gradients_tmp["dbc"].shape)
print("gradients[\"dbo\"][4] =", gradients_tmp["dbo"][4])
print("gradients[\"dbo\"].shape =", gradients_tmp["dbo"].shape)
###Output
gradients["dxt"][1][2] = 3.230559115109188
gradients["dxt"].shape = (3, 10)
gradients["da_prev"][2][3] = -0.06396214197109236
gradients["da_prev"].shape = (5, 10)
gradients["dc_prev"][2][3] = 0.7975220387970015
gradients["dc_prev"].shape = (5, 10)
gradients["dWf"][3][1] = -0.14795483816449675
gradients["dWf"].shape = (5, 8)
gradients["dWi"][1][2] = 1.0574980552259903
gradients["dWi"].shape = (5, 8)
gradients["dWc"][3][1] = 2.304562163687667
gradients["dWc"].shape = (5, 8)
gradients["dWo"][1][2] = 0.3313115952892109
gradients["dWo"].shape = (5, 8)
gradients["dbf"][4] = [0.18864637]
gradients["dbf"].shape = (5, 1)
gradients["dbi"][4] = [-0.40142491]
gradients["dbi"].shape = (5, 1)
gradients["dbc"][4] = [0.25587763]
gradients["dbc"].shape = (5, 1)
gradients["dbo"][4] = [0.13893342]
gradients["dbo"].shape = (5, 1)
###Markdown
**Expected Output**: gradients["dxt"][1][2] = 3.23055911511 gradients["dxt"].shape = (3, 10) gradients["da_prev"][2][3] = -0.0639621419711 gradients["da_prev"].shape = (5, 10) gradients["dc_prev"][2][3] = 0.797522038797 gradients["dc_prev"].shape = (5, 10) gradients["dWf"][3][1] = -0.147954838164 gradients["dWf"].shape = (5, 8) gradients["dWi"][1][2] = 1.05749805523 gradients["dWi"].shape = (5, 8) gradients["dWc"][3][1] = 2.30456216369 gradients["dWc"].shape = (5, 8) gradients["dWo"][1][2] = 0.331311595289 gradients["dWo"].shape = (5, 8) gradients["dbf"][4] = [ 0.18864637] gradients["dbf"].shape = (5, 1) gradients["dbi"][4] = [-0.40142491] gradients["dbi"].shape = (5, 1) gradients["dbc"][4] = [ 0.25587763] gradients["dbc"].shape = (5, 1) gradients["dbo"][4] = [ 0.13893342] gradients["dbo"].shape = (5, 1) 3.3 Backward Pass through the LSTM RNNThis part is very similar to the `rnn_backward` function you implemented above. You will first create variables of the same dimension as your return variables. You will then iterate over all the time steps starting from the end and call the one step function you implemented for LSTM at each iteration. You will then update the parameters by summing them individually. Finally return a dictionary with the new gradients. Exercise 8 - lstm_backwardImplement the `lstm_backward` function.**Instructions**: Create a for loop starting from $T_x$ and going backward. For each step, call `lstm_cell_backward` and update your old gradients by adding the new gradients to them. Note that `dxt` is not updated, but is stored.
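Once `lstm_backward` (implemented below) returns its gradient dictionary, a natural follow-on — not required in this notebook, but useful for seeing how the pieces fit together — is a plain gradient-descent update of the parameters. The helper below is a hypothetical sketch; the function name and learning rate are illustrative.
```Python
def sgd_update(parameters, gradients, learning_rate=0.01):
    """Illustrative in-place SGD step using the gradients from lstm_backward."""
    for name in ["Wf", "bf", "Wi", "bi", "Wc", "bc", "Wo", "bo"]:
        parameters[name] -= learning_rate * gradients["d" + name]
    # Wy and by are left untouched: their gradients come from the output layer,
    # which is outside the backward pass implemented in this notebook.
    return parameters
```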
###Code
# UNGRADED FUNCTION: lstm_backward
def lstm_backward(da, caches):
"""
Implement the backward pass for the RNN with LSTM-cell (over a whole sequence).
Arguments:
da -- Gradients w.r.t the hidden states, numpy-array of shape (n_a, m, T_x)
caches -- cache storing information from the forward pass (lstm_forward)
Returns:
gradients -- python dictionary containing:
dx -- Gradient of inputs, of shape (n_x, m, T_x)
da0 -- Gradient w.r.t. the previous hidden state, numpy array of shape (n_a, m)
dWf -- Gradient w.r.t. the weight matrix of the forget gate, numpy array of shape (n_a, n_a + n_x)
dWi -- Gradient w.r.t. the weight matrix of the update gate, numpy array of shape (n_a, n_a + n_x)
dWc -- Gradient w.r.t. the weight matrix of the memory gate, numpy array of shape (n_a, n_a + n_x)
dWo -- Gradient w.r.t. the weight matrix of the output gate, numpy array of shape (n_a, n_a + n_x)
dbf -- Gradient w.r.t. biases of the forget gate, of shape (n_a, 1)
dbi -- Gradient w.r.t. biases of the update gate, of shape (n_a, 1)
dbc -- Gradient w.r.t. biases of the memory gate, of shape (n_a, 1)
dbo -- Gradient w.r.t. biases of the output gate, of shape (n_a, 1)
"""
# Retrieve values from the first cache (t=1) of caches.
(caches, x) = caches
(a1, c1, a0, c0, f1, i1, cc1, o1, x1, parameters) = caches[0]
### START CODE HERE ###
# Retrieve dimensions from da's and x1's shapes (≈2 lines)
n_a, m, T_x = da.shape
n_x, m = x1.shape
# initialize the gradients with the right sizes (≈12 lines)
dx = np.zeros((n_x, m, T_x))
da0 = np.zeros((n_a, m))
da_prevt = np.zeros(da0.shape)
dc_prevt = np.zeros(da0.shape)
dWf = np.zeros((n_a, n_a + n_x))
dWi = np.zeros(dWf.shape)
dWc = np.zeros(dWf.shape)
dWo = np.zeros(dWf.shape)
dbf = np.zeros((n_a, 1))
dbi = np.zeros(dbf.shape)
dbc = np.zeros(dbf.shape)
dbo = np.zeros(dbf.shape)
# loop back over the whole sequence
for t in reversed(range(T_x)):
# Compute all gradients using lstm_cell_backward
gradients = lstm_cell_backward(da[:, :, t], dc_prevt, caches[t])
# Store or add the gradient to the parameters' previous step's gradient
da_prevt = gradients['da_prev']
dc_prevt = gradients['dc_prev']
dx[:,:,t] = gradients["dxt"]
dWf += gradients["dWf"]
dWi += gradients["dWi"]
dWc += gradients["dWc"]
dWo += gradients["dWo"]
dbf += gradients["dbf"]
dbi += gradients["dbi"]
dbc += gradients["dbc"]
dbo += gradients["dbo"]
# Set the first activation's gradient to the backpropagated gradient da_prev.
da0 = gradients['da_prev']
### END CODE HERE ###
# Store the gradients in a python dictionary
gradients = {"dx": dx, "da0": da0, "dWf": dWf,"dbf": dbf, "dWi": dWi,"dbi": dbi,
"dWc": dWc,"dbc": dbc, "dWo": dWo,"dbo": dbo}
return gradients
np.random.seed(1)
x_tmp = np.random.randn(3,10,7)
a0_tmp = np.random.randn(5,10)
parameters_tmp = {}
parameters_tmp['Wf'] = np.random.randn(5, 5+3)
parameters_tmp['bf'] = np.random.randn(5,1)
parameters_tmp['Wi'] = np.random.randn(5, 5+3)
parameters_tmp['bi'] = np.random.randn(5,1)
parameters_tmp['Wo'] = np.random.randn(5, 5+3)
parameters_tmp['bo'] = np.random.randn(5,1)
parameters_tmp['Wc'] = np.random.randn(5, 5+3)
parameters_tmp['bc'] = np.random.randn(5,1)
parameters_tmp['Wy'] = np.zeros((2,5)) # unused, but needed for lstm_forward
parameters_tmp['by'] = np.zeros((2,1)) # unused, but needed for lstm_forward
a_tmp, y_tmp, c_tmp, caches_tmp = lstm_forward(x_tmp, a0_tmp, parameters_tmp)
da_tmp = np.random.randn(5, 10, 4)
gradients_tmp = lstm_backward(da_tmp, caches_tmp)
print("gradients[\"dx\"][1][2] =", gradients_tmp["dx"][1][2])
print("gradients[\"dx\"].shape =", gradients_tmp["dx"].shape)
print("gradients[\"da0\"][2][3] =", gradients_tmp["da0"][2][3])
print("gradients[\"da0\"].shape =", gradients_tmp["da0"].shape)
print("gradients[\"dWf\"][3][1] =", gradients_tmp["dWf"][3][1])
print("gradients[\"dWf\"].shape =", gradients_tmp["dWf"].shape)
print("gradients[\"dWi\"][1][2] =", gradients_tmp["dWi"][1][2])
print("gradients[\"dWi\"].shape =", gradients_tmp["dWi"].shape)
print("gradients[\"dWc\"][3][1] =", gradients_tmp["dWc"][3][1])
print("gradients[\"dWc\"].shape =", gradients_tmp["dWc"].shape)
print("gradients[\"dWo\"][1][2] =", gradients_tmp["dWo"][1][2])
print("gradients[\"dWo\"].shape =", gradients_tmp["dWo"].shape)
print("gradients[\"dbf\"][4] =", gradients_tmp["dbf"][4])
print("gradients[\"dbf\"].shape =", gradients_tmp["dbf"].shape)
print("gradients[\"dbi\"][4] =", gradients_tmp["dbi"][4])
print("gradients[\"dbi\"].shape =", gradients_tmp["dbi"].shape)
print("gradients[\"dbc\"][4] =", gradients_tmp["dbc"][4])
print("gradients[\"dbc\"].shape =", gradients_tmp["dbc"].shape)
print("gradients[\"dbo\"][4] =", gradients_tmp["dbo"][4])
print("gradients[\"dbo\"].shape =", gradients_tmp["dbo"].shape)
###Output
gradients["dx"][1][2] = [ 0.00296954 0.24084671 -0.25074631 -0.43281115]
gradients["dx"].shape = (3, 10, 4)
gradients["da0"][2][3] = -0.01828646150056154
gradients["da0"].shape = (5, 10)
gradients["dWf"][3][1] = -0.06685625837650713
gradients["dWf"].shape = (5, 8)
gradients["dWi"][1][2] = 0.016536645017756424
gradients["dWi"].shape = (5, 8)
gradients["dWc"][3][1] = -0.05752283221607342
gradients["dWc"].shape = (5, 8)
gradients["dWo"][1][2] = 0.04843891314443014
gradients["dWo"].shape = (5, 8)
gradients["dbf"][4] = [-0.10526487]
gradients["dbf"].shape = (5, 1)
gradients["dbi"][4] = [-0.27272551]
gradients["dbi"].shape = (5, 1)
gradients["dbc"][4] = [-0.28212524]
gradients["dbc"].shape = (5, 1)
gradients["dbo"][4] = [-0.29798344]
gradients["dbo"].shape = (5, 1)
|
00_get_highest_water_date/GEE_plot_leafle_example.ipynb | ###Markdown
interactive plotting using folium (leaflet wrapper)
###Code
import folium
print(folium.__version__)
# colour palette (hex colour codes) for map visualisation
palette = ['0784b5', '39ace7', '9bd4e4', 'cadeef', 'ffffff']
# Define the URL format used for Earth Engine generated map tiles.
EE_TILES = 'https://earthengine.googleapis.com/map/{mapid}/{{z}}/{{x}}/{{y}}?token={token}'
# water function:
def waterfunction(image):
image2 = image .reproject(crs ='EPSG:4326', scale = SCALE)\
.focal_mode()\
.focal_max(3).focal_min(5).focal_max(3)\
.reproject(crs ='EPSG:4326', scale = SCALE)
# get pixels below the threshold (values < -12), i.e. likely water
water01 = image2.lt(-12)
# keep only those pixels (updateMask removes everything else)
image = image.updateMask(water01)
# calculate pixel areas
area = ee.Image.pixelArea()
waterArea = water01.multiply(area).rename('waterArea');
# add the waterArea band to the output image
image = image.addBands(waterArea)
return image
# run waterfunction over the Sentinel-1 collection
collection = S1.map(waterfunction)
# Use folium to visualize the imagery.
# mapid = S1.first().getMapId({
# 'region':rect_JSON,
# 'min':-25,
# 'max':0,
# 'palette':['0784b5', '39ace7', '9bd4e4', 'cadeef', 'ffffff']
# })
mapid = collection.select('waterArea').first().getMapId({
'region':rect_JSON,
'min':0,
'max':1,
'palette':['ffffff','0784b5']
})
map = folium.Map(location=[-6.36456378799,106.812382228])
folium.TileLayer(
tiles = EE_TILES.format(**mapid),
attr = 'Google Earth Engine',
overlay = True,
name ='median composite',
).add_to(map)
map.add_child(folium.LayerControl())
map
# if you want to save the map to an HTML file:
map.save(outfile='test.html')
###Output
_____no_output_____ |
pipelining/exp-csna/exp-csna_csna_shapley_value.ipynb | ###Markdown
Experiment Description> This notebook is for experiment \ and data sample \. Initialization
###Code
%reload_ext autoreload
%autoreload 2
import numpy as np, sys, os
sys.path.insert(1, '../../')
from shapley_value import compute_shapley_value, feature_key_list
sv = compute_shapley_value('exp-csna', 'csna')
###Output
_____no_output_____
###Markdown
Plotting
###Code
import matplotlib.pyplot as plt
import numpy as np
from s2search_score_pdp import pdp_based_importance
fig, axs = plt.subplots(nrows=1, ncols=2, figsize=(12, 5), dpi=200)
# collect the Shapley values of each feature
all_data = []
average_sv = []
sv_global_imp = []
for player_sv in [f'{player}_sv' for player in feature_key_list]:
all_data.append(sv[player_sv])
average_sv.append(pdp_based_importance(sv[player_sv]))
sv_global_imp.append(np.mean(np.abs(list(sv[player_sv]))))
# average_sv.append(np.std(sv[player_sv]))
# print(np.max(sv[player_sv]))
# plot violin plot
axs[0].violinplot(all_data,
showmeans=False,
showmedians=True)
axs[0].set_title('Violin plot')
# plot box plot
axs[1].boxplot(all_data,
showfliers=False,
showmeans=True,
)
axs[1].set_title('Box plot')
# adding horizontal grid lines
for ax in axs:
ax.yaxis.grid(True)
ax.set_xticks([y + 1 for y in range(len(all_data))],
labels=['title', 'abstract', 'venue', 'authors', 'year', 'n_citations'])
ax.set_xlabel('Features')
ax.set_ylabel('Shapley Value')
plt.show()
plt.rcdefaults()
fig, ax = plt.subplots(figsize=(12, 4), dpi=200)
# feature names shown on the y-axis
feature_names = ('title', 'abstract', 'venue', 'authors', 'year', 'n_citations')
y_pos = np.arange(len(feature_names))
# error = np.random.rand(len(feature_names))
# ax.xaxis.grid(True)
ax.barh(y_pos, average_sv, align='center', color='#008bfb')
ax.set_yticks(y_pos, labels=feature_names)
ax.invert_yaxis() # labels read top-to-bottom
ax.set_xlabel('PDP-based Feature Importance on Shapley Value')
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
_, xmax = plt.xlim()
plt.xlim(0, xmax + 1)
for i, v in enumerate(average_sv):
margin = 0.05
ax.text(v + margin if v > 0 else margin, i, str(round(v, 4)), color='black', ha='left', va='center')
plt.show()
plt.rcdefaults()
fig, ax = plt.subplots(figsize=(12, 4), dpi=200)
# Example data
feature_names = ('title', 'abstract', 'venue', 'authors', 'year', 'n_citations')
y_pos = np.arange(len(feature_names))
# error = np.random.rand(len(feature_names))
# ax.xaxis.grid(True)
ax.barh(y_pos, sv_global_imp, align='center', color='#008bfb')
ax.set_yticks(y_pos, labels=feature_names)
ax.invert_yaxis() # labels read top-to-bottom
ax.set_xlabel('SHAP Feature Importance')
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
_, xmax = plt.xlim()
plt.xlim(0, xmax + 1)
for i, v in enumerate(sv_global_imp):
margin = 0.05
ax.text(v + margin if v > 0 else margin, i, str(round(v, 4)), color='black', ha='left', va='center')
plt.show()
###Output
_____no_output_____ |
TVM_Ellipses_LoDoPab.ipynb | ###Markdown
Setup Network
###Code
Phi = TVM_Net(A.to(device))
Phi = Phi.to(device)
print(Phi)
pytorch_total_params = sum(p.numel() for p in Phi.parameters() if p.requires_grad)
print(f'Number of trainable parameters: {pytorch_total_params}')
max_epochs = 2000
max_depth = 250
criterion = torch.nn.MSELoss()
fmt = '[{:2d}/{:2d}]: train_loss = {:7.3e} | '
fmt += 'depth = {:5.1f} | lr = {:5.1e} | time = {:4.1f} sec'
###Output
TVM_Net(
(shrink): Softshrink(0.1)
)
Number of trainable parameters: 0
###Markdown
Train the Network
###Code
best_loss = 1.0e10
for epoch in range(max_epochs):
sleep(0.5) # slows progress bar so it won't print on multiple lines
tot = len(data_loader)
loss_ave = 0.0
start_time_epoch = time.time()
with tqdm(total=tot, unit=" batch", leave=False, ascii=True) as tepoch:
for idx, (u_batch, d) in enumerate(data_loader):
u_batch = u_batch.to(device)
batch_size = u_batch.shape[0]
train_batch_size = d.shape[0] # re-define if batch size changes
            u = Phi(d.to(device), max_depth=max_depth) # forward pass: reconstruct images from the observed data
output = criterion(u, u_batch)
train_loss = output.detach().cpu().numpy()
loss_ave += train_loss * train_batch_size
if idx%2 == 0:
# compute test image
Phi.eval()
u_test_approx = Phi(data_obs_test[0:1,:,:,:], max_depth=max_depth)
plt.figure()
plt.subplot(2,2,1)
plt.imshow(u_batch[0,0,:,:].cpu(), vmin=0, vmax=1)
plt.title('u true train')
plt.subplot(2,2,2)
plt.imshow(u[0,0,:,:].detach().cpu(), vmin=0, vmax=1)
plt.title('u approx train')
plt.subplot(2,2,3)
plt.imshow(u_test[0,0,:,:].cpu(), vmin=0, vmax=1)
plt.title('u true test')
plt.subplot(2,2,4)
plt.imshow(u_test_approx[0,0,:,:].detach().cpu(), vmin=0, vmax=1)
plt.title('u approx test')
plt.show()
Phi.train()
tepoch.update(1)
tepoch.set_postfix(train_loss="{:5.2e}".format(train_loss),
depth="{:5.1f}".format(Phi.depth))
loss_ave = loss_ave/len(data_loader.dataset)
end_time_epoch = time.time()
time_epoch = end_time_epoch - start_time_epoch
#lr_scheduler.step()
print(fmt.format(epoch+1, max_epochs, loss_ave, Phi.depth,
0.0,
time_epoch))
from skimage.metrics import structural_similarity as ssim
from skimage.metrics import peak_signal_noise_ratio as psnr
n_samples = u_test.shape[0]
data_test = TensorDataset(u_test, data_obs_test)
test_data_loader = DataLoader(dataset=data_test, batch_size=batch_size, shuffle=True)
def compute_avg_SSIM_PSNR(u_true, u_gen, n_mesh, data_range):
# assumes images are size n_samples x n_features**2 and are detached
n_samples = u_true.shape[0]
u_true = u_true.reshape(n_samples, n_mesh, n_mesh).cpu().numpy()
u_gen = u_gen.reshape(n_samples, n_mesh, n_mesh).cpu().numpy()
ssim_val = 0
psnr_val = 0
for j in range(n_samples):
ssim_val = ssim_val + ssim(u_true[j,:,:], u_gen[j,:,:], data_range=data_range)
psnr_val = psnr_val + psnr(u_true[j,:,:], u_gen[j,:,:], data_range=data_range)
return ssim_val/n_samples, psnr_val/n_samples
test_loss_ave = 0
test_PSNR_ave = 0
test_SSIM_ave = 0
with torch.no_grad():
for idx, (u_batch, d) in enumerate(test_data_loader):
u_batch = u_batch.to(device)
batch_size = u_batch.shape[0]
temp = u_batch.view(batch_size, -1)
temp = temp.permute(1,0)
test_batch_size = d.shape[0] # re-define if batch size changes
Phi.eval()
        u = Phi(d, max_depth=max_depth) # forward pass: reconstruct images from the observed data
output = criterion(u, u_batch)
test_loss = output.detach().cpu().numpy()
test_SSIM, test_PSNR = compute_avg_SSIM_PSNR(u_batch, u, 128, 1)
test_PSNR_ave += test_PSNR * test_batch_size
test_loss_ave += test_loss * test_batch_size
test_SSIM_ave += test_SSIM * test_batch_size
print('test_PSNR = {:7.3e}'.format(test_PSNR))
print('test_SSIM = {:7.3e}'.format(test_SSIM))
print('test_loss = {:7.3e}'.format(test_loss))
if idx%1 == 0:
# compute test image
plt.figure()
plt.subplot(1,2,1)
plt.imshow(u_batch[0,0,:,:].cpu(), vmin=0, vmax=1)
plt.title('u true')
plt.subplot(1,2,2)
plt.imshow(u[0,0,:,:].detach().cpu(), vmin=0, vmax=1)
plt.title('u approx')
plt.show()
print('\n\nSUMMARY')
print('test_loss_ave = {:7.3e}'.format(test_loss_ave / len(test_data_loader.dataset)))
print('test_PSNR_ave = {:7.3e}'.format(test_PSNR_ave / len(test_data_loader.dataset)))
print('test_SSIM_ave = {:7.3e}'.format(test_SSIM_ave / len(test_data_loader.dataset)))
if use_ellipses:
ind_val = 0
else:
ind_val = 1000
u = Phi(data_obs_test[ind_val,:,:,:], max_depth=max_depth).view(128,128)
u_true = u_test[ind_val,0,:,:]
def string_ind(index):
if index < 10:
return '000' + str(index)
elif index < 100:
return '00' + str(index)
elif index < 1000:
return '0' + str(index)
else:
return str(index)
cmap = 'gray'
fig = plt.figure()
plt.imshow(np.rot90(u.detach().cpu().numpy()),cmap=cmap, vmin=0, vmax=1)
plt.axis('off')
data_type = 'Ellipse' if use_ellipses else 'Lodopab'
save_loc = './drive/MyDrive/FixedPointNetworks/Learned_Feasibility_' + data_type + '_TVM_ind_' + string_ind(ind_val) + '.pdf'
plt.savefig(save_loc,bbox_inches='tight')
plt.show()
print("SSIM: ", compute_avg_SSIM_PSNR(u_true.view(1,128,128), u.view(1,128,128).detach(), 128, 1))
############
# TRUE
###########
cmap = 'gray'
fig = plt.figure()
plt.imshow(np.rot90(u_true.detach().cpu().numpy()),cmap=cmap, vmin=0, vmax=1)
plt.axis('off')
save_loc = './drive/MyDrive/FixedPointNetworks/Learned_Feasibility_' + data_type + '_GT_ind_' + string_ind(ind_val) + '.pdf'
plt.savefig(save_loc,bbox_inches='tight')
plt.show()
###Output
_____no_output_____ |
hiseq/aligner/align.ipynb | ###Markdown
hiseq.alignThe standard entry point for aligners. 1 Requirements 1.1 Arguments - input: str - index: str - outdir: pwd/str - unique: True/False - aligner: (guess aligner) - smp_name: None/str - index_name: None/str - threads: 4/int - overwrite: True/False - extra_para: (in dict, toml) 1.2 Output - config.toml - bam - log - stat - cmd.sh - unmap - flagstat 1.3 Directory structure - outdir/smp_name/index_name/files 2. Interpreter 2.1 Functions - unique|multiple, determine the aligner 2.2 Output - required arguments for `Align` 3 Aligner 3.1 Bowtie
```
# common: -p 4 --mm
# default: -k 1
bowtie -k 1 -S -v 2 --best --un unmap.fq -x index se.fq 1> align.sam 2> align.log
bowtie -k 1 -S -v 2 --best --un unmap.fq -x index -1 pe_1.fq -2 pe_2.fq 1> align.sam 2> align.log
# unique: -m 1
bowtie -m 1 -S -v 2 --best --un unmap.fq -x index se.fq 1> align.sam 2> align.log
bowtie -m 1 -S -v 2 --best --un unmap.fq -x index -1 pe_1.fq -2 pe_2.fq 1> align.sam 2> align.log
```
3.2 Bowtie2
```
# common: -p 4 --mm
# default:
bowtie2 --very-sensitive --no-unal -x index -U se.fq --un unmap.fq 1> align.sam 2> align.log
bowtie2 --very-sensitive --no-unal --no-mixed --no-discordant -x index -1 pe_1.fq -2 pe_2.fq --un-conc unmap.fq 1> align.sam 2> align.log
# unique: https://www.biostars.org/p/101533/101537, -q 5,10
samtools view -bh -q 10 align.bam
# extra parameters
# -I minimum fragment length (0)
# -X maximum fragment length (500)
```
3.3 STAR
```
# common default:
STAR \
  --genomeLoad NoSharedMemory \
  --runMode alignReads \
  --genomeDir index \
  --readFilesIn pe_1 pe_2 \
  --readFilesCommand zcat \
  --outFileNamePrefix smp_name \
  --runThreadN 4 \
  --limitBAMsortRAM 10000000000 \
  --outSAMtype BAM SortedByCoordinate \
  --outFilterMismatchNoverLmax 0.07 \
  --seedSearchStartLmax 20 \
  --outReadsUnmapped Fastx
# unique: --outFilterMultimapNmax 1
# small genome (default: 50): --seedPerWindowNmax 5
```
For sharing memory in STAR (--genomeLoad): by Devon Ryan: https://www.biostars.org/p/260069/260077; by Dobin: https://github.com/alexdobin/STAR/pull/26
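To make the interpreter logic concrete, here is a minimal sketch (not the actual `hiseq` implementation; the function name and defaults are illustrative) of how the unique/multiple switch could be mapped onto the bowtie flags listed in 3.1:
```python
# Illustrative sketch only: maps the unique/multiple choice and SE/PE input to a
# bowtie command line as described above; the real hiseq classes may differ.
def build_bowtie_cmd(fq1, index, outdir, fq2=None, unique=False, threads=4):
    report = '-m 1' if unique else '-k 1'            # unique vs. default reporting
    reads = f'-1 {fq1} -2 {fq2}' if fq2 else fq1     # PE vs. SE input
    return (f'bowtie {report} -S -v 2 --best -p {threads} --mm '
            f'--un {outdir}/unmap.fq -x {index} {reads} '
            f'1> {outdir}/align.sam 2> {outdir}/align.log')

print(build_bowtie_cmd('pe_control_rep1_1.fq.gz', 'chrM', 'test',
                       fq2='pe_control_rep1_2.fq.gz', unique=True))
```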
###Code
%reload_ext autoreload
%autoreload 2
import aligner_index
from aligner_index import *
from utils import *
args = {
'genome': 'mm10',
'aligner': 'bowtie',
# 'spikein': 'dm3',
# 'to_rRNA': True,
'to_chrM': True,
# 'to_MT_trRNA': True,
'verbose': True,
}
# check_index_args(**args)
# fetch_index('dm6', 'chrM', 'bowtie')
# x = '/home/wangming/data/genome/dm6/bowtie_index/chrM'
# aligner = 'bowtie'
# p = AlignIndex(x, aligner)
# p.is_bowtie_index()
%reload_ext autoreload
%autoreload 2
from bowtie import *
args = {
'fq1': '/data/yulab/wangming/work/devel_pipeline/hiseq/rnaseq/data/pe_control_rep1_1.fq.gz',
'fq2': '/data/yulab/wangming/work/devel_pipeline/hiseq/rnaseq/data/pe_control_rep1_2.fq.gz',
'index': '/home/wangming/data/genome/dm6/bowtie_index/chrM',
'outdir': 'test',
'genome': 'dm6',
}
# a = BowtieConfig(**args)
# check_fx_args(args['fq1'], args['fq2'])
a = Bowtie(**args)
###Output
_____no_output_____ |
analysis_archive/notebooks/final-bad_gene_subclass_DE-GLUT.ipynb | ###Markdown
Subclass DE BAD
###Code
import anndata
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
import matplotlib.patches as mpatches
import scanpy as sc
from scipy.stats import ks_2samp, ttest_ind
import ast
from scipy.sparse import csr_matrix
import warnings
warnings.filterwarnings('ignore')
import sys
sys.path.append('/home/sina/projects/mop/BYVSTZP_2020/dexpress')
from dexpress import dexpress, utils, plot
sys.path.append('/home/sina/projects/mop/BYVSTZP_2020/trackfig')
from trackfig.utils import get_notebook_name
from trackfig.trackfig import trackfig
TRACKFIG = "/home/sina/projects/mop/BYVSTZP_2020/trackfig.txt"
NB = get_notebook_name()
fsize=20
plt.rcParams.update({'font.size': fsize})
%config InlineBackend.figure_format = 'retina'
cluster_cmap = {
"Astro": (0.38823529411764707, 0.4745098039215686, 0.2235294117647059 ), # 637939,
"Endo" : (0.5490196078431373, 0.6352941176470588, 0.3215686274509804 ), # 8ca252,
"SMC" : (0.7098039215686275, 0.8117647058823529, 0.4196078431372549 ), # b5cf6b,
"VLMC" : (0.807843137254902, 0.8588235294117647, 0.611764705882353 ), # cedb9c,
"Low Quality" : (0,0,0),
"L2/3 IT" : (0.9921568627450981, 0.6823529411764706, 0.4196078431372549 ), # fdae6b
"L5 PT" : (0.9921568627450981, 0.8156862745098039, 0.6352941176470588 ), # fdd0a2
"L5 IT" : (0.5176470588235295, 0.23529411764705882, 0.2235294117647059 ), # 843c39
"L5/6 NP": "#D43F3A",
"L6 CT" : (0.8392156862745098, 0.3803921568627451, 0.4196078431372549 ), # d6616b
"L6 IT" : (0.9058823529411765, 0.5882352941176471, 0.611764705882353 ), # e7969c
"L6b" : (1.0, 0.4980392156862745, 0.054901960784313725), # ff7f0e
"L6 IT Car3" : (1.0, 0.7333333333333333, 0.47058823529411764 ), # ffbb78
"Lamp5" : (0.19215686274509805, 0.5098039215686274, 0.7411764705882353 ), # 3182bd # blues
"Sncg" : (0.4196078431372549, 0.6823529411764706, 0.8392156862745098 ), # 6baed6
"Vip" : (0.6196078431372549, 0.792156862745098, 0.8823529411764706 ), # 9ecae1
"Sst" : (0.7764705882352941, 0.8588235294117647, 0.9372549019607843 ), # c6dbef
"Pvalb":(0.7372549019607844, 0.7411764705882353, 0.8627450980392157 ), # bcbddc
}
gene = anndata.read_h5ad("../../data/notebook/revision/bad_gene.h5ad")
gene
gene = gene[:,gene.var.sort_index().index]
gene = gene[gene.obs.eval("class_label =='Glutamatergic'")]
print(gene.shape)
%%time
mat = gene.layers["log1p"].todense()
components = gene.obs.cell_id.values
features = gene.var.index.values
assignments = gene.obs.subclass_label.values
unique = np.unique(assignments)
nan_cutoff = 0.9 # of elements in cluster
corr_method = "bonferroni"
p_raw, stat, es, nfeat = dexpress.dexpress(mat, components, features, assignments, nan_cutoff=nan_cutoff)
p_raw = p_raw/2
p_corr = utils.correct_pval(p_raw, nfeat, corr_method)
s = stat
markers_gene = dexpress.make_table(assignments, features, p_raw, p_corr, es)
# convert the 0 pvalues to the smallest possible float
markers_gene["p_corr"][markers_gene.eval("p_corr == 0").values] = sys.float_info.min
markers_gene = markers_gene.query("es > 0")
###Output
27-Nov-20 13:24:14 - 1 of 8 assignments: L2/3 IT
27-Nov-20 13:24:15 - 2 of 8 assignments: L5 IT
27-Nov-20 13:24:16 - 3 of 8 assignments: L5 PT
27-Nov-20 13:24:17 - 4 of 8 assignments: L5/6 NP
27-Nov-20 13:24:18 - 5 of 8 assignments: L6 CT
27-Nov-20 13:24:19 - 6 of 8 assignments: L6 IT
27-Nov-20 13:24:19 - 7 of 8 assignments: L6 IT Car3
27-Nov-20 13:24:20 - 8 of 8 assignments: L6b
###Markdown
Look at the top differentially expressed genes
###Code
alpha =0.01
fc = 2
markers_gene.query(f"p_corr < {alpha}").sort_values("es").tail(20)
specific_cluster = "L2/3 IT"
specific_gene = "Calb1_ENSMUSG00000028222"
specific_gene
def violinplot(data, ax, **kwd):
xticklabels = kwd.get("xticklabels", [])
xticks = kwd.get("xticks", [])
selected = kwd.get("selected", None)
color = kwd.get("color", "grey")
if len(xticks)==0: xticks = np.arange(len(data))+1;
if len(xticklabels)==0: xticklabels = np.arange(len(data))+1;
assert(len(xticks) == len(xticklabels))
violins = ax.violinplot(data, positions=xticks, showmeans=False, showmedians=False, showextrema=False)
for vidx, v in enumerate(violins['bodies']):
v.set_facecolor(color)
v.set_edgecolor('black')
v.set_alpha(1)
if selected == vidx:
v.set_facecolor("#D43F3A")
for didx, d in enumerate(data):
x = xticks[didx]
xx = np.random.normal(x, 0.04, size=len(d))
# actual points
ax.scatter(xx, d, s = 5, color="white", edgecolor="black", linewidth=1)
# mean and error bars
mean = np.mean(d)
stdev = np.sqrt(np.var(d))
ax.scatter(x, mean, color="lightgrey", edgecolor="black", linewidth=1, zorder=10)
ax.vlines(x, mean - stdev, mean+stdev, color='lightgrey', linestyle='-', lw=2, zorder=9)
ax.set(**{"xticks": xticks, "xticklabels":xticklabels})
    ax.set_xticklabels(xticklabels, rotation=45, ha="right")
return ax
fig, ax = plt.subplots(figsize=(15,5))
fig.subplots_adjust(hspace=0, wspace=0)
unique = np.unique(gene.obs.subclass_label.values)
labels = unique
lidx = np.arange(1, len(labels)+1) # the label locations
midx = np.where(unique==specific_cluster)[0][0]
######### Gene
x = []
for c in unique:
x.append(np.asarray(gene[gene.obs.subclass_label==c][:,gene.var.gene_name.values==specific_gene].layers["log1p"].todense()).reshape(-1).tolist())
violinplot(x, ax,selected=midx, xticks=lidx, xticklabels=labels)
ax.set(**{
"ylabel": "Gene",
"title": "{} gene expression $log(TPM + 1)$".format(specific_gene)
})
#plt.savefig("./figures/class_DE_violin_{}.png".format(specific_gene.split("_")[0]), bbox_inches='tight',dpi=300)
plt.show()
identified_genes = markers_gene["name"].explode().astype(str)
identified_genes = identified_genes[identified_genes!="nan"]
print("{} genes identified.".format(identified_genes.nunique()))
markers_gene.to_csv(trackfig("../../tables/unordered/bad_gene_subclass_DE-GLUT.csv", TRACKFIG, NB))
###Output
_____no_output_____ |
Week05/L04/Lecture04.ipynb | ###Markdown
Lecture 5: Input and Output Topic coverage: * Review, Q&A* Input/Output * Reading from the prompt * Reading from the command line * File input/output * Common binary formats Based on the material developed by Jennifer Barnes for Phys77, Spring '16 AnnouncementsThe grades for homework 1 and workshop 2 have been released. Please consult the syllabus for the procedure to follow if you spot issues with your grades.My office hours this week will be on Friday (1 October) at 1 pm and next week on Monday (4 October) at 9:30 am on zoom: https://berkeley.zoom.us/j/99219769879?pwd=T0VsUEM1Q2dRRXh2S2VveXA1QzRzdz09Please remember that you need to have a green campus badge to come to class and workshops in person. If you have symptoms or know you've been exposed, please don't come to class or workshops and send me a message on slack. Course material and recordings will be available on bcourses and please connect to office hours on zoom with any questions.Another reminder: the primary method of communication for this course is slack. Please send me a message on slack rather than an email or a message in bcourses. Recap from last time Plotting with MatplotlibMatplotlib provides an interface, and a set of convenient tools for graphing (2-dimensional, i.e. a graph with 2 axes, as well as 3-dimensional). The interface and appearance of the plots are deliberately made to resemble Matlab. One could argue with this aesthetic choice, but the interface makes it much easier for users used to Matlab to transition to Python (and vice versa!)We went over only a few examples. Documentation and examples are available at https://matplotlib.org/ . In particular, my favorite -- examples: https://matplotlib.org/stable/gallery/index.html(make sure to cite in your code) Line attributes ColorsHuge range of colors in python! Here is the full table, but you can also just start with the base colors: b, g, r, c, m, y, k, w  MarkersSee (http://matplotlib.org/) for more details  Sub-plots
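Before the sub-plots demo, here is a minimal sketch (my own example, not from the original slides) showing how the base colors and marker symbols above combine into a plot format string:
```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 2 * np.pi, 25)
# a format string packs color + marker + line style, e.g. 'ro-' = red circles with a solid line
plt.plot(x, np.sin(x), 'ro-', label='red circles, solid')
plt.plot(x, np.cos(x), 'b^--', label='blue triangles, dashed')
plt.plot(x, 0.5 * np.sin(2 * x), 'gs:', label='green squares, dotted')
plt.legend()
plt.show()
```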
###Code
import matplotlib.pyplot as plt
import numpy as np
# Simple data to display in various forms
x = np.linspace(0, 2 * np.pi, 400)
y = np.sin(x ** 2)
fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2, sharex='col', sharey='row')
ax1.plot(x, y, 'r')
ax1.set_title('Sharing x per column, y per row')
ax2.scatter(x, y, color='g')
ax3.scatter(x, 2 * y ** 2 - 2, color='b')
ax4.plot(x, 2 * y ** 2 - 1, 'y')
###Output
_____no_output_____
###Markdown
Input and Output Most of the time, your code will need to process external data -- either entered by a human (through a keyboard), or read from external media. *This is an example of abstraction*: you write generic code that is kept separately from the data. The same is true for the data generated by your code. You may want to display it on the screen or store it in a file.Let's look at some basic examples Keyboard prompt
###Code
s = input('What is your name ? ')
print(len(s))
print (type(s), "Hello,", s)
###Output
_____no_output_____
###Markdown
You may want to convert strings to numerical types in order to perform calculations. See the example below:
###Code
age = input('What is your age ? ')
print (type(age))
number = int(age)
print (type(number), number)
nextYear = number+1
print('Next year you will be',nextYear)
###Output
_____no_output_____
###Markdown
This code may fail (see examples) if the user inputs something that can't be parsed as an integer. `Exception handling` can allow you to catch errors and recover. See the following example, where we will loop until the user enters correctly parsable number
###Code
failed = True
while failed:
age = input('What is your age ? ')
try:
number = int(age)
print (type(number), number)
nextYear = number+1
print('Next year you will be',nextYear)
failed = False
except:
print('Try again: please enter an integer value')
failed = True
print('Success !')
###Output
_____no_output_____
###Markdown
A slightly more flexible (but also more dangerous) way to do the conversion is to use eval():
###Code
import numpy as np
x = 5
age = input('What is your age ? ')
number = eval(age)
print (type(number))
print('You are',number,'years old')
###Output
_____no_output_____
###Markdown
Most often, you would want to enter several values and parse them. Use the string method *split()*. But pay attention: the parsing is pretty rudimentary! (see the examples)
###Code
s = input('Enter coordinates (x,y,z):')
print(s)
[x,y,z] = s.split(',')
print ("x=",x,"y=",y,"z=",z)
print (type(x), type(y), type(z))
###Output
_____no_output_____
###Markdown
Sometimes you would want to convert to float or int immediately, so you can use a list comprehension:
###Code
s = input('Enter coordinates (x,y,z):')
mylist = s.split(',')
print(mylist)
print(type(mylist))
print(type(mylist[0]))
#listOfFloats = [float(var) for var in mylist]
#print(listOfFloats)
[x,y,z] = [float(var) for var in mylist]
print ("x=",x,"y=",y,"z=",z)
print (type(x), type(y), type(z))
print('x squared = ',x**2)
###Output
_____no_output_____
###Markdown
Reading and writing files
###Code
%pylab inline
###Output
_____no_output_____
###Markdown
Part I: ASCII files Think of ASCII files as text files. ASCII stands for American Standard Code for Information Interchange -- a standard for encoding text and control information in files that dates back to the 1960s and is still in use today. You can open them using a text editor (like vim or emacs in Unix, Notepad in Windows, or TextEdit on a Mac) and read the information they contain directly. There are a few ways to produce these files, and to read them once they've been produced. In Python, the simplest way is to use file objects. Let's give it a try. We create a file object by calling the function `open( filename, access_mode )` and assigning its return value to a variable (usually `f`). This variable is often called a "file descriptor", or a "handle". It keeps information about the current state of the file, and also allows operating on the file, e.g. for reading or writing. The argument `filename` just specifies the name of the file we're interested in, and `access_mode` tells Python what we plan to do with that file: * 'r': read the file * 'w': write to the file (creates a new file, or clears an existing file) * 'a': append to the file Note that both arguments should be strings. For full syntax and special arguments, see the documentation at https://docs.python.org/3/library/functions.html#open
###Code
f = open( 'welcome.txt', 'w' )
print(f) # see what we got
f.write('One more line\n')
f.write('And another\n')
###Output
_____no_output_____
###Markdown
A note of caution: as soon as you call `open()` in write ('`w`') mode, Python creates a new file with the name you pass to it, and it will overwrite an existing file of the same name. Now we can write to the file using `f.write( thing_to_write )`. We can write anything we want, but it must be formatted as a string.
###Code
topics = ['Data types', 'Loops', 'Functions', 'Arrays', 'Plotting', 'Statistics', 'Cats']
f.write( 'Welcome to Physics 77, Spring 2021\n' ) # the newline command \n tells Python to start a new line
f.write( 'Topics we will learn about include:\n' )
for top in topics:
f.write( top + '\n')
f.close() # don't forget this part!
###Output
_____no_output_____
###Markdown
Now go to your directory on datahub (or local computer) and look for this file.You can also use Jupyter/iPython magic commands to examine the contents:
###Code
%cat welcome.txt
###Output
_____no_output_____
###Markdown
We can then read the data back out again:
###Code
f = open( 'welcome.txt', 'r' ) # note that we reused the handle
print(f)
for line in f:
print (line.strip())
#print(line.upper())
#print(line) # see the difference ?
f.close()
###Output
_____no_output_____
###Markdown
There are also shortcuts available, if we only want to read in some of the data:
###Code
f = open( 'welcome.txt', 'r' )
for i in range(4):
    f.readline() # skip the first four header lines
topicList = [line.strip() for line in f]
#topicList = [line for line in f]
f.close()
print (len(topicList))
print (topicList)
###Output
_____no_output_____
###Markdown
Python reads in whitespace characters (such as the newline `\n`) from files along with the text. The `.strip()` just tells Python to remove that leading and trailing whitespace. What happens if you remove it from the code above? What can you do if you want to add a few more items to a file that already has data and you don't want to overwrite it?
###Code
f = open( 'welcome.txt', 'a' )
print(f) # see what we got
f.write( 'Neural networks\n' ) # the newline command \n tells Python to start a new line
f.close()
###Output
_____no_output_____
###Markdown
Numerical data For the most part, our text files will contain numeric information, not strings. These can be somewhat trickier to read in. Let's read in a file produced in another program, that contains results from a BaBar experiment, where we searched for a "dark photon" produced in e+e- collisions [https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.119.131804, https://arxiv.org/abs/1702.03327]. The data are presented in two columns: mass charge First, let's peek into the file using iPython magic (direct interface to Unix operating system):
###Code
%cat BaBar_2016.dat
###Output
_____no_output_____
###Markdown
Now let's read the file into python data structures
###Code
fname = 'BaBar_2016.dat'
f = open(fname, 'r')
# read each line, split the data wherever there's a blank space,
# and convert the values to floats
mass = []
charge = []
for line in f:
m, c = [float(dat) for dat in line.split()]
mass.append(m)
charge.append(c)
f.close()
print('Read',len(mass),'lines from file',fname)
###Output
_____no_output_____
###Markdown
We got it; let's plot it!
###Code
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(mass, charge, 'r-' )
plt.xlim(0, 8)
plt.ylim(0, 3e-3)
plt.xlabel('mass (GeV)')
plt.ylabel('charge, 90% C.L. limit')
plt.show()
###Output
_____no_output_____
###Markdown
If only there were an easier way! Fortunately, Python's `numpy` library has functions for converting file information into numpy arrays, which can be easily analyzed and plotted. The above can be accomplished with a lot less code (and a lot less head scratching!) The two most common functions to read tabulated text are numpy's `loadtxt` and `genfromtxt`. They are subtly different and mostly interchangeable. The most useful feature of `genfromtxt` is that it is able to assign default values to missing fields. See https://numpy.org/doc/stable/reference/generated/numpy.loadtxt.html and https://numpy.org/doc/stable/reference/generated/numpy.genfromtxt.html
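For instance (a small sketch with a made-up file name, not part of the original lecture files), `genfromtxt` can fill in a default value wherever a field is missing:
```python
import numpy as np

# 'data_with_gaps.csv' is hypothetical: comma-separated columns with some empty entries
data = np.genfromtxt('data_with_gaps.csv', delimiter=',',
                     filling_values=-999.0)  # value used for missing fields
print(data)
```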
###Code
import numpy as np
mass, charge = np.genfromtxt('BaBar_2016.dat', unpack = True)
print(type(mass))
plt.plot(mass, charge,'r-')
plt.xlim(0, 8)
plt.ylim(0, 2e-3)
plt.xlabel('mass (GeV)')
plt.ylabel('charge, 90% C.L. limit')
plt.show()
###Output
_____no_output_____
###Markdown
Part II: CSV CSV stands for Comma Separated Values. Python's `csv` module allows easy reading and writing of sequences. CSV is especially useful for loading data from spreadsheets and databases. Let's make a list and write a file! First, we load the module
###Code
import csv
###Output
_____no_output_____
###Markdown
Next, we create a file object that opens the file we want to write to. Then, we create a *csv writer*, a special object that is built specifically to write sequences to our csv file.
###Code
f_csv = open( 'nationData.csv', 'w' )
SAWriter = csv.writer( f_csv, # write to this file object
delimiter = ',', # place comma between items we write
quotechar = '', # Don't place quotes around strings
quoting = csv.QUOTE_NONE )# made up of multiple words
###Output
_____no_output_____
###Markdown
Now let's write some data:
###Code
countries = ['Argentina', 'Bolivia', 'Brazil', 'Chile', 'Colombia', 'Ecuador', 'Guyana',\
'Paraguay', 'Peru', 'Suriname', 'Uruguay', 'Venezuela']
capitals = ['Buenos Aires', 'Sucre', 'Brasilia', 'Santiago', 'Bogota', 'Quito', 'Georgetown',\
'Asuncion', 'Lima', 'Paramaribo', 'Montevideo', 'Caracas']
population_mils = [ 42.8, 10.1, 203.4, 16.9, 46.4, 15.0, 0.7, 6.5, 29.2, 0.5,\
3.3, 27.6]
SAWriter.writerow(['Data on South American Nations'])
SAWriter.writerow(['Country', 'Capital', 'Population (millions)'])
for i in range(len(countries)):
SAWriter.writerow( [countries[i], capitals[i], population_mils[i]] )
f_csv.close()
###Output
_____no_output_____
###Markdown
Now let's see if we can open your file using a spreadsheet program, like MS Excel. How did we do? We can use a similar process to read data into Python from a csv file. Let's read in a list of the most populous cities and store them for analysis.
###Code
cities = []
cityPops = []
metroPops = []
f_csv = open( 'cities.csv', 'r')
readCity = csv.reader( f_csv, delimiter = ',' )
next(readCity) # skip the header row
for row in readCity:
# print(row)
print (', '.join(row)) # join the element of the list together, with the strng ', ' in between
city_country = ', '.join(row[0:2])
cities.append(city_country)
if row[2] != '':
cityPops.append( float(row[2]) )
else: cityPops.append(-1)
if row[3] != '':
metroPops.append( float(row[3]) )
else: metroPops.append(-1)
f_csv.close()
print(cityPops)
metroPops, cityPops = np.array(metroPops), np.array(cityPops)
cIds = np.argsort(cityPops)[::-1] # sort in descending order
mIds= np.argsort(metroPops)[::-1]
print ("The five most populous cities (within city proper) are:\n")
for j in range(5):
print (cities[cIds[j]], "with a population of {} million".format(cityPops[cIds[j]]))
print ("\nThe five most populous metropolitan regions in the world are:\n")
for i in range(5):
print (cities[mIds[i]], "with a metro population of {} million".format(metroPops[mIds[i]]))
###Output
_____no_output_____
###Markdown
Binary files So far, we've been dealing with text files. If you opened these files up with a text editor, you could see what was written in them. Binary files are different. They're written in a form that Python (and other languages) understand how to read, but we can't access them directly. The most common binary file you'll encounter in python is a *.npy* file, which stores numpy arrays. You can create these files using the command `np.save( filename, arr )`. That command will store the array `arr` as a file called filename, which should have the extension .npy. We can then reload the data with the command `np.load(filename)`
###Code
x = np.linspace(-1.0, 1.0, 100)
y = np.sin(10*x)*np.exp(-x) - x
plt.plot(x,y,'r-')
# save the array
xy = np.hstack((x, y))
print(xy)
np.save('y_of_x.npy', xy )
print(len(xy))
###Output
_____no_output_____
###Markdown
Let's also save it as ascii:
###Code
f = open('y_of_x.txt','w')
for var in xy:
f.write('{0:.16f}\n'.format(var))
f.close()
del x, y, xy # erase these variables from Python's memory
###Output
_____no_output_____
###Markdown
Now reload the data and check that you can use it just as before.
###Code
xy = np.load('y_of_x.npy')
print (len(xy))
x = xy[:100]
y = xy[100:]
print (len(x),len(y))
plt.plot(x,y,'r-')
###Output
_____no_output_____
###Markdown
HDF5 Files HDF5 files are ideally suited for managing large amounts of complex data. Python can read them using the module `h5py`.
###Code
import h5py
###Output
_____no_output_____
###Markdown
Let's load our first hdf5 file:
###Code
fh5 = h5py.File( 'solar.h5py', 'r' )
###Output
_____no_output_____
###Markdown
hdf5 files are made up of data sets. Each data set has a name, or a key. Let's take a look at our data sets:
###Code
for k in fh5.keys(): # loop through the keys
print (k)
###Output
_____no_output_____
###Markdown
We access the data sets in our file by name:
###Code
print(len(fh5["names"]))
for nm in fh5["names"]: # make sure to include the quotation marks!
print (nm)
###Output
_____no_output_____
###Markdown
It looks like we've got some planet data on our hands! Names is a special case, in that it's elements are strings. The other data sets contain float values, and can be treated like numpy arrays.
###Code
print (fh5["solar_AU"][:])
###Output
_____no_output_____
###Markdown
Let's make a plot of the solar system that shows each planet's: * distance from the sun (position on the x-axis)* orbital period (position on the y-axis* mass (size of scatter plot marker)* surface temperature (color of marker)* density (transparency (or alpha, in matplotlib language))
###Code
distAU = fh5["solar_AU"][:]
mass = fh5["mass_earthM"][:]
torb = fh5["TOrbit_yr"][:]
temp = fh5["surfT_K"][:]
rho = fh5["density"][:]
names = fh5["names"][:]
import numpy as np
def get_size( ms ):
m = 400.0/(np.max(mass) - np.min(mass))
return 100.0 + (ms - np.min(mass))*m
def get_alpha( p ):
m = .9/(np.max(rho)-np.min(rho))
return .1+(p - np.min(rho))*m
alfs = get_alpha(rho)
import matplotlib as mpl
import matplotlib.pyplot as plt
norm = mpl.colors.Normalize(vmin=np.min(temp), vmax=np.max(temp))
cmap = plt.cm.cool
m = plt.cm.ScalarMappable(norm=norm, cmap=cmap)
fig, ax = plt.subplots(1)
for i in range(8):
ax.scatter( distAU[i], torb[i], s = get_size(mass[i]), color = m.to_rgba(temp[i]), alpha=alfs[i] )
ax.set_xscale('log')
ax.set_yscale('log')
ax.set_ylim(0.1,200)
ax.set_ylabel( 'orbital period (y)' )
ax.set_xlabel( 'average dist. from sun (AU)' )
ax.set_title( 'Our solar system' )
plt.show()
###Output
_____no_output_____
###Markdown
If you ever want to write your own HDF5 file, you can open an h5py file object by calling: fh5 = h5py.File('filename.h5py', 'w') Data sets are created with dset = fh5.create_dataset( "dset_name", (shape,)) The default data type is float. The values for the data set are then set with: dset[...] = ( ) where the parentheses contain an array or similar data of the correct shape. After you've added all your data sets, close the file with fh5.close() If you have extra time, try creating your own data set and read it back in to verify that you've done it correctly! (A sketch of this recipe is given below.) Pandas There are more sophisticated tools to store and process the data. We will look at a Pandas example in a workshop Direct binary readsThere is also a more direct, less polished interface for reading data from a file:
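Here is the HDF5-writing recipe above as a minimal sketch (the file name, data set name, and values are made up), before moving on to the direct binary reads:
```python
import h5py
import numpy as np

fh5 = h5py.File('my_data.h5py', 'w')        # open a new file for writing
dset = fh5.create_dataset('speeds', (5,))   # create a data set holding 5 floats
dset[...] = np.array([1.0, 2.5, 3.3, 4.8, 5.1])
fh5.close()

# read it back to check
fh5 = h5py.File('my_data.h5py', 'r')
print(fh5['speeds'][:])
fh5.close()
```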
###Code
f = open('BaBar_2016.dat', 'r') # open file for reading
#f.write('Try me\n')
s = f.readline() # read one line (including end-of-line character, '\n')
print (s) # print it
s2 = f.readline() # this will now read the second line
print (s2)
f = open('BaBar_2016.dat', 'rb') # opening the file again will reset the handle to the beginning of the file. NB: binary mode!
f.seek(5) # skip 5 bytes (5 characters)
s2 = f.readline() # read from that point until the end of the line
print (s2) # notice truncation
f.seek(-25, 2) # go back 25 bytes from the end of the file
s2 = f.readline() # notice what is read
print (s2)
###Output
_____no_output_____ |