So tweets about Jeb Bush, on average, aren't as positive as those about the other candidates, but the people tweeting about Bush get more retweets and followers. I used the formula influence = sqrt(followers + 1) * sqrt(retweets + 1). You can experiment with different functions if you like [preprocess.py:influence]. We can look at the most influential tweets about Jeb Bush to see what's up.
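As a minimal sketch of that metric (the `followers` and `retweets` column names are assumptions; the real computation lives in preprocess.py):

```python
import numpy as np

# Hypothetical recomputation of the influence metric; column names are assumed.
df['influence'] = np.sqrt(df.followers + 1) * np.sqrt(df.retweets + 1)
```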
```python
jeb = candidate_groupby.get_group('Jeb Bush')
jeb_influence = jeb.sort_values('influence', ascending=False)
jeb_influence[['influence', 'polarity', 'influenced_polarity',
               'user_name', 'text', 'created_at']].head(5)
```
Source: arrows.ipynb (savioabuga/arrows, mit)
Side note: you can see that sentiment analysis isn't perfect - the last tweet is certainly negative toward Jeb Bush, but it was actually assigned a positive polarity. Over a large number of tweets, though, sentiment analysis is more meaningful. As to the high influence of tweets about Bush: it looks like Donald Trump (someone with a lot of followers) has been tweeting a lot about Bush compared to the other candidates - one possible reason for Jeb's greater influenced_polarity.
```python
df[df.user_name == 'Donald J. Trump'].groupby('candidate').size()
```
Looks like our favorite toupéed candidate hasn't even been tweeting about anyone else! What else can we do? We also know the language each tweet was written (tweeted?) in.
```python
language_groupby = df.groupby(['candidate', 'lang'])
language_groupby.size()
```
That's a lot of languages! Let's try plotting to get a better idea, but first, I'll remove smaller language/candidate groups. By the way, each lang value is an IANA language tag - you can look them up at https://www.iana.org/assignments/language-subtag-registry/language-subtag-registry.
```python
largest_languages = language_groupby.filter(lambda group: len(group) > 10)
```
I'll also remove English, since it would just dwarf all the other languages.
```python
non_english = largest_languages[largest_languages.lang != 'en']
non_english_groupby = non_english.groupby(['lang', 'candidate'], as_index=False)

sizes = non_english_groupby.text.agg(np.size)
sizes = sizes.rename(columns={'text': 'count'})
sizes_pivot = sizes.pivot_table(index='lang', columns='candidate',
                                values='count', fill_value=0)

plot = sns.heatmap(sizes_pivot)
plot.set_title('Number of non-English Tweets by Candidate', family='Ubuntu')
plot.set_ylabel('language code', family='Ubuntu')
plot.set_xlabel('candidate', family='Ubuntu')
plot.figure.set_size_inches(12, 7)
```
Looks like Spanish and Portuguese speakers mostly tweet about Jeb Bush, while Francophones lean more liberal, and Clinton tweeters span the largest range of languages. We also have the time-of-tweet information, so I'll plot influenced polarity over time for each candidate. I'm also going to resample the influenced_polarity values to 1-hour intervals to get a smoother graph.
```python
mean_polarities = df.groupby(['candidate', 'created_at']).influenced_polarity.mean()

plot = mean_polarities.unstack('candidate').resample('60min').mean().plot()
plot.set_title('Influenced Polarity over Time by Candidate', family='Ubuntu')
plot.set_ylabel('influenced polarity', family='Ubuntu')
plot.set_xlabel('time', family='Ubuntu')
plot.figure.set_size_inches(12, 7)
```
Since I only took the last 20,000 tweets for each candidate, the tweets about Clinton (a candidate with many, many tweeters) cover a much smaller timespan than those about Rand Paul. But we can still analyze the data in terms of hour-of-day. I'd like to know when tweeters in each language tweet each day, and I'm going to use percentages instead of raw numbers of tweets so I can compare across different languages easily. By the way, the times in the dataframe are in UTC.
```python
language_sizes = df.groupby('lang').size()
threshold = language_sizes.quantile(.75)
top_languages_df = language_sizes[language_sizes > threshold]
top_languages = set(top_languages_df.index) - {'und'}
top_languages

df['hour'] = df.created_at.apply(lambda datetime: datetime.hour)

for language_code in top_languages:
    lang_df = df[df.lang == language_code]
    normalized = lang_df.groupby('hour').size() / lang_df.lang.count()
    plot = normalized.plot(label=language_code)

plot.set_title('Tweet Frequency in non-English Languages by Hour of Day', family='Ubuntu')
plot.set_ylabel('normalized frequency', family='Ubuntu')
plot.set_xlabel('hour of day (UTC)', family='Ubuntu')
plot.legend()
plot.figure.set_size_inches(12, 7)
```
Note that English, French, and Spanish are significantly flatter than the other languages - this means that there's a large spread of speakers all over the globe. But why is Portuguese spiking at 11pm Brasilia time / 3 am Lisbon time? Let's find out! My first guess was that maybe there's a single person making a ton of posts at that time.
```python
df_of_interest = df[(df.hour == 2) & (df.lang == 'pt')]

print('Number of tweets:', df_of_interest.text.count())
print('Number of unique users:', df_of_interest.user_name.unique().size)
```
So that's not it. Maybe there was a major event everyone was retweeting?
```python
df_of_interest.text.head(25).unique()
```
Seems to be a lot of these 'Jeb Bush diz que foi atingido...' tweets. How many? We can't just count the unique ones, because they all differ slightly, but we can check for a long-enough common substring.
```python
df_of_interest[df_of_interest.text.str.contains('Jeb Bush diz que foi atingido')].text.count()
```
That's it! Looks like there was a news article from a Brazilian website (http://jconline.ne10.uol.com.br/canal/mundo/internacional/noticia/2015/07/05/jeb-bush-diz-que-foi-atingido-por-criticas-de-trump-a-mexicanos-188801.php) that happened to get a lot of retweets in that time period. A similar article in English is at http://www.nytimes.com/politics/first-draft/2015/07/04/an-angry-jeb-bush-says-he-takes-donald-trumps-remarks-personally/. Since languages can span different countries, we might get better results if we search by location rather than just language. We don't have very specific geolocation information other than timezone, so let's try plotting candidate sentiment over the 4 major U.S. timezones (Los Angeles, Denver, Chicago, and New York). This is also a good opportunity to look at a geographical map.
```python
tz_df = english_df.dropna(subset=['user_time_zone'])
us_tz_df = tz_df[tz_df.user_time_zone.str.contains("US & Canada")]
us_tz_candidate_groupby = us_tz_df.groupby(['candidate', 'user_time_zone'])
us_tz_candidate_groupby.influenced_polarity.mean()
```
That's our raw data: now to plot it on a map. I got the timezone Shapefile from http://efele.net/maps/tz/world/. First, I read in the Shapefile with Cartopy.
```python
tz_shapes = cartopy.io.shapereader.Reader('arrows/world/tz_world_mp.shp')
tz_records = list(tz_shapes.records())

tz_translator = {
    'Eastern Time (US & Canada)': 'America/New_York',
    'Central Time (US & Canada)': 'America/Chicago',
    'Mountain Time (US & Canada)': 'America/Denver',
    'Pacific Time (US & Canada)': 'America/Los_Angeles',
}

american_tz_records = {
    tz_name: next(filter(lambda record: record.attributes['TZID'] == tz_id, tz_records))
    for tz_name, tz_id in tz_translator.items()
}
```
Next, I have to choose a projection and plot it (again using Cartopy). The Albers Equal-Area projection is good for maps of the U.S. I'll also download some feature sets from the Natural Earth dataset to display state borders.
```python
albers_equal_area = cartopy.crs.AlbersEqualArea(-95, 35)
plate_carree = cartopy.crs.PlateCarree()

states_and_provinces = cartopy.feature.NaturalEarthFeature(
    category='cultural',
    name='admin_1_states_provinces_lines',
    scale='50m',
    facecolor='none'
)

cmaps = [matplotlib.cm.Blues, matplotlib.cm.Greens,
         matplotlib.cm.Reds, matplotlib.cm.Purples]
norm = matplotlib.colors.Normalize(vmin=0, vmax=30)
candidates = df['candidate'].unique()

plt.rcParams['figure.figsize'] = [6.0, 4.0]
for index, candidate in enumerate(candidates):
    plt.figure()
    plot = plt.axes(projection=albers_equal_area)
    plot.set_extent((-125, -66, 20, 50))
    plot.add_feature(cartopy.feature.LAND)
    plot.add_feature(cartopy.feature.COASTLINE)
    plot.add_feature(cartopy.feature.BORDERS)
    plot.add_feature(states_and_provinces, edgecolor='gray')
    plot.add_feature(cartopy.feature.LAKES, facecolor="#00BCD4")

    for tz_name, record in american_tz_records.items():
        tz_specific_df = us_tz_df[us_tz_df.user_time_zone == tz_name]
        tz_candidate_specific_df = tz_specific_df[tz_specific_df.candidate == candidate]
        mean_polarity = tz_candidate_specific_df.influenced_polarity.mean()

        plot.add_geometries(
            [record.geometry],
            crs=plate_carree,
            color=cmaps[index](norm(mean_polarity)),
            alpha=.8
        )

    plot.set_title('Influenced Polarity toward {} by U.S. Timezone'.format(candidate),
                   family='Ubuntu')
    plot.figure.set_size_inches(6, 3.5)
    plt.show()
    print()
```
My friend Gabriel Wang pointed out that U.S. timezones other than Pacific don't mean much, since each timezone covers both blue and red states, but the data is still interesting. As expected, midwestern states lean toward Jeb Bush. I wasn't expecting Jeb Bush's highest-polarity tweets to come from the East; this is probably Donald Trump (New York, New York) messing with our data again. In a few months I'll look at these statistics with the latest tweets and compare. What are tweeters outside the U.S. saying about our candidates? Outside of the U.S., if someone is in a major city, the timezone is often that city itself. Here are the top 25 non-American timezones in our dataframe, by number of tweets.
```python
american_timezones = ('US & Canada|Canada|Arizona|America|Hawaii|Indiana|Alaska'
                      '|New_York|Chicago|Los_Angeles|Detroit|CST|PST|EST|MST')
foreign_tz_df = tz_df[~tz_df.user_time_zone.str.contains(american_timezones)]
foreign_tz_groupby = foreign_tz_df.groupby('user_time_zone')
foreign_tz_groupby.size().sort_values(ascending=False).head(25)
```
I also want to look at polarity, so I'll only use English tweets. (Sorry, Central/South Americans - my very rough method of filtering out American timezones gets rid of some of your timezones too. Let me know if there's a better way to do this.)
```python
foreign_english_tz_df = foreign_tz_df[foreign_tz_df.lang == 'en']
```
Now we have a dataframe containing (mostly) world cities as time zones. Let's get the top cities by number of tweets for each candidate, then plot polarities.
```python
foreign_tz_groupby = foreign_english_tz_df.groupby(['candidate', 'user_time_zone'])
top_foreign_tz_df = foreign_tz_groupby.filter(lambda group: len(group) > 40)

top_foreign_tz_groupby = top_foreign_tz_df.groupby(['user_time_zone', 'candidate'],
                                                   as_index=False)
mean_influenced_polarities = top_foreign_tz_groupby.influenced_polarity.mean()

pivot = mean_influenced_polarities.pivot_table(
    index='user_time_zone',
    columns='candidate',
    values='influenced_polarity',
    fill_value=0
)

plot = sns.heatmap(pivot)
plot.set_title('Influenced Polarity in Major Foreign Cities by Candidate', family='Ubuntu')
plot.set_ylabel('city', family='Ubuntu')
plot.set_xlabel('candidate', family='Ubuntu')
plot.figure.set_size_inches(12, 7)
```
Exercise for the reader: why is Rand Paul disliked in Athens? You can probably guess, but the actual tweets causing this are rather amusing (a hedged snippet for pulling them out appears at the end of this section). Greco-libertarian relations aside, the data shows that London and Amsterdam are among the most influential cities, with the former leaning toward Jeb Bush and the latter about neutral.

In India, Clinton supporters reside in New Delhi while Chennai tweeters back Rand Paul. By contrast, in 2014, New Delhi constituents voted for the conservative Bharatiya Janata Party while Chennai voted for the more liberal All India Anna Dravida Munnetra Kazhagam - so there seems to be some kind of cultural difference between the voters of 2014 and the tweeters of today.

Last thing I thought was interesting: Athens has the highest mean polarity for Bernie Sanders, the only city for which this is the case. Could this have anything to do with the recent economic crisis, the 'no' vote on austerity, and Bernie's social democratic tendencies?

Finally, I'll look at specific geolocation (latitude and longitude) data. Since only about 750 out of 80,000 tweets had geolocation enabled, this data can't really be used for sentiment analysis, but we can still get a good idea of international spread. First I'll plot everything on a world map, then break it up by candidate in the U.S.
```python
df_place = df.dropna(subset=['place'])

mollweide = cartopy.crs.Mollweide()
plot = plt.axes(projection=mollweide)
plot.set_global()
plot.add_feature(cartopy.feature.LAND)
plot.add_feature(cartopy.feature.COASTLINE)
plot.add_feature(cartopy.feature.BORDERS)
plot.scatter(
    list(df_place.longitude),
    list(df_place.latitude),
    transform=plate_carree,
    zorder=2
)
plot.set_title('International Tweeters with Geolocation Enabled', family='Ubuntu')
plot.figure.set_size_inches(14, 9)

plot = plt.axes(projection=albers_equal_area)
plot.set_extent((-125, -66, 20, 50))
plot.add_feature(cartopy.feature.LAND)
plot.add_feature(cartopy.feature.COASTLINE)
plot.add_feature(cartopy.feature.BORDERS)
plot.add_feature(states_and_provinces, edgecolor='gray')
plot.add_feature(cartopy.feature.LAKES, facecolor="#00BCD4")

candidate_groupby = df_place.groupby('candidate', as_index=False)

colors = ['#1976d2', '#7cb342', '#f4511e', '#7b1fa2']
for index, (name, group) in enumerate(candidate_groupby):
    longitudes = group.longitude.values
    latitudes = group.latitude.values
    plot.scatter(
        longitudes,
        latitudes,
        transform=plate_carree,
        color=colors[index],
        label=name,
        zorder=2
    )
plot.set_title('U.S. Tweeters by Candidate', family='Ubuntu')
plt.legend(loc='lower left')
plot.figure.set_size_inches(12, 7)
```
Model Inputs

First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks.
```python
def model_inputs(real_dim, z_dim):
    inputs_real = tf.placeholder(tf.float32, (None, real_dim), name='input_real')
    inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z')

    return inputs_real, inputs_z
```
Source: gan_mnist/Intro_to_GANs_Solution.ipynb (flaviocordova/udacity_deep_learn_project, mit)
Generator network

Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values.

Variable Scope

Here we need to use tf.variable_scope for two reasons. Firstly, we're going to make sure all the variable names start with generator. Similarly, we'll prepend discriminator to the discriminator variables. This will help out later when we're training the separate networks. We could just use tf.name_scope to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also sample from it as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the reuse keyword for tf.variable_scope to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again.

To use tf.variable_scope, you use a with statement:

```python
with tf.variable_scope('scope_name', reuse=False):
    # code here
```

Here's more from the TensorFlow documentation to get another look at using tf.variable_scope.

Leaky ReLU

TensorFlow doesn't provide an operation for leaky ReLUs, so we'll need to make one. For this you can take the outputs from a linear fully connected layer and pass them to tf.maximum. Typically, a parameter alpha sets the magnitude of the output for negative values. So, the output for negative input ($x$) values is $\alpha x$, and the output for positive $x$ is $x$:

$$ f(x) = \max(\alpha x, x) $$

Tanh Output

The generator has been found to perform best with $\tanh$ for the generator output. This means that we'll have to rescale the MNIST images to be between -1 and 1, instead of 0 and 1.
```python
def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01):
    with tf.variable_scope('generator', reuse=reuse):
        # Hidden layer
        h1 = tf.layers.dense(z, n_units, activation=None)
        # Leaky ReLU
        h1 = tf.maximum(alpha * h1, h1)

        # Logits and tanh output
        logits = tf.layers.dense(h1, out_dim, activation=None)
        out = tf.tanh(logits)

        return out
```
Discriminator

The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer.
```python
def discriminator(x, n_units=128, reuse=False, alpha=0.01):
    with tf.variable_scope('discriminator', reuse=reuse):
        # Hidden layer
        h1 = tf.layers.dense(x, n_units, activation=None)
        # Leaky ReLU
        h1 = tf.maximum(alpha * h1, h1)

        logits = tf.layers.dense(h1, 1, activation=None)
        out = tf.sigmoid(logits)

        return out, logits
```
Hyperparameters
```python
# Size of input image to discriminator
input_size = 784
# Size of latent vector to generator
z_size = 100
# Sizes of hidden layers in generator and discriminator
g_hidden_size = 128
d_hidden_size = 128
# Leak factor for leaky ReLU
alpha = 0.01
# Smoothing
smooth = 0.1
```
Build network

Now we're building the network from the functions defined above. First we get our inputs, input_real and input_z, from model_inputs using the sizes of the input and z. Then we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes.

Then the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as g_model. So the real data discriminator is discriminator(input_real) while the fake discriminator is discriminator(g_model, reuse=True).
```python
tf.reset_default_graph()

# Create our input placeholders
input_real, input_z = model_inputs(input_size, z_size)

# Build the model
g_model = generator(input_z, input_size, n_units=g_hidden_size, alpha=alpha)
# g_model is the generator output

d_model_real, d_logits_real = discriminator(input_real, n_units=d_hidden_size, alpha=alpha)
d_model_fake, d_logits_fake = discriminator(g_model, reuse=True,
                                            n_units=d_hidden_size, alpha=alpha)
```
Discriminator and Generator Losses

Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will be sigmoid cross-entropies, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mean for all the images in the batch. So the losses will look something like

```python
tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
```

For the real image logits, we'll use d_logits_real which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter smooth. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like labels = tf.ones_like(tensor) * (1 - smooth)

The discriminator loss for the fake data is similar. The logits are d_logits_fake, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.

Finally, the generator losses are using d_logits_fake, the fake image logits. But, now the labels are all ones. The generator is trying to fool the discriminator, so it wants the discriminator to output ones for fake images.
```python
# Calculate losses
d_loss_real = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(
        logits=d_logits_real,
        labels=tf.ones_like(d_logits_real) * (1 - smooth)))
d_loss_fake = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(
        logits=d_logits_fake,
        labels=tf.zeros_like(d_logits_fake)))
d_loss = d_loss_real + d_loss_fake

g_loss = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(
        logits=d_logits_fake,
        labels=tf.ones_like(d_logits_fake)))
```
Optimizers

We want to update the generator and discriminator variables separately, so we need to get the variables for each part and build optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph.

For the generator optimizer, we only want the generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with generator. So, we just need to iterate through the list from tf.trainable_variables() and keep the variables whose names start with generator. Each variable object has an attribute name which holds the name of the variable as a string (var.name == 'weights_0' for instance).

We can do something similar with the discriminator. All the variables in the discriminator start with discriminator.

Then, in the optimizer we pass the variable lists to var_list in the minimize method. This tells the optimizer to only update the listed variables. Something like tf.train.AdamOptimizer().minimize(loss, var_list=var_list) will only train the variables in var_list.
```python
# Optimizers
learning_rate = 0.002

# Get the trainable_variables, split into G and D parts
t_vars = tf.trainable_variables()
g_vars = [var for var in t_vars if var.name.startswith('generator')]
d_vars = [var for var in t_vars if var.name.startswith('discriminator')]

d_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(g_loss, var_list=g_vars)
```
Training
```python
batch_size = 100
epochs = 100
samples = []
losses = []
# Only save generator variables
saver = tf.train.Saver(var_list=g_vars)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for e in range(epochs):
        for ii in range(mnist.train.num_examples//batch_size):
            batch = mnist.train.next_batch(batch_size)

            # Get images, reshape and rescale to pass to D
            batch_images = batch[0].reshape((batch_size, 784))
            batch_images = batch_images*2 - 1

            # Sample random noise for G
            batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))

            # Run optimizers
            _ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z})
            _ = sess.run(g_train_opt, feed_dict={input_z: batch_z})

        # At the end of each epoch, get the losses and print them out
        train_loss_d = sess.run(d_loss, {input_z: batch_z, input_real: batch_images})
        train_loss_g = g_loss.eval({input_z: batch_z})

        print("Epoch {}/{}...".format(e+1, epochs),
              "Discriminator Loss: {:.4f}...".format(train_loss_d),
              "Generator Loss: {:.4f}".format(train_loss_g))
        # Save losses to view after training
        losses.append((train_loss_d, train_loss_g))

        # Sample from generator as we're training for viewing afterwards
        sample_z = np.random.uniform(-1, 1, size=(16, z_size))
        gen_samples = sess.run(
            generator(input_z, input_size, n_units=g_hidden_size, reuse=True, alpha=alpha),
            feed_dict={input_z: sample_z})
        samples.append(gen_samples)
        saver.save(sess, './checkpoints/generator.ckpt')

# Save training generator samples
with open('train_samples.pkl', 'wb') as f:
    pkl.dump(samples, f)
```
Training loss

Here we'll check out the training losses for the generator and discriminator.
```python
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator')
plt.plot(losses.T[1], label='Generator')
plt.title("Training Losses")
plt.legend()
```
Generator samples from training

Here we can view samples of images from the generator. First we'll look at images taken while training.
```python
def view_samples(epoch, samples):
    fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True)
    for ax, img in zip(axes.flatten(), samples[epoch]):
        ax.xaxis.set_visible(False)
        ax.yaxis.set_visible(False)
        im = ax.imshow(img.reshape((28,28)), cmap='Greys_r')

    return fig, axes

# Load samples from generator taken while training
with open('train_samples.pkl', 'rb') as f:
    samples = pkl.load(f)
```
These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 1, 7, 3, 2. Since this is just a sample, it isn't representative of the full range of images this generator can make.
```python
_ = view_samples(-1, samples)
```
Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion!
```python
rows, cols = 10, 6
fig, axes = plt.subplots(figsize=(7,12), nrows=rows, ncols=cols, sharex=True, sharey=True)

for sample, ax_row in zip(samples[::int(len(samples)/rows)], axes):
    for img, ax in zip(sample[::int(len(sample)/cols)], ax_row):
        ax.imshow(img.reshape((28,28)), cmap='Greys_r')
        ax.xaxis.set_visible(False)
        ax.yaxis.set_visible(False)
```
It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number-like structures appear out of the noise, like 1s and 9s.

Sampling from the generator

We can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples!
```python
saver = tf.train.Saver(var_list=g_vars)
with tf.Session() as sess:
    saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
    sample_z = np.random.uniform(-1, 1, size=(16, z_size))
    gen_samples = sess.run(
        generator(input_z, input_size, n_units=g_hidden_size, reuse=True, alpha=alpha),
        feed_dict={input_z: sample_z})
_ = view_samples(0, [gen_samples])
```
Definitions
```python
# define symbol
x = sym.Symbol('x')

# function to be approximated (the last assignment is the one in effect)
f = sym.cos( x )
f = sym.exp( x )
#f = sym.sqrt( x )

# define lower and upper bound for L[a,b]
# -> might need to be changed if you adapt the function to be approximated
a = -1
b = 1
```
Source: sigNT/tutorial/approximation.ipynb (kit-cel/wt, gpl-2.0)
Define Gram-Schmidt
```python
# basis and its number of functions
M = [ x**c for c in range( 0, 4 ) ]
n = len( M )
print(M)

# apply Gram-Schmidt to the user-defined set M

# init ONB
ONB = []

# loop over the functions and apply Gram-Schmidt
for _n in range( n ):
    # get function
    f_temp = M[ _n ]

    # subtract influence of past ONB functions
    if _n >= 1:
        for _k in range( _n ):
            f_temp -= sym.integrate( M[ _n ] * ONB[ _k ], (x, a, b) ) * ONB[ _k ]

    # get norm
    norm = float( sym.integrate( f_temp * f_temp, (x, a, b) ) )

    # append normalized function
    ONB.append( f_temp / np.sqrt( norm ) )

print(ONB)

# opt in if you'd like to see the correlation matrix
if 0:
    corr_matrix = np.zeros( ( n, n ) )
    for _m in range( n ):
        for _n in range( n ):
            corr_matrix[ _m, _n ] = float( sym.integrate( ONB[_m] * ONB[_n], (x, a, b) ) )

    np.set_printoptions(precision=2)
    corr_matrix[ np.isclose( corr_matrix, 0 ) ] = 0
    print( corr_matrix )

# opt in if you'd like to see figures of the basis functions
# NOTE: becomes unhandy if there are too many of them
if 0:
    for _n in range( n ):
        p = plot( M[_n], (x, a, b), show=False )
        p.extend( plot( ONB[_n], (x, a, b), line_color='r', show=False ) )
        p.show()
```
Now approximate a function
```python
# init approx and extend successively
approx = 0

# add each ONB function with its coefficient
for _n in range( n ):
    coeff = sym.integrate( f * ONB[ _n ], (x, a, b) )
    approx += coeff * ONB[ _n ]

# if you'd like to see the function
print( approx )

p = plot( f, (x, a, b), show=False)
p.extend( plot( approx, (x, a, b), line_color='r', show=False) )
p.show()

plot( f - approx, (x, a, b) )
```
Load the data.
```python
dataset = 'SPARC'

if dataset == 'HCP':
    subject_path = conf['HCP']['data_paths']['mgh_1007']
    loader = get_HCP_loader(subject_path)
    small_data_path = '{}/mri/small_data.npy'.format(subject_path)

    loader.update_filename_data(small_data_path)

    data = loader.data
    gtab = loader.gtab
    voxel_size = loader.voxel_size
elif dataset == 'SPARC':
    subject_path = conf['SPARC']['data_paths']['gradient_60']

    gtab, data, voxel_size = preprocess_SPARC(subject_path, normalize=True)

btable = np.loadtxt(get_data('dsi4169btable'))
#btable = np.loadtxt(get_data('dsi515btable'))

gtab_dsi = gradient_table(btable[:, 0], btable[:, 1:],
                          big_delta=gtab.big_delta,
                          small_delta=gtab.small_delta)
```
Source: notebooks/show_ODFs.ipynb (jsjol/GaussianProcessRegressionForDiffusionMRI, bsd-3-clause)
Fit a MAPL model to the data.
```python
map_model_laplacian_aniso = mapmri.MapmriModel(gtab, radial_order=6,
                                               laplacian_regularization=True,
                                               laplacian_weighting='GCV')

mapfit_laplacian_aniso = map_model_laplacian_aniso.fit(data)
```
We want to use an FA image as background; this requires us to fit a DTI model.
```python
tenmodel = dti.TensorModel(gtab)
tenfit = tenmodel.fit(data)

fitted = {'MAPL': mapfit_laplacian_aniso.predict(gtab)[:, :, 0],
          'DTI': tenfit.predict(gtab)[:, :, 0]}
```
Fit GP without mean and with DTI and MAPL as mean.
```python
kern = get_default_kernel(n_max=6, spatial_dims=2)
gp_model = GaussianProcessModel(gtab, spatial_dims=2, kernel=kern, verbose=False)
gp_fit = gp_model.fit(np.squeeze(data), mean=None,
                      voxel_size=voxel_size[0:2], retrain=True)

kern = get_default_kernel(n_max=2, spatial_dims=2)
gp_dti_model = GaussianProcessModel(gtab, spatial_dims=2, kernel=kern, verbose=False)
gp_dti_fit = gp_dti_model.fit(np.squeeze(data), mean=fitted['DTI'],
                              voxel_size=voxel_size[0:2], retrain=True)

kern = get_default_kernel(n_max=2, spatial_dims=2)
gp_mapl_model = GaussianProcessModel(gtab, spatial_dims=2, kernel=kern, verbose=False)
gp_mapl_fit = gp_mapl_model.fit(np.squeeze(data), mean=fitted['MAPL'],
                                voxel_size=voxel_size[0:2], retrain=True)
```
```python
# Alternative: fit all three GPs with a single model using a sqrt q-magnitude transform.
gp_model = GaussianProcessModel(gtab, spatial_dims=2,
                                q_magnitude_transform=np.sqrt, verbose=False)
gp_fit = gp_model.fit(np.squeeze(data), mean=None,
                      voxel_size=voxel_size[0:2], retrain=True)
gp_dti_fit = gp_model.fit(np.squeeze(data), mean=fitted['DTI'],
                          voxel_size=voxel_size[0:2], retrain=True)
gp_mapl_fit = gp_model.fit(np.squeeze(data), mean=fitted['MAPL'],
                           voxel_size=voxel_size[0:2], retrain=True)
```
```python
pred = {'MAPL': mapfit_laplacian_aniso.predict(gtab_dsi)[:, :, 0],
        'DTI': tenfit.predict(gtab_dsi)[:, :, 0]}
```
Compute the ODFs

Load an ODF reconstruction sphere.
```python
sphere = get_sphere('symmetric724').subdivide(1)
```
The radial order $s$ can be increased to sharpen the results, but it might also make the odfs noisier. Note that a "proper" ODF corresponds to $s=0$.
```python
odf = {'MAPL': mapfit_laplacian_aniso.odf(sphere, s=0),
       'DTI': tenfit.odf(sphere)}

odf['GP'] = gp_fit.odf(sphere, gtab_dsi=gtab_dsi, mean=None)[:, :, None, :]
odf['DTI_GP'] = gp_dti_fit.odf(sphere, gtab_dsi=gtab_dsi, mean=pred['DTI'])[:, :, None, :]
odf['MAPL_GP'] = gp_mapl_fit.odf(sphere, gtab_dsi=gtab_dsi, mean=pred['MAPL'])[:, :, None, :]
```
Display the ODFs
```python
for name, _odf in odf.items():
    ren = window.Renderer()
    ren.background((1, 1, 1))

    odf_actor = actor.odf_slicer(_odf, sphere=sphere, scale=0.5, colormap='jet')
    background_actor = actor.slicer(tenfit.fa, opacity=1)

    odf_actor.display(z=0)
    odf_actor.RotateZ(90)

    background_actor.display(z=0)
    background_actor.RotateZ(90)
    background_actor.SetPosition(0, 0, -1)

    ren.add(background_actor)
    ren.add(odf_actor)

    window.record(ren, out_path='odfs_{}.png'.format(name), size=(1000, 1000))
```
Load data

Let us load training data and store features, labels and other data into numpy arrays.
```python
# Load data from file
data = pd.read_csv('../facies_vectors.csv')

# Store features and labels
X = data[feature_names].values  # features
y = data['Facies'].values  # labels

# Store well labels and depths
well = data['Well Name'].values
depth = data['Depth'].values
```
Source: ar4/ar4_submission2_VALIDATION.ipynb (esa-as/2016-ml-contest, apache-2.0)
Data inspection

Let us inspect the features we are working with. This step is useful to understand how to normalize them and how to devise a correct cross-validation strategy. Specifically, it is possible to observe that:
- Some features seem to be affected by a few outlier measurements.
- Only a few wells contain samples from all classes.
- PE measurements are available only for some wells.
```python
# Define function for plotting feature statistics
def plot_feature_stats(X, y, feature_names, facies_colors, facies_names):
    # Remove NaN
    nan_idx = np.any(np.isnan(X), axis=1)
    X = X[np.logical_not(nan_idx), :]
    y = y[np.logical_not(nan_idx)]

    # Merge features and labels into a single DataFrame
    features = pd.DataFrame(X, columns=feature_names)
    labels = pd.DataFrame(y, columns=['Facies'])
    for f_idx, facies in enumerate(facies_names):
        labels[labels[:] == f_idx] = facies
    data = pd.concat((labels, features), axis=1)

    # Plot features statistics
    facies_color_map = {}
    for ind, label in enumerate(facies_names):
        facies_color_map[label] = facies_colors[ind]

    sns.pairplot(data, hue='Facies', palette=facies_color_map,
                 hue_order=list(reversed(facies_names)))

# Feature distribution
# plot_feature_stats(X, y, feature_names, facies_colors, facies_names)
# mpl.rcParams.update(inline_rc)

# Facies per well
for w_idx, w in enumerate(np.unique(well)):
    ax = plt.subplot(3, 4, w_idx+1)
    hist = np.histogram(y[well == w], bins=np.arange(len(facies_names)+1)+.5)
    plt.bar(np.arange(len(hist[0])), hist[0], color=facies_colors, align='center')
    ax.set_xticks(np.arange(len(hist[0])))
    ax.set_xticklabels(facies_names)
    ax.set_title(w)

# Features per well
for w_idx, w in enumerate(np.unique(well)):
    ax = plt.subplot(3, 4, w_idx+1)
    hist = np.logical_not(np.any(np.isnan(X[well == w, :]), axis=0))
    plt.bar(np.arange(len(hist)), hist, color=facies_colors, align='center')
    ax.set_xticks(np.arange(len(hist)))
    ax.set_xticklabels(feature_names)
    ax.set_yticks([0, 1])
    ax.set_yticklabels(['miss', 'hit'])
    ax.set_title(w)
```
Feature imputation

Let us fill missing PE values. This is the only cell that differs from the approach of Paolo Bestagini. Currently no feature engineering is used, but this should be explored in the future.
```python
def make_pe(X, seed):
    reg = RandomForestRegressor(max_features='sqrt', n_estimators=50, random_state=seed)
    DataImpAll = data[feature_names].copy()
    DataImp = DataImpAll.dropna(axis=0, inplace=False)
    Ximp = DataImp.loc[:, DataImp.columns != 'PE']
    Yimp = DataImp.loc[:, 'PE']
    reg.fit(Ximp, Yimp)
    X[np.array(DataImpAll.PE.isnull()), 4] = reg.predict(
        DataImpAll.loc[DataImpAll.PE.isnull(), :].drop('PE', axis=1, inplace=False))
    return X
```
Feature augmentation

Our guess is that facies do not abruptly change from a given depth layer to the next one. Therefore, we consider features at neighboring layers to be somewhat correlated. To possibly exploit this fact, let us perform feature augmentation by:
- Aggregating features at neighboring depths.
- Computing feature spatial gradient.
```python
# Feature windows concatenation function
def augment_features_window(X, N_neig):
    # Parameters
    N_row = X.shape[0]
    N_feat = X.shape[1]

    # Zero padding
    X = np.vstack((np.zeros((N_neig, N_feat)), X, (np.zeros((N_neig, N_feat)))))

    # Loop over windows
    X_aug = np.zeros((N_row, N_feat*(2*N_neig+1)))
    for r in np.arange(N_row)+N_neig:
        this_row = []
        for c in np.arange(-N_neig, N_neig+1):
            this_row = np.hstack((this_row, X[r+c]))
        X_aug[r-N_neig] = this_row

    return X_aug


# Feature gradient computation function
def augment_features_gradient(X, depth):
    # Compute features gradient
    d_diff = np.diff(depth).reshape((-1, 1))
    d_diff[d_diff == 0] = 0.001
    X_diff = np.diff(X, axis=0)
    X_grad = X_diff / d_diff

    # Compensate for last missing value
    X_grad = np.concatenate((X_grad, np.zeros((1, X_grad.shape[1]))))

    return X_grad


# Feature augmentation function
def augment_features(X, well, depth, seed=None, pe=True, N_neig=1):
    seed = seed or None

    if pe:
        X = make_pe(X, seed)

    # Augment features
    X_aug = np.zeros((X.shape[0], X.shape[1]*(N_neig*2+2)))
    for w in np.unique(well):
        w_idx = np.where(well == w)[0]
        X_aug_win = augment_features_window(X[w_idx, :], N_neig)
        X_aug_grad = augment_features_gradient(X[w_idx, :], depth[w_idx])
        X_aug[w_idx, :] = np.concatenate((X_aug_win, X_aug_grad), axis=1)

    # Find padded rows
    padded_rows = np.unique(np.where(X_aug[:, 0:7] == np.zeros((1, 7)))[0])

    return X_aug, padded_rows


# Augment features
X_aug, padded_rows = augment_features(X, well, depth)
```
Generate training, validation and test data splits

The choice of training and validation data is paramount in order to avoid overfitting and find a solution that generalizes well on new data. For this reason, we generate a set of training-validation splits so that:
- Features from each well belong to either the training or the validation set.
- Training and validation sets contain at least one sample for each class.
```python
# Initialize model selection methods
lpgo = LeavePGroupsOut(2)

# Generate splits
split_list = []
for train, val in lpgo.split(X, y, groups=data['Well Name']):
    hist_tr = np.histogram(y[train], bins=np.arange(len(facies_names)+1)+.5)
    hist_val = np.histogram(y[val], bins=np.arange(len(facies_names)+1)+.5)
    if np.all(hist_tr[0] != 0) & np.all(hist_val[0] != 0):
        split_list.append({'train': train, 'val': val})

# Print splits
for s, split in enumerate(split_list):
    print('Split %d' % s)
    print('    training:   %s' % (data['Well Name'][split['train']].unique()))
    print('    validation: %s' % (data['Well Name'][split['val']].unique()))
```
Classification parameters optimization

Let us perform the following steps for each set of parameters:
- Select a data split.
- Normalize features using a robust scaler.
- Train the classifier on training data.
- Test the trained classifier on validation data.
- Repeat for all splits and average the F1 scores.

At the end of the loop, we select the classifier that maximizes the average F1 score on the validation set. Hopefully, this classifier should be able to generalize well on new data.
```python
# Parameters search grid (uncomment parameters for full grid search... may take a lot of time)
N_grid = [100]  # [50, 100, 150]
M_grid = [10]   # [5, 10, 15]
S_grid = [25]   # [10, 25, 50, 75]
L_grid = [5]    # [2, 3, 4, 5, 10, 25]
param_grid = []
for N in N_grid:
    for M in M_grid:
        for S in S_grid:
            for L in L_grid:
                param_grid.append({'N': N, 'M': M, 'S': S, 'L': L})


# Train and test a classifier
def train_and_test(X_tr, y_tr, X_v, well_v, clf):
    # Feature normalization
    scaler = preprocessing.RobustScaler(quantile_range=(25.0, 75.0)).fit(X_tr)
    X_tr = scaler.transform(X_tr)
    X_v = scaler.transform(X_v)

    # Train classifier
    clf.fit(X_tr, y_tr)

    # Test classifier
    y_v_hat = clf.predict(X_v)

    # Clean isolated facies for each well
    for w in np.unique(well_v):
        y_v_hat[well_v == w] = medfilt(y_v_hat[well_v == w], kernel_size=5)

    return y_v_hat


# For each set of parameters
# score_param = []
# for param in param_grid:
#
#     # For each data split
#     score_split = []
#     for split in split_list:
#
#         # Remove padded rows
#         split_train_no_pad = np.setdiff1d(split['train'], padded_rows)
#
#         # Select training and validation data from current split
#         X_tr = X_aug[split_train_no_pad, :]
#         X_v = X_aug[split['val'], :]
#         y_tr = y[split_train_no_pad]
#         y_v = y[split['val']]
#
#         # Select well labels for validation data
#         well_v = well[split['val']]
#
#         # Train and test
#         y_v_hat = train_and_test(X_tr, y_tr, X_v, well_v, param)
#
#         # Score
#         score = f1_score(y_v, y_v_hat, average='micro')
#         score_split.append(score)
#
#     # Average score for this param
#     score_param.append(np.mean(score_split))
#     print('F1 score = %.3f %s' % (score_param[-1], param))
#
# # Best set of parameters
# best_idx = np.argmax(score_param)
# param_best = param_grid[best_idx]
# score_best = score_param[best_idx]
# print('\nBest F1 score = %.3f %s' % (score_best, param_best))
```
Predict labels on test data

Let us now apply the selected classification technique to test data.
```python
param_best = {'S': 25, 'M': 10, 'L': 5, 'N': 100}

# Load data from file
test_data = pd.read_csv('../validation_data_nofacies.csv')

# Prepare test data
well_ts = test_data['Well Name'].values
depth_ts = test_data['Depth'].values
X_ts = test_data[feature_names].values

y_pred = []
print('o' * 100)
for seed in range(100):
    np.random.seed(seed)

    # Make training data.
    X_train, padded_rows = augment_features(X, well, depth, seed=seed)
    y_train = y
    X_train = np.delete(X_train, padded_rows, axis=0)
    y_train = np.delete(y_train, padded_rows, axis=0)
    param = param_best
    clf = OneVsOneClassifier(RandomForestClassifier(n_estimators=param['N'],
                                                    criterion='entropy',
                                                    max_features=param['M'],
                                                    min_samples_split=param['S'],
                                                    min_samples_leaf=param['L'],
                                                    class_weight='balanced',
                                                    random_state=seed), n_jobs=-1)

    # Make blind data.
    X_test, _ = augment_features(X_ts, well_ts, depth_ts, seed=seed, pe=False)

    # Train and test.
    y_ts_hat = train_and_test(X_train, y_train, X_test, well_ts, clf)

    # Collect result.
    y_pred.append(y_ts_hat)
    print('.', end='')

np.save('100_realizations.npy', y_pred)
```
Reading flow-direction and elevation maps: delineating basins and streams
```python
cuCap = wmf.SimuBasin(0, 0, 0, 0,
                      rute='/media/nicolas/discoGrande/01_SIATA/nc_cuencas/Picacha_Abajo.nc')

# Save the basin outline and stream network as vector files
cuCap.Save_Basin2Map('/media/nicolas/discoGrande/01_SIATA/vector/Cuenca_AltaVista2.shp')
cuCap.Save_Net2Map('/media/nicolas/discoGrande/01_SIATA/vector/Red_Altavista_Abajo.shp',
                   dx=12.7, umbral=470)
```
Source: Examples/Ejemplo_Hidrologia_Maximos.ipynb (nicolas998/wmf, gpl-3.0)
Travel time
```python
cuCap.GetGeo_Parameters()
cuCap.Tc

# Geomorphological parameters of the basin
cuCap.GetGeo_Parameters(rutaParamASC=ruta_images + 'param_cap.txt',
                        plotTc=True,
                        rutaTcPlot=ruta_images + 'Tc_cap.png')
```
The Campo y Múnera and Giandotti concentration-time formulas are not taken into account; the others are. The resulting mean concentration time is $T_c = 2.69$ hrs.
```python
0.58 * 60.0

# Mean travel time and travel-time maps
TcCap = np.array(list(cuCap.Tc.values())).mean()

# Compute travel times
cuCap.GetGeo_IsoChrones(TcCap, Niter=6)

# Travel-time figure
cuCap.Plot_basin(cuCap.CellTravelTime,
                 ruta='/media/nicolas/discoGrande/01_SIATA/ParamCuencas/AltaVistaAbajo/IsoCronas.png',
                 lines_spaces=0.01)
```
This map should be recomputed with a larger number of iterations; we'll leave that running later, since it takes time. For the moment it is poor.
```python
ruta_images = '/media/nicolas/discoGrande/01_SIATA/ParamCuencas/AltaVistaAbajo/'
cuCap.Plot_Travell_Hist(ruta=ruta_images + 'Histogram_IsoCronas.png')
```
Hypsometric curve and main channel
```python
cuCap.GetGeo_Cell_Basics()
cuCap.GetGeo_Ppal_Hipsometric(intervals=50)
cuCap.Plot_Hipsometric(normed=True, ventana=10, ruta=ruta_images + 'Hipsometrica_Captacion.png')
cuCap.PlotPpalStream(ruta=ruta_images + 'Perfil_cauce_ppal_Capta.png')
```
The main channel shows the typical development of a medium-to-large basin: a sediment production zone is clearly visible between 0 and 10 km, and from 10 km onward there is a transport and deposition zone with slopes ranging between 0.0 and 0.8%.
```python
cuCap.PlotSlopeHist(bins=[0, 2, 0.2], ruta=ruta_images + 'Slope_hist_cap.png')
```
The slope histogram shows that most slopes are below 0.6, so the basin's main channel is considered to develop mainly within a valley.

Geomorphological variable maps
```python
cuCap.GetGeo_HAND()
cuCap.Plot_basin(cuCap.CellHAND_class, ruta=ruta_images + 'Map_HAND_class.png', lines_spaces=0.01)
cuCap.Plot_basin(cuCap.CellSlope, ruta=ruta_images + 'Map_Slope.png', lines_spaces=0.01)
```
The slope map shows that the steepest slopes occur in the upper part of the basin; moving toward the lower zone, the slopes clearly become gentle.
```python
IT = cuCap.GetGeo_IT()
cuCap.Plot_basin(IT, ruta=ruta_images + 'Indice_topografico.png', lines_spaces=0.01)
```
Precipitation

Next we analyze precipitation in the zone, in order to characterize the climatic conditions of the region.

Procedure for disaggregating rainfall (obtaining IDF)

Read the EPM station with hourly data.

Discharges

Long-term mean discharges are computed from the precipitation field estimated for the zone using the IDEAM stations:
- mean discharge (Qmed) by water balance,
- maximum discharge (Qmax) by regionalization and synthetic unit hydrographs,
- minimum discharge (Qmin) by regionalization and analysis of a simulated discharge series at the basin outlet.

Long-term mean discharge
```python
Precip = 1650
cuCap.GetQ_Balance(Precip)
cuCap.Plot_basin(cuCap.CellETR, ruta=ruta_images + 'Map_ETR_Turc.png', lines_spaces=0.01)

cuCap.GetQ_Balance(Precip)
print('Intake discharge:', cuCap.CellQmed[-1])

cuCap.Plot_basin(Precip - cuCap.CellETR,
                 ruta=ruta_images + 'Map_RunOff_mm_ano.png',
                 lines_spaces=0.01,
                 colorTable='jet_r')
```
Extreme discharges by regionalization

Extreme maximum and minimum discharges are computed for return periods of 2.33, 5, 10, 25, 50, 75 and 100 years, using regionalization with Gumbel and lognormal distributions.
```python
# Return periods for maxima and minima
Tr = [2.33, 5, 10, 25, 50, 100]

QmaxRegGum = cuCap.GetQ_Max(cuCap.CellQmed, Dist='gumbel', Tr=Tr,
                            Coef=[6.71, 3.29], Expo=[1.2, 0.9])
QmaxRegLog = cuCap.GetQ_Max(cuCap.CellQmed, Dist='lognorm', Tr=Tr,
                            Coef=[6.71, 3.29], Expo=[1.2, 0.9])
QminRegLog = cuCap.GetQ_Min(cuCap.CellQmed, Dist='lognorm', Tr=Tr)
QminRegGum = cuCap.GetQ_Min(cuCap.CellQmed, Dist='gumbel', Tr=Tr)
```
The map with the mean discharge, and the maxima and minima for the basin's entire stream network, is saved.
```python
Dict = {'Qmed': cuCap.CellQmed}
for t, q in zip([2.33, 5, 10, 25, 50, 100], QminRegGum):
    Dict.update({'min_g' + str(t): q})
for t, q in zip([2.33, 5, 10, 25, 50, 100], QminRegLog):
    Dict.update({'min_l' + str(t): q})
for t, q in zip([2.33, 5, 10, 25, 50, 100], QmaxRegGum):
    Dict.update({'max_g' + str(t): q})
for t, q in zip([2.33, 5, 10, 25, 50, 100], QmaxRegLog):
    Dict.update({'max_l' + str(t): q})
```
Maximum discharges

In addition to the maximum discharges estimated by regionalization, maximum discharges are estimated with the synthetic unit hydrograph methods: Snyder, SCS and Williams.
```python
cuCap.GetGeo_Parameters()

# Parameters for maxima
TcCap = np.median(list(cuCap.Tc.values()))
#CN = 50
CN = 80
print('Mean travel time at the intake:', TcCap)
print('Curve number:', CN)

# Design storm.
Intensidad = [40.9, 49.5, 55.5, 60.6, 67.4, 75.7]

# Effective rainfall
lluviaTr, lluvEfect, S = cuCap.GetHU_DesingStorm(np.array(Intensidad), TcCap, CN=CN,
                                                 plot='si',
                                                 ruta=ruta_images + 'Q_max_LLuvia_Efectiva_descarga.png',
                                                 Tr=[2.33, 5, 10, 25, 50, 75, 100])
```
The figure shows how, for the different return periods, part of the rainfall is lost before becoming effective rainfall.
```python
# Compute the unit hydrographs for the outlet
Tscs, Qscs, HU = cuCap.GetHU_SCS(cuCap.GeoParameters['Area[km2]'], TcCap)
Tsnyder, Qsnyder, HU, Diferencia = cuCap.GetHU_Snyder(cuCap.GeoParameters['Area[km2]'],
                                                      TcCap, Cp=0.8, Fc=2.9)
                                                      #Cp=1.65/(np.sqrt(PendCauce)**0.38))
Twilliam, Qwilliam, HU = cuCap.GetHU_Williams(cuCap.GeoParameters['Area[km2]'],
                                              cuCap.GeoParameters['Long_Cuenca [km]'],
                                              780, TcCap)

# Group the unit hydrographs for plotting
D = {'snyder': {'time': Tsnyder, 'HU': Qsnyder},
     'scs': {'time': Tscs, 'HU': Qscs},
     'williams': {'time': Twilliam, 'HU': Qwilliam}}

# Plot them
cuCap.PlotHU_Synthetic(D, ruta=ruta_images + 'Q_max_HU.png')
```
Unit hydrographs calibrated for the basin; in this case Williams shows a lag relative to the other methods.
```python
#QmaxRegGum = cuCap.GetQ_Max(cuCap.CellQmed, Dist='gumbel', Tr=Tr, Coef=[6.71, 3.29], Expo=[1.2, 0.9])
#QmaxRegLog = cuCap.GetQ_Max(cuCap.CellQmed, Dist='lognorm', Tr=Tr, Coef=[6.71, 3.29], Expo=[1.2, 0.9])

# Convolve the synthetic unit hydrographs with the design storm
HidroSnyder, QmaxSnyder, Tsnyder = cuCap.GetHU_Convolution(Tsnyder, Qsnyder, lluvEfect)
HidroWilliam, QmaxWilliam, Twilliam = cuCap.GetHU_Convolution(Twilliam, Qwilliam, lluvEfect)
HidroSCS, QmaxSCS, Tscs = cuCap.GetHU_Convolution(Tscs, Qscs, lluvEfect)

DicQmax = {#'Snyder': QmaxSnyder,
           #'Williams': QmaxWilliam,
           #'SCS': QmaxSCS,
           'Gumbel': QmaxRegGum[:, -1],
           'Log-Norm': QmaxRegLog[:, -1]}

# Plot of maxima
pyt.PlotQmaxTr(DicQmax, Tr, ruta=ruta_images + 'Q_max_Metodos_Tr_descarga.png')

# Table of maxima
DataQmax = pd.DataFrame(DicQmax, index=Tr)

# Write to Excel.
writer = pd.ExcelWriter(ruta_images + 'Qmax_captacion.xlsx')
DataQmax.to_excel(writer)
writer.save()

cuCap.GeoParameters
```
Pandas

Pandas is an excellent library for handling tabular data and quickly performing data analysis on it. It can handle many text file types.
```python
import pandas as pd

df = pd.read_csv('../Pandas/Jan17_CO_ASOS.txt', sep='\t')
df.head()
```
Source: notebooks/Python_Ecosystem/Scientific_Python_Ecosystem_Overview.ipynb (julienchastang/unidata-python-workshop, mit)
xarray

xarray is a Python library meant to handle N-dimensional arrays with metadata (think netCDF files). With the Dask library, it can work with Big Data efficiently in a Python framework.
```python
import xarray as xr

ds = xr.open_dataset('../../data/NARR_19930313_0000.nc')
ds
```
Dask

Dask is a parallel-computing library in Python. You can use it on your laptop, cloud environment, or on a high-performance computer (NCAR's Cheyenne, for example). It allows for lazy evaluation, so that computations only occur after you've chained all of your operations together. Additionally, it has a built-in scheduler that scales with your computational demand to optimize your parallel resources.

SciPy

The SciPy library has a lot of advanced mathematical functions that are not contained in NumPy, including Fast Fourier Transforms, interpolation methods, and linear algebra operations.

Scikit-learn

Scikit-learn is the primary machine learning library for Python. It can do simple things like regressions and classifications, or more advanced techniques like random forests. It can perform some neural network operations, but for big data implementations, check out the keras library. (A minimal regression sketch appears after the Matplotlib demo below.)

Scikit-image

An image processing library built on NumPy.

Visualization Libraries

Matplotlib

Matplotlib is one of the core visualization libraries in Python and produces publication-quality figures without much configuration.
```python
import numpy as np
import matplotlib.pyplot as plt

# Some sample data to plot (x and y were not defined in the original cell)
x = np.linspace(0, 2 * np.pi, 100)
y = np.sin(x)

plt.plot(x, y)
plt.title('Demo of Matplotlib')
plt.show()
```
Retrieving training and test data

The MNIST data set already contains both training and test data. There are 55,000 data points of training data, and 10,000 points of test data. Each MNIST data point has:
1. an image of a handwritten digit and
2. a corresponding label (a number 0-9 that identifies the image)

We'll call the images, which will be the input to our neural network, X and their corresponding labels Y. We're going to want our labels as one-hot vectors, which are vectors that hold mostly 0's and a single 1. It's easiest to see this in an example. As a one-hot vector, the number 0 is represented as [1, 0, 0, 0, 0, 0, 0, 0, 0, 0], and 4 is represented as [0, 0, 0, 0, 1, 0, 0, 0, 0, 0].

Flattened data

For this example, we'll be using flattened data, or a representation of MNIST images in one dimension rather than two. So, each handwritten number image, which is 28x28 pixels, will be represented as a one-dimensional array of 784 pixel values. Flattening the data throws away information about the 2D structure of the image, but it simplifies our data so that all of the training data can be contained in one array whose shape is [55000, 784]; the first dimension is the number of training images and the second dimension is the number of pixels in each image. This is the kind of data that is easy to analyze using a simple neural network.
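As a quick illustration of both ideas, here's a hedged sketch using plain NumPy (nothing here depends on TFLearn or the loaded dataset):

```python
import numpy as np

# One-hot encode a label: a 10-element vector with a single 1.
label = 4
one_hot = np.eye(10)[label]
print(one_hot)  # [0. 0. 0. 0. 1. 0. 0. 0. 0. 0.]

# Flatten a 28x28 image into a 784-element vector.
image = np.zeros((28, 28))
flattened = image.reshape(784)
print(flattened.shape)  # (784,)
```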
```python
# Retrieve the training and test data
trainX, trainY, testX, testY = mnist.load_data(one_hot=True)
```
Source: Sentiment Analysis/Handwritten Digit Recognition with TFLearn and MNIST/handwritten-digit-recognition-with-tflearn.ipynb (nehal96/Deep-Learning-ND-Exercises, mit)
Visualize the training data

Provided below is a function that will help you visualize the MNIST data. By passing in the index of a training example, the function show_digit will display that training image along with its corresponding label in the title.
```python
# Visualizing the data
import matplotlib.pyplot as plt
%matplotlib inline

# Function for displaying a training image by its index in the MNIST set
def show_digit(index):
    label = trainY[index].argmax(axis=0)
    # Reshape 784 array into 28x28 image
    image = trainX[index].reshape([28, 28])
    plt.title('Training data, index: %d, Label: %d' % (index, label))
    plt.imshow(image, cmap='gray_r')
    plt.show()

# Display the first (index 0) training image
show_digit(0)
show_digit(10)
```
Building the network

TFLearn lets you build the network by defining the layers in that network. For this example, you'll define:
- The input layer, which tells the network the number of inputs it should expect for each piece of MNIST data.
- Hidden layers, which recognize patterns in data and connect the input to the output layer, and
- The output layer, which defines how the network learns and outputs a label for a given image.

Let's start with the input layer; to define the input layer, you'll define the type of data that the network expects. For example, net = tflearn.input_data([None, 100]) would create a network with 100 inputs. The number of inputs to your network needs to match the size of your data. For this example, we're using 784-element-long vectors to encode our input data, so we need 784 input units.

Adding layers

To add new hidden layers, you use

net = tflearn.fully_connected(net, n_units, activation='ReLU')

This adds a fully connected layer where every unit (or node) in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call; it designates the input to the hidden layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeatedly calling tflearn.fully_connected(net, n_units).

Then, to set how you train the network, use:

net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')

Again, this is passing in the network you've been building. The keywords:
- optimizer sets the training method, here stochastic gradient descent
- learning_rate is the learning rate
- loss determines how the network error is calculated. In this example, with categorical cross-entropy.

Finally, you put all this together to create the model with tflearn.DNN(net).

Exercise: Below in the build_model() function, you'll put together the network using TFLearn. You get to choose how many layers to use, how many hidden units, etc.

Hint: The final output layer must have 10 output nodes (one for each digit 0-9). It's also recommended to use a softmax activation layer as your final output layer.
# Define the neural network
def build_model(learning_rate):
    # This resets all parameters and variables, leave this here
    tf.reset_default_graph()

    #### Your code ####
    # Include the input layer, hidden layer(s), and set how you want to train the model

    # Input layer
    net = tflearn.input_data([None, 784])

    # Hidden layers
    net = tflearn.fully_connected(net, 200, activation='ReLU')
    net = tflearn.fully_connected(net, 40, activation='ReLU')

    # Output layer
    net = tflearn.fully_connected(net, 10, activation='softmax')
    net = tflearn.regression(net, optimizer='sgd', learning_rate=learning_rate,
                             loss='categorical_crossentropy')

    # This model assumes that your network is named "net"
    model = tflearn.DNN(net)
    return model

# Build the model
model = build_model(learning_rate=0.1)
Sentiment Analysis/Handwritten Digit Recognition with TFLearn and MNIST/handwritten-digit-recognition-with-tflearn.ipynb
nehal96/Deep-Learning-ND-Exercises
mit
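As a quick sanity check on the architecture above, the number of trainable weights and biases can be computed by hand; this small sketch just applies the fully connected layer formula (inputs * units + units) to each consecutive pair of layer sizes:

# Parameter count for the 784 -> 200 -> 40 -> 10 network above
layer_sizes = [784, 200, 40, 10]
total = sum(n_in * n_out + n_out for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))
print(total)  # 165450 weights and biases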
Training the network

Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1, which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively. Too few epochs don't effectively train your network, and too many take a long time to execute. Choose wisely!
# Training
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=100, n_epoch=10)
Sentiment Analysis/Handwritten Digit Recognition with TFLearn and MNIST/handwritten-digit-recognition-with-tflearn.ipynb
nehal96/Deep-Learning-ND-Exercises
mit
Testing

After you're satisfied with the training output and accuracy, you can then run the network on the test data set to measure its performance! Remember, only do this after you've done the training and are satisfied with the results. A good result will be higher than 98% accuracy! Some simple models have been known to get up to 99.7% accuracy.
# Compare the labels that our model predicts with the actual labels
predictions = np.array(model.predict(testX)).argmax(axis=1)

# Calculate the accuracy, which is the percentage of times the predicted labels matched the actual labels
test_accuracy = np.mean(predictions == testY.argmax(axis=1))

# Print out the result
print("Test accuracy: ", test_accuracy)
Sentiment Analysis/Handwritten Digit Recognition with TFLearn and MNIST/handwritten-digit-recognition-with-tflearn.ipynb
nehal96/Deep-Learning-ND-Exercises
mit
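Beyond the single accuracy number, it can help to see which digits the network confuses with which; here is a minimal sketch of a confusion matrix in plain numpy, assuming predictions and testY as computed above:

# Confusion matrix: rows are true digits, columns are predicted digits
true_labels = testY.argmax(axis=1)
confusion = np.zeros((10, 10), dtype=np.int_)
for t, p in zip(true_labels, predictions):
    confusion[t, p] += 1
print(confusion)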
Let us test the above function on a simple example: the full triangle with values 0, 1 and 2 on the vertices labeled 1, 2 and 3.
K = closure([(1, 2, 3)])
f = {1: 0, 2: 1, 3: 2}
for v in (1, 2, 3):
    print("{0}: {1}".format((v,), lower_link((v,), K, f)))
2015_2016/lab13/Extending values on vertices.ipynb
gregorjerse/rt2
gpl-3.0
Now let us implement an extension algorithm. We are leaving out the cancelling step for clarity.
def join(a, b):
    """Return the join of 2 simplices a and b."""
    return tuple(sorted(set(a).union(b)))

def extend(K, f):
    """Extend the field to the complex K.
    Function on vertices is given in f.
    Returns the pair V, C, where V is the dictionary containing
    the discrete gradient vector field and C is the list of all critical cells.
    """
    V = dict()
    C = []
    for v in (s for s in K if len(s) == 1):
        ll = lower_link(v, K, f)
        if len(ll) == 0:
            C.append(v)
        else:
            V1, C1 = extend(ll, f)
            mv, mc = min([(f[c[0]], c) for c in C1 if len(c) == 1])
            V[v] = join(v, mc)
            for c in (c for c in C1 if c != mc):
                C.append(join(v, c))
            for a, b in V1.items():
                V[join(a, v)] = join(b, v)
    return V, C
2015_2016/lab13/Extending values on vertices.ipynb
gregorjerse/rt2
gpl-3.0
Let us test the algorithm on the example from the previous step (the full triangle), and then on two slightly larger complexes.
K = closure([(1, 2, 3)])
f = {1: 0, 2: 1, 3: 2}
extend(K, f)

K = closure([(1, 2, 3), (2, 3, 4)])
f = {1: 0, 2: 1, 3: 2, 4: 0}
extend(K, f)

K = closure([(1, 2, 3), (2, 3, 4)])
f = {1: 0, 2: 1, 3: 2, 4: 3}
extend(K, f)
2015_2016/lab13/Extending values on vertices.ipynb
gregorjerse/rt2
gpl-3.0
Preparation
# for VGG, ResNet, and MobileNet
INPUT_SHAPE = (224, 224)
# for InceptionV3, InceptionResNetV2, Xception
# INPUT_SHAPE = (299, 299)

import os
import skimage.data
import skimage.transform
from keras.utils.np_utils import to_categorical
import numpy as np

def load_data(data_dir, type=".ppm"):
    num_categories = 6

    # Get all subdirectories of data_dir. Each represents a label.
    directories = [d for d in os.listdir(data_dir)
                   if os.path.isdir(os.path.join(data_dir, d))]
    # Loop through the label directories and collect the data in
    # two lists, labels and images.
    labels = []
    images = []
    for d in directories:
        label_dir = os.path.join(data_dir, d)
        file_names = [os.path.join(label_dir, f)
                      for f in os.listdir(label_dir) if f.endswith(type)]
        # For each label, load its images and add them to the images list.
        # And add the label number (i.e. directory name) to the labels list.
        for f in file_names:
            images.append(skimage.data.imread(f))
            labels.append(int(d))
    images64 = [skimage.transform.resize(image, INPUT_SHAPE) for image in images]
    y = np.array(labels)
    y = to_categorical(y, num_categories)
    X = np.array(images64)
    return X, y

# Load datasets.
ROOT_PATH = "./"
original_dir = os.path.join(ROOT_PATH, "speed-limit-signs")
original_images, original_labels = load_data(original_dir, type=".ppm")
X, y = original_images, original_labels
notebooks/workshops/tss/cnn-standard-architectures.ipynb
DJCordhose/ai
mit
Uncomment the next three cells if you want to train on the augmented image set. Otherwise overfitting cannot be avoided because the image set is simply too small. (The sketch after the code cell shows one way such augmentations can be generated.)
# !curl -O https://raw.githubusercontent.com/DJCordhose/speed-limit-signs/master/data/augmented-signs.zip
# from zipfile import ZipFile
# zip = ZipFile('augmented-signs.zip')
# zip.extractall('.')

data_dir = os.path.join(ROOT_PATH, "augmented-signs")
augmented_images, augmented_labels = load_data(data_dir, type=".png")

# merge both data sets
all_images = np.vstack((X, augmented_images))
all_labels = np.vstack((y, augmented_labels))

# shuffle
# https://stackoverflow.com/a/4602224
p = np.random.permutation(len(all_labels))
shuffled_images = all_images[p]
shuffled_labels = all_labels[p]
X, y = shuffled_images, shuffled_labels
notebooks/workshops/tss/cnn-standard-architectures.ipynb
DJCordhose/ai
mit
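The zip above was prepared offline, but similar augmentations can be generated with Keras' ImageDataGenerator; this is only a sketch with illustrative parameters, not necessarily how augmented-signs.zip was produced:

from keras.preprocessing.image import ImageDataGenerator

# Random rotations, shifts and zooms around the original signs
datagen = ImageDataGenerator(rotation_range=15,
                             width_shift_range=0.1,
                             height_shift_range=0.1,
                             zoom_range=0.1)
augmented_batch, label_batch = next(datagen.flow(original_images, original_labels, batch_size=32))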
Split the data into train and test sets, 80% to 20%
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42, stratify=y)
X_train.shape, y_train.shape
notebooks/workshops/tss/cnn-standard-architectures.ipynb
DJCordhose/ai
mit
Training Xception
- Slightly optimized version of Inception: https://keras.io/applications/#xception
- Inception V3: no longer using the non-sequential tower architecture, but rather shortcuts: https://keras.io/applications/#inceptionv3
- Uses Batch Normalization: https://keras.io/layers/normalization/#batchnormalization, http://cs231n.github.io/neural-networks-2/#batchnorm
  - Batch Normalization still exists even in the prediction model
  - normalizes activations for each batch to mean 0 and standard deviation close to 1
  - replaces Dropout except for the final fc layers
  - as a next step it might make sense to alter the classifier to again have Dropout for training

All that makes it ideal for our use case (a small numpy sketch of the normalization step follows after the next code cell).
import keras
from keras.applications.xception import Xception

model = Xception(classes=6, weights=None)
model.summary()

model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

# !rm -rf ./tf_log
# https://keras.io/callbacks/#tensorboard
tb_callback = keras.callbacks.TensorBoard(log_dir='./tf_log')
# To start tensorboard
# tensorboard --logdir=./tf_log
# open http://localhost:6006
notebooks/workshops/tss/cnn-standard-architectures.ipynb
DJCordhose/ai
mit
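To see what the normalization step of Batch Normalization does to a batch of activations, here is a minimal numpy sketch (leaving out the learned scale and shift parameters gamma and beta):

# Normalize a batch of activations towards mean 0 and standard deviation 1
activations = np.random.randn(25, 100) * 5 + 3  # a shifted, scaled batch of 25
mean = activations.mean(axis=0)
std = activations.std(axis=0)
normalized = (activations - mean) / (std + 1e-5)  # epsilon avoids division by zero
print(normalized.mean(), normalized.std())  # close to 0 and 1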
This is a truly complex model
- Batch size needs to be small, otherwise the model does not fit in memory
- Will take long to train, even on a GPU
- On the augmented dataset: 4 minutes per epoch on a K80, i.e. 400 minutes for 100 epochs = 6-7 hours
# Depends on hardware GPU architecture; model is really complex, batch needs to be small (this works well on a K80)
BATCH_SIZE = 25

early_stopping_callback = keras.callbacks.EarlyStopping(monitor='val_loss', patience=5, verbose=1)

%time model.fit(X_train, y_train, epochs=50, validation_split=0.2, callbacks=[tb_callback, early_stopping_callback], batch_size=BATCH_SIZE)
notebooks/workshops/tss/cnn-standard-architectures.ipynb
DJCordhose/ai
mit
Each epoch takes very long. Extremely impressive how fast it converges: almost 100% for validation starting from epoch 25.
TODO: metrics for augmented data
- Accuracy
- Validation Accuracy
train_loss, train_accuracy = model.evaluate(X_train, y_train, batch_size=BATCH_SIZE)
train_loss, train_accuracy

test_loss, test_accuracy = model.evaluate(X_test, y_test, batch_size=BATCH_SIZE)
test_loss, test_accuracy

original_loss, original_accuracy = model.evaluate(original_images, original_labels, batch_size=BATCH_SIZE)
original_loss, original_accuracy

model.save('xception-augmented.hdf5')
!ls -lh xception-augmented.hdf5
notebooks/workshops/tss/cnn-standard-architectures.ipynb
DJCordhose/ai
mit
Alternative: ResNet
- basic idea: depth does matter
- 8x deeper than VGG
- made possible by using shortcuts and skipping the final fc layer
- https://keras.io/applications/#resnet50
- https://medium.com/towards-data-science/neural-network-architectures-156e5bad51ba
- http://arxiv.org/abs/1512.03385

A minimal sketch of such a shortcut block follows after the next code cell.
from keras.applications.resnet50 import ResNet50

model = ResNet50(classes=6, weights=None)
model.summary()

model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

early_stopping_callback = keras.callbacks.EarlyStopping(monitor='val_loss', patience=10, verbose=1)

!rm -rf ./tf_log
# https://keras.io/callbacks/#tensorboard
tb_callback = keras.callbacks.TensorBoard(log_dir='./tf_log')
# To start tensorboard
# tensorboard --logdir=./tf_log
# open http://localhost:6006

# Depends on hardware GPU architecture; model is really complex, batch needs to be small (this works well on a K80)
BATCH_SIZE = 50

# https://github.com/fchollet/keras/issues/6014
# batch normalization seems to mess with accuracy when the test data set is small; accuracy here is different from below
%time model.fit(X_train, y_train, epochs=50, validation_split=0.2, batch_size=BATCH_SIZE, callbacks=[tb_callback, early_stopping_callback])
# %time model.fit(X_train, y_train, epochs=50, validation_split=0.2, batch_size=BATCH_SIZE, callbacks=[tb_callback])
notebooks/workshops/tss/cnn-standard-architectures.ipynb
DJCordhose/ai
mit
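The shortcut idea itself fits in a few lines of the Keras functional API; this is a minimal sketch of an identity residual block, simplified compared to the bottleneck blocks ResNet50 actually uses:

from keras.layers import Conv2D, BatchNormalization, Activation, Add

def identity_block(x, filters):
    # Main path: two 3x3 convolutions with batch norm
    shortcut = x
    x = Conv2D(filters, (3, 3), padding='same')(x)
    x = BatchNormalization()(x)
    x = Activation('relu')(x)
    x = Conv2D(filters, (3, 3), padding='same')(x)
    x = BatchNormalization()(x)
    # Shortcut: add the block input back in before the final activation
    x = Add()([x, shortcut])
    return Activation('relu')(x)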
Results are a bit worse
- Maybe we need to train longer?
- Batches can be larger, so training is faster even though it takes more epochs

Metrics for augmented data
- Accuracy
- Validation Accuracy
train_loss, train_accuracy = model.evaluate(X_train, y_train, batch_size=BATCH_SIZE)
train_loss, train_accuracy

test_loss, test_accuracy = model.evaluate(X_test, y_test, batch_size=BATCH_SIZE)
test_loss, test_accuracy

original_loss, original_accuracy = model.evaluate(original_images, original_labels, batch_size=BATCH_SIZE)
original_loss, original_accuracy

model.save('resnet-augmented.hdf5')
!ls -lh resnet-augmented.hdf5
notebooks/workshops/tss/cnn-standard-architectures.ipynb
DJCordhose/ai
mit
1 - The data
# We load our data from some available ones shipped with dcgpy.
# In this particular case we use the problem sinecosine from the paper:
# Vladislavleva, Ekaterina J., Guido F. Smits, and Dick Den Hertog.
# "Order of nonlinearity as a complexity measure for models generated by symbolic regression via pareto genetic
# programming." IEEE Transactions on Evolutionary Computation 13.2 (2008): 333-349.
X, Y = dcgpy.generate_sinecosine()

from mpl_toolkits.mplot3d import Axes3D

# And we plot them to visualize the problem.
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
_ = ax.scatter(X[:,0], X[:,1], Y[:,0])
doc/sphinx/notebooks/symbolic_regression_3.ipynb
darioizzo/d-CGP
gpl-3.0
2 - The symbolic regression problem
# We define our kernel set, that is the mathematical operators we will
# want our final model to possibly contain. What to choose in here is left
# to the competence and knowledge of the user. A list of kernels shipped with dcgpy
# can be found in the online docs. The user can also define their own kernels (see the corresponding tutorial).
ss = dcgpy.kernel_set_double(["sum", "diff", "mul", "sin", "cos"])

# We instantiate the symbolic regression optimization problem.
# Note how we specify to consider one ephemeral constant via
# the kwarg n_eph. We also request 100 kernels with a linear
# layout (this allows for the construction of longer expressions) and
# we set the levels-back to 101 (in an attempt to skew the search towards
# simple expressions)
udp = dcgpy.symbolic_regression(
    points = X, labels = Y, kernels=ss(),
    rows = 1,
    cols = 100,
    n_eph = 1,
    levels_back = 101,
    multi_objective=True)
prob = pg.problem(udp)
print(udp)
doc/sphinx/notebooks/symbolic_regression_3.ipynb
darioizzo/d-CGP
gpl-3.0
3 - The search algorithm
# We instantiate here the evolutionary strategy we want to use to
# search for models. Note we specify we want the evolutionary operators
# to be applied also to the constants via the kwarg *learn_constants*
uda = dcgpy.momes4cgp(gen = 250, max_mut = 4)
algo = pg.algorithm(uda)
algo.set_verbosity(10)
doc/sphinx/notebooks/symbolic_regression_3.ipynb
darioizzo/d-CGP
gpl-3.0
4 - The search
# We use a population of 100 individuals
pop = pg.population(prob, 100)

# Here is where we run the actual evolution. Note that the screen output
# will show in the terminal (not in your Jupyter notebook in case
# you are using it). Note you will have to run this a few times before
# solving the problem entirely.
pop = algo.evolve(pop)
doc/sphinx/notebooks/symbolic_regression_3.ipynb
darioizzo/d-CGP
gpl-3.0
5 - Inspecting the non dominated front
# Compute here the non dominated front.
ndf = pg.non_dominated_front_2d(pop.get_f())

# Inspect the front and print the proposed expressions.
print("{: >20} {: >30}".format("Loss:", "Model:"), "\n")
for idx in ndf:
    x = pop.get_x()[idx]
    f = pop.get_f()[idx]
    a = parse_expr(udp.prettier(x))[0]
    print("{: >20} | {: >30}".format(str(f[0]), str(a)), "|")

# Let's have a look at the non dominated fronts in the final population.
ax = pg.plot_non_dominated_fronts(pop.get_f())
_ = plt.xlabel("loss")
_ = plt.ylabel("complexity")
_ = plt.title("Non dominated fronts")
doc/sphinx/notebooks/symbolic_regression_3.ipynb
darioizzo/d-CGP
gpl-3.0
6 - Let's have a look at the log content
# Here we get the log of the latest call to evolve
log = algo.extract(dcgpy.momes4cgp).get_log()
gen = [it[0] for it in log]
loss = [it[2] for it in log]
compl = [it[4] for it in log]

# And here we plot, for example, the generations against the best loss
_ = plt.plot(gen, loss)
_ = plt.title('last call to evolve')
_ = plt.xlabel('generations')
_ = plt.ylabel('loss')
doc/sphinx/notebooks/symbolic_regression_3.ipynb
darioizzo/d-CGP
gpl-3.0
Open your dataset up using pandas in a Jupyter notebook
import pandas as pd

df = pd.read_csv("congress.csv", error_bad_lines=False)
foundations_hw/08/Homework8_benzaquen_congress_data.ipynb
mercybenzaquen/foundations-homework
mit
Do a .head() to get a feel for your data
df.head() #bioguide: The alphanumeric ID for legislators in http://bioguide.congress.gov.
foundations_hw/08/Homework8_benzaquen_congress_data.ipynb
mercybenzaquen/foundations-homework
mit
Write down 12 questions to ask your data, or 12 things to hunt for in the data.

1) How many senators and how many representatives in total since 1947?
df['chamber'].value_counts()  # sounds like a lot. We might have repetitions.

df['bioguide'].describe()
# We count the bioguide, which is unique to each legislator.
# There are only 3188 unique values, hence only 3188 senators and representatives in total.
foundations_hw/08/Homework8_benzaquen_congress_data.ipynb
mercybenzaquen/foundations-homework
mit
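Since many legislators appear in several terms, counting unique bioguide IDs per chamber gives a cleaner split; a small follow-up sketch:

# Unique legislators per chamber, counting each bioguide ID only once
df.groupby('chamber')['bioguide'].nunique()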
2) How many from each party in total?
total_democrats = (df['party'] == 'D').value_counts()
total_democrats

total_republicans = (df['party'] == 'R').value_counts()
total_republicans
foundations_hw/08/Homework8_benzaquen_congress_data.ipynb
mercybenzaquen/foundations-homework
mit
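The boolean value_counts above works, but calling value_counts on the party column itself answers the question for all parties at once (counting rows, not unique legislators):

# Row counts for every party in one call
df['party'].value_counts()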
3) What is the average age for people that have worked in congress (both Senators and Representatives)?
df['age'].describe()
foundations_hw/08/Homework8_benzaquen_congress_data.ipynb
mercybenzaquen/foundations-homework
mit
4) What is the average age of Senators that have worked in the Senate? And for Representatives in the house?
df.groupby("chamber")['age'].describe()
foundations_hw/08/Homework8_benzaquen_congress_data.ipynb
mercybenzaquen/foundations-homework
mit
5) How many in total from each state?
df['state'].value_counts()
foundations_hw/08/Homework8_benzaquen_congress_data.ipynb
mercybenzaquen/foundations-homework
mit
6) How many Senators in total from each state? How many Representatives?
df.groupby("state")['chamber'].value_counts()
foundations_hw/08/Homework8_benzaquen_congress_data.ipynb
mercybenzaquen/foundations-homework
mit
7) How many terms are recorded in this dataset?
df['termstart'].describe() #here we would look at unique.
foundations_hw/08/Homework8_benzaquen_congress_data.ipynb
mercybenzaquen/foundations-homework
mit
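Pandas can also report that count directly with nunique; a small sketch:

# Number of distinct term start dates in the dataset
df['termstart'].nunique()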
8) Who has been the oldest serving member in the US, a senator or a representative? How old was he/she?
df.sort_values(by='age').tail(1) #A senator!
foundations_hw/08/Homework8_benzaquen_congress_data.ipynb
mercybenzaquen/foundations-homework
mit
9) Who have been the oldest and youngest serving Representatives in the US?
representative = df[df['chamber'] == 'house']
representative.sort_values(by='age').tail(1)
representative.sort_values(by='age').head(2)
foundations_hw/08/Homework8_benzaquen_congress_data.ipynb
mercybenzaquen/foundations-homework
mit
10) Who have been the oldest and youngest serving Senators in the US?
senator = df[df['chamber'] == 'senate']
senator.sort_values(by='age')
senator.sort_values(by='age').head(2)
foundations_hw/08/Homework8_benzaquen_congress_data.ipynb
mercybenzaquen/foundations-homework
mit
11) Who has served the most periods (in this question I am not paying attention to the period length)?
# Store a new column
df['complete_name'] = df['firstname'] + " " + df['middlename'] + " " + df['lastname']
df.head()

period_count = df.groupby('complete_name')['termstart'].value_counts().sort_values(ascending=False)
pd.DataFrame(period_count)
# With the help of Stephan we figured out that termstart comes every 2 years
# (so this is not giving us info about how many terms each legislator has served)
foundations_hw/08/Homework8_benzaquen_congress_data.ipynb
mercybenzaquen/foundations-homework
mit
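A follow-up sketch that counts distinct term starts per legislator gets closer to answering who served the most periods (still ignoring period length):

# Distinct term starts per legislator; a new Congress starts roughly every 2 years
df.groupby('complete_name')['termstart'].nunique().sort_values(ascending=False).head(10)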