Dataset columns: markdown (string, 0–37k characters), code (string, 1–33.3k characters), path (string, 8–215 characters), repo_name (string, 6–77 characters), license (15 classes).
We can check the result using the utility function.
print_causal_directions(cdc, n_sampling)
examples/RESIT.ipynb
cdt15/lingam
mit
We can check the result using the utility function.
print_dagc(dagc, n_sampling)
examples/RESIT.ipynb
cdt15/lingam
mit
Bootstrap Probability of Path Using the get_paths() method, we can explore all paths from any variable to any variable and calculate the bootstrap probability for each path. The path will be output as an array of variable indices. For example, the array [0, 1, 3] shows the path from variable X0 through variable X1 to variable X3.
from_index = 0  # index of x0
to_index = 3    # index of x3
pd.DataFrame(result.get_paths(from_index, to_index))
examples/RESIT.ipynb
cdt15/lingam
mit
Installation
From the root of this repository, run pip install .[demos] to install both dm_construction and the extra dependencies needed to run this notebook. Then install ffmpeg:
- Cross-platform with Anaconda: conda install ffmpeg
- Ubuntu: apt-get install ffmpeg
- Mac with Homebrew: brew install ffmpeg
import matplotlib.pyplot as plt import dm_construction def show_difficulties(env_, difficulties=None): """Generate and plot episodes at each difficulty level.""" if not difficulties: difficulties = range(0, env_.core_env.max_difficulty + 1) frames = [] for difficulty in difficulties: _ = env_.reset(difficulty=difficulty, curriculum_sample=False) frames.append(env_.core_env.last_time_step.observation["RGB"].squeeze()) base_size = 5 num_frames = len(frames) _, axes = plt.subplots( 1, num_frames, squeeze=False, figsize=(base_size*num_frames, base_size)) for i, rgb_observation in enumerate(frames): ax = axes[0, i] ax.imshow(rgb_observation) ax.set_axis_off() ax.set_aspect("equal") if isinstance(difficulties[i], str): ax.set_title(difficulties[i]) else: ax.set_title("difficulty = {}".format(difficulties[i]))
demos/task_difficulties.ipynb
deepmind/dm_construction
apache-2.0
Load Environments First we will load a copy of each environment. We can reuse the same underlying Unity process for all of them, which makes loading a bit faster.
# Create a new Unity process. Use a higher res on the camera for nicer images.
unity_env = dm_construction.get_unity_environment(width=600, height=600)

# Create one copy of each environment.
envs = {}
env_names = [
    "marble_run", "covering_hard", "covering", "connecting", "silhouette"]
for task in env_names:
  envs[task] = dm_construction.get_environment(
      task, unity_environment=unity_env, curriculum_sample=None, difficulty=None)
demos/task_difficulties.ipynb
deepmind/dm_construction
apache-2.0
Silhouette The difficulty levels of Silhouette involve increasing the number of targets, the number of obstacles, and the maximum height of the targets. Generalization involves increasing the number of targets beyond what was seen during training.
# Curriculum difficulties.
show_difficulties(envs["silhouette"], difficulties=[0, 1, 2, 3])
show_difficulties(envs["silhouette"], difficulties=[4, 5, 6, 7])

# Generalization.
show_difficulties(envs["silhouette"], difficulties=["double_the_targets"])
demos/task_difficulties.ipynb
deepmind/dm_construction
apache-2.0
Connecting The difficulty levels in Connecting involve increasing the number of obstacles, the number of layers of obstacles, and the height of the targets. Generalization in connecting involves having mixed heights of the targets, or adding an additional layer of obstacles (and also increasing the height of the targets).
# Curriculum difficulties.
show_difficulties(envs["connecting"], difficulties=[0, 1, 2, 3, 4])
show_difficulties(envs["connecting"], difficulties=[5, 6, 7, 8, 9])

# Generalization.
show_difficulties(envs["connecting"], difficulties=["mixed_height_targets", "additional_layer"])
demos/task_difficulties.ipynb
deepmind/dm_construction
apache-2.0
Covering The difficulty levels in the Covering task involve increasing the number of obstacles and the maximum height of the obstacles.
# Curriculum difficulties.
show_difficulties(envs["covering"])
demos/task_difficulties.ipynb
deepmind/dm_construction
apache-2.0
Covering Hard Like in Covering, the difficulty levels involve increasing the number of obstacles and the maximum height of the obstacles.
# Curriculum difficulties.
show_difficulties(envs["covering_hard"])
demos/task_difficulties.ipynb
deepmind/dm_construction
apache-2.0
Marble Run The difficulty levels in Marble Run involve the distance between the ball and the goal, the number of obstacles, and the height of the target.
# Curriculum difficulties.
show_difficulties(envs["marble_run"], difficulties=[0, 1, 2, 3, 4])
show_difficulties(envs["marble_run"], difficulties=[5, 6, 7, 8])
demos/task_difficulties.ipynb
deepmind/dm_construction
apache-2.0
Close Environments The Unity environment won't get garbage collected since it is actually running as a separate process, so make sure to always shut down all environments after they are finished running.
for name, env in envs.items():
  print("Closing '{}'".format(name))
  env.close()
demos/task_difficulties.ipynb
deepmind/dm_construction
apache-2.0
Here, we have created a fictional dataset that contains earnings for the years 2016 and 2017.
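The messy_df referenced below is not constructed in this excerpt; a minimal sketch of what such a wide-format table could look like (company names and figures are purely illustrative) is:

```python
import pandas as pd

# Hypothetical wide-format ("messy") table: one earnings column per year.
messy_df = pd.DataFrame({
    'company': ['Acme', 'Globex', 'Initech'],
    '2016': [100, 250, 80],
    '2017': [120, 240, 95],
})
```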
messy_df
first_steps_in_data_science.ipynb
yassineAlouini/first-steps-data-science
mit
You might ask, what is the problem with this dataset? <br> There are two main ones: the columns 2016 and 2017 contain the same type of variable (earnings), and the column names 2016 and 2017 encode information about the year. Now that we have a "messy" dataset, let's clean it.
tidy_df = pd.melt(messy_df, id_vars=['company'], value_name='earnings', var_name='year')
tidy_df
first_steps_in_data_science.ipynb
yassineAlouini/first-steps-data-science
mit
That's much better! <br> In summary, a tidy dataset has the following properties: each column represents only one variable, and each row represents an observation. Example Import packages
import pandas as pd
import missingno as msno
first_steps_in_data_science.ipynb
yassineAlouini/first-steps-data-science
mit
Loading data Kaggle offers many free datasets with lots of metadata, descriptions, kernels, discussions and so on. <br> Today, we will be working with the San Francisco Salaries dataset. You can download it from here (you need a Kaggle account) or get it from the workshop repository. The dataset we will be working with is a CSV file. Fortunately for us, Pandas has a handy method .read_csv. Let's try it out!
sf_salaries_df = pd.read_csv('data/Salaries.csv')
first_steps_in_data_science.ipynb
yassineAlouini/first-steps-data-science
mit
Data exploration
sf_salaries_df.head(3).transpose()
sf_salaries_df.sample(5).transpose()
sf_salaries_df.columns
sf_salaries_df.dtypes
sf_salaries_df.describe()
msno.matrix(sf_salaries_df)
first_steps_in_data_science.ipynb
yassineAlouini/first-steps-data-science
mit
Some analysis What are the different job titles? How many?
sf_salaries_df.JobTitle.value_counts()
sf_salaries_df.JobTitle.nunique()
first_steps_in_data_science.ipynb
yassineAlouini/first-steps-data-science
mit
Highest and lowest salaries per year? Which jobs?
sf_salaries_df.groupby('Year').TotalPay.agg(['min', 'max'])

lowest_idx = sf_salaries_df.groupby('Year').apply(lambda df: df.TotalPay.argmin())
sf_salaries_df.loc[lowest_idx, ['Year', 'JobTitle']]

highest_idx = sf_salaries_df.groupby('Year').apply(lambda df: df.TotalPay.argmax())
sf_salaries_df.loc[highest_idx, ['Year', 'JobTitle']]
first_steps_in_data_science.ipynb
yassineAlouini/first-steps-data-science
mit
Acoustics Animation to link from Acoustics.ipynb.
from exact_solvers import acoustics_demos def make_bump_animation_html(numframes, file_name): video_html = acoustics_demos.bump_animation(numframes) f = open(file_name,'w') f.write('<html>\n') file_name = 'acoustics_bump_animation.html' descr = """<h1>Acoustics Bump Animation</h1> This animation is to accompany <a href="http://www.clawpack.org/riemann_book/html/Acoustics.html">this notebook</a>,\n from the book <a href="http://www.clawpack.org/riemann_book/index.html">Riemann Problems and Jupyter Solutions</a>\n""" f.write(descr) f.write("<p>") f.write(video_html) print("Created ", file_name) f.close() file_name = 'html_animations/acoustics_bump_animation.html' anim = make_bump_animation_html(numframes=50, file_name=file_name) FileLink(file_name)
Make_html_animations.ipynb
maojrs/riemann_book
bsd-3-clause
Burgers Animations to link from Burgers.ipynb.
from exact_solvers import burgers_demos from importlib import reload reload(burgers_demos) video_html = burgers_demos.bump_animation(numframes = 50) file_name = 'html_animations/burgers_animation0.html' f = open(file_name,'w') f.write('<html>\n') descr = """<h1>Burgers' Equation Animation</h1> This animation is to accompany <a href="http://www.clawpack.org/riemann_book/html/Burgers.html">this notebook</a>,\n from the book <a href="http://www.clawpack.org/riemann_book/index.html">Riemann Problems and Jupyter Solutions</a>\n <p> Burgers' equation with hump initial data, evolving into a shock wave followed by a rarefaction wave.""" f.write(descr) f.write("<p>") f.write(video_html) print("Created ", file_name) f.close() FileLink(file_name) def make_burgers_animation_html(ql, qm, qr, file_name): video_html = burgers_demos.triplestate_animation(ql,qm,qr,numframes=50) f = open(file_name,'w') f.write('<html>\n') descr = """<h1>Burgers' Equation Animation</h1> This animation is to accompany <a href="http://www.clawpack.org/riemann_book/html/Burgers.html">this notebook</a>,\n from the book <a href="http://www.clawpack.org/riemann_book/index.html">Riemann Problems and Jupyter Solutions</a>\n <p> Burgers' equation with three constant states as initial data,\n ql = %.1f, qm = %.1f, qr = %.1f""" % (ql,qm,qr) f.write(descr) f.write("<p>") f.write(video_html) print("Created ", file_name) f.close() file_name = 'html_animations/burgers_animation1.html' make_burgers_animation_html(4., 2., 0., file_name) FileLink(file_name) file_name = 'html_animations/burgers_animation2.html' make_burgers_animation_html(4., -1.5, 0.5, file_name) FileLink(file_name) file_name = 'html_animations/burgers_animation3.html' make_burgers_animation_html(-1., 3., -2., file_name) FileLink(file_name)
Make_html_animations.ipynb
maojrs/riemann_book
bsd-3-clause
Let's examine the first criterion: the mean, median, and mode of a Gaussian distribution are all the same. To calculate the mode, we need to import another module called the stats module. The median can still be calculated from the numpy module.
# import the stats module
from scipy import stats
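As a quick illustration (the exact return type of stats.mode varies between SciPy versions; older versions return a ModeResult with mode and count arrays):

```python
import numpy as np
from scipy import stats

sample = np.array([1, 2, 2, 3, 3, 3])
print(stats.mode(sample))   # e.g. ModeResult(mode=array([3]), count=array([3])) in older SciPy
print(np.median(sample))    # 2.5
```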
notebooks/Lectures2018/Lecture3/Lecture3_Gaussians-Answer Key.ipynb
astroumd/GradMap
gpl-3.0
Now calculate the median and mode of the variable lifetimes and display them.
# your code here
lifemode = stats.mode(lifetimes)    # calculate mode
lifemedian = np.median(lifetimes)   # calculate median
print(lifemean)
print(lifemode)
print(lifemedian)
notebooks/Lectures2018/Lecture3/Lecture3_Gaussians-Answer Key.ipynb
astroumd/GradMap
gpl-3.0
Does the lifetimes data fulfill the first criterion of a Gaussian distribution? Now let's check the second criterion. Is there symmetry about the mean? First, let's find out how many samples are in the variable lifetimes and display it.
# your code here
numsamp = len(lifetimes)
print(numsamp)
notebooks/Lectures2018/Lecture3/Lecture3_Gaussians-Answer Key.ipynb
astroumd/GradMap
gpl-3.0
Now that you have the number of samples, you will need to use the median value to find out how many samples lie above and below it.
# Put your code here

# why doesn't this work?
# uppermask = lifetimes > lifemedian
# upperhalf = lifetimes(uppermask)   # this should work, but doesn't?
#   (round parentheses try to *call* the array; boolean indexing needs
#    square brackets, e.g. lifetimes[uppermask])
# lowermask = lifetimes <= lifemedian
# lowerhalf = lifetimes(lowermask)   # ditto

# but this does:
upperhalf = [ii for ii in lifetimes if ii > lifemedian]    # get upper 50%
lowerhalf = [jj for jj in lifetimes if jj <= lifemedian]   # get lower 50%

upperperc = len(upperhalf)/numsamp
lowerperc = len(lowerhalf)/numsamp
print(upperperc)
print(lowerperc)
notebooks/Lectures2018/Lecture3/Lecture3_Gaussians-Answer Key.ipynb
astroumd/GradMap
gpl-3.0
Does the lifetimes data fulfill the second criterion of a Gaussian distribution? Now let's check the last criterion: how much of the data falls within one standard deviation (or two, or three)? For a Gaussian, roughly 68%, 95%, and 99.7% of the data fall within one, two, and three standard deviations of the center, respectively. Remember, you already calculated the standard deviation of the lifetimes data as the variable lifestd.
# Put your code here
plus_std = (lifemedian + 1*lifestd, lifemedian + 2*lifestd, lifemedian + 3*lifestd)
minus_std = (lifemedian - 1*lifestd, lifemedian - 2*lifestd, lifemedian - 3*lifestd)

aboveperc = [None]*3
belowperc = [None]*3

ii = 0
while ii < len(plus_std):
    data_above = [jj for jj in lifetimes if jj > lifemedian and jj < plus_std[ii]]
    aboveperc[ii] = len(data_above)/numsamp
    data_below = [kk for kk in lifetimes if kk <= lifemedian and kk > minus_std[ii]]
    belowperc[ii] = len(data_below)/numsamp
    ii += 1
    print('% of data within', ii, 'standard deviations of the median:', aboveperc[ii-1] + belowperc[ii-1])
notebooks/Lectures2018/Lecture3/Lecture3_Gaussians-Answer Key.ipynb
astroumd/GradMap
gpl-3.0
Usage A Simple Tokenizer The tokenize function is a high level API for splitting a text into tokens. It returns a generator of tokens.
from mecabwrap import tokenize, print_token

for token in tokenize('すもももももももものうち'):
    print_token(token)
notebook/mecabwrap - Python Interface to MeCab for Unix and Windows.ipynb
kota7/mecabwrap-py
mit
Token is defined as a namedtuple (v0.3.2+) with the following fields:
- surface: word as it appears in the text
- pos: part of speech
- pos1: part of speech, detail 1
- pos2: part of speech, detail 2
- pos3: part of speech, detail 3
- infl_type: inflection type
- infl_form: inflection form
- baseform: original form
- reading: surface written in katakana
- phonetic: surface pronunciation
- lemma: representative form of the word (語彙素)
- lemma_reading: reading of lemma

Among these, lemma and lemma_reading are not available in ipadic; they are defined in unidic-based dictionaries.
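For example, the fields can be read off each Token by plain attribute access (a small sketch; the exact values printed depend on the dictionary in use):

```python
from mecabwrap import tokenize

for token in tokenize('すもももももももものうち'):
    # each Token is a namedtuple, so fields are ordinary attributes
    print(token.surface, token.pos, token.baseform, token.reading)
```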
token
notebook/mecabwrap - Python Interface to MeCab for Unix and Windows.ipynb
kota7/mecabwrap-py
mit
Using MeCab Options To configure the MeCab calls, one may use the do_ functions, which accept an arbitrary number of MeCab options. Currently, the following three do_ functions are provided:
- do_mecab: works with a single input text and returns the result as a string.
- do_mecab_vec: works with multiple input texts and returns a string of concatenated results.
- do_mecab_iter: works with multiple input texts and returns a generator.
For example, the following code invokes the wakati option, so the outcome will be words separated by spaces with no meta information. See the official site for more details.
from mecabwrap import do_mecab

out = do_mecab('人生楽ありゃ苦もあるさ', '-Owakati')
print(out)
notebook/mecabwrap - Python Interface to MeCab for Unix and Windows.ipynb
kota7/mecabwrap-py
mit
The example below uses do_mecab_vec to parse multiple texts. Note that the -F option configures the output formatting.
from mecabwrap import do_mecab_vec

ins = ['春はあけぼの', 'やうやう白くなりゆく山際', '少し明かりて', '紫だちたる雲の細くたなびきたる']

out = do_mecab_vec(ins, '-F%f[6](%f[1]) | ', '-E...ここまで\n')
print(out)
notebook/mecabwrap - Python Interface to MeCab for Unix and Windows.ipynb
kota7/mecabwrap-py
mit
Returning Iterators When the number of input texts is large, holding all the outcomes in memory may not be a good idea. The do_mecab_iter function, which works with multiple texts, returns a generator of MeCab results. When byline=True, chunks are separated by line breaks; a chunk corresponds to a token in the default setting. When byline=False, chunks are separated by EOS, so a chunk corresponds to a sentence.
from mecabwrap import do_mecab_iter ins = ['春はあけぼの', 'やうやう白くなりゆく山際', '少し明かりて', '紫だちたる雲の細くたなびきたる'] print('\n*** generating tokens ***') i = 0 for text in do_mecab_iter(ins, byline=True): i += 1 print('(' + str(i) + ')\t' + text) print('\n*** generating tokenized sentences ***') i = 0 for text in do_mecab_iter(ins, '-E', '(文の終わり)', byline=False): i += 1 print('---(' + str(i) + ')\n' + text)
notebook/mecabwrap - Python Interface to MeCab for Unix and Windows.ipynb
kota7/mecabwrap-py
mit
Writing the outcome to a file To write the MeCab outcomes directly to a file, one may use either the -o option or the outpath argument. Note that this does not work with do_mecab_iter, since it is designed to write the outcomes to a temporary file.
do_mecab('すもももももももものうち', '-osumomo1.txt')
# or,
do_mecab('すもももももももものうち', outpath='sumomo2.txt')

with open('sumomo1.txt') as f:
    print(f.read())
with open('sumomo2.txt') as f:
    print(f.read())

import os
# clean up
os.remove('sumomo1.txt')
os.remove('sumomo2.txt')

# these get error
try:
    res = do_mecab_iter(['すもももももももものうち'], '-osumomo3.txt')
    next(res)
except Exception as e:
    print(e)

try:
    res = do_mecab_iter(['すもももももももものうち'], outpath='sumomo3.txt')
    next(res)
except Exception as e:
    print(e)
notebook/mecabwrap - Python Interface to MeCab for Unix and Windows.ipynb
kota7/mecabwrap-py
mit
Using Dictionary (v0.3.0+) do_ functions accept a dictionary option to specify the location of the system dictionary. dictionary can be either: a path to the system dictionary directory, or a sub-directory name under MeCab's default dicdir (note: mecab-config is required for this). This provides an intuitive syntax for using extended dictionaries such as ipadic-neologd or unidic-neologd.
# this cell assumes that mecab-ipadic-neologd is already installed
# otherwise, follow the instruction at https://github.com/neologd/mecab-ipadic-neologd
print("*** Default ipadic ***")
print(do_mecab("メロンパンを食べたい"))

print("*** With ipadic neologd ***")
print(do_mecab("メロンパンを食べたい", dictionary="mecab-ipadic-neologd"))

# this is equivalent to giving the path
dicdir, = !mecab-config --dicdir
print(do_mecab("メロンパンを食べたい", dictionary=os.path.join(dicdir, "mecab-ipadic-neologd")))
notebook/mecabwrap - Python Interface to MeCab for Unix and Windows.ipynb
kota7/mecabwrap-py
mit
Very Long Input and Buffer Size (v0.2.3+) When the input text is longer than the input buffer size (default: 8192), MeCab automatically splits it into two "sentences" by inserting an extra EOS (and a few letters are lost around the separation point). As a result, do_mecab_vec and do_mecab_iter might produce an output longer than the input. The do_ functions provide two workarounds for this: 1. If the option auto_buffer_size is True, the input-buffer-size option is automatically adjusted so that it is large enough to cover the whole input text. Note that it won't work when the input size exceeds MeCab's maximum buffer size, 8192 * 640 ~ 5MB. 2. If the option truncate is True, the input text is truncated so that it fits within the input buffer size. Note that do_mecab does not have these features.
import warnings x = 'すもももももももものうち!' * 225 print("input buffer size =", len(x.encode())) with warnings.catch_warnings(record=True) as w: res1 = list(do_mecab_iter([x])) # the text is split into two since it exceeds the input buffer size print("output length =", len(res1)) print('***\nEnd of the first element') print(res1[0][-150:]) print('***\nBeginning of the second element') print(res1[1][0:150]) import re res2 = list(do_mecab_iter([x], auto_buffer_size=True)) print("output length =", len(res2)) print('***\nEnd of the first element') print(res2[0][-150:]) # count the number of '!', to confirm all 223 repetitions are covered print('number of "!" =', len(re.findall(r'!', ''.join(res2)))) print() res3 = list(do_mecab_iter([x], truncate=True)) print("output length =", len(res3)) print('***\nEnd of the first element') print(res3[0][-150:]) # count the number of '!', to confirm some are not covered due to trancation print('number of "!" =', len(re.findall(r'!', ''.join(res3))))
notebook/mecabwrap - Python Interface to MeCab for Unix and Windows.ipynb
kota7/mecabwrap-py
mit
Batch processing (v0.3.2+) The mecab_batch function supports multiple text inputs. It takes a list of strings and applies the MeCab tokenizer to each. The output is a list of tokenization outcomes. The mecab_batch_iter function works similarly but returns a generator instead.
from mecabwrap import mecab_batch

x = ["明日は晴れるかな", "雨なら読書をしよう"]
mecab_batch(x)
notebook/mecabwrap - Python Interface to MeCab for Unix and Windows.ipynb
kota7/mecabwrap-py
mit
By default, each string is converted into a list of Token objects. To obtain a more concise outcome, we can supply a converter function for the tokens via the format_func option. format_func must be a function that takes a single Token object and returns the parsed outcome.
# use baseform if exists, otherwise surface
mecab_batch(x, format_func=lambda x: x.baseform or x.surface)
notebook/mecabwrap - Python Interface to MeCab for Unix and Windows.ipynb
kota7/mecabwrap-py
mit
We can keep only certain parts of speech with the pos_filter option. More complex filtering can be achieved with the filter_func option.
mecab_batch(x, format_func=lambda x: x.baseform or x.surface, pos_filter=("名詞", "動詞"))

mecab_batch(x, format_func=lambda x: x.baseform or x.surface, filter_func=lambda x: len(x.surface)==2)
notebook/mecabwrap - Python Interface to MeCab for Unix and Windows.ipynb
kota7/mecabwrap-py
mit
Scikit-learn compatible transformer MecabTokenizer is a scikit-learn compatible transformer that applies mecab_batch to a list of string inputs.
from mecabwrap import MecabTokenizer tokenizer = MecabTokenizer(format_func=lambda x: x.surface) tokenizer.transform(x) from sklearn.pipeline import Pipeline from sklearn.feature_extraction.text import TfidfVectorizer import pandas as pd x = ["明日は晴れるかな", "明日天気になあれ"] p = Pipeline([ ("mecab", MecabTokenizer(format_func=lambda x: x.surface)), ("tfidf", TfidfVectorizer(tokenizer=lambda x: x, lowercase=False)) ]) y = p.fit_transform(x).todense() pd.DataFrame(y, columns=p.steps[-1][-1].get_feature_names())
notebook/mecabwrap - Python Interface to MeCab for Unix and Windows.ipynb
kota7/mecabwrap-py
mit
Note on Python 2 All text inputs are assumed to be unicode. In Python 2, inputs must be u'' strings, not ''. In Python 3, the str type is unicode, so u'' and '' are equivalent.
o1 = do_mecab('すもももももももものうち')   # this works only for python 3
o2 = do_mecab(u'すもももももももものうち')  # this works both for python 2 and 3

print(o1)
print(o2)
notebook/mecabwrap - Python Interface to MeCab for Unix and Windows.ipynb
kota7/mecabwrap-py
mit
Note on dictionary encodings The functions take a mecab_enc option, which indicates the encoding of the MeCab dictionary being used. Usually this can be left at the default value None, so that the encoding is detected automatically. Alternatively, one may specify the encoding explicitly.
# show mecab dict
! mecab -D | grep charset
print()

o1 = do_mecab('日本列島改造論', mecab_enc=None)      # default
print(o1)

o2 = do_mecab('日本列島改造論', mecab_enc='utf-8')   # explicitly specified
print(o2)

#o3 = do_mecab('日本列島改造論', mecab_enc='cp932')  # wrong encoding, fails
notebook/mecabwrap - Python Interface to MeCab for Unix and Windows.ipynb
kota7/mecabwrap-py
mit
Example 1
if testing:
    f = np.array([[1,0,0,0,0,0],
                  [0,0,0,0,0,0],
                  [0,0,0,1,0,0],
                  [0,0,0,0,0,1],
                  [0,0,0,0,0,0]])
    g = polar(f, (6,6))
    print(g)
src/polar.ipynb
robertoalotufo/ia898
mit
Example 2
if testing:
    f = mpimg.imread("../data/cameraman.tif")
    ia.adshow(f, "Figure a) - Original Image")

    g = polar(f, (250,250))
    ia.adshow(g, "Figure b) - Image converted to polar coordinates, 0 to 2*pi")

    g = polar(f, (250,250), np.pi)
    ia.adshow(g, "Figure c) - Image converted to polar coordinates, 0 to pi")
src/polar.ipynb
robertoalotufo/ia898
mit
Example 3 - non square image
if testing:
    f = mpimg.imread('../data/astablet.tif')
    ia.adshow(f, 'original')
    g = polar(f, (256,256))
    ia.adshow(g, 'polar')

    f1 = f.transpose()
    ia.adshow(f1, 'f1: transposed')
    g1 = polar(f1, (256,256))
    ia.adshow(g1, 'polar of f1')
src/polar.ipynb
robertoalotufo/ia898
mit
Generate uniform random distributions based on the number of cells given
cellstosim = [(2,12)]  # ,(2,1140),(3,476),(4,130)]
iterations = 10

for elem in cellstosim:
    dent, cells = elem
    positions = np.zeros(((cells*dent), iterations))
    fname = str(dent) + '_montecarlo_positions_replicates.csv'
    for it in range(0, iterations):
        this = np.reshape(np.random.rand(cells, dent), (1,-1))
        positions[:,it] = this
    np.savetxt(fname, positions, delimiter=',')

positions.shape
statistical_modeling_PYTHON/montecarlo_simulations/montecarlo_denticlepositions.ipynb
ZallenLab/denticleorganization
gpl-3.0
calculate KS test data, and count how many tests pass for each dentincell number (output in summarydata.csv file)
def TestPasses(pval, cutoff): if pval <= cutoff: return 'different' elif pval > cutoff: return 'same' def IndivStatTest(simdata, filename_out): # IN: 3D np array, list of strings with length=arr[X,:,:] (array axis 0), name of csv file test_ks = sps.ks_2samp(invivo_d, simdata) # outputs [ks-statistic, p-value] with open(filename_out, 'a') as f: csv.writer(f).writerows([[column, test_ks[0], test_ks[1], TestPasses(test_ks[1], 0.05)]]) return test_ks[1], TestPasses(test_ks[1], 0.05) dicmap = ['null','A','B','C','D'] invivo_file = 'yw_all_RelativePosition.csv' dentnumbers = [1,2,3,4] invivo_data = pd.read_csv(invivo_file) for dentincell in dentnumbers: # clear out missing data invivo = invivo_data[dicmap[dentincell]] invivo = invivo.replace(0,np.nan) # turn zeros into NaNs invivo = invivo.dropna(how='all') # drop any column (axis=0) or row (axis=1) where ALL values are NaN invivo_d = invivo/100 mcname = str(dentincell)+'_montecarlo_positions_replicates.csv' sfname = 'summarydata.csv' montecarlo = pd.read_csv(mcname,header=None) pf = [] for column in montecarlo: pval, dif = IndivStatTest(montecarlo[column], 'montecarlo_kstests_'+str(dentincell)+'dent.csv') pf.append(dif) pfr = pd.Series(pf) with open(sfname,'a') as f: f.write(str(dentincell) + ',' + str(pfr[pfr == 'same'].count()) + ',\n') pfr = pd.Series(pf) with open(sfname,'a') as f: f.write(str(dentincell) + ',' + str(pfr[pfr == 'same'].count()) + ',\n')
statistical_modeling_PYTHON/montecarlo_simulations/montecarlo_denticlepositions.ipynb
ZallenLab/denticleorganization
gpl-3.0
make basic plots
hist, bins = np.histogram(positions, bins=50)
width = 0.7 * (bins[1] - bins[0])
center = (bins[:-1] + bins[1:]) / 2
plt.bar(center, hist, align='center', width=width)
statistical_modeling_PYTHON/montecarlo_simulations/montecarlo_denticlepositions.ipynb
ZallenLab/denticleorganization
gpl-3.0
pick out first 25 for plotting
dentincell = 1
mcname = str(dentincell) + '_montecarlo_positions_replicates.csv'
mc = pd.read_csv(mcname, header=None)
mc = mc.loc[:, 0:49]
mc.to_csv('25reps_' + mcname)
mc
statistical_modeling_PYTHON/montecarlo_simulations/montecarlo_denticlepositions.ipynb
ZallenLab/denticleorganization
gpl-3.0
Phase 1a: Create predictions specifically for the most difficult facies. At this stage we focus on TP and FP only. Training for facies 9 specifically.
df0 = test_data[test_data['Well Name'] == 'STUART'] df1 = df0.drop(['Formation', 'Well Name', 'Depth'], axis=1) df1a=df0[(np.abs(stats.zscore(df1))<8).all(axis=1)] blind=magic(df1a) df1a.head() features_blind = blind.drop(['Formation', 'Well Name', 'Depth'], axis=1) #============================================================ df0=training_data0.dropna() df1 = df0.drop(['Formation', 'Well Name', 'Depth','Facies'], axis=1) df1a=df0[(np.abs(stats.zscore(df1))<8).all(axis=1)] all1=magic(df1a) X, y = make_balanced_binary(all1, 9,6) #X, y = make_balanced_binary(all1, 9,9) #============================================================ correct_train=y clf = RandomForestClassifier(max_depth = 6, n_estimators=600) clf.fit(X,correct_train) predicted_blind1 = clf.predict(features_blind) predicted_regime9=predicted_blind1.copy() print(sum(predicted_regime9))
MSS_Xmas_Trees/ml_seg_sub5_STU.ipynb
esa-as/2016-ml-contest
apache-2.0
training for facies 1 specifically
features_blind = blind.drop(['Formation', 'Well Name', 'Depth'], axis=1) #============================================================ df0=training_data0.dropna() df1 = df0.drop(['Formation', 'Well Name', 'Depth','Facies'], axis=1) df1a=df0[(np.abs(stats.zscore(df1))<8).all(axis=1)] all1=magic(df1a) X, y = make_balanced_binary(all1, 1,5) #============================================================ #============================================= go_A=StandardScaler().fit_transform(X) go_blind=StandardScaler().fit_transform(features_blind) correct_train_A=binarify(y, 1) clf = linear_model.LogisticRegression() clf.fit(go_A,correct_train_A) predicted_blind1 = clf.predict(go_blind) clf = KNeighborsClassifier(n_neighbors=5) clf.fit(go_A,correct_train_A) predicted_blind2 = clf.predict(go_blind) clf = svm.SVC(decision_function_shape='ovo') clf.fit(go_A,correct_train_A) predicted_blind3 = clf.predict(go_blind) clf = svm.LinearSVC() clf.fit(go_A,correct_train_A) predicted_blind4 = clf.predict(go_blind) ##################################### predicted_blind=predicted_blind1+predicted_blind2+predicted_blind3+predicted_blind4 for ii in range(len(predicted_blind)): if predicted_blind[ii] > 3: predicted_blind[ii]=1 else: predicted_blind[ii]=0 for ii in range(len(predicted_blind)): if predicted_blind[ii] == 1 and predicted_blind[ii-1] == 0 and predicted_blind[ii+1] == 0: predicted_blind[ii]=0 if predicted_blind[ii] == 1 and predicted_blind[ii-1] == 0 and predicted_blind[ii+2] == 0: predicted_blind[ii]=0 if predicted_blind[ii] == 1 and predicted_blind[ii-2] == 0 and predicted_blind[ii+1] == 0: predicted_blind[ii]=0 ##################################### print "-------" predicted_regime1=predicted_blind.copy() print(sum(predicted_regime1))
MSS_Xmas_Trees/ml_seg_sub5_STU.ipynb
esa-as/2016-ml-contest
apache-2.0
training for facies 5 specifically
features_blind = blind.drop(['Formation', 'Well Name', 'Depth'], axis=1) #============================================================ df0=training_data0.dropna() df1 = df0.drop(['Formation', 'Well Name', 'Depth','Facies'], axis=1) df1a=df0[(np.abs(stats.zscore(df1))<8).all(axis=1)] all1=magic(df1a) X, y = make_balanced_binary(all1, 5,10) #X, y = make_balanced_binary(all1, 5,16) #============================================================ go_A=StandardScaler().fit_transform(X) go_blind=StandardScaler().fit_transform(features_blind) correct_train_A=binarify(y, 1) #============================================= clf = KNeighborsClassifier(n_neighbors=4,algorithm='brute') clf.fit(go_A,correct_train_A) predicted_blind1 = clf.predict(go_blind) clf = KNeighborsClassifier(n_neighbors=5,leaf_size=10) clf.fit(go_A,correct_train_A) predicted_blind2 = clf.predict(go_blind) clf = KNeighborsClassifier(n_neighbors=5) clf.fit(go_A,correct_train_A) predicted_blind3 = clf.predict(go_blind) clf = tree.DecisionTreeClassifier() clf.fit(go_A,correct_train_A) predicted_blind4 = clf.predict(go_blind) clf = tree.DecisionTreeClassifier() clf.fit(go_A,correct_train_A) predicted_blind5 = clf.predict(go_blind) clf = tree.DecisionTreeClassifier() clf.fit(go_A,correct_train_A) predicted_blind6 = clf.predict(go_blind) ##################################### predicted_blind=predicted_blind1+predicted_blind2+predicted_blind3+predicted_blind4+predicted_blind5+predicted_blind6 for ii in range(len(predicted_blind)): if predicted_blind[ii] > 5: predicted_blind[ii]=1 else: predicted_blind[ii]=0 ##################################### print "-------" ##################################### print "-------" predicted_regime5=predicted_blind.copy() print(sum(predicted_regime5))
MSS_Xmas_Trees/ml_seg_sub5_STU.ipynb
esa-as/2016-ml-contest
apache-2.0
training for facies 7 specifically
features_blind = blind.drop(['Formation', 'Well Name', 'Depth'], axis=1) #============================================================ df0=training_data0.dropna() df1 = df0.drop(['Formation', 'Well Name', 'Depth','Facies'], axis=1) df1a=df0[(np.abs(stats.zscore(df1))<8).all(axis=1)] all1=magic(df1a) X, y = make_balanced_binary(all1, 7,11) X, y = make_balanced_binary(all1, 7,13) #============================================================ go_A=StandardScaler().fit_transform(X) go_blind=StandardScaler().fit_transform(features_blind) correct_train_A=binarify(y, 1) #============================================= clf = KNeighborsClassifier(n_neighbors=4,algorithm='brute') clf.fit(go_A,correct_train_A) predicted_blind1 = clf.predict(go_blind) clf = KNeighborsClassifier(n_neighbors=5,leaf_size=10) clf.fit(go_A,correct_train_A) predicted_blind2 = clf.predict(go_blind) clf = KNeighborsClassifier(n_neighbors=5) clf.fit(go_A,correct_train_A) predicted_blind3 = clf.predict(go_blind) clf = tree.DecisionTreeClassifier() clf.fit(go_A,correct_train_A) predicted_blind4 = clf.predict(go_blind) clf = tree.DecisionTreeClassifier() clf.fit(go_A,correct_train_A) predicted_blind5 = clf.predict(go_blind) clf = tree.DecisionTreeClassifier() clf.fit(go_A,correct_train_A) predicted_blind6 = clf.predict(go_blind) ##################################### predicted_blind=predicted_blind1+predicted_blind2+predicted_blind3+predicted_blind4+predicted_blind5+predicted_blind6 for ii in range(len(predicted_blind)): if predicted_blind[ii] > 5: predicted_blind[ii]=1 else: predicted_blind[ii]=0 ##################################### print "-------" predicted_regime7=predicted_blind.copy() print(sum(predicted_regime7))
MSS_Xmas_Trees/ml_seg_sub5_STU.ipynb
esa-as/2016-ml-contest
apache-2.0
PHASE Ib Making several predictions using dataset A PREPARE THE BLIND DATA FOR SERIAL MODELLING
# #blindwell='CHURCHMAN BIBLE' #df0 = training_data0[training_data0['Well Name'] == blindwell] #df1 = df0.drop(['Formation', 'Well Name', 'Depth','Facies'], axis=1) #df1a=df0[(np.abs(stats.zscore(df1))<8).all(axis=1)] #blind=magic(df1a) #correct_facies_labels = blind['Facies'].values #features_blind = blind.drop(['Formation', 'Well Name', 'Depth','Facies'], axis=1) #pred_blind=0*correct_facies_labels df0 = test_data[test_data['Well Name'] == 'STUART'] df1 = df0.drop(['Formation', 'Well Name', 'Depth'], axis=1) df1a=df0[(np.abs(stats.zscore(df1))<8).all(axis=1)] blind=magic(df1a) pred_blind=0*predicted_regime7
MSS_Xmas_Trees/ml_seg_sub5_STU.ipynb
esa-as/2016-ml-contest
apache-2.0
PREPARE THE DATA FOR SERIAL MODELLING This could be done more cleverly, but it is basically manual at this point: we select a bias towards the REGIME the blind data has been classified as. For CHURCHMAN BIBLE this is regime 3, for CRAWFORD this is regime 1, and for STUART this is regime 2.
main_regime=regime2A_train other1=regime1A_train other2=regime3A_train other3=regime4A_train main_test=regime2A_test other1_test=regime1A_test other2_test=regime3A_test other3_test=regime4A_test tmp2=[regime1B_train, regime2B_train, regime3B_train, regime4B_train] go_B= pd.concat(tmp2, axis=0) correctB=np.concatenate((regime1B_test, regime2B_test, regime3B_test, regime4B_test)) #=================================================== tmp1=[main_regime, other1, other2, other3] regime_train1= pd.concat(tmp1, axis=0) correctA1=np.concatenate((main_test, other1_test, other2_test, other3_test)) #=================================================== tmp1=[main_regime, other2, other3] regime_train2= pd.concat(tmp1, axis=0) correctA2=np.concatenate((main_test, other2_test, other3_test)) #=================================================== tmp1=[main_regime, other1, other3] regime_train3= pd.concat(tmp1, axis=0) correctA3=np.concatenate((main_test, other1_test, other3_test)) #=================================================== tmp1=[main_regime, other1, other2] regime_train4= pd.concat(tmp1, axis=0) correctA4=np.concatenate((main_test, other1_test, other2_test)) #=================================================== tmp1=[main_regime, other1] regime_train5= pd.concat(tmp1, axis=0) correctA5=np.concatenate((main_test, other1_test)) #=================================================== tmp1=[main_regime, other2] regime_train6= pd.concat(tmp1, axis=0) correctA6=np.concatenate((main_test, other2_test)) #=================================================== tmp1=[main_regime, other3] regime_train7= pd.concat(tmp1, axis=0) correctA7=np.concatenate((main_test, other3_test)) #=================================================== tmp1=[main_regime] regime_train8= pd.concat(tmp1, axis=0) correctA8=main_test
MSS_Xmas_Trees/ml_seg_sub5_STU.ipynb
esa-as/2016-ml-contest
apache-2.0
Phase II: Stacking the predictions from phase Ib. New predictions from data B First prediction of B data without Phase I input:
clf = RandomForestClassifier(max_depth=15, n_estimators=1600, min_samples_leaf=15)
clf.fit(go_B, correctB)
predicted_blind_PHASE_I = clf.predict(features_blind)

#out_f1=metrics.f1_score(correct_facies_labels, predicted_blind_PHASE_I, average = 'micro')
#print "f1 score on the prediction of blind"
#print out_f1

predicted_blind_PHASE_I
MSS_Xmas_Trees/ml_seg_sub5_STU.ipynb
esa-as/2016-ml-contest
apache-2.0
TO DO: some more steps here. Permute facies based on the earlier predictions:
print(sum(predicted_regime5)) predicted_blind_PHASE_IIa=permute_facies_nr(predicted_regime5, predicted_blind_PHASE_II, 5) print(sum(predicted_regime7)) predicted_blind_PHASE_IIb=permute_facies_nr(predicted_regime7, predicted_blind_PHASE_IIa, 7) print(sum(predicted_regime1)) predicted_blind_PHASE_IIc=permute_facies_nr(predicted_regime1, predicted_blind_PHASE_IIb, 1) print(sum(predicted_regime9)) predicted_blind_PHASE_IId=permute_facies_nr(predicted_regime9, predicted_blind_PHASE_IIc, 9) sum(predicted_blind_PHASE_IIa-predicted_blind_PHASE_IId) predicted_STUART=predicted_blind_PHASE_IId predicted_STUART predicted_CRAWFORD= array([8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 5, 5, 5, 7, 7, 7, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 8, 8, 8, 8, 8, 8, 8, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 8, 8, 8, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 3, 3, 6, 8, 8, 6, 6, 6, 6, 6, 6, 6, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 5, 5, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 8, 8, 8, 8, 8, 8, 6, 6, 3, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 1, 1, 1, 2, 2, 2, 2, 8, 8, 8, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 7, 7, 7, 7, 7, 7, 7, 8, 8, 8, 8, 4, 4, 3, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 3, 3, 2, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 6, 2, 2, 2, 2, 2, 2, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 3, 3, 8, 8, 8, 8, 8, 8, 8, 8, 6, 6, 6, 6, 6, 6, 6, 6, 6, 3, 3, 2, 2, 2, 2, 2, 2, 3, 3, 2, 2, 2, 8, 8, 8, 8, 8, 8, 8, 8, 5, 7, 7, 7, 7, 7, 7, 7, 3, 3, 4, 4, 4, 4, 4, 4, 4, 4, 4, 7, 7, 8, 6, 8, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 3, 3, 3, 3, 4, 8, 8, 8, 8, 8, 4, 4, 4, 4, 8, 4, 3, 3, 3, 3, 3, 2, 2, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3])
MSS_Xmas_Trees/ml_seg_sub5_STU.ipynb
esa-as/2016-ml-contest
apache-2.0
Define functions to compute MFCC features from librosa We define functions that extract MFCC features from an audio signal or a file, optionally with their delta-1 and delta-2 coefficients:
def extract_mfcc(signal, sr=16000, n_mfcc=16, n_fft=256, hop_length=128, n_mels = 40, delta_1 = False, delta_2 = False): mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc, n_fft=n_fft, hop_length=hop_length, n_mels=n_mels) if not (delta_1 or delta_2): return mfcc.T feat = [mfcc] if delta_1: mfcc_delta_1 = librosa.feature.delta(mfcc, order=1) feat.append(mfcc_delta_1) if delta_2: mfcc_delta_2 = librosa.feature.delta(mfcc, order=2) feat.append(mfcc_delta_2) return np.vstack(feat).T def file_to_mfcc(filename, sr=16000, **kwargs): signal, sr = librosa.load(filename, sr = sr) return extract_mfcc(signal, sr, **kwargs)
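As a rough usage sketch of the functions above: with both deltas enabled, each frame stacks the 16 MFCCs with their delta-1 and delta-2 coefficients, giving 48 values per frame (the path assumes one of the training files referenced later in this notebook).

```python
# With delta_1 and delta_2 enabled, each frame is 16 * 3 = 48-dimensional.
feat = file_to_mfcc("data/train/speech_1.wav", sr=16000, n_mfcc=16,
                    n_fft=256, hop_length=128, n_mels=40,
                    delta_1=True, delta_2=True)
print(feat.shape)  # (number_of_frames, 48)
```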
multiclass_audio_segmentation.ipynb
amsehili/audio-segmentation-by-classification-tutorial
gpl-3.0
Define a class for multi-class classification using GMMs We define our multi-class classifier that uses sklearn's GMM objects:
class GMMClassifier(): def __init__(self, models): """ models is a dictionary: {"class_of_sound" : GMM_model_for_that_class, ...} """ self.models = models def predict(self, data): result = [] for cls in self.models: llk = self.models[cls].score_samples(data)[0] llk = np.sum(llk) result.append((cls, llk)) """ return classification result as a sorted list of tuples ("class_of_sound", log_likelihood) best class is the first element in the list """ return sorted(result, key=lambda f: - f[1])
multiclass_audio_segmentation.ipynb
amsehili/audio-segmentation-by-classification-tutorial
gpl-3.0
The default auditok frame validator class is AudioEnergyValidator, which only computes the log energy of a given slice of signal (here referred to as a frame or analysis window) and returns True if the result equals or exceeds a certain threshold, and False otherwise. Thus, AudioEnergyValidator is not capable of distinguishing between different classes of sounds such as speech, a cough, or the noise of an electric engine. To build a validator that can track a particular class of sound (e.g. speech or whistle) over an audio stream, we build a validator that uses a more sophisticated tool to decide whether a frame is valid (belongs to the class of interest) or not. A validator that relies on a GMM classifier The following validator encapsulates an instance of the GMMClassifier defined above and checks, for each frame, whether the best label the GMMClassifier returns is the same as its target (i.e. class of interest).
class ClassifierValidator(DataValidator): def __init__(self, classifier, target): """ classifier: a GMMClassifier object target: string """ self.classifier = classifier self.target = target def is_valid(self, data): r = self.classifier.predict(data) return r[0][0] == self.target
multiclass_audio_segmentation.ipynb
amsehili/audio-segmentation-by-classification-tutorial
gpl-3.0
A DataSource class that returns feature vectors Although auditok is meant for audio segmentation, its core class, StreamTokenizer, does not expect a particular type of data (see the API Tutorial for examples that use strings instead of audio data). It just expects an object that has a read() method with no arguments. In the following, we will implement a class that encapsulates an audio stream as a sequence of precomputed audio feature vectors (e.g. MFCC) and returns one vector each time its read() method is called. Furthermore, we want our class to be able to return a vector and its context for a read() call. By context we mean the k previous and k next vectors. This is a valuable feature because, as we will see, for our audio classification problem, GMMs work better if the object to classify contains multiple observations (i.e. vectors) and not only one single vector.
class VectorDataSource(DataSource): def __init__(self, data, scope=0): self.scope = scope self._data = data self._current = 0 def read(self): if self._current >= len(self._data): return None start = self._current - self.scope if start < 0: start = 0 end = self._current + self.scope + 1 self._current += 1 return self._data[start : end] def set_scope(self, scope): self.scope = scope def rewind(self): self._current = 0
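A toy run (the numbers are purely illustrative) shows how the scope widens what each read() call returns:

```python
import numpy as np

toy = VectorDataSource(data=np.arange(8).reshape(-1, 1), scope=2)
print(toy.read())  # rows 0..2: the current vector plus the 2 following ones
print(toy.read())  # rows 0..3: 1 previous, the current, and the 2 following vectors
toy.rewind()       # start reading from the beginning again
```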
multiclass_audio_segmentation.ipynb
amsehili/audio-segmentation-by-classification-tutorial
gpl-3.0
Initialize some global variables In the following we are going to define some global variables:
""" Size of audio window for which MFCC coefficients are calculated """ ANALYSIS_WINDOW = 0.02 # 0.02 second = 20 ms """ Step of ANALYSIS_WINDOW """ ANALYSIS_STEP = 0.01 # 0.01 second overlap between consecutive windows """ number of vectors around the current vector to return. This will cause VectorDataSource.read() method to return a sequence of (SCOPE_LENGTH * 2 + 1) vectors (if enough data is available), with the current vetor in the middle """ SCOPE_LENGTH = 10 """ Number of Mel filters """ MEL_FILTERS = 40 """ Number of MFCC coefficients to keep """ N_MFCC = 16 """ Sampling rate of audio data """ SAMPLING_RATE = 16000 """ ANALYSIS_WINDOW and ANALYSIS_STEP as number of samples """ BLOCK_SIZE = int(SAMPLING_RATE * ANALYSIS_WINDOW) HOP_SIZE = int(SAMPLING_RATE * ANALYSIS_STEP) """ Compute delta and delta-delta of MFCC coefficients ? """ DELTA_1 = True DELTA_2 = True """ Where to find data """ PREFIX = "data/train"
multiclass_audio_segmentation.ipynb
amsehili/audio-segmentation-by-classification-tutorial
gpl-3.0
Train GMM models and initialize validators In the following cell we create our GMM models (one per class of sound) using training audio files. We then create a validator object for each audio class:
train_data = {} train_data["silence"] = ["silence_1.wav", "silence_2.wav", "silence_3.wav"] train_data["speech"] = ["speech_1.wav", "speech_2.wav", "speech_3.wav", "speech_4.wav", "speech_5.wav"] train_data["breath"] = ["breath_1.wav", "breath_2.wav", "breath_3.wav", "breath_4.wav", "breath_5.wav"] train_data["whistle"] = ["whistle_1.wav", "whistle_2.wav", "whistle_3.wav", "whistle_4.wav", "whistle_5.wav"] train_data["wrapping_paper"] = ["wrapping_paper.wav"] train_data["sewing_machine"] = ["sewing_machine.wav"] models = {} # build models for cls in train_data: data = [] for fname in train_data[cls]: data.append(file_to_mfcc(PREFIX + '/' + fname, sr=16000, n_mfcc=N_MFCC, n_fft=BLOCK_SIZE, hop_length=HOP_SIZE, n_mels=MEL_FILTERS, delta_1=DELTA_1, delta_2=DELTA_2)) data = np.vstack(data) print("Class '{0}': {1} training vectors".format(cls, data.shape[0])) mod = GMM(n_components=10) mod.fit(data) models[cls] = mod gmm_classifier = GMMClassifier(models) # create a validator for each sound class silence_validator = ClassifierValidator(gmm_classifier, "silence") speech_validator = ClassifierValidator(gmm_classifier, "speech") breath_validator = ClassifierValidator(gmm_classifier, "breath") whistle_validator = ClassifierValidator(gmm_classifier, "whistle") sewing_machine_validator = ClassifierValidator(gmm_classifier, "sewing_machine") wrapping_paper_validator = ClassifierValidator(gmm_classifier, "wrapping_paper")
multiclass_audio_segmentation.ipynb
amsehili/audio-segmentation-by-classification-tutorial
gpl-3.0
Or load pre-trained GMM models Unfortunately, sklearn's GMM implementation is not deterministic. If you'd prefer to use exactly the same models as mine, run the following cell:
models = {} for cls in ["silence" , "speech", "breath", "whistle", "sewing_machine", "wrapping_paper"]: fp = open("models/%s.gmm" % (cls), "r") models[cls] = pickle.load(fp) fp.close() gmm_classifier = GMMClassifier(models) # create a validator for each sound class silence_validator = ClassifierValidator(gmm_classifier, "silence") speech_validator = ClassifierValidator(gmm_classifier, "speech") breath_validator = ClassifierValidator(gmm_classifier, "breath") whistle_validator = ClassifierValidator(gmm_classifier, "whistle") sewing_machine_validator = ClassifierValidator(gmm_classifier, "sewing_machine") wrapping_paper_validator = ClassifierValidator(gmm_classifier, "wrapping_paper")
multiclass_audio_segmentation.ipynb
amsehili/audio-segmentation-by-classification-tutorial
gpl-3.0
If you want to save your models to disk, run the following code
# if you want to save models
for cls in train_data:
    fp = open("models/%s.gmm" % (cls), "wb")
    pickle.dump(models[cls], fp, pickle.HIGHEST_PROTOCOL)
    fp.close()
multiclass_audio_segmentation.ipynb
amsehili/audio-segmentation-by-classification-tutorial
gpl-3.0
Transform stream to be analyzed into a sequence of vectors We need to transform the audio stream we want to analyze into a sequence of MFCC vectors. We then use the sequence of MFCC vectors to create a VectorDataSource object that will make it possible to read a vector and its surrounding context if required:
# transform audio stream to be analyzed into a sequence of MFCC vectors
# create a DataSource object using MFCC vectors
mfcc_data_source = VectorDataSource(data=file_to_mfcc("data/analysis_stream.wav",
                                                      sr=16000, n_mfcc=N_MFCC,
                                                      n_fft=BLOCK_SIZE, hop_length=HOP_SIZE,
                                                      n_mels=MEL_FILTERS,
                                                      delta_1=DELTA_1, delta_2=DELTA_2),
                                    scope=SCOPE_LENGTH)
multiclass_audio_segmentation.ipynb
amsehili/audio-segmentation-by-classification-tutorial
gpl-3.0
Initialize the tokenizer object We will use the same tokenizer object throughout our tests. We do, however, need to set a different validator to track each particular sound class (examples below).
# create a tokenizer
analysis_window_per_second = 1. / ANALYSIS_STEP

min_seg_length = 0.5  # second, min length of an accepted audio segment
max_seg_length = 10   # seconds, max length of an accepted audio segment
max_silence = 0.3     # second, max length of tolerated continuous signal that's not from the same class

tokenizer = StreamTokenizer(validator=speech_validator,
                            min_length=int(min_seg_length * analysis_window_per_second),
                            max_length=int(max_seg_length * analysis_window_per_second),
                            max_continuous_silence=max_silence * analysis_window_per_second,
                            mode=StreamTokenizer.DROP_TRAILING_SILENCE)
multiclass_audio_segmentation.ipynb
amsehili/audio-segmentation-by-classification-tutorial
gpl-3.0
Read audio signal used for visualization purposes
# read all audio data from stream
wfp = wave.open("data/analysis_stream.wav")
audio_data = wfp.readframes(-1)
width = wfp.getsampwidth()
wfp.close()

# data as numpy array will be used to plot signal
fmt = {1: np.int8, 2: np.int16, 4: np.int32}
signal = np.array(np.frombuffer(audio_data, dtype=fmt[width]), dtype=np.float64)
multiclass_audio_segmentation.ipynb
amsehili/audio-segmentation-by-classification-tutorial
gpl-3.0
Define plot function
%matplotlib inline import matplotlib.pyplot as plt import matplotlib.pylab as pylab pylab.rcParams['figure.figsize'] = 24, 18 def plot_signal_and_segmentation(signal, sampling_rate, segments=[]): _time = np.arange(0., np.ceil(float(len(signal))) / sampling_rate, 1./sampling_rate ) if len(_time) > len(signal): _time = _time[: len(signal) - len(_time)] pylab.subplot(211) for seg in segments: fc = seg.get("fc", "g") ec = seg.get("ec", "b") lw = seg.get("lw", 2) alpha = seg.get("alpha", 0.4) ts = seg["timestamps"] # plot first segmentation outside loop to show one single legend for this class p = pylab.axvspan(ts[0][0], ts[0][1], fc=fc, ec=ec, lw=lw, alpha=alpha, label = seg.get("title", "")) for start, end in ts[1:]: p = pylab.axvspan(start, end, fc=fc, ec=ec, lw=lw, alpha=alpha) pylab.legend(bbox_to_anchor=(0., 1.02, 1., .102), loc=3, borderaxespad=0., fontsize=22, ncol=2) pylab.plot(_time, signal) pylab.xlabel("Time (s)", fontsize=22) pylab.ylabel("Signal Amplitude", fontsize=22) pylab.show()
multiclass_audio_segmentation.ipynb
amsehili/audio-segmentation-by-classification-tutorial
gpl-3.0
Read and plot manual annotations (used for visualization and comparison purposes)
annotations = {} ts = [line.rstrip("\r\n\t ").split(" ") for line in open("data/speech.lst").readlines()] ts = [(float(t[0]), float(t[1])) for t in ts] annotations["speech"] = {"fc" : "r", "ec" : "r", "lw" : 0, "alpha" : 0.4, "title" : "Speech", "timestamps" : ts} ts = [line.rstrip("\r\n\t ").split(" ") for line in open("data/breath.lst").readlines()] ts = [(float(t[0]), float(t[1])) for t in ts] annotations["breath"] = {"fc" : "y", "ec" : "y", "lw" : 0, "alpha" : 0.4, "title" : "Breath", "timestamps" : ts} ts = [line.rstrip("\r\n\t ").split(" ") for line in open("data/whistle.lst").readlines()] ts = [(float(t[0]), float(t[1])) for t in ts] annotations["whistle"] = {"fc" : "m", "ec" : "m", "lw" : 0, "alpha" : 0.4, "title" : "Whistle", "timestamps" : ts} ts = [line.rstrip("\r\n\t ").split(" ") for line in open("data/sewing_machine.lst").readlines()] ts = [(float(t[0]), float(t[1])) for t in ts] annotations["sewing_machine"] = {"fc" : "g", "ec" : "g", "lw" : 0, "alpha" : 0.4, "title" : "Sewing machine", "timestamps" : ts} ts = [line.rstrip("\r\n\t ").split(" ") for line in open("data/wrapping_paper.lst").readlines()] ts = [(float(t[0]), float(t[1])) for t in ts] annotations["wrapping_paper"] = {"fc" : "b", "ec" : "b", "lw" : 0, "alpha" : 0.4, "title" : "Wrapping paper", "timestamps" : ts} def plot_annot(): plot_signal_and_segmentation(signal, SAMPLING_RATE, [annotations["speech"], annotations["breath"], annotations["whistle"], annotations["sewing_machine"], annotations["wrapping_paper"] ]) plot_annot()
multiclass_audio_segmentation.ipynb
amsehili/audio-segmentation-by-classification-tutorial
gpl-3.0
Try out the first segmentation with the sewing_machine class Now, let us start off with a somewhat easy class. The sewing_machine is a good candidate. This sound has strong low-frequency components and weaker high-frequency components that both remain very stable over time. It is easy to distinguish from our other classes, even with absolute frame-level validation (i.e. no context, scope = 0).
tokenizer = StreamTokenizer(validator=speech_validator, min_length= int(0.5 * analysis_window_per_second), max_length=int(15 * analysis_window_per_second), max_continuous_silence= 0.3 * analysis_window_per_second, mode = StreamTokenizer.DROP_TRAILING_SILENCE) tokenizer.validator = sewing_machine_validator mfcc_data_source.rewind() mfcc_data_source.scope = 0 tokens = tokenizer.tokenize(mfcc_data_source) ts = [(t[1] * ANALYSIS_STEP, t[2] * ANALYSIS_STEP) for t in tokens] seg = {"fc" : "g", "ec" : "g", "lw" : 0, "alpha" : 0.3, "title" : "Sewing machine (auto)", "timestamps" : ts} plot_signal_and_segmentation(signal, SAMPLING_RATE, [seg])
multiclass_audio_segmentation.ipynb
amsehili/audio-segmentation-by-classification-tutorial
gpl-3.0
doesn't, please re-run the training to obtain (hopefully) better models, or use the models that worked for me by running the respective cell. Note that we used a scope size of zero. That means that only one single vector is returned by the read() method and evaluated by the is_valid() method. This absolute frame-level classification scheme will not have as much success for less stationary classes such as speech. Let us try the same strategy with the class breath. Track breath with an absolute frame-level scheme We will keep the same tokenizer but set its validator object to breath_validator so that it tracks breath over the stream:
tokenizer.validator = breath_validator mfcc_data_source.rewind() mfcc_data_source.scope = 0 tokens = tokenizer.tokenize(mfcc_data_source) ts = [(t[1] * ANALYSIS_STEP, t[2] * ANALYSIS_STEP) for t in tokens] seg = {"fc" : "y", "ec" : "y", "lw" : 0, "alpha" : 0.4, "title" : "Breath (auto)", "timestamps" : ts} plot_signal_and_segmentation(signal, SAMPLING_RATE, [seg])
multiclass_audio_segmentation.ipynb
amsehili/audio-segmentation-by-classification-tutorial
gpl-3.0
As you can see, this results in a considerable number of false alarms; almost all silence is classified as breath (remember that you can plot the annotations using plot_annot()). The good news is that only silence, and no other class, is wrongly classified as breath. Hence, there are good chances that using another audio feature such as energy would help. Track breath with a larger scope Let us now use a wider scope, so that a vector is evaluated within its context. We will set the scope of our mfcc_data_source to 25. Note that by reading 25 vectors before and after the current vector, we are analyzing audio chunks of 51 * 10 = 510 ms (the analysis step is 10 ms).
mfcc_data_source.rewind() mfcc_data_source.scope = 25 tokens = tokenizer.tokenize(mfcc_data_source) ts = [(t[1] * ANALYSIS_STEP, t[2] * ANALYSIS_STEP) for t in tokens] seg = {"fc" : "y", "ec" : "y", "lw" : 0, "alpha" : 0.4, "title" : "Breath (auto)", "timestamps" : ts} plot_signal_and_segmentation(signal, SAMPLING_RATE, [seg])
multiclass_audio_segmentation.ipynb
amsehili/audio-segmentation-by-classification-tutorial
gpl-3.0
Using a wider scope yields a much better segmentation for the breath class (again, if you are not getting the same result, please load the models that worked for me, trained on the SAME training data). The number of false alarms is greatly reduced. Scope size should however be chosen with care: using very large scopes may lead to poorer temporal precision or more false alarms. Track all classes, this is multi-class segmentation! Now we are going to automatically track all our classes (except silence) within the stream. You might have noticed that the end of the stream contains the most challenging part. It contains 5 juxtaposed sections representing our 5 classes with almost no silence between them. If we intend to use a Segmentation then Classification scheme, an energy-based detector would definitely fail to isolate the five events. Let us see if we can do better with a Segmentation by Classification scheme. As you know, StreamTokenizer objects are binary classifiers. For our multi-class classification problem, we will use as many StreamTokenizer objects as there are sound classes. We will therefore run a tokenizer for each class and then plot the combined results. Although one could use some workarounds to speed up processing (e.g. use a DataSource of precomputed log likelihoods instead of MFCC vectors, etc.), this is not the goal of this tutorial. The following code will plot the automatic segmentation followed by the manual annotation.
segments = [] mfcc_data_source.scope = 25 # track speech mfcc_data_source.rewind() tokenizer.validator = speech_validator tokens = tokenizer.tokenize(mfcc_data_source) speech_ts = [(t[1] * ANALYSIS_STEP, t[2] * ANALYSIS_STEP) for t in tokens] seg = {"fc" : "r", "ec" : "r", "lw" : 0, "alpha" : 0.4, "title" : "Speech (auto)", "timestamps" : speech_ts} segments.append(seg) # track breath mfcc_data_source.rewind() tokenizer.validator = breath_validator tokens = tokenizer.tokenize(mfcc_data_source) breath_ts = [(t[1] * ANALYSIS_STEP, t[2] * ANALYSIS_STEP) for t in tokens] seg = {"fc" : "y", "ec" : "y", "lw" : 0, "alpha" : 0.4, "title" : "Breath (auto)", "timestamps" : breath_ts} segments.append(seg) # track whistle mfcc_data_source.rewind() tokenizer.validator = whistle_validator tokens = tokenizer.tokenize(mfcc_data_source) whistle_ts = [(t[1] * ANALYSIS_STEP, t[2] * ANALYSIS_STEP) for t in tokens] seg = {"fc" : "m", "ec" : "m", "lw" : 0, "alpha" : 0.4, "title" : "Whistle (auto)", "timestamps" : whistle_ts} segments.append(seg) # track sewing_machine mfcc_data_source.rewind() tokenizer.validator = sewing_machine_validator tokens = tokenizer.tokenize(mfcc_data_source) sewing_machine_ts = [(t[1] * ANALYSIS_STEP, t[2] * ANALYSIS_STEP) for t in tokens] seg = {"fc" : "g", "ec" : "g", "lw" : 0, "alpha" : 0.4, "title" : "Sewing machine (auto)", "timestamps" : sewing_machine_ts} segments.append(seg) # track wrapping_paper mfcc_data_source.rewind() tokenizer.validator = wrapping_paper_validator tokens = tokenizer.tokenize(mfcc_data_source) wrapping_paper_ts = [(t[1] * ANALYSIS_STEP, t[2] * ANALYSIS_STEP) for t in tokens] seg = {"fc" : "b", "ec" : "b", "lw" : 0, "alpha" : 0.4, "title" : "Wrapping paper (auto)", "timestamps" : wrapping_paper_ts} segments.append(seg) # plot automatic segmentation plot_signal_and_segmentation(signal, SAMPLING_RATE, segments) # plot manual segmentation plot_annot()
multiclass_audio_segmentation.ipynb
amsehili/audio-segmentation-by-classification-tutorial
gpl-3.0
If it weren't for the breath false alarms, we'd have a perfect automatic output... If you want to play some audio segments, first set this up...
# BufferAudioSource is useful if we want to navigate quickly within audio data and play bas = BufferAudioSource(audio_data, SAMPLING_RATE, width, 1) bas.open() # audio playback requires pyaudio player = player_for(bas)
multiclass_audio_segmentation.ipynb
amsehili/audio-segmentation-by-classification-tutorial
gpl-3.0
Play first instance of wrapping_paper class
start , end = wrapping_paper_ts[0] bas.set_time_position(start) data = bas.read(int((end-start) * bas.get_sampling_rate())) player.play(data)
multiclass_audio_segmentation.ipynb
amsehili/audio-segmentation-by-classification-tutorial
gpl-3.0
Step 1: Fit the Initial Random Forest Just fit every feature with equal weights per the usual random forest code, e.g. RandomForestClassifier in scikit-learn
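For reference, here is a minimal sketch of the plain scikit-learn equivalent of the helper call below. This is only an assumption about what utils.generate_rf_example does; the split proportion, seeds, and estimator count simply mirror the arguments passed to it.

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Load the same toy dataset and hold out 10% of the rows for testing
data = load_breast_cancer()
X_tr, X_te, y_tr, y_te = train_test_split(
    data.data, data.target, train_size=0.9, random_state=2017)

# Fit a very small forest; all features are considered with equal weight
rf_sketch = RandomForestClassifier(n_estimators=3, random_state=2018)
rf_sketch.fit(X_tr, y_tr)
print(rf_sketch.score(X_te, y_te))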
%timeit X_train, X_test, y_train, y_test, rf = utils.generate_rf_example(sklearn_ds = load_breast_cancer() , train_split_propn = 0.9 , n_estimators = 3 , random_state_split = 2017 , random_state_classifier = 2018)
jupyter/backup_deprecated_nbs/10_RIT_initial_setup.ipynb
Yu-Group/scikit-learn-sandbox
mit
Design the single function to get the key tree information Get data from the first three decision trees
tree_dat0 = utils.getTreeData(X_train = X_train, dtree = estimator0, root_node_id = 0) tree_dat1 = utils.getTreeData(X_train = X_train, dtree = estimator1, root_node_id = 0) tree_dat2 = utils.getTreeData(X_train = X_train, dtree = estimator2, root_node_id = 0)
jupyter/backup_deprecated_nbs/10_RIT_initial_setup.ipynb
Yu-Group/scikit-learn-sandbox
mit
Decision Tree 0 (First) - Get output Check the output against the decision tree graph
# Now plot the trees individually utils.draw_tree(decision_tree = estimator0) utils.prettyPrintDict(inp_dict = tree_dat0) # Count the number of samples passing through the leaf nodes sum(tree_dat0['tot_leaf_node_values'])
jupyter/backup_deprecated_nbs/10_RIT_initial_setup.ipynb
Yu-Group/scikit-learn-sandbox
mit
Now we can start setting up the RIT class Overview At its core, the RIT is composed of 3 main modules * FILTERING: subsetting to either the 1's or the 0's * RANDOM SAMPLING: sampling the path-nodes in a weighted manner, with/without replacement, within tree/outside tree * INTERSECTION: intersecting the selected node paths in a systematic manner For now we will just work with the outputs of a single decision tree
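Before diving into the real data structures, a rough stand-alone sketch of one RIT pass may help fix ideas. The path representation, weighting scheme, and numbers below are illustrative assumptions, not the implementation used in this repository.

import numpy as np

# Illustrative decision paths from one tree: each entry records the feature
# indices used along a root-to-leaf path, the leaf's class, and its size
paths = [
    {"features": {0, 3, 7}, "label": 1, "n_samples": 40},
    {"features": {0, 3},    "label": 1, "n_samples": 25},
    {"features": {2, 5},    "label": 0, "n_samples": 60},
    {"features": {0, 7, 9}, "label": 1, "n_samples": 10},
]
rng = np.random.RandomState(2018)

# FILTERING: keep only the paths that end in class-1 leaves
class1_paths = [p for p in paths if p["label"] == 1]

# RANDOM SAMPLING: draw paths with probability proportional to leaf size,
# here with replacement and within a single tree
weights = np.array([p["n_samples"] for p in class1_paths], dtype=float)
weights /= weights.sum()
drawn = rng.choice(len(class1_paths), size=3, replace=True, p=weights)

# INTERSECTION: intersect the sampled feature sets; the surviving features
# form a candidate interaction
interaction = set.intersection(*(class1_paths[i]["features"] for i in drawn))
print(interaction)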
utils.prettyPrintDict(inp_dict = all_rf_outputs['rf_metrics']) all_rf_outputs['dtree0']
jupyter/backup_deprecated_nbs/10_RIT_initial_setup.ipynb
Yu-Group/scikit-learn-sandbox
mit
One-hot Encoding Just like the previous code cell, you will implement a function for preprocessing. This time, you will implement the one_hot_encode function. The input, x, is a list of labels. Implement the function so that it returns the list of labels as a one-hot encoded Numpy array. The possible values of the labels are 0 to 9. The one-hot encoding function should return the same encoding for each value on every call to one_hot_encode. Make sure to save the encoding map outside of the function. Hint: don't reinvent the wheel.
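For a quick illustration of the expected output, here is one possible way to build the encoding with an identity matrix; this is only a sketch, not necessarily the approach used in the graded solution below.

import numpy as np

labels = [0, 3, 9, 3]
# Row i of the 10x10 identity matrix is the one-hot vector for label i
encoded = np.eye(10)[labels]
print(encoded.shape)   # (4, 10)
print(encoded[1])      # [0. 0. 0. 1. 0. 0. 0. 0. 0. 0.]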
from sklearn.preprocessing import OneHotEncoder enc = OneHotEncoder() each_label = np.array(list(range(10))).reshape(-1,1) enc.fit(each_label) print(enc.n_values_) print(enc.feature_indices_) def one_hot_encode(x): """ One hot encode a list of sample labels. Return a one-hot encoded vector for each label. : x: List of sample Labels : return: Numpy array of one-hot encoded labels """ # TODO: Implement Function X = np.array(x).reshape(-1, 1) return enc.transform(X).toarray() """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_one_hot_encode(one_hot_encode)
image-classification/dlnd_image_classification.ipynb
xaibeing/cn-deep-learning
mit
Build the Network For this neural network you will build each layer into a function. Most of the code you have seen so far has lived outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give better feedback and to catch simple mistakes with our unit tests before you submit the project. Note: if you find it hard to set aside enough time each week for this course, we provide a small shortcut for this project. For the next few problems, you may use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except for the layers in the "Convolution and Max Pooling Layers" section. TF Layers is similar to the Keras and TFLearn layers, so it is easy to pick up. However, if you want to get the most out of this course, try to solve all the problems yourself without using any class from the TF Layers package. You can still use classes from other packages that share the same names as those in TF Layers! For example, instead of the TF Layers version of the conv2d class, tf.layers.conv2d, you can use the TF Neural Network version of conv2d, tf.nn.conv2d. Let's get started! Input The neural network needs to read the image data, the one-hot encoded labels, and the dropout keep probability. Implement the following functions: implement neural_net_image_input to return a TF Placeholder, set the shape using image_shape with the batch size set to None, and name the TensorFlow placeholder "x" using the TensorFlow name parameter of the TF Placeholder; implement neural_net_label_input to return a TF Placeholder, set the shape using n_classes with the batch size set to None, and name the TensorFlow placeholder "y" using the TensorFlow name parameter of the TF Placeholder; implement neural_net_keep_prob_input to return a TF Placeholder for the dropout keep probability, and name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter of the TF Placeholder. These names will be used at the end of the project to load your saved model. Note: None in TensorFlow means the shape can be a dynamic size.
import tensorflow as tf def neural_net_image_input(image_shape): """ Return a Tensor for a batch of image input : image_shape: Shape of the images : return: Tensor for image input. """ # TODO: Implement Function return tf.placeholder(tf.float32, shape=[None, image_shape[0], image_shape[1], image_shape[2]], name='x') def neural_net_label_input(n_classes): """ Return a Tensor for a batch of label input : n_classes: Number of classes : return: Tensor for label input. """ # TODO: Implement Function return tf.placeholder(tf.float32, shape=[None, n_classes], name='y') def neural_net_keep_prob_input(): """ Return a Tensor for keep probability : return: Tensor for keep probability. """ # TODO: Implement Function return tf.placeholder(tf.float32, shape=None, name='keep_prob') """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tf.reset_default_graph() tests.test_nn_image_inputs(neural_net_image_input) tests.test_nn_label_inputs(neural_net_label_input) tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
image-classification/dlnd_image_classification.ipynb
xaibeing/cn-deep-learning
mit
Convolution and Max Pooling Layer Convolutional layers are well suited for processing images. For this code cell, you should implement the function conv2d_maxpool to apply convolution and then max pooling: create the weights and bias using conv_ksize, conv_num_outputs, and the shape of x_tensor; apply a convolution to x_tensor using the weights and conv_strides (we recommend the suggested padding, but you may of course use any other padding); add the bias; add a nonlinear activation to the convolution; apply max pooling using pool_ksize and pool_strides (again, we recommend the suggested padding, but any other padding works too). Note: for this layer, do not use TensorFlow Layers or TensorFlow Layers (contrib), but you may still use TensorFlow's Neural Network package. For all the other layers you can still use the shortcut.
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides): """ Apply convolution then max pooling to x_tensor :param x_tensor: TensorFlow Tensor :param conv_num_outputs: Number of outputs for the convolutional layer :param conv_ksize: kernal size 2-D Tuple for the convolutional layer :param conv_strides: Stride 2-D Tuple for convolution :param pool_ksize: kernal size 2-D Tuple for pool :param pool_strides: Stride 2-D Tuple for pool : return: A tensor that represents convolution and max pooling of x_tensor """ # TODO: Implement Function weights = tf.Variable(tf.truncated_normal([conv_ksize[0], conv_ksize[1], x_tensor.get_shape().as_list()[-1], conv_num_outputs], stddev=0.1)) biases = tf.Variable(tf.zeros([conv_num_outputs])) net = tf.nn.conv2d(x_tensor, weights, [1, conv_strides[0], conv_strides[1], 1], 'SAME') net = tf.nn.bias_add(net, biases) net = tf.nn.relu(net) pool_kernel = [1, pool_ksize[0], pool_ksize[1], 1] pool_strides = [1, pool_strides[0], pool_strides[1], 1] net = tf.nn.max_pool(net, pool_kernel, pool_strides, 'VALID') return net """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_con_pool(conv2d_maxpool)
image-classification/dlnd_image_classification.ipynb
xaibeing/cn-deep-learning
mit
Flatten Layer Implement the flatten function to change the dimensions of x_tensor from a 4-D tensor to a 2-D tensor. The output should have the shape (Batch Size, Flattened Image Size). Shortcut: for this layer, you may use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages. If you want a bigger challenge, use only other TensorFlow packages.
import numpy as np def flatten(x_tensor): """ Flatten x_tensor to (Batch Size, Flattened Image Size) : x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions. : return: A tensor of size (Batch Size, Flattened Image Size). """ # TODO: Implement Function shape = x_tensor.get_shape().as_list() dim = np.prod(shape[1:]) x_tensor = tf.reshape(x_tensor, [-1,dim]) return x_tensor """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_flatten(flatten)
image-classification/dlnd_image_classification.ipynb
xaibeing/cn-deep-learning
mit
Fully Connected Layer Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut: for this layer, you may use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages. If you want a bigger challenge, use only other TensorFlow packages.
def fully_conn(x_tensor, num_outputs): """ Apply a fully connected layer to x_tensor using weight and bias : x_tensor: A 2-D tensor where the first dimension is batch size. : num_outputs: The number of output that the new tensor should be. : return: A 2-D tensor where the second dimension is num_outputs. """ # TODO: Implement Function weights = tf.Variable(tf.truncated_normal(shape=[x_tensor.get_shape().as_list()[-1], num_outputs], mean=0, stddev=1)) biases = tf.Variable(tf.zeros(shape=[num_outputs])) net = tf.nn.bias_add(tf.matmul(x_tensor, weights), biases) net = tf.nn.relu(net) return net """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_fully_conn(fully_conn)
image-classification/dlnd_image_classification.ipynb
xaibeing/cn-deep-learning
mit
Output Layer Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut: for this layer, you may use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages. If you want a bigger challenge, use only other TensorFlow packages. Note: this layer should not apply an activation, softmax, or cross entropy.
def output(x_tensor, num_outputs): """ Apply a output layer to x_tensor using weight and bias : x_tensor: A 2-D tensor where the first dimension is batch size. : num_outputs: The number of output that the new tensor should be. : return: A 2-D tensor where the second dimension is num_outputs. """ # TODO: Implement Function weights = tf.Variable(tf.truncated_normal(shape=[x_tensor.get_shape().as_list()[-1], num_outputs], mean=0, stddev=1)) biases = tf.Variable(tf.zeros(shape=[num_outputs])) net = tf.nn.bias_add(tf.matmul(x_tensor, weights), biases) return net """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_output(output)
image-classification/dlnd_image_classification.ipynb
xaibeing/cn-deep-learning
mit
Create a Convolutional Model Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to build this model: apply 1, 2, or 3 Convolution and Max Pool layers; apply a Flatten Layer; apply 1, 2, or 3 Fully Connected Layers; apply an Output Layer; return the output; and apply TensorFlow's Dropout to one or more layers of the model using keep_prob.
def conv_net(x, keep_prob): """ Create a convolutional neural network model : x: Placeholder tensor that holds image data. : keep_prob: Placeholder tensor that hold dropout keep probability. : return: Tensor that represents logits """ # TODO: Apply 1, 2, or 3 Convolution and Max Pool layers # Play around with different number of outputs, kernel size and stride # Function Definition from Above: # conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides) # net = conv2d_maxpool(x, 32, (5,5), (1,1), (2,2), (2,2)) # net = tf.nn.dropout(net, keep_prob) # net = conv2d_maxpool(net, 32, (5,5), (1,1), (2,2), (2,2)) # net = conv2d_maxpool(net, 64, (5,5), (1,1), (2,2), (2,2)) net = conv2d_maxpool(x, 32, (3,3), (1,1), (2,2), (2,2)) net = tf.nn.dropout(net, keep_prob) net = conv2d_maxpool(net, 64, (3,3), (1,1), (2,2), (2,2)) net = tf.nn.dropout(net, keep_prob) net = conv2d_maxpool(net, 64, (3,3), (1,1), (2,2), (2,2)) # TODO: Apply a Flatten Layer # Function Definition from Above: # flatten(x_tensor) net = flatten(net) # TODO: Apply 1, 2, or 3 Fully Connected Layers # Play around with different number of outputs # Function Definition from Above: # fully_conn(x_tensor, num_outputs) net = fully_conn(net, 64) # TODO: Apply an Output Layer # Set this to the number of classes # Function Definition from Above: # output(x_tensor, num_outputs) net = output(net, enc.n_values_) # TODO: return output return net """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ ############################## ## Build the Neural Network ## ############################## # Remove previous weights, bias, inputs, etc.. tf.reset_default_graph() # Inputs x = neural_net_image_input((32, 32, 3)) y = neural_net_label_input(10) keep_prob = neural_net_keep_prob_input() # Model logits = conv_net(x, keep_prob) # Name logits Tensor, so that is can be loaded from disk after training logits = tf.identity(logits, name='logits') # Loss and Optimizer cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y)) optimizer = tf.train.AdamOptimizer().minimize(cost) # Accuracy correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1)) accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy') tests.test_conv_net(conv_net)
image-classification/dlnd_image_classification.ipynb
xaibeing/cn-deep-learning
mit
Train the Neural Network Single Optimization Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session, with a feed_dict of the following: x for image input, y for labels, keep_prob for the dropout keep probability. This function will be called for each batch, so tf.global_variables_initializer() has already been called. Note: nothing needs to be returned. This function is only optimizing the neural network.
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch): """ Optimize the session on a batch of images and labels : session: Current TensorFlow session : optimizer: TensorFlow optimizer function : keep_probability: keep probability : feature_batch: Batch of Numpy image data : label_batch: Batch of Numpy label data """ # TODO: Implement Function _ = session.run([optimizer, cost, accuracy], feed_dict={x: feature_batch, y: label_batch, keep_prob: keep_probability}) """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_train_nn(train_neural_network)
image-classification/dlnd_image_classification.ipynb
xaibeing/cn-deep-learning
mit
Show Stats Implement the function print_stats to print the loss and the validation accuracy. Use the global variables valid_features and valid_labels to compute the validation accuracy. Use a keep probability of 1.0 to compute the loss and validation accuracy.
def print_stats(session, feature_batch, label_batch, cost, accuracy): """ Print information about loss and validation accuracy : session: Current TensorFlow session : feature_batch: Batch of Numpy image data : label_batch: Batch of Numpy label data : cost: TensorFlow cost function : accuracy: TensorFlow accuracy function """ # TODO: Implement Function valid_loss, valid_accuracy = session.run([cost, accuracy], feed_dict={x: valid_features, y: valid_labels, keep_prob: 1}) print("valid loss {:.3f}, accuracy {:.3f}".format(valid_loss, valid_accuracy))
image-classification/dlnd_image_classification.ipynb
xaibeing/cn-deep-learning
mit
Hyperparameters Tune the following parameters: * Set epochs to the number of iterations until the network stops learning or starts overfitting * Set batch_size to the highest number that your machine's memory allows; most people set it to one of the common memory sizes: 64, 128, 256, ... * Set keep_probability to the probability of keeping a node when using dropout
# TODO: Tune Parameters epochs = 100 batch_size = 256 keep_probability = 0.8
image-classification/dlnd_image_classification.ipynb
xaibeing/cn-deep-learning
mit
Data on a (unevenly spaced) grid: $c = f(x_i,z_j)$ When $c$ is defined for all points on the grid
# Fabricate a dataset: x=np.sort(1000*np.random.random(10)) z=np.sort(5*np.random.random(5)) scalar=np.random.random( (len(x),len(z)) ) scalar.sort(axis=1) scalar.sort(axis=0) # Package into data array transect_data = xr.DataArray(scalar, coords=[ ('x',x), ('z',z)]) fig,axs=plt.subplots(2,1,sharex=True,sharey=True) Y,X = np.meshgrid(transect_data.z,transect_data.x) axs[0].scatter(X,Y,30,scalar,cmap=cmap) axs[0].set_title('Point Data') coll=plot_utils.transect_tricontourf(transect_data,ax=axs[1],V=20, cmap=cmap, xcoord='x', ycoord='z') axs[1].set_title('Contoured') ;
examples/transects_0.ipynb
rustychris/stompy
mit
Partial data on a (unevenly spaced) grid: $c = f(x_i,z_j)$ When $c$ is np.nan for some $(x_i,z_j)$.
# Have to specify the limits of the contours now. # fabricate unevenly spaced, monotonic x,z variables x=np.sort(1000*np.random.random(10)) z=np.sort(5*np.random.random(5)) scalar=np.random.random( (len(x),len(z)) ) scalar.sort(axis=0) ; scalar.sort(axis=1) # Randomly drop about 20% of the bottom of each profile mask = np.sort(np.random.random( (10,5) ),axis=1) < 0.2 scalar[mask]=np.nan # also supports masked array: # scalar=np.ma.masked_array(scalar,mask=mask) # Same layout for the DataArray, but now scalar is missing some data. transect_data = xr.DataArray(scalar, coords=[ ('x',x), ('z',z)]) fig,axs=plt.subplots(2,1,sharex=True,sharey=True) Y,X = np.meshgrid(transect_data.z,transect_data.x) axs[0].scatter(X,Y,30,scalar,cmap=cmap) axs[0].set_title('Point Data') coll=plot_utils.transect_tricontourf(transect_data,ax=axs[1],V=np.linspace(0,1,20), cmap=cmap, xcoord='x', ycoord='z') axs[1].set_title('Contoured') ;
examples/transects_0.ipynb
rustychris/stompy
mit
Per-profile vertical coordinate: $c = f(x_i,z_{(i,j)})$ In other words, the data are composed of profiles and each profile has a constant $x$ but its own $z$ coordinate. This example also shows how Datasets can be used to organize multiple variables, and how pulling out a variable into a DataArray brings the coordinate variables along with it.
# fabricate unevenly spaced, monotonic x variable x=np.linspace(0,10000,10) + 500*np.random.random(10) # vertical coordinate is now a 2D variable, ~ (profile,sample) cast_z=np.sort(-5*np.random.random((10,5)),axis=1) cast_z[:,-1]=0 scalar=np.sort( np.random.random( cast_z.shape),axis=1) ds=xr.Dataset() ds['x']=('x',x) ds['cast_z']=( ('x','z'),cast_z) ds['scalar']=( ('x','z'), scalar ) ds=ds.set_coords( ['x','cast_z']) transect_data = ds['scalar'] fig,axs=plt.subplots(2,1,sharex=True,sharey=True) Y=transect_data.cast_z X=np.ones_like(transect_data.cast_z) * transect_data.x.values[:,None] axs[0].scatter(X,Y,30,scalar,cmap=cmap) axs[0].set_title('Point Data') coll=plot_utils.transect_tricontourf(transect_data,ax=axs[1],V=np.linspace(0,1,20), cmap=cmap, xcoord='x', ycoord='cast_z') axs[1].set_title('Interpolated') ;
examples/transects_0.ipynb
rustychris/stompy
mit
High-order interpolation Same data "shape" as above, but when the data are sufficiently well-behaved, it is possible to use a high-order interpolation. This also introduces access to the underlying triangulation object, for more detailed plotting and interpolation. The plot shows the smoothed, interpolated field, as well as the original triangulation and the refined triangulation.
import matplotlib.tri as mtri # fabricate unevenly spaced, monotonic x variable # Make the points more evenly spaced to x=np.linspace(0,10000,10) + 500*np.random.random(10) # vertical coordinate is now a 2D variable, ~ (cast,sample) cast_z=np.linspace(0,1,5)[None,:] + 0.1*np.random.random((10,5)) cast_z=np.sort(-5*cast_z,axis=1) cast_z[:,-1]=0 scalar=np.sort(np.sort(np.random.random( cast_z.shape),axis=0),axis=1) ds=xr.Dataset() ds['x']=('x',x) ds['cast_z']=( ('x','z'),cast_z) ds['scalar']=( ('x','z'), scalar ) ds=ds.set_coords( ['x','cast_z']) transect_data = ds['scalar'] fig,ax=plt.subplots(figsize=(10,7)) tri,mapper=plot_utils.transect_to_triangles(transect_data,xcoord='x',ycoord='cast_z') # This only works with relatively smooth data! refiner = mtri.UniformTriRefiner(tri) tri_refi, z_refi = refiner.refine_field(mapper(transect_data.values), subdiv=2) plt.tricontourf(tri_refi, z_refi, levels=np.linspace(0,1,20), cmap=cmap) # Show how the interpolation is constructed: ax.triplot(tri_refi,color='k',lw=0.3,alpha=0.5) ax.triplot(tri,color='k',lw=0.7,alpha=0.5) ax.set_title('Refined interpolation') ;
examples/transects_0.ipynb
rustychris/stompy
mit
Add Master, Solution Species and Phases by executing PHREEQC input code
pp.ip.run_string(""" SOLUTION_MASTER_SPECIES N(-3) NH4+ 0.0 N SOLUTION_SPECIES NH4+ = NH3 + H+ log_k -9.252 delta_h 12.48 kcal -analytic 0.6322 -0.001225 -2835.76 NO3- + 10 H+ + 8 e- = NH4+ + 3 H2O log_k 119.077 delta_h -187.055 kcal -gamma 2.5000 0.0000 PHASES NH3(g) NH3 = NH3 log_k 1.770 delta_h -8.170 kcal """)
examples/4. Gas/7. Gas-Phase Calculations.ipynb
VitensTC/phreeqpython
apache-2.0
Run Calculation
# add empty solution 1 solution1 = pp.add_solution({}) # equalize solution 1 with Calcite and CO2 solution1.equalize(['Calcite', 'CO2(g)'], [0,-1.5]) # create a fixed pressure gas phase fixed_pressure = pp.add_gas({ 'CO2(g)': 0, 'CH4(g)': 0, 'N2(g)': 0, 'H2O(g)': 0, }, pressure=1.1, fixed_pressure=True) # create a fixed volume gas phase fixed_volume = pp.add_gas({ 'CO2(g)': 0, 'CH4(g)': 0, 'N2(g)': 0, 'H2O(g)': 0, }, volume=23.19, fixed_pressure=False, fixed_volume=True, equilibrate_with=solution1) mmol = [1, 2, 3, 4, 8, 16, 32, 64, 125, 250, 500, 1000] # instantiate result lists fp_vol = []; fp_pres = []; fp_frac = []; fv_vol = []; fv_pres = []; fv_frac = [] for m in mmol: sol = solution1.copy() fp = fixed_pressure.copy() # equlibriate with solution sol.add('CH2O(NH3)0.07', m, 'mmol') sol.interact(fp) fp_vol.append(fp.volume) fp_pres.append(fp.pressure) fp_frac.append(fp.partial_pressures) sol.forget(); fp.forget() # clean up solutions after use sol = solution1.copy() fv = fixed_volume.copy() sol.add('CH2O(NH3)0.07', m, 'mmol') sol.interact(fv) fv_vol.append(fv.volume) fv_pres.append(fv.pressure) fv_frac.append(fv.partial_pressures) sol.forget(); fv.forget() # clean up solutions after use
examples/4. Gas/7. Gas-Phase Calculations.ipynb
VitensTC/phreeqpython
apache-2.0
Total Gas Pressure and Volume
plt.figure(figsize=[8,5]) # create two y axes ax1 = plt.gca() ax2 = ax1.twinx() # plot pressures ax1.plot(mmol, np.log10(fp_pres), 'x-', color='tab:purple', label='Fixed_P - Pressure') ax1.plot(mmol, np.log10(fv_pres), 's-', color='tab:purple', label='Fixed_V - Pressure') # add dummy handlers for legend ax1.plot(np.nan, np.nan, 'x-', color='tab:blue', label='Fixed_P - Volume') ax1.plot(np.nan, np.nan, 's-', color='tab:blue', label='Fixed_V - Volume') # plot volumes ax2.plot(mmol, fp_vol, 'x-') ax2.plot(mmol, fv_vol, 's-', color='tab:blue') # set log scale to both y axes ax2.set_xscale('log') ax2.set_yscale('log') # set axes limits ax1.set_xlim([1e0, 1e3]) ax2.set_xlim([1e0, 1e3]) ax1.set_ylim([-5,1]) ax2.set_ylim([1e-3,1e5]) # add legend and gridlines ax1.legend(loc=4) ax1.grid() # set labels ax1.set_xlabel('Organic matter reacted, in millimoles') ax1.set_ylabel('Log(Pressure, in atmospheres)') ax2.set_ylabel('Volume, in liters)')
examples/4. Gas/7. Gas-Phase Calculations.ipynb
VitensTC/phreeqpython
apache-2.0
Fixed Pressure Gas Composition
fig = plt.figure(figsize=[16,5]) # plot fixed pressure gas composition fig.add_subplot(1,2,1) pd.DataFrame(fp_frac, index=mmol).apply(np.log10)[2:].plot(style='-x', ax=plt.gca()) plt.title('Fixed Pressure gas composition') plt.xscale('log') plt.ylim([-5,1]) plt.grid() plt.xlim(1e0, 1e3) plt.xlabel('Organic matter reacted, in millimoles') plt.ylabel('Log(Partial pressure, in atmospheres)') # plot fixed volume gas composition fig.add_subplot(1,2,2) pd.DataFrame(fv_frac, index=mmol).apply(np.log10).plot(style='-o', ax=plt.gca()) plt.title('Fixed Volume gas composition') plt.xscale('log') plt.xlabel('Organic matter reacted, in millimoles') plt.ylabel('Log(Partial pressure, in atmospheres)') plt.grid() plt.ylim([-5,1])
examples/4. Gas/7. Gas-Phase Calculations.ipynb
VitensTC/phreeqpython
apache-2.0
Creating an External Data Source Object Now we need to create a special ExternalDataSource object that refers to the data, which can, in turn, be used as a table in our BigQuery queries. We need to provide a schema for BigQuery to use the data. The CSV file has a header row that we want to skip; we will use a CSVOptions object to do this.
options = bq.CSVOptions(skip_leading_rows=1) # Skip the header row schema = bq.Schema([ {'name': 'id', 'type': 'INTEGER'}, # row ID {'name': 'name', 'type': 'STRING'}, # friendly name {'name': 'terminal', 'type': 'STRING'}, # terminal ID {'name': 'lat', 'type': 'FLOAT'}, # latitude {'name': 'long', 'type': 'FLOAT'}, # longitude {'name': 'dockcount', 'type': 'INTEGER'}, # bike capacity {'name': 'online', 'type': 'STRING'} # date station opened ]) drivedata = bq.ExternalDataSource(source=sample_object.uri, # The gs:// URL of the file csv_options=options, schema=schema, max_bad_records=10) drivedata
tutorials/BigQuery/Using External Tables from BigQuery.ipynb
googledatalab/notebooks
apache-2.0
Querying the Table Now let's verify that we can access the data. We will run a simple query to show the first five rows. Note that we specify the federated table by using a name in the query, and then pass the table in using a data_sources dictionary parameter.
bq.Query('SELECT * FROM drivedatasource LIMIT 5', data_sources={'drivedatasource': drivedata}).execute().result()
tutorials/BigQuery/Using External Tables from BigQuery.ipynb
googledatalab/notebooks
apache-2.0
Finally, let's clean up.
sample_object.delete() sample_bucket.delete()
tutorials/BigQuery/Using External Tables from BigQuery.ipynb
googledatalab/notebooks
apache-2.0
Mix SQLite and DataFrame When a dataset is huge (~3 GB), it takes some time to load it into a DataFrame, and it is difficult to inspect it in any tool (Python, Excel, ...). One option I usually take is to load it into a SQL server if you have one. If you do not, then SQLite is the best option. Let's see how it works with a custom dataset.
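The general idea is a one-off import of the flat file into a SQLite database, then querying only the rows you need. Here is a minimal sketch using plain pandas and sqlite3 (the file and table names refer to the sample downloaded in the next cell, and this is not necessarily the helper used later in this notebook); for a genuinely huge file the chunked read keeps memory usage low.

import sqlite3
import pandas as pd

con = sqlite3.connect("velib_vanves.db3")
# Stream the flat file into SQLite chunk by chunk instead of loading it all at once
for chunk in pd.read_csv("velib_vanves.txt", sep="\t", chunksize=100000):
    chunk.to_sql("velib", con, if_exists="append", index=False)
con.close()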
import pyensae import pyensae.datasource pyensae.datasource.download_data("velib_vanves.zip", website = "xd")
_doc/notebooks/pyensae_flat2db3.ipynb
sdpython/pyensae
mit
As this file is small (it is just an example), let's see what it looks like with a DataFrame.
import pandas df = pandas.read_csv("velib_vanves.txt",sep="\t") df.head(n=2)
_doc/notebooks/pyensae_flat2db3.ipynb
sdpython/pyensae
mit