Example: read a CSV file in chunks of three rows, sum the 'white' column of each chunk, and collect the per-chunk sums in a Series.
out = pd.Series(dtype=float)
pieces = pd.read_csv('myCSV_01.csv', chunksize=3)
for i, piece in enumerate(pieces):
    print(piece)
    out.loc[i] = piece['white'].sum()  # Series.set_value is deprecated; use .loc
out
Data_Analytics_in_Action/pandasIO.ipynb
gaufung/Data_Analytics_Learning_Note
mit
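The chunked accumulation above can be sketched self-contained with an in-memory CSV; the column values below are made up for illustration and stand in for 'myCSV_01.csv':

```python
import io

import pandas as pd

# Hypothetical inline CSV standing in for 'myCSV_01.csv'
csv_text = "white,red\n1,2\n3,4\n5,6\n7,8\n9,10\n11,12\n"

out = pd.Series(dtype=float)
for i, piece in enumerate(pd.read_csv(io.StringIO(csv_text), chunksize=3)):
    out.loc[i] = piece['white'].sum()  # one partial sum per 3-row chunk

print(out.tolist())  # [9.0, 27.0]
```

Each chunk is a regular DataFrame, so any aggregation works per chunk without loading the whole file into memory.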
Writing to files: to_csv(filename); to_csv(filename, index=False, header=False); to_csv(filename, na_rep='NaN'). Reading and writing HTML files — writing an HTML file
frame = pd.DataFrame(np.arange(4).reshape((2, 2)))
print(frame.to_html())
Data_Analytics_in_Action/pandasIO.ipynb
gaufung/Data_Analytics_Learning_Note
mit
Creating a more complex DataFrame
frame = pd.DataFrame(np.random.random((4, 4)),
                     index=['white', 'black', 'red', 'blue'],
                     columns=['up', 'down', 'left', 'right'])
frame
s = ['<HTML>']
s.append('<HEAD><TITLE>MY DATAFRAME</TITLE></HEAD>')
s.append('<BODY>')
s.append(frame.to_html())
s.append('</BODY></HTML>')
html = ''.join(s)
with open('myFrame.html', 'w') as html_file:
    html_file.write(html)
Data_Analytics_in_Action/pandasIO.ipynb
gaufung/Data_Analytics_Learning_Note
mit
Reading HTML tables
web_frames = pd.read_html('myFrame.html')
web_frames[0]
# a URL can also be passed directly
ranking = pd.read_html('http://www.meccanismocomplesso.org/en/meccanismo-complesso-sito-2/classifica-punteggio/')
ranking[0]
Data_Analytics_in_Action/pandasIO.ipynb
gaufung/Data_Analytics_Learning_Note
mit
Reading and writing XML files, using the third-party library lxml
from lxml import objectify

xml = objectify.parse('books.xml')
xml
root = xml.getroot()
root.Book.Author
root.Book.PublishDate
root.getchildren()
[child.tag for child in root.Book.getchildren()]
[child.text for child in root.Book.getchildren()]

def etree2df(root):
    column_names = [child.tag for child in root.getchildren()[0].getchildren()]
    rows = []
    for node in root.getchildren():
        texts = [child.text for child in node.getchildren()]
        rows.append(dict(zip(column_names, texts)))
    # DataFrame.append was removed in pandas 2.0; build the frame in one call
    return pd.DataFrame(rows, columns=column_names)

etree2df(root)
Data_Analytics_in_Action/pandasIO.ipynb
gaufung/Data_Analytics_Learning_Note
mit
Reading and writing Excel files
pd.read_excel('data.xlsx')
pd.read_excel('data.xlsx', 'Sheet2')
frame = pd.DataFrame(np.random.random((4, 4)),
                     index=['exp1', 'exp2', 'exp3', 'exp4'],
                     columns=['Jan2015', 'Feb2015', 'Mar2015', 'Apr2015'])
frame
frame.to_excel('data2.xlsx')
Data_Analytics_in_Action/pandasIO.ipynb
gaufung/Data_Analytics_Learning_Note
mit
JSON data
frame = pd.DataFrame(np.arange(16).reshape((4, 4)),
                     index=['white', 'black', 'red', 'blue'],
                     columns=['up', 'down', 'right', 'left'])
frame.to_json('frame.json')
# read the JSON back
pd.read_json('frame.json')
Data_Analytics_in_Action/pandasIO.ipynb
gaufung/Data_Analytics_Learning_Note
mit
HDF5 data: HDF (Hierarchical Data Format) files store hierarchical data in a binary format.
from pandas import HDFStore  # pandas.io.pytables is a private module

store = HDFStore('mydata.h5')
store['obj1'] = frame
store['obj1']
Data_Analytics_in_Action/pandasIO.ipynb
gaufung/Data_Analytics_Learning_Note
mit
Pickle data
frame.to_pickle('frame.pkl')
pd.read_pickle('frame.pkl')
Data_Analytics_in_Action/pandasIO.ipynb
gaufung/Data_Analytics_Learning_Note
mit
Database connections, using sqlite3 as an example
frame = pd.DataFrame(np.arange(20).reshape((4, 5)),
                     columns=['white', 'red', 'blue', 'black', 'green'])
frame
from sqlalchemy import create_engine
engine = create_engine('sqlite:///foo.db')
frame.to_sql('colors', engine)
pd.read_sql('colors', engine)
Data_Analytics_in_Action/pandasIO.ipynb
gaufung/Data_Analytics_Learning_Note
mit
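The same round-trip can be sketched self-contained with only the standard library's sqlite3 driver, since pandas also accepts a plain DBAPI connection for SQLite; the table name and values below are made up:

```python
import sqlite3

import pandas as pd

frame = pd.DataFrame({'white': [0, 5, 10], 'red': [1, 6, 11]})

# sqlite3's DBAPI connection works directly with to_sql / read_sql_query
with sqlite3.connect(':memory:') as conn:
    frame.to_sql('colors', conn, index=False)
    result = pd.read_sql_query('SELECT white, red FROM colors WHERE white > 0', conn)

print(result['white'].tolist())  # [5, 10]
```

read_sql_query lets you push filtering into SQL instead of loading the whole table first.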
Craigslist houses for sale Look on the Craigslist website, select relevant search criteria, and then take a look at the web address: Houses for sale in the East Bay: http://sfbay.craigslist.org/search/eby/rea?housing_type=6 Houses for sale in selected neighborhoods in the East Bay: http://sfbay.craigslist.org/search/eby/rea?nh=46&nh=47&nh=48&nh=49&nh=112&nh=54&nh=55&nh=60&nh=62&nh=63&nh=66&housing_type=6 General Procedure
```python
# Get the data using the requests module
url = 'http://sfbay.craigslist.org/search/eby/rea?housing_type=6'
resp = requests.get(url)
# BeautifulSoup can quickly parse the text; specify that the text is HTML
txt = bs4(resp.text, 'html.parser')
```
House entries Looked through the output via print(txt.prettify()) to display the HTML in a more readable way and note the structure of housing listings. Housing entries are contained in <p class="row"> tags: houses = txt.find_all('p', attrs={'class': 'row'}) Get data from multiple pages on Craigslist First page: url = 'http://sfbay.craigslist.org/search/eby/rea?housing_type=6' For multiple pages, the pattern is: http://sfbay.craigslist.org/search/eby/rea?s=100&housing_type=6 http://sfbay.craigslist.org/search/eby/rea?s=200&housing_type=6 etc.
# Get the data using the requests module
npgs = np.arange(0, 10, 1)
npg = 100
base_url = 'http://sfbay.craigslist.org/search/eby/rea?'
urls = [base_url + 'housing_type=6']
for pg in range(len(npgs)):
    url = base_url + 's=' + str(npg) + '&housing_type=6'
    urls.append(url)
    npg += 100
more_reqs = []
for p in range(len(npgs) + 1):
    more_req = requests.get(urls[p])
    more_reqs.append(more_req)
print(urls)

# Use BeautifulSoup to parse the text
more_txts = []
for p in range(len(npgs) + 1):
    more_txt = bs4(more_reqs[p].text, 'html.parser')
    more_txts.append(more_txt)

# Save the housing entries to a list
more_houses = [more_txts[h].findAll(attrs={'class': "row"}) for h in range(len(more_txts))]
print(len(more_houses))
print(len(more_houses[0]))

# Make a list of housing entries from all of the pages of data
npg = len(more_houses)
houses_all = []
for n in range(npg):
    houses_all.extend(more_houses[n])
print(len(houses_all))
ds/Webscraping_Craigslist_multi.ipynb
jljones/portfolio
apache-2.0
Extract and clean data to put in a database
# Define 4 functions -- for the price, neighborhood, sq footage & # bedrooms, and time --
# that can handle missing values (to prevent errors when running the code)

# Prices
def find_prices(results):
    prices = []
    for rw in results:
        price = rw.find('span', {'class': 'price'})
        if price is not None:
            price = float(price.text.strip('$'))
        else:
            price = np.nan
        prices.append(price)
    return prices

# Neighborhood, in case a field is missing in 'class': 'pnr'
def find_neighborhood(results):
    neighborhoods = []
    for rw in results:
        split = rw.find('span', {'class': 'pnr'}).text.strip(' (').split(')')
        if len(split) == 2:
            neighborhood = split[0]
        else:
            # the field holds only 'pic', 'map', or 'pic map' -- no neighborhood
            # (the original `elif 'pic map' or 'pic' or 'map' in split[0]` was always True)
            neighborhood = np.nan
        neighborhoods.append(neighborhood)
    return neighborhoods

# Size, in case #br or ft2 is missing
def find_size_and_brs(results):
    sqft = []
    bedrooms = []
    for rw in results:
        split = rw.find('span', attrs={'class': 'housing'})
        if split is not None:  # the field may be absent from a housing entry
            # Remove leading/trailing spaces and dashes, split br & ft
            split = split.text.strip('/- ').split(' - ')
            if len(split) == 2:
                n_brs = split[0].replace('br', '')
                size = split[1].replace('ft2', '')
            elif 'br' in split[0]:  # in case the 'size' field is missing
                n_brs = split[0].replace('br', '')
                size = np.nan
            elif 'ft2' in split[0]:  # in case the 'br' field is missing
                size = split[0].replace('ft2', '')
                n_brs = np.nan
            else:
                size = np.nan
                n_brs = np.nan
        else:
            size = np.nan
            n_brs = np.nan
        sqft.append(float(size))
        bedrooms.append(float(n_brs))
    return sqft, bedrooms

# Time posted
def find_times(results):
    times = []
    for rw in results:
        tag = rw.findAll(attrs={'class': 'pl'})[0].time
        times.append(tag['datetime'] if tag is not None else np.nan)
    return pd.to_datetime(times)

# Apply the functions to the data to extract the useful information
prices_all = find_prices(houses_all)
neighborhoods_all = find_neighborhood(houses_all)
sqft_all, bedrooms_all = find_size_and_brs(houses_all)
times_all = find_times(houses_all)

# Check
print(len(prices_all))
#print(len(neighborhoods_all))
#print(len(sqft_all))
#print(len(bedrooms_all))
#print(len(times_all))
ds/Webscraping_Craigslist_multi.ipynb
jljones/portfolio
apache-2.0
Add data to pandas database
# Make a dataframe to export the cleaned data
data = np.array([sqft_all, bedrooms_all, prices_all]).T
print(data.shape)
alldata = pd.DataFrame(data=data, columns=['SqFeet', 'nBedrooms', 'Price'])
alldata.head(4)
alldata['DatePosted'] = times_all
alldata['Neighborhood'] = neighborhoods_all
alldata.head(4)

# Check data types
print(alldata.dtypes)
print(type(alldata.DatePosted[0]))
print(type(alldata.SqFeet[0]))
print(type(alldata.nBedrooms[0]))
print(type(alldata.Neighborhood[0]))
print(type(alldata.Price[0]))

# To change the index to/from the time field
# alldata.set_index('DatePosted', inplace=True)
# alldata.reset_index(inplace=True)
ds/Webscraping_Craigslist_multi.ipynb
jljones/portfolio
apache-2.0
Download data to csv file
alldata.to_csv('./webscraping_craigslist.csv', sep=',', na_rep=np.nan, header=True, index=False)
ds/Webscraping_Craigslist_multi.ipynb
jljones/portfolio
apache-2.0
Data for Berkeley
# Get houses listed in Berkeley
print(len(alldata[alldata['Neighborhood'] == 'berkeley']))
alldata[alldata['Neighborhood'] == 'berkeley']

# Home prices in Berkeley (or the baseline)
# Choose a baseline, based on proximity to current location:
# 'berkeley', 'berkeley north / hills', 'albany / el cerrito'
neighborhood_name = 'berkeley'
# .ix is deprecated; use .loc instead
print('The average home price in %s is: $' % neighborhood_name,
      '{0:8,.0f}'.format(alldata.groupby('Neighborhood').mean().Price.loc[neighborhood_name]), '\n')
print('The most expensive home price in %s is: $' % neighborhood_name,
      '{0:8,.0f}'.format(alldata.groupby('Neighborhood').max().Price.loc[neighborhood_name]), '\n')
print('The least expensive home price in %s is: $' % neighborhood_name,
      '{0:9,.0f}'.format(alldata.groupby('Neighborhood').min().Price.loc[neighborhood_name]), '\n')
ds/Webscraping_Craigslist_multi.ipynb
jljones/portfolio
apache-2.0
Scatter plots
# Plot house prices in the East Bay
def scatterplot(X, Y, labels, xmax):
    # Set up the figure
    fig = plt.figure(figsize=(15, 8))  # width, height
    fntsz = 20; titlefntsz = 25; lablsz = 20; mrkrsz = 8
    matplotlib.rc('xtick', labelsize=lablsz)
    matplotlib.rc('ytick', labelsize=lablsz)
    # Plot a scatter plot
    ax = fig.add_subplot(111)  # row, column, position
    ax.plot(X, Y, 'bo')
    # Grid
    ax.grid(b=True, which='major', axis='y')
    # Format x axis
    ax.set_xlabel(labels[0], fontsize=titlefntsz)
    ax.set_xlim(0, xmax)
    # Format y axis
    ax.set_ylabel(labels[1], fontsize=titlefntsz)
    # Set title
    ax.set_title('$\mathrm{Average \; Home \; Prices \; in \; the \; East \; Bay \; (Source: Craigslist)}$',
                 fontsize=titlefntsz)
    # Save figure
    #plt.savefig("home_prices.pdf", bbox_inches='tight')
    # Return plot objects
    return fig, ax

X = alldata.SqFeet
Y = alldata.Price / 1000  # in 1000's of dollars
labels = ['$\mathrm{Square \; Feet}$', '$\mathrm{Price \; (in \; 1000\'s \; of \; Dollars)}$']
ax = scatterplot(X, Y, labels, 20000)

X = alldata.nBedrooms
Y = alldata.Price / 1000  # in 1000's of dollars
labels = ['$\mathrm{Number \; of \; Bedrooms}$', '$\mathrm{Price \; (in \; 1000\'s \; of \; Dollars)}$']
ax = scatterplot(X, Y, labels, X.max())
ds/Webscraping_Craigslist_multi.ipynb
jljones/portfolio
apache-2.0
Price
# How many houses for sale are under $700k?
price_baseline = 700000
print(alldata[(alldata.Price < price_baseline)].count())

# Return entries for houses under $700k
# alldata[(alldata.Price < price_baseline)]

# In which neighborhoods are these houses located?
set(alldata[(alldata.Price < price_baseline)].Neighborhood)

# Would automate this later; for now a "quick and dirty" list, to take a fast look
# Neighborhoods to plot
neighborhoodsplt = ['El Dorado Hills', 'richmond / point / annex', 'hercules, pinole, san pablo, el sob',
                    'albany / el cerrito', 'oakland downtown', 'san leandro', 'pittsburg / antioch',
                    'fremont / union city / newark', 'walnut creek', 'brentwood / oakley', 'oakland west',
                    'vallejo / benicia', 'berkeley north / hills', 'oakland north / temescal',
                    'oakland hills / mills', 'berkeley', 'oakland lake merritt / grand', 'sacramento',
                    'Oakland', 'concord / pleasant hill / martinez', 'alameda',
                    'dublin / pleasanton / livermore', 'hayward / castro valley', 'Tracy, CA',
                    'Oakland Berkeley San Francisco', 'danville / san ramon',
                    'oakland rockridge / claremont', 'Eastmont', 'Stockton', 'Folsom', 'Tracy',
                    'Brentwood', 'Twain Harte, CA', 'oakland east', 'fairfield / vacaville',
                    'Pinole, Hercules, Richmond, San Francisc']
#neighborhoodsplt = set(alldata[(alldata.Price < price_baseline)].Neighborhood.sort_values(ascending=True, inplace=True))
ds/Webscraping_Craigslist_multi.ipynb
jljones/portfolio
apache-2.0
Group results by neighborhood and plot
# Home prices in the East Bay
# Group the results by neighborhood, then take the average home price in each neighborhood
# (.ix is deprecated; use .loc instead)
by_neighborhood = alldata.groupby('Neighborhood').Price.mean().loc[neighborhoodsplt]
by_neighborhood_sort_price = by_neighborhood.sort_values(ascending=True)
by_neighborhood_sort_price.index  # the neighborhoods sorted by price

# Plot the average home price for each neighborhood in the East Bay
fig = plt.figure()
fig.set_figheight(8.0)
fig.set_figwidth(13.0)
fntsz = 20; titlefntsz = 25; lablsz = 20; mrkrsz = 8
matplotlib.rc('xtick', labelsize=lablsz)
matplotlib.rc('ytick', labelsize=lablsz)
ax = fig.add_subplot(111)  # row, column, position

# Plot a bar chart
ax.bar(range(len(by_neighborhood_sort_price.index)), by_neighborhood_sort_price, align='center')
# Add a horizontal line for Berkeley's average home price, corresponding with the Berkeley bar
ax.axhline(y=by_neighborhood.loc['berkeley'], linestyle='--')
# Add a grid
ax.grid(b=True, which='major', axis='y')
# Format x axis
ax.set_xticks(range(0, len(by_neighborhood)))
ax.set_xticklabels(by_neighborhood_sort_price.index, rotation='vertical')
ax.set_xlim(-1, len(by_neighborhood_sort_price.index))
# Format y axis
ax.set_ylabel('$\mathrm{Price \; (Dollars)}$', fontsize=titlefntsz)
# Set figure title
ax.set_title('$\mathrm{Average \; Home \; Prices \; in \; the \; East \; Bay \; (Source: Craigslist)}$',
             fontsize=titlefntsz)
# Save figure
#plt.savefig("home_prices.pdf", bbox_inches='tight')
ds/Webscraping_Craigslist_multi.ipynb
jljones/portfolio
apache-2.0
The example configuration file tardis_example.yml can be downloaded here.
# Imports assumed from the tardis package (module paths may vary between tardis versions)
from tardis.io.config_reader import Configuration
from tardis.simulation import Simulation

config = Configuration.from_yaml('tardis_example.yml')
sim = Simulation.from_config(config)
docs/research/code_comparison/plasma_compare/plasma_compare.ipynb
kaushik94/tardis
bsd-3-clause
Accessing the plasma states In this example, we access the ion number densities of Si (atomic number 14), including the un-ionized state (ion number 0).
# All Si ionization states
sim.plasma.ion_number_density.loc[14]
# Normalize by the Si number density
sim.plasma.ion_number_density.loc[14] / sim.plasma.number_density.loc[14]
# Access the first ionization state
sim.plasma.ion_number_density.loc[14, 1]
sim.plasma.update(density=[1e-13])
sim.plasma.ion_number_density
docs/research/code_comparison/plasma_compare/plasma_compare.ipynb
kaushik94/tardis
bsd-3-clause
Updating the plasma state It is possible to update the plasma state with different temperatures or dilution factors (as well as different densities). We update the radiative temperature and plot the evolution of the ionization state.
si_ionization_state = None
for cur_t_rad in range(1000, 20000, 100):
    sim.plasma.update(t_rad=[cur_t_rad])
    if si_ionization_state is None:
        si_ionization_state = sim.plasma.ion_number_density.loc[14].copy()
        si_ionization_state.columns = [cur_t_rad]
    else:
        si_ionization_state[cur_t_rad] = sim.plasma.ion_number_density.loc[14].copy()

%pylab inline
fig = figure(0, figsize=(10, 10))
ax = fig.add_subplot(111)
si_ionization_state.T.iloc[:, :3].plot(ax=ax)
xlabel('Radiative temperature [K]')
ylabel('Number density [1/cm$^3$]')
docs/research/code_comparison/plasma_compare/plasma_compare.ipynb
kaushik94/tardis
bsd-3-clause
Check for dependencies, Set Directories The code below is a simple check that makes sure AFNI and FSL are installed. <br> We also set the input, data, and atlas paths. Make sure that AFNI and FSL are installed
# FSL
try:
    print(f"Your fsl directory is located here: {os.environ['FSLDIR']}")
except KeyError:
    raise AssertionError("You do not have FSL installed! See installation instructions here: https://fsl.fmrib.ox.ac.uk/fsl/fslwiki/FslInstallation")

# AFNI
try:
    print(f"Your AFNI directory is located here: {subprocess.check_output('which afni', shell=True, universal_newlines=True)}")
except subprocess.CalledProcessError:
    raise AssertionError("You do not have AFNI installed! See installation instructions here: https://afni.nimh.nih.gov/pub/dist/doc/htmldoc/background_install/main_toc.html")
tutorials/Overview.ipynb
neurodata/ndmg
apache-2.0
Set Input, Output, and Atlas Locations Here, you set: 1. the input_dir - this is where your input data lives. 2. the out_dir - this is where your output data will go.
# Get atlases
ndmg_dir = Path.home() / ".ndmg"
atlas_dir = ndmg_dir / "ndmg_atlases"
get_atlas(str(atlas_dir), "2mm")

# These are the input and output directories
input_dir = ndmg_dir / "input"
out_dir = ndmg_dir / "output"
print(f"Your input and output directory will be : {input_dir} and {out_dir}")

assert op.exists(input_dir), f"You must have an input directory with data. Your input directory is located here: {input_dir}"
tutorials/Overview.ipynb
neurodata/ndmg
apache-2.0
Choose input parameters Naming Conventions Here, we define input variables to the pipeline. To run the ndmg pipeline, you need four files: 1. a t1w - this is a high-resolution anatomical image. 2. a dwi - the diffusion image. 3. bvecs - this is a text file that defines the gradient vectors created by a DWI scan. 4. bvals - this is a text file that defines magnitudes for the gradient vectors created by a DWI scan. The naming convention is in the BIDs spec.
# Specify base directory and paths to input files (dwi, bvecs, bvals, and t1w required)
subject_id = 'sub-0025864'

# Define the location of our input files.
t1w = str(input_dir / f"{subject_id}/ses-1/anat/{subject_id}_ses-1_T1w.nii.gz")
dwi = str(input_dir / f"{subject_id}/ses-1/dwi/{subject_id}_ses-1_dwi.nii.gz")
bvecs = str(input_dir / f"{subject_id}/ses-1/dwi/{subject_id}_ses-1_dwi.bvec")
bvals = str(input_dir / f"{subject_id}/ses-1/dwi/{subject_id}_ses-1_dwi.bval")

print(f"Your anatomical image location: {t1w}")
print(f"Your dwi image location: {dwi}")
print(f"Your bvector location: {bvecs}")
print(f"Your bvalue location: {bvals}")
tutorials/Overview.ipynb
neurodata/ndmg
apache-2.0
Parameter Choices and Output Directory Here, we choose the parameters to run the pipeline with. If you are inexperienced with diffusion MRI theory, feel free to just use the default parameters. atlases = ['desikan', 'CPAC200', 'DKT', 'HarvardOxfordcort', 'HarvardOxfordsub', 'JHU', 'Schaefer2018-200', 'Talairach', 'aal', 'brodmann', 'glasser', 'yeo-7-liberal', 'yeo-17-liberal'] : The atlas that defines the node locations of the graph you create. mod_types = ['det', 'prob'] : Deterministic or probabilistic tractography. track_types = ['local', 'particle'] : Local or particle tracking. mods = ['csa', 'csd'] : Constant Solid Angle or Constrained Spherical Deconvolution. regs = ['native', 'native_dsn', 'mni'] : Registration style. If native, do all registration in each scan's space; if mni, register scans to the MNI atlas; if native_dsn, do registration in native space, and then fit the streamlines to MNI space. vox_size = ['1mm', '2mm'] : Whether our voxels are 1mm or 2mm. seeds = int : Seeding density for tractography. More seeds generally result in a better graph, but at a much higher computational cost.
# Use the default parameters.
atlas = 'desikan'
mod_type = 'prob'
track_type = 'local'
mod_func = 'csd'
reg_style = 'native'
vox_size = '2mm'
seeds = 1
tutorials/Overview.ipynb
neurodata/ndmg
apache-2.0
Get masks and labels The pipeline needs these two variables as input. <br> Running the pipeline via ndmg_bids does this for you.
# Auto-set paths to neuroparc files
mask = str(atlas_dir / "atlases/mask/MNI152NLin6_res-2x2x2_T1w_descr-brainmask.nii.gz")
labels = [str(i) for i in (atlas_dir / "atlases/label/Human/").glob(f"*{atlas}*2x2x2.nii.gz")]
print(f"mask location : {mask}")
print(f"atlas location : {labels}")
tutorials/Overview.ipynb
neurodata/ndmg
apache-2.0
Run the pipeline!
ndmg_dwi_pipeline.ndmg_dwi_worker(
    dwi=dwi, bvals=bvals, bvecs=bvecs, t1w=t1w, atlas=atlas,
    mask=mask, labels=labels, outdir=str(out_dir), vox_size=vox_size,
    mod_type=mod_type, track_type=track_type, mod_func=mod_func,
    seeds=seeds, reg_style=reg_style, clean=False, skipeddy=True, skipreg=True)
tutorials/Overview.ipynb
neurodata/ndmg
apache-2.0
Import Section class, which contains all calculations
from Section import Section
01_SemiMonoCoque.ipynb
Ccaccia73/semimonocoque
mit
Initialization of the sympy symbolic tool and pint for dimensional analysis (not fully implemented yet, as pint is not directly compatible with sympy)
ureg = UnitRegistry() sympy.init_printing()
01_SemiMonoCoque.ipynb
Ccaccia73/semimonocoque
mit
Define sympy parameters used for geometric description of sections
A, A0, t, t0, a, b, h, L = sympy.symbols('A A_0 t t_0 a b h L', positive=True)
01_SemiMonoCoque.ipynb
Ccaccia73/semimonocoque
mit
We also define numerical values for each symbol in order to plot scaled section and perform calculations
values = [(A, 150 * ureg.millimeter**2),(A0, 250 * ureg.millimeter**2),(a, 80 * ureg.millimeter), \ (b, 20 * ureg.millimeter),(h, 35 * ureg.millimeter),(L, 2000 * ureg.millimeter)] datav = [(v[0],v[1].magnitude) for v in values]
01_SemiMonoCoque.ipynb
Ccaccia73/semimonocoque
mit
First example: Closed section Define the graph describing the section: 1) stringers are nodes with parameters: - x coordinate - y coordinate - Area 2) panels are oriented edges with parameters: - thickness - length, which is calculated automatically
stringers = {1: [(sympy.Integer(0), h), A],
             2: [(a/2, h), A],
             3: [(a, h), A],
             4: [(a-b, sympy.Integer(0)), A],
             5: [(b, sympy.Integer(0)), A]}

panels = {(1, 2): t, (2, 3): t, (3, 4): t, (4, 5): t, (5, 1): t}
01_SemiMonoCoque.ipynb
Ccaccia73/semimonocoque
mit
Define section and perform first calculations
S1 = Section(stringers, panels)
01_SemiMonoCoque.ipynb
Ccaccia73/semimonocoque
mit
Verify that we find a simply closed section
S1.cycles
01_SemiMonoCoque.ipynb
Ccaccia73/semimonocoque
mit
Plot of S1 section in original reference frame Define a dictionary of coordinates used by Networkx to plot section as a Directed graph. Note that arrows are actually just thicker stubs
start_pos={ii: [float(S1.g.node[ii]['ip'][i].subs(datav)) for i in range(2)] for ii in S1.g.nodes() } plt.figure(figsize=(12,8),dpi=300) nx.draw(S1.g,with_labels=True, arrows= True, pos=start_pos) plt.arrow(0,0,20,0) plt.arrow(0,0,0,20) #plt.text(0,0, 'CG', fontsize=24) plt.axis('equal') plt.title("Section in starting reference Frame",fontsize=16);
01_SemiMonoCoque.ipynb
Ccaccia73/semimonocoque
mit
Expression of inertial properties wrt the Center of Gravity with the original rotation
S1.Ixx0, S1.Iyy0, S1.Ixy0, S1.α0
01_SemiMonoCoque.ipynb
Ccaccia73/semimonocoque
mit
Plot of S1 section in inertial reference Frame Section is plotted wrt center of gravity and rotated (if necessary) so that x and y are principal axes. Center of Gravity and Shear Center are drawn
positions = {ii: [float(S1.g.node[ii]['pos'][i].subs(datav)) for i in range(2)] for ii in S1.g.nodes()}
x_ct, y_ct = S1.ct.subs(datav)
plt.figure(figsize=(12, 8), dpi=300)
nx.draw(S1.g, with_labels=True, pos=positions)
plt.plot([0], [0], 'o', ms=12, label='CG')
plt.plot([x_ct], [y_ct], '^', ms=12, label='SC')
plt.legend(loc='lower right', shadow=True)
plt.axis('equal')
plt.title("Section in principal reference Frame", fontsize=16);
01_SemiMonoCoque.ipynb
Ccaccia73/semimonocoque
mit
Expression of inertial properties in principal reference frame
S1.Ixx, S1.Iyy, S1.Ixy, S1.θ
01_SemiMonoCoque.ipynb
Ccaccia73/semimonocoque
mit
Shear center expression
S1.ct
01_SemiMonoCoque.ipynb
Ccaccia73/semimonocoque
mit
Analysis of the symmetry properties of the section For the x and y axes, pairs of symmetric nodes and edges are searched for
S1.symmetry
01_SemiMonoCoque.ipynb
Ccaccia73/semimonocoque
mit
Compute axial loads in Stringers in S1 We first define some symbols:
# the original unpacked `ry` twice; the second target should be rx to match the symbol names
Tx, Ty, Nz, Mx, My, Mz, F, ry, rx, mz = sympy.symbols('T_x T_y N_z M_x M_y M_z F r_y r_x m_z')
01_SemiMonoCoque.ipynb
Ccaccia73/semimonocoque
mit
Set loads on the section: Example 1: shear in y direction and bending moment in x direction
S1.set_loads(_Tx=0, _Ty=Ty, _Nz=0, _Mx=Mx, _My=0, _Mz=0)
01_SemiMonoCoque.ipynb
Ccaccia73/semimonocoque
mit
Compute axial loads in stringers and shear flows in panels
S1.compute_stringer_actions()
S1.compute_panel_fluxes();
01_SemiMonoCoque.ipynb
Ccaccia73/semimonocoque
mit
Axial loads
S1.N
01_SemiMonoCoque.ipynb
Ccaccia73/semimonocoque
mit
Shear flows
S1.q
01_SemiMonoCoque.ipynb
Ccaccia73/semimonocoque
mit
Example 2: twisting moment in z direction
S1.set_loads(_Tx=0, _Ty=0, _Nz=0, _Mx=0, _My=0, _Mz=Mz)
S1.compute_stringer_actions()
S1.compute_panel_fluxes();
01_SemiMonoCoque.ipynb
Ccaccia73/semimonocoque
mit
Panel fluxes
S1.q
01_SemiMonoCoque.ipynb
Ccaccia73/semimonocoque
mit
Set loads on the section: Example 3: shear in x direction and bending moment in y direction
S1.set_loads(_Tx=Tx, _Ty=0, _Nz=0, _Mx=0, _My=My, _Mz=0)
S1.compute_stringer_actions()
S1.compute_panel_fluxes();
01_SemiMonoCoque.ipynb
Ccaccia73/semimonocoque
mit
Panel fluxes Not really an easy expression
S1.q
01_SemiMonoCoque.ipynb
Ccaccia73/semimonocoque
mit
Compute Jt Computation of torsional moment of inertia:
S1.compute_Jt() S1.Jt
01_SemiMonoCoque.ipynb
Ccaccia73/semimonocoque
mit
Second example: Open section
stringers = {1:[(sympy.Integer(0),h),A], 2:[(sympy.Integer(0),sympy.Integer(0)),A], 3:[(a,sympy.Integer(0)),A], 4:[(a,h),A]} panels = {(1,2):t, (2,3):t, (3,4):t}
01_SemiMonoCoque.ipynb
Ccaccia73/semimonocoque
mit
Define section and perform first calculations
S2 = Section(stringers, panels)
01_SemiMonoCoque.ipynb
Ccaccia73/semimonocoque
mit
Verify that the section is open
S2.cycles
01_SemiMonoCoque.ipynb
Ccaccia73/semimonocoque
mit
Plot of S2 section in original reference frame Define a dictionary of coordinates used by Networkx to plot section as a Directed graph. Note that arrows are actually just thicker stubs
start_pos={ii: [float(S2.g.node[ii]['ip'][i].subs(datav)) for i in range(2)] for ii in S2.g.nodes() } plt.figure(figsize=(12,8),dpi=300) nx.draw(S2.g,with_labels=True, arrows= True, pos=start_pos) plt.arrow(0,0,20,0) plt.arrow(0,0,0,20) #plt.text(0,0, 'CG', fontsize=24) plt.axis('equal') plt.title("Section in starting reference Frame",fontsize=16);
01_SemiMonoCoque.ipynb
Ccaccia73/semimonocoque
mit
Expression of inertial properties wrt the Center of Gravity with the original rotation
S2.Ixx0, S2.Iyy0, S2.Ixy0, S2.α0
01_SemiMonoCoque.ipynb
Ccaccia73/semimonocoque
mit
Plot of S2 section in inertial reference Frame Section is plotted wrt center of gravity and rotated (if necessary) so that x and y are principal axes. Center of Gravity and Shear Center are drawn
positions = {ii: [float(S2.g.node[ii]['pos'][i].subs(datav)) for i in range(2)] for ii in S2.g.nodes()}
x_ct, y_ct = S2.ct.subs(datav)
plt.figure(figsize=(12, 8), dpi=300)
nx.draw(S2.g, with_labels=True, pos=positions)
plt.plot([0], [0], 'o', ms=12, label='CG')
plt.plot([x_ct], [y_ct], '^', ms=12, label='SC')
plt.legend(loc='lower right', shadow=True)
plt.axis('equal')
plt.title("Section in principal reference Frame", fontsize=16);
01_SemiMonoCoque.ipynb
Ccaccia73/semimonocoque
mit
Expression of inertial properties in principal reference frame
S2.Ixx, S2.Iyy, S2.Ixy, S2.θ
01_SemiMonoCoque.ipynb
Ccaccia73/semimonocoque
mit
Shear center expression
S2.ct
01_SemiMonoCoque.ipynb
Ccaccia73/semimonocoque
mit
Analysis of the symmetry properties of the section For the x and y axes, pairs of symmetric nodes and edges are searched for
S2.symmetry
01_SemiMonoCoque.ipynb
Ccaccia73/semimonocoque
mit
Compute axial loads in Stringers in S2 Set loads on the section: Example 2: shear in y direction and bending moment in x direction
S2.set_loads(_Tx=0, _Ty=Ty, _Nz=0, _Mx=Mx, _My=0, _Mz=0)
01_SemiMonoCoque.ipynb
Ccaccia73/semimonocoque
mit
Compute axial loads in stringers and shear flows in panels
S2.compute_stringer_actions()
S2.compute_panel_fluxes();
01_SemiMonoCoque.ipynb
Ccaccia73/semimonocoque
mit
Axial loads
S2.N
01_SemiMonoCoque.ipynb
Ccaccia73/semimonocoque
mit
Shear flows
S2.q
01_SemiMonoCoque.ipynb
Ccaccia73/semimonocoque
mit
Set loads on the section: Example 2: shear in x direction and bending moment in y direction
S2.set_loads(_Tx=Tx, _Ty=0, _Nz=0, _Mx=0, _My=My, _Mz=0)
S2.compute_stringer_actions()
S2.compute_panel_fluxes();
S2.N
S2.q
01_SemiMonoCoque.ipynb
Ccaccia73/semimonocoque
mit
Second example (2): Open section
stringers = {1:[(a,h),A], 2:[(sympy.Integer(0),h),A], 3:[(sympy.Integer(0),sympy.Integer(0)),A], 4:[(a,sympy.Integer(0)),A]} panels = {(1,2):t, (2,3):t, (3,4):t}
01_SemiMonoCoque.ipynb
Ccaccia73/semimonocoque
mit
Define section and perform first calculations
S2_2 = Section(stringers, panels)
01_SemiMonoCoque.ipynb
Ccaccia73/semimonocoque
mit
Plot of S2 section in original reference frame Define a dictionary of coordinates used by Networkx to plot section as a Directed graph. Note that arrows are actually just thicker stubs
start_pos={ii: [float(S2_2.g.node[ii]['ip'][i].subs(datav)) for i in range(2)] for ii in S2_2.g.nodes() } plt.figure(figsize=(12,8),dpi=300) nx.draw(S2_2.g,with_labels=True, arrows= True, pos=start_pos) plt.arrow(0,0,20,0) plt.arrow(0,0,0,20) #plt.text(0,0, 'CG', fontsize=24) plt.axis('equal') plt.title("Section in starting reference Frame",fontsize=16);
01_SemiMonoCoque.ipynb
Ccaccia73/semimonocoque
mit
Expression of inertial properties wrt the Center of Gravity with the original rotation
S2_2.Ixx0, S2_2.Iyy0, S2_2.Ixy0, S2_2.α0
01_SemiMonoCoque.ipynb
Ccaccia73/semimonocoque
mit
Plot of S2 section in inertial reference Frame Section is plotted wrt center of gravity and rotated (if necessary) so that x and y are principal axes. Center of Gravity and Shear Center are drawn
positions = {ii: [float(S2_2.g.node[ii]['pos'][i].subs(datav)) for i in range(2)] for ii in S2_2.g.nodes()}
x_ct, y_ct = S2_2.ct.subs(datav)
plt.figure(figsize=(12, 8), dpi=300)
nx.draw(S2_2.g, with_labels=True, pos=positions)
plt.plot([0], [0], 'o', ms=12, label='CG')
plt.plot([x_ct], [y_ct], '^', ms=12, label='SC')
plt.legend(loc='lower right', shadow=True)
plt.axis('equal')
plt.title("Section in principal reference Frame", fontsize=16);
01_SemiMonoCoque.ipynb
Ccaccia73/semimonocoque
mit
Expression of inertial properties in principal reference frame
S2_2.Ixx, S2_2.Iyy, S2_2.Ixy, S2_2.θ
01_SemiMonoCoque.ipynb
Ccaccia73/semimonocoque
mit
Shear center expression
S2_2.ct
01_SemiMonoCoque.ipynb
Ccaccia73/semimonocoque
mit
Adversarial Regularization for Image Classification The core idea of adversarial learning is to train a model with adversarially-perturbed data (called adversarial examples) in addition to the organic training data. The adversarial examples are constructed to intentionally mislead the model into making wrong predictions or classifications. By training with such examples, the model learns to be robust against adversarial perturbation when making predictions. In this tutorial, we illustrate the following procedure of applying adversarial learning to obtain robust models using the Neural Structured Learning framework: Create a neural network as a base model. In this tutorial, the base model is created with the tf.keras functional API; this procedure is compatible with models created by tf.keras sequential and subclassing APIs as well. Wrap the base model with the AdversarialRegularization wrapper class, which is provided by the NSL framework, to create a new tf.keras.Model instance. This new model will include the adversarial loss as a regularization term in its training objective. Convert examples in the training data to feature dictionaries. Train and evaluate the new model. Both the base and the new model will be evaluated against natural and adversarial inputs. Setup Install the Neural Structured Learning package.
!pip install --quiet neural-structured-learning import matplotlib.pyplot as plt import tensorflow as tf import tensorflow_datasets as tfds import numpy as np import neural_structured_learning as nsl
workshops/kdd_2020/adversarial_regularization_mnist.ipynb
tensorflow/neural-structured-learning
apache-2.0
Hyperparameters We collect and explain the hyperparameters (in an HParams object) for model training and evaluation. Input/Output: input_shape: The shape of the input tensor. Each image is 28-by-28 pixels with 1 channel. num_classes: There are a total of 10 classes, corresponding to 10 digits [0-9]. Model architecture: conv_filters: A list of numbers, each specifying the number of filters in a convolutional layer. kernel_size: The size of 2D convolution window, shared by all convolutional layers. pool_size: Factors to downscale the image in each max-pooling layer. num_fc_units: The number of units (i.e., width) of each fully-connected layer. Training and evaluation: batch_size: Batch size used for training and evaluation. epochs: The number of training epochs. Adversarial learning: adv_multiplier: The weight of adversarial loss in the training objective, relative to the labeled loss. adv_step_size: The magnitude of adversarial perturbation. adv_grad_norm: The norm to measure the magnitude of adversarial perturbation. pgd_iterations: The number of iterative steps to take when using PGD. pgd_epsilon: The bounds of the perturbation. PGD will project back to this epsilon ball when generating the adversary. clip_value_min: Clips the final adversary to be at least as large as this value. This keeps the perturbed pixel values in a valid domain. clip_value_max: Clips the final adversary to be no larger than this value. This also keeps the perturbed pixel values in a valid domain.
class HParams(object): def __init__(self): self.input_shape = [28, 28, 1] self.num_classes = 10 self.conv_filters = [32, 64, 64] self.kernel_size = (3, 3) self.pool_size = (2, 2) self.num_fc_units = [64] self.batch_size = 32 self.epochs = 5 self.adv_multiplier = 0.2 self.adv_step_size = 0.01 self.adv_grad_norm = 'infinity' self.pgd_iterations = 40 self.pgd_epsilon = 0.2 self.clip_value_min = 0.0 self.clip_value_max = 1.0 HPARAMS = HParams()
workshops/kdd_2020/adversarial_regularization_mnist.ipynb
tensorflow/neural-structured-learning
apache-2.0
MNIST dataset The MNIST dataset contains grayscale images of handwritten digits (from '0' to '9'). Each image shows one digit at low resolution (28-by-28 pixels). The task is to classify images into 10 categories, one per digit. Here we load the MNIST dataset from TensorFlow Datasets. It handles downloading the data and constructing a tf.data.Dataset. The loaded dataset has two subsets: train with 60,000 examples, and test with 10,000 examples. Examples in both subsets are stored in feature dictionaries with the following two keys: image: Array of pixel values, ranging from 0 to 255. label: Groundtruth label, ranging from 0 to 9.
datasets = tfds.load('mnist') train_dataset = datasets['train'] test_dataset = datasets['test'] IMAGE_INPUT_NAME = 'image' LABEL_INPUT_NAME = 'label'
workshops/kdd_2020/adversarial_regularization_mnist.ipynb
tensorflow/neural-structured-learning
apache-2.0
To make the model numerically stable, we normalize the pixel values to [0, 1] by mapping the dataset over the normalize function. After shuffling the training set and batching, we convert the examples to feature tuples (image, label) for training the base model. We also provide a function to convert from tuples to dictionaries for later use.
def normalize(features): features[IMAGE_INPUT_NAME] = tf.cast( features[IMAGE_INPUT_NAME], dtype=tf.float32) / 255.0 return features def convert_to_tuples(features): return features[IMAGE_INPUT_NAME], features[LABEL_INPUT_NAME] def convert_to_dictionaries(image, label): return {IMAGE_INPUT_NAME: image, LABEL_INPUT_NAME: label} train_dataset = train_dataset.map(normalize).shuffle(10000).batch(HPARAMS.batch_size).map(convert_to_tuples) test_dataset = test_dataset.map(normalize).batch(HPARAMS.batch_size).map(convert_to_tuples)
workshops/kdd_2020/adversarial_regularization_mnist.ipynb
tensorflow/neural-structured-learning
apache-2.0
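The scaling performed by normalize can be checked on a toy array without downloading the dataset (the values below are stand-ins, not real MNIST pixels):

```python
import numpy as np

# Toy uint8 pixel values standing in for an MNIST image
pixels = np.array([0, 127, 255], dtype=np.uint8)

# Same operation as in `normalize` above: cast to float32, divide by 255
normalized = pixels.astype(np.float32) / 255.0
```

After this step every value lies in [0, 1], which keeps the gradients of the first convolutional layer well scaled.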
Base model Our base model will be a neural network consisting of 3 convolutional layers followed by 2 fully-connected layers (as defined in HPARAMS). Here we define it using the Keras functional API. Feel free to try other APIs or model architectures.
def build_base_model(hparams): """Builds a model according to the architecture defined in `hparams`.""" inputs = tf.keras.Input( shape=hparams.input_shape, dtype=tf.float32, name=IMAGE_INPUT_NAME) x = inputs for i, num_filters in enumerate(hparams.conv_filters): x = tf.keras.layers.Conv2D( num_filters, hparams.kernel_size, activation='relu')( x) if i < len(hparams.conv_filters) - 1: # max pooling between convolutional layers x = tf.keras.layers.MaxPooling2D(hparams.pool_size)(x) x = tf.keras.layers.Flatten()(x) for num_units in hparams.num_fc_units: x = tf.keras.layers.Dense(num_units, activation='relu')(x) pred = tf.keras.layers.Dense(hparams.num_classes, activation=None)(x) # pred = tf.keras.layers.Dense(hparams.num_classes, activation='softmax')(x) model = tf.keras.Model(inputs=inputs, outputs=pred) return model base_model = build_base_model(HPARAMS) base_model.summary()
workshops/kdd_2020/adversarial_regularization_mnist.ipynb
tensorflow/neural-structured-learning
apache-2.0
Next we train and evaluate the base model.
base_model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy( from_logits=True), metrics=['acc']) base_model.fit(train_dataset, epochs=HPARAMS.epochs) results = base_model.evaluate(test_dataset) named_results = dict(zip(base_model.metrics_names, results)) print('\naccuracy:', named_results['acc'])
workshops/kdd_2020/adversarial_regularization_mnist.ipynb
tensorflow/neural-structured-learning
apache-2.0
Adversarial-regularized model Here we show how to incorporate adversarial training into a Keras model with a few lines of code, using the NSL framework. The base model is wrapped to create a new tf.Keras.Model, whose training objective includes adversarial regularization. We will train one using the FGSM adversary and one using a stronger PGD adversary. First, we create config objects with relevant hyperparameters.
fgsm_adv_config = nsl.configs.make_adv_reg_config( multiplier=HPARAMS.adv_multiplier, # With FGSM, we want to take a single step equal to the epsilon ball size, # to get the largest allowable perturbation. adv_step_size=HPARAMS.pgd_epsilon, adv_grad_norm=HPARAMS.adv_grad_norm, clip_value_min=HPARAMS.clip_value_min, clip_value_max=HPARAMS.clip_value_max ) pgd_adv_config = nsl.configs.make_adv_reg_config( multiplier=HPARAMS.adv_multiplier, adv_step_size=HPARAMS.adv_step_size, adv_grad_norm=HPARAMS.adv_grad_norm, pgd_iterations=HPARAMS.pgd_iterations, pgd_epsilon=HPARAMS.pgd_epsilon, clip_value_min=HPARAMS.clip_value_min, clip_value_max=HPARAMS.clip_value_max )
workshops/kdd_2020/adversarial_regularization_mnist.ipynb
tensorflow/neural-structured-learning
apache-2.0
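Under the infinity norm, the FGSM configuration above amounts to a single signed-gradient step of size epsilon, clipped back to the valid pixel range. A minimal NumPy sketch of that step (toy numbers, independent of the NSL implementation):

```python
import numpy as np

def fgsm_step(x, grad, epsilon, clip_min=0.0, clip_max=1.0):
    """One FGSM perturbation: shift each pixel by +/- epsilon along the loss gradient."""
    x_adv = x + epsilon * np.sign(grad)
    return np.clip(x_adv, clip_min, clip_max)

x = np.array([0.1, 0.5, 0.95])     # toy "pixels"
grad = np.array([-1.0, 2.0, 3.0])  # toy loss gradients w.r.t. the input
x_adv = fgsm_step(x, grad, epsilon=0.2)
```

Only the sign of the gradient matters, which is why the infinity-norm step moves every pixel by exactly epsilon (before clipping).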
Now we can wrap a base model with AdversarialRegularization. Here we create new base models (base_fgsm_model, base_pgd_model), so that the existing one (base_model) can be used in later comparison. The returned adv_model is a tf.keras.Model object, whose training objective includes a regularization term for the adversarial loss. To compute that loss, the model has to have access to the label information (feature label), in addition to regular input (feature image). For this reason, we convert the examples in the datasets from tuples back to dictionaries. And we tell the model which feature contains the label information via the label_keys parameter. We will create two adversarially regularized models: fgsm_adv_model (regularized with FGSM) and pgd_adv_model (regularized with PGD).
# Create model for FGSM. base_fgsm_model = build_base_model(HPARAMS) # Create FGSM-regularized model. fgsm_adv_model = nsl.keras.AdversarialRegularization( base_fgsm_model, label_keys=[LABEL_INPUT_NAME], adv_config=fgsm_adv_config ) # Create model for PGD. base_pgd_model = build_base_model(HPARAMS) # Create PGD-regularized model. pgd_adv_model = nsl.keras.AdversarialRegularization( base_pgd_model, label_keys=[LABEL_INPUT_NAME], adv_config=pgd_adv_config ) # Data for training. train_set_for_adv_model = train_dataset.map(convert_to_dictionaries) test_set_for_adv_model = test_dataset.map(convert_to_dictionaries)
workshops/kdd_2020/adversarial_regularization_mnist.ipynb
tensorflow/neural-structured-learning
apache-2.0
Next we compile, train, and evaluate the adversarial-regularized model. There might be warnings like "Output missing from loss dictionary," which is fine because the adv_model doesn't rely on the base implementation to calculate the total loss.
fgsm_adv_model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy( from_logits=True), metrics=['acc']) fgsm_adv_model.fit(train_set_for_adv_model, epochs=HPARAMS.epochs) results = fgsm_adv_model.evaluate(test_set_for_adv_model) named_results = dict(zip(fgsm_adv_model.metrics_names, results)) print('\naccuracy:', named_results['sparse_categorical_accuracy']) pgd_adv_model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy( from_logits=True), metrics=['acc']) pgd_adv_model.fit(train_set_for_adv_model, epochs=HPARAMS.epochs) results = pgd_adv_model.evaluate(test_set_for_adv_model) named_results = dict(zip(pgd_adv_model.metrics_names, results)) print('\naccuracy:', named_results['sparse_categorical_accuracy'])
workshops/kdd_2020/adversarial_regularization_mnist.ipynb
tensorflow/neural-structured-learning
apache-2.0
Both adversarially regularized models perform well on the test set. Robustness under Adversarial Perturbations Now we compare the base model and the adversarial-regularized model for robustness under adversarial perturbation. We will show how the base model is vulnerable to attacks from both FGSM and PGD, the FGSM-regularized model can resist FGSM attacks but is vulnerable to PGD, and the PGD-regularized model is able to resist both forms of attack. We use gen_adv_neighbor to generate adversaries for our models. Attacking the Base Model
# Set up the neighbor config for FGSM. fgsm_nbr_config = nsl.configs.AdvNeighborConfig( adv_grad_norm=HPARAMS.adv_grad_norm, adv_step_size=HPARAMS.pgd_epsilon, clip_value_min=0.0, clip_value_max=1.0, ) # The labeled loss function provides the loss for each sample we pass in. This # will be used to calculate the gradient. labeled_loss_fn = tf.keras.losses.SparseCategoricalCrossentropy( from_logits=True, ) %%time # Generate adversarial images using FGSM on the base model. perturbed_images, labels, predictions = [], [], [] # We want to record the accuracy. metric = tf.keras.metrics.SparseCategoricalAccuracy() for batch in test_set_for_adv_model: # Record the loss calculation to get the gradient. with tf.GradientTape() as tape: tape.watch(batch) losses = labeled_loss_fn(batch[LABEL_INPUT_NAME], base_model(batch[IMAGE_INPUT_NAME])) # Generate the adversarial example. fgsm_images, _ = nsl.lib.adversarial_neighbor.gen_adv_neighbor( batch[IMAGE_INPUT_NAME], losses, fgsm_nbr_config, gradient_tape=tape ) # Update our accuracy metric. y_true = batch['label'] y_pred = base_model(fgsm_images) metric(y_true, y_pred) # Store images for later use. perturbed_images.append(fgsm_images) labels.append(y_true.numpy()) predictions.append(tf.argmax(y_pred, axis=-1).numpy()) print('%s model accuracy: %f' % ('base', metric.result().numpy()))
workshops/kdd_2020/adversarial_regularization_mnist.ipynb
tensorflow/neural-structured-learning
apache-2.0
Let's examine what some of these images look like.
def examine_images(perturbed_images, labels, predictions, model_key):
  batch_index = 0
  batch_image = perturbed_images[batch_index]
  batch_label = labels[batch_index]
  batch_pred = predictions[batch_index]

  batch_size = HPARAMS.batch_size
  n_col = 4
  n_row = (batch_size + n_col - 1) // n_col  # integer division: plt.subplot needs an int

  print('accuracy in batch %d:' % batch_index)
  print('%s model: %d / %d' % (model_key, np.sum(batch_label == batch_pred), batch_size))

  plt.figure(figsize=(15, 15))
  for i, (image, y) in enumerate(zip(batch_image, batch_label)):
    y_base = batch_pred[i]
    plt.subplot(n_row, n_col, i+1)
    plt.title('true: %d, %s: %d' % (y, model_key, y_base),
              color='r' if y != y_base else 'k')
    plt.imshow(tf.keras.preprocessing.image.array_to_img(image), cmap='gray')
    plt.axis('off')
  plt.show()

examine_images(perturbed_images, labels, predictions, 'base')
workshops/kdd_2020/adversarial_regularization_mnist.ipynb
tensorflow/neural-structured-learning
apache-2.0
Our perturbation budget of 0.2 is quite large, but even so, the perturbed numbers are clearly recognizable to the human eye. On the other hand, our network is fooled into misclassifying several examples. As we can see, the FGSM attack is already highly effective and quick to execute, heavily reducing the model accuracy. We will see below that the PGD attack is even more effective, even with the same perturbation budget.
# Set up the neighbor config for PGD.
pgd_nbr_config = nsl.configs.AdvNeighborConfig(
    adv_grad_norm=HPARAMS.adv_grad_norm,
    adv_step_size=HPARAMS.adv_step_size,
    pgd_iterations=HPARAMS.pgd_iterations,
    pgd_epsilon=HPARAMS.pgd_epsilon,
    clip_value_min=HPARAMS.clip_value_min,
    clip_value_max=HPARAMS.clip_value_max,
)

# pgd_model_fn generates a prediction from which we calculate the loss, and the
# gradient for a given iteration.
pgd_model_fn = base_model

# We need to pass in the loss function for repeated calculation of the gradient.
pgd_loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(
    from_logits=True,
)
labeled_loss_fn = pgd_loss_fn

%%time
# Generate adversarial images using PGD on the base model.
perturbed_images, labels, predictions = [], [], []

# Record the accuracy.
metric = tf.keras.metrics.SparseCategoricalAccuracy()

for batch in test_set_for_adv_model:
  # Gradient tape to calculate the loss on the first iteration.
  with tf.GradientTape() as tape:
    tape.watch(batch)
    losses = labeled_loss_fn(batch[LABEL_INPUT_NAME],
                             base_model(batch[IMAGE_INPUT_NAME]))

  # Generate the adversarial examples.
  pgd_images, _ = nsl.lib.adversarial_neighbor.gen_adv_neighbor(
      batch[IMAGE_INPUT_NAME],
      losses,
      pgd_nbr_config,
      gradient_tape=tape,
      pgd_model_fn=pgd_model_fn,
      pgd_loss_fn=pgd_loss_fn,
      pgd_labels=batch[LABEL_INPUT_NAME],
  )

  # Update our accuracy metric.
  y_true = batch['label']
  y_pred = base_model(pgd_images)
  metric(y_true, y_pred)

  # Store images for visualization.
  perturbed_images.append(pgd_images)
  labels.append(y_true.numpy())
  predictions.append(tf.argmax(y_pred, axis=-1).numpy())

print('%s model accuracy: %f' % ('base', metric.result().numpy()))
examine_images(perturbed_images, labels, predictions, 'base')
workshops/kdd_2020/adversarial_regularization_mnist.ipynb
tensorflow/neural-structured-learning
apache-2.0
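The iterative scheme behind PGD can be sketched in NumPy: repeat small signed-gradient steps, then project back into the epsilon ball around the original input and clip to the valid pixel range. The "loss gradient" below is a toy constant, purely for illustration; the real attack recomputes the model's loss gradient each iteration:

```python
import numpy as np

def pgd_attack(x0, grad_fn, step_size, epsilon, iterations,
               clip_min=0.0, clip_max=1.0):
    """Iterated FGSM with projection onto the L-infinity epsilon ball around x0."""
    x = x0.copy()
    for _ in range(iterations):
        x = x + step_size * np.sign(grad_fn(x))     # signed gradient-ascent step
        x = np.clip(x, x0 - epsilon, x0 + epsilon)  # project into the epsilon ball
        x = np.clip(x, clip_min, clip_max)          # stay in the valid pixel range
    return x

x0 = np.array([0.3, 0.6])
grad_fn = lambda x: np.array([1.0, -1.0])  # toy constant gradient
x_adv = pgd_attack(x0, grad_fn, step_size=0.01, epsilon=0.2, iterations=40)
```

With 40 steps of size 0.01 the unconstrained walk would travel 0.4, so the projection onto the 0.2-ball is what actually bounds the perturbation here.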
The PGD attack is much stronger, but it also takes longer to run. Attacking the FGSM Regularized Model
# Set up the neighbor config.
fgsm_nbr_config = nsl.configs.AdvNeighborConfig(
    adv_grad_norm=HPARAMS.adv_grad_norm,
    adv_step_size=HPARAMS.pgd_epsilon,
    clip_value_min=0.0,
    clip_value_max=1.0,
)

# The labeled loss function provides the loss for each sample we pass in. This
# will be used to calculate the gradient.
labeled_loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(
    from_logits=True,
)

%%time
# Generate adversarial images using FGSM on the regularized model.
perturbed_images, labels, predictions = [], [], []

# Record the accuracy.
metric = tf.keras.metrics.SparseCategoricalAccuracy()

for batch in test_set_for_adv_model:
  # Record the loss calculation to get its gradients.
  with tf.GradientTape() as tape:
    tape.watch(batch)
    # We attack the adversarially regularized model.
    losses = labeled_loss_fn(batch[LABEL_INPUT_NAME],
                             fgsm_adv_model.base_model(batch[IMAGE_INPUT_NAME]))

  # Generate the adversarial examples.
  fgsm_images, _ = nsl.lib.adversarial_neighbor.gen_adv_neighbor(
      batch[IMAGE_INPUT_NAME],
      losses,
      fgsm_nbr_config,
      gradient_tape=tape
  )

  # Update our accuracy metric.
  y_true = batch['label']
  y_pred = fgsm_adv_model.base_model(fgsm_images)
  metric(y_true, y_pred)

  # Store images for visualization.
  perturbed_images.append(fgsm_images)
  labels.append(y_true.numpy())
  predictions.append(tf.argmax(y_pred, axis=-1).numpy())

print('%s model accuracy: %f' % ('fgsm_reg', metric.result().numpy()))
examine_images(perturbed_images, labels, predictions, 'fgsm_reg')
workshops/kdd_2020/adversarial_regularization_mnist.ipynb
tensorflow/neural-structured-learning
apache-2.0
As we can see, the FGSM-regularized model performs much better than the base model on images perturbed by FGSM. How does it do against PGD?
# Set up the neighbor config for PGD.
pgd_nbr_config = nsl.configs.AdvNeighborConfig(
    adv_grad_norm=HPARAMS.adv_grad_norm,
    adv_step_size=HPARAMS.adv_step_size,
    pgd_iterations=HPARAMS.pgd_iterations,
    pgd_epsilon=HPARAMS.pgd_epsilon,
    clip_value_min=HPARAMS.clip_value_min,
    clip_value_max=HPARAMS.clip_value_max,
)

# pgd_model_fn generates a prediction from which we calculate the loss, and the
# gradient for a given iteration.
pgd_model_fn = fgsm_adv_model.base_model

# We need to pass in the loss function for repeated calculation of the gradient.
pgd_loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(
    from_logits=True,
)
labeled_loss_fn = pgd_loss_fn

%%time
# Generate adversarial images using PGD on the FGSM-regularized model.
perturbed_images, labels, predictions = [], [], []
metric = tf.keras.metrics.SparseCategoricalAccuracy()

for batch in test_set_for_adv_model:
  # Gradient tape to calculate the loss on the first iteration.
  with tf.GradientTape() as tape:
    tape.watch(batch)
    losses = labeled_loss_fn(batch[LABEL_INPUT_NAME],
                             fgsm_adv_model.base_model(batch[IMAGE_INPUT_NAME]))

  # Generate the adversarial examples.
  pgd_images, _ = nsl.lib.adversarial_neighbor.gen_adv_neighbor(
      batch[IMAGE_INPUT_NAME],
      losses,
      pgd_nbr_config,
      gradient_tape=tape,
      pgd_model_fn=pgd_model_fn,
      pgd_loss_fn=pgd_loss_fn,
      pgd_labels=batch[LABEL_INPUT_NAME],
  )

  # Update our accuracy metric.
  y_true = batch['label']
  y_pred = fgsm_adv_model.base_model(pgd_images)
  metric(y_true, y_pred)

  # Store images for visualization.
  perturbed_images.append(pgd_images)
  labels.append(y_true.numpy())
  predictions.append(tf.argmax(y_pred, axis=-1).numpy())

print('%s model accuracy: %f' % ('fgsm_reg', metric.result().numpy()))
examine_images(perturbed_images, labels, predictions, 'fgsm_reg')
workshops/kdd_2020/adversarial_regularization_mnist.ipynb
tensorflow/neural-structured-learning
apache-2.0
While the FGSM-regularized model was robust to attacks via FGSM, it is still vulnerable to attacks from PGD, which is a stronger attack mechanism. Attacking the PGD Regularized Model
# Set up the neighbor config.
fgsm_nbr_config = nsl.configs.AdvNeighborConfig(
    adv_grad_norm=HPARAMS.adv_grad_norm,
    adv_step_size=HPARAMS.pgd_epsilon,
    clip_value_min=0.0,
    clip_value_max=1.0,
)

# The labeled loss function provides the loss for each sample we pass in. This
# will be used to calculate the gradient.
labeled_loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(
    from_logits=True,
)

%%time
# Generate adversarial images using FGSM on the regularized model.
perturbed_images, labels, predictions = [], [], []

# Record the accuracy.
metric = tf.keras.metrics.SparseCategoricalAccuracy()

for batch in test_set_for_adv_model:
  # Record the loss calculation to get its gradients.
  with tf.GradientTape() as tape:
    tape.watch(batch)
    # We attack the adversarially regularized model.
    losses = labeled_loss_fn(batch[LABEL_INPUT_NAME],
                             pgd_adv_model.base_model(batch[IMAGE_INPUT_NAME]))

  # Generate the adversarial examples.
  fgsm_images, _ = nsl.lib.adversarial_neighbor.gen_adv_neighbor(
      batch[IMAGE_INPUT_NAME],
      losses,
      fgsm_nbr_config,
      gradient_tape=tape
  )

  # Update our accuracy metric.
  y_true = batch['label']
  y_pred = pgd_adv_model.base_model(fgsm_images)
  metric(y_true, y_pred)

  # Store images for visualization.
  perturbed_images.append(fgsm_images)
  labels.append(y_true.numpy())
  predictions.append(tf.argmax(y_pred, axis=-1).numpy())

print('%s model accuracy: %f' % ('pgd_reg', metric.result().numpy()))
examine_images(perturbed_images, labels, predictions, 'pgd_reg')

# Set up the neighbor config for PGD.
pgd_nbr_config = nsl.configs.AdvNeighborConfig(
    adv_grad_norm=HPARAMS.adv_grad_norm,
    adv_step_size=HPARAMS.adv_step_size,
    pgd_iterations=HPARAMS.pgd_iterations,
    pgd_epsilon=HPARAMS.pgd_epsilon,
    clip_value_min=HPARAMS.clip_value_min,
    clip_value_max=HPARAMS.clip_value_max,
)

# pgd_model_fn generates a prediction from which we calculate the loss, and the
# gradient for a given iteration.
pgd_model_fn = pgd_adv_model.base_model

# We need to pass in the loss function for repeated calculation of the gradient.
pgd_loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(
    from_logits=True,
)
labeled_loss_fn = pgd_loss_fn

%%time
# Generate adversarial images using PGD on the PGD-regularized model.
perturbed_images, labels, predictions = [], [], []
metric = tf.keras.metrics.SparseCategoricalAccuracy()

for batch in test_set_for_adv_model:
  # Gradient tape to calculate the loss on the first iteration.
  with tf.GradientTape() as tape:
    tape.watch(batch)
    losses = labeled_loss_fn(batch[LABEL_INPUT_NAME],
                             pgd_adv_model.base_model(batch[IMAGE_INPUT_NAME]))

  # Generate the adversarial examples.
  pgd_images, _ = nsl.lib.adversarial_neighbor.gen_adv_neighbor(
      batch[IMAGE_INPUT_NAME],
      losses,
      pgd_nbr_config,
      gradient_tape=tape,
      pgd_model_fn=pgd_model_fn,
      pgd_loss_fn=pgd_loss_fn,
      pgd_labels=batch[LABEL_INPUT_NAME],
  )

  # Update our accuracy metric.
  y_true = batch['label']
  y_pred = pgd_adv_model.base_model(pgd_images)
  metric(y_true, y_pred)

  # Store images for visualization.
  perturbed_images.append(pgd_images)
  labels.append(y_true.numpy())
  predictions.append(tf.argmax(y_pred, axis=-1).numpy())

print('%s model accuracy: %f' % ('pgd_reg', metric.result().numpy()))
examine_images(perturbed_images, labels, predictions, 'pgd_reg')
workshops/kdd_2020/adversarial_regularization_mnist.ipynb
tensorflow/neural-structured-learning
apache-2.0
2. Set Configuration This code is required to initialize the project. Fill in required fields and press play. If the recipe uses a Google Cloud Project: Set the configuration project value to the project identifier from these instructions. If the recipe has auth set to user: If you have user credentials: Set the configuration user value to your user credentials JSON. If you DO NOT have user credentials: Set the configuration client value to downloaded client credentials. If the recipe has auth set to service: Set the configuration service value to downloaded service credentials.
from starthinker.util.configuration import Configuration CONFIG = Configuration( project="", client={}, service={}, user="/content/user.json", verbose=True )
colabs/google_api_to_bigquery.ipynb
google/starthinker
apache-2.0
3. Enter Google API To BigQuery Recipe Parameters Enter an API name and version. Specify the function using dot notation. Specify the arguments using JSON. Iterate is optional; use it if the API returns a list of items that are not unpacking correctly. The API Key may be required for some calls. The Developer Token may be required for some calls. Give the BigQuery dataset and table where the response will be written. All API calls are based on a discovery document, for example the Campaign Manager API. Modify the values below for your use case; this can be done multiple times, then click play.
FIELDS = {
  'auth_read':'user',  # Credentials used for reading data.
  'api':'displayvideo',  # See developer guide.
  'version':'v1',  # Must be supported version.
  'function':'advertisers.list',  # Full function dot notation path.
  'kwargs':{'partnerId': 234340},  # Dictionary object of name value pairs.
  'kwargs_remote':{},  # Fetch arguments from remote source.
  'api_key':'',  # Associated with a Google Cloud Project.
  'developer_token':'',  # Associated with your organization.
  'login_customer_id':'',  # Associated with your Adwords account.
  'dataset':'',  # Existing dataset in BigQuery.
  'table':'',  # Table to write API call results to.
}

print("Parameters Set To: %s" % FIELDS)
colabs/google_api_to_bigquery.ipynb
google/starthinker
apache-2.0
4. Execute Google API To BigQuery This does NOT need to be modified unless you are changing the recipe, click play.
from starthinker.util.configuration import execute
from starthinker.util.recipe import json_set_fields

TASKS = [
  {
    'google_api':{
      'auth':{'field':{'name':'auth_read','kind':'authentication','order':1,'default':'user','description':'Credentials used for reading data.'}},
      'api':{'field':{'name':'api','kind':'string','order':1,'default':'displayvideo','description':'See developer guide.'}},
      'version':{'field':{'name':'version','kind':'string','order':2,'default':'v1','description':'Must be supported version.'}},
      'function':{'field':{'name':'function','kind':'string','order':3,'default':'advertisers.list','description':'Full function dot notation path.'}},
      'kwargs':{'field':{'name':'kwargs','kind':'json','order':4,'default':{'partnerId':234340},'description':'Dictionary object of name value pairs.'}},
      'kwargs_remote':{'field':{'name':'kwargs_remote','kind':'json','order':5,'default':{},'description':'Fetch arguments from remote source.'}},
      'key':{'field':{'name':'api_key','kind':'string','order':6,'default':'','description':'Associated with a Google Cloud Project.'}},
      'headers':{
        'developer-token':{'field':{'name':'developer_token','kind':'string','order':7,'default':'','description':'Associated with your organization.'}},
        'login-customer-id':{'field':{'name':'login_customer_id','kind':'string','order':8,'default':'','description':'Associated with your Adwords account.'}}
      },
      'results':{
        'bigquery':{
          'dataset':{'field':{'name':'dataset','kind':'string','order':9,'default':'','description':'Existing dataset in BigQuery.'}},
          'table':{'field':{'name':'table','kind':'string','order':10,'default':'','description':'Table to write API call results to.'}}
        }
      }
    }
  }
]

json_set_fields(TASKS, FIELDS)

execute(CONFIG, TASKS, force=True)
colabs/google_api_to_bigquery.ipynb
google/starthinker
apache-2.0
Generate Features And Target Data
# Generate features matrix and target vector X, y = make_classification(n_samples = 10000, n_features = 3, n_informative = 3, n_redundant = 0, n_classes = 2, random_state = 1)
machine-learning/f1_score.ipynb
tpin3694/tpin3694.github.io
mit
Create Logistic Regression
# Create logistic regression logit = LogisticRegression()
machine-learning/f1_score.ipynb
tpin3694/tpin3694.github.io
mit
Cross-Validate Model Using F1
# Cross-validate model using F1 score
cross_val_score(logit, X, y, scoring="f1")
machine-learning/f1_score.ipynb
tpin3694/tpin3694.github.io
mit
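For reference, F1 is the harmonic mean of precision and recall. A pure-Python sketch of the computation cross_val_score performs per fold (toy labels below, chosen for illustration):

```python
def f1_score_binary(y_true, y_pred):
    """F1 = 2 * precision * recall / (precision + recall) for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

y_true = [1, 1, 1, 0, 0, 0]
y_pred = [1, 1, 0, 1, 0, 0]
# tp=2, fp=1, fn=1 -> precision=2/3, recall=2/3, F1=2/3
```

Because it is a harmonic mean, F1 punishes a model that trades one of precision or recall away for the other.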
Just adding some imports and setting graph display options.
from textblob import TextBlob
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
import seaborn as sns
import cartopy

pd.set_option('display.max_colwidth', 200)
matplotlib.style.use('ggplot')
sns.set_context('talk')
sns.set_style('whitegrid')
plt.rcParams['figure.figsize'] = [12.0, 8.0]
%matplotlib inline
arrows.ipynb
savioabuga/arrows
mit
Let's look at our data! load_df loads it in as a pandas.DataFrame, excellent for statistical analysis and graphing.
df = load_df('arrows/data/results.csv') df.info()
arrows.ipynb
savioabuga/arrows
mit
We'll be looking primarily at candidate, created_at, lang, place, user_followers_count, user_time_zone, polarity, and influenced_polarity, and text.
df[['candidate', 'created_at', 'lang', 'place', 'user_followers_count', 'user_time_zone', 'polarity', 'influenced_polarity', 'text']].head(1)
arrows.ipynb
savioabuga/arrows
mit
First I'll look at sentiment, calculated with TextBlob using the text column. Sentiment is composed of two values, polarity - a measure of the positivity or negativity of a text - and subjectivity. Polarity is between -1.0 and 1.0; subjectivity between 0.0 and 1.0.
TextBlob("Tear down this wall!").sentiment
arrows.ipynb
savioabuga/arrows
mit
Unfortunately, it doesn't work too well on anything other than English.
TextBlob("Radix malorum est cupiditas.").sentiment
arrows.ipynb
savioabuga/arrows
mit
TextBlob has a cool translate() function that uses Google Translate to take care of that for us, but we won't be using it here - just because tweets include a lot of slang and abbreviations that can't be translated very well.
sentence = TextBlob("Radix malorum est cupiditas.").translate() print(sentence) print(sentence.sentiment)
arrows.ipynb
savioabuga/arrows
mit
All right - let's figure out the most (positively) polarized English tweets.
english_df = df[df.lang == 'en']
english_df.sort_values('polarity', ascending = False).head(3)[['candidate', 'polarity', 'subjectivity', 'text']]
arrows.ipynb
savioabuga/arrows
mit
Extrema don't mean much. We might get more interesting data with mean polarities for each candidate. Let's also look at influenced polarity, which takes into account the number of retweets and followers.
candidate_groupby = english_df.groupby('candidate') candidate_groupby[['polarity', 'influence', 'influenced_polarity']].mean()
arrows.ipynb
savioabuga/arrows
mit
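The exact weighting behind influenced_polarity isn't shown in this excerpt; one plausible toy definition — polarity scaled by an influence factor — illustrates the groupby pattern used above. Both the numbers and the formula below are made up for illustration:

```python
import pandas as pd

# Toy tweets: polarity plus a made-up influence weight per tweet
df = pd.DataFrame({
    'candidate': ['a', 'a', 'b', 'b'],
    'polarity': [0.5, -0.2, 0.1, 0.4],
    'influence': [10, 1, 2, 5],
})

# Hypothetical definition: influenced polarity = polarity * influence
df['influenced_polarity'] = df['polarity'] * df['influence']

means = df.groupby('candidate')[['polarity', 'influenced_polarity']].mean()
```

Whatever the real formula, the point of the weighted version is that a positive tweet seen by many followers moves the candidate's mean more than one seen by few.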