markdown | code | path | repo_name | license
---|---|---|---|---|
This code loads a particular snapshot and a particular HOD model. In this case, 'redMagic' is the Zheng07 HOD with the f_c variable added in. | cosmo_params = {'simname':'chinchilla', 'Lbox':400.0, 'scale_factors':[a]}
cat = cat_dict[cosmo_params['simname']](**cosmo_params)#construct the specified catalog!
cat.load_catalog(a)
#cat.h = 1.0
#halo_masses = cat.halocat.halo_table['halo_mvir']
cat.load_model(a, 'redMagic')
hdulist.info() | notebooks/wt Integral calculation.ipynb | mclaughlin6464/pearce | mit |
Take the zspec in our selected zbin to calculate the dN/dz distribution. The cell below calculates the redshift distribution prefactor
$$ W = \frac{2}{c}\int_0^{\infty} dz H(z) \left(\frac{dN}{dz} \right)^2 $$ | nz_zspec = hdulist[8]
#N = 0#np.zeros((5,))
N_total = np.sum([row[2+zbin] for row in nz_zspec.data])
dNdzs = []
zs = []
W = 0
for row in nz_zspec.data:
N = row[2+zbin]
dN = N*1.0/N_total
#volIn, volOut = cat.cosmology.comoving_volume(row[0]), cat.cosmology.comoving_volume(row[2])
#fullsky_volume = volOut-volIn
#survey_volume = fullsky_volume*area/full_sky
#nd = dN/survey_volume
dz = row[2] - row[0]
#print row[2], row[0]
dNdz = dN/dz
H = cat.cosmology.H(row[1])
W+= dz*H*(dNdz)**2
dNdzs.append(dNdz)
zs.append(row[1])
#for idx, n in enumerate(row[3:]):
# N[idx]+=n
W = 2*W/const.c
print W
N_z = [row[2+zbin] for row in nz_zspec.data]
N_total = np.sum(N_z)#*0.01
plt.plot(zs,N_z/N_total)
plt.xlim(0,1.0)
len(dNdzs)
plt.plot(zs, dNdzs)
plt.vlines(z, 0,8)
plt.xlim(0,1.0)
plt.xlabel(r'$z$')
plt.ylabel(r'$dN/dz$')
len(nz_zspec.data)
np.sum(dNdzs)
np.sum(dNdzs)/len(nz_zspec.data)
W.to(1/unit.Mpc) | notebooks/wt Integral calculation.ipynb | mclaughlin6464/pearce | mit |
If we happened to choose a model with assembly bias, set it to 0. Leave all parameters as their defaults, for now. | 4.51077317e-03
params = cat.model.param_dict.copy()
#params['mean_occupation_centrals_assembias_param1'] = 0.0
#params['mean_occupation_satellites_assembias_param1'] = 0.0
params['logMmin'] = 13.4
params['sigma_logM'] = 0.1
params['f_c'] = 0.19
params['alpha'] = 1.0
params['logM1'] = 14.0
params['logM0'] = 12.0
print params
cat.populate(params)
nd_cat = cat.calc_analytic_nd()
print nd_cat
area = 5063 #sq degrees
full_sky = 41253 #sq degrees
volIn, volOut = cat.cosmology.comoving_volume(z_bins[zbin-1]), cat.cosmology.comoving_volume(z_bins[zbin])
fullsky_volume = volOut-volIn
survey_volume = fullsky_volume*area/full_sky
nd_mock = N_total/survey_volume
print nd_mock
nd_mock.value/nd_cat
#compute the mean mass
mf = cat.calc_mf()
HOD = cat.calc_hod()
mass_bin_range = (9,16)
mass_bin_size = 0.01
mass_bins = np.logspace(mass_bin_range[0], mass_bin_range[1], int( (mass_bin_range[1]-mass_bin_range[0])/mass_bin_size )+1 )
mean_host_mass = np.sum([mass_bin_size*mf[i]*HOD[i]*(mass_bins[i]+mass_bins[i+1])/2 for i in xrange(len(mass_bins)-1)])/\
np.sum([mass_bin_size*mf[i]*HOD[i] for i in xrange(len(mass_bins)-1)])
print mean_host_mass
10**0.35
N_total
theta_bins = np.logspace(np.log10(0.004), 0, 24)#/60
tpoints = (theta_bins[1:]+theta_bins[:-1])/2
r_bins = np.logspace(-0.5, 1.7, 16)
rpoints = (r_bins[1:]+r_bins[:-1])/2 | notebooks/wt Integral calculation.ipynb | mclaughlin6464/pearce | mit |
Interpolate with a Gaussian process. May want to do something else "at scale", but this is quick for now. | kernel = ExpSquaredKernel(0.05)
gp = george.GP(kernel)
gp.compute(np.log10(rpoints))
print xi
xi[xi<=0] = 1e-2 #ack
from scipy.stats import linregress
m,b,_,_,_ = linregress(np.log10(rpoints), np.log10(xi))
plt.plot(rpoints, (2.22353827e+03)*(rpoints**(-1.88359)))
#plt.plot(rpoints, b2*(rpoints**m2))
plt.scatter(rpoints, xi)
plt.loglog();
plt.plot(np.log10(rpoints), b+(np.log10(rpoints)*m))
#plt.plot(np.log10(rpoints), b2+(np.log10(rpoints)*m2))
#plt.plot(np.log10(rpoints), 90+(np.log10(rpoints)*(-2)))
plt.scatter(np.log10(rpoints), np.log10(xi) )
#plt.loglog();
print m,b
rpoints_dense = np.logspace(-0.5, 2, 500)
plt.scatter(rpoints, xi)
plt.plot(rpoints_dense, np.power(10, gp.predict(np.log10(xi), np.log10(rpoints_dense))[0]))
plt.loglog(); | notebooks/wt Integral calculation.ipynb | mclaughlin6464/pearce | mit |
This plot looks bad on large scales. I will need to implement a linear bias model for larger scales; however, I believe this is not the cause of this issue. If anything, the overly large correlation function at large scales should increase w(theta).
This plot shows the regimes of concern. The black lines show the value of r for u=0 in the below integral for each theta bin. The red lines show the maximum value of r for the integral I'm performing. | theta_bins_rm = np.logspace(np.log10(2.5), np.log10(250), 21)/60 #binning used in buzzard mocks
tpoints_rm = (theta_bins_rm[1:]+theta_bins_rm[:-1])/2.0
rpoints_dense = np.logspace(-1.5, 2, 500)
x = cat.cosmology.comoving_distance(z)
plt.scatter(rpoints, xi)
plt.plot(rpoints_dense, np.power(10, gp.predict(np.log10(xi), np.log10(rpoints_dense))[0]))
plt.vlines((a*x*np.radians(tpoints_rm)).value, 1e-2, 1e4)
plt.vlines((a*np.sqrt(x**2*np.radians(tpoints_rm)**2+unit.Mpc*unit.Mpc*10**(1.7*2))).value, 1e-2, 1e4, color = 'r')
plt.loglog(); | notebooks/wt Integral calculation.ipynb | mclaughlin6464/pearce | mit |
Perform the below integral in each theta bin:
$$ w(\theta) = W \int_0^\infty du \xi \left(r = \sqrt{u^2 + \bar{x}^2(z)\theta^2} \right) $$
Where $\bar{x}$ is the median comoving distance to z. | x = cat.cosmology.comoving_distance(z)
print x
np.radians(tpoints_rm)
#a subset of the data from above. I've verified it's correct, but we can look again.
wt_redmagic = np.loadtxt('/u/ki/swmclau2/Git/pearce/bin/mcmc/buzzard2_wt_%d%d.npy'%(zbin,zbin))
tpoints_rm
mathematica_calc = np.array([122.444, 94.8279, 73.4406, 56.8769, 44.049, 34.1143, 26.4202, \
20.4614, 15.8466, 12.2726, 9.50465, 7.36099, 5.70081, 4.41506, \
3.41929, 2.64811, 2.05086, 1.58831, 1.23009, 0.952656])#*W | notebooks/wt Integral calculation.ipynb | mclaughlin6464/pearce | mit |
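The cell that actually computes `wt` is not included in this excerpt. Purely as a sketch of how the integral above might be evaluated numerically from the GP-interpolated correlation function (the helper name `compute_wt`, the integration limits, and the neglect of any scale-factor bookkeeping are assumptions, not part of the original notebook):

```python
import numpy as np

def compute_wt(tpoints_rm, gp, xi, x, W, u_max=10**1.7, n_u=1000):
    # w(theta) = W * int_0^inf du xi( sqrt(u^2 + xbar^2 * theta^2) ), truncated at u_max
    us = np.linspace(0.0, u_max, n_u)                 # integration variable u, in Mpc
    wt = np.zeros(len(tpoints_rm))
    for i, theta in enumerate(np.radians(tpoints_rm)):
        rs = np.sqrt(us**2 + (x.value * theta)**2)    # r for each u in this theta bin
        log_xi = gp.predict(np.log10(xi), np.log10(rs))[0]
        wt[i] = W.to("1/Mpc").value * np.trapz(np.power(10, log_xi), us)
    return wt

# wt = compute_wt(tpoints_rm, gp, xi, x, W)
```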
The plot below shows the problem: there appears to be a constant multiplicative offset between the redMagic calculation and the one we just performed. The plot after it shows their ratio, which is near-constant, though there is a small radial trend; whether or not it is significant is tough to say. | print W.value
print W.to("1/Mpc").value
print W.value
from scipy.special import gamma
def wt_analytic(m,b,t,x):
return W.to("1/Mpc").value*b*np.sqrt(np.pi)*(t*x)**(1 + m)*(gamma(-(1./2) - m/2.)/(2*gamma(-(m/2.))) )
plt.plot(tpoints_rm, wt, label = 'My Calculation')
plt.plot(tpoints_rm, wt_redmagic, label = 'Buzzard Mock')
#plt.plot(tpoints_rm, W.to("1/Mpc").value*mathematica_calc, label = 'Mathematica Calc')
#plt.plot(tpoints_rm, wt_analytic(m,10**b, np.radians(tpoints_rm), x),label = 'Mathematica Calc' )
plt.ylabel(r'$w(\theta)$')
plt.xlabel(r'$\theta \mathrm{[degrees]}$')
plt.loglog();
plt.legend(loc='best')
wt_redmagic/(W.to("1/Mpc").value*mathematica_calc)
import cPickle as pickle
with open('/u/ki/jderose/ki23/bigbrother-addgals/bbout/buzzard-flock/buzzard-0/buzzard0_lb1050_xigg_ministry.pkl') as f:
xi_rm = pickle.load(f)
xi_rm.metrics[0].xi.shape
xi_rm.metrics[0].mbins
xi_rm.metrics[0].cbins
#plt.plot(np.log10(rpoints), b2+(np.log10(rpoints)*m2))
#plt.plot(np.log10(rpoints), 90+(np.log10(rpoints)*(-2)))
plt.scatter(rpoints, xi)
for i in xrange(3):
for j in xrange(3):
plt.plot(xi_rm.metrics[0].rbins[:-1], xi_rm.metrics[0].xi[:,i,j,0])
plt.loglog();
plt.subplot(211)
plt.plot(tpoints_rm, wt_redmagic/wt)
plt.xscale('log')
#plt.ylim([0,10])
plt.subplot(212)
plt.plot(tpoints_rm, wt_redmagic/wt)
plt.xscale('log')
plt.ylim([2.0,4])
xi_rm.metrics[0].xi.shape
xi_rm.metrics[0].rbins #Mpc/h | notebooks/wt Integral calculation.ipynb | mclaughlin6464/pearce | mit |
Make a PMF of <tt>numkdhh</tt>, the number of children under 18 in the respondent's household.
Display the PMF.
Define <tt>BiasPmf</tt>. | def BiasPmf(pmf, label=''):
"""Returns the Pmf with oversampling proportional to value.
If pmf is the distribution of true values, the result is the
distribution that would be seen if values are oversampled in
proportion to their values; for example, if you ask students
how big their classes are, large classes are oversampled in
proportion to their size.
Args:
pmf: Pmf object.
label: string label for the new Pmf.
Returns:
Pmf object
"""
new_pmf = pmf.Copy(label=label)
for x, p in pmf.Items():
new_pmf.Mult(x, x)
new_pmf.Normalize()
return new_pmf | code/.ipynb_checkpoints/chap03ex-checkpoint.ipynb | goodwordalchemy/thinkstats_notes_and_exercises | gpl-3.0 |
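As a rough sketch of the first two steps (assuming the ThinkStats2 helper modules nsfg, thinkstats2 and thinkplot are importable, as elsewhere in this repository; the variable names are assumptions):

```python
import nsfg
import thinkstats2
import thinkplot

# Build the PMF of numkdhh from the respondent file and display it
resp = nsfg.ReadFemResp()
pmf = thinkstats2.Pmf(resp.numkdhh, label='numkdhh')
thinkplot.Pmf(pmf)
thinkplot.Config(xlabel='Number of children under 18', ylabel='PMF')
```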
A single-process, univariate example
First we need a process model. In this case it will be a single stochastic process, | process = proc.WienerProcess.create_from_cov(mean=3., cov=0.0001) | src/jupyter/python/particle.ipynb | thalesians/tsa | apache-2.0 |
This we pass to a newly created particle filter, along with the initial time and initial state. The latter takes the form of a normal distribution. We have chosen to use Python datetimes as our data type for time, but we could have chosen ints or something else. | t0 = dt.datetime(2017, 5, 12, 16, 18, 25, 204000)
pf = particle.ParticleFilter(t0, state_distr=N(mean=100., cov=0.0000000000001), process=process) | src/jupyter/python/particle.ipynb | thalesians/tsa | apache-2.0 |
Next we create an observable, which incorporates a particular observation model. In this case, the observation model is particularly simple, since we are observing the entire state of the particle filter. Our observation model is a 1x1 identity: | observable = pf.create_observable(kalman.LinearGaussianObsModel.create(1.), process) | src/jupyter/python/particle.ipynb | thalesians/tsa | apache-2.0 |
We confirm that this is consistent with how our (linear-Gaussian) process model scales over time: | np.mean(pf._prior_particles), 100. + 3./24.
prior_predicted_obs1 = observable.predict(t1)
prior_predicted_obs1
npt.assert_almost_equal(prior_predicted_obs1.distr.mean, 100. + 3./24.)
npt.assert_almost_equal(prior_predicted_obs1.distr.cov, 250. + 25./24.)
npt.assert_almost_equal(prior_predicted_obs1.cross_cov, prior_predicted_obs1.distr.cov) | src/jupyter/python/particle.ipynb | thalesians/tsa | apache-2.0 |
A multi-process, multivariate example
The real power of our particle filter interface is demonstrated for process models consisting of several (independent) stochastic processes: | process1 = proc.WienerProcess.create_from_cov(mean=3., cov=25.)
process2 = proc.WienerProcess.create_from_cov(mean=[1., 4.], cov=[[36.0, -9.0], [-9.0, 25.0]]) | src/jupyter/python/particle.ipynb | thalesians/tsa | apache-2.0 |
Such models are common in finance, where, for example, the dynamics of a yield curve may be represented by a (multivariate) stochastic process, whereas the idiosyncratic spread for each bond may be an independent stochastic process.
Let us pass process1 and process2 as a (compound) process model to our particle filter, along with the initial time and state: | t0 = dt.datetime(2017, 5, 12, 16, 18, 25, 204000)
kf = kalman.KalmanFilter(
t0,
state_distr=N(
mean=[100.0, 120.0, 130.0],
cov=[[250.0, 0.0, 0.0],
[0.0, 360.0, 0.0],
[0.0, 0.0, 250.0]]),
process=(process1, process2)) | src/jupyter/python/particle.ipynb | thalesians/tsa | apache-2.0 |
We shall now create several observables, each corresponding to a distinct observation model. The first one will observe the entire state: | state_observable = kf.create_observable(
kalman.KalmanFilterObsModel.create(1.0, np.eye(2)),
process1, process2) | src/jupyter/python/particle.ipynb | thalesians/tsa | apache-2.0 |
The second observable will observe the first coordinate of the first process: | coord0_observable = kf.create_observable(
kalman.KalmanFilterObsModel.create(1.),
process1) | src/jupyter/python/particle.ipynb | thalesians/tsa | apache-2.0 |
The third, the first coordinate of the second process: | coord1_observable = kf.create_observable(
kalman.KalmanFilterObsModel.create(npu.row(1., 0.)),
process2) | src/jupyter/python/particle.ipynb | thalesians/tsa | apache-2.0 |
The fourth, the second coordinate of the second process: | coord2_observable = kf.create_observable(
kalman.KalmanFilterObsModel.create(npu.row(0., 1.)),
process2) | src/jupyter/python/particle.ipynb | thalesians/tsa | apache-2.0 |
The fifth will observe the sum of the entire state (across the two processes): | sum_observable = kf.create_observable(
kalman.KalmanFilterObsModel.create(npu.row(1., 1., 1.)),
process1, process2) | src/jupyter/python/particle.ipynb | thalesians/tsa | apache-2.0 |
And the sixth a certain linear combination thereof: | lin_comb_observable = kf.create_observable(
kalman.KalmanFilterObsModel.create(npu.row(2., 0., -3.)),
process1, process2) | src/jupyter/python/particle.ipynb | thalesians/tsa | apache-2.0 |
1. Please rewrite the following functions as lambda expressions
Example:
```
def AddOne(x):
y=x+1
return y
addOneLambda = lambda x: x+1
``` | def foolOne(x): # note: assume x is a number
y = x * 2
y -= 25
return y
## Type Your Answer Below ##
foolOne_lambda = lambda x: x*2-25
# Generate a random 3*4 matrix for test
tlist = np.random.randn(3,4)
tlist
# Check if the lambda function yields same results as previous function
def test_foolOne(tlist, func1, func2):
if func1(tlist).all() == func2(tlist).all():
print("Same results!")
test_foolOne(tlist, foolOne, foolOne_lambda)
def foolTwo(x): # note: assume x here is a string
if x.startswith('g'):
return True
else:
return False
## Type Your Answer Below ##
foolTwo_lambda = lambda x: x.startswith('g')
# Generate a random 3*4 matrix of strings for test
# reference: https://pythontips.com/2013/07/28/generating-a-random-string/
# reference: http://www.programcreek.com/python/example/1246/string.ascii_lowercase
import random
import string
def random_string(size):
new_string = ''.join([random.choice(string.ascii_letters + string.digits) for n in range(size)])
return new_string
def test_foolTwo():
test_string = random_string(6)
return foolTwo_lambda(test_string) == foolTwo(test_string)
for i in range(10):
if not test_foolTwo():
print('Different results!') | DS_HW1_Huimin Qian_052617.ipynb | emmaqian/DataScientistBootcamp | mit |
2. What's the difference between tuple and list? | ## Type Your Answer Below ##
# reference: https://docs.python.org/3/tutorial/datastructures.html
# tuples are immutable: they cannot be changed once they are made.
# tuples are easier for the python interpreter to deal with and can therefore end up being slightly faster
# tuples might indicate that each entry has a distinct meaning and their order has some meaning (e.g., year)
# Another pragmatic reason to use tuple is when you have data which you know should not be changed (e.g., constant)
# tuples can be used as keys in dictionaries
# tuples usually contain a heterogeneous sequence of elements that are accessed via unpacking or indexing (or even by attribute in the case of namedtuples).
tuple1 = (1, 2, 3, 'a', True)
print('tuple: ', tuple1)
print('1st item of tuple: ', tuple1[0])
tuple1[0] = 4 # item assignment won't work for tuple
# tuple with just one element
tuple2 = (1) # just a number, so has no elements
print(type(tuple2))
tuple2[0]
# tuple with just one element
tuple3 = (1, )
print(type(tuple3))
tuple3[0]
# Question for TA: is tuple comprehension supported?
tuple4 = (char for char in 'abcdabcdabcd' if char not in 'ac') # note: this is a generator expression, not a tuple; wrap it in tuple(...) to get a tuple
print(tuple4)
# Question for TA: is the following two tuples the same?
tuple4= (1,2,'a'),(True, False)
tuple5 = ((1,2,'a'),(True, False))
print(tuple4)
print(tuple5)
# lists' elements are usually homogeneous and are accessed by iterating over the list.
list1 = [1, 2, 3, 'a', True]
print('list1: ', list1)
print('1st item of list: ', list1[0])
list1[0] = 4 # item assignment works for list
# list comprehensions
list_int = [element for element in list1 if type(element)==int]
print("list_int", list2)
## Type Your Answer Below ##
# A set is an unordered collection with no duplicate elements.
# set() can be used to eliminate duplicate entries
list1 = ['apple', 'orange', 'apple', 'pear', 'orange', 'banana']
set1 = set(list1)
print(set1)
# set can be used for membership testing
set2 = {1, 2, 'abc', True}
print('abc' in set2) # membership testing
set1[0] # set does not support indexing
# set comprehensions
set4 = {char for char in 'abcdabcdabcd' if char not in 'ac'}
print(set4) | DS_HW1_Huimin Qian_052617.ipynb | emmaqian/DataScientistBootcamp | mit |
3. Why is a set faster than a list in Python?
Answers:
Set and list are implemented using two different data structures - Hash tables and Dynamic arrays.
. Python lists are implemented as dynamic arrays (which preserve insertion order) and must be searched one element at a time, comparing every member for equality, so lookup speed is O(n) in the size of the list.
. Python sets are implemented as hash tables, which can directly jump and locate the bucket (the position determined by the object's hash) using hash in a constant speed O(1), regardless of the size of the set. | # Calculate the time cost differences between set and list
import time
import random
def compute_search_speed_difference(scope):
list1 = []
dic1 = {}
set1 = set(dic1)
for i in range(0,scope):
list1.append(i)
set1.add(i)
random_n = random.randint(0,100000) # look for this random integer in both list and set
list_search_starttime = time.time()
list_search = random_n in list1
list_search_endtime = time.time()
list_search_time = list_search_endtime - list_search_starttime # Calculate the look-up time in list
#print("The look up time for the list is:")
#print(list_search_time)
set_search_starttime = time.time()
set_search = random_n in set1
set_search_endtime = time.time()
set_search_time = set_search_endtime - set_search_starttime # Calculate the look-up time in set
#print("The look up time for the set is:")
#print(set_search_time)
speed_difference = list_search_time - set_search_time
return(speed_difference)
def test(testing_times, scope):
test_speed_difference = []
for i in range(0,testing_times):
test_speed_difference.append(compute_search_speed_difference(scope))
return(test_speed_difference)
#print(test(1000, 100000)) # test 10 times can print out the time cost differences
print("On average, the look up time for a list is more than a set in:")
print(np.mean(test(100, 1000))) | DS_HW1_Huimin Qian_052617.ipynb | emmaqian/DataScientistBootcamp | mit |
4. What's the major difference between array in numpy and series in pandas?
Pandas Series (which can contain values of different data types) are much more general and flexible than one-dimensional Numpy arrays (which can only contain one data type).
While a Numpy array has an implicitly defined integer index used to access the values, a Pandas Series has an explicitly defined index (which can be of any data type) associated with the values, which gives the Series object additional capabilities.
What are the relationships among Numpy, Pandas and SciPy?
. Numpy is a library for efficient array computations, modeled after Matlab. Arrays differ from plain Python lists in the way they are stored and handled. Array elements stay together in memory, so they can be quickly accessed. Numpy also supports quick subindexing (a[0,:,2]). Furthermore, Numpy provides vectorized mathematical functions (when you call numpy.sin(a), the sine function is applied to every element of array a), which are faster than a Python for loop.
. The Pandas library is good for analyzing tabular data for exploratory data analysis, statistics and visualization. It's used to understand the data you have.
. Scipy provides a large menu of libraries for scientific computation, such as integration, interpolation, signal processing, linear algebra and statistics. It's built upon the infrastructure of Numpy. It's good for performing scientific and engineering calculations.
. Scikit-learn is a collection of advanced machine-learning algorithms for Python. It is built upon Numpy and SciPy. It's good to use the data you have to train a machine-learning algorithm. | ## Type Your Answer Below ##
student = np.array([0, 'Alex', 3, 'M'])
print(student) # all the values' datatype is converted to str | DS_HW1_Huimin Qian_052617.ipynb | emmaqian/DataScientistBootcamp | mit |
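A tiny illustration of the difference (the variable names here are only for the example):

```python
import numpy as np
import pandas as pd

arr = np.array([0.25, 0.5, 0.75, 1.0])        # implicit integer positions 0..3, single dtype
ser = pd.Series([0.25, 0.5, 0.75, 1.0],
                index=['a', 'b', 'c', 'd'])   # explicit, user-defined index

print(arr[1])      # access by position -> 0.5
print(ser['b'])    # access by label    -> 0.5
print(ser.values)  # the underlying numpy array
```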
Questions 5-11 are related to the Titanic data (train.csv) on the Kaggle website
You can download the data from the following link:<br />https://www.kaggle.com/c/titanic/data
5. Read titanic data (train.csv) into pandas dataframe, and display a sample of data. | ## Type Your Answer Below ##
import pandas as pd
df = pd.read_csv('https://raw.githubusercontent.com/pcsanwald/kaggle-titanic/master/train.csv')
df.sample(3)
df.tail(3)
df.describe()
df.info() | DS_HW1_Huimin Qian_052617.ipynb | emmaqian/DataScientistBootcamp | mit |
6. What's the percentage of null value in 'Age'? | ## Type Your Answer Below ##
len(df[df.age.isnull()])/len(df)*100
| DS_HW1_Huimin Qian_052617.ipynb | emmaqian/DataScientistBootcamp | mit |
7. How many unique classes in 'Embarked' ? | ## Type Your Answer Below ##
df.embarked.value_counts()
print('number of classes: ', len(df.embarked.value_counts().index))
print('names of classes: ', df.embarked.value_counts().index)
# Another method
embarked_set = set(df.embarked)
print(df.embarked.unique()) | DS_HW1_Huimin Qian_052617.ipynb | emmaqian/DataScientistBootcamp | mit |
8. Compare the survival chances of male and female passengers.
Please use pandas to plot a chart you think can address this question | ## Type Your Answer Below ##
male_survived = df[df.survived==1][df.sex=='male']
male_survived_n = len(df.query('''sex=='male' and survived ==1'''))
female_survived = df[df.survived==1][df.sex=='female']
female_survived_n = len(df.query('''sex=='female' and survived ==1'''))
df_survived = pd.DataFrame({'male':male_survived_n, 'female': female_survived_n}, index=['Survived_number'])
df_survived
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
df_survived.plot(kind='bar', title='survived female and male', legend='True')
sns.pointplot(x='embarked', y='survived', hue='sex', data=df, palette={'male':'blue', 'female':'pink'}, markers=["*", "o"], linestyles=['-', '--'])
grid = sns.FacetGrid(df, col='embarked')
grid.map(sns.pointplot, 'pclass', 'survived', 'sex', palette={'male':'blue', 'female':'pink'}, markers=["*", "o"], linestyles=['-', '--'])
grid.add_legend()
grid = sns.FacetGrid(df, col='pclass')
grid.map(sns.barplot, 'embarked', 'age', 'sex')
grid.add_legend() | DS_HW1_Huimin Qian_052617.ipynb | emmaqian/DataScientistBootcamp | mit |
Observations from barplot above:
In Pclass = 1 and 2, females have a higher mean age than males, but in Pclass = 3, females have a lower mean age than males.
Passengers in Pclass = 1 have the highest average age, followed by Pclass = 2 and Pclass = 3.
The age trend across Embarked is not obvious.
Decisions:
Use 'Pclass' and 'Sex' in estimating missing values in 'Age'.
9. Show the table of passengers who are 23 years old. | ## Type Your Answer Below ##
df_23 = df.query('''age==23''')
df_23 | DS_HW1_Huimin Qian_052617.ipynb | emmaqian/DataScientistBootcamp | mit |
10. Is there a Jack or Rose in our dataset? | # first split name into string lists by ' '
def format_name(df):
df['split_name'] = df.name.apply(lambda x: x.split(' '))
return df
print(format_name(df).sample(3).split_name, '\n')
# for each subset string of name, check if "jack" or "rose" in it
for i in format_name(df).split_name:
for l in i:
if (("jack" in l.lower()) | ("rose" in l.lower()) ):
print("found names that contain jack or rose: ", l) | DS_HW1_Huimin Qian_052617.ipynb | emmaqian/DataScientistBootcamp | mit |
11. What's the percentage of surviving when passangers' pclass is 1? | ## Type Your Answer Below ##
df4 = df.query('''pclass==1''')
def percent(x):
m = int(x.count())
n = m/len(df4)
return(n)
df[['survived','pclass']].query('''pclass==1''').groupby([ 'survived']).agg({'pclass':percent}) | DS_HW1_Huimin Qian_052617.ipynb | emmaqian/DataScientistBootcamp | mit |
Series | ser = pd.Series(data = [ 100, 200, 300, 400, 500],
index = ['tom', 'bob', 'nancy', 'dan', 'eric'])
ser
ser.index # list of indices
# we can use rectangular brackets to access data at that location
print(ser['nancy'])
print(ser.loc['nancy']) # we can explicitly use the loc (location) function
# accessing multiple locations
print(ser[['nancy', 'bob']])
print()
print(ser[[4, 3, 1]])
print()
print(ser.iloc[[2]]) # we can explicitly use the iloc (ilocation) function
# check if an index exists in the Series
'bob' in ser
# multiply whole Series by two
ser * 2 | courses/python_for_data_analysis/week4_pandas.ipynb | jayme-anchante/cv-bio | mit |
DataFrame | # create a DataFrame from a dictionary
d = {'one': pd.Series([100., 200., 300.], index = ['apple', 'ball', 'clock']),
'two': pd.Series([111., 222., 333., 444.], index = ['apple', 'ball', 'cerill', 'dancy'])}
df = pd.DataFrame(d)
df
df.index # indices
df.columns # columns
# subsetting by indices
pd.DataFrame(d, index = ['dancy', 'ball', 'apple'])
# subsetting, but adding a new column
pd.DataFrame(d, index = ['dancy', 'ball', 'apple'], columns = ['one', 'five'])
# create a DataFrame from a Python list of dictionaries
data = [{'alex': 1, 'joe': 2}, {'ema': 5, 'dora': 10, 'alice': 20}]
pd.DataFrame(data) # indices are inferred
pd.DataFrame(data, index = ['orange', 'red']) #inserting indices
# column subsetting
pd.DataFrame(data, columns = ['joe', 'dora', 'alice']) | courses/python_for_data_analysis/week4_pandas.ipynb | jayme-anchante/cv-bio | mit |
Basic DataFrame operations | df
# slice one column
df['one']
df['three'] = df['one'] * df['two']
df
# logical operation
df['flag'] = df['one'] > 250
df
# remove data from DataFrame using the pop function
three = df.pop('three')
three
df
# we could also use the del function
del df['two']
df
# creates a new column from another existing column
df.insert(2, 'copy_of_one', df['one'])
df
# get the first two values and assign it to a new column
df['one_upper_half'] = df['one'][:2]
df | courses/python_for_data_analysis/week4_pandas.ipynb | jayme-anchante/cv-bio | mit |
4.1.4. Pandas: Data Ingestion
csv (comma-separated values) using pandas.read_csv. json using pandas.read_json. html (hyper-text markup language) using read_html; the output is a list of Pandas DataFrames. sql (structured query language) using read_sql_query. The pandas.read_sql_table function reads an entire SQL table.
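A rough sketch of these readers (the file names, URL and sqlite connection below are placeholders for illustration, not files from this project):

```python
import pandas as pd
import sqlite3

df_csv  = pd.read_csv('data.csv')                       # comma-separated values
df_json = pd.read_json('data.json')                     # JSON records
tables  = pd.read_html('http://example.com/page.html')  # list of DataFrames, one per <table>

conn   = sqlite3.connect('data.db')
df_sql = pd.read_sql_query('SELECT * FROM ratings', conn)
```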
4.1.5. Live Code: Data Ingestion | !ls ./ml-latest-small # contents of the movie-lens
!cat ./ml-latest-small/movies.csv
!cat ./ml-latest-small/movies.csv | wc -l # number of movies
!head -5 ./ml-latest-small/tags.csv
!head -5 ./ml-latest-small/ratings.csv | courses/python_for_data_analysis/week4_pandas.ipynb | jayme-anchante/cv-bio | mit |
Let's load the movies.csv, tags.csv and ratings.csv using the pandas.read_csv function | import pandas as pd
movies = pd.read_csv('./ml-latest-small/movies.csv')
print(type(movies))
movies.head()
tags = pd.read_csv('./ml-latest-small/tags.csv')
tags.head()
ratings = pd.read_csv('./ml-latest-small/ratings.csv')
ratings.head()
# later we'll work on timestamps, for now we'll deleted them
del ratings['timestamp']
del tags['timestamp'] | courses/python_for_data_analysis/week4_pandas.ipynb | jayme-anchante/cv-bio | mit |
Playing with data structures | # extract the 0th row, notice it's indeed a Series
row_0 = tags.iloc[0]
type(row_0)
print(row_0)
row_0.index
row_0['userId']
'rating' in row_0
row_0.name
row_0 = row_0.rename('first_row')
row_0.name
tags.head()
tags.index
tags.columns
# extract row 0, 11, 1000 from DataFrame
tags.iloc[[0, 11, 1000]] | courses/python_for_data_analysis/week4_pandas.ipynb | jayme-anchante/cv-bio | mit |
4.1.6. Pandas: Descriptive Statistics
describe() shows summary statistics, corr() shows pairwise Pearson coefficient of columns, min(), max(), mode(), median(). Generally the syntax is dataframe.function(), frequently used optional parameter is axis = 0 (rows) or 1 (columns).
Also, the logical any() returns whether any element is True, and all() returns whether all elements are True.
Other functions: count(), clip(), rank(), round()
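The live-code cells below exercise describe(), mean(), min(), max(), std(), mode(), corr(), any() and all(); here is a quick sketch of the remaining helpers on a throwaway Series (the values are arbitrary):

```python
import pandas as pd

s = pd.Series([3.14159, 2.71828, 1.41421, None])
print(s.count())         # 3 -> number of non-null values
print(s.clip(1.5, 3.0))  # values squeezed into the interval [1.5, 3.0]
print(s.rank())          # rank of each value (NaN stays NaN)
print(s.round(2))        # rounded to 2 decimal places
```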
4.1.7. Live Code: Descriptive Statistics | ratings['rating'].describe()
ratings['rating'].mean()
ratings['rating'].min()
ratings['rating'].max()
ratings['rating'].std()
ratings['rating'].mode()
ratings.corr()
filter1 = ratings['rating'] > 5
filter1.any()
filter2 = ratings['rating'] > 0
filter2.all() | courses/python_for_data_analysis/week4_pandas.ipynb | jayme-anchante/cv-bio | mit |
4.2. Working with Pandas Part 2
4.2.1. Pandas: Data Cleaning
Real world is messy: missings, outliers, invalid, NaN, None etc.
Handling the problem: replace the value, fill the gaps, drop fields, interpolation.
Some functions: df.replace(), df.fillna(method='ffill'|'backfill') (forward or backward fill), df.dropna(axis=0|1), df.interpolate().
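The live code below only demonstrates dropna(); a minimal sketch of the other options on a toy Series (the values are made up):

```python
import numpy as np
import pandas as pd

s = pd.Series([1.0, np.nan, np.nan, 4.0])
print(s.fillna(method='ffill'))     # forward fill:  1, 1, 1, 4
print(s.fillna(method='backfill'))  # backward fill: 1, 4, 4, 4
print(s.interpolate())              # linear interpolation: 1, 2, 3, 4
print(s.replace(4.0, 40.0))         # replace a specific value
```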
4.2.2. Live Code: Data Cleaning | !ls
!ls ml-latest-small/
import pandas as pd
movies = pd.read_csv('./ml-latest-small/movies.csv')
ratings = pd.read_csv('./ml-latest-small/ratings.csv')
tags = pd.read_csv('./ml-latest-small/tags.csv')
movies.shape
# is any row NULL?
movies.isnull().any()
ratings.shape
# is any row NULL?
ratings.isnull().any()
tags.shape
# is any row NULL?
import numpy as np
tags['tag'][:5] = np.nan
tags.isnull().any()
tags = tags.dropna()
# check again: is any row NULL?
tags.isnull().any() | courses/python_for_data_analysis/week4_pandas.ipynb | jayme-anchante/cv-bio | mit |
4.2.3. Pandas: Data Visualization
df.plot.bar() - bar charts, df.plot.box() - box plots, df.plot.hist() - histograms, df.plot() - line graphs etc.
4.2.4. Live Code: Data Visualization | %matplotlib inline
ratings.hist(column = 'rating', figsize = (10, 5));
ratings.boxplot(column = 'rating'); | courses/python_for_data_analysis/week4_pandas.ipynb | jayme-anchante/cv-bio | mit |
4.2.5. Pandas: Frequent Data Operations
df['sensor1'] - slice a column, df[df['sensor2'] > 0] - filter out rows from a column, df['sensor4'] = df['sensor1'] ** 2 - create a new column, df.loc[10] = [10, 20, 30, 40] - insert a new row, df.drop(df.index[[5]]) - delete the 5th row from DataFrame, del df['sensor4'] - delete a column, df.groupby('student_id').mean() - mean of grades by student etc.
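A small self-contained sketch of these operations on a made-up sensor DataFrame (the column names mirror the examples above):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randn(6, 3), columns=['sensor1', 'sensor2', 'sensor3'])

col = df['sensor1']                 # slice a column
positive = df[df['sensor2'] > 0]    # filter rows on a condition
df['sensor4'] = df['sensor1'] ** 2  # create a new column
df.loc[10] = [10, 20, 30, 40]       # insert a new row (four columns at this point)
df = df.drop(df.index[[5]])         # delete the row at position 5
del df['sensor4']                   # delete a column
```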
4.2.6. Live Code: Frequent Data Operations
Slicing | tags['tag'].head() # head of the tag column
movies[['title', 'genres']].head() # head of the title and genres columns
ratings[1000:1010] # rows 1000 to 1010 from ratings df
ratings[-10:] # last ten rows of ratings
tag_counts = tags['tag'].value_counts() # count the number of unique values in the columns tag from tags
tag_counts[:10] # top 10 tag counts
tag_counts[:10].plot(kind = 'bar', figsize = (10, 5)); | courses/python_for_data_analysis/week4_pandas.ipynb | jayme-anchante/cv-bio | mit |
Filter | is_highly_rated = ratings['rating'] >= 4.0 # filter movies with a rating more or equal to 4.0
ratings[is_highly_rated][-5:] # bottom 5 movies
is_animation = movies['genres'].str.contains('Animation') # search for the Animation string in the genres column
movies[is_animation][5:15]
movies[movies['title'].str.contains('Christmas')].head() # search for movies titles that contain the string Christmas | courses/python_for_data_analysis/week4_pandas.ipynb | jayme-anchante/cv-bio | mit |
Groupby and Aggregate | ratings_count = ratings[['movieId', 'rating']].groupby('rating').count() # number of movies by rating grade
ratings_count
average_rating = ratings[['movieId', 'rating']].groupby('movieId').mean() # average rating grade by movieId
average_rating.tail()
movie_count = ratings[['movieId', 'rating']].groupby('movieId').count() # how many ratings per movie?
movie_count.head() | courses/python_for_data_analysis/week4_pandas.ipynb | jayme-anchante/cv-bio | mit |
4.3. Working with Pandas Part 3
4.3.1. Pandas: Merging DataFrames
pd.concat([left, right]): stack DataFrames vertically (one on top of the other)
pd.concat([left, right], axis = 1, join = 'inner'): stack DataFrames horizontally, preserve both key columns
left.append(right): the same of concat, but it is a DataFrame function
pd.merge(left, right, how = 'inner'): the same as concat horizontally, but dumps duplicate key columns
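A compact sketch of the four combinations on two toy DataFrames that share a 'key' column (the data is made up):

```python
import pandas as pd

left  = pd.DataFrame({'key': ['a', 'b', 'c'], 'lval': [1, 2, 3]})
right = pd.DataFrame({'key': ['b', 'c', 'd'], 'rval': [4, 5, 6]})

stacked    = pd.concat([left, right])                        # stack vertically
sidebyside = pd.concat([left, right], axis=1, join='inner')  # stack horizontally, keep both key columns
appended   = left.append(right)                              # same result as the vertical concat
merged     = pd.merge(left, right, on='key', how='inner')    # single key column kept
```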
Live Code | tags.head()
movies.head()
t = movies.merge(tags, on = 'movieId', how = 'inner')
t.head() | courses/python_for_data_analysis/week4_pandas.ipynb | jayme-anchante/cv-bio | mit |
Combine aggregation, merging, and filters to get useful analytics | avg_ratings = ratings.groupby('movieId', as_index = False).mean() # average movie rating
del avg_ratings['userId'] # delete unused columns
del avg_ratings['timestamp'] # delete unused column
avg_ratings.head()
box_office = movies.merge(avg_ratings, on = 'movieId', how = 'inner') # merge DataFrames
box_office.tail()
is_highly_rated = box_office['rating'] >= 4.0
box_office[is_highly_rated][-5:]
is_comedy = box_office['genres'].str.contains('Comedy')
box_office[is_comedy][:5]
box_office[is_comedy & is_highly_rated][-5:] | courses/python_for_data_analysis/week4_pandas.ipynb | jayme-anchante/cv-bio | mit |
4.3.2. Pandas: Frequent String Operations
str.split(): splits a string around a delimiter character
str.contains(): check whether a given string contains a given substring
str.replace(): replace some characters with another set of characters
str.extract(): extract substrings that match a regular expression pattern | import pandas as pd
import re
city = pd.DataFrame(('city_' + str(i) for i in range(4)), columns = ['city'])
city
# extract words in the strings
city['city'].str.extract('([a-z]\w{0,})')
# extract single digit in the strings
city['city'].str.extract('(\d)')
import pandas as pd
movies = pd.read_csv('./ml-latest-small/movies.csv')
ratings = pd.read_csv('./ml-latest-small/ratings.csv')
tags = pd.read_csv('./ml-latest-small/tags.csv')
movies.head()
# split 'genres' into multiple columns
movies_genres = movies['genres'].str.split('|', expand = True)
movies_genres[:10]
# by default, split() will return a series of lists, by providing expand = True, we make it returns a DataFrame
# add a new column for comedy genre flag
movies_genres['isComedy'] = movies['genres'].str.contains('Comedy')
movies_genres[:10]
# extract the year from the title
movies['year'] = movies['title'].str.extract('.*\((.*)\).*', expand = True)
movies.tail() | courses/python_for_data_analysis/week4_pandas.ipynb | jayme-anchante/cv-bio | mit |
4.3.3. Pandas: Parsing Timestamps
Unix time tracks the progress of time by counting the number of seconds since an arbitrary reference date, 1970-01-01 00:00 UTC. The generic data type is datetime64[ns].
The pandas.to_datetime() function parses timestamps. Now we can filter and sort the data by date. | tags['parsed_time'] = pd.to_datetime(tags['timestamp'], unit = 's')
tags.head()
tags[tags['parsed_time'] > '2015-02-01'].head()
tags.sort_values(by = 'parsed_time', ascending = True)[:10]
tags.dtypes
tags['parsed_time'].dtype | courses/python_for_data_analysis/week4_pandas.ipynb | jayme-anchante/cv-bio | mit |
Are movie ratings related to the year of launch? | average_rating = ratings[['movieId', 'rating']].groupby('movieId', as_index = False).count()
average_rating.head(5)
joined = movies.merge(average_rating, on = 'movieId', how = 'inner')
joined.corr()
yearly_average = joined[['year', 'rating']].groupby('year', as_index = False ).count()
yearly_average = yearly_average[yearly_average['year'] != '2007-'] # remove a '2007-' row from DataFrame
%matplotlib inline
# yearly_average.sort_values(by = 'year', ascending = True)[-20:].plot(x = 'year', y = 'rating',
# figsize = (10, 5), grid = True)
yearly_average[-20:].plot(x = 'year', y = 'rating', figsize = (10, 5), grid = True); | courses/python_for_data_analysis/week4_pandas.ipynb | jayme-anchante/cv-bio | mit |
4.3.4. Pandas: Summary of Movie Rating Notebook
Data Ingestion (Importing), Statistical Analysis, Data Cleaning, Data Visualization, Data Transformation, Merging DataFrames, String Operations, Timestamps.
4.3.5. Coding Practice
4.3.6. Pandas Discussion
4.3.7. Pandas Efficiency - Extra Video Resource
4.4. Assessment | movies.isnull().any() | courses/python_for_data_analysis/week4_pandas.ipynb | jayme-anchante/cv-bio | mit |
Signal creation
Below we create a fake Gaussian signal for the example. | nb_points =100
x = np.sort(np.random.uniform(0,100,nb_points)) # increasing point
y = 120.0*scipy.stats.norm.pdf(x,loc=50,scale=5)
plt.plot(x,y)
plt.ylabel("Y")
plt.xlabel("X")
plt.show() | examples/Normalisation.ipynb | charlesll/RamPy | gpl-2.0 |
We can consider that the area of the Gaussian peak should be equal to 1, as that is the value of the integral of a Gaussian distribution.
To normalise the spectra, we can do: | y_norm_area = rp.normalise(y,x=x,method="area")
plt.plot(x,y_norm_area)
plt.ylabel("Y")
plt.xlabel("X")
plt.show() | examples/Normalisation.ipynb | charlesll/RamPy | gpl-2.0 |
We could also just want the signal to lie between 0 and 1, so we normalise to the maximum: | y_norm_area = rp.normalise(y,method="intensity")
plt.plot(x,y_norm_area)
plt.ylabel("Y")
plt.xlabel("X")
plt.show() | examples/Normalisation.ipynb | charlesll/RamPy | gpl-2.0 |
Now, if our signal intensity were shifted from 0 by a constant, the "intensity" method would not work well. For instance, I can add 1 to y and plot it. | y2 = y + 1
plt.plot(x,y2)
plt.ylabel("Y")
plt.xlabel("X")
plt.ylim(0,12)
plt.show() | examples/Normalisation.ipynb | charlesll/RamPy | gpl-2.0 |
In this case, the "intensity" method will not work well: | y_norm_area = rp.normalise(y2,method="intensity")
plt.plot(x,y_norm_area)
plt.ylabel("Y")
plt.xlabel("X")
plt.ylim(0,1)
plt.show() | examples/Normalisation.ipynb | charlesll/RamPy | gpl-2.0 |
The signal remains shifted from 0. For safety, we can do a min-max normalisation, which will put the minimum to 0 and maximum to 1: | y_norm_area = rp.normalise(y2,method="minmax")
plt.plot(x,y_norm_area)
plt.ylabel("Y")
plt.xlabel("X")
plt.show() | examples/Normalisation.ipynb | charlesll/RamPy | gpl-2.0 |
The Dataset | train = pd.read_csv("train.csv", index_col='PassengerId')
test = pd.read_csv("test.csv", index_col='PassengerId')
train.head(3)
test.head(3)
# print(train.shape)
# print(test.shape)
print('Number of features: {}'.format(test.shape[1]))
print('Training samples: {}'.format(train.shape[0]))
print('Test samples: {}'.format(test.shape[0]))
print('Total number of samples: {}'.format(train.shape[0]+test.shape[0])) | Predicting Survival on the Titanic.ipynb | rayjustinhuang/DataAnalysisandMachineLearning | mit |
The data contains the following features:
PassengerId - a number describing a unique passenger
Survived - the binary dependent variable indicating whether a passenger survived (1) or died (0)
Pclass - the passenger's class, from first class (1) to third class (3)
Name
Sex
Age
SibSp - the number of siblings or spouses aboard
Parch - the number of parents or children aboard
Ticket - the ticket number
Fare - the fare that the passenger paid
Cabin - the cabin number the passenger stayed in
Embarked - the port where the passenger embarked, whether at Cherbourg (C), Queenstown (Q), or Southampton (S)
It's time to explore the dataset to get a general idea of what it's like.
Exploratory Data Analysis
We first do some general overviews of the data via summary statistics and histograms before moving on to preprocessing. | # First, combine datasets
total = pd.concat([train, test])
# View summary statistics
total.describe() | Predicting Survival on the Titanic.ipynb | rayjustinhuang/DataAnalysisandMachineLearning | mit |
Most numerical data appear to be fairly complete, with the exception of fare (which only has one missing value) and age (which has 263 missing values). We can deal with the missing values later.
Let's also visualize the data with histograms to see the general distribution of the data. | # Generate histograms
sns.set_color_codes('muted')
total.hist(color='g')
plt.tight_layout()
plt.show() | Predicting Survival on the Titanic.ipynb | rayjustinhuang/DataAnalysisandMachineLearning | mit |
A fairly obvious observation here is that the PassengerId variable is not very useful -- we should drop this column. The rest of the data is quite interesting, with most passengers being somewhat young (around 20 to 30 years of age) and most people traveling without too much family.
Pclass serves as a proxy for the passengers' socioeconomic strata. Interestingly, the middle class appears to be the smallest in size, though not by much compared to upper-class passengers.
Looking at the data, the ticket number does not appear to be too informative. | totalwithoutnas = total.dropna()
scattermatrix = sns.pairplot(totalwithoutnas)
plt.show() | Predicting Survival on the Titanic.ipynb | rayjustinhuang/DataAnalysisandMachineLearning | mit |
Data Preprocessing
The first thing we should do is drop columns that will not be particularly helpful in our analysis. This includes the Ticket variable identified previously. | total.drop('Ticket', axis=1, inplace=True) | Predicting Survival on the Titanic.ipynb | rayjustinhuang/DataAnalysisandMachineLearning | mit |
Feature Engineering
A number of the variables in the data present opportunities to generate further meaningful features. One particular feature that appears to contain a lot of meaning is the names of the passengers. As in Megan's notebook, we will be able to extract titles (which are indicative of both gender and marital status) and families (given by shared surnames, under the assumption that instances of unrelated people sharing the same surname are negligible).
Surnames and Titles | Surnames = pd.DataFrame(total['Name'].str.split(",").tolist(), columns=['Surname', 'Rest'])
Titles = pd.DataFrame(Surnames['Rest'].str.split(".").tolist(), columns=['Title', 'Rest1', 'Rest2'])
Surnames.drop('Rest',axis=1,inplace=True)
Titles = pd.DataFrame(Titles['Title'])
Surnames['Surname'].str.strip()
Titles['Title'].str.strip()
total['Surname'] = Surnames.set_index(np.arange(1,1310))
total['Title'] = Titles.set_index(np.arange(1,1310))
total.head() | Predicting Survival on the Titanic.ipynb | rayjustinhuang/DataAnalysisandMachineLearning | mit |
Let's tabulate our titles against sex to see the frequency of the various titles. | pd.crosstab(total['Sex'], total['Title']) | Predicting Survival on the Titanic.ipynb | rayjustinhuang/DataAnalysisandMachineLearning | mit |
We see that with the exception of Master, Mr, Miss, and Mrs, the other titles are relatively rare. We can group rare titles together to simplify our analysis. Also note that Mlle and Ms are synonymous with Miss, and Mme is synonymous with Mrs. | raretitles = ['Dona', 'Lady', 'the Countess','Capt', 'Col', 'Don', 'Dr', 'Major', 'Rev', 'Sir', 'Jonkheer']
total.ix[total['Title'].str.contains('Mlle|Ms|Miss'), 'Title'] = 'Miss'
total.ix[total['Title'].str.contains('Mme|Mrs'), 'Title'] = 'Mrs'
total.ix[total['Title'].str.contains('|'.join(raretitles)), 'Title'] = 'Rare Title'
pd.crosstab(total['Sex'], total['Title'])
total['Surname'].nunique() | Predicting Survival on the Titanic.ipynb | rayjustinhuang/DataAnalysisandMachineLearning | mit |
We have 875 unique surnames.
Family Sizes
Family size may have an impact on survival. To this end, we create a family size attribute and plot the relationship. | total['FamilySize'] = total['SibSp'] + total['Parch'] + 1
total['Family'] = total['Surname'] + "_" + total['FamilySize'].apply(str)
total.head(1)
# Plot family size
famsizebarplot = sns.countplot(total['FamilySize'].loc[1:len(train.index)], hue=total['Survived'])
famsizebarplot.set_xlabel('Family Size')
plt.show() | Predicting Survival on the Titanic.ipynb | rayjustinhuang/DataAnalysisandMachineLearning | mit |
The chart above clearly shows an interesting phenomenon -- single people and families of over 4 people have a significantly lower chance of survival than those in small (2 to 4 person) families. | # Categorize family size
total['FamSizeCat'] = 'small'
total.loc[(total['FamilySize'] == 1), 'FamSizeCat'] = 'singleton'
total.loc[(total['FamilySize'] > 4), 'FamSizeCat'] = 'large'
# Create mosaic plot
# To be done in the future using statsmodel | Predicting Survival on the Titanic.ipynb | rayjustinhuang/DataAnalysisandMachineLearning | mit |
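The cell above leaves the mosaic plot as a to-do; a possible sketch using statsmodels (not part of the original notebook, and assuming statsmodels is installed) might be:

```python
from statsmodels.graphics.mosaicplot import mosaic
import matplotlib.pyplot as plt

# Mosaic of family-size category vs. survival, restricted to the training rows
train_rows = total.loc[1:len(train.index)].copy()
train_rows['Survived'] = train_rows['Survived'].astype(int).astype(str)
mosaic(train_rows, ['FamSizeCat', 'Survived'])
plt.show()
```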
Dealing with Missing Values
We first check columns with missing values. | total.isnull().sum() | Predicting Survival on the Titanic.ipynb | rayjustinhuang/DataAnalysisandMachineLearning | mit |
It appears that age, cabin, embarked, and fare have missing values. Let's first work on "Embarked" and "Fare" given that there are few enough NaN's for us to be able to manually work out what values they should have. For Cabin, given that there are 1309 samples and more than 75% of them are missing, we can probably just drop this column. It might have been useful given that location on the ship might influence their chance of survival, but data is too sparse on this particular attribute. | total[(total['Embarked'].isnull()) | (total['Fare'].isnull())] | Predicting Survival on the Titanic.ipynb | rayjustinhuang/DataAnalysisandMachineLearning | mit |
Miss Icard and Mrs. Stone, both shared the same cabin, both survived, both paid the same fare, and are both of the same class, interestingly enough. Mr. Storey is of the third class and embarked from Southampton.
Visualizing the fares by embarkation location may shed some light on where the two first class ladies embarked. | sns.boxplot(x='Embarked',y='Fare',data=train.dropna(),hue='Pclass')
plt.tight_layout()
plt.show()
trainwithoutnas = train.dropna()
print("Mean fares for passengers traveling in first class:")
print(trainwithoutnas[trainwithoutnas['Pclass']==1].groupby('Embarked')['Fare'].mean())
print("\nMedian fares for passengers traveling in first class:")
print(trainwithoutnas[trainwithoutnas['Pclass']==1].groupby('Embarked')['Fare'].median()) | Predicting Survival on the Titanic.ipynb | rayjustinhuang/DataAnalysisandMachineLearning | mit |
The $80 fare paid by both ladies is very close to the mean fare paid by first class passengers embarking from Southampton, but also aligns very nicely with the median fare paid by those embarking from Cherbourg. Perhaps a swarm plot will better show how passengers are distributed. | sns.swarmplot(x='Embarked',y='Fare',data=train.dropna(),hue='Pclass')
plt.show() | Predicting Survival on the Titanic.ipynb | rayjustinhuang/DataAnalysisandMachineLearning | mit |
This is a tough call. Looking at the spread of the points, however, it seems that those that embarked from Southampton generally paid lower fares. It appears that the mean fare paid by those from Cherbourg is pulled up by the extreme outliers that paid more than \$500 for their tickets, with a majority of first class passengers indeed paying around $80. As such, we classify the two ladies as having embarked from Cherbourg (C). | total.loc[(62,830), 'Embarked'] = "C"
total.loc[(62,830), 'Embarked'] | Predicting Survival on the Titanic.ipynb | rayjustinhuang/DataAnalysisandMachineLearning | mit |
The swarm plot also shows that the passengers embarking from Southampton in third class have paid around the same fare. It would be reasonable to use the mean value of third class passengers from Southampton as his fare value. | total.loc[1044,'Fare'] = total[(total['Embarked']=="S") & (total['Pclass']==3)]['Fare'].mean()
total.loc[1044, ['Name','Fare']] | Predicting Survival on the Titanic.ipynb | rayjustinhuang/DataAnalysisandMachineLearning | mit |
We could do MICE imputation, similar to Megan's notebook, via the fancyimpute package. | AgeHistogram = total['Age'].hist(bins=20, edgecolor="black")
AgeHistogram.set_xlabel("Age")
AgeHistogram.set_ylabel("Count")
AgeHistogram.set_title("Age (Prior to Missing Value Imputation)")
plt.show()
import fancyimpute
total.isnull().sum()
totalforMICE = total.drop(['Survived','Cabin','FamSizeCat','Family','Name','Surname'], axis=1)
# totalforMICE.fillna(np.nan)
totalforMICE['Sex'] = pd.get_dummies(totalforMICE['Sex'])['male']
dummycodedTitles = pd.get_dummies(totalforMICE['Title']).drop('Rare Title', axis=1)
totalforMICE = pd.merge(totalforMICE, dummycodedTitles, left_index=True, right_index=True, how='outer')
totalforMICE = totalforMICE.drop(['Title'],axis=1)
dummycodedEmbarked = pd.get_dummies(totalforMICE['Embarked'])[['C','Q']]
totalforMICE = totalforMICE.join(dummycodedEmbarked).drop(['Embarked'],axis=1)
dummycodedPclass = pd.get_dummies(totalforMICE['Pclass'], columns=[list("123")]).drop(3,axis=1)
totalforMICE = totalforMICE.join(dummycodedPclass).drop('Pclass',axis=1)
MICEdtotal = fancyimpute.MICE().complete(totalforMICE.values.astype(float))
MICEdtotal = pd.DataFrame(MICEdtotal, columns=totalforMICE.columns)
MICEdtotal.isnull().sum() | Predicting Survival on the Titanic.ipynb | rayjustinhuang/DataAnalysisandMachineLearning | mit |
We see that the MICE'd data has no more missing Age values. Plotting these values in the histogram: | MICEAgeHistogram = MICEdtotal['Age'].hist(bins=20, edgecolor="black")
MICEAgeHistogram.set_xlabel("Age")
MICEAgeHistogram.set_ylabel("Count")
MICEAgeHistogram.set_title("Age (After Missing Value Imputation)")
plt.show()
AgeHists, AgeHistAxes = plt.subplots(nrows=1,ncols=2, figsize=(10,5), sharey=True)
AgeHistAxes[0].hist(total['Age'].dropna(), bins=20, edgecolor='black', normed=True)
AgeHistAxes[0].set_xlabel("Age")
AgeHistAxes[0].set_ylabel("Density")
AgeHistAxes[0].set_title("Age Density (Original Data)")
AgeHistAxes[1].hist(MICEdtotal['Age'], bins=20, edgecolor='black', normed=True)
AgeHistAxes[1].set_xlabel("Age")
AgeHistAxes[1].set_ylabel("Density")
AgeHistAxes[1].set_title("Age Density (After MICE)")
AgeHists.tight_layout()
AgeHists | Predicting Survival on the Titanic.ipynb | rayjustinhuang/DataAnalysisandMachineLearning | mit |
Most age values were added around the 20 to 30 year-old age range, which makes sense given the distribution of the ages in the data that we had. Note that the fancyimpute version of MICE uses Bayesian Ridge Regression. The density is not perfectly preserved but is useful enough to proceed with the analysis.
We use the new Age column with the imputed values for our analysis. | newtotal = total
newtotal['Age'] = MICEdtotal['Age'] | Predicting Survival on the Titanic.ipynb | rayjustinhuang/DataAnalysisandMachineLearning | mit |
We can create some additional categorical columns based on our complete age feature -- whether the person is a child (under 18) and whether a person is a mother (female, over 18, with children, and not having the title "Miss").
# AgeandSexHist.map(sns.distplot, 'Age', kde=False, hist_kws={'edgecolor':'black','stacked':True})
AgeandSexHist.map(plt.hist, 'Age', alpha=0.5, bins=20)
AgeandSexHist.add_legend()
# plt.close('all')
plt.show(AgeandSexHist)
AgeandSexHist, AgeandSexHistAxes = plt.subplots(nrows=1,ncols=2, figsize=(10,5), sharey=True)
AgeandSexHistAxes[0].hist([newtotal.loc[0:891, 'Age'].loc[(newtotal['Sex']=='male') & (newtotal['Survived']==1)],
newtotal.loc[0:891, 'Age'].loc[(newtotal['Sex']=='male') & (newtotal['Survived']==0)]],stacked=True, edgecolor='black', label=['Survived','Did Not Survive'], bins=24)
AgeandSexHistAxes[1].hist([newtotal.loc[0:891, 'Age'].loc[(newtotal['Sex']=='female') & (newtotal['Survived']==1)],
newtotal.loc[0:891, 'Age'].loc[(newtotal['Sex']=='female') & (newtotal['Survived']==0)]],stacked=True, edgecolor='black', bins=24)
AgeandSexHistAxes[0].set_title('Survival By Age for Males')
AgeandSexHistAxes[1].set_title('Survival By Age for Females')
for i in range(2):
AgeandSexHistAxes[i].set_xlabel('Age')
AgeandSexHistAxes[0].set_ylabel('Count')
AgeandSexHistAxes[0].legend()
plt.show()
# Create the 'Child' variable
newtotal['Child'] = 1
newtotal.loc[newtotal['Age']>=18, 'Child'] = 0
pd.crosstab(newtotal['Child'],newtotal['Survived'])
# Create the 'Mother' variable
newtotal['Mother'] = 0
newtotal.loc[(newtotal['Sex']=='female') & (newtotal['Parch'] > 0) & (newtotal['Age']>18) & (newtotal['Title'] != "Miss"), 'Mother'] = 1
pd.crosstab(newtotal['Mother'], newtotal['Survived']) | Predicting Survival on the Titanic.ipynb | rayjustinhuang/DataAnalysisandMachineLearning | mit |
Let's take a look at the dataset once again. | newtotal.head()
newtotal.shape | Predicting Survival on the Titanic.ipynb | rayjustinhuang/DataAnalysisandMachineLearning | mit |
We ensure that all important categorical variables are dummy coded. | dummycodedFamSizeCat = pd.get_dummies(newtotal['FamSizeCat']).drop('large',axis=1)
newtotal = newtotal.drop(['Title','Embarked','Pclass', 'Cabin', 'Name', 'Family', 'Surname'], axis=1)
newtotal['Sex'] = pd.get_dummies(newtotal['Sex'])['male']
newtotal = newtotal.join(dummycodedEmbarked)
newtotal = newtotal.join(dummycodedPclass)
newtotal = newtotal.join(dummycodedTitles)
newtotal = newtotal.join(dummycodedFamSizeCat)
newtotal.head() | Predicting Survival on the Titanic.ipynb | rayjustinhuang/DataAnalysisandMachineLearning | mit |
After we split the data back into training and test sets, our data set will be ready to use for modeling. | newtrain = newtotal.loc[:891,:]
newtest = newtotal.loc[892:,:] | Predicting Survival on the Titanic.ipynb | rayjustinhuang/DataAnalysisandMachineLearning | mit |
<p>
Video can be decomposed into a 3D array, which has dimensions width x height x time. To tease out periodicity in geometric form, we will do the exact same thing as with sliding window 1D signal embeddings, but instead of just one sample per time shift, we need to take every pixel in every frame in the time window. The figure below depicts this
</p>
<img src = "VideoStackTime.svg"><BR><BR>
To see this visually in the video next to PCA of the embedding, look at the following video | video = io.open('jumpingjackssliding.ogg', 'r+b').read()
encoded = base64.b64encode(video)
HTML(data='''<video alt="test" controls>
<source src="data:video/mp4;base64,{0}" type="video/mp4" />
</video>'''.format(encoded.decode('ascii'))) | SlidingWindow4-Video.ipynb | ctralie/TUMTopoTimeSeries2016 | apache-2.0 |
<h2>PCA Preprocessing for Efficiency</h2>
<BR>
One issue we have swept under the rug so far is memory consumption and computational efficiency. Doing a raw sliding window of every pixel of every frame in the video would blow up in memory. However, even though there are <code>WH</code> pixels in each frame, there are only <code>N</code> frames in the video. This means that each frame in the video can be represented in an <code>(N-1)</code> dimensional subspace of the pixel space, and the coordinates of this subspace can be used in lieu of the pixels in the sliding window embedding. This can be done efficiently with a PCA step before the sliding window embedding. Run the cell below to load code that does PCA efficiently | #Do all of the imports and setup inline plotting
%matplotlib notebook
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
from mpl_toolkits.mplot3d import Axes3D
import scipy.interpolate
from ripser import ripser
from persim import plot_diagrams
from VideoTools import *
##Here is the actual PCA code
def getPCAVideo(I):
# I is an N x P matrix with one flattened frame per row (N frames, P pixels)
# Work with the small N x N Gram matrix I*I^T rather than the huge P x P covariance
ICov = I.dot(I.T)
# Eigendecomposition of the symmetric Gram matrix
[lam, V] = linalg.eigh(ICov)
# Scale the eigenvectors so each row of V holds the PCA coordinates of a frame
V = V*np.sqrt(lam[None, :])
return V | SlidingWindow4-Video.ipynb | ctralie/TUMTopoTimeSeries2016 | apache-2.0 |
<h2>Jumping Jacks Example Live Demo</h2>
<BR>
Let's now load in code that does sliding window embeddings of videos. The code is very similar to the 1D case, and it has the exact same parameters. The only difference is that each sliding window lives in a Euclidean space of dimension the number of pixels times <code>dim</code>. We're also using linear interpolation instead of spline interpolation to keep things fast | def getSlidingWindowVideo(I, dim, Tau, dT):
N = I.shape[0] #Number of frames
P = I.shape[1] #Number of pixels (possibly after PCA)
pix = np.arange(P)
NWindows = int(np.floor((N-dim*Tau)/dT))
X = np.zeros((NWindows, dim*P))
idx = np.arange(N)
for i in range(NWindows):
idxx = dT*i + Tau*np.arange(dim)
start = int(np.floor(idxx[0]))
end = int(np.ceil(idxx[-1]))+2
if end >= I.shape[0]:
X = X[0:i, :]
break
f = scipy.interpolate.interp2d(pix, idx[start:end+1], I[idx[start:end+1], :], kind='linear')
X[i, :] = f(pix, idxx).flatten()
return X | SlidingWindow4-Video.ipynb | ctralie/TUMTopoTimeSeries2016 | apache-2.0 |
Finally, let's load in the jumping jacks video and perform PCA to reduce the number of effective pixels. <BR>
<i>Note that loading the video may take a few seconds on the virtual image</i> | #Load in video and do PCA to compress dimension
(X, FrameDims) = loadImageIOVideo("jumpingjacks.ogg")
X = getPCAVideo(X) | SlidingWindow4-Video.ipynb | ctralie/TUMTopoTimeSeries2016 | apache-2.0 |
Now let's do a sliding window embedding and examine the sliding window embedding using TDA. As before, you should tweak the parameters of the sliding window embedding and study the effect on the geometry. | #Given that the period is 30 frames per cycle, choose a dimension and a Tau that capture
#this motion in the roundest possible way
#Plot persistence diagram and PCA
dim = 30
Tau = 1
dT = 1
#Get sliding window video
XS = getSlidingWindowVideo(X, dim, Tau, dT)
#Mean-center and normalize sliding window
XS = XS - np.mean(XS, 1)[:, None]
XS = XS/np.sqrt(np.sum(XS**2, 1))[:, None]
#Get persistence diagrams
dgms = ripser(XS)['dgms']
#Do PCA for visualization
pca = PCA(n_components = 3)
Y = pca.fit_transform(XS)
fig = plt.figure(figsize=(12, 6))
plt.subplot(121)
plot_diagrams(dgms)
plt.title("1D Persistence Diagram")
c = plt.get_cmap('nipy_spectral')
C = c(np.array(np.round(np.linspace(0, 255, Y.shape[0])), dtype=np.int32))
C = C[:, 0:3]
ax2 = fig.add_subplot(122, projection = '3d')
ax2.set_title("PCA of Sliding Window Embedding")
ax2.scatter(Y[:, 0], Y[:, 1], Y[:, 2], c=C)
ax2.set_aspect('equal', 'datalim')
plt.show() | SlidingWindow4-Video.ipynb | ctralie/TUMTopoTimeSeries2016 | apache-2.0 |
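A single number summarizing how periodic the embedding looks is the maximum persistence of the 1D diagram (death minus birth); this is essentially the score plotted against window size in the next section. A minimal sketch using the dgms computed above:
H1 = dgms[1]                           # 1D persistence diagram as (birth, death) pairs
persistences = H1[:, 1] - H1[:, 0]     # lifetime of each loop class
max_pers = persistences.max() if len(persistences) > 0 else 0.0
print("Maximum 1D persistence (periodicity score):", max_pers)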
<h1>Periodicities in The KTH Dataset</h1>
<BR>
We will now examine videos from the <a href = "http://www.nada.kth.se/cvap/actions/">KTH dataset</a>, which is a repository of black and white videos of human activities. It consists of 25 subjects performing 6 different actions in each of 4 scenarios. We will use the algorithms developed in this section to measure and rank the periodicity of the different video clips.
<h2>Varying Window Length</h2>
<BR>
For our first experiment, we will be showing some precomputed results of varying the sliding window length, while choosing Tau and dT appropriately to keep the dimension and the number of points, respectively, the same in the sliding window embedding. As an example, we will apply it to one of the videos of a subject waving his hands back and forth, as shown below | video = io.open('KTH/handwaving/person01_handwaving_d1_uncomp.ogg', 'r+b').read()
encoded = base64.b64encode(video)
HTML(data='''<video alt="test" controls>
<source src="data:video/mp4;base64,{0}" type="video/mp4" />
</video>'''.format(encoded.decode('ascii'))) | SlidingWindow4-Video.ipynb | ctralie/TUMTopoTimeSeries2016 | apache-2.0 |
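The experiment described above varies the window length while choosing Tau and dT so that the embedding dimension and the number of windows stay fixed. A sketch of how those parameters can be scaled with the window length; all values here are illustrative, not taken from the precomputed experiment:
dim = 20                # samples per window (kept fixed)
NPoints = 400           # desired number of windows (kept fixed)
NFrames = 600           # hypothetical number of frames in the clip
for win_len in [10, 20, 40, 80]:
    Tau = win_len / float(dim)                      # in-window spacing grows with window length
    dT = (NFrames - dim * Tau) / float(NPoints)     # window-start spacing keeps NPoints windows
    print(win_len, Tau, dT)
    # XS = getSlidingWindowVideo(X, dim, Tau, dT) would then be analyzed as above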
We have done some additional preprocessing, including applying a bandpass filter to each PCA coordinate (each "virtual pixel") to cut down on drift in the video. Below we show a video that varies the window size of the embedding and plots the persistence diagram, the "self-similarity matrix" (distance matrix), and the PCA of the embedding, together with an evolving plot of the maximum persistence versus window size: | video = io.open('Handwaving_Deriv10_Block160_PCA10.ogg', 'r+b').read()
encoded = base64.b64encode(video)
HTML(data='''<video alt="test" controls>
<source src="data:video/mp4;base64,{0}" type="video/mp4" />
</video>'''.format(encoded.decode('ascii'))) | SlidingWindow4-Video.ipynb | ctralie/TUMTopoTimeSeries2016 | apache-2.0 |
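The derivative/bandpass filtering used below is implemented by getTimeDerivative in VideoTools; that implementation is not shown here, but the idea is roughly to convolve each PCA coordinate over time with a smoothed derivative kernel so that slow drift and pixel noise are suppressed. A rough, hypothetical stand-in (not the actual VideoTools code):
import numpy as np
def smoothed_time_derivative(X, win):
    """Convolve each column (a PCA coordinate over time) with a Gaussian-derivative
    kernel of half-width win; a stand-in for VideoTools.getTimeDerivative."""
    t = np.arange(-win, win + 1)
    g = np.exp(-t**2 / (2.0 * (win / 2.0)**2))
    kernel = -t * g
    kernel /= np.sum(np.abs(kernel)) + 1e-12
    Y = np.array([np.convolve(X[:, j], kernel, mode='valid') for j in range(X.shape[1])]).T
    valid_idx = np.arange(win, X.shape[0] - win)   # frames that survive the convolution
    return Y, valid_idx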
As you can see, the maximum persistence peaks at around 40 frames, which is the period of each hand wave. This is what the theory we developed for 1D time series would have predicted as the roundest window.<BR>
<h1>Quasiperiodicity Quantification in Video</h1>
<BR>
<p>
We now examine how this pipeline can be used to detect quasiperiodicity in videos. As an example, we examine videos from high-speed glottography, or high speed videos (4000 fps) of the left and right vocal folds in the human vocal tract. When a person has a normal voice, the vocal folds oscillate in a periodic fashion. On the other hand, if they have certain types of paralysis or near chaotic dynamics, they can exhibit biphonation just as the horse whinnies did. More info can be found in <a href = "https://arxiv.org/abs/1704.08382">this paper</a>.
</p>
<h2>Healthy Subject</h2>
<p>
Let's begin by analyzing a video of a healthy person. In this example and in the following example, we will be computing both persistent H1 and persistent H2, so the code may take a bit longer to run.
</p>
Questions
What can we say about the vocal folds of a healthy subject based on the persistence diagram? | video = io.open('NormalPeriodicCrop.ogg', 'r+b').read()
encoded = base64.b64encode(video)
HTML(data='''<video alt="test" controls>
<source src="data:video/mp4;base64,{0}" type="video/mp4" />
</video>'''.format(encoded.decode('ascii')))
(X, FrameDims) = loadVideo("NormalPeriodicCrop.ogg")
X = getPCAVideo(X)
dim = 70
Tau = 0.5
dT = 1
derivWin = 10
#Take a bandpass filter in time at each pixel to smooth out noise
[X, validIdx] = getTimeDerivative(X, derivWin)
#Do the sliding window
XS = getSlidingWindowVideo(X, dim, Tau, dT)
#Mean-center and normalize sliding window
XS = XS - np.mean(XS, 1)[:, None]
XS = XS/np.sqrt(np.sum(XS**2, 1))[:, None]
#Compute and plot persistence diagrams
print("Computing persistence diagrams...")
dgms = ripser(XS, maxdim=2)['dgms']
print("Finished computing persistence diagrams")
plt.figure()
plot_diagrams(dgms)
plt.title("Persistence Diagrams$")
plt.show() | SlidingWindow4-Video.ipynb | ctralie/TUMTopoTimeSeries2016 | apache-2.0 |
<h2>Subject with Biphonation</h2>
<p>
Let's now examine a video of someone with a vocal pathology. This video may still appear periodic, but if you look closely, there is a subtle shift going on over time.
</p> | video = io.open('ClinicalAsymmetry.mp4', 'r+b').read()
encoded = base64.b64encode(video)
HTML(data='''<video alt="test" controls>
<source src="data:video/mp4;base64,{0}" type="video/mp4" />
</video>'''.format(encoded.decode('ascii')))
(X, FrameDims) = loadVideo("ClinicalAsymmetry.mp4")
X = getPCAVideo(X)
X = X[0:200, :]
#'dim':32, 'Tau':0.25, 'dT':0.25, 'derivWin':2
dim = 100
Tau = 0.25
dT = 0.5
derivWin = 5
#Take a bandpass filter in time at each pixel to smooth out noise
[X, validIdx] = getTimeDerivative(X, derivWin)
#Do the sliding window
XS = getSlidingWindowVideo(X, dim, Tau, dT)
print("XS.shape = ", XS.shape)
#Mean-center and normalize sliding window
XS = XS - np.mean(XS, 1)[:, None]
XS = XS/np.sqrt(np.sum(XS**2, 1))[:, None]
#Compute and plot persistence diagrams
print("Computing persistence diagrams...")
dgms = ripser(XS, maxdim=2)['dgms']
print("Finished computing persistence diagrams")
plt.figure()
plt.title("Persistence Diagrams$")
plot_diagrams(dgms)
plt.show() | SlidingWindow4-Video.ipynb | ctralie/TUMTopoTimeSeries2016 | apache-2.0 |
Question:
What shape is this? What does this say about the underlying frequencies involved?
<h2>Another Subject with Biphonation</h2>
<p>
Let's now examine another person with a vocal pathology, this time due to mucus that is pushed out of the vocal folds every other oscillation. This time, we will look at both $\mathbb{Z} / 2\mathbb{Z}$ coefficients and $\mathbb{Z} / 3 \mathbb{Z}$ coefficients.
</p>
Questions
Can you see any changes between $\mathbb{Z} / 2\mathbb{Z}$ coefficients and $\mathbb{Z} / 3 \mathbb{Z}$ coefficients? What shape is this? Can you relate it to something we've seen before? | video = io.open('LTR_ED_MucusBiphonCrop.ogg', 'r+b').read()
encoded = base64.b64encode(video)
HTML(data='''<video alt="test" controls>
<source src="data:video/mp4;base64,{0}" type="video/mp4" />
</video>'''.format(encoded.decode('ascii')))
(X, FrameDims) = loadVideo("LTR_ED_MucusBiphonCrop.ogg")
X = getPCAVideo(X)
X = X[0:200, :]
#'dim':32, 'Tau':0.25, 'dT':0.25, 'derivWin':2
dim = 100
Tau = 1
dT = 0.25
derivWin = 5
#Take a bandpass filter in time at each pixel to smooth out noise
[X, validIdx] = getTimeDerivative(X, derivWin)
#Do the sliding window
XS = getSlidingWindowVideo(X, dim, Tau, dT)
print("XS.shape = ", XS.shape)
#Mean-center and normalize sliding window
XS = XS - np.mean(XS, 1)[:, None]
XS = XS/np.sqrt(np.sum(XS**2, 1))[:, None]
#Compute and plot persistence diagrams
print("Computing persistence diagrams...")
dgms2 = ripser(XS, maxdim=2, coeff=2)['dgms']
dgms3 = ripser(XS, maxdim=2, coeff=3)['dgms']
print("Finished computing persistence diagrams")
plt.figure(figsize=(8, 4))
plt.subplot(121)
plot_diagrams(dgms2)
plt.title("Persistence Diagrams $\mathbb{Z}2$")
plt.subplot(122)
plot_diagrams(dgms3)
plt.title("Persistence Diagrams $\mathbb{Z}3$")
plt.show() | SlidingWindow4-Video.ipynb | ctralie/TUMTopoTimeSeries2016 | apache-2.0 |
Screw dislocation | sim=md.atoms.import_cfg('configs/Fe_300K.cfg');
nlyrs_fxd=2
a=sim.H[0][0];
b_norm=0.5*a*np.sqrt(3.0);
b=np.array([1.0,1.0,1.0])
s=np.array([1.0,-1.0,0.0])/np.sqrt(2.0) | examples/fracture-gcmc-tutorial/dislocation.ipynb | sinamoeini/mapp4py | mit |
Create a $\langle110\rangle\times\langle112\rangle\times\frac{1}{2}\langle111\rangle$ cell
Create a $\langle110\rangle\times\langle112\rangle\times\langle111\rangle$ cell
Since mapp4py.md.atoms.cell_change() only accepts integer values, start by creating a $\langle110\rangle\times\langle112\rangle\times\langle111\rangle$ cell | sim.cell_change([[1,-1,0],[1,1,-2],[1,1,1]])
Remove half of the atoms and readjust the position of remaining
Now one needs to cut the cell in half in the $[111]$ direction. We can achieve this in three steps:
Remove the atoms that are located above $\frac{1}{2}[111]$
Double the positions of the remaining atoms in that direction
Shrink the box affinely to half in that direction | H=np.array(sim.H);
def _(x):
if x[2] > 0.5*H[2, 2] - 1.0e-8:
return False;
else:
x[2]*=2.0;
sim.do(_);
_ = np.full((3,3), 0.0)
_[2, 2] = - 0.5
sim.strain(_) | examples/fracture-gcmc-tutorial/dislocation.ipynb | sinamoeini/mapp4py | mit |
Readjust the positions | displace(sim,np.array([sim.H[0][0]/6.0,sim.H[1][1]/6.0,0.0]))
Replicating the unit cell | max_natms=100000
H=np.array(sim.H);
n_per_area=sim.natms/(H[0,0] * H[1,1]);
_ =np.sqrt(max_natms/n_per_area);
N0 = np.array([
np.around(_ / sim.H[0][0]),
np.around(_ / sim.H[1][1]),
1], dtype=np.int32)
sim *= N0;
H = np.array(sim.H);
H_new = np.array(sim.H);
H_new[1][1] += 50.0
resize(sim, H_new, np.full((3),0.5) @ H)
C_Fe=cubic(1.3967587463636366,0.787341583191591,0.609615090769241);
Q=np.array([np.cross(s,b)/np.linalg.norm(np.cross(s,b)),s/np.linalg.norm(s),b/np.linalg.norm(b)])
hirth = HirthScrew(rot(C_Fe,Q), rot(b*0.5*a,Q))
ctr = np.full((3),0.5) @ H_new;
s_fxd=0.5-0.5*float(nlyrs_fxd)/float(N0[1])
def _(x,x_d,x_dof):
sy=(x[1]-ctr[1])/H[1, 1];
x0=(x-ctr)/H[0, 0];
if sy>s_fxd or sy<=-s_fxd:
x_dof[1]=x_dof[2]=False;
x+=b_norm*hirth.ave_disp(x0)
else:
x+=b_norm*hirth.disp(x0)
sim.do(_)
H = np.array(sim.H);
H_inv = np.array(sim.B);
H_new = np.array(sim.H);
H_new[0,0]=np.sqrt(H[0,0]**2+(0.5*b_norm)**2)
H_new[2,0]=H[2,2]*0.5*b_norm/H_new[0,0]
H_new[2,2]=np.sqrt(H[2,2]**2-H_new[2,0]**2)
F = np.transpose(H_inv @ H_new);
sim.strain(F - np.identity(3))
xprt(sim, "dumps/screw.cfg") | examples/fracture-gcmc-tutorial/dislocation.ipynb | sinamoeini/mapp4py | mit |
putting it all together | def make_scrw(nlyrs_fxd,nlyrs_vel,vel):
#this is for 0K
#c_Fe=cubic(1.5187249951755375,0.9053185628093443,0.7249256807942608);
#this is for 300K
c_Fe=cubic(1.3967587463636366,0.787341583191591,0.609615090769241);
#N0=np.array([80,46,5],dtype=np.int32)
sim=md.atoms.import_cfg('configs/Fe_300K.cfg');
a=sim.H[0][0];
b_norm=0.5*a*np.sqrt(3.0);
b=np.array([1.0,1.0,1.0])
s=np.array([1.0,-1.0,0.0])/np.sqrt(2.0)
Q=np.array([np.cross(s,b)/np.linalg.norm(np.cross(s,b)),s/np.linalg.norm(s),b/np.linalg.norm(b)])
c0=rot(c_Fe,Q)
hirth = HirthScrew(rot(c_Fe,Q),np.dot(Q,b)*0.5*a)
sim.cell_change([[1,-1,0],[1,1,-2],[1,1,1]])
displace(sim,np.array([sim.H[0][0]/6.0,sim.H[1][1]/6.0,0.0]))
max_natms=1000000
n_per_vol=sim.natms/sim.vol;
_=np.power(max_natms/n_per_vol,1.0/3.0);
N1=np.full((3),0,dtype=np.int32);
for i in range(0,3):
N1[i]=int(np.around(_/sim.H[i][i]));
N0=np.array([N1[0],N1[1],1],dtype=np.int32);
sim*=N0;
sim.kB=8.617330350e-5
sim.create_temp(300.0,8569643);
H=np.array(sim.H);
H_new=np.array(sim.H);
H_new[1][1]+=50.0
resize(sim, H_new, np.full((3),0.5) @ H)
ctr=np.dot(np.full((3),0.5),H_new);
s_fxd=0.5-0.5*float(nlyrs_fxd)/float(N0[1])
s_vel=0.5-0.5*float(nlyrs_vel)/float(N0[1])
def _(x,x_d,x_dof):
sy=(x[1]-ctr[1])/H[1][1];
x0=(x-ctr)/H[0][0];
if sy>s_fxd or sy<=-s_fxd:
x_d[1]=0.0;
x_dof[1]=x_dof[2]=False;
x+=b_norm*hirth.ave_disp(x0)
else:
x+=b_norm*hirth.disp(x0)
if sy<=-s_vel or sy>s_vel:
x_d[2]=2.0*sy*vel;
sim.do(_)
H = np.array(sim.H);
H_inv = np.array(sim.B);
H_new = np.array(sim.H);
H_new[0,0]=np.sqrt(H[0,0]**2+(0.5*b_norm)**2)
H_new[2,0]=H[2,2]*0.5*b_norm/H_new[0,0]
H_new[2,2]=np.sqrt(H[2,2]**2-H_new[2,0]**2)
F = np.transpose(H_inv @ H_new);
sim.strain(F - np.identity(3))
return N1[2],sim; | examples/fracture-gcmc-tutorial/dislocation.ipynb | sinamoeini/mapp4py | mit |
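For reference, a usage sketch for make_scrw, mirroring the make_edge call that appears later in this notebook; the parameter values and output filename here are illustrative.
nlyrs_fxd = 2      # number of fixed boundary layers
nlyrs_vel = 7      # number of layers with an imposed velocity
vel = -0.004       # imposed velocity
N, sim = make_scrw(nlyrs_fxd, nlyrs_vel, vel)
xprt(sim, "dumps/screw_full.cfg")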
Edge dislocation | sim=md.atoms.import_cfg('configs/Fe_300K.cfg');
nlyrs_fxd=2
a=sim.H[0][0];
b_norm=0.5*a*np.sqrt(3.0);
b=np.array([1.0,1.0,1.0])
s=np.array([1.0,-1.0,0.0])/np.sqrt(2.0)
sim.cell_change([[1,1,1],[1,-1,0],[1,1,-2]])
H=np.array(sim.H);
def _(x):
if x[0] > 0.5*H[0, 0] - 1.0e-8:
return False;
else:
x[0]*=2.0;
sim.do(_);
_ = np.full((3,3), 0.0)
_[0,0] = - 0.5
sim.strain(_)
displace(sim,np.array([0.0,sim.H[1][1]/4.0,0.0]))
max_natms=100000
H=np.array(sim.H);
n_per_area=sim.natms/(H[0, 0] * H[1, 1]);
_ =np.sqrt(max_natms/n_per_area);
N0 = np.array([
np.around(_ / sim.H[0, 0]),
np.around(_ / sim.H[1, 1]),
1], dtype=np.int32)
sim *= N0;
# remove one layer along ... direction
H=np.array(sim.H);
frac=H[0,0] /N0[0]
def _(x):
if x[0] < H[0, 0] /N0[0] and x[1] >0.5*H[1, 1]:
return False;
sim.do(_)
H = np.array(sim.H);
H_new = np.array(sim.H);
H_new[1][1] += 50.0
resize(sim, H_new, np.full((3),0.5) @ H)
C_Fe=cubic(1.3967587463636366,0.787341583191591,0.609615090769241);
_ = np.cross(b,s)
Q = np.array([b/np.linalg.norm(b), s/np.linalg.norm(s), _/np.linalg.norm(_)])
hirth = HirthEdge(rot(C_Fe,Q), rot(b*0.5*a,Q))
_ = (1.0+0.5*(N0[0]-1.0))/N0[0];
ctr = np.array([_,0.5,0.5]) @ H_new;
frac = H[0][0]/N0[0]
s_fxd=0.5-0.5*float(nlyrs_fxd)/float(N0[1])
def _(x,x_d,x_dof):
sy=(x[1]-ctr[1])/H[1, 1];
x0=(x-ctr);
if(x0[1]>0.0):
x0/=(H[0, 0]-frac)
else:
x0/= H[0, 0]
if sy>s_fxd or sy<=-s_fxd:
x+=b_norm*hirth.ave_disp(x0);
x_dof[0]=x_dof[1]=False;
else:
x+=b_norm*hirth.disp(x0);
x[0]-=0.25*b_norm;
sim.do(_)
H = np.array(sim.H)
H_new = np.array(sim.H);
H_new[0, 0] -= 0.5*b_norm;
resize(sim, H_new, np.full((3),0.5) @ H)
xprt(sim, "dumps/edge.cfg") | examples/fracture-gcmc-tutorial/dislocation.ipynb | sinamoeini/mapp4py | mit |
putting it all together | def make_edge(nlyrs_fxd,nlyrs_vel,vel):
#this is for 0K
#c_Fe=cubic(1.5187249951755375,0.9053185628093443,0.7249256807942608);
#this is for 300K
c_Fe=cubic(1.3967587463636366,0.787341583191591,0.609615090769241);
#N0=np.array([80,46,5],dtype=np.int32)
sim=md.atoms.import_cfg('configs/Fe_300K.cfg');
a=sim.H[0][0];
b_norm=0.5*a*np.sqrt(3.0);
b=np.array([1.0,1.0,1.0])
s=np.array([1.0,-1.0,0.0])/np.sqrt(2.0)
# create rotation matrix
_ = np.cross(b,s)
Q=np.array([b/np.linalg.norm(b), s/np.linalg.norm(s), _/np.linalg.norm(_)])
hirth = HirthEdge(rot(c_Fe,Q),np.dot(Q,b)*0.5*a)
# create a unit cell
sim.cell_change([[1,1,1],[1,-1,0],[1,1,-2]])
H=np.array(sim.H);
def f0(x):
if x[0]>0.5*H[0][0]-1.0e-8:
return False;
else:
x[0]*=2.0;
sim.do(f0);
_ = np.full((3,3), 0.0)
_[0,0] = - 0.5
sim.strain(_)
displace(sim,np.array([0.0,sim.H[1][1]/4.0,0.0]))
max_natms=1000000
n_per_vol=sim.natms/sim.vol;
_=np.power(max_natms/n_per_vol,1.0/3.0);
N1=np.full((3),0,dtype=np.int32);
for i in range(0,3):
N1[i]=int(np.around(_/sim.H[i][i]));
N0=np.array([N1[0],N1[1],1],dtype=np.int32);
N0[0]+=1;
sim*=N0;
# remove one layer along ... direction
H=np.array(sim.H);
frac=H[0][0]/N0[0]
def _(x):
if x[0] < H[0][0]/N0[0] and x[1]>0.5*H[1][1]:
return False;
sim.do(_)
sim.kB=8.617330350e-5
sim.create_temp(300.0,8569643);
H = np.array(sim.H);
H_new = np.array(sim.H);
H_new[1][1] += 50.0
ctr=np.dot(np.full((3),0.5),H);
resize(sim,H_new, np.full((3),0.5) @ H)
l=(1.0+0.5*(N0[0]-1.0))/N0[0];
ctr=np.dot(np.array([l,0.5,0.5]),H_new);
frac=H[0][0]/N0[0]
s_fxd=0.5-0.5*float(nlyrs_fxd)/float(N0[1])
s_vel=0.5-0.5*float(nlyrs_vel)/float(N0[1])
def f(x,x_d,x_dof):
sy=(x[1]-ctr[1])/H[1][1];
x0=(x-ctr);
if(x0[1]>0.0):
x0/=(H[0][0]-frac)
else:
x0/= H[0][0]
if sy>s_fxd or sy<=-s_fxd:
x_d[1]=0.0;
x_dof[0]=x_dof[1]=False;
x+=b_norm*hirth.ave_disp(x0);
else:
x+=b_norm*hirth.disp(x0);
if sy<=-s_vel or sy>s_vel:
x_d[0]=2.0*sy*vel;
x[0]-=0.25*b_norm;
sim.do(f)
H = np.array(sim.H)
H_new = np.array(sim.H);
H_new[0, 0] -= 0.5*b_norm;
resize(sim, H_new, np.full((3),0.5) @ H)
return N1[2], sim;
nlyrs_fxd=2
nlyrs_vel=7;
vel=-0.004;
N,sim=make_edge(nlyrs_fxd,nlyrs_vel,vel)
xprt(sim, "dumps/edge.cfg")
_ = np.array([[-1,1,0],[1,1,1],[1,1,-2]], dtype=float);  # np.float is removed in recent NumPy
Q = np.linalg.inv(np.sqrt(_ @ _.T)) @ _;
C = rot(cubic(1.3967587463636366,0.787341583191591,0.609615090769241),Q)
B = np.linalg.inv(
np.array([
[C[0, 0, 0, 0], C[0, 0, 1, 1], C[0, 0, 0, 1]],
[C[0, 0, 1, 1], C[1, 1, 1, 1], C[1, 1, 0, 1]],
[C[0, 0, 0, 1], C[1, 1, 0, 1], C[0, 1, 0, 1]]
]
))
_ = np.roots([B[0, 0], -2.0*B[0, 2],2.0*B[0, 1]+B[2, 2], -2.0*B[1, 2], B[1, 1]])
mu = np.array([_[0],0.0]);
if np.absolute(np.conjugate(mu[0]) - _[1]) > 1.0e-12:
mu[1] = _[1];
else:
mu[1] = _[2]
alpha = np.real(mu);
beta = np.imag(mu);
p = B[0,0] * mu**2 - B[0,2] * mu + B[0, 1]
q = B[0,1] * mu - B[0, 2] + B[1, 1]/ mu
K = np.stack([p, q]) * np.array([mu[1], mu[0]]) / (mu[1] - mu[0])
K_r = np.real(K)
K_i = np.imag(K)
Tr = np.stack([
np.array(np.array([[1.0, alpha[0]], [0.0, beta[0]]])),
np.array([[1.0, alpha[1]], [0.0, beta[1]]])
], axis=1)
def u_f0(x): return np.sqrt(np.sqrt(x[0] * x[0] + x[1] * x[1]) + x[0])
def u_f1(x): return np.sqrt(np.sqrt(x[0] * x[0] + x[1] * x[1]) - x[0]) * np.sign(x[1])
def disp(x):
_ = Tr @ x
return K_r @ u_f0(_) + K_i @ u_f1(_) | examples/fracture-gcmc-tutorial/dislocation.ipynb | sinamoeini/mapp4py | mit |
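The consolidated cell below calls a helper crack(C) that is not defined in this excerpt; presumably it simply packages the construction above into a factory that returns the displacement field disp. A sketch under that assumption:
def crack(C):
    """Assumed wrapper: build the anisotropic crack-tip displacement field from the
    rotated stiffness tensor C, following the construction in the previous cell."""
    B = np.linalg.inv(np.array([
        [C[0, 0, 0, 0], C[0, 0, 1, 1], C[0, 0, 0, 1]],
        [C[0, 0, 1, 1], C[1, 1, 1, 1], C[1, 1, 0, 1]],
        [C[0, 0, 0, 1], C[1, 1, 0, 1], C[0, 1, 0, 1]]]))
    r = np.roots([B[0, 0], -2.0*B[0, 2], 2.0*B[0, 1] + B[2, 2], -2.0*B[1, 2], B[1, 1]])
    mu = np.array([r[0], r[1] if np.absolute(np.conjugate(r[0]) - r[1]) > 1.0e-12 else r[2]])
    alpha, beta = np.real(mu), np.imag(mu)
    p = B[0, 0]*mu**2 - B[0, 2]*mu + B[0, 1]
    q = B[0, 1]*mu - B[0, 2] + B[1, 1]/mu
    K = np.stack([p, q]) * np.array([mu[1], mu[0]]) / (mu[1] - mu[0])
    K_r, K_i = np.real(K), np.imag(K)
    Tr = np.stack([np.array([[1.0, alpha[0]], [0.0, beta[0]]]),
                   np.array([[1.0, alpha[1]], [0.0, beta[1]]])], axis=1)
    def u_f0(x): return np.sqrt(np.sqrt(x[0]*x[0] + x[1]*x[1]) + x[0])
    def u_f1(x): return np.sqrt(np.sqrt(x[0]*x[0] + x[1]*x[1]) - x[0]) * np.sign(x[1])
    def disp(x):
        _ = Tr @ x
        return K_r @ u_f0(_) + K_i @ u_f1(_)
    return disp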
Putting it all together | _ = np.array([[-1,1,0],[1,1,1],[1,1,-2]], dtype=float);
Q = np.linalg.inv(np.sqrt(_ @ _.T)) @ _;
C = rot(cubic(1.3967587463636366,0.787341583191591,0.609615090769241),Q)
disp = crack(C)
n = 300;
r = 10;
disp_scale = 0.3;
n0 = int(np.round(n/ (1 +np.pi), ))
n1 = n - n0
xs = np.concatenate((
np.stack([np.linspace(0, -r , n0), np.full((n0,), -1.e-8)]),
r * np.stack([np.cos(np.linspace(-np.pi, np.pi , n1)),np.sin(np.linspace(-np.pi, np.pi , n1))]),
np.stack([np.linspace(-r, 0 , n0), np.full((n0,), 1.e-8)]),
), axis =1)
xs_def = xs + disp_scale * disp(xs)
fig, ax = plt.subplots(figsize=(10.5,5), ncols = 2)
ax[0].plot(xs[0], xs[1], "b-", label="non-deformed");
ax[1].plot(xs_def[0], xs_def[1], "r-.", label="deformed");
for a in ax:
    a.legend()
plt.show() | examples/fracture-gcmc-tutorial/dislocation.ipynb | sinamoeini/mapp4py | mit
Test data
First, we load test data consisting of 5 variables, x0 through x4. The ground-truth causal structure used to generate the data is shown below. | X = pd.read_csv('nonlinear_data.csv')
m = np.array([
[0, 0, 0, 0, 0],
[1, 0, 0, 0, 0],
[1, 1, 0, 0, 0],
[0, 1, 1, 0, 0],
[0, 0, 0, 1, 0]])
dot = make_dot(m)
# Save pdf
dot.render('dag')
# Save png
dot.format = 'png'
dot.render('dag')
dot | examples/RESIT.ipynb | cdt15/lingam | mit |
Causal Discovery
To run causal discovery, we create a RESIT object and call the fit method. | from sklearn.ensemble import RandomForestRegressor
reg = RandomForestRegressor(max_depth=4, random_state=0)
model = lingam.RESIT(regressor=reg)
model.fit(X) | examples/RESIT.ipynb | cdt15/lingam | mit |
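After fitting, the estimated causal order and adjacency matrix can be inspected; the attribute names below follow the other lingam examples and should be treated as an assumption if your version differs.
print(model.causal_order_)         # estimated causal ordering of x0 ... x4
print(model.adjacency_matrix_)     # estimated adjacency matrix
make_dot(model.adjacency_matrix_)  # visualize the estimated structure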
Bootstrapping
We call the bootstrap() method instead of fit(). Here, the n_sampling argument specifies the number of bootstrap samples. | import warnings
warnings.filterwarnings('ignore', category=UserWarning)
n_sampling = 100
model = lingam.RESIT(regressor=reg)
result = model.bootstrap(X, n_sampling=n_sampling) | examples/RESIT.ipynb | cdt15/lingam | mit |
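The bootstrap result can then be summarized, for example by counting how often each directed edge appears across the bootstrap samples; the method and helper names below follow other lingam bootstrap examples and are assumptions here.
from lingam.utils import print_causal_directions
cdc = result.get_causal_direction_counts(n_directions=8, min_causal_effect=0.01)
print_causal_directions(cdc, n_sampling)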