8.3. Nutrients Present
Is Required: TRUE    Type: ENUM    Cardinality: 1.N
List nutrient species present in ocean biogeochemistry model
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrogen (N)"
# "Phosphorous (P)"
# "Silicium (S)"
# "Iron (Fe)"
# "Other: [Please specify]"
# TODO - please enter value(s)
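For illustration, a filled-in version of such a cell might look like the sketch below. Note that in the real ES-DOC notebook the `DOC` object is injected by the ES-DOC tooling; here it is replaced by a minimal hypothetical stub just to show the `set_id`/`set_value` workflow, and the two nutrient choices are example values, not a real model description.

```python
# Minimal stand-in for the ES-DOC `DOC` helper (hypothetical stub; the real
# notebook provides DOC itself).
class _DocStub(object):
    def __init__(self):
        self.records = {}
        self._current = None

    def set_id(self, prop_id):
        self._current = prop_id
        self.records.setdefault(prop_id, [])

    def set_value(self, value):
        # Cardinality 1.N: set_value may be called several times per property
        self.records[self._current].append(value)

DOC = _DocStub()

# Filling in property 8.3 with two example nutrient choices:
DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present')
DOC.set_value("Nitrogen (N)")
DOC.set_value("Phosphorous (P)")

print(DOC.records['cmip6.ocnbgchem.tracers.nutrients_present'])
```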
notebooks/uhh/cmip6/models/sandbox-2/ocnbgchem.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
8.4. Nitrous Species If N
Is Required: FALSE    Type: ENUM    Cardinality: 0.N
If nitrogen present, list nitrous species.
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrates (NO3)"
# "Amonium (NH4)"
# "Other: [Please specify]"
# TODO - please enter value(s)
8.5. Nitrous Processes If N
Is Required: FALSE    Type: ENUM    Cardinality: 0.N
If nitrogen present, list nitrous processes.
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dentrification"
# "N fixation"
# "Other: [Please specify]"
# TODO - please enter value(s)
9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required: TRUE    Type: STRING    Cardinality: 1.1
Definition of upper trophic level (e.g. based on size)?
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
9.2. Upper Trophic Levels Treatment
Is Required: TRUE    Type: STRING    Cardinality: 1.1
Define how upper trophic levels are treated
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required: TRUE    Type: ENUM    Cardinality: 1.1
Type of phytoplankton
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "PFT including size based (specify both below)"
# "Size based only (specify below)"
# "PFT only (specify below)"
# TODO - please enter value(s)
10.2. Pft
Is Required: FALSE    Type: ENUM    Cardinality: 0.N
Phytoplankton functional types (PFT) (if applicable)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diatoms"
# "Nfixers"
# "Calcifiers"
# "Other: [Please specify]"
# TODO - please enter value(s)
10.3. Size Classes
Is Required: FALSE    Type: ENUM    Cardinality: 0.N
Phytoplankton size classes (if applicable)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microphytoplankton"
# "Nanophytoplankton"
# "Picophytoplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required: TRUE    Type: ENUM    Cardinality: 1.1
Type of zooplankton
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "Size based (specify below)"
# "Other: [Please specify]"
# TODO - please enter value(s)
11.2. Size Classes
Is Required: FALSE    Type: ENUM    Cardinality: 0.N
Zooplankton size classes (if applicable)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microzooplankton"
# "Mesozooplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
12. Tracers --> Dissolved Organic Matter
Dissolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required: TRUE    Type: BOOLEAN    Cardinality: 1.1
Is there a bacteria representation?
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
12.2. Lability
Is Required: TRUE    Type: ENUM    Cardinality: 1.1
Describe treatment of lability in dissolved organic matter
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Labile"
# "Semi-labile"
# "Refractory"
# "Other: [Please specify]"
# TODO - please enter value(s)
13. Tracers --> Particles
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required: TRUE    Type: ENUM    Cardinality: 1.1
How is particulate carbon represented in ocean biogeochemistry?
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diagnostic"
# "Diagnostic (Martin profile)"
# "Diagnostic (Balast)"
# "Prognostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
13.2. Types If Prognostic
Is Required: FALSE    Type: ENUM    Cardinality: 0.N
If prognostic, type(s) of particulate matter taken into account
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "POC"
# "PIC (calcite)"
# "PIC (aragonite)"
# "BSi"
# "Other: [Please specify]"
# TODO - please enter value(s)
13.3. Size If Prognostic
Is Required: FALSE    Type: ENUM    Cardinality: 0.1
If prognostic, describe whether a particle size spectrum is used to represent the distribution of particles in the water volume
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No size spectrum used"
# "Full size spectrum"
# "Discrete size classes (specify which below)"
# TODO - please enter value(s)
13.4. Size If Discrete
Is Required: FALSE    Type: STRING    Cardinality: 0.1
If prognostic and discrete size, describe which size classes are used
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
13.5. Sinking Speed If Prognostic
Is Required: FALSE    Type: ENUM    Cardinality: 0.1
If prognostic, method for calculation of sinking speed of particles
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Function of particule size"
# "Function of particule type (balast)"
# "Other: [Please specify]"
# TODO - please enter value(s)
14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required: TRUE    Type: ENUM    Cardinality: 1.N
Which carbon isotopes are modelled (C13, C14)?
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "C13"
# "C14"
# TODO - please enter value(s)
14.2. Abiotic Carbon
Is Required: TRUE    Type: BOOLEAN    Cardinality: 1.1
Is abiotic carbon modelled?
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
14.3. Alkalinity
Is Required: TRUE    Type: ENUM    Cardinality: 1.1
How is alkalinity modelled?
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Prognostic"
# "Diagnostic"
# TODO - please enter value(s)
Motivation: Feature engineering in Machine Learning
In ML, one classic way to handle nonlinear relations in data (or non-numerical data) with linear methods is to map the data to so-called features using a nonlinear function $\FM$ (a function mapping from the data space to a vector space).
display(Image(filename="monomials.jpg", width=200))
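As a toy illustration of such a feature map (my example, chosen to match the monomial picture): mapping 1-D points through monomial features $x \mapsto (x, x^2)$ turns a threshold on $|x|$, which no linear rule on the real line can express, into a linear rule in feature space.

```python
import numpy as np

def monomial_features(x, degree=2):
    """Map scalars to monomial features (x, x**2, ..., x**degree)."""
    x = np.asarray(x, dtype=float)
    return np.stack([x**d for d in range(1, degree + 1)], axis=-1)

# Label 1 iff |x| > 1 -- not linearly separable on the real line...
x = np.array([-2.0, -1.5, -0.5, 0.0, 0.5, 1.5, 2.0])
labels = (np.abs(x) > 1).astype(int)

# ...but the second feature coordinate is x**2, so the linear rule
# w = (0, 1), b = -1 separates the classes perfectly in feature space.
feats = monomial_features(x)
w, b = np.array([0.0, 1.0]), -1.0
pred = (feats.dot(w) + b > 0).astype(int)
print(np.array_equal(pred, labels))  # -> True
```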
RKHS_in_Machine_learning.ipynb
ingmarschuster/rkhs_demo
gpl-3.0
In the Feature Space (the codomain of $\FM$), we can then use linear algebra, such as angles, norms and inner products, inducing nonlinear operations on the Input Space (the domain of $\FM$). The central thing we need, apart from the feature space being a vector space, is an inner product, as it induces a norm and a way to measure angles.
Simple classification algorithm using only inner products
Say we are given data points from the mixture of two distributions with densities $p_0,p_1$:
$$x_i \sim w_0 p_0 + w_1 p_1$$
and labels $l_i = 0$ if $x_i$ was actually generated by $p_0$, $l_i = 1$ otherwise. A very simple classification algorithm is to compute the mean in feature space $\mu_c = \frac{1}{N_c} \sum_{l_i = c} \FM(x_i)$ for $c \in \{0,1\}$ and assign a test point to the class whose mean is most similar in terms of the inner product. In other words, the decision function $f_d:\IS\to\{0,1\}$ is defined by
$$f_d(x) = \argmax_{c\in\{0,1\}} \prodDot{\FM(x)}{\mu_c}$$
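Before kernelizing anything, the decision rule itself can be sketched with the identity feature map $\FM(x)=x$ (a minimal sketch; the class centers mirror the two Gaussians generated in the next cell):

```python
import numpy as np

rng = np.random.RandomState(0)
# two Gaussian classes with identity feature map FM(x) = x
x0 = rng.multivariate_normal([-2, 2], np.eye(2) * 1.5, size=100)
x1 = rng.multivariate_normal([2, 2], np.eye(2) * 1.5, size=100)

# class means in feature space
mu0, mu1 = x0.mean(axis=0), x1.mean(axis=0)

def f_d(x):
    # assign to the class whose mean has the larger inner product with FM(x)
    return int(x.dot(mu1) > x.dot(mu0))

# points near each class center are assigned to that class
print(f_d(np.array([-2.0, 2.0])), f_d(np.array([2.0, 2.0])))  # -> 0 1
```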
data = np.vstack([stats.multivariate_normal(np.array([-2, 2]), np.eye(2) * 1.5).rvs(100),
                  stats.multivariate_normal(np.ones(2) * 2, np.eye(2) * 1.5).rvs(100)])
distr_idx = np.r_[[0] * 100, [1] * 100]
for (idx, c, marker) in [(0, 'r', (0, 3, 0)), (1, "b", "x")]:
    pl.scatter(*data[distr_idx == idx, :].T, c=c, alpha=0.4, marker=marker)
    pl.arrow(0, 0, *data[distr_idx == idx, :].mean(0), head_width=0.2, head_length=0.2, fc=c, ec=c)
pl.show()
Remarkably, all positive definite functions are inner products in some feature space.
Theorem
Let $\IS$ be a nonempty set and let $\PDK:\IS\times\IS \to \Reals$, called a kernel. The following two conditions are equivalent:
* $\PDK$ is symmetric and positive semidefinite (psd), i.e. for all $x_1, \dots, x_m \in \IS$ the matrix $\Gram$ with entries $\Gram_{i,j} = \PDK(x_i, x_j)$ is symmetric psd
* there exists a map $\FM: \IS \to \RKHS_\FM$ to a Hilbert space $\RKHS_\FM$ such that $$\PDK(x_i, x_j) = \prodDot{\FM(x_i)}{\FM(x_j)}_\RKHS$$
In other words, $\PDK$ computes the inner product in some $\RKHS_\FM$. Here $\FM$ is called the Feature Map and $\RKHS_\FM$ the feature space. We furthermore endow the space with the norm induced by the inner product, $\|\cdot\|_\PDK$. From the second condition, it is easy to construct $\PDK$ given $\FM$. A general construction for $\FM$ given $\PDK$ is not as trivial but still elementary.
Construction of the canonical feature map (Aronszajn map)
We give the canonical construction of $\FM$ from $\PDK$, together with a definition of the inner product in the new space. In particular, the feature for each $x \in \IS$ will be a function from $\IS$ to $\Reals$:
$$\FM:\IS \to \Reals^\IS,\quad \FM(x) = \PDK(\cdot, x)$$
Thus for the linear kernel $\PDK(x,y)=\prodDot{x}{y}$ we have $\FM(x) = \prodDot{\cdot}{x}$, and for the Gaussian kernel $\PDK(x,y)=\exp\left(-0.5{\|x-y\|^2}/{\sigma^2}\right)$ we have $\FM(x) = \exp\left(-0.5{\|\cdot -x \|^2}/{\sigma^2}\right)$. Now $\RKHS$ is the closure of $\FM(\IS)$ wrt. linear combinations of its elements:
$$\RKHS = \left\{f: f(\cdot)=\sum_{i=1}^m a_i \PDK(\cdot, x_i) \right\} = \mathrm{span}(\FM(\IS))$$
where $m \in \Nats, a_i \in \Reals, x_i \in \IS$. This makes $\RKHS$ a vector space over $\Reals$. 
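The psd condition in the theorem can be checked numerically: for any set of points, the Gram matrix of, say, the Gaussian kernel should be symmetric with (numerically) non-negative eigenvalues. A quick sketch, assuming nothing beyond NumPy/SciPy:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

rng = np.random.RandomState(42)
X = rng.randn(30, 2)  # 30 arbitrary points in R^2
sigma = 1.0

# Gram matrix of the Gaussian kernel k(x, y) = exp(-0.5 * ||x - y||^2 / sigma^2)
G = np.exp(-0.5 * squareform(pdist(X, 'sqeuclidean')) / sigma**2)

eigvals = np.linalg.eigvalsh(G)  # eigvalsh: G is symmetric
print(np.allclose(G, G.T), eigvals.min() > -1e-10)  # -> True True
```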
For $f(\cdot)=\sum_{i=1}^m a_i \PDK(\cdot, x_i)$ and $g(\cdot)=\sum_{j=1}^{m'} b_j \PDK(\cdot, x'_j)$ we define the inner product in $\RKHS$ as
$$\prodDot{f}{g} = \sum_{i=1}^m \sum_{j=1}^{m'} a_i b_j \PDK(x_i, x'_j)$$
In particular, for $f(\cdot) = \PDK(\cdot,x), g(\cdot) = \PDK(\cdot,x')$, we have $\prodDot{f}{g} = \prodDot{\PDK(\cdot,x)}{\PDK(\cdot,x')}=\PDK(x,x')$. This is called the reproducing property of the kernel of this particular $\RKHS$. Obviously $\RKHS$ with this inner product satisfies all conditions for a Hilbert space: the inner product is
* positive definite
* linear in its first argument
* symmetric
which is why $\RKHS$ is called a Reproducing Kernel Hilbert Space (RKHS).
Inner product classification algorithm is equivalent to a classification with KDEs
The naive classification algorithm we outlined earlier is actually equivalent to a simple classification algorithm using kernel density estimates (KDEs). For concreteness, let $\PDK(x,x') = (2\pi)^{-N/2}\left|\Sigma\right|^{-1/2}\exp\left(-0.5\,(x-x')^{\top}\Sigma^{-1}(x-x')\right)$. Then the mean in feature space of data from distribution $c$ with the canonical feature map is
$$\mu_c = \frac{1}{N_c} \sum_{l_i = c} \FM(x_i) = \frac{1}{N_c} \sum_{l_i = c} \PDK(x_i, \cdot) = \frac{1}{N_c} \sum_{l_i = c} (2\pi)^{-N/2}\left|\Sigma\right|^{-1/2}\exp\left(-0.5\,(\cdot-x_i)^{\top}\Sigma^{-1}(\cdot-x_i)\right)$$
which is just a KDE of the density $p_c$ using Gaussian kernels with parameter $\Sigma$. For a test point $y$ that we want to classify, its feature is just $\PDK(y,\cdot) = (2\pi)^{-N/2}\left|\Sigma\right|^{-1/2}\exp\left(-0.5\,(y-\cdot)^{\top}\Sigma^{-1}(y-\cdot)\right)$. Its inner product with the class mean is just the evaluation of the KDE at $y$ (because of the reproducing property). Thus each point is classified as belonging to the class whose KDE assigns the highest probability to $y$.
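The KDE equivalence can be verified numerically in 1-D (a sketch under the assumption $\Sigma = \sigma^2$, $N = 1$): the inner product of $\FM(y)$ with the mean embedding, computed via the reproducing property, should match a Gaussian KDE with bandwidth $\sigma$ evaluated at $y$.

```python
import numpy as np
from scipy.stats import gaussian_kde

sigma = 0.7
rng = np.random.RandomState(1)
xs = rng.randn(50)  # samples from one class
y = 0.3             # test point

def gauss_k(x, xp, sigma=sigma):
    # Gaussian density kernel with Sigma = sigma^2, in 1-D
    return (2 * np.pi) ** -0.5 / sigma * np.exp(-0.5 * (x - xp) ** 2 / sigma ** 2)

# <FM(y), mu_c> computed via the reproducing property:
inner_prod = np.mean(gauss_k(xs, y))

# scipy's Gaussian KDE; a scalar bw_method is used directly as the factor that
# scales the sample std, so this choice makes the KDE bandwidth exactly sigma
kde = gaussian_kde(xs, bw_method=sigma / xs.std(ddof=1))
print(np.isclose(inner_prod, kde(y)[0]))  # -> True
```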
class Kernel(object):
    def mean_emb(self, samps):
        return lambda Y: self.k(samps, Y).sum() / len(samps)

    def mean_emb_len(self, samps):
        return self.k(samps, samps).sum() / len(samps)**2  # was len(samps**2), a bug

    def k(self, X, Y):
        raise NotImplementedError()


class FeatMapKernel(Kernel):
    def __init__(self, feat_map):
        self.features = feat_map

    def features_mean(self, samps):
        return self.features(samps).mean(0)

    def mean_emb_len(self, samps):
        feature_space_mean = self.features_mean(samps)
        return feature_space_mean.dot(feature_space_mean)

    def mean_emb(self, samps):
        feature_space_mean = self.features(samps).mean(0)
        return lambda Y: self.features(Y).dot(feature_space_mean)

    def k(self, X, Y):
        return self.features(X).dot(self.features(Y).T)


class LinearKernel(FeatMapKernel):
    def __init__(self):
        FeatMapKernel.__init__(self, lambda x: x)


class GaussianKernel(Kernel):
    def __init__(self, sigma):
        self.width = sigma

    def k(self, X, Y=None):
        assert len(np.shape(X)) == 2
        # if X == Y, use the more efficient pdist call, which exploits symmetry
        if Y is None:
            sq_dists = squareform(pdist(X, 'sqeuclidean'))
        else:
            assert len(np.shape(Y)) == 2
            assert np.shape(X)[1] == np.shape(Y)[1]
            sq_dists = cdist(X, Y, 'sqeuclidean')
        return np.exp(-0.5 * sq_dists / self.width ** 2)


class StudentKernel(Kernel):
    def __init__(self, s2, df):
        self.dens = dist.mvt(0, s2, df)  # dist.mvt: multivariate-t from the author's distributions helper

    def k(self, X, Y=None):
        if Y is None:
            sq_dists = squareform(pdist(X, 'sqeuclidean'))
        else:
            assert len(np.shape(Y)) == 2
            assert np.shape(X)[1] == np.shape(Y)[1]
            sq_dists = cdist(X, Y, 'sqeuclidean')
        dists = np.sqrt(sq_dists)
        return np.exp(self.dens.logpdf(dists.flatten())).reshape(dists.shape)


def kernel_mean_inner_prod_classification(samps1, samps2, kernel):
    mean1 = kernel.mean_emb(samps1)
    norm_mean1 = kernel.mean_emb_len(samps1)  # (not used by the decision rule below)
    mean2 = kernel.mean_emb(samps2)
    norm_mean2 = kernel.mean_emb_len(samps2)

    def sim(test):
        return mean1(test) - mean2(test)

    def decision(test):
        return 1 if sim(test) >= 0 else 0

    return sim, decision


def apply_to_mg(func, *mg):
    # apply a function to points on a meshgrid
    x = np.vstack([e.flat for e in mg]).T
    return np.array([func(i.reshape((1, 2))) for i in x]).reshape(mg[0].shape)


def plot_with_contour(samps, data_idx, cont_func, method_name, delta=0.025, pl=pl):
    x = np.arange(samps.T[0].min() - delta, samps.T[0].max() + delta, delta)  # was samps.T[1].max(), a bug
    y = np.arange(samps.T[1].min() - delta, samps.T[1].max() + delta, delta)
    X, Y = np.meshgrid(x, y)
    Z = apply_to_mg(cont_func, X, Y).reshape(X.shape)
    fig = pl.figure()
    pl.pcolormesh(X, Y, Z > 0, cmap=pl.cm.Pastel2)
    pl.contour(X, Y, Z, colors=['k', 'k', 'k'], linestyles=['--', '-', '--'], levels=[-.5, 0, .5])
    pl.title('Decision for ' + method_name)
    for (idx, c, marker) in [(0, 'r', (0, 3, 0)), (1, "b", "x")]:
        pl.scatter(*samps[data_idx == idx, :].T, c=c, alpha=0.7, marker=marker)
    pl.show()


for (kern_name, kern) in [("Linear", LinearKernel()),
                          ("Student-t", StudentKernel(0.1, 10)),
                          ("Gauss", GaussianKernel(0.1))]:
    (sim, dec) = kernel_mean_inner_prod_classification(data[distr_idx == 1, :], data[distr_idx == 0, :], kern)
    plot_with_contour(data, distr_idx, sim, 'Inner Product classif. ' + kern_name, pl=plt)
Obviously, the linear kernel might already be enough for this simple dataset. Another interesting observation, however, is that the Student-t based kernel is more robust to outliers in the data and yields a lower-variance classification algorithm compared to the Gaussian kernel. This is to be expected given the fatter tails of the Student-t. Now let's look at a dataset that is not linearly separable.
data, distr_idx = sklearn.datasets.make_circles(n_samples=400, factor=.3, noise=.05)
for (kern_name, kern) in [("Linear", LinearKernel()),
                          ("Stud", StudentKernel(0.1, 10)),
                          ("Gauss1", GaussianKernel(0.1))]:
    (sim, dec) = kernel_mean_inner_prod_classification(data[distr_idx == 1, :], data[distr_idx == 0, :], kern)
    plot_with_contour(data, distr_idx, sim, 'Inner Product classif. ' + kern_name, pl=plt)
Ok, now we start by defining a starting chemical composition of interest (note the trailing dots, so that Python interprets the numbers as floats and not ints!):
CO2 = 4890.
H2O = 3.67
PPMS0 = 3560.
PPMCl0 = 1572.
Si = 52.12
Al = 16.38
Fe = 5.82
Ca = 10.72
Mg = 6.71
Na = 2.47
K = 1.89
PySolExExample.ipynb
charlesll/Examples
gpl-2.0
Now we define the value of pi to use, which is the parameterisation described in Dixon (1997). This value is not used unless piswitch is set to 1.
pi = -0.05341
Below is the wt% of SiO2 that was used for the SiO2-only parameterisation. From Fred: "I think (I should check but don't have the code available now) that this is no longer used as the SiO2 wt% is calculated from Si mol%."
SiO2 = 52.12 #should match the good value
Now the pisol switch: if 1, solubility is based on the value of pi; if 0, it is based on SiO2 wt% only.
pisol = 0
Now a second switch to determine whether pi is given or should be calculated: if 1, the given value of pi is used for the solubility calculations; if 0, pi is calculated from the composition of the melt.
piswitch = 0
Now let's fix the other parameters of our system: temperature, pressure, and oxygen fugacity! We will start with a fixed oxygen fugacity, pressure and temperature:
T = 1153.  # in K
P = 100.  # in bars
NNO = 1.8
Ok, for this single calculation, SolEx has a flag for terminal output, but it does not work in the Notebook. So let's set the flag to False:
flagout = False  # has to be a bool value
And the function to call is pysolex.pyex:
output = pysolex.pyex(H2O,CO2,PPMS0,PPMCl0,Si,Al,Fe,Ca,Mg,Na,K,pi,SiO2,pisol,piswitch,flagout,T,P,NNO) output
Oh... The result contained in output is a SWIG object wrapping an array of doubles... To read it, the only way I found online is to read the memory block allocated to this object directly, using ctypes:
rawPointer = output.__long__()  # we're going to read the "address"
pC = ctypes.cast(rawPointer, ctypes.POINTER(ctypes.c_double))  # and we read the array stored at this address
print("wt% H2O = " + str(pC[0]))
print("PPM CO2 = " + str(pC[1]))
print("PPM S = " + str(pC[2]))
print("PPM Cl = " + str(pC[3]))
print("Vol% Exsolve = " + str(pC[4]))
print("XV H2O (mass) = " + str(pC[5]))
print("XV CO2 (mass) = " + str(pC[6]))
print("XV S (mass) = " + str(pC[7]))
print("XV Cl (mass) = " + str(pC[8]))
print("molV H2O (mass) = " + str(pC[9]))
print("molV CO2 (mass) = " + str(pC[10]))
print("molV S (mass) = " + str(pC[11]))
print("molV Cl (mass) = " + str(pC[12]))
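The same ctypes.cast trick can be tried on a plain Python-owned buffer, which makes it clear what the cast does (the toy array here is mine, not SolEx output):

```python
import ctypes

# allocate three doubles and note the integer address of the block,
# playing the role of output.__long__() above
buf = (ctypes.c_double * 3)(1.5, 2.5, 3.5)
rawPointer = ctypes.addressof(buf)

# cast the raw address to a double* and index into it
pC = ctypes.cast(rawPointer, ctypes.POINTER(ctypes.c_double))
values = [pC[i] for i in range(3)]
print(values)  # -> [1.5, 2.5, 3.5]
```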
Ok, it's working. Now let's complicate the case. Imagine that we have closed-system degassing, going from P = 4000 to 100 bar, as you can do in SolEx. You would write something like this to reproduce the calculation in Python:
Pint = np.arange(100, 4000, 100)  # start, stop, step (note: the stop value is excluded)
rev_Pint = Pint[::-1]  # so that the first values are the highest ones
results = np.zeros((len(Pint), 13))  # for storing the results
We create a loop in which we will call pysolex for doing the calculation:
for i in range(len(rev_Pint)):
    output = pysolex.pyex(H2O, CO2, PPMS0, PPMCl0, Si, Al, Fe, Ca, Mg, Na, K,
                          pi, SiO2, pisol, piswitch, flagout, T, rev_Pint[i], NNO)
    rawPointer = output.__long__()
    pC = ctypes.cast(rawPointer, ctypes.POINTER(ctypes.c_double))
    results[i, 0] = pC[0]    # wt% water
    results[i, 1] = pC[1]    # CO2 ppm
    results[i, 2] = pC[2]    # S ppm
    results[i, 3] = pC[3]    # Cl ppm
    results[i, 4] = pC[4]    # EXSOLVE
    results[i, 5] = pC[5]    # XV H2O
    results[i, 6] = pC[6]    # XV CO2
    results[i, 7] = pC[7]    # XV S
    results[i, 8] = pC[8]    # XV Cl
    results[i, 9] = pC[9]    # molV H2O
    results[i, 10] = pC[10]  # molV CO2
    results[i, 11] = pC[11]  # molV S
    results[i, 12] = pC[12]  # molV Cl
Done! Let's do a nice graph for those results:
plt.plot(rev_Pint, results[:, 0])
plt.xlabel("Pressure, bars", fontsize=14)
plt.ylabel("Water content in melt, wt%", fontsize=14)
plt.title("Fig. 1: Water concentration vs pressure, closed system", fontsize=14, fontweight="bold")
plt.text(2000, 2, "T = " + str(T) + "\nNNO = " + str(NNO), fontsize=14)
Let's do the same thing for the CO2 now:
plt.plot(rev_Pint, results[:, 1])
plt.xlabel("Pressure, bars", fontsize=14)
plt.ylabel("CO$_2$ content in melt, ppm", fontsize=14)
plt.title("Fig. 2: CO$_2$ concentration vs pressure, closed system", fontsize=14, fontweight="bold")
plt.text(1000, 800, "T = " + str(T) + "\nNNO = " + str(NNO), fontsize=14)
Now let's make the case of an open system. Easy: we just take the H2O, CO2, S and Cl values from the previous output and use them as input for the next step...
for i in range(len(rev_Pint)):
    if i == 0:
        output = pysolex.pyex(H2O, CO2, PPMS0, PPMCl0, Si, Al, Fe, Ca, Mg, Na, K,
                              pi, SiO2, pisol, piswitch, flagout, T, rev_Pint[i], NNO)
    else:
        H2O = results[i-1, 0]
        CO2 = results[i-1, 1]
        PPMS = results[i-1, 2]
        PPMCl = results[i-1, 3]
        output = pysolex.pyex(H2O, CO2, PPMS, PPMCl, Si, Al, Fe, Ca, Mg, Na, K,
                              pi, SiO2, pisol, piswitch, flagout, T, rev_Pint[i], NNO)
    rawPointer = output.__long__()
    pC = ctypes.cast(rawPointer, ctypes.POINTER(ctypes.c_double))
    results[i, 0] = pC[0]    # wt% water
    results[i, 1] = pC[1]    # CO2 ppm
    results[i, 2] = pC[2]    # S ppm
    results[i, 3] = pC[3]    # Cl ppm
    results[i, 4] = pC[4]    # EXSOLVE
    results[i, 5] = pC[5]    # XV H2O
    results[i, 6] = pC[6]    # XV CO2
    results[i, 7] = pC[7]    # XV S
    results[i, 8] = pC[8]    # XV Cl
    results[i, 9] = pC[9]    # molV H2O
    results[i, 10] = pC[10]  # molV CO2
    results[i, 11] = pC[11]  # molV S
    results[i, 12] = pC[12]  # molV Cl
We can now plot the results as we did for the closed system case:
plt.plot(rev_Pint, results[:, 0])
plt.xlabel("Pressure, bars", fontsize=14)
plt.ylabel("Water content in melt, wt%", fontsize=14)
plt.title("Fig. 3: Open system", fontsize=14, fontweight="bold")
plt.text(2000, 2, "T = " + str(T) + "\nNNO = " + str(NNO))
plt.figure()  # start a new figure so Fig. 4 does not overlay Fig. 3
plt.plot(rev_Pint, results[:, 1])
plt.xlabel("Pressure, bars", fontsize=14)
plt.ylabel("CO$_2$ content in melt, ppm", fontsize=14)
plt.title("Fig. 4: Open system", fontsize=14, fontweight="bold")
#plt.text(1000, 800, "T = " + str(T) + "\nNNO = " + str(NNO))
results[0, 0]
rev_Pint[0]
Download the .rec files from:
https://s3.amazonaws.com/smallya-test/randallnotrandall/rnr_train.lst.rec
https://s3.amazonaws.com/smallya-test/randallnotrandall/rnr_valid.lst.rec
download('https://s3.amazonaws.com/smallya-test/randallnotrandall/rnr_train.lst.rec')
download('https://s3.amazonaws.com/smallya-test/randallnotrandall/rnr_valid.lst.rec')

# Data iterators for the Randall-or-not dataset
import mxnet as mx

def get_iterators(batch_size, data_shape=(3, 224, 224)):
    train = mx.io.ImageRecordIter(
        path_imgrec='./rnr_train.lst.rec',
        data_name='data',
        label_name='softmax_label',
        batch_size=batch_size,
        data_shape=data_shape,
        shuffle=True,
        rand_crop=True,
        rand_mirror=True)
    val = mx.io.ImageRecordIter(
        path_imgrec='./rnr_valid.lst.rec',
        data_name='data',
        label_name='softmax_label',
        batch_size=batch_size,
        data_shape=data_shape,
        rand_crop=False,
        rand_mirror=False)
    return (train, val)

def get_fine_tune_model(symbol, arg_params, num_classes, layer_name='flatten0'):
    """
    symbol: the pre-trained network symbol
    arg_params: the argument parameters of the pre-trained model
    num_classes: the number of classes for the fine-tune dataset
    layer_name: the layer name before the last fully-connected layer
    """
    all_layers = symbol.get_internals()  # was sym.get_internals(), which read a global
    net = all_layers[layer_name + '_output']
    net = mx.symbol.FullyConnected(data=net, num_hidden=num_classes, name='fc1')
    net = mx.symbol.SoftmaxOutput(data=net, name='softmax')
    new_args = dict({k: arg_params[k] for k in arg_params if 'fc1' not in k})
    return (net, new_args)

import logging
head = '%(asctime)-15s %(message)s'
logging.basicConfig(level=logging.DEBUG, format=head)

def fit(symbol, arg_params, aux_params, train, val, batch_size, num_gpus=1, num_epoch=1):
    devs = [mx.gpu(i) for i in range(num_gpus)]  # replace mx.gpu by mx.cpu for CPU training
    mod = mx.mod.Module(symbol=symbol, context=devs)  # was new_sym, a global
    mod.bind(data_shapes=train.provide_data, label_shapes=train.provide_label)
    mod.init_params(initializer=mx.init.Xavier(rnd_type='gaussian', factor_type="in", magnitude=2))
    mod.set_params(arg_params, aux_params, allow_missing=True)  # was new_args, a global
    mod.fit(train, val,
            num_epoch=num_epoch,
            batch_end_callback=mx.callback.Speedometer(batch_size, 10),
            kvstore='device',
            optimizer='sgd',
            optimizer_params={'learning_rate': 0.009},
            eval_metric='acc')
    return mod

num_classes = 2  # binary classification (Randall vs not Randall)
batch_per_gpu = 16
num_gpus = 4
(new_sym, new_args) = get_fine_tune_model(sym, arg_params, num_classes)
batch_size = batch_per_gpu * num_gpus
(train, val) = get_iterators(batch_size)
mod = fit(new_sym, new_args, aux_params, train, val, batch_size, num_gpus)
#metric = mx.metric.Accuracy()
#mod_score = mod.score(val, metric)
#print mod_score

prefix = 'resnet-mxnet-rnr'
epoch = 1
mc = mod.save_checkpoint(prefix, epoch)

# load the model; make sure you have executed the previous cells to train
import cv2
dshape = [('data', (1, 3, 224, 224))]

def load_model(s_fname, p_fname):
    """
    Load a model checkpoint from file.
    :return: (symbol, arg_params, aux_params)
    arg_params : dict of str to NDArray of the net's weights.
    aux_params : dict of str to NDArray of the net's auxiliary states.
    """
    symbol = mx.symbol.load(s_fname)
    save_dict = mx.nd.load(p_fname)
    arg_params = {}
    aux_params = {}
    for k, v in save_dict.items():
        tp, name = k.split(':', 1)
        if tp == 'arg':
            arg_params[name] = v
        if tp == 'aux':
            aux_params[name] = v
    return symbol, arg_params, aux_params

model_symbol = "resnet-mxnet-rnr-symbol.json"
model_params = "resnet-mxnet-rnr-0001.params"
sym, arg_params, aux_params = load_model(model_symbol, model_params)
mod = mx.mod.Module(symbol=sym)
# bind the model and set training == False; define the data shape
mod.bind(for_training=False, data_shapes=dshape)
mod.set_params(arg_params, aux_params)
E3_finetuning_randall_not_randall.ipynb
sunilmallya/dl-twitch-series
apache-2.0
Let's see if we can predict whether that's a Randall image <img src="https://d0.awsstatic.com/Developer%20Marketing/evangelists/evangelist-bio-randall-hunt.png"/>
import urllib2
import numpy as np
from collections import namedtuple

Batch = namedtuple('Batch', ['data'])

def preprocess_image(img, show_img=False):
    '''convert the image to a numpy array in (batch, channel, height, width) format'''
    img = cv2.resize(img, (224, 224))
    img = np.swapaxes(img, 0, 2)
    img = np.swapaxes(img, 1, 2)
    img = img[np.newaxis, :]
    return img

url = 'https://d0.awsstatic.com/Developer%20Marketing/evangelists/evangelist-bio-randall-hunt.png'
req = urllib2.urlopen(url)
image = np.asarray(bytearray(req.read()), dtype="uint8")
image = cv2.imdecode(image, cv2.IMREAD_COLOR)

img = preprocess_image(image)
mod.forward(Batch([mx.nd.array(img)]))

# predict
prob = mod.get_outputs()[0].asnumpy()
labels = ["Randall", "Not Randall"]
print labels[prob.argmax()], max(prob[0])
Yay! That's Randall. Let's visualize the filters.
## Feature extraction
import matplotlib.pyplot as plt
import cv2
import numpy as np

# define a simple data batch
from collections import namedtuple
Batch = namedtuple('Batch', ['data'])

def get_image(url, show=False):
    # download and show the image
    fname = mx.test_utils.download(url)
    img = cv2.cvtColor(cv2.imread(fname), cv2.COLOR_BGR2RGB)
    if img is None:
        return None
    if show:
        plt.imshow(img)
        plt.axis('off')
    # convert into format (batch, RGB, width, height)
    img = cv2.resize(img, (224, 224))
    img = np.swapaxes(img, 0, 2)
    img = np.swapaxes(img, 1, 2)
    img = img[np.newaxis, :]
    return img

# list the last 10 layers
all_layers = sym.get_internals()
print all_layers.list_outputs()[-10:]

#fe_sym = all_layers['flatten0_output']
fe_sym = all_layers['conv0_output']
fe_mod = mx.mod.Module(symbol=fe_sym, context=mx.cpu(), label_names=None)
fe_mod.bind(for_training=False, data_shapes=[('data', (1, 3, 224, 224))])
fe_mod.set_params(arg_params, aux_params)

url = 'https://d0.awsstatic.com/Developer%20Marketing/evangelists/evangelist-bio-randall-hunt.png'
img = get_image(url)
fe_mod.forward(Batch([mx.nd.array(img)]))
features = fe_mod.get_outputs()[0].asnumpy()
print features.shape

from PIL import Image
%matplotlib inline

w, h = 112, 112

# Plot helpers
def plots(ims, figsize=(12, 6), rows=1, interp=False, titles=None):
    if type(ims[0]) is np.ndarray:
        ims = np.array(ims).astype(np.uint8)
    f = plt.figure(figsize=figsize)
    for i in range(len(ims)):
        sp = f.add_subplot(rows, len(ims) // rows, i + 1)
        sp.axis('Off')
        if titles is not None:
            sp.set_title(titles[i], fontsize=16)
        plt.imshow(ims[i], interpolation=None if interp else 'none')

def plots_idx(idx, titles=None):
    plots([features[0][i] for i in idx])

fname = mx.test_utils.download(url)
img = cv2.cvtColor(cv2.imread(fname), cv2.COLOR_BGR2RGB)
plt.axis('off')
plt.imshow(img)

plots_idx(range(0, 5))
plots_idx(range(5, 10))
plots_idx(range(10, 15))

#data = np.zeros((h, w, 3), dtype=np.uint8)
#img = Image.fromarray(features[0][:2028], 'RGB')
#img.show()
E3_finetuning_randall_not_randall.ipynb
sunilmallya/dl-twitch-series
apache-2.0
Send CX to a service using the requests module. Services are built on a server, so you don't have to install graph libraries in your local environment; this makes it very easy to use python-igraph and graph-tool. Two modules are used to send CX: requests, to send a CX file to a service from Python (curl can also be used), and json, to convert an object to a CX-formatted string.
import requests
import json

url_community = 'http://localhost:80'   # igraph's community detection service URL
url_layout = 'http://localhost:3000'    # graph-tool's layout service URL

headers = {'Content-type': 'application/json'}
notebooks/DEMO.ipynb
idekerlab/graph-services
mit
Network used for DEMO This DEMO uses yeastHQSubnet.cx as the original network. - 2924 nodes - 6827 edges <img src="example1.png" alt="Drawing" style="width: 500px;"/> 1. igraph community detection and color generator service To detect communities, igraph's community detection service can be used. How to use the service on Jupyter Notebook: open the CX file using open(); set parameters in dictionary format (about parameters, see the documentation of the service); post the CX data to the URL of the service using requests.post().
data = open('./yeastHQSubnet.cx')  # 1.
parameter = {'type': 'leading_eigenvector', 'clusters': 5, 'palette': 'husl'}  # 2.
r = requests.post(url=url_community, headers=headers, data=data, params=parameter)  # 3.
notebooks/DEMO.ipynb
idekerlab/graph-services
mit
What happened? The output contains the graph with community membership plus a color assignment for each group. - node1 : group 1, red - node2 : group 1, red - node3 : group 2, green ... You don't have to create your own color palette manually. To save and inspect the output data, you can use r.json()['data']. Note - When you use this output as the input of the next service, you must use json.dumps(r.json()['data']). - You must replace single quotation marks with double quotation marks in the output file.
import re

with open('output1.cx', 'w') as f:
    # single quotation -> double quotation
    output = re.sub(string=str(r.json()['data']), pattern="'", repl='"')
    f.write(output)
notebooks/DEMO.ipynb
idekerlab/graph-services
mit
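As an aside, the regex quote-replacement step can be avoided entirely by serializing with the standard-library json module, which always emits valid, double-quoted JSON. This is a minimal sketch; the data dictionary below merely stands in for the real r.json()['data'] returned by the service.

```python
import json

# stands in for r.json()['data']; the real object comes from the service response
data = {'nodes': [{'@id': 1}], 'edges': []}

with open('output1.cx', 'w') as f:
    json.dump(data, f)  # json.dump always emits double-quoted, valid JSON
```

No post-processing of quotation marks is needed, and the file can be read back with json.load.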
3. graph-tool layout service To apply a layout algorithm, graph-tool's layout service can be used: C++-optimized, parallel, community-structure-aware layout algorithms. You can use the community structure as a parameter for the layout, and the result reflects that structure. You can use graph-tool's service in the same way as igraph's service. Both the input and output of a cxMate service are CX, NOT igraph objects, graph-tool objects, and so on. So you don't have to convert an igraph object to a graph-tool object. <img src="service.png" alt="Drawing" style="width: 750px;"/> How to use the service on Jupyter Notebook: serialize the previous output using json.dumps(r.json()['data']); set parameters in dictionary format (about parameters, see the documentation of the service); post the CX data to the URL of the service using requests.post().
data2 = json.dumps(r.json()['data'])  # 1.
parameter = {'only-layout': False, 'groups': 'community'}  # 2.
r2 = requests.post(url=url_layout, headers=headers, data=data2, params=parameter)  # 3.
notebooks/DEMO.ipynb
idekerlab/graph-services
mit
Save .cx file To save and inspect the output data, you can use r.json()['data']
import re

with open('output2.cx', 'w') as f:
    # single quotation -> double quotation
    output = re.sub(string=str(r2.json()['data']), pattern="'", repl='"')
    f.write(output)
notebooks/DEMO.ipynb
idekerlab/graph-services
mit
Color Palette If you want to change the colors of the communities, you can do so easily. Many of seaborn's color palettes can be used. (See http://seaborn.pydata.org/tutorial/color_palettes.html)
%matplotlib inline
import seaborn as sns, numpy as np
from ipywidgets import interact, FloatSlider
notebooks/DEMO.ipynb
idekerlab/graph-services
mit
Default Palette Without setting the 'palette' parameter, 'husl' is used as the color palette.
def show_husl(n):
    sns.palplot(sns.color_palette('husl', n))
    print('palette: husl')

interact(show_husl, n=10);
notebooks/DEMO.ipynb
idekerlab/graph-services
mit
Other palettes
def show_pal0(palette):
    sns.palplot(sns.color_palette(palette, 24))

interact(show_pal0, palette='deep muted pastel bright dark colorblind'.split());

sns.choose_colorbrewer_palette('qualitative');
sns.choose_colorbrewer_palette('sequential');
notebooks/DEMO.ipynb
idekerlab/graph-services
mit
Load a lens file
zfile = os.path.join(l.zGetPath()[1], 'Sequential', 'Objectives',
                     'Cooke 40 degree field.zmx')
l.zLoadFile(zfile)
Examples/IPNotebooks/01 Notes on ipzCaptureWindow functions.ipynb
indranilsinharoy/PyZDDE
mit
Perform a quick-focus
l.zQuickFocus()
Examples/IPNotebooks/01 Notes on ipzCaptureWindow functions.ipynb
indranilsinharoy/PyZDDE
mit
Example of a Layout plot Using ipzCaptureWindow to directly embed a Layout plot into the notebook.
l.ipzCaptureWindow('Lay', percent=15, gamma=0.4)
Examples/IPNotebooks/01 Notes on ipzCaptureWindow functions.ipynb
indranilsinharoy/PyZDDE
mit
Why do we need to set gamma? Is there one gamma value good for all analysis window rendering? Up to Zemax 13 there was no way to control the thickness of the lines produced by Zemax for the metafiles. Generally the lines produced were very thin, and the rescaled version would be too light to be visible. One way this problem was addressed is to lowpass filter the original image, rescale it, and then use a gamma value less than one during the conversion from metafile to PNG. This is probably not the optimal solution; one obvious side effect is that black text becomes very thick and ugly. Instead of embedding the figure directly, we can also get a pixel array using PyZDDE. Plotting the returned array using matplotlib may allow more control and annotation options, as shown below:
arr = l.ipzCaptureWindow('Lay', percent=15, gamma=0.08, retArr=True)
Examples/IPNotebooks/01 Notes on ipzCaptureWindow functions.ipynb
indranilsinharoy/PyZDDE
mit
Now that we have the pixel array, we can either use the convenience function provided in PyZDDE to make a quick plot, or make our own figure and plot it as we want. Let's first see how we can use the convenience function, imshow(), provided by PyZDDE to make a cropped plot. The function takes as input the pixel array, a tuple indicating the number of pixels to crop from the left, right, top, and bottom sides of the pixel array, a tuple indicating the matplotlib figure size (optional), and a title string (optional).
pyz.imshow(arr, cropBorderPixels=(5, 5, 1, 90), figsize=(10,10), title='Layout Plot')
Examples/IPNotebooks/01 Notes on ipzCaptureWindow functions.ipynb
indranilsinharoy/PyZDDE
mit
Next, we will create a figure and direct PyZDDE to render the Layout plot in the provided figure and axes. We can then annotate the figure as we like. But first we will get some first-order properties of the lens
l.ipzGetFirst()

fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(111)
# Render the array
pyz.imshow(arr, cropBorderPixels=(5, 5, 1, 90), fig=fig, faxes=ax)
ax.set_title('Layout plot', fontsize=16)
# Annotate Lens numbers
ax.text(41, 70, "L1", fontsize=12)
ax.text(98, 105, "L2", fontsize=12)
ax.text(149, 89, "L3", fontsize=12)
# Annotate the lens with radius of curvature information
col = (0.08,0.08,0.08)
s1_r = 1.0/l.zGetSurfaceData(1,2)
ax.annotate("{:0.2f}".format(s1_r), (37, 232), (8, 265), fontsize=12,
            arrowprops=dict(arrowstyle="->", linewidth=0.45, color=col, relpos=(0.5,0.5)))
s2_r = 1.0/l.zGetSurfaceData(2,2)
ax.annotate("{:0.2f}".format(s2_r), (47, 232), (50, 265), fontsize=12,
            arrowprops=dict(arrowstyle="->", linewidth=0.45, color=col, relpos=(0.5,0.5)))
s6_r = 1.0/l.zGetSurfaceData(6,2)
ax.annotate("{:0.2f}".format(s6_r), (156, 218), (160, 251), fontsize=12,
            arrowprops=dict(arrowstyle="->", linewidth=0.45, color=col, relpos=(0.5,0.5)))
ax.text(5, 310, "Cooke Triplet, EFL = {} mm, F# = {}, Total track length = {} mm"
        .format(50, 5, 60.177), fontsize=14)
plt.show()
Examples/IPNotebooks/01 Notes on ipzCaptureWindow functions.ipynb
indranilsinharoy/PyZDDE
mit
Example of Ray Fan plot
l.ipzCaptureWindow('Ray', percent=17, gamma=0.55)

rarr = l.ipzCaptureWindow('Ray', percent=25, gamma=0.15, retArr=True)
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(111)
pyz.imshow(rarr, cropBorderPixels=(5, 5, 48, 170), fig=fig, faxes=ax)
ax.set_title('Transverse Ray Fan Plot for OBJ: 20.00 (deg)', fontsize=14)
plt.show()
Examples/IPNotebooks/01 Notes on ipzCaptureWindow functions.ipynb
indranilsinharoy/PyZDDE
mit
Example of Spot diagram
l.ipzCaptureWindow('Spt', percent=16, gamma=0.5)

sptd = l.ipzCaptureWindow('Spt', percent=25, gamma=0.15, retArr=True)
fig = plt.figure(figsize=(8,8))
ax = fig.add_subplot(111)
pyz.imshow(sptd, cropBorderPixels=(150, 150, 30, 180), fig=fig, faxes=ax)
ax.set_title('Spot diagram for OBJ: 20.00 (deg)', fontsize=14)
plt.show()
Examples/IPNotebooks/01 Notes on ipzCaptureWindow functions.ipynb
indranilsinharoy/PyZDDE
mit
Examples of using ipzCaptureWindowLQ() function in Zemax 13.2 or earlier ipzCaptureWindowLQ() is useful for quickly capturing a graphic window and embedding it into an IPython notebook or QtConsole. In order to use this function, please copy the ZPL macros from "PyZDDE\ZPLMacros" to the macro directory where Zemax expects the ZPL macros to be (i.e. the folder set in Zemax->Preference->Folders->ZPL). For this particular example, the macro folder path is set to "C:\PROGRAMSANDEXPERIMENTS\ZEMAX\Macros"
l.zSetMacroPath(r"C:\PROGRAMSANDEXPERIMENTS\ZEMAX\Macros")
l.ipzCaptureWindowLQ(1)
Examples/IPNotebooks/01 Notes on ipzCaptureWindow functions.ipynb
indranilsinharoy/PyZDDE
mit
Note that the above command didn't work, because we need to push the lens from the DDE server to the Zemax main window first. Then we also need to open each window.
l.zPushLens()
Examples/IPNotebooks/01 Notes on ipzCaptureWindow functions.ipynb
indranilsinharoy/PyZDDE
mit
Now open the layout analysis window in Zemax. Assuming that this is the first analysis window that has been opened, Zemax would have assigned the number 1 to it.
l.ipzCaptureWindowLQ(1)
Examples/IPNotebooks/01 Notes on ipzCaptureWindow functions.ipynb
indranilsinharoy/PyZDDE
mit
Open the MTF analysis window in Zemax now.
l.ipzCaptureWindowLQ(2)
pyz.closeLink()
Examples/IPNotebooks/01 Notes on ipzCaptureWindow functions.ipynb
indranilsinharoy/PyZDDE
mit
Examples of using ipzCaptureWindowLQ() function in Zemax 14 or later (OpticStudio) In order to do this experiment, a new instance of Zemax 15 was opened and a new link created.
l = pyz.createLink()
zfile = os.path.join(l.zGetPath()[1], 'Sequential', 'Objectives',
                     'Cooke 40 degree field.zmx')
l.zLoadFile(zfile)
l.zPushLens()
# Set the macro path
l.zSetMacroPath(r"C:\PROGRAMSANDEXPERIMENTS\ZEMAX\Macros")
Examples/IPNotebooks/01 Notes on ipzCaptureWindow functions.ipynb
indranilsinharoy/PyZDDE
mit
Now open the layout analysis window in OpticStudio as before.
l.ipzCaptureWindowLQ(1)
Examples/IPNotebooks/01 Notes on ipzCaptureWindow functions.ipynb
indranilsinharoy/PyZDDE
mit
Open FFT MTF analysis window
l.ipzCaptureWindowLQ(2)
Examples/IPNotebooks/01 Notes on ipzCaptureWindow functions.ipynb
indranilsinharoy/PyZDDE
mit
Next, the FFT PSF analysis window was opened
l.ipzCaptureWindowLQ(3)
Examples/IPNotebooks/01 Notes on ipzCaptureWindow functions.ipynb
indranilsinharoy/PyZDDE
mit
A few others .... just for show
l.ipzCaptureWindowLQ(4)  # Shaded Model
l.close()
Examples/IPNotebooks/01 Notes on ipzCaptureWindow functions.ipynb
indranilsinharoy/PyZDDE
mit
Prerequisites This cookbook assumes a working knowledge of Python and NumPy. The concept of broadcasting is particularly important both in this cookbook and in JAX MD. We also assume a basic knowledge of JAX, which JAX MD is built on top of. Here we briefly review a few JAX basics that are important for us: jax.vmap allows for automatic vectorization of a function. What this means is that if you have a function that takes an input x and returns an output y, i.e. y = f(x), then vmap will transform this function to act on an array of x's and return an array of y's, i.e. Y = vmap(f)(X), where X=np.array([x1,x2,...,xn]) and Y=np.array([y1,y2,...,yn]). jax.grad employs automatic differentiation to transform a function into a new function that calculates its gradient, for example: dydx = grad(f)(x). jax.lax.scan allows for efficient for-loops that can be compiled and differentiated over. See here for more details. Random numbers are different in JAX. The details aren't necessary for this cookbook, but if things look a bit different, this is why. The basics of user-defined potentials Create a user defined potential function to use throughout this cookbook Here we create a custom potential that has a short-ranged, non-diverging repulsive interaction and a medium-ranged Morse-like attractive interaction. It takes the following form: \begin{equation} V(r) = \begin{cases} \frac{1}{2} k (r-r_0)^2 - D_0,& r < r_0\ D_0\left( e^{-2\alpha (r-r_0)} -2 e^{-\alpha(r-r_0)}\right), & r \geq r_0 \end{cases} \end{equation} and has 4 parameters: $D_0$, $\alpha$, $r_0$, and $k$.
def harmonic_morse(dr, D0=5.0, alpha=5.0, r0=1.0, k=50.0, **kwargs):
    U = np.where(dr < r0,
                 0.5 * k * (dr - r0)**2 - D0,
                 D0 * (np.exp(-2. * alpha * (dr - r0)) - 2. * np.exp(-alpha * (dr - r0))))
    return np.array(U, dtype=dr.dtype)
notebooks/customizing_potentials_cookbook.ipynb
google/jax-md
apache-2.0
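Before moving on, the vmap and grad transformations reviewed in the Prerequisites can be demonstrated with a minimal sketch (assuming only that jax is installed):

```python
import jax.numpy as np
from jax import grad, vmap

def f(x):
    return x ** 2  # a simple scalar function

dfdx = grad(f)                  # dfdx(x) computes df/dx = 2x
X = np.array([1.0, 2.0, 3.0])
Y = vmap(f)(X)                  # f applied elementwise: [1., 4., 9.]
dY = vmap(dfdx)(X)              # gradient at each input: [2., 4., 6.]
```

The same pattern, composing vmap and grad, is used throughout the cookbook to turn per-pair potentials into forces on whole systems.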
plot $V(r)$.
drs = np.arange(0,3,0.01)
U = harmonic_morse(drs)
plt.plot(drs,U)
format_plot(r'$r$', r'$V(r)$')
finalize_plot()
notebooks/customizing_potentials_cookbook.ipynb
google/jax-md
apache-2.0
Calculate the energy of a system of interacting particles We now want to calculate the energy of a system of $N$ spheres in $d$ dimensions, where each particle interacts with every other particle via our user-defined function $V(r)$. The total energy is \begin{equation} E_\text{total} = \sum_{i<j}V(r_{ij}), \end{equation} where $r_{ij}$ is the distance between particles $i$ and $j$. Our first task is to set up the system by specifying $N$, $d$, and the size of the simulation box. We then use JAX's internal random number generator to pick positions for each particle.
N = 50
dimension = 2
box_size = 6.8

key, split = random.split(key)
R = random.uniform(split, (N,dimension), minval=0.0, maxval=box_size, dtype=f64)

plot_system(R,box_size)
notebooks/customizing_potentials_cookbook.ipynb
google/jax-md
apache-2.0
At this point, we could manually loop over all particle pairs and calculate the energy, keeping track of boundary conditions, etc. Fortunately, JAX MD has machinery to automate this. First, we must define two functions, displacement and shift, which contain all the information of the simulation box, boundary conditions, and underlying metric. displacement is used to calculate the vector displacement between particles, and shift is used to move particles. For most cases, it is recommended to use JAX MD's built in functions, which can be called using: * displacement, shift = space.free() * displacement, shift = space.periodic(box_size) * displacement, shift = space.periodic_general(T) For demonstration purposes, we will define these manually for a square periodic box, though without proper error handling, etc. The following should have the same functionality as displacement, shift = space.periodic(box_size).
def setup_periodic_box(box_size):
    def displacement_fn(Ra, Rb, **unused_kwargs):
        dR = Ra - Rb
        return np.mod(dR + box_size * f32(0.5), box_size) - f32(0.5) * box_size

    def shift_fn(R, dR, **unused_kwargs):
        return np.mod(R + dR, box_size)

    return displacement_fn, shift_fn

displacement, shift = setup_periodic_box(box_size)
notebooks/customizing_potentials_cookbook.ipynb
google/jax-md
apache-2.0
We now set up a function to calculate the total energy of the system. The JAX MD function smap.pair takes a given potential and promotes it to act on all particle pairs in a system. smap.pair does not actually return an energy; rather, it returns a function that can be used to calculate the energy. For convenience and readability, we wrap smap.pair in a new function called harmonic_morse_pair. For now, ignore the species keyword; we will return to it later.
def harmonic_morse_pair(displacement_or_metric, species=None,
                        D0=5.0, alpha=10.0, r0=1.0, k=50.0):
    D0 = np.array(D0, dtype=f32)
    alpha = np.array(alpha, dtype=f32)
    r0 = np.array(r0, dtype=f32)
    k = np.array(k, dtype=f32)
    return smap.pair(
        harmonic_morse,
        space.canonicalize_displacement_or_metric(displacement_or_metric),
        species=species,
        D0=D0,
        alpha=alpha,
        r0=r0,
        k=k)
notebooks/customizing_potentials_cookbook.ipynb
google/jax-md
apache-2.0
Our helper function can be used to construct a function to compute the energy of the entire system as follows.
# Create a function to calculate the total energy with specified parameters
energy_fn = harmonic_morse_pair(displacement,D0=5.0,alpha=10.0,r0=1.0,k=500.0)

# Use this to calculate the total energy
print(energy_fn(R))

# Use grad to calculate the net force
force = -grad(energy_fn)(R)
print(force[:5])
notebooks/customizing_potentials_cookbook.ipynb
google/jax-md
apache-2.0
We are now in a position to use our energy function to manipulate the system. As an example, we perform energy minimization using JAX MD's implementation of the FIRE algorithm. We start by defining a function that takes an energy function, a set of initial positions, and a shift function and runs a specified number of steps of the minimization algorithm. The function returns the final set of positions and the maximum absolute value component of the force. We will use this function throughout this cookbook.
def run_minimization(energy_fn, R_init, shift, num_steps=5000):
    dt_start = 0.001
    dt_max = 0.004
    init, apply = minimize.fire_descent(jit(energy_fn), shift,
                                        dt_start=dt_start, dt_max=dt_max)
    apply = jit(apply)

    @jit
    def scan_fn(state, i):
        return apply(state), 0.

    state = init(R_init)
    state, _ = lax.scan(scan_fn, state, np.arange(num_steps))

    return state.position, np.amax(np.abs(-grad(energy_fn)(state.position)))
notebooks/customizing_potentials_cookbook.ipynb
google/jax-md
apache-2.0
Now run the minimization with our custom energy function.
Rfinal, max_force_component = run_minimization(energy_fn, R, shift)
print('largest component of force after minimization = {}'.format(max_force_component))
plot_system( Rfinal, box_size )
notebooks/customizing_potentials_cookbook.ipynb
google/jax-md
apache-2.0
Create a truncated potential It is often desirable to have a potential that is strictly zero beyond a well-defined cutoff distance. In addition, MD simulations require the energy and force (i.e. first derivative) to be continuous. To easily modify an existing potential $V(r)$ to have this property, JAX MD follows the approach taken by HOOMD Blue. Consider the function \begin{equation} S(r) = \begin{cases} 1,& r<r_\mathrm{on} \ \frac{(r_\mathrm{cut}^2-r^2)^2 (r_\mathrm{cut}^2 + 2r^2 - 3 r_\mathrm{on}^2)}{(r_\mathrm{cut}^2-r_\mathrm{on}^2)^3},& r_\mathrm{on} \leq r < r_\mathrm{cut}\ 0,& r \geq r_\mathrm{cut} \end{cases} \end{equation} Here we plot both $S(r)$ and $\frac{dS(r)}{dr}$, both of which are smooth and strictly zero above $r_\mathrm{cut}$.
dr = np.arange(0,3,0.01)
S = energy.multiplicative_isotropic_cutoff(lambda dr: 1, r_onset=1.5, r_cutoff=2.0)(dr)
ngradS = vmap(grad(energy.multiplicative_isotropic_cutoff(lambda dr: 1, r_onset=1.5, r_cutoff=2.0)))(dr)

plt.plot(dr,S,label=r'$S(r)$')
plt.plot(dr,ngradS,label=r'$\frac{dS(r)}{dr}$')
plt.legend()
format_plot(r'$r$','')
finalize_plot()
notebooks/customizing_potentials_cookbook.ipynb
google/jax-md
apache-2.0
We then use $S(r)$ to create a new function \begin{equation}\tilde V(r) = V(r) S(r), \end{equation} which is exactly $V(r)$ below $r_\mathrm{on}$, strictly zero above $r_\mathrm{cut}$ and is continuous in its first derivative. This is implemented in JAX MD through energy.multiplicative_isotropic_cutoff, which takes in a potential function $V(r)$ (e.g. our harmonic_morse function) and returns a new function $\tilde V(r)$.
harmonic_morse_cutoff = energy.multiplicative_isotropic_cutoff(
    harmonic_morse, r_onset=1.5, r_cutoff=2.0)

dr = np.arange(0,3,0.01)
V = harmonic_morse(dr)
V_cutoff = harmonic_morse_cutoff(dr)
F = -vmap(grad(harmonic_morse))(dr)
F_cutoff = -vmap(grad(harmonic_morse_cutoff))(dr)

plt.plot(dr,V, label=r'$V(r)$')
plt.plot(dr,V_cutoff, label=r'$\tilde V(r)$')
plt.plot(dr,F, label=r'$-\frac{d}{dr} V(r)$')
plt.plot(dr,F_cutoff, label=r'$-\frac{d}{dr} \tilde V(r)$')
plt.legend()
format_plot('$r$', '')
plt.ylim(-13,5)
finalize_plot()
notebooks/customizing_potentials_cookbook.ipynb
google/jax-md
apache-2.0
As before, we can use smap.pair to promote this to act on an entire system.
def harmonic_morse_cutoff_pair(displacement_or_metric,
                               D0=5.0, alpha=5.0, r0=1.0, k=50.0,
                               r_onset=1.5, r_cutoff=2.0):
    D0 = np.array(D0, dtype=f32)
    alpha = np.array(alpha, dtype=f32)
    r0 = np.array(r0, dtype=f32)
    k = np.array(k, dtype=f32)
    return smap.pair(
        energy.multiplicative_isotropic_cutoff(harmonic_morse,
                                               r_onset=r_onset, r_cutoff=r_cutoff),
        space.canonicalize_displacement_or_metric(displacement_or_metric),
        D0=D0,
        alpha=alpha,
        r0=r0,
        k=k)
notebooks/customizing_potentials_cookbook.ipynb
google/jax-md
apache-2.0
This is implemented as before
# Create a function to calculate the total energy
energy_fn = harmonic_morse_cutoff_pair(displacement, D0=5.0, alpha=10.0, r0=1.0, k=500.0,
                                       r_onset=1.5, r_cutoff=2.0)

# Use this to calculate the total energy
print(energy_fn(R))

# Use grad to calculate the net force
force = -grad(energy_fn)(R)
print(force[:5])

# Minimize the energy using the FIRE algorithm
Rfinal, max_force_component = run_minimization(energy_fn, R, shift)
print('largest component of force after minimization = {}'.format(max_force_component))
plot_system( Rfinal, box_size )
notebooks/customizing_potentials_cookbook.ipynb
google/jax-md
apache-2.0
Specifying parameters Dynamic parameters In the above examples, the strategy is to create a function energy_fn that takes a set of positions and calculates the energy of the system with all the parameters (e.g. D0, alpha, etc.) baked in. However, JAX MD allows you to override these baked-in values dynamically, i.e. when energy_fn is called. For example, we can print out the minimized energy and force of the above system with the truncated potential:
print(energy_fn(Rfinal))
print(-grad(energy_fn)(Rfinal)[:5])
notebooks/customizing_potentials_cookbook.ipynb
google/jax-md
apache-2.0
This uses the baked-in values of the 4 parameters: D0=5.0,alpha=10.0,r0=1.0,k=500.0. If, for example, we want to dynamically turn off the attractive part of the potential, we simply pass D0=0 to energy_fn:
print(energy_fn(Rfinal, D0=0))
notebooks/customizing_potentials_cookbook.ipynb
google/jax-md
apache-2.0
Since changing the potential moves the minimum, the force will not be zero:
print(-grad(energy_fn)(Rfinal, D0=0)[:5])
notebooks/customizing_potentials_cookbook.ipynb
google/jax-md
apache-2.0
This ability to dynamically pass parameters is very powerful. For example, if you want to shrink particles each step during a simulation, you can simply specify a different r0 each step. This is demonstrated below, where we run a Brownian dynamics simulation at zero temperature with continuously decreasing r0. The details of simulate.brownian are beyond the scope of this cookbook, but the idea is that we pass a new value of r0 to the function apply each time it is called. The function apply takes a step of the simulation, and internally it passes any extra parameters like r0 to energy_fn.
def run_brownian(energy_fn, R_init, shift, key, num_steps):
    init, apply = simulate.brownian(energy_fn, shift, dt=0.00001, kT=0.0, gamma=0.1)
    apply = jit(apply)

    # Define how r0 changes for each step
    r0_initial = 1.0
    r0_final = .5
    def get_r0(t):
        return r0_final + (r0_initial-r0_final)*(num_steps-t)/num_steps

    @jit
    def scan_fn(state, t):
        # Dynamically pass r0 to apply, which passes it on to energy_fn
        return apply(state, r0=get_r0(t)), 0

    key, split = random.split(key)
    state = init(split, R_init)
    state, _ = lax.scan(scan_fn, state, np.arange(num_steps))

    return state.position, np.amax(np.abs(-grad(energy_fn)(state.position)))
notebooks/customizing_potentials_cookbook.ipynb
google/jax-md
apache-2.0
If we use the previous result as the starting point for the Brownian Dynamics simulation, we find exactly what we would expect, the system contracts into a finite cluster, held together by the attractive part of the potential.
key, split = random.split(key)
Rfinal2, max_force_component = run_brownian(energy_fn, Rfinal, shift, split, num_steps=6000)
plot_system( Rfinal2, box_size )
notebooks/customizing_potentials_cookbook.ipynb
google/jax-md
apache-2.0
Particle-specific parameters Our example potential has 4 parameters: D0, alpha, r0, and k. The usual way to pass these parameters is as a scalar (e.g. D0=5.0), in which case that parameter is fixed for every particle pair. However, Python broadcasting allows for these parameters to be specified separately for every different particle pair by passing an $(N,N)$ array rather than a scalar. As an example, let's do this for the parameter r0, which is an effective way of generating a system with continuous polydispersity in particle size. Note that the polydispersity disrupts the crystalline order after minimization.
# Draw the radii from a uniform distribution
key, split = random.split(key)
radii = random.uniform(split, (N,), minval=1.0, maxval=2.0, dtype=f64)

# Rescale to match the initial volume fraction
radii = np.array([radii * np.sqrt(N/(4.*np.dot(radii,radii)))])

# Turn this into a matrix of sums
r0_matrix = radii+radii.transpose()

# Create the energy function using r0_matrix
energy_fn = harmonic_morse_pair(displacement, D0=5.0, alpha=10.0, r0=r0_matrix, k=500.0)

# Minimize the energy using the FIRE algorithm
Rfinal, max_force_component = run_minimization(energy_fn, R, shift)
print('largest component of force after minimization = {}'.format(max_force_component))
plot_system( Rfinal, box_size )
notebooks/customizing_potentials_cookbook.ipynb
google/jax-md
apache-2.0
In addition to standard Python broadcasting, JAX MD allows for the special case of additive parameters. If a parameter is passed as a (N,) array p_vector, JAX MD will convert this into a (N,N) array p_matrix where p_matrix[i,j] = 0.5 (p_vector[i] + p_vector[j]). This is a JAX MD specific ability and not a feature of Python broadcasting. As it turns out, our above polydisperse example falls into this category. Therefore, we could achieve the same result by passing r0=2.0*radii.
# Create the energy function using the radii array
energy_fn = harmonic_morse_pair(displacement, D0=5.0, alpha=10.0, r0=2.*radii, k=500.0)

# Minimize the energy using the FIRE algorithm
Rfinal, max_force_component = run_minimization(energy_fn, R, shift)
print('largest component of force after minimization = {}'.format(max_force_component))
plot_system( Rfinal, box_size )
notebooks/customizing_potentials_cookbook.ipynb
google/jax-md
apache-2.0
Species It is often important to specify parameters differently for different particle pairs, but doing so with full ($N$,$N$) matrices is both inefficient and obnoxious. JAX MD allows users to create species, i.e. $N_s$ groups of particles that are identical to each other, so that parameters can be passed as much smaller ($N_s$,$N_s$) matrices. First, create an array that specifies which particles belong in which species. We will divide our system into two species.
N_0 = N // 2       # Half the particles in species 0
N_1 = N - N_0      # The rest in species 1
species = np.array([0] * N_0 + [1] * N_1, dtype=np.int32)
print(species)
notebooks/customizing_potentials_cookbook.ipynb
google/jax-md
apache-2.0
Next, create the $(2,2)$ matrix of r0's, which are set so that the overall volume fraction matches our monodisperse case.
rsmall=0.41099747  # Match the total volume fraction
rlarge=1.4*rsmall
r0_species_matrix = np.array([[2*rsmall, rsmall+rlarge],
                              [rsmall+rlarge, 2*rlarge]])
print(r0_species_matrix)

energy_fn = harmonic_morse_pair(displacement, species=species, D0=5.0, alpha=10.0,
                                r0=r0_species_matrix, k=500.0)

Rfinal, max_force_component = run_minimization(energy_fn, R, shift)
print('largest component of force after minimization = {}'.format(max_force_component))
plot_system(Rfinal, box_size, species=species )
notebooks/customizing_potentials_cookbook.ipynb
google/jax-md
apache-2.0
Dynamic Species Just like standard parameters, the species list can be passed dynamically as well. However, unlike standard parameters, you have to tell smap.pair that the species will be specified dynamically. To do this, set species to the total number of particle types (here species=2) when creating your energy function. The following sets up an energy function where the attractive part of the interaction only exists between members of the first species, but where the species will be defined dynamically.
D0_species_matrix = np.array([[5.0, 0.0],
                              [0.0, 0.0]])
energy_fn = harmonic_morse_pair(displacement, species=2, D0=D0_species_matrix,
                                alpha=10.0, r0=0.5, k=500.0)
notebooks/customizing_potentials_cookbook.ipynb
google/jax-md
apache-2.0
Now we set up a finite temperature Brownian Dynamics simulation where, at every step, particles on the left half of the simulation box are assigned to species 0, while particles on the right half are assigned to species 1.
def run_brownian(energy_fn, R_init, shift, key, num_steps):
    init, apply = simulate.brownian(energy_fn, shift, dt=0.00001, kT=1.0, gamma=0.1)
    # apply = jit(apply)

    # Define a function to recalculate the species each step
    def get_species(R):
        return np.where(R[:,0] < box_size / 2, 0, 1)

    @jit
    def scan_fn(state, t):
        # Recalculate the species list
        species = get_species(state.position)
        # Dynamically pass species to apply, which passes it on to energy_fn
        return apply(state, species=species, species_count=2), 0

    key, split = random.split(key)
    state = init(split, R_init)
    state, _ = lax.scan(scan_fn, state, np.arange(num_steps))

    return state.position, np.amax(np.abs(-grad(energy_fn)(
        state.position, species=get_species(state.position), species_count=2)))
notebooks/customizing_potentials_cookbook.ipynb
google/jax-md
apache-2.0
When we run this, we see that particles on the left side form clusters while particles on the right side do not.
key, split = random.split(key)
Rfinal, max_force_component = run_brownian(energy_fn, R, shift, split, num_steps=10000)
plot_system( Rfinal, box_size )
notebooks/customizing_potentials_cookbook.ipynb
google/jax-md
apache-2.0
Efficiently calculating neighbors The most computationally expensive part of most MD programs is calculating the force between all pairs of particles. Generically, this scales with $N^2$. However, for systems with isotropic pairwise interactions that are strictly zero beyond a cutoff, there are techniques to dramatically improve the efficiency. The two most common methods are cell lists and neighbor lists. Cell lists The technique here is to divide space into small cells that are just larger than the largest interaction range in the system. Thus, if particle $i$ is in cell $c_i$ and particle $j$ is in cell $c_j$, $i$ and $j$ can only interact if $c_i$ and $c_j$ are neighboring cells. Rather than searching all $N^2$ combinations of particle pairs for non-zero interactions, you only have to search the particles in the neighboring cells. Neighbor lists Here, for each particle $i$, we make a list of potential neighbors: particles $j$ that are within some threshold distance $r_\mathrm{threshold}$. If $r_\mathrm{threshold} = r_\mathrm{cutoff} + \Delta r_\mathrm{threshold}$ (where $r_\mathrm{cutoff}$ is the largest interaction range in the system and $\Delta r_\mathrm{threshold}$ is an appropriately chosen buffer size), then all interacting particles will appear in this list as long as no particle moves by more than $\Delta r_\mathrm{threshold}/2$. There is a tradeoff here: smaller $\Delta r_\mathrm{threshold}$ means fewer particles to search over each MD step but the list must be recalculated more often, while larger $\Delta r_\mathrm{threshold}$ means slower force calculations but less frequent neighbor list recalculations. In practice, the most efficient technique is often to use cell lists to calculate neighbor lists. In JAX MD, this occurs under the hood, and so only calls to neighbor-list functionality are necessary.
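The cell-assignment step behind a cell list can be sketched in a few lines. This is a plain NumPy illustration (imported as onp), not JAX MD's internal implementation, using the same box_size and r_cutoff values as this section:

```python
import numpy as onp  # plain NumPy; illustrative only

box_size = 6.8
r_cutoff = 2.0
# Choose the number of cells per side so that each cell is at least
# r_cutoff wide; then interacting pairs must lie in the same or adjacent cells.
n_cells = int(box_size // r_cutoff)   # 3 cells per side
cell_size = box_size / n_cells        # ~2.27 >= r_cutoff

R = onp.array([[0.5, 0.5],
               [3.1, 4.2]])
# Integer cell coordinates for each particle.
cell_index = (R // cell_size).astype(int)
```

Searching only the 3^d neighboring cells of each particle, instead of all N particles, is what reduces the pair search from O(N^2) toward O(N).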
To implement neighbor lists, we need two functions: 1) a function to create and update the neighbor list, and 2) an energy function that uses a neighbor list rather than operating on all particle pairs. We create these functions with partition.neighbor_list and smap.pair_neighbor_list, respectively. partition.neighbor_list takes basic box information as well as the maximum interaction range r_cutoff and the buffer size dr_threshold.
def harmonic_morse_cutoff_neighbor_list(displacement_or_metric,
                                        box_size,
                                        species=None,
                                        D0=5.0,
                                        alpha=5.0,
                                        r0=1.0,
                                        k=50.0,
                                        r_onset=1.0,
                                        r_cutoff=1.5,
                                        dr_threshold=2.0,
                                        format=partition.OrderedSparse,
                                        **kwargs):
  D0 = np.array(D0, dtype=np.float32)
  alpha = np.array(alpha, dtype=np.float32)
  r0 = np.array(r0, dtype=np.float32)
  k = np.array(k, dtype=np.float32)
  r_onset = np.array(r_onset, dtype=np.float32)
  r_cutoff = np.array(r_cutoff, dtype=np.float32)
  dr_threshold = np.float32(dr_threshold)

  neighbor_fn = partition.neighbor_list(displacement_or_metric,
                                        box_size,
                                        r_cutoff,
                                        dr_threshold,
                                        format=format)
  energy_fn = smap.pair_neighbor_list(
      energy.multiplicative_isotropic_cutoff(harmonic_morse, r_onset, r_cutoff),
      space.canonicalize_displacement_or_metric(displacement_or_metric),
      species=species,
      D0=D0,
      alpha=alpha,
      r0=r0,
      k=k)

  return neighbor_fn, energy_fn
To test this, we generate our new neighbor_fn and energy_fn, as well as a comparison energy function using the default approach.
r_onset = 1.5
r_cutoff = 2.0
dr_threshold = 1.0

neighbor_fn, energy_fn = harmonic_morse_cutoff_neighbor_list(
    displacement, box_size,
    D0=5.0, alpha=10.0, r0=1.0, k=500.0,
    r_onset=r_onset, r_cutoff=r_cutoff, dr_threshold=dr_threshold)

energy_fn_comparison = harmonic_morse_cutoff_pair(
    displacement,
    D0=5.0, alpha=10.0, r0=1.0, k=500.0,
    r_onset=r_onset, r_cutoff=r_cutoff)
Next, we use neighbor_fn.allocate and the current set of positions to populate the neighbor list.
nbrs = neighbor_fn.allocate(R)
To calculate the energy, we pass nbrs to energy_fn. The energy matches the comparison.
print(energy_fn(R, neighbor=nbrs))
print(energy_fn_comparison(R))
Note that by default neighbor_fn uses a cell list internally to populate the neighbor list. This approach fails when the box size in any dimension is less than 3 times $r_\mathrm{threshold} = r_\mathrm{cutoff} + \Delta r_\mathrm{threshold}$. In this case, neighbor_fn automatically turns off the use of cell lists and instead searches over all particle pairs. This can also be done manually by passing disable_cell_list=True to partition.neighbor_list, which can be useful for debugging or for small systems where the overhead of cell lists outweighs the benefit.

Updating neighbor lists

The neighbor list can be built in two different ways. When created as above, i.e. nbrs = neighbor_fn.allocate(R), a new neighbor list is generated from scratch. Internally, JAX MD uses the given positions R to estimate a maximum capacity, i.e. the maximum number of neighbors any particle will have at any point during the use of the neighbor list. This estimate can be adjusted by passing a value of capacity_multiplier to partition.neighbor_list, which defaults to capacity_multiplier=1.25. Since the maximum capacity is not known ahead of time, this construction of the neighbor list cannot be compiled. However, once a neighbor list has been allocated in this way, repopulating the list with the same maximum capacity is a simpler operation that can be compiled. This is done by calling nbrs = nbrs.update(R). Internally, this checks whether any particle has moved more than $\Delta r_\mathrm{threshold}/2$ and, if so, recomputes the neighbor list. If the new neighbor list exceeds the maximum capacity for any particle, the boolean variable nbrs.did_buffer_overflow is set to True. These two uses together allow for safe and efficient neighbor list calculations. The example below demonstrates a typical simulation loop that uses neighbor lists.
def run_brownian_neighbor_list(energy_fn, neighbor_fn, R_init, shift, key, num_steps):
  nbrs = neighbor_fn.allocate(R_init)
  init, apply = simulate.brownian(energy_fn, shift, dt=0.00001, kT=1.0, gamma=0.1)

  def body_fn(state, t):
    state, nbrs = state
    nbrs = nbrs.update(state.position)
    state = apply(state, neighbor=nbrs)
    return (state, nbrs), 0

  key, split = random.split(key)
  state = init(split, R_init)

  step = 0
  step_inc = 100
  while step < num_steps / step_inc:
    rtn_state, _ = lax.scan(body_fn, (state, nbrs), np.arange(step_inc))
    new_state, nbrs = rtn_state
    # If the neighbor list overflowed, rebuild it and repeat part of
    # the simulation.
    if nbrs.did_buffer_overflow:
      print('Buffer overflow.')
      nbrs = neighbor_fn.allocate(state.position)
    else:
      state = new_state
      step += 1

  return state.position
To run this, we consider a much larger system than we have used up to this point. Warning: running this may take a few minutes.
Nlarge = 100 * N
box_size_large = 10 * box_size
displacement_large, shift_large = setup_periodic_box(box_size_large)

key, split1, split2 = random.split(key, 3)
Rlarge = random.uniform(split1, (Nlarge, dimension),
                        minval=0.0, maxval=box_size_large, dtype=f64)

dr_threshold = 1.5
neighbor_fn, energy_fn = harmonic_morse_cutoff_neighbor_list(
    displacement_large, box_size_large,
    D0=5.0, alpha=10.0, r0=1.0, k=500.0,
    r_onset=r_onset, r_cutoff=r_cutoff, dr_threshold=dr_threshold)
energy_fn = jit(energy_fn)

start_time = time.process_time()
Rfinal = run_brownian_neighbor_list(energy_fn, neighbor_fn, Rlarge,
                                    shift_large, split2, num_steps=4000)
end_time = time.process_time()
print('run time = {}'.format(end_time - start_time))

plot_system(Rfinal, box_size_large, ms=2)
Bonds

Bonds are a way of specifying potentials between specific pairs of particles that are "on" regardless of separation. For example, it is common to employ a two-sided spring potential between specific particle pairs, but JAX MD allows the user to specify arbitrary potentials with static or dynamic parameters.

Create and implement a bond potential

We start by creating a custom potential that corresponds to a bistable spring, taking the form \begin{equation} V(r) = a_4(r-r_0)^4 - a_2(r-r_0)^2. \end{equation} $V(r)$ has two minima, at $r = r_0 \pm \sqrt{\frac{a_2}{2a_4}}$.
def bistable_spring(dr, r0=1.0, a2=2, a4=5, **kwargs):
  return a4 * (dr - r0)**4 - a2 * (dr - r0)**2
Plot $V(r)$
drs = np.arange(0, 2, 0.01)
U = bistable_spring(drs)

plt.plot(drs, U)
format_plot(r'$r$', r'$V(r)$')
finalize_plot()
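The location of the two minima can also be checked numerically against the formula $r = r_0 \pm \sqrt{a_2/(2a_4)}$. This is a quick sanity-check sketch in plain NumPy; the potential is redefined locally so the snippet stands alone:

```python
import numpy as np

# Mirrors the bistable_spring potential defined above.
def bistable_spring_local(dr, r0=1.0, a2=2.0, a4=5.0):
    return a4 * (dr - r0)**4 - a2 * (dr - r0)**2

r0, a2, a4 = 1.0, 2.0, 5.0
offset = np.sqrt(a2 / (2 * a4))              # minima at r0 +/- offset
r_minima = np.array([r0 - offset, r0 + offset])

# Setting d = r - r0 and substituting d^2 = a2/(2 a4) gives the common
# depth V_min = -a2^2 / (4 a4) at both minima (-0.2 for these parameters).
depth = -a2**2 / (4 * a4)
```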
The next step is to promote this function to act on a set of bonds. This is done via smap.bond, which takes our bistable_spring function, our displacement function, and a list of the bonds. It returns a function that calculates the energy for a given set of positions.
def bistable_spring_bond(displacement_or_metric,
                         bond, bond_type=None, r0=1, a2=2, a4=5):
  """Convenience wrapper to compute energy of particles bonded by springs."""
  r0 = np.array(r0, f32)
  a2 = np.array(a2, f32)
  a4 = np.array(a4, f32)
  return smap.bond(
      bistable_spring,
      space.canonicalize_displacement_or_metric(displacement_or_metric),
      bond,
      bond_type,
      r0=r0,
      a2=a2,
      a4=a4)
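Conceptually, smap.bond gathers the two endpoints of each listed bond, evaluates the pair potential on each bond length, and sums the result. A plain-NumPy sketch of that reduction (illustrative only: it ignores periodic boundary conditions and uses hypothetical example positions) looks like:

```python
import numpy as np

# Same functional form as bistable_spring above.
def bistable_spring_np(dr, r0=1.0, a2=2.0, a4=5.0):
    return a4 * (dr - r0)**4 - a2 * (dr - r0)**2

def bond_energy(positions, bonds):
    # Gather the endpoints of each (i, j) bond and sum the pair potential
    # over the resulting bond lengths.
    ri = positions[bonds[:, 0]]
    rj = positions[bonds[:, 1]]
    dr = np.linalg.norm(ri - rj, axis=1)
    return np.sum(bistable_spring_np(dr))

positions = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0], [0.0, 3.4]])
bonds = np.array([[0, 1], [2, 3]])           # bond lengths 1.0 and 1.4
total = bond_energy(positions, bonds)
```

Unlike a neighbor list, the bond list is static: the 0-1 and 2-3 interactions are evaluated no matter how far apart those particles drift.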