id | idx | paragraph |
---|---|---|
63f39b8ffcfb27a31f1dc775 | 15 | The electrochemical measurements were carried out with a PalmSens Multi EmStat3+ potentiostat, whereas the PEC measurements were carried out with an Ivium CompactStat potentiostat and a Newport Oriel 67005 solar light simulator equipped with an Air Mass 1.5 Global (AM 1.5G) solar filter. All experiments were carried out in a two-compartment H-cell separated by a bipolar membrane in reverse bias. For the electrochemical experiments in the aqueous system, the catholyte typically consisted of the capture solution (resulting from CO2 capture from pure CO2 or flue gas or air as previously described) with added 0.1 M K2SO4, and the solution was purged with N2 containing 2% CH4 as an internal standard for 15 min. The anolyte consisted of 0.1 M K2SO4 purged with N2 containing 2% CH4 for 15 min, with the working, counter and reference electrodes being CoPcNH2@MWCNT, Pt mesh, and an Ag/AgCl (sat. NaCl) electrode, respectively. For the glycolic capture solutions, the catholyte typically consisted of the capture solution with added 20% v/v MeCN co-solvent and 0.15 M TBABF4 supporting electrolyte, whereas the anolyte was 0.6 M NaOH, 0.15 M TBABF4 in 20% v/v MeCN in EG solvent mixture (both catholyte and anolyte were purged with N2 containing 2% CH4 for 15 min prior to experiment), and the working, counter and reference electrodes were CoPcNH2@MWCNT, Ni foam|Cu26Pd74 and Ag/AgNO3 (in 0.1 M n-Bu4NPF6 in MeCN) electrode, respectively. The potentials recorded with the Ag/AgCl (sat. NaCl) and Ag/AgNO3 (in 0.1 M n-Bu4NPF6 in MeCN) reference electrodes were converted to the RHE and Fc/Fc + scales, respectively, as per the following equations: |
63f39b8ffcfb27a31f1dc775 | 16 | The E1/2 of the Fc/Fc + couple was determined as 0.41 V vs. Ag/AgNO3 (in 0.1 M n-Bu4NPF6 in MeCN) by cyclic voltammetry (50 mV s -1 ) in a single compartment three-electrode configuration with glassy carbon as the working electrode, Pt mesh as the counter electrode and Ag/AgNO3 (in 0.1 M n-Bu4NPF6 in MeCN) as the reference electrode, using an electrolyte containing 5 mM ferrocene and 0.15 M TBABF4 in 20% v/v MeCN in EG media. The dark CV scans for the Ni foam|Cu26Pd74 were taken in a three-electrode, two-compartment setup with Pt mesh as the counter electrode and Ag/AgCl (sat. NaCl) or Ag/AgNO3 (in 0.1 M n-Bu4NPF6 in MeCN) as the reference electrode for aqueous and glycolic systems, respectively (scan rate 10 mV s -1 ). Similarly, the CV scans of the assembled photocathode (PVK|CoPcNH2@MWCNT) were recorded in a three-electrode system in both aqueous TEA-captured and glycolic NaOH-captured CO2 media under chopped (5 s on, 5 s off), continuous or no simulated solar illumination (1 sun), with a Ni foam|Cu26Pd74 counter electrode, and Ag/AgCl (sat. NaCl) or Ag/AgNO3 (in 0.1 M n-Bu4NPF6 in MeCN) as the reference electrode for aqueous and glycolic systems, respectively, at a scan rate of 10 mV s -1 . The Newport Oriel 67005 solar light simulator was calibrated to 1 sun (100 mW cm -2 ) using a Newport light meter before each experiment. The working potentials of both the cathode and the anode under solar illumination (without any external voltage) were determined from CV overlaps. |
63f39b8ffcfb27a31f1dc775 | 17 | The two-electrode two-compartment PEC experiments were carried out with PVK|CoPcNH2@MWCNT photocathode and Ni foam|Cu26Pd74 as the dark anode, where captured CO2 conversion to syngas and EG oxidation to GA were performed simultaneously under 1 sun solar irradiation without applying any external voltage. The catholytes were capture solution + 0.1 M K2SO4 (aqueous medium) or capture solution + 20% v/v MeCN + 0.15 M TBABF4 (organic medium), and the anolytes were 0.5 M KOH, 0.5 EG in water (aqueous medium) or 0.6 M NaOH, 0.15 M TBABF4 in 80/20 v/v EG/MeCN solvent mixture (organic medium). CV scans in this two-electrode configuration were recorded at 10 mV s -1 scan rate to observe the photoelectrochemical response under chopped, continuous, and no light irradiation. The PEC experiments were carried out under chopped irradiation (50 min on, 10 min off) at no applied voltage for a certain time period, and the obtained photocurrents were normalized to the perovskite active area. Product analysis and quantification was carried out after each PEC experiment. |
63f39b8ffcfb27a31f1dc775 | 18 | For the isotopic labelling experiments with 13 CO2, the 13 CO2 capture was performed by stirring the 1 M TEA solution in water or 1 M NaOH solution in EG for 2 h under a CO2 atmosphere. The obtained solutions were purged with N2 for 15 min to remove physically dissolved CO2 and were then used for electrochemical reduction as per the standard electrochemical conditions. The product gas mixture generated at the cathode after 2 h (at -0.7 V vs. RHE, TEA/H2O system) or 20 h (at -1.85 V vs. Fc/Fc + , NaOH/EG system) of CPE was analysed by IR spectroscopy to determine the isotopic abundance in the product CO. |
63f39b8ffcfb27a31f1dc775 | 19 | For the experiment with real-world waste PET plastic derived EG, a commercial sparkling water PET bottle (Highland Spring, sourced from Sainsbury's UK) was subjected to alkaline pre-treatment. The bottle was cut into small pieces, dipped in liquid N2 and then pulverized in a grinder. 1 M aqueous KOH was then added (PET concentration 50 mg mL -1 ) and the solution was heated at 80 °C for 5 days under continuous stirring. The solution was then filtered to remove PET fragments and the clear solution was directly used as the anolyte for the PEC experiments. The concentration of EG after pre-treatment was determined as 10.3 ± 2.8 mg mL -1 (~0.2 M) by HPLC. |
63f39b8ffcfb27a31f1dc775 | 20 | The gases H2 and CO produced at the cathode were detected and quantified using a Shimadzu GC-2010 Plus gas chromatograph with ultrapure helium as the carrier gas. The 2% CH4 in the purging N2 gas after CO2 capture was used as an internal standard for quantification. Gaseous aliquots were taken from the headspace and were analysed by manual injection into the GC. The liquid aliquots from both cathodic and anodic compartments were taken after the experiments, and then analysed by 1 H NMR spectroscopy or HPLC. A Waters Breeze HPLC system equipped with a Phenomenex Rezex 8% H + column and refractive index (RIS-2414) and diode array UV-Vis (254 nm) detectors was used for the oxidation product quantification. The faradaic efficiencies (FE) of products were determined from equation 7, |
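The display equation referenced here (equation 7) appears to have been lost during extraction. A plausible reconstruction, assuming the standard faradaic efficiency expression and the symbol definitions given in the next paragraph, is:

$$\mathrm{FE}\,(\%) = \frac{Z \cdot n \cdot F}{Q_{\mathrm{passed}}} \times 100$$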
63f39b8ffcfb27a31f1dc775 | 21 | where Z is the number of electrons required for the respective product formation, n is the number of moles of product formed, F is the Faraday constant (96485 C mol -1 ) and Qpassed is the total amount of charge passed during the experiment. The turnover number (TON) of the molecular catalyst was calculated as per the following equation: |
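The TON equation is likewise missing from the extracted text. Assuming the conventional definition (moles of product formed per mole of molecular catalyst), it would read:

$$\mathrm{TON} = \frac{n_{\mathrm{product}}}{n_{\mathrm{catalyst}}}$$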
63f39b8ffcfb27a31f1dc775 | 22 | For isotopic labelling experiments with 13 CO2, the isotopic abundance of the generated product CO was recorded on a Thermo Scientific Nicolet iS50 IR spectrometer in gas-phase transmission mode. The generated gas mixture after the experiment was transferred from the cathode headspace by vacuum extraction to an IR cell equipped with KBr windows and a 10 cm path length. |
627bddd544bdd532395fb4b5 | 0 | The "unreasonable effectiveness" of deep learning in domains such as computer vision (CV) and natural language processing (NLP) relies on the ability of deep neural networks (DNNs) to leverage ever-increasing amounts of compute, data, and model capacity. Large-scale models, including Bidirectional Encoder Representations from Transformers (BERT) and DALL-E , have been so successful at synthesizing information from large datasets via self-supervised pre-training and performing a variety of downstream tasks with little to no finetuning that most state-of-the-art (SOTA) models in NLP and CV are adapted from a small set of large, pre-trained models . Naturally, we might expect that massive model and dataset scaling will be a prerequisite to achieving out-sized success for deep learning in science. Recent work such as AlphaFold , the Open Catalyst Project , and ChemBERTa indicates that larger datasets and models, pre-training, and self-supervised learning - all key ingredients in CV and NLP - unlock new capabilities for deep learning in chemistry. However, unlike in CV and NLP, the path to scaling deep chemical networks and the potential benefits are unclear. Chemical deep learning can incorporate physics-based priors that may ameliorate the steep resource requirements seen in other fields . Moreover, because of the heterogeneity and complexity of chemical space and molecular machine learning (ML) tasks , training general and robust models that perform well on a wide variety of downstream tasks remains a pressing challenge . The enormity of chemical space and the heterogeneity of these tasks motivate investigations of large-scale models in chemistry, because such models are well-suited to unlabeled, multi-modal datasets . Recently, neural scaling laws have emerged as a way to characterize the striking trends of improved model performance over many orders of magnitude with respect to model size, dataset size, and compute; however, these experiments require immense computational resources and rely on well-known, domain-specific model training procedures that do not apply outside of traditional deep learning application areas. |
627bddd544bdd532395fb4b5 | 1 | With the inordinate costs of developing and deploying large models , it is difficult to investigate neural scaling behaviors of scientific deep learning models, which require expensive hyperparameter optimization (HPO) and experimentation. Architectures and hyperparameters that work well for small models and small datasets do not transfer to larger scales . This presents a risk that scientific deep learning will become increasingly inaccessible as resource demands increase. Techniques for accelerating neural architecture search (NAS) and hyperparameter transfer such as training speed estimation (TSE) and µTransfer could accelerate the development of large-scale scientific deep learning models, where rapid advances in architecture design and complex data manifolds prevent the easy transfer of parameters and settings used in CV and NLP. To investigate the capabilities of deep chemical models across resource scales, practical and principled approaches are needed to accelerate hyperparameter transfer and characterize neural scaling. |
627bddd544bdd532395fb4b5 | 2 | In this paper, we develop strategies for scaling deep chemical models and investigate neural scaling behavior in large language models for generative chemical modeling and graph neural networks (GNNs) for machine-learned interatomic potentials. We introduce ChemGPT, a generative pre-trained transformer for autoregressive language modeling of small molecules. We train ChemGPT models with over one billion parameters, using datasets of up to ten million unique molecules. We also examine large, invariant and equivariant GNNs trained on trajectories from molecular dynamics and investigate how physics-based priors affect scaling behavior. To overcome the challenges of hyperparameter tuning at scale in new domains, we extend techniques for accelerating neural architecture search to reduce total time and compute budgets by up to 90% during HPO and neural architecture selection. We identify trends in chemical model scaling with respect to model capacity and dataset size, and show the performance improvements seen with increasing scale. We demonstrate the capability to tune ChemGPT's outputs via "prompt engineering" and sampling strategies. Pre-trained ChemGPT models are also robust, self-supervised representation learners that generalize to previously unseen regions of chemical space and enable embedding-based nearest-neighbor search. The scaling strategies and results enable immediate, significant improvements to model performance, as well as computational and data efficiency for deep chemical models. Our results provide motivation and practical guidance for scaling studies in scientific deep learning, as well as many fruitful new research directions at the intersection of massive scale and physics-informed deep learning. |
627bddd544bdd532395fb4b5 | 3 | In this section we describe the aspects of the workflow developed in this paper, summarized graphically in Figure . We define neural scaling and the model architectures considered here, which are chosen specifically for their likelihood to exhibit interesting scaling behavior. Then we introduce strategies to enable scaling large chemical models and investigations of scaling behavior. |
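The power-law expression that the next paragraph refers to appears to have been dropped during extraction. A plausible reconstruction, assuming the standard neural-scaling form and a convention consistent with the positive exponent value quoted later in the text (β = 0.17), is:

$$L(R) \approx \alpha\, R^{-\beta}$$

where L is the quantity (e.g., loss) being scaled; the authors' exact convention may differ.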
627bddd544bdd532395fb4b5 | 4 | for coefficient α, scaling exponent β, and resource R. R is the number of model parameters, dataset size, or compute. β measures the slope of the power-law and indicates the scaling efficiency of the model with respect to a scaling factor, R. The power-law trends break down in "resolution-limited" regimes , indicating that the model (dataset) size is insufficient for the given amount of data (model parameters). |
627bddd544bdd532395fb4b5 | 5 | Neural scaling presents a best-case scenario for model performance with increasing resources, and allows for optimal allocation of fixed budgets, e.g., to decide whether longer training, more data, or larger models will be most efficient for improving performance. Comparing neural scaling exponents also provides a fundamental metric for measuring resource efficiency across model architectures. Investigations into neural scaling in the NLP domain have revealed general conclusions about overfitting, sensitivity to architectural choices, transfer learning, and sample efficiency . These factors are equally or more important in scientific deep learning applications, where rapid advances are being made in specialized architecture development, and it is often unclear how architectures will perform beyond the small benchmark datasets that are commonly available in scientific settings. |
627bddd544bdd532395fb4b5 | 6 | Large chemical language models. Strings are a simple representation for molecular graphs , thereby making sequence-based ML models a natural choice for working with chemical data. Following the demonstrated performance improvements of Transformer-based models with increasing model and dataset sizes , we designed a large generative language model for chemistry called ChemGPT to investigate the impact of dataset and model size on pre-training loss. ChemGPT is a Generative Pre-trained Transformer 3 (GPT3)-style model based on GPT-Neo with a tokenizer for Self-referencing embedded strings (SELFIES) representations of molecules. SELFIES enforce chemical validity and are straightforward to tokenize, but ChemGPT can easily be used with simplified molecular-input line-entry system (SMILES) strings as well . For chemical language modeling, a set of molecules (x_1, x_2, ..., x_n) is represented with each molecule as a sequence of symbols (s_1, s_2, ..., s_n). The probability of a sequence, p(x), is factorized as the product of conditional probabilities: |
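The factorization itself (equation 2 in the source) does not appear in the extracted text; the standard autoregressive form implied by the surrounding description is:

$$p(x) = \prod_{i=1}^{n} p\left(s_i \mid s_1, \ldots, s_{i-1}\right) \qquad (2)$$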
627bddd544bdd532395fb4b5 | 7 | ChemGPT uses the Transformer architecture with a self-attention mechanism to compute conditional probabilities, estimate p(x), and sample from it to generate new molecules. ChemGPT is pre-trained on molecules from PubChem with a causal language modeling task, where the model must predict the next token in a sequence, given the previous tokens. ChemGPT models of up to one billion non-embedding parameters are trained on up to ten million molecules, whereas typical chemical generative models have less than one million parameters and are trained on less than one million samples . |
627bddd544bdd532395fb4b5 | 8 | Graph neural network force fields (GNNFFs). For many tasks in chemistry, molecular geometry and 3D structure is essential and string-based representations are not sufficient. Neural force fields (NFFs) are graph neural networks (GNNs) that take molecular geometries as inputs, described by a set of atomic numbers (Z_1, ..., Z_n | Z_i ∈ N) and Cartesian coordinates (r_1, ..., r_n | r_i ∈ R^3). The NFF with parameters θ, f_θ, predicts a real-valued energy Ê = f_θ(X) for an atomistic configuration X. The NFF produces energy-conserving atomic forces by differentiating the energies with respect to the atomic coordinates, |
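The force equation referenced at the end of the preceding paragraph is not shown in the extracted text; the standard energy-conserving form implied by the description is:

$$\vec{F}_i = -\frac{\partial \hat{E}}{\partial \vec{r}_i} = -\nabla_{\vec{r}_i} f_{\theta}(X)$$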
627bddd544bdd532395fb4b5 | 9 | In this work we consider four flavors of NFFs: SchNet , PaiNN , Allegro , and SpookyNet . This series of models represents increasingly physics-informed model architectures, from models with internal layers that manipulate only E(3) invariant quantities (SchNet) to those that use E(3) equivariant quantities (PaiNN, Allegro, SpookyNet), strictly local models with learned many-body functions and no message passing (Allegro), and models physically informed via empirical corrections (SpookyNet). The power and expressivity of these GNNs can be defined in terms of their capacity, |
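The capacity expression referred to here is missing from the extracted text; based on the definitions in the next paragraph and the "depth * width" phrasing used later, it is presumably:

$$\text{capacity} = d \times w$$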
627bddd544bdd532395fb4b5 | 10 | where d is depth (number of layers or convolutions ) and w is width (the embedding dimension or number of basis functions employed by each convolution). Capacity is a simple parameter to vary during neural scaling experiments, because model size is not a strictly useful scaling parameter for GNNs . Typical evaluations of NFFs consider training dataset sizes of less than 1,000 3D geometries of a single chemical species, which leads to insensitivity to model capacity because of the simplicity of the learning task . |
627bddd544bdd532395fb4b5 | 11 | Accelerating hyperparameter optimization with training performance estimation. Because model hyperparameters, including learning rates and batch sizes, are essential for achieving optimal model performance and are non-transferable between different domains and model/dataset sizes , we need efficient strategies for scalable HPO in deep chemical models. We adapt Training Speed Estimation (TSE) , a simple technique for ranking computer vision architectures during neural architecture searches, to accelerate HPO and model selection for ChemGPT and GNNs. We call this method "Training Performance Estimation" (TPE), as it uses training speed to more generally enable performance estimation across a wide range of applications. TPE generalizes TSE to HPO for new deep learning domains (Large Language Models [LLMs], GNNs) and can be used to directly predict converged loss, in addition to rank ordering different architectures. HPO typically involves training tens or hundreds of networks and using random search and/or Bayesian optimization to identify optimal hyperparameters. For optimal performance, the process must be repeated when considering new datasets or distribution shift. |
627bddd544bdd532395fb4b5 | 12 | By calculating the "training speed" from only the first few epochs of training, the converged model performance is predicted and optimal hyperparameters are identified using only a small fraction of the total training budget. For example, networks that require 100 epochs to train to convergence are trained for only 10-20 epochs, and the final performance is predicted using TPE to identify the best performing networks, thereby saving 80-90% of the total training budget. |
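The TSE definition that the next paragraph refers to ("for a loss function L and a neural network f_θ(t,i) ...") is not shown. A plausible reconstruction, consistent with those symbol definitions and summing the training loss over the first T epochs with B mini-batches per epoch (T and B are labels introduced here for clarity, not taken from the source), is:

$$\mathrm{TSE} = \sum_{t=1}^{T} \frac{1}{B} \sum_{i=1}^{B} L\!\left(f_{\theta(t,i)}(X_i),\, y_i\right)$$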
627bddd544bdd532395fb4b5 | 13 | for a loss function L and a neural network f_θ(t,i), with parameters θ at epoch t and mini-batch i. (X_i, y_i) is a tuple of inputs and labels in the i-th mini-batch. TSE is correlated with the converged performance of the network and can be used to rank networks early in training to yield substantial compute savings. Given a sufficient number of networks (5-10) that are trained to convergence, a linear regression of the form |
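The regression referenced here is presumably the simple linear form implied by the parameters m and b defined in the next paragraph:

$$\hat{L} = m \cdot \mathrm{TSE} + b$$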
627bddd544bdd532395fb4b5 | 14 | is fit with parameters m and b and the calculated TSE values to predict the converged loss, L. This allows predictions of converged network loss for partially-trained networks evaluated during HPO based on their TSE values. In our experiments, we noted that L is monotonic in TSE, meaning that Equation is not needed to simply choose the best hyperparameters. The TSE values computed after a small number of epochs are sufficient for ranking model configurations and finding the optimal ones. Although leveraging Equation requires training some small number of networks to convergence in order to fit the parameters, it provides the benefit of being able to predict the expected performance of new hyperparameter choices. |
627bddd544bdd532395fb4b5 | 15 | Training performance estimation accelerates hyperparameter optimization for new datasets, models, and scales. To conduct extensive scaling experiments, we first need to find reasonable hyperparameters and training settings. Unlike for NLP and CV, there are no default model architectures, datasets, tasks, hyperparameter settings, or training settings for large-scale chemical deep learning. Simply transferring empirical results from other deep learning domains or smaller scale experiments will lead to suboptimal results . Whereas large models and datasets are standard in traditional deep learning application areas, to investigate scaling in deep chemical models, we must lay the groundwork for large-scale experiments. To this end, we first tackle the problem of accelerating HPO in general settings, for new model architectures, heterogeneous datasets, and at scales that have not been previously investigated. |
627bddd544bdd532395fb4b5 | 16 | Figure shows the results of TPE for ChemGPT models trained on two million molecules from the Molecular Sets (MOSES) dataset. MOSES is significantly smaller than PubChem and is representative of datasets on which chemical generative models are typically trained . Here, we use MOSES to demonstrate how optimal settings for a chemical LLM such as ChemGPT can be quickly discovered using TPE. To enable scaling experiments, we are mainly concerned with settings related to the learning dynamics (e.g., batch size and learning rate), which will significantly impact large-scale training and fluctuate depending on the type of model and the characteristics of the dataset. To demonstrate the effectiveness of TPE, we initialize ChemGPT with the default learning rate and batch size for causal language modeling in Hugging Face. We then vary the learning rate and batch size and train models with different hyperparameters for 50 epochs. Figure shows the true loss after 50 epochs versus the predicted loss using TPE after only 10 epochs. R^2 = 0.98 for the linear regression (Equation ), and Spearman's rank correlation ρ = 1.0. |
627bddd544bdd532395fb4b5 | 17 | With only 20% of the total training budget, we are able to identify model configurations that significantly outperform the default settings from Hugging Face. The procedure is easily repeatable for new datasets and enables accelerated HPO. While training procedures for large language models like ChemGPT are well established, scaling NFFs to larger datasets and more expressive models requires new, scalable training procedures . Large-batch training through data parallelism is one method for accelerating training, but there are known limitations and correct batch sizes vary widely for different domains . This problem is particularly acute for NFFs, where each datapoint actually contains 3N + 1 labels for energies and atomic forces, where N is the number of atoms, creating a large effective batch size with large variance within each mini-batch. Hence, it has been observed that small batch sizes (even mini-batches of 1) work well across different NFF architectures . TPE provides a method for quickly evaluating the speed-accuracy trade-off for different combinations of batch size and learning rate, which are interdependent and must be varied together to enable large-batch training. |
627bddd544bdd532395fb4b5 | 18 | TPE performs equally well for GNNs. We repeat the TPE procedure, varying the learning rate and batch size, for SchNet, PaiNN, and SpookyNet, training on 10,000 frames (1,000 frames/molecule) from the revised MD-17 dataset of 10 small organic molecules. Using only 20% of the total training budget, we achieve excellent predictive power (Figure ) with TPE for SchNet and PaiNN. The variance in model performance using the entire training budget is significant, indicating the importance of proper HPO. |
627bddd544bdd532395fb4b5 | 19 | Because SpookyNet is a complex architecture that includes non-local interactions and empirical corrections, it exhibits slow convergence and the training speed is less correlated with the model performance than for SchNet and PaiNN. However, the rank ordering of model configurations for SpookyNet from TPE is still robust (Spearman's ρ = 0.92), which allows for discarding non-optimal model configurations early in training, representing significant computational savings. The goodness-of-fit metrics for linear regressions using TPE are given in Table . |
627bddd544bdd532395fb4b5 | 20 | Neural scaling quantifies the performance improvements of large chemical models with increasing model and dataset sizes. Next, with a strategy in place to efficiently scale up experiments using TPE, we investigate neural scaling in ChemGPT and NFFs. For each model, we perform TPE to identify good hyperparameter choices that are predicted to perform well over a range of model and dataset sizes. Then, we systematically vary the dataset size (d) and model size (m) and perform exhaustive experiments to determine the converged loss, L(m, d). For efficiency and to isolate scaling behavior, we fix hyperparameters from TPE as m and d are varied, but strictly speaking the optimal hyperparameters will change as m and d vary . Due to computational resource limitations, we train ChemGPT models for a fixed number of epochs to determine the loss. |
627bddd544bdd532395fb4b5 | 21 | Figure shows the pre-training loss as a function of model and dataset size over many orders of magnitude. Models are trained in a self-supervised, causal language modeling setting and evaluated on next-token prediction for a fixed validation set. Surprisingly, no limitations in performance improvement are seen with increasing scale. The pre-training loss monotonically improves with increasing dataset size up to nearly 10 million molecules. Furthermore, for a fixed data budget, increasing model size provides monotonic improvements to the pre-training loss until the model reaches 1B+ non-embedding parameters. This indicates that even for small datasets, much larger models than were previously considered for deep generative modeling may be useful for pre-training. For the largest dataset considered here, diminishing returns to performance improvements are seen for models above 100 million non-embedding parameters. Interestingly, greater performance improvements are seen with increasing model sizes for smaller datasets than larger ones. For the largest dataset considered, model performance saturates quickly beyond 100 million parameters. However, for the smallest dataset considered, performance plateaus for model sizes between 10 5 -10 7 parameters and then improves considerably. This indicates that for a fixed, small pre-training data budget, significant improvements are possible simply by scaling up the model size. Irrespective of model size, increasing dataset size provides continuous performance improvements with no evidence of diminishing returns for the dataset sizes considered here. Depending on the dataset size, regimes of power-law-like scaling behavior are seen for different ranges of model sizes. Power-law scaling is graphically identifiable as an approximately straight line fit of loss versus model size on a log-log plot. For larger datasets, power law scaling is observed for smaller model sizes. For example, the largest dataset exhibits approximate power law scaling for models between 10 5 and 10 7 non-embedding parameters (Figure A.1). Conversely, for smaller datasets power law scaling is observed for larger models and over a more limited range of model sizes. The smallest dataset exhibits approximate power law scaling for models between 10 7 and 10 8 nonembedding parameters (not shown). |
627bddd544bdd532395fb4b5 | 22 | The breakdown in power-law scaling is indicative of "resolution-limited" neural scaling , where the model is sufficiently large but the dataset is not, or vice-versa. Identifying these resolution-limited regimes from the neural scaling relations allows us to understand in general terms whether model performance is limited by data availability or model capacity. The scaling exponent β is equal to 0.17. Graph neural network (GNN) interatomic potentials exhibit robust neural scaling behavior. The potential benefits of large-scale GNNs are less clear than for LLMs, as are the relevant parameters to vary, due to the inequivalence of depth and width for GNNs and additional parameters beyond notions of model size that impact performance, e.g., nearest-neighbor cutoff in graph construction. To simplify GNN scaling experiments, here we vary GNN capacity (depth * width) by systematically changing network width and the number of convolutions (depth). We train GNNs to predict atomic forces from the ANI-1x dataset , the largest publicly available dataset of energies and forces for small molecules. NFF models are trained with a learning rate scheduler that reduces the learning rate every 50 epochs without improvement in the validation loss, until the learning rate reaches 10 -7 . The loss is an L1 loss (Equation ), shown in Figure over four orders of magnitude of dataset size. |
627bddd544bdd532395fb4b5 | 23 | The neural scaling results for the equivariant GNN, PaiNN (Figure ), show monotonic improvements to the loss with increasing dataset size. For a fixed dataset size, the converged loss is strongly correlated with the total training time (compute) and model capacity. Other than for 10 4 datapoints (for which some small models reach convergence quickly), the converged loss has a Spearman correlation coefficient ρ ≥ 0.88 with the model capacity and ρ ≥ 0.75 with the total training time. This means that the best models are those with optimal capacity that are able to train the longest without the validation loss plateauing. The optimal capacity and depth versus width change with the dataset size, i.e., the ideal GNN capacity is dataset-size dependent, and these choices can significantly impact the converged loss. These effects may also be artifacts of random initialization that would diminish with repeated trials. Interestingly, there is a stark change at 10 4 datapoints - the converged loss is then nearly perfectly rank correlated with model capacity (Spearman's ρ ≥ 0.93). This might indicate that substantial overlap exists between the training and validation set, such that higher capacity models are merely exhibiting better memorization than lower capacity models. In these experiments, the validation set is constructed from unseen geometries and seen species (chemical species are the same in the training and validation sets). Repeating these experiments with a hold-out set of unseen chemical species will reveal if the same trend holds, which would indicate that rather than memorizing, the network is achieving generalization to new chemistries. |
627bddd544bdd532395fb4b5 | 24 | We observe similar trends in neural scaling for the invariant GNN, SchNet ( .1). That is, not only do the equivariant GNNs achieve better performance for a given data budget, they achieve increasingly greater performance gains given more training data. This is due to the models' equivariance, which is known to produce greater sample efficiency , but it is interesting to note that this trend persists to much larger and more chemically diverse datasets than were previously considered, which typically include only 10 2 -10 3 molecular geometries from a single molecular species. |
627bddd544bdd532395fb4b5 | 25 | Training performance estimation and neural scaling enable significant improvements to model performance, and computational and data efficiency. Next, we briefly highlight the practical outcomes and usages of TPE and neural scaling as enabling technologies for scalable scientific deep learning. Based on the results presented above, TPE can be used in conjunction with any HPO routine to enable aggressive early stopping and accelerate HPO without sacrificing model performance. Clearly, the benefits of this approach become more pronounced in chemical and biological applications, where new network architectures must be continuously retrained, optimized, and evaluated on heterogeneous datasets. |
627bddd544bdd532395fb4b5 | 26 | Similarly, neural scaling provides practical ways to improve model performance and efficiency. Given an unlimited data and computation budget, the minimum loss in the neural scaling plot and corresponding model can be used. For example, the 300 million parameter ChemGPT model trained on 300 million tokens minimizes the loss in Figure . Likewise, the PaiNN model with capacity ≈ 1000 trained on 10 5 frames minimizes the loss in Figure . This may be valuable for pre-trained models that are designed to be reused and finetuned, where the training cost is amortized over many downstream applications. However, for many scientific applications, greedily optimizing for the minimum loss is not practical or even necessary. From the neural scaling results, identifying regions with the steepest slope allows for optimal and efficient allocation of resources. For example, for large chemical language models, the greatest performance improvements (Figure ) are seen for large data budgets when scaling up small models (10 5 parameters). For small data budgets, more rapid performance improvements are seen when scaling up medium-sized models (10 7 parameters). For NFFs, there are diminishing returns with increasing dataset sizes for low capacity models, while high capacity models show rapid performance improvements with increasing dataset size (Figure ). The benefits from scaling model and dataset sizes should therefore be balanced against the increased computational costs in order to find the most computationally- and data-efficient opportunities for performance improvement. Beyond optimizing resource allocation to achieve better model performance, our results on large chemical models suggest potentially new capabilities of these models, which we will explore next. |
627bddd544bdd532395fb4b5 | 27 | Large chemical language model outputs are tunable via prompt engineering. There is a vast space of potential applications for a pre-trained LLM for chemistry, including but not limited to downstream tasks like property prediction and distribution learning . Here, to demonstrate potentially unique capabilities of ChemGPT, we focus on two applications: 1) prompt engineering and 2) representation learning. Prompt engineering is an emerging field in generative LLMs, wherein training a model is only the beginning of the process, and the "quality" and diversity of outputs are tunable based on the sampling strategy and sequences used to condition the generation. In contrast to smaller, simpler generative models, the outputs of GPT-style models are highly tunable. The goal here is not to exhaustively investigate prompt engineering and representation learning, which have massive scope, but instead to demonstrate interesting capabilities of ChemGPT and provoke new research questions. |
627bddd544bdd532395fb4b5 | 28 | We consider choosing a molecule as the basis for generation (Figure ) and picking a scaffold from the molecule, or otherwise fixing some part of the molecule. This will be used as a conditioning sequence for generation, and represents a potential use case for a generative LLM: pre-training and/or fine-tuning the LLM on a particular region of chemical space and using its generative capabilities to explore the chemical space around a known hit or lead compound in a drug discovery campaign. Molecules are then generated using the conditioning sequence with either top-k/nucleus sampling or beam search. We may be interested in generating new samples that have similar properties to our original molecule, or in generating samples that preserve a scaffold or substructure but otherwise have significantly different properties. |
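To make the conditioning and sampling strategies concrete, the sketch below shows how prompt-conditioned generation with top-k/nucleus sampling and beam search could be run through the Hugging Face generate API. The checkpoint name, prompt, and all sampling values are illustrative assumptions, not settings taken from the paper.

```python
# Hypothetical sketch of scaffold-conditioned generation with a ChemGPT-style causal LM.
from transformers import AutoTokenizer, AutoModelForCausalLM

checkpoint = "ncfrey/ChemGPT-1.2B"  # assumed Hugging Face Hub checkpoint name
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

prompt = "[C][C][=Branch1]"  # SELFIES tokens used as the conditioning scaffold (illustrative)
inputs = tokenizer(prompt, return_tensors="pt")

# Top-k / nucleus sampling: more diverse, "surprising" outputs
sampled = model.generate(
    **inputs,
    do_sample=True,
    top_k=50,          # consider only the k most likely next tokens
    top_p=0.95,        # nucleus: smallest token set with cumulative probability > p
    temperature=1.0,   # softmax temperature T
    num_return_sequences=10,
    max_length=64,
)

# Beam search: higher-probability but typically less diverse samples
beamed = model.generate(**inputs, num_beams=8, num_return_sequences=8, max_length=64)

print([tokenizer.decode(s, skip_special_tokens=True) for s in sampled])
```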
627bddd544bdd532395fb4b5 | 29 | We consider the distribution of molecular properties of samples from top-k sampling and beam search with reference to an original training molecule. We find that the property distributions of samples vary considerably with sampling parameters, which introduce many degrees of freedom in the generation process . These include the number of beams, B, used in beam search, which explores a graph of generation possibilities by choosing samples that have the highest overall probability according to the model. Greedy search (not considered here) always selects the next token with the highest probability, hence beam search is guaranteed to find higher probability samples than a purely greedy strategy. The softmax temperature, T, controls the sensitivity of sampling to low probability tokens. Lowering T makes the distribution sharper, decreasing the likelihood of low probability tokens. For chemical generation, this results in less random, more repetitive samples. Conversely, increasing T results in more random, less repetitive samples. In top-k sampling, the k most likely next tokens are considered at each step of generation. This limits the sampling to a small set of high likelihood tokens, while expanding the diversity of sampling beyond greedy and beam search. In top-p (nucleus) sampling, the smallest set of tokens with cumulative probability > p is chosen. This dynamically changes the number of possible next tokens according to the changing probability distributions. We combine top-k and top-p sampling in our experiments, which avoids low probability tokens while providing more randomness and "surprising" outputs than beam search . We also vary the number of generated samples, n_s, and the prompt length, l_max (Figure ). To better understand the effects of sampling strategy, we repeat generation for ten different molecules randomly chosen from PubChem10M, and take prompts of length l_max ∈ (5, 10, 20, l - 3) tokens for each molecule, where l is the length of the original training molecule (Figure ). For the smallest and largest pre-trained ChemGPT models, we then plot the distributions of the percentages of samples that pass MOSES filters and the difference in molecular weight of generated samples compared to the original molecules used for conditioning (Figure ). The smaller model (green shading in Figure ) generally outperforms the larger model (orange) in generating samples that pass filters, for both top-k and beam search, except for k = 1000. However, the larger model generates samples that are more uniformly distributed with respect to molecular weight. The k parameter in top-k sampling has a limited effect for the large model, but tends to shift the samples to a lower pass filter rate for the small model. Changes in the beam size can induce shifts from multi-modal to unimodal distributions of samples. Overall, the property distributions of samples are highly dependent on the sampling strategy, sampling parameters, and model size, suggesting that the complexity of ChemGPT introduces many new and important degrees of freedom for generative modeling. Importantly, beam search is typically employed in sampling from chemical generative models and this restricts the diversity of outputs compared to top-k/nucleus sampling. Large chemical language models are self-supervised representation learners.
A particularly exciting prospective application of large, pre-trained chemical models is to create a general representation learner that operates across diverse chemistries . To demonstrate the representation learning capabilities of ChemGPT, we show the illustrative examples of clustering and nearest-neighbor similarity search using unsupervised embeddings. We choose a representative dataset from Therapeutics Data Commons and MoleculeNet , the FreeSolv dataset of 642 drug-like molecules and their hydration free energies in water. |
627bddd544bdd532395fb4b5 | 30 | We generate embeddings for the FreeSolv dataset from the hidden states in the last layer of ChemGPT models pre-trained on PubChem10M, without fine-tuning on FreeSolv. This is intended to simulate the usage of pre-trained ChemGPT models in general representation learning settings on new, previously unseen chemical spaces. Of course, the model could be fine-tuned on a target dataset to improve the quality of embeddings. We project the 1024-dimensional embeddings down to two dimensions using principal component analysis (PCA) (Figure ) and t-distributed stochastic neighbor embedding (t-SNE) (Figure ). For both dimensionality reduction techniques, the embeddings cluster based on the molecular scaffolds. Additionally, the t-SNE embeddings are clustered with respect to the target property, hydration free energy (Figure ). The structure in the latent space suggests that the learned embeddings from ChemGPT are chemically meaningful and useful for reasoning about chemical spaces outside of the pre-training set. We also show unsupervised ChemGPT embeddings for the Tox21 dataset, which is a binary classification task from toxicity measurements for 7,831 small molecules on 12 different targets. These embeddings cluster active and inactive compounds, again without fine-tuning on the dataset. The FreeSolv and Tox21 tasks show the relevance of unsupervised ChemGPT embeddings for both physicochemical and ADMETox tasks, where properties depend on intrinsic qualities of the molecules. |
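A minimal sketch of this embedding workflow is shown below, assuming a pre-trained causal LM checkpoint on the Hugging Face Hub; the checkpoint name and the mean-pooling choice are illustrative assumptions, not details from the paper.

```python
# Illustrative sketch: unsupervised molecular embeddings from a pre-trained LM, then PCA.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from sklearn.decomposition import PCA

checkpoint = "ncfrey/ChemGPT-1.2B"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, output_hidden_states=True)
model.eval()

def embed(selfies_string: str) -> torch.Tensor:
    """Mean-pool the last hidden layer to get a fixed-length molecular embedding."""
    inputs = tokenizer(selfies_string, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs)
    return out.hidden_states[-1].mean(dim=1).squeeze(0)  # shape: (hidden_dim,)

molecules = ["[C][C][O]", "[C][=C][C][=C][C][=C][Ring1][=Branch1]"]  # placeholder SELFIES
embeddings = torch.stack([embed(s) for s in molecules]).numpy()
coords_2d = PCA(n_components=2).fit_transform(embeddings)  # 2D projection for plotting
print(coords_2d)
```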
627bddd544bdd532395fb4b5 | 31 | Learned embeddings can also be used for a common cheminformatics task: similarity search and retrieval. Given some query molecule (Figure ), the goal is to find "chemically similar" molecules that are nevertheless different and diverse with respect to the initial query. This workflow may be used to design a library of chemical matter for high-throughput virtual screening or assays, or to preserve desirable characteristics of a query molecule while "hopping" to new molecular scaffolds or discarding undesirable moieties. The molecules identified as similar depend sensitively on the similarity/distance metric used and the molecular representation. We propose a general representation learning problem defined as returning a distribution of nearest neighbors that comprise a smooth manifold in property space. That is, regardless of the representation and distance metric, the objective is to identify molecules that are similar to the query molecule in property space. This framing transforms the abstract problem of defining "chemical similarity" into the more tractable, easily understood problem of identifying molecules with similar properties. For simplicity, we consider molecular weight (MW), partition coefficient (LogP), quantitative estimate of druglikeness (QED), and synthetic accessibility (SA) , although any accessible properties could be used. |
627bddd544bdd532395fb4b5 | 32 | To benchmark our similarity search performance, we compare to the traditional method of encoding molecules using Morgan fingerprints and computing similarities using a Tanimoto distance. As an illustrative example, we show a query molecule (Figure ) from the Enamine HTS Collection in property space and the property distributions of its 100 nearest neighbors computed via ChemGPT embeddings (Figure , red contour plots) and the fingerprint method (Figure , blue contour plots). Again, ChemGPT is not fine-tuned on this dataset; the pre-trained model is able to generate "chemically meaningful" embeddings for Enamine HTS, which is a standard chemical library for high throughput screening. The ChemGPT embeddings are dense, real-valued, high-dimensional vectors, so a combination of dimensionality reduction and flexible choice of distance metric (L1, L2, cosine, etc.) yields a smooth and tunable distribution of nearest neighbors in property space. In this example, we use PCA to reduce the embedding dimension from 1024 to 100 and compute distances with a cosine similarity. |
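The fingerprint baseline described here follows a standard RDKit recipe; a minimal sketch is given below, with the query molecule and library contents as placeholders rather than the actual Enamine HTS compounds.

```python
# Minimal baseline sketch: Morgan fingerprints + Tanimoto similarity with RDKit.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

def morgan_fp(smiles: str, radius: int = 2, n_bits: int = 2048):
    """Encode a SMILES string as a Morgan fingerprint bit vector."""
    mol = Chem.MolFromSmiles(smiles)
    return AllChem.GetMorganFingerprintAsBitVect(mol, radius, nBits=n_bits)

query = morgan_fp("CCOC(=O)c1ccccc1")                  # illustrative query molecule
library = ["CCOC(=O)c1ccc(N)cc1", "c1ccccc1", "CCO"]   # placeholder screening library
fps = [morgan_fp(s) for s in library]

# Rank the library by Tanimoto similarity to the query
sims = [DataStructs.TanimotoSimilarity(query, fp) for fp in fps]
nearest = sorted(zip(library, sims), key=lambda pair: pair[1], reverse=True)
print(nearest)
```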
627bddd544bdd532395fb4b5 | 33 | Although it is difficult to see by eye in the contour plots in Figure , by computing the statistics of the nearest neighbor property distributions we find that the nearest neighbors identified from learned embeddings are closer to the query in property space than those from non-learned fingerprints, using the following equation: Δ_p = p_NN - p_query. |
627bddd544bdd532395fb4b5 | 34 | We compute the mean of the property values, p_NN, for properties p (LogP, synthetic accessibility, molecular weight, and QED) of the 100 nearest neighbors identified with embedding and fingerprint methods, and calculate the difference (Δ_p) between the mean property value of the nearest neighbors and that of the query molecule, p_query. We report these values in Table .2. In this example, for all properties considered, the nearest neighbors from embeddings are closer to the query molecule in property space. The same trend is observed if we consider median rather than mean property values of nearest neighbors. For some applications, this may be desirable; for others the goal may be to discover nearest neighbors that are chemically similar, but with significantly different properties, e.g., avoiding activity cliffs. |
627bddd544bdd532395fb4b5 | 35 | In this paper, we developed and applied strategies for scaling large chemical language models and GNN interatomic potentials. To enable the efficient scaling of deep chemical models under computational resource constraints, we introduced Training Performance Estimation (TPE), a generalization of Training Speed Estimation that significantly reduces the computational costs of hyperparameter optimization and model selection for both chemical language models trained on large datasets and GNN interatomic potentials. The use of TPE enabled large-scale experiments, training GPT-style chemical models with over one billion non-embedding parameters on nearly ten million molecules. It also made training tractable for invariant and equivariant GNNs with a wide range of model capacities on up to 100 thousand 3D molecular geometries (4.5 million force labels). We discovered empirical power law "neural scaling" behavior that quantifies how converged model loss depends on the scale of model and dataset size over many orders of magnitude. These results enable optimal allocation of computational and data budgets for maximally efficient model performance improvements, and make scalable scientific deep learning more accessible to a broader community of researchers. Finally, we showed that the outputs of large chemical generative models are tunable via prompt engineering and sampling strategies, and that the model embeddings can be used for unsupervised representation learning and similarity search. |
627bddd544bdd532395fb4b5 | 36 | A key motivation for this work was to begin to investigate scientific deep learning at scale -using large models on massive datasets. Unlike areas such as natural language processing (NLP) and computer vision (CV), where scale has proven to be a key ingredient to recent breakthroughs, scientific domains are built on physics-based priors that impose high levels of structure on data generation processes. For this reason, it is unclear what, if any, benefits massive scale will confer to scientific deep learning. And although there is significant engineering effort required to train and deploy large-scale deep learning models, the study of such models is of inherent scientific interest because these models may display surprising, emergent characteristics that are not predictable by extrapolating from small scales. There is an opportunity in scientific deep learning to anticipate trends towards so-called foundational models, similar to those seen in NLP and CV, to ensure that such important investigations are not limited to a few extremely well-resourced research organizations. |
627bddd544bdd532395fb4b5 | 37 | Our work, specifically our findings with ChemGPT, suggests a "bittersweet lesson" for scientific deep learning - although incorporating principled physical priors and domain knowledge into scientific machine learning frameworks will continue to play an important role in this field, achieving sheer massive scale in model size and data diversity is likely to be a key component in some scientific deep learning breakthroughs. This finding presents exciting opportunities to further explore what techniques and lessons can be taken from traditional deep learning application domains. The neural scaling results indicate that traditional techniques for training large language models apply when scaling models using string-based representations of molecules. The additional data- and model-efficiency gained by careful incorporation of physical priors observed in the neural force field experiments suggests non-trivial interactions with typical scaling techniques. The trend towards larger models introduces challenges in any field, namely, increasing resource demands and environmental impact. We provide strategies and examples for scalable chemical deep learning here, but studies at massive scale remain inaccessible to most researchers. Likewise, strategies for reducing the environmental impact and carbon footprint of large-scale deep learning exist, but the engineering resources and hardware needed to do so are concentrated in large organizations . However, a growing body of research suggests that large models accelerate optimization and the finding of robust solutions, and that once trained, techniques like model distillation, quantization, and dataset pruning are effective at reducing model and dataset sizes while retaining performance. |
627bddd544bdd532395fb4b5 | 38 | The work presented here invites many directions for future research. Our goal is to enable neural scaling studies and investigate the potential benefits of scaling for chemical deep learning. Future work will investigate the complex relationships between pre-training performance improvements and downstream tasks. Large, pre-trained models can be fine-tuned on any smaller chemical space of interest to investigate the benefits of transfer learning and potential for generalization. Similarly, further study of the representations learned by large pre-trained models is warranted, including the proposal of new benchmarks for evaluating the quality of learned representations beyond downstream property prediction tasks. We were limited by computational and engineering constraints, but much larger chemical models and pre-training datasets clearly warrant investigation; this will require more advanced model parallelism approaches. Finally, the effects of physics-based priors on scaling behavior give a rich description of how the incorporation of physics, known empirical relationships, and other forms of knowledge into machine learning frameworks impact both learning quality and efficiency. Future work in this area is well-poised to yield fundamental advances in scientific machine learning. |
627bddd544bdd532395fb4b5 | 39 | In this section, we report settings for the experiments performed in this paper. All experiments described in this paper were conducted on NVIDIA Volta V100 graphics processing units (GPUs) with 32 GB of memory per node and 2 GPUs per node. All models were implemented in PyTorch and trained with the Distributed Data Parallel (DDP) accelerator , the NVIDIA Collective Communication Library (NCCL), PyTorch Lightning and LitMatter for multi-GPU, multi-node training. |
627bddd544bdd532395fb4b5 | 40 | Large Language Models (LLMs). The ChemGPT model architecture is based on the GPT-Neo transformer implementation in Hugging Face . The model has 24 layers, with a variable width, w, which determines the model size. Model sizes range from 77,600 to 1,208,455,168 non-embedding parameters. The model is trained via stochastic gradient descent (SGD) with the AdamW optimizer, using a learning rate of 2 * 10 -5 , a per-GPU batch size of 8, and a constant learning rate schedule with 100 warmup steps for scaling experiments. Models were trained for 10 epochs in a self-supervised manner, with a cross-entropy loss for causal language modeling. |
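Expressed as Hugging Face Trainer arguments, the quoted training settings would look roughly like the sketch below; values not stated in the text (such as the output directory) are placeholders.

```python
# Hedged sketch of the scaling-run training settings using Hugging Face TrainingArguments.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="chemgpt-scaling",           # placeholder
    learning_rate=2e-5,                     # learning rate of 2 * 10^-5
    per_device_train_batch_size=8,          # per-GPU batch size of 8
    lr_scheduler_type="constant_with_warmup",
    warmup_steps=100,                       # 100 warmup steps
    num_train_epochs=10,                    # 10 self-supervised epochs
    optim="adamw_torch",                    # AdamW optimizer
)
```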
627bddd544bdd532395fb4b5 | 41 | The training dataset for scaling experiments is PubChem10M , a set of 10 million SMILES strings. 5% of the data is randomly sampled and held out as a fixed validation set of size 500,000 molecules. Variable training datasets with sizes 10 n , where n ∈ (2, 3, 4, 5, 6), were used. The largest training dataset includes all molecules in PubChem10M, excluding the validation set. The maximum vocabulary size was 10,000 and the maximum sequence length was 512 tokens. SMILES strings were converted to SELFIES using version 1.0.4 of the SELFIES library . SELFIES were tokenized by splitting individual strings into minimally semantically meaningful tokens denoted by brackets, including start-of-string, end-of-string, and padding tokens. Dataset sizes range from 51,200 to 304,656,384 tokens. |
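The conversion and tokenization step can be sketched with the selfies library as below, assuming the split_selfies helper available in recent releases; the example molecule and special-token names are illustrative, not taken from the paper.

```python
# Minimal sketch of SMILES -> SELFIES conversion and bracket-level tokenization.
import selfies as sf

smiles = "CC(=O)Oc1ccccc1C(=O)O"              # aspirin, as an illustrative example
selfies_str = sf.encoder(smiles)               # convert SMILES to a SELFIES string
tokens = list(sf.split_selfies(selfies_str))   # split into bracketed tokens

# Add special tokens analogous to those described in the text (names are assumptions)
tokens = ["[BOS]"] + tokens + ["[EOS]"]
print(tokens)
```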
627bddd544bdd532395fb4b5 | 42 | Graph Neural Networks (GNNs). We train GNNs to predict the forces of molecular geometries. Force-only training (α_E = 0 in Eq. 5) was used for neural scaling experiments to improve convergence and avoid issues with systematic drift in predicted energies, which we identified during the course of this work and plan to address in future work. We use the SchNet , PaiNN , Allegro , and SpookyNet models. Model implementations are from the NeuralForceField repository and the Allegro repository . Model sizes (w in Equation ) were varied between 16, 64, and 256, while the number of layers/convolutions (d in Equation ) was chosen to be 2, 3, or 4. A 5 Å nearest-neighbor cutoff was used. All other model hyperparameters were set to default values from the original implementations. GNN models were trained with SGD using the Adam optimizer. |
627bddd544bdd532395fb4b5 | 43 | A learning rate scheduler reduced the learning rate by 0.5× after 30 epochs without improvement in the validation loss, with a minimum learning rate of 10 -7 . Early stopping was applied after 50 epochs without improvement in the validation loss, and training was capped at 1000 epochs. Initial learning rates of 10 -3 , 10 -4 , and 10 -5 , and per-GPU batch sizes of 4, 8, 16, 32, and 64 were used during hyperparameter optimization experiments, while keeping the network architecture hyperparameters fixed. Models were trained for 50 epochs during hyperparameter optimization to approximate a full training budget, with a limited percentage of the total training budget used to calculate TSE. |
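In PyTorch terms, the schedule and stopping rule described here could be sketched as follows; the model, optimizer settings, and validation loop are placeholders standing in for the actual GNN training code.

```python
# Illustrative PyTorch sketch of the learning-rate schedule and early stopping described above.
import torch

model = torch.nn.Linear(10, 1)                        # placeholder for a GNN force field
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, factor=0.5, patience=30, min_lr=1e-7   # 0.5x after 30 stagnant epochs
)

best_val, epochs_since_best = float("inf"), 0
for epoch in range(1000):                             # training capped at 1000 epochs
    val_loss = 1.0 / (epoch + 1)                      # placeholder for the real validation loss
    scheduler.step(val_loss)
    if val_loss < best_val:
        best_val, epochs_since_best = val_loss, 0
    else:
        epochs_since_best += 1
    if epochs_since_best >= 50:                       # early stopping after 50 stagnant epochs
        break
```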
627bddd544bdd532395fb4b5 | 44 | The training dataset was assembled from ANI-1x , which contains energies and forces from 5 million density functional theory calculations for small molecules. A fixed validation dataset of 50,000 frames was held out by random sampling. Different training splits were taken with sizes 10 n , where n ∈ . Training datasets for TPE were assembled by randomly sampling 1,000 structures from molecular dynamics (MD) trajectories for each of the 10 molecules available in the revised MD-17 dataset, for a total of 10,000 training samples. A validation dataset of equal size was constructed from the remaining geometries. Revised MD-17 is an updated version of the MD-17 dataset, recomputed at the PBE/def2-SVP level of theory with strict convergence criteria to remove noise found in the original MD-17 dataset. |
627bddd544bdd532395fb4b5 | 45 | Code used to perform the experiments and Training Performance Estimation (TPE) reported in this paper is available via GitHub in the LitMatter repository . Neural force field model code is available here and Allegro model code is available here. The GPT-Neo model that ChemGPT is based on is available here. PubChem10M tokenizers using SELFIES versions 1.0.4 and 2.0.0 are available through the LitMatter repository and the Hugging Face Hub. Because of the significant computational resources required to train large models and the value of those models, pre-trained model checkpoints for ChemGPT are available via the Hugging Face Hub. Pre-trained model checkpoints for PaiNN and Allegro are available through Figshare. |
6501ed1a99918fe537e49e59 | 0 | The implementation of computerized algorithms into organic chemistry has had a rich history, with early emphasis centered around deriving linear relationships from observed results. By the 1970s, synthetic chemists had turned their attention to utilizing more complex functions to model more abstract observations. In 1977, Corey et al. published the first recognized retrosynthetic analyzer, LHASA (Logics and Heuristics Applied to Synthetic Analysis), which featured hand-coded expert rules resulting in over 30,000 lines of FORTRAN code. With the turn of the century, improvements in computational hardware and the development of new computer learning algorithms, including machine learning (ML), have seeded new avenues for algorithm-based predictive chemistry. ML is the process of taking complex inputs, abstracting their relevant features through non-linear equations, and correlating those features to a given output. Despite its simplistic framework, variations upon this theme have yielded advances in numerous areas including fundamental molecular property prediction (e.g., quantum chemical, ADMET), reaction property prediction (e.g., regioselectivity, yield), and generative modeling. With these more powerful algorithms come higher data requirements. Where the first data-driven chemistry models may have necessitated a few experimental results, ML often demands tens of thousands of data points. Purely computational datasets from density functional theory (DFT) or semi-empirical methods have been generated and utilized in ML for prediction of quantum chemical properties (e.g., HOMO-LUMO gaps, dipole moments) and in molecular scaffold generation. Benefits of using these datasets include larger sizes and less noise present within each observation. Whilst experimental data is inherently noisier, more expensive, and often more laborious to collate than computational data, it presents a more holistic representation of a chemical system, even if we do not fully understand the intricacies present within that system. It is therefore important for the community to find ways to incorporate these smaller, experimental datasets as a key feature into ML tools. |
6501ed1a99918fe537e49e59 | 1 | One method that has shown potential for the utilization of smaller datasets in deep ML is transfer learning. In this process, a model is first trained on a large dataset. The target prediction of the first task (the pretraining task) does not need to be directly related to the desired final task (the finetuning task); however, the initial knowledge gained from pretraining must have some relevance to the finetuning. In a neural network, each non-linear function is referred to as a layer; each layer in the neural network is responsible for extracting relevant chemical features for a given task. The first layer is defined as the input layer, the final layer as the output layer, and, for this manuscript, the penultimate layer is defined as the latent space (Figure ). One may imagine the latent space as a complete digitization of a molecule, whereby each molecular feature has been assigned a series of numbers. The key to effective transfer learning hinges on this latent space, whereby each molecule's chemically relevant features are so well characterized that the substitution of one output layer for another results in accurate prediction of a different chemical property. In essence, the model transfers the knowledge it learnt from pretraining to finetuning. To date, transfer learning in data-driven chemistry has been applied on a case-by-case basis, where pretraining tasks are expertly chosen for specific finetunings to minimize domain mismatch. This limits the possibilities of fully harnessing smaller datasets.
6501ed1a99918fe537e49e59 | 2 | Here I report the development of a general chemistry-centric foundational model. Approximately 1 million experimentally validated organic crystal structures from the Cambridge Crystallographic Data Centre (CCDC) were used to train this foundational model. The key hypothesis is that if an ML model is capable of predicting accurate crystal structure information, its latent space will be useful in the prediction of many other chemical outcomes. It was envisioned that the size and scope of the CCDC would allow for a deep neural network approach, capable of inferring nuanced interactions between atoms and/or motifs of a given molecule, to facilitate rapid specialization. Importantly, as only the final output block of the neural network would undergo training, this would yield a modular, flexible framework capable of reliable prediction even on limited training data. The modularity and accuracy of this approach are reported on three chemistry-related prediction tasks (toxicity prediction, yield prediction, and olfaction prediction), showcasing a performance improvement over other documented ML techniques and, when applicable, comparison to other deep learning models (Figure ).
6501ed1a99918fe537e49e59 | 3 | The first task was to generate a foundational model whose latent space could digitize a molecule for subsequent downstream finetuning. As the CCDC dataset is centered around crystal structure data, whose focus is on the geometry of the molecule(s), I opted for a graph-based model. Graph-based models represent molecules as mathematical graphs, interpreting atoms as nodes (abstract entities which store information about their corresponding atoms) and bonds as edges (entities which represent relationships between nodes). It is of note that the information stored within the atom-nodes is completely user defined and may be as sparse or as dense as the user wishes. The specific graph neural network utilized was a message passing neural network (MPNN), a flavor of graph convolutional neural network which has seen noted success in a variety of chemistry prediction tasks. Briefly, an MPNN deduces the local chemical environment of each atom within the molecule, preserving the symmetry of chemically identical atoms. The training set for this initial task was comprised of carbon-containing crystal structures in the CCDC. Molecules which contained "rare" atoms (atoms that were represented fewer than 100 times in the dataset) were excluded, which still yielded a broad atomic scope (Figure ). Additionally, structures whose bonding pattern was ambiguous and conformational polymorphs with a significant difference in their conformations were also removed. A large message passing neural network was trained to accept 2D data of a molecular structure and predict an atomic coordinate proxy for each atom within the molecule. Atomic coordinates are not unique to a molecule or conformation (a molecule rotated through space is the same molecule, but its 3D coordinates will have changed); thus the model predicted the through-space distances of an atom to its nearest neighbors and the corresponding bond angles formed. It was discovered that such an MPNN could accurately predict the bond lengths and angles of unseen molecules (scaffold split) (Table ). Thus, the investigation into the transferability of the foundational model's latent space commenced. It should be noted that crystal structure data has been used to predict solid-form and crystal-structure-based properties but has seen limited application in transfer learning to more tangential tasks. Given the potential foundational chemistry knowledge that could be extracted from the CCDC dataset, it was hypothesized that a transfer learning approach from CCDC data would traverse an unexplored gap within the data-driven chemistry literature.
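For readers unfamiliar with MPNNs, the sketch below illustrates a single generic message-passing update of the kind described above, in plain PyTorch; it is a simplified stand-in, not the CCDC-trained architecture, and all feature dimensions and the toy molecule are placeholders.

```python
import torch
import torch.nn as nn

class MessagePassingLayer(nn.Module):
    def __init__(self, node_dim, edge_dim, hidden_dim):
        super().__init__()
        self.message_fn = nn.Sequential(
            nn.Linear(2 * node_dim + edge_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, node_dim),
        )
        self.update_fn = nn.GRUCell(node_dim, node_dim)

    def forward(self, h, edge_index, e):
        # h: (n_atoms, node_dim); edge_index: (2, n_edges); e: (n_edges, edge_dim)
        src, dst = edge_index
        msg = self.message_fn(torch.cat([h[src], h[dst], e], dim=-1))
        agg = torch.zeros_like(h).index_add_(0, dst, msg)   # sum incoming messages per atom
        return self.update_fn(agg, h)                        # GRU-style node update

# toy molecule: 3 atoms, 2 bonds (each bond listed in both directions)
h = torch.randn(3, 16)
edge_index = torch.tensor([[0, 1, 1, 2], [1, 0, 2, 1]])
e = torch.randn(4, 4)
layer = MessagePassingLayer(16, 4, 32)
print(layer(h, edge_index, e).shape)   # -> torch.Size([3, 16])
```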
6501ed1a99918fe537e49e59 | 4 | With the trained CCDC base model in hand, the final output layer was discarded and replaced with a new, untrained feedforward neural network. This new network was shallow (2 linear layers) and small (~130K parameters) to allow for rapid and facile training towards other chemistry-centric tasks. The pretrained layers, the bulk of this system, are "frozen": no additional training is performed upon them. For the following examples, little to no further hyperparameter optimization was conducted to highlight the ease of translation from base model to finetuned model (Figure ). |
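A minimal sketch of this head-swap strategy is shown below, assuming a generic frozen encoder and a two-layer feedforward head; the latent dimension, hidden width, and placeholder encoder are illustrative and not taken from the released models.

```python
import torch
import torch.nn as nn

class FineTuneModel(nn.Module):
    def __init__(self, encoder, latent_dim=256, hidden_dim=256, out_dim=1):
        super().__init__()
        self.encoder = encoder
        for p in self.encoder.parameters():
            p.requires_grad = False                  # "freeze" the pretrained layers
        self.head = nn.Sequential(                   # shallow, task-specific output block
            nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, out_dim),
        )

    def forward(self, x):
        z = self.encoder(x)                          # latent-space "digitization" of the molecule
        return self.head(z)

encoder = nn.Sequential(nn.Linear(64, 256), nn.ReLU())           # placeholder frozen encoder
model = FineTuneModel(encoder)
optimizer = torch.optim.Adam(model.head.parameters(), lr=1e-3)   # only the head is trained
print(model(torch.randn(8, 64)).shape)                           # -> torch.Size([8, 1])
```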
6501ed1a99918fe537e49e59 | 5 | To validate the foundational model's applicability as a "springboard" for other chemistry tasks, three datasets were sourced and used for finetuning that covered a broad range of structure-to-function prediction tasks: acute toxicity, Suzuki and Buchwald-Hartwig yield regression, and odor classification. These tasks are of interest to areas within data-driven chemistry, given their utility to drug discovery and development, chemical synthesis, and perfume production and have open-source datasets available for modelling and facile benchmarking. Toxicity: |
6501ed1a99918fe537e49e59 | 6 | The connection between molecular structure and toxicity has been well documented and has been subject to excellent QSAR modelling. Thus, it was believed that toxicity prediction was a natural first exploration for the modular framework. For this proof-of-concept, the regression prediction of therapeutic toxicities (LD50) from basic, 2D structural information of a given small molecule drug was attempted. Data was sourced from the Therapeutics Data Commons (TDC), a repository of drug-relevant data for both small molecules and biologics providing useful benchmarks for in silico drug design. For this task, the dataset of interest was the Acute Toxicity dataset consisting of 7,358 small molecule pharmaceutics. This dataset had a built-in scaffold split which was used for their benchmarking. A scaffold split partitions the dataset so that the test set has unseen molecules (or unseen scaffolds). This is considered to be a more challenging target than a random split, whereby randomly chosen molecules are assigned to the test set. Specifically, it is believed that there is less data leakage when employing a scaffold split than a random split. For this toxicity prediction task and all subsequent tasks, an unseen-molecules test set policy (scaffold split) was employed. TDC leaderboards at the time of writing indicate a top model, Oloren ChemEngine, with an accuracy of 0.552 mean absolute error (MAE) (Eq. 1), indicating that on the test set, Oloren ChemEngine predicted, on average, within ±0.552 units of the true experimental value. For reference, a perfect model would have an MAE = 0. Oloren ChemEngine is an open-source flexible system, which removes much of the code writing for task-specific prediction, instead choosing the best molecular, occasionally proprietary, featurization and model for the user's data. Random Forest, Gaussian Process, and Adaboost models, all of which have noted good performance on molecule property prediction, were chosen as baseline models. Superior performance of Oloren ChemEngine (lower MAE) on TDC's test set against these three baseline models was noted (Table ). The model pretrained on CCDC data, Crystal-Tox, was modestly more accurate than Oloren ChemEngine and significantly more accurate than the three baselines, with an average MAE of 0.52 over 5 initializations. As the hypothesis was that pretraining from crystal structure data would yield a model with more chemical knowledge than models without pretraining, each model and baseline was challenged to predict the toxicities of non-therapeutic molecules across a greater range of chemical space. To this end, 12 molecules were chosen: 4 benign molecules (water, sucrose, glucose, and monosodium glutamate), with benign defined as an LD50 value greater than 1,500 mg kg-1, 4 natural toxins (THC, CBD, aconitine, and epibatidine), and 4 illicit substances (MDMA, cocaine, LSD, and heroin) (Figure ). This new test set comprised a different distribution of chemical toxicity, with mean and median toxicities (units = log(kg mol-1)) of 3.05 and 2.79, compared to the training data with mean and median toxicity measurements of 2.53 and 2.36, and the TDC testing data with nearly identical mean and median to the training data (Figure ). Additionally, the minimum toxicity value was also lower in this new test set than in the training set, -0.70 compared to -0.34. These new molecules were also, on average, structurally less similar to the training data than the TDC's test set (Figure ).
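For clarity, the MAE referred to above (presumably the quantity defined in Eq. 1 of the original manuscript) takes the standard form

$$\mathrm{MAE} = \frac{1}{N}\sum_{i=1}^{N}\left| y_i^{\mathrm{pred}} - y_i^{\mathrm{exp}} \right|$$

where the sum runs over the N test-set molecules.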
In summary, these new molecules represented a greater slice of chemical toxicity space than had previously been trained or tested upon. Unsurprisingly, the baseline models' performance upon testing on the aforementioned compounds dropped significantly, to MAEs of 1.54 (Random Forest), 1.86 (Gaussian Process), and 1.73 (Adaboost). Oloren ChemEngine still outperformed the baselines with a mean MAE of 1.46, with Crystal-Tox again showing a modest improvement over all 4 models with an MAE of 1.38 (Table ). Crystal-Tox was most accurate for compounds in the mid-range of the toxicity scale, with the most benign (water) and most toxic (epibatidine) compounds having far less extreme predicted toxicity values than in reality (Table ). Interestingly, both Oloren ChemEngine and Crystal-Tox correctly identified the higher toxicity of aconitine, also known by its common names "wolfsbane" and "monkshood". The high accuracy of Crystal-Tox was primarily due to its better understanding of non-toxic substances, which were often ranked as ~0.5 units more toxic by Oloren ChemEngine's best model (Table ). Crystal-Tox's outperformance of the baselines and Oloren ChemEngine models highlighted the promising nature of this framework for toxicity prediction of pharmaceutical compounds and toxic, non-pharmaceutical compounds.
6501ed1a99918fe537e49e59 | 7 | Reaction Yields: Reaction yield prediction, both qualitative and quantitative, has been a rich area for machine learning in chemistry. Driven by its importance to synthetic chemists and by the relative abundance of data through patent literature curation (USPTO) and high throughput experimentation (HTE), several notable ML models have been generated with the intention of accurate yield prediction. It has been noted that a model with good yield prediction accuracy could be utilized in a number of valued applications including reagent ranking and retrosynthesis design, a challenging field often considered by synthetic chemists to showcase the "artform" of synthetic chemistry. The link between solid-state molecular structure and reaction outcomes, which typically occur in solution, is more tenuous than for toxicity; thus yield prediction is a challenging use case for foundational chemistry knowledge extracted from structural data. Two reactions which have seen enormous utility in the synthetic community are the Suzuki and Buchwald-Hartwig coupling reactions, highly versatile palladium-catalyzed carbon-X bond formation reactions. Indeed, Suzuki and Buchwald-Hartwig couplings made up nearly a third of all reactions performed in medicinal chemistry and natural product total synthesis in 2016, with their prevalence only increasing in the following years. Attention was first directed to Suzuki coupling reactions, whose data was sourced from the US patent literature (USPTO). Prior deep learning models have noted excellent performance, sometimes only a 5% discrepancy between experimental and predicted values. However, when a leave-molecule-out approach was taken on Buchwald-Hartwig coupling data, whereby the test set contains unseen molecules, model performance drops precipitously. The model is therefore challenged on this style of splitting, where the top reagents/reactants are left out for the test set. This Suzuki dataset comprised 5,143 electrophiles, 1,122 nucleophiles, 10 catalysts, and 90 ligands (Figure ). Similar to the toxicity predictions, Random Forest and Adaboost were used as yield prediction baselines, with MAE as the metric. Initial trials with Gaussian Process regression models yielded far lower accuracies, thus they were omitted from the baseline measure.
6501ed1a99918fe537e49e59 | 8 | Upon testing each baseline against the test reactions with unseen boronic acids or aryl halides, modest performances of 19.5 (Random Forest) / 21.6 (Adaboost) and 19.5 (Random Forest) / 21.5 (Adaboost) average MAE, respectively, were observed. Yield-BERT, a transformer-based model which acts as a machine translation from the language of reaction SMILES to product yield, was used as an accurate ML benchmark. Yield-BERT has been reported to have excellent performance on both Suzuki and Buchwald-Hartwig yield prediction, even when trained on limited data, making it a suitable benchmark for my framework. Additionally, similar to this pretraining approach, Yield-BERT uses no precomputed data and even outperforms modeling with DFT-based chemical descriptors. A large MPNN, GraphRXN, was similarly chosen as an additional benchmark. GraphRXN has shown notable success on external and internal HTE data. Interestingly, Yield-BERT achieves similar accuracy on my dataset splits to Random Forest models applied to Morgan Fingerprints, yielding 21.9 and 22.0 MAE on unseen boronic acids and aryl halides, respectively. GraphRXN performs worse than Yield-BERT, a surprising outcome given that prior modeling of Suzuki reactions with GraphRXN, albeit on HTE data and not the historical, collated, literature patent data used in this interrogation, was on par with Yield-BERT. The framework pretrained from CCDC data, dubbed Crystal-Yield, showed a modest performance increase across both the top nucleophile and electrophile test sets, with a mean MAE of 18.4 for unseen nucleophiles and a mean MAE of 18.5 for unseen electrophiles (Table ).
6501ed1a99918fe537e49e59 | 9 | The similarity in performance of the models is likely due to the inherently noisy nature of experimental chemistry, where different chemists, different reagent lots, and different environments can cause shifts in the experimental value. By its very nature, the USPTO Suzuki dataset is comprised of reactions from multiple chemists across the country and across decades of research. Indeed, historical bias is often present in this style of collated data, which can translate to poor machine understanding. Additionally, negative data is rarely observed in this dataset, with only 0.9% of all reactions reporting a yield under 5% (Figure ). Prior research has noted the importance of this "negative data" in predictive modelling. Crystal-Yield's average MAE of ~18% is a step forward towards accurate modeling of noisier data.
6501ed1a99918fe537e49e59 | 10 | To showcase the framework's potential on more systematically sampled data, reaction yield of Buchwald-Hartwig cross couplings generated by Ahneman et al. was modeled. Unlike the Suzuki coupling data, reaction yields were determined solely through high throughput experimentation (HTE) from a single laboratory and 26% of the dataset consisted of "negative reactions" (Figure ). For this dataset, a single amine is reacted against 16 aryl halides, 3 bases, 4 ligands, and 24 additives (Figure ). Prior modeling revealed that testing on unseen additives can lead to much higher error rates. Notably, additives are not the only critical factor in yield determination, thus, I probed the performance of Crystal-Yield when predicting on not just unseen additives, but unseen aryl halides, bases, and catalysts. Baseline models of Random Forest, Adaboost, and Gaussian Process were used. As with each previous test set, unseen molecules (halides, bases, ligands, additives) were used. Model performance for the Buchwald-Hartwig reactions was the average of k-fold validation, where each fold consisted of several unseen halides/bases/ligands/additives. Once again, similar performance of Yield-BERT and the baseline models was observed, however, GraphRXN clearly showcased its specialization in modelling HTE data. Notably, when the base Crystal-Yield model (~130,000 parameters) was used to predict Buchwald-Hartwig yields on unseen ligands, GraphRXN outperformed with an MAE of 13.8 compared to Crystal-Yield's MAE of |
6501ed1a99918fe537e49e59 | 11 | Finally, model finetuning on molecule odor prediction was investigated. Similar to toxicity, perceived fragrance is highly dependent upon molecular structure, with most humans being able to distinguish between select enantiomers. However, odor is, by definition, perceived odor, and the mechanism by which olfactive detection and recognition occurs is still opaque. Indeed, it is well known that two individuals may interpret a molecule's fragrance differently or be anosmic to a given molecule (unable to smell the molecule). Thus, the finetuning task is not a regression, but a multilabel multiclass prediction. Multilabel refers to the fact that there are many possible odors for a molecule, and whilst there is no universal definition of fragrance classes, there are generally accepted realms: sweet vs herbaceous vs floral. Multiclass indicates the possibility that a molecule may have multiple possible olfactive notes; a molecule may smell sweet and buttery, not just one or the other. It can be observed from the fragrance distribution of the dataset that many molecules offer a fruity component (Figure ), which may itself be accompanied by a specific fruit, such as melon or blackcurrant (Figure ). The developers of prior deep learning olfaction models have not yet made their code nor their datasets available for benchmarking. As such, I used two standard multilabel classification baselines, Random Forest and K-Nearest Neighbors, as some of the previous baselines, Adaboost and Gaussian Process, are unable to work with multilabel classification. Random Forest models have been noted previously for their excellence in machine olfaction. Small molecule fragrance data was obtained from the Pyrfume project, an open-source database of pre-processed, literature-curated olfactive data. This dataset consisted of 3,502 small molecules, each with a corresponding label which could be any number of combinations from the 113 odor classes. Interestingly, a minority of molecules in this dataset had no detectable fragrance (Figure ). A cross validation (5-fold) split was performed, where each molecule in the test split was an unseen molecule. Unlike yield and LD50 prediction, which were regression tasks, classification was to be performed and thus the F-score classification metric was utilized. F-scores are typically used for binary classifications, but multiclass extensions of the F-score have been developed. Briefly, the F-score for each class is computed before being averaged. The averaging can be unweighted (macro) or weighted by class size (weighted). For F-scores, higher values indicate better model performance. With the Random Forest and K-Nearest Neighbor baselines, relatively low F-scores were observed, highlighting the challenge of predicting reasonable olfactive notes for structurally different molecules. However, my crystal structure foundational model combined with an olfactive-specific feedforward neural network, Crystal-Olfaction, achieved a notable increase in both weighted and unweighted F-scores, to 0.62 and 0.92 (Table ). One notable challenge for all networks was the prediction of non-fragrant molecules, which were all incorrectly assigned multiple odor labels. This may be due to a dataset bias of few non-fragrant molecules, or it is possible that these molecules have an olfactive profile but such low volatility that they are not detectable to most humans. I believe this to be an interesting future direction for further exploration of this modular framework.
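As a reference for the averaging schemes mentioned above, the per-class and averaged F-scores take the standard textbook forms (assumed here rather than quoted from the manuscript):

$$F_1^{(c)} = \frac{2\,P_c R_c}{P_c + R_c}, \qquad F_1^{\mathrm{macro}} = \frac{1}{C}\sum_{c=1}^{C} F_1^{(c)}, \qquad F_1^{\mathrm{weighted}} = \sum_{c=1}^{C} \frac{n_c}{N}\, F_1^{(c)}$$

where $P_c$ and $R_c$ are the precision and recall for odor class $c$, $C$ is the number of classes, $n_c$ is the number of test examples bearing label $c$, and $N = \sum_c n_c$.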
6501ed1a99918fe537e49e59 | 12 | Finally, Crystal-Olfaction was challenged to distinguish not just between structurally dissimilar molecules, but between enantiomers, distinguished by a one-hot coded chiral tag. It is well understood that, whilst their physical properties are identical, enantiomers can be perceived as two distinct scents. A classic example is that of carvone, where (R)-carvone is often described as spearmint and (S)-carvone as caraway. Thus, 11 enantiomer pairs were chosen, 5 of which have identical olfactive profiles and 6 of which have distinct olfactive profiles. Crystal-Olfaction and the baseline models were trained on all training data folds and tested on these 22 compounds (Figure & Figure ). It was noted that all models found this new test set difficult, although Crystal-Olfaction performed significantly better than the baselines (Table ). Given the baseline models' difficulty in distinguishing odor classes for structurally dissimilar molecules, it is unsurprising that enantiomer distinctions were similarly challenging (Figure ). Crystal-Olfaction was more accurate at determining the correct scent labels; however, its ability to distinguish between enantiomeric olfactive profiles was more limited. Of the 11 pairs, 5 were correctly identified as having identical or different fragrance notes (Figure ). Commonly, Crystal-Olfaction would predict identical odor classes for enantiomers with substantially different scents. However, some successes were noted: the differentiation between the menthone enantiomers 23 and 24 and between enantiomers 19 and 20. Whilst none of the top 5 predicted labels for 19 were correct, it was observed that a "fermented fatty" odor could be construed as cheesy, a label that Crystal-Olfaction deemed highly likely for 19. The challenge of this test set is highlighted by the low density (5%) of chiral molecules in the training dataset, of which only 21% were enantiomeric pairs (Figure ). Inclusion of additional chiral molecules into the training set may yield more discernment between enantiomer pairs.
6501ed1a99918fe537e49e59 | 13 | I have demonstrated a proof-of-concept for a foundational chemistry model, capable of predicting toxicity, palladium-catalyzed cross-coupling yields, and molecular fragrance from the 2D structures of compounds. Key to this success was the utilization of a subset of the CCDC's dataset to generate a foundational model with enough chemical knowledge to be applicable across a range of chemistry fields. This was showcased with the development of three chemistry-specific tasks, which outperformed baselines and other deep learning model benchmarks. The foundational model as well as the trained models are offered to the public for future exploration of toxicity, reaction outcome, and olfactive predictive modeling, as well as other chemistry-relevant tasks such as structure-activity relationship (SAR) exploration and reagent design.
67c477f081d2151a02bafb24 | 0 | Langmuir monolayers of phospholipids at air-water interfaces are ubiquitous systems that exhibit unique two-dimensional structural and mechanical properties upon lateral compression and expansion. Pulmonary surfactant (PS) monolayers line the alveolar air-liquid interface where they dynamically regulate surface tension during breathing cycles, a critical function for maintaining normal respiratory mechanics. Lack or disintegration of PS leads to respiratory diseases such as respiratory distress syndrome (RDS) in premature infants and acute respiratory distress syndrome (ARDS) in COVID-19 patients. Monolayer behavior during compression-expansion cycles can be characterized across different temperatures using complementary experimental techniques including Langmuir-Wilhelmy balance measurements and constrained bubble surfactometry (CBS), each providing unique insights into interfacial dynamics. However, while the accuracy of experimental measurements is limited by several factors, computer simulations are constrained by the lack of accurate force fields and the small length and time scales involved. Here, we develop a novel coarse-grained computational approach based on dissipative particle dynamics (DPD) to study the temperature-dependent interfacial and mechanical properties of phospholipid monolayers and bilayers.
67c477f081d2151a02bafb24 | 1 | Lung surfactant is a mixture of 90 wt% lipids and 10 wt% surfactant proteins. The main lipid components are phosphatidylcholines (PC), while phosphatidylglycerols (PG), phosphatidylethanolamines (PE), phosphatidylinositols (PI), and cholesterol are present in minor amounts. It is well known that the major PC in lung surfactant is dipalmitoyl phosphatidylcholine (DPPC), which accounts for about 40-50 wt% of the total mass, while the remaining PCs are unsaturated. The detailed mechanism by which LS regulates the surface tension in the alveoli is still unclear. DPPC monolayers are the most studied, DPPC being the major component and primarily responsible for reducing the alveolar surface tension to near-zero values at the end of exhalation. The mechanical behavior of phospholipid monolayers is characterized by the surface pressure-area (P-A) isotherm, which describes the surface pressure $\Pi = \gamma_0 - \gamma_M$ as a function of the area per lipid $A_L$ of the monolayer, where $\gamma_0$ is the surface tension of water and $\gamma_M$ is the surface tension of the air-water interface with the monolayer. Additionally, DPPC monolayers undergo 2D phase transitions upon lateral compression. At high $A_L$, the monolayer exhibits a gas (G) phase, where the surfactant molecules are well separated with practically no interactions between them. As the monolayer is compressed and $A_L$ decreases, a liquid phase known as the liquid expanded (LE) phase is formed. In the LE phase, the lipid molecules diffuse laterally, with their tails oriented randomly with respect to the surface normal. Further compression results in a solid or gel-like phase called the liquid condensed (LC) phase, in which the tails are ordered and oriented along the surface normal. Under specific temperature and surface pressure conditions, these phases can coexist within the monolayer, forming microscopically observable domains with distinct structural and mechanical properties: the LE and LC phases coexist at intermediate values of $A_L$ and temperature, characterized by a plateau region in the isotherm, while a coexisting gas phase may appear as holes in the monolayer. This behavior is strongly temperature-dependent; the LE-LC coexistence and the plateau region in the isotherm occur only below the critical point (42 °C), near the main transition temperature of DPPC, where the LE and LC phases become indistinguishable. Numerous experimental and computational studies on DPPC monolayers over the past several decades, however, have not provided accurate results due to various limitations. An excellent review on this topic is provided by Stachowicz-Kusnierz et al. As shown in Figure , the experimental data from various groups spread over a wide range at various temperatures. The spread in the experimental data is associated with various experimental difficulties, the experimental apparatus, subphase conditions (such as pH, salinity, and temperature), compression rate, lipid deposition, leakage, impurities, and equilibration time. Accurate experimental isotherms are crucial for developing precise computational models for molecular simulations. The overall trend, shape, and behavior of the experimental isotherms in the literature are comparable. However, variations do still exist among the isotherms across different papers.
Key characteristic features of DPPC isotherms, including the surface pressure lift-off point, the phase transition plateau, and the monolayer collapse pressure, show significant variability across different experimental studies, highlighting the need for standardized protocols. These disparities stem from various factors that need to be addressed.
67c477f081d2151a02bafb24 | 2 | Surface pressure measurements are conducted using complementary techniques: the Langmuir-Wilhelmy balance method, which provides direct force measurements and enables simultaneous microscopy, and the pendant drop technique, which offers advantages for high-temperature studies despite increased sensitivity to vibration and evaporation effects. The inherent strengths and limitations of each method were carefully considered in data analysis and interpretation. The pendant drop apparatus determines surface tension by analyzing the droplet's shape using the Young-Laplace equation. The accuracy of surface tension measurements directly hinges on the precise fitting of the droplet shape. Particularly at low surface tensions, irregularities in droplet shape can result in inaccurate measurements. Pendant drop measurements are highly sensitive to environmental conditions, making experiments challenging at high temperatures due to droplet evaporation. On the other hand, the Langmuir trough measures surface tension by gauging forces with a Wilhelmy plate. Temperature regulation with a precision of ±0.5 °C throughout the experimental measurements is essential for obtaining reproducible isotherms, as even small thermal fluctuations can significantly impact phase transitions and molecular reorganization in the monolayer. However, the Langmuir trough apparatus is prone to leakage issues at low surface tensions, potentially causing premature monolayer collapse. Nevertheless, using the Langmuir trough offers the advantage of simultaneous in-situ fluorescent imaging, providing additional insights into lipid configuration. Compression rate significantly influences isotherm characteristics through its impact on molecular reorganization kinetics. Our systematic investigation of compression speeds (5-20 mm/min) revealed that rates exceeding 15 mm/min led to artificial smoothing of phase transitions and premature collapse, while rates below 8 mm/min increased susceptibility to environmental perturbations and film loss. The optimal rate of 10 mm/min was selected based on the reproducibility of phase transition plateaus (±0.5 mN/m) and collapse pressures (±1 mN/m). Quasi-equilibrium compression conditions, achieved at rates ≤10 mm/min, allow sufficient time for local molecular reorganization and domain formation, as verified through real-time Brewster angle microscopy showing stable domain structures. This equilibration is particularly critical during the LE-LC phase transition, where domain growth and coalescence processes require timescales on the order of minutes. This results in more accurate and reproducible isotherms, reflecting the true phase transitions and properties of the monolayer. Rapid compression rates artificially smooth the isotherm features by preventing the monolayer from achieving local equilibrium, thereby masking important phase transitions and potentially leading to premature collapse through non-uniform stress distribution. For both tensiometers and Langmuir troughs, the method of depositing the DPPC molecules at the interface can have an impact on the isotherm. The initial monolayer structure is highly sensitive to multiple experimental parameters: the choice of spreading solvent (which affects molecule-substrate interactions), the volume and distribution of deposited droplets (which influence local concentration gradients), and the solvent evaporation period (which determines molecular reorganization during film formation).
For instance, insufficient waiting time for the solvent to dry before beginning the compression can cause the initial lift-off pressure to vary between experimentalists.
67c477f081d2151a02bafb24 | 3 | Maintaining precise temperature control is crucial for obtaining accurate isotherms. As temperature increases, deviations in the isotherm become more pronounced, primarily because the heightened molecular motion within the DPPC film complicates the replication of isotherms. This is evident in Figure at temperatures of 310 and 313 K. The general trend indicates that lift-off surface pressures shift towards higher molecular areas, while phase coexistence regions shift towards higher surface pressures. The collapse surface pressure of the isotherm also shows more variation among the various sources as the temperature increases. To maximize isotherm reproducibility and accuracy, we performed surface pressure-area measurements under carefully controlled conditions, including precise temperature regulation (±0.5 °C), controlled compression rates (10 mm/min), and standardized deposition protocols.
67c477f081d2151a02bafb24 | 4 | Molecular dynamics (MD) simulations can provide detailed insights into monolayer phase behavior and mechanical properties on molecular scales, which are often inaccessible to experiments. However, most existing force fields struggle to reproduce DPPC pressure-area isotherms even qualitatively. This limitation has been attributed to the inability of water force fields to accurately capture the air-water surface tension. All-atom water models, such as SPC, SPC/E, TIP3P, and TIP4P, as well as coarse-grained models like MARTINI, significantly underestimate the air-water surface tension. Among atomistic models, OPC4 achieves accurate air-water surface tension, while TIP4P/2005 has shown the best performance among the other models. Meanwhile, the lipid force fields (Berger, CHARMM, Slipids, LJ-PME, etc.) also have some effect on the P-A isotherm results. Figure demonstrates that the results using TIPnP water models in conjunction with various lipid force fields perform poorly; TIP4P/2005 with Berger lipids, the CHARMM-specific TIP3P (TIPS3P) water model with LJ-PME, as well as TIP3P with Slipids, yield unphysically large negative values of surface pressure. The OPC4 model provides the best performance among atomistic models, showing near-quantitative agreement with experimental results. All-atom simulations are computationally expensive and generally unsuitable for studying lipid monolayer phases, which exhibit 2D phase behavior and large-scale domain structures. The coarse-grained (CG) force fields, however, have also struggled to accurately predict DPPC P-A isotherms, including the MARTINI force field, which underestimates the water surface tension by 60%. Other CG water models such as CSJ and BMW, used with the MARTINI lipid force field, have also failed to produce correct P-A isotherms. The CSJ model, however, reproduced the water surface tension accurately, while BMW substantially overestimated it. Another force field developed by Klein's group, the SDK force field, provided isotherms at 323 K at larger values of $A_L$ in good agreement with experimental data (Figure ). While it is apparent that efficient computational approaches for studying properties at the air-water interface are lacking, it is also evident that the accuracy of experimental isotherms is compromised by technical limitations. This means that a force field can only be validated within a certain range of the experimental values shown in Figure . One of the most commonly used and computationally efficient coarse-grained methods is the dissipative particle dynamics (DPD) approach. However, standard DPD cannot simulate vapor-liquid coexistence and, consequently, cannot be used for surfactant systems at the air-water interface. To overcome this limitation, many-body dissipative particle dynamics (MDPD) was suggested. However, MDPD remains relatively unpopular due to its complex many-body framework, though recent attempts to apply it to DPPC monolayers have shown very promising results. We recently developed a novel DPD gas model to use with standard DPD, in which the gas phase is represented by fictitious beads that interact with other beads through an exponential potential. This approach, which we refer to as WSN (Wang-Santo-Neimark), successfully simulated the adsorption of CTAB (cetyltrimethylammonium bromide) surfactants at the air-water interface, demonstrating excellent agreement with experimental isotherms.
In the current work, we develop a DPD approach to simulate phospholipid monolayers and bilayers across a range of temperatures, consistently reproducing their interfacial and mechanical properties. We develop a new DPD model for DPPC with systematic parameterization, extend our WSN model to DPPC lipids, and devise a temperature scaling methodology that allows DPD systems to be simulated at various temperatures with parameters that are transferable across temperature changes. Importantly, this results in a coarse-grained WSN water model that accurately reproduces the air-water surface tension over a range of temperatures. Using our approach, we simulate DPPC monolayers and bilayers, successfully replicating temperature-dependent interfacial phase behavior and mechanical properties in near-quantitative agreement with our experimental results as well as the literature.
67c477f081d2151a02bafb24 | 5 | The paper is organized as follows: in Section 2, we describe the DPD methodology, and the experimental set up used in this work. Section 3 presents and discusses the results of parameterization, temperature scaling methodology, DPPC monolayer phase behavior and pressure-area isotherms, and characteristics of DPPC bilayer. Conclusions are provided in Section 4. |
67c477f081d2151a02bafb24 | 6 | The DPD formalism. DPD was formulated by Hoogerbrugge and Koelman as a coarse-grained molecular dynamics approach to simulate hydrodynamic phenomena. Shortly after, Espanol and Warren, and Groot and Warren, reformulated it, providing it with a robust statistical-mechanical basis. In DPD, atomic groups are lumped into coarse-grained beads that move according to an MD scheme, while each CG bead in the system is acted upon by the total force exerted by its neighboring beads, which is the sum of the pairwise conservative, drag (dissipative), and random contributions, $\mathbf{F}_i = \sum_{j \neq i} \left( \mathbf{F}^{C}_{ij} + \mathbf{F}^{D}_{ij} + \mathbf{F}^{R}_{ij} \right)$. The drag force represents the viscous forces in the fluid and is determined by the friction coefficient $\gamma$ and the relative velocities between the particles, $\mathbf{v}_{ij} = \mathbf{v}_i - \mathbf{v}_j$, while the random force represents thermal motion and is modelled with the random noise function $\theta_{ij}$, which is Gaussian distributed; $\sigma^2 = 2\gamma k_B T$ and $w^R = \sqrt{w^D} = 1 - r_{ij}/R_c$. The electrostatic interactions $F^{E}_{ij}(r_{ij})$ between charged beads are incorporated with the smeared-charge approach developed by Melchor et al. The bonded interactions are represented by harmonic forces: $\mathbf{F}^{B}_{ij} = -K^{B}_{ij}\,(r_{ij} - r^{0}_{ij})\,\hat{\mathbf{r}}_{ij}$ restrains bond lengths $r_{ij}$ with the bond force constant $K^{B}_{ij}$ for the bead pair $i$ and $j$. Similarly, $F^{A}(\theta_{ijk}) = -K^{A}(\theta_{ijk} - \theta^{0}_{ijk})$ represents angular forces that keep bond angles at the equilibrium value $\theta^{0}_{ijk}$, with the angle force constant $K^{A}$. In DPD, simulations are run in reduced units; energy and temperature are measured in units of $k_B T = 1$, mass in units of the DPD water particle, and length in units of $R_c = 1$, which is the size of a water particle (see below).
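To make the force definitions above concrete, the following is a minimal numerical sketch (not DL_MESO source code) of the conservative, dissipative, and random contributions for a single bead pair in reduced units; the parameter values are illustrative.

```python
import numpy as np

def dpd_pair_force(r_vec, v_rel, a_ij=25.0, gamma=4.5, kBT=1.0, Rc=1.0, dt=0.01,
                   rng=np.random.default_rng(0)):
    """Total DPD force on bead i from bead j: conservative + dissipative + random."""
    r = np.linalg.norm(r_vec)
    if r >= Rc:
        return np.zeros(3)
    e = r_vec / r                                     # unit vector from j to i
    w = 1.0 - r / Rc                                  # w_R(r); w_D = w_R**2
    sigma = np.sqrt(2.0 * gamma * kBT)                # fluctuation-dissipation relation
    F_C = a_ij * w * e                                # soft linear conservative force
    F_D = -gamma * w**2 * np.dot(e, v_rel) * e        # drag along the line of centers
    F_R = sigma * w * rng.normal() / np.sqrt(dt) * e  # Gaussian random (thermal) force
    return F_C + F_D + F_R

print(dpd_pair_force(np.array([0.5, 0.0, 0.0]), np.array([0.1, 0.0, 0.0])))
```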
67c477f081d2151a02bafb24 | 7 | The coarse-grained models for DPPC and water. The CG model of the DPPC molecule is depicted in Figure . We adopt a general 3-to-1 mapping for the solvent water and the lipid tails, whereas the head group is modelled as 4 beads. The N bead represents the choline atoms N+(CH3)3-CH2, the phosphate group CH2-PO4 is represented by the P bead, and the glycerol backbone, CH2-CH(CH2-O-C=O)-O-C=O, including the carbonyl groups, is represented by two G beads. The carbonyl carbons are included in the G beads since they are not so hydrophobic, due to the presence of the carbonyl oxygen. The remaining 15 alkyl groups in the tail, (CH2)14-CH3, are mapped into five beads: four C beads and one E bead at the end. This mapping results in a 14-bead representation of the DPPC molecule. The solvent water bead W represents 3 water molecules. The various interactions between the coarse-grained (CG) beads are parameterized using a systematic approach.
67c477f081d2151a02bafb24 | 8 | Modeling the gas phase. The gas phase is modelled using the recently developed Wang-Santo-Neimark (WSN) model, which provides an alternative approach to remedy the inability of standard DPD to simulate vapor-liquid coexistence. In this model, the gas phase is represented by fictitious beads B, which interact with beads at the interface via an exponential conservative force, $F^{\mathrm{exp}}_{ij}(r_{ij})$.
67c477f081d2151a02bafb24 | 9 | Here, $a_{B\alpha}$ and $b_{B\alpha}$ are parameters specific to the beads. The exponential force in Eq. ( ) increases steeply when particles come into contact, effectively creating a 'hard core' interaction (Figure ). This reproduces interfacial properties, particularly between water and vapor, more effectively than the linearly increasing standard DPD conservative force (Figure ). The B particles themselves, however, interact via the standard linear conservative force.
67c477f081d2151a02bafb24 | 10 | To parametrize the air-water interaction parameters, systems are constructed with a water/air slab in the middle and air/water above and below. These systems are initially constructed as a rectangular box with dimensions $30 \times 30 \times 60\,R_c^3$ ($19.4 \times 19.4 \times 38.8$ nm$^3$) with interfaces in the XY plane and a total of 162000 particles, consisting of equal numbers of W and B beads. After a short NPT equilibration at the ambient pressure of 23.7, NVT simulations are performed for 4 million steps, with quantities averaged over the last 3 million steps. The DPD timestep used is $0.01\,\tau$, where $\tau$ is the DPD time unit. Simulations are performed at different DPD temperatures with three independent runs at each temperature. Different initial configurations for the NVT runs are created by running NPT equilibrations of different lengths, varying from 80000 to 120000 steps.
67c477f081d2151a02bafb24 | 11 | We set up simulation systems consisting of two opposing monolayers, separated by a water slab on the hydrophilic side and the gas phase on the hydrophobic side, as shown in Figure of the Supporting Information (SI). This setup leads to periodically symmetric interfaces when periodic boundary conditions are applied. Our main simulation systems, denoted S400, are composed of smaller monolayers with 400 lipids, while we also simulate additional, larger systems that feature monolayers of 3200 lipids, denoted L3200. Several S400 systems are constructed at different lateral sizes ($x$ and $y$), corresponding to areas per lipid $A_L$ ranging from 0.51 nm$^2$ to 1.54 nm$^2$, keeping the number of particles in the system constant; the number of water beads $N_W$ and gas beads $N_G$ are 52720 each, while the total number of particles is $N_{tot} = 116640$. This means that, as the lateral size is increased to higher $A_L$, the normal size $z$ of the system is reduced accordingly, so that the system volume remains constant. Thus, the S400 normal size varies from $z = 26.23\,R_c$ (17.0 nm) at $A_L = 1.54$ nm$^2$ to $z = 78.4\,R_c$ (50.7 nm) at $A_L = 0.51$ nm$^2$, while the lateral size varies from $x, y = 38.38\,R_c$ (24 nm) to $x, y = 22.1\,R_c$ (14.3 nm). In the L3200 systems, the total number of particles is $N_{tot} = 574200$ with $N_W = N_G = 242300$, and the system dimensions vary from $56.4 \times 56.4 \times 16.1$ nm$^3$ to $41.8 \times 41.8 \times 29.2$ nm$^3$ as $A_L$ varies from 0.99 nm$^2$ to 0.49 nm$^2$.
67c477f081d2151a02bafb24 | 12 | The simulations are performed at different DPD temperatures $T_{DPD}$, using the DL_MESO 2.7 package, which we modified to include the exponential conservative interactions. The initial configurations of the systems are created using the packmol program. First, the systems are equilibrated under NPT conditions at a pressure equal to 23.7, the standard DPD pressure that corresponds to atmospheric pressure and is used for parametrization. This is followed by an NVT simulation of 4 million steps. The surface tension of the monolayer film is a function of the area per lipid and is calculated from the simulations for planar films using
$$\gamma_M = \frac{L_z}{2}\left( P_N - P_L \right)$$
67c477f081d2151a02bafb24 | 13 | In Eq. ( ), $P_N$ is the normal pressure, i.e., $P_{zz}$, the normal component of the pressure tensor, and $P_L$ is the lateral pressure, i.e., the average of the pressure tensor components in the x and y directions. $L_z$ is the length of the system in the normal direction, and the factor 2 accounts for the existence of two monolayers in the simulations, which are under similar conditions. If $\gamma_0$ is the surface tension of water at the air-water interface without lipids, then the monolayer surface pressure is
$$\Pi = \gamma_0 - \gamma_M$$
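A small numerical sketch of this calculation in reduced units is given below; the pressure-tensor values, box length, and reference surface tension are illustrative placeholders, and the expressions follow the equations as reconstructed above.

```python
def monolayer_surface_pressure(Pzz, Pxx, Pyy, Lz, gamma0):
    """Surface pressure of a box containing two symmetric monolayers (reduced units)."""
    P_N = Pzz                          # normal pressure
    P_L = 0.5 * (Pxx + Pyy)            # lateral pressure
    gamma_M = 0.5 * Lz * (P_N - P_L)   # factor 1/2: two monolayers in the box
    return gamma0 - gamma_M            # Pi = gamma0 - gamma_M

# illustrative reduced-unit values only
print(monolayer_surface_pressure(Pzz=23.7, Pxx=23.2, Pyy=23.3, Lz=40.0, gamma0=7.0))
```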
67c477f081d2151a02bafb24 | 14 | Bilayer initial configurations at various areas per lipid (50-70 Å$^2$) are created using the packmol program. The membrane consists of 400 lipids in each leaflet, for a total of 800 lipids. The simulations are run at different temperatures: 293 K, 304 K, and 324 K. Initially, the system is equilibrated at a constant pressure of 23.7 with a 100000-step simulation at $\Delta t = 0.01\,\tau$, which is followed by an NVT simulation of 1 million steps. Analysis is performed over the last 500000 steps. The bilayers are characterized by the variation of surface tension as a function of area per lipid at different temperatures.
67c477f081d2151a02bafb24 | 15 | Surface Pressure-Area Isotherm Collection. Surface pressure-area isotherms were collected with a Biolin Nima Langmuir medium trough (max area: 243 cm$^2$) at the air-subphase interface. The aqueous subphase consisted of ultrapure water (18.2 MΩ·cm resistivity, TOC < 5 ppb) obtained from a Millipore purification system, with pH and surface tension measured before each experiment to verify subphase quality; the subphase temperature was precisely controlled using a water circulating bath with an accuracy of ±0.5 °C. A precisely measured aliquot (30 µL) of DPPC solution (1 mg/mL in HPLC-grade chloroform), doped with 0.5 wt% TR-DHPE as a fluorescent probe, was carefully deposited onto the aqueous subphase using a gas-tight Hamilton microsyringe to ensure reproducible spreading conditions. Stock solutions were prepared in amber glass vials, purged with nitrogen to prevent lipid oxidation, and stored at -20 °C; working solutions were equilibrated to room temperature for 30 minutes prior to monolayer deposition to prevent temperature-induced spreading artifacts. The compression rate was optimized at 10 mm/min based on preliminary studies showing this speed allows sufficient time for molecular reorganization while minimizing film loss through subphase dissolution and maintaining experimental feasibility; faster rates led to artifactual smoothing of phase transitions while slower rates increased susceptibility to environmental perturbations. Each isotherm measurement was performed in triplicate using fresh monolayers and subphases, with statistical analysis of key transition points (lift-off area, collapse pressure, phase transition plateau) showing coefficients of variation < 5% across independent measurements.
67c477f081d2151a02bafb24 | 16 | Lipid monolayers were prepared with an LB trough (Biolin Nima) using the Langmuir-Schaefer method. Muscovite mica sheets were freshly cleaved immediately before use using adhesive tape to expose an atomically flat surface, then immersed into the subphase at a controlled rate of 1 mm/s while maintaining a constant dipping angle of 45° to ensure uniform substrate wetting.
67c477f081d2151a02bafb24 | 17 | We first parametrize the gas-water interface. Water is taken as the reference compound, as suggested by Anderson et al. The size of the water bead is designated as the DPD unit of length, $R_c$ (that is, $R_{WW} = 1$), and is calculated based on the assumed number density of water beads, $\rho = 3$ beads per $R_c^3$. This gives $R_c = (\rho v N_m)^{1/3} = 0.6464$ nm, where $v = 0.030$ nm$^3$ is the volume of a water molecule and $N_m = 3$ is the mapping number. The DPD repulsion parameter between water beads, $a_{WW} = 25.0\,k_BT$, was determined by Groot and Warren by matching the compressibility of water. The mass of the water bead, $m_W = 3\,m_{H_2O} = 54.05$ Da, is taken to be the DPD unit of mass (that is, $m_W = 1$ in DPD units). The gas beads are taken to be the same size as the water beads ($R_{BB} = 1$) but with a much lower relative mass, calculated based on the density of air, 1.3 kg/m$^3$, which is 0.01 in DPD units. The B beads themselves interact via the linear conservative force with $a_{BB} = 25\,k_BT$. The interaction parameters $a_{BW}$, $b_{BW}$ and $R_{BW}$ are obtained by matching the air-water surface tension and interfacial density profile, as described in the original WSN work. Here, to describe the behavior of DPD systems across different temperatures, we introduce a temperature scaling approach, based on matching the surface tension of water, that provides a correlation between the simulation temperature and the corresponding real temperature without the need for reparameterization at each temperature. In coarse-grained models, the reduction in the number of particles and the use of much smoother potentials cause faster particle motion and less frequent collisions compared to the corresponding atomistic system, which leads to improper scaling of the temperature. For instance, CG water freezes at a much higher temperature (280-300 K) than the freezing point of water when the MARTINI force field is used. DPD utilizes much softer forces, and to simulate a system at different temperatures, reparameterization at each temperature is often employed. In DPD, simulations are performed at a reduced temperature, assuming $k_B T_{ref} = 1$, where $T_{ref}$ is the temperature at which the system is parametrized. If a DPD simulation is performed with $T_{DPD} \neq 1$, then the linear DPD-to-real temperature correspondence is $T_{real} = T_{DPD}\,T_{ref}$ (7).
67c477f081d2151a02bafb24 | 18 | In phospholipid bilayer/monolayer systems, the solid-like gel ($L_\beta$) or LC states have essentially frozen lipid tails oriented along the normal direction, and the kinetic energy of the beads is low. Therefore, simulating at a reduced temperature of $T_{DPD} = 1$ may be unsuitable. Kranenburg et al. simulated the gel phase using DPD simulations, setting $T_{DPD}$ as low as 0.3, and obtained an $L_\alpha$-$L_\beta$ transition in the range $T_{DPD} = 0.3$-$0.65$, depending on the parameter set. However, Equation (7) is quite simplistic and does not account for the effects of coarse-graining or the use of smooth and soft interaction potentials. It scales too fast and cannot be used as a proper temperature scaling for CG systems. For example, with $T_{ref} = 300$ K, $T_{DPD} = 0.6$ indicates $T_{real} = 180$ K, which is far from a relevant temperature range for colloidal systems. Conversely, if one targets a given real temperature at a reduced $T_{DPD}$, then $T_{ref}$ becomes unrealistically high to parametrize at; $T_{real} = 293$ K implies $T_{ref} = T_{real}/T_{DPD} = 488$ K, at which practically no parametrization can be performed. Thus, to reproduce the temperature-dependent phase behavior of phospholipid films, we adopt the following strategy:
67c477f081d2151a02bafb24 | 19 | The first step involves choosing the parametrization temperature $T^{0}_{real}$ and assigning a corresponding DPD temperature $T^{0}_{DPD}$. The initial set ($T^{0}_{real}$, $T^{0}_{DPD}$) is chosen intuitively, taking into account the system characteristics. For instance, $T^{0}_{DPD}$ is chosen to be < 1 when a solid-like or gel-like phase is present. We evaluated several parameter sets before choosing an appropriate one. Our chosen temperature set for parametrization is ($T^{0}_{real} = 293$ K, $T^{0}_{DPD} = 0.65$), and the B-W interaction parameters are determined by matching the experimental air-water surface tension at 293 K ($\gamma_0(293\,\mathrm{K}) = 72.8$ mN/m) and interfacial density profiles obtained from ab initio and atomistic MD simulations (Figure ). The air-water surface tension obtained from the DPD simulations using Eq. ( ) is converted to real units as $\gamma_{real} = \gamma_{DPD}\,k_B T_{real}/(T_{DPD}\,R_c^{2})$.
67c477f081d2151a02bafb24 | 20 | For our chosen set ($T^{0}_{real}$, $T^{0}_{DPD}$), $T^{0}_{ref} = 450.8$ K, which is unphysically high; however, the systems are parametrized at $T^{0}_{real}$, assuming the DPD temperature $T^{0}_{DPD}$, and therefore $T^{0}_{ref}$ here is only a scaling parameter without much physical significance. Note also that the conversion above can be used without any reference to $T_{ref}$. The DPD gas-water parameters determined at 293 K are provided in Table . In Figure , the interfacial density profile at 293 K is shown and compared with results from ab initio MD and classical MD simulations with the TIP4P/2005 and SPC/E water models. The DPD density profile at 293 K is in excellent agreement with both ab initio MD and the TIP4P/2005 water model, while the SPC/E profile is broader. The DPD profile is fitted with the function given in Eq. ( ).
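The following sketch illustrates the bookkeeping implied by this scaling, under the assumption that the surface tension conversion uses the DPD energy unit $k_B T_{real}/T_{DPD}$ and length unit $R_c$ as written above; the DPD surface tension value used in the example is a placeholder.

```python
kB = 1.380649e-23      # J/K
Rc = 0.6464e-9         # m, DPD length unit from the water-bead mapping

def T_ref(T_real, T_DPD):
    """Scaling parameter T_ref = T_real / T_DPD (Eq. 7 rearranged)."""
    return T_real / T_DPD

def gamma_real_mN_per_m(gamma_DPD, T_real, T_DPD):
    """Assumed conversion: gamma_real = gamma_DPD * kB * T_real / (T_DPD * Rc^2)."""
    return gamma_DPD * kB * T_real / (T_DPD * Rc**2) * 1e3   # mN/m

print(T_ref(293.0, 0.65))                       # ~450.8 K, as quoted in the text
print(gamma_real_mN_per_m(4.9, 293.0, 0.65))    # gamma_DPD = 4.9 is a placeholder value
```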
67c477f081d2151a02bafb24 | 21 | Thus, once the gas-water parameters are parametrized at a chosen temperature, they are transferable and can be used to simulate DPD systems at other temperatures using the scaling described above and depicted in Figure . The interfacial density profiles at additional temperatures of 304 K, 313 K, and 324 K are also shown in Figure . The density profiles at higher temperatures are slightly wider; the thickness parameters obtained by fitting with Eq. ( ) are 1.52 Å, 1.52 Å, and 1.68 Å at 304 K, 313 K, and 324 K, respectively, which are close to the values obtained in ab initio MD calculations. The air-water surface tension at different temperatures is reproduced in excellent agreement with experimental values in all sets (Figure ). This means that our water model accurately reproduces the surface tension of water over a range of temperatures. This temperature scaling methodology can be used, in general, to simulate the temperature-dependent behavior of other DPD systems.
Bead sizes. To parametrize the DPPC bead types, we adopt the approach recently developed by Anderson et al., with some modifications. The bead diameter $R_{\alpha\alpha}$ of each bead type is calculated from its partial molar volume $V_{\alpha}$, obtained using the method of Durchschlag and Zipper (DZ), with $R_{\alpha\alpha}^3 \propto V_{\alpha}$. The DZ method determines the partial molar volume of a group of atoms by summing the increments from each atom, a co-volume, and contributions due to charge and ring formation. $R_{\alpha\alpha}$ is also the intra-bead interaction cutoff for the DPD forces. The inter-bead cutoffs $R_{\alpha\beta}$ are calculated by adopting the mixing rule $R_{\alpha\beta} = \tfrac{1}{2}(R_{\alpha} + R_{\beta})$. The approach of Anderson et al. allows variable bead sizes, improving on the equal-bead-volume approach of Groot and Warren. For the DPPC head group, the N and P beads are of different sizes. In that scheme, the glycerol backbone would have to be represented by two beads of different sizes, as no division can result in two beads of identical type. Similarly, due to the presence of the CH3 group, the end bead E would have to be of a different size from the C bead. Here, we deviate from the scheme of Anderson et al. and represent the glycerol backbone as two identical beads and the lipid tails as five beads of equal size. This approach is adopted for two key reasons: (1) For amphiphilic molecules such as DPPC, which form bilayer and monolayer phases, lipid packing is crucial. The area per lipid, denoted as $a_L$, is a critical parameter for phospholipid films, influencing their mechanical and phase behaviors. Therefore, it is essential that both the glycerol backbone and the tails have the correct volume and packing. Especially at low $a_L$ values, these beads are tightly packed, unlike the N and P beads, which enjoy more space because the lipid is double-tailed. (2) Different bead sizes may result in local density variations, which are unsuitable for describing the 2D monolayer/bilayer phases formed by DPPC, including the gel/liquid crystalline and liquid expanded/liquid condensed phases, which have uniform density. The partial molar volume of the G bead is obtained by halving the volume of glyceryl diacetate after subtracting the volume of an OH group and two CH3 groups (see Supporting Information). The volume of the C/E beads is calculated as one fifth of the volume of pentadecane excluding a hydrogen atom. Note that the C and E beads are identical; they are labeled differently only for convenience in analyzing the data. The calculated bead sizes relative to the water bead (in units of $R_W$) are provided in Table .
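The sketch below illustrates this procedure: relative bead diameters from partial molar volumes via $R_{\alpha\alpha} \propto V_{\alpha}^{1/3}$, followed by the arithmetic mixing rule for the inter-bead cutoffs. The volume values are hypothetical placeholders, not the values tabulated in this work; only the workflow is shown.

```python
# Sketch: bead diameters from DZ partial molar volumes and the mixing rule
# R_ab = (R_aa + R_bb) / 2. The volumes below are hypothetical placeholders;
# only the procedure is illustrated.
from itertools import combinations_with_replacement

partial_molar_volume = {        # cm^3/mol, placeholder numbers
    "W": 30.0,                  # water bead (reference)
    "N": 70.0, "P": 54.0,       # head-group beads
    "G": 45.0,                  # glycerol-backbone bead
    "C": 48.0, "E": 48.0,       # tail beads (identical by construction)
}

V_ref = partial_molar_volume["W"]
# Intra-bead cutoff in units of the water bead diameter: R_aa/R_W = (V_a/V_W)^(1/3)
R_intra = {b: (v / V_ref) ** (1.0 / 3.0) for b, v in partial_molar_volume.items()}

# Inter-bead cutoffs from the arithmetic mixing rule
R_inter = {
    (a, b): 0.5 * (R_intra[a] + R_intra[b])
    for a, b in combinations_with_replacement(sorted(R_intra), 2)
}

for pair, r in sorted(R_inter.items()):
    print(f"R_{pair[0]}{pair[1]} = {r:.3f} R_W")
```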
Our parametrization is based on matching properties of compounds such as pentadecane, which have the same or a similar number of carbons as the tails of DPPC. Furthermore, although n-alkanes such as pentadecane are known to condense into ordered bulk phases at low temperatures, the thin, nanometer-sized monolayer film is at different conditions compared to the bulk phase. We therefore first obtain an estimate of the repulsion parameters by matching bulk properties of relevant alkanes, such as density and the water–octanol partition coefficient ($\log P_{OW}$), and then refine the parameters further to reproduce the monolayer phase behavior quantitatively at 293 K. The C/E bead repulsion parameter $a_{CC} = a_{CE} = a_{EE}$ is initially estimated by matching the pentadecane density at 293 K with $T_{\mathrm{DPD}} = 0.65$. Then the parameter $a_{WC} = a_{WE}$ is obtained by matching the water–octanol $\log P_{OW}$. The parameters involving the hydrophilic head groups are estimated initially by a pragmatic mean-field approach following Anderson et al., $a_{\alpha\beta} = 25/R_{\alpha\beta}$. All parameters are further refined by matching monolayer $\pi$–$A$ isotherms at 293 K. The obtained DPD repulsion parameters are given in Table , and the bond and angle parameters are provided in Table . Ideally, one should calculate bond lengths and angles by matching distributions from atomistic simulations. However, such bond distributions between the atom groups corresponding to the CG beads may depend on the temperature as well.
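A minimal sketch of the mean-field starting guess is given below. The cutoff values passed in are hypothetical placeholders; in the text this estimate is applied only to pairs involving the hydrophilic head groups, while the C/E- and W-containing parameters come from the density and $\log P_{OW}$ fits, and all values are subsequently refined against the 293 K $\pi$–$A$ isotherms.

```python
# Sketch: initial mean-field estimate a_ab = 25 / R_ab for the repulsion
# parameters of head-group pairs, before refinement against Pi-A isotherms.
def initial_repulsion(R_inter: dict) -> dict:
    """Return the mean-field starting guess for each bead pair."""
    return {pair: 25.0 / r for pair, r in R_inter.items()}

# Hypothetical reduced cutoffs for two head-group/water pairs (illustration only)
a_guess = initial_repulsion({("N", "W"): 1.15, ("P", "W"): 1.05})
print(a_guess)
```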
The final configurations of the 4000 $\tau$ long simulations of the S400 systems are provided in Figure . In Figure , top views of the configurations at different $a_L$ are shown, colored according to the tail order parameter $\langle S \rangle = \tfrac{1}{2}(S_{\mathrm{tail1}} + S_{\mathrm{tail2}})$, averaged over the two tails. Snapshots rendered in this way clearly show the different monolayer phases, containing lipids with definite tail orientations. At high compression ($a_L = 51$ Å²), the whole monolayer is in the LC phase, with all lipids oriented along the monolayer normal ($\langle S \rangle \approx 1$). Domains of lower lipid order, representing the LE phase, appear and grow as the area per lipid increases, leading to LC-LE coexistence at intermediate values of $a_L$ and finally to a complete LE phase. The LC phase is dominated by lipids with $S \approx 1$ (red), while the LE phase is dominated by lipids that orient parallel to the interface ($S \approx -0.5$, blue). In the monolayers with coexisting LE and LC phases, the LC and LE domains are separated by a region of $S \approx 0.25$ (green), which corresponds to a tail orientation of $\theta = 45°$.
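The limiting values quoted above ($S = 1$ for a tail along the normal, $-0.5$ for a tail in the interface plane, 0.25 at 45°) are consistent with the second-Legendre form $S = \tfrac{1}{2}(3\cos^2\theta - 1)$. The sketch below computes the per-tail and per-lipid order parameter on this basis; using the vector from the first C bead to the last E bead as the tail vector is an assumption made for illustration.

```python
# Sketch: per-tail order parameter S = (3*cos^2(theta) - 1)/2, where theta is the
# angle between a tail vector and the monolayer normal (z). The first-C-bead to
# last-E-bead vector is assumed as the tail vector for illustration.
import numpy as np

def tail_order_parameter(r_c1: np.ndarray, r_e: np.ndarray,
                         normal=np.array([0.0, 0.0, 1.0])) -> float:
    """S = 1 for a tail along the normal, -0.5 for a tail in the interface plane."""
    v = r_e - r_c1
    cos_theta = np.dot(v, normal) / np.linalg.norm(v)
    return 0.5 * (3.0 * cos_theta**2 - 1.0)

def mean_lipid_order(tail1, tail2):
    """<S> averaged over the two tails; each tail is given as (r_c1, r_e)."""
    return 0.5 * (tail_order_parameter(*tail1) + tail_order_parameter(*tail2))

# A tail tilted 45 degrees from the normal gives S ~ 0.25
r0, r45 = np.zeros(3), np.array([1.0, 0.0, 1.0])
print(tail_order_parameter(r0, r45))
```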
Further distinction is made in terms of the monolayer hydrophobic thickness of each phase; the LC phase, characterized by the normal orientation of extended lipid tails, is the thick phase, while the LE phase, characterized by randomly oriented and somewhat coiled tails, is the thin phase. To characterize the local monolayer thickness, we define $t_{CE}$, the normal z-distance between the centers of the first C bead (C1) and the last E bead of the tails. We assume that the hydrophobic phase of the monolayer is bounded by the z-coordinates of the C1 and E beads of each tail. These coordinates are obtained as functions of the $x$ and $y$ coordinates and are then linearly interpolated on the 2D meshgrid $(x, y)$,

$z_{C1}(x_{C1}, y_{C1}) \rightarrow z_{C1}(x, y); \quad z_{E}(x_{E}, y_{E}) \rightarrow z_{E}(x, y)$ (11)
Here, the monolayer is assumed to be oriented such that its hydrophobic tails point in the positive $z$ direction. If the tails are oriented upright, as in the LC phase, then $t_{CE} = t_{CE}^{\mathrm{max}}$, equal to the maximum extended length of the C-E chain, which is the length of the 4 bonds connecting the five tail beads. That is, $t_{CE}^{\mathrm{max}} = 4 \times 0.516\,R_W = 1.34$ nm. If the tails are oriented at an angle $\theta$ from the normal, the local hydrophobic thickness would be

$t_{CE} = t_{CE}^{\mathrm{max}} \cos\theta$ (13)

For $\theta = 45°$, $t_{CE}(\pi/4) = 0.94$ nm. Since $t_{CE}$ is the center-to-center bead distance, the actual hydrophobic thickness will be larger by the radii of the first and the end beads. Thus, for a normal tail orientation, the monolayer hydrophobic thickness is $t_h = t_{CE}^{\mathrm{max}} + 0.9824\,R_W = 1.97$ nm. In Figure , $t_{CE}$ is plotted for the monolayers at different $a_L$. We distinguish the LE and LC phases by a $t_{CE}$ value of 0.94 nm, based on the information from the $S$ plots and assuming a tail vector orientation of 45° at the LE-LC boundary. The plots clearly show the LC phase with a thickness over 0.94 nm, and LE domains of lower thickness that grow into the complete LE phase at $a_L > 104$ Å² per lipid.
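A compact sketch of this analysis is given below: the C1 and E bead heights are interpolated on a common grid, as in Eq. (11), and the resulting thickness field is thresholded at 0.94 nm. Using scipy's `griddata` for the linear interpolation is an implementation choice assumed here, not necessarily the one used in this work.

```python
# Sketch: local hydrophobic thickness t_CE(x, y) by linear interpolation of the
# C1 and E bead z-coordinates onto a common 2D grid (cf. Eq. (11)), followed by
# an LE/LC label using the 0.94 nm threshold. Coordinates are assumed in nm.
import numpy as np
from scipy.interpolate import griddata

def thickness_map(xy_c1, z_c1, xy_e, z_e, box_xy, n=128):
    """Return the (n, n) thickness field t_CE and a boolean LC mask.

    xy_c1, xy_e : (N, 2) arrays of in-plane positions of the C1 and E beads
    z_c1, z_e   : (N,) arrays of the corresponding z-coordinates
    box_xy      : (Lx, Ly) lateral box dimensions
    """
    gx, gy = np.meshgrid(np.linspace(0.0, box_xy[0], n),
                         np.linspace(0.0, box_xy[1], n))
    zc = griddata(xy_c1, z_c1, (gx, gy), method="linear")
    ze = griddata(xy_e, z_e, (gx, gy), method="linear")
    t_ce = np.abs(ze - zc)            # tails assumed to point along +z
    # NaN cells (outside the interpolation hull) should be treated separately
    return t_ce, t_ce > 0.94          # True -> LC (thick), False -> LE (thin)
```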
Figures  show the LE-LC coexistence in the large L3200 monolayer systems. The morphological features remain the same as in the smaller systems, in terms of both the tail order and the hydrophobic thickness parameters. Collapse of the monolayer is observed (Figure ) in the large system at $a_L = 49$ Å², with protrusions of bilayer reservoirs, which was not observed in the S400 system at the same $a_L$. This indicates that monolayers in the simulations can remain in metastable states within the smaller time and length scales of the simulations.
AFM experiments reveal that increasing temperature induces melting of the LE and LC domains, decreasing their sizes (see Supporting Information for details, Figure ). In our simulations, both the S400 and L3200 systems are studied at three additional temperatures, $T_{\mathrm{room}}$ = 304 K, 313 K, and 324 K ($T_{\mathrm{DPD}}$ = 0.7, 0.75 and 0.8, respectively), although only a few large systems are simulated due to computational limitations. Figure  depicts the behavior of the LE and LC phases with temperature in the L3200 systems at $a_L = 64.6 \pm 0.13$ Å². The reported uncertainty in the area per lipid ($a_L$ ± 0.13 Å²) reflects the statistical variation observed across the monolayers at the four temperatures (293 K, 304 K, 313 K, and 324 K), providing a measure of thermal effects on molecular packing. Figure  provides an excellent demonstration of the monolayer behavior upon temperature change in terms of the tail order and thickness parameters. The large LC and LE domains melt between 293 K and 304 K, which is still below the transition temperature of DPPC, 314 K. It is interesting to note that as the temperature increases, both the blue regions, where lipids orient parallel to the interface, and the red regions, where lipids orient normal to the interface, diminish (Figure ). The blue region is the lowest lipid density phase, where a lipid occupies the most interfacial area, and the formation of holes or a gas phase can occur when the lipids are parallel to the interface. The temperature 293 K is also close to the suggested triple point of DPPC monolayers at 290 K, where the gas, LE and LC phases coexist. Thus, an increase in temperature not only decreases order in the LC phase, but also increases the order in the LE phase. This is also clearly evident in the snapshots at 313 K, which is close to the critical point, and at 324 K, which is well above the critical point. These snapshots show that the LE and LC phases have become mixed.
Figure  provides $P(S)$, the distributions of the order parameter $S$ at different areas per lipid in the S400 (solid lines) and L3200 (dashed lines) systems at different temperatures. $P(S)$ is calculated with a precision of $\Delta S = 0.01$ over 20 frames spanning the last 1 million steps (1000 $\tau$) in the S400 systems, and over 10 frames spanning the last 1 million steps in the L3200 systems. The order parameter values are obtained for each tail in a frame, totaling 1600 tails per frame in the S400 systems and 12,800 tails per frame in the L3200 systems, providing sufficient statistics. The distribution at 293 K is narrowly peaked at $S$ values of 0.91-0.95 at $a_L = 53.1$ Å², indicating a predominant LC phase. Increasing the temperature to 304 K leads to a broadening of the distribution, a peak at a lower $S$ value of 0.89, and a reduction in the peak height by ~30%. Further increase in temperature shows a slower reduction in the peak position ($S_{\mathrm{peak}}$ = 0.87 for both 313 K and 324 K) and peak height (by 36% and 41%, respectively, with respect to 293 K), as these temperatures are close to or beyond the critical point. At a low area per lipid, the lipids are compressed to the extent that they cannot adopt a slanted orientation relative to the interface normal. As a result, the tail order parameters cannot become much lower with increasing temperature. The $S$ distributions at 293 K exhibit distinctive peaks corresponding to orientations normal ($S \sim 0.9$, low $a_L$) or parallel ($S \sim -0.4$, high $a_L$) to the interface; at intermediate values of $a_L$ the distributions tend to be double peaked. As the temperature increases, the distribution peaks shift towards intermediate values of $S$, corresponding to orientations at intermediate angles. This is most evident at the $a_L$ values corresponding to LE-LC coexistence. For example, the black solid curve ($a_L \approx 72$ Å²) in Figure  has peaks at $S = -0.4$ and $S = 0.86$ at 293 K, which shift towards $-0.2$ and 0.76 at 304 K. At 313 K, it changes to a plateau-like distribution without a distinct peak, while at 324 K, it is most populated within $S$ values of $-0.2$ to 0.5.
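A minimal sketch of how such a normalized distribution can be built from the collected per-tail values, with the stated bin width $\Delta S = 0.01$, is given below; the binning range is an assumption based on the physical limits of $S$.

```python
# Sketch: normalized distribution P(S) with bin width dS = 0.01, built from the
# per-tail order parameters collected over the analysed frames. The bin range
# [-0.5, 1] follows from the physical limits of S.
import numpy as np

def order_parameter_distribution(s_values, ds=0.01):
    """Return bin centers and P(S), normalized so that sum(P) * ds = 1."""
    bins = np.arange(-0.5, 1.0 + ds, ds)
    hist, edges = np.histogram(s_values, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, hist
```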
Next, we analyze the LE/LC domains based on the hydrophobic thickness parameter $t_{CE}$. Figure  demonstrates the temperature dependence of the monolayer domains, similar to the images obtained from the AFM experiments (SI, Figure ). At 293 K, the LE domains are substantially large and embedded in the LC phase, with more or less regular boundaries, indicating the existence of a line tension at the LE-LC interface. Increasing the temperature results in increasingly irregular boundaries and smaller domains, indicating a reduction in line tension. At 313 K and 324 K, the LE and LC domains appear to be mixed, consistent with the behavior near and above the critical point. The distributions $P(t_{CE})$ of the hydrophobic thickness parameter at different areas per lipid and temperatures in the S400 (solid lines) and L3200 (dashed lines) systems are provided in Figure . At low $a_L$ (53 Å²), $t_{CE}$ of the dominant LC phase peaks at 1.3 nm $\approx t_{CE}^{\mathrm{max}}$, which indicates complete extension of the lipid tails along the z-direction. Note that in the distributions, $t_{CE}$ can take larger values because the instantaneous values can exceed $t_{CE}^{\mathrm{max}}$ due to stretching of the harmonic bonds between beads. With increasing $a_L$, the peak height decreases and a second peak, corresponding to the LE phase, develops in the range $t_{CE} = 0.4$-$0.5$ nm. For intermediate values of $a_L = 64$-$82$ Å², the distributions appear double peaked, indicating LE-LC coexistence. The complete LE phase at $a_L \geq 99$ Å² peaks at $t_{CE} \approx 0.45$ nm. As the temperature increases, the double-peaked distributions of the LE-LC coexistence become single peaked, with $t_{CE}$ values between 0.5 nm and 1 nm. The LC peak heights also decrease with temperature. Figure  presents the fraction of the LE phase, defined as $f_{LE} = A_{LE}/A_M$, where $A_{LE}$ is the total area of the LE phase and $A_M$ is the total area of the monolayer. While $f_{LE}$ appears to depend non-linearly on $a_L$, it increases with temperature more substantially in the LE-LC coexistence region.
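A simple way to evaluate $f_{LE}$ from the gridded thickness field is sketched below: cells below the 0.94 nm threshold are counted as LE area. Equating the cell-count ratio with the area ratio assumes a uniform grid, and the `thickness_map` helper from the earlier sketch is an illustrative construct, not the exact implementation used here.

```python
# Sketch: LE area fraction f_LE = A_LE / A_M estimated from a gridded thickness
# map (see the thickness_map sketch above) by counting cells below the 0.94 nm
# threshold; interpolation gaps (NaNs) are excluded from the count.
import numpy as np

def le_fraction(t_ce: np.ndarray, threshold: float = 0.94) -> float:
    """Fraction of valid grid cells classified as LE (thin) phase."""
    valid = np.isfinite(t_ce)
    return float(np.count_nonzero(t_ce[valid] < threshold) / np.count_nonzero(valid))
```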
Overall, these results are consistent with the observations of LC-LE-gas coexistence at low temperatures and the mixing of the LE and LC phases at the critical temperature. The growth of LC domains with decreasing temperature leads to close packing of lipids and increased density in the LC region. This induces a decrease in the domain boundaries and a low lipid density in the LE region, tending to create a gas phase. As the temperature increases, the LC domains melt, increasing the lipid density in the LE region and eventually leading to the mixing of the LC and LE phases at the critical point.