id | idx | paragraph |
---|---|---|
664c278821291e5d1dd5ecc1 | 2 | From the data fitness perspective, one critical assay assessment phenomenon, the activity cliff (AC), can significantly influence affinity prediction performance. An AC is defined as a pair of structurally similar compounds with significantly different bioactivities, representing a discontinuity in the activity landscape of affinity/bioactivity datasets and in structure-activity relationships. For ML-based data-driven approaches, which are typically trained on the "chemical similarity principle", AC molecules are recognized as abnormal samples and lead to inaccurate affinity/bioactivity predictions and failure to differentiate ACs from non-ACs. Meanwhile, the small structural changes between positive and negative ACs cause a high false positive rate that is particularly harmful for virtual screening methods21. Previous AC studies have explored and emphasized the large performance differences on ACs and non-ACs for target-specific bioactivity prediction models, while some studies have applied the AC concept to binding affinity and bioactivity prediction-related tasks such as hit-to-lead optimization and structural alert development. Although the AC concept is well-defined, the AC-induced performance discrepancies in the CPI binding affinity and bioactivity prediction domain remain uncertain. Firstly, the identification of ACs is thought to relate to target structures and binding conformations. Therefore, compared to most previous target-specific bioactivity prediction models, which take only ligand features into account, CPI methods generate predictions based on both ligand and protein representations and thus may alleviate AC discrepancies.
Besides, ACs are not "isolated" pairs: over 90% of ACs are formed by groups of structural analogs with varying potency, so larger-scale data that better depict the AC landscape may also benefit CPI prediction performance on AC molecules. Meanwhile, experimental uncertainty and CPI studies suggest high correlations between different bioactivity measurements and a performance gain from joint training, which may contribute to better overall and AC performance. However, the limited number of targets in current AC benchmarks hinders valid and comprehensive AC estimation for CPI methods from the data availability perspective. |
664c278821291e5d1dd5ecc1 | 3 | Inspired by these observations and challenges, we first construct a large-scale CPI dataset (CPI2M) with activity cliff annotations and multiple activity types available (Ki, Kd, EC50, and IC50). Then, a crystal structure-free CPI model, called GGAP-CPI (protein Graph and ligand Graph network with Attention Pooling for Compound-Protein Interaction prediction), is proposed for accurate CPI binding affinity and bioactivity prediction. GGAP-CPI adopts a pretrained ligand encoder (KANO) and a protein embedding generator (ESM-2) for advanced representation learning, with a multi-head cross attention pooling module assembled to simulate and aggregate the interactions between ligand atoms and protein residues. GGAP-CPI-IntEns (GGAP-CPI trained with an Integrated Ensemble bioactivity learning regime) is further proposed as a flexible variant for general CPI bioactivity prediction. |
664c278821291e5d1dd5ecc1 | 4 | Regarding the main results, the dataset quality of CPI2M is first investigated by overlap analysis and chemical space visualization to ensure the validity of CPI2M for overall and AC performance evaluation. Then, we conduct extensive comparison experiments on CPI2M and MoleculeACE 23 under internal/external/transfer learning validation settings, highlighting the superiority of GGAP-CPI over various baselines in multiple general and challenging scenarios. Regarding activity cliff samples, our results demonstrate that the fusion of protein information in GGAP-CPI, which transfers the target-specific bioactivity prediction task to CPI prediction, leads to more accurate predictions, and that integrated ensemble learning on multi-typed bioactivity data can further enhance these improvements. Meanwhile, considering practical virtual screening assessment, the CASF-2016, Merck FEP, DUD-E, DEKOIS-v2, and LIT-PCBA 36 benchmarks are also introduced, indicating that GGAP-CPI is comparable to other widely used scoring function methods, with superior ranking and scoring powers. In all, building on previous studies, our study advances the investigation of activity cliffs from target-specific exploration to integrated CPI prediction with substantially larger datasets, more comprehensive evaluations, and more effective methodology designs. |
664c278821291e5d1dd5ecc1 | 5 | Overlaps in source data. The CPI2M dataset, derived from the integration of EquiVS and Papyrus data and originating from various public databases, necessitates a thorough examination of data overlaps to assess the need for data integration and redundancy elimination. As shown in Fig. , the EquiVS and Papyrus datasets and their intersecting dataset occupy similar structural spaces. This coherence in structural adjacency suggests that the integration of these datasets is unlikely to introduce significant shifts or discontinuities within the CPI2M dataset, supporting the validity of our data integration approach. In contrast, the visualization of protein space is notably sparser, featuring some isolated samples. A further investigation into protein diversity was conducted by analyzing the Top 10 protein families from the InterPro 37 database within the EquiVS and Papyrus datasets, as depicted in Fig. . This result highlights a high degree of overlap (8 out of 10) among the most prevalent protein families, such as the protein kinase superfamily, G-protein coupled receptor 1 family, and Tyr protein kinase family. This overlap signifies that the integration of the EquiVS and Papyrus datasets not only complements but significantly enriches the protein diversity in the CPI2M dataset without incorporating detrimental out-of-distribution samples. The examination of unique measurements and structural spaces for the CPI2M source data validates the comprehensive nature of our data curation and processing strategies. |
664c278821291e5d1dd5ecc1 | 6 | Protein assay distributions. We further analysed the distribution of the number of CPIs for each target protein within the CPI2M dataset to illustrate the overall sparsity of assay data, as depicted in Fig. . Results for the four datasets in CPI2M are shown in Fig. , Fig. , and Fig. for an overall performance evaluation. Task-specific performance across all protein subsets is detailed in Fig. and Fig. . These results demonstrate that GGAP-CPI consistently surpasses both target-specific and general CPI baselines across most bioactivity-specific datasets, maintaining its superiority in activity cliff-specific metrics across all datasets. Specifically, GGAP-CPI ranks first in most metrics and datasets, except for ranking second in RMSE and R-squared on CPI2M-main-pKd. Moreover, the relative improvements (Table ) achieved by GGAP-CPI compared to the best-performing baseline results highlight its exceptional performance on AC data, supporting the superior performance of GGAP-CPI, especially on AC molecules. |
664c278821291e5d1dd5ecc1 | 7 | Comparison on external validation dataset. Following the analysis of internal validation results, we extended the method evaluation to GGAP-CPI's performance on unknown proteins and rare samples. We compiled an external validation dataset (CPI2M-few) consisting of proteins with fewer than 200 available assay data points. This dataset is designed to simulate the challenges of practical zero-shot virtual screening scenarios, where target-specific methods are inapplicable due to the absence of reference data for model training. |
664c278821291e5d1dd5ecc1 | 8 | We assessed the performance of GGAP-CPI and GGAP-CPI-IntEns against other CPI baseline methods across the CPI2M-few datasets. The results, presented in Fig. , reveal that all CPI methods experienced significant performance declines in this demanding experimental context. Many baselines were incapable of generating valid predictions, with their PCC scores falling below 0.5. In contrast, GGAP-CPI-IntEns demonstrated superior performance on the CPI2M-few-pKi, -pKd, -pEC50, and -pIC50 datasets, showcasing moderate predictive reliability. These findings underscore the robustness and broader applicability of GGAP-CPI in practical CPI bioactivity prediction scenarios, particularly in settings that challenge conventional target-specific approaches. |
664c278821291e5d1dd5ecc1 | 9 | Comparison on transfer learning validation dataset. In our study, we employed a widely used activity cliff estimation benchmark, MoleculeACE, as the validation dataset for transfer learning. We finetuned GGAP-CPI and GGAP-CPI-IntEns on the MoleculeACE-pKi and -pEC50 datasets. To ensure the integrity of the training process, any overlapping data in CPI2M were excluded to prevent data leakage. We evaluated the performance of GGAP-CPI, GGAP-CPI-IntEns, and other selected target-specific and CPI baselines on the MoleculeACE-pKi and MoleculeACE-pEC50 datasets. As shown in Table and Table , with full-set overall and average ranking results shown in Fig. and Fig. , GGAP-CPI-IntEns significantly enhanced the performance over GGAP-CPI and outperformed other baselines on all datasets, with notable relative improvements achieved compared to suboptimal baseline methods (e.g., pEC50: 15.05% for RMSE and 9.04% for RMSE_cliff; pKi: 17.09% for RMSE and 7.40% for RMSE_cliff). |
664c278821291e5d1dd5ecc1 | 10 | We further investigated the relationship between model performance on AC data and overall data to expose the AC-induced discrepancies in CPI prediction. Firstly, we defined the metric ∆RMSE = RMSE_cliff - RMSE as the difference between AC performance and overall performance. Then, the ∆RMSE value for each protein subset was calculated, and the density distributions of ∆RMSE for GGAP-CPI and selected baselines (ECFP-ESM-RF, PerceiverCPI, KANO, and GBM) on the four datasets are shown in Fig. . The results indicate that GGAP-CPI and other CPI baselines achieve lower ∆RMSE compared to target-specific baselines, suggesting that the integration of protein information can mitigate AC-induced performance discrepancies. Considering the performance results from the previous section, it remains essential to address these discrepancies, as there is still room for improvement in the ∆RMSE and RMSE_cliff values for CPI methods. |
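The ∆RMSE diagnostic described above is straightforward to compute. The following sketch (plain Python with illustrative variable names, not the paper's actual code) shows the calculation:

```python
import math

def rmse(y_true, y_pred):
    """Root mean squared error over paired observations."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def delta_rmse(y_true, y_pred, is_cliff):
    """Delta-RMSE = RMSE restricted to activity cliff samples minus overall RMSE."""
    overall = rmse(y_true, y_pred)
    cliff_pairs = [(t, p) for t, p, c in zip(y_true, y_pred, is_cliff) if c]
    cliff = rmse([t for t, _ in cliff_pairs], [p for _, p in cliff_pairs])
    return cliff - overall
```

A ∆RMSE near zero means the model treats AC and non-AC samples similarly, while a large positive value exposes the AC-induced discrepancy.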
664c278821291e5d1dd5ecc1 | 11 | Taking GGAP-CPI as the reference model, as shown in Fig. (B), the model performance on AC molecules was highly correlated with the overall performance on most protein subsets, suggesting that AC molecules are abnormal samples exposing out-of-distribution behaviour for CPI methods. Moreover, subsets with a lower percentage of AC data ... The performances of GGAP-CPI and its four variants were assessed across the internal validation sets (CPI2M-main-pKi, -pKd, -pEC50, and -pIC50) and are depicted in Fig. . |
664c278821291e5d1dd5ecc1 | 12 | The results demonstrate that GGAP-CPI outperformed all variants, affirming the synergy and effectiveness of its comprehensive design. Notably, the variants without the molecular encoder and the ESM-2 embeddings (GGAP-CPI w/o Mol. Encod. and GGAP-CPI w/o ESM-2 Emb.) showed the worst performance across all datasets, underscoring the critical contributions of the pretrained ligand encoder and protein embeddings to model accuracy. Although the other architectural components, such as protein graph convolution and multi-head cross attention pooling, also enhanced performance, their impact was comparatively moderate. These findings from the ablation study suggest that each component of GGAP-CPI is essential and effectively contributes to its performance, with no redundancy in the model's architecture. The integration of these modules results in optimal performance, thereby validating the model's design. |
664c278821291e5d1dd5ecc1 | 13 | Following previous studies, we employed the standard deviations of predictions from GGAP-CPI-IntEns as the model uncertainties and explored their ability to enrich highly accurate predictions as well as potential relationships between uncertainty and ACs. Since an AC is defined as a discontinuity between ligand structure and bioactivity that is hard to capture by QSAR modeling, a deep ensemble model is likely to give higher uncertainties on AC data. Separated by activity type, the overall uncertainty distribution results shown in Fig. (C) support this hypothesis, showcasing different distributions between AC and non-AC samples on pKi, pEC50, and pIC50 data. This phenomenon reveals that GGAP-CPI-IntEns can expose a proportion of AC data by assigning them higher uncertainty. Details, including results calculated with other metrics and the overall uncertainty distribution, can be found in Fig. and Fig. . |
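Deep ensemble uncertainty of this kind reduces to the standard deviation of member predictions. A minimal sketch (illustrative only; the model callables are placeholders for trained ensemble members):

```python
import statistics

def ensemble_predict(models, x):
    """Return the mean prediction and the ensemble uncertainty,
    i.e. the standard deviation of the member predictions."""
    preds = [model(x) for model in models]
    return statistics.fmean(preds), statistics.pstdev(preds)
```

Samples whose ensemble members disagree most, such as activity cliffs, receive the highest uncertainty and can be flagged for inspection.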
664c278821291e5d1dd5ecc1 | 14 | To explore the capability of GGAP-CPI in providing structural insights, specifically in identifying pocket residues, we used GGAP-CPI without fine-tuning on the PDBbind-v2020 general set as the reference model. For each protein-ligand pair in CASF-2016, we extracted the cross-attention matrix and assigned attention coefficients to the corresponding protein residues. To quantify the results, we analyzed the distribution of TopN enriched residues (N = the number of pocket residues for a given protein) for proteins in CASF-2016, as shown in Fig. . The results indicate that GGAP-CPI can effectively retrieve notable proportions of pocket residues compared to the overall distribution. Specifically, the enrichment factor for retrieving pocket residues was found to be 5.506 ± 5.533. Furthermore, there was a significant difference (P = 0.0016, T-test) between the average attention coefficients for pocket residues and other residues (Fig. ). This suggests that GGAP-CPI assigns higher attention to residues involved in binding pockets, demonstrating its potential for practical applications in virtual screening. Two case studies, represented by PDB IDs 3GY4 and 3KWA, are illustrated in Fig. |
664c278821291e5d1dd5ecc1 | 15 | Assessment for structure-based virtual screening. To evaluate the applicability of GGAP-CPI-IntEns and other CPI baselines as SFs, we conducted a comprehensive virtual screening assessment using five well-recognized benchmarks: CASF-2016, MerckFEP, DUD-E, 34 DEKOIS-v2, and LIT-PCBA. We compared GGAP-CPI-IntEns with multiple existing SFs. Among these benchmarks, CASF-2016 and MerckFEP are used to evaluate ranking and scoring power, while LIT-PCBA, DUD-E, and DEKOIS-v2 are used to assess screening power. |
664c278821291e5d1dd5ecc1 | 16 | Assessment on scoring and ranking power. Scoring power describes the capacity of an SF to produce binding scores in a linear correlation with experimental binding data, while ranking power refers to the ability of an SF to correctly rank the known ligands of a certain target protein by their binding affinities. We highlight these two evaluation metrics as the most important ones because they directly align with the primary goal of CPI bioactivity prediction models: to accurately predict bioactivities that closely match experimental measurements. According to the original definition, 36 Pearson's correlation coefficient and Spearman's correlation coefficient are employed to evaluate scoring power and ranking power, respectively. |
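Both powers reduce to standard correlation coefficients. A dependency-free sketch (tie handling in the ranking step is omitted for brevity) shows how they differ on the same data:

```python
import math

def pearson(x, y):
    """Pearson correlation: linear agreement between scores and measurements."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def spearman(x, y):
    """Spearman correlation: Pearson on ranks, i.e. monotonic agreement."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    return pearson(ranks(x), ranks(y))
```

On scores that rank ligands perfectly but deviate from linearity, ranking power (Spearman) stays at 1 while scoring power (Pearson) drops below 1, which is why the two metrics are reported separately.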
664c278821291e5d1dd5ecc1 | 17 | For CASF-2016, we collected 19,443 protein-ligand binding complexes along with their experimental binding affinities from the PDBbind-v2020 general set, with 285 of them identified as the CASF-2016 benchmark. Ligand structures together with AlphaFold 2-predicted protein structures and sequences were taken as input for GGAP-CPI-IntEns and other CPI baselines to directly predict binding affinities. The performance results, listed in Table , show that GGAP-CPI-IntEns consistently performed more competitively than other CPI-SFs in terms of scoring power and ranking power, with a relative improvement of 24.44% in ranking power compared to the next best method (ECFP-ESM-GBMr). Compared to other SFs, GGAP-CPI-IntEns is comparable to most traditional SFs, DL-docking methods, and DL-SFs, such as AutoDock Vina, 10 CarsiDock, and RTMScore. It is important to note that GGAP-CPI-IntEns and all CPI-SFs were trained using complex structure-free CPI data without any binding pose or crystal structure information. |
664c278821291e5d1dd5ecc1 | 18 | Previous studies have shown that many well-performing DL-SFs experience significant performance drops when using AlphaFold 2-predicted structures, indicating their limited generalizability and overconfident performance results in unknown binding pose scenarios. Moreover, since they are generally trained on PDBbind datasets with data distributions similar to CASF-2016, to achieve a fairer comparison between GGAP-CPI-IntEns and other SFs, we finetuned GGAP-CPI-IntEns on the PDBbind-v2020 general set with AlphaFold 2-predicted protein structures and tested it on CASF-2016. The results, labelled as "GGAP-CPI-IntEns-ft", indicate that with crystal structure information available for model finetuning, the performance of GGAP-CPI-IntEns was much improved (0.857 vs. 0.679 for scoring power; 0.818 vs. 0.649 for ranking power). These results outperform the most competitive ∆-learning-SFs and DL-SFs, which were also trained on PDBbind data. The results for GlideScore-XP and CarsiDock are collected from Ref. The results for BIND are reproduced based on the released model. This assessment on CASF-2016 and MerckFEP demonstrates the ability of GGAP-CPI for practical virtual screening, especially the power of accurately predicting specific affinities and ranking binders with the highest efficacies. This highlights GGAP-CPI's potential in the hit optimization task, which focuses on differentiating the strongest binders among a set of structurally similar ligands. |
664c278821291e5d1dd5ecc1 | 19 | Assessment on screening power. Screening power refers to the ability of an SF to accurately distinguish true binders from a pool of random molecules for a given target protein. A high-performing SF is typically trained on both known ligands and randomly selected decoys to better simulate the actual positive-to-negative ratio encountered in virtual screening. In this context, the enrichment factor (EF) calculated among the top 1% of ranked ligands across all datasets is commonly used as the evaluation metric. |
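The top-1% enrichment factor can be sketched as follows (a hypothetical helper, assuming binary activity labels and higher-is-better scores):

```python
def enrichment_factor(scores, labels, top_frac=0.01):
    """EF = hit rate among the top-ranked fraction divided by the overall hit rate."""
    n = len(scores)
    n_top = max(1, round(n * top_frac))
    ranked = sorted(zip(scores, labels), key=lambda pair: -pair[0])
    hits_top = sum(label for _, label in ranked[:n_top])
    return (hits_top / n_top) / (sum(labels) / n)
```

With 10% actives in the library, a random ranking yields an EF near 1, while a perfect ranking of the top percentile reaches the maximum EF of 10 for that composition.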
664c278821291e5d1dd5ecc1 | 20 | DUD-E is an enhanced benchmark comprising 102 diverse targets and 22,886 clustered ligands from ChEMBL, each supplemented with 50 decoys from ZINC. As shown in Table , our GGAP-CPI-IntEns achieved an EF 1% of 41.642, slightly lower than BIND's 46.35 but outperforming most traditional and machine learning-based scoring functions such as Vina 10 |
664c278821291e5d1dd5ecc1 | 21 | and RTMScore. In addition, the DEKOIS-v2 benchmark, which features 81 targets, 3,239 active compounds, and approximately 97,000 decoys, further demonstrated the strength of our approach: GGAP-CPI-IntEns achieved an EF 1% of 28.73, marking a 17.46% relative improvement over the next best baseline (Table ). For LIT-PCBA, an unbiased bioactivity classification dataset containing 15 targets with 8,020 actives and 2,675,399 inactive molecules (a positive-to-negative ratio of roughly 1:330), our method displayed competitive performance relative to other CPI-based scoring functions (Table ). Moreover, when compared with widely used scoring functions that incorporate docking procedures (Table ), GGAP-CPI-IntEns not only achieved comparable screening power but also offered the distinct advantage of being completely independent of preprocessed docking or crystal structures, thereby delivering enhanced speed and flexibility for large-scale virtual screening. |
664c278821291e5d1dd5ecc1 | 22 | It is worth noting that the overall screening performance of GGAP-CPI-IntEns is slightly lower than that of BIND on benchmarks such as DUD-E and LIT-PCBA. This outcome stems from our intentional exclusion of generated decoys during model training, a strategy adopted to prevent the inclusion of misleading false negative samples that could undermine precise affinity prediction and risk data leakage. As a result, our model excels at accurately predicting the affinities of true binders while maintaining a robust, comparable ability to differentiate binders from non-binders. |
664c278821291e5d1dd5ecc1 | 23 | In summary, the evaluation results on CASF-2016, MerckFEP, DUD-E, DEKOIS-v2, and LIT-PCBA underscore the effectiveness of GGAP-CPI-IntEns in handling unseen and imbalanced datasets. The integration of bioactivity learning, which mitigates activity cliff-induced discrepancies, combined with a complex structure-free framework, significantly improves the flexibility, efficiency, and accuracy of virtual screening, making our approach a versatile and rapid tool for high-throughput drug discovery applications. The baseline results are collected from Ref. The results for Smina+Vina are collected from Ref., and other results are collected from Ref. |
664c278821291e5d1dd5ecc1 | 24 | In this study, we investigated performance discrepancies induced by ACs in the field of CPI prediction. We established a large-scale CPI benchmark with AC annotations, named CPI2M, which substantially expands the current scope of AC benchmarks in terms of data scale, enabling a comprehensive comparison across both target-specific and general CPI prediction methods. Moreover, we developed a new deep learning-based CPI method, GGAP-CPI, which enhances CPI prediction performance by mitigating the effects of ACs. GGAP-CPI consistently outperforms other methods in both general and AC-specific metrics, leveraging advanced features such as a pretrained ligand encoder, protein residue embeddings, and multi-head cross attention pooling. This method maintains its superiority across various testing scenarios, including the regular CPI task, "unseen" proteins with rare samples, and limited protein information. Results on the CASF-2016, LIT-PCBA, and Merck FEP benchmarks also emphasize the competitive scoring and screening power of GGAP-CPI in virtual screening. Further analyses highlight GGAP-CPI's ability to discern between AC and non-AC endpoints and to offer structural insights into activity cliff-related substructures through attention visualization. |
664c278821291e5d1dd5ecc1 | 25 | Future work will aim to refine the CPI2M benchmark with pairwise AC annotations to enhance the granularity of AC effect estimation. Moreover, to improve the utility of GGAP-CPI in practical virtual screening settings, which often suffer from a high sensitivity to false positives, we plan to incorporate more experimentally verified negative measurements and develop binder/non-binder classification benchmarks. These enhancements will involve optimizing the GGAP-CPI architecture using techniques such as Siamese networks and supervised contrastive learning to better capture core substructures for pairwise AC molecule inputs. With these ongoing developments, the CPI2M benchmark and the GGAP-CPI model are anticipated to become increasingly suitable for practical, structure-free virtual screening. |
664c278821291e5d1dd5ecc1 | 26 | BindingDB, 9 PubChem, Probe&Drugs, 62 IUPHAR/BPS, 63 EXCAPE, and literature datasets. The data available from EquiVS and Papyrus differ in terms of accessible descriptors. After standardizing molecule and protein structure representations, duplicate and conflicting CPI data entries were removed using the conflict value filtering strategy initially applied to the EquiVS data. These filtering steps were conducted to reduce measurement noise and enhance the reliability of the dataset used in our model training and subsequent analysis. |
664c278821291e5d1dd5ecc1 | 27 | Bioassay selection. Based on the definitions of activity cliffs from previous studies, specific bioactivity data types, including pKi, pKd, pEC50, and pIC50, were selected for comprehensive AC effect estimation. Other assay data types were excluded because their experimental noise levels are too unreliable to support consistent AC analysis. The constructed pKi, pKd, pEC50, and pIC50 benchmark datasets were used for subsequent model training and comparison. |
664c278821291e5d1dd5ecc1 | 28 | Activity cliff identification and labelling. Activity cliffs were identified using the criteria established in MoleculeACE. Initially, comprehensive structural similarities between molecule pairs targeting the same protein were calculated by combining the Tanimoto coefficient on all-atom and scaffold molecular fingerprints with the Levenshtein distance of SMILES sequences. Molecule pairs exhibiting a structural similarity greater than 0.9 were identified. Subsequent bioactivity value checks then selected those structurally similar pairs that exhibited a negative log bioactivity value difference greater than 2 (equivalent to a 100-fold difference in nM units). Pairs meeting both criteria were labelled as AC data, while the others were classified as non-AC data. |
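The pairwise criteria above can be sketched directly. The fingerprint representation (sets of on-bit indices) and threshold names below are illustrative, and the combined similarity measure used in MoleculeACE is simplified here to a single Tanimoto term:

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto coefficient on fingerprints given as sets of on-bit indices."""
    union = len(fp_a | fp_b)
    return len(fp_a & fp_b) / union if union else 0.0

def is_activity_cliff(fp_a, fp_b, pact_a, pact_b,
                      sim_threshold=0.9, diff_threshold=2.0):
    """A structurally similar pair (Tanimoto > 0.9) whose potencies differ
    by at least 2 log units, i.e. a 100-fold difference in nM units."""
    return (tanimoto(fp_a, fp_b) > sim_threshold
            and abs(pact_a - pact_b) >= diff_threshold)
```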
664c278821291e5d1dd5ecc1 | 29 | Data splitting and benchmark dataset construction. For the construction of benchmark datasets, an AC-specific data splitting approach was employed as described in MoleculeACE. This method involved spectral clustering based on molecular fingerprints for each target-specific subset to ensure an even distribution of AC and non-AC molecules across training and testing sets, which is crucial to avoid overestimation or underestimation of model performance. Each cluster then underwent stratified data splitting at a ratio of 8:2, using the AC labels to appropriately allocate molecules between training and testing sets. This approach ensured that similar molecules were preserved in the training set, while |
664c278821291e5d1dd5ecc1 | 30 | Ligand encoder. Pretrained models have been shown to deliver highly competitive performance on molecular property prediction tasks. In our study, a pretrained molecule encoder (knowledge graph-enhanced molecular contrastive learning with functional prompt, KANO) was introduced as the ligand encoder to learn advanced atom-level and molecule-level hidden embeddings. KANO without the functional prompt generator was adopted based on the empirical analysis shown in the Supplementary Materials. Concretely, KANO is an L-layered C-MPNN-based graph neural network pretrained with a chemical element knowledge graph and contrastive learning. Regarding the L-layered C-MPNN process, given an atom node v and its neighbor set N_v in a molecular graph G, the intermediate message vector M^(l)(v) in the l-th layer is obtained by |
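The equation itself did not survive extraction. As a hedged placeholder, a generic message-passing form, which C-MPNN-style networks instantiate (this is the standard template, not necessarily KANO's exact formulation), reads:

```latex
M^{(l)}(v) = \sum_{u \in N_v} M_l\!\left(h_v^{(l-1)}, h_u^{(l-1)}, e_{vu}\right)
```

where $h_v^{(l-1)}$ and $h_u^{(l-1)}$ are the hidden states of atom $v$ and its neighbor $u$ from the previous layer, $e_{vu}$ denotes the bond features, and $M_l$ is the learned message function of layer $l$.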
664c278821291e5d1dd5ecc1 | 31 | (ESM-2_t33_650M_UR50D) to generate residue embeddings. Then, inspired by other protein function prediction studies, we introduce protein graphs with multiple residue contact types to acquire higher-order protein representations through a graph convolution process. Specifically, 7 residue contact types (peptide bond, hydrogen bond interaction, disulfide interaction, ionic interaction, aromatic interaction, aromatic sulphur interaction, and cation-pi interaction) were considered for adjacency matrix generation for protein graphs using Graphein. Then, given the initial protein residue embedding H |
664c278821291e5d1dd5ecc1 | 32 | Multi-head cross attention pooling. After the input ligand and protein structures are encoded as multi-scale embeddings, a multi-head cross attention pooling method is designed to aggregate the four embeddings (H_atom, H_lig, H_res, H_prot) into a protein-ligand complex embedding with hierarchical structure representation. The cross attention strategy has been applied to various biomedical tasks for cross-modality representation aggregation. Compared to self-attention, a more widely recognized attention-based mechanism, cross attention sets the embedding from one modality/source as the query vector and the embedding from the other modality/source as the key and value vectors to execute the attention matrix calculation and weighted aggregation. The matrix computation and aggregation flow of the multi-head cross attention pooling is shown in Fig. . Specifically, given the number of attention heads N_attn, the n-th head, and feature dimension d_h, the ligand atomic embedding was defined as the query vector (Q |
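A single-head, dependency-free sketch of this cross-attention pooling (the mean-pooling step and the absence of learned projections are simplifying assumptions; the actual model uses N_attn heads with learned projection matrices):

```python
import math

def cross_attention_pool(H_atom, H_res):
    """Ligand atom embeddings act as queries; protein residue embeddings
    serve as both keys and values. Returns the attention matrix and a
    mean-pooled complex-level embedding."""
    d = len(H_atom[0])
    scale = math.sqrt(d)
    # scaled dot-product scores between every atom (query) and residue (key)
    scores = [[sum(q_k * k_k for q_k, k_k in zip(q, k)) / scale for k in H_res]
              for q in H_atom]
    # row-wise softmax turns scores into attention weights per atom
    attn = []
    for row in scores:
        m = max(row)
        exps = [math.exp(s - m) for s in row]
        z = sum(exps)
        attn.append([e / z for e in exps])
    # weighted aggregation of residue embeddings (values) per atom
    context = [[sum(a * h[k] for a, h in zip(a_row, H_res)) for k in range(d)]
               for a_row in attn]
    # mean-pool atom contexts into one protein-ligand complex embedding
    pooled = [sum(c[k] for c in context) / len(context) for k in range(d)]
    return attn, pooled
```

The attention matrix is the quantity later inspected for pocket residue enrichment: residues that many atoms attend to strongly accumulate high coefficients.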
664c278821291e5d1dd5ecc1 | 33 | where ŷ_j is the predicted bioactivity of the j-th endpoint, which measures the bioactivity value for an AC molecule, while y_j is the true bioactivity value. n_c represents the number of endpoints belonging to AC molecules used in the calculation. Among them, RMSE and RMSE_cliff were taken as the major metrics, while the others were calculated as minor ones. |
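The formula that the opening "where" clause refers to appears to have been lost in extraction. Consistent with the symbols defined in that passage, the AC-restricted RMSE would read:

```latex
\mathrm{RMSE}_{cliff} = \sqrt{\frac{1}{n_c} \sum_{j=1}^{n_c} \left(\hat{y}_j - y_j\right)^2}
```

with the sum running only over the $n_c$ endpoints belonging to AC molecules; this is a reconstruction from the definitions given, not the paper's verbatim equation.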
672256e95a82cea2fa881776 | 0 | In the face of mounting pressure towards more sustainable chemical processes, it is increasingly important to rethink process design from the molecular level up, considering functionality along with cost and environmental aspects. Redesigning entire processes from the molecular level, and in particular molecule selection, has traditionally been a labor-intensive, trial-and-error procedure, often constrained by time as well as limited chemical and financial resources. To overcome these challenges, computer-aided molecular design (CAMD) provides a powerful tool. CAMD comprises a variety of approaches that all share the common feature of generating and evaluating a molecule's suitability for a given task using purely computational methods. The approaches used for CAMD differ widely: some define their search space by screening large databases, others make use of group contribution methods, or molecules are assembled by a genetic algorithm. In general, genetic algorithms optimize a population of candidates towards a target criterion by performing genetic operations, such as crossover and mutation, to iteratively improve candidates over multiple generations. Genetic algorithms were introduced to the engineering sciences by Holland as derivative-free methods to solve mathematical tasks and optimization problems. Through guided stochastic search, genetic algorithms improve solutions by exploring promising areas of the search space. Genetic algorithms utilize the population to investigate various areas of the search space, which facilitates global exploration, and are thus regarded as global optimization methods, assuming that an infinite number of generations is available. In practice, a well-designed genetic algorithm typically avoids becoming trapped in local optima and often finds good and even near-optimal solutions. Venkatasubramanian et al. were the first to apply genetic algorithms to CAMD.
Genetic algorithms have since been utilized in the design of chemicals in several different fields, including polymers, reaction solvents, and drug design. More recently, Jensen has published a graph-based genetic algorithm to optimize molecules towards logP values, showing that the genetic algorithm is as good as or better than machine learning-based approaches for certain applications in CAMD. |
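The genetic-algorithm loop described above (selection, crossover, and mutation over generations) can be illustrated on a toy OneMax problem, where fitness is simply the number of 1-bits in a bitstring; all parameter values here are arbitrary choices for demonstration, not from any cited CAMD framework:

```python
import random

def evolve(fitness, n_bits=20, pop_size=30, generations=50,
           mutation_rate=0.05, seed=0):
    """Minimal genetic algorithm: truncation selection, one-point
    crossover, and bit-flip mutation over a bitstring population."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=fitness, reverse=True)
        parents = ranked[: pop_size // 2]            # keep the fitter half
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_bits)           # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ 1 if rng.random() < mutation_rate else bit
                     for bit in child]               # bit-flip mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve(sum)  # OneMax: maximize the number of 1-bits
```

In CAMD, the bitstring would be replaced by a molecular representation and the fitness by a property model, but the loop structure is the same.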
672256e95a82cea2fa881776 | 1 | Moreover, genetic algorithms offer advantages by allowing the direct incorporation of higher-level chemical knowledge and reasoning strategies, making the search process more efficient. Boone et al. have used supervised machine learning and a genetic algorithm to design potent antimicrobial peptides. Huang et al. have coupled a genetic algorithm with the random forest algorithm to select the most suitable molecular descriptors for CAMD. |
672256e95a82cea2fa881776 | 2 | Xu et al. have applied the thermodynamic model COSMO-SAC in a genetic algorithm to 55 design double-salt ionic liquid solvents. Scheffczyk et al. have introduced the COSMO-CAMD framework, which integrates the COSMO-RS model for obtaining quantum mechanical information about molecules. And the genetic algorithm in use, LEA3D, creates 3D molecular structure information as input for COSMO-RS. The framework COSMO-CAMD has been originally developed for extraction solvent selection in separation processes. Since its original publication, the framework has been extended by various features, including the consideration of process characteristics, as well as environmental 28 and economic aspects. Furthermore, the applicability of the framework has been extended from solvent selection for separation tasks to reaction solvent selection in reactors and even catalyst design. Despite their broad applications in CAMD, genetic algorithms, due to their evolutionary nature, often experience time-consuming, difficult-to-achieve convergence, and easy-to-get premature convergence, which can lead to infeasible and low-quality solutions within a limited number of generations. Consequently, executing genetic algorithms often requires highperformance computing clusters, or alternatively, may necessitate compromising on solution quality. |
672256e95a82cea2fa881776 | 3 | To enhance the performance of genetic algorithms, various approaches have been developed. One promising approach is to improve the quality of the initial population. A genetic algorithm can commence with either a cold or warm start. A cold start refers to a random initialization from scratch, whereas a warm start leverages existing knowledge about desired candidates for targeted initialization. |
672256e95a82cea2fa881776 | 4 | Recently, researchers in the mathematical optimization community have focused on warmstarting genetic algorithms using various techniques. These studies highlight significant potential of the warm-start approach in enhancing the overall performance of genetic algorithms. However, in the CAMD community, despite the widespread use of genetic algorithms, their initialization still relies on intuition, and a systematic warm-start approach remains absent. |
672256e95a82cea2fa881776 | 5 | To cover this gap, this work introduces a screening-guided warm start into a genetic algorithm for CAMD. The proposed method is built on the COSMO-CAMD framework, where the candidate population consists of molecules assembled from molecular fragments. Given a design task, we first conduct a large-scale molecular screening. The top-ranked candidate molecules from the screening are then automatically split into molecular fragments. The output of the fragmentation procedure is integrated into a genetic algorithm for molecular design, wherein the initialization is fine-tuned in two key aspects: the molecular fragment library and the initial population. We apply the warm-started COSMO-CAMD framework to two case studies from the literature. By comparing our results with existing bench-90 marks, our method demonstrates a significant improvement in the performance of genetic algorithms for CAMD. |
672256e95a82cea2fa881776 | 6 | The paper presents the proposed warm-start method in Section 2, followed by its application in two case studies presented in Sections 3 and 4. The first case study in Section 3 focuses on fine-tuning the method's parameters, and the best-performing parameter con-95 figuration is then applied in the second case study in Section 4. Section 5 concludes the work. |
672256e95a82cea2fa881776 | 7 | The warm-started COSMO-CAMD framework consists of three phases: Screening, Fragmentation, and Design, as illustrated in Figure . The main idea behind integrating molecular 100 screening into molecular design is to leverage the low computational cost of database screening by using stored σ-profiles, quantum-chemistry-derived molecular descriptors that encode the polarity of molecules, In the Screening phase, an existing database with stored σ-profiles serves as the molecular screening space. The framework evaluates each molecule against a specific target criterion, such as the distribution coefficient for use as an extraction solvent, to identify promising can-didates. This evaluation generates a ranking of all candidate molecules in the database that 110 then serves as input for the Fragmentation phase. In this phase, the candidate molecules are automatically broken down into molecular fragments, stored in a fragment library, while simultaneously allowing for their reconstruction based on these fragments. The reconstruction information is stored in lea-string notations, a computational representation of molecules. |
672256e95a82cea2fa881776 | 8 | The Screening phase is based on the automated screening approach introduced by Scheffczyk et al. . For a given process, the approach aims to identify promising molecules through 125 extensive and automated molecular screening within a given database. The Screening phase considers both molecular properties and a process model: The COSMO-RS model is used to predict properties for a large number of molecules, and a pinch-based shortcut process model is available for a comprehensive process-level assessment of the screened molecules if needed. As a result, the approach generates a list of all valid candidate molecules, which are 130 sorted according to predefined screening criteria, such as minimum energy demand. |
672256e95a82cea2fa881776 | 9 | where the decision variable y indicates a candidate molecule inside the chemical design 135 space of all possible molecular structures Y. Depending on the specific process system being analyzed, the objective function (Equation (1a)) can take the form of, for example, minimizing energy demand of a distillation column or maximizing the distribution coefficient of an extraction solvent. Please note that maximization and minimization problems can be easily converted into one another. Meanwhile, the thermodynamic model and process model used for CAMD can be formulated as equality constraints (Equation (1b)), for example, a liquid-liquid equilibrium, and inequality constraints (Equation (1c)), e.g., molecular size. |
672256e95a82cea2fa881776 | 10 | The simplified workflow of the COSMO-CAMD framework is illustrated in Figure . In COSMO-CAMD, molecular structures are generated through the genetic algorithm LEA3D. The concept of LEA3D relies on a fragment-based molecular description. After defining the 145 design problem, the candidate molecules for the initial population (or in the 0th generation) are randomly constructed using the molecular fragments from a given fragment library in the original COSMO-CAMD framework. In the proposed method, candidate molecules for the 0th generation are provided by the warm start. The number of the constructed candidate molecules per generation is referred to as the population size. The evaluation of candidate molecules is based on their values of the objective function. |
672256e95a82cea2fa881776 | 11 | In the context of the maximization problem, candidate molecules with higher values have a correspondingly higher selection probability for constructing the candidate molecules of the next generation. Here, different genetic operators, e.g., crossover and mutation (see Figure ), can be used to control the propagation process inside LEA3D. Subsequently, the candidate molecules of the new generation are again forwarded to the COSMO-RS model 160 for computing the necessary properties. This process is repeated until the a predefined maximum number of generations is reached. The final results are a ranked list of candidate molecules based on their values of the objective function. |
672256e95a82cea2fa881776 | 12 | In establishing the proposed method, we systematically align the setups in the Screening and Design phases, focusing on two essential areas: Firstly, the objective function in the Design phase mirrors the desired target of the designed candidate molecules. In the warm-started COSMO-CAMD framework, the same target is also pursued during the Screening phase, leading to the utilization of the objective function from the Design phase as the screening criterion in the Screening phase. |
672256e95a82cea2fa881776 | 13 | The second concern is the different computational representations of candidate molecules utilized in the Screening and Design phases. A summary of these representations across all different phases is provided in the Supporting Information. In the warm-started COSMO-CAMD framework, we use the linear notation Simplified Molecular Input Line Entry System (SMILES) for computationally storing and transferring candidate molecules. For interpreting the three-dimensional (3D) structure of molecules or fragments, we turn to the Structure Data File (SDFile) format. |
672256e95a82cea2fa881776 | 14 | Additionally, in the Design phase, as mentioned before, the genetic algorithm LEA3D generates the design space of all possible molecular structures using a fragment-based molecular description. In LEA3D, two terms are introduced to describe a molecule: X-dummy-atom and lea-string notation. These terms are also crucial for our Fragmentation phase, as detailed in the next section. The usage of these two terms is exemplified in Table to describe the molecule propyne. |
672256e95a82cea2fa881776 | 15 | An X-dummy atom inside a molecular fragment signifies an available anchor, allowing the fragment to connect with another fragment. A molecular fragment can have multiple X-dummy-atoms. The lea-string notation is used to record the combination information of different molecular fragments for representing a molecule. As displayed in Table , the leastring of the designed molecule indicates that propyne is the combination of the fragment 1 and 2 , with the third atom of the first fragment 3 linked to the first atom of the second fragment 1. For further details regarding the grammar of the lea-string notation, please refer 190 to the handbook of LEA3D. |
672256e95a82cea2fa881776 | 16 | The goal of the Fragmentation phase is to integrate the results of the Screening phase into the Design phase, thereby providing a warm start for the initialization of the genetic algorithm used in the framework COSMO-CAMD. Accordingly, the output of the Fragmentation phase must align with the requirements necessary for initializing LEA3D. The initialization of LEA3D is composed of two components: the fragment library and the candidate molecules of the initial population, as seen in Figure . Thus, it is imperative that both components are included in the output of the Fragmentation phase. |
672256e95a82cea2fa881776 | 17 | To initialize the fragment library, we break down the promising candidate molecules from For each candidate molecule i (i ∈ N F in , step 1 ○), we begin by analyzing the molecular 215 structure and identifying its functional groups (step 2 ○). To tailor the functional groups, we draw inspiration from the drug development and discovery field, where fragment-based molecular design is extensively employed. To this end, many computational tools for building the fragment libraries or fragmenting the molecules are being developed. Two widely utilized methods for fragmenting are Retrosynthetic Combinatorial Analysis Procedure (RECAP) and Breaking Retrosynthetically Interesting Chemical Substructures (BRICS). Based on the chemical substructures used in RECAP and BRICS, we predefine 16 functional groups for our Fragmentation phase, as listed in Table . |
672256e95a82cea2fa881776 | 18 | Here, in alignment with ecological design principles, it is also possible to identify envi-ronmentally harmful elements or atoms within the screened candidate molecule i, such as halogens. We choose these 16 functional groups primarily due to their relevance to the case studies used in this work. Note that other functional groups can be easily added if needed. |
672256e95a82cea2fa881776 | 19 | Two distinct labels, single anchor and multiple anchors, categorize 16 predefined functional groups. A single anchor implies that the fragment can connect with only one other fragment, while multiple anchors indicate the ability to connect with more than one frag-230 ment, infusing complexity into the fragmentation procedure. Furthermore, recognizing their unique properties, we retain circular structures as functional groups and classify them as circular anchor(s) as the third label. Despite the numerous variations, the circular structure in a molecule can be easily identified by examining the connectivity of its atoms and the type of their bonds. During the fragmentation procedure, if the molecule i contains a circular 235 structure, it can be categorized as either a single-anchor structure (e.g., phenol, cyclohexane, naphthalene) or a multiple-anchor structure (e.g., dimethylbenzene). This categorization is executed in step 2 |
672256e95a82cea2fa881776 | 20 | We include this step solely to enhance effectiveness of the fragmentation in progress, particularly when N F in is chosen excessively large by users. This step is not active when N F in is reasonably determined or when the upper limit of the total number of carbon atoms is defined sufficiently high. In the context of our two case studies, where the goal is to design solvents for chemical processes, we follow the COSMO-CAMD framework's default setting and limit the number to 12 carbon atoms. The candidate molecule i, if overly complex, is deemed unsuitable as a solvent and is therefore excluded from further consideration. To streamline the implementation, we treat the entire complex candidate molecule i as a fragment, substituting all hydrogen atoms with X-dummy atoms (step 4 ○). And this fragment 250 can be rendered obsolete by the genetic algorithm with a high degree of probability. |
672256e95a82cea2fa881776 | 21 | The Fragmentation phase is written in programming language Perl version 5, and has undergone testing with over 3640 candidate molecules from the COSMO database. It is worth noting that the fragmentation procedure involves several setup parameters, such as N F in , W F and S F . These parameters have a discernible impact on the composition and quality of the resulting fragment library, consequently impacting the performance of the Additionally, a trade-off is required for the fragment library: On the one hand, broad diversity and variability are necessary to construct a potentially expansive design space Y for CAMD; on the other hand, a compact and concise size is crucial to ensure efficient computation in the Design phase, in particular to avoid expensive quantum mechanical calculations on unpromising candidate molecules. To assess the impact of the parameters N F in , W F , and S F , we conduct a parameter study in our first case study. In the following, we apply the warm-started COSMO-CAMD framework to design an optimized solvent for the purification of γ-valerolactone (GVL) from an aqueous stream in a hybrid extraction-distillation process. This process has already been utilized in a solvent screening approach and an integrated solvent and process design framework. The process flowsheet is illustrated in Figure . |
672256e95a82cea2fa881776 | 22 | The setup of the warm-started COSMO-CAMD framework for this case study is summarized in Table . In the Screening phase, we employ the same COSMO database as in the literature, which consists of 3640 molecules pre-selected for suitability as solvent molecule structures, considering factors such as molecular size. In the Fragmentation phase, we adjust the values of the three mentioned parameters N F in , S F and W F via a subsequent parameter study within this case study. The goal is to evaluate the impact of different resulting fragment 335 libraries and, consequently, determine the configuration for a more effective and reasonable fragment library. In the Design phase, we set the maximum number of generations to 50 with a population size of 40, resulting in a total of 2040 potential fragment-based molecular structures. To assess the performance of the genetic algorithm, each case is executed six times. Other specifications and assumptions are taken from Scheffczyk et al. To evaluate our method, we analyse the results of each case in three different aspects: efficiency, effectiveness, and robustness. For clarity, we define these three aspects as follows: |
672256e95a82cea2fa881776 | 23 | In addition, the top-ranked candidate molecules identified during the Screening phase serve as the initial population for the Design phase. The Design phase aims to generate novel candidate molecules that surpass the promising screened candidate molecules. Thus, to evaluate the effectiveness of the proposed method, our analysis centers on the quality of the designed candidate molecules: Via Equation ( |
672256e95a82cea2fa881776 | 24 | The aim of this subsection is to compare the results between cases with and without the Figure shows the minimum energy demand of the initial population over the minimum solvent demand across all six runs. In comparison to the cold start, where the molecules of the initial population are randomly generated, the warm start provides a consistently better start. The influence of the quality of the initial population on the performance of the genetic value in each generation f D max (y), along with the fluctuation observed in six independent runs, is shown. Within ten generations, all runs of the warm start converge to the same molecule hex-3-ene-1,5-diyne. In contrast, the results of the cold start display a relatively large variation. In one out of six runs, a sub-optimal candidate molecule is identified after 385 the 50th generation, which also explains the persistent fluctuation observed in the cold start until the end of the calculation. Please note that the molecule hex-3-ene-1,5-diyne contains multiple unsaturated bonds and might not be suitable for use as a solvent in practice. Overall, it can be inferred that the direct warm start, as compared to the cold start, significantly enhances the efficiency and robustness of the genetic algorithm, while maintaining comparable effectiveness. |
672256e95a82cea2fa881776 | 25 | In addition to the initial population, the fragment library is another crucial element for the warm start of the genetic algorithm. So far, the fragment library is built by a heuristic selection on a rough analysis of the screening results. It is also possible to add extra fragments manually. In the warm-started COSMO-CAMD framework, one of the intentions is to automatically generate a target-oriented fragment library. To this end, we have conducted a parameter study to analyze how various configurations of the fragment library impact the results of the Design phase, aiming to fine-tune the warm start. |
672256e95a82cea2fa881776 | 26 | In the parameter study, we evaluate three parameters in the Fragmentation phase, as previously mentioned in Section 2.3: (1) N F in indicates the number of screened candidate molecules as input to the Fragmentation phase; (2) W F determines whether repeatedly occurring molecular fragments should be included. When set to True, this parameter ensures that the weighting of fragments is integrated into the composition of the resulting fragment library; and (3) S F signifies if all hydrogen atoms in a molecule fragment should be replaced by X-dummy atoms. When S F is set to True, the fragment library potentially provides more anchors for the Design phase, compared with a False value. |
672256e95a82cea2fa881776 | 27 | In the parameter study, the cases where W F = True, S F = True, and N F in equals 40, 50, and 500, respectively, exhibit the most promising fragment libraries. To be more precise, when W F and S F are activated, the effect of varying N F in can be neglected. However, when either of these two parameters is deactivated, increasing N F in negatively impacts the performance of the genetic algorithm. In summary, a fragment library with activated weighting and substitution parameters has a positive impact on the Design phase in the warm-started COSMO-CAMD framework. |
672256e95a82cea2fa881776 | 28 | Furthermore, the candidate molecule hex-3-ene-1,5-diyne emerges as the optimal solvent in this case study. As illustrated in Figure , the molecular structure of this optimal solvent comprises one ethylene and two acetylene chemical substructures. These two chemical substructure are consistently found across all fourteen fragment libraries used in this case study, as provided in the Supporting Information. This consistency explains why, in this case study, no candidate molecules superior to those identified in the cold start were found. |
672256e95a82cea2fa881776 | 29 | Please note that the molecule hex-3-ene-1,5-diyne is probably not stable as a solvent in practice due to its multiple unsaturated bonds. However, practical validation is beyond the scope of this work. have used the COSMO-CAMD framework to design an optimal solvent for the removal of phenol from wastewater. We use the same design problem, but we now apply the augmented COSMO-CAMD framework with fine-tuned warm start. For a benchmark case, we initialize the genetic algorithm using the configuration outlined by Scheffczyk (E) and raffinate (R), respectively. The activity coefficients γ ∞ phenol,R and γ ∞ phenol,E for the solute phenol in R and E are computed using the COSMO-RS model at the liquid-liquidequilibrium with the solute at infinite dilution and at a temperature T of 25 • C. M c denotes the molar mass of component c present in the mixture. Due to the assumption of infinite dilution, mole fractions of the solute phenol are x phenol,E = x phenol,R ≈ 0. Accordingly, the 460 screening criterion in the Screening phase f S (y) and the objective function f D (y) in the Design phase are defined as follows: |
672256e95a82cea2fa881776 | 30 | Table summarizes the setup of this case study. As mentioned earlier, we adopt the parameters from the literature for the benchmark case. In the augmented case, we employ the screening-guided warm start. Following the Screening phase, the top 40 screened candidate molecules are passed on to the Fragmentation phase (N F in = 40). During fragmentation, the parameters W F (weighting of fragments) and S F (substitution of hydrogen atoms) are both set to True. Subsequently, the top 40 screened candidate molecules and the resulting fragment library are integrated into the Design phase, maintaining a population size of 40. |
672256e95a82cea2fa881776 | 31 | The number of generations is set to 20 in this study. Two cases are performed with six computational runs each. For additional details about the case study setup, please refer to the Supporting Information. In the benchmark case, the convergence of six runs shows significant fluctuations. The molecule 2-ethoxypropane is identified as the optimal solvent for phenol extraction, appearing in the results in three out of six runs. In contrast, in the augmented case, the same candidate molecule, 1-methoxy-3-methyl-2-butene, is consistently identified as the optimal solvent within ten generations for across all 6 runs. Moreover, 1-methoxy-3-methyl-2-butene did not occur in the benchmark case. Thanks to the screening-guided warm start, the maximum objective function value f D max of the 0th generation of the augmented case is even higher than that of the optimal designed candidate molecule in the benchmark case (cf. dashed line in Figure ). In addition to its fast convergence and high robustness, the warm-started 485 COSMO-CAMD framework outperforms the original framework by generating a superior optimal candidate molecule. Please note that our results are based exclusively on computational calculations and do not include experimental validation of the designed molecular structures. |
672256e95a82cea2fa881776 | 32 | To evaluate the quality of the designed candidate molecules, we categorize them into the 490 three classes defined in Table . Figure presents the total number and the distribution of designed candidate molecules across each class of two cases. The application of the warm-started COSMO-CAMD framework substantially increases the number of elite molecules: Around 42% of the total designed candidate molecules in the augmented case belong to the elite molecules, while this percentage value drops to 9.5% in the benchmark case. Particularly noteworthy is the discovery of two novel candidate molecules in the excellent class in all six runs in the augmented case, which is absent in the benchmark case. Furthermore, the number of candidate molecules found in the good class is five times higher than that in the benchmark case. In addition, the standard deviation of the benchmark case is significantly larger than that in the augmented case. In this context, the 500 warm-started COSMO-CAMD framework outperforms the original framework by generating a greater number of high-quality candidate molecules. Overall, these results further validate the enhanced efficiency, effectiveness and robustness of our method for targeted molecular design. |
672256e95a82cea2fa881776 | 33 | The benchmark case has a total of 10 fragments. In addition to the six shared fragments, the fragment library also includes four individual fragments: carboxylic acid group, alcohol group and two ring groups (benzene and cyclohexane groups). As each fragment appears only once in the fragment library, the individual fragments in the benchmark case (not present in the augmented case) account for 40% of the total. |
672256e95a82cea2fa881776 | 34 | The fragment library in the augmented case is more extensive, comprising 199 fragments, including 13 unique molecular fragments, each associated with different weightings. As depicted with a red background in Figure , the six highest weightings are highlighted, while the weightings of other fragments are all below 3%. Notably, nearly 60% of fragments in the fragment library are the methyl group. In the augmented case, seven individual fragments, including ethylene group and six ring groups, are newly generated during the Fragmentation phase. The overall weighted percentage of the individual fragments in the fragment library in the augmented case is 16 %. |
672256e95a82cea2fa881776 | 35 | The presence of individual fragments in one case implies that certain molecular structures cannot be generated in another case, which could potentially lead to sub-optimal candidate molecules. To assess the impact of individual fragments (with the weighted amounts of 40% and 16%, respectively) from one case on another, an analysis is conducted on the designed elite molecules across all six runs. It is noteworthy that, on average, about eight candidate molecules designed in the bench-535 mark case will never be generated in the augmented case, because these candidates can only be constructed with the fragments exclusive to the fragment library of the benchmark case. |
672256e95a82cea2fa881776 | 36 | Additionally, a relatively large standard deviation among six runs is observed in the benchmark case. In the augmented case, 44.6% of elite molecules contain at least one ethylene group, and 28.4% have at least one ring group. In other words, the fragment library in 540 the augmented case facilitates the construction of, on average, 108 candidate molecules that can never be identified in the benchmark case. These also include the two best-performing candidate molecules. |
672256e95a82cea2fa881776 | 37 | We represent the cumulative count of these individual elite molecules, along with their objective function values of all six runs in Figure . Here, an individual elite molecule is 545 defined as a candidate molecule that cannot be generated in the other case because of the absence of the individual fragments. In total, there are 234 individual elite molecules (on average 39 in each run) generated in the augmented case with a objective function value higher than 2.2, providing ample options for experts to identify an optimal solvent for practical or industrial applications. |
672256e95a82cea2fa881776 | 38 | In this work, we present a method to fine-tune a genetic algorithm for computer-aided molecular design (CAMD), focusing specifically on its initialization. The proposed method builds on the framework COSMO-CAMD, and integrates the automated screening approach developed by Scheffczyk et al. . The results and insights from the molecular screening are utilized to provide favorable warm-start conditions for molecular design in two key aspects: the fragment library and the initial population. |
672256e95a82cea2fa881776 | 39 | These molecular fragments are then used to construct a fragment library for the Design phase. At the same time, the top-ranked screened candidate molecules are reconstructed and serve as the initial population for the genetic algorithm used in the Design phase. This approach allows the Design phase to commence with a fine-tuned warm start, guiding the CAMD towards a promising vector in the design space. |
672256e95a82cea2fa881776 | 40 | We apply the proposed method to design solvents for two different applications, namely the purification of γ-valerolactone and the extraction of phenol from water. The results are compared with benchmarks from the literature, where initialization is based on intuition, which we refer to as the cold start. Both case studies demonstrate that the screening-guided warm start significantly accelerates convergence of the genetic algorithm in the COSMO-580 CAMD framework. With a warm start, the identical optimal solvent can be found within a maximum of 10 generations across all cases analyzed in this work. In contrast, a cold start requires much more generations to achieve the same optimal molecule candidate and can yield a sub-optimal solution even after 50 generations. |
672256e95a82cea2fa881776 | 41 | In the γ-valerolactone case study, the setup of the most favourable fragment library is 585 identified through a parameter study. The study indicates that a fragment library with acti-vated parameters W F (weighting of the fragments) and S F (substitution of hydrogen atoms) positively impacts the performance of molecular design. With this parameter combination, the effect of the parameter N F in (the number of top-ranked screened candidate molecules used as input in the Fragmentation phase) is negligible. |
672256e95a82cea2fa881776 | 42 | Overall, introducing a screening-guided warm start into the genetic algorithm-based framework COSMO-CAMD significantly enhances the efficiency, effectiveness, and robustness of the computer-aided molecular design process. The presented method is generic in nature and can be directly applied to all subsequent developments of the COSMO-CAMD framework and the genetic algorithm LEA3D. It can also be readily adapted to other genetic algorithm applications within the field of computer-aided molecular design. |
628b165e59f0d64ce39a2491 | 0 | Let D = (X, y) = (x i , y i ) i=1 be a training data set of molecular features x i ∈ R p and properties y i ∈ O. We would like to learn a linear hypothesis h(x; w 0 , w) = w 0 + x, w by minimizing the regularized risk [� � �] to predict the property of a molecule using its features. By calculating molecular descriptors and/or fingerprints, we typically obtain thousands of features of various forms, and if the true h has only a small subset of features with actually nonzero weights, we would want to impose a complexity penalty like Lasso to enforce sparsity. One drawback of Lasso is that for a subset of interrelated features it tends to arbitrarily choose only one feature, ignoring the rest. Reasonably, if there is a group of highly-correlated molecular descriptors, or categories integrating into a single descriptor, our model should either select or remove the entire group depending on its relation to the property. To address this issue, we present three objectives with sparse, grouped feature selection (see Figure � � � � � for penalized least-squares objectives): |
628b165e59f0d64ce39a2491 | 1 | Each objective combines an empirical risk L (e.g., mean squared error) with a regularizer C scaled by a regularization weight α ≥ 0. Elastic Net has a complexity penalty that mixes Ridge (L2) and Lasso, thus aiming to retain the benefits of both penalties: while preserving sparsity with Lasso, it tries to proportionally assign nonzero weights to highly-correlated features with Ridge. Group Lasso lets us manually partition features into groups and controls the complexity with a weighted sum of L2 penalties, one per group; it is designed to select or remove whole groups of features at a time, rather than considering each feature individually. Sparse Group Lasso enforces sparsity both with respect to groups and to features within a group by mixing Lasso and Group Lasso: when a particular group is informative but also contains redundant features, it is desirable to filter them out rather than merely shrink their weights.
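In the notation above, the three penalties are commonly written in the following standard textbook forms (a sketch; the mixing parameter λ ∈ [0, 1], the group index set G and the per-group weights √p_g are our notational assumptions, as the paper's exact weighting may differ):

```latex
% C(w) for each objective; lambda in [0,1] mixes the terms,
% G indexes the manually chosen feature groups, p_g is the size of group g.
\underbrace{\alpha\Big(\lambda \lVert w \rVert_1 + \tfrac{1-\lambda}{2}\,\lVert w \rVert_2^2\Big)}_{\text{Elastic Net}}
\qquad
\underbrace{\alpha \sum_{g \in G} \sqrt{p_g}\,\lVert w_g \rVert_2}_{\text{Group Lasso}}
\qquad
\underbrace{\alpha\Big((1-\lambda)\sum_{g \in G} \sqrt{p_g}\,\lVert w_g \rVert_2 + \lambda \lVert w \rVert_1\Big)}_{\text{Sparse Group Lasso}}
```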
628b165e59f0d64ce39a2491 | 2 | One way to find the optimal h is to solve the objectives coordinate-wise: iterate through w, updating w_j via soft-thresholding while keeping w_{-j} fixed. Repeating these iterations until some predefined stopping criterion yields the cyclic coordinate descent optimization method, which can be implemented for each of the objectives above; the grouped objectives admit an analogous block coordinate descent update over groups.
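As an illustration, here is a minimal pure-Python sketch of cyclic coordinate descent with soft-thresholding for the Lasso objective (the function names and the 1/2n loss scaling are our assumptions, not the paper's implementation):

```python
def soft_threshold(z, t):
    """Soft-thresholding operator: the closed-form solution of a 1-D Lasso step."""
    if z > t:
        return z - t
    if z < -t:
        return z + t
    return 0.0

def lasso_coordinate_descent(X, y, alpha, n_iters=200):
    """Cyclic coordinate descent for min_w (1/2n)||y - Xw||^2 + alpha*||w||_1.
    X is a list of feature rows; an intercept column of ones can be appended
    by the caller if needed."""
    n, p = len(X), len(X[0])
    w = [0.0] * p
    for _ in range(n_iters):
        for j in range(p):
            # Partial residual with feature j excluded
            r = [y[i] - sum(X[i][k] * w[k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n)) / n
            norm = sum(X[i][j] ** 2 for i in range(n)) / n
            # Soft-threshold the correlation of feature j with the residual
            w[j] = soft_threshold(rho, alpha) / norm if norm > 0 else 0.0
    return w
```

On data where only the first feature is informative, the irrelevant weight is driven exactly to zero, which is the sparsity behavior the objectives above are designed to produce.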
628b165e59f0d64ce39a2491 | 3 | We experiment with four single-task data sets from MoleculeNet: BBBP (2K compounds) and BACE (1.5K) for binary classification, ESOL (1.1K) and FreeSolv (0.6K) for regression. To generate molecular representations from SMILES strings, we calculate 208 RDKit descriptors and extended-connectivity fingerprints with radius 2 and 2048 bits. Next, we standardize numerical values and combine the obtained features to pass into one of the penalized models above (note that depending on the distribution of a target property, we may use a different loss function for the empirical risk, including huber, hinge, or log-loss).
628b165e59f0d64ce39a2491 | 4 | In this work, we present sparsity-enforcing linear models for molecular-property prediction. In our experiments, we seek to find quick baselines for single-task supervised-learning problems. The presented pipeline combines multiple types of calculated descriptors and fingerprints, applies basic feature normalization, optionally arranges the combined features into groups, and selects a small set of informative features. Since the pipeline involves no feature engineering and mainly requires tuning regularization hyperparameters, it can serve as a desirable baseline for high-dimensional, single-task molecular-property prediction.
628b165e59f0d64ce39a2491 | 5 | Still, there are further cases to investigate. Experiments should be expanded to bigger data sets with a higher number of descriptors, fingerprints, or learned representations, which will also require more refined feature grouping for the grouped penalties. Furthermore, can multiple interrelated properties be predicted with multi-task penalties? In an attempt to select a small number of joint features for each task, one can try combining the Frobenius norm of the residuals as the empirical risk L with the sum of Elastic Net or Group Lasso penalties of each task as the grouped penalty C.
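Under our notational assumptions, one concrete instance of the multi-task idea sketched above is a row-wise Group Lasso on the stacked weight matrix:

```latex
% W in R^{p x T} stacks the weight vectors of T tasks (one column per task);
% penalizing the rows W_{j.} couples feature selection across tasks, so a
% feature is kept or discarded jointly for all properties.
\min_{W}\; \frac{1}{2}\,\lVert Y - XW \rVert_F^2 \;+\; \alpha \sum_{j=1}^{p} \lVert W_{j\cdot} \rVert_2
```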
6690e02801103d79c5cb42f9 | 0 | Soft matter spans materials science applications in cosmetics, 1-3 pharmaceuticals 4-7 and water decontamination 8 among many others. Advances in synthetic chemistry and formulation science have led to the development of complex, bespoke soft matter architectures, while ever-increasing computational power and simulation techniques have opened up the study of a broad range of such systems in silico. An understanding of the interplay of molecular structure, conformational dynamics and intermolecular interactions of the constituent molecules is required to build up generalisable structure-property relationships, in turn supporting the rational design of new functional soft matter materials with PySoftK.
6690e02801103d79c5cb42f9 | 1 | MD simulations have been widely used in the study of soft matter self-assembly. MD simulations generate a large amount of data from which it is typically challenging to extract meaningful predictive understanding. This difficulty arises not only from the high dimensionality of the output data, but also from the fast and complex dynamics that are intrinsic to soft matter. Interpreting the molecular mechanisms present in MD simulations typically requires bespoke computational tools to quantify such complex behavior. As a result, it is often not possible to replicate experimental findings or to reproduce quantifiable results. The computational soft matter community has invested significant effort in simplifying the creation of inputs for soft matter simulations, as exemplified by tools such as PySoftK, Polymer Structure Predictor, Radonpy 20 and MoSDeF. However, a comprehensive package for analyzing soft matter material properties has not yet been developed. To address this issue, PySoftK (version 1.0) now includes a toolkit designed for analysis of soft matter simulations, providing a unified computational framework in which modelling and analysis can be streamlined under modern software development standards. This feature supports data provenance and reproducibility of results. In line with the design commitment of PySoftK to minimize user inputs and provide highly efficient code, PySoftK v1.0 enables the analysis of large-scale soft matter systems. |
6690e02801103d79c5cb42f9 | 2 | In this work, new computational analysis tools are introduced alongside illustrated case studies. These tools, which have not previously been shared with the community via a publicly available and well-tested software package, provide an automated approach to investigate the interfaces present in soft matter systems, the interactions which govern the association of molecules within soft matter, and the self-assembly of soft matter. The software can be employed to investigate different kinds of soft matter systems (and, for that matter, could be utilised for any kind of nanoparticle or interface), since the implemented algorithms are entirely chemically agnostic. With this new release, we aim to support the acceleration of the computer-aided development of novel materials.
6690e02801103d79c5cb42f9 | 3 | In order to accurately understand the interfacial properties of nanoparticles, one must first be able to describe where the interface of interest is located in space. Once the interface is identified, not only can the interfacial properties be measured but also the internal distribution of the components within the nanoparticle, which provides significant insight into its structure. Spherical Density. For nanoparticles that are approximately spherical, the interface can be described by identifying the radius of the nanoparticle from the particle's center of mass. The density of its various components and their environments can then be calculated with reference to the aggregate's center of mass ('spherical density'). There are numerous MD studies that measure the density of components of polymer micelles using this approach, but few open-source codes are available to carry out such analysis. The PySoftK spherical_density tool allows users to easily calculate the spherical density over time, even for structures with varying molecule numbers throughout the simulation. It computes the average density (over time) with respect to the distance from the center of mass of the molecular structure. This is achieved by dividing up the simulation space into spherical bins with the origin at the center of mass of the aggregate. For each of these bins, the number of particles is counted and divided by the volume of the bin, as described in the equation below.
6690e02801103d79c5cb42f9 | 4 | ρ_bin = n_beads / ((4/3)π(R_out³ − R_in³)), where ρ_bin is the spherical density of a particular bin, n_beads is the number of beads in the same bin, R_out is the outer bin radius (the upper bound of the bin) and R_in is the inner bin radius (the lower bound of the bin). The output of the spherical_density tool is therefore a NumPy array with the average spherical density values across time for each bin.
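A minimal pure-Python sketch of this binning follows; the function signature is hypothetical and is not PySoftK's spherical_density API, which additionally averages over trajectory frames:

```python
import math

def spherical_density(distances, r_max, n_bins):
    """Histogram bead distances from the aggregate's center of mass into
    spherical shells and normalize each count by the shell volume,
    i.e. rho_bin = n_beads / ((4/3) * pi * (R_out^3 - R_in^3))."""
    dr = r_max / n_bins
    counts = [0] * n_bins
    for d in distances:
        if d < r_max:
            counts[int(d / dr)] += 1
    densities = []
    for i, n_beads in enumerate(counts):
        r_in, r_out = i * dr, (i + 1) * dr
        shell_volume = (4.0 / 3.0) * math.pi * (r_out ** 3 - r_in ** 3)
        densities.append(n_beads / shell_volume)
    return densities
```

Note that the shell volume grows with radius, so equal bead counts in inner and outer bins correspond to very different densities.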
6690e02801103d79c5cb42f9 | 5 | Additionally, the spherical_density_water class provides a customized version of this algorithm to investigate only the water density. It is a separate function because, to properly calculate the distances of water with respect to the center of mass of the soft-matter aggregate, the water coordinates need to be wrapped around the coordinates of the aggregate while correctly making it whole through the periodic boundary conditions. Since this process is computationally more expensive, a different class was developed which works in exactly the same way as the spherical_density function, but the atom names of the solvent need to be input by the user. For water density calculations, only the oxygen atoms need to be selected.
6690e02801103d79c5cb42f9 | 6 | Intrinsic Density. If a nanoparticle is significantly aspherical, then assuming that it is spherical and defining its interface as such will lead to inaccurate results. Intrinsic techniques have therefore been employed to identify the location of the interface at different points on the surface of the nanoparticle. One such method that has been used
6690e02801103d79c5cb42f9 | 7 | to study nanoparticles with a distinct core-shell structure is the intrinsic core-shell interface (ICSI) method. This approach divides the constituent molecules within a molecular aggregate into a core region and a shell region, with an intermediate region in between. The masses of the core and shell are determined, and the volumes of the core, shell, and interface are calculated. The total density of the aggregate is then computed by dividing the total mass by the sum of the volumes of the core, shell, and interface, which accounts for the varying properties of the core and shell. PySoftK's intrinsic_density class harnesses the ICSI method to perform intrinsic density calculations. PySoftK's implementation enables seamless processing of the entire reconstructed coordinate set provided by make_micelle_whole, and it can handle varying numbers of molecules constituting the structure of interest at each time step. The usage of this function closely resembles that of the spherical density function. Additionally, there is an intrinsic_density_water class for the computation of the intrinsic density of water.
6690e02801103d79c5cb42f9 | 8 | This section describes the tools that we have developed to analyze the different intermolecular interactions that play important roles in the self-assembly of soft matter. We have developed novel methods for investigating ring-ring interactions, which are commonly found within aggregates of conjugated polymers, proteins and other biopolymers, and for determining the solvation of different regions of molecules. Finally, we have developed code that allows a more general assessment of all interactions between molecules within the simulated system.
6690e02801103d79c5cb42f9 | 9 | Ring Stacking Analysis. Ring stacking interactions are the driving force behind many collective phenomena, ranging from DNA base pairing and protein-drug binding to through-space charge transfer in conjugated polymers. A class to identify ring-ring interactions has been developed for PySoftK v1.0. ProLIF 29 can also calculate ring stacking interactions, but only in protein-ligand systems. The approach implemented in PySoftK v1.0 has been specifically engineered for large soft matter systems, where ring-stacking interactions across more than two molecules can be calculated. PySoftK's ring-ring interaction algorithm consists of three stages: firstly, all atoms belonging to aromatic (conjugated) rings within the chosen molecules (or parts of molecules) are detected. Secondly, pairs of molecules within the system are screened using a cutoff to define molecules (or parts of molecules) in close contact. Finally, for those molecules (or parts of molecules) found to be in contact, the necessary geometrical properties between their aromatic units are calculated. The algorithm is explained in detail in the SI. The RSA class allows users to identify ring stacking patterns within a simulation and calculate the network formed by the stacking interactions and its evolution over time. This tool, implemented in PySoftK 1.0, enables the user to perform this analysis based on minimal input parameters: the maximum distance cutoff between two rings for which ring stacking is calculated, the angle cutoff used to determine the range within which two rings are considered stacked, the frames on which to run the analysis, and the output file name. Default values are a distance cutoff of 10 Å and an angular cutoff of 20°. This algorithm has been tested on amorphous F8BT (see Figure (a)) and the protein-protein interaction complex formed between TREM2 and DAP12 (see Figure ).
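The final geometric stage can be sketched as follows. This is a simplified illustration of centroid-distance and normal-angle criteria with the stated default cutoffs, not PySoftK's actual implementation (which is detailed in the SI); the function names are hypothetical:

```python
import math

def ring_normal_and_centroid(coords):
    """Centroid and approximate plane normal of a ring from its atom coordinates
    (the normal is taken from the cross product of two in-plane edge vectors)."""
    cx = [sum(p[i] for p in coords) / len(coords) for i in range(3)]
    u = [coords[1][i] - coords[0][i] for i in range(3)]
    v = [coords[2][i] - coords[0][i] for i in range(3)]
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    norm = math.sqrt(sum(c * c for c in n))
    return cx, [c / norm for c in n]

def is_stacked(ring_a, ring_b, d_cut=10.0, angle_cut=20.0):
    """Count two rings as stacked when their centroid distance is below d_cut
    (Angstrom) and the angle between their plane normals is below angle_cut (degrees)."""
    ca, na = ring_normal_and_centroid(ring_a)
    cb, nb = ring_normal_and_centroid(ring_b)
    dist = math.sqrt(sum((ca[i] - cb[i]) ** 2 for i in range(3)))
    cos_t = abs(sum(na[i] * nb[i] for i in range(3)))  # abs: normal sign is ambiguous
    angle = math.degrees(math.acos(min(1.0, cos_t)))
    return dist <= d_cut and angle <= angle_cut
```

Screening pairs of molecules with a coarse cutoff first (stage two) keeps this per-ring geometry check affordable for large systems.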
6690e02801103d79c5cb42f9 | 10 | Solvation analysis plays a crucial role in understanding the structure and dynamics of amphiphilic soft matter: it allows us to quantify the solvation cells around molecules and to predict hydrophobic interactions. The contacts class identifies intermolecular contacts by measuring the distance between selected atoms: if the distance between two selected atoms on different molecules is less than a user-defined cutoff, it is considered a contact. The value of the distance cutoff used to characterise the interactions of different types of molecules will vary, and can be determined from radial distribution functions between the specific atoms of the two molecules used in the calculation, or through an analysis of the minimum distance between these atoms on two molecules that are known to have aggregated during the simulation. Values of the distance cutoff have been found to be between 4 Å and 7 Å, as shown elsewhere. A visual representation of intermolecular contacts between two poly(ethylene oxide) (PEO) - poly(methyl acrylate) (PMA) polymers within a micelle is shown in Figure (a). The contacts class can utilize the output of the make_micelle_whole tool as an input, which ensures that the distances between atoms within the molecules are calculated correctly across the periodic boundary conditions.
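A minimal illustration of the distance-cutoff contact definition (a brute-force sketch, not the contacts class itself; the function name is hypothetical):

```python
def count_contacts(atoms_a, atoms_b, cutoff=5.0):
    """Count intermolecular contacts between two selections of atom coordinates:
    a pair counts as a contact when its distance is below the cutoff (Angstrom).
    Typical cutoff values are 4-7 Angstrom (see text)."""
    n_contacts = 0
    for xa, ya, za in atoms_a:
        for xb, yb, zb in atoms_b:
            # Compare squared distances to avoid a square root per pair
            d2 = (xa - xb) ** 2 + (ya - yb) ** 2 + (za - zb) ** 2
            if d2 < cutoff ** 2:
                n_contacts += 1
    return n_contacts
```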
6690e02801103d79c5cb42f9 | 11 | Tracking Self-Assembly. The first tool presented is a method to track soft matter self-assembly. The Spatial Clustering Protocol (SCP) algorithm provides a fast way to label molecules based on the cluster or aggregate in which they reside during a self-assembly process. We make use of simple graph theory to represent molecules as a graph, where each molecule is a node; if the distance between specified atoms of any two given molecules is less than the defined cutoff, an edge is added between these two nodes in the graph. In this representation, clusters are rapidly identifiable as connected subgraphs. This makes the analysis suitably fast, such that the dynamical self-assembly process can be quickly rationalized over an entire trajectory. The algorithm returns a pandas DataFrame which contains the molecule resids for each cluster and the cluster size at each time step. More details about the application of graph theory to investigating self-assembly can be found in the Electronic Supplementary Information (ESI).
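The graph construction and connected-component step can be sketched as follows; this is a simplified stand-in for the SCP implementation (the real tool returns a pandas DataFrame of resids and cluster sizes per time step, and the function name here is hypothetical):

```python
from collections import defaultdict

def scp_clusters(positions, cutoff):
    """Spatial Clustering Protocol sketch: molecules are graph nodes; an edge joins
    two molecules whose selected-atom distance falls below the cutoff, and clusters
    are the connected components of the resulting graph.
    `positions` maps a molecule id to a list of selected-atom (x, y, z) coordinates."""
    ids = list(positions)
    graph = defaultdict(set)
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            # Any below-cutoff atom pair between two molecules creates an edge
            if any((pa[0] - pb[0]) ** 2 + (pa[1] - pb[1]) ** 2 + (pa[2] - pb[2]) ** 2
                   < cutoff ** 2
                   for pa in positions[a] for pb in positions[b]):
                graph[a].add(b)
                graph[b].add(a)
    # Depth-first traversal over the graph yields the connected components
    seen, clusters = set(), []
    for start in ids:
        if start in seen:
            continue
        stack, component = [start], set()
        while stack:
            node = stack.pop()
            if node in component:
                continue
            component.add(node)
            stack.extend(graph[node] - component)
        seen |= component
        clusters.append(sorted(component))
    return clusters
```

Because edges are tested against every selected-atom pair, choosing many atoms per molecule slows the calculation, as noted below.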
6690e02801103d79c5cb42f9 | 12 | The choice of which atom to use in the identification of molecular contacts is specific to the molecule of interest and one can select as many atoms as necessary to accurately describe the self-assembly of the molecules. The SCP algorithm calculates the distances between all the selected atoms of the chosen molecules. If any of these distances is below the cutoff it will then add them to the same subgraph. It is important to note that choosing a large number of atoms will slow down this calculation. Figure shows how different atom selection choices affect the output of the SCP clustering for an ABA triblock copolymer. Figure (a) shows the system, two micelles formed by ABA triblock polymers (the A block is hydrophilic; the B block is hydrophobic). Since the hydrophobic block tightly interacts with those of other polymers in the micelle, picking atoms within this domain is a reasonable selection. On the other hand, Figure (c) shows the clustering performed for the same system but picking atoms at the end of the hydrophilic blocks. The hydrophilic atoms at the end of the polymer chains are not suitable choices for clustering and as such, the clusters obtained do not reflect the formation of two micelles. |
6690e02801103d79c5cb42f9 | 13 | Apart from selecting atoms, users must also specify a cutoff distance for clustering. This distance determines whether two molecules belong to the same cluster. The cutoff distance may be obtained from the radial distribution function (RDF) of the selected atoms (either the position of the RDF maximum or of its first minimum). The lack of a single clear choice for the cutoff distance stems from the complex structures of self-assembled aggregates compared to, for example, ions in solution (i.e., the system is inherently not well-mixed). It is therefore necessary to consider a range of cutoff distances; in our experience, investigating cutoffs between 8 Å and 13 Å is typically useful. Visual inspection of the resulting clusters determines the most appropriate cutoff distance straightforwardly in most cases. As well as soft matter, the SCP algorithm can be readily applied to biological systems, highlighting its broad applicability. For example, Figure shows the result of applying the SCP algorithm to a coarse-grained protein simulation to measure the aggregation of the transmembrane domains of a protein within a lipid bilayer.
6690e02801103d79c5cb42f9 | 14 | Unwrapping Large Aggregates Across the PBC. Upon self-assembly, a resultant nanoparticle can span more than half the length of the simulation box in at least one dimension. In order to accurately analyse the nanoparticle and its environment, it is necessary to accurately represent the location of all of the molecules that make up the nanoparticle while accounting for periodic boundary conditions. Various tools have been implemented
6690e02801103d79c5cb42f9 | 15 | Since the input needed to run this tool is not exclusive to polymers, it can be applied to any other type of system to measure molecular clustering. In this case, the figure shows the result of the SCP being applied to CG transmembrane peptides inserted into a membrane, viewed from the top of the peptide-membrane system. Peptides colored in the same way belong to the same cluster: the blue cluster contains two peptides, the pink cluster six peptides, and the orange cluster eight peptides. The lipid membrane is colored silver. The representation is not to scale.
6690e02801103d79c5cb42f9 | 16 | to make molecules whole across the periodic boundary conditions, but these can fail for molecules within a self-assembled aggregate if it spans one or more dimensions of the simulation box and if the aggregate is larger than half the length of the simulation box in one or more dimensions. Figure (a) shows a polymer micelle that has spanned the simulation box in at least two dimensions. Failing to accurately reproduce structures across the PBC can lead to inconsistencies in the analysis and interpretation of soft matter systems. In contrast, the PySoftK make_micelle_whole tool is able to accurately represent the coordinates of molecular structures which span the PBC, even if their size is greater than half the box size. More details about this algorithm are described in the ESI. make_micelle_whole therefore provides an accurate representation of the system, ensuring that any physical properties of the self-assembled structure and its environment are calculated correctly. Certain functions of MDAnalysis, such as radius_of_gyration() or moment_of_inertia(), may produce erroneous results when applied to molecules and aggregates that span at least a single dimension, leading to artefacts in the simulation analysis; for instance, Figure shows the difference between using solely the MDAnalysis radius_of_gyration(pbc=True) function and the corrected calculation on the reconstructed coordinates. Auxiliary Functions. We have included a number of smaller functions to perform simple analysis tasks in a unified way. These are briefly described below.
6690e02801103d79c5cb42f9 | 17 | Radius of Gyration. The rgyr tool utilises the MDAnalysis function radius_of_gyration(pbc=True), but allows users to specify the atom positions and their corresponding resids on which to perform this calculation at each time step. Figure shows the comparison between using the MDAnalysis radius of gyration function alone and the PySoftK rgyr, which captures the correct radius of gyration of the micelle when computed on the whole coordinates from make_micelle_whole.
6690e02801103d79c5cb42f9 | 18 | Eccentricity. Eccentricity, a metric quantifying a structure's deviation from a perfect sphere, serves as a useful tool for assessing the shape of spherical-like soft matter aggregates. The ecc tool calculates the eccentricity for any molecular structure by leveraging the MDAnalysis function moment_of_inertia() and employing the formula ϵ = 1 − I_min / I_mean, where ϵ is the eccentricity value, I_min is the minimum moment of inertia across all axes of the molecule(s), and I_mean is the mean moment of inertia over all axes of the molecule(s). A perfect sphere corresponds to ϵ = 0, while increasing values indicate more oblong structures.
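The formula reduces to a one-liner once the principal moments of inertia are available; a sketch assuming they have already been computed (e.g. via moment_of_inertia()), not the ecc API itself:

```python
def eccentricity(moments):
    """Eccentricity from the three principal moments of inertia:
    epsilon = 1 - I_min / I_mean; 0 for a perfect sphere (all moments equal),
    and larger for more oblong structures."""
    i_min = min(moments)
    i_mean = sum(moments) / len(moments)
    return 1.0 - i_min / i_mean
```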