Columns: id — string (24 characters) · idx — int64 (0–402) · paragraph — string (106 to 17.2k characters)
65ef0567e9ebbb4db972fc82
21
In Figures 4a and 4b, the 𝑐₊ at the final concentration was varied as 1, 1.5, and 2. When the 𝑐₊ at the final concentration was fixed at 1.5, the 𝑐 (in U/µL) was varied as 1.25×10⁻², 2.5×10⁻², and 5.0×10⁻² U/µL. To calculate the division rate, rdiv, we binarized the fluorescent images and analyzed them using Fiji 62. The experimental details are described in Supplemental Methods S2.
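The binarize-and-count analysis can be sketched in a few lines of Python. This is a minimal illustrative sketch only: the fixed threshold, the 4-connected labeling, and the rdiv definition below are assumptions made for illustration, not the authors' actual Fiji workflow.

```python
def binarize(frame, threshold):
    """Turn a grayscale frame (list of rows of intensities) into a 0/1 mask."""
    return [[1 if px >= threshold else 0 for px in row] for row in frame]

def count_droplets(mask):
    """Count 4-connected foreground components via an iterative flood fill."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                count += 1
                stack = [(r, c)]
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols and mask[y][x] and not seen[y][x]:
                        seen[y][x] = True
                        stack.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
    return count

def division_rate(n_before, n_after, dt_min):
    """Illustrative rdiv: net new droplets per droplet per minute
    (hypothetical definition, not taken from the paper)."""
    return (n_after - n_before) / (n_before * dt_min)
```

In practice, Fiji performs the thresholding and particle counting; this sketch only shows the kind of computation involved.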
65ef0567e9ebbb4db972fc82
22
C•A•B-droplet sample solution (2.4 µL) was placed in the 5 mm hole of the observation chamber. The sample solutions were covered with mineral oil to prevent evaporation. The chamber was incubated on a stage heater at 60°C for 30 min and 63°C for 15 min to increase the fluidity of the DNA droplets. After incubation, we added 3.6 µL of a division trigger mixture to the sample solution in the chamber and observed it at 63°C. Depending on the experiment, different division trigger contents and concentrations were used in the division trigger mixtures. The experimental details are described in Supplemental Methods S3.
61b4d2acd10aa5d91c0cbd2a
0
Vitamin D (1, VitD) was identified in the early 20th century as one of the essential dietary molecules required to support human life. A deficiency of VitD3 (4, Figure ) can result in a range of maladies such as a weakened immune system, rickets, and osteomalacia. The structure of VitD3 was first elucidated by Windaus and Thiele in 1936, building on the findings of Askew et al., who demonstrated that VitD2 (5) could be derived from irradiation of ergosterol. Radiolabeling studies established the most active form of VitD in the body to be calcitriol (6). From 1965 to 1975, foundational pharmacological studies established the basic role that VitD plays in regulating serum calcium and phosphorus and bone homeostasis. Critically, VitD is also implicated in a number of biological processes, specifically through the regulation of the transcription of hundreds of genes in a cell-specific fashion. As a consequence, starting in the 1950s, an enormous amount of effort was expended by medicinal chemists in the search for VitD analogs that could exhibit prodifferentiating and antiproliferative effects on normal and cancer cells, as well as immunomodulatory effects, without causing hypercalcemia. Such programs resulted in the synthesis of >3000 modified VitD analogs and the commercial launch of about a dozen new medicines (e.g., Hectorol, Zemplar, Calcijex, Rocaltrol). Calcipotriol (7, Dovonex™, LEO Pharma) is currently the most successful VitD analog and is prescribed for the treatment of psoriasis, an autoimmune skin disease. To date, the majority of approaches to access VitD derivatives (1) have utilized a convergent assembly of an A-ring fragment (2) with a fully formed CD bicycle (3, including the side-chain). As the side-chain appears to play a critical role in modulating bioactivity, this results in very lengthy medicinal chemistry routes. The case study from LEO Pharma depicted in Figure is emblematic of this challenge.
Thus, in order to explore even a simple aryl side-chain substituent, it must be tediously stitched onto the bicyclic aldehyde (8) [derived from degradation of VitD2 (5)], followed by coupling to the A-ring, thereby severely limiting accessible chemical space. The goal of this study was two-fold: (1) to design an enantioselective, scalable, and convergent approach to 7 not wedded to semisynthesis; and (2) to use such a platform to create known and novel VitD analogs in a modular fashion (such as 1). Extensive explorations from both academic and industrial arenas in this area provided a useful foundation for this study and set the stage for a completely new approach. In this Article, we describe the realization of the plan outlined in Figure through the strategic application of radical retrosynthesis, for both ring annulation and substitution, to address the CD rings, as well as symmetry recognition to scalably access ring A. This convergent approach enables late-stage side-chain installation onto triene 9 (a stable precursor to VitD analogs) prepared via the union of fully formed A-ring (10) and CD-ring (11) fragments derived from inexpensive materials (p-cresol and cyclohexanone, respectively).
61b4d2acd10aa5d91c0cbd2a
1
Just as the medicinal chemistry of steroids largely relies on a rich history of degradation and semisynthesis, explorations of the VitD class rely on such an approach. Thus, semi-synthesis has been employed by a variety of research programs, reported in ca. 90 publications and 90 patents, to access thousands of VitD analogs (Figure ). Specifically, VitD2 could be oxidatively degraded to the so-called Inhoffen-Lythgoe diol ( ), containing the CD-ring system. Subsequent side-chain installation and reconstitution with a suitably functionalized A-ring surrogate (derived through synthesis) provided a multitude of VitD analogs and natural products (Path A, Figure ). In this way, many commercial medicines, such as those outlined in Figure, have been discovered. An alternative approach (Path B, Figure ) often involves protection of the configurationally and chemically labile triene system (typically through cycloaddition with SO2, 13), followed by oxidative side-chain removal, reconstitution, and retro-cycloaddition/isomerization to unveil the bioactive VitD analog. Such a route is used by LEO Pharma to manufacture calcipotriol (7) and requires a 14-step sequence, including a tedious HPLC separation resulting from a lack of stereocontrol in a key reduction, as well as several PG manipulations. In contrast, the use of fully synthetic routes to access VitD analogs in an industrial setting has, to the best of our knowledge, been unreported. Such efforts have therefore been confined to the academic space, where the VitD synthetic challenge has inspired numerous instructive pathways, such as the five routes illustrated in Figure . Path A represents an elegant example of how a cyclopropane ( ) could be leveraged to protect the triene moiety, yet it relies on diazo chemistry for its preparation, which could be prohibitive on scale. 11 Path B is the oldest strategy reported, employing a photochemical ring opening of 7-DHC derivatives (15) to allow for facile seco-B triene formation.
However, the numerous redox manipulations needed to establish the triene core and the requisite oxidation at C1 render this approach less attractive. The most convergent approach, and the one that has received widespread adoption for the synthesis of the VitD core, is the Lythgoe-Roche strategy (Path C) between a trans-hydrindane core 17 and advanced phosphonate 16. The synthesis of 16, initially developed by Baggiolini, was achieved in 13 steps from (L)-carvone, and 16 is even available commercially from a single vendor (Syncom BV). The coupling of this advanced substrate with 17 (Figure ) or a synthetically prepared trans-hydrindane moiety (generally accessed through degradation of VitD2) provides the desired derivatives. Another well-adopted strategy is the metal-catalyzed cycloisomerization approach pioneered by Trost and co-workers, as shown in Path D. This clever strategy employs a vinyl halide, typically prefunctionalized with the desired side-chain derivative, together with the seco-A-ring synthon enyne 18. As such, this powerful method allows for direct A-ring and triene formation in a single reaction step; however, obtaining the acyclic enyne 18 with high enantiopurity has proven to be a challenge. Finally, Mouriño and co-workers initially developed the reduction/isomerization approach to access the requisite triene system found in the vitamin D scaffold (Path E). This approach relies on a semihydrogenation/thermal isomerization of dienyne 19 to the desired scaffold. Given the ease with which such an approach could potentially be performed on process scale, the convergence it enables, and the hidden symmetry of such a building block, this route would offer an alternative to semi-synthesis, both for the commercial preparation of 7 and for analogs thereof. Figure outlines the blueprint that was devised to achieve this goal.
Thus, VitD analogs (1) differing at the key side-chain could conceivably be accessed through a late-stage decarboxylative cross-coupling between triene 20 and a suitable coupling partner followed by reduction/isomerization. This modern disconnection could even be employed in a semi-synthetic approach to rapidly access new chemical space while the full total synthesis was developed. The dienyne precursor of 20 could be disconnected into fragments 21 and 22. Since the former fragment could be accessed easily using semisynthesis, it was critical that the route to the latter building block be robust and scalable. Nevertheless, a scalable route to 21 was devised based on the strategic combination of semi-pinacol and HAT annulation tactics from diene 23. This diene could then be accessed through a recently devised electrochemical reductive cross-coupling between enone 24 and vinyl iodide 25. For the A-ring fragment 22, an ambitious desymmetrization approach was targeted by engaging a suitably substituted dienone 26.
61b4d2acd10aa5d91c0cbd2a
2
At the outset of this work, certain criteria needed to be met from the standpoint of starting material and reagent cost so as to be competitive with the semi-synthetic approach on scale. Aside from keeping the step count low, a fully stereocontrolled route was needed to avoid tedious separations on scale. Ideally, a process-friendly blueprint featuring crystalline intermediates, no cryogenic reaction conditions, and inexpensive reagents was targeted. The classic strategy of cyclohexadienone desymmetrization was strategically appealing due to the low cost of the aromatic starting materials and the multitude of options for functional group installation. To be sure, enyne 22 could conceivably be accessed through the stepwise difunctionalization of a symmetrical pro-chiral dienone 23 (Figure ). As such, two parallel pathways leading to the same advanced intermediate 24 can be envisaged, which could be transformed to the final A-ring enyne 22 following diastereoselective ketone reduction and subsequent dehydration. Specifically, access to enantiopure 24 could potentially be achieved through either an enantioselective alkynylation or hydroxylation, or a formal equivalent thereof. The ability to interchangeably employ either path with multiple tactics proved appealing from a route-scouting standpoint. For either route, the initial desymmetrization requires both high regio- and facial selectivity to control the ee and de, respectively (see Figure ). The fact that the C-5/C-10 stereocenters are eventually ablated after dehydration provided additional flexibility in the design. Such logic has been shown to be a powerful strategy for creating multisubstituted cyclohexane scaffolds in a stereocontrolled fashion. Finally, since A-ring building block 22 shares the same formal oxidation state as benzene, all reactions needed to be carefully orchestrated to avoid potential re-aromatization/decomposition.
Some initial forays to deconvolute this maze of options are outlined in Figure . Path A, which involves an initial conjugate addition followed by formal β-oxygenation, was evaluated first. In a racemic approach, the initial conjugate addition worked smoothly, followed by diastereoselective epoxidation and global reduction to open the epoxide and access the correct alcohol stereochemistry in 22. Unfortunately, this concise route could not be rendered enantioselective despite extensive efforts (see SI for a summary). The route was nevertheless instructive in that it set an important precedent for the diastereoselective installation of the critical C-1/C-3 hydroxyl groups in 22. Concurrent with this approach, an ambitious proposal to enlist the Zard alkyne synthesis was pursued, wherein an isoxazolone serves as a surrogate for the unveiling of an alkyne. Although the isoxazolone engaged in a Michael addition to deliver 25, such an adduct could not be unraveled to provide the requisite alkyne. Turning to Path B, the most intuitive approach of enantioselective epoxidation was surveyed. However, as with the aborted route described above, the racemic pathway delivered the desired diastereomeric diol 24 but could not be rendered enantioselective in a practical fashion. Finally, an approach inspired by You's work on the desymmetrization of cyclohexadienones via Brønsted acid-catalyzed enantioselective oxa-Michael reaction was pursued by targeting tartrate-derived dienone 26. The tartrate was expected to act as an auxiliary to introduce the chirality; however, this target proved to be inaccessible in synthetically useful yields (3-5% observed after screening multiple conditions for oxidative dearomatization). It is postulated that p-cresol oligomerization was faster than the nucleophilic attack, resulting in the low yield of 27.
61b4d2acd10aa5d91c0cbd2a
3
We next investigated alternative oxidation approaches, in particular borylation-oxidation. Although the desymmetrization of 1,4-dienones via borylation is not known, there are examples of asymmetric borylation of both acyclic and cyclic enones. As mentioned previously, while the dienone substrate appeared to be unreactive under most epoxidation conditions, borylation of dienone 28a under Kobayashi's conditions gave the desired product in racemic form. Translation of these conditions to the asymmetric version proved challenging, wherein Kobayashi's bipy-derived ligand failed to deliver any appreciable yields of the desired borylated compound 29; however, a survey of various BOX ligands led to the discovery that the commercially available (R,R)-iPr-Pybox L1 readily desymmetrized the molecule with excellent enantioselectivity, albeit in low yields. Furthermore, there was a delicate balance between conversion and enantioselectivity at the 4-hydroxy position, wherein the judicious choice of a TES protecting group proved optimal for both ee and overall yield (Figure , Table ). After further optimization, this symmetry-breaking borylation-oxidation sequence could be performed on 50 mmol scale in 75% yield and 94% ee over 3 steps (see SI for full details). Importantly, the use of L1 in process chemistry is precedented. With enantiopure diol 30 in hand, attention turned towards installing the requisite alkyne and internal olefin found in enyne 22. Conjugate addition to the resulting dihydroxyenone 30 occurred with complete diastereoselective control to deliver ketone 31 in excellent yield. It is worth noting that the chelation-controlled alkynylation proved highly chemoselective even at ambient temperature. NaBH4 reduction of 31 proceeded with high trans selectivity (d.r. > 20:1, relative stereochemistry confirmed via nOE), followed by selective bis-TBS protection of triol 32 to deliver tertiary alcohol 33.
It was anticipated that the elimination of alcohol 33 would prove challenging due to its inaccessibility; however, after various attempts (see the SI), the use of SOCl2 in pyridine/DCM smoothly provided the desired enyne 22 after TMS deprotection.
61b4d2acd10aa5d91c0cbd2a
4
To summarize the route to enyne 22, a highly enantioselective conjugate borylation of an easily accessible cyclohexadienone sets the stage for all subsequent transformations to occur with both high chemo- and diastereoselective control. From an efficiency and scalability perspective, the route traverses 3 isolated intermediates from inexpensive p-cresol and can be conducted at non-cryogenic temperatures on multi-gram scale. It thus represents one of the most process-friendly paths to date to a fully synthetic A-ring VitD module.
61b4d2acd10aa5d91c0cbd2a
5
Within the rich history of steroid synthesis, one extremely popular chapter involves approaches to the CD-ring system, a deceptively simple-looking fused bicycle (Figure ). This thermodynamically unfavored trans-hydrindane (6,5-trans-fused) core is inherently strained, rendering syntheses of this bicycle challenging. The most popular general strategies are summarized in Figure . Amongst all the total synthetic efforts towards the CD-ring fragment (>100!), few examples can deliver this structure within ten steps. In the vast majority of cases, either the starting materials for the key ring-constructing step required lengthy routes, or extensive concession steps (redundant redox manipulations and functional group interconversions) were employed after the ring system had been constructed. For the specific purpose of synthesizing VitD, key intermediate 11 with oxygenation at C-8 was targeted. Many of the routes used to access a CD-ring precursor for steroids are not easily adapted for VitD. For example, the elegant 3-component coupling between 2-methylcyclopentenone, a vinylsilyl methyl ketone, and an optically active iodo-olefin 34 (4-step synthesis from leucine) reported by Tsuji and co-workers arrived at 35 and was shown by others to require additional steps to forge the trans-hydrindane (Path A). Wilson's intramolecular Diels-Alder approach to a minimally functionalized ring system required a lengthy (racemic) sequence to the unsaturated precursor and resulted in poor diastereocontrol (Path B). Johnson's classic cation-olefin cyclization approach (Path C) successfully forged the trans-6-5 ring system (87:13 d.r., 82% isolated yield); however, precursor 36 was arduous to prepare (8 steps and an expensive, non-recyclable chiral auxiliary), and the resulting allene required several additional steps to install the side-chain functionality.
The Mouriño group utilized a Pauson-Khand cyclization (Path D) to form the CD-ring skeleton and a Si-assisted allylic substitution to set the challenging pivotal quaternary methyl group at the fused-ring junction of the CD trans-hydrindane core, followed by an additional 9 steps to install the side chain. Takahashi reported that a radical cyclization of 37 (10-step preparation) provided the trans-ring fusion after anionic ring cyclization (Path E). The C-ring could also be installed first, followed by cyclization of ring D, as exemplified by Stork's classic approach (Path F). As with the other routes, this indirect path required multiple steps. As mentioned above, the six examples shown here are not a comprehensive summary, but rather a selection of strategically diverse synthetic blueprints to this ring system known at the outset of these studies. Ultimately, the Hajos-Parrish ketone ( ) is historically one of the most popular starting materials for total synthesis approaches when the CD-ring system is retained. By far, the most practical approach to CD fragments remains semi-synthesis via vitamin D2 degradation (to the Inhoffen-Lythgoe diol, 12). Indeed, LEO Pharma employed this starting material for their in-house medicinal chemistry efforts (Figure ). In contemplating a new approach to the strained 6,5-fused bicyclic system in 38, two strategies emerged to the forefront (Figure ): late-stage stereocontrolled hydrogenation of olefin 39, or semi-pinacol rearrangement of protected diol 40. For the former approach, numerous concise routes were designed (see SI for examples). In order to de-risk the path, 39 could be prepared from VitD2 semisynthetically. Unfortunately, despite screening dozens of reduction conditions, the coveted trans-stereochemistry could never be obtained. Efforts then shifted completely to the latter semi-pinacol approach. The advantage of this design was that the trans-stereochemistry could be pre-programmed by virtue of the diol precursor 40.
In turn, this isopropylidene-protected diol served as a rigid scaffold (cis-6,5 bicycle) that could potentially control the stereochemical outcome of an intramolecular reductive HAT-based olefin coupling 32 on substrate 23. Such a disconnection was ambitious, since it would need to controllably form a key quaternary center and two contiguous tertiary stereocenters in one step. From modeling studies there was confidence that the key C13-C17 bond would form in the desired fashion, although the stereochemical outcome at C20 was uncertain. HAT precursor 23 could be traced back to inexpensive 2-cyclohexenone and an acrylate. Our initial entry towards the enantioselective synthesis of advanced CD-ring 38 commenced by targeting enoate 40 (Figure ). A Baylis-Hillman reaction of 2-cyclohexenone with t-butyl acrylate in the presence of catalytic DBU delivered enone 41 in 81% yield on multi-gram scale. Although various disconnections and conditions were explored to deliver the diol in 42 (see SI), an asymmetric dihydroxylation of such an olefin appeared the most appealing choice. Although NMO as the co-oxidant with citric acid in the presence of catalytic OsO4 efficiently delivered the dihydroxy product in racemic fashion, the asymmetric version proved more challenging. Indeed, dihydroxylation of electron-deficient alkenes suffers from poor enantioselectivity, especially in endocyclic systems. Moreover, alkyl α-substituents further exacerbate this issue and are known to interfere with the coordination between the osmium catalyst and the alkene. As evidenced by the fact that there are only a few examples of asymmetric dihydroxylation of α-substituted cyclopentenones or cyclohexenones, dihydroxylation of such systems still proves challenging today. In the initial optimization, potassium ferricyanide was able to deliver the desired dihydroxy ketone 42 in 66% ee.
Consistent with Sharpless's original study on dihydroxylation of α,β-unsaturated ketones, 35b the reaction utilizing AD-mix gave only trace amounts of 42, requiring extra osmium due to the less reactive olefin as well as additional base to buffer the reaction. After screening a variety of additives, ligands, and solvent combinations (see SI), diol 42 was obtained with 67% conversion and 74% ee. In its most optimized form, diol 42 could be obtained in 42% yield with 82% ee on gram scale. The desired enantiomer could be further enantioenriched through recrystallization (gram scale). With ample quantities of diol 42 in hand, protection with 2-methoxypropene gave the corresponding acetonide, which served to rigidify the system. Subsequent methylenation required extensive optimization, as standard olefination conditions (Wittig, salt-free Wittig, Peterson olefination) gave low conversion or low yields on scale, while most Ti-based reagents (Takai-Utimoto, Lombardo, Petasis) proved similarly unsuccessful. The use of the Nysted reagent ultimately delivered olefin 43 in respectable yields. After further optimization (see SI), the use of the Nysted reagent in conjunction with Ti(OiPr)2Cl2 delivered the desired olefin 43 in 61% yield (2 steps from diol 42, gram scale). Saponification of tert-butyl ester 43 delivered acid 44 in quantitative yield, which then set the stage for the key C-C bond-forming step (44 to 40). A number of approaches were evaluated for achieving this conversion on 44 and related intermediates; ultimately, the newly developed electrochemical reductive cross-coupling proved successful in accessing advanced intermediate 40. Thus, in-situ activation of acid 44 with NHPI, followed by reductive coupling with vinyl iodide 45 enabled by Ag-nanoparticle-functionalized electrochemical cross-coupling, delivered HAT precursor 40 in 48% yield on small scale.
Gratifyingly, a correlation between yield and reaction concentration was observed, ultimately providing 40 in 63% yield (gram scale). With enoate 40 in hand, attention turned towards establishing the CD-ring core via an intramolecular HAT-mediated annulation (Figure ). During the course of extensive optimization (see SI for full details), the utilization of PhSi(OiPr)H2 was found to be crucial, whereas phenylsilane failed to give 46. In general, the solvent choice had very little effect on the reaction conversion (entries 1-4), although ethyl acetate/iPrOH proved detrimental (entry 5). A survey of catalysts revealed that manganese(III)-based catalysts led only to unsatisfactory yields (entry 7), while Fe-based catalysts (Fe(dpm)3, Fe(acac)3) delivered the desired annulated product in appreciable yields (entries 1, 6). After a thorough screening of conditions, the use of Fe(dpm)3 and PhSi(OiPr)H2 in DCE/(CH2OH)2 delivered the desired stereoisomer 46 in 54% isolated yield. It is noteworthy that this HAT-mediated cyclization installs 3 contiguous stereocenters, with the major diastereomer possessing the correct C20 stereochemistry of the VitD family. The direct coupling could be practically achieved only under the newly established electrochemical conditions, with vinyl bromide 50A being the most suitable choice for this transformation (see SI). In the event, RAE 49 and 50A were subjected to Ag-nanoparticle-based electrochemical conditions to deliver 51 in 48% NMR yield (carried crude to the next step) as a 1:1 mixture of diastereomers (ca. 75 mg scale). The endgame sequence involved tandem semihydrogenation/isomerization followed by global deprotection to furnish calcipotriol in 38% isolated yield over 3 steps from 49.
61b4d2acd10aa5d91c0cbd2a
6
To facilitate access to arylated VitD analogs that had previously required ca. 9 steps to prepare (Figure ), late-stage side-chain installation onto triene 52 was pursued (9, Figure ). Thus, enyn-ene 48 was transformed to triene acid 52 via a two-step protocol (reduction/isomerization, then saponification) in 46% isolated yield over 2 steps. Fe-catalyzed decarboxylative arylation, 41f via in-situ activation of acid 52 with HATU and addition of the corresponding Ar2Zn in the presence of 20 mol% Fe(acac)3, provided the desired arylated VitD analogs in appreciable yields (53-57, 21-40%) as mixtures of diastereomers. The value of this approach is evident from the synthetic shortcut it provides to such arylated calcipotriol analogs. Convergent access to these analogs is facilitated by the chemoselective nature of the decarboxylative coupling, which does not require protection of the reductively, oxidatively, and photochemically sensitive triene system that historically required an additional round of protection.
61b4d2acd10aa5d91c0cbd2a
7
With C20 analogs 53-57 in hand, attention turned towards determining their cellular efficacy and overall potency in a human PBMC assay measuring the secretion of IL-17A. IL-17A is a major effector cytokine in the pathogenesis of psoriasis, and antibodies targeting IL-17A or its receptors have been shown to be highly efficacious in psoriasis patients; calcipotriol (7) likewise potently inhibits the secretion of IL-17A. In the assay, human PBMCs were supplemented with IL-23 to stimulate the Th17 pathway and incubated with the T-cell receptor-crosslinking antibodies anti-CD3/anti-CD28 for 3 days to stimulate the secretion of IL-17A.
61b4d2acd10aa5d91c0cbd2a
8
Comparison of the activity of the C20-arylated vitamin D analogs 53-57 in the human PBMC IL-17 release assay identified several potent compounds, although a moderate overall loss in potency was observed compared to calcipotriol (7) (Figure ). In general, a clear difference in potency was observed between the two C20 stereoisomers of each arylated compound tested. Hence, (R)-53a showed an EC50 of 104 nM, while the corresponding (S)-53b exhibited a complete loss of activity. (R)-54a, bearing a -OMe group, showed a 9-fold increase in potency compared to 53a, while its (S)-counterpart 54b showed only moderate activity. Interestingly, in the case of the
61b4d2acd10aa5d91c0cbd2a
9
[Table: viability EC50 (nM) > 10000 for calcipotriol (7), VitD3 (6), and all analogs tested.]
para-substituted iPr analogs 55a and 55b, an inversion in potency between the two isomers of 55 was observed, wherein the (S)-isomer 55b was approximately 40-fold more potent than the (R)-isomer 55a. An overall marked loss in activity was seen for both isomers of the p-CO2Me analog 56. The meta-biphenyl compound 57a was the most potent of the analogs tested, with an EC50 of 5.9 nM, while a near-complete loss of activity was observed for its corresponding isomer 57b. Gratifyingly, none of the compounds tested showed any toxicity (EC50 > 10 µM) in a human PBMC viability assay.
61b4d2acd10aa5d91c0cbd2a
10
In conclusion, a completely synthetic approach towards the scalable synthesis of calcipotriol and related analogs, not wedded to semi-synthesis, is described. Several features are worth noting: 1) symmetry recognition to scalably access the A-ring via an unprecedented enantiocontrolled conjugate borylation of a cyclohexadienone; 2) strategic application of radical retrosynthesis for the rapid development of the CD ring using recently developed methods such as Ag-nanoparticle-enabled electrochemical reductive cross-coupling and a highly diastereoselective HAT-mediated annulation; 3) implementation of a semi-pinacol rearrangement to address the thermodynamically unfavored 6,5-trans ring fusion; 4) modular access to medicinally relevant analogs, again reliant on Ag-nanoparticle electrochemical reductive cross-coupling, bypassing the need for a custom route for each derivative; and 5) scalability of the routes to key fragments 47 and 22 from inexpensive starting materials without the need for cryogenic temperatures. This modern take on the synthesis of a classic molecule (vitamin D) builds on the rich history of prior syntheses and provides uniquely efficient access to both natural and unnatural analogs for exploration in medicine.
64d2ae694a3f7d0c0dd611e2
0
much faster by optimizing known-good solutions. To use this strategy, however, prior knowledge is necessary. Additionally, it may decrease population diversity and preclude the exploration of non-intuitive candidates. The other, and most commonly used, option is random selection from the search space. With this method, no prior knowledge of the system is required and no restrictions are imposed. This helps start the GA with a diverse pool of candidates and allows a wide range of chemical space to be covered.
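The two initialization strategies can be sketched together. This is a minimal sketch with illustrative names (not from any specific GA library): optional seed genomes encode prior knowledge, and the remainder of the population is drawn uniformly at random from the building blocks.

```python
import random

def init_population(building_blocks, genome_len, size, seeds=None, rng=None):
    """Build an initial GA population.

    seeds : optional list of known-good genomes (prior knowledge);
            any remaining slots are filled with random genomes.
    """
    rng = rng or random.Random()
    pop = [list(s) for s in (seeds or [])][:size]   # seeded individuals first
    while len(pop) < size:                          # fill the rest randomly
        pop.append([rng.choice(building_blocks) for _ in range(genome_len)])
    return pop
```

Passing no `seeds` recovers the fully random initialization described above; passing a few seeds biases the start of the search while the random remainder preserves some diversity.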
64d2ae694a3f7d0c0dd611e2
1
The individuals in each population need to be scored to assess how well they optimize a given property. The fitness function is a model that scores each individual and can range widely in complexity, from relatively simple tasks, such as maximizing molecular weight (easily calculated with RDKit), to more time-intensive tasks, such as calculating the dipole moment with quantum-chemical methods like DFT. In most GAs, the fitness function is the rate-limiting step, so care must be taken to use cost-effective calculations. One way to do this is to use computationally inexpensive techniques, such as semi-empirical methods like sTD-DFT-xTB for calculating the optical bandgap instead of the more costly but accurate TD-DFT. Another common method is to use ML as the fitness function. For properties that are either computationally time-intensive or too difficult to calculate, a pre-trained ML model can drastically speed up fitness evaluation. One example of using ML as the fitness function in a GA is reported in our previous work, where we developed an ensemble random forest and neural network ML model to predict the power conversion efficiency (PCE) of materials for organic solar cells. This model was then used as the fitness function in a series of GAs to develop and find the best combinations of new materials for tandem organic solar cells. Power conversion efficiency is arguably one of the most important metrics when developing solar cells, yet there is no simple computation to calculate it: it depends on multiple properties, and ML is one of the most widely used methods for computationally predicting it.
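In code, a fitness function is simply a pluggable scoring callable. In the toy sketch below, the fragment "weights" are made-up illustrative numbers standing in for a real evaluator such as RDKit's molecular weight, an xTB calculation, or a pre-trained ML model.

```python
# Illustrative fragment "weights" (not real chemical data).
FRAGMENT_WEIGHT = {"A": 12.0, "B": 31.5, "C": 47.25}

def fitness(genome):
    """Toy fitness: total weight of the genome's fragments,
    standing in for e.g. an ML-predicted PCE or a DFT property."""
    return sum(FRAGMENT_WEIGHT[g] for g in genome)

def score_population(population, fitness_fn):
    """Evaluate every individual. In real GAs this is the costly step,
    so fitness_fn may wrap a cached or pre-trained ML model."""
    return [(ind, fitness_fn(ind)) for ind in population]
```

Because the GA only ever calls `fitness_fn`, swapping a cheap semi-empirical method or an ML surrogate for an expensive DFT calculation requires no change to the rest of the loop.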
64d2ae694a3f7d0c0dd611e2
2
To repopulate the subsequent generations, the first step is to select two parents from the previous generation. A known problem of GAs is "premature convergence", or having a population dominated by one type of solution. This can prevent the GA from discovering the globally best candidate by getting stuck in a local extremum. Thus, this operation is crucial for maintaining diversity while still allowing rapid convergence toward highly fit individuals.
There are multiple types of selection methods, such as a k-way tournament, roulette wheel, stochastic universal sampling (SUS), rank, and random. In a k-way tournament, k random individuals are selected and their fitness scores are compared with each other . The best-scoring individual is selected as a parent and this process is repeated to select the second parent. Another popular method is roulette wheel, where the chances of an individual being selected are proportionate to its score . Just like a roulette wheel, the wedges on the wheel are the size of the individual score divided by the sum of all scores in that generation. A random number is generated, or the "wheel is spun", to pick the location on the wheel for the selection of the parent. This process is repeated for the other parent. Similarly, SUS is another wheel-based method, where instead of spinning the wheel twice, the wheel is spun once and two points on the wheel are chosen for the two parents . Another similar method to roulette wheel is rank selection, where the sizes of the wedges are proportionate to the ranking of the population, not the individual score . This can be beneficial in multiple circumstances, such as negative fitness scores or when the fitness scores are close together (leading to almost equal wedge sizes and no selection pressure).
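As an illustration, k-way tournament and roulette-wheel selection can each be sketched in a few lines of Python (the function names and list-of-scores interface are our own, not from any particular GA library):

```python
import random

def tournament_select(population, scores, k=3, rng=random):
    """k-way tournament: sample k individuals and return the best-scoring one."""
    contenders = rng.sample(range(len(population)), k)
    return population[max(contenders, key=lambda i: scores[i])]

def roulette_select(population, scores, rng=random):
    """Roulette wheel: selection probability proportional to fitness.

    Assumes non-negative scores; wedge sizes are score / sum(scores).
    """
    spin = rng.uniform(0, sum(scores))
    running = 0.0
    for individual, score in zip(population, scores):
        running += score
        if running >= spin:
            return individual
    return population[-1]  # guard against floating-point round-off

# Select two parents with a 3-way tournament:
parents = [tournament_select(["A", "B", "C", "D"], [1, 4, 2, 3]) for _ in range(2)]
```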
These aforementioned methods are all classified as fitness proportionate methods, meaning that higher-scored individuals have a higher chance of being selected, and the entire population is accessible for selection. A different and much simpler method is "random" selection, where 2 parents are randomly selected either from the entire population or from some top percentage of individuals . By selecting from only the top candidates, this selection can allow for faster convergence.
While there has not been a thorough comparison between these methods, the most popular ones for chemical applications are roulette wheel and tournament selection . An analysis comparing GA parameters for heterogeneous catalysts found that, out of wheel, rank, threshold, and tournament selection methods, 3-way tournament selection was the best . Another technique implemented by De Sousa et al. randomly selects parents from a pool of the previous generation, plus the top 10 individuals obtained thus far . Previous GA implementations in our group used a random selection method by randomly selecting two parents from the top 50% of candidates .
A strategy to speed up GAs while balancing exploration and exploitation is the use of elitism. This approach keeps a certain percentage of the top candidates to pass down to the next generation unchanged, ensuring there are always good traits in each population . Without using elitism, the GA will converge much slower and is less likely to reach champion performers. Elitism is almost always used in GAs, although the percentage of candidates denoted as "elites" can vary.
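Elitism itself takes only a few lines; here is a minimal sketch, assuming fitness is being maximized and the elite fraction is a hyperparameter:

```python
def apply_elitism(population, scores, elitism=0.5):
    """Return the top `elitism` fraction of the population, best first.

    These individuals pass to the next generation unchanged.
    """
    ranked = sorted(zip(scores, population), key=lambda pair: pair[0], reverse=True)
    n_elite = int(len(population) * elitism)
    return [individual for _, individual in ranked[:n_elite]]

elites = apply_elitism(["a", "b", "c", "d"], [1, 4, 3, 2], elitism=0.5)
print(elites)  # ['b', 'c']
```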
In biological systems, an individual's physical traits are encoded in its DNA, which is unique to each individual and contains information from both of its parents. During reproduction, parts of the DNA from both parents cross over to form a new sequence for the child. During this process, there is a chance a gene may undergo mutation and lead to a different trait. Similarly, in GAs, each individual solution has a unique set of genes that contains information from parents in the previous generation. Each individual's genes need to be represented computationally in a way that allows for efficient crossover and mutation. The simplest representation is binary strings, where each position codes for a trait. In chemical applications, it is difficult to maintain all relevant molecular information with this strategy. The most common representations for chemical GAs are molecular strings such as SMILES and SELFIES, or molecular graphs where the nodes are atoms and the edges are bonds. SMILES-based representations commonly use a fragment-based approach, using a library of SMILES fragments to mix and match when designing new molecules. One drawback to this method is that it places more limits on molecular design and therefore on the search space. Alternatively, performing random mutations on a SELFIES string will always maintain chemical validity and allow for near-infinite possibilities. This representation has already been incorporated into successful GAs, allowing for the insertion, deletion, and replacement of characters in the string. A more recent technique is the highly successful graph-based GA developed by Jensen, which has been shown to quickly explore chemical space, with the resulting molecules bearing little resemblance to the initial population.
Since each individual is a graph instead of a string, crossover of parents occurs by cutting each parent into fragments and swapping the fragments to make the new molecule, while ensuring the newly formed children are chemically valid. Graphs can have mutations similar to SELFIES, such as insertions, deletions, and replacing atoms. Additionally, there are opportunities to mutate the bonds, such as changing the order of the bonds or adding a ring bond . In this work, we use fragment-based GAs with SMILES representations because of their simplicity and easily defined search space.
The crossover and mutation operators are essential for traversing chemical space. One of the main challenges with genetic algorithms is the balance of exploration and exploitation. In each generation, there is a population of candidates, some of which perform much better than others in an optimization task. Considering that the goal of the GA is to find high-performing solutions, we want to optimize the top candidates in each generation by mixing and matching their genes. This crossover step is crucial for exploiting the nearby chemical space. However, if we were to rely on just the crossover operation, the GA can easily get trapped at a local maximum and fail at finding the global best solution. To resolve this, the mutation operation can be performed on the newly formed child, during which one of the child's genes receives a random change. Mutation ensures diversity and introduces new genes into each population. Since we still want to balance exploitation and exploration, each child has a chance of undergoing a mutation. The probability of a mutation occurring is set by the mutation rate, one of the hyperparameters of the GA. Although a wide variety of mutation rates have been reported in the literature , ranging from 1% to 50%, there have been few reports on optimizing this parameter for chemical applications . In this work, we examine mutation rates ranging from 10% to 90% with 10% increments.
Across the wide variety of chemical problems probed by GA searches in recent years, the most common way to end a run is to have the GA terminate after completing a predetermined number of generations, declaring it "converged" at that point, as demonstrated in the schematic in Figure 2. The number of generations is set at a level thought to be substantially greater than the number needed to reach a plateau where the top performer ceases to change for many generations. This approach is common because it generally works and is easy to implement, but it has several drawbacks. First, it requires an initial guess of the approximate number of generations necessary to reach convergence, resulting in initial runs that may either have to be run for hundreds of generations more than necessary or be actively monitored by the researcher to determine when to manually stop the GA. Even once a good estimate of the number of generations needed for convergence has been established, the GA must be set to complete a substantial number of additional generations to ensure that the random variability of each run is taken into account and the GA does not terminate prematurely; this can result in wasted resources as a GA runs far longer than necessary. Finally, this predetermined-endpoint approach is not easily transferable between GA methods for different applications. Whenever the optimization target changes (for example, choosing to optimize a different chemical property or set of properties), or even when the hyperparameters of a GA are changed, the number of generations needed for convergence can be affected. This means that new initial guesses are needed, and it is difficult to compare the endpoints of GAs without identical parameters due to the lack of a standard convergence protocol.
Kwon et al. recently implemented a genetic algorithm with a generalized convergence criterion as part of a broader evolutionary design method. In their algorithm, they defined two convergence conditions: a minimum of 500 total generations and a period of 30 consecutive generations during which the fitness was not enhanced. They terminated the evolutionary cycle only after both conditions were met. Previous work in our group also explored termination in a molecular GA focused on building new chemical structures from a pool of known monomers. This method calculated Spearman's rank correlation coefficient for a given top percentage of monomers used, at intervals over 100-generation GA runs with variably sized monomer pools. The point at which the average of this coefficient met or exceeded a value of 0.5 was considered convergence and was used to estimate the number of generations needed to reach convergence for a GA with a similar monomer pool size, and therefore a similar search space.
In this work, we go a step further and completely divorce the concept of convergence from completing a set number of generations. Instead, we define two convergence criteria: the Spearman coefficient and the number of consecutive convergence generations. Spearman's rank correlation coefficient (here referred to as "Spearman coefficient" for brevity) is calculated at each generation to measure the correlation between the current and previous generations' ranks of top monomers ordered by frequency, as measured by total usage of each monomer in all oligomers generated over the course of the GA run. This coefficient measures covariance as the nonparametric correlation between the ranks of two variables, in this case, top monomers ranked by frequency. Because it is defined as the Pearson correlation coefficient between the rank variables, it is a normalized measure with values ranging from -1 to 1, where the most negative value indicates perfect negative correlation, the most positive value indicates perfect positive correlation, and zero indicates no correlation. In principle, the Spearman coefficient should asymptotically approach 1.0 with increasing GA generations, indicating that the top candidates have been definitively found and therefore there is no change in subsequent generations. Due to the general lack of convergence strategies among molecular GAs, we are unaware of other similar projects that currently use the Spearman coefficient as a measure of convergence. We believe it is a mathematically appropriate measure of correlation for this application, however, and as noted above, previous work in our group successfully used this coefficient to measure GA convergence. To determine convergence, a minimum Spearman coefficient and a minimum number of consecutive convergence generations are set at the outset of the GA run.
When the Spearman coefficient between two generations meets or exceeds the minimum convergence Spearman coefficient, meaning the two generations are meaningfully similar, that generation is considered a convergence generation, and the counter for consecutive convergence generations is advanced by one.
To evaluate the optimal values for each GA hyperparameter, three metrics are examined: champion, coverage, and speedup. The champion is the rank, within the entire search space, of the best-performing candidate found by the GA. A champion of 1 means the global best polymer was discovered. This metric is important since it gives insight into the global optimization performance of the GA.
The coverage is the percentage of the top 100 polymers discovered throughout each GA run. This metric helps evaluate how well the GA can explore and jump out of local optima to find multiple good candidates. In some optimization tasks, it may be more important to find a myriad of top performers as opposed to only the global optimum. The speedup is the size of the search space divided by the number of unique individuals examined by the GA until convergence. This metric shows how much quicker using the GA is compared to a brute-force approach.
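Once the full ranking of the search space is known, all three metrics reduce to set arithmetic. A small sketch with hypothetical names (not the authors' actual analysis code):

```python
def run_metrics(evaluated, ranked_space, top_n=100):
    """Champion, coverage, and speedup for one GA run.

    evaluated:    set of unique individuals the GA scored before converging
    ranked_space: every individual in the search space, best first
    """
    rank_of = {ind: r for r, ind in enumerate(ranked_space, start=1)}
    champion = min(rank_of[ind] for ind in evaluated)   # 1 == global best found
    top = set(ranked_space[:top_n])
    coverage = 100.0 * len(top & evaluated) / len(top)  # % of top-N discovered
    speedup = len(ranked_space) / len(evaluated)        # vs. brute force
    return champion, coverage, speedup

# Toy space of 10 individuals ranked best-to-worst; the GA scored 3 of them:
champ, cov, speed = run_metrics({0, 1, 5}, list(range(10)), top_n=2)
print(champ, cov)  # 1 100.0
```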
A list of 447 monomer SMILES previously used in our group was used as building blocks to create oligomers of 6 monomers (hexamers). The complete list is available at . These monomers are all small conjugated organic molecules. The constraints for designing the hexamers were a maximum of 2 monomer types, with the alternating ABABAB sequence. Since end groups were not included, in most cases the sequences ABABAB and BABABA are chemically identical. Additionally, homopolymers were allowed, meaning A can be the same as B. This led to a total search space of 100,128 oligomers.
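The stated search-space size follows from counting unordered monomer pairs (ABABAB and BABABA being identical without end groups) plus the homopolymers:

```python
from math import comb

n_monomers = 447
# Unordered {A, B} pairs with A != B, plus homopolymers (A == B):
search_space = comb(n_monomers, 2) + n_monomers
print(search_space)  # 100128
```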
The three chemical properties examined in this paper are the polarizability, optical bandgap, and solvation energy ratio between water and hexane. Since the goal of this work is to understand the effects of the GA hyperparameters on the search through the chemical space, the chemical space needs to be fully mapped. With this method, the global optimum will be known and one way the hyperparameters can be evaluated is based on if they found the champion hexamer. To do this, the three chemical properties were exhaustively calculated for all 100,128 oligomers. This database of properties can be accessed within the GA during the fitness evaluation step.
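With every property precomputed, fitness evaluation inside the GA reduces to a dictionary lookup. A toy sketch with made-up values (in practice the keys would be all 100,128 oligomers and the values the xTB-computed properties):

```python
# Hypothetical precomputed database: monomer pair -> polarizability (made-up values)
property_db = {("A", "A"): 198.7, ("A", "B"): 210.4, ("B", "B"): 225.1}

def fitness(hexamer_pair):
    """Look up the precomputed property for an ABABAB hexamer,
    identified by its unordered pair of monomer labels."""
    return property_db[tuple(sorted(hexamer_pair))]

print(fitness(("B", "A")))  # 210.4
```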
To calculate these properties, the hexamers (built from the monomer SMILES) first underwent force-field geometry optimization with MMFF94 using OpenBabel, and then further geometry optimization with GFN2-xTB (xTB version 6.4.1). The polarizabilities were extracted from the GFN2-xTB calculation. To calculate the optical bandgap, simplified time-dependent DFT-xTB (sTD-DFT-xTB) was performed up to 5 eV. This was performed with the sTDA-xTB package (version 1.0 for xTB for sTDA, and sTDA version 1.6.2). The solvation energies in water and hexane were calculated using GFN2-xTB with the ALPB solvation models. The solvation energy ratio was calculated as: solvation energy ratio = ΔG_solv(water) / ΔG_solv(hexane).
Although this paper is a tutorial on the GA methodology and not the quantum-chemical methods, we explored whether the molecular conformations had an effect on the polarizability, optical bandgap, and solvation energies in water and hexane. The Conformer-Rotamer Ensemble Sampling Tool (CREST) was used to sample accessible conformers of the champion hexamer discovered for each of the three chemical properties. The numbers of stable conformers for the hexamer champions for polarizability, optical bandgap, and solvation energy ratio were 1, 6, and 169, respectively. The polarizability, optical bandgap, and solvation energies in water and hexane were calculated using GFN2-xTB, sTD-DFT-xTB, and the ALPB implicit solvation model in xTB, respectively, for all stable conformations of the 3 hexamers. Analysis of the distribution of polarizabilities found that the conformation has a negligible effect on the polarizability, with the average range in polarizability among conformers being 0.18 Å³. Evaluating the optical bandgap among conformers was trickier, due to limitations in calculating very small bandgaps with sTD-DFT-xTB or a lack of excitations under 5 eV. Only one hexamer yielded all valid optical bandgap calculations, and this gave a range of 0.085 eV among conformers, again showing that the conformation has a minimal effect (ESI Figure). The average range in solvation energy in water and hexane was 0.0049 E_h and 0.0014 E_h, respectively. Thus, exhaustive conformer sampling has minimal effect on these chemical properties and is not necessary for the purposes of this GA.
To visualize the search space, t-SNE and UMAP were used. t-SNE was performed with scikit-learn. Principal component analysis (PCA) was first performed to reduce the data to 50 dimensions, which were then further reduced to 2 dimensions with t-SNE. UMAP was used as well for comparison; the data were reduced to two dimensions, with the number of neighbors set to 25 (the size of the neighborhood to consider), the minimum distance for clustering set to 0.001, and the distance between points calculated with the Jaccard metric.
Cumulative frequency counts are kept of the number of times each monomer in the monomer dataset is found in the GA's population. These frequency counts are updated after each generation and used to rank monomer indexes from most to least frequently used. The Spearman coefficient is calculated at each generation to measure the rank correlation between the ranked monomer index order of the current generation and the previous generation. To determine convergence, a minimum Spearman coefficient and a minimum number of consecutive convergence generations are set at the outset of the GA run. When the Spearman coefficient between two generations meets or exceeds the minimum convergence Spearman coefficient, that generation is considered a convergence generation, and the counter for consecutive convergence generations is advanced by one. This counter continues advancing as long as the following generations continue to meet or exceed the minimum convergence Spearman coefficient; if this does not happen, the counter resets to zero. Once the counter reaches the minimum number of consecutive convergence generations required, termination is triggered and the GA cycle halts.
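For rankings without ties, the Spearman coefficient and the consecutive-generation counter described above can be sketched as follows (a minimal illustration, assuming both generations rank the same set of monomer indices):

```python
def spearman(rank_a, rank_b):
    """Spearman coefficient for two tie-free rankings of the same items,
    via 1 - 6 * sum(d^2) / (n * (n^2 - 1))."""
    n = len(rank_a)
    d2 = sum((a - b) ** 2 for a, b in zip(rank_a, rank_b))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))

def check_convergence(coeffs, min_coeff=0.8, min_consecutive=50):
    """True once `min_consecutive` generations in a row meet `min_coeff`;
    any generation below the threshold resets the counter to zero."""
    streak = 0
    for c in coeffs:
        streak = streak + 1 if c >= min_coeff else 0
        if streak >= min_consecutive:
            return True
    return False

print(spearman([1, 2, 3, 4], [1, 2, 3, 4]))  # 1.0
print(spearman([1, 2, 3, 4], [4, 3, 2, 1]))  # -1.0
```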
A detailed schematic of the GA for hexamers in this work is shown in Figure . In this schematic, maximizing the polarizability is used as an example fitness function. Starting from a database of manually-generated monomer SMILES, the GA will randomly select monomers to form the initial population of alternating-sequence hexamers. The number of hexamers in the population is set by the population size. To evaluate the polarizability (i.e. fitness function), each hexamer undergoes geometry optimization and property calculations with GFN2-xTB/D4. The rest of the steps for this generation involve repopulating the next generation's population, either with good solutions from the previous population or with slight modifications through crossover and mutation operations. After the polarizability is calculated for each hexamer in the population, a percentage of them, set by the elitism percentage, is selected to pass to the next generation unchanged. The remainder of the population is replenished with crossover and mutation. For example, if the population size is 32 and the elitism percentage is 50%, then the top 16 hexamers will pass to the next generation unchanged. The process of selecting parents and undergoing crossover and mutation to produce a new hexamer will be repeated 16 times until the new population contains 32 hexamers. In this schematic, 3-way tournament selection is used to select two parents.
The parents undergo crossover, by taking one monomer from each parent, to form a new "child" hexamer. This child has a chance, set by the mutation rate, to undergo mutation, which in this example is replacing one monomer with a new monomer from the database. This new population will undergo polarizability calculations and this cycle will continue until some pre-defined convergence conditions have been met.
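One full repopulation cycle as described above can be sketched in Python (our own minimal implementation, with a hexamer reduced to its (A, B) monomer-index pair and 3-way tournament selection as in the schematic):

```python
import random

def next_generation(population, scores, monomer_pool, elitism=0.5,
                    mutation_rate=0.4, rng=random):
    """One GA repopulation step: elitism, 3-way tournament selection,
    crossover (one monomer from each parent), and point mutation."""
    pop_size = len(population)
    ranked = [ind for _, ind in sorted(zip(scores, population),
                                       key=lambda pair: pair[0], reverse=True)]
    new_pop = ranked[:int(pop_size * elitism)]   # elites pass unchanged
    while len(new_pop) < pop_size:
        # 3-way tournament, run twice to pick two parents
        p1, p2 = (population[max(rng.sample(range(pop_size), 3),
                                 key=lambda i: scores[i])] for _ in range(2))
        child = (rng.choice(p1), rng.choice(p2))  # crossover
        if rng.random() < mutation_rate:          # mutation: swap in a new monomer
            child = (rng.choice(monomer_pool), child[1])
        new_pop.append(child)
    return new_pop

rng = random.Random(1)
pop = [(0, 0), (1, 1), (2, 2), (3, 3)]
new = next_generation(pop, [0, 1, 2, 3], monomer_pool=list(range(10)), rng=rng)
print(new[:2])  # [(3, 3), (2, 2)]  (the two elites)
```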
In the second part of this work, we test the optimized hyperparameters on a larger search space that has not been pre-mapped. The 447-monomer SMILES list used in the first part of this work was expanded to 1200 monomer SMILES, including additional common repeat units in conjugated materials, as well as common aryl-vinyl and aryl-azo combinations. These monomers are used to create hexamers (oligomers containing 6 monomer units). Contrary to the first part of this work, where only the alternating ABABAB sequence was allowed, all possible two-monomer sequences are permitted. This yields 2^6 = 64 possible sequences. Thus, the total search space for this part of the work is 46,041,600 oligomers.
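A quick check shows that the stated total equals the number of unordered monomer pairs multiplied by the 64 possible sequences:

```python
from math import comb

n_monomers = 1200
n_sequences = 2 ** 6   # each of the 6 positions is monomer A or monomer B
search_space = comb(n_monomers, 2) * n_sequences
print(search_space)  # 46041600
```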
The chemical properties examined were polarizability, optical bandgap, and solvation energy ratio. These were calculated in the same manner as in the Search Space Calculations section, the only difference being that these calculations were done on the fly rather than being precomputed.

In this work, 447 unique monomers are used to create hexamers with an alternating or homopolymer sequence. This yields a total search space of 100,128 possible polymers. The search space was exhaustively examined to understand how well each GA is traversing the search space.
Figure shows a visualization of chemical space computed with t-Distributed Stochastic Neighbor Embedding (t-SNE), a dimensionality reduction technique. Analysis of variance (ANOVA) was performed on these sets and shows a significant difference overall between the 3 properties (p = 2.2 × 10^-13). However, this does not tell us what those differences are. Using Tukey's honestly significant difference test, which compares each group with each other, we see that polarizability has a significantly different set of Tanimoto coefficients than optical bandgap (p ≤ 0.001) or solvation energy ratio (p ≤ 0.001). This indicates that among the three properties, polarizability has a more localized search space of top performers, while the other target properties are more diverse. Examining the GA performance across all three parameters will give generalized hyperparameters that can work for future chemical properties, regardless of chemical-space diversity.
Figure shows the distributions of the median performance metrics for all 15 runs using each combination of convergence criteria. Looking at the champion metric (ESI Table ), the median is 1 in most runs using a minimum consecutive convergence generation of 25 or higher, meaning that the global extremum was found in most runs with that criterion. Also of note, the global extremum was usually found in all runs using both a minimum Spearman coefficient of 0.7 or higher and a minimum number of consecutive convergence generations of 50 or higher, as
Given the broad number of convergence criteria in which the global champion was found consistently, we focused on the coverage and speedup metrics (ESI Tables and, respectively) when selecting our generalized convergence method, as shown in Figure (error bars are shown on an alternative version of this plot in ESI Figure ). Fitting the data to a power curve, we see there is a clear trade-off between these two metrics for our convergence methods. Using the fit to extrapolate out, we recognize that even at 100% coverage, the GA still provides a substantial speedup of approximately 19.6 over a comprehensive search, verifying the benefit of GA-led searches with even relatively resource-inefficient GAs. Future work should examine convergence methods and techniques beyond those suggested here to test whether the apparent Pareto front can be broken.
For the purposes of this work, however, we focus on finding the convergence method that gets closest to a median coverage of 50% while maximizing speedup. We thus chose the method that uses a minimum Spearman coefficient of 0.8 with 50 consecutive convergence generations, which has a median coverage of 42%. Although it performed very similarly to the convergence method using a minimum Spearman coefficient of 0.7 with 50 consecutive convergence generations, we selected the former method as its coverage distribution skewed slightly higher than the latter method.
To understand how different GA hyperparameters tune the GA's emphasis on exploration versus exploitation, the population size, elitism percentage, selection method, and mutation rate are examined. While testing each parameter, all other parameters remained constant. The default parameters were a population size of 32, an elitism percentage of 50%, random selection, and a mutation rate of 40%. The previously determined convergence method was used with a minimum Spearman coefficient of 0.8 and 50 convergence generations. To measure the performance of the GAs with tuned hyperparameters, the champion, coverage, and speedup performance metrics are calculated in the same manner as described in the convergence criteria section.
Population sizes of 16, 20, 24, 28, 32, 48, 72, and 96 individuals are examined. Figure shows that for all population sizes, the median champion was 1, meaning in a majority of runs it found the global optimum. However, the standard deviation is much larger for a population size of 16. When examining the performance among the chemical properties (ESI Table ), all population sizes performed well for maximizing polarizability. For optical bandgap and solvation ratio, although some runs were able to find the global best polymer, many runs did not. The champions among the runs that did not find the global optimum had a better rank as the population size increased.
As expected, coverage increases as the population size increases. Having a larger population size allows for more traits in the generation to select from during crossover and allows for efficient exploration. Examining the coverage among each run (ESI Table ) indicates that the GA has difficulty finding high performers when optimizing the optical bandgap, most likely due to the more diverse search space. Increasing the population size to 72 or 96 allowed the GA to find 99-100% of the top 100 candidates for polarizability.
As the population size increases, more calculations are performed, which decreases speedup (Figure ). The speedup was similar among the 3 chemical properties (ESI Table ). Figure shows the balance between coverage and speedup. There is a negative correlation, indicating that the best population size depends on the optimization task. If the goal is to find many high-performing candidates and the computational cost is not important, then a population size of 96 is recommended. If computational cost is essential, then a population size of 16 is suggested. The best balance of coverage and speedup is a population size of 32, since it has similar speedups to populations of 24 and 28 individuals but with higher coverage. Examination of this coverage-speedup trade-off for each individual chemical property (ESI Figure ) is consistent with this result, suggesting that a population size of 32 gives the best balance across multiple chemical-space topographies.
Elitism percentages of 0, 5, 10, 15, 20, 25, 30, 40, 50, 60, 70, and 80% are examined. The percentage of elitism is the percentage of the population that is passed on unchanged to the next generation. Figure shows the performance of each elitism rate. The median champion was 1 for all elitism rates except for 0%. Having no elitism severely limits the exploitation of good candidates already found, making it difficult to pass on good traits to find similarly good candidates.
Although optical bandgap found the global optimum in a majority of runs across all elitism rates, the runs that did not find it had very poor champions, with some as high as a champion rank of 74. Solvation energy had one run that never found the global champion across all elitism rates, an inherent issue of the stochastic nature of the GA.
Although there is some fluctuation, the coverage remains relatively constant among 5-50% elitism and starts to decrease after 60% elitism. Looking at individual runs (ESI Table ) reveals that the best coverage of 97% was found at 20% elitism to optimize polarizability. There is much higher coverage for polarizability compared to optical bandgap and solvation energy due to the more clustered polarizability search space.
Figure also shows that as the elitism rate increases, so does the speedup (ESI Table ). This is because as you increase the elitism rate, more of the population remains unchanged and there are fewer new individuals to evaluate per generation. Since the convergence criteria are set to have a Spearman's correlation coefficient above 0.8 between each generation for 50 generations, it is much easier for the GA to converge before efficiently exploring chemical space. A GA with no elitism is only slightly better than a brute force approach, with a median speedup of only 4. Since there is no guarantee of good traits in the generation, it is very difficult to converge on a good population.
Various types of selection methods were examined, such as random, tournament style, and fitness proportionate methods. The fitness proportionate methods that are dependent on the actual fitness score are unable to perform optimization tasks that can have a negative fitness score. Minimizing the solvation ratio allows for negative fitness scores, and thus roulette and SUS could not be used. For the performance evaluation of these two methods, 10 runs were run for polarizability and optical bandgap for a total of 20 runs. The other selection methods were run with the typical 5 runs per property, resulting in 15 total runs per selection method.
Figure shows the distribution of each method for the champion, coverage, and speedup. All methods had a median champion of 1, although random, 3-way and 4-way tournament, and rank selection had more consistent high-performing results. Random selection of parents from the entire population, compared to selection from only the top 50%, found the champion more frequently and shows a higher median coverage. A possible explanation is that restricting selection to the top candidates limits the explorative abilities of the GA. Some monomers found in the poor performers may perform very well when paired with a new monomer, and impeding the GA from selecting these makes it difficult to find all high performers. The selection methods performed similarly for coverage, and all the methods showed comparable outliers. Looking at the individual run performance for champion and coverage (ESI Tables and, respectively), all methods performed worse on optimizing the optical bandgap. ESI Figure shows that 2-way and 4-way tournament selection led to worse champions for optimizing the optical bandgap, although they performed very well on the other 2 properties. The speedup is also similar across all methods (ESI Table ), with 3-way tournament selection yielding the highest median speedup. ANOVA comparisons of the methods for champion, coverage, and speedup show that they are statistically indistinguishable (p > 0.05).
Mutation rates of 10% and above had a median champion of 1, with very low standard deviations. Low mutation rates of 1% and 5% found champions of median rank 14 and 4, respectively, with very large standard deviations. Examining the individual runs (ESI Table ) revealed that one run found a champion of 1,180 for 1% and 5% mutation rates, with multiple others finding a champion above 100. With mutation rates of 10% and higher, the worst champion dramatically decreases.
When the mutation rate is increased above 30%, the amount of diversity required is most likely saturated. Because elitism is used in these GAs, good traits always remain in the population: if a monomer introduced during mutation does not perform well, it is simply discarded, and high performers are still carried into the next generation. Surprisingly, the coverage does not decrease, as would be expected from less opportunity to search locally after crossover.
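The interplay of elitism and mutation described above can be illustrated with a minimal generational step. This is a hedged sketch: the function names and the single-point mutation scheme are illustrative, not the exact implementation used in this work.

```python
import random

def next_generation(population, fitness_fn, monomer_pool,
                    n_elite=2, mutation_rate=0.4):
    """One generational step with elitism (illustrative sketch): the n_elite
    best individuals are copied unchanged, so a poorly performing mutation
    can never displace the current champions."""
    ranked = sorted(population, key=fitness_fn, reverse=True)
    new_pop = ranked[:n_elite]                 # elites survive untouched
    for individual in ranked[n_elite:]:
        child = list(individual)
        if random.random() < mutation_rate:
            # swap one monomer for a random one from the pool
            child[random.randrange(len(child))] = random.choice(monomer_pool)
        new_pop.append(tuple(child))
    return new_pop
```

Because the elites bypass mutation entirely, even a 90% mutation rate cannot erase the best traits found so far, which is consistent with the coverage not collapsing at high mutation rates.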
Comparing the coverage and speedup shows that a mutation rate of 40% has coverage similar to that of the 30%, 50%, 70%, 80%, and 90% mutation rates, but much larger speedups (ESI Table ). Examining this trade-off for the chemical properties individually (ESI Figure ) showed that 40% mutation gave the best balance of coverage and speedup for all three properties. For future GAs, a 40% mutation rate is recommended.
As a final phase of testing, we ran a realistic-scenario trial in which we used the best GA convergence criteria and hyperparameters found in earlier testing to search for hexamers optimized for the same three properties, but in a much larger search space. For this phase, our monomer list was expanded to include SMILES for 1,200 unique units, including more common repeat units as well as aryl-vinyl and aryl-azo combinations. In contrast to the work reported thus far, this part of the project allowed any of the 64 possible sequences instead of limiting them to ABABAB. This yielded a new search space of approximately 46 million possible hexamers, increasing its size by two orders of magnitude compared with our originally defined search space, and allowed us to see how our recommended best practices performed in a more realistic setting. We again ran five trials with unique starting random states for each of the three properties covered.
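The size of the expanded search space can be checked with a quick back-of-the-envelope calculation, assuming every unordered pair of distinct monomers may be combined with any of the 2^6 = 64 A/B sequences:

```python
from math import comb

n_monomers = 1200
sequence_count = 2 ** 6              # A or B choice at each of the 6 positions

pairs = comb(n_monomers, 2)          # unordered pairs of distinct monomers
search_space = pairs * sequence_count
print(pairs)                         # 719400
print(search_space)                  # 46041600, i.e. ~46 million hexamers
```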
Looking at the results of the realistic trial, we saw that all runs met or exceeded the true champion values from the limited search space used in hyperparameter testing, indicating that even in a vastly larger search space the GA was able to efficiently find the top values. The chemical structures of the champions from the previously limited search space, as well as from this realistic trial, can be seen in ESI Figures . As shown in Figure , individual runs within a given property search did not always converge to the same top performer. The figure also demonstrates that different properties resulted in varying levels of agreement between individual runs. This is likely due to differences in the "roughness" of the search spaces defined for each of the properties, with the polarizability search space being considerably smoother than the solvation energy search space: several polarizability runs converged to the same champion, while the individual solvation energy runs all converged to different champions. The differences in convergence between individual runs reinforce the need to perform several trials, when possible, for any given GA search, especially when a goal is to get as close to the global extremum as possible. To examine how the GA performed in terms of elite-performer coverage, we found the top 10% of values across all 5 runs for each chemical property and then compared the results of each individual run to this elite pool (ESI Figure ). No single run captured even a majority of this elite pool, and some runs captured very little, even less than one percent. This leads us to recommend multiple GA runs, when practical, to allow the best coverage of the elite search space.
Finally, we examined the popular chemical motifs found by the GA in each of the property searches. Looking at the most commonly used monomers in the top 10% by fitness across all runs of each property search, we saw some common themes. As shown in Figure , the top monomers for polarizability are notably longer, larger molecules than those found in the other two property searches. This makes chemical sense, considering that longer conjugated systems allow for greater charge mobility and therefore greater polarizability. While the top monomers for the optical bandgap search do not share especially close motifs, we know that extended conjugation, such as having a vinylene group on unit 273, redshifts the absorption and decreases the bandgap.
Additionally, recent studies on non-fullerene acceptors show that the lone pairs on the nitrogens in unit 539 can delocalize to further reduce the bandgap. Interestingly, the top solvation-ratio monomers tend to contain sulfonyl groups, which are highly polar and hydrophilic. We also determined the most commonly used sequences in the top 10% by fitness for each property search (ESI Figure ). Although we did not find obvious trends in the top sequences for either the optical bandgap or the solvation ratio, the top polarizability sequences were all near-homopolymers and occurred at very similar rates of incidence. This supports previous findings from our group that homopolymeric sequences tend to increase overall molecular polarizability (near-homopolymer sequences occur often simply because they are statistically more likely to occur, and their polarizabilities are extremely close to those of true homopolymers). While a more in-depth analysis of the chemical motifs found in our realistic GA trial is beyond the scope of this work, our preliminary findings support the utility of implementing our GA best practices when conducting searches across a range of different optimized chemical properties.
With these general recommendations, we acknowledge several caveats. While we believe our convergence method is useful and an important step toward automating molecular GA methods, further work is needed to explore the potential of breaking through the apparent Pareto front in the trade-off between elite-search-space coverage and speedup. Such methods could also reduce the need for multiple GA runs if the convergence method can better ensure consistent coverage of elite search spaces. Although this work was performed with polymer GAs, we believe the results should transfer to other types of molecular GAs. Using three different chemical properties as optimization targets, we demonstrated that the GA best practices suggested in this work are suited to a wide and varied range of molecular discovery applications.
These best practices were tested in a realistic search scenario, where we found candidate oligomers tailored specifically to each of the three chemical properties explored. After vastly expanding our search space, the GA runs in these trials were able to self-terminate appropriately and found candidates as good as or better than the best candidates in our original limited search space. This indicates that our GA method, with self-termination and tuned hyperparameters, efficiently locates top chemical structures for a variety of different properties and is a recommended starting place for general use.
Driven by the continuous development of functional materials and technology, ion-selective electrodes (ISEs) have become a routine means of analysis for industrial, environmental and clinical applications (e.g. Rapidpoint 400 - Siemens Healthcare, Cobas c 311 - Roche Diagnostics, GEM Premier 5000 - Werfen, i-STAT 1 - Abbott Diagnostics). Clinical analysers and wearable health-monitoring devices allow for rapid, non-invasive and easy disease monitoring and diagnostics through such parameters as Na⁺, K⁺, Cl⁻, Ca²⁺, Mg²⁺ and Li⁺. Potentiometric sensors have also been used in the environmental monitoring of anions, e.g. F⁻, PO₄³⁻, SO₄²⁻, NO₃⁻, NO₂⁻, Cl⁻, CN⁻, as well as cations such as Cu²⁺, NH₄⁺, Cr⁶⁺, Cd²⁺, Hg²⁺ and Pb²⁺. The biggest advantages of this non-destructive technique include its high dynamic range, ease of operation and the low cost of the detectors when compared to other sensing modalities.
In standard ion-selective electrodes the ion-selective membrane is positioned between the sample solution and an internal solution of known, constant ionic composition. A silver/silver chloride wire placed in this internal solution provides ion-to-electron transduction of the signal. As the composition of the internal solution remains constant, the change in the potential of the electrode can be attributed solely to changes at the membrane-sample interface. The functioning of such sensors is well understood, and their response can be described with the Nernst equation.
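For reference, the slope of the Nernstian response, S = 2.303RT/(zF), evaluates to about 59.2 mV per decade of ion activity for a monovalent ion at 25 °C. A short sketch:

```python
import math

R = 8.314462618    # gas constant, J mol^-1 K^-1
F = 96485.33212    # Faraday constant, C mol^-1

def nernst_slope_mV(temperature_C=25.0, z=1):
    """Slope of the Nernstian response, S = 2.303*R*T/(z*F), in mV per
    decade of ion activity."""
    T = temperature_C + 273.15
    return 1000.0 * math.log(10) * R * T / (z * F)

print(round(nernst_slope_mV(), 1))       # 59.2 mV/decade for K+ (z = 1)
print(round(nernst_slope_mV(z=2), 1))    # 29.6 mV/decade for divalent ions
```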
The need for miniaturization drove the development of solid-contact ion-selective electrodes, in which the membrane is deposited directly on the electrical contact or on an intermediate layer, e.g. one formed from a conducting polymer. Although it is postulated that the ion-to-electron transduction is based on redox or double-layer capacitance, the precise mechanism is not well described.
Understanding the mechanism of potential development at each of the interfaces makes it possible to adjust the design and measurement protocol and push the limits of the analytical parameters of the sensors. A properly selected composition of the liquid internal solution allows the detection limit to be lowered from the micro- to the sub-nanomolar range, and provides more precise means of estimating selectivity coefficients. The internal solution can be optimized by adding chelators such as EDTA or NTA, an ion-exchanging resin, complexing agents, or a lipophilic interfering ion together with a salt of the analyte, all of which influence the magnitude of the transmembrane ion flux.
The potentiometric response is a function of the activity of the free, uncomplexed ions in the solution. The selectivity of the sensors originates from doping of the membrane with an ionophore, but it also depends on other components, such as the lipophilic salt which provides the ionic sites and, if the membrane is based on poly(vinyl chloride), the type of plasticizer. Even optimized membranes based on the most selective ionophores, such as valinomycin, are not specific. No perturbation, such as a current or voltage sweep, is applied to the system, and due to the steady-state nature of the measurement each sample can be characterized by only one, unique data point. The signal is an average of all the processes taking place at the membrane/sample interface. Analysis of selectivity coefficients, which describe the preference of the sensor for the primary ion over others, indicates that in complex samples with many interfering ions the response will be subject to considerable error. The problem can be solved by preparing calibration curves in the sample matrix; however, one should then expect a much narrower linear range (as seen in the Fixed Interference Method). Variations in the electrode systems, liquid/liquid junctions, calibration and measurement procedures are indicated in the guidelines for the analysis of blood samples as reasons why routine measurements can produce highly precise results which may nevertheless be far from accurate.
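The effect of interfering ions on the steady-state signal is commonly described by the Nikolsky-Eisenman extension of the Nernst equation, E = E0 + S·log(a_I + Σ_J K_IJ^pot · a_J^(z_I/z_J)). The sketch below uses illustrative values; the log K(K,Na) ≈ -4 used here is a typical literature figure for valinomycin membranes, not a result of this work.

```python
import math

def nikolsky_eisenman_mV(a_primary, interferents, slope_mV=59.2,
                         E0_mV=0.0, z_primary=1):
    """Electrode potential (mV) under the Nikolsky-Eisenman formalism.
    interferents: iterable of (K_IJ_pot, a_J, z_J) tuples."""
    effective = a_primary + sum(K * a_J ** (z_primary / z_J)
                                for K, a_J, z_J in interferents)
    return E0_mV + slope_mV * math.log10(effective)

# K+ electrode in 1e-4 M K+, with and without a 0.1 M Na+ background;
# log K(K,Na) = -4 is an illustrative literature-typical value.
clean   = nikolsky_eisenman_mV(1e-4, [])
with_na = nikolsky_eisenman_mV(1e-4, [(1e-4, 0.1, 1)])
print(round(with_na - clean, 2))   # 2.45 mV shift caused by Na+ interference
```

Even for a highly selective membrane, a large excess of the interfering ion shifts the reading by a few millivolts, which at ~59 mV per decade translates into a noticeable concentration error.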
In this work we developed low-cost ion-selective electrodes made from 2 mL disposable syringes equipped with plasticized PVC membranes. The membrane solution was optimized so that, after solvent evaporation, it forms a thin, uniform layer enclosing the tip of the syringe. To test this low-cost, easy- and fast-to-prepare alternative to standard commercial sensors, we chose the most selective and well-studied ionophore, valinomycin. Characterization of the system in model solutions gave results comparable to sensors assembled using commercial electrode bodies. However, quantification of potassium in real samples showed, as expected, that measurements in complex, more concentrated solutions of higher ionic strength, such as beetroot soup, tomato-based food products and dried fruits, are reproducible but subject to relative errors of up to 76%.
As ion-selective electrodes based on PVC can be easily fabricated for a set of different ions, they are a popular choice for building electronic tongue systems. In such systems, the non-specific receptors of the tongue are replaced with chemical sensors, and the generated signal is analysed not by the brain but by a machine learning algorithm. After training the algorithm with a sufficient number of samples (a few dozen to a few hundred), an unknown sample can be tested and assigned to a group (e.g., healthy or cancer patients) through classification, or attributed a certain value through multivariate regression. The first electronic tongue systems were proposed in the 1990s. They can be assembled using different kinds of sensors; however, potentiometric probes still remain the most used. Electronic tongues have truly remarkable capabilities: they can account for very complex background changes from sample to sample, as in fermentation broths; elucidate information that cannot be easily obtained with a single sensor, as in the case of differentiating potable from contaminated water; or reduce the impact of interfering species.
We constructed an electronic tongue based on low-cost, ion-selective syringe electrodes, which allows potassium to be quantified in a wide range of food products and pharmaceutical supplements without recalibration. We also show how the array can be designed to achieve comparable analytical performance using less selective ionophores. In this way, quantification of potassium in a wide range of real samples is possible using not only low-cost electrodes but also low-cost membrane reagents.
The composition of the polymeric membranes, including the amounts of the appropriate ionophores, lipophilic salt, plasticizer, and high-molecular-weight PVC, as well as the respective conditioning solutions, are listed in Table . All components were dissolved in 1 mL of tetrahydrofuran in the case of the syringe electrodes and in 500 µL for the standard ISE electrodes. 0.1 M KCl was used as the internal solution.
Fabrication of standard ISE electrodes
The membrane solution was poured into a plastic ring placed on a glass plate and dried overnight. The next day, membrane discs were cut to the appropriate size using a biopsy punch and mounted in electrode bodies (type IS 561, Philips, Willi Möller AG, Zurich, Switzerland). 0.1 M KCl solution was used as the internal filling. The electrodes were conditioned in 0.01 M KCl.
Fabrication of syringe electrodes
The step-by-step procedure is schematically presented in Fig. . First, the pistons were removed from 2 mL plastic syringes and the syringe bodies were hung in an upright position. 50 μL of membrane solution was deposited into the tip of each syringe using an automatic pipette. In the case of bubble formation, the syringe was discarded and another body was filled with the membrane. The electrodes were dried at room temperature for at least 3 hours. They were then filled with 0.1 M KCl as the internal solution, and the body was closed with a rubber plug equipped with an Ag/AgCl wire. The silver wire was coated with a silver chloride layer in a saturated solution of FeCl3 and then cleaned by sonication in DI water. The electrodes were conditioned until the next day.
Potentiometric measurements
All measurements were carried out using a PalmSens 4 potentiostat in a standard potentiometric system containing the respective ion-selective electrode as the working electrode and Ag/AgCl (3 M KCl) as the reference electrode (IJ Cambria Scientific Ltd.). A calibration curve, from which we determined the linear range and sensitivity, was prepared for each tested electrode. The selectivity coefficients were determined by the Separate Solution Method (SSM) using 0.1 M aqueous solutions of the corresponding salts - LiCl, NaCl, KCl, NH4Cl, MgCl2, CaCl2.
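For ions of the same charge measured at the same activity, the SSM selectivity coefficient reduces to log K = (E_J - E_I)/S. The sketch below uses illustrative potentials, not measured values from this work:

```python
def log_selectivity_ssm(E_primary_mV, E_interfering_mV, slope_mV=59.2,
                        z_primary=1, z_interfering=1, log_activity=-1.0):
    """Separate Solution Method:
    log K_IJ^pot = (E_J - E_I)/S + (1 - z_I/z_J) * log a_I,
    with E_I and E_J measured in separate solutions of the same activity
    (here 0.1 M, i.e. log a = -1) of the primary and interfering ion."""
    return ((E_interfering_mV - E_primary_mV) / slope_mV
            + (1 - z_primary / z_interfering) * log_activity)

# Illustrative numbers: a Na+ solution reading 237 mV below the K+ solution
print(round(log_selectivity_ssm(100.0, -137.0), 1))   # -4.0
```

The second term handles the unequal-charge case (e.g. K⁺ vs. Ca²⁺); it vanishes when both ions are monovalent.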
In total, 16 different real samples were tested, including food products and pharmaceutical supplements. The dried-fruit samples were prepared by cutting 100 g of dried dates and sonicating them in 150 mL of DI water. The supplement samples were prepared according to the instructions on the packaging, by dissolving one tablet of the mixed-electrolytes supplement and of the potassium supplement, both purchased from a pharmacy, in 200 mL and 100 mL of water, respectively. The concentrated beetroot soup sample was prepared by 10-fold dilution in DI water. All other samples were measured directly, without any pretreatment. Detailed information concerning the different sample types, as well as the pH of the samples measured with a standard pH meter (Mettler Toledo, FiveEasy F20, Switzerland), is provided in Table SI 1.
Data analysis
Data were processed using MS Office Excel and Origin 2020b. Multivariate analysis was performed in Python in the Google Colab environment using the numpy, matplotlib, pandas, seaborn and sklearn libraries. The data were first scaled using the sklearn StandardScaler. For the (K⁺Val, K⁺DB, NH₄⁺, An, Cat, pH) array and the same array without the pH measurement, the PLS and PCR algorithms were based on 4 components/latent variables; for all other compositions of the matrices, 2 components/latent variables were used. Supervised analysis was performed with a 25% test split. All algorithms were tested on 4 different random splits, the same for each algorithm (sklearn train_test_split routine with random_states 1, 2, 3, 4), to allow a proper comparison of the algorithm metrics. Evaluation of the models was based on the sklearn metrics routines.
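The PCR step described above (standardize, project onto a few principal components, then regress on the scores) can be sketched in plain NumPy; in practice sklearn's StandardScaler, PCA and LinearRegression are the drop-in equivalents. Function and variable names here are illustrative, not taken from the original analysis code.

```python
import numpy as np

def pcr_fit_predict(X_train, y_train, X_test, n_components=2):
    """Principal Component Regression sketch: standardize the features,
    project onto the leading principal components, then fit ordinary
    least squares (with intercept) on the component scores."""
    mu, sigma = X_train.mean(axis=0), X_train.std(axis=0)
    Xs = (X_train - mu) / sigma                      # StandardScaler equivalent
    _, _, Vt = np.linalg.svd(Xs, full_matrices=False)
    P = Vt[:n_components].T                          # loadings (features x k)
    T = Xs @ P                                       # training scores
    A = np.column_stack([np.ones(len(T)), T])        # add intercept column
    coef, *_ = np.linalg.lstsq(A, y_train, rcond=None)
    T_test = ((X_test - mu) / sigma) @ P
    return np.column_stack([np.ones(len(T_test)), T_test]) @ coef
```

Keeping only 2-4 components, as in the protocol above, discards directions of low variance in the sensor readings and so regularizes the regression against the strong collinearity of replicate electrodes.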
Characterization of the low-cost electrodes
Calibration curves were prepared using both the low-cost syringe sensors and standard ISE electrodes with cut membranes assembled using commercial bodies. To compare these two types of electrodes, potassium-selective membranes were used with valinomycin or dibenzo-18-crown-6 as the ionophore. Valinomycin shows excellent selectivity for potassium over sodium ions, whereas the crown ether is less selective but more lipophilic, resulting in a longer electrode lifetime. Selectivity coefficients were calculated based on the SSM. The results are summarized in Table . The sensitivity and linear range of both types of sensors with membranes based on the same ionophore are comparable, with differences within the experimental error. All electrodes presented satisfactory repeatability (n=3). The selectivity coefficients follow the same trend for both sensor architectures with valinomycin-based membranes. As expected, a decrease in selectivity with respect to sodium ions was observed for the sensors with dibenzo-18-crown-6 as the ionophore compared with valinomycin. The syringe electrodes with a valinomycin-based membrane were applied to measure potassium ions in different types of samples, including pharmaceutical supplements (a potassium supplement and a mixed-electrolytes supplement), bottled mineral water from different brands (3 samples), tomato juices from different brands (7 samples), banana juice, dried fruits (dates), tomato sauce (passata), and a beetroot soup concentrate. The measurement results were in good agreement with the concentrations calculated from the information given on the package for the mineral water samples, pharmaceutical supplements and banana juice (Fig. ).
Higher deviations were observed for the dates sample (26% on average), the tomato sauce (17% on average) and all the tomato juices (32% on average), with the highest deviation, reaching 75% of the expected value, observed for the beetroot soup. Some deviation is expected, as the value on the package is given as an average and some lot-to-lot variation is possible; however, such error should not exceed a few percent.
As expected, the measurements based on both ionophores were of high precision but low accuracy due to the complexity of the samples. Because of this, we prepared an array of syringe electrodes forming an electronic tongue that could account for changes in the background composition of the samples without the need for recalibration.
Fig. Comparison of the potassium ion concentration from the information on the package and that measured with a valinomycin ISE syringe electrode in different types of samples. Each sample was measured 3 times; the number of samples in each category is given in brackets.
Real samples were analyzed using all the sensors listed in Table , i.e. K⁺Val, K⁺DB, NH₄⁺, and cation- and anion-selective electrodes. The array consisted of three K⁺Val sensors and two repetitions of each other sensor type, 11 sensors in total. An additional analysis included a pH measurement performed with a commercial pH meter. In the next step, the composition of the array was refined based on unsupervised data analysis.
Principal Component Analysis
The loadings plot shows good reproducibility of all the prepared electrodes, as the repetitions of each electrode type are grouped in specific places of the plot (Fig. 3). It also shows that all of the electrode types provide unique information about the samples. The reproducibility of the syringe sensors was also confirmed through the correlation matrix, which indicated perfect or near-perfect correlation for the two K⁺DB sensors (1.00), the two NH₄⁺ sensors (1.00), the three K⁺Val sensors (0.99-1.00), the two Cat sensors (0.99), and the two An sensors (0.99).
The PCA scores (Fig. SI 1) showed that three valinomycin-based membranes are not sufficient to cluster the samples into 7 groups. The analysis showed that at least two different sensor types are needed to obtain proper clustering of the samples. Interestingly, the presence of valinomycin-based sensors was not a prerequisite, and similar PCA results were obtained for the arrays (NH₄⁺; K⁺Val) and (An, Cat, NH₄⁺, K⁺DB) (Fig. ). To predict the concentration of potassium in real samples, different models were tested, including Partial Least Squares (PLS), Multivariate Linear Regression (MLR), Multivariate Polynomial Regression (MLP), Principal Component Regression (PCR) and an ensemble Random Forest algorithm (RF). To compare all the methods on an equal footing, 25% of the dataset was randomly chosen as the test set and the rest of the data was used to train the models. The same 4 random splits were used to test all the models and all the proposed compositions of the sensor arrays.
The best parameters were obtained for an array composed of only two types of sensors: the valinomycin-based electrodes and the ammonium-selective sensors (Fig. ). Fig. shows the comparison of the information on the package with the quantities of potassium predicted by 4 models created on different 25% test/training splits of the data. The best performance of the MLR algorithm was obtained for the array composed of all the ion-selective electrodes and the pH electrode. Although the general metrics presented in Table are promising, the actual comparison of predicted and real values shown in Fig. 3A reveals that for some categories of samples this model is subject to considerable error. The PLS test score was highest for the ion-selective array without the pH measurement, and the PCR score for an array of further reduced size (without the valinomycin-based sensors). However, the best results overall were obtained for the ensemble Random Forest algorithm based on only two types of sensors. As previously seen in the PCA loadings plot, the two chosen sensors provide different information about the sample. Although the pH and anion-selective electrodes also contribute information diverse from that of the valinomycin electrodes, such arrays are not optimal (Tab. 3, Fig. ). The best-performing model was also quite stable, as the difference in R² for the test samples was around 1% between the 4 tested random train/test splits.
The use of multivariate data analysis was beneficial even in the case of arrays composed of only the three valinomycin-based sensors - the same ones used to construct the calibration curve - which indicates that the performance of many systems, including commercial ones, might be enhanced by a simple adjustment of the regression parameters. Direct quantification based on the calibration curve was better than the RF model only in the case of one pharmaceutical supplement (ZiNIQ+ Electrolites - Tab. SI1).
Fig. Prediction of potassium for real samples using a Random Forest algorithm (red stars) compared to the use of the valinomycin ISE calibration curve (black circles).
We describe a new, simple way to construct ion-selective electrodes with a liquid internal solution. The electrodes can be easily made from readily available materials, such as disposable syringe bodies. A comparison of the main analytical parameters with standard ISE electrodes prepared using commercial bodies showed proper functioning of the syringe sensors. The reproducibility of the sensors was further confirmed by the analysis of the PCA loadings and a correlation matrix.
Electrodes based on the most selective ionophore, valinomycin, were used to quantify potassium in different real samples. The results showed high precision but low accuracy for samples of more complex composition and higher ionic strength (beetroot soup, dried fruit, tomato juice). To account for the background changes, different compositions of the sensing array were tested. The best results were obtained for an ensemble Random Forest algorithm based on only two types of sensors. In this case, the root mean square error of prediction was almost six times lower than when the quantification of potassium was performed using a standard calibration curve.
It is worth noting that the array could also be tailored in terms of the overall cost of fabrication. Better prediction than with the standard calibration curve was also achieved using an array with four types of sensors that did not contain valinomycin-based electrodes. In this way, it is possible to quantify potassium using not only low-cost electrode bodies but also affordable membrane components.
We also showed that better results can be obtained even in the case of a single type of sensor prepared in a few replicates, when the analysis is based on a multivariate algorithm. The biggest change in the performance of the system was observed for ensemble models. Commercial systems could easily benefit from enhanced precision by simply changing the regression algorithm, without changes in the architecture of the system. Ensemble models such as the Random Forest algorithm are not easily explainable; for arrays composed of more than one type of sensor, other models offering a high degree of interpretability, such as MLR, PCR or PLS, can be used.
In drug discovery and many other complex endeavors, multidisciplinary approaches are essential. In science, it is fairly common to come across a combination of concepts, methodologies, and viewpoints that generates and develops novel research areas and poses novel ideas. Indeed, multidisciplinary research teams have been recognized as a key element in addressing health care problems. Chemoinformatics itself (also named in the literature cheminformatics, chemical informatics, etc.) is a good example of such a merging of "traditional" disciplines. Another good example is bioinformatics.
In drug discovery, there are abundant examples of synergistic combinations of compounds that are well known to perform better than the individual, isolated compounds. The combinations can be very complex, giving rise to study areas in their own right, such as polypharmacy, traditional medicine, botanicals, nutraceuticals, and the screening and deconvolution of combinatorial mixture libraries. For example, polypharmacy refers to using multiple medications to treat one disease or condition, while traditional medicine often employs combinations of herbs or natural extracts to treat various ailments.
Botanicals can also contain a variety of compounds that act synergistically. Screening and deconvolution of combinatorial mixture libraries involves testing large numbers of compound combinations to identify those that show the desired therapeutic activity. These are a few examples of research areas founded on the idea that individual "best" single compounds, medicines, chemical libraries, methods, etc., are outperformed by their combination, which can itself be quite challenging to achieve.
Over the years, combinations of methodologies and approaches have been emerging and evolving in chemoinformatics for various practical applications including, but not limited to, molecular representation, chemical space analysis, similarity searching, property prediction, and structure-activity relationships (SAR). Such combinations can be motivated by the numerous attempts to identify the best single approach through benchmark or comparative studies: in many instances, the outcome is that the most appropriate approach depends on the study case or research system. This is frequent in molecular docking, one of the most widely used methods in computer-aided drug design (CADD). It serves as a fundamental technique for predicting the binding mode of bioactive compounds and conducting virtual screening. Because of the need to enhance its reliability in pose prediction and its performance in virtual screening, new docking algorithms and scoring functions have been developed and optimized. However, it is unlikely that a single procedure will be identified that outperforms the others in terms of reliability and precision, or that proves suitable for all types of molecular targets.
The goal of this manuscript is to survey various types of combinations used in chemoinformatics. In light of the current rise of machine learning (ML), we also comment on emerging combinations that are being developed, paving the way for original and improved research areas. In addition, in this review we provide the reference to the literature and/or the link to the code when the tools are freely available. As discussed, the combination of research methods can be particularly interesting as an alternative to single conventional strategies in which the research objectives are seen from a unique perspective. The manuscript is organized into four main sections. After this Introduction, section two discusses sub-disciplines that have emerged, or are evolving, as combinations of more traditional or long-established disciplines. Section three presents exemplary types of combinations involving chemoinformatics, with different applications in molecular representation, property prediction, structure-property (activity) relationships (SP(A)R), virtual screening, chemical space, ML, and other applications such as chemistry and art. This section does not include all hybrid methodologies; rather, the most representative ones were selected. Section four presents summary conclusions.
Science has evolved towards a more holistic, multi- and transdisciplinary perspective. Various disciplines are now emerging as the product of combining the different perspectives found in multidisciplinary research groups. Thus, chemoinformatics, being a relatively young discipline (≈25 years old), has given rise to the creation of new disciplines and subdisciplines that combine chemical, biological, and biomedical science data. Figure illustrates examples of data used in chemistry, biology, and the biomedical sciences, which lead to related disciplines. Disciplines related to the study of materials, polymers, and food chemicals have emerged, and other disciplines more closely related to biology and biomedical concepts have also benefited from methodologies, concepts, and protocols originally used in chemoinformatics focused on drug discovery, for example molecular modeling, drug design, and toxicology-related informatics disciplines.
There are numerous examples in which integrating research strategies makes it possible to evaluate situations beyond what a single technique or methodology would allow. For example, combining metabolomics (an emerging omics field concerned with the quantitative and qualitative analysis of molecules in a biological sample) with chemoinformatics offers a powerful way to analyze metabolites and identify biomarkers .
Combined methodologies have led to useful new approaches in different disciplines, such as pharmacology, food chemistry, toxicology, and molecular and materials design. Table presents examples of emerging combined approaches inspired by integrated methodologies, along with their uses. The examples in the table are based on the authors' experience. The combinations in different areas, along with representative references, are discussed below, organized by general application as outlined in Table .
One of the key questions in computational chemistry, and in chemoinformatics in particular, is how to create valid and useful representations of different types of molecules. Indeed, the development of current molecular representations has been driven by the combination of methods, descriptors, and features that condense the most important data describing each kind of molecular data set. This has occurred in chemoinformatics and many other areas because of the need to organize more information in a simple format that can be used to generate new knowledge with artificial intelligence (AI) methods (i.e., machine and deep learning). A clear example is the evolution from Morse code, through the traditional barcode, to its two-dimensional counterpart (the QR code), with far greater capacity to store data, as illustrated in Figure . Something similar has happened in chemoinformatics, in which newly available experimental or predicted features must be condensed into a single representation. For example, simple chemical structure representations (e.g., the molecular formula), which merely summarize the atomic composition of a structure, laid the groundwork for string-based molecular representations capable of encoding functional groups and atom-connectivity information in linear notations (e.g., SMILES, SMARTS, and InChI keys) . These in turn led to bit-based molecular representations that allow such structures to be analyzed with high-performance computing (e.g., MACCS keys fingerprints) . In the same way as traditional barcodes, bit-based molecular representations evolved to contain more information, for example about connectivity and chirality (e.g., extended-connectivity fingerprints, ECFPs), originally designed to generate a unique code for each molecule .
This made it possible to establish reproducible chemoinformatics algorithms for substructure searching and structure-property modeling, and such representations proved useful for building ML models to predict multiple properties . On the other hand, and again like traditional barcodes, bit-based molecular representations have limitations in their ability to assign a unique code to each molecule. This need inspired new molecular representations designed with ML in mind, such as DeepSMILES and SELFIES . SELFIES, in particular, has been associated with lower rates of invalid generated molecules, since every SELFIES string can be converted into a valid molecule .
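The core ideas behind bit-based fingerprints and similarity search can be sketched in a few lines of plain Python. The sketch below is purely illustrative: it hashes SMILES substrings into a fixed-length bit vector (a loose analogy to hashed fingerprints such as ECFP, not a substitute for a chemistry toolkit like RDKit), and compares molecules with the Tanimoto coefficient. The bit length and fragment size are arbitrary choices for the example.

```python
# Illustrative sketch: a hashed, bit-based "fingerprint" built from SMILES
# substrings, plus Tanimoto similarity. Not a real chemical fingerprint --
# it only conveys the hashing-to-bits idea behind representations like ECFP.
import hashlib

N_BITS = 256  # fingerprint length (real fingerprints often use 1024-2048 bits)

def hashed_fingerprint(smiles: str, n_bits: int = N_BITS, max_len: int = 4) -> set:
    """Map every substring of the SMILES (up to max_len characters)
    to a bit position via a stable hash; return the set of on-bits."""
    bits = set()
    for i in range(len(smiles)):
        for j in range(i + 1, min(i + 1 + max_len, len(smiles) + 1)):
            fragment = smiles[i:j]
            h = int(hashlib.md5(fragment.encode()).hexdigest(), 16)
            bits.add(h % n_bits)
    return bits

def tanimoto(fp_a: set, fp_b: set) -> float:
    """Tanimoto (Jaccard) similarity between two on-bit sets."""
    if not fp_a and not fp_b:
        return 1.0
    return len(fp_a & fp_b) / len(fp_a | fp_b)

ethanol  = hashed_fingerprint("CCO")
propanol = hashed_fingerprint("CCCO")
benzene  = hashed_fingerprint("c1ccccc1")

# Structurally closer molecules share more on-bits
assert tanimoto(ethanol, propanol) > tanimoto(ethanol, benzene)
```

In a real workflow, the fingerprinting step would come from a cheminformatics library (e.g., MACCS keys or Morgan/ECFP bit vectors), but the downstream similarity arithmetic is exactly this.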
In parallel, domain-specific molecular representations have been created to combine data from a specific domain into new fingerprints that contain chemical and non-chemical information, for example about biological activity, reactivity, side effects, or other properties. A key example is the evolutionary multipattern fingerprint (EvoMFP) , a novel molecular representation generated by AI algorithms that combine substructures from SMARTS queries. Another example is the neural fingerprint, which makes it possible to combine chemical structural information with other data, such as bioactivity or clinical information, into a single fingerprint. The authors showed improvements in similarity-based virtual screening using such approaches inspired by the combination of different kinds of information. With the advent of large and ultra-large chemical libraries has come the need to develop molecular representations that encode entire chemical libraries. One such strategy is the database fingerprint and its natural extension, the statistical-based database fingerprint . Both approaches encode, in a single vector, the most significant bits present or absent in a compound database of any size. Database fingerprints can be built from a variety of fingerprints, either general or domain-specific representations. However, for now there are limits on the computational operations available to generate molecular representations equivalent to a QR code that would contain all the information available for a specific compound (e.g., drug-like properties, ADME, bioactivity, side effects, quantum descriptors, etc.). In the near future, the scientific community could develop more efficient methods to analyze two-dimensional fingerprints ("chemical QR codes") that contain more information than what we can currently process with a traditional fingerprint.
This will only happen if the combination of different disciplines and advances in hardware and software occur in parallel.
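The library-level encoding mentioned above can be illustrated with a short sketch. This is a loose, purely didactic reading of the database-fingerprint idea (keep only the bits shared by a sufficiently large fraction of the library); the toy per-molecule fingerprints and the threshold are invented, and real implementations operate on MACCS, ECFP, or similar bit vectors of a full compound database.

```python
# Sketch of the idea behind a "database fingerprint": condense a whole
# compound library into a single bit set by keeping only the bits that
# appear in at least a given fraction of the library's molecules.
# Per-molecule fingerprints are plain sets of on-bit positions here;
# in practice they would come from MACCS keys, ECFP, etc.
from collections import Counter

def database_fingerprint(fingerprints, threshold=0.5):
    """Return the set of bits set in >= threshold fraction of molecules."""
    counts = Counter()
    for fp in fingerprints:
        counts.update(fp)
    n = len(fingerprints)
    return {bit for bit, c in counts.items() if c / n >= threshold}

# Three toy per-molecule fingerprints (on-bit positions)
library = [{1, 2, 5, 9}, {1, 2, 7}, {1, 5, 7, 9}]

db_fp = database_fingerprint(library, threshold=0.75)
print(sorted(db_fp))  # -> [1]  (only bit 1 is set in >= 75% of molecules)
```

Lowering the threshold keeps more bits, so the threshold trades compactness of the library signature against how much of the library's diversity it captures.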
Property prediction is common practice in many chemistry applications. In drug discovery, typical examples are predicting the biological activities and binding modes of molecules with molecular targets . To this end, it has been recognized that consensus predictions and ensemble models usually perform better than single predictors, and ensemble models have therefore become quite common. In predicting binding poses and simulating protein-ligand interactions, consensus docking has been widely adopted and, in general, outperforms single-program docking . Similarly, several studies have shown that no single docking protocol is "the best" across a broad range of molecular targets. A given docking protocol often works well for a particular receptor family and, again, consensus docking is a more reliable alternative to a single docking program or protocol.
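One common and simple way to build such a consensus is rank fusion: each program ranks the same compounds, and the fused ranking averages the per-program ranks. The sketch below illustrates this scheme; the program names and score values are invented for the example, and lower scores are taken as better (as with docking energies).

```python
# Toy sketch of consensus scoring across docking programs via average rank.
# Scores and program names are hypothetical; lower score = better pose/compound.

def average_rank_consensus(score_tables):
    """score_tables: list of dicts {compound: score}, lower score = better.
    Returns compounds sorted by their mean rank across all tables."""
    ranks = {}
    for table in score_tables:
        ordered = sorted(table, key=table.get)  # best (lowest) score first
        for rank, compound in enumerate(ordered, start=1):
            ranks.setdefault(compound, []).append(rank)
    mean_rank = {c: sum(r) / len(r) for c, r in ranks.items()}
    return sorted(mean_rank, key=mean_rank.get)

# Hypothetical docking scores (more negative = better)
program_a = {"mol1": -9.1, "mol2": -7.4, "mol3": -8.0}
program_b = {"mol1": -8.5, "mol2": -8.9, "mol3": -6.2}
program_c = {"mol1": -7.9, "mol2": -7.1, "mol3": -7.5}

print(average_rank_consensus([program_a, program_b, program_c]))
# mol1 ranks 1, 2 and 1 across the three programs, so it tops the consensus list
```

Average rank is only one fusion rule; other common choices (e.g., best rank, or pose-agreement filters based on RMSD between predicted poses) follow the same pattern of combining several imperfect scorers.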
In drug discovery it is desirable to predict, whenever possible, compounds that interfere with assays without showing actual biological activity. Anticipating such compounds before experimental screening is challenging. It has recently been reviewed that the best practice is to use different models rather than a single approach .
Nowadays, property prediction is being reconsidered, and novel methodologies based on multi-parametric and multidisciplinary data have emerged. For example, novel binding affinity predictors use a combination of ligand- and target-based approaches, such as molecular similarity and molecular dynamics techniques, fused with NMR spectral data to predict the putative activity of compounds . Another key example is the use of chemical, in vitro, and in vivo data to anticipate future side effects of lead compounds . Recent advances such as network pharmacology have also opened the possibility of decoding complex natural product mixtures to identify the compounds responsible for their reported bioactivity . In short, the paradigm of predicting properties from a single kind of data (i.e., chemical, biological, or clinical) has now been broken.
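Fusing ligand- and target-based evidence requires putting heterogeneous scores on a common scale first. The sketch below shows one generic scheme, min-max normalization followed by averaging; the similarity and docking numbers are invented, and the equal weighting is an arbitrary choice for illustration.

```python
# Sketch of fusing heterogeneous scores -- e.g., a ligand-based similarity
# (higher = better) and a docking energy (lower = better) -- into one
# consensus value via min-max normalization. All numbers are hypothetical.

def minmax(values, higher_is_better=True):
    """Scale values to [0, 1], optionally flipping so that 1 is always best."""
    lo, hi = min(values), max(values)
    scaled = [(v - lo) / (hi - lo) for v in values]
    return scaled if higher_is_better else [1.0 - s for s in scaled]

similarity = [0.91, 0.40, 0.75]   # Tanimoto to a known active (higher = better)
docking    = [-9.2, -6.1, -8.8]   # docking energy (lower = better)

sim_n = minmax(similarity, higher_is_better=True)
dock_n = minmax(docking, higher_is_better=False)

# Equal-weight average of the two normalized evidence sources
fused = [(s + d) / 2 for s, d in zip(sim_n, dock_n)]
best = max(range(len(fused)), key=fused.__getitem__)
print(best, [round(f, 2) for f in fused])  # compound 0 scores best on both sources
```

The same pattern extends to more than two sources (e.g., adding an NMR-derived score) by normalizing each one and choosing weights that reflect confidence in each kind of data.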
Pharmacokinetic and pharmacodynamic approaches are key in modern molecular design and development, especially for small-molecule and biotech drugs applied in medical, nutritional, agricultural, and industrial settings. Properties involved in the absorption, distribution, metabolism, and toxicity (ADMETox) of drugs can determine their success in clinical trials. Several software packages and servers predict ADMETox properties for small molecules and peptides; however, their datasets are normally constructed from the direct modulation of key targets, so their predictive power in more complex systems (i.e., an in vivo context) is limited . This methodological gap points to the need to fuse different kinds of datasets and approaches to improve the capacity of future models to predict ADMETox endpoints.
Novel approaches based on consensus algorithms and data fusion techniques have demonstrated dramatic improvements when different kinds of in vitro, in vivo, and clinical data are used to decode complex pharmacokinetic and pharmacodynamic problems . In other words, predicting ADMETox properties should be addressed by a multidisciplinary group of specialists who consider the chemical, biological, and clinical implications involved.
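At its simplest, such a consensus over heterogeneous evidence can be a majority vote across independent models. The sketch below illustrates this for a binary ADMETox endpoint; the "models" are just hypothetical precomputed prediction tables standing in for QSAR/ML models trained on in vitro, in vivo, and clinical data respectively.

```python
# Minimal sketch of a consensus (majority-vote) classifier for a binary
# ADMETox endpoint (1 = flagged, 0 = not flagged). The three "models"
# and their predictions are invented for illustration.
from collections import Counter

def majority_vote(predictions_per_model, compound):
    """Return the most common prediction for a compound across models."""
    votes = [model[compound] for model in predictions_per_model]
    return Counter(votes).most_common(1)[0][0]

model_invitro  = {"cmpdA": 1, "cmpdB": 0}
model_invivo   = {"cmpdA": 1, "cmpdB": 1}
model_clinical = {"cmpdA": 0, "cmpdB": 0}

models = [model_invitro, model_invivo, model_clinical]
print(majority_vote(models, "cmpdA"))  # 2 of 3 models flag cmpdA -> prints 1
```

Weighted votes (e.g., trusting the clinically-derived model more) or probability averaging follow the same structure and are common refinements.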
In drug discovery, biological activity against one or multiple endpoints is one of the primary properties to be predicted, and SAR analysis is thus a cornerstone of medicinal chemistry . Predicting properties of pharmaceutical relevance, such as ADMETox-related endpoints, is also crucial, as discussed in section 3.2.1. The need to systematically explore structure-inactivity relationships (SIR) as part of generating predictive models has recently been emphasized .
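A quantitative way to probe a structure-activity landscape, including the discontinuities ("activity cliffs") that make SAR and SIR modeling hard, is the SALI index of Guha and Van Drie: SALI(i, j) = |A_i − A_j| / (1 − sim(i, j)). The sketch below uses invented pIC50 values and pairwise similarities purely to show the calculation.

```python
# Sketch of a structure-activity landscape calculation using the SALI index:
# SALI(i, j) = |A_i - A_j| / (1 - sim(i, j)).  Large values flag "activity
# cliffs" -- similar structures with very different activities.
# Similarities and pIC50 values below are hypothetical.

def sali(act_i, act_j, similarity, eps=1e-6):
    """SALI for one compound pair; eps avoids division by zero at sim = 1."""
    return abs(act_i - act_j) / (1.0 - similarity + eps)

pic50 = {"cmpd1": 7.8, "cmpd2": 7.6, "cmpd3": 4.1}
sim = {("cmpd1", "cmpd2"): 0.95, ("cmpd1", "cmpd3"): 0.90}

# Very similar pair with similar activity: smooth SAR region
smooth = sali(pic50["cmpd1"], pic50["cmpd2"], sim[("cmpd1", "cmpd2")])
# Very similar pair with a large activity drop: an activity cliff
cliff = sali(pic50["cmpd1"], pic50["cmpd3"], sim[("cmpd1", "cmpd3")])
print(round(smooth, 1), round(cliff, 1))
assert cliff > smooth
```

Ranking all pairs of a dataset by SALI highlights the cliff regions where both activity and inactivity data are most informative for model building.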